
Add all annotation generation helper scripts #25

Open
1 of 2 tasks
rabitt opened this issue Jan 14, 2016 · 14 comments

@rabitt (Contributor) commented Jan 14, 2016

  • melody annotation generator script
  • instrument activation script
@rabitt rabitt added this to the v1.2 milestone Jan 14, 2016
@rabitt rabitt self-assigned this Jan 14, 2016
@faroit (Contributor) commented Feb 14, 2016

I ported the instrument activation script and will commit soon.

  • How would you prefer the corresponding tests to be designed? Testing against the already existing annotations?
  • Are the instrument activation annotations completely machine generated or were they user corrected?

@rabitt (Contributor, issue author) commented Feb 15, 2016

@faroit:

How would you prefer the corresponding tests to be designed? Testing against the already existing annotations?

Yes - I think it'd be ideal to test against the existing annotations. I did something similar for the melody annotations here.

One caveat - the activations for multitracks with has_bleed=True were computed from separate files that were first source-separated. So, in the unit test, only test multitracks with has_bleed=False and ignore the others for now.

Are the instrument activation annotations completely machine generated or were they user corrected?

Completely machine generated. One day they should probably be user corrected, or at least verified...

@faroit (Contributor) commented Feb 15, 2016

👍 roger that

@faroit (Contributor) commented Sep 25, 2016

Working on it now.

It would be helpful to have a working branch, i.e. one where the tests do not fail. Could you fix the medleydb_v1.2 branch?

Currently I am getting ImportErrors:

ImportError: No module named medleydb

======================================================================
ERROR: Failure: ImportError (No module named medleydb)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/local/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/local/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/Users/stf_remote/repositories/medleydb/tests/test_utils.py", line 4, in <module>
    import medleydb
ImportError: No module named medleydb
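
A quick way to check whether this is just an environment issue (assuming a standard setup.py at the repo root) is to test the import directly; an editable install usually fixes it:

```shell
# Sanity check: is medleydb importable from where the tests run?
# If not, an editable install from the repo root usually fixes it:
#   pip install -e .
if python3 -c "import medleydb" 2>/dev/null; then
    echo "medleydb importable"
else
    echo "medleydb NOT importable - run 'pip install -e .' first"
fi
```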

@faroit (Contributor) commented Sep 27, 2016

@rabitt do you need help fixing the v1.2 branch so that it passes Travis?

@rabitt (Contributor, issue author) commented Sep 27, 2016

Yes sorry about this! Will fix it today.


@rabitt (Contributor, issue author) commented Sep 27, 2016

@faroit It looks like the tests are fine - there was just a "pending check" from Review Ninja which I'm disabling. Am I missing something?

@faroit (Contributor) commented Sep 27, 2016

Runs fine here on my machine now as well... it was a problem with nosetests... pytest is so much more robust ;-)

Anyway, I don't have the dataset with me right now, but will add a WIP tomorrow.

@faroit (Contributor) commented Sep 28, 2016

Another question: for now I use scipy.signal and librosa to compute the activations, which would add quite a few additional dependencies. Thinking about it, I feel that the medleydb/annotate dir might be better off outside the main medleydb module path.
This would also make sense to me because the generate_*.py functions are only called from your unit tests. So I would propose moving annotate to a separate directory like scripts and adding the additional dependencies like scipy to the test environment in setup.py.

@rabitt @bmcfee what do you think?
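
As a rough illustration of the kind of computation involved, an activation estimate can be as simple as frame-wise RMS energy against a threshold. This is a simplified stand-in, not the actual script, which uses filtering via scipy.signal and librosa:

```python
import math

def activation_from_samples(samples, frame_size=2048, threshold=0.01):
    """Very simplified activation estimate: a stem is 'active' in a
    frame when its RMS energy exceeds a fixed threshold. The real
    generator applies smoothing/filtering rather than a hard cutoff."""
    activations = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(x * x for x in frame) / len(frame))
        activations.append(1 if rms > threshold else 0)
    return activations

# A silent frame followed by a loud one:
samples = [0.0] * 2048 + [0.5] * 2048
print(activation_from_samples(samples))  # -> [0, 1]
```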

@rabitt (Contributor, issue author) commented Sep 28, 2016

I'm not so opposed to adding dependencies. I guess my preference would be to either keep it as is, adding the two dependencies, or alternatively to create a separate module entirely for generating automatic annotations on a multitrack.

@faroit (Contributor) commented Sep 28, 2016

Uh, another question: to test the activations I need to read the wav files.

  1. Is it okay to add pysoundfile as a dependency? It would also come in handy: I could add an audio reader method so that users can get the audio itself instead of just the path.
  2. How should this be tested? Is testing against the MedleyDB sample set okay? Travis would then need to download it first. @rabitt Can you add this to the .travis.yml and/or a fetch shell script?

@rabitt (Contributor, issue author) commented Sep 28, 2016

  1. Can you explain why this is needed? If possible I'd prefer to just rely on librosa's audio reader.
  2. Yes, using the MedleyDB samples is okay. Instead of having Travis download them, what about making a very short version of one of them (say, 5-10 seconds) and adding it to tests/data? There are still tests I need to write, and having a mini-multitrack there for testing will be useful in general.
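
For illustration, such an audio reader method need not even require a new dependency. This is a dependency-free sketch of what it could return (sample rate plus normalized samples), using only the standard library; in practice librosa.load would be the drop-in equivalent:

```python
import struct
import wave

def load_wav(path):
    """Read a 16-bit mono WAV file and return (sample_rate, samples)
    with samples normalized to [-1, 1]. A minimal stand-in for
    librosa.load; real stems would also need resampling and
    multi-channel handling."""
    with wave.open(path, "rb") as f:
        # Only 16-bit mono is handled in this sketch.
        assert f.getsampwidth() == 2 and f.getnchannels() == 1
        sample_rate = f.getframerate()
        raw = f.readframes(f.getnframes())
        ints = struct.unpack("<%dh" % (len(raw) // 2), raw)
        return sample_rate, [x / 32768.0 for x in ints]
```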

@faroit (Contributor) commented Sep 28, 2016

Can you explain why this is needed? If possible I'd prefer to just rely on librosa's audio reader

Oops, I always forget that librosa has a wav reader. Sure, I can use that instead.

what about making a very short version of one of them (say, 5-10 seconds) and adding it to tests/data?

I don't like adding unnecessary binary files to repositories. The sample set is more than 400 MB, so I would download it on demand for unit testing. You're probably not short on bandwidth at NYU, right? ;-)

By the way: test_melody_annotations fails when using just the sample set. Maybe you could add logic here...

@faroit (Contributor) commented Sep 29, 2016

Things are getting more concrete in #49 now.

@rabitt rabitt modified the milestones: v1.3, v1.2 Oct 16, 2016