Missing information #5
I found a paper from 1995 that is not included in the repository: Kaminskyj, I., & Materka, A. (1995). Automatic source identification of monophonic musical instrument sounds. Proceedings of the IEEE International Conference on Neural Networks, 1 (pp. 189-194).
And I think the work done by Kostek is not included:
Kostek, B. (1995). Feature extraction methods for the intelligent processing of musical sounds. AES 100th Convention, Audio Engineering Society.
Kostek, B. (1998). Soft computing-based recognition of musical sounds. In L. Polkowski & A. Skowron (Eds.), Rough Sets in Knowledge Discovery. Heidelberg: Physica-Verlag.
Kostek, B. (1999). Soft computing in acoustics: Applications of neural networks, fuzzy logic and rough sets to musical acoustics. Heidelberg: Physica-Verlag.
Kostek, B., & Czyzewski, A. (2000). An approach to the automatic classification of musical sounds. AES 108th Convention. Paris: Audio Engineering Society.
Kostek, B., & Czyzewski, A. (2001). Representing musical instrument sounds for their automatic classification. Journal of the Audio Engineering Society, 49(9), 768-785.
Kostek, B., & Krolikowski, R. (1997). Application of artificial neural networks to the recognition of musical sounds. Archives of Acoustics, 22(1), 27-50.
Kostek, B., & Wieczorkowska, A. (1997). Parametric representation of musical sounds. Archives of Acoustics, 22(1), 3-26.
It would be nice to add these early works -- that is the period where publications are scarcer! :)
Thanks for your suggestions!
In dl4m.bib:
task field
architecture field
dataset field
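To make the three fields above concrete, here is a minimal sketch of how the Kaminskyj & Materka (1995) paper suggested in this thread could be encoded as a dl4m.bib entry. The entry key, the exact field names, and the task/architecture/dataset values are assumptions for illustration, not the repository's actual schema or data.

```bibtex
% Hypothetical entry; the task, architecture and dataset fields mirror the
% notes above, and their values are placeholders to be checked against the paper.
@inproceedings{Kaminskyj1995,
  author       = {Kaminskyj, I. and Materka, A.},
  title        = {Automatic source identification of monophonic musical instrument sounds},
  booktitle    = {Proceedings of the IEEE International Conference on Neural Networks},
  volume       = {1},
  pages        = {189--194},
  year         = {1995},
  task         = {Instrument recognition},
  architecture = {NN},
  dataset      = {Unknown}
}
```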
Visualisations:
Tips and tricks:
http://forums.fast.ai/t/30-best-practices/12344
Unsorted references waiting to be processed:
https://github.com/davidwfong/ViolinMelodyCNNs
https://www.researchgate.net/publication/325120491_Modeling_Music_Studies_of_Music_Transcription_Music_Perception_and_Music_Production
http://www.cs.dartmouth.edu/~sarroff/papers/sarroff2018a.pdf
https://www.cs.dartmouth.edu/~sarroff/pages/publications/
https://gitlab.com/rckennedy15/CAPSTONE_2017-2018
https://gitlab.com/kidaa/biaxial-rnn-music-composition
https://github.com/chrisdonahue/wavegan
https://www.tandfonline.com/doi/full/10.1080/09298215.2018.1458885?af=R
https://github.com/Veleslavia/ICMR2017
https://github.com/rupakvignesh/Singing-Voice-Separation
https://github.com/tae-jun/resemul
http://repository.ubn.ru.nl/bitstream/handle/2066/179506/179506.pdf?sequence=1
http://www.mdpi.com/2076-3417/8/1/150/htm
https://arxiv.org/pdf/1511.06939.pdf
https://link.springer.com/chapter/10.1007/978-3-319-73600-6_11
https://link.springer.com/chapter/10.1007/978-3-319-73603-7_44
https://arxiv.org/abs/1711.00927
https://arxiv.org/abs/1803.02421
https://arxiv.org/abs/1803.02353
http://jingxixu.com/files/deeplearning.pdf
https://arxiv.org/abs/1803.05428
https://www.sciencedirect.com/science/article/pii/S0925231218302431
https://arxiv.org/abs/1705.09792
http://www.apsipa.org/proceedings/2017/CONTENTS/papers2017/15DecFriday/FA-01/FA-01.4.pdf
https://arxiv.org/pdf/1611.06265.pdf
https://arxiv.org/abs/1802.09221
https://github.com/remyhuang/pop-music-highlighter
https://remyhuang.github.io/files/huang17ismir-lbd.pdf
https://remyhuang.github.io/files/huang17apsipa.pdf
https://github.com/markostam : multiple DL applied to CSI, ...
https://link.springer.com/article/10.1007/s11265-018-1334-2
https://www.researchgate.net/publication/323184729_BachProp_Learning_to_Compose_Music_in_Multiple_Styles
https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/34703/Ycart%20Polyphonic%20Music%20Sequence%202018%20Accepted.pdf?sequence=3
https://www.sciencedirect.com/science/article/pii/S1574954117302467
http://cs229.stanford.edu/proj2017/final-reports/5242716.pdf
https://github.com/keunwoochoi/LSTMetallica
https://arxiv.org/abs/1802.08370
http://cs229.stanford.edu/proj2017/final-reports/5241796.pdf
https://github.com/pawelpeksa/music_emotion_recognition_neuralnets
https://arxiv.org/abs/1802.06432
https://arxiv.org/abs/1705.10843
http://cs229.stanford.edu/proj2017/final-reports/5244969.pdf
https://github.com/tatsuyah/deep-improvisation
http://media.aau.dk/smc/ml4audio/
http://papers.nips.cc/paper/6146-soundnet-learning-sound-representations-from-unlabeled-video.pdf
https://github.com/deepsound-project/genre-recognition
https://github.com/umbrellabeach/music-generation-with-DL
https://github.com/corticph/MSTmodel
https://arxiv.org/pdf/1706.09588.pdf
https://arxiv.org/abs/1802.05162
https://link.springer.com/article/10.1007/s10844-018-0497-4
https://github.com/devicehive/devicehive-audio-analysis
https://arxiv.org/abs/1802.04051
https://arxiv.org/abs/1802.04208
https://magenta.tensorflow.org/onsets-frames
dblp.uni-trier.de/db/conf/icmc/icmc2002 (ctrl+f neural network)
http://tandfonline.com/doi/full/10.1080/09298215.2017.1367820?af=R&
https://arxiv.org/pdf/1801.01589.pdf
https://arxiv.org/abs/1802.03144
https://arxiv.org/pdf/1712.00866.pdf
https://www.linux.ime.usp.br/~iancarv/mac0499/tcc.pdf
https://arxiv.org/abs/1802.06182
https://arxiv.org/abs/1712.05119
https://arxiv.org/abs/1712.05274
https://github.com/jakobabesser/walking_bass_transcription_dnn
https://towardsdatascience.com/how-i-created-a-classifier-to-determine-the-potential-popularity-of-a-song-6d63093ba221
https://github.com/ds7711/music_genre_classification
https://arxiv.org/abs/1710.10451
https://arxiv.org/abs/1712.07799
https://arxiv.org/pdf/1712.08370.pdf
https://arxiv.org/abs/1710.10974
https://arxiv.org/abs/1802.05178
https://arxiv.org/pdf/1703.01789.pdf
https://arxiv.org/abs/1711.05772
https://arxiv.org/abs/1801.01589
https://arxiv.org/abs/1712.09668
https://github.com/unnati-xyz/music-generation
https://github.com/calclavia/DeepJ and https://arxiv.org/pdf/1801.00887.pdf
https://scholar.google.fr/scholar?hl=fr&as_sdt=0%2C5&q=Automatic+Programming+of+VST+Sound+Synthesizers+using+Deep+Networks+and+Other+Techniques+MJ+Yee-King%2C+L+Fedden%2C+M+d%27Inverno&btnG=
https://arxiv.org/ftp/arxiv/papers/1712/1712.01011.pdf
https://github.com/AI-ON/Few-Shot-Music-Generation
https://christophm.github.io/interpretable-ml-book/
https://github.com/dshieble/Music_RNN_RBM
https://github.com/feynmanliang/bachbot
https://github.com/awjuliani/sound-cnn
https://github.com/robbiebarrat/rapping-neural-network
https://www.researchgate.net/publication/322977005_Audio_Event_Detection_Using_Multiple-Input_Convolutional_Neural_Network
https://arxiv.org/abs/1712.04371
https://arxiv.org/abs/1712.01011
https://arxiv.org/abs/1707.09219
https://arxiv.org/abs/1712.05901
https://arxiv.org/abs/1712.06076
https://arxiv.org/abs/1712.02898
https://arxiv.org/abs/1712.03228
https://arxiv.org/abs/1712.04382
https://arxiv.org/abs/1712.01456
https://arxiv.org/abs/1712.03835
https://arxiv.org/abs/1712.00334
https://arxiv.org/abs/1712.00640
https://arxiv.org/abs/1712.00866
https://arxiv.org/abs/1712.00254
https://arxiv.org/pdf/1712.05119.pdf
https://arxiv.org/abs/1712.00166
https://arxiv.org/pdf/1711.11160.pdf
https://arxiv.org/pdf/1711.08976.pdf
https://github.com/drscotthawley/panotti
https://arxiv.org/abs/1703.10847
http://www.music-ir.org/mirex/abstracts/2017/LPNKK1.pdf
http://www.music-ir.org/mirex/abstracts/2017/PLNPH1.pdf
https://github.com/zhangqianhui/AdversarialNetsPapers
https://github.com/LqNoob/MelodyExtraction-MCDNN
https://github.com/EdwardLin2014/CNN-with-IBM-for-Singing-Voice-Separation
https://github.com/posenhuang/deeplearningsourceseparation
https://github.com/minzwon/kakao/blob/master/analyzing.ipynb
https://www.researchgate.net/publication/278662921_Deep_Image_Features_in_Music_Information_Retrieval
https://arxiv.org/abs/1611.09827v2
https://arxiv.org/abs/1711.08976
https://github.com/kkp15/kkp15.github.io
https://www.sciencedirect.com/science/article/pii/S0925231217317666
https://arxiv.org/pdf/1710.11428.pdf (http://mirlab.org:8080/demo/SVSGAN/)
www.karindressler.de/papers/dissertation_dressler.pdf
http://ieeexplore.ieee.org/abstract/document/8103116/
https://arxiv.org/abs/1709.04384
https://ismir2017.smcnus.org/lbds/Kim2017a.pdf
https://arxiv.org/pdf/1711.04845.pdf
https://ismir2017.smcnus.org/lbds/Schedl2017.pdf
https://lib.ugent.be/fulltxt/RUG01/002/367/502/RUG01-002367502_2017_0001_AC.pdf
https://arxiv.org/abs/1412.6596
https://arxiv.org/abs/1706.02361
https://github.com/jthickstun/thickstun2017learning
https://arxiv.org/abs/1711.01369
https://arxiv.org/abs/1710.11428
https://arxiv.org/abs/1711.04480
https://arxiv.org/abs/1706.06525
https://github.com/qiuqiangkong/ICASSP2018_audioset
https://arxiv.org/abs/1707.05589
https://www.researchgate.net/publication/320632662_Music_Genre_Classification_Using_Masked_Conditional_Neural_Networks
https://arxiv.org/pdf/1711.02209.pdf
https://arxiv.org/pdf/1711.01369.pdf
https://arxiv.org/pdf/1711.00927.pdf
https://arxiv.org/abs/1709.06298
https://ismir2017.smcnus.org/lbds/Suh2017.pdf
https://ismir2017.smcnus.org/lbds/Pons2017.pdf
https://link.springer.com/chapter/10.1007/978-3-319-69911-0_14
https://www.preprints.org/manuscript/201711.0027/v1
https://github.com/Js-Mim
https://www.researchgate.net/publication/320867112_Audio_Set_classification_with_attention_model_A_probabilistic_perspective
https://arxiv.org/abs/1711.00351
https://arxiv.org/pdf/1710.10451.pdf
https://arxiv.org/pdf/1710.11153.pdf
http://www.cs.tut.fi/sgn/arg/dcase2017/documents/challenge_technical_reports/DCASE2017_Piczak_208.pdf
http://www.cs.tut.fi/sgn/arg/dcase2017/documents/challenge_technical_reports/DCASE2017_Maka_203.pdf
https://arxiv.org/abs/1711.00913
https://arxiv.org/find/cs/1/au:+Oord_A/0/1/0/all/0/1 & https://avdnoord.github.io/homepage/vqvae/
https://github.com/qiuqiangkong/ICASSP2018_joint_separation_classification
https://www.researchgate.net/publication/320859133_SymCHM-An_Unsupervised_Approach_for_Pattern_Discovery_in_Symbolic_Music_with_a_Compositional_Hierarchical_Model
https://www.researchgate.net/publication/315570382_Single_Channel_Audio_Source_Separation_using_Convolutional_Denoising_Autoencoders
https://github.com/andabi/music-source-separation
https://github.com/andabi/deep-voice-conversion
https://arxiv.org/abs/1711.00229
https://arxiv.org/abs/1711.00048
https://arxiv.org/pdf/1710.11549.pdf
https://arxiv.org/abs/1711.02209
https://arxiv.org/abs/1710.11473
https://arxiv.org/abs/1710.11418
https://arxiv.org/abs/1710.11385
https://arxiv.org/abs/1710.11153
http://danetapi.com/chimera
https://github.com/lamtharnhantrakul/audio_kernels
https://github.com/Impro-Visor/lstmprovisor-python
https://github.com/hexahedria/biaxial-rnn-music-composition
https://github.com/rabitt/ismir2017-deepsalience
https://www.researchgate.net/publication/313895490_Comparing_Shallow_versus_Deep_Neural_Network_Architectures_for_Automatic_Music_Genre_Classification
https://github.com/marl/crepe
https://www.semanticscholar.org/search?year%5B%5D=1991&year%5B%5D=2017&q=deep%20learning%20music%20audio%20neural%20network&sort=relevance
http://rodrigob.github.io/are_we_there_yet/build/
https://github.com/syhw/wer_are_we
https://www.researchgate.net/publication/320589850_Masked_Conditional_Neural_Networks_for_Audio_Classification
https://arxiv.org/pdf/1606.04930.pdf
https://github.com/emilylawton/deep-learning-resources
https://www.audiolabs-erlangen.de/resources/MIR/2017-GI-Tutorial-Musik/2017_MuellerWeissBalke_GI_DeepLearningMIR.pdf
https://books.google.fr/books?hl=fr&lr=&id=1_06DwAAQBAJ&oi=fnd&pg=PA237&ots=QHQvylLIO7&sig=pSqGpvQxa9RUX601lf40mQBPDX8#v=onepage&q&f=false
http://cmmr2017.inesctec.pt/wp-content/uploads/2017/09/43_CMMR_2017_paper_31.pdf
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/217_Paper.pdf
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/77_Paper.pdf
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/28_Paper.pdf
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/91_Paper.pdf
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/17_Paper.pdf
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/123_Paper.pdf
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/137_Paper.pdf
PDF: musicalmetacreation.org/buddydrive/file/smith/ (source: http://musicalmetacreation.org/proceedings__trashed/mume-2017/)
https://www.researchgate.net/publication/320519760_Musical_Query-by-Semantic-Description_Based_on_Convolutional_Neural_Network
https://www.researchgate.net/publication/314382920_Inside_the_Spectrogram_Convolutional_Neural_Networks_in_Audio_Processing
http://ieeexplore.ieee.org/abstract/document/8073570/
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/135_Paper.pdf
https://www.researchgate.net/publication/320488483_Acoustic_Scene_Classification_by_Combining_Autoencoder-Based_Dimensionality_Reduction_and_Convolutional_Neural_Networks
https://www.mendeley.com/research-papers/deep-multimodal-approach-coldstart-music-recommendation-1/?dgcid=raven_md_feed_email
https://www.mendeley.com/research-papers/classification-audio-signals-using-svm-rbfnn-1/?dgcid=raven_md_feed_email
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/9_Paper.pdf
https://link.springer.com/chapter/10.1007/978-3-319-68121-4_18
https://ismir2017.smcnus.org/wp-content/uploads/2017/10/164_Paper.pdf
https://www.researchgate.net/publication/320416485_A_System_for_2017_DCASE_Challenge_Using_Deep_Sequenrial_Image_and_Wavelet_Features?discoverMore=1
https://www.researchgate.net/publication/315100151_Improving_music_source_separation_based_on_deep_neural_networks_through_data_augmentation_and_network_blending
https://www.researchgate.net/publication/320333553_Data_augmentation_for_deep_learning_source_separation_of_HipHop_songs
https://github.com/karoldvl/paper-2017-DCASE
https://repositori.upf.edu/bitstream/handle/10230/32919/Martel_2017.pdf?sequence=1&isAllowed=y
https://arxiv.org/abs/1710.04288
http://www.cs.tut.fi/sgn/arg/dcase2017/documents/challenge_technical_reports/DCASE2017_Lee_201.pdf
http://www.cs.tut.fi/sgn/arg/dcase2017/documents/challenge_technical_reports/DCASE2017_Yu_188.pdf
http://www.semanticaudio.co.uk/wp-content/uploads/2017/09/WIMP2017_Martinez-RamirezReiss.pdf
https://arxiv.org/pdf/1609.04243.pdf
https://arxiv.org/abs/1711.01634
https://arxiv.org/pdf/1706.02361.pdf
https://github.com/RichardYang40148/MidiNet/tree/master/v1
https://ismir2017.smcnus.org/programschedule/
https://twitter.com/keunwoochoi/status/912341967648018435
http://ieeexplore.ieee.org/abstract/document/8049362/ (A data-driven model of tonal chord sequence complexity. Bruno Di Giorgi, Simon Dixon, Massimiliano Zanoni, Augusto Sarti, 2017)
https://www.researchgate.net/publication/282997080_A_survey_Time_travel_in_deep_learning_space_An_introduction_to_deep_learning_models_and_how_deep_learning_models_evolved_from_the_initial_ideas
https://www.researchgate.net/publication/317265107_Attention_and_Localization_Based_on_a_Deep_Convolutional_Recurrent_Model_for_Weakly_Supervised_Audio_Tagging
https://www.researchgate.net/publication/319276246_A_Recurrent_Encoder-Decoder_Approach_With_Skip-Filtering_Connections_for_Monaural_Singing_Voice_Separation
https://www.researchgate.net/publication/296704118_Deep_Neural_Networks_for_Dynamic_Range_Compression_in_Mastering_Applications
http://c4dm.eecs.qmul.ac.uk/news/news.2016-11-25.C4DM_Seminar_-_Tian_Cheng_and_Siddharth_Sigtia_(Video_Available).html
https://arxiv.org/abs/1703.08019
http://slim-sig.irisa.fr/me17/Mediaeval_2017_paper_49.pdf
https://groups.csail.mit.edu/sls/publications/2017/YuZhang_PhD_Thesis.pdf
https://dl.gi.de/bitstream/handle/20.500.12116/3859/B1-9.pdf?sequence=1&isAllowed=y
https://www.meetup.com/fr-FR/Berlin-Music-Information-Retrieval-Meetup/events/243855597/?eventId=243855597
https://www.researchgate.net/publication/320409290_Wavelets_Revisited_for_the_Classification_of_Acoustic_Scenes
https://scholar.google.fr/citations?user=YOY2MFEAAAAJ&hl=fr&oi=sra
http://benanne.github.io/2014/08/05/spotify-cnns.html
https://vaplab.ee.ncu.edu.tw/english/pcchang/pdf/j52.pdf
https://github.com/auDeep/auDeep
https://www.cs.tut.fi/sgn/arg/dcase2017/documents/challenge_technical_reports/DCASE2017_Amiriparian_173.pdf
https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/25936/QUINTON_Elio_Final_PhD_030817.pdf?sequence=1
https://csmc2017.wordpress.com/proceedings/
http://ofai.at/~jan.schlueter/
https://www.audiolabs-erlangen.de/fau/assistant/balke/publications (Deep Learning for Jazz Walking Bass Transcription)
https://link.springer.com/chapter/10.1007/978-3-319-63450-0_14
https://www.researchgate.net/publication/318030697_Multi-scale_Multi-band_DenseNets_for_Audio_Source_Separation
https://www.researchgate.net/publication/282001406_Deep_neural_network_based_instrument_extraction_from_music
http://ieeexplore.ieee.org/abstract/document/7994970/
https://github.com/search?utf8=%E2%9C%93&q=deep+learning+music&type=
https://arxiv.org/abs/1706.07162
https://github.com/oriolromani/MIRdeepLearning
https://link.springer.com/chapter/10.1007/978-3-319-68612-7_40
https://arxiv.org/abs/1703.09039
Convolution-based Classification of Audio and Symbolic Representations of Music. Gissel Velarde, Carlos Cancino Chacón, David Meredith, Tillman Weyde and Maarten Grachten. October 22, 2016 (unpublished)
DL4M 2018 articles (to be considered after dealing with 2017):
https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8332139
https://arxiv.org/abs/1711.00048 ICASSP2018
https://www.tandfonline.com/doi/full/10.1080/09298215.2018.1438476?af=R
https://arxiv.org/abs/1803.04357
https://arxiv.org/abs/1803.04030
https://arxiv.org/abs/1709.01674
https://arxiv.org/abs/1803.06841
https://arxiv.org/abs/1804.04053
https://arxiv.org/abs/1803.08629
https://arxiv.org/abs/1804.00047
https://arxiv.org/abs/1804.00525
https://arxiv.org/ftp/arxiv/papers/1804/1804.02918.pdf
https://arxiv.org/abs/1804.04212
http://www.mdpi.com/2076-3417/8/4/507/htm
https://arxiv.org/abs/1801.07141
https://arxiv.org/abs/1705.06979
https://arxiv.org/abs/1612.04742
https://www.researchgate.net/publication/322216935_Jazz_music_sub-genre_classification_using_deep_learning
https://www.researchgate.net/profile/Loris_Nanni/publication/323938467_Ensemble_of_deep_learning_visual_and_acoustic_features_for_music_genre_classification/links/5ab52e3745851515f599c5da/Ensemble-of-deep-learning-visual-and-acoustic-features-for-music-genre-classification.pdf
http://www.mdpi.com/2076-3417/8/4/606/htm
https://github.com/pkmital/time-domain-neural-audio-style-transfer
https://arxiv.org/abs/1804.07145
https://arxiv.org/abs/1804.07297
https://arxiv.org/abs/1803.01271
https://hal-lirmm.ccsd.cnrs.fr/lirmm-01766781/document
https://arxiv.org/abs/1804.07300
https://arxiv.org/abs/1804.07690
https://arxiv.org/abs/1804.08300
https://arxiv.org/abs/1804.08167
https://dl.acm.org/citation.cfm?id=3191822
https://dl.acm.org/citation.cfm?id=3191823
https://arxiv.org/abs/1709.00611
https://arxiv.org/abs/1804.09399
https://arxiv.org/abs/1804.02918
https://arxiv.org/abs/1804.09808
https://dspace.library.uvic.ca/bitstream/handle/1828/9264/Singh_Harpreet_MSc_2018.pdf?sequence=3&isAllowed=y
https://arxiv.org/pdf/1804.09202.pdf
https://arxiv.org/abs/1805.00237 with https://github.com/jordipons/elmarc
https://github.com/NarainKrishnamurthy/BeatGAN2.0
https://github.com/johnglover/sound-rnn
https://github.com/NadzeyaKadakova/Studies/blob/master/95-jazznet/Jazz%20Solo%20with%20an%20LSTM%20Network%20.ipynb
https://www.politesi.polimi.it/bitstream/10589/139073/1/tesi.pdf
https://arxiv.org/abs/1805.02043
https://arxiv.org/abs/1805.02603
https://arxiv.org/abs/1805.03647
https://github.com/gantheory/playlist-cleaning
https://arxiv.org/pdf/1805.02410.pdf
https://ieeexplore.ieee.org/abstract/document/8356323/
https://arxiv.org/abs/1805.05324
https://marl.smusic.nyu.edu/nieto/publications/TISMIR2018.pdf
http://www.aes.org/e-lib/browse.cfm?elib=19513
https://arxiv.org/abs/1805.07848
https://arxiv.org/abs/1805.08559
https://arxiv.org/abs/1805.08501
https://arxiv.org/abs/1805.10808
https://arxiv.org/abs/1805.10548
https://arxiv.org/abs/1805.12176
https://arxiv.org/abs/1806.00195
https://arxiv.org/abs/1801.10492
https://arxiv.org/abs/1806.00509
https://arxiv.org/abs/1806.00770
https://arxiv.org/abs/1806.01180
https://arxiv.org/abs/1805.08559 (https://github.com/sungheonpark/music_source_sepearation_SH_net)
https://arxiv.org/abs/1806.08724
https://arxiv.org/abs/1806.08686
Some speech articles:
https://arxiv.org/pdf/1710.09798.pdf
https://infoscience.epfl.ch/record/203464/files/Palaz_Idiap-RR-18-2014.pdf
https://link.springer.com/chapter/10.1007/978-3-319-66429-3_2
https://www.researchgate.net/profile/Cong-Thanh_Do/publication/319269623_Improved_Automatic_Speech_Recognition_Using_Subband_Temporal_Envelope_Features_and_Time-Delay_Neural_Network_Denoising_Autoencoder/links/599f388a4585151e3c6acdd8/Improved-Automatic-Speech-Recognition-Using-Subband-Temporal-Envelope-Features-and-Time-Delay-Neural-Network-Denoising-Autoencoder.pdf
https://arxiv.org/pdf/1708.08740.pdf
http://newiranians.ir/TASLP2339736-proof.pdf
https://asmp-eurasipjournals.springeropen.com/articles/most-recent/rss.xml
https://arxiv.org/pdf/1709.00308.pdf
https://www.researchgate.net/publication/312520074_A_review_on_Deep_Learning_approaches_in_Speaker_Identification
https://www.researchgate.net/publication/317711457_A_Hybrid_Approach_with_Multi-channel_I-Vectors_and_Convolutional_Neural_Networks_for_Acoustic_Scene_Classification
https://www.researchgate.net/publication/320180136_Large-scale_weakly_supervised_audio_classification_using_gated_convolutional_neural_network