Publication

Musical Source Separation

2020
Student project
Abstract

Musical source separation is a complex topic that has been extensively explored in the signal processing community and has benefited greatly from recent machine learning research. Many deep learning models with impressive source separation quality have been released in the last couple of years, all of them dealing with studio-recorded music split into four instrument categories: vocals, drums, bass, and other. We study how the number of instrument categories can be extended and conclude that separating electric guitar is also feasible. We then turn our attention towards learning relevant signal encodings using parameterized filterbanks, and we observe that filterbanks alone cannot improve over simple convolutions, but they help when the encoder combines both convolutions and filterbanks. Finally, we try to adapt models trained on studio music to live music separation and conclude that models trained on clean data also perform best on live music.
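To illustrate the encoder idea mentioned above, here is a minimal NumPy sketch (not the project's actual model) of an encoder whose filters are partly parameterized band-pass filterbank channels (here sinc-based, with only the cutoff frequencies as parameters) and partly free convolutional kernels; all names, filter counts, and frequency values are illustrative assumptions.

```python
import numpy as np

def sinc_bandpass(low_hz, high_hz, length, sr):
    """Parameterized band-pass filter: difference of two windowed sinc
    low-pass filters. Only the two cutoffs would be learnable."""
    t = (np.arange(length) - (length - 1) / 2) / sr
    def lowpass(fc):
        return 2 * fc / sr * np.sinc(2 * fc * t)
    return (lowpass(high_hz) - lowpass(low_hz)) * np.hamming(length)

def encode(signal, filters, hop):
    """Strided correlation of the signal with each filter,
    i.e. a 1-D convolutional encoder producing (frames, channels)."""
    length = filters.shape[1]
    starts = range(0, len(signal) - length + 1, hop)
    frames = np.stack([signal[s:s + length] for s in starts])
    return frames @ filters.T

sr, length, hop = 16000, 64, 32
# Hybrid encoder: 8 parameterized sinc band-passes + 8 free kernels.
sinc_filters = np.stack([sinc_bandpass(lo, lo + 400.0, length, sr)
                         for lo in np.linspace(100.0, 6000.0, 8)])
free_filters = np.random.default_rng(0).standard_normal((8, length)) * 0.1
filters = np.concatenate([sinc_filters, free_filters])

x = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)  # 1 s test tone
z = encode(x, filters, hop)
print(z.shape)
```

The sinc channels constrain part of the encoder to interpretable band-pass responses, while the free kernels keep the flexibility of plain learned convolutions, mirroring the hybrid encoder composition discussed in the abstract.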
