TY - GEN
T1 - Mixing-Specific Data Augmentation Techniques for Improved Blind Violin/Piano Source Separation
AU - Chiu, Ching-Yu
AU - Hsiao, Wen-Yi
AU - Yeh, Yin-Cheng
AU - Yang, Yi-Hsuan
AU - Su, Alvin Wen-Yu
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/9/21
Y1 - 2020/9/21
N2 - Blind music source separation has been a popular and active subject of research in both the music information retrieval and signal processing communities. To counter the lack of available multi-track data for supervised model training, a data augmentation method that creates artificial mixtures by combining tracks from different songs has been shown useful in recent work. In this light, we further examine in this paper extended data augmentation methods that consider more sophisticated mixing settings employed in modern music production routines, the relationship between the tracks to be combined, and factors of silence. As a case study, we consider the separation of violin and piano tracks in a violin/piano ensemble, evaluating the performance in terms of common metrics, namely SDR, SIR, and SAR. In addition to examining the effectiveness of these new data augmentation methods, we also study the influence of the amount of training data. Our evaluation shows that the proposed mixing-specific data augmentation methods can help improve the performance of a deep learning-based model for source separation, especially when the amount of training data is small.
AB - Blind music source separation has been a popular and active subject of research in both the music information retrieval and signal processing communities. To counter the lack of available multi-track data for supervised model training, a data augmentation method that creates artificial mixtures by combining tracks from different songs has been shown useful in recent work. In this light, we further examine in this paper extended data augmentation methods that consider more sophisticated mixing settings employed in modern music production routines, the relationship between the tracks to be combined, and factors of silence. As a case study, we consider the separation of violin and piano tracks in a violin/piano ensemble, evaluating the performance in terms of common metrics, namely SDR, SIR, and SAR. In addition to examining the effectiveness of these new data augmentation methods, we also study the influence of the amount of training data. Our evaluation shows that the proposed mixing-specific data augmentation methods can help improve the performance of a deep learning-based model for source separation, especially when the amount of training data is small.
UR - http://www.scopus.com/inward/record.url?scp=85099187450&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099187450&partnerID=8YFLogxK
U2 - 10.1109/MMSP48831.2020.9287146
DO - 10.1109/MMSP48831.2020.9287146
M3 - Conference contribution
AN - SCOPUS:85099187450
T3 - IEEE 22nd International Workshop on Multimedia Signal Processing, MMSP 2020
BT - IEEE 22nd International Workshop on Multimedia Signal Processing, MMSP 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd IEEE International Workshop on Multimedia Signal Processing, MMSP 2020
Y2 - 21 September 2020 through 24 September 2020
ER -