TY - GEN
T1 - Source Separation-based Data Augmentation for Improved Joint Beat and Downbeat Tracking
AU - Chiu, Ching-Yu
AU - Ching, Joann
AU - Hsiao, Wen-Yi
AU - Chen, Yu-Hua
AU - Su, Alvin Wen-Yu
AU - Yang, Yi-Hsuan
N1 - Publisher Copyright:
© 2021 European Signal Processing Conference. All rights reserved.
PY - 2021
Y1 - 2021
N2 - Due to advances in deep learning, the performance of automatic beat and downbeat tracking in musical audio signals has seen great improvement in recent years. In training such deep-learning-based models, data augmentation has been found to be an important technique. However, existing data augmentation methods for this task mainly aim at balancing the distribution of the training data with respect to tempo. In this paper, we investigate another approach to data augmentation that accounts for the composition of the training data in terms of percussive and non-percussive sound sources. Specifically, we propose to employ a blind drum separation model to segregate the drum and non-drum sounds from each training audio signal, filter out training signals that are drumless, and then use the obtained drum and non-drum stems to augment the training data. We report experiments on four completely unseen test sets, validating the effectiveness of the proposed method and, accordingly, the importance of drum sound composition in the training data for beat and downbeat tracking.
AB - Due to advances in deep learning, the performance of automatic beat and downbeat tracking in musical audio signals has seen great improvement in recent years. In training such deep-learning-based models, data augmentation has been found to be an important technique. However, existing data augmentation methods for this task mainly aim at balancing the distribution of the training data with respect to tempo. In this paper, we investigate another approach to data augmentation that accounts for the composition of the training data in terms of percussive and non-percussive sound sources. Specifically, we propose to employ a blind drum separation model to segregate the drum and non-drum sounds from each training audio signal, filter out training signals that are drumless, and then use the obtained drum and non-drum stems to augment the training data. We report experiments on four completely unseen test sets, validating the effectiveness of the proposed method and, accordingly, the importance of drum sound composition in the training data for beat and downbeat tracking.
UR - http://www.scopus.com/inward/record.url?scp=85109371259&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85109371259&partnerID=8YFLogxK
U2 - 10.23919/EUSIPCO54536.2021.9616022
DO - 10.23919/EUSIPCO54536.2021.9616022
M3 - Conference contribution
AN - SCOPUS:85109371259
T3 - European Signal Processing Conference
SP - 391
EP - 395
BT - 29th European Signal Processing Conference, EUSIPCO 2021 - Proceedings
PB - European Signal Processing Conference, EUSIPCO
T2 - 29th European Signal Processing Conference, EUSIPCO 2021
Y2 - 23 August 2021 through 27 August 2021
ER -