Automatic violin synthesis using expressive musical term features

Chih Hong Yang, Pei Ching Li, Wen-Yu Su, Li Su, Yi Hsuan Yang

Research output: Paper

1 Citation (Scopus)

Abstract

The control of interpretational properties such as duration, vibrato, and dynamics is important in music performance. Musicians continuously manipulate such properties to achieve different expressive intentions. This paper presents a synthesis system that automatically converts a mechanical, deadpan interpretation into distinct expressions by controlling these expressive factors. Extending prior work on expressive musical term (EMT) analysis, we derive a subset of essential features as the control parameters, such as the relative time position of the energy peak in a note and the mean temporal length of the notes. An algorithm is proposed to manipulate the energy contour (i.e., the dynamics) of a note. The intended expressions of the synthesized sounds are evaluated with the machine model developed in the prior work. Ten musical expressions, such as Risoluto and Maestoso, are considered, and the evaluation is done on held-out music pieces. Our evaluations show that the machine recognizes the expressions of the synthetic versions more easily than those of real recordings by an amateur student. While a listening test is still in preparation as a next step for further validation, this work represents, to the best of our knowledge, a first attempt to build and quantitatively evaluate a system for EMT analysis/synthesis.
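The abstract names two concrete technical ingredients: the relative time position of a note's energy peak as a control feature, and an algorithm that reshapes a note's energy contour to realize a target dynamic. The paper's actual algorithm is not described in this record, so the following Python sketch is an illustration only: it computes the peak-position feature (and the mean note length, the other feature named above) from an RMS energy contour, then remaps the contour's time axis so the peak lands at a target relative position. All function names and the piecewise-linear warping scheme are assumptions, not the authors' method.

import numpy as np

def rms_energy_contour(note, frame=512, hop=256):
    # Frame-wise RMS energy of a mono note signal (the "energy contour").
    n_frames = 1 + max(0, (len(note) - frame) // hop)
    return np.array([np.sqrt(np.mean(note[i * hop : i * hop + frame] ** 2))
                     for i in range(n_frames)])

def relative_peak_position(energy):
    # Relative time position (0..1) of the energy peak within the note.
    return float(np.argmax(energy)) / max(len(energy) - 1, 1)

def mean_note_length(durations_sec):
    # Mean temporal length of the notes, the second feature named above.
    return float(np.mean(durations_sec))

def shift_energy_peak(energy, target_pos):
    # Piecewise-linear time remapping that moves the contour's peak to
    # target_pos; a hypothetical stand-in for the paper's algorithm.
    n = len(energy)
    peak = relative_peak_position(energy)
    t = np.linspace(0.0, 1.0, n)
    # Map [0, target_pos] onto [0, peak] and [target_pos, 1] onto [peak, 1].
    src = np.where(t <= target_pos,
                   t * peak / max(target_pos, 1e-9),
                   peak + (t - target_pos) * (1 - peak) / max(1 - target_pos, 1e-9))
    return np.interp(src, t, energy)

if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0, 0.5, int(0.5 * sr), endpoint=False)
    # Synthetic "deadpan" note: a 440 Hz tone with an early energy peak.
    note = np.sin(2 * np.pi * 440 * t) * np.exp(-200 * (t - 0.1) ** 2)
    e = rms_energy_contour(note)
    print("peak position:", round(relative_peak_position(e), 2))      # ~0.19
    warped = shift_energy_peak(e, target_pos=0.7)
    print("shifted peak:", round(relative_peak_position(warped), 2))  # ~0.7

In a full system, the warped contour would presumably be imposed back onto the audio (e.g., by frame-wise gain), with target values drawn from the EMT feature statistics learned in the prior analysis work.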

Original language: English
Pages: 209-215
Number of pages: 7
Publication status: Published - 2016 Jan 1
Event: 19th International Conference on Digital Audio Effects, DAFx 2016 - Brno, Czech Republic
Duration: 2016 Sep 5 - 2016 Sep 9

Other

Other: 19th International Conference on Digital Audio Effects, DAFx 2016
Country: Czech Republic
City: Brno
Period: 16-09-05 - 16-09-09

Fingerprint

music
synthesis
Acoustic waves
Students
evaluation
set theory
recording
acoustics
Musical Terms
Violin
Expressive
energy
Intentions
Musicians
Maestoso
Music Performance
Convert

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Acoustics and Ultrasonics
  • Music
  • Computer Science Applications

Cite this

Yang, C. H., Li, P. C., Su, W-Y., Su, L., & Yang, Y. H. (2016). Automatic violin synthesis using expressive musical term features. 209-215. Paper presented at 19th International Conference on Digital Audio Effects, DAFx 2016, Brno, Czech Republic.
Yang, Chih Hong ; Li, Pei Ching ; Su, Wen-Yu ; Su, Li ; Yang, Yi Hsuan. / Automatic violin synthesis using expressive musical term features. Paper presented at 19th International Conference on Digital Audio Effects, DAFx 2016, Brno, Czech Republic. 7 p.
@conference{512569451e684e4884cdef65056f60f9,
title = "Automatic violin synthesis using expressive musical term features",
abstract = "The control of interpretational properties such as duration, vibrato, and dynamics is important in music performance. Musicians continuously manipulate such properties to achieve different expressive intentions. This paper presents a synthesis system that automatically converts a mechanical, deadpan interpretation into distinct expressions by controlling these expressive factors. Extending prior work on expressive musical term (EMT) analysis, we derive a subset of essential features as the control parameters, such as the relative time position of the energy peak in a note and the mean temporal length of the notes. An algorithm is proposed to manipulate the energy contour (i.e., the dynamics) of a note. The intended expressions of the synthesized sounds are evaluated with the machine model developed in the prior work. Ten musical expressions, such as Risoluto and Maestoso, are considered, and the evaluation is done on held-out music pieces. Our evaluations show that the machine recognizes the expressions of the synthetic versions more easily than those of real recordings by an amateur student. While a listening test is still in preparation as a next step for further validation, this work represents, to the best of our knowledge, a first attempt to build and quantitatively evaluate a system for EMT analysis/synthesis.",
author = "Yang, {Chih Hong} and Li, {Pei Ching} and Wen-Yu Su and Li Su and Yang, {Yi Hsuan}",
year = "2016",
month = "1",
day = "1",
language = "English",
pages = "209--215",
note = "19th International Conference on Digital Audio Effects, DAFx 2016 ; Conference date: 05-09-2016 Through 09-09-2016",

}

Yang, CH, Li, PC, Su, W-Y, Su, L & Yang, YH 2016, 'Automatic violin synthesis using expressive musical term features', paper presented at 19th International Conference on Digital Audio Effects, DAFx 2016, Brno, Czech Republic, 16-09-05 - 16-09-09, pp. 209-215.

Automatic violin synthesis using expressive musical term features. / Yang, Chih Hong; Li, Pei Ching; Su, Wen-Yu; Su, Li; Yang, Yi Hsuan.

2016. 209-215. Paper presented at 19th International Conference on Digital Audio Effects, DAFx 2016, Brno, Czech Republic.

Research output: Paper

TY - CONF

T1 - Automatic violin synthesis using expressive musical term features

AU - Yang, Chih Hong

AU - Li, Pei Ching

AU - Su, Wen-Yu

AU - Su, Li

AU - Yang, Yi Hsuan

PY - 2016/1/1

Y1 - 2016/1/1

N2 - The control of interpretational properties such as duration, vibrato, and dynamics is important in music performance. Musicians continuously manipulate such properties to achieve different expressive intentions. This paper presents a synthesis system that automatically converts a mechanical, deadpan interpretation into distinct expressions by controlling these expressive factors. Extending prior work on expressive musical term (EMT) analysis, we derive a subset of essential features as the control parameters, such as the relative time position of the energy peak in a note and the mean temporal length of the notes. An algorithm is proposed to manipulate the energy contour (i.e., the dynamics) of a note. The intended expressions of the synthesized sounds are evaluated with the machine model developed in the prior work. Ten musical expressions, such as Risoluto and Maestoso, are considered, and the evaluation is done on held-out music pieces. Our evaluations show that the machine recognizes the expressions of the synthetic versions more easily than those of real recordings by an amateur student. While a listening test is still in preparation as a next step for further validation, this work represents, to the best of our knowledge, a first attempt to build and quantitatively evaluate a system for EMT analysis/synthesis.

AB - The control of interpretational properties such as duration, vibrato, and dynamics is important in music performance. Musicians continuously manipulate such properties to achieve different expressive intentions. This paper presents a synthesis system that automatically converts a mechanical, deadpan interpretation into distinct expressions by controlling these expressive factors. Extending prior work on expressive musical term (EMT) analysis, we derive a subset of essential features as the control parameters, such as the relative time position of the energy peak in a note and the mean temporal length of the notes. An algorithm is proposed to manipulate the energy contour (i.e., the dynamics) of a note. The intended expressions of the synthesized sounds are evaluated with the machine model developed in the prior work. Ten musical expressions, such as Risoluto and Maestoso, are considered, and the evaluation is done on held-out music pieces. Our evaluations show that the machine recognizes the expressions of the synthetic versions more easily than those of real recordings by an amateur student. While a listening test is still in preparation as a next step for further validation, this work represents, to the best of our knowledge, a first attempt to build and quantitatively evaluate a system for EMT analysis/synthesis.

UR - http://www.scopus.com/inward/record.url?scp=85030245783&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85030245783&partnerID=8YFLogxK

M3 - Paper

AN - SCOPUS:85030245783

SP - 209

EP - 215

ER -

Yang CH, Li PC, Su W-Y, Su L, Yang YH. Automatic violin synthesis using expressive musical term features. 2016. Paper presented at 19th International Conference on Digital Audio Effects, DAFx 2016, Brno, Czech Republic.