In emotional speech synthesis, a large speech database is typically required to produce high-quality output, whereas voice conversion needs only a compact speech database for each emotion. This study designs and collects a set of phonetically balanced, small-sized emotional parallel speech databases from which conversion functions are constructed. The Gaussian mixture bigram model (GMBM) is adopted as the conversion function to characterize the temporal and spectral evolution of the speech signal. A conversion function is initially constructed for each instance of the parallel sub-syllable pairs in the collected databases. To reduce the total number of conversion functions and to select an appropriate one at synthesis time, this study presents a framework that incorporates linguistic and spectral information for conversion-function clustering and selection. Subjective and objective evaluations with statistical hypothesis testing are conducted to assess the quality of the converted speech. The proposed method compares favorably with previous methods in conversion-based emotional speech synthesis.