This paper presents a mechanism for detecting the emotional expression of music using a feature selection approach. Four emotion classes are considered: happiness, sadness, anger, and peace. Thirty-seven features, covering rhythm, dynamics, pitch, and timbre, were extracted to characterize the music samples. Because not all features contribute equally to recognizing emotional expression, kernel-based class separability (KBCS) was introduced to rank the features for emotion classification. Two feature transformation techniques, principal component analysis (PCA) and linear discriminant analysis (LDA), were then applied after feature selection; including them effectively improves classification accuracy. Finally, a k-nearest neighbor (k-NN) classifier is adopted. The results indicate that the proposed method achieves an accuracy of almost 90%.
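The pipeline described above (feature ranking, then PCA and LDA transformation, then k-NN classification) can be sketched in scikit-learn. This is only an illustrative sketch: KBCS is not available in scikit-learn, so a generic ANOVA F-score ranking stands in for it, and the data, class labels, and all parameter values (number of selected features, PCA/LDA dimensions, k) are assumptions for demonstration rather than the paper's settings.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for 37-dimensional feature vectors
# (rhythm, dynamics, pitch, timbre) over 4 emotion classes.
n_samples, n_features, n_classes = 200, 37, 4
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
# Inject class-dependent shifts so a few features are informative,
# mimicking features that matter more for emotion than others.
X[:, :5] += y[:, None]

pipe = Pipeline([
    # ANOVA F-score ranking stands in for the paper's KBCS selection.
    ("select", SelectKBest(f_classif, k=15)),
    ("pca", PCA(n_components=10)),
    # LDA can project to at most (n_classes - 1) = 3 dimensions.
    ("lda", LinearDiscriminantAnalysis(n_components=3)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pipe.fit(X_tr, y_tr)
print(f"test accuracy: {pipe.score(X_te, y_te):.2f}")
```

Chaining the selector and both transforms inside a single `Pipeline` keeps the selection and projection fitted on training data only, which avoids leaking test information into the reduced feature space.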