TY - GEN
T1 - VRank
T2 - 17th IEEE International Workshop on Multimedia Signal Processing, MMSP 2015
AU - Lim, Tekoing
AU - Hua, Kai Lung
AU - Wang, Hong Cyuan
AU - Zhao, Kai Wen
AU - Hu, Min Chun
AU - Cheng, Wen Huang
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/11/30
Y1 - 2015/11/30
N2 - Ranking algorithms have shown great potential for human age estimation. A common paradigm is to compare the input face with reference faces of known age to generate a ranking relation, whereby the first-ranked reference is used to label the input face. In this paper, we propose a framework, called Voting system on Ranking model (VRank), that improves upon the typical ranking model by leveraging relational information (comparative relations, i.e., whether the input face is younger or older than each of the references) to make a more robust estimation. Our approach has several advantages: firstly, comparative relations can be explicitly involved to benefit the estimation task; secondly, a few incorrect comparisons will not significantly affect the accuracy of the result, making this approach more robust than the conventional one; finally, we propose to incorporate a deep learning architecture for training, which extracts robust facial features to increase the effectiveness of classification. In comparison to the best results from state-of-the-art methods, VRank achieves a significant improvement on all benchmarks, with relative improvements of 5.74% ∼ 69.45% (FG-NET), 19.09% ∼ 68.71% (MORPH), and 0.55% ∼ 17.73% (IoG).
AB - Ranking algorithms have shown great potential for human age estimation. A common paradigm is to compare the input face with reference faces of known age to generate a ranking relation, whereby the first-ranked reference is used to label the input face. In this paper, we propose a framework, called Voting system on Ranking model (VRank), that improves upon the typical ranking model by leveraging relational information (comparative relations, i.e., whether the input face is younger or older than each of the references) to make a more robust estimation. Our approach has several advantages: firstly, comparative relations can be explicitly involved to benefit the estimation task; secondly, a few incorrect comparisons will not significantly affect the accuracy of the result, making this approach more robust than the conventional one; finally, we propose to incorporate a deep learning architecture for training, which extracts robust facial features to increase the effectiveness of classification. In comparison to the best results from state-of-the-art methods, VRank achieves a significant improvement on all benchmarks, with relative improvements of 5.74% ∼ 69.45% (FG-NET), 19.09% ∼ 68.71% (MORPH), and 0.55% ∼ 17.73% (IoG).
UR - http://www.scopus.com/inward/record.url?scp=84960345403&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84960345403&partnerID=8YFLogxK
U2 - 10.1109/MMSP.2015.7340789
DO - 10.1109/MMSP.2015.7340789
M3 - Conference contribution
AN - SCOPUS:84960345403
T3 - 2015 IEEE 17th International Workshop on Multimedia Signal Processing, MMSP 2015
BT - 2015 IEEE 17th International Workshop on Multimedia Signal Processing, MMSP 2015
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 19 October 2015 through 21 October 2015
ER -