TY - GEN
T1 - PONAS: Progressive One-Shot Neural Architecture Search for Very Efficient Deployment
T2 - 2021 International Joint Conference on Neural Networks, IJCNN 2021
AU - Huang, Sian-Yao
AU - Chu, Wei-Ta
N1 - Funding Information:
Acknowledgement: This work was funded in part by Qualcomm through a Taiwan University Research Collaboration Project and in part by the Ministry of Science and Technology, Taiwan, under grants 108-2221-E-006-227-MY3, 107-2923-E-006-009-MY3, and 109-2218-E-002-015.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/18
Y1 - 2021/7/18
N2 - We propose a Progressive One-Shot Neural Architecture Search (PONAS) method to achieve very efficient model search under various hardware constraints. Given a constraint, most neural architecture search (NAS) methods either sample a set of sub-networks according to a pre-trained accuracy predictor or adopt an evolutionary algorithm to evolve specialized networks from a supernet. Both approaches are time-consuming. Our key idea for very efficient deployment is to construct, while searching the architecture space, a table that stores the validation accuracy of every candidate block at every layer. Given a stricter hardware constraint, the architecture of a specialized network can then be determined efficiently from this table by picking the candidate blocks that yield the least accuracy loss. To realize this idea, PONAS combines the advantages of progressive NAS and one-shot methods. A two-stage training scheme, consisting of a meta-training stage and a fine-tuning stage, makes the search process efficient and stable. During search, we evaluate candidate blocks at different layers and construct the accuracy table that is later used for architecture search. Comprehensive experiments verify that PONAS is extremely flexible and can find the architecture of a specialized network in around 10 seconds. On ImageNet classification, 76.29% top-1 accuracy is obtained, which is comparable with the state of the art.
UR - http://www.scopus.com/inward/record.url?scp=85116401714&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85116401714&partnerID=8YFLogxK
U2 - 10.1109/IJCNN52387.2021.9533470
DO - 10.1109/IJCNN52387.2021.9533470
M3 - Conference contribution
AN - SCOPUS:85116401714
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 July 2021 through 22 July 2021
ER -