TY - JOUR
T1 - Hybrid Learning
T2 - When Centralized Learning Meets Federated Learning in the Mobile Edge Computing Systems
AU - Feng, Chenyuan
AU - Yang, Howard H.
AU - Wang, Siye
AU - Zhao, Zhongyuan
AU - Quek, Tony Q.S.
N1 - Publisher Copyright:
© 1972-2012 IEEE.
PY - 2023/12/1
Y1 - 2023/12/1
N2 - Federated learning is an emerging artificial intelligence technology in which an edge server orchestrates multiple end users to train a global model collaboratively. Under this setting, users upload only the locally trained model parameters rather than their raw data, substantially reducing communication costs and enhancing data privacy. Nonetheless, federated learning relies mainly on users' local training, overlooking the abundant computing resources of the edge server. To exploit the edge server's processing power, we propose a hybrid learning paradigm that consists of centralized and federated learning components. In this scheme, a portion of each user's data is uploaded for centralized learning while the local models are trained under federated learning. We derive a theoretical upper bound on the model accuracy, which can be used to assess the performance of the proposed learning paradigm. To balance computation and communication resources while maintaining good model accuracy, we formulate a joint optimization problem over model accuracy, latency, and energy consumption, and devise a corresponding joint optimization algorithm to solve it. Experimental results show that, compared with purely centralized and purely federated learning, the proposed hybrid learning algorithm effectively improves model accuracy and significantly reduces computation and communication resource consumption.
AB - Federated learning is an emerging artificial intelligence technology in which an edge server orchestrates multiple end users to train a global model collaboratively. Under this setting, users upload only the locally trained model parameters rather than their raw data, substantially reducing communication costs and enhancing data privacy. Nonetheless, federated learning relies mainly on users' local training, overlooking the abundant computing resources of the edge server. To exploit the edge server's processing power, we propose a hybrid learning paradigm that consists of centralized and federated learning components. In this scheme, a portion of each user's data is uploaded for centralized learning while the local models are trained under federated learning. We derive a theoretical upper bound on the model accuracy, which can be used to assess the performance of the proposed learning paradigm. To balance computation and communication resources while maintaining good model accuracy, we formulate a joint optimization problem over model accuracy, latency, and energy consumption, and devise a corresponding joint optimization algorithm to solve it. Experimental results show that, compared with purely centralized and purely federated learning, the proposed hybrid learning algorithm effectively improves model accuracy and significantly reduces computation and communication resource consumption.
UR - http://www.scopus.com/inward/record.url?scp=85170521975&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85170521975&partnerID=8YFLogxK
U2 - 10.1109/TCOMM.2023.3310529
DO - 10.1109/TCOMM.2023.3310529
M3 - Article
AN - SCOPUS:85170521975
SN - 0090-6778
VL - 71
SP - 7008
EP - 7022
JO - IEEE Transactions on Communications
JF - IEEE Transactions on Communications
IS - 12
ER -