TY - GEN
T1 - AoI and energy consumption oriented dynamic status updating in caching enabled IoT networks
AU - Xu, Chao
AU - Wang, Xijun
AU - Yang, Howard H.
AU - Sun, Hongguang
AU - Quek, Tony Q.S.
N1 - Funding Information:
ACKNOWLEDGMENTS This paper is supported by the National Natural Science Foundation of China (61701372), the Talents Special Foundation of Northwest A&F University (Z111021801), the Key Research and Development Program of Shaanxi (2019ZDLNY07-02-01), the Fundamental Research Funds for the Central Universities of China (19lgpy79), and the Research Fund of the Key Laboratory of Wireless Sensor Network & Communication (20190912).
PY - 2020/7
Y1 - 2020/7
N2 - Caching has been regarded as a promising technique to reduce the energy consumption of sensors in Internet of Things (IoT) networks by responding to users' requests with the data packets stored in the edge caching node (ECN). For real-time applications in caching-enabled IoT networks, it is essential to develop dynamic status update strategies that strike a balance between the information freshness experienced by users and the energy consumed by the sensor; this problem, however, has not been well addressed. In this paper, we first characterize the evolution of information freshness, in terms of the age of information (AoI), at each user. Then, we formulate a dynamic status update optimization problem to minimize the expectation of a long-term accumulative cost, which jointly considers the users' AoI and the sensor's energy consumption. To solve this problem, we cast the status updating procedure as a Markov decision process (MDP) and propose a model-free reinforcement learning algorithm, which addresses the challenge posed by the unknown dynamics of the formulated MDP. Finally, simulations are conducted to validate the convergence of the proposed algorithm and its effectiveness compared with the zero-wait baseline policy.
AB - Caching has been regarded as a promising technique to reduce the energy consumption of sensors in Internet of Things (IoT) networks by responding to users' requests with the data packets stored in the edge caching node (ECN). For real-time applications in caching-enabled IoT networks, it is essential to develop dynamic status update strategies that strike a balance between the information freshness experienced by users and the energy consumed by the sensor; this problem, however, has not been well addressed. In this paper, we first characterize the evolution of information freshness, in terms of the age of information (AoI), at each user. Then, we formulate a dynamic status update optimization problem to minimize the expectation of a long-term accumulative cost, which jointly considers the users' AoI and the sensor's energy consumption. To solve this problem, we cast the status updating procedure as a Markov decision process (MDP) and propose a model-free reinforcement learning algorithm, which addresses the challenge posed by the unknown dynamics of the formulated MDP. Finally, simulations are conducted to validate the convergence of the proposed algorithm and its effectiveness compared with the zero-wait baseline policy.
UR - http://www.scopus.com/inward/record.url?scp=85091537567&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091537567&partnerID=8YFLogxK
U2 - 10.1109/INFOCOMWKSHPS50562.2020.9162687
DO - 10.1109/INFOCOMWKSHPS50562.2020.9162687
M3 - Conference contribution
AN - SCOPUS:85091537567
T3 - IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2020
SP - 710
EP - 715
BT - IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE INFOCOM Conference on Computer Communications Workshops, INFOCOM WKSHPS 2020
Y2 - 6 July 2020 through 9 July 2020
ER -