TY - JOUR
T1 - A Deep Learning-Based Cloud-Edge Healthcare System With Time-of-Flight Cameras
AU - Lee, Shuenn Yuh
AU - Huang, Ting Yun
AU - Yen, Chun Yueh
AU - Lee, I. Pei
AU - Chen, Ju Yi
AU - Huang, Chun Rong
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/3/1
Y1 - 2024/3/1
N2 - This study proposes a comprehensive, vision-based long-term healthcare system that consists of time-of-flight (ToF) cameras at the front end, a Raspberry Pi at the edge, and an image database and classification on a cloud server. First, the ToF cameras capture human actions as depth maps. Next, the Raspberry Pi performs image preprocessing and sends the resulting images to the cloud server by wireless transmission. Finally, the cloud server performs human action recognition using the proposed temporal frame correlation recognition model. Our model extends object detection to 3-D space based on continuous ToF images. Because the depth maps of ToF images do not record users' identities or environments, the system protects users' privacy. The study also builds a human action dataset in which each frame is recorded and labeled with one of five actions: sitting, standing, lying, getting up, and falling. With further optimization, the system can improve the long-term healthcare environment and relieve the nursing burden of elderly care.
AB - This study proposes a comprehensive, vision-based long-term healthcare system that consists of time-of-flight (ToF) cameras at the front end, a Raspberry Pi at the edge, and an image database and classification on a cloud server. First, the ToF cameras capture human actions as depth maps. Next, the Raspberry Pi performs image preprocessing and sends the resulting images to the cloud server by wireless transmission. Finally, the cloud server performs human action recognition using the proposed temporal frame correlation recognition model. Our model extends object detection to 3-D space based on continuous ToF images. Because the depth maps of ToF images do not record users' identities or environments, the system protects users' privacy. The study also builds a human action dataset in which each frame is recorded and labeled with one of five actions: sitting, standing, lying, getting up, and falling. With further optimization, the system can improve the long-term healthcare environment and relieve the nursing burden of elderly care.
UR - http://www.scopus.com/inward/record.url?scp=85181562958&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85181562958&partnerID=8YFLogxK
U2 - 10.1109/JSEN.2023.3347718
DO - 10.1109/JSEN.2023.3347718
M3 - Article
AN - SCOPUS:85181562958
SN - 1530-437X
VL - 24
SP - 7064
EP - 7074
JO - IEEE Sensors Journal
JF - IEEE Sensors Journal
IS - 5
ER -