TY - JOUR
T1 - Indoor positioning using convolution neural network to regress camera pose
AU - Ciou, Jing Mei
AU - Lu, E. H.C.
N1 - Funding Information:
This research was partially supported by the Ministry of Science and Technology, Taiwan, R.O.C., under grant no. MOST 107-2119-M-006-028, and by The Mobile Platform Survey and Mapping Technology Development Plan under grant no. 107SU1207, which provided the indoor mobile mapping platform and helped us collect the necessary information and related data for the experiment.
Publisher Copyright:
© Authors 2019.
PY - 2019/6/4
Y1 - 2019/6/4
N2 - In recent years, indoor positioning has attracted growing attention. In the absence of GNSS, achieving accurate positioning remains one of the key challenges in positioning technology. A camera's position can be calculated from images and the objects they contain. This study therefore focuses on locating the user's camera, although computing the camera position efficiently is a very challenging problem. With the rapid development of neural networks for image recognition, computers can not only process images quickly but also achieve good results. A Convolutional Neural Network (CNN) can sense local regions of an image and extract distinctive local features. These basic features likely mirror the building blocks of human vision and are an effective means of improving the recognition rate. We use a 23-layer convolutional neural network architecture and train it end to end on input images of different sizes for the place-recognition task, regressing the camera's position and orientation. We chose underground parking lots as the experimental sites. Compared with other indoor environments such as the Chess, Office, and Kitchen scenes, the conditions in this setting are very severe. Therefore, designing algorithms that train the neural network while excluding dynamic objects is highly exploratory. The experimental results show that our proposed solution can effectively reduce indoor positioning error.
AB - In recent years, indoor positioning has attracted growing attention. In the absence of GNSS, achieving accurate positioning remains one of the key challenges in positioning technology. A camera's position can be calculated from images and the objects they contain. This study therefore focuses on locating the user's camera, although computing the camera position efficiently is a very challenging problem. With the rapid development of neural networks for image recognition, computers can not only process images quickly but also achieve good results. A Convolutional Neural Network (CNN) can sense local regions of an image and extract distinctive local features. These basic features likely mirror the building blocks of human vision and are an effective means of improving the recognition rate. We use a 23-layer convolutional neural network architecture and train it end to end on input images of different sizes for the place-recognition task, regressing the camera's position and orientation. We chose underground parking lots as the experimental sites. Compared with other indoor environments such as the Chess, Office, and Kitchen scenes, the conditions in this setting are very severe. Therefore, designing algorithms that train the neural network while excluding dynamic objects is highly exploratory. The experimental results show that our proposed solution can effectively reduce indoor positioning error.
UR - http://www.scopus.com/inward/record.url?scp=85067480004&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85067480004&partnerID=8YFLogxK
U2 - 10.5194/isprs-archives-XLII-2-W13-1289-2019
DO - 10.5194/isprs-archives-XLII-2-W13-1289-2019
M3 - Conference article
AN - SCOPUS:85067480004
SN - 1682-1750
VL - 42
SP - 1289
EP - 1294
JO - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
JF - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
IS - 2/W13
T2 - 4th ISPRS Geospatial Week 2019
Y2 - 10 June 2019 through 14 June 2019
ER -