Automatic image matching has long been an essential task in digital photogrammetry. Photogrammetric triangulation requires control points and tie points in overlapping images to establish connections among the images. Usually, corners or specific marks on the images are chosen as tie points. In computer vision, SIFT (Scale-Invariant Feature Transform) is widely used for image matching: it extracts feature points from each image and matches images according to their distinctive descriptors. This study decomposes the SIFT algorithm, which consists of octaves and levels representing different spatial scales and image resolutions, to uncover the relationships hidden within the algorithm. We start from large-scale images and proceed step by step to match smaller-scale ones, comparing the correct matches obtained from different scale combinations to analyze the differences and accuracy. Because some matches may be erroneous, we apply RANSAC (RANdom SAmple Consensus) to remove outliers for higher accuracy and precision. By estimating an affine transformation and solving the relative orientation of each image pair, we obtain the residuals of images that pass through several different levels of image matching. This study analyzes how different scales and image resolutions affect image matching results.
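The RANSAC step described in the abstract (removing erroneous matches before an affine transformation is fitted to the tie points) could be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the function names, iteration count, and inlier threshold are assumptions chosen for the example.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst (needs >= 3 points)."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src      # rows for the x equations: a*x + b*y + tx
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src      # rows for the y equations: c*x + d*y + ty
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)   # [[a, b, tx], [c, d, ty]]

def ransac_affine(src, dst, n_iter=200, thresh=2.0, seed=0):
    """RANSAC: repeatedly fit an affine model to 3 random matches,
    keep the model with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]          # apply candidate transform
        resid = np.linalg.norm(pred - dst, axis=1)  # reprojection residuals
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the model on the full consensus set
    M = fit_affine(src[best_inliers], dst[best_inliers])
    return M, best_inliers
```

The residuals of the refined model on the inlier set correspond to the per-level matching residuals the study compares across scale combinations.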
|Publication status||Published - 1 Jan 2017|
|事件||38th Asian Conference on Remote Sensing - Space Applications: Touching Human Lives, ACRS 2017 - New Delhi, India|
Duration: 23 Oct 2017 → 27 Oct 2017
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications