Automatic image matching is an essential task in digital photogrammetry. Photogrammetric triangulation requires control points and tie points in overlapping images to establish connections among the images; usually, corners or specific marks in the images are chosen as tie points. In computer vision, SIFT (Scale-Invariant Feature Transform) is a well-known image matching method that extracts feature points in each image and matches images according to their distinctive descriptors. This study decomposes the SIFT algorithm, whose octaves and levels represent different spatial scales and image resolutions, to investigate the relationships hidden within the algorithm. We start by matching large-scale images and proceed step by step to smaller-scale ones, comparing the correct matches of different scale combinations to analyze the differences in accuracy. Because some matches may be erroneous, we apply RANSAC (RANdom SAmple Consensus) to remove outliers for higher accuracy and precision. By applying an affine transformation and solving the relative orientation of each image pair, we obtain the residuals of images that have gone through several different levels of image matching. This study aims to analyze how different scales and image resolutions affect image matching results.
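The outlier-removal step described above — fitting an affine transformation to candidate matches and rejecting those that disagree — can be sketched with a minimal RANSAC loop. This is an illustrative sketch only, not the authors' implementation: the function names (`estimate_affine`, `ransac_affine`), the pixel threshold, and the synthetic match data are assumptions made for the example.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine fit so that dst ~ src @ A.T + t,
    for (N, 2) point arrays. Returns a 2x3 matrix [[a, b, tx], [c, d, ty]]."""
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src          # x-equations: a*x + b*y + tx
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src          # y-equations: c*x + d*y + ty
    M[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)

def apply_affine(P, pts):
    """Map (N, 2) points through the 2x3 affine matrix P."""
    return pts @ P[:, :2].T + P[:, 2]

def ransac_affine(src, dst, iters=200, thresh=1.0, seed=None):
    """RANSAC: repeatedly fit a minimal 3-point sample, keep the model
    with the most inliers (residual < thresh), then refit on them."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        P = estimate_affine(src[idx], dst[idx])
        resid = np.linalg.norm(apply_affine(P, src) - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return estimate_affine(src[best], dst[best]), best

# Synthetic demo (assumed data): 80 correct matches related by a known
# affine map, plus 20 gross outliers simulating false SIFT matches.
rng = np.random.default_rng(0)
A_true = np.array([[1.01, 0.02, 5.0],
                   [-0.02, 0.99, -3.0]])
src = rng.uniform(0.0, 100.0, (100, 2))
dst = apply_affine(A_true, src)
dst[:20] += rng.uniform(30.0, 60.0, (20, 2))   # corrupt the first 20 matches
P_est, inliers = ransac_affine(src, dst, seed=1)
```

After RANSAC, the surviving inliers can be fed into the relative-orientation solution, and the residuals of that fit compared across the different octave/level combinations.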
Publication status: Published - 2017 Jan 1
Event: 38th Asian Conference on Remote Sensing - Space Applications: Touching Human Lives, ACRS 2017 - New Delhi, India
Duration: 2017 Oct 23 → 2017 Oct 27
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications