Problems of modern methods of three-dimensional photogrammetry
DOI: https://doi.org/10.31649/1999-9941-2024-60-2-31-41
Keywords: photogrammetry, 3D model, scale-invariant feature transform, SIFT algorithm, image key points, neural network
Abstract
Technologies of three-dimensional photogrammetry, one of the methods for creating computer-generated 3D models of objects, have a wide range of scientific and practical applications in fields such as manufacturing, construction, architecture, geodesy, and medicine. The primary challenges of photogrammetric methods, however, stem from their high labor intensity. This work examines the fundamentals of the photogrammetric method for obtaining three-dimensional models of objects and analyzes its key drawbacks and limitations, which arise from the need to identify key elements across numerous images of an object taken from different angles and then align those images accordingly. One of the most effective image comparison methods that can be used in photogrammetric processing to identify key elements in object images is the scale-invariant feature transform (SIFT) algorithm. The paper analyzes the main stages of this algorithm and reviews several modifications that improve its performance by eliminating redundant key points and reducing the dimensionality of the descriptors used to distinguish each key point from the others. Further gains in performance, and a reduction of errors in 3D model creation, can be achieved by removing, at a preliminary stage, frames or images that share no common features because of sharp changes in shooting angle or specific characteristics of the object. To accomplish this, the use of a neural network is proposed to analyze the similarity between each pair of sequentially taken images, which are preprocessed into binary form. Removing such images not only saves time by avoiding unnecessary searches for key points in an object's image but also reduces the likelihood of erroneous matches between key points on different images of the object.
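As an illustration of the matching step discussed above, the sketch below uses the SIFT implementation available in the OpenCV library (an assumption; the paper does not prescribe a particular toolkit) to detect key points in two images and keep only matches that pass Lowe's ratio test. A preliminary pair filter is also shown, but with a simple overlap score of the binarized images standing in for the proposed neural network; the file names, the Otsu binarization, the 0.75 ratio and the 0.5 similarity threshold are hypothetical choices for illustration only.

# Minimal sketch of SIFT matching plus a crude pre-filter for consecutive shots.
# Assumes an OpenCV build that provides cv2.SIFT_create() (OpenCV >= 4.4).
import cv2

def match_keypoints(path_a: str, path_b: str, ratio: float = 0.75):
    """Detect SIFT key points in two images and keep matches passing Lowe's ratio test."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Brute-force matcher with the L2 norm, the standard choice for SIFT descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des_a, des_b, k=2)

    # Lowe's ratio test: keep a match only if it is clearly better than the runner-up.
    good = [m for m, n in (p for p in candidates if len(p) == 2)
            if m.distance < ratio * n.distance]
    return kp_a, kp_b, good

def crude_pair_filter(path_a: str, path_b: str, threshold: float = 0.5) -> bool:
    """Decide whether two consecutive shots overlap enough to be worth matching.

    The paper proposes a neural network operating on binarized image pairs for this
    step; here an IoU-like overlap score after Otsu binarization stands in purely
    for illustration.
    """
    size = (256, 256)
    img_a = cv2.resize(cv2.imread(path_a, cv2.IMREAD_GRAYSCALE), size)
    img_b = cv2.resize(cv2.imread(path_b, cv2.IMREAD_GRAYSCALE), size)
    _, bin_a = cv2.threshold(img_a, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, bin_b = cv2.threshold(img_b, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    overlap = cv2.bitwise_and(bin_a, bin_b)
    union = cv2.bitwise_or(bin_a, bin_b)
    score = overlap.sum() / max(union.sum(), 1)  # similarity in [0, 1]
    return score >= threshold

if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    if crude_pair_filter("frame_001.jpg", "frame_002.jpg"):
        _, _, matches = match_keypoints("frame_001.jpg", "frame_002.jpg")
        print(f"{len(matches)} reliable key point matches found")
    else:
        print("Frames share too few features; pair skipped before SIFT matching")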
References
Abdel-Hakim A. E. and Farag A. A. (2006) CSIFT: A SIFT Descriptor with Color Invariant Characteristics. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06) (pp. 1978–1983), New York, NY, USA. doi: 10.1109/CVPR.2006.95.
Caolan P., Farzad R., Diptangshu P., Hannah Th., Nigel Cl. (2023) A Framework for Realistic Virtual Representation for Immersive Training Environments. In Proceedings of the 23rd International Conference on Construction Applications of Virtual Reality (pp. 274–287). Florence, Italy: University of Florence.
Gosling Th. (2023) Recommended Computer Workstation for Agisoft Metashape. Retrieved from https://www.workstationspecialist.com/recommended-computer-workstation-for-agisoft-metashape/.
Hossein-Nejad Z., Agahi H. and Mahmoodzadeh A. (2021) Image matching based on the adaptive redundant keypoint elimination method in the SIFT algorithm. Pattern Analysis and Applications, vol. 24, 669–683.
Kotlyk S., Romanyuk O., Sokolova O., Kotlyk D. (2022) Development of affordable technology for creating 3D computer models based on photogrammetry. Part I. Automation of Technological and Business Processes, 14 (2), 37–50.
Lowe D. G. (1999) Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision (pp. 1150–1157). Kerkyra, Greece.
Lowe D. G. (2004) Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60, 91–110.
Mikolajczyk K. and Schmid C. (2005) A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27 (10), 1615–1630. doi: 10.1109/TPAMI.2005.188.
Misra S., Li H., He J. (2019) Machine Learning for Subsurface Characterization. Gulf Professional Publishing.
Morel J.-M. and Yu G. (2009) ASIFT: a new framework for fully affine invariant image comparison. SIAM Journal on Imaging Sciences, 2 (2), 438–469.
Nebel S., Beege M., Schneider S. and Rey G. D. (2020) A Review of Photogrammetry and Photorealistic 3D Models in Education From a Psychological Perspective. Frontiers in Education, vol. 5, 1–15.
Olagoke A. S., Ibrahim H., Teoh S. S. (2020) Literature survey on multi-camera system and its application. IEEE Access, 8, 172892–172922.
Pavlov S. V., Romanyuk S. O., Nechiporuk M. L. (2018) Adaptive determination of diffuse and specular components of color for the rendering of face images when planning plastic surgery. Scientific Journal "ScienceRise", 49 (8), 24–28.
Romaniuk S. O., Pavlov S. V., Titova N. V. and Koval L. G. (2022) Using graphic 3D images of faces for express diagnosis and construction of biomedical devices. Optoelectronic Information-Power Technologies, 42 (2), 12–20.
Scholtens A. (2023) Capturing Reality in the Fascinating World of Photogrammetry. Sas155.
Tan X. and Triggs B. (2010) Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Transactions on Image Processing, 19 (6), 1635–1650.
Tang L., Ma S., Ma X., You H. (2022) Research on Image Matching of Improved SIFT Algorithm Based on Stability Factor and Feature Descriptor Simplification. Applied Sciences, 12 (17), 8448–8466.
Zmejevskis L. (2022) PC for Photogrammetry – What Hardware Do You Need? Retrieved from https://www.pix-pro.com/blog/photogrammetry-pc.