УДК: 004.93'12, 004.93'14
Correlating images of three-dimensional scenes by clusterizing the correlated local attributes, using the Hough transform
Malashin R.O. Correlating images of three-dimensional scenes by clusterizing the correlated local attributes, using the Hough transform [in Russian] // Opticheskii Zhurnal. 2014. V. 81. № 6. P. 34–42.
R. O. Malashin, "Correlating images of three-dimensional scenes by clusterizing the correlated local attributes, using the Hough transform," Journal of Optical Technology. 81(6), 327-333 (2014). https://doi.org/10.1364/JOT.81.000327
This paper describes algorithms for correlating images of arbitrary three-dimensional scenes by clusterizing correlated key points using the Hough transform. The method is based on a well-known object-detection technique, but an alternative approach is proposed for verifying the clusters of correlated key points. Experimental results for different types of key points confirm that the proposed method has a significant advantage over verification based on the fundamental matrix.
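The clustering step can be pictured as follows: every tentative key-point match votes, in a Hough accumulator, for a coarse similarity transform between the images (orientation difference, scale ratio, translation), and bins that collect several consistent votes form candidate clusters that are then verified. The Python/OpenCV code below is a minimal sketch of that voting scheme in the spirit of the object-detection approach cited in the references; it is not the author's implementation, and the SIFT features, bin sizes, and vote threshold are illustrative assumptions.

# Minimal sketch of Hough-transform clustering of matched key points.
# Not the author's implementation; bin sizes and thresholds are illustrative.
import math
from collections import defaultdict

import cv2  # assumes OpenCV >= 4.4, where SIFT is available


def match_and_cluster(img1, img2, min_votes=4):
    """Match SIFT key points and group matches into pose-consistent clusters."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Tentative matches via the standard nearest-neighbour ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
            good.append(pair[0])

    # Hough accumulator: each match votes for a coarse similarity transform
    # (orientation difference, scale ratio, translation) between the images.
    bins = defaultdict(list)
    for m in good:
        p1, p2 = kp1[m.queryIdx], kp2[m.trainIdx]
        d_angle = (p2.angle - p1.angle) % 360.0
        scale = p2.size / p1.size
        # Translation of the first image's origin predicted by this match.
        tx = p2.pt[0] - scale * p1.pt[0]
        ty = p2.pt[1] - scale * p1.pt[1]
        key = (
            int(d_angle // 30),            # 30-degree orientation bins
            int(round(math.log2(scale))),  # one-octave scale bins
            int(tx // 64), int(ty // 64),  # coarse translation bins
        )
        bins[key].append(m)

    # Bins that collect enough consistent votes become candidate clusters,
    # which would then be passed to a verification stage.
    return [cluster for cluster in bins.values() if len(cluster) >= min_votes]

In Lowe's original scheme each vote is additionally cast into neighbouring bins to reduce boundary effects; the paper's contribution concerns the subsequent stage, namely how the resulting clusters are verified instead of relying on estimation of the fundamental matrix.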
Keywords: correlating three-dimensional scenes, local attributes, Hough transform
Acknowledgements: This work was carried out with the support of the Ministry of Education and Science of the Russian Federation and with the state financial support of the leading universities of the Russian Federation (Subsidy 074-U01).
OCIS codes: 100.3008, 100.5760
References:
1. A. Loui and M. Das, “Matching of complex scenes based on constrained clustering,” in AAAI Fall Symposium: Multimedia Information Extraction, vol. FS-08-05 (2008), pp. 28–30.
2. V. Lutsiv, A. Potapov, T. Novikova, and N. Lapina, “Hierarchical 3D structural matching in the aerospace photographs and indoor scenes,” Proc. SPIE 5807, 455 (2005).
3. M. V. Peterson, “Clustering of a set of identified points on images of dynamic scenes, based on the principle of minimum description length,” J. Opt. Technol. 77, 701 (2010).
4. A. S. Potapov, I. A. Malyshev, A. E. Puysha, and A. N. Averkin, “New paradigm of learnable computer vision algorithms based on the representational MDL principle,” Proc. SPIE 7696, 769606 (2010).
5. D. H. Ballard, “Generalizing the Hough transform to detect arbitrary shapes,” Pattern Recogn. 13, 111 (1981).
6. D. G. Lowe, “Object recognition from local scale-invariant features,” in The Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, Kerkyra, Greece, September 20–27, 1999, pp. 1150–1157.
7. H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded Up Robust Features,” in Proceedings of the Ninth European Conference on Computer Vision, Graz, Austria, May 7–13, 2006, pp. 404–417.
8. D. Lowe, “Local feature view clustering for 3D object recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, USA, December 2001, pp. 682–688.
9. R. Raguram, J. M. Frahm, and M. Pollefeys, “A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus,” in Proceedings of the European Conference on Computer Vision, Marseille, France, October 12–18, 2008, pp. 500–513.
10. ERSP 3.1. Robotic Development Platform, http://www.mobile-vision-technologies.eu/archiv/download/MVT_ersp.pdf.
11. S. Leutenegger, M. Chli, and R. Siegwart, “BRISK: Binary Robust Invariant Scalable Keypoints,” in Proceedings of the International Conference on Computer Vision, Barcelona, Spain, November 8–11, 2011, pp. 2548–2555.