ISSN: 1023-5086

Opticheskii Zhurnal (scientific and technical journal)

A full-text English translation of the journal is published by Optica Publishing Group under the title “Journal of Optical Technology”


DOI: 10.17586/1023-5086-2023-90-08-44-54

UDC: 004.932.4, 004.832.32

Method for sharpening combined stereo images in the presence of optical distortions

For Russian citation (Opticheskii Zhurnal):

Малашин Р.О., Михалькова М.А. Метод повышения резкости совмещённых стереоснимков при наличии оптических дисторсий // Оптический журнал. 2023. Т. 90. № 8. С. 44–54. http://doi.org/10.17586/1023-5086-2023-90-08-44-54

For citation (Journal of Optical Technology):
Roman Malashin and Maria Mikhalkova, "Method for sharpening combined stereo images in the presence of optical distortions," Journal of Optical Technology. 90(8), 444-450 (2023).  https://doi.org/10.1364/JOT.90.000444
Abstract:

Subject of the research. Methods for combining images taken from different angles in the presence of lens distortions. Purpose of the work. Development of a method that preserves sharpness (avoids excessive blurring) when transferring one image into the coordinate system of another image. Method. The problem of obtaining a single transformation that transfers an image between the coordinate systems of the two original images is considered. An expression is derived that represents a superposition of several geometric transformations: those caused by the distortions of the two lenses, and the individual shift of each pixel determined by stereo-vision or optical-flow methods. Results. A feature of the problem is that some of the transformations are described analytically (the distortions), while others are described by a matrix of pixel shifts. It is shown that transformations described by a mapping matrix, rather than by an optical flow, are more convenient for this problem. Expressions that connect the coordinate systems of the two images directly are derived. The proposed approach of encapsulating the distortion equations into a matrix describing the optical flow of the rectified images improves the quality of the image converted to the coordinate system of the second frame. This is achieved by avoiding the sequential transfer of the image through several intermediate coordinate systems, each of which involves bilinear interpolation and, consequently, excessive smoothing. An experimental verification of the developed approach was carried out, confirming the better characteristics of the resulting image compared to the traditional approach. Practical significance. The developed method can be applied in optoelectronic systems with multiple lenses in tasks that require, for example, improving the image of a wide-angle camera using the image of a narrow-field lens.
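The gain described in the abstract, one resampling instead of several, can be illustrated with a minimal NumPy sketch. This is not the authors' code: the dense backward-map representation, function names, and composition order are our own assumptions. Each geometric step (lens distortion, rectification, per-pixel disparity shift) is represented as a map of source coordinates per output pixel; composing the maps first, and only then sampling the image once, avoids the repeated bilinear interpolation that smooths a sequentially warped image.

```python
import numpy as np


def bilinear_sample(img, xs, ys):
    """Sample a 2D array at float coordinates (xs, ys) with bilinear interpolation."""
    h, w = img.shape
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    fx = np.clip(xs - x0, 0.0, 1.0)
    fy = np.clip(ys - y0, 0.0, 1.0)
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1]
    bot = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot


def compose_maps(map1, map2):
    """Compose two dense backward coordinate maps: result(p) = map1(map2(p)).

    Each map is a pair (X, Y) of arrays giving, for every output pixel,
    the source coordinates to sample. Composition interpolates the
    *coordinate* arrays of map1 at the positions given by map2, so the
    image itself is still resampled only once, with the composed map.
    """
    X1, Y1 = map1
    X2, Y2 = map2
    Xc = bilinear_sample(X1, X2, Y2)
    Yc = bilinear_sample(Y1, X2, Y2)
    return Xc, Yc
```

Warping with the composed map is mathematically equivalent to warping twice in sequence (first with `map1`, then with `map2`), but the sequential variant interpolates pixel values at every intermediate step, which accumulates blur; the composed variant pays that cost once.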

Keywords:

stereo vision, optical flow, lens distortions, identification of pixels in two images, bilinear interpolation

Acknowledgements:

The research leading to these results was supported by State Program 47 of the State Enterprise "Scientific and Technological Development of the Russian Federation" (2019–2030), topic 0134-2019-0006.

OCIS codes: 070.2025, 110.2960, 080.1753, 080.2720, 100.2980, 100.3010, 100.4994
