ITMO

ISSN: 1023-5086

Scientific and technical journal

Opticheskii Zhurnal

A full-text English translation of the journal is published by Optica Publishing Group under the title “Journal of Optical Technology”


DOI: 10.17586/1023-5086-2025-92-04-60-70

UDC: 004.932.2

Extraction of low-contrast blurred objects in space images using convolutional neural networks

For Russian citation (Opticheskii Zhurnal):

Гмыря В.А., Трещалин А.П. Выделение малоконтрастных смазанных объектов на космических изображениях с помощью свёрточных нейронных сетей // Оптический журнал. 2025. Т. 92. № 4. С. 60–70. http://doi.org/10.17586/1023-5086-2025-92-04-60-70

 

Gmyria V.A., Treshchalin A.P. Extraction of low-contrast blurred objects in space images using convolutional neural networks [in Russian] // Opticheskii Zhurnal. 2025. V. 92. № 4. P. 60–70. http://doi.org/10.17586/1023-5086-2025-92-04-60-70

For citation (Journal of Optical Technology):
-
Abstract:

Subject of study. The efficiency of applying convolutional neural networks to the problem of extracting low-contrast blurred objects in an image. Aim of study. To evaluate the accuracy and reliability of the extraction of low-contrast blurred objects using convolutional neural network models, and to establish which of the studied neural networks, and at what values of the object signal-to-noise ratio, provide higher performance than the traditional algorithm based on threshold processing. Method. In this study, convolutional neural networks are used to obtain binary masks of starry-sky images. To remove noise, the binary mask is multiplied pixel-wise with the original image. The result of the multiplication is fed to the input of a center-of-mass algorithm that calculates the centroids of the objects. Segmentation-quality and centroid-error metrics were used to evaluate the accuracy of object extraction, and the extraction coefficient was used to evaluate its reliability. Main results. An algorithm for object extraction based on a convolutional neural network is proposed. To generate the training and test datasets, an algorithm simulating the operation of the onboard optoelectronic system of a spacecraft was implemented in the MATLAB programming environment. It has been established that the U-Net and SegNet models are a more effective tool for extracting low-contrast blurred objects. The ranges of object signal-to-noise ratio in which these models exhibit the best object-extraction metrics have been determined. Practical significance. The proposed neural-network-based object extraction algorithm extracts lower-contrast objects than the traditional algorithm and also provides a lower error in calculating centroids. The results obtained in this research will serve as a basis for further work aimed at implementing the proposed object extraction algorithm on a mock-up of the onboard optoelectronic system of a spacecraft.
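The processing chain described in the abstract (binary mask from the network, pixel-wise multiplication with the original image, then center-of-mass centroiding) can be sketched in a few lines. The following is an illustrative NumPy sketch only, not the authors' MATLAB implementation; the function names and the simple 4-connected flood-fill labeling are assumptions made for the example.

```python
import numpy as np

def label_regions(mask):
    """Label 4-connected regions of a boolean mask with a simple flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            stack.append((ny, nx))
    return labels, count

def extract_centroids(image, mask):
    """Suppress background by pixel-wise multiplication with the binary mask,
    then compute an intensity-weighted center of mass for each object."""
    cleaned = image * mask              # noise outside the mask is zeroed out
    labels, n = label_regions(np.asarray(mask, dtype=bool))
    centroids = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        w = cleaned[ys, xs].astype(float)
        total = w.sum()
        centroids.append((float((ys * w).sum() / total),
                          float((xs * w).sum() / total)))
    return centroids
```

For a uniformly bright 2×2 object occupying rows and columns 1–2 of an otherwise empty frame, the intensity-weighted centroid lands at (1.5, 1.5), as expected for the center of mass of the blur spot.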

Keywords:

convolutional neural network, low-contrast blurred object, segmentation, object extraction, centroid

OCIS codes: 100.2000, 100.2960, 100.4996, 100.3008

References:

1. Dong W., Tao S., Xu G., Chen Y. Blind deconvolution for Poissonian blurred image with total variation and L0-norm gradient regularizations // IEEE Transactions on Image Processing. 2020. V. 30. P. 1030–1043. https://doi.org/10.1109/TIP.2020.3038518
2. Zhou H., Chen Y., Feng H., Lv G., Xu Z., Li Q. Rotated rectangular aperture imaging through multi-frame blind deconvolution with Hyper-Laplacian priors // Optics Express. 2021. V. 29. № 8. P. 12145–12159. https://doi.org/10.1364/OE.424129
3. Chen X., Yang R., Guo C., Ge S., Wu Z., Liu X. Hyper-Laplacian regularized non-local low-rank prior for blind image deblurring // IEEE Access. 2020. V. 8. P. 136917–136929. https://doi.org/10.1109/ACCESS.2020.3010540
4. Xu Z., Chen H., Li Z. Blind image deblurring using group sparse representation // Digital Signal Processing. 2020. V. 102. P. 102736. https://doi.org/10.1016/j.dsp.2020.102736
5. Gong D., Zhang Z., Shi Q., Van den Hengel A., Shen C., Zhang Y. Learning deep gradient descent optimization for image deconvolution // IEEE Transactions on Neural Networks and Learning Systems. 2020. V. 31. № 12. P. 5468–5482. https://doi.org/10.1109/TNNLS.2020.2968289
6. Ma X., Xia X., Zhang Z., Wang G., Qian H. Star image processing of SINS/CNS integrated navigation system based on 1DWF under high dynamic conditions // 2016 IEEE/ION Position, Location and Navigation Symposium (PLANS). Savannah, USA. April 11–14, 2016. P. 514–518. https://doi.org/10.1109/PLANS.2016.7479740
7. Wang K., Zhang C., Li Y., Kan X. A new restoration algorithm for the smeared image of a SINS-aided star sensor // The Journal of Navigation. 2014. V. 67. № 5. P. 881–898. https://doi.org/10.1017/S0373463314000277
8. Mu Z., Wang J., He X., Wei Z., He J., Zhang L., Lv Y., He D. Restoration method of a blurred star image for a star sensor under dynamic conditions // Sensors. 2019. V. 19. № 19. P. 4127. https://doi.org/10.3390/s19194127
9. Zhang H., Niu Y., Lu J., Zhang H. Accurate and autonomous star acquisition method for star sensor under complex conditions // Mathematical Problems in Engineering. 2017. V. 2017. № 1. P. 1643967. https://doi.org/10.1155/2017/1643967
10. Vianna P., Farias R., de Albuquerque Pereira W.C. U-Net and SegNet performances on lesion segmentation of breast ultrasonography images // Research on Biomedical Engineering. 2021. V. 37. P. 171–179. https://doi.org/10.1007/s42600-021-00137-4
11. Dorgham O., Naser M. A., Ryalat M. H., Hyari A., Al-Najdawi N., Mirjalili S. U-NetCTS: U-Net deep neural network for fully automatic segmentation of 3D CT DICOM volume // Smart Health. 2022. V. 26. P. 100304. https://doi.org/10.1016/j.smhl.2022.100304
12. Imtiaz T., Fattah S.A., Saquib M. ConDANet: Contourlet Driven Attention Network for automatic nuclei segmentation in histopathology images // IEEE Access. 2023. https://doi.org/10.1109/ACCESS.2023.3321799
13. Seong H., Hyun J., Kim E. FOSNet: An end-to-end trainable deep neural network for scene recognition // IEEE Access. 2020. V. 8. P. 82066–82077. https://doi.org/10.1109/ACCESS.2020.2989863
14. Du H., Wang W., Wang X., Wang Y. Autonomous landing scene recognition based on transfer learning for drones // Journal of Systems Engineering and Electronics. 2023. V. 34. № 1. P. 28–35. https://doi.org/10.23919/JSEE.2023.000031
15. Wang L., Liu Y., Fu L., Wang Y., Tang N. Functional intelligence-based scene recognition scheme for MAV environment-adaptive navigation // Drones. 2022. V. 6. № 5. P. 120. https://doi.org/10.3390/drones6050120
16. Tong X., Su S., Wu P., Guo R., Wei J., Zuo Z., Sun B. MSAFFNet: A multi-scale label-supervised attention feature fusion network for infrared small target detection // IEEE Transactions on Geoscience and Remote Sensing. 2023. V. 61. P. 1–16. https://doi.org/10.1109/TGRS.2023.3279253
17. Du J., Lu H., Hu M., Zhang L., Shen X. CNN-based infrared dim small target detection algorithm using target-oriented shallow-deep features and effective small anchor // IET Image Processing. 2021. V. 15. № 1. P. 1–15. https://doi.org/10.1049/ipr2.12001
18. Zuo Z., Tong X., Wei J., Su S., Wu P., Guo R., Sun B. AFFPN: Attention fusion feature pyramid network for small infrared target detection // Remote Sensing. 2022. V. 14. № 14. P. 3412. https://doi.org/10.3390/rs14143412
19. Строилов Н.А., Купцов Т.В., Базина Е.А., Никитин А.В., Эльяшев Я.Д., Юматов Б.А. Определение функции рассеяния точки оптической системы звёздных датчиков // Современные проблемы дистанционного зондирования Земли из космоса. 2022. Т. 19. № 6. С. 41–49. https://doi.org/10.21046/2070-7401-2022-19-6-41-49
Stroilov N.A., Kuptsov T.V., Bazina E.A., Nikitin A.V., Elyashev Ya.D., Yumatov B.A. Determination of the point spread function of the optical system of star sensors [in Russian] // Modern Problems of Remote Sensing of the Earth from Space. 2022. V. 19. № 6. P. 41–49. https://doi.org/10.21046/2070-7401-2022-19-6-41-49
20. Yan J., Jiang J., Zhang G. Dynamic imaging model and parameter optimization for a star tracker // Optics Express. 2016. V. 24. № 6. P. 5961–5983. https://doi.org/10.1364/OE.24.005961

21. Zhang G. Star identification. Beijing, China: National Defense Industry Press, 2011. P. 57–58.
22. Ronneberger O., Fischer P., Brox T. U-Net: Convolutional networks for biomedical image segmentation // Medical image computing and computer-assisted intervention. MICCAI 2015: 18th International Conference. Munich, Germany. October 5–9, 2015. P. 234–241. https://doi.org/10.1007/978-3-319-24574-4_28
23. Badrinarayanan V., Kendall A., Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation // IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017. V. 39. № 12. P. 2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615
24. Paszke A., Chaurasia A., Kim S., Culurciello E. ENet: A deep neural network architecture for real-time semantic segmentation // arXiv preprint arXiv:1606.02147. 2016. https://doi.org/10.48550/arXiv.1606.02147
25. Jadon S. A survey of loss functions for semantic segmentation // 2020 IEEE conference on computational intelligence in bioinformatics and computational biology (CIBCB). Vina del Mar, Chile. October 27–29. 2020. P. 1–7. https://doi.org/10.1109/CIBCB48159.2020.9277638
26. Kingma D.P., Ba J. Adam: A method for stochastic optimization // arXiv preprint arXiv:1412.6980. 2014. https://doi.org/10.48550/arXiv.1412.6980