Opticheskii Zhurnal (scientific and technical journal)

ISSN: 1023-5086

A full-text English translation of the journal is published by Optica Publishing Group under the title “Journal of Optical Technology”


DOI: 10.17586/1023-5086-2019-86-10-30-38

UDC: 004.932.72

Using convolutional neural networks to automatically select small artificial space objects on optical images of a starry sky

For Russian citation (Opticheskii Zhurnal):

Цыцулин А.К., Бобровский А.И., Морозов А.В., Павлов В.А., Галеева М.А. Применение свёрточных нейронных сетей для автоматической селекции малоразмерных искусственных космических объектов на оптических изображениях звёздного неба // Оптический журнал. 2019. Т. 86. № 10. С. 30–38. http://doi.org/10.17586/1023-5086-2019-86-10-30-38


Tsytsulin A.K., Bobrovskiy A.I., Morozov A.V., Pavlov V.A., Galeeva M.A. Using convolutional neural networks to automatically select small artificial space objects on optical images of a starry sky [in Russian] // Opticheskii Zhurnal. 2019. V. 86. № 10. P. 30–38. http://doi.org/10.17586/1023-5086-2019-86-10-30-38

For citation (Journal of Optical Technology):

A. K. Tsytsulin, A. I. Bobrovskiĭ, A. V. Morozov, V. A. Pavlov, and M. A. Galeeva, "Using convolutional neural networks to automatically select small artificial space objects on optical images of a starry sky," J. Opt. Technol. 86(10), 627–633 (2019). https://doi.org/10.1364/JOT.86.000627

Abstract:

This article discusses the use of convolutional neural networks to automatically select moving objects against a moving starry background when their images exhibit motion blur. It presents the results of testing several networks that are substantially less structurally complex than the prototype. The accuracy and selection-rate estimates obtained for several of the networks studied here indicate that such networks are promising for detecting, classifying, and estimating the location of two types of objects in the instrument’s coordinate system when computing resources are severely limited.
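
The abstract names the task (detection, classification, and coordinate estimation for two object classes under tight resource limits) but not the network structure. As a purely illustrative sketch, not the authors' published architecture, the following assumes PyTorch and shows a compact, YOLO-style grid detector of the general kind the paper compares against its prototype (cf. refs. 6, 17, 18); the class name TinyStarDetector, the 128x128 input size, and the channel layout are assumptions made for illustration.

import torch
import torch.nn as nn

class TinyStarDetector(nn.Module):
    """Illustrative grid detector (hypothetical, not the authors' design):
    for each cell of a coarse grid it predicts an objectness score, two
    class probabilities (e.g. star vs. artificial object), and an (x, y)
    offset inside the cell."""

    def __init__(self, num_classes: int = 2):
        super().__init__()

        def block(cin: int, cout: int) -> nn.Sequential:
            # Conv -> BatchNorm -> LeakyReLU -> 2x downsampling
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1, bias=False),
                nn.BatchNorm2d(cout),
                nn.LeakyReLU(0.1),
                nn.MaxPool2d(2),
            )

        # Shallow backbone: far fewer layers and filters than full YOLO,
        # in the spirit of the "severely limited resources" setting.
        self.backbone = nn.Sequential(
            block(1, 16), block(16, 32), block(32, 64), block(64, 64)
        )
        # 1x1 detection head: 1 objectness + num_classes + 2 offsets per cell.
        self.head = nn.Conv2d(64, 1 + num_classes + 2, 1)

    def forward(self, x: torch.Tensor):
        # Slicing below assumes the default num_classes = 2 (5 channels).
        raw = self.head(self.backbone(x))        # (N, 5, S, S)
        obj = torch.sigmoid(raw[:, :1])          # detection confidence
        cls = torch.softmax(raw[:, 1:3], dim=1)  # two class probabilities
        xy = torch.sigmoid(raw[:, 3:5])          # offsets within each cell
        return obj, cls, xy

# Usage: a 128x128 grayscale frame yields an 8x8 grid of predictions.
frame = torch.randn(1, 1, 128, 128)
obj, cls, xy = TinyStarDetector()(frame)
print(obj.shape, cls.shape, xy.shape)  # (1,1,8,8), (1,2,8,8), (1,2,8,8)

A single 1x1-convolution head over a four-block backbone keeps this sketch at roughly sixty thousand parameters, the kind of structural simplification relative to a full YOLO prototype that the abstract alludes to.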

Keywords:

automatic selection, motion blur, artificial space object, convolutional neural network

OCIS codes: 100.4996

References:

1. A. A. Luk’yanitsa and A. G. Shishkin, Digital Video-Image Processing (ISS Press, Moscow, 2009).
2. G. V. Levko, A. I. Bobrovskiĭ, A. V. Morozov, and A. K. Tsytsulin, “Detecting objects on a starry background,” Vopr. Radioelektron. Ser. Tekh. Telev. (2), 29–38 (2016).
3. S. Khaĭkin, Neural Networks (Izd. Dom Vil’yams, Moscow, 2006).
4. S. Nikolenko, A. Kadurin, and E. Arkhangel’skaya, Deep Learning (Piter, St. Petersburg, 2018).
5. L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees (Wadsworth & Brooks/Cole Advanced Books & Software, Monterey, California, 1984).
6. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: unified, real-time object detection,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Las Vegas, Nevada, 2016), pp. 779–788.
7. A. S. Potapov, I. N. Zhdanov, O. V. Shcherbakov, N. Skorobogatko, H. Latapie, and E. Fenoglio, “Semantic image retrieval by uniting deep neural networks and cognitive architectures,” Lect. Notes Comput. Sci. 10999, 196–206 (2018).
8. L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen, “Deep learning for generic object detection: a survey,” arXiv:1809.02165 (2018).
9. P. Viola and M. J. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (Kauai, Hawaii, 2001), pp. 511–518.
10. A. K. Tsytsulin, A. V. Morozov, A. I. Bobrovskiĭ, Yu. V. Baskova, and V. A. Pavlov, “Classification of small images of space objects from motion attributes with the help of a trained algorithm,” Vopr. Radioelektron. Ser. Tekh. Telev. (3), 72–80 (2018).
11. P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1627–1645 (2010).
12. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (San Diego, California, 2005), pp. 886–893.
13. D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of the Seventh IEEE International Conference on Computer Vision (1999), pp. 1150–1157.
14. V. R. Lutsiv, “Convolutional deep-learning artificial neural networks,” J. Opt. Technol. 82(8), 499–508 (2015) [Opt. Zh. 82(8), 11–23 (2015)].
15. O. I. Garin, “Method of adjusting a multistage model for detecting visual objects in a convolutional neural network,” Neĭrokomp’. Razrab. Primen. (2), 50–56 (2018).
16. D. S. Chirov and A. N. Stetsyuk, “Using artificial neural networks in onboard systems of special robotic complexes,” Neĭrokomp’. Razrab. Primen. (3), 42–43 (2017).
17. J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Honolulu, Hawaii, 2017), pp. 6517–6525.
18. J. Redmon and A. Farhadi, “YOLOv3: an incremental improvement,” arXiv:1804.02767 (2018).
19. T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” arXiv:1708.02002 (2017).
20. M. Lin, Q. Chen, and S. Yan, “Network in network,” arXiv:1312.4400v3 (2013).
21. http://www.pjreddie.com/darknet/yolo/.
22. K. S. Markelov, “Model of the increase of the information content of digital images based on the method of super-resolution,” Inzh. Vestn. MGTU im. N. É. Baumana (3), 525–542 (2013).