
ISSN: 1023-5086



Opticheskii Zhurnal, a scientific and technical journal

A full-text English translation of the journal is published by Optica Publishing Group under the title “Journal of Optical Technology”


DOI: 10.17586/1023-5086-2022-89-08-64-75

UDC: 612.84, 612.843.7, 004.93, 621.397.3

Image analysis and error detection in source software code

For Russian citation (Opticheskii Zhurnal):

Скуратова К.А., Шелепин Е.Ю., Малашин Р.О., Шелепин Ю.Е. Анализ изображений и поиск ошибок в текстах исходного программного кода // Оптический журнал. 2022. Т. 89. № 8. С. 64–75.


Skuratova K.A., Shelepin E.Yu., Malashin R.O., Shelepin Yu.E. Image analysis and error detection in source software code [in Russian] // Opticheskii Zhurnal. 2022. V. 89. No. 8. P. 64–75.

For citation (Journal of Optical Technology):

K. A. Skuratova, E. Yu. Shelepin, R. O. Malashin, and Yu. E. Shelepin, "Image analysis and error detection in source software code," Journal of Optical Technology 89(8), 476–483 (2022).


Subject of study. The cognitive mechanisms by which a human analyzes images of source software code and detects errors in it.

Aim. To determine how the professional skill of visually detecting errors in Python source code influences the control of oculomotor processes, for subsequent modeling of the identified principles in artificial intelligence systems.

Method. Eye movements during error detection in images of software code were studied with eye-tracking technology using the Russian software and hardware suite "Neurobureau," which enables a complete set of psychophysiological studies. The subjects completed two tasks: explaining the software code and finding an error in it. Each task comprised 10 stimuli containing Python code with syntax highlighting, normalized for length and complexity. Task completion time was not limited. Eight programmers with professional experience ranging from 1 to 13 years participated in the study.

Main results. With increasing professional skill in image analysis, humans develop eye-movement strategies that allow tasks to be performed more effectively with minimal effort. These strategies entail dividing the code as a whole into individual units relevant to the analysis. Experienced programmers exhibited fewer fixations, shorter scanpaths, and larger saccade amplitudes than novice programmers. Notably, saccade speed, in particular that of the broad searching eye movements made during the code-explanation task, increases with professional experience. Visual error detection is determined mainly by recognition of details in the text that can be found only on the basis of the semantics and grammar of the programming language.
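The oculomotor metrics compared across experience levels (fixation count, scanpath length, saccade amplitude) can all be derived from an ordered fixation sequence. A minimal sketch, assuming fixations are given as pixel coordinates (the fixation data below are invented for illustration, not the study's recordings):

```python
import math

def scanpath_metrics(fixations):
    """Compute basic eye-movement metrics from an ordered fixation sequence.

    fixations: list of (x, y) gaze coordinates in pixels, in temporal order.
    Each transition between consecutive fixations is treated as one saccade.
    """
    amplitudes = [
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(fixations, fixations[1:])
    ]
    return {
        "fixation_count": len(fixations),
        "scanpath_length": sum(amplitudes),
        "mean_saccade_amplitude": (
            sum(amplitudes) / len(amplitudes) if amplitudes else 0.0
        ),
    }

# Hypothetical fixation sequences over the same code image: a novice steps
# through two lines fixation by fixation; an expert makes a few large jumps.
novice = [(100 + 30 * i, 40) for i in range(8)] + [(100 + 30 * i, 80) for i in range(8)]
expert = [(100, 40), (280, 60), (150, 120)]

m_novice = scanpath_metrics(novice)
m_expert = scanpath_metrics(expert)
# In this toy example the expert shows fewer fixations, a shorter total
# scanpath, and a larger mean saccade amplitude -- the pattern reported above.
```
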
We established that reading software code differs from both viewing visual scenes and reading text written in a natural language. As in any other kind of professional activity, experience minimizes the effort this work requires. Eye-movement control during code reading is guided by knowledge of the programming language, understanding of context, and knowledge of the language's semantics and grammar, as well as by the low-spatial-frequency description of the images of lines and words, the same description on which the skill of reading and recognizing the overall configuration of lines and words in a natural language is built.

Practical significance. The conclusions of this study can be used to describe existing techniques and to create new neuromorphic algorithms, based on strategies developed during human evolution, for generating and correcting software code.
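The role of a low-spatial-frequency description can be illustrated with a toy sketch: low-pass filtering a one-dimensional luminance profile of a text line merges individual letters into word-level blobs while keeping word boundaries distinct. The moving-average filter and the synthetic profile below are illustrative assumptions, not the filtering actually applied in the study:

```python
def box_blur(signal, radius=1):
    """Simple low-pass (moving-average) filter: preserves coarse structure
    and suppresses fine detail, a stand-in for low-spatial-frequency filtering."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

# Toy 1D luminance profile of one line of text: 1 = ink, 0 = background.
# Letters within a word are separated by 1-pixel gaps; words by a 5-pixel gap.
word = [1, 0, 1, 0, 1]
line = word + [0] * 5 + word

blurred = box_blur(line, radius=1)
# After blurring, the 1-pixel gaps inside a word are filled in (the word reads
# as one blob), while the wide inter-word gap remains at zero.
```
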


Keywords: pattern recognition, visual search, visual skill, eye movements, images, program code


This research was supported by the Ministry of Science and Higher Education of the Russian Federation under agreement No. 075-15-2020-921 dated November 13, 2020.

OCIS codes: 330.2210, 330.5020, 330.4270

