ITMO
ISSN: 1023-5086

Scientific and technical journal

Opticheskii Zhurnal

A full-text English translation of the journal is published by Optica Publishing Group under the title “Journal of Optical Technology”


DOI: 10.17586/1023-5086-2018-85-08-61-66

UDC: 612.843.7, 612.825, 51-76, 004.932.1

Visualization of information encoded by neurons in the higher-level areas of the visual system

For Russian citation (Opticheskii Zhurnal):

Малахова Е.Ю. Визуализация информации, кодируемой нейронами высших областей зрительной системы // Оптический журнал. 2018. Т. 85. № 8. С. 61–66. http://doi.org/10.17586/1023-5086-2018-85-08-61-66

 

Malakhova E.Yu. Visualization of information encoded by neurons in the higher-level areas of the visual system [in Russian] // Opticheskii Zhurnal. 2018. V. 85. № 8. P. 61–66. http://doi.org/10.17586/1023-5086-2018-85-08-61-66

For citation (Journal of Optical Technology):

E. Malakhova, "Visualization of information encoded by neurons in the higher-level areas of the visual system," Journal of Optical Technology. 85(8), 494–498 (2018). https://doi.org/10.1364/JOT.85.000494

Abstract:

This paper introduces an application of artificial neural networks to visualizing the functions of neurons in the higher visual areas of the brain. First, a model that predicts the evoked neural response was implemented; it achieves a correlation coefficient of up to 0.82 for certain cortical columns. Then, an approach to explaining the representations encoded by neurons was proposed, based on generating images that maximize an activation in the model. A comparison of the visualization results with the experimental data suggests that the approach can be used to study the properties of the higher-level areas of the visual cortex.
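The core idea of the abstract — generating an input image by gradient ascent so that it maximizes the activation of a chosen model unit — can be sketched in miniature. Everything below (the single linear "neuron", the learning rate, the clipping range) is a hypothetical toy for illustration, not the paper's actual response model:

```python
import numpy as np

# Toy activation maximization: the "model neuron" here is a single
# hypothetical linear unit with weights w; its activation on an input
# image x (flattened) is w @ x. The real approach would use a trained
# predictive model of the neural response instead.
rng = np.random.default_rng(0)
w = rng.standard_normal(64)            # weights of the toy unit
x = rng.standard_normal(64) * 0.01     # start from a near-blank "image"

def activation(x):
    """Response of the toy unit to input x."""
    return float(w @ x)

lr = 0.1
history = [activation(x)]
for _ in range(100):
    x = x + lr * w                     # gradient of w @ x w.r.t. x is w
    x = np.clip(x, -1.0, 1.0)          # keep "pixel" values in a valid range
    history.append(activation(x))
# history is monotonically non-decreasing: the synthesized input drives
# the unit's activation toward its maximum over the clipped domain.
```

With a deep network in place of the linear unit, the gradient step would be computed by backpropagation through the model, and a regularizer is typically added so the generated image stays natural-looking.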

Keywords:

vision modeling, artificial neural network, image generation, neural network visualization, temporal cortex

Acknowledgements:

The research was supported by the Program of Fundamental Scientific Research of State Academies for 2013–2020 (GP-14, section 63).

OCIS codes: 330.4060, 200.4260, 330.4270, 100.3190
