ISSN: 1023-5086

Opticheskii Zhurnal (a scientific and technical journal)

A full-text English translation of the journal is published by Optica Publishing Group under the title “Journal of Optical Technology”

DOI: 10.17586/1023-5086-2018-85-08-67-76

UDC: 612.82, 159.931, 004.93'1, 004.932

Automatic classification of visual stimuli using an observer’s electroencephalogram

For Russian citation (Opticheskii Zhurnal):

Пономарев С.В., Малашин Р.О., Моисеенко Г.А. Автоматическая классификация зрительных стимулов по электроэнцефалограмме наблюдателя // Оптический журнал. 2018. Т. 85. № 8. С. 67–76. http://doi.org/10.17586/1023-5086-2018-85-08-67-76

Ponomarev S.V., Malashin R.O., Moiseenko G.A. Automatic classification of visual stimuli using an observer’s electroencephalogram [in Russian] // Opticheskii Zhurnal. 2018. V. 85. № 8. P. 67–76. http://doi.org/10.17586/1023-5086-2018-85-08-67-76

For citation (Journal of Optical Technology):

S. V. Ponomarev, R. O. Malashin, and G. A. Moiseenko, "Automatic classification of visual stimuli using an observer’s electroencephalogram," J. Opt. Technol. 85(8), 499–506 (2018). https://doi.org/10.1364/JOT.85.000499

Abstract:

This paper discusses the problem of automatically classifying visual stimuli (animate and inanimate objects filtered at high and low spatial frequencies) using an observer’s electroencephalogram. Classical machine-learning methods (a support-vector machine that employs, among other things, wavelet features) and convolutional and recurrent deep-learning neural networks were used for the classification. The recognition accuracy was analyzed as a function of the classification method, the electrode placement, the analyzed time interval, and the problem to be solved. The results show that the classification accuracy is 79% for sharp/smeared images, 61% for animate/inanimate objects, and 50% for classifying four classes of images.
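
As a concrete illustration of the classical pipeline the abstract describes (wavelet features fed to a support-vector machine, cf. refs. 12–15), the sketch below extracts discrete-wavelet-transform statistics per channel from single-trial epochs using PyWavelets (ref. 15) and trains an SVM with scikit-learn. This is a minimal sketch, not the authors’ code: the epoch dimensions, the db4 wavelet, the decomposition level, the choice of summary statistics, and the SVM settings are all assumptions.

```python
import numpy as np
import pywt  # PyWavelets, ref. 15
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_features(epoch, wavelet="db4", level=4):
    """Summary statistics of DWT coefficients for one EEG epoch.

    epoch: array of shape (n_channels, n_samples). The wavelet family
    and decomposition level are illustrative assumptions.
    """
    feats = []
    for channel in epoch:
        # wavedec returns [approximation, detail_level, ..., detail_1]
        for band in pywt.wavedec(channel, wavelet, level=level):
            feats.extend([band.mean(), band.std(),
                          np.abs(band).mean(), np.sum(band ** 2)])
    return np.asarray(feats)

# Hypothetical data: 200 single-trial epochs, 32 channels, 256 samples,
# with binary labels (e.g. sharp vs. smeared stimuli).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 32, 256))
labels = rng.integers(0, 2, size=200)

X = np.stack([wavelet_features(e) for e in epochs])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```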

Keywords:

recognition of single evoked potentials, support vector machine, neural networks, cognitive evoked potentials
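
The deep-learning branch mentioned in the abstract (convolutional and recurrent networks over single-trial epochs, cf. refs. 17–19) can likewise be sketched. The PyTorch model below is an assumed minimal layout, not the network from the paper; it borrows ELU activations (ref. 21) and batch normalization (ref. 22) from the reference list, but every layer count and size is hypothetical.

```python
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    """Minimal 1D CNN over raw EEG epochs of shape (batch, channels, time).

    A sketch only: the paper's actual architecture is not specified here,
    so all layer counts, kernel sizes, and widths are assumptions.
    """
    def __init__(self, n_channels=32, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),   # batch normalization, ref. 22
            nn.ELU(),             # ELU activation, ref. 21
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

# Hypothetical batch: 8 epochs, 32 channels, 256 samples each,
# classified into the four stimulus classes from the abstract.
model = EEGConvNet(n_channels=32, n_classes=4)
logits = model(torch.randn(8, 32, 256))
print(logits.shape)  # torch.Size([8, 4])
```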

Acknowledgements:

The research was supported by the Program of Fundamental Scientific Research of State Academies for 2013–2020 (GP-14, section 63).

OCIS codes: 100.4996, 330.4270, 330.5000

References:

1. C. Spampinato, S. Palazzo, I. Kavasidis, D. Giordano, M. Shah, and N. Souly, “Deep learning human mind for automated visual classification,” arXiv:1609.00344 (2017).
2. C. Chagas, Evoked Potentials in the Norm and Pathology (Mir, Moscow, 1975).
3. L. R. Zenkov and M. A. Ronkin, Functional Diagnosis of Neural Diseases: Handbook for Physicians (MED Press Inform, Moscow, 2013).
4. V. V. Gnezditskiĭ, Evoked Potentials of the Brain in Clinical Practice (MED Press Inform, Moscow, 2003).
5. V. M. Bondarko, M. V. Danilova, N. N. Krasil’nikov, L. I. Leushina, A. A. Nevskaya, and Yu. E. Shelepin, Three-Dimensional Vision (Nauka, St. Petersburg, 1999).
6. Yu. E. Shelepin, Introduction to Neuroiconics: A Monograph (Troitskiĭ Most, St. Petersburg, 2017).
7. B. R. Sheth and R. Young, “Two visual pathways in primates based on sampling of space: exploitation and exploration of visual information,” Front. Integr. Neurosci. 10, 37 (2016).
8. G. A. Moiseenko, E. A. Vershinina, S. V. Pronin, V. N. Chikhman, E. S. Mikhaĭlova, and Yu. E. Shelepin, “Latent periods of the components of evoked potentials in problems of classifying images subjected to wavelet filtering,” Fiziol. Chel. 42(6), 37–48 (2016).
9. G. A. Moiseenko, Yu. E. Shelepin, A. K. Kharauzov, S. V. Pronin, V. N. Chikhman, and O. A. Vakhrameeva, “Classification and recognition of images of animate and inanimate objects,” J. Opt. Technol. 82(10), 685–693 (2015) [Opt. Zh. 82(10), 53–64 (2015)].
10. M. Yazici and M. Ulutas, “Classification of EEG signals using time domain features,” in 23rd Signal Processing and Communications Applications Conference (2015), pp. 2358–2361.
11. N. R. Anderson and K. J. Wisneski, “Automated analysis and trending of the raw EEG signal,” Am. J. Electroneurodiagn. Technol. 48, 166–191 (2008).
12. D. Gajić, Z. Djurovic, S. Di Gennaro, and F. Gustafsson, “Classification of EEG signals for detection of epileptic seizures based on wavelets and statistical pattern recognition,” Biomed. Eng. 26, 1450021 (2014).
13. R. E. J. Yohanes, W. Ser, and G.-b. Huang, “Discrete wavelet transform coefficients for emotion recognition from EEG signals,” in Annual International Conference of the IEEE Engineering in Medicine and Biology Society (2012), pp. 2251–2254.
14. MATLAB Wavelet Toolbox, https://www.mathworks.com/products/wavelet.html.
15. G. Lee, F. Wasilewski, R. Gommers, K. Wohlfahrt, A. O’Leary, and H. Nahrstaedt, “PyWavelets—Wavelet Transforms in Python,” 2006, https://github.com/PyWavelets/pywt.
16. S. J. Luck and E. S. Kappenman, “ERP components and selective attention,” in The Oxford Handbook of Event-Related Potential Components (Oxford University Press, Oxford, 2013), chap. 11, pp. 295–327.
17. S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput. 9(8), 1735–1780 (1997).
18. P. Bashivan, I. Rish, M. Yeasin, and N. Codella, “Learning representation from EEG with deep recurrent-convolutional neural networks,” arXiv:1511.06448 (2016).
19. Z. Tang and S. Sun, “Single-trial EEG classification of motor imagery using deep convolutional neural networks,” Optik (Munich, Ger.) 130, 11–18 (2017).
20. R. O. Malashin, “Extraction of object hierarchy data from trained deep-learning neural networks via analysis of the confusion matrix,” J. Opt. Technol. 83(10), 599–603 (2016) [Opt. Zh. 83(10), 24–30 (2016)].
21. D.-A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” in International Conference on Learning Representations (2016), pp. 1–14.
22. S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167 (2015).
23. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, Cambridge, Mass., 2016).