
ISSN: 1023-5086


Opticheskii Zhurnal (scientific and technical journal)

A full-text English translation of the journal is published by Optica Publishing Group under the title “Journal of Optical Technology”


DOI: 10.17586/1023-5086-2022-89-08-76-85

UDC: 612.84, 004.931, 004.932

Evaluation of level of foreign language proficiency based on eye movement data

For Russian citation (Opticheskii Zhurnal):

Демарева В.А., Голубинская А.В., Еделева Ю.А., Голубин Р.В. Оценка уровня знания иностранного языка на основе данных о движениях глаз // Оптический журнал. 2022. Т. 89. № 8. С. 76–85. http://doi.org/10.17586/1023-5086-2022-89-08-76-85

 

Demareva V.A., Golubinskaya A.V., Edeleva Yu.A., Golubin R.V. Evaluation of level of foreign language proficiency based on eye movement data [in Russian] // Opticheskii Zhurnal. 2022. V. 89. № 8. P. 76–85. http://doi.org/10.17586/1023-5086-2022-89-08-76-85

For citation (Journal of Optical Technology):

V. A. Demareva, A. V. Golubinskaya, Yu. A. Edeleva, and R. V. Golubin, "Evaluation of level of foreign language proficiency based on eye movement data," Journal of Optical Technology. 89(8), 484-489 (2022). https://doi.org/10.1364/JOT.89.000484

Abstract:

Subject of study. Foreign language proficiency was evaluated using optical recording of eye movements. The representation of unconscious cognitive processes in oculographic data during reading, and the correlation of eye movement parameters with the complexity of the text as perceived by the reader, were analyzed.

Aim of study. The study aimed to determine the correlation between the eye movement features of individuals reading texts in native and foreign languages, taking into account the peculiarities of the language and the level of language proficiency.

Method. The study comprised two stages. The total sample consisted of 63 Russian-speaking students aged 19 to 27 years, for whom English was a foreign language. The subjects' proficiency in Russian and English was estimated using the C-test method. The experimental task consisted of reading English texts presented on slides and answering comprehension questions, following a nine-point calibration procedure. Eye movements were recorded binocularly using an SMI-High Speed Tracker 1250 setup at a sampling rate of 500 Hz. The first stage investigated the eye movement features of native Russian speakers who are highly proficient in English while they read texts in both languages. The second stage investigated the eye movement features of individuals reading texts in a foreign language at different levels of proficiency.

Main results. The eye movement parameters varied depending on whether the subjects were reading texts in their native language (Russian) or the foreign language (English), even at a high level of foreign language proficiency. The eye movement patterns also changed when subjects read texts in the foreign language (English) at different levels of proficiency.

Practical significance. Based on the obtained data, the complexity of a text for a given subject can be evaluated. Evaluating text complexity from eye movement features enables algorithms to be designed for normalizing different texts in terms of complexity. The results of the study can be used in designing automated systems for evaluating foreign language proficiency.
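The paper does not specify how its C-test items were constructed. As an illustration only, the classic C-test scheme (the second half of every second word is deleted, and proficiency is scored by how many blanks the subject restores correctly) can be sketched as follows; the function name and parameters are hypothetical:

```python
def make_c_test(sentence, start_after=1):
    """Delete the second half of every second word (classic C-test scheme).

    Returns the gapped text and the list of deleted fragments (the answer key).
    """
    words = sentence.split()
    gapped, key = [], []
    for i, w in enumerate(words):
        # leave the first `start_after` words intact, then gap every second word
        if i >= start_after and (i - start_after) % 2 == 0 and len(w) > 1:
            keep = (len(w) + 1) // 2          # keep the first half (rounded up)
            gapped.append(w[:keep] + "_" * (len(w) - keep))
            key.append(w[keep:])
        else:
            gapped.append(w)
    return " ".join(gapped), key

gapped, key = make_c_test("The quick brown fox jumps over the lazy dog")
```

Proficiency would then be scored as the fraction of deleted fragments restored correctly across several such texts.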
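The abstract does not list the specific eye movement parameters used. As a minimal sketch under that caveat, two features commonly associated with text difficulty in the reading literature (mean fixation duration and regression rate) could be computed from a fixation trace like this; all names and the toy trace are hypothetical:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Fixation:
    x: float            # horizontal gaze position, pixels
    duration_ms: float  # fixation duration, milliseconds

def reading_features(fixations):
    """Summary eye-movement features often linked to perceived text difficulty."""
    durations = [f.duration_ms for f in fixations]
    # a regression is a saccade that moves backwards through the text
    regressions = sum(1 for a, b in zip(fixations, fixations[1:]) if b.x < a.x)
    saccades = max(len(fixations) - 1, 1)
    return {
        "mean_fixation_ms": mean(durations),
        "regression_rate": regressions / saccades,
    }

# toy trace: longer fixations and more regressions suggest a harder text
trace = [Fixation(100, 210), Fixation(180, 250), Fixation(150, 300), Fixation(260, 230)]
feats = reading_features(trace)
```

Features of this kind, aggregated per text, are the sort of input an automated proficiency or text-complexity classifier could be trained on.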

Keywords:

pattern recognition, optical system, texts, reading, text recognition, native language, foreign language, eye movements, reading models, understanding level assessment

Acknowledgements:

This research was supported by a grant from the President of the Russian Federation for state support of young Russian scientists (competition МК-2021), grant no. МК-6208.2021.2.

OCIS codes: 170.5380, 150.1135
