УДК: 004.932
Modification of the nearest-neighbor-based image recognition method using a local metric
Potapov A.S. Modification of the nearest-neighbor-based image recognition method using a local metric [in Russian] // Opticheskii Zhurnal. 2016. V. 83. № 12. P. 48–53.
A. S. Potapov, "Modification of the nearest-neighbor-based image recognition method using a local metric," Journal of Optical Technology. 83(12), 749-752 (2016). https://doi.org/10.1364/JOT.83.000749
This study analyzes why the nearest neighbor method is substantially less accurate than deep learning networks. The primary cause is that the Euclidean distance to the nearest neighbor is an incorrect estimate of the distance from the analyzed pattern to the region occupied by a class. Overcoming this problem requires constructing a local estimate of the distance metric, so the proposed approach combines the concepts of “tangential distance” and Mahalanobis distance, and a modified nearest neighbor method is built on this basis. Validation experiments on the MNIST data show that the new method reduces the recognition error rate from 3.8% to 0.8%, which is lower than the 1.1% error rate of the nearest neighbor method modified using the “tangential distance” concept alone, whose calculation requires a priori information on allowable pattern transformations.
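The sketch below illustrates the general idea of replacing the global Euclidean metric with a locally estimated Mahalanobis metric in a nearest-neighbor classifier. It is not the paper's exact construction (which also incorporates the tangential-distance concept); the function names, the candidate pre-selection step, the neighborhood size, and the covariance regularization are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact algorithm): nearest-neighbor
# classification with a locally estimated Mahalanobis metric.  The covariance
# of each candidate's nearest same-class neighbors defines a local metric,
# so distance is measured along the local structure of the class region
# rather than isotropically, as plain Euclidean distance would.
import numpy as np


def local_covariance(X_class, center, k=10, reg=1e-2):
    """Covariance of the k nearest same-class points around `center`,
    regularized toward the identity so it stays invertible."""
    d2 = np.sum((X_class - center) ** 2, axis=1)
    nearest = X_class[np.argsort(d2)[:k]]
    cov = np.cov(nearest, rowvar=False)
    return cov + reg * np.trace(cov) / center.size * np.eye(center.size)


def classify(x, X_train, y_train, n_candidates=20, k=10):
    """Pre-select candidates by Euclidean distance, then re-rank them with a
    Mahalanobis distance built from each candidate's local covariance."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    candidates = np.argsort(d2)[:n_candidates]
    best_label, best_dist = None, np.inf
    for i in candidates:
        same_class = X_train[y_train == y_train[i]]
        cov = local_covariance(same_class, X_train[i], k=k)
        diff = x - X_train[i]
        dist = diff @ np.linalg.solve(cov, diff)  # squared Mahalanobis distance
        if dist < best_dist:
            best_label, best_dist = y_train[i], dist
    return best_label


# Toy usage on synthetic 2-D data (MNIST images would be loaded separately
# and flattened to vectors).
rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], [3.0, 0.3], size=(200, 2))  # elongated class 0
X1 = rng.normal([0, 2], [3.0, 0.3], size=(200, 2))  # elongated class 1
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 200 + [1] * 200)
print(classify(np.array([2.5, 0.9]), X_train, y_train))
```

Under such a local metric, displacements along directions in which the class varies strongly are penalized less than displacements orthogonal to them, which is what a plain Euclidean nearest-neighbor rule fails to capture.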
Keywords: image recognition, nearest neighbor method, MNIST
Acknowledgements: The research was supported by the Ministry of Education and Science of the Russian Federation (Minobrnauka), Russian Federation (074-U01).
OCIS codes: 150.1135
References:
1. J. Schmidhuber, “Deep learning in neural networks: an overview,” Neural Netw. 61, 85–117 (2015).
2. V. P. Lutsiv, “Convolutional deep-learning artificial neural networks,” J. Opt. Technol. 82(8), 499–508 (2015) [Opt. Zh. 82(8), 11–23 (2015)].
3. Y. LeCun, “The unreasonable effectiveness of deep learning,” https://www.cs.princeton.edu/events/event/unreasonable-effectiveness-deep-learning (accessed 18 May 2016).
4. Y. LeCun, C. Cortes, and C. J. Burges, “The MNIST database of handwritten digits,” http://yann.lecun.com/exdb/mnist/ (accessed 18 May 2016).
5. Y. Bengio, “Deep learning of representations for unsupervised and transfer learning,” J. Mach. Learn. Res. 27, 17–36 (2012).
6. R. O. Malashin and A. B. Kadykov, “Investigation of the generalizing capabilities of convolutional neural networks in forming rotation-invariant attributes,” J. Opt. Technol. 82(8), 509–515 (2015) [Opt. Zh. 82(8), 24–32 (2015)].
7. P. Simard, Y. LeCun, and J. S. Denker, “Efficient pattern recognition using a new transformation distance,” Adv. Neural Inf. Process. Syst. 5, 50–58 (1992).
8. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86(11), 2278–2324 (1998).
9. L. Yang, R. Jin, R. Sukthankar, and Y. Liu, “An efficient algorithm for local distance metric learning,” in Proceedings of the 21st National Conference on Artificial Intelligence (Boston, Massachusetts, 2006), Vol. 1, pp. 543–548.
10. J. Wang, A. Woznica, and A. Kalousis, “Parametric local metric learning for nearest neighbor classification,” arXiv:1209.3056 (2012).