ITMO

ISSN: 1023-5086

Opticheskii Zhurnal, a scientific and technical journal

A full-text English translation of the journal is published by Optica Publishing Group under the title “Journal of Optical Technology”


DOI: 10.17586/1023-5086-2020-87-10-59-68

УДК: 004.932.4

Image enhancement by deep neural networks using high-level information

For Russian citation (Opticheskii Zhurnal):
Титаренко М.А., Малашин Р.О. Метод улучшения изображений с помощью глубоких нейронных сетей при использовании высокоуровневой информации // Оптический журнал. 2020. Т. 87. № 10. С. 59–68. http://doi.org/10.17586/1023-5086-2020-87-10-59-68

Titarenko M. A. and Malashin R. O. Image enhancement by deep neural networks using high-level information [in Russian] // Opticheskii Zhurnal. 2020. V. 87. № 10. P. 59–68. http://doi.org/10.17586/1023-5086-2020-87-10-59-68
For citation (Journal of Optical Technology):
M. A. Titarenko and R. O. Malashin, "Image enhancement by deep neural networks using high-level information," Journal of Optical Technology. 87(10), 604-610 (2020). https://doi.org/10.1364/JOT.87.000604
Abstract:

A method is investigated for training neural networks for image enhancement, based on information from the features of neural networks trained for image classification. Experiments are performed to identify the optimal loss function, i.e., the one that achieves maximum classification accuracy on images corrupted by noise or blur. The best configuration of training parameters is shown to depend on the type of distortion and on the target task. This is the first study, to our knowledge, to compare the influence of such a loss function on restoration and recognition accuracy using a single classifier trained under the influence of distorting factors. We show that it is reasonable to correct some simple distortions “outside” the classifier, while others are better corrected “inside.”
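The core idea of the abstract, a loss computed on the features of a classification network rather than (or in addition to) raw pixels, can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the tiny untrained CNN stands in for a pretrained classifier such as VGG [4], and the weighting factor `alpha` is a hypothetical parameter introduced here for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in feature extractor. In the setting described by the abstract this
# would be a classifier pretrained on ImageNet (e.g. VGG); a small untrained
# CNN keeps the sketch self-contained.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
)
for p in feature_extractor.parameters():
    p.requires_grad_(False)  # the classifier's weights stay frozen

def feature_loss(restored: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """MSE between classifier features of the restored and clean images."""
    return F.mse_loss(feature_extractor(restored), feature_extractor(target))

def total_loss(restored: torch.Tensor, target: torch.Tensor,
               alpha: float = 0.5) -> torch.Tensor:
    """Pixel-space MSE plus a feature-based (perceptual) term,
    weighted by the hypothetical factor alpha."""
    return F.mse_loss(restored, target) + alpha * feature_loss(restored, target)
```

During training of the enhancement network, `total_loss(enhancer(noisy), clean)` would be backpropagated through the frozen classifier into the enhancer only, which is what lets high-level (classification) information shape the low-level restoration.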

Keywords:

image enhancement, deep neural networks, loss function based on network features

OCIS codes: 150.1135, 100.2980

References:

1. J. Bruna, P. Sprechmann, and Y. LeCun, “Super-resolution with deep convolutional sufficient statistics,” in International Conference on Learning Representations (2016), p. 17.
2. A. Lucas, S. Lopez-Tapia, R. Molina, and A. K. Katsaggelos, “Generative adversarial networks and perceptual losses for video super-resolution,” IEEE Trans. Image Process. 28(7), 3312–3327 (2019).
3. J. Johnson, A. Alahi, and F. Li, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision (2016), p. 18.
4. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (2015), p. 14.
5. R. Szeliski, Computer Vision: Algorithms and Applications (Springer-Verlag, New York, 2011), p. 979.
6. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” arXiv:1409.0575 (2015).
7. D. Liu, B. Wen, J. Jiao, Z. Wang, and T. S. Huang, “Connecting image denoising and high-level vision tasks via deep learning,” arXiv:1809.01826 (2018).
8. M. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” arXiv:1311.2901 (2013).
9. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (2015), pp. 234–241.
10. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), p. 9.
11. Machine learning framework PyTorch, https://pytorch.org.
12. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2017).