Application of artificial neural networks for the restoration of objects in radar images

Authors
*, **Moscow Aviation Institute (National Research University), 4, Volokolamskoe shosse, Moscow, A-80, GSP-3, 125993, Russia
*e-mail: m-m99@yandex.ru
**e-mail: gvrk61@mail.ru
Abstract
The paper considers the possibility of using artificial neural networks (ANN) to suppress noise in radar images. The main task is to filter out noise and restore image clarity with a neural network model. For this purpose, a dataset has been developed and generated for training the network so that it can be applied effectively under real conditions.
The paper uses an autoencoder model as the ANN; such a model creates compact representations of images in a hidden layer. This allows the network to identify the main image features and reduce the data dimensionality, which, as studies have shown, is very effective in noise filtering tasks.
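To illustrate this idea, below is a minimal sketch of a convolutional denoising autoencoder in Python with Keras. The layer sizes, the 64x64 single-channel input, and the MSE loss are assumptions made for illustration; they do not reproduce the specific architecture used in the paper.

```python
# Minimal sketch of a convolutional denoising autoencoder (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(img_size=64):
    inputs = layers.Input(shape=(img_size, img_size, 1))
    # Encoder: compress the noisy image into a compact latent representation
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D(2)(x)
    # Decoder: reconstruct the clean image from the latent representation
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(encoded)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Training pairs: noisy images as input, clean images as the target.
# autoencoder = build_autoencoder()
# autoencoder.fit(noisy_train, clean_train, epochs=50, batch_size=32)
```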
It is assumed that the ANN under consideration will be used to improve the visual perception of large objects in radar images. In many practical applications, such objects can be represented as a set of interconnected simple geometric shapes, such as rectangles, circles, and triangles. Therefore, simple shapes of these types are used as test objects when analyzing and comparing the various filtering algorithms.
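For instance, test images of this kind could be generated as sketched below. This is a hypothetical example: the 64x64 image size, the shape placement, and the additive Gaussian noise model are assumptions, since the paper's exact dataset generator and noise model are not restated here.

```python
# Sketch of generating one training pair: a clean shape image and a noisy copy.
import numpy as np
from PIL import Image, ImageDraw

def make_pair(img_size=64, shape="circle", noise_sigma=0.3, rng=None):
    rng = rng or np.random.default_rng()
    img = Image.new("L", (img_size, img_size), 0)
    draw = ImageDraw.Draw(img)
    if shape == "circle":
        draw.ellipse([16, 16, 48, 48], fill=255)
    elif shape == "square":
        draw.rectangle([16, 16, 48, 48], fill=255)
    elif shape == "triangle":
        draw.polygon([(32, 12), (12, 52), (52, 52)], fill=255)
    clean = np.asarray(img, dtype=np.float32) / 255.0
    # Additive Gaussian noise is an assumption; the paper's noise model may differ.
    noisy = np.clip(clean + rng.normal(0.0, noise_sigma, clean.shape), 0.0, 1.0)
    return noisy[..., None], clean[..., None]  # add a channel axis for the ANN
```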
The paper compares the efficiency of the ANN with classical filtering algorithms: the median, averaging, and Gaussian filters. Two metrics were used as performance criteria when comparing the image restoration algorithms: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). The principles of calculating these metrics for each pair of images (the original and the restored) are described.
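As a concrete example of computing this metric pair, the sketch below uses the scikit-image implementations of SSIM and PSNR as a stand-in for the formulas described in the paper; data_range=1.0 assumes the images are scaled to [0, 1].

```python
# Quality metrics for an (original, restored) image pair, using scikit-image.
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(original, restored):
    ssim = structural_similarity(original, restored, data_range=1.0)
    psnr = peak_signal_noise_ratio(original, restored, data_range=1.0)
    return ssim, psnr
```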
The method of creating the image dataset used for training and testing the ANN is described. Examples of noise removal are given for useful objects in the form of simple geometric shapes: a square, a circle, a triangle, and two arcs. The restored images were obtained in two ways: with the trained ANN and with traditional filters. The results of calculating the filtering efficiency metrics for various objects in the radar image are presented; the calculations cover ANN filtering and the three classical filters (median, averaging, and Gaussian).
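For reference, the three classical filters can be applied with standard SciPy routines, as sketched below; the 3x3 window size and sigma = 1.0 are illustrative assumptions rather than the parameters used in the paper.

```python
# The three classical reference filters applied to a noisy 2-D image.
from scipy.ndimage import median_filter, uniform_filter, gaussian_filter

def classical_filters(noisy):
    return {
        "median":    median_filter(noisy, size=3),
        "averaging": uniform_filter(noisy, size=3),
        "gaussian":  gaussian_filter(noisy, sigma=1.0),
    }
```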
The calculation results showed that filtering efficiency is significantly higher with the ANN: the SSIM metric for the ANN exceeds the corresponding values for the classical filters by roughly 7 to 20 times, and the PSNR metric by roughly 1.1 to 10 times. The resulting gain depends on the shape of the object being restored and on the noise level.
Keywords:
deep learning, neural networks, radar image, noise filtering