Open Access. Published by De Gruyter, June 21, 2013 (CC BY-NC-ND 3.0 license).

Aircraft Image Recognition System Using Phase Correlation Method

  • K. Roopa and T. V. Rama Murthy

Abstract

This article describes an aircraft image recognition system implemented using the phase correlation technique in the MATLAB environment. The phase correlation is computed using the normalized cross-power spectrum between each database image and the template test image. The main objective of this article is to develop methods for static analysis of aircraft images. An unknown fighter aircraft is recognized by comparing its static image with those in a database of aircraft images. This work is a research initiative that uses image processing techniques to detect three-dimensional (3D) aircraft objects based on their two-dimensional (2D) images, providing feedback information for strategic purposes. For the set of database and test images considered, the phase correlation technique is found to give a better recognition result than the invariant moments. The phase correlation method is also used in other areas, such as image registration. The aircraft images used include those from AeroIndia 2011, held at Bangalore, India.

1 Introduction

Automatic recognition of fighter aircraft is vital in defense applications for making strategic decisions. The features used for recognition should be independent of the object's position and orientation. In this study, the phase correlation method is described for aircraft identification, and the results are compared with the moment invariant technique. These two techniques remain relevant considering their varied applicability.

Since Dudani et al.'s work [4], the aircraft image identification problem has been pursued by researchers using various methods for industrial and military applications. Moment invariants were derived by Hu [13] and were used by others [2, 6, 19, 23], either in their direct form or in modified forms for different applications. Dudani et al. [4] described automatic recognition of aircraft types by extracting invariant moments from binary television images. They used six different aircraft types, viz., F-4 Phantom, Mirage IIIC, MIG21, F-105, F-104, and B-57, and 132 test images consisting of 22 images of each of the six aircraft types. The complete training set was based on >3000 images of the six different types of aircraft. They used two distinct decision rules in classification, viz., a Bayes decision rule and a distance-weighted k-nearest-neighbor rule. Rihaczek and Hershkowitz [25] described an identification system for large aircraft that uses features related to length and wing span. An aircraft identification system using a back-propagation neural network was presented by Somaie et al. [30]. They used six different types of aircraft models and made use of the Hotelling transform to align each aircraft pattern with its principal axis. Kim et al. [17] used multilayer neural networks for identification and orientation estimation of aircraft. Molina et al. [20] described identification of aircraft in the airport movement area, proposing a video identification algorithm based on tail number recognition. Hsieh et al. [9] described a hierarchical method for recognizing aircraft types. Saghafi et al. [28] described aircraft type recognition using an area-based feature extraction method and a multilayer perceptron neural network for classification. Their work addressed classification of aircraft by type from optical or infrared images, assuming a single object present in the image. They used three-dimensional (3D) computer models of five different aircraft, including the Bell 206, C-130 Hercules, AH-1 Cobra, Su-25, and Mustang, and for each model, 1080 images were generated for training purposes. Zhu and Ma [32] gave a review of 3D aircraft image recognition methods.

Zitova and Flusser [33] gave an extensive survey of various image registration methods. Yan and Liu [31] described a phase correlation-based feature matching. Nakamori et al. [22] described a 3D object matching using the phase correlation method. They proposed a method of calculating the degree of similarity between 3D objects considering translation and rotation along the x, y, and z axes using the phase correlation method. Ito et al. [14] described a fingerprint recognition algorithm combining phase-based image matching and feature-based matching.

The proposed problem is similar to content-based image retrieval (CBIR). In a CBIR system, a feature vector is extracted from each image in the training set, and the set of all feature vectors is stored in a database. At query time, a feature vector is extracted from the query image and matched against those stored in the database, as illustrated in the sketch below. Examples of features used in such techniques are color, texture, and shape.
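As an illustration of this query step (a minimal sketch, not code from the article), the following MATLAB function performs nearest-neighbor matching of feature vectors. Here, extract_features is a hypothetical feature extractor, and db_features is assumed to hold one feature vector per row; the implicit expansion used in the distance computation requires MATLAB R2016b or later.

    function best = cbir_query(query_img, db_features)
        % Extract the feature vector of the query image (extract_features
        % is a placeholder for any extractor, e.g., color/texture/shape).
        q = extract_features(query_img);
        % Euclidean distance from the query vector to every stored vector.
        d = sqrt(sum((db_features - q).^2, 2));
        % Index of the nearest database image.
        [~, best] = min(d);
    end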

Datta et al. [3] described CBIR as any technology that in principle helps to organize digital picture archives by their visual content; in that article, the authors surveyed various contributions to image retrieval made during the decade preceding their survey. Jain and Singh [15] provided a survey on CBIR systems using clustering techniques for large data sets. Rui et al. [27] provided a survey in the area of CBIR. Singh and Hemachandran [29] proposed a CBIR method that combines color and texture features. Borde and Bhosle [1] described a CBIR technique using clustering features extracted from images based on row mean clustering, column mean clustering, row mean DCT clustering, column mean DCT clustering, row mean wavelet clustering, and column mean wavelet clustering. A similarity measure was then used to compare the cluster distance of the query image with the cluster distances of the images in the database. Katare et al. [16] described a CBIR system for images containing multiple objects using a combination of shape and color features. The system retrieves those images that contain the query object regardless of the other objects. Rao et al. [24] described a CBIR system using exact Legendre moments to form the feature vector and a support vector machine classifier.

The organization of the article is as follows: Section 2 gives a description of the invariant moments. Section 3 gives a description of phase correlation method. Section 4 reports the experiments and results. Section 5 gives conclusions of our study.

2 Invariant Moments [8]

The invariant moments do not change with respect to object translation, scale change, mirroring (to within a minus sign), and rotation. Flusser [5] gave a survey of object recognition methods based on image moments. Gangadhar et al. [7] described 2D geometric moment invariants for detection and comparison of objects. Mukundan [21] described a fast computation of geometric moments and invariants using Schlick's approximation.

In the invariant moment method, at query time, invariant moments are extracted from the query image and matched against the same invariant moments extracted from the database images and stored in memory. Recognition is then based on the nearest match. The block diagram of the invariant moment method is given in Figure 1.

Figure 1 Block Diagram of the Invariant Moment Method.

The 2D moment, m_{pq}, of order (p + q) of a digital image f(x, y) of size M × N is defined as

    m_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} x^p y^q f(x, y).

The corresponding central moment of order (p + q), i.e., \mu_{pq}, is defined as

    \mu_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} (x - \bar{x})^p (y - \bar{y})^q f(x, y),

where p = 0, 1, 2, …, q = 0, 1, 2, …, are integers and

    \bar{x} = m_{10}/m_{00}, \qquad \bar{y} = m_{01}/m_{00}.

The normalized central moments, \eta_{pq}, are given by

    \eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}},

where

    \gamma = \frac{p + q}{2} + 1

for p + q = 2, 3, ….

A set of seven invariant moments can be derived from the second- and third-order normalized central moments. The first invariant moment is given by

    \phi_1 = \eta_{20} + \eta_{02}.
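As a concrete illustration (a minimal sketch, not the authors' code), \phi_1 can be computed in MATLAB directly from the definitions above:

    function phi1 = first_invariant_moment(f)
        % f: grayscale image as a double M-by-N matrix.
        [M, N] = size(f);
        [x, y] = ndgrid(0:M-1, 0:N-1);           % pixel coordinate grids
        m00  = sum(f(:));                        % zeroth-order moment
        xbar = sum(x(:) .* f(:)) / m00;          % centroid, x direction
        ybar = sum(y(:) .* f(:)) / m00;          % centroid, y direction
        % Central and normalized central moments from the definitions.
        mu  = @(p, q) sum(((x(:) - xbar).^p) .* ((y(:) - ybar).^q) .* f(:));
        eta = @(p, q) mu(p, q) / m00^((p + q)/2 + 1);
        phi1 = eta(2, 0) + eta(0, 2);            % first Hu invariant moment
    end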

3 Phase Correlation Method [18, 33]

The moments and functions of moments can be employed as invariant global features of an image. Generally, these features are invariant under image translation, scale change, and rotation only when they are computed from the original 2D images. In practice, one observes the digitized, quantized, and often noisy version of the image, and all these properties are satisfied only approximately. The phase correlation method, however, shows strong robustness against non-uniform, time-varying illumination disturbances. Hence, the phase correlation method gives better performance than the moments approach.

The phase correlation is based on the Fourier shift property: a shift in the spatial coordinate frame between two functions results in a linear phase difference in the frequency domain of the Fourier transform of the two functions. Given two 2D functions g(x, y) and h(x, y) representing two images related by a translational shift a in the horizontal and b in the vertical direction,

    h(x, y) = g(x - a, y - b),

and the corresponding Fourier transforms are denoted by G(u, v) and H(u, v), then

    H(u, v) = G(u, v) \, e^{-j 2\pi (ua + vb)}.

The phase correlation is defined as the normalized cross-power spectrum between G and H, which is a matrix:

    Q(u, v) = \frac{H(u, v) \, G^{*}(u, v)}{|H(u, v) \, G^{*}(u, v)|} = e^{-j 2\pi (ua + vb)}.

If G(u, v) and H(u, v) are continuous functions, then the inverse Fourier transform (IFT) of Q(u, v) is a delta function:

    q(x, y) = \delta(x - a, y - b),

where the location of the peak gives the translational shift between the images. When two images are similar, their phase correlation function gives a distinct sharp peak. When two images are not similar, the peak drops significantly [14].
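This delta-peak behavior is easy to verify numerically. The fragment below is illustrative only (not from the article); it assumes the Image Processing Toolbox for im2double and its bundled cameraman.tif image. It phase-correlates an image with a circularly shifted copy of itself, so the peak of the inverse transform lands exactly at the imposed shift:

    g = im2double(imread('cameraman.tif'));  % any grayscale test image
    h = circshift(g, [30, 50]);              % shift by 30 rows, 50 columns
    G = fft2(g);
    H = fft2(h);
    % Normalized cross-power spectrum; eps guards against division by zero.
    Q = (H .* conj(G)) ./ (abs(H .* conj(G)) + eps);
    q = real(ifft2(Q));                      % approximates a delta function
    [peak, idx] = max(q(:));                 % peak is close to 1 here
    [row, col] = ind2sub(size(q), idx);      % (31, 51): the shift plus one,
                                             % owing to 1-based indexing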

In the phase correlation method, at query time, the normalized cross-power spectrum of the template test image with each database image is computed. Recognition is then made on the basis of the maximum peak value of the inverse transform.

The phase correlation method was originally proposed for the registration of translated images. Its computational time savings over spatial correlation methods become more significant as the images to be matched grow larger.

Steps used for algorithm implementation (a MATLAB sketch follows the list):

  1. Read the database images. Convert color images to gray scale. Resize to 256×256. Perform 2D fast Fourier transform (FFT) of the database images.

  2. Read the test image. Convert to gray scale if it is a color image. Resize to 256×256.

  3. Find the row and column indices, say i, j, of the first "zero" value in the segmented test image. With (i-50, j-50) as the reference corner, crop a rectangle of width and height 100 from the resized test image. Thus, the template test image is obtained by non-interactively cropping the resized test image.

  4. Zero pad the template test image (to keep its size the same as that of the database image) and perform 2D FFT and take its complex conjugate.

  5. Obtain the normalized cross-power spectrum R of the template test image with each database image by using the following equation:

    R(u, v) = \frac{F(u, v) \, G^{*}(u, v)}{|F(u, v) \, G^{*}(u, v)|},

    where f is a database image; g is the template test image cropped from the resized test image; F(u, v) is the Fourier transform of the database image f; and G^{*}(u, v) is the complex conjugate of the Fourier transform of the zero-padded template test image g. The denominator is the absolute value of the numerator.

  6. Get the peak of the IFT of R, i.e.,

    \text{peak} = \max_{x, y} \, r(x, y), \quad \text{where } r = \mathcal{F}^{-1}\{R\},

    in each case.

  7. The database image for which the maximum peak value is obtained is declared the match, thereby recognizing the test image.

  8. Determine the coordinates of the maximum peak value of r.

  9. Display the location of the template test image in the database image using the above obtained coordinates and the size of the template test image.
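The sketch below strings steps 1-9 together in MATLAB. It is illustrative rather than the authors' original code: the file names are hypothetical, rgb2gray/imresize/imshow assume the Image Processing Toolbox, the first "zero" pixel is searched for in the resized grayscale image rather than a separately segmented one, and bounds checking on the crop in step 3 is omitted.

    % Step 1: read, normalize, and transform the database images.
    db_files = {'chengdu_j10.jpg', 'chengdu_j20.jpg'};   % hypothetical names
    n = numel(db_files);
    F = cell(1, n);
    for k = 1:n
        im = imread(db_files{k});
        if size(im, 3) == 3, im = rgb2gray(im); end
        F{k} = fft2(double(imresize(im, [256 256])));
    end

    % Steps 2-3: read the test image and crop a 100x100 template with
    % (i-50, j-50) as the reference corner, i and j being the indices of
    % the first "zero" pixel.
    t = imread('test_fa18.jpg');                         % hypothetical name
    if size(t, 3) == 3, t = rgb2gray(t); end
    t = imresize(t, [256 256]);
    [i, j] = find(t == 0, 1, 'first');
    tpl = t(i-50:i-50+99, j-50:j-50+99);                 % 100x100 template

    % Step 4: zero-pad the template to 256x256; conjugate its FFT.
    tpl_pad = zeros(256);
    tpl_pad(1:100, 1:100) = double(tpl);
    Gc = conj(fft2(tpl_pad));

    % Steps 5-8: normalized cross-power spectrum with each database image;
    % record each peak value and its location.
    peaks = zeros(1, n);
    locs = zeros(n, 2);
    for k = 1:n
        R = (F{k} .* Gc) ./ (abs(F{k} .* Gc) + eps);
        r = real(ifft2(R));
        [peaks(k), idx] = max(r(:));
        [locs(k, 1), locs(k, 2)] = ind2sub(size(r), idx);
    end
    [~, best] = max(peaks);                              % step 7: best match

    % Step 9: display the template location in the recognized image.
    im = imresize(imread(db_files{best}), [256 256]);
    figure, imshow(im);
    rectangle('Position', [locs(best, 2), locs(best, 1), 100, 100], ...
              'EdgeColor', 'r');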

4 Experiments and Results

The following eight classes of fighter aircraft images are maintained in the database:

Chengdu_J10 [12]

Chengdu_J20 [10]

EF_Typhoon [12]

JF-17 [12]

LCA_Tejas [12]

Mig35 [12]

FA18 [AeroIndia 2011, Bangalore], [11]

Sukhoi_Su30 [AeroIndia 2011, Bangalore]

The aircraft images include different orientations about the pitch, yaw, and roll axes. Fifteen database images and 61 test images are used. It is assumed that the test images belong to the same categories as those used in the database. The database images are shown in Figures 2–16.

Figures 2–16 (2) Chengdu_J10. (3) Chengdu_J20. (4) EF_Typhoon. (5) JF17. (6) LCA_Tejas. (7) Mig35. (8) FA18(1). (9) FA18(2). (10) FA18(3). (11) FA18(4). (12) FA18(5). (13) Sukhoi_Su30(1). (14) Sukhoi_Su30(2). (15) Sukhoi_Su30(3). (16) Sukhoi_Su30(4).

The simulation results are given in Figures 17–26. In Figures 21 and 26, the sequence of the images represented on the x-axis is Image1: Chengdu_J10, Image2: Chengdu_J20, Image3: EF_Typhoon, Image4: JF-17, Image5: LCA_Tejas, Image6: Mig35, Image7–11: FA18(1–5), Image12–15: Sukhoi_Su30(1–4). A comparison of the results of the moment invariance and phase correlation methods is given in Table 1.

Figures 17–19 (17) Test Image1: FA18. (18) Template Test Image Cropped from the Test Image1. (19) Template Location in the Database Image for Test Image1.

Figure 21 Comparison of Peak Phase Correlation Values for Test Image1 and Different Database Images.

Figure 26 Comparison of Peak Phase Correlation Values for Test Image2 and Different Database Images.

Table 1

Comparison of Results of Moment Invariance (MI) [26] and Phase Correlation Methods.

No. of images in the database: 15
No. of test images used: 61
No. of test images recognized properly: MI method, 31; phase correlation method, 54
Overall % recognition (= no. of test images recognized properly/total no. of test images): MI method, 50.82 (31/61); phase correlation method, 88.52 (54/61)

Figure 17 shows Test Image1: FA18. Figure 18 shows the template test image cropped from Test Image1. The rectangle in Figure 19 indicates the location of the template test image in the database image for Test Image1. Figure 20 does not show a single sharp peak, indicating that there is no exact match for Test Image1 in the database. Figure 21 indicates that database image 9 (the highlighted stem), i.e., FA18(3), has the maximum peak phase correlation value with Test Image1.

Figure 20 Surf Plot of Phase Correlation for Test Image1.

Figures 22–24 (22) Test Image2: Sukhoi_Su30. (23) Template Test Image Cropped from the Test Image2. (24) Template Location in the Database Image for Test Image2.

Figure 22 shows Test Image2: Sukhoi_Su30. Figure 23 shows the template test image cropped from Test Image2. The rectangle in Figure 24 indicates the location of the template test image in the database image for Test Image2. Figure 25 shows a sharp peak, indicating that there is a close match for Test Image2 in the database; the location of the peak indicates the matching position. Figure 26 indicates that database image 13 (the highlighted stem), i.e., Sukhoi_Su30(2), has the maximum peak phase correlation value with Test Image2.

Figure 25 Surf Plot of Phase Correlation for Test Image2.

5 Conclusions

In this study, an aircraft image recognition system using the phase correlation method is described, and a comparison is made between this method and the moment invariant method. As shown in Table 1, the phase correlation method performs better. The data set in this work is limited because of the lack of availability of such data in the open literature. For future work, the authors plan to use a larger data set.


Corresponding author: K. Roopa, Department of Telecommunication, Sir M. Visvesvaraya Institute of Technology, Bangalore 562157, India

The authors thank the managements and principals of Sir M. Visvesvaraya Institute of Technology and Reva Institute of Technology and Management for their encouragement. The authors also thank the director of the R&D Cell, JNTUH, and the research review committee of JNTUH for the suggestions.

Bibliography

[1] S. Borde and U. Bhosle, Content based image retrieval using clustering, Int. J. Comput. Appl. 60 (2012), 20–27.

[2] C. C. Chen, Improved moment invariants for shape discrimination, Pattern Recognit. 26 (1993), 683–686. doi:10.1016/0031-3203(93)90121-C.

[3] R. Datta, D. Joshi, J. Li and J. Z. Wang, Image retrieval: ideas, influences, and trends of the new age, ACM Comput. Surv. 40 (2008), 1–60. doi:10.1145/1348246.1348248.

[4] S. A. Dudani, K. J. Breeding and R. B. McGhee, Aircraft identification by moment invariants, IEEE Trans. Comput. C-26 (1977), 39–46. doi:10.1109/TC.1977.5009272.

[5] J. Flusser, Moment invariants in image analysis, World Acad. Sci. Eng. Technol. 11 (2005), 376–381.

[6] J. Flusser and T. Suk, Pattern recognition by affine moment invariants, Pattern Recognit. 26 (1993), 167–174. doi:10.1016/0031-3203(93)90098-H.

[7] Y. Gangadhar, D. Lakshmi Srinivasulu and V. S. Giridhar Akula, Detection and comparison of objects using two dimensional geometric moment invariants, Int. J. Inf. Educ. Technol. 2 (2012), 458–460. doi:10.7763/IJIET.2012.V2.178.

[8] R. C. Gonzalez and R. E. Woods, Digital image processing, 3rd edition, Dorling Kindersley (India) Pvt. Ltd., India, 2009.

[9] J. W. Hsieh, J. M. Chen, C. H. Chuang and K. C. Fan, Aircraft type recognition in satellite images, IEE Proc. Vis. Image Signal Process. 152 (2005), 307–315. doi:10.1049/ip-vis:20049020.

[10] http://www.combataircraft.com/en/Military-Aircraft/Chengdu/J-20/. Accessed June 2013.

[11] http://www.fighter-aircraft.com. Accessed June 2013.

[12] http://www.fighter-planes.com. Accessed June 2013.

[13] M. K. Hu, Visual pattern recognition by moment invariants, IRE Trans. Inf. Theory 8 (1962), 179–187. doi:10.1109/TIT.1962.1057692.

[14] K. Ito, A. Morita, T. Aoki, H. Nakajima, K. Kobayashi and T. Higuchi, A fingerprint recognition algorithm combining phase-based image matching and feature-based matching, pp. 316–325, Springer-Verlag, Berlin, 2005. doi:10.1007/11608288_43.

[15] M. Jain and S. K. Singh, A survey on: content based image retrieval systems using clustering techniques for large data sets, Int. J. Manag. Inf. Technol. 3 (2011), 23–39. doi:10.5121/ijmit.2011.3403.

[16] A. Katare, S. K. Mitra and A. Banerjee, Content based image retrieval system for multi-object images using combined features, in: Proceedings of the International Conference on Computing: Theory and Applications (ICCTA'07), IEEE, 2007. doi:10.1109/ICCTA.2007.44.

[17] D.-Y. Kim, S.-I. Chien and H. Son, Multiclass 3-D aircraft identification and orientation estimation using multilayer feedforward neural network, in: IEEE International Joint Conference on Neural Networks, vol. 1, pp. 758–764, 18–21 Nov 1991.

[18] J. G. Liu and P. J. Mason, Essential image processing and GIS for remote sensing, pp. 113–114, John Wiley & Sons, 2009. doi:10.1002/9781118687963.

[19] M. Mercimek, K. Gulez and T. V. Mumcu, Real object recognition using moment invariants, Sadhana 30 (2005), 765–775. doi:10.1007/BF02716709.

[20] J. M. Molina, J. García, A. Berlanga, J. Besada and J. Portillo, Automatic video system for aircraft identification, in: ISIF, pp. 1387–1394, 2002.

[21] R. Mukundan, Fast computation of geometric moments and invariants using Schlick's approximation, Int. J. Pattern Recognit. Artif. Intell. 22 (2008), 1363–1377. doi:10.1142/S0218001408006764.

[22] T. Nakamori, H. Okabayashi and A. Kawanaka, 3-D object matching using phase correlation method, in: IEEE 10th International Conference on Signal Processing (ICSP), 24–28 Oct., pp. 1275–1278, 2010.

[23] R. J. Ramteke, Invariant moments based feature extraction for handwritten Devanagari vowels recognition, Int. J. Comput. Appl. 1 (2010), 1–5. doi:10.5120/392-585.

[24] Ch. S. Rao, S. S. Kumar and B. Chandra Mohan, Content based image retrieval using exact Legendre moments and support vector machine, Int. J. Multimedia Appl. 2 (2010), 69–79. doi:10.5121/ijma.2010.2206.

[25] A. W. Rihaczek and S. J. Hershkowitz, Identification of large aircraft, IEEE Trans. Aerospace Electron. Syst. 37 (2001), 706–710. doi:10.1109/7.937482.

[26] K. Roopa and T. V. Rama Murthy, Aircraft recognition system using image analysis, in: Proceedings of the International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT-2012), Springer LNEE 248, 2013. doi:10.1007/978-81-322-1157-0_21.

[27] Y. Rui, T. S. Huang and S.-F. Chang, Image retrieval: current techniques, promising directions, and open issues, J. Vis. Commun. Image Represent. 10 (1999), 39–62. doi:10.1006/jvci.1999.0413.

[28] F. Saghafi, S. M. Khansari Zadeh and V. E. Bakhsh, Aircraft visual identification by neural networks, JAST 5 (2008), 123–128.

[29] S. M. Singh and K. Hemachandran, Content-based image retrieval using color moment and Gabor texture feature, IJCSI Int. J. Comput. Sci. Issues 9 (2012), 299–309.

[30] A. A. Somaie, A. Badr and T. Salah, Aircraft image recognition using back propagation, in: Proceedings of the CIE International Conference on Radar, pp. 498–501, 2001.

[31] H. Yan and J. G. Liu, Robust phase correlation based feature matching for image co-registration and DEM generation, Remote Sens. Spatial Inf. Sci. 37 (2008), 1751–1756.

[32] X. Zhu and C. Ma, A review of 3D aircraft image recognition methods, Adv. Mater. Res. 433–440 (2012), 2794–2801. doi:10.4028/www.scientific.net/AMR.433-440.2794.

[33] B. Zitova and J. Flusser, Image registration methods: a survey, Image Vis. Comput. 21 (2003), 977–1000. doi:10.1016/S0262-8856(03)00137-9.

Received: 2013-5-14
Published Online: 2013-06-21
Published in Print: 2013-09-01

©2013 by Walter de Gruyter Berlin Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
