BY-NC-ND 3.0 license · Open Access · Published by De Gruyter, September 16, 2014

A Novel Edge Detection Algorithm Based on Texture Feature Coding

  • Abdulkadir Sengur, Yanhui Guo, Mehmet Ustundag and Ömer Faruk Alcin
Abstract

A new edge detection technique based on the texture feature coding method (TFCM) is proposed. The TFCM is a texture analysis scheme that is generally used in texture-based image segmentation and classification applications. The TFCM transforms an input image into a texture feature image whose pixel values represent the texture information of the pixel in the original image. Then, on the basis of the transformed image, several features are calculated as texture descriptors. In this article, the TFCM is employed differently to construct an edge detector. In particular, the texture feature number (TFN) of the TFCM is considered. In other words, the TFN image is used for subsequent processes. After obtaining the TFN image, a simple thresholding scheme is employed for obtaining the coarse edge image. Finally, an edge-thinning procedure is used to obtain the tuned edges. We conducted several experiments on a variety of images and compared the results with the popular existing methods such as the Sobel, Prewitt, Canny, and Canny–Deriche edge detectors. The obtained results were evaluated quantitatively with the Figure of Merit criterion. The experimental results demonstrated that our proposed method improved the edge detection performance greatly. We further implemented the proposed edge detector with a hardware system. To this end, a field programmable gate array chip was used. The related simulations were carried out with the MATLAB Simulink tool. Both software and hardware implementations demonstrated the efficiency of the proposed edge detector.

1 Introduction

Edge detection is an important research direction in computer vision and pattern recognition [10, 12, 17]. Edges define the boundaries between different regions in an image, which is helpful in image segmentation and object recognition. Edges can reveal shadows or any other distinct changes in the intensity of an image. Therefore, edge detection is a fundamental step of low-level image processing, and accurate edge locations are necessary for subsequent higher-level processing [5].

Up to now, many edge detection algorithms have been proposed, including classical methods such as the Prewitt and Sobel operators, as well as more sophisticated algorithms from Canny [2] and Marr–Hildreth [13]. The Prewitt operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction. The classical Sobel algorithm requires little computation and runs quickly, and has therefore been applied extensively in many fields; however, because it considers only a limited set of edge directions and has poor noise immunity, its applicability is also limited. Detection methods based on the Canny algorithm and its variants are widely used because, with respect to the precision criteria it optimizes, Canny's filter is considered the "best" edge detection filter. The Marr–Hildreth edge detection method is a gradient-based operator that uses the Laplacian to take the second derivative of an image: if there is a step change in the intensity of the image, it appears as a zero crossing in the second derivative. All of the above methods are gradient based; they assume that edges are pixels with a high gradient, where a fast rate of change of intensity is observed in the direction given by the angle of the gradient vector.
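As an illustration of these gradient-based operators, the following sketch (pure NumPy, not the exact implementations used later in the experiments) computes the gradient magnitude with the Sobel or Prewitt 3 × 3 kernels:

```python
import numpy as np

def conv2_same(img, k):
    """Naive 'same' 2D convolution with zero padding -- fine for a 3x3 kernel."""
    H, W = img.shape
    pad = np.pad(img.astype(float), 1)
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i+3, j:j+3] * k[::-1, ::-1])
    return out

def gradient_magnitude(img, operator="sobel"):
    """Gradient magnitude using the Sobel or Prewitt kernels."""
    c = 2 if operator == "sobel" else 1  # Prewitt uses 1 in the center row/column
    kx = np.array([[-1, 0, 1], [-c, 0, c], [-1, 0, 1]], float)
    ky = kx.T
    gx, gy = conv2_same(img, kx), conv2_same(img, ky)
    return np.hypot(gx, gy)

# A vertical step edge: the response peaks along the discontinuity.
step = np.zeros((8, 8)); step[:, 4:] = 255.0
mag = gradient_magnitude(step, "sobel")
print(mag[4, 3], mag[4, 4])  # 1020.0 1020.0 -- large response on both sides of the step
```

Thresholding this magnitude image (the step the `edge`-style detectors perform next) then yields a binary edge map.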

In this article, we explored the applicability of the texture feature coding method (TFCM) for edge detection purposes. The TFCM, which forms the basis for texture features, was first discussed by Horng [8] and later applied in various applications such as tumor detection and landmine detection [11, 18]. Different from the above-mentioned works [8, 11, 18], in this work we opted to use just a part of the TFCM. In other words, we were interested in the texture feature numbers (TFNs), where the input gray-level image is transformed into the TFN image by means of differencing in the image domain followed by successive stages of vector classification. After this mapping procedure, thresholding and edge-thinning mechanisms were employed to obtain the final edge-detected image.

Recently, several researchers have proposed various hardware implementations of image processing applications. The implementation of several edge detection algorithms such as the Sobel, Prewitt, Roberts, and compass edge detectors on a field programmable gate array (FPGA) was explained briefly in Ref. [16]. Su and Sie [15] proposed a novel scheme that performs a new method called chaotic and edge-enhanced error diffusion and can also be reduced to the conventional Floyd–Steinberg error diffusion; the authors demonstrated the hardware performance of their scheme on an FPGA chip to enable further applications. Dewan et al. [4] presented an edge-segment-based moving object segmentation algorithm that is independent of a background model; the method detects moving edges using the three most recent frames, and moving regions are extracted by employing a watershed-based algorithm. Hermanto et al. [6] proposed a thinning algorithm for fingerprint feature extraction as part of a fingerprint recognition system and implemented the algorithm on an FPGA device.

Moreover, we implemented the proposed edge detection method by using the system generator (SG). The SG is a useful tool for application of the digital signal processing (DSP) algorithms for FPGA implementation. FPGAs provide a better platform for real-time algorithms on application-specific hardware with substantially greater performance than programmable DSPs. The remainder of this article is organized as follows: in Section 2, we briefly review the TFCM method. In Section 3, we describe how we use TFCM as an edge detector for optical images. We give several experimental results and comparisons in Section 4. Finally, we conclude our work in Section 5.

2 Texture Feature Coding Method

The basic rationale behind the TFCM technique, as proposed by Horng [8], is the translation of an intensity image to a texture feature number (TFN) image by means of differencing in the image domain followed by successive stages of vector classification.

Consider a pixel (i, j) in an intensity image and its surrounding 3 × 3 pixel neighborhood, illustrated in Figure 1. Horng separates the pixels in the neighborhood into horizontal, vertical, and diagonal connectivity sets. The differences are then calculated along each vector in each of these connectivity sets, and the resulting two-element difference vectors are thresholded at a tolerance (ε) into quantized two-element vectors taking values from the set {–1, 0, 1}, corresponding to negative, no-change, and positive difference values, respectively.

Figure 1 3 × 3 Pixel Neighborhoods.

This process maps a 3 × 3 pixel neighborhood to two sets of two 2 × 1 quantized difference vectors. After differencing and thresholding, the TFCM maps the individual quantized difference vectors to gray-level class numbers based on the degree of variation in each vector. Such a classification scheme is described in Ref. [8] and reproduced in Figure 2.

Figure 2 Types of Gray-Level Graphical Structure Variations and Corresponding Gray-Level Class Numbers (1–4).

In Figure 2, the falling lines correspond to the quantized difference vector values of –1, flat lines correspond to the quantized difference vector values of 0, and rising lines correspond to the quantized difference vector values of 1. Thus, Figure 2 provides a mapping from each of the quantized difference vectors to gray-level class numbers taking values 1–4. Each pair of gray-level class numbers is combined into a single initial texture-feature class number using the mapping shown in Table 1. This mapping takes each set of two gray-level class numbers and maps each set to initial feature numbers taking values 1–10.

Table 1 Mapping from Gray-Level Numbers (1–4) to Initial Class Numbers (1–10).

Finally, we map the initial feature numbers to a single class number using the mapping shown in Table 2. By applying the TFCM procedure at each pixel location (i, j) in an image, the approach maps the intensity image into a two-dimensional TFN image taking discrete values 0–54.
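A structural observation (our reading, not stated explicitly in the text): the sizes of the ranges suggest that each combination stage indexes unordered pairs — 4 gray-level classes give 10 unordered pairs (Table 1's range), and 10 initial feature numbers give 55 unordered pairs, i.e., TFNs 0–54 (Table 2's range). The sketch below uses an illustrative lexicographic ordering; the paper's tables fix their own ordering by degree of variation.

```python
def pair_index(a, b, n):
    """Index (1-based) of the unordered pair {a, b} with 1 <= a, b <= n,
    counted lexicographically. Illustrative ordering only: Tables 1 and 2
    of the paper define their own assignment."""
    lo, hi = min(a, b), max(a, b)
    # pairs whose smaller element is below lo, plus the offset within row lo
    return (lo - 1) * n - (lo - 1) * (lo - 2) // 2 + (hi - lo + 1)

# 4 gray-level classes -> 10 initial feature numbers (Table 1's range)
assert max(pair_index(a, b, 4) for a in range(1, 5) for b in range(1, 5)) == 10
# 10 initial feature numbers -> 55 unordered pairs -> TFNs 0..54 (Table 2's range)
assert max(pair_index(a, b, 10) for a in range(1, 11) for b in range(1, 11)) == 55
```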

Table 2 Mapping from Primary (1) and Secondary (2) Initial Feature Numbers to TFN.

3 Proposed Method

The TFCM was designed to characterize the texture regions of a given image by transforming the image to a TFN image. Regarding the TFCM mapping process described, one must note that in each successive class assignment stage, either quantized difference vectors are mapped to gray-level class numbers, gray-level class numbers are mapped to initial feature numbers, or initial feature numbers are mapped to TFNs. The resulting feature numbers at every stage are chosen so that higher feature or class numbers correspond to higher degrees of gray-level variation. Thus, for the final TFNs, values of 0 represent little gray-level variation and values of 54 represent high degrees of gray-level variation over the immediate neighborhood of (i, j). When an intensity image has been transformed to a TFN image, it is possible to apply one of the texture descriptor methods such as gray-level co-occurrence matrices to the TFN image, and extract features such as mean convergence, code variance, code entropy, and uniformity.
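Once an image has been transformed to a TFN image, such histogram-style descriptors can be sketched as follows. The exact definitions of the TFCM features (mean convergence, code variance, etc.) are given by Horng [8]; the formulas below are standard histogram stand-ins, not the paper's own.

```python
import numpy as np

def tfn_descriptors(tfn_img):
    """Histogram-based descriptors of a TFN image (values 0..54).
    Entropy and uniformity follow their standard histogram definitions;
    see Horng [8] for the full TFCM feature set."""
    hist = np.bincount(tfn_img.ravel(), minlength=55).astype(float)
    p = hist / hist.sum()          # normalized TFN histogram
    nz = p[p > 0]
    idx = np.arange(55)
    mean = float(np.sum(idx * p))
    return {
        "code_mean": mean,
        "code_variance": float(np.sum((idx - mean) ** 2 * p)),
        "code_entropy": float(-np.sum(nz * np.log2(nz))),
        "uniformity": float(np.sum(p ** 2)),
    }

flat = np.zeros((16, 16), dtype=int)   # homogeneous region: all TFN 0
d = tfn_descriptors(flat)
assert d["uniformity"] == 1.0 and d["code_entropy"] == 0.0  # no texture variation
```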

In this study, instead of using the TFCM-based texture feature extraction method, we only considered the TFN for edge detection. The algorithm for translating a single pixel from an image intensity value to a TFN is as follows [18]:

  1. Extract a 3 × 3 matrix z from image I centered at pixel (i, j), such that z(2,2) = I(i,j). Assign a numeric threshold ε > 0.

  2. Calculate the difference vectors along the four directions:

    • ΔV = [z(1,2) − z(2,2), z(2,2) − z(3,2)]′; //Vertical

    • ΔH = [z(2,1) − z(2,2), z(2,2) − z(2,3)]′; //Horizontal

    • ΔD1 = [z(1,1) − z(2,2), z(2,2) − z(3,3)]′; //Diagonal

    • ΔD2 = [z(1,3) − z(2,2), z(2,2) − z(3,1)]′; //Anti-diagonal

  3. Apply the threshold to obtain the quantized two-element vectors. The related pseudo-code is given as follows:

    • for U ∈ {ΔV, ΔH, ΔD1, ΔD2}
        for k ∈ [1, 2]
          if U(k) < −ε
            U(k) = −1
          else if U(k) > ε
            U(k) = 1
          else
            U(k) = 0
          end
        end
      end

  4. Determine the gray-level class number T1(U) for each quantized vector U, based on the illustration in Figure 2.

  5. Determine the initial feature number T2(primary) from T1(ΔV) and T1(ΔH), and T2(secondary) from T1(ΔD1) and T1(ΔD2), based on Table 1.

  6. Determine the TFN from T2(primary) and T2(secondary) based on Table 2.
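Steps 1–3 above are fully specified and can be sketched directly; steps 4–6 additionally require the mappings of Figure 2 and Tables 1 and 2, so this illustrative Python sketch (0-based indexing instead of the 1-based z notation above) stops at the quantized difference vectors:

```python
import numpy as np

def quantized_difference_vectors(z, eps):
    """Steps 1-3 for a 3x3 neighborhood z, with z[1, 1] the center pixel
    (the paper's z(2,2)): difference vectors along the four directions,
    thresholded at tolerance eps into values from {-1, 0, 1}."""
    z = z.astype(int)
    deltas = {
        "V":  np.array([z[0, 1] - z[1, 1], z[1, 1] - z[2, 1]]),  # vertical
        "H":  np.array([z[1, 0] - z[1, 1], z[1, 1] - z[1, 2]]),  # horizontal
        "D1": np.array([z[0, 0] - z[1, 1], z[1, 1] - z[2, 2]]),  # diagonal
        "D2": np.array([z[0, 2] - z[1, 1], z[1, 1] - z[2, 0]]),  # anti-diagonal
    }
    # zero out differences within the tolerance, then keep only the sign
    return {k: np.sign(np.where(np.abs(d) > eps, d, 0)) for k, d in deltas.items()}

z = np.array([[10, 200, 10],
              [10,  10, 10],
              [10,  10, 10]])
q = quantized_difference_vectors(z, eps=50)
print(q["V"])   # [1 0]: the pixel above the center is brighter beyond the tolerance
print(q["H"])   # [0 0]: no horizontal change beyond the tolerance
```

Steps 4–6 would then map each quantized pair to a gray-level class number, combine the pairs via Tables 1 and 2, and emit the final TFN in 0–54.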

We illustrate the proposed TFN-based edge detection on the image depicted in Figure 3. Figure 3A shows a disc image comprising a shape of intensity 0 on a background of intensity 255. The obtained TFN image is depicted in Figure 3B, where a gray-level variation tolerance of 50 was selected.

Figure 3 TFN-Based Edge Detection. (A) Disc image; (B) TFN image; (C) thresholded image; and (D) final image (after edge thinning).

Note that the TFCM features accentuate changes in the intensity levels of the image, taking large values at locations of large image gradients. In other words, intensity 0 in the TFN image in Figure 3B represents the homogeneous regions, and the high-intensity values indicate the edge regions of the image. The next stage is to apply a threshold to the TFN image to decide whether edges are present. Figure 3C shows the image obtained after thresholding. Finally, an edge-thinning technique is used to remove the redundant points on the edges of the thresholded TFN image; the obtained image is depicted in Figure 3D. The Hilditch thinning algorithm, widely used in the image processing community [7], was adopted for the edge-thinning part of the proposed scheme. There are two types of Hilditch algorithm, one using a 4 × 4 window and the other using a 3 × 3 window; in this article, the 3 × 3 window type was considered.

Let us consider a pixel p in the center of a 3 × 3 window. For the pixel p, the thinning algorithm decides whether to keep it as part of the resulting skeleton or delete it from the image. For this purpose, the eight neighbors of pixel p are investigated; pixel p is deleted or kept according to five conditions [7], which must be considered together. During one pass, a pixel p that is to be deleted is first set to an intermediate value such as –1 according to the Hilditch algorithm; it is set to 0 only after all pixels in the image have been investigated during that pass. This process is repeated until no changes are made. The reader may refer to Ref. [7] for detailed information about the algorithm.
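The paper applies Hilditch's algorithm [7]; as an illustrative stand-in, the following sketch implements the closely related Zhang–Suen thinning, which makes the same style of keep-or-delete decision from the eight neighbors of each candidate pixel in a 3 × 3 window:

```python
import numpy as np

def zhang_suen_thin(img):
    """Thin a binary image (1 = foreground). Zhang-Suen two-subiteration
    thinning is used here as a stand-in for Hilditch's five-condition
    algorithm [7]; both decide deletion from the eight 3x3 neighbors."""
    img = img.copy()
    def neighbors(i, j):
        # p2..p9, clockwise from north
        return [img[i-1, j], img[i-1, j+1], img[i, j+1], img[i+1, j+1],
                img[i+1, j], img[i+1, j-1], img[i, j-1], img[i-1, j-1]]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            marks = []   # mark first, delete after the pass (simultaneous deletion)
            for i in range(1, img.shape[0] - 1):
                for j in range(1, img.shape[1] - 1):
                    if img[i, j] != 1:
                        continue
                    p = neighbors(i, j)
                    B = sum(p)                                   # foreground neighbors
                    A = sum(p[k] == 0 and p[(k+1) % 8] == 1 for k in range(8))
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= B <= 6 and A == 1 and ok:
                        marks.append((i, j))
            for i, j in marks:
                img[i, j] = 0
            changed = changed or bool(marks)
    return img

block = np.zeros((5, 5), dtype=int); block[1:4, 1:4] = 1
skel = zhang_suen_thin(block)
print(int(skel.sum()), skel[2, 2])  # 1 1 -- the 3x3 block thins to its center pixel
```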

4 Experimental Results and Analysis

In this section, the proposed TFCM-based edge detection algorithm is compared with a variety of existing edge detection methods such as Prewitt, Sobel, and Canny. Figure 4A is the original cameraman gray-scale image with 256 gray levels. Figure 4B–D show the results of applying the Prewitt, Sobel, and Canny edge detectors to the original cameraman image, respectively. Figure 4E is the result obtained with the proposed edge detection algorithm described in this article. For the Prewitt, Sobel, and Canny algorithms, we used the default parameters defined in Ref. [14]. For the TFCM algorithm, we set the tolerance (ε) to 20 and the final threshold to 7; these values were determined from earlier experimental studies.

Figure 4 Comparison of the Proposed Method with Traditional Edge Detection Methods. (A) Cameraman image; (B) Prewitt; (C) Sobel; (D) Canny; and (E) our proposal.

According to the experimental results shown in Figure 4B–E, Prewitt and Sobel could not detect the edges of the building and the hand of the cameraman, which are marked by dashed red circles and ellipses, respectively. Canny and the TFCM-based method detected the edges of both regions; however, Canny also detected many false edges on the grass region and the trousers of the cameraman. Overall, the TFCM method yielded acceptable edges over the entire cameraman image.

We also experimented with a variety of images, shown in Figure 5. The first column of Figure 5 shows the original images. In the first row of Figure 5, several connected white-filled discs are located on a black background; in the second row, we used the popular "rice" image; and in the third row, we constructed an artificial image where three rectangular blocks are connected on a white background. The results can be seen in the second column of Figure 5. In these experiments, the tolerance (ε) was set to 38 and the threshold value to 4. The detected edges were more precise, complete, and continuous, and carried more edge information. For the discs and rice grains, all edges were detected; however, on careful examination of the three rectangular blocks, we realized that the corners of the blocks were lost.

Figure 5 Experimental Results for Various Images. (A) Original image; (B) proposed method.

Moreover, we evaluated the performance of the proposed method quantitatively [1]. The approach adopted here is to use the ground truth provided by a simulated image in conjunction with the most widely used measure of edge deviation, Pratt’s Figure of Merit (FOM) [1]. Pratt considered three major areas of error associated with the determination of an edge: missing valid edge points, failure to localize edge points, and classification of noise fluctuations as edge points. In addition to these considerations, when measuring edge detection performance, edge detectors that produce smeared edge locations should be penalized, whereas those that produce edge locations that are localized should be awarded credit. Hence, Pratt introduced the FOM technique as one that balances the three types of error above, defined as

FOM = \frac{1}{\max\{I_D, I_I\}} \sum_{k=1}^{I_D} \frac{1}{1 + \alpha (d_k)^2},    (1)

where I_I and I_D are the numbers of ideal and detected edge points, respectively, and d_k is the separation distance of the kth detected edge point normal to a line of ideal edge points. The scaling constant α provides a relative penalty between smeared and isolated offset edges and was set to α = 0.2. FOM = 1 corresponds to a perfect match between the ideal and detected edge points. As the deviation of the detected points increases, the FOM approaches zero.
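Equation (1) can be computed directly once d_k is fixed; the usual discrete choice, assumed here, is the distance from each detected edge pixel to the nearest ideal edge pixel:

```python
import numpy as np

def pratt_fom(detected, ideal, alpha=0.2):
    """Pratt's Figure of Merit, Eq. (1). d_k is taken as the distance from
    each detected edge pixel to the nearest ideal edge pixel (the common
    discrete implementation). Both inputs are boolean edge maps."""
    ideal_pts = np.argwhere(ideal)
    det_pts = np.argwhere(detected)
    I_I, I_D = len(ideal_pts), len(det_pts)
    if max(I_I, I_D) == 0:
        return 1.0  # both maps empty: trivially a perfect match
    total = 0.0
    for p in det_pts:
        d2 = np.min(np.sum((ideal_pts - p) ** 2, axis=1))  # squared distance d_k^2
        total += 1.0 / (1.0 + alpha * d2)
    return total / max(I_I, I_D)

ideal = np.zeros((8, 8), bool); ideal[:, 4] = True   # a vertical ideal edge
print(pratt_fom(ideal, ideal))                       # perfect match -> 1.0
shifted = np.roll(ideal, 1, axis=1)                  # every edge pixel off by one
print(round(pratt_fom(shifted, ideal), 4))           # 1/(1 + 0.2) = 0.8333
```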

We further experimented on various images and calculated the FOM values. The related FOM values are tabulated in Table 3, and the obtained edge detections are depicted in Figure 6.

Figure 6 Further Experimental Results for Various Images. (A) Original image [6]; (B) ground-truth edges; (C) proposed method; (D) Canny method; and (E) Canny–Deriche method.

Table 3 FOM Values for the Proposed, Canny, and Canny–Deriche Methods.

Image             Proposed Method   Canny    Canny–Deriche
Figure 6-1-(A)    0.1225            0.1019   0.1245
Figure 6-2-(A)    0.6659            0.2915   0.6619
Figure 6-3-(A)    0.2427            0.1214   0.2415

We also compared the obtained results with the Canny edge detection; the results clearly show the superiority of the proposed edge detection scheme. We further conducted experiments to compare the proposed method with the Canny–Deriche edge detection algorithm [3]. Deriche proposed an infinite-support extension of Canny's filter in 1987; in his work, a second-order recursive implementation was used to avoid truncating the filter. Detailed information can be found in Ref. [3]. The implementation parameters of Deriche's algorithm, namely α, the high threshold, and the low threshold, were set to 5, 20, and 40, respectively; these values were determined experimentally. The edge images obtained with Deriche's algorithm are depicted in Figure 6E, and the related FOM values are tabulated in Table 3. On visual inspection, the proposed method and Deriche's algorithm yielded similar results. However, examining the FOM results in Table 3, the proposed method yielded higher FOM values for Figures 6-2 and 6-3, whereas Deriche's algorithm obtained better edges only for Figure 6-1. Both the proposed method and Deriche's algorithm outperformed the Canny algorithm according to visual inspection and the FOM values.

Finally, we conducted an experiment to investigate the performance of the proposed method in a special case, where an image has two segments, the intensity differences between the pixels within each segment are small, and the intensity differences between adjacent neighboring pixels across the boundary are higher. Thus, we constructed an artificial image, shown in Figure 7A. The dark part of the image has an average gray level of 15 with variance 5, and the light part has an average gray level of 20 with variance 5. The result obtained with the proposed method is shown in Figure 7B. As can be seen, the proposed method could not recover the exact edge, and there are several isolated noise points in each segment. The Canny result is shown in Figure 7C; the Canny method also could not find the exact edges, and there are many false edges. We also calculated the related FOM values: the proposed method yielded 0.2523 and the Canny method yielded 0.1717.

Figure 7 Edge Detection for a Special Image. (A) Original image; (B) proposed method's result; and (C) Canny's result.

We conducted further experiments with two images and evaluated the results quantitatively by calculating the FOM values. The related images are given in Figure 8A. In the first image, there is a white ball on a floor with its shadow, and in the second image there are four rectangles with different gray levels. In these experiments, we compared our method with the Laplacian and Roberts edge detectors. In the second column of Figure 8, the ground-truth edges are given, and the remaining columns show the proposed method, Laplacian, and Roberts results, respectively. The related FOM values are tabulated in Table 4. Both by visual inspection and by the FOM criterion, our proposed method outperformed the other methods.

Figure 8 Further Experimental Results for Various Images. (A) Original image [6]; (B) ground-truth edges; (C) proposed method; (D) Laplacian edge; and (E) Roberts.

Table 4 FOM Values for the Proposed, Laplacian, and Roberts Edge Detectors.

Image             Proposed Method   Laplacian   Roberts
Figure 8-1-(A)    0.2152            0.1009      0.2145
Figure 8-2-(A)    0.7050            0.5915      0.4019

5 Hardware Implementation

Recently, the key to implementing high-performance image processing applications has been the use of programmable logic devices, in particular FPGAs. However, for applications in which high-level complex algorithms are involved, a complete hardware implementation is impractical. Therefore, it is usual to employ a hybrid software/hardware implementation in which the hardware accelerates specialized functions and a processor, usually a conventional central processing unit, accomplishes general-purpose computing. The implementation of hardware/software applications is performed in different environments, using a programming language for the software and a hardware description language (VHDL or Verilog) for the hardware description [9]. Unlike general HDL languages, SG presents a model-based design interface using an extended library of building blocks [19]. In addition, SG takes the abstraction level one step higher and uses a simulation environment to provide a graphical approach: the connected graphical building blocks and their parameterization form the description of the model. The block diagram of the proposed TFCM-based edge detection algorithm is depicted in Figure 9. We used the SG platform to implement the proposed method.

Figure 9 Block Diagram of the Proposed TFCM-Based Edge Detection Algorithm.

As can be seen from Figure 9, the acquisition of the input images from the MATLAB workspace is the first step of the implementation. The acquired images are then conveyed to the SG blocks with the “gateway in” and “register” blocks. The essential processes that need to be handled for edge detection are performed in the TFCM-based edge detection block. The blocks that were used to construct the TFCM-based edge detection are given in Figure 10.

Figure 10 Blocks for TFCM Edge Detection.

In Figure 10, we first constructed a differentiating and thresholding block in which the first three steps of the proposed algorithm were implemented. An illustration of the differentiating and thresholding block is given in Figure 11. As can be seen in Figure 11, line buffers, adder logic, registers, source code blocks, and a constant block are incorporated to construct the model. Two line buffers are needed to separate three sequential lines of the input image. The code block contains the code for obtaining the quantized two-element vectors taking values from the set {–1, 0, 1}.

Figure 11 Differentiating and Thresholding Block.

It is also worth mentioning that we used several MCode blocks to perform the fourth, fifth, and sixth steps of the proposed edge detection algorithm described in Section 3. In other words, after differencing and thresholding, the TFCM maps the individual quantized difference vectors to gray-level class numbers based on the degree of variation in each vector. For this purpose, we employed three MCode blocks, namely "kadirizm," "mehmetizm," and "omerizm." The gray-level class numbers were obtained after running the kadirizm block. Each obtained pair of gray-level class numbers was then combined into a single initial texture-feature class number by a code block implementing the mapping shown in Table 1; this procedure was performed in the mehmetizm MCode block, whose output z varies between 1 and 10, realizing the mapping from gray-level class numbers to initial class numbers. Finally, another code block, namely omerizm, calculates a single class number using the mapping shown in Table 2; it abstracts the entire code used to construct the TFN, and its output parameter z varies in the 0–54 range. After obtaining the TFN, we employed a thresholding process to extract the edges of the input image.

Finally, an edge-thinning technique is used to remove unwanted spurious points on the edges of the thresholded TFN image. The skeleton block is presented in Figure 12.

Figure 12 Skeleton Block.

We used the above hardware implementation on a variety of images with different sizes. As mentioned, hardware implementation accelerates processing. In Table 5, we give an analysis relating the processing time to the image size at the chosen operation clock period.

Table 5 SG Operation Time.

Size (pixels)   Operation Time (s)
198 × 175       0.028
200 × 200       0.032
600 × 450       0.216
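As a quick consistency check (our observation, not stated in the text), the timings in Table 5 correspond to an essentially constant per-pixel throughput of roughly 1.24–1.25 Mpixels/s, as expected for a streaming pipeline at a fixed clock:

```python
# Timings from Table 5: (width, height, seconds)
sizes = [(198, 175, 0.028), (200, 200, 0.032), (600, 450, 0.216)]
for w, h, t in sizes:
    # prints the pixel count and the implied throughput for each row
    print(f"{w}x{h}: {w * h} px, {w * h / t / 1e6} Mpixels/s")
```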

6 Conclusions

In this article, we employed the TFCM to detect edges in an image. The TFCM was designed to capture explicit features that are used to distinguish textures. To illustrate the performance of TFCM-based edge detection, we conducted several experiments with grayscale images. The experimental results showed that the algorithm is efficient. We also conducted several experiments to compare our results with popular edge detection algorithms such as the Prewitt, Sobel, and Canny edge detectors. The edges detected with our proposal were more precise, complete, and continuous, and carried more edge information. We also demonstrated the hardware performance of our scheme by using an FPGA chip to enable further applications; to this end, we used SG, a useful tool for implementing DSP algorithms on FPGAs. Note that we have not considered noisy images; in future work, we will investigate the use of the TFCM for noisy and textured images.


Corresponding author: Yanhui Guo, School of Science, Technology and Engineering Management, St. Thomas University, Miami Gardens, FL 33054, USA, e-mail:

About the authors

Abdulkadir Sengur

Abdulkadir Sengur received his bachelor’s degree in electronics and computers education from Firat University, Turkey, in 1999 and master’s degree from Firat University in 2003. He became a research assistant in the Technical Education Faculty of Firat University in February 2001. He obtained a PhD degree from the Department of Electrical-Electronics Engineering, Engineering Faculty, Firat University. He is currently an associate professor in the Technology Faculty of Firat University. His research interests include signal processing, image segmentation, pattern recognition, medical image processing, and computer vision.

Yanhui Guo

Yanhui Guo received a BS degree in automatic control from Zhengzhou University, P.R. China, in 1999; MS degree in pattern recognition and intelligence system from Harbin Institute of Technology, Harbin, Heilongjiang Province, P.R. China, in 2002; and PhD degree from the Department of Computer Science, Utah State University, USA, in 2010. He is currently working in the School of Science and Technology, Saint Thomas University. His research interests include image processing, pattern recognition, medical image processing, computer-aided detection/diagnosis, fuzzy logic, and neutrosophic theory.

Mehmet Ustundag

Mehmet Ustundag received his bachelor’s degree in electronics and computers education from Firat University, Turkey, in 2002, and master’s degree from Firat University in 2006. He became a research assistant in the Technical Education Faculty of Firat University in February 2004. He obtained a PhD degree from the Department of Electrical-Electronics Engineering, Engineering Faculty, Firat University. His research interests include signal processing and FPGA.

Ömer Faruk Alcin

Ömer Faruk Alcin received his bachelor’s degree in electronics and computers education from Firat University, Turkey, in 2007, and master’s degree from Firat University in 2011. He became a research assistant in the Technical Education Faculty of Firat University in February 2009. He is currently a PhD candidate at the Department of Electrical-Electronics Engineering, Engineering Faculty, Firat University. His research interests include FPGA and sigma-delta converters.

Received: 2014-3-25
Published Online: 2014-9-16
Published in Print: 2015-6-1

©2015 by De Gruyter

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.