Open Access (CC BY 4.0 license). Published by De Gruyter, January 24, 2022

Edge detail enhancement algorithm for high-dynamic range images

  • Lanfei Zhao and Qidan Zhu

Abstract

Existing image enhancement methods suffer from slow data transmission and poor conversion, resulting in a low image-recognition rate and low recognition efficiency. To solve these problems and improve the recognition accuracy and efficiency of image features, this study proposes an edge detail enhancement algorithm for high-dynamic range images. The original image is transformed by the Fourier transform, and the low-frequency and high-frequency images are obtained by frequency-domain Gaussian filtering and the inverse Fourier transform. The low-frequency image is processed by contrast limited adaptive histogram equalization, and the high-frequency image is enhanced by unsharp masking and gray transformation. The low-frequency and high-frequency enhanced images are then weighted and fused to enhance the edge details of the image. The experimental results show that the proposed algorithm maintains an image recognition rate of more than 80% in practical application, with a recognition time within 1,200 min, which enhances the image effect, improves the recognition accuracy and efficiency of image characteristics, and fully meets the research requirements.

1 Introduction

Real scenes have a very large dynamic range with rich color and light information, and the human eye can observe much of this range. However, owing to the restrictions of the charge-coupled device and the analog-to-digital converter, commonly used electronic display equipment is quite limited: it can only record the contrast, brightness, and color information of a local range of the real scene, which falls far short of what the human eye records. When the dynamic range of the scene exceeds the camera acquisition range, the photographer usually changes the camera exposure to control the captured brightness. However, because of hardware limitations, this adjustment is possible only within a certain brightness range. The resulting image will always contain overexposed or underexposed areas, which leads to the loss of scene details. Enhanced recognition of high-dynamic range images is, therefore, required.

Document [1] proposes an image enhancement algorithm based on dual-domain decomposition that simultaneously achieves contrast improvement and noise suppression. First, a Gaussian filter decomposes the image into a base layer and a detail layer, decoupling contrast improvement from noise suppression; second, the base and detail layers are enhanced using single-scale Retinex with a correction function and a nonsubsampled shearlet noise-reduction algorithm; finally, the layered images are synthesized, the gray-scale values are extended, and the differential operator is strengthened, realizing gray-scale extension and detail enhancement of the synthesized image while ensuring uniform color and prominent details. However, the method is slow to identify images because its identification process is complicated. Document [2] proposes a weak-illumination image enhancement algorithm based on convolutional neural networks (CNNs). First, four derived images are generated from the weak-light image: a contrast limited adaptive histogram equalization derivative, a gamma transformation derivative, a logarithmic transformation derivative, and a bright-channel enhancement derivative; second, the weak-light image and its four derived images are input into the CNN; finally, an enhanced image is output. The image data transmission and conversion of this method are not effective, resulting in low recognition accuracy that cannot meet actual needs. Document [3] uses artificial intelligence technology and a definition-list algorithm to identify retinal fundus images, builds a defense model against speckle-noise attacks in image recognition, and uses adversarial training and a feature-fusion strategy to identify the correct pixels of the image and prevent interference from image noise. The method has a high image recognition accuracy and wastes little time.

The above image enhancement methods have poor data transmission and conversion effects in practical application, leading to the degradation of image details and a low image recognition rate and recognition efficiency. To improve both, we propose an optimized edge detail enhancement algorithm for high-dynamic range images. Edge details are enhanced by improving the gray-scale resolution and detail resolution, so as to effectively restore and enhance the definition of the contour edges and details of dynamic images and improve the image recognition effect. The innovation of this article is to enhance image edge details based on the fuzzy dynamic image and an inverse difference calculation, removing noise from the image data and improving the gray value of the dynamic image, which facilitates subsequent recognition and improves the recognition speed. At the same time, the image gradient operator, the Laplacian operator, and the unsharp-masking model are summarized in the spatial domain, so as to improve image quality and further improve the edge detail enhancement effect and recognition efficiency.

The contributions of this article are as follows: it accurately enhances virtual reality images, improves image recognition accuracy, improves recognition efficiency, and shortens the recognition cycle.

The organizational structure of this article is as follows:

  • 1. Introduction

  • 2. Optimization of the edge detail enhancement algorithm for high-dynamic range images

  • 2.1. Image detail gradient value processing algorithm

  • 2.2. Frequency domain edge denoising algorithm for dynamic images

  • 2.3. Implementation of image edge detail enhancement

  • 3. Experimental result

  • 4. Conclusion

2 Optimization of edge detail enhancement algorithm for high-dynamic range images

2.1 Image detail gradient value processing algorithm

To ensure the practical effect of the edge detail enhancement algorithm for high-dynamic range images, the edge detail enhancement processing method is first optimized. The first-order differential, often used in dynamic image processing, is also called the gradient. According to the fuzzy information of the edge detail object, i.e., how the difference between the object and the background changes along the time axis, the features of the fuzzy edge regions in the dynamic image are detected [4]. Figure 1 shows the extraction process of fuzzy image detail features.

Figure 1: Edge detail feature extraction process of dynamic image.

Let the image be f(x, y), and define the gradient vector of f(x, y) at the point (x, y). The value of the edge gradient of the dynamic image can then be expressed as follows:

(1) $G[f(x,y)] = \left[(\partial f/\partial x)^{2} + (\partial f/\partial y)^{2}\right]^{1/2}.$

Let a denote the maximum change rate of the image gradient value, measured as the increment over a unit distance in the i and j directions. The algorithm is then further refined as follows:

(2) $G[f(x,y)] = a\left\{[f(i,j)-f(i+1,j)]^{2} + [f(i,j)-f(i,j+1)]^{2}\right\}^{1/2}.$

To enhance the edge details of a high-dynamic image, the following absolute-difference algorithm is used to approximate the gradient:

(3) $G[f(x,y)] \approx a\left(|f(i,j)-f(i+1,j)| + |f(i,j)-f(i,j+1)|\right).$
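As an illustration only (the paper provides no code), the absolute-difference gradient approximation of Eq. (3) can be sketched in pure Python; the image values, the gain factor a, and the function name are all assumptions made for the example.

```python
# Illustrative sketch of the absolute-difference gradient of Eq. (3);
# a is a hypothetical gain factor, not a value from the paper.
def gradient_abs_diff(img, a=1.0):
    """G[f] ~= a * (|f(i,j)-f(i+1,j)| + |f(i,j)-f(i,j+1)|)."""
    h, w = len(img), len(img[0])
    grad = [[0.0] * w for _ in range(h)]
    for i in range(h - 1):
        for j in range(w - 1):
            grad[i][j] = a * (abs(img[i][j] - img[i + 1][j])
                              + abs(img[i][j] - img[i][j + 1]))
    return grad

# A vertical step edge yields a large gradient only at the transition.
flat_then_edge = [
    [10, 10, 90, 90],
    [10, 10, 90, 90],
    [10, 10, 90, 90],
]
print(gradient_abs_diff(flat_then_edge)[0])  # [0.0, 80.0, 0.0, 0.0]
```

As the printed row shows, the response is zero in flat regions and large at the edge, which is exactly the sharpening behavior described in the text.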

In the above operators, the approximate value of the image edge feature gradient is not unique. Depending on the orthogonal directions and enhancement factors of different images, there are many difference approximations, and different approximations constitute different gradient operators [5]. In image processing, the gradient value G[f(x, y)] is applied directly to represent the image. The gradient value is proportional to the difference between the gray levels of adjacent pixels [6]. Therefore, the value is very small in slowly changing areas of the image but very large in rapidly changing parts such as line contours. This is the essence of the gradient operation: it makes details clear and achieves sharpening [7]. In image detail enhancement, the Laplacian is another commonly used edge enhancement operator; it is an isotropic second-order differential operator:

(4) $\nabla^{2} f = \dfrac{\partial^{2} f}{\partial x^{2}} + \dfrac{\partial^{2} f}{\partial y^{2}}.$

Based on this operator, the edge detail gray-level normalization algorithm is as follows:

(5) $L = |f(i,j) - f(i,j+1)| - \alpha \nabla^{2} f(x,y).$

In the above algorithm, α adjusts the enhancement amplitude: the larger α is, the larger the enhancement amplitude of the dynamic image; conversely, the smaller α is, the smaller the enhancement amplitude [8]. For the discrete form of a digital image, the differential can be approximately replaced by a difference:

(6) $g(i,j) = f(i,j) - L\alpha\left[f(i+1,j) + f(i-1,j) + f(i,j+1) + f(i,j-1) - 4f(i,j)\right].$
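A minimal sketch of the discrete Laplacian sharpening in Eq. (6), assuming (as an illustration, not from the paper) that the combined coefficient Lα is collapsed into a single tuning constant c:

```python
# Sketch of Eq. (6): subtract a scaled 4-neighbour Laplacian from the
# image. c stands in for the product L*alpha (a hypothetical value).
def laplacian_sharpen(img, c=0.5):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels are left unchanged
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (img[i + 1][j] + img[i - 1][j]
                   + img[i][j + 1] + img[i][j - 1] - 4 * img[i][j])
            out[i][j] = img[i][j] - c * lap
    return out

img = [
    [50, 50, 50],
    [50, 90, 50],
    [50, 50, 50],
]
print(laplacian_sharpen(img)[1][1])  # 170.0 -- the bright spot is boosted
```

A pixel brighter than its neighbours has a negative Laplacian, so subtracting it increases the pixel further, which is the sharpening behavior the text describes.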

Further optimization is carried out based on the principle of the linear unsharp-masking algorithm: first, the original image is low-pass filtered to produce a blurred image; the blurred image is subtracted from the original to obtain an image containing the high-frequency components; then the high-frequency image is magnified by a parameter K and superimposed on the original image to produce an edge-enhanced image [9]. When the blurred image is subtracted from the original, the low-frequency components of s(x, y) are largely lost, whereas the high-frequency components are preserved almost completely. Therefore, the high-frequency image is magnified by the parameter and superimposed on the original image R. After the superposition, the high-frequency component of the image is enhanced, whereas the low-frequency component is hardly affected. On this basis, the image edge superposition function is calculated:

(7) $U(x,y) = R\,g(i,j) + f(x,y) + K\,s(x,y).$

Its mathematical expression can be expressed as follows:

(8) $A(x,y) = S_{ij}\,U(x) + K v\left[f(x,y) - \bar{f}(x,y)\right],$

where S_ij is the original image feature and v is the blurred image after low-pass filtering. On this basis, the equivalent high-pass filter N is normalized and output [10]. With M as the gain coefficient, a mean filter is used to realize the low-pass filtering of the image edge details:

(9) $\bar{f}(x,y) = \dfrac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} f(i,j).$

Furthermore, the relationship between the target and the fuzzy scene is extracted from the fuzzy image information, which is the basis of a series of subsequent processing tasks such as target tracking, recognition, classification, and detection, and realizes the fast detection of dynamic image edge detail features.
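The unsharp-masking pipeline of this subsection (mean-filter blur as in Eq. (9), high-frequency residual, amplification by K) can be sketched as follows; the 3 × 3 window, the value of K, and the sample image are illustrative assumptions, not the paper's data.

```python
# Sketch of linear unsharp masking: blur -> residual -> amplify and add.
def mean3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # borders kept as-is
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    return out

def unsharp_mask(img, K=1.0):
    blur = mean3x3(img)
    h, w = len(img), len(img[0])
    # original + K * (original - blurred): boosts high frequencies only
    return [[img[i][j] + K * (img[i][j] - blur[i][j])
             for j in range(w)] for i in range(h)]

img = [[10, 10, 10],
       [10, 100, 10],
       [10, 10, 10]]
print(unsharp_mask(img)[1][1])  # 180.0
```

The central detail pixel is pushed further from its blurred value, while flat regions (where original and blur agree) are untouched, matching the description above.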

2.2 Dynamic image edge denoising algorithm in frequency domain

The image features of the dynamic image edge in frequency-domain space can be expressed as a combination of various frequency components. In frequency-domain edge processing, the image frequencies can be redistributed and denoised by suppressing a certain region or individual frequency components while ensuring that other components are not affected, so as to achieve image enhancement [11]. The key to frequency-domain enhancement lies in the mutual conversion between the spatial domain T and the frequency domain E_H. Based on the frequency-domain enhancement algorithm for dynamic images, the enhancement process can be expressed as follows:

(10) $\lambda = A(x)\,T^{-1}\left\{E_{H}\left[T[f(x,y)]\right]\right\}.$

In frequency-domain denoising of dynamic image edges, convolution of image features and the Fourier transform are the usual theoretical support. By designing the transfer function to enhance the edges of the dynamic image, the desired enhancement effect can be obtained [12]. The purpose of low-pass filtering is to remove high-frequency information and retain low-frequency information. Because details and noise in the image correspond to the high-frequency information of the image's Fourier spectrum, a low-pass filter can remove or suppress noise while blurring image details, similar to a spatial smoothing filter [13]. Low-pass filtering mainly consists of selecting an appropriate transfer function H(u, v) to suppress the high-frequency components of the dynamic image edge details beyond D(u, v). The two-dimensional low-pass transfer function for dynamic image filtering is as follows:

(11) $H(u,v) = \begin{cases} 1, & D(u,v) \le D_{0}, \\ 0, & D(u,v) > D_{0}, \end{cases}$

where D_0 is the cutoff frequency, a nonnegative value, and D(u, v) is the distance from the point (u, v) to the origin of the frequency plane:

(12) $D(u,v) = (u^{2} + v^{2})^{1/2}.$
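A small sketch of the ideal low-pass transfer function described here: build the binary mask H(u, v) over a frequency grid, with distances measured from the centre of the shifted spectrum. The grid size and cutoff are illustrative assumptions.

```python
import math

# Sketch of the ideal low-pass filter mask: H = 1 inside radius D0,
# H = 0 outside. rows, cols, and d0 are hypothetical example values.
def ideal_lowpass_mask(rows, cols, d0):
    cu, cv = rows / 2.0, cols / 2.0  # centre of the shifted spectrum
    mask = [[0] * cols for _ in range(rows)]
    for u in range(rows):
        for v in range(cols):
            d = math.sqrt((u - cu) ** 2 + (v - cv) ** 2)
            mask[u][v] = 1 if d <= d0 else 0
    return mask

m = ideal_lowpass_mask(8, 8, 2)
print(m[4][4], m[0][0])  # 1 0 -- centre passes, corner is cut off
```

In practice this mask would multiply the shifted Fourier spectrum of the image; the sharp cutoff is what causes the ringing that smoother Gaussian or Butterworth filters (mentioned later in this section) avoid.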

When the edge feature frequency of the dynamic image is less than the cutoff frequency D_0, it passes through the low-pass filter unaffected [14]. Conversely, when the edge feature frequency is greater than D_0, it is completely suppressed by the filter; components at the cutoff frequency D_0 itself are severely attenuated [15]. If the element in the i-th row and j-th column of the gray co-occurrence matrix represents the frequency with which a pixel of gray level j occurs at displacement (Δx, Δy) from a pixel of gray level i, this frequency is expressed by the following formula:

(13) $P(i,j,\delta,\theta) = \#\left\{\left[(x,y),\,(x+\Delta x,\,y+\Delta y)\right] \;\middle|\; f(x,y) = i,\right.$

(14) $\left. f(x+\Delta x,\,y+\Delta y) = j\right\}; \quad x = 0,1,\ldots,N_{x}-1;\ y = 0,1,\ldots,N_{y}-1.$

If (m, n) represents a feature point in the dynamic image reference frame, θ represents the rotation to the corresponding point after transformation, and N_x is the detail change model of the edge image, then the denoising relationship of the image edge features satisfies the following:

(15) $x' = m\cos\theta - n\sin\theta + a, \qquad y' = n\cos\theta + m\sin\theta + b.$

To enhance the image details and reconstruct the edge image, effective regularization parameters are designed to represent two kinds of prior knowledge [16]. The regularization-parameter control method represents local two-dimensional sparsity in the spatial domain, making the remote sensing image sparser [17]. The improved sparse joint representation model of the remote sensing image is as follows:

(16) $\psi_{\mathrm{cac}}(x) = \tau\,\psi_{2\mathrm{D}}\|x\|_{p} + \lambda\,\psi_{3\mathrm{D}}\|x\|_{p},$

where τ represents the regularization constraint and λ represents the regularization parameter; together they measure the sparsity of the two kinds of prior knowledge.

Specifically, the term $\psi_{2\mathrm{D}}\|x\|_{p}$, according to the prior knowledge of local smoothing, maintains the local continuity of the image and effectively suppresses noise [18]. Assuming that the gradient image of the remote sensing image follows a Laplace distribution, the remote sensing image is sparse after convolution, and the sparsity of the local two-dimensional space is described by:

(17) $\psi_{2\mathrm{D}}\|x\|_{p} = \|Dx\|_{p} = \|D_{v}x\|_{p} + \|D_{h}x\|_{p}.$

Furthermore, a gray-information matching method is adopted, using the gray levels of the two images to be matched as the benchmark measure; when the gray-information similarity meets a set threshold, the image matching succeeds. Image quality evaluation is mainly a comprehensive evaluation of image processing methods [19,20]. It should consider not only the objective numerical analysis of the processing method but also subjective factors of the observer, such as psychology, vision, and experience. Therefore, subjective and objective evaluation must be combined and weighted to obtain a comprehensive result. Subjective evaluation of image quality leads to different results for the same image with different observers [21,22]. To overcome this shortcoming, objective standards for evaluating image quality are needed. Objective evaluation mainly takes image features as inputs, calculates objective values through a mathematical model, and compares them with standard values. The following objective evaluation methods of image quality are adopted.

The average gradient measures the definition of the image and reflects its ability to express detail contrast; the larger the value, the clearer the image:

(18) $\bar{G} = \dfrac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1}\sum_{j=1}^{N-1} \sqrt{\dfrac{1}{2}\left[\left(\dfrac{\partial f_{i,j}}{\partial i}\right)^{2} + \left(\dfrac{\partial f_{i,j}}{\partial j}\right)^{2}\right]},$

where f is the pixel value at coordinates (i, j) in the image. From the perspective of information theory, entropy reflects the richness of image information, and its value represents the amount of information carried by the image. In general, a large information entropy means the image contains abundant information and its quality is ideal:

(19) $H = -\sum_{i=0}^{255} P_{ij} \log_{2} P_{ij}.$

Among them:

(20) $P_{ij} = f(i,j)/N^{2},$

where f(i, j) is the number of times the feature pair (i, j) appears, N is the scale of the image, and P reflects the overall distribution of gray values between a central pixel and its neighbors. The root mean square error measures the deviation between two groups of data:

(21) $\mathrm{RMSE} = \sqrt{\dfrac{1}{Z \times K} \sum_{i=1}^{n}\sum_{j=1}^{m} (q_{i} - g_{j})^{2}},$

where Z and K are the dimensions of the image, q_i is the pixel gray value of the processed image, and g_j is the pixel gray value of the original image. For color images, the root mean square error can be calculated per channel. Dynamic compression and detail enhancement of the high-dynamic range image are then carried out: based on the analysis of digital image detail enhancement technology, the high-dynamic range image enhancement algorithm is studied, and its results are compared with those of adaptive gain control and histogram equalization algorithms. On this basis, the fine denoising process of the dynamic image is optimized; the specific steps are shown in Figure 2.

Figure 2: Fine denoising process flow of dynamic image.

Based on the idea of hierarchical processing, an adaptive image enhancement algorithm for dynamic scenes is proposed. The algorithm uses a guided filter as the frequency divider for the high-dynamic range image, with the original image under test as the guide image, to preserve the original edges and suppress large-area background noise. The original high-dynamic range image is processed hierarchically to obtain a base-layer image containing the large-dynamic low-frequency scene information. The original image is then compared with the base-layer image after frequency division to obtain a detail-layer image containing the small-dynamic high-frequency texture information. According to the different feature information in the two layers, they are processed with two different methods, dynamic compression based on a one-dimensional array and adaptive gain control, so that the base layer is compressed into a low-dynamic range image while the details of the original high-dynamic range image are retained. In addition, the weight coefficients generated by the guided filter when processing the original image are used as adaptive gain-control coefficients for the detail layer, which also suppresses noise and preserves edges in the detail layer. Finally, different fusion ratio coefficients are obtained for different image scenes to complete the adaptive fusion of the base-layer and detail-layer images and achieve scene adaptivity.
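The base/detail layering just described can be sketched minimally as follows; note this is only an illustration of the idea, with a mean filter standing in for the guided filter and hypothetical gain and fusion coefficients.

```python
# Minimal sketch of layered processing: split into base (low frequency)
# and detail (high frequency), gain the detail layer, and fuse.
# mean filter = stand-in for the guided filter; coefficients are assumed.
def split_and_fuse(img, detail_gain=2.0, base_weight=1.0):
    h, w = len(img), len(img[0])
    base = [row[:] for row in img]  # borders copied unchanged
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            base[i][j] = sum(img[i + di][j + dj]
                             for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    detail = [[img[i][j] - base[i][j] for j in range(w)] for i in range(h)]
    return [[base_weight * base[i][j] + detail_gain * detail[i][j]
             for j in range(w)] for i in range(h)]

img = [[10, 10, 10], [10, 100, 10], [10, 10, 10]]
print(split_and_fuse(img)[1][1])  # 180.0 -- detail amplified, base kept
```

In the real algorithm the base layer would additionally be range-compressed and the gains would adapt per pixel; the sketch only shows the split-gain-fuse skeleton.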

2.3 Implementation of image edge detail enhancement

In a high-dynamic original image, if the differences between objects in the image are too large, the dynamic range of the image will be very large. In this case, if a traditional compression algorithm is used directly to compress the dynamic range of the original image to the low-dynamic range of the display device, the ratio of the low-dynamic range to the high-dynamic range is very small; the compressed image then shows a very narrow gray distribution, and during compression, small targets containing few pixels are lost because of gray-level concentration. Therefore, to minimize the impact on detail information during dynamic compression of the high-dynamic range image, the high-frequency and low-frequency parts must be handled separately, so that the compression of the high-dynamic range and the highlighting of detail information do not affect each other. On this basis, the processing steps of high-dynamic image edge detail enhancement are first optimized, as shown in Figure 3.

Figure 3: Processing steps of high dynamic image edge detail enhancement.

Enhancing the dynamic image can improve the image quality and provide an accurate data basis for local fuzzy feature extraction. The high-frequency image f is processed by unsharp masking (UM), enhancing the edges and details of the image to obtain the UM-enhanced image G_u(x, y). By selecting an appropriate threshold and gamma value for G_u(x, y), the contrast and display effect of the image are improved, and the high-frequency enhanced image G_u(x, y) is generated. In the process of dynamic image enhancement, the following formula is used to calculate the image gray matrix:

(22) $\bar{n} = E_{H}(x,y)\,\dfrac{1}{P_{ij}}\,G_{u}(x,y) \sum_{(j,k)\in T_{\mathrm{UM}}} g_{q}(j,k),$

where P_ij is the number of pixels in the dynamic image, and q is a constant, usually 2 under clear conditions. In the enhancement process, the input is a sequence of dynamic-image parameter data, and the output is the image gray enhancement model, described by the following formula:

(23) $F(y,z,\alpha,\beta) = \begin{cases} i_{1}\bar{n}, & y\sin\delta + z\cos\alpha > \beta, \\ i_{2}\bar{n}, & y\sin\delta + z\cos\alpha \le \beta. \end{cases}$

Here, β is the local fuzzy spatial position parameter of the dynamic image, δ is the local pixel direction component, and i_1 and i_2 are the gray values at the two ends of any pixel in the dynamic image. According to the image gray enhancement model, q_1 and q_2 describe the proportions of dynamic image pixels taking each of these gray values:

(24) $q_{1} + q_{2} = 1, \qquad n_{1} = i_{1}q_{1} + i_{2}q_{2}.$

Based on the above algorithm, the following results can be obtained:

(25) $\beta = n_{2} - n_{1}^{2}, \qquad t = \dfrac{n_{3} + 2n_{1}^{3} - 3n_{1}n_{2}}{\beta^{3}}.$

If only a single 3 × 3 neighborhood is used for smoothing, then when the gray level of the center pixel is strongly disturbed by noise, using only the gray value obtained by convolving the neighboring pixels with the template as the new gray value of the target point cannot achieve effective smoothing; the selection of the neighborhood scale is, therefore, the key of this algorithm. For each pixel (i, j) of the gray image, the gray mean m (the mean of the gray values of the nine pixels) and the gray standard deviation σ in the 3 × 3 neighborhood are first calculated, and the gray level of each pixel (i, j) in the neighborhood is denoted r_ij; then:

(26) $\sigma = \sqrt{\dfrac{1}{9}\sum_{(i,j)} (r_{ij} - m)^{2}}.$

According to the relationship between the gray level of the center pixel r in the neighborhood and the gray level mean m and the gray level standard deviation of the neighborhood, the scale of the neighborhood is determined as follows:

(27) $\begin{cases} \alpha|r - m| < \sigma, & \text{the neighborhood scale is } 3 \times 3, \\ \alpha|r - m| \ge \sigma, & \text{the neighborhood scale is } 5 \times 5, \end{cases}$

where α is an undetermined coefficient. In regions of slow gray change, the gray levels of the pixels are very close, so σ is small but |r − m| is smaller still, and α|r − m| < σ is easily satisfied; the gray value obtained by convolving the nine neighborhood points with the template is then almost the same as the gray level r of the central pixel and does not damage the image content. If the gray level of the center pixel is affected by noise, r deviates strongly from m, α|r − m| ≥ σ is more easily satisfied, and the gray value obtained by convolving, with the template, the nine points of the neighborhood with minimum nonuniformity is more reliable. When a boundary passes through the neighborhood, the gray levels of the pixels fluctuate greatly, so σ is large and α|r − m| < σ holds. In practical applications, commonly used frequency-domain filters include the Gaussian filter, the Butterworth filter, and the ideal filter. A Gaussian low-pass filter and a Gaussian high-pass filter are used here to separate the low-frequency and high-frequency images; the specific separation steps are shown in Figure 4.

Figure 4: Image high-frequency and low-frequency separation steps.

To find small motions in an image and magnify their features, the input image is modeled as pixel intensity trajectories observed across related frames. This amounts to computing the translation of each pixel from one frame to the next and displaying the image with the small motions magnified. This per-pixel record corresponds to the Lagrangian description of motion. However, such a method leads to artificial transitions between magnified and nonmagnified pixels within a single structure. The main steps in motion magnification are the reliable estimation of motion and the aggregation of the groups of pixels to be magnified. The motion of each pixel is estimated by analyzing feature points and local intensity distributions and grouping them. On this basis, the image detail enhancement region magnification method is improved; the specific steps are shown in Figure 5.

Figure 5: Image detail enhancement region enlargement processing.

When enlarging the image detail enhancement area, the frames must first be registered to prevent small motions caused by camera jitter from being magnified; this step assumes that the input image sequence depicts a predominantly static scene. Then, initial tracking of the detected feature points is completed, and the transformation that best removes the motion of the tracked feature points is found. After intensity normalization against any exposure change, the registered images are used for subsequent motion analysis. The main purpose of layering the high-dynamic range image is to process its high-frequency and low-frequency parts step by step. The advantage of this approach is that it can, on the one hand, expand important detail information with small gray-level changes while suppressing the high-frequency noise and, on the other hand, compress the background with a large dynamic range. The effective enhancement of dynamic image details ensures the effect of image enhancement.

3 Experimental results

To verify the practical effect of the high-dynamic range image edge detail enhancement algorithm, experiments are carried out on a 4-core machine with 16 GB of memory using an unoptimized MATLAB (MathWorks, USA) program, in which processing each image takes more than 10 min. A separable binomial filter is used to construct the image structure, and a real-time online prototype is also built to reflect small changes, working essentially like a microscope for temporal variation. The prototype is implemented in C++, runs entirely on the central processing unit at 30 frames per second on 640 × 480 images, and can be further accelerated using a graphics processing unit. In subjective analysis, observers refer to a given evaluation scale or rely on their own experience to judge the contrast, clarity, detail texture, and other aspects of the images processed by each algorithm, so as to assess the visual quality subjectively. To minimize the differences between observers' subjective results, observers are given a prescribed evaluation basis. After the results are obtained, the scores of all observers are averaged, and the final average is taken as the generally accepted evaluation result. The commonly used fixed evaluation criteria are shown in Table 1.

Table 1

Measurement and evaluation of the image detail enhancement effect

Grade Fixed evaluation scale Relative evaluation scale
Level 1 Excellent The best of a set of contrast images
Level 2 Good A set of contrast images above the average level
Level 3 Ordinary The average level of a set of contrast images
Level 4 Poor A set of contrast images below the average level
Level 5 Bad The worst of a set of contrast images

Furthermore, two sample images displayed in the model are selected, and 10,000 noisy dynamic images are randomly generated as the training data set; 200 of them are randomly selected as the reference data set. Parameters are adjusted during testing to evaluate classification and enhancement performance. The regularization parameter λ makes the representation of the remote sensing image sparser and effectively suppresses noise, and it has an important impact on the performance of identifying noisy images, so an optimal value of λ is required. To obtain it, the minimum of J(θ) must be found, where the parameter θ is a vector. For each candidate λ, the average squared error of θ on the cross-validation set is measured, and the model minimizing the cross-validation error determines the final choice of the regularization parameter:

(28) $J(\theta) = \dfrac{1}{2n}\sum_{i=1}^{n}\left(\tau_{\theta}(x_{i}) - y_{i}\right)^{2} + \dfrac{\lambda}{2n}\sum_{j=1}^{n}\theta_{j}^{2}.$
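The λ-selection procedure around Eq. (28) can be sketched as follows; the one-parameter ridge model, the candidate λ grid, and the synthetic data are all illustrative assumptions, not the paper's setup.

```python
# Sketch of choosing the regularization parameter by validation error.
# fit_theta is the closed-form ridge solution for a single coefficient:
# theta = sum(x*y) / (sum(x*x) + lam).
def fit_theta(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def val_error(theta, xs, ys):
    """Average squared error (1/2n) * sum((theta*x - y)^2) on held-out data."""
    n = len(xs)
    return sum((theta * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * n)

train_x, train_y = [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]  # synthetic, roughly y = x
val_x, val_y = [1.5, 2.5], [1.4, 2.6]                # synthetic validation set

# Pick the lambda whose fitted model has the smallest validation error.
best = min((val_error(fit_theta(train_x, train_y, lam), val_x, val_y), lam)
           for lam in (0.0, 0.1, 1.0, 10.0))
print("chosen lambda:", best[1])
```

The chosen λ is then fixed and the corresponding model is used downstream, mirroring how the optimal regularization value is inserted back into formula (16).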

After the optimal regularization value is obtained, it is inserted into formula (16) of Section 2.2 to reduce noise interference. The comparison of image edge detail enhancement recognition-rate curves is shown in Figure 6.

Figure 6: Image edge detail enhancement recognition rate curve.

From Figure 6, as the number of test points increases, the recognition rate of the design algorithm remains above 80%, while the recognition rates of the Document [1] and Document [2] methods fall below 60%. This is because the design algorithm is built on a mathematical model and performs a weighted integration of the high-frequency image enhancement result, which reduces evaluation differences for the same image quality, improves recognition accuracy, and accurately enhances the virtual reality image; it therefore has an obvious advantage in image enhancement recognition.
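The frequency-domain split and the weighted fusion the algorithm relies on can be sketched as follows. This is a minimal NumPy sketch assuming a grayscale image in [0, 1]; the paper's CLAHE and unsharp-masking/gray-transformation stages are stood in for by identity placeholders, since their parameters are not given in this section:

```python
import numpy as np

def split_frequencies(img, sigma=15.0):
    """Split a grayscale image into low- and high-frequency layers using a
    frequency-domain Gaussian low-pass filter and the inverse FFT."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    y, x = np.ogrid[:h, :w]
    d2 = (y - h / 2.0) ** 2 + (x - w / 2.0) ** 2
    H = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian low-pass mask
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    high = img - low                          # residual detail layer
    return low, high

def weighted_fusion(low_enh, high_enh, w_low=0.5, w_high=0.5):
    """Weighted fusion of the two enhanced layers into the final image."""
    fused = w_low * low_enh + w_high * high_enh
    return np.clip(fused, 0.0, 1.0)

# Placeholder enhancement stages: the paper applies CLAHE to the low-frequency
# layer and unsharp masking plus gray transformation to the high-frequency one.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
low, high = split_frequencies(img)
fused = weighted_fusion(low, low + high)      # identity "enhancement" stand-ins
```

Because the high-frequency layer is computed as the residual, the split is exactly invertible (low + high recovers the input), which makes the fusion weights the only source of loss in this sketch.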

On the basis of the recognition accuracy test, the recognition efficiency of the design algorithm and of the Document [1] and Document [2] algorithms is tested; the results are shown in Figure 7.

Figure 7: Image edge detail enhancement recognition efficiency curve.

Figure 7 shows that, as the image data grow, the recognition time of the design algorithm remains below 1,200 min, while the recognition times of the Document [1] and Document [2] methods rise above 1,600 min. This is because the design algorithm denoises the image before extracting image pixels, which improves the accuracy and efficiency of pixel recognition and shortens the recognition period.

4 Conclusion

To improve image edge detail enhancement and recognition efficiency, this study presents an edge detail enhancement algorithm for high-dynamic range images. The image is processed in layers: the edge details of the low-frequency image are enhanced, the high-frequency image is processed by gray transformation, and the final enhanced image is obtained by weighted image fusion. The experimental results show that the proposed method enhances the overall effect of the image while also enhancing its details. In practice, the algorithm has been applied to a 3D coherence slice data volume; it not only runs fast but also produces an obvious improvement. The fault polygons extracted from the image denoised by this algorithm are markedly more accurate, which effectively improves the efficiency of fault interpretation and shortens the interpretation cycle. However, owing to limited experimental conditions, the enhanced recognition rate has not exceeded 90%, and future research should continue to improve the image recognition ability of the enhancement algorithm.

Conflict of interest: Authors state no conflict of interest.

References

[1] Tian ZJ, Wang ML, Zhang YG. Image enhancement algorithm based on dual domain decomposition. Acta Electron Sin. 2020;7:1311–20.

[2] Cheng Y, Deng D, Yan J, Fan C. Weakly illuminated image enhancement algorithm based on convolutional neural network. J Computer Appl. 2019;39(4):1162–9.

[3] Lal S, Rehman SU, Shah JH, Meraj T, Rauf HT, Damaševičius R, et al. Adversarial attack and defence through adversarial training and feature fusion for diabetic retinopathy recognition. Sensors. 2021;21(11):3922. doi: 10.3390/s21113922.

[4] Chen SY, Lin C, Chuang SJ, Kao ZY. Weighted background suppression target detection using sparse image enhancement technique for newly grown tree leaves. Remote Sens. 2019;11(9):1081. doi: 10.3390/rs11091081.

[5] Asahara A, Arai Y, Saito T, Ishi-Hayase J, Akahane K, Minoshima K. Dual-comb-based asynchronous pump-probe measurement with an ultrawide temporal dynamic range for characterization of photo-excited InAs quantum dots. Appl Phys Express. 2020;13(6):062003. doi: 10.35848/1882-0786/ab8b4f.

[6] Zhang XQ, Zhao D, Ma YD, Wang YC, Zhang LX, Guo WJ, et al. Joint over and under exposures correction by aggregated retinex propagation for image enhancement. IEEE Signal Process Lett. 2020;27:1210–4. doi: 10.1109/LSP.2020.3008347.

[7] Lu CH, Shao BE. Environment-aware multiscene image enhancement for internet of things enabled edge cameras. IEEE Syst J. 2020;151(3):3439–49. doi: 10.1109/JSYST.2020.2993800.

[8] Xie X, Zhan Y, Wang Y, Lucas JF, Zhang Y, Luo C. Comparative analysis on Landsat image enhancement using fractional and integral differential operators. Computing. 2020;102(1):247–61. doi: 10.1007/s00607-019-00737-0.

[9] Li L, Si Y, Jia Z. Microscopy mineral image enhancement based on improved adaptive threshold in nonsubsampled shearlet transform domain. AIP Adv. 2018;8(3):035002. doi: 10.1063/1.4998400.

[10] Hajri S, Kallel F, Ben Hamida A, Nait-Ali A. Finger-knuckle-print image enhancement based on brightness preserving dynamic fuzzy histogram equalization and filtering process. J Electron Imaging. 2018;27(3):33035. doi: 10.1117/1.JEI.27.3.033035.

[11] Pillai MS, Chaudhary G, Khari M, Crespo RG. Real-time image enhancement for an automatic automobile accident detection through CCTV using deep learning. Int J High Perform Comput Appl. 2021;25:11929–40. doi: 10.1007/s00500-021-05576-w.

[12] Wang A, Xu Y, Wei X, Cui B. Semantic segmentation of crop and weed using an encoder-decoder network and image enhancement method under uncontrolled outdoor illumination. IEEE Access. 2020;8:81724–34. doi: 10.1109/ACCESS.2020.2991354.

[13] Li T, Yang Q, Rong S, Chen L, He B. Underwater image enhancement framework and its application on an autonomous underwater vehicle platform. Optical Eng. 2020;59(8):083102–10060. doi: 10.1117/1.OE.59.8.083102.

[14] Rojhani N, Passafiume M, Lucarelli M, Collodi G, Cidronali A. Assessment of compressive sensing 2 * 2 MIMO antenna design for millimeter-wave radar image enhancement. Electronics. 2020;9(4):624. doi: 10.3390/electronics9040624.

[15] Román JC, Escobar R, Martínez F, Noguera JL, Legal-Ayala H, Pinto-Roa DP. Medical image enhancement with brightness and detail preserving using multiscale top-hat transform by reconstruction. Electron Notes Theor Computer Sci. 2020;349:69–80. doi: 10.1016/j.entcs.2020.02.013.

[16] Matin F, Jeong Y, Park H. Retinex-based image enhancement with particle swarm optimization and multi-objective function. IEICE Trans Inf Syst. 2020;E103.D(12):2721–4. doi: 10.1587/transinf.2020EDL8085.

[17] Pardo A, Gutiérrez-Gutiérrez JA, López-Higuera JM, Conde OM. Context-free hyperspectral image enhancement for wide-field optical biomarker visualization. Biomed Opt Exp. 2020;11(1):133–48. doi: 10.1364/BOE.11.000133.

[18] Prabu M, Shanker NR, Celine Kavida A, Ganesh E. Geometric distortion and mixed pixel elimination via TDYWT image enhancement for precise spatial measurement to avoid land survey error modeling. Soft Comput. 2020;24(8):1–19. doi: 10.1007/s00500-020-04814-x.

[19] Bai L, Zhang W, Pan X, Zhao C. Underwater image enhancement based on global and local equalization of histogram and dual-image multi-scale fusion. IEEE Access. 2020;8:128973–90. doi: 10.1109/ACCESS.2020.3009161.

[20] Ding D. Weld pool image procession based on the Fourier-DNA low-pass filtering. J Comput Methods Sci Eng. 2021;21(1):59–70. doi: 10.3233/JCM-204307.

[21] Srinivas K, Bhandari AK, Singh A. Low-contrast image enhancement using spatial contextual similarity histogram computation and color reconstruction. J Frankl Inst. 2020;357(18):13941–63. doi: 10.1016/j.jfranklin.2020.10.013.

[22] Ning G, Bai Y. Biomedical named entity recognition based on Glove-BLSTM-CRF model. J Comput Methods Sci Eng. 2021;21(1):125–33. doi: 10.3233/JCM-204419.

Received: 2021-06-22
Revised: 2021-11-03
Accepted: 2021-11-05
Published Online: 2022-01-24

© 2022 Lanfei Zhao and Qidan Zhu, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
