Open Access (CC BY 4.0 license) · Published by De Gruyter, March 4, 2022

Image denoising algorithm of social network based on multifeature fusion

  • Lanfei Zhao and Qidan Zhu

Abstract

A social network image denoising algorithm based on multifeature fusion is proposed. Building on multifeature fusion theory, the denoising process is treated as the fitting process of a neural network, and a simple, efficient multifeature-fusion convolutional neural structure is constructed for image denoising. The gray-level features of the social network image are collected, and the gray values are denoised and cleaned. Multiple denoising passes are then carried out on the image features to guarantee the accuracy of the social network image denoising algorithm and to improve the accuracy of image processing. Experiments show that the algorithm designed in this study reduces the average image noise by 8.6905 dB, far more than the comparison methods, and keeps the signal-to-noise ratio of the output image at about 30 dB, demonstrating strong performance in practical applications.

1 Introduction

Image denoising is a classical problem in the image field and an important preprocessing step in visual tasks. Based on a fusion rule derived from image and noise characteristics, and combined with Bayesian least squares under a Gaussian scale mixture model, denoising based on singular value decomposition and 3D filtering based on block matching are optimized, yielding a social network image denoising algorithm based on feature fusion that achieves a better denoising effect [1]. In social network image denoising, parameters usually have to be selected manually and the denoising steps are complex, which costs considerable time and computation. During the acquisition of image features, the photoelectric conversion of the equipment introduces noise, and channel noise during transmission also interferes with the image, so image quality is often degraded [2]. In addition, the noise makes the information contained in the image uncertain, so that people cannot recognize and understand the image well; when the image is recognized and segmented, the noise causes serious deviations in the processing results, and the errors caused by such deviations can bring large losses. Image denoising therefore has important research significance. Some scholars have proposed denoising and super-resolution reconstruction of single space target images based on a deep convolutional neural network (CNN) [3]: a very deep CNN combined with global and local residual learning removes the noise and reconstructs the lost details of low-spatial-resolution images. Other scholars have studied a constrained second-order total generalized variation (TGV) model, which includes nonnegative and bounded constraints as special cases [4]. Using the equivalent definition of the second-order TGV, the constrained minimization problem is transformed into minimizing the sum of two convex functions, one of which is composed with a linear transformation, and a relaxed primal-dual approximation algorithm then solves it. The advantage of this algorithm is that it requires no matrix inversion and involves no subproblems; the constraint on the pixel values is enforced, and the original image is estimated from the noisy or damaged image. The above methods can denoise images and compensate for lost details, but in practical applications they tend to weaken edges and blur the image visually.

Therefore, this research on a social network image denoising algorithm based on multifeature fusion shows that a generative adversarial network can retain the texture details of an image when it is operated on at the pixel level. The idea of generative adversarial training is further applied to image denoising. To ensure the correctness of the discriminator network, the adversarial network is trained with noisy images of fixed noise intensity. For images containing noise of different intensities, the network parameters are chosen adaptively: a neural network is built to estimate the noise intensity, the corresponding network parameters are loaded according to the estimate, and the image denoising work is completed.

2 Image denoising algorithm for social networks

2.1 Social network image feature acquisition

The task of social network image feature acquisition and denoising is to process the image containing noise, obtain the denoised image, and make it as close as possible to the real scene. The denoising problem can therefore be treated as the inverse problem of image observation [5]. When the image is collected, what is actually observed is the image after it has been affected by the characteristics of the acquisition system and the noise of the transmission channel. As the original scene passes through the acquisition system and the transmission channel, it is affected by multiplicative and additive noise, yielding the final noisy image, which can be expressed by the following formula:

(1) $Y = HX + N$.

In the formula, X represents the original scene to be observed, H represents the multiplicative noise introduced by the system, which is mainly generated by the acquisition system, and N represents the additive noise generated during channel propagation. The image denoising task is to infer the original image from this observation model when the noisy image Y is known [6]. In image acquisition equipment, thermal noise is generated in conductor elements by the thermal motion of electrons. Thermal noise is related to the absolute temperature of the conductor. Because the noise caused by this motion does not change with the frequency of the signal, its distribution is very uniform across all frequencies; thermal noise can be considered to have the same energy distribution at different frequencies, so it is also called white noise [7]. White noise is also called Gaussian noise because its probability distribution in the image spatial domain is Gaussian. Shot noise is the noise generated by the nonuniform flow of current, and the amplitude of its noise component also obeys a Gaussian distribution, so it also belongs to Gaussian noise [8]. The probability density of the Gaussian distribution is given by the following formula, where u represents the mean value of the noise and σ is the standard deviation of the Gaussian noise:

(2) $p(z) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(z-u)^2}{2\sigma^2}}$.
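
As a hedged sketch of the observation model in formula (1) and the Gaussian white noise of formula (2), the following NumPy snippet synthesizes a noisy image from a clean one; the function name, the unit-mean form chosen for the multiplicative term H, and the default noise levels are illustrative assumptions rather than details given in the paper.

```python
import numpy as np

def simulate_observation(x, sigma=25.0, speckle_var=0.0, rng=None):
    """Synthesize a noisy observation Y = H*X + N from a clean image X.

    x           : clean grayscale image, float array scaled to [0, 255]
    sigma       : standard deviation of the additive Gaussian (white) noise N
    speckle_var : variance of the multiplicative component H (0 disables it)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Multiplicative noise H: unit-mean Gaussian speckle acting on the signal (assumption).
    h = 1.0 + rng.normal(0.0, np.sqrt(speckle_var), size=x.shape) if speckle_var > 0 else 1.0
    # Additive Gaussian (white) noise N with mean u = 0 and standard deviation sigma.
    n = rng.normal(0.0, sigma, size=x.shape)
    y = h * x + n
    return np.clip(y, 0.0, 255.0)
```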

The current operation block is denoted by P, and each pixel in P by z; the searched reference block is denoted by Q, and each pixel in Q by q. The size of the operation block is k * k, and the size of the search window is m * m. Image blocks Q similar to P are searched within the m * m range centered on the pixel in the upper left corner of P, and the similarity between image blocks is measured by the Euclidean distance between Q and P. The similarity measure is given by the following formula:

(3) $d(P, Q) = \dfrac{\sum_i \lVert P_i - Q_i \rVert_2^2}{k \times k}$.
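
The block-matching search described above can be sketched as follows, assuming grayscale NumPy arrays; the similarity follows formula (3), while the function names, the default block and window sizes, and the choice to center the window on the block are assumptions for illustration only.

```python
import numpy as np

def block_distance(p, q):
    """Mean squared Euclidean distance between two k x k blocks (formula (3))."""
    k = p.shape[0]
    return np.sum((p.astype(np.float64) - q.astype(np.float64)) ** 2) / (k * k)

def find_similar_blocks(image, top, left, k=8, search=21, n_best=16):
    """Collect the n_best blocks most similar to the k x k block at (top, left),
    searching inside a (search x search) window around it."""
    h, w = image.shape
    ref = image[top:top + k, left:left + k]
    half = search // 2
    candidates = []
    for i in range(max(0, top - half), min(h - k, top + half) + 1):
        for j in range(max(0, left - half), min(w - k, left + half) + 1):
            d = block_distance(ref, image[i:i + k, j:j + k])
            candidates.append((d, i, j))
    candidates.sort(key=lambda c: c[0])   # smallest distance = most similar
    return candidates[:n_best]
```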

To restore the original image from the noisy image, the image is segmented into several overlapping noisy image blocks, and similar blocks are grouped together. Within each group of similar blocks, and between the blocks, there is considerable redundancy, so two-dimensional singular value decomposition (2DSVD) and one-dimensional SVD are used to remove the correlation within and between the similar image blocks, and a preliminary estimate of the image is obtained by threshold shrinkage of the transform coefficients [9]. In the second stage, similar image blocks in the original noisy image are found according to this preliminary estimate, and the above operation is carried out on the noisy image again. Because the noise is significantly reduced in the first stage, the similar image blocks obtained in the second stage are more accurate, and this accuracy further improves the denoising effect of the second stage [10]. The proposed algorithm can remove the noise while better preserving the local details of the image. The specific image feature acquisition process is shown in Figure 1.

Figure 1: Image feature acquisition process of block singular value decomposition.

For each pixel to be denoised, the image block centered on it with size k * k is used as the feature. To estimate the corresponding noiseless image block a from the noisy image block with 2DSVD, a set of training samples is needed to calculate the covariance matrix of a; the more accurate this covariance matrix is, the closer the estimated image block is to a. Because the image has local and nonlocal self-similarity, the training samples of a can be obtained by searching the whole image [11]. However, to improve the search speed, the search area is limited to a local window of size L * L centered on the k * k block. The specific image similarity feature search model is shown in Figure 2.

Figure 2: Image similar feature search model.
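
A minimal sketch of gathering training samples for the covariance matrix of a noiseless block a from an L × L local window, reusing the find_similar_blocks helper from the earlier block-matching sketch; the window and sample counts are placeholder values, and the plain sample covariance used here is only one possible realization of the step described above.

```python
import numpy as np

def local_covariance(image, top, left, k=8, window=39, n_best=32):
    """Estimate the covariance matrix of the noiseless block from similar blocks
    gathered in an L x L local window (here L = window), used as training samples."""
    matches = find_similar_blocks(image, top, left, k=k, search=window, n_best=n_best)
    # Flatten each similar k x k block into a vector and center the sample set.
    samples = np.stack([image[i:i + k, j:j + k].reshape(-1) for _, i, j in matches])
    samples = samples - samples.mean(axis=0, keepdims=True)
    # (k*k) x (k*k) sample covariance; its eigenvectors give the SVD-style basis.
    return samples.T @ samples / max(len(matches) - 1, 1)
```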

The search can be completed by many existing methods, such as block matching and k-means clustering; here a simple and effective block matching method is selected. In an image denoising network, it is usually necessary to extract texture features and recover the denoised image from those features [12]. In the process of social network image denoising, however, a deep convolutional network is used to extract the noise distribution in the image, and the noise distribution is removed from the noisy image through cross-layer connections to achieve the purpose of image denoising [13]. The success of social network image denoising shows that the noise distribution is easier for the network to learn than the image features. To make better use of the antinoise network to restore image texture, the redundant features in the image must be extracted and the image denoised in the feature domain. To denoise the image in the feature domain, the structure of DnCNN can be borrowed to separate the noise distribution from the image features in the feature domain [14]. A seven-layer convolutional block is used to extract the noise in the feature domain; each convolutional layer uses a 3 × 3 kernel, the convolution features are batch normalized, and the rectified linear unit is used as the activation function [15]. A cross-layer connection structure links the original multiscale features with the extracted noise, and subtracting the extracted noise distribution from the multiscale features gives the image denoising features [16]. The structure of feature-domain denoising is shown in Figure 3.

Figure 3: Denoising structure of social network image features.
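
The described feature-domain branch can be sketched in Keras roughly as follows: seven 3 × 3 convolutional layers with batch normalization and ReLU estimate the noise, which a cross-layer connection subtracts from the input multiscale features. The channel count, layer arrangement, and function name are assumptions; this is a sketch, not the authors' released implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def feature_domain_denoiser(channels=64, depth=7):
    """Noise-extraction branch in the feature domain: (depth - 1) conv+BN+ReLU
    stages plus a final 3x3 conv estimate the noise, which is then subtracted
    from the input features through a cross-layer connection."""
    feats = layers.Input(shape=(None, None, channels))      # noisy multiscale features
    x = feats
    for _ in range(depth - 1):
        x = layers.Conv2D(channels, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    noise = layers.Conv2D(channels, 3, padding="same")(x)   # estimated noise distribution
    clean = layers.Subtract()([feats, noise])                # features minus extracted noise
    return tf.keras.Model(feats, clean)
```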

In the denoising structure of the social network image feature domain, a small convolution kernel is used to convolve the input noisy features, and the number of channels in the feature layer is kept unchanged for the subsequent cross-layer subtraction [17]. During the convolution process, as the input characteristics change, the shallow parameters of the network change, which causes the input distribution of the middle layers to fluctuate and hinders subsequent parameter learning [18]. Parameter learning then depends on the initial weights of the network, and the learning rate has to be adjusted slowly to ensure the correctness of the training results.

2.2 Image noise information cleaning algorithm

Based on the global adaptive denoising algorithm, an adaptive fractional calculus algorithm based on a small probability strategy (only for a class of low-intensity noise images) is proposed. While adaptive fractional integration is used to remove noise, fractional differentiation is used to enhance and retain the texture of the image, so that the whole image becomes clearer, and the image is normalized in batches. The batch normalization layer normalizes the input data distribution to values x with zero mean and unit variance, transforms the distribution of x, and introduces the learnable parameters a and b, with a representing the standard deviation of the input data distribution and b representing its mean value. The batch normalization layer learns the distribution y of the input, which is used as its output, so that the subsequent network obtains a relatively stable distribution and is no longer disturbed by changes in the shallow network parameters, allowing the neural network to converge quickly without layer-by-layer optimization. The specific algorithm is as follows:

(4) $y = a x + b, \quad a = \sqrt{\mathrm{Var}[x]}, \quad b = E[x]$.
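
A minimal NumPy illustration of the batch normalization step behind formula (4), assuming the standard zero-mean, unit-variance normalization followed by a learnable affine transform; the parameter names gamma and beta and the epsilon term are conventional additions, not taken from the paper.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch to zero mean / unit variance, then apply the
    learnable affine transform y = gamma * x_hat + beta (cf. formula (4))."""
    mean = x.mean(axis=0)            # E[x] over the batch
    var = x.var(axis=0)              # Var[x] over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```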

Based on the above algorithm, the cross-layer connection between shallow and deep networks can not only fuse features and eliminate the distribution of noise features, but also promote the optimization of the network and solve the problem of network degradation as the number of layers increases [19]. When shallow and deep networks are constructed to solve a specific task, the test-set accuracy of the deep network can be lower than that of the shallow one, even though its fitting ability is stronger in theory. According to the analysis of deep residual networks, when the information flow propagates backward it is hindered by the deep structure, resulting in network degradation that makes the network difficult to optimize. When a cross-layer structure connects the shallow and deep networks, the depth of back propagation during weight updates is reduced, which solves the problem that the network is difficult to optimize [20]. The cross-layer connection lets the feature extraction layers be better optimized, and since multiscale feature extraction is the key link in the adversarial training, the cross-layer connection enables the network to output images with richer texture features. When the fractional order is negative, the fractional operation is an integral; when it is positive, it is a differential. A corresponding adaptive fractional calculus function is established according to the continuous change of the fractional order: the function takes a negative order at image noise, a positive order at image edges, a positive order at weak image texture, and a small positive order in smooth image regions. When the integral order satisfies −1 ≤ v_noise ≤ −0.5, it has a good attenuation effect on high-frequency image noise.

(5) $W = \begin{cases} v_{\text{noise}} = (-1) \times \dfrac{1}{2}\, \dfrac{M_{\text{noise}}(i,j) + Y_{\text{noise}} - t}{Y_{\text{mbe}} - t}, & M_{\text{noise}}(i,j) = t\, d(P,Q), \\ v_{\text{texture}} = \dfrac{M(i,j) - 2}{t - 2}, & 2 < M(i,j) < t, \\ 0, & 0 < M(i,j) < 2. \end{cases}$

In this function, t is the threshold of the noise gradient obtained by the small probability strategy, and v is the fractional order corresponding to each pixel. M_noise(i, j) is the average gradient of the noise points, and Y_noise(i, j) is the optimal threshold of the noise points, with t = X_noise. The average error measures the degree of image distortion by calculating the average difference of each pixel value between the real image and the denoised image.

(6) $\mathrm{MSE} = W\, \dfrac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left(\hat{f}(i,j) - f(i,j)\right)^2,$

where f(i, j) represents the pixel value of the original clean image at position (i, j), and M and N represent the length and width of the image, respectively. Although the mean square error is simple to calculate, it does not consider the influence of human vision, so its results often differ from what the human eye observes [21]. The peak signal-to-noise ratio (PSNR) represents the ratio of the maximum possible power of the signal to the noise power.

(7) $\mathrm{PSNR} = 10 \times \log_{10} \dfrac{(2^L - 1)^2}{\mathrm{MSE}},$

where L is the maximum gray level and MSE is the mean square error. The larger the PSNR value, the higher the quality of the image, that is, the less noise the image contains, indicating that the image denoising method is more effective. The image enhancement factor is mainly used to measure the edge-protection performance of an image denoising algorithm.
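
For reference, formulas (6) and (7) can be computed as follows; the bit-depth parameter and function names are illustrative assumptions.

```python
import numpy as np

def mse(clean, denoised):
    """Mean squared error between the clean image f and the denoised estimate (formula (6))."""
    clean = clean.astype(np.float64)
    denoised = denoised.astype(np.float64)
    return np.mean((denoised - clean) ** 2)

def psnr(clean, denoised, bits=8):
    """Peak signal-to-noise ratio in dB for images with the given bit depth (formula (7))."""
    peak = 2 ** bits - 1
    return 10.0 * np.log10(peak ** 2 / mse(clean, denoised))
```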

When the fractional order satisfies −1 ≤ v_noise ≤ −0.5, the segmented noise points are denoised and a good denoising effect is achieved. When noise points of different intensities correspond to different integration orders, the negative order is smaller (more strongly negative) at strong noise points and larger at weak noise points, so a better noise reduction effect can be obtained [22]. For a noisy image, various kinds of noise coexist, and it is difficult to achieve a good denoising effect if only a single fractional integral order is used. Therefore, this study proposes a global adaptive fractional integral image denoising algorithm, which performs the fractional integral operation on each pixel. The average gradient amplitude of each pixel of the image f(i, j) in eight directions is taken as M(i, j), and it is then standardized to obtain the integral order of that pixel. Using the maximum value Y and minimum value X of M(i, j) to normalize the gradient amplitude of each pixel, the dynamic fractional integral order is obtained.

(8) $v = (-1) \times \dfrac{M(i,j) - X}{Y - M(i,j)}$.

In this way, a small negative order is obtained where the mean gradient is large (regarded as a noise point), and the fractional integral of this order strongly attenuates the noise, while a corresponding integral order is obtained where the gradient amplitude is medium or small (regarded as image texture), which enhances and retains the image texture to a certain extent. The global adaptive fractional integration algorithm can then process the whole image by substituting the per-pixel integral order of the formula into a mask and convolving it with the noisy image.
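
A sketch of the per-pixel order computation under the reading of formula (8) given above, assuming SciPy is available for the eight directional differences; the small stabilizing constant and the clipping of the order to [−1, 0] are added assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def adaptive_fractional_order(image):
    """Per-pixel fractional integral order from the mean gradient amplitude
    M(i, j) over 8 directions, normalized with its global extremes X and Y."""
    img = image.astype(np.float64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    grads = []
    for di, dj in offsets:
        kernel = np.zeros((3, 3))
        kernel[1, 1] = 1.0                        # centre pixel
        kernel[1 + di, 1 + dj] = -1.0             # neighbour in this direction
        grads.append(np.abs(convolve(img, kernel, mode="nearest")))
    m = np.mean(grads, axis=0)                    # M(i, j): mean 8-direction gradient amplitude
    x_min, y_max = m.min(), m.max()               # X and Y in formula (8)
    v = -(m - x_min) / (y_max - m + 1e-8)         # small constant avoids division by zero
    return np.clip(v, -1.0, 0.0)                  # keep orders in the integral range (assumption)
```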

2.3 Design the realization of network image denoising

The convolutional neural structure of the social network is divided into convolutional layers and pooling layers, and each convolutional layer is connected to a pooling layer in line with the characteristics of the neural network. The convolutional layer is mainly responsible for feature extraction and uses multiple convolution kernels to convolve the output of the previous layer [23]. Because each convolution kernel has a different assignment, each kernel can represent an image feature, such as a texture or edge feature; after the convolution operation, multiple feature maps are obtained, so the convolutional layer can also be called the feature extraction layer. The pooling layer is also called the sampling layer: according to the local correlation principle of the image, the feature points in a small area are aggregated into new features. On this basis, the structural features of the social network are analyzed, as shown in Figure 4.

Figure 4: Structural characteristics of social networks.

The feature points in each specified area are averaged. Suppose the size of the input matrix is 4 × 4, the size of the filter is 2 × 2, and the stride is 2; the mean pooling operation is then as shown in Figure 5.

Figure 5: Image denoising average pooling diagram.
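
The mean pooling operation of Figure 5 can be reproduced with a few lines of NumPy; the 4 × 4 input used here is an arbitrary example matrix, since the figure's actual values are not given in the text.

```python
import numpy as np

def mean_pool(x, size=2, stride=2):
    """Average pooling: each size x size window of x is replaced by its mean."""
    h, w = x.shape
    out_h, out_w = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * stride:i * stride + size, j * stride:j * stride + size].mean()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)   # 4 x 4 input as in Figure 5
print(mean_pool(x))                            # 2 x 2 output of window means
```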

By adding a cross-layer connection structure between the two residual modules, the features of the three layers in the network are connected together. This reduces the difficulty of network optimization and the loss of effective features, and at the same time combines low-dimensional features with abstract high-dimensional features, making it easier for the subsequent network to filter and reconstruct the image. The residual structure and the cross-layer connections optimize the information transmission of the network, which makes the network easier to converge while retaining more image features. During training, the features are screened and the screened features are gradually fused, which helps the discriminator network improve the generator network; therefore, after adversarial training, the generator network can restore a denoised image of better quality. The training of the convolutional neural structure of the social network is a supervised learning process, so its training set must be a labeled dataset; the network training process is shown in Figure 6.

Figure 6: Optimization of social network image denoising process.

Before training starts, the parameters of the convolutional neural structure of the social network are initialized to relatively small values drawn from a Gaussian distribution. Small random numbers are chosen so that, at the beginning of training, the network does not saturate because the parameters are too large. Assuming that the number of samples in the training set is n and the number of test samples is m, the mean square error is selected as the loss function, the size of the training batch is specified, the training samples and their corresponding labels are input into the network, the error between the output and the label is calculated, and the weights and biases of each layer are continuously updated through the back-propagation algorithm until the loss function is minimized. After training, the optimal convolutional neural structure model of the social network is obtained; the model with optimal parameters is saved, the test samples are input into the saved model, and the output is the corresponding denoised image.

The estimated image obtained in the first stage of denoising is used to search for similar blocks. Because most of the noise has been removed from the estimated image, the similar image blocks are more accurate. According to the positions of the similar blocks in the estimated image, the corresponding blocks in the original noisy image are found. The 2DSVD transform matrix is calculated from the similar blocks in the estimated image and used to remove the redundancy within the similar blocks of the noisy image, while the redundancy between similar blocks is removed with the discrete cosine transform; the similar blocks found with the first-stage estimate are more accurate, which also reduces the amount of calculation. Since Wiener filtering minimizes the mean square error between the original signal and the estimated signal when the statistical characteristics of the original signal are known, the statistics of the original noiseless image are computed from the first-stage estimate, and the noise is suppressed by Wiener-filter shrinkage of the transform coefficients, meeting the research requirements of image denoising.
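
A hedged TensorFlow sketch of the supervised training loop described above, with the mean square error as the loss and back-propagation handled by a standard optimizer; the Adam optimizer, learning rate, batch size, and epoch count are assumptions, not values reported in the paper.

```python
import tensorflow as tf

def train_denoiser(model, noisy_blocks, clean_blocks, batch_size=64, epochs=50):
    """Supervised training of the denoising network: the MSE between the network
    output and the clean label is minimized by back-propagation."""
    ds = (tf.data.Dataset.from_tensor_slices((noisy_blocks, clean_blocks))
          .shuffle(10_000)
          .batch(batch_size))
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss=tf.keras.losses.MeanSquaredError())
    model.fit(ds, epochs=epochs)
    return model
```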

3 Experimental results

To verify the practical application effect of the social network image denoising algorithm based on multifeature fusion, its denoising results are compared with those of existing image denoising algorithms on 400 images of human life and natural landforms from the CBSD400, Berkeley Segmentation Dataset (BSD), and MNIST datasets. To facilitate training, the input images are cut into 43 × 43 image blocks, which are converted to grayscale and augmented, giving 50,000 training blocks. The experiments use a 64-bit Windows 10 system and the TensorFlow deep learning framework for network training, with an Intel Core i5-7300 CPU @ 3.2 GHz, 16 GB of memory, and a GTX 1050 graphics card.
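
The preparation of the 43 × 43 training blocks might look roughly like the following; the stride and the particular flip/rotation augmentations are assumptions, since the paper only states that the blocks are grayed and data enhanced.

```python
import numpy as np

def extract_patches(gray_image, patch=43, stride=43, augment=True):
    """Cut a grayscale image into patch x patch blocks and optionally augment
    them with flips/rotations, roughly as done for the 50,000 training blocks."""
    h, w = gray_image.shape
    blocks = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            b = gray_image[i:i + patch, j:j + patch]
            blocks.append(b)
            if augment:
                blocks.extend([np.fliplr(b), np.rot90(b)])
    return np.stack(blocks)
```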

The algorithms in earlier studies [3–7] are used as comparison methods. Taking the original images in Figures 4 and 5 as the objects, the added noises are Gaussian noise with a mean value of 0, Poisson noise with a variance of 0.01, salt and pepper noise with an intensity of 0.04, and salt and pepper noise with an intensity of 0.03. The image resolution is 256 × 256, and the order considered is 0.5. The PSNR of the different methods is shown in Table 1.

Table 1

PSNR (dB) of images after processing by each algorithm

Denoising algorithm | Gaussian noise | Poisson noise | Multiplicative noise | Salt and pepper noise
The algorithm in this study | 25.74 | 29.35 | 31.55 | 27.69
The algorithm of deep convolution network in ref. [3] | 18.6 | 28.85 | 27.35 | 18.63
Constrained second-order total generalized variational algorithm in ref. [4] | 15.34 | 19.67 | 21.36 | 17.67
SVM algorithm in ref. [5] | 19.67 | 16.68 | 18.96 | 17.41
Multifeature fusion algorithm in ref. [6] | 21.58 | 21.36 | 22.35 | 20.54
Hybrid deep learning algorithm in ref. [7] | 23.56 | 22.39 | 26.68 | 20.68

According to Table 1, the average PSNR of the algorithm in this article under the different noise conditions is 28.58 dB, whereas the average PSNRs of the algorithms from refs [3–7] are 23.36, 18.51, 18.18, 21.46, and 23.33 dB, respectively. The algorithm in this study therefore outputs a higher PSNR under every noise condition, showing that it can reduce noise while avoiding the loss of too much image information, so that the image maintains high quality.

With the structural similarity (SSIM) as the index, the similarity between the denoised image and the original image is evaluated quantitatively; its value range is [0, 1], and the larger the value, the smaller the image distortion. From the 400 images of human bodies and natural landforms in the CBSD400, BSD, and MNIST databases, 50 images were randomly selected and divided into five groups of 10 images each. The same amount of Gaussian noise was added to each group, and the SSIM values of the denoised images were tested. The average SSIM of each group is shown in Table 2.

Table 2

SSIM values of different methods

Denoising algorithm | Group 1 | Group 2 | Group 3 | Group 4 | Group 5
The algorithm in this article | 0.87 | 0.96 | 0.92 | 0.91 | 0.88
The algorithm of deep convolution network in ref. [3] | 0.67 | 0.81 | 0.67 | 0.88 | 0.76
Constrained second-order total generalized variational algorithm in ref. [4] | 0.58 | 0.77 | 0.73 | 0.67 | 0.59
SVM algorithm in ref. [5] | 0.69 | 0.72 | 0.71 | 0.85 | 0.76
Multifeature fusion algorithm in ref. [6] | 0.66 | 0.68 | 0.77 | 0.81 | 0.61
Hybrid deep learning algorithm in ref. [7] | 0.71 | 0.79 | 0.68 | 0.72 | 0.66

As can be seen from Table 2, the average SSIM obtained by this method over the five groups of images is 0.908, whereas the average SSIMs obtained by the algorithms from refs [3–7] are 0.758, 0.668, 0.746, 0.706, and 0.712, which indicates that the structural similarity of the images processed by this method is much higher than that of the other methods. This is because the method in this study enhances the image texture with the fractional differential algorithm and retains the structure of the image while removing noise, which gives the rendered image a better effect.

4 Conclusion

Based on the global adaptive fractional-order denoising algorithm, a social network image denoising algorithm based on multifeature fusion is proposed for a class of low-intensity salt and pepper noise images. For the whole image, the algorithm applies the fractional integral at noise points and the fractional differential at texture points, so that the image is denoised and its texture enhanced at the same time, making the image clearer and better in visual effect. Experiments show that this method has a good denoising effect on various types of images, with an average PSNR of 28.58 dB and an average SSIM of 0.908, which is superior to the other methods. This shows that the method can effectively denoise images while preserving image details and avoiding image distortion. Future research can consider image super-resolution reconstruction to further improve the image output effect.

Conflict of interest: Authors state no conflict of interest.

References

[1] Wang J, Yu XF, Ouyang N, Zhao S, Yao H, Guan X, et al. An online surface water COD measurement method based on multi-source spectral feature-level fusion. RSC Adv. 2019;9(20):11296–304. doi:10.1039/C8RA10089F.

[2] Fu Y, Fan C, Zou L, Yang Y, Liu Y. Image denoising of real photographs with generative adversarial network for data augmentation. J Electron Imaging. 2019;28(5):53017. doi:10.1117/1.JEI.28.5.053017.

[3] Feng X, Su X, Shen J, Jin H. Single space object image denoising and super-resolution reconstructing using deep convolutional networks. Remote Sens. 2019;11(16):1910. doi:10.3390/rs11161910.

[4] Liu X, Tang Y, Yang Y. Primal-dual algorithm to solve the constrained second-order total generalized variational model for image denoising. J Electron Imaging. 2019;28(4):43017.1–15. doi:10.1117/1.JEI.28.4.043017.

[5] Li R, Gu H, Hu B, She Z. Multi-feature fusion and damage identification of large generator stator insulation based on lamb wave detection and SVM method. Sensors. 2019;19(17):3733. doi:10.3390/s19173733.

[6] Zhou T, Wang Y, Wang CX, Salous S, Liu L, Tao C. Multi-feature fusion based recognition and relevance analysis of propagation scenes for high-speed railway channels. IEEE Trans Vehicular Technol. 2020;69(8):8107–18. doi:10.1109/TVT.2020.2999313.

[7] Abdi A, Hasan S, Shamsuddin SM, Idris N, Piran J. A hybrid deep learning architecture for opinion-oriented multi-document summarization based on multi-feature fusion. Knowl Syst. 2020;213:106658. doi:10.1016/j.knosys.2020.106658.

[8] Wang Z, Qian L, Han C, Shi L. Application of multi-feature fusion and random forests to the automated detection of myocardial infarction. Cognit Syst Res. 2020;59:15–26. doi:10.1016/j.cogsys.2019.09.001.

[9] Bhat PG, Subudhi BN, Veerakumar T, Laxmi V, Gaur MS. Multi-feature fusion in particle filter framework for visual tracking. IEEE Sens J. 2020;20(5):2405–15. doi:10.1109/JSEN.2019.2954331.

[10] Deng H, Tao J, Song X, Zhang C. Estimation of the parameters of a weighted nuclear norm model and its application in image denoising. Inf Sci. 2020;528:246–64. doi:10.1016/j.ins.2020.04.028.

[11] Ma Y, Wei B, Feng P, He P, Guo X, Wang G. Low-dose CT image denoising using a generative adversarial network with a hybrid loss function for noise learning. IEEE Access. 2020;8:67519–29. doi:10.1109/ACCESS.2020.2986388.

[12] Kumwilaisak W, Piriyatharawet T, Lasang P, Thatphithakkul N. Image denoising with deep convolutional neural and multi-directional long short-term memory networks under Poisson noise environments. IEEE Access. 2020;8:86998–7010. doi:10.1109/ACCESS.2020.2991988.

[13] Chen J, Lin Y, Du L, Kang M, Chi X, Wang Z, et al. Single low-dose CT image denoising using a generative adversarial network with modified U-net generator and multi-level discriminator. IEEE Access. 2020;8:133470–87. doi:10.1109/ACCESS.2020.3006512.

[14] Narasimha C, Rao AN. Integrating Taylor-Krill herd-based SVM to fuzzy-based adaptive filter for medical image denoising. IET Image Process. 2020;14(3):442–50. doi:10.1049/iet-ipr.2018.6434.

[15] Shen C, Wu X, Zhao D, Li S, Cao H, Zhao H, et al. Comprehensive heading error processing technique using image denoising and tilt-induced error compensation for polarization compass. IEEE Access. 2020;8:187222–31. doi:10.1109/ACCESS.2020.3028418.

[16] Valsesia D, Fracastoro G, Magli E. Deep graph-convolutional image denoising. IEEE Trans Image Process. 2020;29:8226–37. doi:10.1109/TIP.2020.3013166.

[17] Lyu Q, Guo M, Ma M, Mankin R. External prior learning and internal mean sparse coding for image denoising. J Electron Imaging. 2019;28(3):33014.1–15. doi:10.1117/1.JEI.28.3.033014.

[18] Xie M, Zhang Z, Zheng W, Li Y, Cao K. Multi-frame star image denoising algorithm based on deep reinforcement learning and mixed Poisson-Gaussian likelihood. Sensors. 2020;20(21):5983. doi:10.3390/s20215983.

[19] Mahalakshmi T, Sreenivas A. Adaptive filter with type-2 fuzzy system and optimization-based kernel interpolation for satellite image denoising. Computer J. 2020;63(6):913–26. doi:10.1093/comjnl/bxz168.

[20] Jang SJ, Hwang Y. Noise-aware and light-weight VLSI design of bilateral filter for robust and fast image denoising in mobile systems. Sensors. 2020;20(17):4722. doi:10.3390/s20174722.

[21] Chinnusamy GS, Shanmugasundaram D. Genetic fuzzy optimized approximate multiplier design based non-linear anisotropic diffusion image denoising in VLSI. J Ambient Intell Humanized Comput. 2021;22(9):1–12. doi:10.1007/s12652-021-03027-w.

[22] Roels J, Vernaillen F, Kremer A, Gonçalves A, Aelterman J, Luong HQ, et al. An interactive ImageJ plugin for semi-automated image denoising in electron microscopy. Nat Commun. 2020;11(1):771. doi:10.1038/s41467-020-14529-0.

[23] Dhannawat R. A new faster, better pixels weighted don't care filter for image denoising and deblurring. Int J Adv Trends Computer Sci Eng. 2020;9(2):2302–9. doi:10.30534/ijatcse/2020/212922020.

Received: 2021-06-22
Revised: 2021-12-13
Accepted: 2021-12-22
Published Online: 2022-03-04

© 2022 Lanfei Zhao and Qidan Zhu, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
