Open Access. Published by De Gruyter, February 23, 2021. Licensed under CC BY 4.0.

Optimized LMS algorithm for system identification and noise cancellation

Qianhua Ling, Mohammad Asif Ikbal and P. Kumar

Abstract

Optimization, by definition, is the action of making the best or most effective use of a resource or situation, and it is required in almost every field of engineering. In this work, the optimization of the Least Mean Square (LMS) algorithm is carried out with the help of Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO). Efforts have been made to find out the advantages and disadvantages of combining the gradient-based LMS algorithm with Swarm Intelligence (SI) techniques (ACO, PSO). This optimization of the LMS algorithm will help in further extending the use of adaptive filtering to systems having a multimodal error surface, which is still a gray area of adaptive filtering: the available version of the LMS algorithm, which plays an important role in adaptive filtering, is gradient based and can get stuck at a local minimum of a multimodal error surface, mistaking it for the global minimum and resulting in non-optimized convergence. By virtue of the proposed method we obtain a sound solution to the problem associated with systems having a multimodal error surface. The results depict significant improvements in performance and display a fast convergence rate instead of stagnation at local minima. Both SI techniques displayed their own advantages, and each can be separately combined with the LMS algorithm for adaptive filtering. This optimization of the LMS algorithm will further help to resolve serious interference and noise issues and has important applications in the field of biomedical science.

1 Introduction

The design of a filter is optimal only when the statistical characteristics of the input data match the prior information on which the filter design is based. A very popular and easy way of ensuring an optimal design is the estimate-and-plug procedure. This is a two-stage process and requires extra hardware. A more efficient method is adaptive filtering. Adaptive filters rely on a recursive algorithm that keeps the filter performance satisfactory even when complete knowledge of the relevant signal characteristics is not available. Such an algorithm starts from a predetermined set of initial conditions, chosen according to whatever information is available about the system environment. Various recursive algorithms have been proposed, and each has advantages over the others depending on factors such as rate of convergence, misadjustment, tracking, robustness and computational requirements [1]. A new field has emerged for discrete optimization, known as Swarm Intelligence (SI). Within a few decades, the use of SI has expanded into almost every field because of its tremendous performance. Considerable research explaining SI has been carried out, and various algorithms have been developed, such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and Artificial Bee Colony (ABC). There is vast scope for using these algorithms in various engineering fields, such as linear prediction, inverse modeling, system identification and feedforward control [1].

In this work, we have contributed to the optimization of the LMS algorithm using Swarm Intelligence. The LMS algorithm, as well as others related to it, is widely used in various applications of adaptive filtering due to its computational simplicity [2,3,4,5,6], but its use is limited to systems with a unimodal error surface. In the following section we explain the problem associated with the LMS algorithm in detail; in the later sections we discuss the proposed model, followed by detailed descriptions of ACO and PSO, and we conclude the paper with the important results and a discussion.

1.1 Problem Identification

A very popular and simple recursive algorithm is Least Mean Square (LMS), which is widely used in the design of adaptive filters because of its various advantages. It was introduced by Widrow and Hoff in 1959 and uses the gradient-based method of steepest descent. However, the use of LMS and the other recursive algorithms discussed so far is limited to systems having a unimodal error surface. The following problems arise when we try to extend the implementation to systems with a multimodal error surface.

  1. The LMS algorithm moves along the error surface in search of the global minimum; in that course it may well get stuck at a local minimum, which can render the system unstable during the adaptation process. Figure 1 illustrates the difference between systems having unimodal and multimodal error surfaces [8,9,10,11,12,13].

  2. Another limiting factor of the LMS algorithm is the dependence of its convergence speed on the eigenvalue spread of the correlation matrix R.

  3. Further, the choice of the step size (μ) is very critical in LMS. For the algorithm to converge, μ should lie in the following range [14]:

    $0 < \mu < \frac{1}{\lambda_{\max}}$

    where λmax represents the largest eigenvalue of the correlation matrix R. Figure 2 demonstrates the convergence of the algorithm for various values of μ.

Figure 1: Representation of unimodal and multimodal error surfaces.

Figure 2: (a) LMS error for μ = 0.013; this value of μ falls within the specified limit, hence it displays decent convergence. (b) LMS error for μ = 0.0013; this value of μ is small, so the rate of convergence is slow and the algorithm has not converged even after 3000 samples. (c) For μ = 0.2 the LMS error is erratic.

1.2 Organization of the paper

The rest of the paper is organized as follows. Section 2 covers the essential literature the authors reviewed to identify the existing models and their associated problems. In Section 3 the authors briefly discuss the main concept of the proposed model with the help of a block diagram. In the subsequent sections the optimization techniques, Ant Colony and Particle Swarm, are discussed along with their basic concepts and flow charts. In Section 6 the authors discuss the obtained results and analyze the findings, and in the final section the paper is concluded by accumulating all the results and noting the important findings.

2 Literature Review

To develop the proposed concept and to identify the need for improvement of the existing model, the authors went through various pieces of literature. In this section, a few of them are listed with a brief summary of their contributions.

In [15] the authors proposed an optimized normalized LMS algorithm for the identification of systems under a state-variable model. The proposed algorithm helps to reduce the misalignment and is based on the regularization parameters and a normalized step size. To validate the proposed model, the authors analyzed it on acoustic echo cancellation and claimed to achieve lower misalignment and fast convergence. In [16] the authors provided another solution for the problems that occur as the parameter space grows. They particularly addressed the problems associated with the basic adaptive algorithms and the Wiener filter, carried out a comparative study of the Kalman filter and an optimized LMS algorithm, and discussed experimental results to strengthen their view. In [17] the authors discussed the problem of step-size selection and provided a new approach for choosing it appropriately. They claimed that a variable step-size algorithm can be represented as a Kalman filter under some specific conditions; this is only possible when the step size of the LMS algorithm and the state noise of the Kalman filter are chosen with precision. They managed to calculate the optimum step size by estimating the probability density function of the coefficient estimation error and the measurement noise variance. In [18] the authors proposed an improved ℓ0-norm-constraint LMS (ℓ0-ILMS) algorithm for the identification of sparse systems. They derived the convergence condition on the step size and further discussed the parameter-selection criterion for optimal mean square deviation (MSD). They concluded that the steady-state MSD of the proposed algorithm, in comparison with the ℓ0-LMS algorithm, is less sensitive to the measurement noise power and the tuning parameters. In [19] the authors combined the virtues of both the well-known normalized LMS and LMS algorithms to strike a trade-off between low misadjustment and fast convergence. They achieved this by choosing the controlled parameters appropriately, while the time-varying parameters were proposed under various rules; in their work the authors proposed an optimized LMS algorithm for models having a variable state and a method to choose an appropriate step size to reduce the misalignment. In [20] the authors noted the drawbacks of the LMS algorithm and proposed a modification so that the algorithm can be used despite its limitations and its applications can be diversified. They improved the robustness of the algorithm by pre-whitening the input signal, which helps optimize the Cholesky factor of the autocorrelation matrix of the input.

After studying all these efforts, the authors identified the need for an optimization of the LMS algorithm and an appropriate step-size selection, so that it can be used effectively for the identification of systems having a multimodal error surface.

3 Proposed Model

From the above discussion we may conclude that our purpose is to design something that does not get stuck at a local minimum of a multimodal error surface, but is instead capable of finding the global minimum and thereby providing the optimum result. This can be achieved by combining the LMS algorithm with an SI-based algorithm; we have selected ACO and PSO in this case. Combined with LMS, these algorithms keep it from getting stuck at a local minimum and let it search for the best possible result.

In brief, we have divided the entire process into two parts. In the first part, LMS plays its role and is used to update the parameter vector; as soon as its limitations set in, we employ ACO or PSO. These help us find the optimum value of μ, so that we can obtain the optimum solution even for systems with a multimodal error surface.

Figure 3 depicts the entire adaptive filter design, where u(n), u(n−1), ..., u(n−M+1) are the input samples at time n and M is the number of adjustable parameters. This input is applied simultaneously to the transversal filter model (TFM) and to the unknown system, and the corresponding outputs are denoted y(n) and d(n). The model parameters are adjusted according to the value of y(n), given by (1):

(1) $y(n) = \sum_{k=0}^{M-1} \hat{w}_k(n)\, u(n-k)$

where ŵ0(n), ŵ1(n), ..., ŵM−1(n) represent the estimated model parameters. We compare y(n) and d(n) with the help of a summer, which produces the modeling error e(n) as output [7, 8, 18].
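
As a quick illustration, equation (1) is just a dot product between the weight estimates and the M most recent input samples; the values below are hypothetical and chosen only for the example.

```python
import numpy as np

w_hat = np.array([0.4, -0.2, 0.1])    # ŵ_0(n), ŵ_1(n), ŵ_2(n), illustrative
u_buf = np.array([1.0, 0.5, -0.3])    # u(n), u(n-1), u(n-2)
y_n = w_hat @ u_buf                   # y(n) = Σ_k ŵ_k(n) u(n-k), eq. (1)
print(y_n)                            # 0.4*1.0 + (-0.2)*0.5 + 0.1*(-0.3) = 0.27
```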

Figure 3: Proposed model, where u(n) is the input applied to the adaptive filter, y(n) is the output of the adaptive filter, d(n) is the desired response and e(n) is the estimated error.

In general, at any given time n, a non-zero value of e(n) implies that the model deviates from the unknown system. To minimize this error, we feed e(n) to ACO or PSO to calculate the optimum value of the step size μ; these algorithms make this possible even for a system with a multimodal error surface. As discussed earlier, an optimum value of μ is a must for proper convergence of the LMS algorithm. The LMS algorithm, with this optimum value of μ, is then provided with the input samples u(n), u(n−1), ..., u(n−M+1). With these collective efforts we obtain updated model parameters for the next iteration. Thus, at time n+1, we get a new value of y(n), and hence a new value of e(n). This whole process repeats over a large number of iterations until we obtain the minimum possible value of e(n) [21, 22].
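
A minimal sketch of this loop follows, with the ACO/PSO stage abstracted behind a placeholder (`step_size_from_si` is a hypothetical name, and the unknown system is an illustrative toy plant; the paper's own simulations were written in MATLAB).

```python
import numpy as np

def step_size_from_si(error_history):
    """Placeholder for the ACO/PSO step-size search of sections 4 and 5."""
    return 0.01                               # assumed constant in this sketch

rng = np.random.default_rng(1)
w_true = np.array([0.6, -0.4, 0.25, -0.1])    # unknown system, illustrative
M, N = len(w_true), 2000
u = rng.standard_normal(N)
d = np.convolve(u, w_true)[:N]                # unknown system's output d(n)

w_hat = np.zeros(M)                           # transversal filter model (TFM)
errors = []
for n in range(M - 1, N):
    x = u[n - M + 1:n + 1][::-1]              # u(n), ..., u(n-M+1)
    y = w_hat @ x                             # model output y(n), eq. (1)
    e = d[n] - y                              # modeling error e(n) (the summer)
    mu = step_size_from_si(errors)            # SI stage supplies mu
    w_hat += mu * e * x                       # LMS update with the chosen mu
    errors.append(e)

print("estimated parameters:", np.round(w_hat, 3))
```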

4 Ant Colony Optimization

In this section we elaborate on ACO and explain how it can be used to find the optimum value of μ.

4.1 Basic configuration

It is really surprising how ants find the optimum path to a food source, even though they cannot see it; Figure 4 illustrates the science behind this. First, an ant leaves her nest in search of food and finds it somewhere; the other ants then follow the pheromone trail she laid. If there are different paths to the same source, the pheromones deposited on the shortest path persist longer than those deposited on the longer path, so the shortest path becomes more appealing to new ants leaving the nest. Gradually all of them start moving along the shortest path and its pheromone concentration becomes high, whereas the pheromone concentration on the longer path slowly decays [11].

Figure 4: Basic concept of Ant Colony Optimization.

4.2 Flow chart and implementation of ACO

ACO is inspired by these ant colonies. To elaborate its implementation, an analogy is drawn between the elements of an ant colony and those of the algorithm (see Table 1).

Table 1

Analogy between two systems

Natural terms → Terms for use in the algorithm
natural territory → graph (nodes and edges)
food source and nest → start and destination nodes
ants → our artificial ants
visibility → the reciprocal of distance
pheromones → artificial pheromones
foraging behavior → random walk through the graph (guided by pheromones)

Based on this analogy, a flow chart to design the algorithm is presented in Figure 5. The following equations will help us further in implementing ACO for optimization [9, 23]:

(2) $p_{ij} = \dfrac{\tau_{ij}(t)\,(1/d_{ij})^{\beta}}{\sum_{j \in \text{nodes}} \tau_{ij}(t)\,(1/d_{ij})^{\beta}}$

(3) $\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \sum_{k \in \text{colony}(i,j)} \dfrac{Q}{L_k}$
Figure 5: Flow chart for ACO.

Equation (2) gives the probability of an ant moving between two nodes i and j, and (3) gives the local update of the pheromone after travelling from node to node.
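
The two update rules can be sketched as follows; the pheromone and distance matrices, the parameter values and the tiny three-node graph are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transition_probs(tau, dist, i, beta=2.0):
    # Eq. (2): p_ij proportional to tau_ij(t) * (1/d_ij)^beta over nodes j.
    desirability = tau[i] * (1.0 / dist[i]) ** beta
    return desirability / desirability.sum()

def update_pheromone(tau, routes, lengths, rho=0.1, Q=1.0):
    # Eq. (3): evaporate all trails, then each ant k deposits Q/L_k on the
    # edges (i, j) of its route.
    tau = (1.0 - rho) * tau
    for route, L in zip(routes, lengths):
        for i, j in zip(route, route[1:]):
            tau[i, j] += Q / L
    return tau

# Tiny illustration on a 3-node graph (self-loops would be masked in practice).
tau = np.ones((3, 3))
dist = np.array([[1.0, 2.0, 4.0], [2.0, 1.0, 1.5], [4.0, 1.5, 1.0]])
print(transition_probs(tau, dist, 0))
tau = update_pheromone(tau, routes=[[0, 1, 2]], lengths=[3.5])
```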

4.3 Algorithm

To begin, an initial concentration of pheromone τij is set on each link (i, j);

  • The value of k is assigned.

  • Using equation (2), a path is built from the nest to the food source.

Remove the paths having the least concentration of pheromone (these are referred to as cycles) and compute each route weight f(xk(t)). When a node has no feasible candidate nodes, the predecessor of that node is included as a former node of the path.

  • As the next step, use (3) to update the concentration of pheromone.

  • Finally, the algorithm ends in any of the following three ways (a compact sketch in code follows this list):

    1. the maximum number of tours has been reached;

    2. an acceptable solution has been found, i.e. f(xk(t)) < ε;

    3. all ants are following the same path.
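
The following compact sketch adapts these steps to the μ-search of this paper: a grid of candidate step sizes plays the role of the nodes, the route weight f(xk(t)) is the mean squared LMS error obtained with a candidate μ, and simplified forms of equations (2) and (3) drive the search. The plant, the candidate grid and all constants are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([0.5, -0.3, 0.2])           # illustrative plant
M, N = len(w_true), 1500
u = rng.standard_normal(N)
d = np.convolve(u, w_true)[:N]

def lms_mse(mu):                              # route weight f(x_k(t))
    w, sq = np.zeros(M), 0.0
    for n in range(M - 1, N):
        x = u[n - M + 1:n + 1][::-1]
        e = d[n] - w @ x
        w += mu * e * x
        sq += e * e
    return sq / (N - M + 1)

candidates = np.linspace(0.02, 0.05, 16)      # bounds as in Table 2
tau = np.ones_like(candidates)                # pheromone per candidate node
rho, Q, n_ants = 0.1, 1.0, 20
for tour in range(30):                        # cap on the number of tours
    probs = tau / tau.sum()                   # simplified form of eq. (2)
    picks = rng.choice(len(candidates), size=n_ants, p=probs)
    tau *= 1.0 - rho                          # evaporation term of eq. (3)
    for k in picks:
        tau[k] += Q / lms_mse(candidates[k])  # deposit grows as MSE shrinks
    if len(set(picks)) == 1:                  # all ants on the same "path"
        break

print("best mu:", candidates[np.argmax(tau)])
```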

5 Particle Swarm Optimization

5.1 Basic configuration

To understand PSO, refer to Figure 6, which shows an example of two fishermen in search of fish. In general the big fish hide in the deepest valley of the pond and are difficult to catch, so at first both fishermen search for the deepest valley with mutual effort, sharing the measured depth of the pond with each other at every step. At some instant fisherman 1 (F1) arrives at valley 1 (V1), which may appear to him to be the deepest but actually is not, while fisherman 2 (F2) reaches valley 2 (V2), which in his case is the deepest. F1 may consider his valley the deepest, but after communicating with F2 he realizes that V1 is only the deepest valley he himself has discovered (personal best, Pbest), while V2, discovered by F2, is the deepest valley of the pond (global best, Gbest). F1 therefore moves toward V2. So, rather than getting stuck at a local minimum, F1 manages to escape from it and moves toward the global minimum [12, 15].

Figure 6: Basic concept of PSO.

5.2 Flow chart and implementation of PSO

The flow chart of PSO is presented in Figure 7, representing the essential steps that the swarm follows and how swarm optimization is applied to the optimization of the system. The cost function in this case is the mean squared error (MSE), given by (4):

(4) $J_{\text{MSE}}(n) = \frac{1}{2}\int e^2(n)\, p_n(e(n))\, \mathrm{d}e(n) = \frac{1}{2}\,E\{e^2(n)\}$

where pn(e) represents the probability density function of the error at time n and E{·} denotes the expectation operator, a compact representation of the integral in (4). This cost function is equally helpful for systems with a multimodal error surface, for the following reasons [24, 25]:

  • The minimum of this cost function is well defined with respect to the parameters of W(n);

  • The coefficient values of the unknown system obtained at this minimum minimize the error signal e(n), which indicates that y(n) is approaching d(n);

  • The function is differentiable with respect to the parameters of W(n), hence it is a smooth function of those parameters [26, 27].

Figure 7: Flow chart for PSO.

This is an excellent property that makes it possible to determine the optimum values of the coefficients based on the available statistics and knowledge of d(n) and u(n); further, it yields a simple yet effective procedure for adjusting the filter parameters [28, 29].
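
A minimal PSO sketch for the same μ-search follows, using a sample estimate of the cost J_MSE of equation (4); the swarm constants mirror Table 2, while the toy plant and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = np.array([0.5, -0.3, 0.2])           # illustrative plant
M, N = len(w_true), 1500
u = rng.standard_normal(N)
d = np.convolve(u, w_true)[:N]

def j_mse(mu):                                # sample estimate of (1/2)E{e^2}
    w, sq = np.zeros(M), 0.0
    for n in range(M - 1, N):
        x = u[n - M + 1:n + 1][::-1]
        e = d[n] - w @ x
        w += mu * e * x
        sq += e * e
    return 0.5 * sq / (N - M + 1)

lo, hi = 0.02, 0.05                           # bounds from Table 2
c1 = c2 = 0.5                                 # cognitive and social factors
omega = 0.5                                   # inertia factor
pos = rng.uniform(lo, hi, size=5)             # swarm size 5, as in Table 2
vel = np.zeros(5)
pbest = pos.copy()                            # personal bests (Pbest)
pbest_cost = np.array([j_mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]          # global best (Gbest)

for it in range(25):
    r1, r2 = rng.random(5), rng.random(5)
    vel = omega * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)          # keep mu within the bounds
    cost = np.array([j_mse(p) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[np.argmin(pbest_cost)]

print("best mu:", gbest, "with J_MSE:", pbest_cost.min())
```

Like the two fishermen, each particle is pulled both toward its own best μ so far and toward the swarm's best, which is what lets the search escape a local minimum of the cost.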

6 Simulation and analysis of results

Table 2 lists the control parameter values of PSO and ACO used in the simulation. These control parameters were decided so as to replicate actual ant colony and particle swarm behavior in the optimization process. In the case of PSO the main elements chosen are the swarm size, the cognitive, social and inertia factors, and the lower and upper bounds, whereas in the case of ACO the parameters are chosen according to the behavior ants follow when searching for a food source. With appropriate values of the control parameters, these optimization algorithms are used to calculate the step size μ, keeping in mind the importance of appropriate step-size selection noted in Section 1.1; this value is then used to find the MSE. Figure 8 shows a graph of the values of μ calculated with ACO and PSO separately, where α is a parameter that controls the convergence speed in direct proportion.

Table 2

Control parameter values

PSO:
  Swarm size = 5
  Cognitive factor, c1 = 0.5
  Inertia factor, ω = 0.5
  Social factor, c2 = 0.5
  Lower bound = 0.02
  Upper bound = 0.05

ACO:
  Number of ants = 20
  Evaporation parameter = 0.1
  Pheromone (P) = 0.2
  +P = 0.2
  −P = 0.3
  Maximum tour = 600
  Minimum value = −1000
  Maximum value = 1000
  Lower bound = 0.02
  Upper bound = 0.05
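
For reference, the Table 2 settings can be written down as plain configuration dictionaries, e.g. to feed the sketches above; this is only a representational choice, not code from the paper.

```python
# Control parameters of Table 2 as Python dictionaries (key names are ours).
pso_params = {
    "swarm_size": 5,
    "cognitive_factor_c1": 0.5,
    "inertia_factor_omega": 0.5,
    "social_factor_c2": 0.5,
    "lower_bound": 0.02,
    "upper_bound": 0.05,
}
aco_params = {
    "number_of_ants": 20,
    "evaporation_parameter": 0.1,
    "pheromone_P": 0.2,
    "plus_P": 0.2,
    "minus_P": 0.3,
    "maximum_tour": 600,
    "minimum_value": -1000,
    "maximum_value": 1000,
    "lower_bound": 0.02,
    "upper_bound": 0.05,
}
```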
Figure 8: Values of μ calculated with the help of ACO and PSO.

With the help of all the calculated values of μ, Minimum Mean Square Error (MMSE) will be calculated. Figure 9 represents the value of MMSE for various values of μ calculated by PSO and Figure 10 is representing the corresponding values of MMSE, for different values of μ, calculated with the help of ACO. Figure 11, 12 are representing the convergence of algorithm for those particular values of μ, for which we have got the minimum value of MMSE, while PSO and ACO were used for calculating the value of μ, respectively. These simulations are obtained by running the customize program on MATLAB and choosing the obtained values of μ shown in Figure 8.
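
This post-processing step can be sketched as a simple sweep: run LMS once per candidate μ, estimate the steady-state MSE, and keep the minimum. The candidate values and the plant below are illustrative assumptions (the actual candidates come from the ACO/PSO stage, Figure 8).

```python
import numpy as np

rng = np.random.default_rng(4)
w_true = np.array([0.5, -0.3, 0.2])           # illustrative plant
M, N = len(w_true), 1500
u = rng.standard_normal(N)
d = np.convolve(u, w_true)[:N]

def steady_state_mse(mu, settle=500):
    """Run LMS with the given mu and average e^2 after a settling period."""
    w, tail = np.zeros(M), []
    for n in range(M - 1, N):
        x = u[n - M + 1:n + 1][::-1]
        e = d[n] - w @ x
        w += mu * e * x
        if n >= settle:
            tail.append(e * e)
    return float(np.mean(tail))

mu_candidates = [0.021, 0.028, 0.035, 0.043, 0.049]   # stand-ins for Figure 8
mse = {mu: steady_state_mse(mu) for mu in mu_candidates}
best_mu = min(mse, key=mse.get)
print("MMSE:", mse[best_mu], "at mu =", best_mu)
```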

Figure 9: Values of MMSE calculated for various values of μ (PSO).

Figure 10: Values of MMSE calculated for various values of μ (ACO).

The convergence of the proposed algorithm shown in Figures 11 and 12 is close to the ideal expected convergence of the LMS algorithm. The difference, and the advantage of this approach, is its validity for systems having multimodal error surfaces. The simulations were obtained for systems with multimodal error surfaces, and the adequate convergence of the algorithm validates its applicability to such systems. This extends the use of the LMS algorithm to almost every kind of system and diversifies its applications; for example, it can be used for system identification and echo cancellation with IIR filters and other multimodal error surfaces.

Figure 11: Convergence of the algorithm for the minimum value of MMSE (PSO).

Figure 12: Convergence of the algorithm for the minimum value of MMSE (ACO).

7 Conclusion

In this work two SI-based optimization techniques, namely ACO and PSO, have been successfully used to optimize the LMS algorithm for adaptive filter design, so that it can be used to identify systems having a multimodal error surface. Referring to the results, we conclude that the proposed method is a powerful, simple algorithm in comparison with other related works. Both ACO and PSO produced broadly similar results, but the value of MMSE is smaller in the case of ACO. The approach of combining ACO or PSO with the LMS algorithm may be used for other applications of adaptive filters, such as system identification, noise/echo cancellation and the field of biomedical science, for example extracting heartbeat signals from ambient noise in stethoscopes.

References

[1] Wang, Yu-xin, Xue-zhen Li, and Zheng-yi Wang. "Parameters optimization of SVM based on the swarm intelligence." Journal of Physics: Conference Series, vol. 1437, no. 1. IOP Publishing, 2020. doi: 10.1088/1742-6596/1437/1/012005.

[2] Arikawa, Manabu, Masaki Sato, and Kazunori Hayashi. "Wide range rate adaptation of QAM-based probabilistic constellation shaping using a fixed FEC with blind adaptive equalization." Optics Express 28.2 (2020): 1300–1315. doi: 10.1364/OE.383097.

[3] Zidane, Mohammed, and Rui Dinis. "A new combination of adaptive channel estimation methods and TORC equalizer in MC-CDMA systems." First published: 20 April 2020. doi: 10.1002/dac.4429.

[4] Diniz, Paulo S. R. "Conventional RLS adaptive filter." Adaptive Filtering. Springer, Cham, 2020. 157–187. doi: 10.1007/978-3-030-29057-3_5.

[5] Diniz, Paulo S. R. "Kalman filters." Adaptive Filtering. Springer, Cham, 2020. 431–456. doi: 10.1007/978-3-030-29057-3_14.

[6] Chang, Wei-Der. "Coefficient estimation of IIR filter by a multiple crossover genetic algorithm." Computers & Mathematics with Applications 51.9–10 (2006): 1437–1444. doi: 10.1016/j.camwa.2006.01.003.

[7] Rusu, A., S. Ciochină, and C. Paleologu. "On the step-size optimization of the LMS algorithm." 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 2019, pp. 168–173. doi: 10.1109/TSP.2019.8768842.

[8] Diniz, Paulo S. R. Introduction to Adaptive Filtering. 5th edition. Springer, Cham, 2020.

[9] Dash, Meera, Trilochan Panigrahi, and Renu Sharma. "Distributed parameter estimation of IIR system using diffusion particle swarm optimization algorithm." Journal of King Saud University - Engineering Sciences 31.4 (October 2019): 345–354. doi: 10.1016/j.jksues.2017.11.002.

[10] Dorigo, Marco, and Thomas Stützle. "Ant Colony Optimization: overview and recent advances." Handbook of Metaheuristics, vol. 272, 2019. ISBN 978-3-319-91085-7.

[11] Deng, W., J. Xu, and H. Zhao. "An improved ant colony optimization algorithm based on hybrid strategies for scheduling problem." IEEE Access, vol. 7, pp. 20281–20292, 2019. doi: 10.1109/ACCESS.2019.2897580.

[12] Bansal, J. C. "Particle Swarm Optimization." In: Bansal J., Singh P., Pal N. (eds) Evolutionary and Swarm Intelligence Algorithms. Studies in Computational Intelligence, vol. 779. Springer, Cham, 2019. doi: 10.1007/978-3-319-91341-4_2.

[13] Chen, Mingli, Barry Van Veen, and Ronald Wakai. "Linear minimum mean-square error filtering for evoked responses: application to fetal MEG." IEEE Transactions on Biomedical Engineering 53 (2006): 959–963. doi: 10.1109/TBME.2006.872822.

[14] Weik, Martin H. "Minimum mean-square error restoration." In: Computer Science and Communications Dictionary. Springer, Boston, MA, 2017.

[15] Ciochină, Silviu, Constantin Paleologu, and Jacob Benesty. "An optimized NLMS algorithm for system identification." Signal Processing 118 (2016): 115–121. doi: 10.1016/j.sigpro.2015.06.016.

[16] Dogariu, Laura-Maria, et al. "A connection between the Kalman filter and an optimized LMS algorithm for bilinear forms." Algorithms 11.12 (2018): 211. doi: 10.3390/a11120211.

[17] Lopes, Paulo A. C. "Bayesian step least mean squares algorithm for Gaussian signals." IET Signal Processing 14.8 (2020): 506–512. doi: 10.1049/iet-spr.2020.0058.

[18] Luo, Lei, and Antai Xie. "Steady-state mean-square deviation analysis of improved ℓ0-norm-constraint LMS algorithm for sparse system identification." Signal Processing (2020): 107658. doi: 10.1016/j.sigpro.2020.107658.

[19] Rusu, A., S. Ciochină, and C. Paleologu. "On the step-size optimization of the LMS algorithm." 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 2019, pp. 168–173. doi: 10.1109/TSP.2019.8768842.

[20] Sengupta, Saptarshi, Sanchita Basak, and Richard Alan Peters. "Particle Swarm Optimization: a survey of historical and recent developments with hybridization perspectives." Machine Learning and Knowledge Extraction 1.1 (2019): 157–191. doi: 10.3390/make1010010.

[21] Diniz, Paulo S. R. "Introduction to adaptive filtering." Adaptive Filtering. Springer, Cham, 2020. 1–8. doi: 10.1007/978-3-030-29057-3_1.

[22] Haykin, Simon O. Adaptive Filter Theory. 5th edition. Prentice-Hall, Upper Saddle River, NJ.

[23] Rajni and Akash Tayal. "Step size optimization of LMS algorithm using particle swarm optimization algorithm in system identification." IJCSNS International Journal of Computer Science and Network Security 13.6 (June 2013).

[24] Ochoa Ortiz-Zezzatti, Alberto, Gilberto Rivera, Claudia Gómez-Santillán, and Benito Sánchez Lara. Handbook of Research on Metaheuristics for Order Picking Optimization in Warehouses to Smart Cities. IGI Global, April 2019, p. 192. doi: 10.4018/978-1-5225-8131-4.

[25] Sinha, R., A. Choubey, S. K. Mahto, and P. Ranjan. "Quantum behaved particle swarm optimization technique applied to FIR-based linear and nonlinear channel equalizer." In: Bhatia S., Tiwari S., Mishra K., Trivedi M. (eds) Advances in Computer Communication and Computational Sciences. Advances in Intelligent Systems and Computing, vol. 759. Springer, Singapore, 2019. doi: 10.1007/978-981-13-0341-8_4.

[26] Kumar, Dinesh, et al. "A holistic survey on disaster and disruption in optical communication network." Recent Advances in Electrical & Electronic Engineering 13.2 (2020): 130–135. doi: 10.2174/2352096512666190215141938.

[27] Poongodi, M., et al. "Prediction of the price of Ethereum blockchain cryptocurrency in an industrial finance system." Computers & Electrical Engineering 81 (2020): 106527. doi: 10.1016/j.compeleceng.2019.106527.

[28] Rathee, Geetanjali, et al. "A trust management scheme to secure mobile information centric networks." Computer Communications 151 (2020): 66–75. doi: 10.1016/j.comcom.2019.12.024.

[29] Sharma, Ashutosh, et al. "A Secure, Energy- and SLA-Efficient (SESE) e-healthcare framework for quickest data transmission using cyber-physical system." Sensors 19.9 (2019): 2119. doi: 10.3390/s19092119.

Received: 2020-11-05
Accepted: 2020-12-01
Published Online: 2021-02-23

© 2020 Qianhua Ling et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
