Abstract

Conventional optimization methods are not efficient enough for many naturally complicated optimization problems. Nature-inspired metaheuristic algorithms can therefore be employed as a new class of solvers for these types of optimization problems. In this paper, an optimization algorithm is proposed that is capable of estimating the expected quality of different locations and of tuning its exploration-exploitation trade-off to the location of each individual. A novel particle swarm optimization algorithm is presented that implements conditioning learning behavior, so that the particles are led to perform a natural conditioning behavior on an unconditioned motive. In the problem space, particles are classified into several categories, so that a particle lying within a low-diversity category tends to move towards its best personal experience, whereas a particle in a high-diversity category tends to move towards the global optimum of that category. The idea of the birds' sensitivity to their flying space is also utilized to increase the particles' speed in undesirable spaces so that they leave those spaces as soon as possible. In desirable spaces, by contrast, the particles' velocity is reduced so that the particles have more time to explore their environment. In the proposed algorithm, the birds' instinctive behavior is implemented to construct an initial population randomly or chaotically. Experiments comparing the proposed algorithm with state-of-the-art methods show that it is among the most efficient and appropriate methods for solving static optimization problems.

1. Introduction

Optimization is a subfield of artificial intelligence [1–10] in which a raised problem is transformed into an optimization function whose random initial solution is improved through the subsequently designed steps [1, 2]. Since there is more than one solution to a problem, we can obtain the best response to an arbitrary problem using mathematical global optimization tools [11]. In order to determine the best answer in local optimization, some factors should be taken into account, including the solution, the amount of permissible error, and the problem itself [12–14], as in finding the best work of art, the most beautiful landscape, or the most pleasant piece of music. Some popular stochastic algorithms, such as naturally motivated and population-based ones, emulate problem-solving strategies found in nature, including the race of creatures whose main goal of survival has caused them to evolve into several forms. Swarm intelligence (SI) is an artificial intelligence technique based on collective behavior in decentralized, self-organized systems, which usually consist of a number of simple agents interacting locally with each other and with their environment. In these algorithms, although there is no central control over the agents' behaviors, a collective behavior often emerges from the agents' local interactions.

Pavlov, a Russian physiologist, observed that dogs salivated whenever they saw someone who had fed them before, even if that person carried no food the next time. Based on this observation, he proposed conditioning learning, in which an animal learns to associate a reward or punishment with a natural stimulus. In this paper, that theory is used to move swarms with high spatial distribution diversity towards the global optimum and swarms with a low level of diversity towards the local optima in the problem space. In addition, in order to turn the proposed algorithm into an improved version of the PSO algorithm, some other ideas related to the instinctive behavior of birds and their speed are considered.

There are some defects in the original PSO algorithm, including the following: first, it is weak in creating the random initial population; second, the quality of the problem space is not considered, and the speed of the particles is not adjusted to that quality; third, PSO wastes time reaching an optimal solution by choosing a point between the local and global optima. These drawbacks are all resolved in this work using the instinctive conditioning behavior of birds. The rest of this paper is organized as follows. In Section 2, a summary of the related previous works is outlined. The proposed algorithm is presented in Section 3. Simulation results and their consequences are provided in Section 4, and Section 5 deals with the conclusions and future works.

2. Literature

Naturally motivated metaheuristic optimization algorithms fall into two groups. The first is single-solution-based algorithms, which gradually improve a single random solution. The second group, which is more popular in the literature, is called multisolution-based algorithms; this group, on which we focus, offers multiple solutions that are gradually enhanced as a whole.

One of the best-known evolutionary algorithms, proposed by Holland [15] in 1975, is the genetic algorithm (GA), a random evolutionary optimization algorithm inspired by the evolution of creatures in nature. Thanks to its strong optimization performance, it serves as a general-purpose optimizer. The algorithm begins with an initial population of chromosomes (potential solutions), each usually encoded as a binary vector. A relevant fitness function then measures the merit of each chromosome, and a new solution set [16] is produced by choosing the best chromosomes and entering them into a mating stage, where they are crossed over and mutated.
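As an illustration, the selection-crossover-mutation cycle described above can be sketched as a minimal binary GA. The operator choices (tournament selection, one-point crossover, bit-flip mutation) and all parameter values below are illustrative assumptions, not the settings of any method compared in this paper.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=20, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02):
    """Minimal binary GA: selection, one-point crossover, bit-flip mutation."""
    # Random initial population of binary chromosomes.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random chromosomes mates.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:
                cut = random.randrange(1, n_bits)   # one-point crossover
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                      # bit-flip mutation
                children.append([b ^ 1 if random.random() < mutation_rate else b
                                 for b in c])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)       # keep the best-so-far
    return best

# Toy fitness: maximize the number of 1-bits ("one-max").
solution = genetic_algorithm(fitness=sum)
```

Keeping the best-so-far chromosome outside the mating pool is a simple form of elitism; without it, crossover and mutation can lose the best solution found.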

As in other related articles, the optimization problems of this work can be cast as cost function minimizations. Using the instinctive and collective behavior of flying birds, a naturally motivated optimization algorithm is provided whose mechanism is similar to that of the particle swarm optimization algorithm. The conditioning learning behavior of animals, which is the basis of this article and is a type of associative learning, is expected to yield a mechanism able to solve complex problems. The proposed algorithm offers some useful advantages. One advantage of the proposed PSO version is that it utilizes a beneficial collective behavior inspired by nature; another is that, in comparison with other optimization algorithms, many of its specific parameters can be set automatically without significant loss of performance. A drawback, however, is that a problem ill-suited to the original PSO is most likely ill-suited to the proposed algorithm as well.

The social behavior of birds searching for food was the motivation for proposing a population-based general-purpose optimization algorithm in 1995 [17], namely, PSO, a computation-oriented random intelligence optimization algorithm. Many advantages, such as computational efficiency, its search mechanism, a simple concept, and ease of implementation, have made SI-based algorithms helpful tools in optimization applications. Each particle in PSO represents a population member with a small random mass that moves with a certain speed and acceleration towards better behavior. A particle is therefore a solution in a multidimensional space which sets its position in the search space based on the best position it has reached by itself (pbest), the best position reached by the swarm (gbest) during the search process, and its own speed. SI-based methods have been used in some applications of the PSO algorithm [18–20] to calculate the trajectory in the binary search space for solving the knapsack problem (KP). Some problems can also be solved using a combination of PSO with guided local search and concepts derived from the GA [21]. A binary PSO algorithm employing the mutation operator has also been proposed to solve the KP.
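For reference, the pbest/gbest mechanism described above can be sketched as a minimal standard PSO for minimization; the inertia weight and acceleration coefficients below are common textbook choices, not the parameters used in the cited works.

```python
import random

def pso(f, dim, min_v, max_v, pop_size=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO: each particle is pulled towards its own best position
    (pbest) and the best position found by the whole swarm (gbest)."""
    pos = [[random.uniform(min_v, max_v) for _ in range(dim)]
           for _ in range(pop_size)]
    vel = [[0.0] * dim for _ in range(pop_size)]
    pbest = [p[:] for p in pos]
    pbest_cost = [f(p) for p in pos]
    g = min(range(pop_size), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(pop_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the new position into the search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], min_v), max_v)
            cost = f(pos[i])
            if cost < pbest_cost[i]:              # update personal memory
                pbest[i], pbest_cost[i] = pos[i][:], cost
                if cost < gbest_cost:             # update swarm memory
                    gbest, gbest_cost = pos[i][:], cost
    return gbest, gbest_cost

# Minimize the sphere function; its optimum is the origin with cost 0.
best, cost = pso(lambda x: sum(v * v for v in x), dim=5, min_v=-10, max_v=10)
```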

Optimization algorithms motivated by the behavior of bees have produced SI algorithms different from PSO; the most well-known is the artificial bee colony (ABC) optimization algorithm, a prevalent optimization algorithm developed from the SI foraging behavior of bees [22]. The ABC algorithm models an artificial bee colony containing three types of bees: worker (employed), onlooker, and scout bees. Onlooker bees wait in a dance area and choose a food source, worker bees visit food sources that were previously recognized, and scout bees search randomly for new food sources. The location of a food source represents a possible solution to the optimization problem, and the quality of that solution depends on the quantity of nectar at the source. A virtual swarm of bees moves randomly in the two-dimensional search space, and when a target solution is found, the bees interact with each other. Several methods are derived from the original ABC. An ABC variant for the job-shop scheduling problem (JSSP) has been presented [23], in which the onlooker behavior is modified and SI is used to convert continuous values to binary ones. Also, the combinatorial ABC (CABC) algorithm has been presented [24] with discrete coding for the traveling salesman problem (TSP).
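The worker/onlooker/scout division can be sketched as follows; this is a generic minimal ABC for a minimization problem with assumed parameter values, not the exact variant of [22].

```python
import random

def abc_optimize(f, dim, lo, hi, colony=20, iters=200, limit=20):
    """Minimal ABC sketch: worker bees refine known food sources, onlookers
    pick sources in proportion to quality, and scouts replace sources
    abandoned after `limit` failed improvement trials."""
    sources = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(colony)]
    costs = [f(s) for s in sources]
    trials = [0] * colony

    def try_neighbor(i):
        # Perturb one coordinate towards/away from a random partner source.
        k = random.choice([j for j in range(colony) if j != i])
        d = random.randrange(dim)
        cand = sources[i][:]
        cand[d] += random.uniform(-1, 1) * (sources[i][d] - sources[k][d])
        cand[d] = min(max(cand[d], lo), hi)
        c = f(cand)
        if c < costs[i]:
            sources[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(colony):                   # worker (employed) bee phase
            try_neighbor(i)
        total = sum(1.0 / (1.0 + c) for c in costs)   # assumes costs >= 0
        for _ in range(colony):                   # onlooker bee phase (roulette)
            r, acc = random.uniform(0, total), 0.0
            for i in range(colony):
                acc += 1.0 / (1.0 + costs[i])
                if acc >= r:
                    try_neighbor(i)
                    break
        for i in range(colony):                   # scout bee phase
            if trials[i] > limit:
                sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                costs[i], trials[i] = f(sources[i]), 0
    b = min(range(colony), key=lambda i: costs[i])
    return sources[b], costs[b]

best, cost = abc_optimize(lambda x: sum(v * v for v in x), dim=3, lo=-5, hi=5)
```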

Another algorithm, namely, the bees algorithm (BA), introduced on a mathematical function [25], uses a D-dimensional bee vector corresponding to the problem variables, named a candidate solution here, which represents a visited site (food source) possessing a certain fitness value. In BA, exploration and exploitation are balanced by scout bees, which search randomly for new sites, and worker bees, which repeatedly search site neighborhoods for higher fitness values. The best bees are those with the best fitness values, and elite sites are the sites they visit; the algorithm assigns the majority of the bees to search the neighborhoods of the best selected sites. Function optimization, task scheduling [26], and binary data clustering [27] are examples of BA applications.

The harmony search algorithm (HSA) [28] is a metaheuristic that mimics the search for pleasing harmonies during the improvisation of jazz music in a natural musical performance. Improvisation searches for harmony in a piece of jazz music (a proper state) as judged by an aesthetic standard, which is equivalent to an optimization process searching for a global solution (a proper state) as judged by a specified objective function.

Motivated by the behavior of imperialists competing with each other to take over colonies, an evolutionary optimization technique called the imperialist competitive algorithm (ICA) has been introduced. This algorithm also begins with an initial population, whose members are classified into two groups: colonies and imperialists. The colonies are distributed among the imperialists according to imperialist power. The power of any state is inversely related to its cost, so the imperialists with more power are more dominant [29]. The interaction between imperialist powers and their colonies gradually changes the culture of the colonies until they resemble their ruling imperialist. This is named the attraction policy, which means that the colonies move towards their imperialists, as executed by imperialist countries after the nineteenth century. In recent years, many other algorithms aimed at improving famous optimization algorithms have been presented, such as the particle bee optimization algorithm (PBOA) [30], novel particle swarm optimization algorithm (NPSOA) [31], cuckoo search optimization algorithm (CSOA or CS) [32], differential search optimization algorithm (DSOA or DSA) [33], and bird mating optimization algorithm (BMOA or BMO) [34]. In order to improve the PSO algorithm [35], many algorithms using a dynamic multipopulation method have been presented, including the sinusoidal differential evolution optimization algorithm (SDEOA) [36], joint operation optimization algorithm (JOOA) [37], and dynamic multiswarm particle swarm optimizer with cooperative algorithm (DMSPSOCA) [35].

In order to make different algorithms more efficient, parameters and ideas from other methods can be borrowed. For instance, the CSOA can be enhanced by employing chaos parameters [38]. In 2015, the PSO algorithm was enhanced with cognitive learning mechanisms for solving an optimization problem [39]. In 2016, the ant colony optimization algorithm (ACOA) was integrated with the GA, and a new optimization algorithm was introduced [40]. One study used the GA to discover the solution closest to the best one in order to deal with nonlinear multimodal optimization problems [41]. There are also other methods among the latest comparable optimizers [31, 42–45].

3. Adaptive PSO

The proposed method is presented in this section. First, the population is initialized. After that, the population is randomly partitioned into a set of subpopulations. Also, the problem space is divided into several virtual subspaces, each of which is a hypercube: each of the D dimensions is partitioned into θ equal-size slices, so there are θ^D subspaces. The particles then move with lower speed in the more valuable subspaces. Moreover, each subpopulation uses a special set of movement coefficients (C1 and C2), and this set adaptively changes during optimization. Finally, the best solution produced during optimization is taken as the optimal solution found by the proposed optimization algorithm, also called the adaptive particle swarm optimization algorithm (APSOA).
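Since the mapping from a position to its hypercube is not spelled out in closed form above, the following sketch shows one natural encoding under the θ-slices-per-dimension scheme; the mixed-radix indexing is an assumption.

```python
def subspace_index(x, min_v, max_v, theta):
    """Map a position x to the index of its virtual subspace (hypercube).

    Each dimension is cut into theta equal-size slices, so a D-dimensional
    space yields theta**D subspaces indexed 0 .. theta**D - 1.
    """
    index = 0
    for d, (xd, lo, hi) in enumerate(zip(x, min_v, max_v)):
        # Slice number of x along dimension d, clamped into [0, theta - 1].
        s = max(0, min(int((xd - lo) / (hi - lo) * theta), theta - 1))
        index += s * theta ** d          # mixed-radix (base-theta) encoding
    return index

# A 2-D space with theta = 4 has 4**2 = 16 subspaces.
idx = subspace_index([0.6, -0.2], min_v=[-1, -1], max_v=[1, 1], theta=4)
```

The clamping keeps particles sitting exactly on the upper bound inside the last slice instead of overflowing into a nonexistent subspace.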

The pseudocode of the APSOA is depicted in Algorithm 1. The variables PS, N, α, D, θ, MG, MaxV, MinV, C1, and C2 and the function F are inputs of this algorithm. Here, MaxV and MinV are the upper-bound and lower-bound vectors of the problem space, D is the number of dimensions, PS is the population size, and N is the number of subpopulations. Finally, F denotes the objective function. First, the algorithm calls the initialization function, whose pseudocode is depicted in Algorithm 2. It takes the variables PS, D, MaxV, MinV, and θ and the function F. It returns a population P, the best particle ĝ, and a sparse subspace rating vector R. R is an array whose key is a string. If it is called with a numeric string s as its key, it returns the value of the s-th subspace, where s is an integer in [1, θ^D]. As R is a sparse subspace rate array, a subspace that has no recorded value yet is assumed to have the value 1 by default. If R is called with "sum" as its key, it returns the summation of the values associated with all subspaces; hence R["sum"] equals θ^D at first. Each time the best particle of a subpopulation is located in the s-th subspace, the value of R[s] is increased by one unit; likewise, each time the best particle of the whole population is located in the s-th subspace, R[s] is increased by one unit. P is a population of PS individuals, where P(p) stands for the p-th individual in population P. Each individual in population P is an object containing the following fields:
x: the position of the individual
v: the velocity of the individual
b: the position of the best place met by the individual
s: the index of the subspace in which the individual is located
f: the fitness of the individual
fb: the fitness of the best memory of the individual
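The rating array R can be sketched as a small class. The dict-backed storage below is an implementation assumption, but the behavior (default rating of 1 and a "sum" key covering all θ^D subspaces) follows the description above.

```python
class SubspaceRating:
    """Sparse subspace rating array R: unseen subspaces default to a rating
    of 1, and R["sum"] returns the total over all theta**D subspaces,
    defaults included."""

    def __init__(self, n_subspaces):
        self.n = n_subspaces
        self.extra = {}                 # ratings accumulated above the default

    def __getitem__(self, key):
        if key == "sum":
            # Every subspace contributes at least 1, plus recorded increments.
            return self.n + sum(self.extra.values())
        return 1 + self.extra.get(int(key), 0)

    def increment(self, subspace):
        """Called when a subpopulation best or the global best lands here."""
        self.extra[subspace] = self.extra.get(subspace, 0) + 1

R = SubspaceRating(n_subspaces=16)    # e.g., theta = 4 in a 2-D space
R.increment(7)                        # a subpopulation best landed in subspace 7
R.increment(7)                        # ... and so did the global best
```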

APSOA: adaptive particle swarm optimization function
 Input:
  PS: population size
  N: number of subpopulations
  α: coefficient update rate
  D: problem size
  θ: fragmentation size
  MG: maximum generations of the algorithm
  MaxV: an array of D values; MaxVd is the maximum value in the domain of the dth dimension of the problem space
  MinV: an array of D values; MinVd is the minimum value in the domain of the dth dimension of the problem space
  C1: an array of N values; C1(i) is the first movement coefficient of the ith subpopulation
  C2: an array of N values; C2(i) is the second movement coefficient of the ith subpopulation
  F: a given objective function
  Output:
  ĝ: the best found particle
(1)[P, ĝ, R] = Initialize(PS, D, MaxV, MinV, θ, F)
(2)For i = 1 : MG
(2.1) For p = 1 : PS
(2.1.1)     n = ⌈pN/PS⌉, the index of the subpopulation of particle p
(2.1.2)     P(p) = Update(P(p), ĝ, D, C1(n), C2(n), θ, R, MaxV, MinV, F)
(2.1.3)     If P(p).fb < ĝ.f then
(2.1.4)      ĝ = a particle located at P(p).b with fitness P(p).fb
(2.2) s = the index of the subspace in which ĝ is located
(2.3) If R has no recorded value for key s then
(2.4)  R[s] = 1
(2.5) R[s] = R[s] + 1
(2.6) R["sum"] = R["sum"] + 1
(2.7) π = a new array of N exploration flags
(2.8) For n = 1 : N
(2.8.1)     l = the best particle of the nth subpopulation
(2.8.2)     s = the index of the subspace in which l is located
(2.8.3)     If R has no recorded value for key s then R[s] = 1;
(2.8.4)     R[s] = R[s] + 1
(2.8.5)     R["sum"] = R["sum"] + 1
(2.8.6)     δ = the spatial diversity of the nth subpopulation
(2.8.7)     π(n) = true if δ is high (the subpopulation should explore), false otherwise
(2.9) For n = 1 : N
(2.9.1)     [C1(n), C2(n)] = UpdateCoefficients(C1(n), C2(n), α, π(n))
Initialize: initialization function
 Input:
PS: population size
D: problem size
MaxV: an array of D values; MaxVd is the maximum value in the domain of the dth dimension of the problem space
MinV: an array of D values; MinVd is the minimum value in the domain of the dth dimension of the problem space
θ: fragmentation size
F: a given objective function
 Output:
P: a population
ĝ: the global best particle
R: a sparse subspace rate array
(1)R = an empty sparse subspace rate array, so that every subspace rate defaults to 1 and R["sum"] = θ^D
(2)For p = 1 : PS
(2.1) P(p) = a new individual
(2.2) For d = 1 : D
(2.2.1)     r = a random or chaotic value from the uniform distribution in the interval [0, 1]
(2.2.2)     P(p).x(d) = MinVd + r·(MaxVd − MinVd)
(2.2.3)     P(p).b(d) = P(p).x(d)
(2.2.4)     r = a random or chaotic value from the uniform distribution in the interval [0, 1]
(2.2.5)     P(p).v(d) = (2r − 1)·(MaxVd − MinVd)
(2.3) P(p).s = the index of the subspace in which P(p) is located
(2.4) P(p).f = F(P(p).x)
(2.5) P(p).fb = P(p).f
(3)ĝ = the particle in P with the lowest fitness value
(4)s = the index of the subspace in which ĝ is located
(5)R[s] = R[s] + 1
(6)R["sum"] = R["sum"] + 1
(7)Return [P, ĝ, R]

After the proposed method (depicted in Algorithm 1) calls the initialization function, it iterates a loop MG times (statement 2 in Algorithm 1). Each time, all population individuals are updated using an individual update function, depicted in Algorithm 3 and explained in the following paragraphs (statement 2.1 in Algorithm 1). After updating all population individuals, the global best, i.e., ĝ, is updated, and then the rating of the subspace where the global best is located is updated (statements 2.2 to 2.6 in Algorithm 1). In the following step, the ratings of the subspaces where the local bests of the subpopulations are located are updated (statement 2.8 in Algorithm 1). Finally, the coefficients of each subpopulation are updated using the coefficient update function, depicted in Algorithm 4 and explained in the following paragraphs (statement 2.9 in Algorithm 1).

Update: velocity and location update function
 Input:
q: an individual or particle
ĝ: the global best particle
D: problem size
C1: first coefficient
C2: second coefficient
θ: fragmentation size
R: a sparse subspace rate array
MaxV: an array of D values; MaxVd is the maximum value in the domain of the dth dimension of the problem space
MinV: an array of D values; MinVd is the minimum value in the domain of the dth dimension of the problem space
F: a given objective function
 Output:
q: the updated individual or particle
(1)μ = R["sum"]/θ^D, the average subspace rating
(2)For d = 1 : D
(2.1) [r1, r2] = two random or chaotic values of the uniform distribution in the interval [0, 1]
(2.2) λ = μ/R[q.s], the speed coefficient (λ > 1 in low-rated subspaces, λ < 1 in high-rated ones)
(2.3) q.v(d) = λ·(q.v(d) + C1·r1·(q.b(d) − q.x(d)) + C2·r2·(ĝ.x(d) − q.x(d)))
(2.4) q.x(d) = q.x(d) + q.v(d)
(2.5) If q.x(d) > MaxVd then q.x(d) = MaxVd
(2.6) If q.x(d) < MinVd then q.x(d) = MinVd
(3)q.s = the index of the subspace in which q is located
(4)q.f = F(q.x)
(5)If q.f < q.fb then
(5.1) q.fb = q.f
(5.2) q.b = q.x
UpdateCoefficients: coefficient update function
 Input:
C1: first coefficient
C2: second coefficient
α: coefficient update rate
π: an exploration flag
 Output:
C1: first coefficient
C2: second coefficient
(1)If π then
(1.1) C1 = (1 − α)·C1
(1.2) C2 = (1 + α)·C2
  else
(1.3) C1 = (1 + α)·C1
(1.4) C2 = (1 − α)·C2

In the individual update function, depicted in Algorithm 3, the only section different from the original PSO is statement 2.2, where a coefficient (denoted by λ) is computed and multiplied into the movement equation in the following statement. It speeds up the individuals in the useless subspaces and slows them down in the useful subspaces. The coefficient update function, depicted in Algorithm 4, adaptively changes the coefficient values of each subpopulation.
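The two adaptive rules can be sketched as follows. The exact formulas are not given in closed form above, so both the speed coefficient (mean rating over the current subspace's rating, capped) and the multiplicative coefficient update are hedged assumptions that only reproduce the stated qualitative behavior.

```python
def speed_factor(R, subspace, n_subspaces, lam=2.0):
    """Sketch of the speed coefficient of Algorithm 3: particles move faster
    in subspaces rated below the average (useless regions are crossed
    quickly) and slower in well-rated ones (useful regions are exploited).
    The cap `lam` is an assumed parameter."""
    mean_rating = R["sum"] / n_subspaces
    # ratio > 1 in below-average subspaces, < 1 in above-average ones.
    return min(mean_rating / R[subspace], lam)

def update_coefficients(c1, c2, alpha, explore):
    """Sketch of the coefficient update of Algorithm 4: an exploring
    subpopulation weights the global term (c2) more and the personal term
    (c1) less; otherwise the reverse.  The multiplicative form is an
    assumption."""
    if explore:
        return (1 - alpha) * c1, (1 + alpha) * c2
    return (1 + alpha) * c1, (1 - alpha) * c2

# In a subspace rated at exactly the average level, the speed is unchanged.
factor = speed_factor({"sum": 16, 3: 1}, subspace=3, n_subspaces=16)
```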

4. Implementation of Experimentations

In this section, simulation results of the proposed method are provided in several parts and compared with those of similar methods. The results are presented in four parts; in each part, different modern methods are compared on different problems, including a real-world industrial application.

In the first three parts, the average value and standard deviation of the cost function error, i.e., the difference between the minimum value of the ith objective function found by an algorithm in a run and the actual optimal value of that objective function, are calculated to evaluate the performance of each algorithm.
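Concretely, for one algorithm on one objective function, the two reported statistics can be computed as follows (all names are illustrative):

```python
import statistics

def error_stats(found_costs, optimal_cost):
    """Mean and standard deviation of the error e = F_found - F_optimal
    over several independent runs of one algorithm on one function."""
    errors = [c - optimal_cost for c in found_costs]
    return statistics.mean(errors), statistics.pstdev(errors)

# Three hypothetical runs of one algorithm on a function with optimum 1.0.
mean_e, std_e = error_stats([1.5, 2.0, 2.5], optimal_cost=1.0)
```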

4.1. Experimental Results: CEC 2009

Here, the problems defined by the CEC 2009 benchmark [46] are listed in Table 1. Comparing the performance of the proposed method when a chaotic number generator (CNG) is used against its performance when a random number generator (RNG) is used (on the objective functions of the CEC 2009 benchmark), we concluded that slightly better results are obtained with the CNG. We used only the logistic map [47] (as CNG) and the uniform distribution (as RNG). Therefore, although both could be used throughout this paper, all reported experiments use the logistic map [47] as CNG.
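A CNG of this kind replaces the uniform random draws with iterates of the logistic map; the control parameter 4.0 and the seed below are assumed values at which the map behaves chaotically on (0, 1).

```python
def logistic_map(x0=0.7, mu=4.0):
    """Chaotic number generator based on the logistic map
    x_{n+1} = mu * x_n * (1 - x_n).  The orbit stays in (0, 1) for
    mu = 4.0 and seeds avoiding the fixed points 0, 0.25, 0.5, 0.75, 1."""
    x = x0
    while True:
        x = mu * x * (1.0 - x)
        yield x

cng = logistic_map()
draws = [next(cng) for _ in range(5)]   # deterministic but irregular values
```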

The proposed method in this part is compared with the following algorithms: GA [15], differential evolution (DE) [48], PSO [17], BA [25], PBOA [30], NPSO [31], moderate-random-search strategy PSO (MRPSO) [49], ensemble of mutation strategies and control parameters with DE (EPSDE) [50], cooperative coevolution inspired ABC (CCABC) [51], and firefly algorithm (FA) [52]. The results of each method are derived with the same parameters used in its original publication, but the shared parameters are fixed across all methods, including the population size and the number of fitness evaluations, both set as functions of the problem size D, which is always 50. The target objective cost function of each problem is set with respect to the optimal target value of its objective function. We use mean and std. dev as the two reporting criteria, respectively defined as the average and the standard deviation of the best costs of the particles over different runs on all of the 26 functions shown in Table 1. Table 2 shows the detailed results for the assessment of the methods on all of the 26 functions shown in Table 1 and validates them using the Friedman test with a p value of 3.12E − 03. The results presented in Table 2 are summarized in Table 3.

The results show that, in almost all of the functions in the CEC 2009 benchmark, the proposed method converges to the optimum in terms of the particles' average cost. Since the proposed algorithm fully converges to the optimal point, the particles have a zero standard deviation at the end of the assessment, which means that on those functions the proposed method, together with some other methods, presents the best possible performance. The results also show that there are some functions, such as F8, F12, and F18, in which only the proposed method achieves the optimal point, which supports the superiority of the proposed method. The Friedman test confirms that the differences among the results are statistically significant and not obtained by chance.

4.2. Experimental Results: CEC 2005

The proposed method in this part is compared with the following algorithms: the modified bat algorithm hybridized with differential evolution (MBADE) [10], rain optimization algorithm (ROA) [42], CS [32], teaching-learning-based optimization (TLBO) algorithm [53], DSA [33], and BMO algorithm [34]. In this part, we compare the proposed method on the 25 problems defined by the CEC 2005 benchmark [54], which are summarized in Table 4. The results of each method are derived with the same parameters used in its original publication, but the shared parameters are fixed across all methods, including the population size and the number of fitness evaluations, both set as functions of the problem size D, which is always 50. The target objective cost function of each problem is set with respect to the optimal target value of its objective function. We use mean and std. dev as the two reporting criteria, respectively defined as the average and the standard deviation of the best costs of the particles over different runs on all of the 25 functions shown in Table 4. Table 5 shows the detailed results for the assessment of the methods on all of the 25 functions shown in Table 4 and validates them using the Friedman test with a p value of 7.92E − 05. The results presented in Table 5 are summarized in Table 6.

According to Table 5, it is easy to conclude that the proposed algorithm has better quality than the other methods in almost all functions; it is always among the top three methods. Table 5 also shows that the proposed method has the best performance in 16 functions.

The proposed approach shares the best performance by reaching the zero-error global optimum together with the CS algorithm in F1 and with DSA in F9. There are only 9 functions in which the proposed method does not have the best performance. We can also conclude that the proposed method has a desirable ability to produce satisfactory diversity in the problem space. The statistical test shows that the results of the proposed method are among the best values across the different methods.

Finally, the proposed algorithm complexity according to the specified guidelines of CEC 2005 [54] is shown in Table 7.

All the mentioned methods are tested with different numbers of fitness evaluations, and the results are shown in Figure 1, where the problem size D is equal to 30. We can see from Figure 1 that, regardless of the changes in the number of fitness evaluations, the proposed algorithm converges to a better solution, and it also needs fewer fitness evaluations than the other methods to converge to a solution of the same quality in most of the fitness functions. This test also shows that the proposed algorithm is one of the best methods for all of the different numbers of fitness assessments and meets the best cost among them. In addition, when the number of fitness assessments is increased, better results are obtained.

In Figure 2, the results of the proposed method in terms of several criteria for different dimension sizes are presented for the first six objective functions in the CEC 2005 benchmark. The population size in this experiment is 20. Figure 2 includes the best and mean costs of the population members, their standard deviation, and the execution time when applying the APSOA method to the benchmark functions F1–F6 for different dimension sizes. From the results in Figure 2, we can state that increasing the dimension size makes the problem more complicated, which, in turn, causes the best cost value to rise.

4.3. Experimental Results: CEC 2010 and Real-World Problems

In this part, we compare our proposed method with other recently proposed metaheuristic methods. Here, we use the benchmark functions of the test series CEC 2010 [10] and set the population size to 40. The 20 famous objective functions of the CEC 2010 benchmark [10] are used as F1–F20 for the assessment in this section (F1–F3 are separable, F4–F8 are single-group m-nonseparable, F9–F13 are D/(2m)-group m-nonseparable, F14–F18 are D/m-group m-nonseparable, and F19–F20 are fully nonseparable). The problem size D is 1,000 throughout this section; besides, m, i.e., the number of variables in each nonseparable subcomponent, is 50 here. Also, a set of 4 real-world problems is used as F21–F24 for the assessment in this section. The first two real-world problems, i.e., F21 and F22, are, respectively, problem number 1 and problem number 7 in CEC 2011 [6]. F23 and F24 are the problem of the linear equation system [7] and the problem of the polynomial fitting system [8]. It is worth mentioning that 51 independent runs are performed, and the results are averaged over them. The comparison is made with the following methods: SDEOA [36], JOOA [37], diversity neighborhood search-enhanced particle swarm optimization (DNSPSO) [9], and D-PSO-C [35].

According to the results presented in Table 8, except for function F13, where APSOA is not in the top three, APSOA is always among the top three methods, and for half of the problems it is the best at the end of the specified number of fitness function evaluations. The summary of the results presented in Table 8 is shown in Table 9, according to which the proposed APSOA exhibits the best performance among all methods. The results in Table 8 are validated by the Friedman test with a p value of 1.92E − 02.

4.4. Experimental Results: A Real-World Problem

Artificial intelligence is an appropriate candidate for solving many real-world electrical problems [55–71]. In this part, we solve a combined heat and power economic dispatch problem [6, 71] using our proposed method and different optimization algorithms. A particle is a vector of size 9, denoted by P = [pow1, pow2, pow3, pow4, pöw1, pöw2, hëat1, hëat2, heat1], where pow1–pow4 are the outputs of the four power-only units, (pöw1, hëat1) and (pöw2, hëat2) are the power and heat outputs of the two cogeneration power-heat units, and heat1 is the output of the heat-only unit. The cost function of the problem is the sum of the cost functions of the individual units, subject to the system power and heat demand constraints, with the unit cost functions defined as in [6, 71].

In addition, two other conditions should be met: the operating points of the two cogeneration power-heat units must lie within their valid work regions, which are depicted in Figure 3 for cogeneration power-heat unit numbers 1 and 2.
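When a metaheuristic such as APSOA is applied to a constrained dispatch problem of this kind, the demand and valid-region constraints are commonly folded into the cost by a penalty term. The paper does not state its constraint-handling scheme, so the following is only a generic sketch with an assumed penalty weight.

```python
def penalized_cost(fuel_cost, constraint_violations, penalty=1e6):
    """Generic penalty-based constraint handling: add a large penalty per
    unit of violation so infeasible particles rank behind feasible ones.
    Each entry of constraint_violations is g(x), where g(x) <= 0 means the
    constraint is satisfied."""
    return fuel_cost + penalty * sum(max(0.0, g) for g in constraint_violations)

# A particle meeting the power and heat demands exactly pays only its fuel
# cost; the hypothetical values below are for illustration only.
feasible = penalized_cost(100.0, [0.0, -2.0])     # no violation
infeasible = penalized_cost(100.0, [0.5])         # 0.5 units short of demand
```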

The population size of each optimization algorithm has been set according to the parameters defined in its original paper. For each method, the best solution in its population after the specified number of fitness evaluations, together with the related cost value, is reported; the same evaluation budget is used for every method so that the comparison is fair. For this problem, the solutions provided by the different optimization algorithms are shown in Table 10, where we can see that the proposed method has the best performance.

5. Conclusions and Future Works

Nature-inspired social and solitary behaviors have motivated numerous algorithms in different scientific studies, and these algorithms are usually successful and efficient. In this paper, the instinctive behaviors of birds are used to provide a more accurate, more target-oriented, and more controlled algorithm than the basic swarm algorithms. According to classical conditioning learning behavior, a model is presented in which a normal task based on a natural stimulant is implemented for each particle in the search space. In this model, a particle in a low-diversity category moves towards the local optimal point, while a particle in a high-diversity category moves towards the global optimum of its category.

An initial population is also generated according to the elite particles, based on the assumption that birds without sufficient energy will encounter flight problems. Another goal of the proposed algorithm is to give the particles more exploitation time in valuable spaces, which motivated us to reduce their velocity there, through changes in the velocity equation, and to increase it in undesirable spaces. The simulation results of the proposed method, provided in four parts, showed that it is an efficient and reliable algorithm on static functions compared with other algorithms. We conclude that our method finds more accurate solutions in a simpler and faster way and also performs better in industrial applications than prior methods.

For future work, we propose the following ideas based on what this paper accomplished: applying chaos theory to the initial population, studying quantum particles in the abovementioned algorithm, and applying the algorithm of this paper to dynamic optimization problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.