BY-NC-ND 3.0 license · Open Access · Published by De Gruyter, January 20, 2017

A Grain Output Combination Forecast Model Modified by Data Fusion Algorithm

Xiang Wan, Bing-Xiang Liu and Xing Xu

Abstract

To address the limited accuracy and generalization ability of single models, grain output models were built on large amounts of relevant data, exploiting the powerful non-linear mapping ability of the back-propagation (BP) neural network. Three grain output models were built by using the particle swarm optimization algorithm, the mind evolutionary algorithm, and the genetic algorithm, respectively, to optimize the BP neural network. A data fusion algorithm then modifies and fuses the outcomes of the different models to yield the combination-predicted outcome. When this combination model was used to predict the total grain output of China, the predicted value for 2015 exceeded the actual value by only about 0.0115%, far more accurate than any of the three single models. The experimental results verify the feasibility and validity of the combination model.

1 Introduction

Accurate and timely grain growth monitoring and yield forecasting play a significant role in guiding agricultural production and ensuring food security; they also support national policy formulation and the reasonable allocation of resources. Grain output prediction is a complex problem spanning agriculture, economics, and statistics, affected by climate, financial investment, resource allocation, and many other factors. Scholars at home and abroad have done much research on grain output prediction. De Wit and van Diepen used an ensemble Kalman filter to correct soil moisture data of four Western European countries from 1992 to 2000 and, combined with satellite remote sensing, successfully forecasted corn and wheat yields [4]. Simelton et al. used a linear mixed-effects model to analyze various factors and their correlation with grain output [12]. Lai used the Hodrick-Prescott (HP) filter and found an obvious cyclical fluctuation in Chinese grain production [7]. Liu and Yin used gray correlation analysis to measure the correlation between the dynamic change in Chinese grain production from 1999 to 2011 and a variety of factors affecting grain production [9]. Xu and Cao took the historical total grain yield of Jilin Province as the research object, eliminated secondary influencing factors by rough set theory, and then established a combined rough set and back-propagation (BP) neural network model [16].

The influencing factors of grain production are very complex, and most grain production prediction models so far have been established on the basis of statistics and agronomy. The effects of pesticide and chemical fertilizer consumption, sown acreage of grain, disaster area, agricultural machinery gross power, and available irrigation area have been studied using such single models. The complex factors interact through all kinds of linear and non-linear effects; however, no single model can put all of these factors into one research system. A combination model therefore becomes necessary. The point of combination forecasting is to integrate the information from single models in order to improve the stability, accuracy, and reliability of the resulting model [3].

In this paper, a grain output forecast model was built based on the BP neural network. The particle swarm optimization (PSO) algorithm, the mind evolutionary algorithm (MEA), and the genetic algorithm (GA) were used to optimize the connection weights and thresholds of the BP neural network, yielding the PSO-BP, MEA-BP, and GA-BP models, respectively. Then, by use of the data fusion algorithm proposed by Hu and Liu, the outcomes of the three single models are modified and fused together, producing the weights of the single models and, finally, the combination-predicted outcome [5].

2 BP Neural Network Model

2.1 BP Neural Network Theory

The BP neural network is a multilayer feedforward network trained by error back-propagation. The strength of the connections between neurons is encoded in the weights and thresholds; while the network trains on data, the weights and thresholds are adjusted to optimize those connection strengths and thereby improve the fit to the training samples and the accuracy of the network. The biggest advantage of the BP neural network is that it can approximate any non-linear function to arbitrary precision without knowing its mathematical expression exactly: driven by the training samples, the weights and thresholds are adjusted until the network reaches its training goal.

2.2 BP Neural Network Design

It was proved by Hecht-Nielsen that a three-layer neural network with enough hidden nodes can solve common problems. In this paper, relying on China National Bureau of Statistics grain output data from 1990 to 2014 [2], a three-layer BP neural network with a single hidden layer was built. Data from 1990 to 2009 were used for training, and data from 2010 to 2014 were held out as test data. The input layer neuron was the corresponding year, and the output was the annual grain yield of China.

Determining the hidden layer is the key to building a neural network successfully. By testing the number of hidden layer nodes one by one, the best number was found to be 5. The transfer and training functions were tansig, purelin, and trainlm; the number of training steps was 200; the training goal was 0.03; and the learning rate was 0.01. With these settings the BP neural network was built. As seen in Figure 1, training stopped at step 7.

Figure 1: Correlation of Network between Training Steps and Error Curve.
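As a concrete illustration, the 1-5-1 architecture described above (tansig hidden layer, purelin output, learning rate 0.01, weights initialized in [−0.5, 0.5]) can be sketched in Python with NumPy. This is a hypothetical stand-in for the paper's MATLAB network: plain gradient descent replaces the trainlm (Levenberg-Marquardt) training function, and the class name and toy data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

class ThreeLayerBP:
    """Hypothetical 1-5-1 BP network: tansig hidden layer, purelin output."""

    def __init__(self, n_hidden=5, lr=0.01):
        self.W1 = rng.uniform(-0.5, 0.5, (1, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.uniform(-0.5, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)   # tansig hidden layer
        return self.h @ self.W2 + self.b2         # purelin output layer

    def train_step(self, x, y):
        """One gradient-descent step on mean squared error."""
        y_hat = self.forward(x)
        err = y_hat - y
        # Backward propagation of the error through both layers
        gW2 = self.h.T @ err / len(x)
        gb2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)  # tanh derivative
        gW1 = x.T @ dh / len(x)
        gb1 = dh.mean(axis=0)
        for p, g in ((self.W2, gW2), (self.b2, gb2),
                     (self.W1, gW1), (self.b1, gb1)):
            p -= self.lr * g
        return float((err ** 2).mean())
```

In practice the years and yields would be normalized to a range such as [−1, 1] before training, since tanh saturates outside it.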

2.3 Comprehensive Performance Metrics of Models

The precision of a model is closely related to its errors. In order to compare the relative merits of different models, the following comprehensive performance metrics were used in this paper:

(1)  SSE = \sum_{t=1}^{N} (X_t - \hat{X}_t)^2,
(2)  RMSE = \sqrt{ \frac{1}{N} \sum_{t=1}^{N} (X_t - \hat{X}_t)^2 },
(3)  MAE = \frac{1}{N} \sum_{t=1}^{N} | X_t - \hat{X}_t |,
(4)  2\text{-norm} = \sqrt{ \sum_{t=1}^{N} (X_t - \hat{X}_t)^2 }.
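Eqs. (1)-(4) translate directly into NumPy; the function name forecast_metrics below is an illustrative choice, not from the paper.

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """Compute SSE, RMSE, MAE, and the 2-norm of the forecast errors."""
    e = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
    n = e.size
    sse = float(np.sum(e ** 2))            # Eq. (1)
    rmse = float(np.sqrt(sse / n))         # Eq. (2)
    mae = float(np.sum(np.abs(e)) / n)     # Eq. (3)
    two_norm = float(np.sqrt(sse))         # Eq. (4)
    return {"SSE": sse, "RMSE": rmse, "MAE": mae, "2-norm": two_norm}
```

Note that the 2-norm is simply the square root of the SSE, which is consistent with the values reported in Section 2.4 (e.g. 844.05² × 20 ≈ 3774.73²).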

2.4 Performance Analysis of the BP Neural Network

The 2-norm of the training data was 3774.7259, with RMSE=844.05 and MAE=699.13; the 2-norm of the testing data was 8721.3483, with RMSE=3900.31 and MAE=3502.02. Comparative analysis showed that the network lacked prediction accuracy and that its generalization ability was weak; it therefore had to be improved.

The weights and thresholds of the BP neural network are initialized as random numbers in the interval [−0.5, 0.5]; however, weights and thresholds have a great influence on network training. Without optimization by other algorithms, they are liable to fall into local extrema. These problems may prevent the network from finding the global optimum, degrade its generalization ability, and lower its precision to the point of having no practical value.

3 Improvement and Optimization of the BP Neural Network Model

3.1 PSO Algorithm to Optimize the Weights and Thresholds of the BP Neural Network

The guiding idea of PSO is to simulate the foraging behavior of bird flocks; iteration is used to search for optimal solutions. The best point found by each particle is Pbest, and the global best point of the population is Gbest. In each iteration, a particle updates its speed and position using Gbest and Pbest, as shown below:

(5)  V_{id}^{k+1} = w V_{id}^{k} + c_1 r_1 ( P_{id}^{k} - X_{id}^{k} ) + c_2 r_2 ( P_{gd}^{k} - X_{id}^{k} ),
(6)  X_{id}^{k+1} = X_{id}^{k} + V_{id}^{k+1},

where w is the inertia weight; d = 1, 2, …, D; i = 1, 2, …, n; k is the current iteration number; V_id is the speed of particle i in dimension d; c_1 and c_2 are non-negative constants called acceleration factors; and r_1 and r_2 are random numbers in the interval [0, 1].

Based on PSO, scholars have made many improvements. In order to balance global search ability against local search ability, Shi and Eberhart introduced a linearly decreasing inertia weight into the PSO algorithm [11]. By improving and dynamically adjusting the acceleration factors, Li et al. maintained generation diversity in the early phase and improved algorithm performance as the iteration approaches its end [8]. The decreasing inertia weight and acceleration factor formulas used in this paper are

(7)  w(k) = w_{start} - ( w_{start} - w_{end} ) ( k / T_{max} )^2,
(8)  c_1 = c_{start} - ( c_{start} - c_{end} ) ( T_{max} - k ) / T_{max},   c_2 = 4 - c_1,

where w_start is the initial inertia weight; w_end is the inertia weight at the maximum generation; c_1 is the individual-extreme acceleration factor; c_2 is the population-extreme acceleration factor; c_start and c_end are the initial and final values of the individual-extreme acceleration factor; k is the current generation; and T_max is the maximum number of generations. In the hybrid algorithm, w_start = 0.9, w_end = 0.4, c_start = 4, and c_end = 0.2.
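A single PSO update combining Eqs. (5)-(8) can be sketched as follows. The reconstruction assumes c_2 = 4 − c_1 (so the two acceleration factors always sum to 4); the function name pso_step and the use of a seeded NumPy generator are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(X, V, pbest, gbest, k, T_max,
             w_start=0.9, w_end=0.4, c_start=4.0, c_end=0.2):
    """One velocity/position update for all particles (rows of X)."""
    # Eq. (7): quadratically decreasing inertia weight
    w = w_start - (w_start - w_end) * (k / T_max) ** 2
    # Eq. (8): time-varying acceleration factors, assuming c1 + c2 = 4
    c1 = c_start - (c_start - c_end) * (T_max - k) / T_max
    c2 = 4.0 - c1
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    # Eqs. (5)-(6): velocity update toward pbest and gbest, then move
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    return X + V_new, V_new
```

In a full PSO-BP hybrid, each particle's position vector would hold one candidate set of network weights and thresholds, with the network's training error as the fitness function.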

3.2 MEA to Optimize the Weights and Thresholds of the BP Neural Network

MEA, proposed by Sun in 1998, was inspired by the human mode of thinking [13]. MEA borrows from and improves on evolutionary computation: it keeps the "population" and "evolution" of the evolutionary-computation lineage while introducing the "similar taxis" and "dissimilation" operators. The two operators coordinate with each other while keeping a degree of independence, and improving either one can greatly improve the search efficiency of the whole algorithm. The operators also guide the search in promising directions and help avoid the "premature convergence" problem of evolutionary computation. The process of MEA is shown in Figure 2.

Figure 2: Processes of MEA.

MEA not only offers efficient parallelism but can also record more than one generation of the evolutionary process [15]. When MEA was used to optimize the weights and thresholds of the BP neural network, the training time of the MEA-BP model was sharply reduced and its precision was much higher than that of the plain BP neural network.

3.3 GA to Optimize the Weights and Thresholds of the BP Neural Network

In order to search the solution space, solutions are encoded by the GA. The algorithm generates a number of individuals called "chromosomes," some of which are initialized at random. Through generations of selection, crossover, and mutation, the population obtains the best chromosomes and eventually converges to the global optimal solution. The process of GA is shown in Figure 3.

Figure 3: Processes of GA.

The GATBX toolbox of the University of Sheffield, England, was used to obtain the optimal weights and thresholds of the BP neural network, which were then given to the network for learning and training.
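The GATBX toolbox is MATLAB-specific, but the same selection-crossover-mutation loop can be sketched as a minimal real-coded GA in Python. The operator choices below (tournament selection, arithmetic crossover, Gaussian mutation) and all parameter values are common stand-ins, not necessarily those of GATBX.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_minimize(fitness, n_genes, pop_size=40, n_gen=60,
                pc=0.7, pm=0.05, bounds=(-0.5, 0.5)):
    """Return the best individual found by a simple real-coded GA."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, n_genes))
    for _ in range(n_gen):
        f = np.array([fitness(ind) for ind in pop])
        # Tournament selection: keep the fitter of two random individuals
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((f[i] < f[j])[:, None], pop[i], pop[j])
        # Arithmetic crossover on consecutive pairs
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                alpha = rng.random()
                a, b = parents[k], parents[k + 1]
                children[k] = alpha * a + (1 - alpha) * b
                children[k + 1] = alpha * b + (1 - alpha) * a
        # Gaussian mutation on a small fraction of genes
        mask = rng.random(children.shape) < pm
        children[mask] += rng.normal(0.0, 0.1, mask.sum())
        pop = np.clip(children, lo, hi)
    f = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(f)]
```

For GA-BP, the genes would be the flattened network weights and thresholds (hence the [−0.5, 0.5] bounds matching the network's initialization range), and the fitness would be the training error.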

4 Primary Simulation Experiment and Analyses

In this paper, relying on the China National Bureau of Statistics grain output data from 1990 to 2014, the three kinds of models were trained and tested. Data from 1990 to 2009 were used for training, and data from 2010 to 2014 were taken as test data. The comprehensive performance metrics of the three models are as follows.

4.1 Comprehensive Performance Metrics of the PSO-BP Model

As seen in Figure 4, when the population evolution reaches the 300th generation, the simulated data closely match the actual output data, and the error reaches the predetermined accuracy standard. The results show that the accuracy and generalization ability of the PSO-BP model are obviously better than those of the BP network. For training data, the 2-norm of the PSO-BP model is 2250.8066, with RMSE=503.30 and MAE=391.92; for testing data, the 2-norm is 1834.6315, with RMSE=820.47 and MAE=665.87. Although the accuracy and generalization ability of the PSO-BP model exceed those of the BP network, the training time of the model is too long; this is a shortcoming of the PSO-BP model.

Figure 4: Comparison between Simulative Results and Actual Data by PSO-BP Model.

4.2 Comprehensive Performance Metrics of the MEA-BP Model

When evolution is over, a comparison between simulated results and actual data can be seen in Figure 5. The MEA-BP model is clearly much faster than the PSO-BP model; simulation results are obtained almost at once. Although the accuracy of the MEA-BP model is slightly lower than that of the PSO-BP model, it is much better than that of the BP neural network. The predicted value and the actual value for 2010 almost overlap. From 2011 to 2014, the prediction error of the MEA-BP model was between 2% and 4%, and the grain growth rate predicted by the MEA-BP model was slightly lower than the actual rate. MEA uses a prediction-based similar-taxis strategy, which enhances search efficiency during evolution and has much stronger adaptive ability than basic evolutionary computation. However, MEA cannot both overcome its tendency to fall into local extrema in the later evolution period and keep the rapidity of the earlier period [14].

Figure 5: Comparison between Simulative Results and Actual Data by MEA-BP Model.

4.3 Comprehensive Performance Metrics of the GA-BP Model

As seen in Figure 6, when population evolution reaches the 60th generation, the network reaches the predetermined accuracy standard. The experiment indicates that GA can jump out of local optima and approach the global optimum of the system, so using GA to optimize the BP neural network is feasible [10]. The 2-norm of the testing data is 1024.2416, with RMSE=458.05 and MAE=440.61, showing that the GA-BP model has powerful generalization ability. Unfortunately, the training errors are larger than those of the PSO-BP model: RMSE=865.70 and MAE=721.35. The GA-BP model thus lacks stability and accuracy in training, and its training and testing performance are inconsistent. GA suffers from the "premature convergence" problem and needs further improvement.

Figure 6: Comparison between Simulative Results and Actual Data.

4.4 Performance of the Three Models

By comparison, the PSO algorithm is easier to implement: it dispenses with the complex genetic encoding and crossover-mutation processes. The PSO algorithm also has memory, so particles can evaluate the current situation and adjust their search strategies dynamically. Although the PSO-BP model lacks prediction accuracy, it is clearly more stable than the GA-BP network. MEA has high search efficiency during evolution; however, the grain growth rate predicted by the MEA-BP model shows a time-delay problem.

This comparison shows that, no matter which global optimization algorithm is used to optimize the BP neural network, there remains some probability that the network falls into a local extremum. Is there a model that can combine the merits of the different models to obtain better predictions? A combination model based on variable weighting coefficients, combining the advantages of the PSO-BP, MEA-BP, and GA-BP models, is discussed in the next section; it makes the prediction more reliable, stable, and accurate.

5 A Combination Forecast Model Modified by Data Fusion Algorithm

5.1 Theory and Overview of Combination Forecast

In 1969, Bates and Granger first put forward combination forecasting in the Operational Research Quarterly [1]; research on combination forecasting has since flourished at home and abroad. A combination forecast uses the information provided by different forecasting models together, building a combination model in the form of a weighted average. Generally speaking, combination forecasting can be expressed as a mathematical programming problem, as follows:

(9)  \min f(l_1, l_2, \ldots, l_m)   s.t.   \sum_{i=1}^{m} l_i = 1,   l_i \ge 0,   i = 1, 2, \ldots, m,

where f(l_1, l_2, …, l_m) is the objective function and l_1, l_2, …, l_m are the weighting coefficients of the different methods.

For the same set of data, different models have their own advantages and disadvantages, explaining the useful information from different angles. Although their methods and principles differ, the models need not be mutually exclusive and, in some cases, can even reinforce one another. Theory and practice show that a combination model is more stable than single models and markedly more accurate; it also greatly expands the range of applications and is more flexible.

5.2 Theory and Overview of Data Fusion Algorithm

Data fusion is currently a hot research field; it originated in the military domain, where data from different sources and of different types had to be processed to extract as much intelligence as possible. Data modified and fused together by such an algorithm are more accurate and reliable than the original data. Existing data fusion algorithms include: (i) algebraic methods; (ii) regression methods; (iii) principal component transform fusion; (iv) the Kauth-Thomas transformation; (v) wavelet transformation; (vi) intensity-hue-saturation (IHS) transformation; (vii) Bayes estimation; (viii) the Dempster-Shafer method; and (ix) artificial neural networks.

The advantage of these methods is their ability to take information from different models and synthesize it, eliminating contradictions between data from different sources and improving the efficiency of the data. However, the anti-interference and analysis abilities of traditional data fusion algorithms are poor: they depend on the quality of the original data, which often contains considerable noise, and they cannot find the internal links between different types of data.

A new data fusion algorithm based on relative distance was proposed by Hu and Liu [6]. Suppose a_1, a_2, …, a_m are the outputs of m models at the same time; the relative distance and support degree of any two values are defined as

(10)  d_{ij} = | a_i - a_j |,   r_{ij} = \cos\left( \frac{\pi d_{ij}}{2 \max\{ d_{ij} \}} \right),   i, j \in \{1, 2, \ldots, m\}.

In order to obtain the final outcome a, the weight w_i of each a_i must be determined first. For any group of non-negative numbers v_1, v_2, …, v_m, one may set w_i = \sum_{j=1}^{m} v_j r_{ij}. Let λ be the largest-modulus eigenvalue of the support matrix (r_{ij})_{m×m}, with eigenvector v^λ = (v_1^λ, …, v_m^λ); the weight of each model and the fused result are then calculated as follows:

(11)  w_i = \frac{ v_i^{\lambda} }{ \sum_{j=1}^{m} v_j^{\lambda} },
(12)  a = \sum_{i=1}^{m} w_i a_i.
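Eqs. (10)-(12) can be implemented directly with NumPy: build the support matrix, take the eigenvector of its largest-modulus eigenvalue, and normalize it into weights. The function name fuse is illustrative, and the sketch assumes the model outputs are not all identical, since max{d_ij} appears in a denominator.

```python
import numpy as np

def fuse(a):
    """Fuse m model outputs via the relative-distance support matrix."""
    a = np.asarray(a, dtype=float)
    # Eq. (10): pairwise relative distances and support degrees
    d = np.abs(a[:, None] - a[None, :])
    r = np.cos(np.pi * d / (2.0 * d.max()))
    # Largest-modulus eigenvalue of the (symmetric, non-negative) matrix r;
    # its eigenvector has entries of a single sign by Perron-Frobenius
    vals, vecs = np.linalg.eig(r)
    idx = np.argmax(np.abs(vals))
    v = np.abs(np.real(vecs[:, idx]))
    # Eqs. (11)-(12): normalized weights and fused outcome
    w = v / v.sum()
    return w, float(w @ a)
```

A model whose output lies far from the others receives low support degrees from the rest and therefore a small weight, which is exactly how the outlying MEA-BP prediction is down-weighted in Table 2.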

5.3 Grain Output Combination Forecast Model Modified by Data Fusion Algorithm

Research shows that the combination model improves considerably on single models in precision and stability, and that it has much stronger generalization ability than the traditional BP neural network.

In this paper, the weight coefficients of the combination model are obtained with the help of the data fusion algorithm. First, the relative distances and support degrees of the different model outputs are computed. Then, from the support-degree matrix, the eigenvector v^λ of the largest-modulus eigenvalue is derived. By the properties of eigenvalues and eigenvectors, the weight coefficients w_i are obtained. Finally, the outcomes of the different models are modified and fused together using these weight coefficients.

Compared with the previous approach proposed in "A Population Prediction Model Based on Variable Weight Coefficients," a data fusion method based on the BP neural network is easy to use; however, the theory and research on the BP neural network are not yet complete. Past work on parameter definition and optimization relied on experience and traditional model building, and this theoretical defect gives such fusion results a certain randomness. The data fusion algorithm based on relative distance, by contrast, can mine the information in the original data, and there is enough theory to support the claim that its fused results are more scientific and reliable.

6 Total Grain Output of China Predicted by the Combination Model

Based on a quantitative analysis method, the combination model was used to predict the total grain output of China; qualitative analysis of the correlated influencing factors is not considered in this paper. Because the combination model is suitable only for medium- and short-term prediction, the total grain output of China was predicted for 2015 to 2019. The outcomes and weight coefficients of each model are shown in Tables 1 and 2.

Table 1: Total Grain Output Predicted by Different Models from 2015 to 2019.

Year    Combination Model/Mt    PSO-BP/Mt    MEA-BP/Mt    GA-BP/Mt
2015    621.5066                623.6038     587.1376     621.9142
2016    624.7402                638.7843     590.1123     625.8911
2017    623.6638                653.4206     592.0870     623.7945
2018    617.9479                667.3316     593.3695     616.4192
2019    608.4169                680.3744     594.1870     606.4047
Table 2: Weight Coefficients of Each Model from 2015 to 2019.

Year    W_PSO-BP    W_MEA-BP    W_GA-BP
2015    0.4818      0.0351      0.4831
2016    0.3944      0.1743      0.4312
2017    0.3007      0.2850      0.4143
2018    0.1999      0.3751      0.4250
2019    0.1006      0.4441      0.4553

Since the total grain output of China fell to its lowest level in 2003, China has increased its investment in agriculture and rural areas. Thanks to the government's support for the three agricultural policies, the grain output of China has grown continuously for several years. According to the "Grain Output Announcement in 2015," published by the China National Bureau of Statistics on December 8, 2015, the grain output in 2015 was 621.435 million tons, the 12th consecutive year of growth. As seen in Table 1 and Figure 7, using the grain output combination forecast model modified by the data fusion algorithm to predict grain output is scientific and reliable. The predicted value of the combination model for 2015 is more accurate than that of any of the three single models: it is 0.0716 million tons above the actual grain output, an absolute percentage error of 0.0115%. The MEA-BP model underestimates the actual value by about 5.5191%, while the PSO-BP and GA-BP models overestimate it by 0.3490% and 0.0771%, respectively.
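The error figures quoted above can be checked directly from Table 1 and the announced 2015 output:

```python
def ape(predicted, actual):
    """Absolute percentage error, in percent."""
    return abs(predicted - actual) / actual * 100.0

actual_2015 = 621.435  # Mt, NBS announcement of December 8, 2015
predicted_2015 = {"Combination": 621.5066, "PSO-BP": 623.6038,
                  "MEA-BP": 587.1376, "GA-BP": 621.9142}  # from Table 1
errors = {name: ape(p, actual_2015) for name, p in predicted_2015.items()}
# The combination model's error (about 0.0115%) is the smallest of the four.
```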

Figure 7: Comparison between Simulative Results and Actual Data in 2015.

On April 27, 2016, the Chinese Academy of Social Sciences' "Green Cover Book of the Rural" predicted that the grain output in 2016 would be 630 million tons. Only the combination model and the GA-BP model were close to this official forecast, with predicted 2016 values of 624.7402 and 625.8911 million tons, respectively. This further verifies the accuracy and reliability of the combination model proposed in this paper.

7 U.S. Wheat Production Predicted by Combination Model

To further test the reliability of the combination model, unstable data that a single model can hardly handle were fed to it. Relying on the U.S. wheat production data of the Food and Agriculture Organization from 1990 to 2014 (http://faostat3.fao.org/download/Q/QC/E), a three-layer BP neural network was built. GA and MEA were used to optimize the network's connection weights and thresholds, yielding the GA-BP and MEA-BP models, respectively. Data from 1990 to 2009 were used for training, and data from 2010 to 2014 were taken as test data. U.S. wheat production from 1990 to 2014 is shown in Figure 8; the series has many outliers, which both the single models and the combination model can hardly handle.

Figure 8: U.S. Wheat Production from 1990 to 2014.

As seen in Figure 9, the prediction results of the single models are unsatisfactory. The value predicted by the GA-BP model has hardly moved since 2011, the downward trend in wheat predicted by the MEA-BP model is much faster than the actual trend, and the absolute percentage error of the BP model is large. By contrast, the stability and reliability of the combination forecast model modified by the data fusion algorithm are clearly reflected.

Figure 9: Comparison between Simulative Results and Actual Data by Different Models.

As seen in Table 3, as the forecast horizon lengthens, the combination model ranks second in accuracy at every time point except 2014, when its error is a little larger than those of the MEA-BP and BP models. The above analysis shows that the combination forecast model modified by the data fusion algorithm has very practical applications. Although some single models are more accurate than the combination model at certain time points, their robustness is very poor; when data have many outliers, the predicted values of single models become meaningless.

Table 3: Absolute Percentage Errors of Different Models.

Year    Combination Model    GA-BP Model    MEA-BP Model    BP Model
2010    −2.71% ②             −2.77% ③       +0.11% ①        −4.55% ④
2011    +6.06% ②             +6.70% ④       +6.14% ③        +3.49% ①
2012    −8.41% ②             −5.94% ①       −8.49% ③        −9.20% ④
2013    −3.49% ②             +0.07% ①       −3.82% ④        −3.55% ③
2014    +0.99% ③             +4.72% ④       +0.01% ①        +0.88% ②

Note: ① denotes the most accurate model; ② the second most accurate; ③ the third; ④ the least accurate.

According to US Department of Agriculture data, wheat production rose slightly to about 55.80 million tons in 2015. The combination model predicted 55.9405 million tons, an absolute percentage error of 0.252%, slightly larger than the 0.128% error of the BP model; the errors of the GA-BP and MEA-BP models are 3.957% and 1.027%, respectively.

8 Conclusions

The grain output combination forecast model modified by the data fusion algorithm, which integrates three kinds of useful information and the advantages of three different single models, is scientific and reasonable, and it overcomes particular defects of the single models. Its weight coefficients are a function of the time series, and its robustness and accuracy are much better than those of the single models. The combination model is reliable for medium- and short-term prediction, and its approach can be applied to other unstable data that single models can hardly handle; its advantages are most obvious on such data rather than on steadily growing series.

Based on the foregoing study, the total grain output of China was predicted to continue growing in 2015 and 2016, followed by a modest dip in 2017 that persists into 2018. Overall, Chinese grain output shows an obvious cyclical fluctuation, rising in a zigzag tendency.

Acknowledgments

This paper was supported by the Science Foundation of Jiangxi Provincial Department of Education (GJJ14639, GJJ151547), the National Natural Science Foundation of Jiangxi Province (20142BAB217024).

Bibliography

[1] J. M. Bates and C. W. J. Granger, The combination of forecasts, Oper. Res. Q. 20 (1969), 451–468. doi:10.1057/jors.1969.103

[2] China National Bureau of Statistics, China Statistics Yearbook, China Statistics Press, Beijing, 1991–2015.

[3] H. J. Dai, Combination Prediction Model and its Application Research, Master Dissertation, Central South University, 2007.

[4] A. J. W. de Wit and C. A. van Diepen, Crop model data assimilation with the ensemble Kalman filter for improving regional crop yield forecasts, Agric. For. Meteorol. 146 (2007), 38–56. doi:10.1016/j.agrformet.2007.05.004

[5] Z. T. Hu and X. S. Liu, A practical data fusion algorithm, Process Autom. Instrum. 26 (2005), 7–9.

[6] Z. T. Hu and X. S. Liu, Method of multi-sensor data fusion based on relative distance, Syst. Eng. Electron. 28 (2006), 196–198.

[7] H. B. Lai, The fluctuation of Chinese grain production and its structure analysis, J. Agrotech. Econ. 175 (2009), 91–96.

[8] X. Li, S. M. Gu and H. Nian, Prediction for grain yield based on improved PSO optimized BP neural network, J. Minnan Normal Univ. 83 (2014), 56–61.

[9] L. H. Liu and C. B. Yin, Analysis based on gray correlation of Chinese grain production, J. Henan Agric. Univ. 47 (2013), 751–756.

[10] C. Y. Liu, J. C. Ling, L. Y. Kou, L. X. Chou and J. Q. Wu, Performance comparison between GA-BP neural network and BP neural network, Chin. J. Health Stat. 30 (2013), 173–176, 181.

[11] Y. Shi and R. Eberhart, Fuzzy adaptive particle swarm optimization, in: Proc. Congress on Evolutionary Computation, Seoul, Korea, pp. 101–106, 2001.

[12] E. Simelton, E. D. G. Fraser, M. Termansen, T. G. Benton, S. N. Gosling, A. South, N. W. Arnell and A. J. Challinor, The socioeconomics of food crop production and climate change vulnerability: a global scale quantitative analysis of how grain crops are sensitive to drought, Food Sec. 4 (2012), 163–179. doi:10.1007/s12571-012-0173-4

[13] C. Y. Sun, Mind evolution based machine learning: framework and the implementation of optimization, in: Proceedings of IEEE International Conference on Intelligent Engineering Systems, pp. 355–359, 1998.

[14] F. Wang, K. M. Xie and J. X. Liu, Swarm intelligence based MEA design, Control Decis. 25 (2010), 145–148.

[15] X. C. Wang, F. Shi, L. Yu and Y. Li, Analysis of 43 neural network examples in MATLAB, Beijing University of Aeronautics and Astronautics Press, Beijing, 2013.

[16] X. M. Xu and L. Y. Cao, Study on prediction of grain yield based on rough set and BP neural network, J. NE Agric. Univ. 45 (2014), 95–100.

Received: 2016-6-3
Published Online: 2017-1-20
Published in Print: 2018-3-28

©2018 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
