Article

Deriving Operating Rules of Hydropower Reservoirs Using Multi-Strategy Ensemble Henry Gas Solubility Optimization-Driven Support Vector Machine

China Three Gorges Corporation, Yichang 443133, China
* Author to whom correspondence should be addressed.
Water 2023, 15(3), 437; https://doi.org/10.3390/w15030437
Submission received: 3 January 2023 / Accepted: 16 January 2023 / Published: 22 January 2023
(This article belongs to the Section Hydrology)

Abstract

Hydropower is an important clean and renewable energy source that plays a key role in coping with global energy security, environmental protection, and climate change. Because forecast runoff is available only with limited accuracy and lead time, there has been growing interest in deriving operating rules to improve the optimal operation of hydropower reservoirs. Reasonable operation decisions are essential for the safe operation of reservoirs and the efficient utilization of water resources. Therefore, a novel method for deriving operation rules is proposed in this study. An optimal operation model of hydropower reservoirs is established, and a support vector machine (SVM) is used to derive operation rules from the optimal operation results. To improve the performance of the SVM, Henry gas solubility optimization (HGSO) is used to optimize its hyperparameters for the first time. Meanwhile, multiple strategies are applied to overcome the drawbacks of HGSO: the multi-verse optimizer (MVO) is used to enhance the exploration capability of the basic HGSO, and quadratic interpolation (QI) is used to improve its exploitation ability. The Xiluodu and Xiangjiaba hydropower reservoirs in the upper Yangtze River of China were selected as a case study. First, the improved HGSO, called MVQIHGSO, was tested on 23 classical benchmark functions. Then, it was employed to optimize the hyperparameters of the SVM model for deriving operation rules. The results and statistical analyses indicate that the improved HGSO outperforms the comparison algorithms in both exploration and exploitation. The obtained results imply that the novel method, named MVQIHGSO-SVM, provides a practical new tool for deriving operation rules for hydropower reservoirs, which is conducive to the safe and efficient utilization of water resources.

1. Introduction

Climate change has undoubtedly emerged as one of the greatest global challenges facing humanity in this era, significantly threatening human life, industrial production, and the ecological environment [1,2,3,4]. A main driver of climate change is the large volume and rapid growth of non-renewable energy consumption over the past several decades, mainly fossil fuels [5,6]. In response, the international community aims to limit global warming to "well below 2 °C" above pre-industrial levels and to pursue efforts to limit it to 1.5 °C [7]. Meanwhile, clean renewable resources including hydropower, solar energy, wind energy, geothermal energy, biomass energy, and tidal power have attracted widespread attention from policy-makers and academics in developed and developing countries alike as a means of addressing the ecological deficit [8,9]. In particular, hydropower is the most widely used renewable energy in human society due to its low operating cost, stability, and flexible on-demand power supply, as well as other comprehensive benefits such as flood control, irrigation, navigation, and ecological environment protection [10,11]. However, hydropower utilization is inevitably limited by natural conditions such as hydrology, climate, and geomorphology [12]. To address this problem, the optimal operation of reservoirs can change the temporal and spatial distribution of water resources, which is an important means of realizing the rational allocation and efficient utilization of water resources [13].
Generally, mathematical programming techniques and meta-heuristic optimization methods are the two approaches used for the optimal operation of reservoirs based on inflow runoff [14,15,16,17]. Specifically, mathematical programming can be divided into three types: linear programming (LP), non-linear programming (NLP), and dynamic programming (DP), together with its many variants such as incremental dynamic programming (IDP), DP with successive approximation (DPSA), discrete differential DP (DDDP), and the progressive optimality algorithm (POA) [18]. LP is the most widely used optimization technology in the field of water resources planning and management due to its simplicity and adaptability; however, it cannot deal with the non-linear constraints of reservoir optimal operation [19]. To solve this problem, NLP is applied to optimize the reservoir storage and release process; however, the application of NLP to large-scale reservoir operation is limited by its high computational complexity [19,20]. DP aims to overcome the inherent difficulties of reservoir operation, including non-linear and stochastic features, and has been the most popular optimization method so far. Nevertheless, it suffers from the "curse of dimensionality" as the scale of the reservoir system increases [19,21]. Fortunately, many variants of DP, mainly IDP, DPSA, DDDP, and POA, were developed to overcome the dimensionality problem in the past decades [22]. On the other hand, the emergence of meta-heuristic optimization methods provides a novel and promising way to solve the reservoir optimal operation model, requiring only objective function information without derivatives or other auxiliary knowledge [23]. Although a large number of studies have focused on the application of these methods to reservoir operation, they are rarely applied in actual reservoir engineering practice due to the randomness of their results and their low convergence speed. In addition, the above techniques may lead to ineffective and inefficient operation under changing environmental conditions. For example, because of the uncertainties in reservoir operation, the operator needs to carry out repeated trial calculations to formulate a better operation scheme, which creates great difficulties for decision makers.
In order to formulate reasonable operation schemes with high efficiency, a simple, scalable, and adaptable operation rule is important and necessary for operators [24]. The operation diagram and the operation function are the two types of operation rules [25]. The former is a control curve diagram that guides the operation of the reservoir by dividing the reservoir storage into different operation zones using index lines formed by water storage and water supply. The operation diagram is characterized by simplicity, intuitiveness, and ease of operation; however, its application is often affected by the subjectivity of operators. Therefore, there is still ample room to improve the economic benefits of the reservoirs. The operation function is another form of operation rule, with clearly defined inputs and outputs, used to make operational decisions under uncertain conditions [26]. For example, the end water level of a period, taken as the decision variable, can be determined from certain inputs such as the initial water level, inflow runoff, and other hydrometeorological information. This approach has been regarded as one of the most promising lines of research in this field [27,28,29]. Operation rule derivation methods based on the operation function can be divided into two types: regression analysis and machine learning techniques [30]. The common point of these two methods is that the optimal operation process of the reservoir group is first obtained with the implicit stochastic optimization method [31]. The significant difference between the two methods is that the expression used in regression analysis is based on speculation, inspection, and correction, a process that is complex and greatly affected by human factors; the latter solves this problem well and obtains a reservoir group operation function with good adaptability.
The support vector machine (SVM) is a machine learning method based on statistical learning theory [32,33]. It has the advantages of global optimization and good generalization performance, and the diversity of kernel functions allows it to handle different types of simulation and classification problems on small-sample data. As is well known, the hyperparameter settings have a direct impact on the performance of the SVM. At present, grid search and meta-heuristic intelligence algorithms are the two commonly used methods for hyperparameter optimization. In essence, the former is an enumeration procedure over the combinations of available hyperparameters, which is very time-consuming [34]. The latter assumes that good combinations of hyperparameters have an aggregation effect and is able to find the most suitable hyperparameter combination in a shorter time [35]. Therefore, a hybrid model combining a meta-heuristic optimization algorithm and a machine learning method is applied for the operation rule derivation of reservoir groups in this study. In order to improve the derivation performance, a recent meta-heuristic optimization algorithm, namely Henry gas solubility optimization (HGSO), is introduced to optimize the hyperparameters of the SVM for the first time. Unfortunately, the basic HGSO algorithm has some disadvantages such as local optima stagnation and low convergence speed [36]. To handle these drawbacks, multiple strategies based on the multi-verse optimizer (MVO) and the quadratic interpolation method (QI) are applied for the first time to enhance the exploration and exploitation abilities of the basic HGSO; the resulting algorithm is called MVQIHGSO in the present paper. As a result, the hybrid model combining the MVQIHGSO algorithm and the SVM method, called MVQIHGSO-SVM, is proposed to derive operation rules of hydropower reservoirs that maximize the hydropower benefits.
In summary, a novel operation rule derivation method is proposed based on the improved HGSO algorithm and SVM. The remainder of this paper is organized as follows. Section 2 introduces the process of operation rule derivation using the proposed MVQIHGSO-SVM method, including the optimal operation model of hydropower reservoirs, the basic HGSO algorithm and its improvement strategies, and the basic principle of the SVM. Section 3 presents numerical experiments carried out to validate the performance of the MVQIHGSO algorithm. Section 4 presents a case study of the Xiluodu and Xiangjiaba hydropower reservoirs in the lower reaches of the Jinsha River in the upper Yangtze River basin of China. Section 5 draws the conclusions.

2. Materials and Methods

2.1. Optimal Operation Model of Hydropower Reservoirs

The optimal operation model of hydropower reservoirs consists of an objective, namely maximizing the total hydropower generation, and a complex set of constraints imposed by the hydropower reservoirs.

2.1.1. Objectives

The objective is to maximize the total hydropower generation over the operation horizon:
$$E = \max \sum_{i=1}^{N} \sum_{t=1}^{T} P_{i,t}\,\Delta t, \qquad P_{i,t} = k_i H_{i,t} Q_{i,t}$$
where E is the total hydropower generation produced during the operation horizon; N is the number of hydropower reservoirs; T is the number of operation periods; $P_{i,t}$, $k_i$, $H_{i,t}$, and $Q_{i,t}$ are the power output, efficiency coefficient, water head, and turbine discharge of reservoir i at period t, respectively; $\Delta t$ is the length of the operation interval.
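As a rough illustration of how this objective is evaluated for a fixed operation trajectory, the following Python sketch computes E from the power output relation above; the arrays and coefficient values are illustrative placeholders, not data from this study.

```python
import numpy as np

def total_generation(k, H, Q, dt):
    """Evaluate E = sum_i sum_t k_i * H_{i,t} * Q_{i,t} * dt for a given trajectory.

    k  : (N,)   efficiency coefficient of each reservoir (illustrative values)
    H  : (N, T) water head of reservoir i at period t
    Q  : (N, T) turbine discharge of reservoir i at period t
    dt : scalar, length of one operation interval (e.g. in hours)
    """
    P = k[:, None] * H * Q          # power output P_{i,t} = k_i * H_{i,t} * Q_{i,t}
    return float(np.sum(P) * dt)    # total hydropower generation E

# toy example with two reservoirs and three periods (placeholder numbers)
k = np.array([8.5, 8.6])
H = np.array([[180.0, 182.0, 185.0], [95.0, 96.0, 97.0]])
Q = np.array([[3500.0, 3600.0, 3400.0], [4000.0, 4100.0, 3900.0]])
print(total_generation(k, H, Q, dt=24.0))
```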

2.1.2. Constraints

(1) Water balance Equations
$$V_{i,t+1} = V_{i,t} + \left( I_{i,t} - Q_{i,t}^{total} \right), \qquad I_{i,t} = q_{i,t} + \sum_{j=1}^{N_i} O_{j,t}, \qquad O_{i,t} = Q_{i,t}^{s} + Q_{i,t}$$
where $V_{i,t}$, $I_{i,t}$, $Q_{i,t}^{total}$, $q_{i,t}$, and $Q_{i,t}^{s}$ are the storage volume, total inflow, total outflow, local inflow, and water spillage of reservoir i at period t, respectively; $N_i$ is the number of upstream reservoirs directly connected to reservoir i; $O_{j,t}$ is the outflow from upstream reservoir j directly connected to reservoir i at period t.
(2) Water head Equations
$$H_{i,t} = \frac{1}{2}\left( Z_{i,t} + Z_{i,t-1} \right) - Z_{i,t}^{d}$$
where $Z_{i,t}$ and $Z_{i,t}^{d}$ are the forebay water level and downstream water level of reservoir i at period t, respectively.
(3) Forebay water level limits
$$Z_{i,t}^{\min} \le Z_{i,t} \le Z_{i,t}^{\max}$$
where $Z_{i,t}^{\min}$ and $Z_{i,t}^{\max}$ are the minimum and maximum forebay water levels of reservoir i at period t, respectively.
(4) Forebay water level fluctuation limits
$$\left| Z_{i,t} - Z_{i,t-1} \right| \le \Delta Z_i^{\max}$$
where $\Delta Z_i^{\max}$ is the maximum fluctuation of the forebay water level of reservoir i allowed within one timestep.
(5) Turbine discharge limits
$$Q_{i,t}^{\min} \le Q_{i,t} \le \min\left( Q_{i,t}^{\max}, Q_{i,t}^{c} \right)$$
where $Q_{i,t}^{\min}$, $Q_{i,t}^{\max}$, and $Q_{i,t}^{c}$ are the minimum and maximum turbine discharges and the discharge capacity of reservoir i at period t, respectively.
(6) Total discharge limits
$$Q_{i,t}^{total,\min} \le Q_{i,t}^{total} \le Q_{i,t}^{total,\max}$$
where $Q_{i,t}^{total,\min}$ and $Q_{i,t}^{total,\max}$ are the minimum and maximum total discharges of reservoir i at period t, respectively.
(7) Power output limits
$$P_{i,t}^{\min} \le P_{i,t} \le \min\left( P_{i,t}^{\max}, P_{i,t}^{e} \right)$$
where $P_{i,t}^{\min}$, $P_{i,t}^{\max}$, and $P_{i,t}^{e}$ are the minimum and maximum power outputs and the expected power output of reservoir i at period t, respectively.
(8) Total power output limits
$$P_t^{total,\min} \le \sum_{i=1}^{N} P_{i,t} \le P_t^{total,\max}$$
where $P_t^{total,\min}$ and $P_t^{total,\max}$ are the minimum and maximum total power outputs of the cascade hydropower reservoirs at period t.
(9) Initial and target forebay water level limits
$$Z_{i,0} = Z_i^{initial}, \qquad Z_{i,T} = Z_i^{target}$$
where $Z_i^{initial}$ and $Z_i^{target}$ are the preset initial and target forebay water levels of reservoir i at period zero and period T, respectively.
(10) Non-linear characteristic curves limits
$$V_{i,t} = f_{i,1}\left( Z_{i,t} \right), \quad Q_{i,t}^{c} = f_{i,2}\left( Z_{i,t-1} \right), \quad Z_{i,t}^{d} = f_{i,3}\left( Q_{i,t}^{total} \right), \quad P_{i,t}^{e} = f_{i,4}\left( H_{i,t}, Q_{i,t} \right), \quad Z_i^{target} = f_{i,1}^{-1}\left( V_i^{s} + V_{i,0} \right)$$
where $f_{i,1}(\cdot)$, $f_{i,2}(\cdot)$, $f_{i,3}(\cdot)$, and $f_{i,4}(\cdot)$ are the nonlinear stage–storage curve, stage–discharge capacity curve, discharge–downstream water level curve, and head–discharge–output curve of reservoir i, respectively; $V_i^{s}$ is the required storage capacity of reservoir i during the operation periods; $V_{i,0}$ is the storage volume of reservoir i at period zero.
(11) Storage and release water limits
$$V^{s} = \sum_{i=1}^{N} V_i^{s}$$
where $V^{s}$ is the total required storage capacity of the multi-reservoir system.
It is noteworthy that a hybrid deterministic optimization method combining dynamic programming (DP), DP with successive approximation (DPSA), and the progressive optimality algorithm (POA), referred to as DP-POA-DPSA, is used to solve the established optimal operation model of hydropower reservoirs, and the penalty function method is used to handle constraint violations.
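The following Python sketch illustrates, under simplifying assumptions, how the water balance is propagated for one period and how a penalty term for bound violations can be accumulated; it is an illustrative fragment, not the DP-POA-DPSA solver used in this study, and the characteristic curves are assumed to be supplied elsewhere.

```python
def step_water_balance(V, inflow, Q_turbine, Q_spill):
    """One period of the balance V_{t+1} = V_t + (I_t - Q_t^total) for a single reservoir.

    All quantities are expressed as volumes per period (consistent units are assumed).
    """
    Q_total = Q_turbine + Q_spill
    return V + (inflow - Q_total), Q_total

def penalty(value, lower, upper, weight=1e6):
    """Quadratic penalty used by penalty-function methods to handle bound violations."""
    if value < lower:
        return weight * (lower - value) ** 2
    if value > upper:
        return weight * (value - upper) ** 2
    return 0.0

# usage: propagate storage, then accumulate a penalty if the implied level violates its limits
V_next, Q_total = step_water_balance(V=5.0e9, inflow=3.2e8, Q_turbine=2.8e8, Q_spill=0.0)
# Z_next = stage_from_storage(V_next)          # inverse of the stage-storage curve, assumed given
# total_penalty = penalty(Z_next, Z_min, Z_max)
```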

2.2. Summary of Henry Gas Solubility Optimization (HGSO)

Henry's law is a well-known gas law in the field of physical chemistry, formulated by the chemist William Henry in 1803 [37]. Briefly, the law states that the amount of gas dissolved in a liquid is proportional to its partial pressure above the liquid when the gas and liquid reach equilibrium at a fixed temperature. The key concept underlying Henry's law is solubility, which changes with temperature and pressure: increased temperature leads to lower gas solubility, whereas increased pressure results in higher gas solubility. By controlling these two factors, the equilibrium state with the best gas can be reached according to Henry's law. Inspired by this phenomenon, the Henry gas solubility optimization (HGSO) algorithm was developed to mimic the physical and chemical process described by Henry's law [38]. HGSO is a recent population-based, physics-inspired metaheuristic optimization algorithm, and empirical studies reveal that it has a significant merit in balancing exploration and exploitation.
Similar to the majority of well-known metaheuristic optimization algorithms, the optimization process of HGSO includes two phases: exploration and exploitation. To achieve a balance between them, the process can be further divided into four stages: initialization, evaluation, updating, and termination. In the initialization stage, the gas population is randomly generated with the population size N, lower bound vector lb, and upper bound vector ub as input, which can be formulated as follows:
$$g_i = lb + r_{1,i}\left( ub - lb \right), \qquad i = 1, 2, \ldots, N$$
where $g_i$ denotes gas i; $r_{1,i}$ denotes a random number uniformly distributed in [0, 1]. It is noteworthy that the gas population is divided into several groups, denoted by l, each with the same Henry's constant. Then, each gas of every group is evaluated based on the objective function in the evaluation stage. Furthermore, several key parameters such as Henry's constant H, the partial pressure P, and the ratio of the enthalpy of dissolution to the gas constant are randomly generated to update Henry's coefficient and the solubility for the next generation, which can be summarized as follows:
$$H_l(it+1) = H_l(it)\,\exp\left( -C_l \left( \frac{1}{T(it)} - \frac{1}{T^{\theta}} \right) \right)$$
$$S_{i,l}(it) = K\, H_l(it+1)\, P_{i,l}(it)$$
where $H_l(it) = r_2 c_1$, $P_{i,l} = r_3 c_2$, and $C_l = r_4 c_3$, in which $r_2$, $r_3$, and $r_4$ are random numbers in the range [0, 1]; $c_1$, $c_2$, and $c_3$ are constants set to 0.05, 100, and 0.01, respectively; it denotes the current iteration number; $T(it)$ is the temperature at the current iteration, with $T(it) = \exp(-it/MaxIt)$, where MaxIt denotes the maximum number of iterations. Based on these parameters, the gas positions of each group are updated in the updating stage according to the following formulation:
$$g_{i,l}(it+1) = g_{i,l}(it) + F\, r_5\, \gamma \left( G_l(it) - g_{i,l}(it) \right) + F\, r_6\, \alpha \left( S_{i,l}(it)\, G_l(it) - g_{i,l}(it) \right)$$
$$\gamma = \beta \exp\left( -\frac{F_{best}(it) + \varepsilon}{F_{i,l}(it) + \varepsilon} \right)$$
where $g_{i,l}$ denotes gas i in group l; $r_5$ and $r_6$ are random numbers in the range [0, 1]; $\gamma$ denotes the interaction ability of gases in the same group; $\alpha$, $\beta$, and $\varepsilon$ are optimization parameters set to 1, 1, and 0.5, respectively; F denotes the direction flag used to control the search direction of a gas; $G_l(it)$ and $F_{i,l}(it)$ denote the best gas of group l and the fitness of gas i in group l, respectively; $F_{best}(it)$ denotes the best fitness value obtained so far. In order to enhance the exploration capability of HGSO, the inferior gases are re-initialized in the same way as the initial population:
$$N_w = N \left( c_4 + r_7 \left( c_5 - c_4 \right) \right)$$
$$g_{i,l}' = lb + r_8 \left( ub - lb \right)$$
where $N_w$ denotes the number of inferior gases; $r_7$ and $r_8$ are random numbers in the range [0, 1]; $c_4 = 0.1$ and $c_5 = 0.1$; $g_{i,l}'$ denotes the re-initialized gas corresponding to gas i in group l. Finally, the HGSO algorithm stops when the termination criterion is met.
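To make the update rules concrete, the following NumPy sketch performs one HGSO iteration following the equations above; it is illustrative rather than a reference implementation, and the reference temperature (298.15) and the upper bound of the worst-gas fraction are assumptions.

```python
import numpy as np

def hgso_step(pos, fit, groups, H, P, C, it, max_it, lb, ub,
              K=1.0, alpha=1.0, beta=1.0, eps=0.5, c4=0.1, c5=0.2):
    """One illustrative HGSO iteration (a sketch, not a reference implementation).

    pos: (N, D) gas positions; fit: (N,) fitness values (minimisation); groups: (N,) group
    index of each gas; H, C: per-group Henry coefficients and enthalpy ratios; P: (N,)
    partial pressures; lb, ub: (D,) variable bounds.
    """
    N, D = pos.shape
    T = np.exp(-it / max_it)                         # temperature schedule T(it)
    H = H * np.exp(-C * (1.0 / T - 1.0 / 298.15))    # Henry coefficient update (T_theta assumed 298.15)
    best_idx = np.argmin(fit)
    best, best_fit = pos[best_idx], fit[best_idx]
    new_pos = pos.copy()
    for i in range(N):
        mask = groups == groups[i]
        g_best = pos[mask][np.argmin(fit[mask])]     # best gas of the group of gas i
        S = K * H[groups[i]] * P[i]                  # solubility of gas i
        gamma = beta * np.exp(-(best_fit + eps) / (fit[i] + eps))
        F = 1.0 if np.random.rand() < 0.5 else -1.0  # direction flag
        r5, r6 = np.random.rand(D), np.random.rand(D)
        new_pos[i] = (pos[i]
                      + F * r5 * gamma * (g_best - pos[i])
                      + F * r6 * alpha * (S * best - pos[i]))
    # re-initialise a small fraction of the worst-ranked gases to keep exploration alive
    n_worst = int(N * (c4 + np.random.rand() * (c5 - c4)))
    if n_worst > 0:
        for i in np.argsort(fit)[-n_worst:]:
            new_pos[i] = lb + np.random.rand(D) * (ub - lb)
    return np.clip(new_pos, lb, ub), H
```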

2.3. The Proposed MVQIHGSO

2.3.1. Multi-verse Optimizer (MVO)

The main inspiration for the multi-verse optimizer comes from multi-verse theory, which has aroused great interest in the fields of science, philosophy, and theology. In 2016, the multi-verse optimizer was developed by Mirjalili et al. based on three simple yet important concepts, namely white holes, black holes, and wormholes, to mimic the physical interactions of multiple universes in multi-verse theory [39]. Specifically, to model the expansion of the universes, the higher the expansion rate of a universe, the higher the probability of the existence of white holes; conversely, the lower the expansion rate of a universe, the higher the probability of forming black holes.
The two key components guiding the search direction of the algorithm are the white/black-hole tunnel mechanism and the wormhole mechanism. The former states that a special transmission path can be established to exchange objects between different universes, such that objects in a universe with a low expansion rate (forming a black hole) are replaced by objects from the universe connected at the other end of the tunnel. In this manner, an inferior universe has the chance to explore the parallel space formed by the multiple universes and evolve into one with a higher expansion rate. The mathematical model of the white/black-hole tunnel can be formulated as follows:
$$u_{i,j} = \begin{cases} u_{k,j}, & r_1 < normr(u_i) \\ u_{i,j}, & r_1 \ge normr(u_i) \end{cases}$$
where $u_i$ denotes universe i; $u_{i,j}$ denotes object j of universe i, with $j = 1, 2, \ldots, D$ and D the number of objects in a universe; $u_{k,j}$ denotes object j of another universe k selected according to the normalized expansion rates; $normr(\cdot)$ normalizes the expansion rates of the universes to a length of 1; $r_1$ denotes a random number uniformly distributed on [0, 1]. It is noteworthy that a universe and an object correspond to an individual and a decision variable of the optimization problem, respectively.
Additionally, the wormhole belongs to the universe with the highest expansion rate. In order to share the information carried by its objects with other universes, objects in the wormhole have a high probability of being randomly transmitted to other universes. In this way, the diversity of the universes is enhanced, avoiding slow or stalled expansion during the evolution process. The wormhole mechanism can be mathematically described as follows:
$$u_{i,j} = \begin{cases} U_j + TDR \left( lb_j + r_2 \left( ub_j - lb_j \right) \right), & r_3 < 0.5 \ \text{and} \ r_4 < WEP \\ U_j - TDR \left( lb_j + r_2 \left( ub_j - lb_j \right) \right), & r_3 \ge 0.5 \ \text{and} \ r_4 < WEP \\ u_{i,j}, & r_4 \ge WEP \end{cases}$$
where $U_j$ denotes object j of the best universe; $r_2$, $r_3$, and $r_4$ denote random numbers uniformly distributed on [0, 1]; TDR and WEP denote the travelling distance rate and the wormhole existence probability, respectively; the former controls the transmission range of objects from the best universe, and the latter reflects the probability that a wormhole exists. Detailed information on MVO can be found in the original paper [39].
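The two mechanisms can be summarized in the following illustrative sketch (not the reference MVO code); for simplicity, the donor universe is drawn uniformly instead of by roulette wheel, and WEP and TDR are kept constant here, whereas MVO adapts them over iterations.

```python
import numpy as np

def mvo_step(universes, rates, best, lb, ub, wep=0.8, tdr=0.4):
    """One illustrative MVO update over a population of universes.

    universes : (N, D) current solutions; rates : (N,) normalised expansion rates in [0, 1]
    (e.g. normalised inverse fitness for minimisation); best : (D,) best universe so far.
    """
    N, D = universes.shape
    new = universes.copy()
    for i in range(N):
        for j in range(D):
            # white/black-hole tunnel: with probability tied to the normalised expansion
            # rate, object j of universe i is replaced by the same object of another universe
            if np.random.rand() < rates[i]:
                k = np.random.randint(N)   # donor picked uniformly here (roulette wheel in full MVO)
                new[i, j] = universes[k, j]
            # wormhole: objects of the best universe are shared with a random perturbation
            if np.random.rand() < wep:
                step = tdr * (lb[j] + np.random.rand() * (ub[j] - lb[j]))
                new[i, j] = best[j] + step if np.random.rand() < 0.5 else best[j] - step
    return np.clip(new, lb, ub)
```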

2.3.2. Quadratic Interpolation Strategy (QI)

Quadratic interpolation has proved to be a common and effective method for improving the exploitation ability of swarm intelligence algorithms [40,41,42]. QI is essentially a one-dimensional optimization method that can easily and quickly discover the optimal solution in a low-dimensional space. Its core idea is to construct a quadratic curve that approximates the objective function around several candidate solutions, so that the optimal function value can be calculated in a derivative-free way, greatly reducing the cost of locating the local optimum. In other words, the QI method tries to find a more promising search agent in the vicinity of the individuals evolved so far, which improves the population diversity and enhances the exploitation ability of the algorithm, thereby increasing the probability of guiding the search agents toward the globally optimal region.
Generally, the QI method is the last step before evolving into the next generation. In this step, three individuals are selected to construct a quadratic curve that approximates the objective function, so that the extreme point of the approximate objective function can be easily obtained from the quadratic. For example, if $s_a = (s_{a,1}, s_{a,2}, \ldots, s_{a,j}, \ldots, s_{a,D})$, $s_b = (s_{b,1}, s_{b,2}, \ldots, s_{b,j}, \ldots, s_{b,D})$, and $s_c = (s_{c,1}, s_{c,2}, \ldots, s_{c,j}, \ldots, s_{c,D})$ are the three individuals selected to form the quadratic curve, the extreme point $s_p = (s_{p,1}, s_{p,2}, \ldots, s_{p,j}, \ldots, s_{p,D})$, which is also a promising candidate solution based on the three selected individuals, can be formulated as follows:
$$s_{p,j} = \frac{\left( s_{a,j}^2 - s_{c,j}^2 \right) f(s_b) + \left( s_{c,j}^2 - s_{b,j}^2 \right) f(s_a) + \left( s_{b,j}^2 - s_{a,j}^2 \right) f(s_c)}{2\left[ \left( s_{a,j} - s_{c,j} \right) f(s_b) + \left( s_{c,j} - s_{b,j} \right) f(s_a) + \left( s_{b,j} - s_{a,j} \right) f(s_c) \right]}$$
where $j = 1, 2, \ldots, D$ and $f(\cdot)$ denotes the fitness value of a specific individual. In practice, the individuals of the current iteration are ranked in ascending order of their fitness values. Then, every three consecutive individuals are selected to perform the QI method. As a result, for a population of size N with $N \ge 3$, the QI method is implemented N−2 times. Based on a greedy rule, if the individual generated by the QI method performs better than the first of the three selected individuals, the first individual is replaced.
As can be seen from the above analysis, QI can be regarded as a crossover operator to an extent. In other words, the potential information hidden in individuals is fully utilized to improve the performance of inferior solution, so that the promising solution with high quality can be discovered.
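The coordinate-wise interpolation step can be written compactly as below; this is a sketch of the QI formula above, with a small guard added for the degenerate case where the three points are collinear.

```python
import numpy as np

def quadratic_interpolation(sa, sb, sc, fa, fb, fc, eps=1e-12):
    """Coordinate-wise quadratic interpolation of three solutions (sketch of the QI step).

    sa, sb, sc : (D,) candidate solutions; fa, fb, fc : their fitness values.
    Returns the extreme point of the fitted parabola in each dimension."""
    num = (sa**2 - sc**2) * fb + (sc**2 - sb**2) * fa + (sb**2 - sa**2) * fc
    den = 2.0 * ((sa - sc) * fb + (sc - sb) * fa + (sb - sa) * fc)
    den = np.where(np.abs(den) < eps, eps, den)   # guard against a degenerate (collinear) fit
    return num / den

# usage inside the algorithm: rank the population by fitness, slide a window of three
# consecutive individuals, and keep the interpolated point only if it improves on them
```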

2.3.3. The Proposed Enhanced HGSO Algorithm

In the proposed MVQIHGSO algorithm, the MVO operator and the QI strategy are embedded into the basic HGSO. The MVO operator is applied to enhance the exploration capability and improve the balance between exploration and exploitation of the basic HGSO. At the same time, the QI strategy is used to find promising candidate solutions around the search space formed by the current individuals; in other words, inferior solutions are replaced with the solutions obtained by the QI strategy to increase the diversity and quality of the population, leading to a faster convergence rate and higher exploitation accuracy. The pseudo-code of MVQIHGSO is given in Algorithm 1.
Algorithm 1. Detailed information of MVQIHGSO algorithm.
Pseudocode of the MVQIHGSO
01: Inputs: the population size N; the maximum number of iterations MaxIt; the constant values including c1, c2, c3 in Equations (2) and (3), c4 and c5 in Equation (6), and α, β, and ε in Equations (4) and (5); the number of groups l
02: Initialize the gas population within the lower and upper boundary
03: Divide the gas population into specific groups with the same Henry’s constant value
04: Evaluate the gas of each group in the population
05: Obtain the best gas of the whole population and the l best gases corresponding to l groups
06: for it from 1 to MaxIt do
07: Generate random number r within [0, 1]
08: if r smaller than 0.5 do
09:  Update the gas position by basic HGSO
10: else do
11:  Update the gas position by MVO
12: end if
13: Sort the updated gas population by HGSO and MVO
14: Update the gas population from 1 to N−2 by the QI strategy based on the greedy rule
15: Evaluate the fitness of the final updated gas population
16: Update the best gas position obtained so far and the l best gases corresponding to l groups
17: end for
18: Outputs: the best global gas position and the corresponding optimal fitness value
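A condensed Python skeleton of Algorithm 1 is given below; the HGSO and MVO position updates are assumed helper routines (for example, wrappers around the update rules sketched earlier), and the skeleton is structural rather than the authors' implementation. The comments reference the pseudocode line numbers of Algorithm 1.

```python
import numpy as np

def mvqihgso(f, lb, ub, n=50, max_it=1000):
    """Structural skeleton of Algorithm 1 (illustrative; update helpers are assumed)."""
    D = lb.size
    pos = lb + np.random.rand(n, D) * (ub - lb)          # line 02: initialise the gas population
    fit = np.apply_along_axis(f, 1, pos)                 # line 04: evaluate
    for it in range(1, max_it + 1):                      # line 06
        if np.random.rand() < 0.5:                       # lines 07-12: HGSO or MVO update
            pos = hgso_update(pos, fit, lb, ub, it, max_it)   # assumed helper
        else:
            pos = mvo_update(pos, fit, lb, ub)                # assumed helper
        fit = np.apply_along_axis(f, 1, pos)
        order = np.argsort(fit)                          # line 13: sort by fitness
        pos, fit = pos[order], fit[order]
        for i in range(n - 2):                           # line 14: QI over consecutive triples
            cand = np.clip(quadratic_interpolation(pos[i], pos[i + 1], pos[i + 2],
                                                   fit[i], fit[i + 1], fit[i + 2]), lb, ub)
            f_cand = f(cand)
            if f_cand < fit[i]:                          # greedy replacement of the first member
                pos[i], fit[i] = cand, f_cand
    best = np.argmin(fit)                                # line 18: best gas and its fitness
    return pos[best], fit[best]
```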

2.4. Support Vector Machine

The support vector machine (SVM) is a well-known machine learning algorithm based on statistical learning theory and the principle of structural risk minimization, and it has received considerable attention. The SVM was first proposed by Cortes and Vapnik and shows excellent performance in terms of generalization ability [32]. Recently, there has been growing interest among scholars and researchers in applying the SVM to classification and regression tasks in various fields, such as streamflow forecasting, image classification, text and hypertext categorization, classification of satellite data, and classification of proteins, with many advances achieved. The main idea of the SVM is to transform the nonlinear input space into a high-dimensional feature space by a nonlinear mapping and to construct a hyperplane there [43].
Operation rule derivation is a regression problem, so epsilon-insensitive SVM (ε-SVM) regression is applied in the present study. ε-SVM was designed by Drucker et al. in 1997 to solve regression problems [44]. Suppose that the true but unknown function of the input x is G(x) and that F(x, w) is a family of functions controlled by the weight vector w; the aim of the SVM is to find the best w, denoted $\hat{w}$, that minimizes a measure of error between G(x) and the specific approximating function F(x, w). The optimal value $\hat{w}$ depends on the primal loss function given as follows:
$$L = \begin{cases} 0, & \left| y_i - F(x_i, \hat{w}) \right| < \varepsilon \\ \left| y_i - F(x_i, \hat{w}) \right| - \varepsilon, & \text{otherwise} \end{cases}$$
where $x_i$ is the input vector of sample point i; $y_i$ is the observed value corresponding to sample point i; $F(x_i, \hat{w})$ denotes the predicted value of sample point i; ε defines a tube such that the loss is zero when $F(x_i, \hat{w})$ lies within the tube, while outside the tube the loss is the magnitude of the difference between $|y_i - F(x_i, \hat{w})|$ and the radius ε of the tube [44]. In order to solve the optimization problem defined by the above primal loss function, the slack variables $\xi$ and $\xi^{*}$ are introduced, and the following objective is minimized:
$$U\left( \sum_{i=1}^{N} \xi_i + \sum_{i=1}^{N} \xi_i^{*} \right) + \frac{1}{2}\left( \mathbf{w}^{T}\mathbf{w} \right)$$
where the objective is a function of $\xi$ and $\xi^{*}$; a large U places more emphasis on the error, whereas a small U places more emphasis on the norm of the weights, resulting in better generalization; N is the number of training vectors. The corresponding constraints are as follows:
$$y_i - \left( \mathbf{w}^{T}\mathbf{v}_i \right) - b \le \varepsilon + \xi_i, \qquad \left( \mathbf{w}^{T}\mathbf{v}_i \right) + b - y_i \le \varepsilon + \xi_i^{*}, \qquad \xi_i \ge 0, \qquad \xi_i^{*} \ge 0$$
where $\mathbf{v}_i$ is training vector i and b denotes the bias term. On this basis, the Lagrangian can be formulated with the Lagrange multipliers $\alpha_i$, $\alpha_i^{*}$, $\gamma_i$, and $\gamma_i^{*}$ as follows:
$$L = \frac{1}{2}\left( \mathbf{w}^{T}\mathbf{w} \right) + U\left( \sum_{i=1}^{N} \xi_i + \sum_{i=1}^{N} \xi_i^{*} \right) - \sum_{i=1}^{N} \alpha_i \left[ \varepsilon + \xi_i - y_i + \left( \mathbf{w}^{T}\mathbf{v}_i \right) + b \right] - \sum_{i=1}^{N} \alpha_i^{*} \left[ \varepsilon + \xi_i^{*} + y_i - \left( \mathbf{w}^{T}\mathbf{v}_i \right) - b \right] - \sum_{i=1}^{N} \left( \gamma_i \xi_i + \gamma_i^{*} \xi_i^{*} \right)$$
As can be seen from Equation (14), in addition to the parameters w, b, and the slack variables, the Lagrangian also involves the Lagrange multipliers, which makes a direct solution for w, b, and the slack variables by taking partial derivatives of the Lagrangian impossible. Fortunately, locating the saddle point of L by differentiating with respect to w, b, and the slack variables leads to an equivalent maximization problem in the dual space, which can be formulated as follows:
$$W(\alpha, \alpha^{*}) = -\varepsilon \sum_{i=1}^{N} \left( \alpha_i + \alpha_i^{*} \right) + \sum_{i=1}^{N} y_i \left( \alpha_i - \alpha_i^{*} \right) - \frac{1}{2} \sum_{i,j=1}^{N} \left( \alpha_i - \alpha_i^{*} \right)\left( \alpha_j - \alpha_j^{*} \right) \left( \mathbf{v}_i^{T}\mathbf{v}_j + 1 \right)^{p}$$
with the constraints:
$$0 \le \alpha_i \le U, \qquad 0 \le \alpha_i^{*} \le U, \qquad i = 1, \ldots, N, \qquad \sum_{i=1}^{N} \alpha_i = \sum_{i=1}^{N} \alpha_i^{*}$$
As a result, the prediction of y ( p ) corresponding to new input vector x ( p ) can be formulated as follows:
$$y^{(p)} = \sum_{i=1}^{N} \left( \alpha_i - \alpha_i^{*} \right) \left( \mathbf{v}_i^{T} x^{(p)} + 1 \right)^{p} + b$$
In order to effectively and efficiently solve various linear and nonlinear regression problems, the most critical component of the SVM, namely the kernel function, is described here. The kernel function ensures that the dot product in the high-dimensional space can be computed directly in the low-dimensional space according to the Hilbert–Schmidt theory [45], greatly reducing the computational complexity and avoiding the curse of dimensionality. In this manner, the performance of the SVM in both classification and regression is significantly improved. The kernel function operates as follows:
$$K(x_1, x_2) = \left\langle \phi(x_1), \phi(x_2) \right\rangle$$
where $K(\cdot,\cdot)$ is the kernel function; $\phi(x)$ is a transformation that maps x to a high-dimensional space; $\langle \cdot, \cdot \rangle$ denotes the dot product. The commonly used kernel functions in SVM include the linear function, polynomial function, radial basis function (RBF), and sigmoid function. Generally, RBF achieves significant superiority over the other kernel functions and can be expressed as follows:
$$K_{RBF}(x_1, x_2) = \exp\left( -\gamma \left\| x_1 - x_2 \right\|^2 \right), \qquad \gamma > 0$$
where γ is one hyperparameter of SVM.
It is noteworthy that the input samples need to be standardized in order to eliminate the numerical instability caused by differing feature scales and to improve the convergence speed. The standardization of an input sample x can be expressed as follows:
$$x' = \frac{x - u}{\sigma}$$
where $x'$ is the standardized sample; u and $\sigma$ are the mean vector and standard deviation vector, respectively.
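As an illustration of this modelling chain, the following sketch uses scikit-learn (an assumed toolchain, not necessarily the one used in this study) to standardize the inputs and fit an ε-SVR with an RBF kernel; the regularization constant C (the trade-off denoted U above), the kernel parameter γ, and ε are the hyperparameters that MVQIHGSO would search over, and the data here are random placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# toy data: rows are samples, columns are input features such as initial water level and inflow
X = np.random.rand(200, 3)
y = np.random.rand(200)

scaler = StandardScaler()                 # x' = (x - u) / sigma, feature-wise
X_std = scaler.fit_transform(X)

# the three hyperparameters below are the quantities a metaheuristic would tune
model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.01)
model.fit(X_std, y)
y_pred = model.predict(X_std)
```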
In summary, MVQIHGSO and the SVM are combined to construct a novel optimal operation rule derivation method, called the MVQIHGSO-SVM model. The proposed MVQIHGSO is used to optimize the hyperparameters of the SVM; the flowchart of MVQIHGSO-SVM is shown in Figure 1. Furthermore, several commonly used validation indices are used to comprehensively evaluate the performance of the proposed MVQIHGSO-SVM model: the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). Different indices validate the performance of the model in different dimensions. Specifically, R2 characterizes the linear correlation between the observed data and the predicted data; RMSE indicates the average degree of dispersion between the observed and predicted data; MAE and MAPE quantify the deviation of the predicted data from the observed data. The validation indices are formulated as follows:
$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2}$$
$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$$
$$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$
$$MAPE = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|$$
It is noteworthy that the closer R2 is to 1 and the smaller the RMSE, MAE, and MAPE, the better the performance of the model.
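The four indices can be computed directly from their definitions, as in the following sketch:

```python
import numpy as np

def validation_indices(y_obs, y_pred):
    """R2, RMSE, MAE, and MAPE (%) as defined above (a direct transcription of the formulas)."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))
    mae = np.mean(np.abs(y_obs - y_pred))
    mape = 100.0 * np.mean(np.abs((y_obs - y_pred) / y_obs))
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "MAPE": mape}
```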

3. Experimental Evaluation and Discussion

In order to verify the performance of the proposed MVQIHGSO algorithm, a comprehensive set of 23 benchmark functions widely used in the field of intelligent algorithms is adopted as the test bed. The benchmark set includes three types: unimodal (F1–F7), multimodal (F8–F13), and fixed-dimension multimodal (F14–F23) functions, all of which are minimization problems. F1–F7 are used to test the convergence speed and exploitation accuracy of the algorithms because only one optimum exists in the search space; F8–F13 are used to test the exploration capacity in the presence of many local optima; F14–F23 are used to test the ability to avoid local optima and the balance between exploration and exploitation. More detailed information about these benchmark functions, including the function expressions, the dimension and bounds of the decision variables, and the theoretical global optima, is presented in Table 1, Table 2 and Table 3.
In addition, for a general and fair comparison, the proposed MVQIHGSO is compared with the basic HGSO and several state-of-the-art metaheuristic optimization algorithms, including particle swarm optimization (PSO) [46], the differential evolution algorithm (DE) [47], the multi-verse optimizer (MVO) [39], the sine cosine algorithm (SCA) [48], the opposition-based sine cosine algorithm (OBSCA) [49], the grey wolf optimizer (GWO) [50], and the improved grey wolf optimizer (IGWO) [51]. It is noteworthy that the parameter settings of the comparison algorithms are the same as those used in the original literature. At the same time, without loss of generality, the common parameters, namely the population size, the dimension of the decision variables, and the maximum number of iterations, are set to 50, 30, and 1000, respectively. In particular, in order to reduce the impact of algorithmic randomness on the overall assessment, each optimization algorithm is run 30 times independently on each benchmark function to analyze its statistical performance.
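The experimental protocol can be sketched as follows; the benchmark function, bounds, and optimizer interface shown are placeholders consistent with the settings above (50 agents, 30 dimensions, 1000 iterations, 30 independent runs).

```python
import numpy as np

def sphere(x):                       # F1 (unimodal) used here as a placeholder benchmark
    return float(np.sum(x ** 2))

def run_trials(optimizer, f, dim=30, pop=50, max_it=1000, runs=30, bound=100.0):
    """Run an optimizer `runs` times on benchmark f and report mean/std of the best fitness."""
    lb, ub = -bound * np.ones(dim), bound * np.ones(dim)
    best_vals = []
    for _ in range(runs):
        _, best_fit = optimizer(f, lb, ub, n=pop, max_it=max_it)
        best_vals.append(best_fit)
    return np.mean(best_vals), np.std(best_vals)

# usage (assuming the mvqihgso skeleton sketched earlier):
# mean_f1, std_f1 = run_trials(mvqihgso, sphere)
```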

3.1. Statistical Results and Analysis

In this subsection, the aforementioned algorithms were run 30 times on each benchmark function, and the statistical results, including the average and standard deviation, were calculated from the optimization results. The corresponding statistical results are recorded in Table 4, where the best average value for each function is presented in bold. Table 4 shows that MVQIHGSO provides very competitive results and attains the smallest average values on 16 of the 23 benchmark functions, more than half of them. The specific performance is analyzed as follows.
For the unimodal functions, the proposed algorithm achieves significant superiority over the other well-known algorithms on F1, F2, F3, and F7, indicating the superior performance of MVQIHGSO in finding the optimum in the local search space during the exploitation phase. This is because the quadratic interpolation method is embedded into the basic HGSO algorithm, which enhances its ability to exploit the optimum. For the multimodal functions, the proposed algorithm also outperforms the other algorithms on F9, F10, and F11, indicating its good exploration capacity in the presence of a large number of local optima, owing to the white/black-hole mechanism introduced from the MVO algorithm, which resolves local optima stagnation through sudden changes. For the fixed-dimension multimodal functions, the proposed algorithm achieves the best performance on all benchmark functions compared with the other algorithms, owing to the incorporated wormhole mechanism with adaptive controlling parameters that balances exploration and exploitation. The above analysis of the statistical results reveals that the resulting algorithm is robust and statistically sound in both the exploration and exploitation phases.

3.2. Non-Parameter Test Results and Analysis

The Friedman test is a non-parametric statistical test developed by Milton Friedman, which is often used to detect differences in treatments across multiple test attempts [52,53]. For the performance comparison of metaheuristic algorithms, it uses rank information based on the statistical results of several runs to examine significant differences among multiple population distributions [54]. Therefore, in this subsection, the Friedman test was first used to benchmark the overall performance of MVQIHGSO against the other state-of-the-art algorithms. The average ranks of the algorithms obtained in this manner are given in Table 5, which shows that the Friedman rank of the proposed MVQIHGSO is 2.543, the smallest value among all the algorithms. The second-best algorithm is IGWO, with a Friedman rank of 3.369. These results show that the overall performance of the proposed algorithm is better in terms of exploration and exploitation.
Additionally, in order to further examine the performance difference in each run and exclude the possibility that the superiority occurs by chance, another non-parametric test, the Wilcoxon signed-rank test [55], was also carried out to determine whether there is a statistically significant difference between the proposed MVQIHGSO and the other state-of-the-art algorithms. The null hypothesis is that the differences have a median of zero, and the alternative hypothesis is that the median is not equal to zero for a two-sided test, or greater (or smaller) than zero for a one-sided test. The results of the Wilcoxon signed-rank test with a significance level of 5% are presented in Table 6. As shown in Table 6, the proposed algorithm performs significantly better than the other algorithms whenever the p value is less than 0.05. This strongly demonstrates the potential capabilities of the proposed algorithm in coping with complex real-world optimization problems.
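Both tests are available in SciPy; the following sketch shows how they could be applied to the 30-run records of each algorithm on one benchmark function (the result arrays here are random placeholders).

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
# placeholder 30-run best-fitness records of three algorithms on the same benchmark
res_mvqihgso = rng.normal(0.0, 1.0, 30)
res_hgso = rng.normal(0.5, 1.0, 30)
res_pso = rng.normal(0.8, 1.0, 30)

# Friedman test: do the paired results of the algorithms come from the same distribution?
stat_f, p_f = friedmanchisquare(res_mvqihgso, res_hgso, res_pso)

# Wilcoxon signed-rank test: pairwise comparison at the 5% significance level
stat_w, p_w = wilcoxon(res_mvqihgso, res_hgso)
print(f"Friedman p = {p_f:.4f}, Wilcoxon (MVQIHGSO vs HGSO) p = {p_w:.4f}")
```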
From the analysis of the Friedman test and the Wilcoxon signed-rank test, it can be concluded that MVQIHGSO is able to avoid local optima stagnation and find the global optimal solution more accurately. This is due to the introduction of the MVO operator and the QI strategy for improving the position updating mechanism of the basic HGSO. In this way, the convergence speed is also improved, as can be seen in Figure 2, which shows that the convergence speed of MVQIHGSO is significantly higher than that of the other algorithms.

4. Case Study

4.1. Study Region

The main stream of the Jinsha River hosts the largest hydropower base planned and constructed in China. The lower reaches of the Jinsha River (from the Yalong River estuary to Yibin) have the richest hydropower resources, with a length of 782 km and a drop of 729 m. The Xiluodu and Xiangjiaba (XLD and XJB) hydropower reservoirs, located in the lower reaches of the Jinsha River, are China's third and fifth largest hydropower stations, with a total installed capacity of 18,600 MW, equivalent to building a new Three Gorges hydropower station. The landscape of the XLD and XJB hydropower stations is shown in Figure 3, and their characteristic parameters are given in Table 7.

4.2. Data Description

The historical inflow of the XLD hydropower reservoir is used as the input of the established optimal operation model of the XLD–XJB hydropower reservoir system. Since the first unit of the XLD hydropower station was put into operation in July 2013, the historical inflow series covers the years 2014 to 2020. The optimization algorithm combining DP, POA, and DPSA is used to solve the established optimal operation model. Then, the optimal operation process data of the XLD–XJB hydropower reservoir system, including water level and outflow, are used as the input of the operation rule derivation model MVQIHGSO-SVM.
Specifically, seven years (2014 to 2020) of daily operation data of the XLD and XJB hydropower stations were obtained, giving a total of 2557 observed samples. The five years of observed samples from 2014 to 2018 were used as the training set of the SVM model, and the remaining data were used as the validation set. In order to improve the fitting accuracy, the end water level of a period was selected as the decision variable. At the same time, in the implicit stochastic optimal operation of hydropower reservoirs, the initial water level and the inflow of the period are often used as the input variables of the operation rules [9,56]. In addition, in order to account for the hydraulic connection between reservoirs, the initial water level of the adjacent reservoir in the same period is also used as an input variable.
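A sketch of how the samples described above could be assembled is given below; the column names are hypothetical placeholders, since the underlying series are not public, and the split follows the 2014–2018 training / 2019–2020 validation division.

```python
import pandas as pd

# df holds one row per day taken from the implicit stochastic (optimal) operation results,
# e.g. columns: date, z_xld, z_xjb, q_in_xld (placeholder names for the real series)
def build_samples(df):
    X = pd.DataFrame({
        "z_xld_start": df["z_xld"].shift(1),   # initial water level of XLD for the period
        "q_in_xld": df["q_in_xld"],            # inflow of XLD during the period
        "z_xjb_start": df["z_xjb"].shift(1),   # initial level of the adjacent reservoir
    })
    y = df["z_xld"]                            # decision variable: end-of-period XLD level
    return X.iloc[1:], y.iloc[1:]              # drop the first row created by the shift

# split by calendar year: 2014-2018 for training, 2019-2020 for validation, e.g.
# train_mask = df["date"].dt.year <= 2018
```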

4.3. Results and Discussion

In order to verify the performance of the proposed MVQIHGSO-SVM model, other hybrid models, including HGSO-SVM, PSO-SVM, SCA-SVM, and Grid-SVM, were compared in this study. The validation indices of all the models were calculated according to Equations (33)–(36) and are given in Table 8. Table 8 shows that, for the same input data, the proposed MVQIHGSO-SVM model produces more accurate results than the other models. Clearly, equipping HGSO with the multiple strategies increases R2 and reduces the RMSE, MAE, and MAPE values in the prediction of the end water level of a period. According to the highest R2 and the lowest RMSE, MAE, and MAPE, the best model is the proposed MVQIHGSO-SVM, with R2 = 0.998, RMSE = 0.340, MAE = 0.126, and MAPE = 0.021% for XLD and R2 = 0.998, RMSE = 0.164, MAE = 0.075, and MAPE = 0.019% for XJB. This means that the proposed MVQIHGSO method can find the best hyperparameter combination to improve the ability of the SVM with respect to operation rule derivation. The remaining models, ranked from best to worst according to the RMSE value, are Grid-SVM, SCA-SVM, PSO-SVM, and HGSO-SVM for both XLD and XJB; the worst of all hybrid models for both reservoirs is thus HGSO-SVM. The performance of the proposed model is improved by 17.27% and 14.58% compared with HGSO-SVM in terms of the RMSE value for XLD and XJB, respectively. The good performance of the proposed model can be attributed to two factors: (1) the SVM shows unique advantages in solving small-sample and non-linear problems by using kernel functions; and (2) the multiple strategies, namely the MVO operator and the QI strategy, improve the optimization ability of the basic HGSO.
In order to clearly show the ability of the different methods to predict the end water level of a period, scatter plots of all hybrid models on the validation set are presented in Figure 4. Figure 4 shows a significant quantitative correlation between the observed and predicted water level processes, because the plotted points lie close to the 45-degree line through the origin. However, the performance of the different hybrid models differs slightly, and the proposed MVQIHGSO-SVM performs better than the other hybrid models. For example, Figure 4 shows that the prediction error is small when XLD is at high water levels using MVQIHGSO-SVM, whereas it is large using HGSO-SVM, SCA-SVM, and PSO-SVM. For XJB, the water levels in both the higher and lower water level intervals can be accurately predicted using the MVQIHGSO-SVM model.
In order to further explore the differences in prediction accuracy among the hybrid models, the observed and predicted water level processes of XLD and XJB are compared in Figure 5. As shown in Figure 5a for XLD, in the transitional stage from low to high water levels, the prediction results of the hybrid models differ considerably. The predicted water level process based on MVQIHGSO-SVM is closest to the observed values, followed by Grid-SVM, SCA-SVM, PSO-SVM, and HGSO-SVM. Combined with the inflow process of XLD from 1 January 2020 to 29 February 2020 in Figure 6, it can be observed that the inflow of XLD is relatively large after 23 January 2020. The reservoir therefore vacates storage capacity in advance, resulting in the decline of the water level from 15 January 2020. From this point of view, the proposed MVQIHGSO-SVM can more accurately store and release floodwater in order to deal with future large inflow processes. As shown in Figure 5b for XJB, when it is operating at a high water level, only the water level process predicted by MVQIHGSO-SVM satisfies the maximum water level constraint, namely the normal pool level of 380 m. This shows that the proposed MVQIHGSO-SVM model can capture the operation rules of hydropower reservoirs, considering the complex non-linear constraints and the production experience of operators, more accurately than the other hybrid models.
Finally, the predicted water level processes of all hybrid models are used as the input data of the conventional operation model of the XLD and XJB hydropower reservoirs. In addition, the optimal operation results of XLD–XJB are also calculated. These results are given in Table 9, which shows that the highest total hydropower generation, 196.41 TWh, is obtained by the optimal operation model. MVQIHGSO-SVM ranks second, followed by Grid-SVM, SCA-SVM, HGSO-SVM, PSO-SVM, and conventional operation. The hydropower generation obtained by conventional operation is 193.85 TWh, which is 2.56 TWh less than the optimal operation result. The hydropower generation calculated with the proposed MVQIHGSO-SVM model is closest to the optimal operation result, and its increase over conventional operation is the largest, reaching 2.23 TWh, or 1.15%. In summary, the operation rules of hydropower reservoirs derived by the proposed MVQIHGSO-SVM model outperform those of the other hybrid models and yield a total hydropower generation closer to that of the optimal operation model.

5. Conclusions

In this study, a novel operation rule derivation method combining the improved HGSO and SVM is proposed. Multiple strategies, including the MVO operator and the QI method, are used to cope with the drawbacks of the original HGSO. The improved HGSO, called MVQIHGSO, is then applied to optimize the hyperparameters of the SVM model to derive the operation rules of hydropower reservoirs, forming the MVQIHGSO-SVM model. The main contributions of this paper are summarized below.
(1)
Multiple strategies are incorporated into HGSO to improve its performance in exploration and exploitation. The multi-verse optimizer (MVO) is used to enhance the exploration capability of the basic HGSO and to help inferior agents escape from local optima. Quadratic interpolation (QI) is used to improve the exploitation ability of HGSO. Finally, exploration and exploitation are balanced by integrating the multiple strategies.
(2)
MVQIHGSO with multiple strategies is benchmarked on 23 classical benchmark functions. The results demonstrate that MVQIHGSO outperforms most of the well-known metaheuristic algorithms and has superior efficacy compared with the competitors in terms of convergence accuracy and speed.
(3)
The MVQIHGSO-SVM model is used to derive the operating rules of hydropower reservoirs. XLD and XJB in the upper Yangtze River are selected as a case study. The results indicate that the proposed MVQIHGSO-SVM model can accurately capture the joint operation rules of hydropower reservoirs. The total hydropower generation calculated with the proposed hybrid model is closest to the optimal operation result, and its increase over conventional scheduling is the largest, reaching 22.25 × 10^8 kWh (2.23 TWh), an increase of 1.15%.
In the future, the improved HGSO algorithm can be combined with other machine learning methods to derive operation rules for large hydropower reservoir systems.

Author Contributions

H.Q. conceived the original idea, and H.Q. designed the methodology. H.Q. and S.Z. collected the data. H.Q. developed the code and performed the study. H.Q., T.H. and Y.X. contributed to the interpretation of the results. H.Q. wrote the paper, and T.H. revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been supported by National Key Research and Development Program of China (2021YFC3200305, 2022YFC3002705), Independent scientific research project of China Three Gorges Corporation (WWKY-2020-0299).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Due to the strict security requirements from the departments, some or all data, models, or code generated or used in the study are proprietary or confidential in nature and may only be provided with restrictions (e.g., anonymized data).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhai, R.; Tao, F. Contributions of climate change and human activities to runoff change in seven typical catchments across China. Sci. Total Environ. 2017, 605, 219–229. [Google Scholar] [CrossRef] [PubMed]
  2. Rodriguez, R.S.; Ürge-Vorsatz, D.; Barau, A.S. Sustainable Development Goals and climate change adaptation in cities. Nat. Clim. Chang. 2018, 8, 181–183. [Google Scholar] [CrossRef]
  3. Qiu, H.; Chen, L.; Zhou, J.; He, Z.; Zhang, H. Risk analysis of water supply-hydropower generation-environment nexus in the cascade reservoir operation. J. Clean. Prod. 2021, 283, 124239. [Google Scholar] [CrossRef]
  4. Qiu, H.; Zhou, J.; Chen, L.; Zhu, Y. Multiple Strategies Based Salp Swarm Algorithm for Optimal Operation of Multiple Hydropower Reservoirs. Water 2021, 13, 2753. [Google Scholar] [CrossRef]
  5. Baloch, M.A.; Mahmood, N.; Zhang, J.W. Effect of natural resources, renewable energy and economic development on CO2 emissions in BRICS countries. Sci. Total Environ. 2019, 678, 632–638. [Google Scholar]
  6. Su, C.W.; Umar, M.; Khan, Z. Does fiscal decentralization and eco-innovation promote renewable energy consumption? Analyzing the role of political risk. Sci. Total Environ. 2021, 751, 142220. [Google Scholar] [CrossRef]
  7. Kang, J.N.; Wei, Y.M.; Liu, L.C.; Han, R.; Yu, B.Y.; Wang, J.W. Energy systems for climate change mitigation: A systematic review. Appl. Energy 2020, 263, 114602. [Google Scholar] [CrossRef]
  8. Xu, J.; Ni, T.; Zheng, B. Hydropower development trends from a technological paradigm perspective. Energy Convers. Manag. 2015, 90, 195–206. [Google Scholar] [CrossRef]
  9. Li, H.; Liu, P.; Guo, S.; Cheng, L.; Huang, K.; Feng, M.; He, S.; Ming, B. Deriving adaptive long-term complementary operating rules for a large-scale hydro-photovoltaic hybrid power plant using ensemble Kalman filter. Appl. Energy 2021, 301, 117482. [Google Scholar] [CrossRef]
  10. Berga, L. The role of hydropower in climate change mitigation and adaptation: A review. Engineering 2016, 2, 313–318. [Google Scholar] [CrossRef] [Green Version]
  11. Killingtveit, A. Hydropower. In Managing Global Warming; Academic Press: Cambridge, MA, USA, 2019; pp. 265–315. [Google Scholar]
  12. Provansal, M.; Dufour, S.; Sabatier, F.; Anthony, E.J.; Raccasi, G.; Robresco, S. The geomorphic evolution and sediment balance of the lower Rhône River (southern France) over the last 130 years: Hydropower dams versus other control factors. Geomorphology 2014, 219, 27–41. [Google Scholar] [CrossRef]
  13. Zhang, H.; Chang, J.; Gao, C.; Wu, H.; Wang, Y.; Lei, K.; Long, R.; Zhang, L. Cascade hydropower plants operation considering comprehensive ecological water demands. Energy Convers. Manag. 2019, 180, 119–133. [Google Scholar] [CrossRef]
  14. Brandão, J.L.B. Performance of the equivalent reservoir modelling technique for multi-reservoir hydropower systems. Water Resour. Manag. 2010, 24, 3101–3114. [Google Scholar] [CrossRef]
  15. Fu, X.; Li, A.; Wang, L.; Ji, C. Short-term scheduling of cascade reservoirs using an immune algorithm-based particle swarm optimization. Comput. Math. Appl. 2011, 62, 2463–2471. [Google Scholar] [CrossRef] [Green Version]
  16. Allawi, M.F.; Jaafar, O.; Ehteram, M.; Hamzah, F.M.; El-Shafie, A. Synchronizing artificial intelligence models for operating the dam and reservoir system. Water Resour. Manag. 2018, 32, 3373–3389. [Google Scholar] [CrossRef]
  17. Dobson, B.; Wagener, T.; Pianosi, F. An argument-driven classification and comparison of reservoir operation optimization methods. Adv. Water Resour. 2019, 128, 74–86. [Google Scholar] [CrossRef]
  18. Howson, H.R.; Sancho, N.G.F. A new algorithm for the solution of multi-state dynamic programming problems. Math. Program. 1975, 8, 104–116. [Google Scholar] [CrossRef]
  19. Hossain, M.S.; El-Shafie, A. Intelligent systems in optimizing reservoir operation policy: A review. Water Resour. Manag. 2013, 27, 3387–3407. [Google Scholar] [CrossRef]
  20. Yeh, W.W.G. Reservoir management and operations models: A state-of-the-art review. Water Resour. Res. 1985, 21, 1797–1818. [Google Scholar] [CrossRef]
  21. Feng, M.; Liu, P.; Guo, S.; Shi, L.; Deng, C.; Ming, B. Deriving adaptive operating rules of hydropower reservoirs using time-varying parameters generated by the E n KF. Water Resour. Res. 2017, 53, 6885–6907. [Google Scholar] [CrossRef]
  22. Kumar, D.N.; Baliarsingh, F. Folded dynamic programming for optimal operation of multireservoir system. Water Resour. Manag. 2003, 17, 337–353. [Google Scholar] [CrossRef]
  23. Pant, M.; Rani, D. Large scale reservoir operation through integrated meta-heuristic approach. Memetic Comput. 2021, 13, 359–382. [Google Scholar]
  24. Srinivasan, K.; Kumar, K. Multi-objective simulation-optimization model for long-term reservoir operation using piecewise linear hedging rule. Water Resour. Manag. 2018, 32, 1901–1911. [Google Scholar] [CrossRef]
  25. Mehta, R.; Jain, S.K. Optimal operation of a multi-purpose reservoir using neuro-fuzzy technique. Water Resour. Manag. 2009, 23, 509–529. [Google Scholar] [CrossRef]
  26. Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Wu, X.Y. Optimization of hydropower system operation by uniform dynamic programming for dimensionality reduction. Energy 2017, 134, 718–730. [Google Scholar] [CrossRef]
  27. Bhaskar, N.R.; Whitlatch, E.E., Jr. Derivation of monthly reservoir release policies. Water Resour. Res. 1980, 16, 987–993. [Google Scholar] [CrossRef]
  28. Feng, Z.K.; Niu, W.J.; Zhang, R.; Wang, S.; Cheng, C.T. Operation rule derivation of hydropower reservoir by k-means clustering method and extreme learning machine based on particle swarm optimization. J. Hydrol. 2019, 576, 229–238. [Google Scholar] [CrossRef]
  29. He, S.; Guo, S.; Yang, G.; Chen, K.; Liu, D.; Zhou, Y. Optimizing operation rules of cascade reservoirs for adapting climate change. Water Resour. Manag. 2020, 34, 101–120. [Google Scholar] [CrossRef]
  30. Niu, W.J.; Feng, Z.K.; Feng, B.F.; Min, Y.W.; Cheng, C.T.; Zhou, J.Z. Comparison of multiple linear regression, artificial neural network, extreme learning machine, and support vector machine in deriving operation rule of hydropower reservoir. Water 2019, 11, 88. [Google Scholar] [CrossRef] [Green Version]
  31. Liu, P.; Li, L.; Chen, G.; Rheinheimer, D.E. Parameter uncertainty analysis of reservoir operating rules based on implicit stochastic optimization. J. Hydrol. 2014, 514, 102–113. [Google Scholar] [CrossRef]
  32. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  33. Zhang, D.; Lin, J.; Peng, Q.; Wang, D.; Yang, T.; Sorooshian, S.; Liu, X.; Zhuang, J. Modeling and simulating of reservoir operation using the artificial neural network, support vector regression, deep learning algorithm. J. Hydrol. 2018, 565, 720–736. [Google Scholar] [CrossRef]
  34. Li, L.L.; Zhao, X.; Tseng, M.L.; Tan, R.R. Short-term wind power forecasting based on support vector machine with improved dragonfly algorithm. J. Clean. Prod. 2020, 242, 118447. [Google Scholar] [CrossRef]
  35. Albardan, M.; Klein, J.; Colot, O. SPOCC: Scalable POssibilistic Classifier Combination-toward robust aggregation of classifiers. Expert Syst. Appl. 2020, 150, 113332. [Google Scholar] [CrossRef] [Green Version]
  36. Chilakala, L.R.; Kishore, G.N. Optimal deep belief network with opposition-based hybrid grasshopper and honeybee optimization algorithm for lung cancer classification: A DBNGHHB approach. Int. J. Imaging Syst. Technol. 2021, 31, 1404–1423. [Google Scholar] [CrossRef]
  37. Henry, W. Experiments on the quantity of gases absorbed by water, at different temperatures, and under different pressures. Philos. Trans. R. Soc. Lond. 1803, 93, 29–274. [Google Scholar]
  38. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  39. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  40. Yang, Y.; Zong, X.; Yao, D.; Li, S. Improved Alopex-based evolutionary algorithm (AEA) by quadratic interpolation and its application to kinetic parameter estimations. Appl. Soft Comput. 2017, 51, 23–38. [Google Scholar] [CrossRef]
  41. Guo, W.Y.; Wang, Y.; Dai, F.; Xu, P. Improved sine cosine algorithm combined with optimal neighborhood and quadratic interpolation strategy. Eng. Appl. Artif. Intell. 2020, 94, 103779. [Google Scholar] [CrossRef]
  42. Zhang, H.; Cai, Z.; Ye, X.; Wang, M.; Kuang, F.; Chen, H.; Li, C.; Li, Y. A multi-strategy enhanced salp swarm algorithm for global optimization. Eng. Comput. 2020, 38, 1177–1203. [Google Scholar] [CrossRef]
43. Schölkopf, B.; Smola, A.J. Learning with kernels: Support vector machines, regularization, optimization, and beyond. In Adaptive Computation and Machine Learning Series; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  44. Drucker, H.; Burges, C.J.; Kaufman, L.; Smola, A.; Vapnik, V. Support vector regression machines. Adv. Neural Inf. Process. Syst. 1997, 9, 155–161. [Google Scholar]
  45. Courant, R.; Hilbert, D. The calculus of variations. In Methods of Mathematical Physics; Interscience Publishers: New York, NY, USA, 1953; pp. 164–274. [Google Scholar]
46. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95 - International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  47. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  48. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  49. Abd Elaziz, M.; Oliva, D.; Xiong, S. An improved opposition-based sine cosine algorithm for global optimization. Expert Syst. Appl. 2017, 90, 484–500. [Google Scholar] [CrossRef]
  50. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  51. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  52. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  53. Carrasco, J.; García, S.; Rueda, M.M.; Das, S.; Herrera, F. Recent trends in the use of statistical tests for comparing swarm and evolutionary computing algorithms: Practical guidelines and a critical review. Swarm Evol. Comput. 2020, 54, 100665. [Google Scholar] [CrossRef] [Green Version]
  54. Muthusamy, H.; Ravindran, S.; Yaacob, S.; Polat, K. An improved elephant herding optimization using sine–cosine mechanism and opposition based learning for global optimization problems. Expert Syst. Appl. 2021, 172, 114607. [Google Scholar] [CrossRef]
  55. Wilcoxon, F. Individual comparisons by ranking methods. In Breakthroughs in Statistics; Springer: New York, NY, USA, 1992; pp. 196–202. [Google Scholar]
  56. Young, G.K., Jr. Finding reservoir operating rules. J. Hydraul. Div. 1967, 93, 297–322. [Google Scholar] [CrossRef]
Figure 1. Flowchart of MVQIHGSO-SVM method.
Figure 2. Convergence curves of algorithms for F1, F4, F9, and F11.
Figure 3. Location of XLD-XJB cascade reservoirs.
Figure 4. Scatter plots of all hybrid models in the validation set (unit: m).
Figure 5. Observed and predicted water level process (unit: m).
Figure 6. Inflow process of XLD from 1 January 2020 to 29 February 2020.
Table 1. Description of unimodal benchmark functions.
Function | Dim | Range | f_min
$f_1(x)=\sum_{i=1}^{n}x_i^2$ | 30 | [−100, 100] | 0
$f_2(x)=\sum_{i=1}^{n}\left|x_i\right|+\prod_{i=1}^{n}\left|x_i\right|$ | 30 | [−10, 10] | 0
$f_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0
$f_4(x)=\max_{i}\left\{\left|x_i\right|,\ 1\le i\le n\right\}$ | 30 | [−100, 100] | 0
$f_5(x)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ | 30 | [−30, 30] | 0
$f_6(x)=\sum_{i=1}^{n}\left(\left\lfloor x_i+0.5\right\rfloor\right)^2$ | 30 | [−100, 100] | 0
$f_7(x)=\sum_{i=1}^{n}i\,x_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
Table 2. Description of multimodal benchmark functions.
Function | Dim | Range | f_min
$f_8(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{\left|x_i\right|}\right)$ | 30 | [−500, 500] | −418.9829 × n
$f_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos\left(2\pi x_i\right)+10\right]$ | 30 | [−5.12, 5.12] | 0
$f_{10}(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos\left(2\pi x_i\right)\right)+20+e$ | 30 | [−32, 32] | 0
$f_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0
$f_{12}(x)=\tfrac{\pi}{n}\left\{10\sin^2\left(\pi y_1\right)+\sum_{i=1}^{n-1}\left(y_i-1\right)^2\left[1+10\sin^2\left(\pi y_{i+1}\right)\right]+\left(y_n-1\right)^2\right\}+\sum_{i=1}^{n}u\left(x_i,10,100,4\right)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u\left(x_i,a,k,m\right)=\begin{cases}k\left(x_i-a\right)^m, & x_i>a\\ 0, & -a<x_i<a\\ k\left(-x_i-a\right)^m, & x_i<-a\end{cases}$ | 30 | [−50, 50] | 0
$f_{13}(x)=0.1\left\{\sin^2\left(3\pi x_1\right)+\sum_{i=1}^{n}\left(x_i-1\right)^2\left[1+\sin^2\left(3\pi x_i+1\right)\right]+\left(x_n-1\right)^2\left[1+\sin^2\left(2\pi x_n\right)\right]\right\}+\sum_{i=1}^{n}u\left(x_i,5,100,4\right)$ | 30 | [−50, 50] | 0
Table 3. Description of fixed-dimension multimodal benchmark functions.
Function | Dim | Range | f_min
$f_{14}(x)=\left(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}\left(x_i-a_{ij}\right)^6}\right)^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x)=\sum_{i=1}^{11}\left[a_i-\tfrac{x_1\left(b_i^2+b_i x_2\right)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.00030
$f_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
$f_{17}(x)=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$f_{18}(x)=\left[1+\left(x_1+x_2+1\right)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+\left(2x_1-3x_2\right)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−2, 2] | 3
$f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}\left(x_j-p_{ij}\right)^2\right)$ | 3 | [1, 3] | −3.86
$f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}\left(x_j-p_{ij}\right)^2\right)$ | 6 | [0, 1] | −3.32
$f_{21}(x)=-\sum_{i=1}^{5}\left[\left(X-a_i\right)\left(X-a_i\right)^{T}+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
$f_{22}(x)=-\sum_{i=1}^{7}\left[\left(X-a_i\right)\left(X-a_i\right)^{T}+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
$f_{23}(x)=-\sum_{i=1}^{10}\left[\left(X-a_i\right)\left(X-a_i\right)^{T}+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363
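For readers who wish to reproduce the benchmark setting, a minimal Python sketch of three of the listed test functions (the Sphere f1, Rastrigin f9, and Ackley f10) is given below; the function and variable names are illustrative and are not taken from the authors' implementation.

```python
import numpy as np

def sphere(x):
    """f1: unimodal Sphere function, global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2)

def rastrigin(x):
    """f9: multimodal Rastrigin function, global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):
    """f10: multimodal Ackley function, global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

if __name__ == "__main__":
    # Evaluate one random 30-dimensional point inside each function's search range.
    rng = np.random.default_rng(0)
    print(sphere(rng.uniform(-100, 100, 30)))
    print(rastrigin(rng.uniform(-5.12, 5.12, 30)))
    print(ackley(rng.uniform(-32, 32, 30)))
```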
Table 4. Statistical results of the proposed MVQIHGSO and other state-of-the-art algorithms.
Functions | Indicator | MVQIHGSO | HGSO | PSO | DE | MVO | SCA | OBSCA | GWO | IGWO
Unimodal functions
F1 | Mean | 0.00 | 1.71 × 10^−71 | 2.33 × 10^−40 | 5.29 × 10^−12 | 1.79 × 10^−1 | 2.74 × 10^−3 | 1.11 × 10^−27 | 2.48 × 10^−70 | 2.57 × 10^−71
F1 | Std | 9.34 × 10^−71 | 0.00 | 1.04 × 10^−39 | 2.22 × 10^−12 | 5.80 × 10^−2 | 7.88 × 10^−3 | 4.22 × 10^−27 | 7.10 × 10^−70 | 6.07 × 10^−71
F2 | Mean | 4.10 × 10^−185 | 1.25 × 10^−43 | 8.79 × 10^−3 | 4.65 × 10^−8 | 3.55 | 3.17 × 10^−6 | 4.21 × 10^−25 | 6.67 × 10^−41 | 1.95 × 10^−42
F2 | Std | 6.83 × 10^−43 | 0.00 | 3.15 × 10^−2 | 1.23 × 10^−8 | 1.81 × 10^1 | 4.67 × 10^−6 | 2.18 × 10^−24 | 7.20 × 10^−41 | 3.62 × 10^−42
F3 | Mean | 0.00 | 1.04 × 10^−69 | 1.09 × 10^−1 | 2.41 × 10^4 | 1.83 × 10^1 | 2.10 × 10^3 | 1.32 × 10^−3 | 7.80 × 10^−19 | 8.61 × 10^−13
F3 | Std | 5.66 × 10^−69 | 0.00 | 1.28 × 10^−1 | 2.77 × 10^3 | 6.90 | 1.73 × 10^3 | 5.17 × 10^−3 | 3.92 × 10^−18 | 4.38 × 10^−12
F4 | Mean | 2.06 × 10^−72 | 1.51 × 10^−183 | 1.09 × 10^−1 | 2.00 | 6.33 × 10^−1 | 1.36 × 10^1 | 1.55 × 10^−5 | 1.15 × 10^−17 | 7.93 × 10^−15
F4 | Std | 1.13 × 10^−71 | 0.00 | 7.10 × 10^−2 | 2.44 × 10^−1 | 2.77 × 10^−1 | 9.40 | 2.88 × 10^−5 | 1.69 × 10^−17 | 6.16 × 10^−15
F5 | Mean | 2.85 × 10^1 | 2.80 × 10^1 | 4.06 × 10^1 | 4.65 × 10^1 | 3.08 × 10^2 | 5.54 × 10^2 | 2.79 × 10^1 | 2.65 × 10^1 | 2.25 × 10^1
F5 | Std | 2.70 × 10^−1 | 6.13 × 10^−1 | 2.77 × 10^1 | 2.31 × 10^1 | 5.98 × 10^2 | 2.18 × 10^3 | 3.06 × 10^−1 | 7.80 × 10^−1 | 2.79 × 10^−1
F6 | Mean | 1.74 × 10^−1 | 3.44 | 3.63 × 10^−23 | 4.92 × 10^−12 | 1.66 × 10^−1 | 4.29 | 4.10 | 4.13 × 10^−1 | 1.01 × 10^−5
F6 | Std | 6.39 × 10^−2 | 5.26 × 10^−1 | 1.84 × 10^−22 | 1.84 × 10^−12 | 4.89 × 10^−2 | 3.92 × 10^−1 | 2.73 × 10^−1 | 2.57 × 10^−1 | 2.39 × 10^−6
F7 | Mean | 6.76 × 10^−5 | 7.92 × 10^−4 | 9.32 × 10^−3 | 2.59 × 10^−2 | 1.22 × 10^−2 | 2.35 × 10^−2 | 1.47 × 10^−3 | 4.72 × 10^−4 | 8.64 × 10^−4
F7 | Std | 4.35 × 10^−4 | 4.84 × 10^−5 | 4.00 × 10^−3 | 4.70 × 10^−3 | 5.04 × 10^−3 | 2.54 × 10^−2 | 1.03 × 10^−3 | 3.15 × 10^−4 | 3.80 × 10^−4
Multimodal functions
F8 | Mean | −1.02 × 10^4 | −2.64 × 10^5 | −6.68 × 10^3 | −1.25 × 10^4 | −8.18 × 10^3 | −3.97 × 10^3 | −4.08 × 10^3 | −6.09 × 10^3 | −9.62 × 10^3
F8 | Std | 1.08 × 10^3 | 6.38 × 10^5 | 5.86 × 10^2 | 8.39 × 10^1 | 7.31 × 10^2 | 2.69 × 10^2 | 2.26 × 10^2 | 7.63 × 10^2 | 1.29 × 10^3
F9 | Mean | 0.00 | 0.00 | 4.67 × 10^1 | 6.20 × 10^1 | 1.05 × 10^2 | 1.29 × 10^1 | 0.00 | 1.78 × 10^−1 | 1.39 × 10^1
F9 | Std | 0.00 | 0.00 | 1.45 × 10^1 | 5.96 | 3.31 × 10^1 | 1.98 × 10^1 | 0.00 | 8.08 × 10^−1 | 6.96
F10 | Mean | 1.01 × 10^−15 | 1.72 × 10^−15 | 6.36 × 10^−1 | 6.01 × 10^−7 | 7.92 × 10^−1 | 1.11 × 10^1 | 1.09 × 10^−1 | 1.26 × 10^−14 | 9.06 × 10^−15
F10 | Std | 1.53 × 10^−15 | 6.49 × 10^−16 | 7.67 × 10^−1 | 1.10 × 10^−7 | 7.47 × 10^−1 | 9.72 | 5.12 × 10^−1 | 2.97 × 10^−15 | 2.31 × 10^−15
F11 | Mean | 0.00 | 0.00 | 1.66 × 10^−2 | 7.82 × 10^−11 | 4.49 × 10^−1 | 1.72 × 10^−1 | 4.67 × 10^−11 | 4.53 × 10^−4 | 1.89 × 10^−3
F11 | Std | 0.00 | 0.00 | 2.13 × 10^−2 | 1.51 × 10^−10 | 8.69 × 10^−2 | 2.35 × 10^−1 | 2.56 × 10^−10 | 2.48 × 10^−3 | 4.56 × 10^−3
F12 | Mean | 7.73 × 10^−4 | 3.43 × 10^−1 | 4.49 × 10^−2 | 6.66 × 10^−13 | 8.71 × 10^−1 | 1.17 | 4.48 × 10^−1 | 3.01 × 10^−2 | 7.46 × 10^−7
F12 | Std | 3.65 × 10^−4 | 1.18 × 10^−1 | 7.04 × 10^−2 | 3.91 × 10^−13 | 8.25 × 10^−1 | 1.89 | 9.21 × 10^−2 | 2.30 × 10^−2 | 2.44 × 10^−7
F13 | Mean | 2.00 × 10^−2 | 2.49 | 2.07 × 10^−2 | 3.01 × 10^−12 | 3.78 × 10^−2 | 3.20 | 2.28 | 3.05 × 10^−1 | 1.63 × 10^−2
F13 | Std | 8.67 × 10^−3 | 3.23 × 10^−1 | 3.65 × 10^−2 | 1.72 × 10^−12 | 1.87 × 10^−2 | 1.52 | 1.47 × 10^−1 | 2.03 × 10^−1 | 3.70 × 10^−2
Fixed-dimension multimodal functions
F14 | Mean | 9.98 × 10^−1 | 1.14 | 2.58 | 9.98 × 10^−1 | 9.98 × 10^−1 | 1.33 | 1.20 | 3.22 | 9.98 × 10^−1
F14 | Std | 1.61 × 10^−12 | 2.87 × 10^−1 | 2.01 | 0.00 | 6.12 × 10^−12 | 7.52 × 10^−1 | 6.05 × 10^−1 | 3.54 | 4.12 × 10^−17
F15 | Mean | 3.08 × 10^−4 | 3.53 × 10^−4 | 3.84 × 10^−4 | 6.49 × 10^−4 | 3.33 × 10^−3 | 9.72 × 10^−4 | 7.45 × 10^−4 | 4.35 × 10^−3 | 3.34 × 10^−4
F15 | Std | 4.16 × 10^−8 | 4.13 × 10^−5 | 2.95 × 10^−4 | 9.37 × 10^−5 | 6.80 × 10^−3 | 4.47 × 10^−4 | 1.14 × 10^−4 | 8.14 × 10^−3 | 1.43 × 10^−4
F16 | Mean | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
F16 | Std | 5.34 × 10^−12 | 2.10 × 10^−5 | 6.78 × 10^−16 | 6.78 × 10^−16 | 4.39 × 10^−8 | 1.50 × 10^−5 | 7.18 × 10^−7 | 2.78 × 10^−9 | 6.78 × 10^−16
F17 | Mean | 0.398 | 0.399 | 0.398 | 0.398 | 0.398 | 0.399 | 0.398 | 0.398 | 0.398
F17 | Std | 1.49 × 10^−10 | 9.22 × 10^−4 | 0.00 | 0.00 | 7.35 × 10^−8 | 6.37 × 10^−4 | 2.89 × 10^−4 | 3.25 × 10^−7 | 0.00
F18 | Mean | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00
F18 | Std | 2.86 × 10^−10 | 1.13 × 10^−5 | 1.50 × 10^−15 | 1.90 × 10^−15 | 6.23 × 10^−7 | 1.51 × 10^−5 | 4.30 × 10^−6 | 3.90 × 10^−6 | 7.14 × 10^−16
F19 | Mean | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86
F19 | Std | 1.53 × 10^−9 | 2.17 × 10^−3 | 2.71 × 10^−15 | 2.71 × 10^−15 | 2.67 × 10^−7 | 2.89 × 10^−3 | 1.76 × 10^−3 | 1.30 × 10^−3 | 2.71 × 10^−15
F20 | Mean | −3.32 | −3.12 | −3.29 | −3.31 | −3.29 | −2.96 | −3.16 | −3.29 | −3.31
F20 | Std | 3.02 × 10^−2 | 7.03 × 10^−2 | 5.54 × 10^−2 | 1.38 × 10^−15 | 5.56 × 10^−2 | 3.10 × 10^−1 | 4.12 × 10^−2 | 5.20 × 10^−2 | 3.02 × 10^−2
F21 | Mean | −10.2 | −4.89 | −6.15 | −9.96 | −7.28 | −4.28 | −9.30 | −9.31 | −9.82
F21 | Std | 1.93 | 8.91 × 10^−2 | 3.44 | 1.68 × 10^−3 | 2.79 | 2.16 | 1.07 × 10^−1 | 1.92 | 1.28
F22 | Mean | −10.4 | −4.91 | −9.06 | −10.0 | −8.58 | −4.05 | −10.2 | −10.2 | −10.4
F22 | Std | 1.35 | 1.15 × 10^−1 | 2.77 | 5.07 × 10^−8 | 2.90 | 2.26 | 1.44 × 10^−1 | 9.63 × 10^−1 | 7.32 × 10^−9
F23 | Mean | −10.5 | −4.96 | −8.11 | −10.5 | −9.56 | −4.97 | −10.4 | −10.4 | −10.5
F23 | Std | 2.59 × 10^−5 | 1.02 × 10^−1 | 3.55 | 1.49 × 10^−13 | 2.25 | 1.70 | 1.04 × 10^−1 | 9.79 × 10^−1 | 1.09 × 10^−14
Table 5. Statistical results of the proposed MVQIHGSO and other state-of-the-art algorithms from the Friedman test.
Algorithms | Friedman Rank | Final Rank
MVQIHGSO | 2.543 | 1
HGSO | 4.608 | 3
PSO | 5.652 | 7
DE | 4.608 | 4
MVO | 6.456 | 8
SCA | 7.804 | 9
OBSCA | 5.260 | 6
GWO | 4.695 | 5
IGWO | 3.369 | 2
Table 6. Statistical results of the proposed MVQIHGSO and other state-of-the-art algorithms from the Wilcoxon test (significance level 0.05).
Compared Algorithms | Unimodal Functions | Multimodal Functions | Fixed-Dimension Functions
MVQIHGSO vs. HGSO | 2.5940 × 10^−8 | 0.1012 | 7.4567 × 10^−4
MVQIHGSO vs. PSO | 1.4838 × 10^−12 | 4.1333 × 10^−10 | 0.8573
MVQIHGSO vs. DE | 6.6342 × 10^−19 | 2.5731 × 10^−4 | 0.0916
MVQIHGSO vs. MVO | 1.7618 × 10^−32 | 5.2700 × 10^−31 | 0.0273
MVQIHGSO vs. SCA | 9.6394 × 10^−27 | 6.4376 × 10^−25 | 3.2725 × 10^−5
MVQIHGSO vs. OBSCA | 1.0240 × 10^−9 | 4.1657 × 10^−4 | 0.0227
MVQIHGSO vs. GWO | 5.5859 × 10^−7 | 0.0072 | 0.0654
MVQIHGSO vs. IGWO | 7.5243 × 10^−5 | 0.0011 | 0.0402
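For illustration, the sketch below shows how Friedman and Wilcoxon signed-rank comparisons of the kind reported in Tables 5 and 6 can be computed with SciPy; the result arrays are synthetic placeholders, not the values used in this study.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical best-fitness results: each array holds one algorithm's result per benchmark function.
rng = np.random.default_rng(42)
results = {
    "MVQIHGSO": rng.normal(0.10, 0.02, 23),
    "HGSO": rng.normal(0.15, 0.03, 23),
    "PSO": rng.normal(0.20, 0.05, 23),
}

# Friedman test: do the algorithms' rank distributions differ overall?
stat, p_friedman = friedmanchisquare(*results.values())
print(f"Friedman statistic = {stat:.3f}, p = {p_friedman:.4f}")

# Pairwise Wilcoxon signed-rank tests against the proposed algorithm.
for name in ("HGSO", "PSO"):
    _, p = wilcoxon(results["MVQIHGSO"], results[name])
    print(f"MVQIHGSO vs. {name}: p = {p:.4f}")
```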
Table 7. Characteristic information of the XLD-XJB cascade reservoirs.
Characteristics | Xiluodu | Xiangjiaba | Units
Completion date | 2013 | 2012 | -
Watershed area | 0.45 | 0.45 | million km²
Dead water level | 540 | 370 | m
Flood control limited water level | 560 | 370 | m
Normal water level | 600 | 380 | m
Regulated storage | 6.46 | 0.903 | billion m³
Minimum release | 1200 | 1200 | m³/s
Minimum output | 1000 | 1000 | MW
Installed capacity | 12,600 | 6000 | MW
Efficiency coefficient | 8.8 | 8.8 | -
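The efficiency coefficient in Table 7 is commonly used as the output coefficient A in the simplified hydropower relation N = A·Q·H, where N is in kW when Q is the discharge in m³/s and H is the net head in m; the minimal sketch below shows that calculation under this assumption, with purely illustrative discharge and head values.

```python
def hydropower_output_mw(coefficient: float, discharge_m3s: float, head_m: float) -> float:
    """Simplified hydropower output N = A * Q * H, converted from kW to MW."""
    return coefficient * discharge_m3s * head_m / 1000.0

# Illustrative example only: output coefficient 8.8, discharge 4000 m^3/s, net head 200 m.
print(hydropower_output_mw(8.8, 4000.0, 200.0))  # about 7040 MW
```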
Table 8. Validation index values of the different operation rule derivation models on the validation set.
Reservoirs | Models | R² | RMSE | MAE | MAPE
XLD | MVQIHGSO-SVM | 0.998 | 0.340 | 0.126 | 0.021%
XLD | HGSO-SVM | 0.997 | 0.411 | 0.151 | 0.025%
XLD | SCA-SVM | 0.997 | 0.405 | 0.149 | 0.025%
XLD | PSO-SVM | 0.998 | 0.405 | 0.150 | 0.025%
XLD | Grid-SVM | 0.998 | 0.357 | 0.151 | 0.025%
XJB | MVQIHGSO-SVM | 0.998 | 0.164 | 0.075 | 0.019%
XJB | HGSO-SVM | 0.997 | 0.192 | 0.098 | 0.026%
XJB | SCA-SVM | 0.997 | 0.189 | 0.092 | 0.025%
XJB | PSO-SVM | 0.996 | 0.192 | 0.096 | 0.025%
XJB | Grid-SVM | 0.997 | 0.187 | 0.086 | 0.023%
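A minimal sketch of how the four validation indices in Table 8 (R², RMSE, MAE, and MAPE) are conventionally computed from observed and predicted water levels is given below; the arrays are illustrative only and do not reproduce the paper's data.

```python
import numpy as np

def validation_indices(observed, predicted):
    """Return R^2, RMSE, MAE, and MAPE (%) for observed vs. predicted water levels."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    residuals = obs - pred
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(residuals ** 2))
    mae = np.mean(np.abs(residuals))
    mape = 100.0 * np.mean(np.abs(residuals / obs))
    return r2, rmse, mae, mape

# Example with made-up water levels (m).
obs = np.array([598.2, 597.5, 596.9, 596.1])
pred = np.array([598.0, 597.7, 596.8, 596.3])
print(validation_indices(obs, pred))
```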
Table 9. Total hydropower generation based on observed and predicted data as well as optimal operation.
Reservoirs | Observed | MVQIHGSO-SVM | HGSO-SVM | SCA-SVM | PSO-SVM | Grid-SVM | Optimization (hydropower generation in TWh)
XLD | 129.82 | 131.30 | 131.03 | 131.03 | 131.03 | 131.12 | 131.41
XJB | 64.03 | 64.78 | 64.56 | 64.56 | 64.56 | 64.61 | 65.00
Total | 193.85 | 196.08 | 195.59 | 195.58 | 195.58 | 195.73 | 196.41