Article

Pipeline Scour Rates Prediction-Based Model Utilizing a Multilayer Perceptron-Colliding Body Algorithm

1 Department of Water Engineering and Hydraulic Structures, Faculty of Civil Engineering, Semnan University, Semnan, Iran
2 Institute of Energy Infrastructure (IEI), Universiti Tenaga Nasional (UNITEN), Selangor 43000, Malaysia
3 Department of Civil Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Selangor, Malaysia
4 Institute of Sustainable Energy (ISE), Universiti Tenaga Nasional (UNITEN), Selangor 43000, Malaysia
5 Department of Civil Engineering, College of Engineering, Universiti Tenaga Nasional (UNITEN), Selangor 43000, Malaysia
6 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
7 Agricultural Department, Payam Noor University, Tehran, Iran
8 Department of Civil Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
9 National Water Center, United Arab Emirates University, Al Ain, UAE
* Authors to whom correspondence should be addressed.
Water 2020, 12(3), 902; https://doi.org/10.3390/w12030902
Submission received: 27 November 2019 / Revised: 17 February 2020 / Accepted: 4 March 2020 / Published: 23 March 2020
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Abstract

In this research, advanced multilayer perceptron (MLP) models are utilized to predict the free span expansion (scour) rates that develop around a pipeline (PL) because of waves. The MLP model was structured by integrating it with three optimization algorithms: particle swarm optimization (PSO), the whale algorithm (WA), and colliding bodies optimization (CBO). The sediment size, wave characteristics, and PL geometry were used as the inputs of the applied models, while the scour rate along the pipeline, the vertical scour rate, and the scour rates at the right and left sides of the pipeline were predicted as the outputs. The results of the three suggested models, MLP-CBO, MLP-WA, and MLP-PSO, for both the training and testing levels were assessed based on different statistical indices. The results indicated that the MLP-CBO model performed better than the MLP-PSO, MLP-WA, regression, and empirical models, so the MLP-CBO can be used as a powerful soft-computing model for such predictions.

1. Introduction

Background

Protecting hydraulic structures against scouring is a vital issue in water engineering. More specifically, the scour depth is the hydraulic parameter of greatest interest in the scouring phenomenon [1]. The maximum scour depth results from the interaction of the flow, the bed materials, and the pipeline (PL). The onset of scouring underneath a PL leads to free spans (FS) [2]. PLs are among the most essential structures used to convey oil, gas, and drinking water, and landslides and bed scouring below PLs threaten their integrity [2]. Numerous experimental studies have been performed to determine the critical conditions for the onset of free span development [3]. Predicting the three-dimensional development of free spans is required to ensure PL stability. Figure 1 illustrates the scouring behavior that can lead to free spans; the diagram shows the essential parameters to keep in mind when designing a pipeline that must survive storms, especially the FS length and growth rate. When the waves travel in opposite directions, the scour hole propagates along the PL (Figure 1). For submarine pipelines, several experiments have been carried out to determine the depth and time scale of the local scour. Notwithstanding the results of these studies, a precise prediction of the scour rate in all directions below the pipeline is still required [2]. Empirical methods are restricted to the range of the datasets from which they were derived; consequently, the empirical models do not generalize well when estimating the maximum scour below PLs. In recent decades, soft computing models such as genetic programming (GP) [4,5,6,7], artificial neural networks (ANN) [8,9,10,11], the adaptive neuro-fuzzy inference system (ANFIS) [12,13,14,15], and regression trees [16,17] have been successfully utilized to forecast the scour depth below PLs. It should be noted that the scour rate depends on the flow direction, flow characteristics, PL diameter, and sediment properties. Additionally, structural damage related to human activities occurs at the free spans [1], and free spans drive the natural self-burial of PLs. One of the most significant issues in the design of PLs is therefore assessing the length of the free spans during storms.
In the case of empirical models, no comprehensive study on the effects of the different parameters on scouring has been reported [2].
An important question for hydraulic engineers is whether hybridizing a soft computing model with an evolutionary algorithm can improve predictive accuracy in a field where such hybrids have not yet been examined. Different optimization algorithms such as the genetic algorithm (GA), firefly algorithm (FFA), shark algorithm (SA), and bat algorithm (BA) have been utilized to train soft computing models [19,20,21,22,23]. In recent years, multilayer perceptron models have been applied to forecast the scour depth around hydraulic structures. Combining multilayer perceptron (MLP) models with optimization algorithms has achieved successful performance in various fields of science, such as scour forecasting below PLs, modeling of hydraulic jumps, spillway design, and stable open channel design [24,25,26]. In this study, an MLP model trained with a new optimization algorithm, colliding bodies optimization (CBO), was used to estimate the three-dimensional scour along PLs. The vertical scour rate, the scour rate along the PL, and the scour rates to the left and right of the PL were computed as the outputs.
The physical concepts of energy and momentum are used to define the CBO [27]. The CBO has been widely utilized in mathematical sciences, structural optimization problems, benchmark functions, power engineering, and image processing [28,29,30]. The algorithm is simple and does not rely on any internal parameters [30]. The main enhancement presented in this research is the application of the CBO at the neuron (weight) level of the MLP; as a result, the MLP itself does not need to update the weight values during the training level, since the optimization algorithm performs this task.

2. Material and Methods

2.1. Multilayer Perceptron (MLP)

The MLP is a well-established class of ANN models capable of approximating nonlinear relationships through a layered structure [31,32]. In ANN models, information flows through three kinds of layers: input, middle (hidden), and output. Figure 2 shows an MLP structure with a single middle layer. Two functions are performed at every node of the MLP: summation and activation. The weighted sum of the input values and the bias is calculated with the summation function given in Equation (1):
$S_j = \sum_{i=1}^{n} \omega_{ij} I_i + \beta_j$
where $n$ is the total number of inputs, $I_i$ is the ith input value, $\beta_j$ is the bias of node $j$, and $\omega_{ij}$ is the connection weight between input $i$ and node $j$ in the next layer. The activation function is then applied to the result of Equation (1). Different activation functions can be used in the MLP model; according to previous research, the sigmoid function is used most often. It is computed as in Equation (2):
$f_j = \dfrac{1}{1 + e^{-S_j}}$
Therefore, the final output of neuron $j$ is computed using Equation (3):
$y_j = f_j\!\left( \sum_{i=1}^{n} \omega_{ij} I_i + \beta_j \right)$
After the final structure of the MLP has been constructed, a training process is used to tune the weight vectors of the MLP model. These weighting coefficients must be updated so that the estimates improve and the total error of the MLP model is minimized. Training is a challenging stage that strongly affects the ability of the MLP to solve complex problems (Figure 2).
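To make the summation and activation steps above concrete, the following minimal sketch evaluates a single-hidden-layer MLP of the kind shown in Figure 2. It is an illustrative Python reconstruction, not the authors' MATLAB/R implementation; the layer sizes and variable names are assumptions.

```python
import numpy as np

def sigmoid(s):
    # Equation (2): f = 1 / (1 + exp(-s))
    return 1.0 / (1.0 + np.exp(-s))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a single-hidden-layer MLP (Figure 2).

    x        : (n_inputs,) input vector
    w_hidden : (n_hidden, n_inputs) hidden-layer connection weights (omega_ij)
    b_hidden : (n_hidden,) hidden-layer biases (beta_j)
    w_out    : (n_outputs, n_hidden) output-layer weights
    b_out    : (n_outputs,) output-layer biases
    """
    s_hidden = w_hidden @ x + b_hidden   # Equation (1): weighted sum plus bias
    f_hidden = sigmoid(s_hidden)         # Equation (2): sigmoid activation
    # Output neurons (Equation (3)); a linear output is used here, and a sigmoid
    # could be applied instead if the targets are scaled to [0, 1].
    return w_out @ f_hidden + b_out

# Example: 4 inputs (e.g., theta_w, KC, e/B, sin(alpha)), 6 hidden neurons, 1 output
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 4)
y = mlp_forward(x,
                rng.uniform(-1, 1, (6, 4)), rng.uniform(-1, 1, 6),
                rng.uniform(-1, 1, (1, 6)), rng.uniform(-1, 1, 1))
```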

2.2. Colliding Body Optimization

The colliding bodies optimization (CBO) algorithm uses the physical laws of energy and momentum that govern collisions between solid bodies to solve nonlinear problems. A set of randomly generated bodies forms the initial population of the CBO. In each iteration of the algorithm, all bodies are sorted according to their fitness values. One object then collides with another object, and together they attempt to reach a minimum energy level. When a collision happens in an isolated system, the total momentum of the system of objects is conserved. The colliding bodies are regarded as the population, and the mass of each colliding body is computed from the following equation:
$M(i) = \dfrac{1/OF_i}{\sum_{k=1}^{n} 1/OF_k}$
where $OF_i$ is the objective function value of the ith body and $M(i)$ is its mass. Like other population-based optimizers, the algorithm works with multiple agents; each solution is regarded as a colliding body with a mass given by Equation (4), so a better solution has a larger mass than a worse one.
The fitness values of the colliding bodies (CBs) are sorted in ascending order, and the CBs are divided into two classes: stationary and moving. The stationary CBs are the better solutions, and the moving CBs move toward the stationary group. Before the collision, the velocity of the stationary CBs is zero, while the velocity of each moving CB is represented by its change in position (the time step follows the iteration count). The velocity of the moving bodies before the collision is computed as follows:
$v_i = x_i - x_{i-\frac{n}{2}}, \quad i = \frac{n}{2}+1, \ldots, n$
where $v_i$ is the velocity of the ith CB in the moving group, $x_i$ is the position of the ith CB in the moving group, and $x_{i-\frac{n}{2}}$ is the position of its paired CB in the stationary group. The initial positions are computed as follows:
$x_i^{0} = x_{\min} + \mathrm{rand}\cdot(x_{\max} - x_{\min}), \quad i = 1, 2, \ldots, n$
where $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of the decision variable, rand is a random number, and $x_i^{0}$ is the initial position of the ith CB.
The moving CBs collide with the stationary CBs to improve their positions. After the collision, the new velocity of each stationary CB is computed based on the following equation:
$v'_i = \dfrac{\left(m_{i+\frac{n}{2}} + \varepsilon\, m_{i+\frac{n}{2}}\right) v_{i+\frac{n}{2}}}{m_i + m_{i+\frac{n}{2}}}, \quad i = 1, \ldots, \frac{n}{2}$
The post-collision velocity of the CBs in the moving group is as follows:
$v'_i = \dfrac{\left(m_i - \varepsilon\, m_{i-\frac{n}{2}}\right) v_i}{m_i + m_{i-\frac{n}{2}}}, \quad i = \frac{n}{2}+1, \frac{n}{2}+2, \ldots, n$
where m is the mass of each colliding body and ε is the coefficient of restitution of the two colliding bodies:
$\varepsilon = 1 - \dfrac{iter}{iter_{\max}}$
where $iter$ is the current iteration number and $iter_{\max}$ is the total number of iterations.
The new position of each moving CB is as follows:
$x_i^{new} = x_{i-\frac{n}{2}} + \mathrm{rand} \circ v'_i, \quad i = \frac{n}{2}+1, \ldots, n$
where $x_i^{new}$ is the new position of the ith moving CB after the collision, $x_{i-\frac{n}{2}}$ is the old position of its stationary pair, $v'_i$ is the velocity of the ith moving CB after the collision, and rand is a random vector; the sign ($\circ$) denotes element-by-element multiplication.
The new position of each stationary CB is as follows:
$x_i^{new} = x_i + \mathrm{rand} \circ v'_i, \quad i = 1, \ldots, \frac{n}{2}$
where $x_i^{new}$ is the new position of the ith stationary CB after the collision, $x_i$ is its old position, and $v'_i$ is its velocity after the collision.
The CBO is thus designed around the physical laws governing the collision of two bodies; when objects collide, the system evolves in accordance with the principle of conservation of momentum. In most collisions of two objects, one object loses momentum and hence slows down while the other gains momentum and speeds up, as expressed in the following formula and illustrated in Figure 3:
$m_1 u_1 + m_2 u_2 = m_1 v_1 + m_2 v_2$
where $m_1$ and $m_2$ are the masses of the first and second objects, $u_1$ and $u_2$ are their initial velocities, and $v_1$ and $v_2$ are their final velocities.
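As a minimal illustration of the update rules above, the sketch below performs one CBO iteration on a generic objective function. It is a simplified interpretation for a minimization problem with positive objective values (such as the MSE), not the authors' implementation.

```python
import numpy as np

def cbo_step(positions, objective, iteration, max_iter):
    """One iteration of colliding bodies optimization (simplified sketch).

    positions : (n, dim) array of CB positions, with n even
    objective : callable returning a positive scalar to be minimized
    """
    n, half = positions.shape[0], positions.shape[0] // 2
    fitness = np.array([objective(x) for x in positions])

    order = np.argsort(fitness)                  # ascending: best (stationary) CBs first
    positions, fitness = positions[order], fitness[order]

    mass = (1.0 / fitness) / np.sum(1.0 / fitness)   # Equation (4)
    eps = 1.0 - iteration / max_iter                 # Equation (9): coefficient of restitution

    v = np.zeros_like(positions)
    v[half:] = positions[half:] - positions[:half]   # Equation (5): pre-collision velocities

    v_new = np.zeros_like(positions)
    for i in range(half):                            # Equation (7): stationary CBs
        v_new[i] = (mass[i + half] + eps * mass[i + half]) * v[i + half] / (mass[i] + mass[i + half])
    for i in range(half, n):                         # Equation (8): moving CBs
        v_new[i] = (mass[i] - eps * mass[i - half]) * v[i] / (mass[i] + mass[i - half])

    rand = np.random.uniform(0, 1, positions.shape)
    new_pos = np.empty_like(positions)
    new_pos[:half] = positions[:half] + rand[:half] * v_new[:half]   # stationary CBs
    new_pos[half:] = positions[:half] + rand[half:] * v_new[half:]   # moving CBs go around their pairs
    return new_pos
```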

2.3. Particle Swarm Optimization (PSO)

The PSO algorithm was inspired by the social behavior of swarming animals, such as bird flocks, searching a space. Each particle is described by two components: position and velocity. The position vector holds the values of the decision variables, and the velocity vector is used to update the position vector:
$X_i(t+1) = X_i(t) + V_i(t+1)$
$V_i(t+1) = w\,V_i(t) + c_1 r_1 \left(P_i(t) - X_i(t)\right) + c_2 r_2 \left(G(t) - X_i(t)\right)$
where $X_i(t+1)$ is the position of the ith particle at iteration $t+1$, $V_i(t)$ is the velocity of the ith particle at iteration $t$, $w$ is the inertia weight, $c_1$ is the individual (cognitive) coefficient, $c_2$ is the social coefficient, $r_1$ and $r_2$ are random numbers, $P_i(t)$ is the best solution found by the ith particle up to iteration $t$, and $G(t)$ is the best solution found by all particles up to iteration $t$ (Figure 4).
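The following short sketch shows one PSO update of Equations (13) and (14); the default coefficient values follow the levels with the largest S/N ratios in Table 2a, and the code is an illustrative reconstruction rather than the authors' implementation.

```python
import numpy as np

def pso_step(X, V, P_best, g_best, w=0.8, c1=1.8, c2=1.8):
    """One PSO velocity/position update.

    X, V     : (n_particles, dim) positions and velocities
    P_best   : (n_particles, dim) best position found so far by each particle
    g_best   : (dim,) best position found so far by the whole swarm
    w, c1, c2: inertia weight and individual/social coefficients (Table 2a)
    """
    r1 = np.random.uniform(0, 1, X.shape)
    r2 = np.random.uniform(0, 1, X.shape)
    V_new = w * V + c1 * r1 * (P_best - X) + c2 * r2 * (g_best - X)   # Equation (14)
    X_new = X + V_new                                                  # Equation (13)
    return X_new, V_new
```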

2.4. Whale Algorithm (WA)

The WA was inspired by the bubble-net foraging behavior of whales, which use bubbles to trap prey. In the algorithm, each whale (candidate solution) contains the n decision variables, and its position is updated based on the following equation:
$Y(t+1) = Y^{*}(t) - A \circ D, \qquad D = \left| C \circ Y^{*}(t) - Y(t) \right|$
where $Y(t)$ is the solution (position) vector of a whale at iteration $t$, $Y^{*}(t)$ is the possible location of the prey (the best solution obtained so far) at iteration $t$, $|\cdot|$ denotes the absolute value, and the sign ($\circ$) denotes element-by-element multiplication. $A$ and $C$ are coefficient vectors computed for each dimension:
$A = 2a \circ R_1 - a$
$C = 2 \circ R_2$
where $R_1$ and $R_2$ are random vectors and $a$ is a coefficient that decreases linearly from 2 to 0.
The whale algorithm uses the above equations to relocate the whales around a given prey. Whales also attack prey with a spiral movement; the shape of this movement and its effect on the motion of the whales are shown in Figure 5. The spiral movement is modeled by the following equation:
$Y(t+1) = D' e^{bt} \cos(2\pi t) + Y^{*}(t)$
Moreover, the whales encircle the prey during the hunt while also following the spiral path. To simulate this combined behavior mathematically, the following equation is used:
$Y(t+1) = \begin{cases} Y^{*}(t) - A \circ D, & r_3 < 0.5 \\ D' e^{bt} \cos(2\pi t) + Y^{*}(t), & r_3 \ge 0.5 \end{cases}$
where $r_3$ is a random number, $D = \left| C \circ Y^{*}(t) - Y(t) \right|$, $D' = \left| Y^{*}(t) - Y(t) \right|$ is the distance of the ith whale to the prey (the best solution obtained so far), and $b$ is a constant that determines the shape of the spiral movement (Figure 5).
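The sketch below gathers the whale algorithm moves into a single position update. It is a simplified version of the standard algorithm: the spiral parameter l drawn from [−1, 1] and the constant b = 1 are assumptions, and the code is not the authors' implementation.

```python
import numpy as np

def whale_step(Y, Y_best, t, max_iter, b=1.0):
    """One position update of the whale algorithm (simplified sketch).

    Y      : (n_whales, dim) current whale positions
    Y_best : (dim,) best solution (prey) found so far
    b      : spiral shape constant (assumed value)
    """
    a = 2.0 * (1.0 - t / max_iter)              # decreases linearly from 2 to 0
    Y_new = np.empty_like(Y)
    for i, y in enumerate(Y):
        R1 = np.random.uniform(0, 1, y.shape)
        R2 = np.random.uniform(0, 1, y.shape)
        A = 2.0 * a * R1 - a                    # coefficient vector A
        C = 2.0 * R2                            # coefficient vector C
        if np.random.uniform() < 0.5:           # shrinking encircling move
            D = np.abs(C * Y_best - y)
            Y_new[i] = Y_best - A * D
        else:                                   # spiral move toward the prey
            D_prime = np.abs(Y_best - y)
            l = np.random.uniform(-1, 1)        # assumed spiral parameter
            Y_new[i] = D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + Y_best
    return Y_new
```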

2.5. Optimization Algorithms for Training MLPs

To combine the MLP with the optimization algorithms, two technical aspects must be determined: how to encode the solutions and how to define the objective function. In the applied optimization algorithms, every particle, whale, and colliding body is encoded as a one-dimensional vector of random numbers in [−1, 1]; all models were developed in MATLAB and R (the codes are not available). Each solution represents a candidate MLP. Figure 6 shows the details of the experimental model. As shown in Figure 7, the encoded agent contains a set of bias terms and connection weights, and the length of this vector equals the total number of weights and biases in the MLP. To calculate the objective function, each agent (particle, whale, or colliding body) is passed to the MLP as its connection weights and bias terms; the MLP is then evaluated on the training dataset, and the resulting error gives the fitness of the agent (a minimal sketch of this encoding and evaluation is given after the performance indices below). The following objective function is used:
$MSE = \dfrac{1}{n} \sum_{i=1}^{n} \left( z_i - \hat{z}_i \right)^2$
where MSE is the mean squared error, $n$ is the number of data points, $z_i$ is the observed value, and $\hat{z}_i$ is the estimated value. The following indices are also used to evaluate the performance of the models:
$PBIAS = \dfrac{\sum_{i=1}^{N} \left( Y_i^{obs} - Y_i^{sim} \right)}{\sum_{i=1}^{N} Y_i^{obs}}$
$NSE = 1 - \dfrac{\sum_{i=1}^{n} \left( Y_i^{obs} - Y_i^{sim} \right)^2}{\sum_{i=1}^{n} \left( Y_i^{obs} - Y_{mean} \right)^2}$
$MAE = \dfrac{1}{n} \sum_{i=1}^{n} \left| Y_i^{obs} - Y_i^{sim} \right|$
where MAE is the mean absolute error, NSE is the Nash–Sutcliffe efficiency, $Y_i^{obs}$ is the observed data, $Y_{mean}$ is the average of the observed data, $Y_i^{sim}$ is the simulated data, and PBIAS is the percent bias.
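The sketch below illustrates, under assumed layer sizes, how a flat agent vector can be decoded into the MLP weights and biases of Figure 7 and scored with the MSE objective, together with the PBIAS, NSE, and MAE indices; it is an illustrative reconstruction, not the authors' MATLAB/R code.

```python
import numpy as np

def decode_agent(agent, n_in=4, n_hidden=6, n_out=1):
    """Split a flat vector of values in [-1, 1] into MLP weights and biases (Figure 7).
    The layer sizes are assumptions for illustration."""
    i = 0
    w_h = agent[i:i + n_hidden * n_in].reshape(n_hidden, n_in); i += n_hidden * n_in
    b_h = agent[i:i + n_hidden]; i += n_hidden
    w_o = agent[i:i + n_out * n_hidden].reshape(n_out, n_hidden); i += n_out * n_hidden
    b_o = agent[i:i + n_out]
    return w_h, b_h, w_o, b_o

def mse(obs, sim):
    return np.mean((obs - sim) ** 2)                      # Equation (20)

def pbias(obs, sim):
    return np.sum(obs - sim) / np.sum(obs)                # Equation (21)

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)   # Equation (22)

def mae(obs, sim):
    return np.mean(np.abs(obs - sim))                     # Equation (23)

def fitness(agent, X_train, y_train, forward):
    """Objective value returned to the PSO/WA/CBO: MSE of the candidate MLP on the
    training data. `forward` is a forward-pass function such as the sketch in Section 2.1."""
    w_h, b_h, w_o, b_o = decode_agent(agent)
    sim = np.array([forward(x, w_h, b_h, w_o, b_o) for x in X_train]).ravel()
    return mse(np.asarray(y_train), sim)
```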

3. Datasets

In recent years, many studies have analyzed pipeline (PL) scour with the aim of understanding the lateral propagation of scour holes along PLs. Laboratory studies have indicated that the scour rates depend on the wave properties, the physical properties of the bed material (such as the soil type (sandy or clayey), its erodibility, and the grain size), and the PL geometry (PL arrangement). The following relations define the relationship between scour propagation and the effective parameters [34]:
$V_H = f\!\left(\theta_w, KC, \tfrac{e}{B}, \sin\alpha\right), \quad V_R = f\!\left(\theta_w, KC, \tfrac{e}{B}, \sin\alpha\right), \quad V_L = f\!\left(\theta_w, KC, \tfrac{e}{B}, \sin\alpha\right), \quad V_V = f\!\left(\theta_w, KC, \tfrac{e}{B}, \sin\alpha\right)$
where $V_H$ is the scour rate (SR) along the PL, $V_R$ is the SR to the right of the PL, $V_L$ is the SR to the left of the PL, $V_V$ is the vertical SR, $e$ is the embedment depth, $B$ is the pipe diameter, $KC$ is the Keulegan–Carpenter number, $\theta_w$ is the Shields parameter, and $\alpha$ is the angle of flow incidence to the PL.
$\theta_w = \dfrac{\tau_w}{\rho g \left( \tfrac{\rho_s}{\rho} - 1 \right) d_{50}}, \qquad KC = \dfrac{U_w T}{B}, \qquad f_w = 0.023\, r^{0.52}, \qquad r = \dfrac{U_w T}{4 \pi d_{50}}$
where $\rho$ is the mass density of water, $\tau_w$ is the wave-induced bed shear stress, $d_{50}$ is the median sediment size, $B$ is the pipe diameter, $U_w$ is the maximum undisturbed orbital velocity, $T$ is the wave period, and $\rho_s$ is the mass density of the bed material. Cheng et al. (2014) [33] carried out laboratory studies in a wave flume 50 m long, 2.5 m deep, and 4 m wide. In that study, a concrete sandpit 4 m long and 0.25 m deep was constructed as the test section; the downstream end of the sandpit was 7 m from the perforated stone beach and 4 m from the wavemaker. Table 1 shows the range of parameters for this experiment, and live-bed conditions prevailed in all tests. In the present study, 38 data records were used (25 for the training level and 13 for the testing level). The inputs were H, T, Uw, and em, and the outputs were VH*, VL*, VR*, and VV*. The data published in [33] were used in this study.
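As a small illustration of how the dimensionless inputs can be obtained from the measured quantities, the sketch below computes KC and the Shields parameter; the sediment density, pipe diameter, grain size, and bed shear stress used in the example are assumed values, and only the relations given above are applied.

```python
RHO = 1000.0     # water density (kg/m^3)
RHO_S = 2650.0   # assumed quartz-sand density (kg/m^3)
G = 9.81         # gravitational acceleration (m/s^2)

def keulegan_carpenter(U_w, T, B):
    """KC = U_w * T / B."""
    return U_w * T / B

def shields_parameter(tau_w, d50, rho=RHO, rho_s=RHO_S):
    """theta_w = tau_w / (rho * g * (rho_s/rho - 1) * d50); tau_w is the wave-induced
    bed shear stress, taken here as a known input."""
    return tau_w / (rho * G * (rho_s / rho - 1.0) * d50)

# Example with U_w and T inside the input ranges of Table 1 (B, d50, tau_w are assumed)
print(keulegan_carpenter(U_w=0.35, T=1.8, B=0.05))    # ~12.6, within the KC range of Table 1
print(shields_parameter(tau_w=0.8, d50=0.0002))       # ~0.25, within the theta_w range of Table 1
```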

4. Discussion and Results

4.1. The Selection of Optimization Algorithm Parameters

The performance of optimization algorithms relies on the correct selection of their parameters. Among the available approaches, the Taguchi method has been successfully applied in a wide range of studies. The Taguchi method utilizes an orthogonal array to organize the experimental outputs and attempts to find the best level of each random parameter while minimizing the impact of noise. Here, the desirable value is represented by the signal term (S), whereas the noise term (N) represents the undesirable variation. The S/N ratio is computed by the following equation:
$S/N\ \mathrm{ratio} = -10 \log\left( (\mathrm{objective\ function})^2 \right)$
The aim is to maximize the signal-to-noise ratio. The minimum number of experiments to be conducted is based on the following equation:
$N = 1 + N_V (L - 1)$
where $N$ is the minimum number of experiments, $L$ is the number of levels, and $N_V$ is the number of parameters. The PSO algorithm is used here to illustrate the application of the Taguchi method because the PSO has more random parameters than the WA and CBO models.
The population size, individual coefficient, social coefficient, and inertia weight were regarded as random parameters, each with four levels (Table 2a). According to this relation, the minimum number of experiments is 1 + 4 × (4 − 1) = 13. The S/N ratio was computed for each level of each parameter, and the largest S/N ratio corresponds to the best value of that parameter. Similarly, the WA and CBO parameters were obtained in this article (Table 2b,c). Random parameters such as the population size influence the outputs of the algorithms; Table 2b shows the computed S/N ratio versus the population size of the CBO, for which four levels were considered, and the level with the highest S/N gives the best population size.
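A small worked sketch of these two relations follows (the objective values used for the S/N illustration are hypothetical, and the smaller-is-better S/N form averages the squared objective over the runs at a given level, a common Taguchi convention).

```python
import math

def min_experiments(n_params, n_levels):
    # N = 1 + N_V * (L - 1)
    return 1 + n_params * (n_levels - 1)

def sn_ratio_smaller_is_better(objective_values):
    # S/N = -10 * log10(mean of squared objective values)
    mean_sq = sum(v ** 2 for v in objective_values) / len(objective_values)
    return -10.0 * math.log10(mean_sq)

print(min_experiments(4, 4))                             # 13 experiments, as used for the PSO
print(sn_ratio_smaller_is_better([0.12, 0.15, 0.11]))    # hypothetical MSE values at one level
```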

4.2. The Statistical Results for Different Soft Computing Models

The results for the testing and training levels are presented in Table 3. From the testing results, the longitudinal scour rate (LSR) was predicted with better accuracy than the vertical, left, and right scour rates (Table 3). In other words, the LSR was the most accurately predicted rate, as verified by NSE = 0.92, PBIAS = 0.14, and MAE = 0.367 mm/s calculated for the testing data. Table 3 also indicates that the MLP-CBO predicted the vertical scour rate with higher accuracy (MAE = 0.416, NSE = 0.90, and PBIAS = 0.20) than the scour rates to the right and left of the PL. For the VR* forecast, the outputs indicated that the MLP-CBO estimated the scour rate more accurately than it did for VL*.
At the testing level, Table 3 demonstrates that the MLP-WA produced a better estimation of the longitudinal scour rate than of VR*, VV*, and VL*. In addition, the MAE and PBIAS error indices for the vertical scour rate are lower than those obtained for VL* and VR*. The MAE value for forecasting VR* was 3.12, while for VL* it was 3.18. For the VR* prediction, the outputs indicated that the MLP-WA estimated the scour rate more accurately than it did for VL*.
For the VH* prediction, the statistical parameters showed that the MLP-PSO model predicted the scour rate more accurately (MAE: 0.391, NSE: 0.90, and PBIAS: 0.23 at the testing level) than it did for VR*, VV*, and VL*. Table 3 shows that VV* was predicted with higher accuracy (MAE: 0.449, PBIAS: 0.32, and NSE: 0.86 at the testing level) than VR* and VL*. For the VR* prediction, the results indicate that the MLP-PSO forecasted the scour rate more accurately than it did for VL*.

4.3. Comparison of Soft Computing Models

Twenty-five data records were used as inputs at the training level to fine-tune the ANN models. The optimization algorithms were used to find the optimal values of the ANN parameters, namely the connection weights and biases. The models prepared at the training level were then tested with 13 new data records to evaluate their accuracy on unseen data.
The MLP-CBO model showed better accuracy than the MLP-PSO and MLP-WA models. For the VL* prediction, the MLP-PSO model gave the least accurate results, with MAE: 0.742 mm/s, PBIAS: 0.45, and NSE: 0.80 (testing level), compared with the MLP-CBO and MLP-WA models. The error indices given by the MLP-CBO model for VV* were significantly better than those of the MLP-PSO and MLP-WA models, and the MLP-WA model in turn provided a better estimation than the MLP-PSO model. In Figure 8, a Taylor diagram was drawn for the final simulation outputs of all the applied computing models using three criteria: RMSE (root mean square error), standard deviation, and correlation. For VH*, the results of the MLP-CBO model lie close to and just below the semicircular zone RMSE < 0.5 and give higher values of the correlation coefficient, indicating that the MLP-CBO model provided better results than the other models. Figure 9 shows the convergence curves of the different optimization algorithms: the CBO converged faster than the other algorithms, and the MLP-CBO model was also more accurate than the other models. The CBO could find more accurate values for the weights and biases, so the MLP-CBO model had the lowest value of the error function among all the models. The accuracy of the MLP-CBO was high for all parameters, VL*, VV*, VH*, and VR*, and the MLP-CBO model also ran faster than the other hybrids because of its faster convergence. The general results indicate that all models estimated VH* better than VL*, VV*, and VR*. It should be noted that all models carried out their estimations based on four input variables, while the empirical models need more inputs. Although the ANN models gave more accurate estimates, they ignore the physical interactions at the pipe; numerical models, in contrast, must consider more detailed boundary conditions to estimate the scour parameters.

4.4. Comparison Analysis

Recently, Cheng et al. (2014) [33] proposed an empirical equation to estimate the longitudinal scour rate as follows:
$V_H = 11.3 \left( 1 - \frac{e}{B} \left( 1 + \sin\alpha \right) \right) \theta_w^{5/3}$
The following equations, derived with the least squares method, have also been proposed for estimating the scour rates [34]:
$V_H = 256.7\, \theta_w^{1.31}\, KC^{10.054}\, \left(\tfrac{e}{B}\right)^{0.02}\, (1 + \sin\alpha)^{0.103}$
$V_L = 1.765\, \theta_w^{0.579}\, KC^{0.37}\, \left(\tfrac{e}{B}\right)^{0.308}\, (1 + \sin\alpha)^{1.288}$
$V_R = 0.468\, \theta_w^{0.579}\, KC^{0.37}\, \left(\tfrac{e}{B}\right)^{0.308}\, (1 + \sin\alpha)^{1.288}$
$V_V = 0.661\, \theta_w^{0.949}\, KC^{0.397}\, \left(\tfrac{e}{B}\right)^{0.355}\, (1 + \sin\alpha)^{0.38}$
The statistical results of the MLP-CBO indicate significantly lower errors in the prediction of VH* than Equations (28) and (29) (Table 4). In terms of NSE, Equation (28) was the best among the empirical equations, while Equation (29) gave superior MAE and PBIAS values for the VH* forecast. Overall, the empirical equations predicted the scour rates with lower accuracy than the soft computing models; the MLP-PSO and MLP-WA models also estimated the scour rates notably more accurately than the empirical models.
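For reference, the least squares relations for the left, right, and vertical scour rates can be evaluated directly from the dimensionless inputs, as in the sketch below; the coefficients and exponents are taken as printed above, and the sample inputs lie inside the ranges of Table 1.

```python
def empirical_scour_rates(theta_w, KC, e_over_B, sin_alpha):
    """Least squares relations of [34] for the left, right, and vertical scour rates,
    with coefficients and exponents as printed in the text."""
    VL = 1.765 * theta_w ** 0.579 * KC ** 0.37 * e_over_B ** 0.308 * (1 + sin_alpha) ** 1.288
    VR = 0.468 * theta_w ** 0.579 * KC ** 0.37 * e_over_B ** 0.308 * (1 + sin_alpha) ** 1.288
    VV = 0.661 * theta_w ** 0.949 * KC ** 0.397 * e_over_B ** 0.355 * (1 + sin_alpha) ** 0.38
    return VL, VR, VV

# Example inputs within the ranges of Table 1
print(empirical_scour_rates(theta_w=0.25, KC=12.0, e_over_B=0.2, sin_alpha=0.3))
```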

4.5. Parametric Analysis

Figure 10 shows the effect of α on the scour propagation along the PL. The first result corresponds to α = 0 and KC = 8.7: the MLP-PSO and MLP-WA models underestimated VH*, while Equation (29) overestimated it (Figure 10a), and the MLP-CBO model estimated VH* more accurately than the other models. For α = 0 and KC = 15.8, the MLP-PSO and MLP-WA models significantly overestimated VH*, and again the MLP-CBO model estimated the scour rate more accurately than the other models (Figure 10). Figure 10c indicates that the MLP-PSO and MLP-WA models could not provide an accurate estimate of VH* for e/B = 0.1. Figure 10d shows that the MLP-PSO and MLP-WA models followed a downward trend and overestimated VH* in comparison with the MLP-CBO model.
The details of the uncertainty computation and the Latin hypercube sampling for the model are as follows:
1. At the beginning, the fitness g(b) and the uncertainty ranges of the parameters are determined; the mean square error is chosen as the objective function.
2. Latin hypercube sampling is performed in the range $[b_{\min}, b_{\max}]$, which is initially set to $[b_{j,\mathrm{abs\_mean}}, b_{j,\mathrm{abs\_max}}]$; the corresponding fitness functions are evaluated, and the sensitivity matrix J and the parameter covariance matrix C are calculated.
It should be mentioned that the uncertainty of the inputs affects the outputs. The data are experimental, and the measurements may contain errors; these errors in the input data are transferred to the outputs when the inputs are used to simulate them, so an uncertainty band for the outputs should be computed. When soft computing models are used, the error may propagate through the whole structure of the model.
It is therefore necessary to evaluate the uncertainty of the applied soft computing models. The Monte Carlo (MC) method provides a complete statistical description of the changes in the output variables due to the uncertainties in the inputs, and the following relations are used at different stages of the uncertainty computation:
$J_{ij} = \dfrac{\Delta g_i}{\Delta b_j}$
$C = s_g^2 \left( J^T J \right)^{-1}$
To evaluate the uncertainty of models, three indices were used:
$p = \dfrac{1}{k}\, \mathrm{count}\left[ X \mid X_L \le X \le X_U \right]$
$\bar{d} = \dfrac{\bar{d}_x}{\sigma_x}$
$\bar{d}_x = \dfrac{1}{k} \sum_{l=1}^{k} \left( X_U - X_L \right)$
where $k$ is the number of data points, $X_U$ is the upper boundary of the prediction band, $X_L$ is the lower boundary, $\sigma_x$ is the standard deviation of the observed data, $\bar{d}_x$ is the average distance between the upper and lower boundaries, $X$ is the variable of interest, and $p$ is the fraction of the observed data bracketed by the 95% prediction uncertainty. A value of $\bar{d}$ smaller than one is desirable, and models with larger $p$ values (maximum: 100%) perform better. The results of the uncertainty analysis are shown in Figure 11. It is clear that the MLP-CBO model improved on the accuracy of the MLP-WA and MLP-PSO models, and it can be concluded that the MLP-CBO and MLP-PSO models are the best and worst models, respectively. For the VL* prediction, the MLP-CBO model gave more accurate results, with p: 0.97 and d: 0.12 (testing level), than the MLP-PSO and MLP-WA models. For the VR* forecast, the outputs indicated that the MLP-CBO model provided a higher p and a lower d value than it did for VL*.
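The p and d indices defined above can be computed from the lower and upper bounds of the 95% prediction band as in the following sketch; the observed values and the band used in the example are hypothetical, and normalizing the band width by the standard deviation of the observed data is an interpretation of the definition of d.

```python
import numpy as np

def uncertainty_indices(obs, lower, upper):
    """Coverage (p) and normalized average band width (d) of a 95% prediction band.

    obs          : observed values
    lower, upper : lower and upper bounds of the prediction band for each observation
    """
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    k = obs.size
    p = np.count_nonzero((obs >= lower) & (obs <= upper)) / k   # fraction bracketed by the band
    d_x = np.mean(upper - lower)                                 # average band width
    d = d_x / np.std(obs)                                        # width normalized by the data spread
    return p, d

# Hypothetical example: observed scour rates and an assumed prediction band
obs = np.array([1.3, 2.1, 3.4, 4.0, 2.8])
print(uncertainty_indices(obs, obs - 0.4, obs + 0.4))
```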
The RMSE-observations standard deviation ratio (RSR) is a useful index to evaluate the computational models as follows:
$RSR = \dfrac{\sqrt{\sum_{i=1}^{N} \left( Y_i^{obs} - Y_i^{sim} \right)^2}}{\sqrt{\sum_{i=1}^{N} \left( Y_i^{obs} - Y_i^{mean} \right)^2}}$
where $Y_i^{mean}$ is the average of the observed data. The general performance ratings for the RSR index are shown in Table 5.
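A short sketch of the RSR computation and of the rating thresholds of Table 5 follows; it is an illustrative implementation with hypothetical data.

```python
import numpy as np

def rsr(obs, sim):
    """RMSE-observations standard deviation ratio."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    rmse = np.sqrt(np.sum((obs - sim) ** 2))
    std_obs = np.sqrt(np.sum((obs - obs.mean()) ** 2))
    return rmse / std_obs

def rsr_rating(value):
    """Map an RSR value to the performance rating of Table 5 [35]."""
    if value <= 0.50:
        return "Very good"
    if value <= 0.60:
        return "Good"
    if value <= 0.70:
        return "Satisfactory"
    return "Unsatisfactory"

obs = np.array([1.3, 2.1, 3.4, 4.0, 2.8])
sim = np.array([1.4, 2.0, 3.1, 4.3, 2.6])
print(rsr(obs, sim), rsr_rating(rsr(obs, sim)))
```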
Figure 12 shows the RSR value for the soft computing models. The error indices given by the MLP-CBO model for VR*, VV*, VH*, and VL* showed very good performance compared with those of the MLP-PSO and MLP-WA models.

5. Conclusions

In this article, the MLP-CBO, MLP-PSO, and MLP-WA models were applied to forecast the scour rates below PLs due to waves. The PSO, CBO, and WA were used to update the weighting coefficients through the training level. The results indicate that the MLP-CBO model forecasted the longitudinal scour rate with MAEs of 0.345 and 0.367 mm/s at the training and testing levels, respectively. For the VR* prediction, the results reveal that the MLP-PSO model forecasted the scour rate more accurately than it did for VL*. The empirical equations predicted the scour rates with lower accuracy than the soft computing models. Considering the complexity of the problem, the scour rates could be assessed successfully using the design curves. However, a dedicated pre-processing analysis of the data is needed to obtain a more appropriate input–output pattern, which could lead to more accurate scour rate predictions. In addition, it would be useful to integrate the model with a multi-objective algorithm that could simultaneously identify the most effective inputs and find the optimal values of the ANN parameters. In fact, some studies involve a large number of inputs, which makes selecting the appropriate inputs complex; methods such as principal component analysis or multi-objective algorithms can therefore be used to enhance model accuracy through the selection of appropriate inputs.

Author Contributions

Conceptualization, M.E. and F.B.B.; Methodology, A.N.A. and L.L.; Software, S.D.L. and H.A.A.; Validation, C.M.F.; Writing—original draft, A.E.-S. All authors have read and agreed to the published version of the manuscript.

Funding

The authors appreciate the financial support received from the research grant coded GPD082A-2018 funded by the University of Malaya and from IPSR of the Universiti Tunku Abdul Rahman.

Acknowledgments

The authors gratefully acknowledge the technical facility support received from the University of Malaya, Malaysia.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ANFIS: Adaptive Neuro-Fuzzy Inference System
ANN: Artificial Neural Network
BA: Bat Algorithm
CBO: Colliding Bodies Optimization
FFA: Firefly Algorithm
FS: Free Span
GA: Genetic Algorithm
GP: Genetic Programming
MLP: Multilayer Perceptron
MLP-CBO: Multilayer Perceptron-Colliding Bodies Optimization
MLP-PSO: Multilayer Perceptron-Particle Swarm Optimization
MLP-WA: Multilayer Perceptron-Whale Algorithm
PL: Pipeline
PSO: Particle Swarm Optimization
SA: Shark Algorithm
WA: Whale Algorithm

References

1. Parsaie, A.; Haghiabi, A.H.; Moradinejad, A. Prediction of Scour Depth below River Pipeline Using Support Vector Machine. KSCE J. Civ. Eng. 2019, 23, 2503–2513.
2. Peng, Z.; Zou, Q.-P.; Lin, P. A Partial Cell Technique for Modeling the Morphological Change and Scour. Coast. Eng. 2018, 131, 88–105.
3. Mawat, M.J.; Khudier, A.S.; Hashim, S.J. Evaluation Study of Free Spanning Subjected to Hydrodynamic Loads. J. Univ. Babylon 2018, 26, 227–237.
4. Najafzadeh, M.; Shiri, J.; Rezaie-Balf, M. New Expression-Based Models to Estimate Scour Depth at Clear Water Conditions in Rectangular Channels. Mar. Georesour. Geotechnol. 2017, 36, 227–235.
5. Jamei, M.; Ahmadianfar, I. Prediction of Scour Depth at Piers with Debris Accumulation Effects Using Linear Genetic Programming. Mar. Georesour. Geotechnol. 2019, 1–12.
6. Sharafati, A.; Yasa, R.; Azamathulla, H.M. Assessment of Stochastic Approaches in Prediction of Wave-Induced Pipeline Scour Depth. J. Pipeline Syst. Eng. Pract. 2018, 9.
7. Najafzadeh, M.; Kargar, A.R. Gene-Expression Programming, Evolutionary Polynomial Regression, and Model Tree to Evaluate Local Scour Depth at Culvert Outlets. J. Pipeline Syst. Eng. Pract. 2019, 10.
8. Dang, N.M.; Tran Anh, D.; Dang, T.D. ANN Optimized by PSO and Firefly Algorithms for Predicting Scour Depths around Bridge Piers. Eng. Comput. 2019.
9. Moradi, F.; Bonakdari, H.; Kisi, O.; Ebtehaj, I.; Shiri, J.; Gharabaghi, B. Abutment Scour Depth Modeling Using Neuro-Fuzzy-Embedded Techniques. Mar. Georesour. Geotechnol. 2018, 37, 190–200.
10. Niazkar, M.; Afzali, S.H. Developing a New Accuracy-Improved Model for Estimating Scour Depth around Piers Using a Hybrid Method. Iran. J. Sci. Technol. Trans. Civ. Eng. 2018, 43, 179–189.
11. Eghbalzadeh, A.; Hayati, M.; Rezaei, A.; Javan, M. Prediction of Equilibrium Scour Depth in Uniform Non-Cohesive Sediments Downstream of an Apron Using Computational Intelligence. Eur. J. Environ. Civ. Eng. 2016, 22, 28–41.
12. Sreedhara, B.M.; Rao, M.; Mandal, S. Application of an Evolutionary Technique (PSO–SVM) and ANFIS in Clear-Water Scour Depth Prediction around Bridge Piers. Neural Comput. Appl. 2018, 31, 7335–7349.
13. Hassanzadeh, Y.; Jafari-Bavil-Olyaei, A.; Aalami, M.-T.; Kardan, N. Experimental and Numerical Investigation of Bridge Pier Scour Estimation Using ANFIS and Teaching-Learning-Based Optimization Methods. Eng. Comput. 2018, 35, 1103–1120.
14. Aamir, M.; Ahmad, Z. Estimation of Maximum Scour Depth Downstream of an Apron under Submerged Wall Jets. J. Hydroinform. 2019, 21, 523–540.
15. Azamathulla, H.M.D.; Ghani, A.A. ANFIS-Based Approach for Predicting the Scour Depth at Culvert Outlets. J. Pipeline Syst. Eng. Pract. 2011, 2, 35–40.
16. Haghiabi, A.H. Closure to "Prediction of River Pipeline Scour Depth Using Multivariate Adaptive Regression Splines" by Amir Hamzeh Haghiabi. J. Pipeline Syst. Eng. Pract. 2019, 10.
17. Ahmad, N.; Bihs, H.; Myrhaug, D.; Kamath, A.; Arntsen, Ø.A. Numerical Modeling of Breaking Wave Induced Seawall Scour. Coast. Eng. 2019, 150, 108–120.
18. Cheng, L.; Yeow, K.; Zhang, Z.; Teng, B. Three-Dimensional Scour below Offshore Pipelines in Steady Currents. Coast. Eng. 2009, 56, 577–590.
19. Ehteram, M.; El-Shafie, A.H.; Hin, L.S.; Othman, F.; Koting, S.; Karami, H.; Mousavi, S.-F.; Farzin, S.; Ahmed, A.N.; Zawawi, B.; et al. Toward Bridging Future Irrigation Deficits Utilizing the Shark Algorithm Integrated with a Climate Change Model. Appl. Sci. 2019, 9, 3960.
20. Ehteram, M.; Binti Koting, S.; Afan, H.A.; Mohd, N.S.; Malek, M.A.; Ahmed, A.N.; El-shafie, A.H.; Onn, C.C.; Lai, S.H.; El-Shafie, A. New Evolutionary Algorithm for Optimizing Hydropower Generation Considering Multireservoir Systems. Appl. Sci. 2019, 9, 2280.
21. Li, Y.; Jiang, P.; She, Q.; Lin, G. Research on Air Pollutant Concentration Prediction Method Based on Self-Adaptive Neuro-Fuzzy Weighted Extreme Learning Machine. Environ. Pollut. 2018.
22. Ehteram, M.; Singh, V.P.; Ferdowsi, A.; Mousavi, S.F.; Farzin, S.; Karami, H.; Mohd, N.S.; Afan, H.A.; Lai, S.H.; Kisi, O.; et al. An Improved Model Based on the Support Vector Machine and Cuckoo Algorithm for Simulating Reference Evapotranspiration. PLoS ONE 2019, 14.
23. Karami, H.; Ehteram, M.; Mousavi, S.-F.; Farzin, S.; Kisi, O.; El-Shafie, A. Optimization of Energy Management and Conversion in the Water Systems Based on Evolutionary Algorithms. Neural Comput. Appl. 2018, 31, 5951–5964.
24. Zhang, R.; Wu, P. The Investigation of Shape Factors in Determining Scour Depth at Culvert Outlets. ISH J. Hydraul. Eng. 2019, 1–7.
25. Das, B.S.; Devi, K.; Khatua, K.K. Prediction of Discharge in Converging and Diverging Compound Channel by Gene Expression Programming. ISH J. Hydraul. Eng. 2019, 1–11.
26. Roushangar, K.; Foroudi Khowr, A.; Saneie, M. Experimental Study and Artificial Intelligence-Based Modeling of Discharge Coefficient of Converging Ogee Spillways. ISH J. Hydraul. Eng. 2019, 1–8.
27. Kaveh, A.; Dadras, A.; Montazeran, A.H. Chaotic Enhanced Colliding Bodies Algorithms for Size Optimization of Truss Structures. Acta Mech. 2018, 229, 2883–2907.
28. Panda, A.; Pani, S. Determining Approximate Solutions of Nonlinear Ordinary Differential Equations Using Orthogonal Colliding Bodies Optimization. Neural Process. Lett. 2017, 48, 219–243.
29. Kaveh, A.; Sabeti, S. Structural Optimization of Jacket Supporting Structures for Offshore Wind Turbines Using Colliding Bodies Optimization Algorithm. Struct. Des. Tall Spec. Build. 2018, 27, e1494.
30. Kaveh, A.; Rezaei, M.; Shiravand, M.R. Optimal Design of Nonlinear Large-Scale Suspendome Using Cascade Optimization. Int. J. Space Struct. 2017, 33, 3–18.
31. Ehteram, M.; Karami, H.; Mousavi, S.F.; Farzin, S.; Celeste, A.B.; Shafie, A.-E. Reservoir Operation by a New Evolutionary Algorithm: Kidney Algorithm. Water Resour. Manag. 2018, 32, 4681–4706.
32. Ehteram, M.; Singh, V.P.; Karami, H.; Hosseini, K.; Dianatikhah, M.; Hossain, M.; Ming Fai, C.; El-Shafie, A. Irrigation Management Based on Reservoir Operation with an Improved Weed Algorithm. Water 2018, 10, 1267.
33. Cheng, L.; Yeow, K.; Zang, Z.; Li, F. 3D Scour below Pipelines under Waves and Combined Waves and Currents. Coast. Eng. 2014, 83, 137–149.
34. Najafzadeh, M.; Saberi-Movahed, F. GMDH-GEP to Predict Free Span Expansion Rates below Pipelines under Waves. Mar. Georesour. Geotechnol. 2018, 37, 375–392.
35. Moriasi, D.N.; Arnold, J.G.; Van Liew, M.W.; Bingner, R.L.; Harmel, R.D.; Veith, T.L. Model Evaluation Guidelines for Systematic Quantification of Accuracy in Watershed Simulations. Trans. ASABE 2007, 50, 885–900.
Figure 1. Three-dimensional scour process below the pipeline (PL) [18].
Figure 2. The MLP structure for the current study ( β I : bias for the hidden neuron and β H : bias for the output neuron).
Figure 3. The structure of the colliding body (CB) algorithm (NFE: Maximum Number of Objective Function Evaluations).
Figure 4. The details of the particle swarm optimization (PSO) components including location and velocity of particles.
Figure 5. Bubble-net feeding behavior of humpback whales.
Figure 6. The details of experimental model [33].
Figure 7. Structure of a decision variable in the Salp Swarm Algorithm (SSA) including bias and weight.
Figure 8. Taylor diagram for the soft computing models, based on correlation, RMSE, and standard deviation, for VR*, VV*, VL*, and VH*.
Figure 9. Convergence curve for algorithms using the number of iterations and objective function value.
Figure 10. The variation of parameters versus scour rates: (a) α = 0, KC = 8.7; (b) α = 0, KC = 15.8; (c) α = 15, KC = 15.8; (d) α = 15, KC = 18.
Figure 11. Results of uncertainty analysis of soft computing models using d and p indices.
Figure 12. The computed RSR for different models.
Table 1. Ranges of input–output parameters for the scour rate prediction.
Parameter | Range
H (m) (input) | 0.13–0.17
T (s) (input) | 1.5–2.0
Uw (m/s) (input) | 0.29–0.45
em (mm) (input) | 5–20
V*H (mm/s) (output) | 1.27–5.162
V*L (mm/s) (output) | 1.19–4.527
V*R (mm/s) (output) | 1.11–4.49
V*v (mm/s) (output) | 0.592–2.405
θw (input) | 0.18–0.30
KC (input) | 8.7–18
e/B (input) | 0.10–0.40
α (rad) (input) | 0.0–0.70
Table 2. (a) The parameter levels and signal-to-noise (S/N) ratio for the PSO, (b) the best parameters for the colliding bodies’ optimization (CBO), and (c) the best parameters for the whale algorithm (WA).
(a) PSO parameter levels and S/N ratios
Level 1: population size 100 (S/N: 0.87); inertia weight 0.2 (S/N: 0.21); individual coefficient 1.6 (S/N: 0.32); social coefficient 1.6 (S/N: 0.30)
Level 2: population size 200 (S/N: 0.76); inertia weight 0.40 (S/N: 0.15); individual coefficient 1.8 (S/N: 0.54); social coefficient 1.8 (S/N: 0.41)
Level 3: population size 300 (S/N: 0.82); inertia weight 0.60 (S/N: 0.42); individual coefficient 2.0 (S/N: 0.30); social coefficient 2.0 (S/N: 0.24)
Level 4: population size 400 (S/N: 0.87); inertia weight 0.80 (S/N: 0.55); individual coefficient 2.2 (S/N: 0.45); social coefficient 2.2 (S/N: 0.32)
(b) CBO population size levels and S/N ratios
Level 1: 100 (S/N: 0.84); Level 2: 200 (S/N: 0.96); Level 3: 300 (S/N: 0.83); Level 4: 400 (S/N: 0.87)
(c) WA: population size: 100; maximum number of iterations: 200
Table 3. The statistical results for the soft computing models. MAE: mean absolute error; PBIAS: percent bias; NSE: Nash–Sutcliffe efficiency; MLP: multilayer perceptron.
Model | MAE (mm/s) | PBIAS | NSE
Train
MLP-CBO (VH*) | 0.345 | 0.12 | 0.95
MLP-WA (VH*) | 0.389 | 0.17 | 0.93
MLP-PSO (VH*) | 0.393 | 0.22 | 0.92
Test
MLP-CBO (VH*) | 0.367 | 0.14 | 0.92
MLP-WA (VH*) | 0.379 | 0.19 | 0.91
MLP-PSO (VH*) | 0.391 | 0.23 | 0.90
Train
MLP-CBO (VV*) | 0.412 | 0.18 | 0.91
MLP-WA (VV*) | 0.422 | 0.22 | 0.89
MLP-PSO (VV*) | 0.434 | 0.25 | 0.87
Test
MLP-CBO (VV*) | 0.416 | 0.20 | 0.90
MLP-WA (VV*) | 0.432 | 0.29 | 0.88
MLP-PSO (VV*) | 0.449 | 0.32 | 0.86
Train
MLP-CBO (VR*) | 0.512 | 0.27 | 0.89
MLP-WA (VR*) | 0.522 | 0.32 | 0.87
MLP-PSO (VR*) | 0.523 | 0.34 | 0.85
Test
MLP-CBO (VR*) | 0.534 | 0.29 | 0.87
MLP-WA (VR*) | 0.541 | 0.35 | 0.86
MLP-PSO (VR*) | 0.555 | 0.37 | 0.85
Train
MLP-CBO (VL*) | 0.612 | 0.31 | 0.86
MLP-WA (VL*) | 0.621 | 0.39 | 0.84
MLP-PSO (VL*) | 0.629 | 0.42 | 0.83
Test
MLP-CBO (VL*) | 0.714 | 0.33 | 0.85
MLP-WA (VL*) | 0.738 | 0.42 | 0.82
MLP-PSO (VL*) | 0.742 | 0.45 | 0.80
Table 4. The results of models for all data.
Model | MAE (mm/s) | PBIAS | NSE
Equation (28) | 1.59 | 0.55 | 0.87
Equation (29) | 1.34 | 0.52 | 0.86
Equation (30) | 1.37 | 0.56 | 0.85
Equation (31) | 1.42 | 0.55 | 0.81
Equation (32) | 1.45 | 0.54 | 0.80
MLP-CBO (VH*) | 0.367 | 0.14 | 0.90
MLP-WA (VH*) | 0.391 | 0.23 | 0.89
MLP-PSO (VH*) | 0.399 | 0.25 | 0.87
MLP-CBO (VL*) | 0.721 | 0.42 | 0.89
MLP-WA (VL*) | 0.745 | 0.45 | 0.87
MLP-PSO (VL*) | 0.814 | 0.49 | 0.86
MLP-CBO (VR*) | 0.621 | 0.32 | 0.85
MLP-WA (VR*) | 0.634 | 0.36 | 0.82
MLP-PSO (VR*) | 0.642 | 0.38 | 0.80
MLP-CBO (VV*) | 0.521 | 0.23 | 0.88
MLP-WA (VV*) | 0.555 | 0.25 | 0.89
MLP-PSO (VV*) | 0.591 | 0.27 | 0.89
Table 5. General performance rating for the RMSE-observations standard deviation ratio (RSR) index [35].
Performance Rating | RSR Value
Very good | 0.00 ≤ RSR ≤ 0.50
Good | 0.50 < RSR ≤ 0.60
Satisfactory | 0.60 < RSR ≤ 0.70
Unsatisfactory | RSR > 0.70

MDPI and ACS Style

Ehteram, M.; Ahmed, A.N.; Ling, L.; Fai, C.M.; Latif, S.D.; Afan, H.A.; Banadkooki, F.B.; El-Shafie, A. Pipeline Scour Rates Prediction-Based Model Utilizing a Multilayer Perceptron-Colliding Body Algorithm. Water 2020, 12, 902. https://doi.org/10.3390/w12030902

