Artificial Neural Network
An artificial neural network (ANN), usually called a neural network (NN), is a mathematical or computational model inspired by the structure and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. ANNs are powerful tools for modelling, especially when the underlying data relationship is unknown. They can identify and learn correlated patterns between input data sets and corresponding target values; after training, they can be used to predict the outcome of new independent input data.
ANNs have been applied to many geotechnical engineering problems, such as pile capacity prediction, modelling soil behaviour, site characterisation, earth retaining structures, settlement of structures, slope stability, design of tunnels and underground openings, liquefaction, soil permeability and hydraulic conductivity, soil compaction, soil swelling, and classification of soils.
Figure 1: Three layer neural network
Figure 1 shows a three-layer neural network: the first layer consists of input neurons, the second of hidden neurons, and the third of output neurons. Supervised neural networks are trained to produce desired outputs in response to a training set of inputs; the network is trained by providing it with input patterns and matching output patterns.
Supervised networks are used in the modelling and control of dynamic systems, the classification of noisy data, and the prediction of future events. Unsupervised neural networks, on the other hand, are trained by letting the network continually adjust itself to new inputs. This is also known as self-organisation, in which an (output) unit is trained to respond to clusters of patterns within the inputs. Reinforcement learning can be considered an intermediate form of the above two types of learning: the learning machine performs an action on the environment and receives a feedback response from the environment.
For an artificial neuron, each weight is a number that represents a synapse. A negative weight reflects an inhibitory connection, while a positive weight designates an excitatory connection. All inputs, modified by the weights, are summed together; this is referred to as a linear combination. Finally, an activation function controls the amplitude of the output: an acceptable range of output is usually between 0 and 1, or between -1 and 1.
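As a minimal sketch (in Python, purely for illustration), a single artificial neuron can be written as a weighted sum of its inputs followed by a sigmoid activation; the function name and the numeric values are illustrative only:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs (a linear
    combination), passed through a sigmoid activation function that
    controls the amplitude of the output, keeping it between 0 and 1."""
    # Each weight plays the role of a synapse: negative = inhibitory,
    # positive = excitatory.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative values: one inhibitory weight (-0.7) among excitatory ones.
out = neuron([0.5, 0.8, 0.2], weights=[1.2, -0.7, 0.4], bias=0.1)
```

With zero weights and zero bias the sigmoid returns exactly 0.5, which is a convenient sanity check on the implementation.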
A neuron is a real-valued function of the input vector (x1, x2, …, xk). The output is obtained as f(yj), where yj is the weighted sum of the inputs and f is typically a sigmoid (logistic or hyperbolic tangent) function. A graphical presentation of a neuron is given in Figure 2. Mathematically, a multi-layer perceptron network is a function consisting of compositions of weighted sums of the functions corresponding to the neurons.
Figure 2: Graphical presentation of neuron in ANN
There are several types of NN architecture; however, the two most widely used are feed-forward networks and recurrent networks. In a feed-forward network, information flows strictly in one direction along connecting pathways, from the input layer via the hidden layers to the final output layer. There is no feedback (loops), i.e. the output of any layer does not affect that same layer or a preceding layer. The data processing can extend over multiple (layers of) units, but no feedback connections are present.
Figure 3: A multi-layer feed forward neural network
Recurrent networks differ from feed-forward architectures in that they contain at least one feedback loop. Thus, in these networks there could, for example, exist one layer with feedback connections, as shown in the figure below. There could also be neurons with self-feedback links, i.e. the output of a neuron is fed back into itself as input. In some cases, the activation values of the units undergo a relaxation process such that the network evolves to a stable state in which these activations no longer change. In other applications, the change in the activation values of the output neurons is significant, such that the dynamical behaviour constitutes the output of the network (Pearlmutter, 1990).
Figure 4: A recurrent neural network
Back propagation network
The back-propagation algorithm is a non-linear extension of the least mean squares (LMS) algorithm for multi-layer perceptrons. It is the most widely used of the neural network paradigms and has been successfully applied in many fields for model-free function estimation. The back-propagation network (BPN) is computationally expensive, especially during the training process. A properly trained BPN, however, tends to produce reasonable results when presented with new input data sets.
A BPN is usually layered, with each layer fully interconnected to the layers below and above it. The first layer is the input layer, the only layer in the network that can receive external input. The second layer is the hidden layer, in which the processing units are interconnected to the layers below and above it. The third layer is the output layer. Each unit of the hidden layer is interconnected with the units of the output layer. Units are not interconnected to other units within the same layer. Each interconnection is assigned an associative connection strength, expressed as a weight (Figure 1). The weights are adjusted during training of the network. In a BPN the training is supervised: the network is presented with target values for each input pattern. The input space of the network is considered to be linearly separable.
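The layered BPN described above can be sketched as follows. This is an illustrative Python/NumPy implementation of a small 2-4-1 network trained by back-propagation on a toy XOR pattern set; it is a generic sketch of the technique, not the specific geotechnical networks discussed later, and the layer sizes and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2 inputs -> 4 hidden -> 1 output; layers fully interconnected,
# with no connections between units within the same layer.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Supervised training: input patterns with matching target outputs (XOR).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

def forward():
    H = sigmoid(X @ W1 + b1)      # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)      # output-layer activations
    return H, Y

_, Y0 = forward()
sse_before = float(((Y0 - T) ** 2).sum())   # error before training

lr = 0.5                                    # learning rate
for _ in range(10000):
    H, Y = forward()
    # Back-propagate the error: output-layer delta, then hidden-layer delta
    # (each uses the derivative of the sigmoid, f * (1 - f)).
    dY = (Y - T) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    # Adjust the connection weights by gradient descent on the squared error.
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

_, Y = forward()
sse_after = float(((Y - T) ** 2).sum())     # error after training
```

The training loop repeatedly presents the same patterns and nudges the weights downhill on the error surface, so the sum of squared errors after training should be lower than before.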
The various steps in developing a neural network model are summarized below; the example is implemented using MATLAB software.
Variable selection
The input variables important for modelling the variable(s) under study are selected by suitable variable selection procedures.
Formation of training, testing and validation sets
The data set is divided into three distinct sets: training, testing and validation. The training set is the largest and is used by the neural network to learn the patterns present in the data. The testing set is used to evaluate the generalization ability of a supposedly trained network. A final check on the performance of the trained network is made using the validation set.
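A sketch of such a three-way split is shown below; the 70/15/15 proportions and the synthetic records are illustrative choices, not values prescribed by the text:

```python
import random

random.seed(42)
# A hypothetical data set of 100 (input, target) records.
data = [(i, 2 * i) for i in range(100)]
random.shuffle(data)

# Illustrative proportions: the training set is the largest,
# with the remainder shared between testing and validation.
n_train = int(0.70 * len(data))
n_test = int(0.15 * len(data))

training_set = data[:n_train]                    # network learns from this
testing_set = data[n_train:n_train + n_test]     # checks generalization
validation_set = data[n_train + n_test:]         # final performance check
```

Shuffling before slicing avoids any ordering in the original data leaking into one particular subset.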
Neural network architecture
Neural network architecture defines the structure of the network, including the number of hidden layers, the number of hidden nodes and the number of output nodes.
• Number of hidden layers: The hidden layer(s) provide the network with its ability to generalize. In theory, a neural network with one hidden layer containing a sufficient number of hidden neurons is capable of approximating any continuous function. In practice, neural networks with one, and occasionally two, hidden layers are widely used and have been found to perform very well.
• Number of hidden nodes: There is no magic formula for selecting the optimum number of hidden neurons. However, some rules of thumb are available for calculating the number of hidden neurons. A rough approximation can be obtained by the geometric pyramid rule proposed by Masters (1993): for a three-layer network with n input and m output neurons, the hidden layer would have sqrt(n·m) neurons.
• Activation function: Activation functions are mathematical formulae that determine the output of a processing node. Each unit takes its net input and applies an activation function to it. Non-linear functions such as the logistic and hyperbolic tangent functions are used as activation functions. Transfer functions such as the sigmoid are commonly used because they are non-linear and continuously differentiable, properties which are desirable for network learning.
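The two rules above can be sketched together; the function names are illustrative:

```python
import math

def pyramid_rule_hidden(n_inputs, n_outputs):
    # Geometric pyramid rule (Masters, 1993): for a three-layer network
    # with n inputs and m outputs, use roughly sqrt(n * m) hidden neurons.
    return max(1, round(math.sqrt(n_inputs * n_outputs)))

def logistic(z):
    # Logistic sigmoid: non-linear, continuously differentiable,
    # output in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def logistic_derivative(z):
    # Its derivative has the convenient closed form f(z) * (1 - f(z)).
    f = logistic(z)
    return f * (1.0 - f)

def tanh_derivative(z):
    # Hyperbolic tangent: output in (-1, 1); derivative is 1 - tanh(z)^2.
    return 1.0 - math.tanh(z) ** 2

# e.g. the slope-stability network discussed later: 9 inputs, 1 output -> 3
n_hidden = pyramid_rule_hidden(9, 1)
```

The simple closed-form derivatives are what make these functions convenient for back-propagation, which needs the derivative of the activation at every node.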
The most common error function minimized in neural networks is the sum of squared errors. Other error functions offered by different software include least absolute deviations, least fourth powers, asymmetric least squares and percentage differences.
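These error functions can be written out directly; the percentage-difference form shown here is one common variant, assumed for illustration (software packages differ in the exact definition):

```python
def sum_squared_errors(targets, outputs):
    # The most common error function: sum of squared differences.
    return sum((t - o) ** 2 for t, o in zip(targets, outputs))

def least_absolute_deviations(targets, outputs):
    return sum(abs(t - o) for t, o in zip(targets, outputs))

def least_fourth_powers(targets, outputs):
    # Penalizes large errors much more heavily than squared error.
    return sum((t - o) ** 4 for t, o in zip(targets, outputs))

def percentage_differences(targets, outputs):
    # One common variant: error relative to the target (targets non-zero).
    return sum(abs(t - o) / abs(t) for t, o in zip(targets, outputs)) * 100.0

targets = [1.0, 2.0, 4.0]
outputs = [1.5, 2.0, 3.0]
```

Comparing the functions on the same residuals (0.5, 0, 1) shows how each weights errors: squared error gives 1.25, absolute deviation 1.5, and fourth powers 1.0625, with the fourth-power form dominated by the largest residual.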
Neural network training
Training a neural network to learn patterns in the data involves iteratively presenting it with examples of the correct known answers. The objective of training is to find the set of weights between the neurons that determines the global minimum of the error function. This involves decisions regarding the number of iterations, i.e. when to stop training the network, and the selection of the learning rate.
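One common way to decide when to stop is to watch the validation error during training. The driver below is a hypothetical sketch: the `step` and `validation_error` callables stand in for a real weight update and a real error evaluation, and the toy error sequence in the usage example is fabricated to show the mechanism:

```python
def train_with_early_stopping(step, validation_error, max_iter=1000, patience=10):
    """Run up to max_iter training iterations, stopping early when the
    validation error has not improved for `patience` consecutive iterations.
    Returns (iterations_run, best_validation_error)."""
    best, since_best = float("inf"), 0
    for i in range(max_iter):
        step()                       # one weight-update pass over the data
        err = validation_error()     # error on the held-out validation set
        if err < best - 1e-12:
            best, since_best = err, 0
        else:
            since_best += 1
            if since_best >= patience:
                break                # validation error has stopped improving
    return i + 1, best

# Toy usage: a fabricated error curve that improves, then flattens out.
err_sequence = [1.0, 0.6, 0.4, 0.35, 0.35, 0.35, 0.35]
state = {"i": -1}
def step(): state["i"] += 1
def val_err(): return err_sequence[min(state["i"], len(err_sequence) - 1)]

iters, best = train_with_early_stopping(step, val_err, max_iter=100, patience=3)
```

Here training halts after the error flattens at 0.35 for three consecutive checks, rather than running all 100 iterations.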
Various researchers have used ANNs to predict slope stability, slope failure, or the factor of safety. A back-propagation neural network is used to calculate the factor of safety, with nine input parameters and one output parameter in the analysis. The output parameter is the factor of safety of the slope; the input parameters are the height of the slope, the inclination of the slope, the height of the water level, the depth of the firm base, the cohesion of the soil, the friction angle of the soil, the unit weight of the soil, and, importantly, the horizontal and vertical seismic coefficients.
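A hedged sketch of such a nine-input, one-output factor-of-safety model is given below, here using scikit-learn's MLPRegressor rather than the tools used in the original studies, and trained on synthetic placeholder data (the target function is made up purely so the example runs; real studies used measured slope records):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Nine synthetic inputs per slope, standing in for: height, inclination,
# water level height, depth of firm base, cohesion, friction angle,
# unit weight, and horizontal and vertical seismic coefficients.
X = rng.uniform(0.0, 1.0, size=(200, 9))

# Fabricated smooth target standing in for the factor of safety (FS):
# increases with "cohesion" and "friction angle", drops with "seismic load".
fs = 1.0 + X[:, 4] + 0.5 * X[:, 5] - 0.8 * X[:, 7]

# Three hidden neurons, per the pyramid rule sqrt(9 * 1) = 3; logistic
# activation as in a classic BPN.
model = MLPRegressor(hidden_layer_sizes=(3,), activation="logistic",
                     solver="lbfgs", max_iter=2000, random_state=0)
model.fit(X[:150], fs[:150])       # training set
pred = model.predict(X[150:])      # held-out set
```

The point of the sketch is the shape of the problem (a 9-to-1 regression network), not the particular library or the synthetic numbers.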
Slope failures are complex natural phenomena that constitute a serious natural hazard in many countries. To prevent or mitigate landslide damage, slope-stability analysis and stabilization require an understanding and evaluation of the processes that govern the behaviour of slopes. The factor of safety, based on an appropriate geotechnical model, is required as an index of stability in order to evaluate slope stability. Many variables are involved in slope stability evaluation, and the calculation of the factor of safety requires geometrical data, physical data on the geologic materials and their shear-strength parameters (cohesion and angle of internal friction), information on pore-water pressures, etc.
The determination of the non-linear behaviour of multivariate dynamic systems often presents a challenging and demanding problem. The impact of these parameters on the stability of slopes is investigated through the use of neural networks. The input data for slope stability estimation consist of values of geotechnical and geometrical input parameters. The network estimates the factor of safety (FS), which can be modelled as a function approximation problem, or the stability status (S), which can be modelled either as a function approximation problem or as a classification problem. The performance of the network is measured and the results are compared to those obtained by means of standard analytical methods.
A series of ANNs were created in order to predict the safety factor and estimate stability against the circular failure mechanism and the wedge failure mechanism.
ANNs and fuzzy sets can primarily be used in two ways in slope stability analysis. One is the prediction of various strength and physico-mechanical properties from previously measured properties. The other is the direct prediction of the factor of safety or stability status, based on simulation of a large data set or incorporation of case studies.
Rock slopes play an important role in design and excavation for various open-pit mines and civil engineering projects around the world. Initially, the cohesion and friction angle can be obtained by a neural network trained with compressive strength as the input parameter; the cohesion and friction angle calculated from the compressive strength are then used as inputs to a finite difference code to analyse the slope stability and determine the factor of safety.
Figure 6: Flow chart to determine the properties and analyse the slope stability by ANN