13.2 Probabilistic Analysis

Traditional methods use principles of static equilibrium to evaluate the balance of driving and resisting forces. The factor of safety is defined as the ratio of resisting forces to driving forces, or equivalently as the ratio of the available shear strength to the calculated shear stress. A factor of safety greater than one indicates a stable slope; a value less than one indicates impending failure.
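Written in terms of shear strength and shear stress on the potential failure surface, and taking Mohr–Coulomb strength as an example (so that the cohesion $c$, friction angle $\phi$ and effective normal stress $\sigma'_n$ appear explicitly), the definition reads

$$FS = \frac{\tau_f}{\tau} = \frac{c + \sigma'_n \tan\phi}{\tau}$$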

Probabilistic analysis of slope stability must contend with the sources of uncertainty in the input data. A probabilistic analysis treats the parameters affected by uncertainty as random variables. The accuracy of an experimental probability density function depends on the number of observations. Geometrical features of rock discontinuities, such as spacing, orientation and persistence, can be gathered more easily than the shear strength characteristics of the discontinuities. A probabilistic model also requires knowledge, or a reliable estimate, of whether the random variables are independent or of the correlation between them.

Many variables are involved in evaluating slope stability and calculating the factor of safety: the slope geometry, physical data on the geologic materials, their shear-strength parameters (cohesion and angle of internal friction), pore-water pressures, unit weights and seismic acceleration. Traditional slope stability analysis uses a single value for each variable, so its output is a single, deterministic estimate of the factor of safety. A single-valued factor of safety cannot quantify the probability of failure associated with a particular design. A probabilistic approach offers a systematic way to treat such uncertainties in geotechnical problems, slope stability in particular.

The variables associated with slope design are uncertain for many reasons, so probabilistic methods can be used to account for this uncertainty when assessing the stability of a slope. There are many sources of uncertainty in slope stability analysis; which ones matter most varies from analysis to analysis and is case specific. The uncertainties include:

1.      Site topography

2.      Site stratigraphy and variability

3.      Geologic origin and characteristics of subsurface materials

4.      Groundwater level

5.      In-situ soil and/or rock characteristics

6.      Engineering properties of rock mass

7.      Soil and rock behavior

Parameters such as the angle of friction of rock joints, the uniaxial compressive strength of rock specimens, the inclination and orientation of discontinuities in a rock mass and the measured in situ stresses in the rock surrounding an opening do not have a single fixed value but may assume any number of values. There is no way of predicting exactly what the value of one of these parameters will be at any given location. Hence these parameters are described as random variables.

The arithmetic mean, often referred to as simply the mean or average, is the most familiar measure of central tendency. Suppose the data set is $\{x_1, x_2, \ldots, x_n\}$. Then the arithmetic mean is defined as

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

The variance describes the extent of the scatter of the random variable about the mean and is calculated as

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$$

The standard deviation ($s$) is given by the positive square root of the variance. A small standard deviation will indicate a tightly clustered data set, while a large standard deviation will be found for a data set in which there is a large scatter about the mean.

The coefficient of variation (COV) is the ratio of the standard deviation to the mean, $COV = s/\bar{x}$. It is dimensionless, which makes it a particularly useful measure of uncertainty. A small uncertainty would typically be represented by COV = 0.05, while considerable uncertainty would be indicated by COV = 0.25.
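As a minimal sketch of these descriptive statistics, the NumPy snippet below computes the mean, sample variance, standard deviation and COV; the friction-angle values are hypothetical and serve only to illustrate the calculation.

```python
import numpy as np

# Illustrative friction-angle measurements (degrees); values are hypothetical.
phi = np.array([33.0, 35.5, 31.0, 36.0, 34.5, 32.5, 35.0, 33.5])

mean = phi.mean()          # arithmetic mean, x-bar
var = phi.var(ddof=1)      # sample variance, s^2 (n - 1 denominator)
std = phi.std(ddof=1)      # sample standard deviation, s
cov = std / mean           # coefficient of variation (dimensionless)

print(f"mean = {mean:.2f} deg, s = {std:.2f} deg, COV = {cov:.3f}")
```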

 

If a pair of random variables (X and Y, for example) depend on each other, the variables X and Y are said to be correlated, and their covariance is defined by

$$\mathrm{Cov}(X, Y) = E\left[(X - \mu_X)(Y - \mu_Y)\right]$$

This covariance is very similar to the variance. If the covariance is normalized by the standard deviations of the X and Y variables, the correlation coefficient, $\rho_{XY}$, may be written as

$$\rho_{XY} = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \, \sigma_Y}$$

The correlation coefficient ranges in value from −1 to +1. The case $\rho_{XY} = +1$ indicates a perfect positive linear correlation between the variables X and Y. The case $\rho_{XY} = -1$ indicates a perfect negative, or inverse, correlation, where high values of Y occur for low values of X. If the two random variables are linearly independent, then $\rho_{XY} = 0$.
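A minimal sketch of the covariance and correlation calculations with NumPy; the paired values below are hypothetical.

```python
import numpy as np

# Hypothetical paired measurements: joint roughness (JRC) vs friction angle (deg).
x = np.array([4.0, 6.5, 8.0, 10.5, 12.0, 14.5])
y = np.array([28.0, 30.5, 31.0, 33.5, 35.0, 36.5])

cov_xy = np.cov(x, y, ddof=1)[0, 1]   # sample covariance of X and Y
rho = np.corrcoef(x, y)[0, 1]         # correlation coefficient, -1 <= rho <= +1

print(f"Cov(X, Y) = {cov_xy:.2f}, rho = {rho:.3f}")
```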

 

This analytical approach uses information about the probability distributions of the slope's characteristics to determine the probability distribution of the output of the analysis. Knowledge of the probability distribution of the output allows the engineer to assess the probability of slope failure.

 

Probability density function

The probability density function (PDF) of a continuous random variable is a function that describes the relative likelihood of the random variable taking a value at a given point. The integral of the PDF over the entire space is equal to one. The PDF defines the distribution of the random variable and can take many shapes, but the most common ones used in geotechnical applications are the normal and the lognormal, although the triangular distribution is also gaining popularity.
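In symbols, for a random variable $X$ with PDF $f(x)$,

$$\int_{-\infty}^{\infty} f(x)\,dx = 1, \qquad P(a \le X \le b) = \int_a^b f(x)\,dx$$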

The normal distribution extends to infinity in both directions, but this is often not a realistic representation of geotechnical data, for which likely upper and lower bounds of a parameter can be defined. For these conditions it is appropriate to use the beta distribution, which has finite maximum and minimum points and can be uniform, skewed to the left or right, U-shaped or J-shaped (Harr, 1977). However, where there is little information on the distribution of the data, a simple triangular distribution can be used, defined by three values: the most likely value and the minimum and maximum values; a sampling sketch is given after the figures below. Figures 3 and 4 show lognormal and normal probability density functions for given means and standard deviations.


Figure 3: Log-normal probability density functions

Figure 4: Normal probability density function
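Where only minimum, most likely and maximum estimates are available, the triangular distribution described above can be sampled directly; a minimal sketch with hypothetical cohesion bounds:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cohesion estimates (kPa): minimum, most likely, maximum.
c_min, c_mode, c_max = 5.0, 12.0, 20.0
samples = rng.triangular(c_min, c_mode, c_max, size=10_000)

print(f"sampled mean = {samples.mean():.2f} kPa")
```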

 

Normal distribution: The normal or Gaussian distribution is the most common type of probability distribution function, and it is generally used for probabilistic studies in geotechnical engineering. The problem in defining a normal distribution is to estimate the values of the governing parameters, the true mean (μ) and the true standard deviation (σ). The PDF for a normal distribution with mean μ and standard deviation σ is defined by

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right]$$

The distribution is symmetric about the mean, and the random variable can take on values between $-\infty$ and $+\infty$.
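A minimal sketch of evaluating and sampling the normal PDF with SciPy; the mean and standard deviation below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical friction angle: mean 35 deg, standard deviation 2 deg.
mu, sigma = 35.0, 2.0

dist = norm(loc=mu, scale=sigma)
print(f"f(35) = {dist.pdf(35.0):.4f}")         # PDF evaluated at the mean
print(f"P(phi < 31) = {dist.cdf(31.0):.4f}")   # probability of a low value

rng = np.random.default_rng(1)
samples = rng.normal(mu, sigma, size=5)        # random draws from N(mu, sigma^2)
print(samples)
```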

• Beta distributions are very versatile distributions which can be used to replace almost any of the common distributions and which do not suffer from the extreme value problems discussed above because the domain (range) is bounded by specified values.

• Exponential distributions are sometimes used to define events such as the occurrence of earthquakes or rockbursts or quantities such as the length of joints in a rock mass.

• Lognormal distributions are useful when considering processes such as the crushing of aggregates in which the final particle size results from a number of collisions of particles of many sizes moving in different directions with different velocities.

• Weibull distributions are used to represent the lifetime of devices in reliability studies or the outcome of tests such as point load tests on rock core in which a few very high values may occur.

 

Sampling techniques: Consider a problem in which the factor of safety depends upon a number of random variables such as the cohesive strength c, the angle of friction φ and the acceleration α due to earthquakes or large blasts.


The uncertainties inherent in any project should be recognized. Probabilistic analysis takes the inherent variability and uncertainty of the analysis parameters into consideration and produces a direct estimate of the distribution of either the factor of safety or the critical height associated with a design or analysis situation.

There are several probabilistic techniques that can be used to evaluate geotechnical situations. For geotechnical analysis in particular, researchers have conducted probabilistic evaluations using Monte Carlo simulation and the point estimate method.

 

Monte Carlo method

The Monte Carlo method uses pseudo-random numbers to sample from probability distributions. Large numbers of samples are generated and used to calculate the factor of safety. Monte Carlo techniques can be applied to a wide variety of problems involving random behaviour, and a number of algorithms are available for generating random Monte Carlo samples from different types of input probability distribution.

 

The input parameters for a Monte Carlo simulation fall into two categories, the deterministic parameters used for a conventional analysis and the parameters which define the distribution of the input variables. For slope stability analysis the deterministic parameters are:

·         Critical Height (H) or Factor of Safety (FS)

·         Slope Angle from the Horizontal Plane (β)

·         Angle of Friction (φ)

·         Cohesion (c)

·         Unit Weight (γ)

·         Saturated Unit Weight (γsat)

·         Submerged Unit Weight (γ′)

For each of these parameters, Monte Carlo simulation requires definition of the descriptive statistics that define the parameter's distribution. Depending on the data, the descriptive statistics may include:

·         Maximum

·         Mean

·         Minimum

·         Standard Deviation

·         Coefficient of Variation

In the Monte Carlo simulation, the values for each of the input parameters in the analytical equations are determined by sampling from their respective distributions. The required input values are determined during the simulation using Latin hypercube sampling (LHS).

LHS stratifies the input probability distributions during the sampling process, so the full range of each distribution is covered more evenly than with purely random sampling. The Monte Carlo concept originally designated the use of random sampling procedures for treating deterministic mathematical problems; the method gained significance with the development of computers to automate the laborious calculations.
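A minimal sketch of LHS with SciPy's qmc module, mapping stratified unit-interval samples onto assumed normal distributions for cohesion and friction angle; all distribution parameters below are hypothetical.

```python
import numpy as np
from scipy.stats import norm, qmc

# Two random variables: cohesion c (kPa) and friction angle phi (deg).
sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n=1000)                    # stratified samples on [0, 1)^2

c = norm.ppf(u[:, 0], loc=12.0, scale=3.0)    # map to N(12, 3^2) via inverse CDF
phi = norm.ppf(u[:, 1], loc=35.0, scale=2.0)  # map to N(35, 2^2)

print(f"c: mean {c.mean():.2f} kPa, phi: mean {phi.mean():.2f} deg")
```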

The first step of a Monte Carlo simulation is to identify a deterministic model in which multiple input variables are used to estimate an outcome. Step two requires that all variables or parameters be identified and that a probability distribution (e.g. normal, beta, lognormal) be established for each independent variable. A random trial process is then initiated to establish a probability distribution for the situation being modeled: during each pass, a random value is selected from the distribution of each parameter and entered into the calculation, and numerous solutions are obtained by making multiple passes through the program. The appropriate number of passes for an analysis is a function of the number of input parameters, the complexity of the modeled situation and the desired precision of the output. The final result of a Monte Carlo simulation is a probability distribution of the output parameter.

 

The component random variables for each calculation are drawn as random values from the selected PDF of each variable. Although these PDFs can take on any shape, the normal, lognormal, beta and uniform distributions are among the most favored for analysis. The Monte Carlo simulation follows a four-step process:

1.      For each component random variable being considered, select a random value that conforms to the assigned distribution.

2.      Calculate the value of the FOS using the adopted performance function and the values obtained in step 1.

3.      Repeat steps 1 and 2 many times, storing the FOS result from each calculation.

4.      Use the calculated FOS values from the Monte Carlo simulation to estimate (a) the probability of failure, (b) the sample mean and variance, and (c) the FOS PDF from the histogram.

It should be noted that because each Monte Carlo simulation uses a different sequence of random values, the resulting probabilities, means, variances and histograms may differ slightly from run to run. As the number of trials increases, this error becomes smaller.
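The four-step process can be sketched in a few lines of Python. The sketch below assumes a simple dry infinite-slope performance function, $FS = c/(\gamma H \sin\beta \cos\beta) + \tan\phi/\tan\beta$, with hypothetical normally distributed cohesion and friction angle; it illustrates the procedure only and is not a design calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Deterministic parameters (hypothetical): slope height, angle, unit weight.
H, beta_deg, gamma = 10.0, 35.0, 20.0               # m, degrees, kN/m^3
beta = np.radians(beta_deg)

# Step 1: sample the random variables from their assigned distributions.
c = rng.normal(12.0, 3.0, n_trials)                 # cohesion, kPa (hypothetical stats)
phi = np.radians(rng.normal(38.0, 2.5, n_trials))   # friction angle (hypothetical stats)

# Steps 2 and 3: compute FS for every trial (dry infinite-slope model).
fs = c / (gamma * H * np.sin(beta) * np.cos(beta)) + np.tan(phi) / np.tan(beta)

# Step 4: estimate probability of failure, mean, variance and the FS histogram.
p_failure = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.3f}, s^2 = {fs.var(ddof=1):.4f}, P(FS < 1) = {p_failure:.4f}")
hist, edges = np.histogram(fs, bins=50)             # empirical PDF of FS
```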

 

Monte Carlo simulation produces a distribution of the factor of safety rather than a single value. The results of a traditional analysis, using a single value for each input parameter, can be compared with the distribution from the Monte Carlo simulation to determine the relative level of conservatism associated with the conventional design.

 

Point Estimate Method

The point estimate method is an approximate numerical integration approach to probability modeling. The generalized point estimate method can be used for rapid calculation of the mean and standard deviation of a quantity, such as a factor of safety, that depends on the random behaviour of input variables. For each random variable, two point estimates are made at one standard deviation on either side of the mean, i.e. at (μ ± σ). The factor of safety is calculated for every possible combination of point estimates, producing $2^n$ solutions, where n is the number of random variables involved. The mean and the standard deviation of the factor of safety are then calculated from these $2^n$ solutions.
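A minimal sketch of the two-point estimate procedure for uncorrelated variables (equal weights on the $2^n$ combinations), reusing the hypothetical infinite-slope performance function from the Monte Carlo sketch above:

```python
import numpy as np
from itertools import product

H, beta, gamma = 10.0, np.radians(35.0), 20.0   # deterministic inputs (hypothetical)

def factor_of_safety(c, phi_deg):
    """Dry infinite-slope performance function (illustrative)."""
    phi = np.radians(phi_deg)
    return c / (gamma * H * np.sin(beta) * np.cos(beta)) + np.tan(phi) / np.tan(beta)

# Random variables with hypothetical means and standard deviations.
mu_c, sd_c = 12.0, 3.0     # cohesion c (kPa)
mu_p, sd_p = 38.0, 2.5     # friction angle phi (degrees)

# Evaluate FS at (mu +/- sigma) for every combination: 2**n = 4 solutions.
fs_values = np.array([factor_of_safety(mu_c + s1 * sd_c, mu_p + s2 * sd_p)
                      for s1, s2 in product((-1.0, 1.0), repeat=2)])

# For uncorrelated variables each point carries equal weight 1 / 2**n.
fs_mean = fs_values.mean()
fs_std = fs_values.std(ddof=0)
print(f"mean FS = {fs_mean:.3f}, sigma_FS = {fs_std:.3f}")
```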

There is sometimes reluctance to use probabilistic design when the design data are limited and may not be representative of the population. In these circumstances, it is possible to use subjective assessment techniques that provide reasonably reliable probability values from small samples (Roberds, 1990). The use of probability analysis in design also requires generally accepted ranges of probability of failure for different types of structure, as there are for factors of safety. The evaluation of the PEM results in a single number for the sample data, and this single value is representative of the sampled population. Thornton (1994) used the PEM to evaluate the probability of slope failures; input parameters may be assumed to be normally distributed, and a model can be developed to estimate the factor of safety. Thornton recognized that application of this methodology requires criteria to define the acceptable level of risk.