Pit limits optimization using a stochastic process
CIM Bulletin, Vol. 1, No. 6, 2006
S.E. Jalali, M. Ataee-pour, K. Shahriar
In recent years, a large number of algorithms, both rigorous and heuristic, have been developed for the optimization of pit limits. They may or may not guarantee the true optimum solution. However, most of these algorithms follow a deterministic approach, which predicts a single outcome from a given set of circumstances. In this paper, a new algorithm is introduced using a stochastic approach, which predicts a set of possible outcomes weighted by their likelihoods or probabilities. The proposed algorithm is based on the principles of the Markov chain process and benefits from a rigorous mathematical foundation in probability theory. From this point of view, it opens a new horizon for optimization of the ultimate pit limits and is clearly distinguished from alternative algorithms.
In order to define the optimum mining limits using the Markov chain process, a number of stages are required. Firstly, a conventional 2D economic model is constructed for the mining area, called the primary model. A virtual block is then added on top of each column of the primary model to build the intermediate model. Next, geometrical constraints inherent in the open pit mining method are imposed on the intermediate model to construct the final model, on which the algorithm is implemented. Because the application of the suggested algorithm is based on forming and applying the probabilistic matrices, a corresponding matrix is presented for each model.
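The three model-building stages can be sketched as follows, assuming a small hypothetical 2D economic block model. The paper's exact geometric-constraint handling is not reproduced here; the cumulative column values used for the final step are only an illustrative stand-in:

```python
import numpy as np

# Hypothetical primary model: a 2D economic block model.
# Rows are depth levels, columns are vertical sections;
# positive values are ore blocks, negative values are waste costs.
primary = np.array([[-1, -1, -1, -1],
                    [-2,  4,  5, -2],
                    [-3, -3,  6, -3]], dtype=float)

# Intermediate model: a virtual zero-value block added on top of each
# column, representing the option of not mining that column at all.
intermediate = np.vstack([np.zeros((1, primary.shape[1])), primary])

# Final model (simplified stand-in): cumulative value of mining each
# column down to a given depth -- the kind of quantity the transition
# probabilities can later be based on. The paper additionally imposes
# open pit geometric constraints at this stage.
final = np.cumsum(intermediate, axis=0)
```

Each row of `final` then corresponds to one possible pit depth per column, the state space on which the probabilistic matrices operate.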
Each block of the final model, referred to as a possible pit depth, may be considered a random variable. An old-fashioned but very useful and highly intuitive definition describes a random variable as a variable that takes on its value by chance. A stochastic process is a family of such random variables; its state space is the range of possible values those variables can take.
Assuming that the current possible pit depth is located at a certain block, the probability for any block of the model to be taken as the pit depth at the next state is determined through a probability distribution function. Given the current state of the system, this function is used to evaluate all the possible pit depths in the next column. A relative weight (probability) is assigned to each of the next possible pit depths, normally proportional to their corresponding economic values. The set of probabilities may be arranged in a square matrix, referred to as the transition matrix. Each row of the transition matrix indicates the probability of transition of the system from a certain state to each member of the state space. By calculating the stationary distribution of the transition matrix, the long-run probability of the system being in each state is obtained. In each column, the possible pit depth with the highest probability is selected as the optimum pit depth of that column. Because the pit slope constraint is reflected in the definition of the probability distribution function, the set of optimum pit depths selected across the columns of the model automatically satisfies the maximum allowable pit slope. A numerical example was used to validate the suggested algorithm. The sample model was optimized both by the 2D dynamic programming (DP) algorithm of Lerchs and Grossmann and by the proposed algorithm. The results were in good agreement with each other and showed the validity of the new algorithm.
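The core computation, finding the stationary distribution of the transition matrix and selecting the highest-probability state, can be sketched as below. The matrix values are hypothetical and the power-iteration routine is a generic textbook method, not the paper's specific implementation:

```python
import numpy as np

def stationary_distribution(P, tol=1e-10, max_iter=10_000):
    """Stationary distribution of a row-stochastic transition matrix P,
    found by power iteration: at the fixed point, pi = pi @ P."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)        # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            break
        pi = nxt
    return pi

# Hypothetical 3-state transition matrix: row i gives the probabilities
# of moving from possible pit depth i to each depth at the next state.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

pi = stationary_distribution(P)
best_state = int(np.argmax(pi))     # depth with the highest long-run probability
```

For a column of the model, `best_state` plays the role of the optimum pit depth selected from the stationary probabilities.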
The proposed algorithm has been described for 2D problems. Extension of the algorithm to 3D cases is more complicated and entails very large transition matrices. Nevertheless, using powerful computers, it is possible to solve the problem. Moreover, it should be noted that the nature of the proposed algorithm is such that it can provide a true 3D analysis. That is, unlike inherently 2D algorithms, it requires no smoothing when applied to 3D models; the optimization method is the same for 2D and 3D cases. Application of the Markov chain process to the problem, as described in this paper, strongly depends on the probability distribution function defined for the transition of the system from any state to the next. Due to the vital influence of this function, a comprehensive study should be conducted to define the most appropriate one.
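As one illustration of how such a function might couple economic values with the slope constraint, the sketch below weights slope-feasible next depths by a softmax of their economic values. Both the `max_step` slope proxy and the softmax weighting are assumptions for illustration, not the paper's definition:

```python
import numpy as np

def transition_probs(values, current_depth, max_step=1):
    """One hypothetical probability distribution function: the next pit
    depth must lie within `max_step` rows of the current depth (a simple
    stand-in for the pit slope constraint), and feasible depths are
    weighted by a softmax of their economic values."""
    n = len(values)
    feasible = np.array([abs(d - current_depth) <= max_step for d in range(n)])
    # Subtract the maximum before exponentiating for numerical stability;
    # infeasible depths receive zero weight.
    weights = np.where(feasible, np.exp(values - values.max()), 0.0)
    return weights / weights.sum()

# Cumulative economic values for four possible pit depths in one column.
p = transition_probs(np.array([0.0, 3.0, 5.0, -2.0]), current_depth=1)
```

Stacking one such row per state yields the transition matrix; changing the weighting function directly changes the stationary distribution, which is why the choice of this function is so influential.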