Evaluation of Monte Carlo Methods for Assessing Uncertainty
- Ning Liu (U. of Oklahoma) | Dean S. Oliver (U. of Oklahoma)
- SPE Journal, Society of Petroleum Engineers
- June 2003
- Journal Paper, pp. 188-195
- Copyright 2003, Society of Petroleum Engineers
- Disciplines: 5.6.4 Drillstem/Well Testing, 4.1.5 Processing Equipment, 5.6.5 Tracers, 5.1 Reservoir Characterisation, 5.1.5 Geologic Modeling, 5.1.8 Seismic Modelling, 5.5.8 History Matching, 5.6.9 Production Forecasting, 4.1.2 Separation and Treating
Uncertainty in future reservoir performance is usually evaluated from the simulated performance of a small number of reservoir models. Unfortunately, most of the methods for generating reservoir models conditional to production data are known to create a distribution of realizations that is only approximately correct. In this paper, we evaluate the ability of various sampling methods to correctly assess the uncertainty in reservoir predictions by comparing the distribution of realizations with a standard distribution from a Markov chain Monte Carlo method. The ensembles of realizations from five sampling algorithms for a synthetic, 1D, single-phase flow problem were compared to establish the best algorithm under controlled conditions. Five thousand realizations were generated from each of the approximate sampling algorithms, and the resulting distributions were compared to the distributions from the exact methods. In general, the method of randomized maximum likelihood performed better than the other approximate methods.
The only practical methods for quantifying uncertainty in reservoir performance require the generation of multiple random reservoir models conditional to available data. By simulating the future production from each realization, an empirical distribution of production characteristics is obtained. The validity of this method for quantifying uncertainty depends strongly on the quality of the distribution of reservoir models generated. Methods for sampling from the a posteriori probability density function (pdf) of reservoir flow models conditioned to production data have been widely reported in the literature. Rigorous methods of sampling from the a posteriori distribution for reservoir properties have been applied by Oliver et al.,1 Bonet-Cunha et al.,2 and Omre et al.3 Most other attempts to quantify uncertainty in reservoir performance are based on approximate sampling algorithms. The purpose of this study is to evaluate the distribution of samples from several of these approximate methods. The same assumptions and models were used for all methods in this study, because differences in the model assumptions have made it difficult to draw quantitative conclusions on the reliability of the methods in previous studies.4-6
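The Monte Carlo workflow described above can be sketched in a few lines. This is a minimal, purely illustrative example: `forward_simulate` is a hypothetical stand-in for a reservoir flow simulator (not the paper's simulator), and the realizations here are drawn unconditionally from a Gaussian prior rather than conditioned to production data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a reservoir flow simulator: maps a
# property field (log-permeability values per gridblock) to a single
# predicted production characteristic. Purely illustrative.
def forward_simulate(lnk):
    return np.exp(lnk).mean()

# Draw many random realizations of the property field (here from an
# unconditional Gaussian prior), simulate each one, and collect the
# predictions into an empirical distribution.
n_real, n_blocks = 1000, 20
realizations = rng.normal(loc=4.5, scale=1.0, size=(n_real, n_blocks))
predictions = np.array([forward_simulate(m) for m in realizations])

# Empirical percentiles of the predictions summarize the uncertainty
# in the forecast.
p10, p50, p90 = np.percentile(predictions, [10, 50, 90])
```

The quality of the resulting percentiles depends entirely on whether the realizations are drawn from the correct conditional distribution, which is the question the paper investigates.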
The methods evaluated here belong to two types: those that are known to sample correctly, and those that are only approximately correct. In the first category, we consider the rejection algorithm (REJ) and a Markov chain Monte Carlo algorithm (MCMC). The three approximate methods we consider are linearization about the maximum a posteriori estimate (LMAP), randomized maximum likelihood (RML), and pilot point methods (PP) with six and nine pilot point locations. Each of these methods was used to generate a large number of reservoir realizations from a single-phase, 1D synthetic problem, where the observed data were in the form of dynamic pressure, and the unknown reservoir characteristics were the porosity and permeability of each gridblock.
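As a concrete illustration of the first category, the rejection algorithm draws candidates from the prior and accepts each with probability proportional to its likelihood. The sketch below uses a single model parameter and a hypothetical nonlinear forward model g(m) = m², chosen only so that the posterior is multimodal; the observation value and noise level are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

d_obs, sigma_d = 1.2, 0.1   # hypothetical observation and noise std

def likelihood(m):
    # Gaussian data mismatch for the stand-in forward model g(m) = m**2.
    # Maximum value is 1, attained where g(m) matches d_obs exactly.
    return np.exp(-0.5 * ((m**2 - d_obs) / sigma_d) ** 2)

# Rejection algorithm: propose from the prior N(0, 1), accept with
# probability L(m) / L_max (here L_max = 1). Accepted samples follow
# the posterior exactly, at the cost of many rejected proposals.
samples = []
while len(samples) < 500:
    m = rng.normal(0.0, 1.0)
    if rng.uniform() < likelihood(m):
        samples.append(m)
samples = np.array(samples)
```

Because g(m) = m² is not one-to-one, the accepted samples cluster around two posterior modes near m = ±√1.2, a toy version of the multiple local maxima in the likelihood function that the test problem was designed to exhibit.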
Three comparative investigations of the validity of sampling algorithms for quantifying uncertainty of reservoir performance conditional to flow data have been reported in the literature. Zimmerman et al.4 compared several geostatistical inverse techniques to determine which is better suited for making probabilistic forecasts of solute transport in an aquifer. The comparison criteria were the predicted travel times and the travel paths taken by conservative tracers if accidentally released from storage at the Waste Isolation Pilot Plant (WIPP) site in New Mexico. The main conclusion of this study concerned the importance of the appropriate selection of the variogram and "the time and experience devoted by the user of the method."
In a large investigation of uncertainty quantification supported by the European Commission, Floris et al.5 applied several methods of evaluating uncertainty to a synthetic study based on a real field case. Participants received reservoir parameters only at well locations, "historic" production data (with noise), and a general geologic description of the reservoir. Nine different techniques for conditioning of the reservoir models to the production data were evaluated, and results (production forecast for a certain period) were compared in the form of a cumulative distribution function. Variation in the parameterization of the problem was identified as the main discriminating factor. The differences in the quality of the history matching and the production forecast caused by the distinct approaches to the problem also resulted in major differences in the resulting cumulative distribution functions.
Barker et al.6 used the same synthetic test problem as Floris et al.,5 but focused their investigation of sampling on three methods: history-matching of multiple realizations using a pilot-point approach, rejection sampling, and Markov chain Monte Carlo. They obtained very different distributions of realizations from rejection and Markov chain Monte Carlo methods. The difference was attributed to variations in the prior information used by the participants, but this made evaluation of the results difficult.
Because our objective is to compare the distribution of the realizations of reservoir predictions generated by approximate sampling methods with the distribution generated by methods that are known to assess uncertainty correctly, it was important to choose the test problem carefully. We know that some of the approximate methods sample correctly when the relationships between the conditioning data and the model parameters are linear, so we designed our problem to be highly nonlinear. We also needed to be able to generate large numbers of realizations so that the resulting distributions would not depend significantly on the random seed. By choosing a single-phase transient flow problem with highly accurate pressure measurements, fairly large uncertainty in the property field, and a short correlation length, we were able to obtain a problem with multiple local maxima in the likelihood function, yet for which a flow simulation required only 0.02 seconds.
Our test problem is a 1D heterogeneous reservoir whose permeability and porosity fields are shown in Fig. 1. The reservoir is discretized into 20 gridblocks, each of which is 50 ft in length. Both the log-permeability (ln k) and porosity fields were assumed to be multivariate Gaussian with exponential covariance and a range of 175 ft. The prior means for porosity and log-permeability are 0.25 and 4.5, respectively. The standard deviation of the porosity field is 0.05, and the standard deviation of the log-permeability field is 1.0. The correlation coefficient between porosity and log-permeability is 0.5. The flow is single-phase, with an oil viscosity of 2 cp and a total compressibility of 4×10⁻⁶ psi⁻¹. The initial reservoir pressure is 3,500 psi.
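A prior of this form can be sampled directly by a Cholesky factorization of the joint covariance. The sketch below builds the 20-gridblock grid and the stated statistics (exponential covariance with a 175-ft range, means of 0.25 and 4.5, standard deviations of 0.05 and 1.0, cross-correlation 0.5) and draws one unconditional realization; the exact parameterization of the cross-covariance is our assumption, since the paper does not spell it out here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Grid: 20 gridblocks, 50 ft each; distances between block centers.
n, dx, a = 20, 50.0, 175.0
x = (np.arange(n) + 0.5) * dx
C = np.exp(-np.abs(x[:, None] - x[None, :]) / a)  # exponential correlation

# Joint covariance of (porosity, ln k), assuming the same spatial
# correlation for both fields and a cross-correlation coefficient rho.
sig_phi, sig_lnk, rho = 0.05, 1.0, 0.5
cov = np.block([
    [sig_phi**2 * C,              rho * sig_phi * sig_lnk * C],
    [rho * sig_phi * sig_lnk * C, sig_lnk**2 * C],
])
mean = np.concatenate([np.full(n, 0.25), np.full(n, 4.5)])

# One unconditional realization via Cholesky factorization
# (small jitter on the diagonal for numerical stability).
L = np.linalg.cholesky(cov + 1e-12 * np.eye(2 * n))
z = mean + L @ rng.standard_normal(2 * n)
phi, lnk = z[:n], z[n:]
```

Realizations such as `phi` and `lnk` serve as the unconditional starting point; the sampling methods compared in the paper then condition fields like these to the observed pressure data.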