Assessing Multiple Resolution Scales in History Matching With Metamodels
- Adolfo Antonio Rodriguez (U. of Texas Austin) | Hector Klie (U. of Texas Austin) | Mary Fanett Wheeler (U. of Texas Austin) | Rafael Banchs (Polytechnic University of Catalonia)
- Society of Petroleum Engineers
- SPE Reservoir Simulation Symposium, 26-28 February 2007, Houston, Texas, U.S.A.
- Conference Paper
- Copyright 2007, Society of Petroleum Engineers
In this paper we present a new multiscale approach to history matching assisted by a neural network metamodel. The method starts with the construction of a fine-scale a priori model that includes geological and geostatistical information. We then apply a singular value decomposition (SVD) to obtain a parametric representation of the permeability field, such that a fixed set of eigenimages is determined and the parameters to be inverted act as weights in the expansion. This procedure not only reduces the number of parameters significantly, but the weights in the SVD expansion also define a hierarchy that naturally separates the different resolution scales in the system. We show that the multiscale procedure alone significantly reduces the CPU time required to accomplish the parameter estimation. Furthermore, the reduced parameter space facilitates the training of the neural network engine.
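The SVD parameterization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ensemble, grid size, and function names are assumed for the example, and the "fields" are random stand-ins for geostatistical realizations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 50 realizations of a log-permeability field on a
# 20 x 20 grid, each flattened to a 400-component vector (sizes illustrative).
n_real, nx, ny = 50, 20, 20
ensemble = rng.standard_normal((n_real, nx * ny))

# Center the ensemble and take the SVD; the right singular vectors play the
# role of the fixed eigenimages, ordered by decreasing singular value.
mean_field = ensemble.mean(axis=0)
_, s, vt = np.linalg.svd(ensemble - mean_field, full_matrices=False)
eigenimages = vt  # row k is the k-th eigenimage

# A candidate field is parameterized by a small weight vector: leading
# weights control coarse (low-resolution) structure, trailing ones detail.
def field_from_weights(weights):
    k = len(weights)
    return mean_field + weights @ eigenimages[:k]

# Example: a 10-parameter representation of a 400-cell field.
w = rng.standard_normal(10)
field = field_from_weights(w).reshape(nx, ny)
```

Because the singular values are nonincreasing, truncating the expansion at a small number of weights gives the reduced, hierarchically ordered parameter space that the inversion operates on.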
Parameter estimation in reservoir engineering is a challenging task due to the heterogeneous nature of the systems combined with the complex physical processes involved. Data are scarce, insufficient and subject to different sources of noise and uncertainty. These factors make it difficult to reliably cope with the estimation of a large number of parameters. Additionally, flow processes in
porous media are highly nonlinear. Reservoirs are generally large (3-D extending to several miles) and contain heterogeneities that span over many different scales. Adequate reservoir simulation implies the generation of models containing hundreds of thousands to millions of gridblocks on which the flow equations must be numerically solved for long production periods (e.g., 10-30 years).
Consequently, a high degree of computer power is required to carry out simulations within a reasonable time frame (from minutes to a few hours).
The fact that very large and detailed models are required to perform accurate reservoir predictions, combined with very limited data availability, makes the parameter estimation problem ill-conditioned. This calls for some form of regularization in order to obtain reliable solutions.
Different approaches have been suggested to cope with the ill-posed nature of the history matching problem. Kitanidis1 and, later, Oliver2 proposed the randomized maximum likelihood (RML) method, which seeks an a posteriori probability distribution by performing different history matches starting from unconditional realizations. The idea is to generate predictions based on a set of equally probable reservoir models in order to assess uncertainty. The problem with this method is that models with a large number of parameters require a large number of realizations, making it computationally intensive.
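The RML recipe — perturb the prior realization and the observed data, then minimize the resulting objective, once per posterior sample — can be illustrated on a toy linear-Gaussian problem where the minimizer has a closed form. Everything here (the forward model, variances, sample count) is assumed for the sketch and is not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup (illustrative): prior m ~ N(0, 1), forward model g(m) = 2 m,
# observed data d = g(m_true) + noise with noise variance 0.25.
m_prior_mean, m_prior_var = 0.0, 1.0
noise_var = 0.25
d_obs = 2.0 * 1.2 + rng.normal(0.0, np.sqrt(noise_var))

def rml_sample():
    # One RML draw: sample an unconditional realization, perturb the data,
    # then minimize the objective (closed form for this linear toy problem):
    #   J(m) = (2 m - d_pert)^2 / noise_var + (m - m_uncond)^2 / prior_var
    m_uncond = rng.normal(m_prior_mean, np.sqrt(m_prior_var))
    d_pert = d_obs + rng.normal(0.0, np.sqrt(noise_var))
    return (2.0 * d_pert / noise_var + m_uncond / m_prior_var) / \
           (4.0 / noise_var + 1.0 / m_prior_var)

# Each sample is one full history match; the ensemble approximates the
# posterior, which is why RML scales poorly with model size.
ensemble = np.array([rml_sample() for _ in range(500)])
```

In a reservoir setting each call to `rml_sample` would be a complete gradient-based history match against the simulator, which is exactly the cost the paragraph above points out.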
Multiscale methods have been introduced as a promising alternative3. The idea behind these methods lies in the generation of a coarsening sequence of reservoir models. One then proceeds to history-match these models starting from the coarsest to the finest. The solution at each level is then downscaled to the following level and used to generate a new, higher resolution model. The procedure is repeated until a model of the desired resolution is obtained or it is impossible to improve the solution. This approach attempts
to reduce the computational burden by performing most of the effort at coarser scales, while reducing the ill-posedness due to the reduction in the number of parameters that arise from the coarsening process.
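The coarse-to-fine loop described above can be sketched with a toy objective. This is a schematic under stated assumptions: the "misfit" is a stand-in for the history-match objective (a real workflow would run a flow simulator), the random-perturbation search stands in for a proper optimizer, and the grid sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical target: a 16 x 16 "true" field defining the misfit.
true_field = rng.standard_normal((16, 16))

def misfit(field_16):
    return float(np.sum((field_16 - true_field) ** 2))

def downscale(field):
    # Copy each coarse cell onto a 2 x 2 block of the next finer grid.
    return np.kron(field, np.ones((2, 2)))

# Coarse-to-fine loop: match at 2x2, 4x4, 8x8, then 16x16, reusing each
# level's solution as the starting point at the next resolution.
field = np.zeros((2, 2))
for level in range(4):
    n = field.shape[0]
    blow_up = np.ones((16 // n, 16 // n))  # map coarse field to fine grid
    for _ in range(200):  # toy "matching": keep improving perturbations
        trial = field + 0.1 * rng.standard_normal((n, n))
        if misfit(np.kron(trial, blow_up)) < misfit(np.kron(field, blow_up)):
            field = trial
    if n < 16:
        field = downscale(field)
```

Most of the 800 objective evaluations happen on grids with 4 to 64 unknowns rather than 256, which is the computational saving the multiscale strategy targets.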
An alternative approach to accelerating the history matching process is based on the construction of proxy models aimed at minimizing the number of simulation runs. One possibility is to use an analytical model to fit the simulation results from the sampled parameter space. A
promising option that has gained popularity in the oil industry is the use of learning neural engines4, 5. The main drawback of proxy models is that adequate training may be difficult to accomplish when a large number of unknown parameters exist.
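The proxy-model idea can be sketched as follows. For brevity this uses a quadratic response surface fit by least squares as a stand-in for the neural-network metamodel, and the "expensive" function is a placeholder for the simulator; all names and dimensions are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the simulator: maps a 3-weight parameterization to a misfit.
# (Illustrative only; a real run would evaluate the reservoir simulator.)
def expensive_misfit(w):
    return float(np.sum((np.asarray(w) - np.array([0.5, -1.0, 2.0])) ** 2))

# Sample a small design and record the expensive responses once.
samples = rng.uniform(-3, 3, size=(100, 3))
targets = np.array([expensive_misfit(w) for w in samples])

def features(w):
    # Constant, linear, and quadratic cross terms of the parameters.
    w = np.asarray(w)
    quad = np.outer(w, w)[np.triu_indices(3)]
    return np.concatenate(([1.0], w, quad))

# Fit the proxy by least squares; afterwards it can be evaluated thousands
# of times at negligible cost compared with the simulator.
X = np.array([features(w) for w in samples])
coef, *_ = np.linalg.lstsq(X, targets, rcond=None)

def proxy_misfit(w):
    return float(features(w) @ coef)
```

The drawback noted above appears directly in this sketch: the number of fitting coefficients (and hence the number of training runs needed) grows quickly with the number of unknown parameters.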