Scalability of the Deterministic and Bayesian Approaches to Production-Data Integration Into Reservoir Models
- Leonardo Vega (Texas A&M U.) | Danny Rojas (Texas A&M U.) | Akhil Datta-Gupta (Texas A&M U.)
- SPE Journal, Society of Petroleum Engineers
- Publication Date: September 2004
- Document Type: Journal Paper
- Pages: 330-338
- Copyright 2004, Society of Petroleum Engineers
- Keywords: 5.6.3 Deterministic Methods; 5.1.5 Geologic Modeling; 7.6.2 Data Integration; 4.3.4 Scale; 5.4 Enhanced Recovery; 5.4.2 Gas Injection Methods; 5.5.7 Streamline Simulation; 5.1 Reservoir Characterisation; 5.1.8 Seismic Modelling; 4.1.2 Separation and Treating; 5.5 Reservoir Simulation; 5.6.5 Tracers; 5.5.8 History Matching; 5.6.1 Open hole/cased hole log analysis; 5.6.4 Drillstem/Well Testing; 5.8.7 Carbonate Reservoir
Current techniques for production-data integration into reservoir models can be broadly grouped into two categories: deterministic and Bayesian. The deterministic approach relies on imposing parameter-smoothness constraints using spatial derivatives to ensure large-scale changes consistent with the low resolution of the production data. The Bayesian approach is based on prior estimates of model statistics such as parameter covariance and data errors and attempts to generate posterior models consistent with the static and dynamic data. Both approaches have been successful for field-scale applications, although the computational costs associated with the two methods can vary widely. To date, no systematic study has been carried out to examine the scaling properties and relative merits of the methods.
We systematically investigate the scaling of the computational costs for the deterministic and the Bayesian approaches for realistic field-scale applications. Our results indicate that the deterministic approach exhibits a linear increase in the CPU time with model size compared to a quadratic increase for the Bayesian approach. We also propose a fast and robust adaptation of the Bayesian formulation that preserves the statistical foundation of the Bayesian method and at the same time has a scaling property similar to that of the deterministic approach. We demonstrate the power and utility of our proposed method using synthetic examples and a field example from the Goldsmith field, a carbonate reservoir in west Texas.
The practice of inferring reservoir property distributions from dynamic observations of reservoir performance, such as transient pressure/tracer response or production data, typically involves the solution of an inverse problem.1-10 Such inverse problems for reservoir characterization are typically underdetermined and can lead to instability and nonuniqueness in the solution.11,12 To remedy the situation, we generally resort to data-independent prior information that limits the "plausible" models that satisfy the data. We will examine two different approaches for incorporating prior information during production-data integration into reservoir models: "Bayesian" and "deterministic."11,13-15 The two approaches differ fundamentally in the way probability is introduced into the calculation and in their treatment of observed data and prior information.13 The Bayesian approach associates probability with the prior information, whereas the deterministic approach treats it as fixed. In fact, in the deterministic approach, probability enters the calculations only through the data errors, which generally have a random component associated with them.
Our goal in this paper is not to advocate either the Bayesian or the deterministic approach to production-data integration. Both approaches have been used successfully under a wide variety of reservoir conditions.1-10 Likewise, the advantages and disadvantages of each are well documented in the literature.11-15 Instead, we focus on the computational efficiency of the two methods, especially for large-scale field applications. Of particular interest is how the computational cost of each method scales with an increasing number of unknown parameters. Current industry practice involves generating reservoir models consisting of several hundred thousand to millions of gridblocks. Integrating production data into such high-resolution reservoir models can become computationally prohibitive. In this respect, the scaling behavior of the Bayesian vs. the deterministic approach can be a deciding factor in adopting one approach over the other.
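The intuition behind the scaling difference can be seen from a toy comparison (illustrative only, not reproduced from the paper): a dense prior covariance matrix used in the Bayesian formulation has N² entries for N gridblocks, while a sparse derivative stencil of the kind used for smoothness constraints carries only O(N) nonzeros.

```python
import numpy as np
from scipy import sparse

# Toy illustration of scaling with model size N:
# a dense prior covariance C_M stores N^2 entries, while a sparse
# first-difference (smoothness) stencil stores roughly 2 per row.
for n in [1_000, 2_000, 4_000]:
    dense_entries = n * n  # entries in a dense N x N covariance matrix
    L = sparse.diags([np.ones(n), -np.ones(n - 1)], [0, 1], format="csr")
    print(f"N={n}: dense C_M entries={dense_entries}, sparse stencil nnz={L.nnz}")
```

Matrix-vector products, and hence the per-iteration cost of an iterative solver, inherit the same counts, which is consistent with the linear-vs.-quadratic behavior reported in the abstract.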
The outline of our paper is as follows. First, we provide a brief mathematical background of the Bayesian and deterministic approaches as applied to production-data integration into reservoir models. Second, we systematically investigate the scaling of the computation time for the two methods with respect to the model size or the number of unknown parameters. Third, we propose an efficient and robust adaptation of the Bayesian formulation that can lead to orders-of-magnitude savings in computation time for model sizes larger than 100,000 gridblocks. Our proposed method is based on an analytic computation of the square root of the inverse of the covariance matrix during production-data integration using the Bayesian approach. We present a simple finite-difference stencil for the calculation of the square root of the inverse. This allows us to pose the Bayesian inverse problem in a manner analogous to the deterministic approach and the use of efficient sparse matrix solvers during the minimization of the data misfit. Finally, we illustrate the power and utility of our proposed method using synthetic and field examples. The field application involves integration of water-cut response into the geologic model for the Goldsmith field in west Texas.
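The reformulation outlined above can be sketched on a minimal linear toy problem. Everything below is a hypothetical stand-in: `G` plays the role of the sensitivity operator (in the paper it would come from streamline-based sensitivities), and the sparse operator `L` stands in for the analytic square root of the inverse covariance, C_M^{-1/2}; the scaling used for `L` is purely illustrative, not the paper's finite-difference stencil.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n = 200                                      # number of gridblocks (unknowns)
m_true = np.cumsum(rng.normal(size=n)) * 0.1 # a smooth "true" property field
G = sparse.random(5, n, density=0.2, random_state=0, format="csr")
d = G @ m_true                               # noise-free synthetic "production data"

# Sparse operator L standing in for C_M^{-1/2}. For a 1D exponential
# covariance (variance sigma^2, range a) the inverse covariance is banded,
# so a scaled first-difference stencil is a natural sparse surrogate.
a, sigma = 10.0, 1.0
scale = 1.0 / (sigma * np.sqrt(2.0 * a))
L = sparse.diags([np.ones(n) * scale, -np.ones(n - 1) * scale], [0, 1],
                 format="csr")

# Posing the Bayesian objective as an augmented sparse least-squares system:
#   minimize ||G m - d||^2 + ||L (m - m_prior)||^2
m_prior = np.zeros(n)
A = sparse.vstack([G, L], format="csr")
b = np.concatenate([d, L @ m_prior])
m_map = lsqr(A, b)[0]                        # efficient sparse solve (LSQR)
```

Because `A` stays sparse, the solve avoids ever forming or inverting the dense N x N covariance matrix, which is the source of the computational savings claimed for the proposed method.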
Background and Scaling Properties: Bayesian vs. Deterministic Approach
In this section we briefly review the Bayesian and the deterministic approaches to production-data integration during reservoir characterization. We also examine the scaling of the computation time for the two approaches with respect to the number of gridblocks or unknown parameters.
Bayesian Approach
This approach follows from Bayes' rule and provides a natural framework for combining prior information about the geologic model with the production data.11 The goal is to derive a more refined statistical distribution for the model parameters, known as the posterior distribution, which is more tightly constrained than the prior distribution. We can then sample the posterior distribution to obtain plausible models given the data, or simply use the posterior mean as an estimate and the posterior standard deviation as a measure of uncertainty.
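In standard inverse-theory notation (generic symbols, not equations reproduced from this paper), the posterior combines the prior model density with the data likelihood:

```latex
% Bayes' rule for model parameters m given observed data d
p(m \mid d) \;\propto\; p(d \mid m)\, p(m)

% With a Gaussian prior (mean m_prior, covariance C_M), Gaussian data
% errors (covariance C_D), and forward model g(m), the posterior is
p(m \mid d) \;\propto\; \exp\!\Big[
  -\tfrac{1}{2}\,\big(g(m)-d\big)^{T} C_D^{-1}\,\big(g(m)-d\big)
  \;-\; \tfrac{1}{2}\,\big(m - m_{\mathrm{prior}}\big)^{T} C_M^{-1}\,\big(m - m_{\mathrm{prior}}\big)
\Big]
```

Maximizing this density (the MAP estimate) is equivalent to minimizing the bracketed sum of data misfit and prior misfit, which is the objective whose efficient solution the paper addresses.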