Publisher: Society of Petroleum Engineers
Language: English
Content Type: Conference Paper
Title: Exploratory Data Analysis in Reservoir Characterization Projects
Keith R. Holdaway, SPE, SAS Institute Inc.
SPE/EAGE Reservoir Characterization and Simulation Conference, 19-21 October 2009, Abu Dhabi, UAE
Copyright 2009, Society of Petroleum Engineers
“Simplicity is the ultimate sophistication.”
Leonardo da Vinci
Characterizing the reservoirs of a mature field entails analyzing large data sets collated from well tests, production history and core analysis results, enhanced by high-resolution mapping of seismic attributes to reservoir properties. It is imperative to capture the more subtle observations inherent in these data sets in order to comprehend the structure of the data. Invariably, geostatistical methods can be implemented to accurately quantify heterogeneity, integrate scalable data and capture the scope of uncertainty. However, between 50 and 70 per cent of the time allotted to any reservoir characterization study worth its investment should be concentrated on Exploratory Data Analysis (EDA). As an overture to spatial analysis, simulation and uncertainty quantification, exploratory data analysis ensures consistent data integration, data aggregation and data management, underpinned by univariate, bivariate and multivariate analysis.
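The univariate and bivariate steps mentioned above can be sketched in a few lines. The snippet below is an illustrative example only, using synthetic porosity and permeability values (the column names and the log-linear porosity–permeability relationship are assumptions for demonstration, not data from the paper): a five-number summary flags skew and outliers, and, because permeability is typically log-normal, the bivariate step correlates porosity with log permeability.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a well-log data set (hypothetical columns).
rng = np.random.default_rng(42)
porosity = rng.normal(0.18, 0.04, 200)                           # fraction
permeability = np.exp(10 * porosity + rng.normal(0, 0.3, 200))   # mD, log-linear in porosity
df = pd.DataFrame({"porosity": porosity, "permeability": permeability})

# Univariate EDA: summary statistics reveal skew and candidate outliers.
print(df.describe())

# Bivariate EDA: permeability is roughly log-normal, so correlate its
# logarithm with porosity rather than the raw values.
corr = df["porosity"].corr(np.log(df["permeability"]))
print(f"corr(porosity, log k) = {corr:.2f}")
```

On real project data, the same summaries would be computed per well or per facies before any spatial modeling begins.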
This paper not only details some of the more common EDA steps that initiate efficient reservoir characterization projects, but also underlines the importance of the EDA school of thought, often overlooked or even precluded prior to the spatial analysis, kriging, simulation and uncertainty quantification steps. See Figure 1 for a comprehensive reservoir characterization project flow chart as it cycles through each step from EDA to uncertainty analysis. Illustrated by a case study to optimize recovery factors, the EDA techniques are enumerated and graphically explicated. A suite of statistical tools, deployed in workflows across an enterprise business intelligence framework, preserves and manages data integrity efficiently; this enables appropriate regression models to be selected, with stepwise algorithms and other techniques yielding reliable knowledge of the reservoir properties most influential in increasing recovery factors.
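The stepwise approach referred to above can be illustrated with a minimal forward-selection sketch. Everything here is assumed for demonstration: the predictor names, the synthetic recovery-factor response, and the 1% improvement threshold are illustrative choices, not the paper's method or data. The idea is simply to add, one at a time, the predictor that most reduces the residual sum of squares, and stop when the gain becomes negligible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Hypothetical reservoir properties (illustrative names, standardized units).
names = ["porosity", "net_to_gross", "water_saturation", "seismic_amplitude"]
X = rng.normal(size=(n, 4))
# Synthetic recovery-factor response, driven mainly by the first two properties.
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n)

def rss(cols):
    """Residual sum of squares of an OLS fit of y on the chosen columns."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return r @ r

# Forward stepwise selection: greedily add the predictor that most reduces
# the RSS; stop when the relative improvement falls below 1% (assumed cutoff).
selected, remaining = [], list(range(4))
current = rss(selected)
while remaining:
    best_j = min(remaining, key=lambda j: rss(selected + [j]))
    best = rss(selected + [best_j])
    if current - best < 0.01 * current:
        break
    selected.append(best_j)
    remaining.remove(best_j)
    current = best

print("selected predictors:", [names[j] for j in selected])
```

In practice a stepwise procedure would use an information criterion (AIC/BIC) or cross-validation rather than a fixed threshold, but the greedy structure is the same.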
Exploratory data analysis is an approach to analyzing data for the purpose of formulating hypotheses worth testing, complementing the tools of conventional statistics for testing hypotheses. It was so named by John Tukey to contrast with confirmatory data analysis, the term used for the set of ideas about hypothesis testing, p-values and confidence intervals.
Tukey held that too much emphasis in statistics was placed on statistical hypothesis testing (confirmatory data analysis); more emphasis needed to be placed on using data to suggest hypotheses to test. In particular, he held that confusing the two types of analyses and employing them on the same set of data can lead to systematic bias owing to the issues inherent in testing hypotheses suggested by the data.
Data analysis often falls into two phases: exploratory and confirmatory. The exploratory phase “isolates patterns and features of the data and reveals these forcefully to the analyst” (Hoaglin, Mosteller, and Tukey²). If a model is fit to the data, exploratory analysis finds patterns that represent deviations from the model. These patterns lead the analyst to revise the model, and the process is repeated.
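The exploratory loop described above, in which residual patterns prompt a model revision, can be made concrete with a small synthetic example (the data and models here are assumptions for illustration): a straight line is fit to gently curved data, the residuals reveal a systematic quadratic pattern rather than noise, and the model is revised accordingly.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
# Hypothetical data with mild curvature that a first-pass linear model misses.
y = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(0, 0.05, 100)

# First pass: fit a straight line, then examine the residuals.
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)

# The residuals track (x - mean)^2: a systematic pattern, not noise,
# so the exploratory step suggests revising the model to a quadratic.
curvature_signal = abs(np.corrcoef(resid, (x - x.mean()) ** 2)[0, 1])

# Revised model: quadratic fit shrinks the residuals to the noise level.
coef_q = np.polyfit(x, y, 2)
resid_q = y - np.polyval(coef_q, x)
print(f"residual-vs-curvature correlation: {curvature_signal:.2f}")
print(f"RMS residual: linear {resid.std():.3f}, quadratic {resid_q.std():.3f}")
```

The same diagnostic logic, applied to variograms or cross-plots of reservoir properties, is what drives the revise-and-repeat cycle in a characterization study.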