A Machine Learning Approach to Studies of Recovery Efficiency
- J.P. Brown (Ultimate Resources Inc.)
- Society of Petroleum Engineers
- Petroleum Computer Conference, 17-20 June, Dallas, Texas
- Conference Paper
- 1991. Society of Petroleum Engineers
- 7.6.6 Artificial Intelligence, 5.7.2 Recovery Factors, 4.1.2 Separation and Treating, 2.4.3 Sand/Solids Control, 4.1.5 Processing Equipment, 4.6 Natural Gas
The American Petroleum Institute published studies of Recovery Efficiency in 1967 and 1984. The first report provided useful correlations from an industry database, but the second showed that, without subjective weighting, the 1967 results appeared invalid. This new independent report demonstrates that, through Machine Learning, objective predictions can now be achieved.
Extensive studies of Recovery Efficiency by a sub-committee of the American Petroleum Institute continued from 1956 through 1967 and from 1976 through 1984 (Figure 1). Some 50 attributes from 675 producing reservoirs were systematically collected in a database which covered primary and secondary production. This 1991 pilot study is derived from the original database, using 21 attributes from the 675 reservoirs and considering only primary production. The resulting demonstration of the application of Machine Learning (Figure 2), together with computer graphics and regression techniques as Predictive Tools, will show that these methods work and that they could be applied extensively to all types of databases throughout the industry.
A cross plot of two key attributes, using all the original data points (Figure 3), illustrates how diverse the distributions can be. This paper describes a rigorous but objective sequence of procedures for separating out the subsets that are most effective for making predictions. In contrast to the original diverse cross plot, the key table of the study (Table 1) shows the detailed predictions of Recovery Efficiency that can be made by carefully selecting sub-sets of the database.
Diversity is caused by mixing Apples and Oranges. What needs to be done is to find a way to separate the data into whatever sub-bases have the cleanest possible correlations. This is not a new idea. In 1967 the API subcommittee "filtered" out sub-bases such as sandstone reservoirs with Water Drive, and in 1984 they tried such sub-bases as sandstone in Texas with Solution Gas Drive. This filtering did improve the correlations, but it did not change the committee's opinion that the choice of these attributes for filtering was subjective.
Machine Learning, the subject of this paper, provides a way to choose filter factors objectively, and does in fact support all the choices made by the subcommittee.
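As a minimal sketch of what an objective filter-factor choice could look like (the attribute names and toy reservoir records below are invented for illustration, not taken from the API database or the paper's actual program), one can split the data on each candidate categorical attribute and score the split by how cleanly two key scalar attributes correlate within each resulting subset:

```python
# Hypothetical sketch: pick a filter attribute objectively by testing which
# categorical split yields the cleanest within-subset correlation.
# All field names and values are invented for illustration.

reservoirs = [
    {"lithology": "sandstone", "drive": "water", "porosity": 0.10, "recovery": 0.30},
    {"lithology": "carbonate", "drive": "water", "porosity": 0.15, "recovery": 0.40},
    {"lithology": "sandstone", "drive": "water", "porosity": 0.20, "recovery": 0.50},
    {"lithology": "carbonate", "drive": "water", "porosity": 0.25, "recovery": 0.60},
    {"lithology": "carbonate", "drive": "gas",   "porosity": 0.10, "recovery": 0.15},
    {"lithology": "sandstone", "drive": "gas",   "porosity": 0.15, "recovery": 0.18},
    {"lithology": "carbonate", "drive": "gas",   "porosity": 0.20, "recovery": 0.25},
    {"lithology": "sandstone", "drive": "gas",   "porosity": 0.25, "recovery": 0.26},
]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def subset_score(records, attr, x_key="porosity", y_key="recovery"):
    """Average r^2 of the x-y correlation within each value of `attr`."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[attr], []).append(rec)
    scores = [
        pearson_r([r[x_key] for r in g], [r[y_key] for r in g]) ** 2
        for g in groups.values() if len(g) >= 3
    ]
    return sum(scores) / len(scores) if scores else 0.0

# The attribute whose subsets correlate best is the objective filter choice.
best = max(["lithology", "drive"], key=lambda a: subset_score(reservoirs, a))
```

In this toy data, splitting on drive mechanism scores far higher than splitting on lithology, echoing the kind of filter (e.g. Water Drive) that the subcommittee had chosen subjectively.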
Machine Learning relies on object oriented programming to compare the distributions of attribute values, both scalar (numbers) and non-scalar (descriptive terms), and to develop Rules.
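A minimal sketch of that distribution-comparison idea (again with invented attribute names and data, not the paper's actual system): contrast an attribute's values in high- versus low-recovery reservoirs, handling scalar and descriptive attributes differently, and phrase the contrast as a rule:

```python
from collections import Counter

# Toy records; names, values, and the 0.35 cutoff are invented for illustration.
records = [
    {"drive": "water", "porosity": 0.15, "recovery": 0.40},
    {"drive": "water", "porosity": 0.20, "recovery": 0.50},
    {"drive": "water", "porosity": 0.25, "recovery": 0.60},
    {"drive": "gas",   "porosity": 0.10, "recovery": 0.15},
    {"drive": "gas",   "porosity": 0.15, "recovery": 0.18},
    {"drive": "gas",   "porosity": 0.20, "recovery": 0.25},
]

def derive_rule(recs, attr, target="recovery", cutoff=0.35):
    """Compare `attr`'s distribution in high- vs. low-`target` records
    and return a human-readable rule."""
    high = [r[attr] for r in recs if r[target] >= cutoff]
    low = [r[attr] for r in recs if r[target] < cutoff]
    if all(isinstance(v, (int, float)) for v in high + low):
        # Scalar attribute: split halfway between the two group means.
        mh, ml = sum(high) / len(high), sum(low) / len(low)
        op = ">=" if mh > ml else "<"
        return f"IF {attr} {op} {(mh + ml) / 2:.3f} THEN recovery is high"
    # Non-scalar (descriptive) attribute: take the value most typical
    # of the high-recovery group.
    typical = Counter(high).most_common(1)[0][0]
    return f"IF {attr} = {typical!r} THEN recovery is high"

print(derive_rule(records, "drive"))     # IF drive = 'water' THEN recovery is high
print(derive_rule(records, "porosity"))  # IF porosity >= 0.175 THEN recovery is high
```

A production rule-induction system would of course weigh statistical significance and combine conditions, but the core operation is this comparison of attribute-value distributions.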