Production-Optimization Strategy Using a Hybrid Genetic Algorithm
- Chris Carpenter (JPT Technology Editor)
- Society of Petroleum Engineers
- Journal of Petroleum Technology
- Publication Date: December 2016
- Document Type: Journal Paper
- Pages: 54–55
- Copyright 2015, Society of Petroleum Engineers
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 177442, “Production-Optimization Strategy Using a Hybrid Genetic Algorithm,” by Damian Dion Salam, Irwan Gunardi, and Amega Yasutra, Bandung Institute of Technology, prepared for the 2015 Abu Dhabi International Petroleum Exhibition and Conference, Abu Dhabi, 9–12 November. The paper has not been peer reviewed.
The optimization algorithm used in this work is a hybrid genetic algorithm (HGA), which combines a GA with artificial neural networks (ANNs) and evolution strategies (ESs). The HGA attempts to simplify the complex and diverse parameters governing the production-optimization problem. It is coupled with a commercial simulator and has been applied to real fields to quantify its benefits over a base case using the conventional GA.
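The complete paper details how the ANN and ES components are wired into the GA; as a rough illustration only, the loop below sketches one common surrogate-assisted pattern matching this description. The quadratic `expensive_simulator` is a stand-in for the commercial reservoir simulator, and a nearest-neighbour lookup is a stand-in for the ANN proxy; both are assumptions for illustration, not the authors' design.

```python
import random

def expensive_simulator(x):
    """Hypothetical stand-in for the reservoir simulator (toy objective)."""
    return -(x - 3.0) ** 2

def hybrid_optimize(generations=20, pop_size=20, screen_factor=5, seed=1):
    """Surrogate-assisted GA loop: a cheap proxy built from already-simulated
    points prescreens many GA offspring so only the most promising candidates
    reach the expensive simulator."""
    rng = random.Random(seed)
    archive = {}  # every point actually run through the simulator

    def simulate(x):
        if x not in archive:
            archive[x] = expensive_simulator(x)
        return archive[x]

    def proxy(x):  # nearest-neighbour surrogate over the archive
        nearest = min(archive, key=lambda p: abs(p - x))
        return archive[nearest]

    pop = [round(rng.uniform(-10, 10), 3) for _ in range(pop_size)]
    for x in pop:
        simulate(x)
    for _ in range(generations):
        # GA variation: generate many offspring by blend crossover + mutation
        offspring = []
        for _ in range(pop_size * screen_factor):
            a, b = rng.sample(pop, 2)
            offspring.append(round((a + b) / 2 + rng.gauss(0, 0.5), 3))
        # Proxy prescreening: only the top-ranked offspring hit the simulator
        offspring.sort(key=proxy, reverse=True)
        survivors = offspring[:pop_size]
        pop = sorted(pop + survivors, key=simulate, reverse=True)[:pop_size]
    return max(archive, key=archive.get)

best = hybrid_optimize()
```

The prescreening step is where the hybrid saves cost: each generation, `pop_size * screen_factor` candidates are ranked by the cheap proxy, but only `pop_size` of them consume a simulator run.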
GAs. GAs are part of a larger group of methods in artificial intelligence (AI) called evolutionary computation. These methods are inspired by natural evolution in biology. The GA has been well-recognized as an optimization method that has the ability to work in a solution space with nonsmooth and nonlinear topology, where traditional methods generally fail. Several entities that make up the building blocks of GAs have their direct counterpart in nature. Populations, individuals and their fitness, generations, and genomes are all present both in nature and in GAs. A detailed discussion of the GA method is provided in the complete paper.
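The building blocks named above (a population of individuals, their fitness, selection, and successive generations) can be sketched in a generic real-coded GA. The objective function, bounds, and operator settings below are illustrative assumptions, not the authors' implementation:

```python
import random

random.seed(7)  # fixed seed so the toy run is reproducible

def genetic_algorithm(fitness, bounds, pop_size=30, generations=50,
                      crossover_rate=0.8, mutation_rate=0.2, sigma=0.2):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation, and elitism. `bounds` gives (low, high) per gene."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]

        def tournament():  # the fitter of two random individuals wins
            a, b = random.sample(scored, 2)
            return (a if a[0] >= b[0] else b)[1]

        next_pop = [max(scored)[1]]  # elitism: the best individual survives
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < crossover_rate:  # uniform crossover
                child = [g1 if random.random() < 0.5 else g2
                         for g1, g2 in zip(p1, p2)]
            else:
                child = p1[:]
            child = [min(max(g + random.gauss(0, sigma), lo), hi)  # mutation
                     if random.random() < mutation_rate else g
                     for g, (lo, hi) in zip(child, bounds)]
            next_pop.append(child)
        pop = next_pop
    return max((fitness(ind), ind) for ind in pop)

# Toy objective with its peak at x = 2, y = -1 (illustrative only)
best_fit, best = genetic_algorithm(
    lambda v: -(v[0] - 2) ** 2 - (v[1] + 1) ** 2,
    bounds=[(-5, 5), (-5, 5)])
```

Because the GA needs only fitness values, never gradients, it tolerates the nonsmooth, nonlinear solution spaces mentioned above.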
ANNs. ANNs are an AI method inspired by the structure and function of the brain. The method aims to approximate the functions, processes, simulators, or similar artifacts that produce input/output patterns by learning from given training points. Neural networks are composed of nodes, or units, connected by directed links: some nodes receive inputs, some provide outputs, and hidden nodes lie in between. Many variants and types of ANNs exist. The learning process for the ANN in this study uses a back-propagation (BP) algorithm. Once an ANN has learned from the training data, the system receives input, processes it, and provides the output.
Learning in an ANN is typically accomplished through examples. This is also called "training" because learning is achieved by adjusting the weights iteratively until the trained ANN generalizes well to testing points. The most common training method is the BP method.
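As a toy illustration of BP training, the example below adjusts the weights of a tiny one-hidden-layer network until it reproduces a simple input/output pattern (the logical AND). The network size, learning rate, and target pattern are hypothetical choices for the sketch, not details from the paper:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(data, hidden=2, epochs=10000, lr=0.5, seed=0):
    """Tiny one-hidden-layer network trained by back-propagation: each pass
    computes the output, then propagates the error backward and nudges every
    weight down the error gradient."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]  # input->hidden (+bias)
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]                  # hidden->output (+bias)
    for _ in range(epochs):
        for (x1, x2), t in data:
            h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w1]  # forward pass
            y = sigmoid(sum(w2[j] * h[j] for j in range(hidden)) + w2[-1])
            dy = (y - t) * y * (1 - y)                               # output-layer delta
            dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            for j in range(hidden):                                  # weight updates
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh[j] * x1
                w1[j][1] -= lr * dh[j] * x2
                w1[j][2] -= lr * dh[j]
            w2[-1] -= lr * dy

    def predict(x1, x2):
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w1]
        return sigmoid(sum(w2[j] * h[j] for j in range(hidden)) + w2[-1])
    return predict

# Training points for the logical AND pattern (a toy stand-in target)
net = train([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
```

After training, `net` can be queried at the training points or at unseen testing points, which is how the trained ANN is used as a cheap proxy in the hybrid scheme.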