Application of Probabilistic Reliability Methods to Tubular Designs
- Mike L. Payne (Arco Oil & Gas Co.) | John D. Swanson (Arco Oil & Gas Co.)
- Society of Petroleum Engineers
- SPE Drilling Engineering
- Publication Date: December 1990
- Document Type: Journal Paper
- Pages: 299-305
- Copyright 1990, Society of Petroleum Engineers
- Subjects: 4.1.2 Separation and Treating; 1.6 Drilling Operations; 1.14.1 Casing Design; 1.14 Casing and Cementing
Standard tubular design methods are based on a specific margin (i.e., "safety factor") between the maximum anticipated field load and the published rating of the tubular. This technique minimizes the risk of failure but promotes overdesign because of the conservativeness in the tubular rating and in the assumed high-load case. To quantify the safety of a particular design, we developed a new method that accounts for the variation of field loadings and tubular performance. Probability distributions for load and capacity are developed on the basis of assumed field-load histories and actual test results on tubular performance. Reliability design methods, originally used in civil engineering, are used to combine these distributions and to quantify the probability of failure. Results show that current design factors do not provide an effective reliability measurement. As a result of these limitations, true-cost/reliability decisions on designs cannot be made with conventional techniques. If reliability is quantified with the proposed method, decisions can be made that properly balance economics, safety, and uncertainty. This paper should assist engineers who need alternative design approaches and managers interested in a better understanding of how existing tubular design methods relate to risk exposure and optimal costs.
Total domestic drilling costs have averaged over $22 billion per year during the past 10 years. Tubular goods represented about $3.5 billion, or 16%, of these average yearly expenditures and accounted for the second-largest share of total costs on almost all drilling projects.1 Because of the magnitude of these costs, the need to refine design methods is critical, yet casing design methods remain controversial and inexact. Design factors and loading considerations vary greatly within the industry. A historical review of design factors demonstrates an ongoing trend toward more exact designs and less conservatism.
Before 1939, collapse designs were based on average failure pressures and a design factor of 1.50. The source of the 1.50 design factor is unknown, but the API published setting-depth tables with the average collapse rating, the design factor of 1.50, and a standard collapse gradient of 0.5 psi/ft. Analysis of collapse data indicated that the minimum collapse pressure occurred at about 75% of average. Hence, when the API revised the collapse ratings from average to minimum, engineers scaled collapse design factors by the same 75% (from 1.50 to 1.125). This collapse design factor remains an industry standard. In 1951, Hills2 reviewed casing design practices and conducted a survey of operator philosophies. He promoted the 1.125 collapse design factor and cited tension design factors of 1.50 for pipe-body yield strength (PBYS) and 2.00 for ultimate joint strength. An average burst design factor of 1.50, with a range of 1.10 to 1.75, was also specified.
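The arithmetic of that revision is worth making explicit: holding the allowable load constant while the rating basis drops to 75% of the average requires scaling the design factor by the same ratio. A quick check (the numbers come from the text above; the computation itself is only illustrative):

```python
# Rebasing a design factor when the collapse rating basis changes
# from the average failure pressure to the minimum (~75% of average).
avg_basis_df = 1.50   # design factor applied to the average rating
min_to_avg = 0.75     # minimum collapse pressure as a fraction of average

# Allowable load = rating / design factor. Keeping the allowable load
# unchanged: avg_rating / avg_basis_df == (0.75 * avg_rating) / min_basis_df
min_basis_df = avg_basis_df * min_to_avg
print(min_basis_df)  # 1.125
```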
In 1954, Saye and Richardson3 discussed the field testing of casing strings with lower-than-normal design factors. Results showed that design factors could be substantially reduced. Before this study in the Elk City field in Oklahoma, Saye and Richardson used design factors of 1.60 in tension, 1.00 in collapse, and 1.33 in burst. They experimented with tension design factors by running eight strings with design factors as low as 1.40. The strings designed with the 1.40 tension design factor did not fail, but tests with lower design factors were not run, and the 1.40 tension design factor was adopted. This result is even more remarkable given that, when measured reciprocating loads were added to the buoyant string weight, the actual design factors fell from 1.40 to 1.28.
Forty strings were then run with this lower design factor without incident, validating the results of the program. Collapse tests involved drilling additional rathole and casing it off so that evacuation pressures could be applied in the well with packer assemblies set below the productive zones. Five wells were tested at collapse pressures greater than the rating until failure occurred or equipment limitations were reached. Test results indicated that casing with design factors of 0.90 in the uncemented interval and 0.75 in the cemented interval sustained the imposed loads without failure. These results prompted the lowering of the collapse design factor to 0.85 in the cemented interval while 1.00 was maintained in the uncemented interval. Burst tests were not performed; therefore, the 1.33 burst design factor was not changed. This field testing of design factors resulted in a uniform lowering of previous design margins.
In March 1955, Moody4 surveyed 38 companies and summarized the casing design factors being used by the industry. The survey showed that about 70% of the collapse designs were based on design factors of 1.125, while about 17% were based on factors of 1.00. Tension design factors ranged between 2.00 and 1.60, and burst design factors ranged between 1.33 and 1.00. In 1978, Greenip5 cited 1.125 for collapse, 1.80 for tension, and 1.10 for burst. In 1986, Bourgoyne et al.6 cited 1.10 for both burst and collapse and 1.60 for tension. Thus, although some significant changes have occurred in isolated instances, design factors have remained essentially unchanged for many years.
The premise of all casing design methods is to balance casing costs with design reliability - i.e., the correct casing design for a given situation is not only reliable but also economical. The void in current practice is the quantitative balance of design reliability and cost. The most economical casing design is achieved by finding the casing string that most closely exceeds a company's or an engineer's specified design factors. Unfortunately, the basis for the design factor is often not well-founded because design factors have been handed down through the years and only rarely subjected to field or analytical verification.
Fig. 1 depicts current design practice by plotting the perceived reliability of a string design vs. the design factor for an example set of design factors. The figure reflects the perception that any design factor below the minimum is completely unacceptable, with no chance of success, while any design factor at or above the minimum is fully reliable, with a 100% probability of success. To generate such a drastic breakover between reliability and failure, the probability distributions for field load and tubular capacity shown in Fig. 2 would have to be assumed: a 100% probability of occurrence at the maximum field load and at the minimum tubular capacity. This simplistic view of the design variables, however, is inadequate. Consider the following dilemmas.
1. A design factor of 1.20 must be more reliable than a design factor of 1.10, and so forth, but no means exists to quantify how much reliability is improved. Also, if a design factor of 1.01 is as acceptable as a design factor of 1.00, then how unacceptable is a design factor of 0.99 or 0.98?
2. Design factors of 1.00 are used for collapse in certain circumstances. A design factor of 1.00 implies no safety margin because the rating is matched to the field load. However, even a design factor of 1.00 includes a safety margin because of the conservativeness built into the rating and the load. If designs are based on hidden margins, shouldn't those margins be quantified and handled explicitly in the string design?
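These dilemmas follow directly from treating load and capacity as single deterministic numbers. As a rough sketch of the probabilistic alternative (all distribution parameters below are hypothetical, and independent normal distributions are assumed purely for illustration; the paper itself builds distributions from field-load histories and tubular test data), the failure probability is the chance that the field load exceeds the tubular capacity. Two designs with the same nominal margin between mean load and mean capacity can then differ sharply in reliability once scatter is accounted for:

```python
import math

def failure_probability(mu_load, sd_load, mu_cap, sd_cap):
    """P(load > capacity) for independent, normally distributed load and capacity.

    The margin M = capacity - load is normal with mean (mu_cap - mu_load) and
    standard deviation sqrt(sd_cap**2 + sd_load**2); failure is the event M < 0.
    """
    beta = (mu_cap - mu_load) / math.hypot(sd_cap, sd_load)  # reliability index
    return 0.5 * math.erfc(beta / math.sqrt(2))  # standard normal CDF at -beta

# Two hypothetical collapse designs (pressures in psi) with the same mean
# margin (8,000 vs. 6,000) but different load/capacity scatter:
tight = failure_probability(mu_load=6000, sd_load=200, mu_cap=8000, sd_cap=300)
loose = failure_probability(mu_load=6000, sd_load=800, mu_cap=8000, sd_cap=900)
print(f"low scatter:  {tight:.1e}")
print(f"high scatter: {loose:.1e}")
```

Under these assumed numbers, the low-scatter design fails orders of magnitude less often than the high-scatter one, even though a conventional design-factor check cannot distinguish between them.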