Interview

The Outlook for Pipeline Risk Assessment

A 2014* Interview with W. Kent Muhlbauer

*but still relevant!

PHMSA has recently expressed criticism regarding how Integrity Management Plan (IMP) risk assessment (RA) for pipelines is being conducted.  Do you also see problems?

There is a wide range of practice among pipeline operators right now.  Some RA is admittedly in need of improvement and does not yet meet the intent of the IMP regulation.  However, I believe that is due not to a lack of good intention but rather to an incomplete understanding of risk.  Risk assessment is a relatively new discipline, and risk itself is not an easy concept to fully grasp.  To address PHMSA’s concerns, we as an industry need to improve our understanding of risk and how to measure it.


What’s new in the world of pipeline risk assessment?

In the last few years, the emergence of the US IMP regulations has prompted the development of more robust RA methodologies specifically designed for pipelines.  Even though PHMSA and others have identified weaknesses among some practitioners, much progress has been made.  Previous methodologies fell into two categories:  1) scoring systems designed for simple ranking of pipeline segments, and 2) statistics-based quantitative risk assessments (QRAs) used in more robust applications, often for industrial sites and for certain regulatory and legal needs.  The first category was popular among pre-IMP voluntary practitioners but was limited in its ability to accurately measure risk and to meet IMP regulatory requirements.  The second was costly and ill-suited for long, linear assets like pipelines.


You note two categories of previous risk assessment methodologies.  What about others, like ‘scenario-based’ or ‘subject matter experts’, that are listed in some standards?

I think that listing confuses tools with risk assessment methodologies.  The two examples you mention are important ingredients in any good risk assessment, but they are certainly not complete risk assessments in themselves.


What are the newest pipeline risk assessment methodologies like?

They’re powerful, intuitive, easy to set up, less costly, and vastly more informative than either of the previous approaches.  By examining key aspects of risk independently and using verifiable measurement units, they make the whole landscape of the risks apparent.  That leads to much improved decision-making.


How can they be both easy and more informative?

More informative because they produce the same type of output as classic QRA but are more accurate.  Easy because they directly capture our understanding of pipelines and what can cause them to fail.  The word ‘directly’ is key here.  Previous methods relied on inferential data and/or scoring schemes that tended to interfere with that understanding.


If they do the same thing as QRA, why not just use classical QRA?

Several reasons.  Classic QRA is expensive and awkward to apply to a long, linear asset in a constantly changing natural environment; imagine developing and maintaining event trees and fault trees along every foot of every pipeline.  Classical QRA was created by statisticians and relies heavily on historical failure frequencies:  ask a statistician how often something will happen in the future and he will ask how often it has happened in the past.  I often hear something like “we can’t do QRA because we don’t have data.”  I think what is meant is that databases full of incident frequencies, showing how often each pipeline component has failed by each failure mechanism, are needed before QRA-type risk estimates can be produced.  That’s simply not correct.  It’s a carryover from the notion of a purely statistics-driven approach.  While such historical failure data is helpful, it is by no means essential to RA.  We should take an engineering- and physics-based approach rather than rely on questionable or inadequate statistical data.
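
To make the contrast concrete, here is a rough sketch of how an engineering-based failure-frequency estimate can be built from directly observable quantities instead of incident databases.  The structure, names, and numbers below are illustrative assumptions for this article, not a prescription from the interview:

```python
# Illustrative sketch only: an engineering-based estimate of failure
# frequency for one segment and one threat. Every value is hypothetical;
# a real assessment would derive each factor from pipe and environment data.

def probability_of_failure(exposure, mitigation, resistance):
    """Estimate failures per mile-year from first principles.

    exposure   -- threat events per mile-year if nothing were done
    mitigation -- fraction of those events prevented (0 to 1)
    resistance -- fraction of the events reaching the pipe that it
                  survives without failing (0 to 1)
    """
    return exposure * (1.0 - mitigation) * (1.0 - resistance)

# Example: third-party damage on a hypothetical segment.
pof = probability_of_failure(
    exposure=0.5,     # excavator strikes per mile-year, unmitigated
    mitigation=0.95,  # patrols, one-call participation, depth of cover
    resistance=0.90,  # most strikes do not breach this wall thickness
)
print(f"{pof:.1e} failures per mile-year")  # -> 2.5e-03
```

Each input is something an engineer can estimate or measure for a specific piece of pipe, which is the sense in which such an approach is ‘direct’.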


But if I need to estimate (‘quantify’) how often a pipeline segment will fail from a certain threat, don’t I need to have numbers telling me how often similar pipelines have failed in the past from that threat?

No, it’s not essential.  It’s helpful to have such numbers, but they are not necessary and are sometimes even counterproductive.  Note that historical numbers are often not very relevant to the future:  how often do conditions, and reactions to previous incidents, remain so static that history can accurately predict the future?  Sometimes, perhaps, but caution is warranted.  With or without comparable historical data, the best way to predict future events is to understand and properly model the mechanisms that lead to them.


Why do we need more robust results?  Why not just use scores?

Even though they were developed to simplify analysis, scoring and indexing systems add an unnecessary layer of complexity and obscurity to a risk assessment.  Numerical estimates of risk (a measure of some consequence over time and space, like ‘failures per mile-year’) are the most meaningful measures of risk we can create.  Anything less is a compromise.  Compromises lead to inaccuracies; inaccuracies lead to diminished decision-making; diminished decision-making leads to mis-allocated resources and, ultimately, to more risk than necessary.  Good risk estimates are gold.  If you can get the most meaningful numbers at the same cost as compromise measures, why settle for less?
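
As a hypothetical illustration of the unit itself (all values invented), a failure frequency can be carried through to a consequence-bearing risk number rather than stopping at a score:

```python
# Hypothetical illustration only: carrying a failure frequency through to
# a consequence-bearing risk estimate. All values are invented.

pof = 2.5e-3          # failures per mile-year, from a PoF model
cof = 0.04            # expected consequence per failure (units of choice)
length_miles = 12.0   # length of the segment being assessed

risk_per_mile_year = pof * cof                     # consequence per mile-year
segment_risk = risk_per_mile_year * length_miles   # consequence per year

print(f"{risk_per_mile_year:.1e} consequence units per mile-year")
print(f"{segment_risk:.1e} consequence units per year on this segment")
```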


Are you advocating exclusively a quantitative or probabilistic RA?

Terminology has been getting in the way of understanding in the field of RA.  Terms like quantitative, semi-quantitative, qualitative, probabilistic, etc. mean different things to different people.  I do believe that for true understanding of risk and for the vast majority of regulatory, legal, and technical uses of pipeline risk assessments, numerical risk estimates in the form of consequence per length per time are essential.  Anything less is an unnecessary compromise.


What about the concern that a more robust methodology suffers more from a lack of data?  (i.e., “If I don’t have much info on the pipeline, I may as well use a simple ranking approach.”)

That is a myth.  In the absence of recorded information, a robust RA methodology forces SMEs to make careful, informed estimates based on their experience and judgment.  From direct estimates of real-world phenomena, reasonable risk estimates emerge, pending the acquisition of better data.  So I would argue that lack of information should drive you towards a more robust methodology.  Using a lesser RA approach on a small amount of data just compounds the inaccuracies and does not improve understanding of risk.
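
One hypothetical way such SME estimates can enter a robust model (names and numbers invented):  each input is bounded with low/best/high values, and the ranges are carried through so the uncertainty stays visible until better data arrives:

```python
# Hypothetical sketch: SME-supplied ranges stand in for missing records.
# Each tuple is (low, best, high); all values are invented.

exposure = (0.2, 0.5, 1.0)       # threat events per mile-year, unmitigated
mitigation = (0.90, 0.95, 0.98)  # fraction of events prevented
resistance = (0.80, 0.90, 0.95)  # fraction of reaching events survived

def pof(e, m, r):
    """Failures per mile-year from exposure, mitigation, resistance."""
    return e * (1.0 - m) * (1.0 - r)

# Carry the ranges through to a PoF range for the segment.
worst = pof(exposure[2], mitigation[0], resistance[0])    # pessimistic
central = pof(exposure[1], mitigation[1], resistance[1])  # best estimate
best = pof(exposure[0], mitigation[2], resistance[2])     # optimistic

print(f"PoF range: {best:.1e} to {worst:.1e}, central {central:.1e}")
```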


It sounds like you have methods that very accurately predict failure potential.  True?

Unfortunately, no.  While the new modeling approaches are powerful and the best we’ve ever had, there is still huge uncertainty.  We are unable to accurately predict failures on specific pipe segments except in extreme cases.  With good underlying data, we can do a decent job of predicting the behavior of numerous pipe segments over longer periods of time—the behavior of a population of pipeline segments.  That is of significant benefit when determining risk management strategies.
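
A back-of-the-envelope calculation with invented numbers shows why population behavior is predictable even when single segments are not:

```python
# Invented numbers: population behavior vs. single-segment prediction.
# With 1.0E-4 failures per mile-year on each of 10,000 one-mile segments,
# roughly one failure per year is expected somewhere in the system, yet
# no specific mile can be singled out.

pof_per_mile_year = 1.0e-4
miles = 10_000

expected_failures_per_year = pof_per_mile_year * miles
print(f"expected failures across the system: {expected_failures_per_year:.1f}")

# The chance that any one, specific mile fails this year is still tiny.
print(f"chance a given mile fails this year: {pof_per_mile_year:.2%}")
```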


Nonetheless, it sounds like you’re saying there are now pipeline RA approaches that are both better and cheaper than past practice . . . ?

True.  RA that follows the Essential Elements* (EE) guidelines avoids the pitfalls of many current practices.  Yet we can still apply all of the data that was collected for the previous approaches.  Pitfall avoidance, full transparency, and re-use of data make the approach more efficient than other practices.  Plus, the recommended approaches now generate the most meaningful measurements of risk that we know of.


Sounds too good to be true.  What’s the catch?

One catch is that we have to overcome our resistance to the kinds of risk estimate values that are needed.  When faced with a number such as 1.2E-4 failures/mile-year, many react with an immediate negative reaction that goes far beyond healthy skepticism.  Perhaps it is the scientific notation, or the probabilistic implication, or the ‘illusion of knowledge’, or some other aspect that evokes such reactions.  I find, however, that such biases disappear very quickly once an audience sees the utility of the numbers and makes the connection:  ‘Hey, that’s actually a close estimate of the real-world risk.’
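
Plain arithmetic dissolves much of the mystery in a value like 1.2E-4 failures/mile-year.  Here is a simple restatement, with the 500-mile system length assumed purely for illustration:

```python
# Restating 1.2E-4 failures/mile-year in everyday terms. The 500-mile
# system length is assumed purely for illustration.

rate = 1.2e-4        # failures per mile-year
system_miles = 500

failures_per_year = rate * system_miles    # 0.06 failures per year
years_between = 1.0 / failures_per_year    # roughly 17 years

print(f"{failures_per_year:.2f} failures per year on a "
      f"{system_miles}-mile system, i.e. about one every "
      f"{years_between:.0f} years")
```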

Another ‘catch’ is the one we touched on previously.  Rare events like pipeline failures have a large element of randomness, at least from our current technical perspective.  That means that, no matter how good the modeling, some will still be disappointed by the high uncertainty that must accompany predictions on specific pipeline segments.


What’s behind the EE guideline document that DNV and you recently released?

We are advocating a degree of standardization that serves all stakeholders.  The list of essential elements sets forth the minimum ingredients of an acceptable pipeline risk assessment.  Every risk assessment should have these elements.  A specific methodology and detailed processes are intentionally NOT essential elements, so there is room for creativity and customized solutions.  DNV’s recognition of the need for such a guideline, given its long history of technical risk consulting and its solid reputation, demonstrates the seriousness of this effort.  If regulators encounter too many substandard pipeline RA practices, prescriptive mandates might be deemed necessary.  Such mandates are usually less efficient than approaches that prescribe only certain elements while permitting flexibility.  Hence the benefit of the EE guidelines.


*The Essential Elements of Pipeline Risk Assessment were discussed in the March 2012 issue of this magazine and published by DNV and Mr. Muhlbauer as an insert in the May 2012 issue of Pipeline & Gas Journal.  The Essential Elements were a direct response to PHMSA’s Advance Notice of Proposed Rulemaking (ANPRM) of August 2011.

Published September 2014
