Performance evaluation of the LDS

Performance Evaluation

The pipeline operator should establish KPIs (e.g., sensitivity, reliability) to measure LDS performance.

Internal Review

Establish a comprehensive internal data repository to facilitate the analysis process.

The results of the internal review on the LDS side may include:

  • Identified gaps in strategy
  • Performance metrics evaluation results
  • Performance monitoring and target changes
  • Testing/tuning results
  • Training results
  • Notable equipment maintenance activities
  • Improvements suggested, undertaken, and completed

External Review

External comparisons may provide a basis to compare the performance of the LDSs with other pipeline operators, industry data sources, and jurisdictional data sources. The purpose of the external review is to seek benchmarking information and improvement possibilities.

Key Performance Indicators (KPIs)

Monitoring of the LDSs may be realized by defining the correct KPIs, collecting the data consistently, reporting it properly, and acting on the data once it is evaluated.

Leak detection uses four performance metrics: accuracy, sensitivity, reliability, and robustness. However, these metrics cannot be directly applied to evaluating the overall performance of an LDP. Therefore, KPIs should be defined to monitor the overall effectiveness of the LDP.

Each pipeline operator should develop its own usable list of KPIs. As lagging KPIs are related to spill events, I will only provide examples of leading KPIs in this review.

Level 3 – Pipeline operator internal measures, leading indicators, operationally focused KPIs

Leading KPIs

  • Percentage of non-leak leak alarms that are analyzed, rationalized, addressed and documented by the leak detection analyst in a given time period
  • Number of non-leak leak alarms generated from the LDS
  • Amount of time that LDS is in alarm during operation
  • Percentage of total pipeline covered by a continuously monitored LDS
  • Percentage of total pipeline where actual LDS performance meets design criteria
  • Percentage of time that the LDS is available during operations (uptime of the LDS)
  • Number of tests conducted on an LDS in a given year
  • Percentage of LDSs with non-tuned thresholds
  • Percentage of LDSs that undergo reviews of alarms or notifications in each year
  • Percentage of leak alarms where the cause of the alarms or notifications is identified (e.g., communication, metering, instrumentation, SCADA)
  • Number of times per year that an LDS has had tuning changes in threshold limits
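Several of the Level 3 KPIs above are simple ratios over a reporting period. As a minimal sketch of how an operator might compute them (all counts, mileages, and variable names below are hypothetical assumptions, not values or definitions from RP 1175):

```python
from datetime import timedelta

# Hypothetical inputs for one reporting period -- illustrative only.
non_leak_alarms = 40          # non-leak leak alarms raised in the period
alarms_documented = 34        # of those, analyzed, rationalized, and documented in time
lds_uptime = timedelta(days=362, hours=12)   # time the LDS was available
period = timedelta(days=365)                 # length of the reporting period
monitored_miles = 480.0       # pipeline miles covered by a continuously monitored LDS
total_miles = 520.0           # total pipeline miles in operation

def pct(part: float, whole: float) -> float:
    """Return part/whole as a percentage, guarding against division by zero."""
    return 100.0 * part / whole if whole else 0.0

kpis = {
    "non_leak_alarms_documented_pct": pct(alarms_documented, non_leak_alarms),
    "lds_uptime_pct": pct(lds_uptime.total_seconds(), period.total_seconds()),
    "continuous_coverage_pct": pct(monitored_miles, total_miles),
}

for name, value in kpis.items():
    print(f"{name}: {value:.1f}%")
```

The point of the sketch is that each leading KPI reduces to a clearly defined numerator and denominator; the hard part in practice is agreeing on those definitions and collecting the data consistently.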

Level 4 – Pipeline operator internal measures - leading indicators focused on personnel performance and management

Leading KPIs

  • Percentage of Pipeline Controllers who are trained on the concepts of the LDS on an annual basis
  • Whether leak causes are reviewed on an annual basis and new information is incorporated into the pipeline operator's leak detection strategy
  • Average time to correct an instrument malfunction that impacts an operational LDS
  • Percentage of MOC items that impact the Pipeline Controller LDP training
  • Leak detection staffing levels per mile of pipeline in operation
  • Percentage of LDSs where alarm settings are reviewed and confirmed on an annual basis

The strategy may have set performance requirements for the overall LDP, and KPIs may be used to measure progress against them. KPIs to measure the overall LDP are likely to be different than those used in monitoring individual pipelines.

Periodic Reporting

The most significant KPIs should be reported to the pipeline operator’s management on an annual basis.

Another purpose served by KPIs is the ability to benchmark a single pipeline operator’s performance against a larger group. To achieve this type of benchmarking, several lagging KPIs are identified in Table 6 (Level 1) and Table 7 (Level 2) that should be collected to allow for inter-company comparison.

Leading and Lagging Indicators

Lagging indicators are those KPIs which measure an event after it has already occurred, whereas leading indicators help companies take a more proactive stance in managing their LDP.

Lagging Indicators

Lagging indicators are KPIs that measure an event after it has already occurred. They indicate the number of failures or events that have taken place in a given time period but do not necessarily assist in determining the underlying causal factors.

An example of a lagging indicator is a measure of how many pipeline leaks were alarmed by an LDS in a given time period, given that the LDS was designed to detect a leak of that size.

Leading Indicators (for a more proactive stance)


Leading indicators are used to predict a future outcome of a process. An example is a KPI measuring how consistently Pipeline Controllers are trained in the use, understanding, and operation of the LDSs implemented within a Control Center. The assumption is that well-trained Pipeline Controllers better understand the data presented to them and respond more effectively. Therefore, a KPI to reflect this may be the percentage of Pipeline Controllers who are trained on the concepts of the LDSs on an annual basis.

Level 1 and Level 2 KPIs are generally lagging KPIs and should be initially collected to allow industry-wide benchmarking of overall LDP performance. Levels 3 and 4 are only internally collected and reported.

The difference between Level 1 and Level 2 KPIs is based on whether or not the incident meets the PHMSA definition of a significant incident. Level 1 KPIs are LOC events that are PHMSA-reportable significant incidents.

Level 1 and Level 2 KPIs in this document are outcomes focused and are directly tied to some measure of each pipeline leak. Examples include the number of leaks that were detected by the leak detection system, the amount of product leaked where the LDS was designed to detect a leak of that size, and the cost to the pipeline operator from the leak where the LDS was designed to detect a leak of that size. These measures may help answer the question of whether LDSs are effective in detecting and minimizing the amount of product that leaks from the pipeline.

Level 3 KPIs are more operationally focused and help to understand how well LDSs are performing once implemented in a pipeline operator’s environment. Examples include the number of non-leak alarms generated from the LDS.

Level 4 KPIs are generally leading KPIs and may be useful to determine whether or not a defined process is being executed correctly. Level 3 and Level 4 events have the potential to lead to Level 1 or 2 events.

Dual Assurance

Dual assurance is a concept whereby a leading indicator at a lower level is matched with a lagging indicator at a higher level. The goal is to predict where the performance of a process is clearly and directly tied to performance at a higher-level objective. An example of this relationship in an LDP would be a leading KPI measuring the percentage of non-leak alarms that are analyzed, rationalized, addressed, and documented by the leak detection analyst in a given time period, paired with a lagging indicator measuring the Pipeline Controllers’ shutdown percentage in response to leak alarms. The assumption is that a more careful, thorough evaluation of non-leak alarms by a leak detection analyst and tuning of the LDS would result in a lower number of unwarranted shutdown situations. If the pipeline operator is able to properly address non-leak alarms in the LDS, only true leak alarms are indicated to the Pipeline Controller.

Data Normalization

Data normalization refers to the effort to make data comparable (for example, over time or between different entities). Normalization is necessary to compare data between various operators. For normalization to work, it is necessary to understand the basis of the data and to have a common definition for the items. For example, if the definition of a leak is different between operators, then it is not possible to compare their KPIs. In this RP, the leak definition in line with the CFR is recommended.
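As a sketch of such normalization (operator names and figures below are hypothetical), raw leak counts can be converted to a common exposure basis, such as leaks per 1,000 mile-years, before comparing operators of different sizes:

```python
# Hypothetical operator data (illustrative only; field names are
# assumptions, not definitions from API RP 1175).
operators = [
    {"name": "Operator A", "leaks": 6, "miles": 2400, "years": 3},
    {"name": "Operator B", "leaks": 2, "miles": 350, "years": 3},
]

def leaks_per_1000_mile_years(op: dict) -> float:
    """Normalize a raw leak count by pipeline-mile-years of exposure."""
    exposure = op["miles"] * op["years"]  # mile-years of operation
    return 1000.0 * op["leaks"] / exposure

for op in operators:
    rate = leaks_per_1000_mile_years(op)
    print(f'{op["name"]}: {rate:.2f} leaks per 1,000 mile-years')
```

Note that in this invented example Operator B has fewer raw leaks but a higher normalized rate, which is exactly the distortion normalization is meant to expose; the comparison is only valid if both operators also share a common definition of "leak".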


References

  1. American Petroleum Institute. (2014). Recommended Practice 1175, Pipeline Leak Detection Program Management (1st ed.). Washington, DC: Author.

A Look Inside API Recommended Practice 1175 Series

A Look Inside API Recommended Practice 1175, Part 1
A Look Inside API Recommended Practice 1175, Part 2
A Look Inside API Recommended Practice 1175, Part 3
A Look Inside API Recommended Practice 1175, Part 4
A Look Inside API Recommended Practice 1175, Part 5
A Look Inside API Recommended Practice 1175, Part 6
A Look Inside API Recommended Practice 1175, Part 8

Categories: Best practice advice, Industry update

By: Atmos International
Date: 17 April 2019