Improving Pipeline Information Sharing

Improving Pipeline Information Sharing was originally published in the online magazine Informed Infrastructure on May 8, 2014.

On September 9, 2010, a section of natural gas pipeline in San Bruno, California, exploded, leading to significant property damage and the deaths of eight people. The cause of the explosion was ultimately traced to defective welds and a decision to increase the pipeline's operating pressure. Legal action was taken against the operating company, and a number of its executives are facing criminal indictments. Of particular interest is the charge, under the U.S. Natural Gas Pipeline Safety Act of 1968, that between 2003 and 2010 the company failed to maintain adequate records, evaluate risks of pipeline corrosion and leaks, and prioritize and address potential threats.

Operating a pipeline safely requires regular inspection, and information from a range of vendors, including component manufacturers, inspection companies, corrosion engineering firms, and component repair companies, must reach the operator. The operator uses this information to make critical decisions about the pipeline: increasing or decreasing the operating pressure; maintaining or replacing components; deploying security, cleanup, or emergency-response systems; notifying the public or regulatory authorities; or changing the schedule and type of pipeline inspections.

The pipeline industry has invested significant energy in the development of information standards, including PODS (the Pipeline Open Data Standard), the creation of the Petrotechnical Open Software Corporation (POSC), and the latter's CAESAR project. Over the years, CAESAR evolved from a vocabulary of terms into an ontology for the petroleum industry, focused on the upstream segment.

In spite of these developments, the routine acquisition, integration, and utilization of information by operators remains a challenge. In practice, information exchange is still very much an ad hoc affair, mixing structured data, where some semantics can be inferred from the document structure, and unstructured data, where they cannot. Even the encodings of the data sources, both structured and unstructured, are not common across vendors.

At the last meeting of the Open Geospatial Consortium (OGC), an ad hoc meeting was held, including operators, data-management companies, and engineering companies, to gauge interest in the development of a new standard, nominally labeled PipelineML (possibly Pipeline Modeling Language), focused exclusively on the exchange of data between vendors and operators. The meeting resulted in an agreement to create a PipelineML Standards Working Group (SWG), whose objective is to create the PipelineML standard and to drive its adoption and usage across the oil and gas pipeline industry.

The basic use cases underlying the proposed PipelineML are still emerging; however, it is clear that vendors need a standard way to report information relative to location in the pipeline. Such information may include descriptions of the pipeline components themselves (e.g., pipe diameter, section length, wall thickness), as well as structured documents, sensor data (e.g., defect measurements by pig devices), structured reports, and unstructured documents. The relationships among pipeline components, and between components and all other data elements at a specific point in time, will be critical. Making it easy for vendors to provide information in an automated way, with minimal disruption to their systems, will also be essential to broad adoption and usage. The ability to easily integrate legacy data of all kinds (e.g., CAD and GIS) will be equally important.
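To make this concrete, a PipelineML payload might tie a vendor's inspection measurements to a specific component at a specific location. The fragment below is purely illustrative; the SWG has not yet defined a schema, and every element and attribute name here is a hypothetical placeholder:

```xml
<!-- Hypothetical sketch only; not an actual PipelineML schema -->
<PipeSegment id="seg-1042">
  <diameter uom="in">30</diameter>
  <wallThickness uom="in">0.375</wallThickness>
  <sectionLength uom="ft">40</sectionLength>
  <InspectionReport vendor="ExampleInspectionCo" date="2014-03-12">
    <!-- A pig-run defect measurement, located by station along the line -->
    <measurement type="wallLoss" station="1042+15" uom="percent">12</measurement>
  </InspectionReport>
</PipeSegment>
```

The point of such an encoding is that location, component identity, and the vendor's finding travel together, so the operator's systems can integrate the report without manual re-keying.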

In terms of priorities, I would argue that getting a basic pipeline framework (components and connectivity) in place, against which any vendor can report information, is the first critical step. Not losing information is key, even if that information is not initially in a structured, readily integrable encoding. The next step is to get as much of the information as possible into a structured form so that it can be easily integrated and consumed by the operator's information systems.
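The framework idea described above can be sketched in a few lines of code: components plus connectivity first, with any vendor report, structured or not, attached to a component so that no information is lost. This is a minimal illustrative sketch, not the PipelineML data model, and all names in it are hypothetical:

```python
# Illustrative sketch only -- not the PipelineML data model.
from dataclasses import dataclass, field

@dataclass
class Component:
    component_id: str
    kind: str                                     # e.g. "pipe", "valve", "weld"
    reports: list = field(default_factory=list)   # raw or structured vendor data

@dataclass
class Pipeline:
    components: dict = field(default_factory=dict)
    connections: list = field(default_factory=list)  # (from_id, to_id) pairs

    def add_component(self, c: Component) -> None:
        self.components[c.component_id] = c

    def connect(self, a: str, b: str) -> None:
        self.connections.append((a, b))

    def attach_report(self, component_id: str, report) -> None:
        # Any vendor report is kept, even before it is in a structured form.
        self.components[component_id].reports.append(report)

line = Pipeline()
line.add_component(Component("P-001", "pipe"))
line.add_component(Component("W-001", "weld"))
line.connect("P-001", "W-001")
line.attach_report("W-001", {"vendor": "ExampleInspectionCo",
                             "finding": "wall loss 12%"})
```

Once the framework exists, "structuring" the data is incremental: an attached report can start life as an opaque document and later be replaced by a parsed, integrable record against the same component.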

If the effort and energy are invested, these information-interchange problems can be resolved, more informed decisions about pipeline operating parameters can be made, and tragedies like San Bruno can be minimized or avoided altogether.