
Overcoming data challenges in #oil and #gas

Digital solutions aimed at oil and gas will struggle against deep-seated data challenges that plague the industry. What can be done about them?

 

 

I’ve been kicking around the oil and gas industry for a couple of decades, and in all that time we’ve made only a little progress against the same data issues. Well, change is in the air, because the next wave of business benefits from digital technology depends on resolving these challenges. Oil and gas companies are finally going to step up and address them.

 

Data origination

 

Much of the useful data in oil and gas actually originates with the industry’s suppliers of technologies, assets, services, and equipment. It’s their equipment that carries the sensors that create the data in the first instance. However, suppliers often have structural disincentives to provide open, standardised, easily accessed data to their customers, out of fear of commoditising their offerings. Indeed, many suppliers aim to sell fully integrated offerings, consisting of tightly coupled product families, that lock in their customers.

 

These “closed” systems do not have APIs for easy integration with other systems. For example, a downhole pump equipped with sensors may throw off lots of useful data, but that data is usually in a proprietary format, and the pump controllers lack the software smarts (whether by design or by neglect) to allow access to it. Closed systems prevent owners from bringing best-in-class components and leading-edge technology into their operations.
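
To make the contrast concrete, here is a minimal sketch of what open access to that pump data could look like, assuming a hypothetical controller that publishes its readings over a documented REST endpoint. The URL and field names are invented for illustration; a closed system simply offers no such call.

```python
# A minimal sketch of "open" access to pump sensor data, assuming a
# hypothetical controller with a standards-based REST endpoint that
# returns JSON. Endpoint and field names are invented for illustration.
import json
from urllib.request import urlopen

CONTROLLER_URL = "http://pump-controller.local/api/v1/telemetry"  # hypothetical

def read_pump_telemetry(url: str = CONTROLLER_URL) -> dict:
    """Fetch the latest sensor readings from an open pump controller."""
    with urlopen(url, timeout=5) as response:
        return json.load(response)

if __name__ == "__main__":
    telemetry = read_pump_telemetry()
    # e.g. {"intake_pressure_kpa": 3450, "motor_temp_c": 81, "rpm": 2900}
    for sensor, value in telemetry.items():
        print(f"{sensor}: {value}")
```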

 

Next generation buyers of oil and gas technology are challenging the entrenched suppliers of technology to the industry. Future procurement specifications for control systems will demand that they be standards-based, open, secure and interoperable.

 

 

Reliability, accuracy, availability

 

Oil and gas has many legacy practices that make data hard to work with. First, the data is stored in separate departmental or functional silos, in various incompatible formats, with technologies and solutions selected with narrow terms of reference. Frequently, the data is captured more as an afterthought than as an integral part of the business processes, with little concern for anomalies or inaccuracies. Errors in the data make it less trustworthy. Over time, measurement devices can drift out of alignment with real conditions and require the occasional recalibration, a costly exercise.

 

Engineers then spend considerable time just pulling these large datasets together, trying to overcome errors in the data to improve its reliability, or sharpening its accuracy to reflect real operating conditions.
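
As a sketch of that effort, the cleanup often amounts to something like the following in pandas. The file names, column names, unit conversion, and plausibility thresholds are all assumptions made for the illustration.

```python
# A hedged sketch of typical dataset reconciliation work. File names,
# columns, and thresholds are illustrative only.
import pandas as pd

# Pull the same measurements from two departmental silos.
field_data = pd.read_csv("field_readings.csv", parse_dates=["timestamp"])
plant_data = pd.read_csv("plant_historian.csv", parse_dates=["timestamp"])

# Reconcile incompatible formats: align column names and units.
plant_data = plant_data.rename(columns={"press_psi": "pressure_kpa"})
plant_data["pressure_kpa"] *= 6.895  # psi -> kPa

merged = pd.concat([field_data, plant_data], ignore_index=True)
merged = merged.drop_duplicates(subset=["well_id", "timestamp"])

# Flag obviously implausible readings before anyone trusts the numbers.
valid = merged["pressure_kpa"].between(0, 50_000)
print(f"Dropped {(~valid).sum()} implausible readings")
clean = merged[valid]
```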

 

Throw in a bit of organisational politics that slows everything down, plus a can-do, independent engineering culture, and, not surprisingly, oil and gas datasets duplicate and multiply with abandon, further clouding data reliability.

 

Variety and volume, accelerating

 

Oil and gas data shows the same two vectors as data in other industries – tremendous variety of data, and enormous volumes – with one important distinction. Industry data is generally structured (it’s tabular) rather than unstructured (it’s pictures). I think of this as “large” data rather than “big” data.

 

These two vectors, variety and volume, have always shown robust annual growth, but both are now accelerating thanks to digital advances. Next-generation technologies such as autonomous kit generate more of the unstructured data (more variety). As sensors fall in price, they will appear on more kit (more sources) and generate yet more data (more volume), more frequently.

 

Oil and gas is already proficient at collecting and consuming large data. There are even pockets of capability in big data (think seismic and map processing). But most oil and gas executives will readily admit that their companies are not very good at exploiting that data, or analysing it for insights and business improvement.

 

The industry will need to step up its efforts in managing what is a rapidly growing and materially shifting data environment.

 

Value of data

 

Accounting rules play a role in determining how oil and gas companies treat their data. For example, is data “free”, as in “no cost”? Is data valueless? Take a look at any oil and gas balance sheet and you will typically find just a few data assets. A good example is seismic data, which is usually recorded at the cost to recapture it, not the value that could be released through effective analysis.

 

In general, oil and gas entities do not record their data assets on their balance sheets with a value. Data is more associated with being a cost or an expense, recorded on the profit and loss statement as IT cost, and is something to be managed.

 

Expense items in business struggle to attract capital investment and talented people, and during the commodity downcycle, expenses come under intense cost pressure.

 

While capital markets and accounting rules may block highly formalised valuations for data assets, managers can always create their own internal accounting metrics to promote the right kinds of behaviour.
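
As an illustration only, such an internal metric could be as crude as scoring each dataset by its usage and the value of the decisions it supports. The function, weights, and numbers below are entirely invented, not a formal accounting treatment.

```python
# An illustrative internal "data value" metric. Inputs and the scoring
# formula are invented for the sketch; this is not accounting guidance.
def data_value_score(monthly_users: int,
                     decisions_per_month: int,
                     value_per_decision: float) -> float:
    """Rough internal score: usage breadth times decision value supported."""
    return monthly_users * decisions_per_month * value_per_decision

# Example: a well-test dataset used by 40 engineers, supporting 12
# optimisation decisions a month, each worth roughly $5,000.
print(data_value_score(40, 12, 5_000))  # 2,400,000
```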

 

Data accountability

 

With all this data, you would expect clear accountability for stewarding it.

 

Yes and no. Accountability is diffused and shared.

 

The most senior executive charged with managing commercial data is usually the Chief Information Officer, who typically reports to the CFO or to the SVP of Corporate Services. Her role is generally confined to the IT side of the business (ERP, trading, email, telephony): running the data centres and networks, adapting to technology advances, and providing cyber security.

 

The Chief Operating Officer will have accountability for operations technology (or OT): the systems that run the facilities that extract oil and gas, process it, and transport it along the value chain. Some oil and gas businesses are more asset-driven, and feature asset managers who are accountable for the OT of their own assets, and no others.

 

The VP Exploration will have accountability for all the geology and subsurface technical tools for interpreting geologic data.

 

This much diversity in accountability hinders the consistent management of data, introduces security weaknesses, and makes analysis of the business more difficult than it needs to be.

 

Net Net

 

When oil and gas prices are high and margins are robust, the traditional approach to data management is dismissible as a minor cost of doing business. However, with a prolonged downcycle, no more easily extracted costs, and powerful new data-driven technologies on the horizon, the traditional approach looks profoundly ill-suited to the future.

 

Taking action

 

Faced with a large and rapidly growing mountain of data that is of uncertain origin, possibly duplicated, of questionable accuracy and reliability, and perhaps stale-dated, and with promising digital tools coming quickly, where does one start?

 

Setting a Strategy

 

If data challenges were just technical in nature, companies could simply purchase a fix or develop a hack and be done. But as I’ve set out, the challenges are much more complicated and the issues are devilishly interrelated. Therefore, companies need a more strategic approach. Data issues must be tackled more holistically so that changes in one aspect (such as agreement to adopt data standards) are not in conflict with another (such as procurement policies based on least cost).

 

A good strategy for dealing with data would include a survey of the key issues surrounding data, a view of what “good” data management looks like, and a sense of who the leaders in dealing with data are (the leaders might well be outside the industry). The strategy would set out key goals and objectives for data (high reliability, low duplication, clear accountability, and so forth), the investments required to achieve those goals (technical, standards, procurement), an organisation and resources to own the data initiatives, and a set of metrics to monitor progress.

 

Drivers of value

 

A good step is to cut the mountain of data down to a workable size. This means quickly finding the data that matters. Start with the development of an oil and gas value driver tree, which is a representation of how value is created. The tree helps identify what data contributes to value creation and what data is of lesser value.
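
As a sketch, a driver tree can be as simple as a nested mapping from drivers to the datasets that feed them. The drivers and dataset names below are placeholders, not a recommended taxonomy.

```python
# A minimal value driver tree, mapping each driver to the datasets that
# inform it. Drivers and dataset names are placeholders for illustration.
value_driver_tree = {
    "production_revenue": {
        "uptime": ["equipment_sensor_logs", "maintenance_records"],
        "flow_rate": ["well_test_data", "choke_settings"],
    },
    "operating_cost": {
        "energy_use": ["pump_power_readings"],
        "chemical_spend": ["injection_volumes", "procurement_invoices"],
    },
}

def datasets_that_matter(tree: dict) -> set:
    """Walk the tree and collect every dataset linked to a value driver."""
    found = set()
    for drivers in tree.values():
        for datasets in drivers.values():
            found.update(datasets)
    return found

# The data that matters is whatever appears in the tree; everything else
# is, by this definition, of lesser value.
print(sorted(datasets_that_matter(value_driver_tree)))
```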

 

Once the drivers of value are clear, a value assessment can be placed on the data required to manage the driver. Managers can then properly allocate capital and talent based on the value of the data. Existing data sets that support the drivers of value can be identified.

 

Analytics tools

 

In my experience, oil and gas companies are not short of analytics tools. Indeed, the opposite problem is more likely – there are too many tools to choose from. Diversity of tools is not always a good thing – clever models built in one tool may not be usable in another.

 

I would put the tools into a tool library and let the power of crowd usage help drive tool uptake.
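
One simple way to let crowd usage do that work is a shared registry that records each use of a tool, so the most-used tools surface naturally. The sketch below is hypothetical in every detail, including the tool names.

```python
# A toy tool library where each recorded use bumps a counter, so the
# most-used tools rise to the top. Hypothetical as a design.
from collections import Counter

class ToolLibrary:
    def __init__(self):
        self.tools = {}          # name -> description
        self.usage = Counter()   # name -> times used

    def register(self, name: str, description: str) -> None:
        self.tools[name] = description

    def record_use(self, name: str) -> None:
        self.usage[name] += 1

    def most_popular(self, n: int = 5):
        return self.usage.most_common(n)

library = ToolLibrary()
library.register("decline_curve_fit", "Fits production decline curves")
library.register("pump_health_model", "Scores pump run-life risk")
library.record_use("decline_curve_fit")
library.record_use("decline_curve_fit")
print(library.most_popular())  # decline_curve_fit ranks first
```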

 

Data set repository

 

A big time saver is simply to make data assets more accessible. Instead of being hidden away on server drives, in departmental systems, or inside ERP systems, data sets could be stored in one place, accompanied by their descriptive data (or metadata). The metadata is key, as it enables searching, filtering, building relationships between datasets, labelling charts, providing references, and so on.
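
For illustration, a metadata record could be as simple as the sketch below. The fields shown are one plausible minimum, assumed for the example rather than drawn from any industry standard.

```python
# One plausible minimum metadata record for a dataset in a shared
# repository. Fields and example values are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    owner: str                  # accountable steward
    source_system: str          # where the data originates
    description: str
    tags: list = field(default_factory=list)
    related_datasets: list = field(default_factory=list)

catalog = [
    DatasetRecord(
        name="well_test_2016",
        owner="reservoir_engineering",
        source_system="field_data_capture",
        description="Quarterly well test results for the Alpha field",
        tags=["production", "well_test"],
    ),
]

# Metadata enables simple search and filtering across the repository.
hits = [d for d in catalog if "well_test" in d.tags]
print([d.name for d in hits])
```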

 

In time, new data can be made to conform to agreed data standards, but such standards need to be agreed first.

 

 
