
Demystifying Completions Data: Collecting and Organizing Data for Analytics (Part 2)

As promised, let’s now walk through a specific example to illustrate an approach to analytics that we’ve found very effective.

I’m going to focus more on the methodology and the tools used than on the actual analysis. The development of stacked pay is critical to the Permian as well as other plays, and containment – understanding vertical frac propagation – is key to developing these resources economically. We might want to ask whether a given pumping design (pump rate, intensity, landing) will stay in the target interval or break into other, less desirable rock. There are some fundamental tradeoffs we might want to explore. For example, we may break out of zone if we pump above a given rate; but if we lower the pump rate and stretch out the job, we need some confidence that the added day-rate costs will be repaid by better returns.
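As a rough illustration of that tradeoff, consider the back-of-the-envelope sketch below. All the numbers are illustrative assumptions, not field data; the point is only that pumping the same volume at a lower rate stretches the job and adds day-rate cost.

```python
# Back-of-the-envelope tradeoff: same fluid volume, lower rate, longer job.
# All inputs are illustrative assumptions, not field data.
stage_volume_bbl = 8_000     # fluid pumped per stage
stages = 50                  # stages in the lateral
day_rate_usd = 60_000        # assumed spread day rate

def job_days(rate_bpm):
    """Total pumping days for the well at a constant rate (ignores transitions)."""
    minutes = stages * stage_volume_bbl / rate_bpm
    return minutes / (60 * 24)

base_rate, reduced_rate = 90, 70   # bbl/min
extra_days = job_days(reduced_rate) - job_days(base_rate)
print(f"Extra pumping time: {extra_days:.1f} days, "
      f"extra cost: ${extra_days * day_rate_usd:,.0f}")
```

Whether that extra cost is worth paying depends on whether staying in zone at the lower rate actually improves recovery, which is exactly what the data described below is meant to test.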

We can first build simulations for the frac and look at the effects of different completions designs. We can look at offset wells and historical data, though that could be challenging to piece together. We may ultimately want to validate the simulation and test different frac designs. We could do this by changing the pumping schedule at different stages along the lateral of multiple wells.
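One way to lay out such a test is sketched below. It assumes a simple alternating design along the lateral (not anything prescribed here); the point is to assign pumping schedules to stages up front so the comparison is deliberate rather than ad hoc.

```python
import pandas as pd

# Hypothetical stage-level test design: alternate two pumping schedules along
# the lateral so each design samples a range of positions and rock quality.
designs = ["A_high_rate", "B_low_rate"]
stages = pd.DataFrame({"well": "WELL-1", "stage": range(1, 51)})
stages["design"] = [designs[(s - 1) % len(designs)] for s in stages["stage"]]

print(stages.groupby("design")["stage"].count())
```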


Data collection

With this specific question in mind, we need to determine what data to collect. The directional survey, the formation tops (from reference well logs), and the frac van data will all be needed. However, we will also want microseismic to see where the frac goes. Since we want to understand why the frac is or is not contained, we will also need the stress profile across the intervals of interest; this could be derived from logs but is ideally measured from DFITs. We may also want to collect other data types that we think could be proxies to relate back to the stress profile, like bulk seismic or interpreted geologic maps.
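A simple inventory keeps that list actionable. Here is a minimal sketch: the data types come from the list above, while the source and format notes are assumptions for illustration.

```python
# Data inventory for the containment question. Sources/formats are illustrative.
required_data = {
    "directional_survey": {"source": "directional driller", "format": "CSV / LAS"},
    "formation_tops":     {"source": "reference well logs", "format": "interpreted picks"},
    "frac_van_data":      {"source": "pressure pumper",     "format": "1-second time series"},
    "microseismic":       {"source": "microseismic vendor", "format": "event point set"},
    "stress_profile":     {"source": "logs / DFITs",        "format": "depth-indexed curve"},
}

missing = [name for name, info in required_data.items() if info.get("status") != "received"]
print("Still to collect:", missing)
```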

These data types will be collected by different vendors, at different times, and delivered to the operator in a variety of formats. We have bulk data, time series data, data processed by vendors, and data interpreted by engineers and geologists. Meaningful conclusions cannot be derived from any one data type; only by integrating them can we start to see a mosaic.

Integration

Integrating the data means overcoming a series of challenges. We first need to decide where this data will live. Outlook does not make a good or sustainable data repository. Putting it all on a shared drive is not ideal either, as it’s difficult to relate the files to one another. We could stand up a SQL database or bring all the data into an application and let it live there, but both have drawbacks. Our approach leverages Petro.ai, which uses a NoSQL back end. This provides a highly scalable and performant environment for the variety of data we will need. Also, by not trapping the data in an application (in some proprietary format), it can easily be reused to answer other questions or by other people in the future.
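To make the idea of a well-centric NoSQL back end concrete, here is a minimal sketch against a generic MongoDB collection using pymongo. The document shape, field names, and identifiers are assumptions for illustration and are not Petro.ai’s actual schema or API.

```python
from pymongo import MongoClient

# Illustrative only: a generic MongoDB store, not Petro.ai's data model.
client = MongoClient("mongodb://localhost:27017")
db = client["completions_analytics"]

well_doc = {
    "api14": "42123456780000",             # hypothetical well identifier
    "aliases": ["Smith 1H", "SMITH #1H"],  # names used by different vendors
    "directional_survey": [
        {"md_ft": 0.0,    "tvd_ft": 0.0,    "inc_deg": 0.0,  "azi_deg": 0.0},
        {"md_ft": 9500.0, "tvd_ft": 8900.0, "inc_deg": 90.0, "azi_deg": 175.0},
    ],
    "formation_tops": [{"name": "Target A", "tvd_ft": 8850.0}],
}

# One document per well; bulkier time series (frac van, microseismic) can live
# in their own collections keyed back to the same well identifier.
db.wells.replace_one({"api14": well_doc["api14"]}, well_doc, upsert=True)
```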



Getting the data co-located is a start, but there’s more work to be done before we can run analytics. Throwing everything into a data lake doesn’t get us to an answer, which is why we now have the term “data swamp”. A critical step is relating the data to each other. Petro.ai takes this raw data and transforms it using a standard, open data model and a robust well alias system, all built from the ground up for O&G. For example, different pressure pumping vendors will have different names for common variables (maybe even different well names) that we need to reconcile. We use a well-centric data model that currently supports over 60 data types and exposes the data through an open API.
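For the name reconciliation specifically, the idea looks roughly like the sketch below. The vendor channel names, the standard column names, and the alias table are all assumptions for illustration, not a published standard.

```python
import pandas as pd

# Hypothetical mappings from two pressure pumpers' channel names to one schema.
VENDOR_COLUMN_MAP = {
    "vendor_a": {"Treating Pressure (psi)": "treating_pressure_psi",
                 "Slurry Rate (bpm)":       "slurry_rate_bpm",
                 "Prop Conc (ppa)":         "proppant_conc_ppa"},
    "vendor_b": {"TR_PRESS":  "treating_pressure_psi",
                 "SLUR_RATE": "slurry_rate_bpm",
                 "PROP_CON":  "proppant_conc_ppa"},
}

# Hypothetical well alias table resolving vendor well names to one canonical id.
WELL_ALIASES = {"Smith 1H": "42123456780000", "SMITH #1H": "42123456780000"}

def standardize(df: pd.DataFrame, vendor: str, well_name: str) -> pd.DataFrame:
    """Rename vendor channels to the standard schema and attach a canonical well id."""
    out = df.rename(columns=VENDOR_COLUMN_MAP[vendor])
    out["api14"] = WELL_ALIASES[well_name]
    return out
```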



Petro.ai also accounts for things like coordinate reference systems, time zones, and units. These are critical corrections to make, since we want to be able to reuse as much of our work as possible in future analysis. Contrast this approach with the one-dataset, one-use-case approach, where you essentially rebuild the data source for every question you want to ask. We’ve seen the pitfalls of that approach: you quickly run into sustainability challenges around supporting these separate instances. At this point we have an analytics staging ground that we can actually use.
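Those corrections are mundane but easy to get wrong. A minimal sketch of the kind of normalization involved is below; the EPSG codes, time zone, and unit factor are assumptions for the example.

```python
import pandas as pd
from pyproj import Transformer

# Assumed inputs: coordinates in UTM zone 13N (EPSG:32613), timestamps in US Central.
to_wgs84 = Transformer.from_crs("EPSG:32613", "EPSG:4326", always_xy=True)

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # One coordinate reference system everywhere.
    df["lon"], df["lat"] = to_wgs84.transform(df["x_m"].values, df["y_m"].values)
    # Store all timestamps in UTC so vendors' local clocks can't collide.
    df["time_utc"] = (pd.to_datetime(df["time_local"])
                        .dt.tz_localize("US/Central")
                        .dt.tz_convert("UTC"))
    # One pressure unit everywhere (psi -> kPa).
    df["treating_pressure_kpa"] = df["treating_pressure_psi"] * 6.89476
    return df
```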

Interacting with and analyzing data

With the data integrated, we need to decide how users are going to interact with it. That could be through MATLAB, Spotfire, Python, Excel, or Power BI. Obviously, there are trade-offs here as well. Python and MATLAB are very flexible but require a lot of user expertise. We need to consider not only the skill set of the people doing the analysis, but also the skill set of those who may ultimately leverage the insights and workflows. Do only a small group of power users need to run this analysis, or do we want every completions engineer to be able to take these results and apply them to their wells? We see a big push for the latter, so our approach has been to use a combination of custom web apps we’ve created along with O&G-specific Spotfire integrations. Spotfire is widespread in O&G and it’s great for workflows. We’ve added custom visualizations and calculations to Spotfire to aid in the analysis. For example, we can bring in the directional surveys, grids, and microseismic points to see them in 3D.
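For readers who go the Python route instead, a rough stand-in for that 3D view might look like the matplotlib sketch below; the file names and column names are assumptions.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical inputs: a directional survey and a microseismic event set.
survey = pd.read_csv("survey.csv")         # columns: east_ft, north_ft, tvd_ft
events = pd.read_csv("microseismic.csv")   # columns: east_ft, north_ft, tvd_ft, stage

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(survey["east_ft"], survey["north_ft"], -survey["tvd_ft"], label="wellbore")
sc = ax.scatter(events["east_ft"], events["north_ft"], -events["tvd_ft"],
                c=events["stage"], s=8)
ax.set_xlabel("East (ft)"); ax.set_ylabel("North (ft)"); ax.set_zlabel("TVD (ft)")
fig.colorbar(sc, label="stage")
plt.show()
```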


Figure 4: Petro.ai enables a user-friendly interface, meeting engineers where they already work through integrations with Spotfire and web apps.

We now have the data merged in an open, NoSQL back end and have presented that processed data to end users through Spotfire, where it can be visualized and interrogated to answer our questions. We can get the well-to-well and well-to-top spacing. We can see the extent of vertical frac propagation from the microseismic data. From here we can characterize the frac response at each stage to determine where we went out of zone. We’re building a 360-degree view of the reservoir to form a computational model that can be used to pull out insights.
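One simple way to characterize containment stage by stage from those inputs is sketched below; the column names, the target interval depths, and the 70% threshold are assumptions for illustration.

```python
import pandas as pd

# Hypothetical inputs: microseismic events tagged by stage, plus the target interval.
events = pd.read_csv("microseismic.csv")   # columns: stage, tvd_ft
target_top_tvd_ft, target_base_tvd_ft = 8850.0, 9050.0

events["in_zone"] = events["tvd_ft"].between(target_top_tvd_ft, target_base_tvd_ft)
by_stage = (events.groupby("stage")["in_zone"]
                  .mean()
                  .rename("fraction_in_zone")
                  .reset_index())

# Flag stages where less than 70% of events fall inside the target interval.
by_stage["out_of_zone_flag"] = by_stage["fraction_in_zone"] < 0.70
print(by_stage)
```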

In the third and final post of this series, we will continue this containment example and review how we can extend our analysis across an asset. We’ll also revisit the data integration challenges as we expand our approach to other questions we may want to ask while designing completions.

 


Demystifying Completions Data: Collecting and Organizing Data for Analytics (Part 1)

The oil and gas industry collects a huge amount of data trying to better understand what’s happening in the subsurface. These observations and measurements come in a range of data types that must be pieced together to garner insights. In this blog series we’ll review some of these data types and discuss an approach to integrating data to better inform decision making processes.

Before getting into the data, it’s important to note why every company needs a data strategy. Capital efficiency is now the name of the game in unconventionals. Investors are pushing for free cash flow, not just year-over-year increases in production. The nearby slide is from one operator, but virtually every investor deck has a slide like this one. There are positive trends that operators can show: price concessions from service providers; efficiency gains in drilling, completions, and facilities; and increases in lateral length. Despite these gains, as an industry, shale is still not profitable. How much further can operators push these trends? How will this chart be created next year? Single-silo efficiencies are gone, and the next step change will only come from an integrated approach where the data acquired across the well lifecycle can be unlocked to fuel cross-silo insights.

Figure 1: Virtually every investor deck has a figure like this one. There are positive trends that operators can show: price concessions from service providers; efficiency gains in drilling, completions, and facilities; and increases in lateral length. Despite these gains, as an industry, shale is still not profitable. How much further can operators push these trends? How will this chart be created next year?

This is especially true in completions, which represents 60% of well costs and touches so many domains. What does completions optimization mean? It’s a common phrase that gets thrown around a lot. Let’s unpack this wide-ranging topic into a series of specific questions.

  1. How does frac geometry change with completions design?
  2. How do you select an ideal landing zone?
  3. What operations sequence will lead to the best outcomes?
  4. What effect does well spacing have on production?
  5. Will diverter improve recovery?

This is just a small subset, but we can see these are complex, multidisciplinary questions. As an industry, we’re collecting and streaming massive amounts of data to try and figure this out. Companies are standing up centers of excellence around data science to get to the bottom of it. However, these issues require input from geology, geomechanics, drilling, reservoir engineering, completions, and production – the entire team. It’s very difficult to connect all the dots.

There’s also no one-size-fits-all solution; shales are very heterogeneous, and your assets are very different from someone else’s, both in the subsurface and at the surface. Tradeoffs exist, and design parameters need to be tied back to ROI. Here again, there are significant differences depending on your company’s strategy and goals.

Managing a data tsunami

When we don’t know what’s happening, we can observe, and there are a lot of things we can observe and a lot of data we can collect. Here are some examples that I’ve grouped into two buckets: diagnostic data that you would collect specifically to better understand what’s happening, and operational data that is collected as part of the job execution.

The amount of data available is massive, and it is only increasing as new diagnostic techniques, new acquisition systems, and new edge devices come out. What data is important? What data do we really need? Collecting data is expensive, so we need to make sure the value is there.

Figure 2: Examples of diagnostic data that you would collect specifically to better understand what’s happening and operational data that is collected as part of the job execution.

The data we collect is of little value in isolation. Someone needs to piece everything together before we can run analytics and before we can start to see trends and insights. However, there are no standards around data formats or delivery mechanisms, so operators have had to bear the burden of stitching everything together. This is a burden not only for operators; it also creates problems for service providers, whose data is delivered as a summary PDF with raw data in Excel and is difficult to use beyond the original job. The value of their data and their services is diminished when their work product has only limited use.

Thinking through an approach

A common approach to answering questions and collecting data is the science pad, the scope of which can vary significantly. The average unconventional well costs between $6M and $8M, but a science pad can easily approach $12M, and that doesn’t take into account the cost of the time people will spend planning and analyzing the job. This exercise requires collecting and integrating data, applying engineering knowledge, and then building models. Taking science learnings to scale is the only way to justify the high costs associated with these projects.

Whether on a science pad or just as part of a normal completions process, data should be collected and analyzed to improve the development strategy. A scientific approach to completions optimization can help ensure continuous improvement. This starts with a hypothesis – not data collection. Start with a very specific question. This hypothesis informs what data needs to be collected. The analysis should then either validate or invalidate our hypothesis. If we end there, we’ve at least learned something, but if we can go one step further and find common or bulk data that are proxies for these diagnostics, we can scale the learnings with predictive models. Data science can play a major role here in avoiding far-reaching decisions based on very few sample points. Just because we observed something in 2 or 3 wells where we collected all this data does not mean we will always see the same response. We can use data science to validate these learnings against historical data and understand the limits of where we can apply them versus where we may need to collect more data.
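A sketch of what that last step can look like in practice is below; the feature names, the out-of-zone label, the choice of a simple classifier, and the cross-validation setup are all assumptions for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical training table: one row per stage, proxy features that exist on
# every well, and an out-of-zone label derived from the diagnostic wells.
stages = pd.read_csv("stage_training_table.csv")
features = ["slurry_rate_bpm", "proppant_lbs_per_ft",
            "landing_offset_ft", "stress_contrast_psi"]
X, y = stages[features], stages["out_of_zone_flag"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With only a handful of heavily instrumented wells the training table stays small, so cross-validation here is less about squeezing out accuracy and more a sanity check on whether the proxies carry any signal before they are trusted on offset wells.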

In part 2 of this series, we’ll walk through an example of this approach that addresses vertical frac propagation. Specifically, we’ll dive into collecting, integrating, and interacting with the required data. Stay tuned!