
Passion for Change: Bonanza Creek Energy

with Kyle Gorynski, Director of Reservoir Characterization and Exploration at Bonanza Creek Energy

If you missed the first part in this series, you can find the start of our conversation here.

Why do you think there’s been so much hesitation around change?

There’s a strongly ingrained culture in oil and gas, with a generation of people who’ve been doing this for decades – although on completely different rocks, play types, and extraction techniques. There’s been pushback on adopting new geology or engineering software because people become so comfortable and familiar with the tools they use. I think a lot of it is simply the unique culture in oil and gas. Shale is still new, and our guesses are still evolving. I think we’re getting closer and closer to what that right answer is.

New organizations will have to adopt new technologies and adapt to new trends, because there’s no other way to make this business work.

As scientists, engineers, and managers in this space, we really have to be cognizant that the goal of a lot of vendors out there is not to help you get the right answer – it’s to make money. The onus is on us to vet everyone and make sure we’re getting the answers we want.

Machine learning is simply another tool, like a physics-based model, aimed at helping us predict an outcome and increase the precision and accuracy of those predictions. People have made pretty fantastic inferences from these kinds of tools.

You can’t just pay a company to apply machine learning to a project. You need to help them use the correct inputs and relationships, and ensure the predictions and outcomes match the observations you have from other datasets.

I don’t think any organization should be cutting-edge for the sake of being cutting-edge. The goal is to solve these very specific technical challenges quicker and with more accuracy. Our job is to extract hydrocarbons in the most safe, environmentally friendly, and economic fashion. Technologies like machine learning and AI are tools that can help us achieve these goals, but they need to be applied correctly.

Can you share any successes around data science or machine learning at your company?

The industry has been using these techniques for a long time. In their simplest form, people have been cross-plotting data since the early days of the oil and gas industry, trying to build relationships between things. At the beginning of my career, I remember using neural networks to predict log response.

Now we use predictive algorithms to help us predict log response where we don’t have certain logs. Let’s say we want to predict lithologies – carbonate, clay, quartz, and feldspar content in a well – we’ll build relationships between triple-combo logs and the more sophisticated, but scarce, elemental capture spectroscopy (ECS) logs. We don’t have ECS logs everywhere, but we have triple-combo everywhere, so if you can build a relationship between those, then you have a massive dataset you can use to map your asset. That’s a simple way we use this type of technology.
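As a rough illustration of the relationship-building described above, the sketch below trains a multi-output regression model on wells that have both triple-combo and ECS-derived mineralogy, then applies it to the much larger triple-combo-only population. The file names, log mnemonics, and model choice are illustrative assumptions, not the specific workflow used at Bonanza Creek.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Wells with both log suites (hypothetical file and column names).
logs = pd.read_csv("wells_with_ecs.csv")
features = ["GR", "RHOB", "NPHI", "RT"]                  # triple-combo inputs
targets = ["QUARTZ", "CARBONATE", "CLAY", "FELDSPAR"]    # ECS-derived fractions

X_train, X_test, y_train, y_test = train_test_split(
    logs[features], logs[targets], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Holdout R^2:", r2_score(y_test, model.predict(X_test)))

# Apply to the much larger population of wells with only triple-combo logs.
all_wells = pd.read_csv("wells_triple_combo_only.csv")
all_wells[targets] = model.predict(all_wells[features])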

Like almost every company now, we’re also predicting performance. That’s how we’re able to make live economic decisions. We have a tool where we can put in a bunch of geologic and engineering inputs and it’ll predict production rates through time that we can forecast, add new costs to, and run economics on live. We’re running Monte Carlo simulations on variable rates, volumes, lateral lengths, spacing, commodity pricing, and costs based on our best estimates to predict tens of thousands of outcomes and help us better understand what the best decision could possibly be. I think that’s the most impactful place it’s being used, and I think that trend is being adopted more and more in industry as I talk to my peers.
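To make the Monte Carlo idea concrete, here is a deliberately simplified sketch of sampling a few uncertain inputs and looking at the distribution of outcomes. Every distribution, decline parameter, and cost below is a made-up placeholder rather than anything from Bonanza Creek’s actual tool.

import numpy as np

rng = np.random.default_rng(42)
n = 10_000

oil_price = rng.normal(55, 8, n)                   # $/bbl
ip_rate   = rng.lognormal(np.log(800), 0.3, n)     # initial rate, bbl/d
decline   = rng.uniform(0.65, 0.85, n)             # first-year effective decline
well_cost = rng.normal(8.0e6, 1.0e6, n)            # drill & complete cost, $

months = np.arange(1, 121)
monthly_decline = 1 - (1 - decline[:, None]) ** (1 / 12)
volumes = ip_rate[:, None] * 30.4 * (1 - monthly_decline) ** (months - 1)   # bbl/month

revenue = (volumes * oil_price[:, None]).sum(axis=1)
value = revenue * 0.75 - well_cost                 # crude netback factor, no discounting

print("P10/P50/P90 value ($MM):", np.round(np.percentile(value, [90, 50, 10]) / 1e6, 1))
print("Probability of a positive return:", (value > 0).mean())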

Type curve generation is no longer grabbing a set of wells, grouping them together, and fitting a curve to them; it’s trying to predict the infinite number of outcomes that lie between the extremes.

Have you seen any success among your competitors using technology, specifically data science and analytics tools?

There’s some great work out there across the board. I had a lot of fun at Encana (now Ovintiv) seeing a lot of my peers, who are exceptionally smart, really trying to adopt new technology to solve problems. I’ve seen some amazing work getting people to adopt new ideas, new thoughts, new predictions. I like going to URTeC. I think that’s a fantastic conference. I always find a number of great pieces of technical work coming out of it.

I think the industry is doing a great job. There’s a ton of really smart people out there that know how to do this work. I think a lot of young people are really adopting coding and this bigger picture approach to subsurface, where it’s not just you’re an engineer or you’re a geoscientist, you really have to understand the fluid, the pore system, the stresses, what you’re doing to it. There’s no way we can be impactful unless we understand the really big picture, and people are getting much better at that, trying to use tools and develop skillsets that allow them to solve these problems a lot quicker.

How would you describe Petro.ai?

We see you guys as filling a gap we have. It’s the ability to pull data together. It’s the ability to apply big data techniques to our dataset in a way we quite frankly don’t have the time or the capability to do in-house. Petro.ai provides us with a very important service that allows us to get to a point that would take us 12-18 months to reach on our own, but in only a couple of months. What we really like is the fact that we’re developing something that’s unique and new and therefore has our input and involvement, so we’re not just sending you a dataset and asking for an answer – we’re trying to say what we think drives the results, and we also want your feedback. You’re also a group of experts who not only have your own experiences, but have seen other people’s assets and plays and how everyone else in industry is looking at them, so it’s nice to have a group of consultants with the same goal – to address a problem and try to figure it out. We want to get to an answer as quickly as we possibly can and start to apply those learnings as quickly as we possibly can.

Kyle Gorynski is currently Director of Reservoir Characterization and Exploration at Bonanza Creek Energy. Kyle previously worked at Ovintiv, where he spent 7 years in various technical and leadership roles, most recently as the Manager of Reservoir Characterization for their Eagle Ford and Austin Chalk assets. Although he is heavily involved on the technical side of subsurface engineering and geoscience, his primary focus is on their practical applications in resource and business development. Kyle received his B.S. and M.S. in Geology from the University of Kansas in 2008 and 2011, respectively.

Passion for Change: Bonanza Creek Energy

with Kyle Gorynski, Director of Reservoir Characterization and Exploration at Bonanza Creek Energy

Tell us about your background and what you do now.

I’m from Kansas and got my Bachelor’s and Master’s degrees in Geoscience at the University of Kansas, then moved straight out to Denver. I spent the first seven years of my career with Ovintiv, which was previously Encana, and had various roles, starting with mainly geology functions and eventually working as a manager. I’ve always been interested in the technical side of the industry and novel approaches to petrophysics, geomechanics, and reservoir mapping, and how new data and new analyses can drive decision-making. It’s important that our decisions are driven through science and statistics and less through the drill bit and opinion alone. I joined Bonanza Creek about a year and a half ago.

I’m the Director of Reservoir Characterization and Exploration. This role has two primary functions. One is an asset development function and the other is Business Development/Exploration. Asset development is the value optimization of our asset to maximize key economic metrics by:

  • understanding the subsurface to determine baseline performance
  • understanding key engineering drivers that impact performance 
  • applying those insights to modify things in real time like spacing, stacking, completion design, well flowback, etc.

At Bonanza Creek, one of the things our CEO Eric Greager likes to say is, “We’re unique because we have the agility of a small company, with the technical sophistication of a larger enterprise.” 

This allows us to respond to things that are changing quite rapidly, from the costs of goods and services to our own evolving understanding of the reservoir. By rapidly adapting, we’re able to maximize value and economic return.

That’s the main piece. The other part of the role is the exploration and business development function. We apply the same principles I just described to other assets inside and outside our basin and work with the greater operations and finance groups to determine an asset’s current value and what its potential future value could be.

How do you incorporate technology into your approach?

We apply technology everywhere we possibly can. We have powerful, technically savvy people who can develop and use tools to guide all our decision-making.

That’s where Petro.ai comes in. We need help building additional tools to make real-time decisions. That’s going to help us stay lean and agile but also make sure we have the right information to be making the most informed decisions. It comes down to the right data and the right people.

We need the ability to make decisions at multiple levels within an organization, so decisions can be made quickly without a top-down approach but with a high level of trust. We need to make sure the technical work is vetted and have a culture of best practices built-in for engineering and geoscience evaluation so we can have a lot of trust in our workflows. When that expertise is already built into tools—a lot of the equations, the input, the math—that helps us have trust in the inputs as well as the outputs and allows us to make those quick decisions.

How do you define real-time?

Our ultimate goal on the completions side is to be making real-time changes while we’re pumping – so minute by minute. That’s what motivated the project we have going with Petro.ai: to start turning knobs during the job, making sure the reservoir is sufficiently stimulated and not over-capitalized.

Let’s say we have sixteen wells per section permitted, but all of a sudden commodity prices drop and oil is at forty bucks, so we’ll only drill eight of those wells. Once those eight wells are in the ground, you’re stuck with that decision. Then maybe commodity prices go up, and we get a good price on sand or water; then we rerun the economics and revisit what the type curves look like. Then we’ll make a new decision, for example, on the amount of sand or water or the size of our stages.

We are making really quick decisions in terms of completions on a well-by-well basis. As price fluctuates and we’re teetering on the edge of break-even, we have to be real flexible in terms of trying to maximize the economics. 

Our next step with Petro.ai is using our 3D seismic data, well architecture, geosteering, and drilling data to understand what kind of rock we’re actually treating, so we can apply a specific design to that specific formation while also reading the rock at the same time. We’re taking that information, which is mainly pressure response during the job, and trying to learn from it live.

Why do you have a passion for change in this industry?

I’m passionate about understanding the subsurface and the whole E&P industry and our evolving understanding of unconventionals. I’ve always been passionate about the big picture, trying to zoom out and understand how everything interconnects.

Change is a reality. It’s unavoidable. The pace of change and the path you take differs between organizations. The industry as a whole has been incredibly slow to adopt change. We’re now on the verge of a large extinction event. The E&P companies that remain will be in a better position to thrive once this is all over.

Unfortunately, something even as simple as generating a return on your investments has eluded many companies. For these companies, it is often the stubborn top-down culture that has resisted change and is their greatest detriment. It’s an exciting and scary time today. However, these events allow the best to survive and force others to adapt and change – for the better. 

The investment in a single well can be approximately 6 to 10 million dollars for a 2-mile lateral in the U.S. Investments in learning are tens to hundreds of thousands of dollars for an entire year. There’s a lot of capital destruction that could have been avoided by doing homework, collecting data, doing analysis, and connecting the dots from math to modeling to statistics all the way to a barrel of oil.

Science often ends up in a folder. It takes good leadership, management, and technical work to ensure that you’re making decisions with all your data and information. The point is to make better decisions and to make better wells. 

The conversation continues here.

Kyle Gorynski is currently Director of Reservoir Characterization and Exploration at Bonanza Creek Energy. Kyle previously worked at Ovintiv, where he spent 7 years in various technical and leadership roles, most recently as the Manager of Reservoir Characterization for their Eagle Ford and Austin Chalk assets. Although he is heavily involved on the technical side of subsurface engineering and geoscience, his primary focus is on their practical applications in resource and business development. Kyle received his B.S. and M.S. in Geology from the University of Kansas in 2008 and 2011, respectively.

The Impact of Well Orientation on Production in North American Shale Basins

Full house at the January 22nd Calgary Lunch and Learn presented by Kyle LaMotta, VP of Analytics at Petro.ai, on the impact of well orientation in the Montney and Duvernay plays

As North American shale reservoirs reach maturity, the need to optimize development plans has become more demanding and essential. There are many variables that are within our control as we design a well: lateral length, landing zone, completion design, and so on. In situ stress is clearly not something we can directly control, but we can optimize around it using an under-appreciated design variable: well orientation.

The vast scale of available data on monthly production and well orientation provides an opportunity for data science and machine learning to help optimize this variable.

Our team at Petro.ai has done some clever work, investigating how well orientation with respect to the maximum horizontal stress can be an important variable in well design. This unique approach blends principles of geomechanics with data science techniques to uncover new insights in completions design.
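As a simple illustration of the kind of geomechanics-meets-data-science analysis described here, the sketch below computes each well’s angle relative to an assumed maximum horizontal stress (SHmax) azimuth and summarizes production by angle bin. The example wells, production figures, and SHmax value are hypothetical, not results from the study itself.

import numpy as np
import pandas as pd

# Hypothetical wells: lateral azimuth (degrees) and 12-month cumulative oil (bbl).
wells = pd.DataFrame({
    "lateral_azimuth_deg": [40, 92, 135, 47, 170, 60],
    "cum_oil_12mo": [210_000, 150_000, 95_000, 205_000, 120_000, 180_000],
})
shmax_azimuth = 45.0   # assumed regional SHmax azimuth, degrees

# Fold the difference into 0-90 degrees (a lateral and its reciprocal azimuth are equivalent).
diff = (wells["lateral_azimuth_deg"] - shmax_azimuth) % 180
wells["angle_to_shmax"] = np.minimum(diff, 180 - diff)

bins = pd.cut(wells["angle_to_shmax"], [0, 30, 60, 90], include_lowest=True)
print(wells.groupby(bins, observed=False)["cum_oil_12mo"].median())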

We had the incredible opportunity to visit Calgary on January 22nd, during a thankfully mild week, for a lunch and learn about the effects of well orientation on production in the Montney and Duvernay. We greatly appreciated the warm welcome, fantastic turnout, and keen interest in our presentation by Petro.ai VP of Analytics Kyle LaMotta. He brought his passion and expertise to an awesome and insightful event! 

Because of the amazing response in Calgary, we are planning to bring this lunch and learn to additional cities soon.

For more info on all our upcoming events, please join the Petro.ai community for updates. 


Rig Count and Operator Size: Recent Trends

Like many in our industry, the team here at Petro.ai keeps a close eye on oil prices, rig count, and analyst reports to stay in tune with what’s happening. Richard Gaut, our CFO, and I were discussing the recent trends in the rig count, which led us to dive into the data. Essentially, we were curious as to how different types of E&Ps are adjusting their activity in the current market. There’s been a lot of news recently on how the super-majors are now finally up to speed in unconventionals and that smaller operators won’t be able to compete.

Figure 1: North American Rig Count (from Baker Hughes)

The figure above shows the North American rig count from Baker Hughes, and we can see the steady recovery from mid-2016 through 2018, followed by another decline in 2019. But has this drop been evenly distributed among operators? TPH provides a more detailed breakdown of the rig count, segmenting the rigs by operator. I put the operators into one of three buckets:

  • Large-cap and integrated E&Ps
  • Mid-cap and small-cap publicly traded E&Ps
  • Privately held E&Ps

The figure below shows the breakdown between these three groups overlaid with the total rig count. You can see the mid and small caps in yellow are a shrinking segment. Figure 3 shows these groups as percentages and makes the divergence between the groups extremely visible.

Figure 2: Rig count segmented by operator type

Figure 3: Percentage breakdown of rigs by operator type
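For readers who want to reproduce this kind of breakdown, here is a minimal pandas sketch of grouping a rig-by-operator table into buckets and computing each bucket’s share of the total. The numbers and column names are hypothetical stand-ins for the TPH data, not the actual figures behind Figures 2 and 3.

import pandas as pd

# Hypothetical weekly rig counts already tagged with an operator bucket.
rigs = pd.DataFrame({
    "date":   ["2019-09-06"] * 3 + ["2019-09-13"] * 3,
    "bucket": ["Large-cap/integrated", "Mid/small-cap", "Private"] * 2,
    "rigs":   [260, 280, 330, 262, 268, 334],
})

weekly = rigs.pivot_table(index="date", columns="bucket", values="rigs", aggfunc="sum")
shares = weekly.div(weekly.sum(axis=1), axis=0) * 100
print(shares.round(1))   # percentage of active rigs held by each operator type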

Since the recovery in 2016, privately held companies have taken on a larger share of the rigs, and that trend continues through the recent downturn in 2019. This is likely because they are tasked by their financial backers to deploy capital and have no choice but to keep drilling. The large-caps and integrated oil companies have been staying constant or growing slightly since mid-2016, as has been reported. These operators have deep pockets and can offset losses in unconventionals with profits made elsewhere – at least until unconventionals become profitable. The story is very different for the small and mid-caps. These operators have experienced the sharpest drop in activity as they are forced by investors to live within their cash flows.

The data used in this analysis were pulled at the end of September. We typically see a slowdown in activity in Q4 and recent news shows that this slowdown might be worse than normal. It’s likely the divergence we see will only continue through the end of the year.

Next, I split out the rig count by basin and found some interesting trends there which I’ll elaborate on in a second blog post.


Gun Barrel Diagram: Calculate and Visualize Well Spacing Part 2

https://www.youtube.com/watch?v=B5t2gZd3rEs
Kyle LaMotta walks through the gun barrel diagram using Petro.ai and Spotfire.

Part two of this blog series builds on my last post and provides step-by-step procedures for dynamically calculating well spacing and building gun barrel diagrams with Petro.ai. The short video above from Kyle LaMotta highlights the key steps, but keep reading for a detailed procedure of how this is done.

Required Data

The gun barrel calculation requires two sets of data: wellbore directional surveys and formation structure grids. It is also possible to run the gun barrel calculation without structure grids, but an additional data function is required.

The surveys must have X, Y, Z, MD, and a well identifier, and the X and Y should be shifted to the correct geographic positions, not the X and Y offset values provided by the survey company. Check out this template if you need to convert a drilling survey with azimuth, inclination, and measured depth to XYZ. The formation grids must have an X, Y, Z, and formation name. It’s also important to note that the two data sources must be projected in the same coordinate reference system, with Z values referencing the same datum (e.g. TVDSS). Another useful template can be found here to convert a CRS in Spotfire. With this information, the Gun Barrel function will calculate the 3D distance between the midpoints of horizontal wellbores.
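If you need to do the azimuth/inclination/MD to XYZ conversion outside of the linked template, the sketch below shows the standard minimum-curvature method in Python. The survey stations are made up for illustration, and the linked Spotfire template remains the supported route.

import numpy as np
import pandas as pd

def min_curvature_xyz(md, inc_deg, azi_deg):
    """Convert an MD/inclination/azimuth survey to N, E, TVD offsets (minimum curvature)."""
    inc, azi = np.radians(inc_deg), np.radians(azi_deg)
    n, e, tvd = np.zeros(len(md)), np.zeros(len(md)), np.zeros(len(md))
    for i in range(1, len(md)):
        dmd = md[i] - md[i - 1]
        cos_dl = (np.cos(inc[i] - inc[i - 1])
                  - np.sin(inc[i - 1]) * np.sin(inc[i]) * (1 - np.cos(azi[i] - azi[i - 1])))
        dl = np.arccos(np.clip(cos_dl, -1.0, 1.0))
        rf = 1.0 if dl < 1e-9 else (2 / dl) * np.tan(dl / 2)
        n[i] = n[i - 1] + dmd / 2 * (np.sin(inc[i - 1]) * np.cos(azi[i - 1]) + np.sin(inc[i]) * np.cos(azi[i])) * rf
        e[i] = e[i - 1] + dmd / 2 * (np.sin(inc[i - 1]) * np.sin(azi[i - 1]) + np.sin(inc[i]) * np.sin(azi[i])) * rf
        tvd[i] = tvd[i - 1] + dmd / 2 * (np.cos(inc[i - 1]) + np.cos(inc[i])) * rf
    return pd.DataFrame({"MD": md, "N": n, "E": e, "TVD": tvd})

# Made-up survey stations for one well (MD in ft, angles in degrees).
survey = pd.DataFrame({"MD": [0, 1000, 2000, 3000], "INC": [0, 30, 85, 90], "AZI": [0, 120, 118, 119]})
xyz = min_curvature_xyz(survey["MD"].to_numpy(), survey["INC"].to_numpy(), survey["AZI"].to_numpy())
print(xyz)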

After loading these data sources into Spotfire (either via Petro.ai or by directly adding the tables), click Tools > Subsurface > Classify subsurface intervals. Clicking this will bring up the “Classify Subsurface Intervals” window. Here you will be prompted to fill in the dropdown menus with the relevant information.


Figure 4: Classify Subsurface Intervals (and Gun Barrel) Menu

XYZ Input: Map to wellbore surveys

The top left section, shown below, is used to map columns to your wellbore surveys table. Select your wellbore surveys data table, then fill in the X, Y, and Z dropdowns. Be sure to select “use active filtering”. This allows the user to filter and mark selected groupings of wells and determine the Gun Barrel distances for that selected grouping.


Figure 5: Classify Subsurface Intervals (and Gun Barrel) Menu: XYZ Input Box

Horizons: Map to surface grids

The top middle section, shown below, is used to map columns to your formation surface grids. Select your horizons data table. This will be a table that holds your horizon grid data points; it should contain X, Y, and Z data points for each individual horizon (see below for reference). Be sure to select “use active filtering”. This allows the user to filter the surface grids around the wells of interest, which will significantly improve computation time.


Figure 6: Classify Subsurface Intervals (and Gun Barrel) Menu: Horizons Input Box


Figure 7: Formation Grids data table requirement (example)

Output: Table names and columns

The top right section, shown below, is used to configure the output table and column names. Transfer Columns are simply the additional columns that will be displayed in the output data table. Check the column name boxes to include any metadata columns that will help you identify your wells.


Figure 8: Classify Subsurface Intervals (and Gun Barrel) Menu: Output options

Gun Barrel Settings

The bottom half of the window shown below enables the Gun Barrel calculations to run and allows the user to configure settings for the gun barrel.

To enable the gun barrel, check the “Enable Gun Barrel” checkbox. In the Wellbores section, select the Well identifier and MD from your Wellbore Survey data table. Next, use the input boxes to define buffer dimensions around the lateral. This allows the user to update the dimensions around the laterals, creating a rectangular prism for the calculations. In general, the preset values are sufficient.

The right side, Gun Barrel – Output, is used to name the spacing result table generated by the calculation. Use the input box to update the table name. At this point, you are ready to run the calculations: click the OK button.


Figure 9: Classify Subsurface Intervals (and Gun Barrel) Menu: Gun Barrel Input Box

Gun Barrel Spotfire Template

With all the proper data imported and mapped, it is now possible to use the Gun Barrel workflow. It’s possible to run the interval classifier and gun barrel calculator in any Spotfire DXP using Ruths.ai software, but we recommend starting with the Gun Barrel Spotfire template, as it’s preconfigured to automate this workflow. This template will be published soon – stay tuned for updates!

Select the Gun Barrel, mark a group of wells on the Map Chart, then click the “Update Gun Barrel” button on the left side menu. And that’s it! You have kicked off the calculation for the 3D distance between the horizontal midpoints of each selected wellbore. Just monitor the Spotfire notifications area to see when the function execution completes.

Figure 10: Map Chart: select wells

The map chart above displays the X-Y locations of the lateral midpoints. Depending on your preference, you can change these to the heel or toe if you have those points available in your data set.

After running the gun barrel calculation, the results are displayed in four different visualizations to help the user interpret these findings. Each will be explained below.

Figure 11: Gun Barrel Spotfire Template: (Side menu Update Gun Barrel Button)

The “Spacing Report” cross table shows a tabular view of the data. It takes each combination of wells and displays the 3D distance between each of the combination pairs (e.g. A to B, A to C, and B to C). This view also provides the dx, dy, and dz distances between the midpoints of each well pair, and a flag indicating whether the combination crosses a horizon interval.

Figure 12: Spacing Report: Tabular View
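Under the hood, the spacing math is just pairwise 3D distances between lateral midpoints. The following sketch reproduces that idea with made-up midpoint coordinates; it is an illustration of the calculation, not the Petro.ai implementation.

import pandas as pd
from itertools import combinations

# Hypothetical lateral midpoints (same CRS and datum for all wells).
mids = pd.DataFrame(
    {"x": [0.0, 660.0, 1320.0], "y": [50.0, 40.0, 60.0], "z": [-7900.0, -7950.0, -7890.0]},
    index=["Well A", "Well B", "Well C"])

rows = []
for a, b in combinations(mids.index, 2):
    d = mids.loc[b] - mids.loc[a]
    rows.append({"well_a": a, "well_b": b, "dx": d["x"], "dy": d["y"], "dz": d["z"],
                 "dist_3d": (d["x"] ** 2 + d["y"] ** 2 + d["z"] ** 2) ** 0.5})

spacing_report = pd.DataFrame(rows)
print(spacing_report)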

A table is great for looking up specific values; now let’s look at a visual representation of the same data. The Gun Barrel diagram does this by displaying the data in a 2D vertical cross-sectional view through, and perpendicular to, all wellbore lateral section midpoints, allowing the user to view the wellbore midpoints head on (as if staring down the barrel of a gun).

Figure 13: Gun Barrel Visualization

Important note: This is a 2D vertical cross section showing the midpoint along the lateral of the selected wells, regardless of whether they are in a direct line or are spatially staggered from a bird’s-eye view.

Figure 14: Using the spacing report and Gun Barrel together to interpret the data.

With these two views it is possible to quickly interpret the gun barrel calculations and visualize the spacing results.

Lastly, viewing this data in a map view and a 3D subsurface view helps to further orient the user to the wells’ spatial positions in the field (shown below).

Figure 15: 3D Subsurface view and Map Chart view.

Note that the Map chart shows the horizontal portion of each selected well, with the heel and toe points colored by their respective formation intervals. In the above example, the black dots represent the lateral section midpoints of each well.

Let me know if you have any issues with the above procedure. Good luck making your gun barrel plots and spacing reports!


Gun Barrel Diagram: Calculate and Visualize Well Spacing Part 1

Introduction

The Petro.ai Gun Barrel workflow allows the user to quickly find the 3D distances between the midpoints of the lateral section of selected nearby horizontal wells. This critically important information was once only possible to calculate using specialized geoscience software or through painstaking and time-consuming manual work. With the Petro.ai integrations, we can now calculate this information directly from Spotfire:


Figure 1: Petro.ai Gun Barrel View


From Science Pads to Every Pad: Diagnostic Measurements Cross the Chasm

My energy finance career began in late 2014. As commodity prices fell from $100 to $35, I had a front row seat to the devastation. To quote Berkshire Hathaway’s 2001 letter, “you only find out who is swimming naked when the tide goes out.” It turned out that many unconventional operators and service businesses didn’t own bathing suits. My job was to identify fundamentally strong businesses that needed additional capital … not to survive, but to thrive in the new “low tide” environment. To follow Buffett’s analogy, I was buying flippers and goggles for the modest swimmers.


Following 18 months of carnage, commodity prices began to improve in 2016. Surviving operators had been forced to rethink their business models, pivoting from “frac ‘n’ flip” to “hydrocarbon manufacturing”. Over the following two years, I familiarized myself with hundreds of service companies and their operator customers. The entire industry was chasing two seemingly conflicting objectives: 1) creating wells that are more productive and 2) creating wells that are less expensive. This is the Shale Operator Dual Mandate: make wells better while making them cheaper.

Shale Operator Dual Mandate


As an oil service investor, I was uniquely focused on how each company I met helped their customers accomplish the Shale Operator Dual Mandate. Services that achieved one goal would likely survive, but not thrive. However, those that helped customers meet both goals were sure to be the winners in the “new normal” price environment of $50 oil.

Transformations happened quickly throughout the OFS value chain. Zipper fracs drastically improved surface efficiencies, ultralong laterals were drilled further than ever before, in-basin sand mines appeared overnight, and new measurements came to the fore. These new measurements deliver incredible insights: fracture half length, well productivity by zone, vertical frac growth, optimal perforation placement, and much more.

In Basin Sand

New Measurements

While zipper fracs, long laterals, and local sand have taken over their respective markets, new measurements have struggled to gain traction outside of “science pads”. Frustrated technical service providers bemoan the resistance to change and slow pace of adoption in our industry. These obstacles failed to slow the advance of zipper fracs, long laterals, and local sand … why have disruptive new measurement technologies been on the outside looking in?

Challenge #1: Unclear Economics

The first challenge for new measurements is unclear economics. Despite the recent improvements, unconventional development remains cash flow negative … and has been since its inception.

The above data suggests ~ $400B of cash burn since 2001 … small wonder operators are wary of unproven returns on investment! (Note: to be fair, operators were incentivized by capital markets to outspend cash flow for the great majority of this period. Only recently has Wall Street evolved its thinking to contemplate cash on cash returns, as opposed to NAVs).

For any technology to become mainstream, it must either immediately lower costs (e.g. zipper fracs, local sand) or have obvious paybacks (e.g. long laterals). New measurements, by contrast, do not clearly map to economic returns. Instead, these service providers tend to focus on “interesting” engineering data and operational case studies. Operators will not put a technology into wide use until its economic impact is fully understood. This can mean waiting months for offset wells to come online or years for neighboring operators to release results.

Challenge #2: Changing How Customers Work

The second challenge, which is just as important to end users, is that service providers must deliver insights within a customer’s existing workflow. Operators are busier than ever before. E&P companies have experienced waves of layoffs, leaving those remaining to perform tasks previously done by now-departed colleagues.

In addition, many service providers don’t appreciate the opportunity cost of elongating an existing customer workflow to incorporate new variables. A smaller staff is already being asked to perform more work per person; it should be no surprise that customers are hesitant to allocate budget dollars to perform even more individual work.

Challenge #3: No Silver Bullets

While each new diagnostic data type is an important piece of the subsurface puzzle, no single element can complete the picture on its own. Instead, each measurement should be contextualized alongside others. For example, fiber optic measurements can be viewed alongside tracer data to better determine which stages are contributing the most to production. When each diagnostic data source is delivered in a different medium, it becomes nearly impossible to overlay these measurements into a single view.

The Oxbow Theory

The combination of the above factors leads to the “Oxbow Theory” of new measurement abandonment. As you may know, as rivers age, certain sections of the river meander off course. Over time, sediment is redistributed around the meander, further enhancing the river’s bend. Eventually, the force of the river overwhelms the small remaining ‘meander neck’, and an oxbow lake is created. Sediment deposited by the (now straight) river prevents the oxbow lake from ever rejoining the river’s flow. By the same token, new measurement techniques that do not cater to existing workflows may be trialed but will not gain full adoption. Instead, they become oxbow lakes: abandoned to the side of further-entrenched workflows.

Our Solution: Petro.ai

Petro.ai is the only analytics platform designed for oil and gas. If you’re an operator, we can help make sense of the tsunami of data delivered by a fragmented universe of service providers. If you’re a service company, we can help deliver your digital answer product in a format readily usable by your customers. Please reach out to info@petro.ai to learn more.


Extracting and Utilizing Valuable Rock Property Data from Drill Cuttings

Why the Fuss About Rock?

The relentless improvement in computing speed and power, furthered by the advent of essentially unlimited cloud-based computing, has allowed the upstream oil and gas industry to construct and run complex, multi-disciplinary simulations. We can now simulate entire fields – if not basins – at high resolution, incorporating multiple wells within highly heterogeneous structural and lithological environments. Computing power is no longer the limitation; we’re constrained by our input data.

Massive volumes of three-dimensional seismic data and petrophysical logs can be interpreted and correlated to produce detailed geological models. However, populating those models with representative mineralogical, geomechanical, and flow properties relies on interpolation between infrequent physical measurements and well logs. This data scarcity introduces significant uncertainty into the model, making it harder to history match with actual well performance and reducing predictive usefulness.

Physical measurements – to which wireline logs are calibrated and from which critical correlation coefficients are determined – are typically made on samples taken from whole cores. Some measurements can be performed on side wall cores collected after the well has been drilled using a wireline tool. However, intact side wall cores suitable for making rock property measurements aren’t always retrieved, and the sampling depth is relatively inaccurate compared to the precise drilling of test plugs from a whole core at surface.

The drilling, retrieval, and analysis of a full core adds significantly to well construction time and can cost anywhere from $0.5 to $2.0 million. In today’s cash-constrained world, that’s a tough ask. Core analysis has become faster and cheaper, but cores are still only collected from less than one percent of wells drilled. This produces a very sparse data set of rock property measurements across an area of interest.

Overlooked Rock Samples

Drill cuttings are an often-overlooked source of high-density rock samples. Although most wells are mud logged for operational geology purposes, such as picking casing points or landing zones, the cuttings samples are frequently discarded without performing any further analysis.

Geochemical analysis at the wellsite is sometimes used to assist with directional drilling. However, detailed characterization typically requires transporting the samples to a central laboratory where they can be properly inspected, separated from extraneous material, and processed to ensure a consistent and representative measurement.

At Premier, we encourage our clients to secure and archive cuttings samples from every well they drill. Even if the cuttings won’t be analyzed right away, they represent a rich, dense sample set from which lateral heterogeneity can be measured and used to fill in the gaps between wells where whole core has been cut and evaluated.

Figure 1: The images above show how rock properties can be collected, analyzed, and visualized from cuttings.

Rapid, high-resolution X-ray fluorescence (XRF) measurements and state-of-the-art X-ray diffraction (XRD) can be used to match mineralogical signatures from cuttings to chemofacies observed in offset cores.

Information from cuttings samples expedited from the wellsite for fast-turnaround analysis can be used to optimize completion and stimulation of the well being drilled. For example, geomechanical properties correlated with the chemofacies identified along a horizontal well can be used to adjust stimulation stage boundaries. The objective is for every fracture initiation point within a stage to encounter rock with similar geomechanical characteristics, increasing the probability of successful fracture initiation at each point.
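To make the chemofacies idea more tangible, here is a rough sketch of clustering wellsite XRF measurements along a lateral into facies and flagging the depths where the facies changes, which is where stage boundary adjustments might be considered. The input file, element list, and number of clusters are arbitrary placeholders, not Premier’s actual workflow.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical wellsite XRF data along the lateral: MD plus elemental concentrations.
xrf = pd.read_csv("cuttings_xrf.csv")            # columns: MD, Si, Ca, Al, Fe, Mg, ...
elements = ["Si", "Ca", "Al", "Fe", "Mg"]

scaled = StandardScaler().fit_transform(xrf[elements])
xrf["chemofacies"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)

# Depths where the facies changes are candidate stage-boundary adjustments.
xrf["facies_change"] = xrf["chemofacies"].ne(xrf["chemofacies"].shift())
print(xrf.loc[xrf["facies_change"], ["MD", "chemofacies"]])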

On a slower timeline, a more detailed suite of laboratory measurements can be completed, providing information about porosity, pore structure, permeability, thermal maturity, and hydrocarbon content.

The Cuttings Motherlode

In late 2017, Premier Oilfield Group acquired the Midland International Sample Library – now renamed the Premier Sample Library (PSL) – which houses an incredible collection of drill cuttings and core samples dating back to the 1940s. The compelling story of how we saved it from imminent demise is the subject of another article. Suffice it to say the premises needed some TLC, but the collection itself is in remarkable condition.

Figure 2: Original sample library at the time it was acquired by Premier.

The library is home to over fifty million samples from an estimated two hundred thousand wells – most of them onshore in the United States, and many within areas of contemporary interest like the Permian Basin. It gives us the ability to produce high-density rock property datasets that can be used to reduce the uncertainty in all manner of subsurface models and simulations.

New samples are donated to the library each week, many of them from horizontal wells. This provides invaluable insight into lateral facies changes and reservoir heterogeneity. Instead of relying only on the sparse core measurements discussed earlier, our 3D reservoir models can now be populated with superior geostatistical distributions conditioned with data from hundreds of sets of horizontal well cuttings. In an ideal world, cuttings samples from every new well drilled would be contributed to the collection, preserving that information for future generations of explorers and developers.

We have already helped many clients study previously overlooked intervals by reaching back in time to test samples from older wells that were drilled through those intervals on their way to historically more prolific horizons. Thanks to their predecessors’ rock-squirreling habits, these clients have access to an otherwise unobtainable set of data.

Our processing team works around the clock to prepare and characterize library samples, adding more than 2,000 consistently generated data points to our database each day. Since it would take years to work through the entire collection, we have prioritized wells that will give us broad data coverage across basins of greatest industry interest. Over time, guided by our clients’ data needs, we will expand that coverage and create even higher-density data sets.

Share and Apply the Data

At Premier Oilfield Group, we believe that generating and sharing rock and fluid data is the key to making more efficient and more effective field development decisions. The Premier Sample Library is a prime example of that belief in action.

Following an intense program of scanning, digitization, well identification, and location, we are now able to produce a searchable, GIS-based index for a large part of the collection. Many of the remaining boxes are hand-labeled with long-since-acquired operating companies and non-unique well names, but forensic work will continue in an attempt to match them with API-recognized locations.

We are excited to have just launched the first version of our datastak™ online platform. This will finally make the PSL collection visible to everyone. Visitors can see detailed, depth-based information on sample availability and any measurements that have already been performed. Subscribers gain access to data purchase and manipulation tools and, as the platform develops, we will add click-through functionality to display underlying test results and images.

Subscriptions for datastak™ cater to everyone from individual geologists to multi-national corporations. We want the data that’s generated to be as widely available and applicable as possible.

Rock properties available through datastak™ provide insights during several critical workflows. These often require integrating rock properties with other data sets, such as offset well information, pressure pumping data, and historical production. Many of the data types generated through cuttings analysis can already be brought into Petro.ai® and made readily available for engineers and geologists. This provides a rich data set, ready for analytics.

For example, Petro.ai® can be used to build machine learning models that tease apart the effects of completions design and reservoir quality on well performance. Premier and Ruths.ai continue working collaboratively to identify additional data types and engineering workflows that will help ground advanced analytics with sound geologic properties.

Armed with a consistent, comprehensive set of rock property data, developers will be in a position to separate spurious correlation from important, geology-driven causality when seeking to understand what drives superior well performance in their area of interest. Whether that work is being carried out entirely by humans or with the assistance of data science algorithms, bringing in this additional information will enable more effective field development and increase economic hydrocarbon recovery.

For further information on Premier Oilfield Group, please visit www.pofg.com.


Demystifying Completions Data: Collecting and Organizing Data for Analytics (Part 3)

As mentioned in my previous post, in order to really be of value, we need to extend this analysis to future wells where we won’t have such a complete data set. We need to build multivariate models using common, “always” data – like pump curves, geologic maps, or bulk data. Our approach has been for engineers to build these models directly in Spotfire through a side panel we’ve added, but to save these models back to a central location so that they can be version-controlled and accessed by anyone in the organization. Engineers can quickly iterate through a variety of models trained on our data set to review model performance and sensitivity.

If we have access to historical information from previous wells, we can run our model on a variety of data sets to confirm its performance. These could be past wells that had micro seismic or where we knew there were issues with containment. Based on these diagnostics, we can select a model to be applied by engineers on future developments. In order to make sure the model is used correctly, we can set fences on the variables based on our training set to ensure the models are used appropriately. Because the models are built by your team – not a third-party vendor – they know exactly what assumptions and uncertainties went into the model. This approach empowers them to explore their data and answer questions without the stigma of a black-box recommendation.

Figure 1: Your team builds the models in Petro.ai – not a third-party vendor – so you know exactly what assumptions and uncertainties went into them. This approach empowers you to explore your data in new ways and answer questions without the limitations of black-box recommendations.
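A minimal sketch of the “fences” idea follows: record the range each input covered in the training data and warn when a planned well falls outside it. The variable names, percentile choice, and values are illustrative assumptions, not Petro.ai’s implementation.

import pandas as pd

# Hypothetical training set of completed wells and a planned well to score.
train = pd.DataFrame({
    "lateral_length":  [7500, 9800, 10200, 8800, 9600],      # ft
    "proppant_per_ft": [1800, 2100, 2500, 1900, 2300],       # lb/ft
    "fluid_per_ft":    [40, 55, 60, 45, 50],                 # bbl/ft
})
planned = pd.Series({"lateral_length": 15000, "proppant_per_ft": 2200, "fluid_per_ft": 52})

# Fences: the range each input covered in training (5th-95th percentile here).
fences = {col: (train[col].quantile(0.05), train[col].quantile(0.95)) for col in train}

outside = [col for col, (lo, hi) in fences.items() if not lo <= planned[col] <= hi]
if outside:
    print("Model would be extrapolating on:", outside)       # -> ['lateral_length']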

However, in addition to fences, we need to make sure engineers understand how and when to apply the models correctly. I won’t go into this topic much, but will just say that the direction our industry is moving requires a basic level of statistics and data science understanding from all engineers and geologists. Because of this, Ruths.ai has incorporated training into our standard engagements.

Slightly different hypothesis

This example used a variety of data, but it only answers one question. It’s important to note that even slight variations in the question we ask can alter what data is needed. In our example, if instead of asking whether a specific frac design would stay within our selected interval we wanted to know whether the vertical fracture length changed over time, we would need a different data set. Since micro seismic is a snapshot in time, we wouldn’t know if the vertical frac stays open. A different data type would be needed to show these transient effects.

Data integration is often the biggest hurdle to analytics

We can start creating a map that ties the questions we are interested in answering back to the data they require. The point of the diagram shown here is not to demonstrate the exact mapping of questions to data types, but rather to illustrate how data integration quickly becomes a critical part of this story. This chart shows only a couple of questions we may want to ask, and you can see how complicated the integration becomes. Not only are there additional questions, but new data types are constantly being added, none of which add value in isolation – there is no silver bullet, no one data type that will answer all our questions.

Figure 2: Data integration quickly becomes complicated based on the data types needed to build a robust model. There is no silver bullet. No single data type can answer all your questions.

With the pace of unconventional development, you probably don’t have time to build dedicated applications and processes for each question. You need a flexible framework to approach this analysis. Getting to an answer cannot take 6 or 12 months; by then the questions have changed and the answers are no longer relevant.

Wrap up

Bringing these data types together and analyzing them to gain cross-silo insights is critical in moving from science to scale. This is where we will find step changes in completions design and asset development that will lead to improving the capital efficiency of unconventionals. I focused on completions today, but the same story applies across the well lifecycle. Understanding what’s happening in artificial lift requires inputs from geology, drilling and completions. Petro.ai empowers asset teams to operationalize their data and start using it for analytics.


Three key takeaways:

  • Specific questions should dictate data collection requirements.
  • Data integration is key to extracting meaningful answers.
  • We need flexible tools that can operate at the speed of unconventionals.

I’m excited about the progress we’ve already made and the direction we’re going.


Demystifying Completions Data: Collecting and Organizing Data for Analytics (Part 2)

As promised, let’s now walk through a specific example to illustrate an approach to analytics that we’ve seen be very effective.

I’m going to focus more on the methodology and the tools used rather than the actual analysis. The development of stacked pay is critical to the Permian as well as other plays. Containment and understanding vertical frac propagation is key to developing these resources economically. We might want to ask if a given pumping design (pump rate, intensity, landing) will stay in the target interval or break into other, less desirable rock. There are some fundamental tradeoffs that we might want to explore. For example, we may break out of zone if we pump above a given rate. If we lower the pump rate and increase the duration of the job, we need to have some confidence that the increase in day rates will yield better returns.

We can first build simulations for the frac and look at the effects of different completions designs. We can look at offset wells and historical data – though that could be challenging to piece together. We may ultimately want to validate the simulation and test different frac designs. We could do this by changing the pumping schedule at different stages along the lateral of multiple wells.


Data collection

With this specific question in mind, we need to determine what data to collect. The directional survey, the formation tops (from reference well logs), and the frac van data will all be needed. However, we will also want micro seismic to see where the frac goes. Since we want to understand why the frac is or is not contained, we will also need the stress profile across the intervals of interest. These could be derived from logs but are ideally measured from DFITs. We may also want to collect other data types that we think could be proxies for the stress profile, like bulk seismic or interpreted geologic maps.

These data types will be collected by different vendors, at different times, and delivered to the operator in a variety of formats. We have bulk data, time series data, data processed by vendors, data interpreted by engineers and geologists. Meaningful conclusions cannot be derived from any one data type, only by integrating them can we start to see a mosaic.

Integration

Integrating the data means overcoming a series of challenges. We first need to decide where this data will live. Outlook does not make a good or sustainable data repository. Putting it all on a shared drive is not ideal, as it’s difficult to relate. We could stand up a SQL database or bring all the data into an application and let it live there, but both have drawbacks. Our approach leverages Petro.ai, which uses a NoSQL back end. This provides a highly scalable and performant environment for the variety of data we will need. Also, by not trapping the data in an application (in some proprietary format), it can easily be reused to answer other questions or by other people in the future.



Getting the data co-located is a start, but there’s more work to be done before we can run analytics. Throwing everything into a data lake doesn’t get us to an answer, and it’s why we now have the term “data swamp”. A critical step is relating the data to each other. Petro.ai takes this raw data and transforms it using a standard, open data model and a robust well alias system, all built from the ground up for O&G. For example, different pressure pumping vendors will have different names for common variables (maybe even different well names) that we need to reconcile. We use a well-centric data model that currently supports over 60 data types and exposes the data through an open API.
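The alias problem is easy to underestimate, so here is a toy illustration of mapping each vendor’s well naming to a canonical identifier before joining data types. The alias table, column names, and API number are fabricated for the example and are not Petro.ai’s schema.

import pandas as pd

# Toy alias table: each vendor's well naming mapped to a canonical API number.
aliases = pd.DataFrame({
    "vendor_name": ["SMITH 1H", "Smith #1H", "SMITH UNIT 1H"],
    "api14":       ["42123456780000"] * 3,
})
frac = pd.DataFrame({"well": ["Smith #1H", "JONES 2H"], "max_pressure_psi": [9500, 9800]})

alias_map = dict(zip(aliases["vendor_name"], aliases["api14"]))
frac["api14"] = frac["well"].map(alias_map)

# Anything left unmapped still needs manual reconciliation before it can be joined.
print(frac.loc[frac["api14"].isna(), "well"].tolist())   # -> ['JONES 2H']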



Petro.ai also accounts for things like coordinate reference systems, time zones, and units. These are critical corrections to make since we want to be able to reuse as much of our work as possible in future analysis. Contrast this approach with the one dataset – one use case approach where you essentially rebuild the data source for every question you want to ask. We’ve seen the pitfalls of that approach as you quickly run into sustainability challenges around supporting these separate instances. At this point we have an analytics staging ground that we can actually use.
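As a concrete example of the kinds of corrections mentioned above, the sketch below reprojects coordinates from an assumed state plane system to WGS84 with pyproj and converts naive vendor timestamps from an assumed field time zone to UTC. The EPSG code, time zone, and sample values are assumptions for illustration only.

import pandas as pd
from pyproj import Transformer

# Hypothetical survey points delivered in a state plane CRS (EPSG:32039 assumed here).
surveys = pd.DataFrame({"x": [2_250_000.0, 2_250_500.0], "y": [350_000.0, 350_400.0]})

# Reproject to WGS84 longitude/latitude so different data sets overlay consistently.
to_wgs84 = Transformer.from_crs("EPSG:32039", "EPSG:4326", always_xy=True)
surveys["lon"], surveys["lat"] = to_wgs84.transform(surveys["x"].values, surveys["y"].values)

# Localize naive vendor timestamps to an assumed field time zone, then store UTC.
frac = pd.DataFrame({"timestamp": pd.to_datetime(["2019-06-01 08:00", "2019-06-01 08:01"])})
frac["timestamp_utc"] = frac["timestamp"].dt.tz_localize("America/Denver").dt.tz_convert("UTC")
print(surveys)
print(frac)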

Interacting with and analyzing data

With the data integrated, we need to decide how users are going to interact with it. That could be through Matlab, Spotfire, Python, Excel, or PowerBI. Obviously, there are trade-offs here as well. Python and Matlab are very flexible but require a lot of user expertise. We need to consider not only the skill set of the people doing the analysis, but also the skill set of those who may ultimately leverage the insights and workflows. Do only a small group of power users need to run this analysis, or do we want every completions engineer to be able to take these results and apply them to their wells? We see a big push for the latter, so our approach has been to use a combination of custom web apps we’ve created along with O&G-specific Spotfire integrations. Spotfire is widespread in O&G and it’s great for workflows. We’ve added custom visualizations and calculations to Spotfire to aid in the analysis. For example, we can bring in the directional surveys, grids, and micro seismic points to see them in 3D.


Figure 4: Petro.ai provides a user-friendly interface, meeting engineers where they already work, with integrations to Spotfire and web apps.

We now have the data merged in an open, NoSQL back end, and have presented that processed data to end users through Spotfire where the data can be visualized and interrogated to answer our questions. We can get the well-well and well-top spacing. We can see the extent of vertical frac propagation from the micro seismic data. From here we can characterize the frac response at each stage to determine where we went out of zone. We’re building a 360 view of the reservoir to form a computational model that can be used to pull out insights.
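To ground the “where did we go out of zone” step, here is a rough sketch of comparing micro seismic event depths per stage against an assumed target interval and flagging stages with poor containment. The event depths, interval bounds, and 80% threshold are all made up for the example.

import pandas as pd

# Hypothetical micro seismic events: stage number and event depth (TVDSS, ft).
events = pd.DataFrame({
    "stage":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "z_tvdss": [-7900, -7920, -7880, -7790, -7810, -7760, -7905, -7950, -7930],
})
target_top, target_base = -7850.0, -7990.0    # assumed interval bounds, TVDSS

in_zone = events["z_tvdss"].between(target_base, target_top)
summary = in_zone.groupby(events["stage"]).mean().rename("fraction_in_zone")
print(summary[summary < 0.8])                 # stages below an illustrative 80% containment threshold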

In the third and final post of this series, we will continue this containment example and review how we can extend our analysis across an asset. We’ll also revisit the data integration challenges as we expand our approach to other questions we may want to ask while designing completions.