Petro.ai welcomes Dr. Nathan Meehan as Senior Advisor for Reserves and Emissions

Petro.ai expands its Technical Advisory Board with the addition of Dr. D. Nathan Meehan, President of CMG Petroleum Consulting and the 2016 President of the Society of Petroleum Engineers. 

“Throughout his career, Nathan has become known as an extraordinary engineer and an even better leader,” explains Dr. Troy Ruths, Founder and CEO of Petro.ai. “We are thrilled to partner with such an outstanding individual. It is our aim to infuse each Petro.ai workflow with the care and expertise that Nathan has delivered to the industry.”

Previously, Dr. Meehan was President of Gaffney, Cline & Associates; Senior Executive Advisor for Baker Hughes; Vice President of Engineering for Occidental Petroleum; and General Manager, Exploration & Production Services for Union Pacific Resources. Dr. Meehan holds a BSc in Physics from the Georgia Institute of Technology, an MSc in Petroleum Engineering from the University of Oklahoma, and a PhD in Petroleum Engineering from Stanford University. He is an SPE Distinguished Member and Honorary Member, and the recipient of the SPE Lester C. Uren Award for Distinguished Achievement in Petroleum Engineering, the DeGolyer Distinguished Service Medal, and the SPE Public Service Award. Dr. Meehan has also received the World Oil Lifetime Achievement Award and the Petroleum Economist Legacy Award. He has served on the National Petroleum Council and is a long-standing member of the Interstate Oil and Gas Compact Commission.

“I am very proud to be joining the Petro.ai team as the Senior Advisor for Reserves and Emissions,” reports Dr. Meehan. “I’ve been fortunate to work with the world’s largest energy companies through reserves reporting processes and I also share Troy’s passion for delivering tools to the industry that will foster reductions in emissions, and ultimately, a transition to clean burning energy.”

With this addition, the Petro.ai Technical Advisory Board includes global experts in both geomechanics and reservoir engineering.

“I’m absolutely delighted that Nathan is joining Petro.ai. I’ve known Nathan since he was a PhD student at Stanford several decades ago. More importantly, I have had the pleasure to connect with him a number of times since then: in his leadership roles with both operating and service companies, his activities as a private consultant, and his professional service as President of the SPE,” explained Dr. Mark Zoback, Petro.ai Senior Advisor in Geomechanics.  “I can think of no one who could bring a wider range of experience and expertise to Petro.ai and help us to better serve our current and future clients through cutting-edge software and services.”


Looking Back on Hacking for Houston 2020

Bringing together O&G technical experts and public health professionals

Earlier this year, before the era of social distancing and remote work, Petro.ai dedicated time and effort to give back to the community with data science. We created the “Hacking for Houston” event, in partnership with Harris County Public Health (HCPH), to give the Petro.ai user base a voice for good in the communities where we live and work.

Watch now.

Uche Arizor, Team Lead at PHI Lab, the innovation arm of HCPH, commented, “Our mission is to facilitate cross-sector collaboration, creativity, and innovation in public health practice. Partnering with Petro.ai for Hacking for Houston 2020 was a great opportunity to bring people together from the sectors of oil & gas, data science, and public health to work on real issues that HCPH would like to address.”

All of us were surprised when the night before the hackathon, a water main burst in downtown Houston (remember this?). After all the hard work put into organizing the event with our partner, the Public Health Innovations Lab (PHI Lab) at Harris County Public Health, we decided to press on with the event. We are so glad that we did!  Little did we know, this was our last opportunity to interact with a large group of people in our office space. More importantly, participants were able to deliver actionable insights!

We encouraged anyone with a passion for data science to attend, especially our clients and partners, as well as university students in data science and public health. We were unsure if attendees would still be able to join us in light of the water main break—but even the turnout for the optional two-hour morning workshop was fantastic. Shota Ota and other members of the Petro.ai support team covered tools and topics useful for the Hackathon. 

After lunch, the hackathon began with a high-intensity couple of hours where participants worked in teams of one to three people to build and code projects. Teams were not restricted to any particular software and deployed a variety of tools, including Power BI, Spotfire, R, Python, ArcGIS, Excel, Jupyter notebooks, and even open-source 3D visualization software.

Three Challenges were laid out to participants, each with actual data provided by HCPH. Teams then chose one of the available challenges to work on during the event. 

Go Upstream for Downstream Costs

Objectives:   

  • Identify the rates of preventable hospitalization types and charges from zip codes with the highest rates of preventable visits.  
  • Create profiles of select zip codes that explore trends in socio-demographics, health outcomes, and issues in health care access.   

Increase Government Efficiency

Objectives:   

  • Model the overlap and gaps in services provided by current facility locations based on community need (population density, poor health outcomes, etc.).
  • Identify pilot sites for the co-location of public health, clinical health, and mental health services, while justifying community needs around the site. 
  • Explore the impact of other public and private facilities that may offer similar services in affected communities.   

Reducing West Nile virus (WNV) Disease Risk

Objectives:   

  • Use disease, mosquito, environmental, and population data from the past 4 years to develop a model that predicts which areas in Harris County are at higher risk for WNV disease transmission than others (a hypothetical starting point is sketched after this list).
  • Identify the key factors that influence WNV disease risk in Harris County as a whole or in different clustered communities. 
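
For a sense of what a starting point on the West Nile challenge might look like, here is a minimal sketch. Everything in it is hypothetical: the file name, column names, and features are illustrative stand-ins for the data HCPH provided, and a real entry would need spatial validation rather than a random split.

```python
# Hypothetical sketch: score Harris County areas for WNV transmission risk
# with a simple, explainable classifier.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("wnv_area_history.csv")            # illustrative file name
features = ["mosquito_pool_positivity", "rainfall_in",
            "population_density", "standing_water_sites"]  # illustrative

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["wnv_case_reported"], random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# Inspecting model.coef_ then speaks to the second objective: which factors
# most influence predicted risk.
```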

At 5 pm, each team gave a 5-minute presentation or “pitch” to the panel of judges and other participants. Their projects were judged in four categories: communication, technical achievement, creativity, and aesthetics. Our 2020 judges included Dr. Dana Beckham, Director of the Office of Science, Surveillance, and Technology, HCPH; Dr. Lance Black, Associate Director, TMCx; and Dr. Troy Ruths, Founder and CEO, Petro.ai.

The judges were impressed by all the teams and how much they were able to accomplish in just four hours. Each team presented their findings and their recommendations for HCPH. The winning team consisted of Callie Hall from the Houston Health Department, Elena Feofanova, a PhD candidate at UT Health, and Alex Lach, a reservoir engineer at Oxy. Their team chose Challenge 2, Increase Government Efficiency, and combined outstanding data analysis with a great pitch.  

Dr. Beckham, Director of the Office of Science, Surveillance, and Technology at HCPH, said, “The hackathon was a great way to network with future leaders and address public health issues in a creative and innovative way. Information taken back will be implemented to assist with making better business decisions to provide services to Harris County residents. It was a great opportunity for government (HCPH) and private industry (Petro.ai) to work together for equity and better health outcomes for the community.” 

The success of Hacking for Houston 2020 made it an easy decision for us to bring it back in the future. If you missed the event, join the Petro.ai Community to stay up to date and hear about our next hackathon.


Passion for Change: Geoffrey Cann

with Richard Gaut, CFO & COO of Petro.ai and Geoffrey Cann, Speaker, Trainer, and Author of Bits, Bytes, and Barrels: The Digital Transformation of Oil and Gas

Watch Now.


Full Transcript

Richard Gaut: Geoff, thanks for joining us on our Passion for Change interview series! You’ve written a really influential book, Bits, Bytes, and Barrels: The Digital Transformation of Oil and Gas, that our customers have been talking about, that’s getting a lot of publicity, and that is part of the zeitgeist now. It is so great to have you on as a guest for our very first video interview. 

Geoffrey Cann: I very much appreciate the invite. Thank you so much. 

RG: Across the oil and gas complex, what do you see as the technology with the largest opportunity for a digital solution to make an impact?

GC: The digital solution which offers the greatest potential today – by far – is the world of artificial intelligence and machine learning. The oil and gas industry is blessed with enormous deposits of data assets, which have accumulated over the years and will continue to accumulate. 

Unfortunately, this data sits in places where the industry has either forgotten it exists, doesn’t understand its value, or dismisses it out of hand as “dirty data”. I believe that the fastest way to value in the world of digital isn’t necessarily to generate more data, though that’s very easy to do. Instead, it is harvesting the data assets that you’ve already got. If you wanted a lever that you could pull today that would yield a meaningful outcome, it would be to apply artificial intelligence or machine learning somewhere in the business.

RG: Where do you see folks on this transition from managing their own IT systems versus finding service providers in the cloud? What’s the industry doing to manage these huge data volumes?

GC: Well, the first challenge that the industry has to come to grips with is, as you point out, the enormous growth in the volume of data out there. There’s the first problem. How do you get your arms around all of this data and, if you’re an oil company, can you afford to stand up your own incremental infrastructure year-on-year, just to store all of this data?

That brings with it all kinds of other interesting questions: Where do you locate your data center? How do you handle backups and recoveries? How do you build in your redundancies? What about your redundant power supplies? How are you going to even fuel it, given how much energy goes into running a data center?

The leading oil and gas companies have concluded that the right answer is to shift off of this “roll your own” infrastructure and to leverage the capabilities afforded by the large cloud computing companies: migrating out of your proprietary data center and onto cloud infrastructure. That, in turn, opens up all of the new business model possibilities that we’ve seen from other industries that have migrated ahead of oil and gas. That’s step one.

Step two, though, has to be investing in the talent and the capability to take the data that you’re sitting on and make sense of it. That’s where the need to bring onboard data scientists and other data specializations comes from; so that you can begin to extract the value promise from all of that data.

RG: There’s a really great phrase in the book that I learned and I hadn’t heard this one before, Geoff. It’s “wetware.” Can you tell me what “wetware” is?

GC: Well, if software is what’s on your computer and hardware is an iPhone, then wetware is you and me. We are our own compute capacity. It’s just up here in your brain where things are wet! So wetware refers to the humans that are working with both hardware and software. For the time being, we are going to be in a wetware world. We’re going to have lots of people managing and administering our facilities and our assets. 

RG: The fact of the matter is that humans just weren’t designed for it. Wetware is simply not capable of digesting, ingesting, or contextualizing the incredible volumes of data that we’re now privy to.

GC: Quite right. As humans, we learn at a certain pace and so we are at a significant disadvantage when you think about the pace of digital change in how fast machines are able to learn.

RG: It feels like there are some top-down initiatives at the board level to undertake these transformation efforts, but when the rubber meets the road inside the company, things are more challenging. What have been the successful strategies that companies have undertaken to take a tangible first step after that memo comes down from the board?

GC: The short and quick path forward that most companies take is to create some kind of digital task force, innovation council, or digital Center of Excellence somewhere in their organization. Then it becomes this group’s job to move digital initiatives forward. This can work, but in my view it needs four essential ingredients for success. One, it needs organization. Two, it needs resources so it can actually do things: money and budget to spend. Three, it needs ways of working. Four, the team needs real, hard measures of success. If you don’t have those four ingredients, your task force is not going to be successful.

The second thing you have to have in place is implementation in a business unit. To get to a successful outcome, the business unit itself has to be ready to embrace this digital change. That means changing the performance metrics for the manager in that unit. Then, you need to train the workforce in that unit so that they know that what’s coming at them is an expectation of the company. If the workforce doesn’t embrace these changes and drive digital growth, then the whole unit will suffer.

RG: You make a really interesting argument about what competes for capital in an up market versus what competes for capital in a down market. I would love to hear your specific thoughts about that.

GC: We have some real challenges in the context of how to drive this change agenda forward. You can go from midstream companies with viable digital game plans underway, to upstream companies, and even to refineries. The place in the value chain doesn’t matter; the digital agenda should continue to run regardless of where we are in the cycle.

RG: Another thing that I was really interested in was IT and OT and their roles in digital transformation. If you could just walk us through how they end up managing these projects.

GC: Sure. Most commercial businesses will have an Information Technology (IT) department, and within it you’ll find the team that makes sure the email system works correctly, the ERP systems are supported properly, and the infrastructure is in place to do things like Zoom calls. They let you bring your tablet to work and give you single sign-on and all that sort of stuff. IT’s specialization is integrating these multiple technologies together and making them appear seamless. That’s one of their secret sauces. They are generally very good at patching, keeping complex systems going, securing, and providing a whole range of services responding to employee needs.

OT is what we call Operational Technology. OT is what you find in a plant as it runs 24/7. It never shuts down. It is responsible for keeping physical infrastructure running within certain set points. OT can go by the name SCADA, which stands for Supervisory Control and Data Acquisition. Here, you’re supervising an asset and you’re capturing the data from that asset as it’s running. Historically, IT and OT have been two separate solitudes.

The problem, though, is that in a digital world, they start to come together. If you look at the oil and gas industries from one end of the spectrum to the other (upstream, midstream, downstream, retail, trading, or capital projects), you’ll find slight but distinct differences all the way along the chain: differences in how people think about and approach the world of IT, the world of Operational Technology, and how the two connect in the world of digital technology. There isn’t a clear-cut answer emerging … yet.

RG: This has been really fantastic, Geoff! I greatly appreciate the opportunity to visit with you. I wanted to show the group that we have our own copy of Bits, Bytes, and Barrels that you were kind enough to help us print with our own Petro.ai logo. So, if this is something that you are interested in, follow us on LinkedIn and join our Petro.ai Community and we will give you an opportunity to get a copy of Geoff’s book. We’d love to share this with you. But, before we sign off, is there any wisdom you’d want to share with us as parting words?

GC: Not one thing, but three things! The first is that I write a weekly article series about digital innovation in oil and gas which is available on my website. It’s absolutely free. A companion to that is a podcast that I also publish every week on iTunes, Stitcher, and Spotify and all the places where you find podcasts. It’s called Digital Oil and Gas. Third, a government agency asked me if I would turn my book into a training course and so I did that for them. I built all the materials and then recorded all the materials as a series of online lectures and they’re available on Udemy for about the same price as the book itself. 

RG: Thanks so much for taking the time.

GC: You bet. I’m delighted to do it and look forward to doing this sometime in the future again. Take care.


Passion for Change: Colorado School of Mines

with faculty from the Colorado School of Mines: Dr. Bill Eustes, Associate Professor of Petroleum Engineering, and Jim Crompton, Professor of Practice in Petroleum Engineering

Special thanks to Ronnie Arispe, Data and Analytics Specialist at Concho, and Anthony Bordonaro, Production Technologist at Chevron, from the SPE Permian Basin Section for helping to conduct this interview. SPE recognized the Permian Basin Section with the 2019 Section Excellence Award for the section’s hard work and strong programs in industry engagement, operations and planning, community involvement, professional development, and innovation.

Tell us about your background.

JC: I’m something called a Professor of Practice in the Petroleum Engineering Department at the Colorado School of Mines, somebody who got his lumps from a number of decades in the industry rather than a PhD. 

I am relatively new to the faculty, although I go way back to 1974 at the School of Mines, when I got my degree in geophysical engineering. After getting my Master’s, I joined Chevron Oil Company, where I spent the next 37 years. One company, one paycheck, but a number of different careers: from traditional seismic processing and seismic interpretation, to the last third of my career in the area of digital oilfields, or integrated oilfields, as it was called at Chevron at the time.

I retired in 2013 and moved back to Colorado. Four years ago, I was asked to create a capstone course for a Data Analytics Minor within the Petroleum Engineering program.

BE: I’m Bill Eustes. I have spent 42 years in this business. I graduated from Louisiana Tech back in 1978 with a Bachelor of Science in mechanical engineering. I went to work at ARCO Oil and Gas working as a drilling engineer out in Hobbs, New Mexico. Then I did a stint in Midland, so I’ve had the experience of living in the Permian Basin. Then I worked as a drilling engineer out of the Midcontinent District in Tulsa as well as in the East Texas and North Louisiana area, and then finally went to Enid, Oklahoma where I was a production engineer until 1987. 

At that time, I recall ARCO getting a spreadsheet program called Lotus 1-2-3.  We loaded the specs on all of our wells on it. When the market crashed in ‘85 and ‘86, we went through there and populated it and said, “What is our break-even point for the price of oil for each well?” I remember this was just an awesome event to be able to go through 2,500 wells and then sort it and see which wells were making money. That was an amazing epiphany to be able to look at something like that.

Another thing that stuck with me—there was this really deep well in 1982 that I was involved with in Oklahoma while working for ARCO. I remember a company called ExLog that did mud logging; and, they would print out all of the specifications of the drilling operations on one of those old tractor feed type of printers.  I remember looking at stacks of paper and wondering what I was going to do with it. I could see some value, but it wasn’t any sort of format that we could use. 

That’s always been in the back of my mind: how do I use this information to be able to do a better job?

And then I got laid off. 

In hindsight, that was the best thing, because I got to choose my own pathway forward. I decided I wanted to get more education. I went to the University of Colorado Boulder and earned a Master of Science degree in Mechanical Engineering. I thought I’d change the industry I worked in, but when you look at your bloodstream after you’ve been in this business, it’s no longer blood—it’s oil.

It just so happens there was a school right down the road from CU-Boulder that had a Petroleum Engineering program. That’s how I wound up at the Colorado School of Mines as a graduate student. I spent six years as a graduate student in various areas of research including the Yucca Mountain project, the Hanford nuclear waste site, places like that.

I had my advisor retire right as I finished, so I put my name in the hat, and lo and behold, here I am 24 years later. It’s been a wild ride!

What do you do at the Colorado School of Mines and what makes your work unique?

JC: I think one of the things that Bill and I share is the passion to apply data to do something useful—drill a better well, have better production, artificial lift optimization, whatever it is. Through our individual four decades of experience, we’ve seen this data become more plentiful. We’ve seen this data become a little bit easier to use. We’ve seen better tools crop up. So, it’s getting closer and closer to being able to do decision-making analysis. 

It isn’t the company with the most data that wins. It’s a company that makes the best decisions from the data they have that wins. 

I think both of us share this idea of trying to instill in the next-generation workforce an understanding of the data and then what you can do with it. It’s not an overemphasis on sensors or IoT or cloud computing or whatever. It’s the idea of application.

We talk a lot about understanding data. We talk a lot about data visualization. Forty years ago, when I was on campus, a petroleum engineer wouldn’t go beyond Excel spreadsheets. Now, we’ve got R and Python programming and it’s a new world of the capabilities, a new generation of digital engineers.

BE: We now have the tools, but you know the famous phrase, “All models are wrong, but some are useful.” We’re trying to build more useful models.

The machines are there to assist you, to augment you in being able to make decisions. They’re not there to make the decisions for you. 

We’re working on a certificate program for those at the postgraduate level, whether in a Master’s or PhD program, or somebody out in industry who wants a better understanding of how to be a digital engineer, actually working on projects in drilling, production, reservoir, and unconventional resources. At the end of the 12 credit-hour sequence, you would have a graduate-level certificate in Petroleum Data Analytics from the Colorado School of Mines.

We’re also looking at automation, developing really good, high-quality data and models that can tell the machine where things should be going.

That’s one of the things I personally am looking at, deriving insights into making our operations better. But also looking at a longer-term goal of trying to see what areas we can automate and make things safer and more reliable and more consistent.

I’m part of the Drilling Systems Automation Technology Section of the SPE. One of our drivers is developing methodologies to be able to automate our drilling rigs for consistency as well as safety. A well-trained crew can beat a machine right now, but they can only last so long before they wear out, and of course, finding a well-trained crew might be a challenge these days with the loss of experience that we’re unfortunately seeing. So perhaps this is a way to help us drill wells better and safer.

We need to start with what kind of problem you’re solving and then need to understand what kind of data you’re using and tell a good story with the data, but at the same time, talk about what you could do with the data. It isn’t just data crunching. The model has to go beyond just telling you what’s happened. The challenge for petroleum is to figure out what’s going to happen in the future, not just what was my production today. Can you give me an accurate forecast for my production in the next three to six months so I can go to the shareholder meeting and tell them how much money we’re going to make?

JC: To help older graduates, we’ve developed a graduate certificate program for more mature engineering people practicing in the industry to take in the evenings and on the weekends. We think we can add value for a modest commitment to engineers at any level, even if you just take it to learn the language, you get some hands-on experience with the tools. We’re not turning petroleum engineers into programmers, but students learn basic scripting programming languages like Python and R.

BE: Something that’s kind of unique is that we have a drill and we collect our own data. It’s actually a mining coring rig, and we have sensors all over it so that we can collect the core as we are drilling and record the data. The idea is that you collect and analyze your own data. I want to see how students handle this large volume of data (20,000 Hertz over 10 minutes from two tri-axial sensors), deal with it, and see the pitfalls and the promises of handling that information and what it tells you.

JC: There comes a moment in every young digital petroleum engineer’s career where they break Excel, and we want to give them that experience early so they can realize what’s on the other side of it, the new tools and new technologies that will help them build those models with that volume of data, variety of data, and velocity of data.

Do you see any gaps in the tools being used today? What do you think the tools of the future could look like?

JC: We’re building billion-cell reservoir simulations instead of a few thousand cells. Streaming analytics as well as spatial analytics are two areas that I think we’re moving into, and it has to do with the variety of data and velocity of data. Maybe we don’t know exactly what to do with 20,000 Hertz, but we could if we could just downsample that to a thousand Hertz. That’s a lot of data. Can we then have a feedback loop where the model is learning from data?

As we’re drilling a well, if that model gets updated, it could become a better predictor, and then we can find that potential stuck pipe problem, or we could find the fact that we’re going to break off a tooth on the drill bit and avoid an unnecessary trip to set another casing string. Right now, we’re trying to do the best we can, which means we’re probably an hour behind where the drill bit is. We have MWD units, LWD units. We’ve got wired pipe. 

We’ve got some of the capacity to move the data, but I don’t think we really have the capacity to use the data in a proactive fashion, really incorporating the data coming back so we can think ahead of the drill bit.

We’re trying to upgrade our capability managing higher and higher frequency multivariate data. If we’ve got six sensors, I don’t want to just use one. I’m going to use all six. There may be some sort of signal that comes, not just from one, but from a combination of several, so we want to do that. 

We’ve gotten pretty good at producing more oil, no doubt about that. But as shale producers have found out, they haven’t been doing all that well in producing more money and profitability, and they’ve sometimes had environmental issues.

We need to manage the whole, not the parts. We’ve come a long way in the last 30 years managing the parts. I think one of the challenges now is managing the whole.

When it comes to production or we’re dealing with the reservoir, the spatial analytics side becomes important.  We have SCADA data. We’ve got individual well production history; we’ve got all that. Now put that together in a cube. We’re not just dealing with the well, we’re dealing with a cube of rock, we’re building spatial understanding of the subsurface, and even on an operational side, energy use and emissions detection. How can I put all that together so that I am producing the field to make the most money, not just producing the field to make the most fluid volumes? 

BE: There are two other issues that I think need to be worked on. A lot of the sensors on a drilling rig are not that accurate or not that precise. They’re not calibrated very often. You’ve got to have good information coming in to be able to come up with good insights, so improved sensors on drilling rigs are one factor, as is data transmission. There’s wired pipe, but it’s very expensive and has challenges in and of itself.

Are there ways that we can get data from downhole back to us in a timely fashion at a rate we need right now? I don’t think we’re there. If we’re going to improve drilling operations, we need to have the information coming from the source, which is the drill bit, and the area around the drill bit, and we have to be able to deal with the velocity and the volume of data in real time so we can make decisions in real time. It doesn’t do you any good to know the well blew out and you’re on fire back there already. We need to know what’s happening now.

Have people been skeptical about incorporating data and analytics into the field? How have you dealt with it?

JC: The oil industry has been criticized, probably correctly, for being relatively slow adopters of some of this technology. My generation didn’t believe in the models enough. I think the new generation believes in them too much. We have to find somewhere in between. 

I don’t care if you are the slickest Python programmer in the world and you just built this reservoir model. You have to be able to explain it. 

Building trust is understanding your data and being able to explain it. It’s the physics as well as the data-driven analytical processes. It’s not one or the other, it’s both, and that’s a harder challenge.

BE: One of the things I like to tell our students in classes about the use of technology and information is you have to get buy-in from everybody, including in the field, because if the rig crew doesn’t want something to work, it won’t work. You’ve got to be able to sell your ideas, to explain what’s going on and why it’s going to make their job better and make their life easier. People are more willing to do stuff that helps them do their job better, and that’s how you have to sell it.

Are there any books, sites, or other resources you would recommend?

BE: Jim, this is a good time to talk about your two books!

JC: I have written two non-academic books: The Future Belongs to the Digital Engineer and A Digital Journey: The Transformation of the Oil and Gas Industry. I also blog on LinkedIn.

Automation will get rid of some jobs, probably jobs human beings don’t really want to deal with because they’re dangerous and dirty. The petroleum industry will certainly change, but it won’t go away. You’re going to have to become model masters and prediction wizards and future tellers and a whole bunch of funny things that you maybe didn’t get in your sophomore and junior classes in Petroleum Engineering. The role will change. I don’t think the role goes away, but if you don’t change with it, you might go away if your skill set isn’t competitive in the industry.

There’s going to be a greater emphasis on predicting what is going to happen and new ways of creating value, and with all of that, you need the data. I think it’s now inescapable that digital literacy is becoming a core competency of engineers, regardless of what specialty they go for, what industry they work in. AI is going to be a tool in the future. It’s going to be a co-worker and that’s something we have to wrap our heads around.

BE: I can add that other great resources include the different conferences, like the IADC Drilling Conference which had a number of sessions on digital transformation, and then also I’d recommend your local SPE. That’s a really great place just to get in on the ground floor about what’s going on and what your peers are doing in your region. 

Dr. Bill Eustes is an associate professor in the Petroleum Engineering Department at the Colorado School of Mines. He has a B.S. in Mechanical Engineering from Louisiana Tech University (1978), an M.S. in Mechanical Engineering from the University of Colorado Boulder (1989), and a Ph.D. in Petroleum Engineering from the Colorado School of Mines (1996). He specializes in drilling operations and in experimental and modeling research.
Jim Crompton is a Professor of Practice at the Colorado School of Mines. Jim retired from Chevron in 2013 after almost 37 years with the major international oil & gas company. After moving from Houston to Colorado Springs, Colorado, Jim established Reflections Data Consulting LLC to continue his work in data management and analytics for the exploration and production industry.

Understand Stress to Understand Well Spacing

Overview

True to its unconventional designation, shale development requires new ways of working: new operations, new well designs, and even new science.  While we haven’t discovered any new physics, the geomechanics of unconventional reservoirs has been largely overlooked in the realms of geoscience. 

As a data scientist, I’ve been part of analyzing well spacing for several years – combining a multitude of factors across disciplines.  It wasn’t until I started working with Dr. Mark Zoback that I realized we were approaching the problem without the most important ingredient: Geomechanical Stress. 

In this post, I’ll explain how vertical geomechanical stress profiles can be extracted from ISIP measurements and used throughout an asset to optimize well spacing.  This is a perfect activity for engineers and geoscientists while the rig count is down and the organization has time to update its plan.  At Petro.ai, we have built a new tool that facilitates fast and accurate ISIP measurement.

Advances in Geomechanics

Dr. Zoback spent his early career measuring and characterizing the state of stress in the earth, which he applied successfully to wellbore stability problems all around the globe. Before tight reservoirs, breaking rock largely fell in the lap of drillers, and productive hydraulic fracturing was minimal. Interestingly, Dr. Zoback’s research was used to prevent a well from hydrofrac’ing while drilling.

Because of Dr. Zoback’s pioneering work in measuring and applying the state of stress, our industry has been able to drill more complex, deeper wells through a variety of formations and stress regimes.  These techniques are now canon in the drilling doctrine.  However, in the development of a shale asset, in which we fracture the entire length of the contact within the pay zone, we did not apply the same principles. As a result, we’ve assumed that when it came to frac’ing, bigger was better.

The Problem of Well Spacing

As it turns out, bigger isn’t better. Continuing to expand development has ushered in the problem of well spacing – how many wells, and how closely spaced, does it take to effectively deplete a shale reservoir? “Cube development” only raises the stakes, betting more dollars on upfront well spacing assumptions. The operator avoids the complicating factor of depletion, but puts all the Capex chips on red, so to speak. As some gamblers know, in the long run the house always wins. The same has proven true at the beginning of the second decade of shale development: the gamblers aren’t winning. Why not? To me, it comes down to fundamentals – the same issue I saw as a data scientist – we are missing a key ingredient: Geomechanical Stress.

Understanding Stress

In order to understand well spacing, we need to understand the state of stress surrounding a well and the interactions created while stimulating and draining a volume of reservoir.  In Dr. Zoback’s research, he does a fantastic job of blending theory, simulation, and empirical evidence to understand phenomena, leveraging all three. Dr. Zoback is able to identify the pattern, characterize it with key drivers, and connect those key drivers to the observations.  He outlines and delivers an entire course on these key drivers, has published a textbook on the subject (Unconventional Reservoir Geomechanics), and collaborates with Petro.ai to create new geomechanics software tools (Dr. Zoback is our technical advisor on Geomechanics).  

A very common problem is the lack of good data capture and interpretation in shale.  I see lots of companies collect and store huge volumes of data, but these companies don’t take the time to interpret it.  We may have an abundance of data, but most of it is bad: poorly organized and inaccessible.  Further compounding the problem are engineers who are unable to quality control and make interpretations on collected data.  As a result, engineers select from small volumes of good data, leading to an abundance of sampling bias in an industry that is overrun with data.  My personal goal is to help customers use all of their collected data that holds great information but needs to be emancipated (I call this “dark data”). 

Applying Geomechanics Understanding

I’ve had the great pleasure to work with Dr. Zoback for over a year, learning with him as we’ve tackled new and exciting use cases for our clients.  I’m on the data and AI side, taking his concepts and scaling them to the level of operations a shale client requires, including handling complex development histories.  The impact his research will have on this industry will be profound – it will be a central tenet for shale development.

Like most things in the physical world, hydraulic fractures want to open in the easiest direction. Stress is measured in units of pressure, and there are three principal stresses that need to be accounted for in the reservoir:

  • the minimum horizontal stress (Shmin),
  • the maximum horizontal stress (SHmax), and
  • the vertical stress (Sv).

The relative magnitudes of these stresses dictate the stress regime: normal, strike-slip, or reverse faulting. We can discern the vertical stress from the weight of the rock column; SHmax is hard to know; and we can measure Shmin (in most cases). In each regime the fault plane is different, because fractures open against the least principal stress.
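
To make those two facts concrete, here is a minimal Python sketch (illustrative only, not the Petro.ai implementation) of integrating a density log for the vertical stress and reading the faulting regime off the relative magnitudes.

```python
import numpy as np

def vertical_stress_psi(depth_ft, rho_g_cc):
    """Sv from the weight of the rock column: integrate density over depth.
    0.433 psi/ft per g/cc is the standard field-unit conversion."""
    depth = np.asarray(depth_ft, float)
    rho = np.asarray(rho_g_cc, float)
    return np.concatenate([[0.0], np.cumsum(0.433 * rho[1:] * np.diff(depth))])

def faulting_regime(sv, shmax, shmin):
    """Anderson's classification from the relative stress magnitudes."""
    if sv >= shmax >= shmin:
        return "normal"       # Sv is the maximum principal stress
    if shmax >= sv >= shmin:
        return "strike-slip"  # Sv is the intermediate principal stress
    return "reverse"          # Sv is the least principal stress

# A uniform 2.5 g/cc column gives roughly 1.08 psi/ft of overburden:
depth = np.arange(0.0, 10_001.0, 10.0)
sv = vertical_stress_psi(depth, np.full(depth.size, 2.5))
print(round(sv[-1]))                          # ~10,825 psi at 10,000 ft
print(faulting_regime(sv[-1], 9_500, 7_500))  # -> "normal"
```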

Orientation of SHmax and relative stress magnitudes across North America

Dr. Zoback and Dr. Lund Snee recently released a new publication that maps the orientation of SHmax and relative stress magnitudes across North America. Because SHmax is hard to interpret, they’ve done the hard work for us. Now, with their data set, if you measure Shmin (which we will explain later), you’ll be able to determine all three principal stresses in your asset.

We put this stress map in Petro.ai so you can easily reference this information across your asset. Understanding these principal stresses can have a dramatic effect when optimized: controlling for all other factors, wells drilled in the “correct direction” – 90 degrees from SHmax – perform 10-30% better.
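
A quick way to apply the map is to screen each lateral’s azimuth against the ideal direction, 90 degrees from SHmax. A small sketch with hypothetical azimuths (azimuths are axial data, so they wrap at 180 degrees):

```python
def degrees_off_ideal(well_azimuth_deg: float, shmax_azimuth_deg: float) -> float:
    """Angular distance (0-90) between a lateral and the ideal drilling
    direction, which is perpendicular to SHmax."""
    ideal = (shmax_azimuth_deg + 90.0) % 180.0
    d = abs(well_azimuth_deg % 180.0 - ideal)
    return min(d, 180.0 - d)

# With SHmax trending N30E, a due-north lateral sits 60 degrees off ideal:
print(degrees_off_ideal(well_azimuth_deg=0.0, shmax_azimuth_deg=30.0))  # 60.0
```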

When the pressure in the wellbore is higher than the minimum stress, it is easier for the fluid to fracture the rock and enter the reservoir as a frac than to stay in the wellbore.  The wellbore pressure (measured as treating pressure at the surface) needs to overcome pressure loss over the perforations, cement issues along the wellbore, and stress shadow effects from neighboring fractures.  The stress shadow effects can artificially raise the least principal stress, forcing a screen out or stopping fracture propagation.

The same logic applies vertically – whether or not you have stacked pay.  In order to determine if the frac will stay in zone, you need to know the relative magnitudes of the stresses above and below the well.  If the stress is lower above the well, the frac will go up; if the stress is lower below the well, the frac will go down.  Many operators seem to assume that there are frac barriers (higher least principal stress) above and below a pay zone – this is very rare.  

More likely, and in the most catastrophic scenario, you have elevated pore pressure in your reservoir, increasing the productivity of the wells but also causing hydraulic fractures to grow both up and down (higher pore pressure increases Shmin). And, if you used the “bigger is better” strategy I described earlier, there are likely to be depletion effects across the whole pay zone. As infill wells are placed above and below, they will compete for shared resources with the original frac, and you will have overcapitalized your pad.
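
To see why higher pore pressure raises Shmin, one common first-order model is the uniaxial-strain relation, Shmin = ν/(1−ν)·(Sv − αPp) + αPp. A hedged sketch with illustrative numbers (a textbook approximation, not a full stress model):

```python
def shmin_uniaxial_psi(sv_psi, pp_psi, nu=0.25, alpha=1.0):
    """Uniaxial-strain estimate of the least horizontal stress. Pore pressure
    enters with a positive net coefficient, so overpressure lifts Shmin
    toward Sv and erodes vertical frac barriers."""
    return nu / (1.0 - nu) * (sv_psi - alpha * pp_psi) + alpha * pp_psi

# Sv = 10,000 psi: normally pressured vs overpressured at 10,000 ft TVD
print(shmin_uniaxial_psi(10_000, 4_650))  # ~6,433 psi
print(shmin_uniaxial_psi(10_000, 7_000))  # ~8,000 psi
```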

Using ISIPs to Improve Well Spacing

There are many factors that could lead to changes in the profile of minimum stress.  Pore pressure, stress relaxation and depletion are your most common factors.  In order to understand what is driving changes in least principal stress, you need to measure it over space and time.  The most abundant (albeit noisy) data source is in 1-second frac van data. 

At the end of a stage treatment, the treating pressure drops as the pumps turn off and the fractures close. There is a point called the instantaneous shut-in pressure (“ISIP”) that is commonly picked as part of the post-stage diagnostics by the pressure pumper. Due to operational considerations, high treating pressures, and a lack of consistent theory (people still argue whether you should pick the ISIP or fracture closure), ISIPs are rarely picked correctly and become a cloud of meaningless data.

ISIP Picking Method in Petro.ai

At Petro.ai, we took the time to develop a robust methodology for picking ISIPs, built by looking through thousands of stages and applying reasonable physical limitations of the pressure system. First, we account for friction loss across perforations, and second, we co-visualize the ISIP in reservoir conditions. Perforations present substantial friction – on the order of several thousand psi – driven mostly by perforation radius.
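
One common field-unit form of the perforation-friction equation makes that radius sensitivity explicit: pressure drop scales with the fourth power of perforation diameter. A hedged sketch (illustrative constants and inputs, not Petro.ai’s exact friction model; q in bbl/min, rho in lb/gal, d in inches):

```python
def perf_friction_psi(q_bpm, rho_ppg, n_perfs, d_in, cd=0.85):
    """One common form: dP = 0.2369 * rho * q^2 / (N^2 * d^4 * Cd^2).
    The d**4 term is why a small change in perforation radius can move
    friction by thousands of psi."""
    return 0.2369 * rho_ppg * q_bpm**2 / (n_perfs**2 * d_in**4 * cd**2)

# 90 bbl/min of 8.5 lb/gal slickwater through 20 effective 0.32-in perfs:
print(round(perf_friction_psi(90, 8.5, 20, 0.32)))  # ~5,382 psi
```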

As pumps shut off, the effect of this friction is removed over a very short time period.  By adjusting treating pressures to “reservoir contact” we can gain a better picture of true net pressures (typically no more than 1,000 psi) and more realistic ISIPs.  By visualizing the ISIP pick (with an uncertainty range) in the reservoir, we can instantly QC data to ensure it falls within a reasonable pressure gradient window (i.e. below the vertical stress and above hydrostatic).  
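
A minimal, hypothetical sketch of that workflow on 1-second frac van data (not the production Petro.ai picker): extrapolate the early falloff back to shut-in to step over the friction transient, shift the pick to reservoir conditions, and QC the implied gradient against the window just described.

```python
import numpy as np

def pick_isip_psi(t_s, p_psi, shutin_t_s, window_s=(5.0, 60.0)):
    """Fit surface pressure against sqrt(time since shut-in) over an early
    window, then extrapolate to dt -> 0; this steps over the near-instant
    perforation-friction drop and approximates the surface ISIP."""
    t = np.asarray(t_s, float)
    p = np.asarray(p_psi, float)
    dt = t - shutin_t_s
    m = (dt >= window_s[0]) & (dt <= window_s[1])
    slope, isip_surface = np.polyfit(np.sqrt(dt[m]), p[m], 1)
    return isip_surface

def isip_in_window(isip_surface_psi, tvd_ft, fluid_ppg=8.5,
                   hydro_psi_ft=0.44, sv_psi_ft=1.05):
    """QC in reservoir conditions: add the wellbore hydrostatic head, then
    require the implied gradient to sit above hydrostatic and below the
    vertical stress (typical gradients shown; calibrate per asset)."""
    bottomhole_psi = isip_surface_psi + 0.052 * fluid_ppg * tvd_ft
    gradient = bottomhole_psi / tvd_ft
    return hydro_psi_ft < gradient < sv_psi_ft
```

The sqrt-time fit is one defensible choice among several; the point is that a pick should be reproducible and should land inside a physically plausible gradient window.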

Vertical Stress Profile in Petro.ai

Whenever I hear that a task should be automated, I hear an opportunity for collaboration. You need to have a deeper discussion about your asset, including ISIPs, frac gradients, and your vertical stress profile. This is why Petro.ai has social comments and tags built into the interpretation process. Any “#tagged” data can easily be searched and filtered on.

As part of the social iteration engine, engineers and scientists can create different model scenarios to compare and contrast their ISIP interpretations and collaborate to develop more comprehensive interpretations. As Dr. Zoback says, “Just because you have a solution, it doesn’t mean you have understanding.”

Troy Ruths, Ph.D
CEO & Chief Data Scientist, Petro.ai
Specialties:  High performance computing, Machine learning, Software development, Optimization, Petroleum engineering, Data visualization, Scientific modelling, Data Management
Troy Ruths received his BS in Computer Science from Washington University in St. Louis and a PhD in Computer Science, with a specialization in Computational Biology, from Rice University. He has over 9 years of experience in data analysis for oil and gas applications and has managed the development and deployment of upstream-focused data analysis and visualization tools in 6 countries, spanning shale gas, conventional oil, heavy oil, onshore, and deep offshore. He founded Petro.ai in 2013 as a conversion from his independent consulting practice.

Passion for Change: Birchcliff Energy

with Theo van der Werken, Asset Manager at Birchcliff Energy

Tell us about your background and what you do now.

I am currently employed by Birchcliff Energy, a Canadian-based intermediate producer with a large acreage position in the Montney unconventional resource play. As the Asset Manager, I manage a team of multidisciplinary engineers and geoscientists who are very busy optimizing the development of our unconventional resource in the Montney.

I’m originally from Holland, where I graduated with a degree in Mining Engineering. Before graduating, I completed an internship working offshore in the North Sea and realized that an oil and gas career path was more aligned with my interests.

My first job took me to the Middle East, where I worked for an oil and gas service company. I was involved with a major that utilized underbalanced drilling as a technology to explore for oil in the Omani desert. I was based out of Dubai, where I started my career in the drilling engineering department. Subsequently I relocated to Houston, where, after some good field experience in Texas and Canada, I pivoted from drilling into reservoir engineering.

After several years I made the conscious decision to switch to the operator side, joined a large multinational, and relocated from Houston to Calgary, where I started working as an exploitation engineer in an asset team. It was a great experience; I got to see a lot of different things and work on a variety of reservoirs. Subsequently, I went to a start-up and spent about two and a half years with them. This is where I picked up and learned a lot of the surface side of the business: production engineering, facilities, pipelines, joint ventures, and so forth.

I joined Birchcliff Energy in 2011, where I started as a Senior Development Engineer and took the role of Asset Manager at the end of 2011. I’ve been in this position for nine years, which has been very rewarding, and I have never looked back.

During the last 9 years we have seen tremendous growth in my team and in the company as a whole, primarily through the drill bit, where we have grown from approximately 16,000 Boe/d to 80,000 Boe/d.

Can you tell us why you have a passion for change in this industry and what else you’re passionate about?

I think the oil and gas industry is a very exciting and dynamic industry that is under-appreciated as it relates to technological innovation. 

In addition, the industry often gets vilified by a general public that doesn’t really understand the disconnect between end-user habits and the associated energy requirements. This motivates me not just to advocate for our industry, but also to strive to continuously improve the responsible extraction of hydrocarbons.

In the last 20 years, with the rise of the unconventionals, we have seen a tremendous amount of technological innovation in both the hardware and the software used to extract hydrocarbons from tight oil and gas reservoirs.

Above ground, rig automation has evolved with fit-for-purpose rig designs that are really well suited to large-scale pad development. At Birchcliff, we are now developing our field with surface pads that can accommodate up to 28 wells from one surface location, minimizing the environmental footprint. These walking rigs are surprisingly agile and really help reduce cycle time, ultimately driving down finding and development costs.

In the subsurface, we continue to see innovations on the software side, with more advanced integrated physics-based models as well as data-driven models that can guide completion design and field development strategies. In the field, advanced diagnostics such as fiber, geophones, and pressure data are really insightful for capturing real-time system behaviour as we zipper frac these massive pads.

The diagnostic data is very useful for further advancing the modelling space, calibrating and validating not just our hypotheses but often also testing the reliability of these models.

Because the system is so complex and the industry continues to innovate there is a great opportunity for continuous improvement on optimizing “where to drill, how to drill, where to frac and how to frac.” 

That’s my number one passion that I rally my team around—we have an opportunity to do better every year, or even every well, based on integrating more data sets, looking at new technology and just continuously pushing the envelope of how we develop these unconventional reservoirs.

As technology advances, it allows asset teams to move down the grain size, if you will. Rock that was viewed as poor quality back in 2011 now supports multiple producing horizons. Technology is allowing us to explore economically, even with declining commodity prices and mounting external pressures and taxes.

To this point, tight reservoirs have revolutionized the supply side and really driven down prices, particularly in the natural gas sector. Notwithstanding that, we can still make a go of it with more room on the upside. That’s just really fascinating and motivating to me and my team. That’s the passion for what I do at work.

Outside of work, I would say my passion—and why I’m living in Canada as a Dutchman—is the mountains. I really like the outdoors, always have. Holland is a pretty busy place: about 17 million people in an area the size of Southern Alberta, whereas all of Canada has about 38 million people. There is a ton of room here. It’s absolutely beautiful country in summer and winter. I really enjoy spending time with my friends and family in the mountains. I’m pretty passionate about that.

Do you think this downturn we’re experiencing now will accelerate digital transformation or put it on pause until we see better commodity prices?

I think we’re stalling a bit, to be honest with you. It seems that we are all very rattled by everything that’s going on, from biological viruses to this price war. In North America, there’s 35 billion dollars of capital that’s basically been pulled out of 2020 budget plans.

On top of that, you layer on remote working and suddenly implementing a digital strategy becomes daunting. When I think about implementing a digital transformation, it’s really a management of change process that is quite culturally involved.

A successful digital transformation is a lot more than buying software. It’s a lot more than hiring a data scientist.  Making sure you assemble the right team and align with the right industry partner are all important components that need to be interlaced. Then building enough internal buy-in and getting people to culturally rally around that is very involved. 

If you’ve already started that journey, I think you can continue to reap the benefits and dig in on specific projects. But if you haven’t started yet, I don’t think this price shock alone gives you the push.

I feel very comfortable with what we are doing here at Birchcliff. We have detailed road maps and projects and inventory of things that we’re working on. We’ve got the people, the data engineering, and the data pipelines. I feel good about that. 

How much do you see your culture tying to your competitive advantage, being able to capture that next generation of knowledge?

It goes back to this passion for change, passion for continuous improvement. We’ve built an internal framework with some tangible tools. How can you continuously improve? We want to improve everything: from trialing different completion styles in the field, to new technology using physics-based and data-driven models, to spending time collaborating with peers, to competitor intelligence to learn from “best in class” competitors.

In addition, we have set up the business processes that support these efforts. How do you design a pilot or operational trial? How do you define success with appropriate KPIs? Who is responsible for scouting for new technology? All these items are important to maximize value. 

That interlacing of these various components is what is part of our Continuous Improvement framework. Data analytics and science is just one of the tools that fit within this framework. If our team decides to set up a field trial, we need the right sensors in the field so we can collect the right data to feed a data-driven model or calibrate our physics-based model. The culture of continuous improvement—and people rallying around that—allows us to get the buy-in for something like data analytics or an investment in physics-based modeling.

We just got approval to run downhole fiber in this environment. We’re making this investment because our people are bought in on how data, physics, and analytics interplay to drive continuous improvement. So, they all go hand-in-hand.

Some would argue that our Birchcliff sandbox is not the most competitive sandbox from a pressure and permeability perspective. If you look at the Montney, we’ve got a great position. It’s all contiguous land and we own our own plant and we’ve done a series of things which make our strategy highly successful and profitable compared to most of our peers.

Can you make a go of it without spending any money on modeling, field trials or diagnostics? If you have superior rock, you can probably still be very competitive. In the long run, I think the winners and survivors need to strive for this Continuous Improvement culture that is very much alive at Birchcliff. Our technical teams have been able to demonstrate that by year-over-year improvements on type curves and a variety of economic indicators.

Have you seen specific examples of success with data science and machine learning projects? And was there skepticism? How did you convince people to go along with it?

Analytics adds value in two places: 1) by improving the efficiency of things that you already do, and 2) by enabling better decisions because you’re able to interact with data sets that you weren’t able to interface with before.

On the first point, we all want to be more efficient; that applies to everyone within Birchcliff. You can spend a lot of money moving data from external sources into your organization and then cleaning and staging it for an end product. The fact of the matter is, you don’t need to spend a lot of money to make some improvements on that. You just need some smart people and some software tools. 

If you can help make better decisions in terms of how you manage your production, that’s directly going to hit your revenue line. So, initially we focused a lot on production engineering with visualization dashboards. Those were some of the early use cases. By no means have we figured it all out, but we started small. Back in 2013, we organized data, then slowly but surely, we started to evaluate different tools. We started to build a network of people that we thought were like-minded and hired our first engineer with advanced analytics skills. 

That small group accelerated the adaptation of a lot of things. We slowly started to add people and showed more value in projects; that has led to where we are today. We have a dedicated data analytics team, which is rolled up under corporate development and competitor intelligence. We’ve since hired a data engineer and data scientist. 

That team didn’t make a lot of noise until we felt that it was worth bringing up with the broader organization. Many people, myself included, need “soak time” on these things. Having some tact around how you slowly but surely get your organization to adapt, there’s some strategy involved. For us, things have accelerated in a very positive direction.

Have you seen any competitors that are having success with data science and analytics? 

I think you see very different things depending on the size of the company. Larger players often employ highly technical and specialized people with great skill sets. But, individuals can feel isolated within the larger organizational chart. Some of those smart people are building some really cool analytics workflows, but they’re having a difficult time making it a broader initiative or socializing it to a larger group. 

On the other end, you see these large companies with a global mandate and initiative to implement a digital strategy, but when users actually need support on the gory details of data engineering and data pipelines in specific business units, support is lacking.  Instead, the initiative is rolled out at the corporate level but it is not well supported in the business unit where the focus should be on a specific problem in a specific basin. 

That brings me back to what I see here in Calgary. There are a lot of highly skilled technical people, but they appear to be quite siloed, working on niche projects. It could be a function of the wrong expectations being set from the beginning. As per my previous comments, a successful digital strategy is really rooted in a management-of-change process that can be daunting and can make or break these kinds of initiatives.

Do you see Birchcliff being on the cutting edge? And is that where you want to live or do you want to be somewhere in the early majority?

The leading edge is great; the bleeding edge is not so great. We’ve found that the analytics space is changing quickly; there are a lot of smart suppliers and vendors developing and building tools and workflows. Looking at Birchcliff specifically, we try to balance this evolving space by always asking ourselves whether we should build or buy. Maybe you want to buy because the cost of maintaining and supporting a tool can become prohibitive and is better suited to a 3rd-party vendor hosting it on the cloud. In addition, if the workflow or technology is evolving, you can leverage improvements driven by other operators, especially where we may not use that specific application 24/7.

So in general, I would say that if it is truly novel, we continue to develop it in house, simply because there is nothing like it on the market; otherwise, we consider buying if there are obvious advantages, as previously highlighted.

Is there one specific project you could talk about that you were really proud of?

I guess there are a lot of them. We’ve got workflows now for automated lookbacks on recently drilled wells. There are field dashboards that we use to optimize production that have significantly impacted the bottom line. We’re using blending analytics tools to help our marketing group blend the right grades to maximize returns. There are ingestion tools that we’ve built for third-party pressure data that allow us to pipeline and database it. There are custom apps that we’ve built to house datasets that don’t typically have a home, with processes that ingest that data and allow easy access by our engineers. There are many examples.

Over the last few years, we’ve been spending more and more time building up competency related to multivariate modeling and specifically machine learning projects. We are now a long way down that road, having staged large ‘featurized’ data sets that allow us to operationalize these types of workflows. We’re really excited about trying to blend a data-driven empirical data set with a physics-based model. That’s the stretch goal we’re working toward, but we expect to have an operationalized workflow by the end of this year.

Is Birchcliff using any cloud computing resources today?

We’re a bit of a hybrid. This is not my area of expertise. Some things are on the cloud. Some things are on-prem. Security, scalability, latency, control—all these things have pros and cons in both buckets, but we’re doing a little bit of both.

Control of the data: that was a really big thing. We own our plant. We don’t do contractors. We need to control everything, but now we’re starting to see the benefits of cloud, so we are trying to find a happy medium.

We haven’t seen any limitations with having on-prem servers during this time when everyone’s working remotely. Everyone’s on VPN and it was flawless for us. We didn’t skip a beat. 

Any books or blogs that you’d suggest reading?

I don’t seem to have too much time for reading between work, family and, of course, the mountains. I think Bill Gates always has a few interesting things to say in Gates Notes where he shares his perspective on the complicated challenges our world is facing today.

A second thought leader I enjoy learning from is Peter Tertzakian, a Geophysicist by training who is the Executive Director of the ARC Energy Research Institute. He’s basically a historian of energy transitions. He’s got a really cool website called Energyphile and has written several books as well. 

He also hosts the ARC Energy Ideas podcast with Jackie Forrest that explains the latest trends in Canadian energy and beyond. They’re not just focused on upstream E&P, they’re focused on energy.  

In your own words, how would you describe Petro.ai?

I think there’s a lot of shared vision with Petro.ai in terms of the value of analytics and where it could play a really critical role in this continuous improvement journey. I think you have a great group and a good culture. I’m really happy for the success that your team has seen. 

Theo van der Werken is a highly resourceful and results-oriented manager with a deep understanding of all aspects of the upstream oil and gas disciplines. His specialties include asset management of tight oil and gas reservoirs, with proven leadership experience.