Categories
Database, Cloud, & IT Transfer

Petro.ai Joins OSDU

Petro.ai is proud to announce that it has joined industry leaders Schlumberger, Chevron, Microsoft, Shell, and others as a member of the Open Subsurface Data Universe™ (OSDU) Forum. The OSDU is developing a cloud-native data platform for the oil and gas industry that will reduce silos and put data at the center of the subsurface community.

Membership in the OSDU Forum gives Petro.ai a seat at the table in developing the latest standards in petrotechnical data access and integration. Leveraging the OSDU data platform, Petro.ai accelerates the oil and gas digital transformation: empowering asset teams to organize, share, and interact with data like never before.

Learn more about the OSDU here.

Categories
Database, Cloud, & IT Transfer

How to Rename DCA Wrangler Models in Petro.ai – Part 2

Last week, we discussed how to delete specific DCA Wrangler models. This week, we will look at how to rename them.

What if you accidentally hit save before naming the model correctly? Or what if your model evolves into something more than you first intended?

You can just follow these steps to update your model name.

Let’s get started!

You will need a few things:

1. Well Decline Curve or Type Curve models created with the DCA Wrangler

2. Robo 3T (free) or Studio 3T (easier to use, but paid).

3. A valid Petro.ai database connection (an example connection string is sketched below).
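For reference, connecting with either tool uses a standard MongoDB connection string. A sketch of what one typically looks like – the host and credentials below are placeholders for your deployment’s details (27017 is MongoDB’s default port, and petroai is the database name used in the Studio 3T scripts below):

mongodb://<user>:<password>@<your-petro-server>:27017/petroai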

Quick FYI – Petro.ai Collections (Tables)

prod.WellDeclineCurveModels – Decline Curve Models saved using Single-Well or Multi-Well mode

prod.TypeCurveModels – Type Curve Models saved using Type Curve mode
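If you are not sure of a model’s exact spelling, you can list every saved name first – an optional helper using the mongo shell’s distinct command, not a required step:

// Print all saved model names in each collection:
db.getCollection("prod.WellDeclineCurveModels").distinct("modelName");
db.getCollection("prod.TypeCurveModels").distinct("modelName");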

Robo 3T and Studio 3T

Code for Single-Well/Multi-Well (Robo 3T)

/*
This script will RENAME all well decline curve models whose name matches currentName.
Insert your current model name in currentName.
Insert your new model name in newName.
*/

var currentName = "Q1Modelss";
var newName = "Q1Model";

db.getCollection("prod.WellDeclineCurveModels").updateMany(
    { modelName: currentName },
    { $set: { modelName: newName } }
);

Code for Single-Well/Multi-Well (Studio 3T)

//Define Database Below
use petroai;
/*
This script will RENAME all well decline curve models whose name matches currentName.
Insert your current model name in currentName.
Insert your new model name in newName.
*/

var currentName = "Q1Modelss";
var newName = "Q1Model";

db.getCollection("prod.WellDeclineCurveModels").updateMany(
    { modelName: currentName },
    { $set: { modelName: newName } }
);

Examples:

Robo 3T

Studio 3T – IntelliShell

Code for Type Curves (Robo 3T) 

/*
This script will RENAME all type curve models whose name matches currentName.
Insert your current model name in currentName.
Insert your new model name in newName.
*/

var currentName = "Q1TypeCurvess";
var newName = "Q1TypeCurves";

db.getCollection("prod.TypeCurveModels").updateMany(
    { modelName: currentName },
    { $set: { modelName: newName } }
);

Code for Type Curves (Studio 3T)

//Define Database Below
use petroai;
/*
This script will RENAME all type curve models whose name matches currentName.
Insert your current model name in currentName.
Insert your new model name in newName.
*/

var currentName = "Q1TypeCurvess";
var newName = "Q1TypeCurves";

db.getCollection("prod.TypeCurveModels").updateMany(
    { modelName: currentName },
    { $set: { modelName: newName } }
);
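To confirm the rename took effect, you can run a quick count in the same shell session – an optional sanity check that reuses the currentName and newName variables from the script above (swap in prod.WellDeclineCurveModels if you renamed decline curve models):

// Should print 0 for the old name and the number of renamed models for the new name.
print(db.getCollection("prod.TypeCurveModels").count({ modelName: currentName }));
print(db.getCollection("prod.TypeCurveModels").count({ modelName: newName }));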

Examples:

Robo 3T

Studio 3T – IntelliShell

DONE!

Categories
Database, Cloud, & IT Transfer

How to Delete DCA Wrangler Models in Petro.ai – Part 1

Have you ever wondered how you can remove DCA Wrangler models that are stored in the Petro.ai Database?

Dream no longer!

You will need a few things:

1. Well Decline Curve or Type Curve models created with the DCA Wrangler

2. Robo 3T (free) or Studio 3T (easier to use, but paid).

3. A valid Petro.ai database connection.

Quick FYI – Petro.ai Collections (Tables)

prod.WellDeclineCurveModels – Decline Curve Models saved using Single-Well or Multi-Well mode

prod.TypeCurveModels – Type Curve Models saved using Type Curve mode
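If you want to see exactly which documents a given name matches before deleting anything, a plain find works as an optional preview (shown here for decline curve models; the same pattern applies to prod.TypeCurveModels):

// Preview the documents the delete script would touch for one model name:
db.getCollection("prod.WellDeclineCurveModels").find({ modelName: "MyModel1" });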

Robo 3T and Studio 3T

Code for Single-Well/Multi-Well (Robo 3T)

/*
This script will DELETE all decline curve models listed below. 
Change "false" below to "true" in order to delete, otherwise it will just count (for safety)
*/

//List all model names, in quotes and separated by commas:
var modelNames = [
    'MyModel1',
    'MyModel2',
]
//////////////////////////////////////////////////////
var I_WOULD_LIKE_TO_DELETE_ALL_MODELS = false;
//////////////////////////////////////////////////////
modelNames.forEach((modelName) => {
    if (I_WOULD_LIKE_TO_DELETE_ALL_MODELS) {
        printjson({
            _id: modelName,
            modelCount: db.getCollection("prod.WellDeclineCurveModels").deleteMany({'modelName': modelName}),
        });
    }
    else {
        printjson({
            _id: modelName,
            modelCount: db.getCollection("prod.WellDeclineCurveModels").count({'modelName': modelName}),
        });
    }
});

Code for Single-Well/Multi-Well (Studio 3T)

//Define Database Below
use petroai;
/*
This script will DELETE all decline curve models listed below. 
Change "false" below to "true" in order to delete, otherwise it will just count (for safety)
*/
//List all model names, in quotes and separated by commas:
var modelNames = [
    'MyModel1',
    'MyModel2',
]
//////////////////////////////////////////////////////
var I_WOULD_LIKE_TO_DELETE_ALL_MODELS = false;
//////////////////////////////////////////////////////
modelNames.forEach((modelName) => {
    if (I_WOULD_LIKE_TO_DELETE_ALL_MODELS) {
        printjson({
            _id: modelName,
            modelCount: db.getCollection("prod.WellDeclineCurveModels").deleteMany({'modelName': modelName}),
        });
    }
    else {
        printjson({
            _id: modelName,
            modelCount: db.getCollection("prod.WellDeclineCurveModels").count({'modelName': modelName}),
        });
    }
});

Examples:

Robo 3T

Studio 3T – IntelliShell

Code for Type Curves (Robo 3T) 

/*
This script will DELETE all type curve models listed below. 
Change "false" below to "true" in order to delete, otherwise it will just count (for safety)
*/
//List all model names, in quotes and separated by commas:
var modelNames = [
    'MyModel1',
    'MyModel2',
]
//////////////////////////////////////////////////////
var I_WOULD_LIKE_TO_DELETE_ALL_MODELS = false;
//////////////////////////////////////////////////////
modelNames.forEach((modelName) => {
    if (I_WOULD_LIKE_TO_DELETE_ALL_MODELS) {
        printjson({
            _id: modelName,
            modelCount: db.getCollection("prod.TypeCurveModels").deleteMany({'modelName': modelName}),
        });
    }
    else {
        printjson({
            _id: modelName,
            modelCount: db.getCollection("prod.TypeCurveModels").count({'modelName': modelName}),
        });
    }
});

Code for Type Curves (Studio 3T)

//Define Database Below
use petroai;
/*
This script will DELETE all type curve models listed below. 
Change "false" below to "true" in order to delete, otherwise it will just count (for safety)
*/
//List all model names, in quotes and separated by commas:
var modelNames = [
    'MyModel1',
    'MyModel2',
]
//////////////////////////////////////////////////////
var I_WOULD_LIKE_TO_DELETE_ALL_MODELS = false;
//////////////////////////////////////////////////////
modelNames.forEach((modelName) => {
    if (I_WOULD_LIKE_TO_DELETE_ALL_MODELS) {
        printjson({
            _id: modelName,
            modelCount: db.getCollection("prod.TypeCurveModels").deleteMany({'modelName': modelName}),
        });
    }
    else {
        printjson({
            _id: modelName,
            modelCount: db.getCollection("prod.TypeCurveModels").count({'modelName': modelName}),
        });
    }
});

Examples:

Robo 3T


Studio 3T – IntelliShell

Enjoy!

Stay tuned for the next blog, where I’ll show you how to rename your models.

Categories
Database, Cloud, & IT Production & Operations Transfer

Real-time Production in Petro.ai using Raspberry Pi

One of the most pressing questions for data administrators is “what can I do with my real-time production data?” With the advent of science pads and a move toward digitization in the oilfield, streaming data has become one of the most valuable assets. But it can take some practice to get used to.

I enjoyed tinkering around with the Petro.ai platform, and while we have simulated wells, it’s much more fun to have some real data. Ruths.ai doesn’t own any wells, but when the office got a cold brew keg, I saw an opportunity.


We would connect a Raspberry Pi with a temperature sensor to the cold brew keg and pipe temperature readings directly into the Petro.ai database. The data would come in as “casing temperature,” and we’d be able to watch our coffee machine in real time using Petro.ai!

The Plan

The overall diagram looks like this:


The keg would be connected to the sensor, which would pass real-time readings to the Raspberry Pi. The Pi would then shape them into the real-time schema and publish them to the REST API endpoint.

Build out

The first step was to acquire the Raspberry Pi. I picked up a relatively inexpensive one off Amazon and then separately purchased two DHT22 temperature sensors from Adafruit. These read both temperature and humidity, but for the moment we’d just use the former.


There’s enough information online to confirm that these would be compatible. After unpacking everything, I set up an Ubuntu image and booted it up.

The Script

The script was easy enough: the Adafruit sensor came with a code snippet, and for the Petro.ai endpoint it was just a matter of picking the right collection to POST to.

[code language="python"]
import datetime
import sys
import time

import Adafruit_DHT
from functions import post_live_production

PETRO_URL = 'https://<YOUR PETRO.AI SERVER>/api/'
freq_seconds = 3
wellId = 'COFFEE 001'
endpoint = 'RealTimeProduction'

# Petro.ai well identifier for COFFEE 001
pwi = '5ce813c9f384f2057c983601'

while True:
    # Try to grab a sensor reading. read_retry will retry up to 15 times
    # to get a reading (waiting 2 seconds between each retry).
    humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, 4)

    if temperature is None:
        # No reading after all retries; give up.
        sys.exit(1)

    # Convert the temperature from Celsius to Fahrenheit.
    casingtemp = temperature * 9 / 5.0 + 32

    try:
        post_live_production(endpoint, pwi, 0, casingtemp, 0, 0, 0, 0, 0, 0, PETRO_URL)
    except Exception:
        pass
    print(wellId + " Tag sent to " + PETRO_URL + endpoint + " at "
          + datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    time.sleep(freq_seconds)
[/code]
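The post_live_production helper comes from our internal functions module and isn’t shown here. As a rough sketch of what such a helper could look like – the payload field names and the meaning of the zeroed-out channel arguments are assumptions, not the actual Petro.ai real-time schema:

[code language="python"]
import datetime

import requests


def post_live_production(endpoint, pwi, ch1, casing_temp,
                         ch3, ch4, ch5, ch6, ch7, ch8, petro_url):
    # ch1-ch8 stand in for the other real-time channels (rates, pressures)
    # that the keg experiment zeroes out; their exact meaning is an
    # assumption in this sketch.
    payload = {
        'petroWellIdentifier': pwi,            # hypothetical field name
        'casingTemperature': casing_temp,      # hypothetical field name
        'timestamp': datetime.datetime.utcnow().isoformat(),
    }
    response = requests.post(petro_url + endpoint, json=payload)
    response.raise_for_status()
[/code]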

Results

Once connected, we were extremely pleased with the results. With the frequency of readings set to 3 seconds, we could watch the temperature inside the keg rise and fall. The well was affectionately named “COFFEE 001”.

Categories
Database, Cloud, & IT Transfer

Death by apps, first on your phone and now in O&G

How many apps do you use regularly on your phone? How many of them actually improve your day? What at first seemed like a blessing has turned into a curse as we flip through pages of apps searching for what we need. Most of these apps are standalone, built by different developers, and they don’t communicate with each other. We’re now seeing a similar trend in O&G, with a proliferation of software, especially around analytics.

O&G has always been a data-heavy industry. It’s well documented that data is one of the greatest assets these companies possess. With the onset of unconventionals, both the types of data and the amount of data have exploded. Companies that best manage and leverage data will be the high performers. However, this can be a challenge for even the most sophisticated operators.

Data is collected throughout the well lifecycle from multiple vendors in a wide range of formats. These data types have historically been ‘owned’ by different technical domains as well. For instance, drillers owned the WITSML data, geologists the well logs, completions engineers the frac van data, and production engineers the daily rates and pressures. These different data types are delivered to the operator through various formats and mechanisms: CSV files, FTP sites or client portals, proprietary software, and even PowerPoint or PDF files.

Each domain has worked hard to optimize its processes to drive down costs and increase performance. Part of the gains are due to analytics applications – either built in house or delivered as SaaS offerings from vendors – providing tailored solutions aimed at specific use cases. Many such vendors have recently entered the space to help drillers analyze drilling data to increase ROP or to help reservoir engineers auto-forecast production. However, the O&G landscape is starting to look like all those apps cluttering your phone, none of them communicating with each other. This usually leaves asset teams disjointed, as each technical discipline uses different tools and has visibility only into its own data. Not only is this far from ideal, but operators are forced to procure and support dozens of disconnected applications.

Despite the gains achieved in recent years, certainly due in part to analytics, most shale operators are still cash flow negative. Where will we find the additional performance improvements required to move these companies into the black?

The next step in gains will be found in integrating data from across domains to apply analytics to the overall asset development plan. A cross-disciplinary, integrated approach is needed to really understand the reservoir and best extract the resources. Some asset teams have started down this path but are forced to cobble together solutions, leaving operators with unsupported code that spans Excel, Spotfire, Python, Matlab, and other siloed vendor data sources.

Big-name service providers are trying to build out their own platforms, enticing operators to go all-in with their software just to integrate their data more easily. Not surprisingly, many operators are reluctant to go down this path and become too heavily dependent on a company that provides both their software and a large chunk of their oilfield services. Is it inevitable that operators will have to go with a single provider for all their analytics needs just to look for insights across the well lifecycle?

An alternative and perhaps more attractive option for operators is to form their own data strategy and leverage an analytics layer where critical data types can be merged and readily accessed through an open API. This doesn’t mean another data lake or big data buzzwords, but a purpose-built analytics staging area to clean, process, blend, and store both real-time and historical data. This layer would fill the gap asset teams currently hit when trying to piece their data together. Petro.ai provides this analytics layer, and it comes with pre-built capabilities so that operators do not need a team of developers working for 12 months to start getting value. Rather than a SaaS solution to a single use case, Petro.ai is a platform as a service (PaaS) that can be easily extended across many use cases. This approach removes the burden of building a back end for every use case and then supporting a range of standalone applications. In addition, since all the technical disciplines leverage the same back end, there is one source of truth for data that can be easily shared across teams.

Imagine a phone with a “Life” app rather than music, calendar, weather, chat, phone, social, etc. A single location with a defined data model and open access can empower teams to perform analytics, machine learning, engineering workflows, and ad hoc analysis. This is the direction leading O&G companies are moving to enable the integrated approach to developing unconventionals profitably. It will be exciting to see where we land.

Categories
Database, Cloud, & IT Transfer

Getting Started with MongoDB

This is the start of a new series we’re doing about MongoDB, where we’ll take you through the ins and outs of using a NoSQL database to store your data. MongoDB is one of the most popular DBMSs out there and the industry-leading NoSQL database. Its popularity stems from its speed, ease of development, and flexibility. Instead of normalizing your data into a series of related tables, Mongo allows you to store all the relevant information in a single schemaless document. This not only makes queries faster to write but, depending on how you set up your database, can make them execute faster too.
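For example, instead of normalizing a well and its production history into separate related tables, you might keep them together in one document. Here is a quick sketch in the mongo shell (the field names and values are illustrative, not a fixed schema):

// Store a well and its production records in a single document:
db.wells.insertOne({
    name: "COFFEE 001",
    basin: "Permian",
    production: [
        { date: ISODate("2019-06-01"), oil: 120, gas: 340 },
        { date: ISODate("2019-06-02"), oil: 118, gas: 332 }
    ]
});

// One query returns the well together with all of its production:
db.wells.findOne({ name: "COFFEE 001" });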