Data Engineering and Science

Clarification on data management

We read data from any source (live stream, API, Excel sheet, SQL, PDF, notes, presentations, you name it).

Before you use the data, we import it to our servers. We match all data to the right well, wellbore, sidetrack, and well design in depth and time, and prepare the data for lightning-fast searches. We then improve the data using machine learning and statistical methods to reach a quality level that satisfies users. Well planning never depends on the last few hours of operations; most data is updated monthly.
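As an illustration only, matching an incoming record to the right wellbore by well identifier and depth interval could be sketched as below. All names, fields, and data here are invented for the example and do not reflect the actual pipeline.

```python
# Hypothetical sketch: assign an incoming data record to a wellbore
# by well identifier and measured-depth interval. All identifiers
# and depth values are invented for illustration.

def match_to_wellbore(record, wellbores):
    """Return the wellbore_id whose depth interval contains the record,
    or None if no wellbore matches."""
    for wb in wellbores:
        if (record["well_id"] == wb["well_id"]
                and wb["top_depth_m"] <= record["depth_m"] < wb["bottom_depth_m"]):
            return wb["wellbore_id"]
    return None

wellbores = [
    {"well_id": "W-1", "wellbore_id": "W-1-MAIN", "top_depth_m": 0, "bottom_depth_m": 2500},
    {"well_id": "W-1", "wellbore_id": "W-1-ST1", "top_depth_m": 2500, "bottom_depth_m": 4100},
]

record = {"well_id": "W-1", "depth_m": 3050.0}
print(match_to_wellbore(record, wellbores))  # falls in the sidetrack interval
```

In practice the matching would also account for time ranges and the well design in effect, but the interval lookup above captures the core idea.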


In the software, engineers generate wells, wellbores, well designs, technical calculations, and experiences. All of these are available through our API, with personal history. The amount of data we create in well planning is orders of magnitude larger than in the manual process.

We then make all the data available through APIs, so you can integrate with any other application or storage.
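To make the integration point concrete, a client might build a query against such an API along these lines. The host, path, and parameter names below are placeholders invented for the sketch, not the real API.

```python
# Hypothetical API-client sketch; the endpoint, path, and parameter
# names are invented placeholders, not the actual API surface.
from urllib.parse import urlencode

BASE_URL = "https://example.com/api/v1"  # placeholder host

def wellbore_query_url(well_id, updated_since=None):
    """Build a GET URL for fetching the wellbores of a given well,
    optionally filtered by last-updated timestamp."""
    params = {"well_id": well_id}
    if updated_since:
        params["updated_since"] = updated_since  # ISO-8601 timestamp
    return f"{BASE_URL}/wellbores?{urlencode(params)}"

print(wellbore_query_url("W-1", updated_since="2024-01-01T00:00:00Z"))
```

A consuming application would issue such requests on its own schedule and map the returned records into its local storage.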

Returning data to the original databases raises a question: when users improve the data, or when we combine multiple sources of one dataset, how will you handle this in your other storage, which only holds one version of the truth? Should we update your storage on every change and then read it back again? In our experience, this is slow and prone to unexpected events. We take responsibility for a robust data-improvement process with high-quality data; no user accepts previous fixes returning as problems.

In later stages, we can collaborate on data quality measures to enable live data sharing with data lakes or other systems, but we need your guidance on how to achieve this.
