
Sep 06 2023

5 Key Designs of the IVAAP Backends

When you are getting ready to embark on a multi-year journey of software development, technical design matters. You don’t just take into account the requirements of a minimum viable product; you have a roadmap for this product, and that roadmap gives you constraints. Over the years, a product evolves, and so do the constraints. With the 2.11 release of IVAAP, now is a good time to revisit early technical design decisions for the IVAAP backends, decisions that have stood the test of time over the entire lifecycle of the product.

1. Designed to Mesh Data

IVAAP has very strong visualization features. Some customers use IVAAP only to visualize data from their PPDM database, or from their OSDU deployment. Many others use IVAAP to visualize data coming from multiple data sources. This “multiple data sources” aspect of IVAAP has been part of the DNA of the product from the start. It gives users the ability to compare data across multiple systems, in one deployment, and in one consistent user interface. 

Technically, this feature imposes architectural constraints. First, from a reliability standpoint, when you access multiple data sources at the same time, you can’t assume all of them are available. And the failure of one data source shouldn’t affect access to data from another source. For example, if a PPDM database goes down, this shouldn’t affect access to WITSML data. This is one of the reasons why IVAAP has been designed as a “mesh” of multiple Java Virtual Machines. Even in the most basic deployment, each data source type (PPDM, WITSML, OSDU, etc) has its own dedicated sandbox, and it’s the role of the IVAAP cluster to make these sandboxes cooperate to give users a seamless experience.

Sandboxing access to each data source type also provides a simple way to scale. Whether the components of IVAAP are deployed with Kubernetes or with Docker Compose, you can customize at deployment time how each data source type scales. 

The ability to access multiple data sources also permeates the way URLs are designed. The URLs of the IVAAP Data Backend are designed so that the cluster can easily route HTTP requests towards the right component. This routing doesn’t just affect the URL of REST services, but also the URL of real-time channels used in web sockets. These “mesh-friendly” URLs have stayed stable since the first version of IVAAP.
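
As an illustration of the routing idea only (the URL layout and class names below are hypothetical, not IVAAP's actual API), a cluster could pick the target node from a path segment that identifies the data source type:

// Illustration only: routing an incoming request to the node that owns a data source type.
// The URL layout and class names are hypothetical, not IVAAP's actual API.
import java.util.Map;

public final class DataSourceRouter {

    // Each data source type runs in its own sandboxed JVM ("node") in the cluster.
    private final Map<String, String> nodeByType = Map.of(
            "witsml", "witsml-node",
            "ppdm",   "ppdm-node",
            "osdu",   "osdu-node");

    /**
     * Extracts the data source type from a path such as
     * "/ivaap/api/witsml/connections/123/wells" and returns the node responsible
     * for it, so the cluster can forward the HTTP request (or web socket upgrade)
     * without inspecting the payload.
     */
    public String routeFor(String path) {
        String[] segments = path.split("/");
        // segments[0] is empty because the path starts with '/'
        String type = segments.length > 3 ? segments[3] : "";
        String node = nodeByType.get(type);
        if (node == null) {
            throw new IllegalArgumentException("Unknown data source type: " + type);
        }
        return node;
    }
}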

2. Container Agnostic

An architecture where multiple JVMs run in parallel is complex for a developer to deploy. 99% of development time is spent developing or modifying a service meant to execute within a single JVM. As a platform, IVAAP needed to be powerful, but also easy to use for the most common tasks. This is why it was designed to run on top of multiple containers.

The container used in production is made of a Play server and an Akka cluster. Each data source type is associated with a so-called “node” that is part of that cluster. The container used in development is an Apache Tomcat server, running Java Servlets. This is a configuration that most developers are already familiar with, and that IDEs support well. 

From a code point of view, it makes no difference whether a service will be deployed in Tomcat or in Play. But being “container agnostic” goes beyond the benefits of seamlessly switching between development and runtime environments. The services written with the IVAAP Backend SDK may also be deployed on containers such as Amazon Beanstalk or Google App Engine. JUnit itself is considered a container, making it easy to unit test REST services without having to launch an HTTP server.
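
Here is a minimal sketch of that idea, using hypothetical names rather than the actual SDK API: the service logic is written once against a small contract, and any container, including a plain unit test, can invoke it directly without an HTTP server.

import java.util.Map;
import java.util.function.Function;

// The only contract a service author codes against (hypothetical).
interface BackendService extends Function<Map<String, String>, String> {}

class EchoWellService implements BackendService {
    @Override
    public String apply(Map<String, String> params) {
        // Business logic only: nothing here knows about servlets, Play, or Akka.
        return "{\"well\":\"" + params.getOrDefault("name", "unknown") + "\"}";
    }
}

class DirectInvocationDemo {
    public static void main(String[] args) {
        // A Tomcat servlet or the Play/Akka cluster would adapt its own request
        // object to this same call; a JUnit test can simply call it directly.
        BackendService service = new EchoWellService();
        String json = service.apply(Map.of("name", "A-1"));
        System.out.println(json); // {"well":"A-1"}
    }
}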

Introduced later, the Admin Backend benefited from the SDK’s versatility. While this SDK was initially designed with an Akka cluster in mind, it is also used by the Admin Backend, which runs on Apache Tomcat.

IVAAP is a feature-rich platform, with hundreds of REST services. One of the reasons INT was able to implement so many services is the versatility of IVAAP’s SDK. Abstracting away the underlying runtime environment was an early SDK decision, and a key one in making IVAAP developers instantly productive.

3. Modular

When you develop microservices, there is one term that is the antithesis of the type of product you are creating: the monolithic application. To borrow from Wikipedia, “a monolithic application is a single unified software application which is self-contained and independent from other applications, but typically lacks flexibility.” A monolithic application is often used alongside proprietary data silos. From an architecture point of view, IVAAP is the opposite of a monolithic application: it attempts to free users from proprietary data silos by providing the flexibility that silos lack. This is where a modular architecture helps.

With a modular architecture, optional behavior is not controlled only by configuration; it can also depend on the presence or absence of a module. From a development perspective, it is sometimes easier to develop and test a module that implements an option than to modify (and risk breaking) existing, already well-tested code.

Modularity is particularly useful when 99% of the product fits a customer’s needs, but the remaining 1% needs to be customized. With a modular architecture, the IVAAP code doesn’t need to be changed to implement a proprietary authentication mechanism. A custom authentication module can be deployed instead.

The benefits of modularity were well understood before INT started working on IVAAP. It’s really how it was implemented that turned out to have staying power. The IVAAP Backend SDK uses a simple Lookup system to let developers plug or unplug implementations. This Lookup system was actually the first line of code I wrote when I started working on IVAAP. In a nutshell, a single annotation governs how classes are registered into the lookup at startup. It’s an API that is incredibly powerful, but also simple to learn. The entire IVAAP Backend code was built around it.
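
The sketch below illustrates the general pattern of an annotation-driven lookup; the annotation and registration details are hypothetical and differ from the actual SDK.

import java.lang.annotation.*;
import java.util.*;

// Hypothetical annotation: declares which contract an implementation fulfills.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Registered {
    Class<?> value();
}

interface WellFinder {
    List<String> findWellIds();
}

@Registered(WellFinder.class)
class StaticWellFinder implements WellFinder {
    @Override
    public List<String> findWellIds() {
        return List.of("A-1", "A-2");
    }
}

final class Lookup {
    private static final Map<Class<?>, Object> SERVICES = new HashMap<>();

    // At startup, annotated classes are discovered (passed explicitly here for
    // brevity; a real system would scan the classpath) and instantiated.
    static void register(Class<?>... implementations) throws ReflectiveOperationException {
        for (Class<?> impl : implementations) {
            Registered annotation = impl.getAnnotation(Registered.class);
            SERVICES.put(annotation.value(), impl.getDeclaredConstructor().newInstance());
        }
    }

    static <T> T get(Class<T> contract) {
        return contract.cast(SERVICES.get(contract));
    }

    public static void main(String[] args) throws Exception {
        register(StaticWellFinder.class);
        // Plugging or unplugging an implementation is just a matter of
        // (not) registering the annotated class; callers only see the contract.
        System.out.println(Lookup.get(WellFinder.class).findWellIds());
    }
}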

4. Developer-Friendly Data Models

IVAAP supports multiple types of data sources. Many of these data sources contain well data, but they all have their own way of storing that data. The role of a connector’s implementation is to expose this data to the rest of IVAAP, using a standard data API.

A typical way to do this is to use the concept of a “Data Access Object.” To quote Wikipedia again, “a data access object (DAO) is a pattern that provides an abstract interface to some type of database or other persistence mechanism.” The IVAAP SDK follows this DAO concept while keeping developers in mind.

A typical DAO implementation indicates which interfaces need to be implemented. A developer working on a connector would code towards these interfaces, then start testing the code. This approach has several problems. One is that a developer cannot start testing their work until all the interfaces have been implemented. Another is that there may be many interfaces to implement, further delaying the availability of that work.

IVAAP’s DAO approach introduces the concept of “finders” and “updaters.” Finders are in charge of performing “read-only” data accesses, while updaters are in charge of updating this data if needed. Finders and updaters are added to the lookup at startup. Unlike with a classic DAO implementation, developers only need to plug in the finders and updaters that will actually be used, the ones that match data actually present in the data source they are working with.

For example, a WITSML store may contain “well risk” data and INT’s WITSML connector implements a “well risk finder.” A PPDM database doesn’t contain “well risk” data and INT’s PPDM connector doesn’t need to implement a “well risk finder.” By splitting the code requirements into fine-grained finders and updaters, the IVAAP SDK implements a DAO system that requires only the minimum amount of code for developers to write.
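
As a sketch only (the interface names are hypothetical), the fine-grained split could look like this:

import java.util.List;

// Hypothetical fine-grained finder: one small contract per kind of data.
interface WellRiskFinder {
    List<String> findRiskIdsForWell(String wellId);
}

// A WITSML store contains well risks, so the WITSML connector plugs in a finder...
class WitsmlWellRiskFinder implements WellRiskFinder {
    @Override
    public List<String> findRiskIdsForWell(String wellId) {
        // In a real connector this would query the WITSML server for the well's risk objects.
        return List.of("risk-001", "risk-002");
    }
}

// ...while a PPDM connector simply never registers a WellRiskFinder: the
// corresponding REST services stay silent instead of failing, and no code
// has to be written for data the store doesn't contain.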

Paired with well-defined data models, the now battle-tested concept of finders and updaters has been a key element for new developers to learn the IVAAP SDK and develop connectors fast.

5. Circle of Trust

Going back to the idea of a “monolith,” one of the pitfalls of building a monolith is that a monolithic application can often only talk to itself. Since IVAAP is based on microservices, it is, by design, a system that is easy to integrate with. There is however a constant hurdle to such integration: authentication and security.

When authentication is simple, many systems use the concept of a “service account” to consume microservices. But “service accounts” often break when multi-factor authentication is required. When integration with other systems is a concern, we found that microservices need to be able to clearly differentiate human interactions from automated interactions.

This is essentially the purpose of the Circle of Trust. When deploying IVAAP, the software components that require special access can authenticate themselves as such. For example, a Python or Shell script that scoops up the monthly activity report would identify itself as a member of the Circle of Trust to access this report.

As IVAAP matures, many of the solutions proposed today by INT to its customers involve the Circle of Trust. The Circle of Trust is a simple but secure way of integrating third-party software with IVAAP. While it is a recent introduction to IVAAP’s design, it has become a key tool to meet customer needs.

 

Conclusion

The designs above made IVAAP’s journey possible, but IVAAP’s journey is not over. What they all share is a focus on users. Whether these users are geoscientists or programmers, these designs were meant to empower them. Empowering doesn’t just mean catering to today’s needs — it also means anticipating tomorrow’s requirements and making them possible. These five designs were strong enough that they were the gifts that kept on giving.

Visit us online at int.com/ivaap for a preview of IVAAP or for a demo of INT’s other data visualization products. 

For more information, please visit int.flywheelstaging.com or contact us at intinfo@int.com.


Filed Under: IVAAP Tagged With: Amazon Beanstalk, DAO, Google App Engine, HTTP, ivaap, JVM, OSDU, PPDM, REST, SDK, URL, WITSML

May 01 2023

How the Admin Backend Provides Flexibility to IVAAP Customers

The P of IVAAP stands for Platform. As a platform, IVAAP is designed to be modified by INT’s own customers to meet their specific (and sometimes proprietary) needs. Over the years, most of the focus of my blog articles has been on the Data Backend. The Data Backend is the core component of IVAAP that accesses data and makes it available, in a standardized form, to the IVAAP viewer. The Data Backend is highly specialized for geoscience data. There is, however, another backend, the Admin Backend, that is more generic in nature and that typically doesn’t get as much attention. The goal of this article is to shed some light on how this “other backend” can be customized or consumed to meet customer needs.

Roles of the Admin Backend

The Admin Backend has multiple roles. Some see the Admin Backend as a component managing IVAAP projects. An IVAAP project is essentially an arbitrary grouping of datasets, where each dataset is accessible through the Data Backend. Indeed, the Admin Backend manages projects and all their members. Each member is identified by a URL and often carries metadata, such as its name or location. The actual storage of this information is a PostgreSQL database. The IVAAP Admin Backend provides a simple REST API to the UI to manage project data.

Describing the Admin Backend as a store for projects doesn’t do it justice. It also manages many types of IVAAP entities such as connectors, cloud services, users, groups, dashboards, templates, formulas, and queries. Other data types are geoscience-related, such as curve dictionaries. The Admin Backend is also in charge of providing an audit trail when data is added, updated, or removed. All these features are implemented based on a documented Java API so that INT’s customers can plug in their own implementations. Developers are not limited to the REST services that the Admin Backend provides; they can add their own. While the Admin SDK has multiple customization points, the use cases below are the most common.

Customization Use Cases

The first use case of the Admin SDK is the customization of authentication. By default, IVAAP supports two types of authentication: a simple OAuth2-based authentication and OpenID Connect. IVAAP customers often have their own authentication system, and IVAAP needs to use that system. To make this possible, the Admin SDK provides a way to customize how users are authenticated, and how sessions are managed and tracked.

The next use case is the customization of key services, typically collection services. For example, the Admin Backend has a service that lists all active users. This service is used by the UI when a template is shared with others. IVAAP customers can plug in their own service that lists potential users, for example from an LDAP server instead of IVAAP’s own PostgreSQL database.

The third use case is the customization of entitlements. Each time a dataset is opened, the Admin Backend is queried to check whether the currently logged-in user has access to this dataset. The default implementation relies on group memberships, but customers can plug in their own rules. These rules can be fine-grained, for example making determinations at the well log level instead of the well level.
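
A sketch of what such a pluggable rule could look like, with hypothetical interface and method names:

// Hypothetical names, not the Admin SDK's actual API: a custom entitlement rule
// can refine access decisions down to the individual well log.
interface EntitlementRule {
    boolean canOpen(String userId, String datasetUri);
}

class WellLogLevelRule implements EntitlementRule {
    @Override
    public boolean canOpen(String userId, String datasetUri) {
        // The default behavior checks group membership at the well level;
        // a custom rule can inspect the finer-grained part of the URI instead.
        if (datasetUri.contains("/logs/")) {
            return isLogReleasedTo(userId, datasetUri);
        }
        return true; // fall back to the coarser, well-level decision
    }

    private boolean isLogReleasedTo(String userId, String logUri) {
        // Placeholder: query the customer's own release/authorization system here.
        return logUri.endsWith("/public") || "admin".equals(userId);
    }
}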

External Integration Use Cases

The integration of the Admin Backend with other systems goes beyond authentication. The Admin REST services are designed to be called either by a human or by a computer. Unlike humans, computers can’t easily log in to a system that requires two-factor authentication. This is why the Admin Backend provides a “Circle of Trust” REST API that allows computers to access its data without a login or password, relying instead on a secure exchange of keys. This feature opens new integration use cases.

The first use case for the “Circle of Trust” is the automated retrieval of user activities. Some INT customers require monitoring of user sessions to assess how much IVAAP is used. The REST API for listing user activities is straightforward to use and can be leveraged by a tool outside of IVAAP.
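
As an illustration only, an external monitoring tool could pull that report over HTTPS; the endpoint path, header, and token handling below are hypothetical stand-ins for the documented Circle of Trust exchange:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ActivityReportPuller {
    public static void main(String[] args) throws Exception {
        String adminBackend = "https://ivaap.example.com/admin"; // hypothetical host
        String trustToken = System.getenv("IVAAP_TRUST_TOKEN");   // obtained via the key exchange

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(adminBackend + "/api/activities?from=2023-04-01&to=2023-04-30"))
                .header("Authorization", "Bearer " + trustToken)
                .GET()
                .build();

        // No interactive login or multi-factor prompt: the tool identifies itself
        // as a trusted component and receives the activity report as JSON.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}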

Another use case for the “Circle of Trust” is the automated registration of external workflows. INT customers might have hundreds of machine learning (ML) workflows that are hosted on various systems. With “Circle of Trust” credentials, the endpoints for these workflows can be registered automatically so that they appear as options to IVAAP users.

A Single SDK for Two Backends

The Admin Backend has about 200 REST services, and these services were developed with the same SDK as the Data Backend. The same INT developers who maintain the Data Backend also maintain the Admin Backend, with no additional training required. It’s not just INT that benefits; our customers benefit, too. Together, the Data Backend and the Admin Backend provide a unified experience for all Java developers customizing IVAAP servers.

Visit us online at int.com/ivaap for a preview of IVAAP or for a demo of INT’s other data visualization products. 

For more information, please visit int.flywheelstaging.com or contact us at intinfo@int.com.


Filed Under: IVAAP Tagged With: API, backend, developer, ivaap, java, OpenID, SDK, URL

Jan 19 2022

An Inside Look at the New Release of the IVAAP Data Backend SDK

INT recently announced the release of IVAAP 2.9. We sat down with Thierry Danard, INT’s VP of Core Platform Technologies, for a quick chat about what this release includes for the Data Backend SDK.

 

Hi Thierry, what can you tell us about IVAAP 2.9 for the Data Backend?

There are a few low-level changes that won’t make it to the release notes, but that will make a difference for users. For example, we optimized the data caching mechanism. Instead of being based on the number of datasets, the data caches are now based on the size of these datasets. This typically means lower latency as more datasets tend to be cached within the same memory space. This is a feature we initially used only with WITSML wells and have since extended to other connectors and data types.
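
The idea behind size-based caching can be sketched in a few lines (this is an illustration of the concept, not IVAAP's implementation): eviction is driven by the total byte size of what is cached rather than by the number of entries.

import java.util.LinkedHashMap;
import java.util.Map;

class SizeBoundedCache<K> {
    private final long maxBytes;
    private long currentBytes;
    // Access-ordered map: iteration starts at the least recently used entry.
    private final LinkedHashMap<K, byte[]> entries = new LinkedHashMap<>(16, 0.75f, true);

    SizeBoundedCache(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    synchronized void put(K key, byte[] value) {
        byte[] previous = entries.put(key, value);
        if (previous != null) {
            currentBytes -= previous.length;
        }
        currentBytes += value.length;
        // Evict least-recently-used datasets until the total size fits again.
        var iterator = entries.entrySet().iterator();
        while (currentBytes > maxBytes && iterator.hasNext()) {
            Map.Entry<K, byte[]> eldest = iterator.next();
            currentBytes -= eldest.getValue().length;
            iterator.remove();
        }
    }

    synchronized byte[] get(K key) {
        return entries.get(key); // a hit refreshes recency
    }
}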

At a high level, we added a powerful “tiled images” feature. Some INT customers have large rasters that they need to visualize, or that their customers need to visualize as part of a portal. These images are stored as good old TIFF image files on a local file system or in the cloud. To visualize these images in IVAAP, you just need to point IVAAP at the location of these files, and it does the rest. While the idea of visualizing images online might seem mundane, there is quite a lot of technology involved in making this happen.

First, these image files are LARGE. They can be up to 4 GB in TIFF format, and that’s a compressed format! To make the visualization of these images seamless, you need to be really good at:

  1. Downloading these files from the cloud, fast
  2. Reading these images in their native format
  3. Rendering these images as small tiles and sending these tiles to the viewer efficiently
  4. Stitching these images back together as one raster on the viewer side
  5. Caching these images long enough, but not too long
  6. Doing all this concurrently, for a large number of simultaneous users

That’s the technology side of it. From a business perspective, what’s really interesting is that there is no preprocessing of the images required. The ingestion workflow requires no specialized tooling. There’s no need to “precut” each TIFF as multiple tiles at multiple resolution levels. The “cutting” is all done on the fly and it’s seamless to end users. This is particularly important to keep storage to a minimum when you migrate your image library to the cloud. When each TIFF file takes gigabytes, a library of only a few thousand files already has a significant footprint, so you don’t want to expand that footprint with file duplication.
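
To give a feel for the on-the-fly “cutting”, here is a small sketch of the kind of arithmetic involved (illustration only, not INT's actual code): given a zoom level and a tile index, it computes which pixel region of the full raster must be decoded to produce that tile.

import java.awt.Rectangle;

final class TileMath {
    static final int TILE_SIZE = 256; // pixels per tile edge, a common convention

    /**
     * Pixel region of the source image covered by tile (tileX, tileY) at a zoom
     * level where the whole image is divided into 2^zoom columns and rows.
     */
    static Rectangle sourceRegion(int imageWidth, int imageHeight, int zoom, int tileX, int tileY) {
        int tilesPerSide = 1 << zoom;
        double tileWidth = (double) imageWidth / tilesPerSide;
        double tileHeight = (double) imageHeight / tilesPerSide;
        int x = (int) Math.floor(tileX * tileWidth);
        int y = (int) Math.floor(tileY * tileHeight);
        int w = (int) Math.ceil((tileX + 1) * tileWidth) - x;
        int h = (int) Math.ceil((tileY + 1) * tileHeight) - y;
        return new Rectangle(x, y, Math.min(w, imageWidth - x), Math.min(h, imageHeight - y));
    }

    public static void main(String[] args) {
        // A 40,000 x 30,000 pixel TIFF, zoom level 3, tile (5, 2):
        System.out.println(sourceRegion(40_000, 30_000, 3, 5, 2));
        // -> java.awt.Rectangle[x=25000,y=7500,width=5000,height=3750]
    }
}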

 

Is this image tiling feature something I could use for my own image storage?

Yes. The technology we developed abstracts away the storage mechanism. The reading, cutting, rendering, and caching are part of the IVAAP Data Backend SDK’s API. You can reuse these parts in your code.

 

What’s the purpose of this SDK, and how does it apply to images?

One of the strengths of IVAAP is that it allows the visualization of data from multiple data sources. For example, just for visualization of wells, IVAAP supports Peloton WellView, PPDM, WITSML, LAS files in the cloud, etc. These data sources typically share a common data model, but what’s different between them is the medium or protocol to access this data. The typical use case of the IVAAP Data Backend SDK is to facilitate the implementation of that access layer.

Images are treated as data. Just like we have “well” web services, we have “image” web services. The image web services are generic in nature: it’s the same service code running regardless of the underlying storage mechanism. It’s only the access code that differs. As a matter of fact, we implemented access to four types of image stores on top of our SDK:

  • Images stored locally on disk
  • Images stored in Amazon S3
  • Images stored in Microsoft Azure Blob Storage
  • Images stored in Google Cloud Storage

In this particular example, we used our own SDK to develop both the data layer and the web service layer.
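
Conceptually, the access layer each store provides can be as small as an interface like the following sketch (hypothetical names, not the SDK's actual API); the tiling, caching, and web service code stays shared.

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical access contract: the web service layer is the same for every store.
interface ImageStore {
    /** Opens the raw bytes of an image, wherever they live. */
    InputStream open(String imageId) throws IOException;

    /** Lists the images this store can serve. */
    List<String> list() throws IOException;
}

// Local disk, Amazon S3, Azure Blob Storage, and Google Cloud Storage would each
// provide their own implementation of the same small interface.
class LocalDiskImageStore implements ImageStore {
    private final Path root;

    LocalDiskImageStore(Path root) {
        this.root = root;
    }

    @Override
    public InputStream open(String imageId) throws IOException {
        return Files.newInputStream(root.resolve(imageId));
    }

    @Override
    public List<String> list() throws IOException {
        try (var paths = Files.list(root)) {
            return paths.map(p -> p.getFileName().toString()).toList();
        }
    }
}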

 

Is the IVAAP Data Backend SDK a web framework?

While this SDK does provide an API to create your own web services, it goes well beyond that. If you call the IVAAP Data Backend SDK a framework, then it’s a highly specialized framework. And it’s not just a web framework, it’s also a data framework. Also, web frameworks tend to force you into a container. To scale your application, you are limited by the capabilities of this container. The IVAAP Data Backend SDK abstracts the container away, giving customers multiple options for how to deploy it and scale it.

From a technical perspective, a good SDK should empower developers. Often, this means requiring the least amount of effort to get the job done. From a business perspective, a good SDK should reduce development costs. Calling the IVAAP Data Backend SDK “yet another web framework” is missing the point of its value. Its value is not about doing the same thing as your favorite “other framework,” it’s about fitting your use case in the most effective way.

 

Then … how is the IVAAP Data Backend SDK a good fit?

Obviously, the integration with the rest of IVAAP is a strong benefit. You could write your data access object (DAO) layer using the API of your choice, but you would still need to implement more mundane aspects, such as:

  • Exposing this data in a REST API, following the exact protocol that the IVAAP client expects. There would be hundreds of REST services to implement.
  • Integrating with services from the Admin server. For example, how users are identified, how to find out who has access to what, what the datasets within a project are, etc.

The second fit is that the SDK is designed to work with geoscience data. It integrates the data models of multiple data types, such as wells, seismic, surfaces … and raster images. While the IVAAP Data Backend SDK can be used outside of a geoscience context, a good portion of its API is geoscience-specific.

During design, we were particularly careful not to make any assumptions about how our customers’ geoscience data is stored. No matter how exotic your data systems might be, the IVAAP Data Backend will be able to access them. We are not just talking about storage, we are also talking about real-time feeds and machine learning (ML) workflows. We verified this multiple times. One particularly visible example is OSDU. INT has been on the leading edge of OSDU development, following the multiple iterations or flavors of OSDU services without requiring a change to our SDK.

 

How is the IVAAP Data Backend SDK effective?

Its API is quite simple to learn. Multiple INT customers have used this SDK to develop their own IVAAP customizations, and its API has always been well received. One aspect developers particularly like is the lookup architecture. Plugging in a class requires no XML, just one Java annotation, and it’s the same annotation used across the entire API. It’s an elegant mechanism, easy to learn, and even easier to use.

The real proof of the IVAAP Data Backend SDK’s effectiveness is the time it takes external developers to learn it and develop their own connector. We tested the effectiveness of the SDK by asking new hires to develop a connector, armed only with their knowledge of Java. Without any prior geoscience knowledge, the average time to get that connector up and running has been two weeks.

Another way that the backend keeps developers effective is by not taking away their favorite development tools. While the IVAAP Data Backend uses a powerful cluster architecture in production deployments, the typical developer’s day is spent in their favorite IDE (NetBeans, Eclipse, IntelliJ, etc.) with a well-known application server (Tomcat, Glassfish). Development is much faster when you don’t need to launch an entire cluster for a simple debugging session.

Effectiveness and fit are key. The goal of a typical framework is to help you get started faster. It provides a shortcut to skip the implementation of the mundane concerns of an application. In IVAAP’s case, for many customers, the application itself is already written. For these customers, the IVAAP Data Backend SDK helps you get finished faster instead. It provides customers with the API to finish the last mile of an IVAAP deployment: the access layer to their proprietary data store.

For more information or for a free demo of IVAAP, visit int.com/products/ivaap/.


Filed Under: IVAAP Tagged With: backend, data, IDE, image, ivaap, OSDU, raster, SDK, TIFF

Jun 17 2021

How to Extend the IVAAP Data Model with the Backend SDK

The IVAAP Data Backend SDK’s main purpose is to facilitate the integration of INT customers’ data into IVAAP. A typical conversation between INT’s technical team and a prospective customer might go like this:

Can I visualize my data in IVAAP?

Yes, the backend SDK is designed to make this possible, without having to change the UI or even write web services.

What if my data is stored in a proprietary data store?

The SDK doesn’t make any assumptions on how your data is stored. Your data can be hidden away behind REST services, in a SQL database, in files, etc.

What are the steps to integrate my data?

The first step is to map your data model with the IVAAP data model. The second step is to identify, for each entity of that data model, how records are identified uniquely. The third step is to plug your data source. The fourth and final step is to implement, for each entity, the matching finders.

What if I want to extend the IVAAP data model?

My typical answer to this last question is “it depends”. The SDK has various hooks to make this possible. Picking the right “hook” depends on the complexity of what’s being added.

Using Properties

The IVAAP data model was created after researching the commonalities between industry data models. However, when we built it, we kept in mind that each data store carries its own set of information, information that is useful for users to consume. This is why we made properties a part of the data model itself. From an IVAAP SDK point of view, properties are a set of name-value pairs that you can associate with specific entities within the IVAAP data model.

For example, if a well dataset is backed by an .LAS file in Amazon S3 or Microsoft Azure Blob Storage, knowing the location of that file is a valuable piece of information as part of a QC workflow. But not all data stores are backed by files; a file location is not necessarily relevant to a user accessing data from PPDM. As a result, the set of properties shown when opening a dataset backed by Azure will typically be different from the set coming from PPDM, even for the same well.

 

Figure: An example of the properties dialog, showing multiple properties of a seismic dataset stored in Amazon S3.

 

 

Calling the properties a set of name-value pairs does not do justice to their flexibility. While a simple name and value is the most common use, you can create a tree of properties and attach additional attributes to each name-value pair. The most common additional attribute is a unit, qualifying the value to make the property more of a measurement. Another attribute allows name-value pairs to be invisible to users. The purpose of invisible properties is to facilitate integration with systems other than the IVAAP client. For example, while a typical user might be interested in the size of a file, this size should be rounded and expressed in KB, MB, or GB. External software consuming the IVAAP properties REST services would need the exact number of bytes.
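
A sketch of that shape, with hypothetical classes rather than the SDK's actual property API:

import java.util.ArrayList;
import java.util.List;

class Property {
    final String name;
    final String value;
    final String unit;        // optional: turns the value into a measurement
    final boolean visible;    // invisible properties target other software, not end users
    final List<Property> children = new ArrayList<>();

    Property(String name, String value, String unit, boolean visible) {
        this.name = name;
        this.value = value;
        this.unit = unit;
        this.visible = visible;
    }

    Property add(Property child) {
        children.add(child);
        return this;
    }

    public static void main(String[] args) {
        Property file = new Property("File", "survey_2020.segy", null, true)
                .add(new Property("Size", "1.2", "GB", true))                 // what a user sees
                .add(new Property("SizeInBytes", "1288490189", null, false)); // for external software
        System.out.println(file.name + " has " + file.children.size() + " attributes");
    }
}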

One of the benefits of using properties to carry information is that implementing your own property finder is simple and requires no additional work: no custom REST services to write, and no widget to implement to visualize these properties. The IVAAP HTML5 client is designed to consume the IVAAP services of the data backend and to show these properties in the UI.

Adding Your Own Tables and Documents

One of the limitations of properties is that they don’t provide much interaction. Users can only view properties. The simplest way to extend the IVAAP model in a way that lets users interact with that data is to add tables. For example, the monthly production of a well is an easy table to make accessible as a node under that well. Once the production of a well is accessible as a table, users have multiple options to graph this production: as a 2D plot, as a pie chart, as a histogram, etc. And this chart can be saved as part of a dashboard or a template, making IVAAP a portal.

The IVAAP Data Backend SDK has been designed to make the addition of tables a simple task. Just like for properties, the HTML5 Viewer of IVAAP doesn’t need to be customized to discover and display these tables. It’s the services of the data backend that direct the viewer on how to build the data tree shown to users. And while the data backend might advertise many reports, only non-empty reports will be shown as nodes by the viewer. 
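
As an illustration only (hypothetical types, not the SDK's table API), a monthly-production report is conceptually just columns and rows advertised under a well:

import java.util.List;

// Hypothetical shape of a tabular report exposed by a connector.
record TableReport(String title, List<String> columnNames, List<List<Object>> rows) {}

class MonthlyProductionExample {
    public static void main(String[] args) {
        TableReport report = new TableReport(
                "Monthly Production",
                List.of("Month", "Oil (bbl)", "Gas (Mcf)"),
                List.of(
                        List.<Object>of("2021-01", 12_450, 30_200),
                        List.<Object>of("2021-02", 11_980, 29_700)));
        // A non-empty report would appear as a node under the well in the data tree,
        // ready to be charted by the viewer without UI customization.
        System.out.println(report.title() + ": " + report.rows().size() + " rows");
    }
}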

 

Figure: An example of tabular reports related to a well.

 

 

In the many customization projects that I’ve been involved in, the tabular features of IVAAP have been used the most. I have seen dozens of reports under wells. The IVAAP Data Backend makes no assumptions about where this production data is stored relative to where the well is stored. For example, you can mix the schematics from Peloton WellView with the production reports from Peloton ProdView. From a user’s point of view, the source of the data is invisible; IVAAP combines the data from several sources in a transparent way. Extending the IVAAP data model doesn’t just mean exposing more data from your data source, it also means enriching your data model with data from other sources.

Data enrichment is sometimes achieved just by making the documents associated with a well accessible. For example, for Staatsolie’s portal, the IVAAP UI gave direct access to the documentation of a well, stored in Schlumberger’s ESearch.

 

Figure: An example of a PDF document related to a well.

 

 

Adding Your Own Entities and Services

When data cannot be expressed as properties, tables, or documents, the next step is to plug in your own model. The API of the Backend SDK makes it possible to plug your own entities under any existing entity of the built-in data model. In this use case, not only does code to access the data need to be developed, but also code to expose this data to the viewer. The IVAAP data model is mature, so this is a rare occurrence.

There are hundreds of services implemented with the IVAAP Data Backend SDK. Developers who embark on a journey of adding their own data types can be reassured by the fact that the path they follow is the same path the INT developers follow every day as we augment the IVAAP data model. INT makes use of its own SDK every day.

 

Figure: Home page of the website dedicated to the IVAAP Data Backend SDK.

 

 

Whether IVAAP customers need to pepper the IVAAP UI with proprietary properties or their own data types, these customers have options. The SDK is designed to make extensions straightforward, not just for INT’s own developers, but for INT customers as well. You do not need to contract INT’s services to roll your own extensions. You can, but you don’t have to. When IVAAP gets deployed, we don’t just give you the keys to IVAAP as an application, we also give you the keys to IVAAP as a platform, where you can independently control its capabilities.

For more information on IVAAP, please visit int.flywheelstaging.com/products/ivaap/

 


Filed Under: IVAAP Tagged With: backend, data, html5, ivaap, SDK

May 20 2021

Deploying IVAAP Services to Google App Engine

One of the productivity features of the IVAAP Data Backend SDK is that the services developed with this SDK are container-agnostic. Practically, it means that a REST service developed on your PC using your favorite IDE and deployed locally to Apache Tomcat will run without changes on IVAAP’s Play cluster.

While the Data Backend SDK is traditionally used to serve data, it is also a good candidate when it comes to developing non-data-related services. For example, as part of IVAAP 2.8, we worked on a gridding service. In a nutshell, this service computes a grid surface based upon the positions of a top across the wells of a project. When we tested this service, we didn’t deploy it to IVAAP’s cluster; it was deployed as a standalone application, as a servlet, on a virtual machine (VM).

Deploying Apache Tomcat on a virtual machine is “old school”. Our customers are rapidly moving to the cloud, and while VMs are often a practical choice, other options are sometimes available. One of these options is Google App Engine. Google App Engine is a bit of a pioneer of cloud-based deployments. It was the first product that allowed servlet deployments that scale automatically, without having to worry about the underlying infrastructure of virtual machines. This “infinite” scalability comes with quite a few constraints, and I was curious to find out whether services developed with the IVAAP Data Backend SDK could live within these constraints (spoiler alert: they can).

Synchronous Servlet Support

The first constraint was the lack of support for asynchronous servlets. Google App Engine doesn’t support asynchronous servlets, and the IVAAP servlet shipped with the SDK is strictly asynchronous. Supporting the synchronous requirements of Google App Engine didn’t take much time. The main change was to modify the concrete implementation of com.interactive.ivaap.server.servlets.async.AbstractServiceRequest.waitForResponse to wait on a java.util.concurrent.CountDownLatch instead of calling javax.servlet.ServletRequest.startAsync().
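
A minimal sketch of that pattern, with hypothetical class and method names:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class SynchronousServiceRequest {
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile String responseBody;

    // Called by the worker that computes the result (possibly on another thread).
    void complete(String body) {
        responseBody = body;
        done.countDown();
    }

    // Called from the servlet's service() method; blocks instead of going async,
    // which is what Google App Engine's synchronous servlet model requires.
    String waitForResponse(long timeout, TimeUnit unit) throws InterruptedException {
        if (!done.await(timeout, unit)) {
            throw new IllegalStateException("Timed out waiting for the backend response");
        }
        return responseBody;
    }
}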

Local File Access

The second constraint was the lack of a local file system. Google App Engine doesn’t let developers access the local files of the virtual machine where an application is deployed. The IVAAP Data Backend SDK typically doesn’t make much use of the local file system, except at startup when it reads its service configuration. To authorize users, the services developed with the IVAAP Data Backend SDK need to know how to validate Bearer tokens, and this validation requires knowing the host name of the IVAAP Admin Backend. The Admin Backend exposes REST services for the validation of Bearer tokens. To support Google App Engine, I had to make the discovery of these configuration files pluggable so that they can be read from the WEB-INF directory of the servlet instead of a directory external to that servlet.
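
A sketch of the Google App Engine variant of that discovery, assuming a hypothetical configuration file name:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import javax.servlet.ServletContext;

final class WebInfConfigLoader {
    static Properties load(ServletContext context) throws IOException {
        Properties config = new Properties();
        // getResourceAsStream resolves paths relative to the web application root,
        // so /WEB-INF/... works even though the local file system is off-limits.
        try (InputStream in = context.getResourceAsStream("/WEB-INF/ivaap-backend.properties")) {
            if (in == null) {
                throw new IOException("Missing /WEB-INF/ivaap-backend.properties");
            }
            config.load(in);
        }
        return config;
    }
}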

Persistence Mechanism

The third constraint was the lack of persistence. Google App Engine doesn’t provide a way to “remember” information between two HTTP calls. To effectively support computing services, a REST API cannot make an HTTP client “wait” for the completion of a computation. The computation might take minutes, even hours. The REST API of a computing service has to give a “ticket” number back to the client when a process starts, and provide a way for this client to observe the progress of that ticket, all the way to completion. In a typical servlet deployment, there are many options to achieve this: the service can use the Java heap to store the ticket information, or use a database. To achieve the same result with Google App Engine, I needed to pick a persistence mechanism. For simplicity’s sake, I picked Google Cloud Storage. The state of each ticket is stored as a file in that storage.
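
A sketch of that persistence, using the google-cloud-storage client and a hypothetical bucket and object layout:

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.nio.charset.StandardCharsets;

final class TicketStore {
    private static final String BUCKET = "my-ivaap-tickets"; // hypothetical bucket name
    private final Storage storage = StorageOptions.getDefaultInstance().getService();

    // Persist the state of a computation ticket between two stateless HTTP calls.
    void saveState(String ticketId, String stateJson) {
        BlobInfo blob = BlobInfo.newBuilder(BlobId.of(BUCKET, "tickets/" + ticketId + ".json"))
                .setContentType("application/json")
                .build();
        storage.create(blob, stateJson.getBytes(StandardCharsets.UTF_8));
    }

    String readState(String ticketId) {
        byte[] content = storage.readAllBytes(BlobId.of(BUCKET, "tickets/" + ticketId + ".json"));
        return new String(content, StandardCharsets.UTF_8);
    }
}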

Background Task Executions

The fourth constraint was the lack of support for background executions. Google App Engine by itself doesn’t allow processes to execute in the background. Google, however, provides integration with another product called Google Cloud Tasks. Using the Google Cloud Tasks API, you can submit HTTP requests to a queue, and Google Cloud Tasks will make sure these requests get executed eventually. Essentially, when the gridding service receives an HTTP request, it creates a ticket number and immediately submits this HTTP request to Google Cloud Tasks, which in turn calls Google App Engine back. The IVAAP service recognizes that the call comes from Google Cloud Tasks and stores the result in a file in Google Cloud Storage instead of the servlet output stream. It then notifies the client that the process has completed.
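
A sketch of the submission step, using the google-cloud-tasks client with hypothetical project, queue, and callback names:

import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;
import com.google.protobuf.ByteString;

final class GriddingTaskSubmitter {
    // Hypothetical project, location, and queue identifiers.
    private static final QueueName QUEUE = QueueName.of("my-project", "us-central1", "gridding-queue");

    /** Queues the long-running gridding computation and returns immediately. */
    static String submit(String ticketId, String requestJson) throws Exception {
        try (CloudTasksClient client = CloudTasksClient.create()) {
            AppEngineHttpRequest request = AppEngineHttpRequest.newBuilder()
                    .setHttpMethod(HttpMethod.POST)
                    .setRelativeUri("/internal/gridding?ticket=" + ticketId) // hypothetical callback
                    .putHeaders("Content-Type", "application/json")
                    .setBody(ByteString.copyFromUtf8(requestJson))
                    .build();
            Task task = Task.newBuilder().setAppEngineHttpRequest(request).build();
            client.createTask(QUEUE, task);
            return ticketId; // the HTTP client polls this ticket until completion
        }
    }
}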

Here’s a diagram that describes the complete workflow: 

Diagram (INT_GCP_Workflow): the complete gridding workflow across Google App Engine, Google Cloud Tasks, and Google Cloud Storage.

Constraints and Considerations

While the SDK did provide the API to implement this workflow out of the box, getting this to work took a bit of time. I had to learn three Google products at once to get it working. I also encountered obstacles that I will share here so that other developers benefit:

  1. The first obstacle was that the Java SDK for Google App Engine requires the Eclipse IDE. There is no support for the NetBeans IDE. I am more proficient with NetBeans.
  2. The second obstacle was that I had to register my Eclipse IDE with Google so I can deploy code from that environment. It just happened that that day, the Google registration server was having issues, blocking me from making progress.
  3. The third obstacle was the use of Java 8. The Google Cloud SDK required Java 8, but Eclipse defaulted to Java 11. It took me a while to understand the arcane error messages thrown at me.
  4. The fourth obstacle was that I had to pick a flavor of Google App Engine, either “Standard” or “Flexible”. The “Standard” option is cheaper to run because it doesn’t require an instance running at all times. The “Flexible” option has less warmup time because there is always at least one instance running. There are many more differences, not all of them well documented. The two options are similar, but do not share the same API. You don’t write the same code for both environments. In the end, I picked the “Standard” option because it was the most constraining, better suited to a proof of concept.
  5. The fifth obstacle was the confusion caused by the word “Promote” used by the Google SDK when deploying an instance. In this context, “Promote” doesn’t mean “advertising”; it means “put into production.” For a while, I couldn’t figure out why my application wouldn’t show any changes where I expected them. The answer was that I hadn’t “promoted” them.
  6. The last obstacle was the logging system. Google has a “Google Logging” product to access the logs produced by your application. Logging is essential to debugging unruly code that you can’t run locally. Despite several weeks of use, I still haven’t figured out how this product really works. It is designed for monitoring an application in production, not so much for debugging. Debugging with logs is difficult, and there might be several reasons why you can’t find a log. The first possibility is that the code doesn’t go where you think it’s going, and the log is never produced. The second is that the log was produced but hasn’t shown up yet; there is a significant delay, and I am too impatient. The third is that it has shown up but is nested inside some obscure hierarchy, and you won’t see it unless you expand the entire tree of logs. The log search doesn’t help much and has some strange UI quirks. I found that the most practical way to explore logs is to download them locally, then use the search capabilities of a text editor. Because the running servlet is not local to your development environment, debugging a Google App Engine application is a time-consuming activity.

In the end, the IVAAP Data Backend SDK passed this proof of concept with flying colors. Despite the constraints and obstacles of the environment, all the REST services that were written with the IVAAP cluster in mind are compatible with Google App Engine, without any changes. Programming is hard; it’s an investment in time and resources. Developing with the IVAAP Data Backend SDK preserves your investment because it makes a minimal number of assumptions about how and where you will run this code.

For more information or for a free demo of IVAAP, visit int.com/products/ivaap/.


Filed Under: IVAAP Tagged With: API, cloud, Google, Google App Engine, ivaap, SDK

