Sep 06 2023

5 Key Designs of the IVAAP Backends

When you are getting ready to embark on a multi-year journey of software development, technical design matters. You don’t just take into account the requirements of a minimum viable product; you have a roadmap for this product, and this roadmap gives you constraints. Over the years, a product evolves and so do the constraints. With the 2.11 release of IVAAP, now is a good time to revisit early technical design decisions for the IVAAP backends, decisions that have stood the test of time over the entire lifecycle of the product.

1. Designed to Mesh Data

IVAAP has very strong visualization features. Some customers use IVAAP only to visualize data from their PPDM database, or from their OSDU deployment. Many others use IVAAP to visualize data coming from multiple data sources. This “multiple data sources” aspect of IVAAP has been part of the DNA of the product from the start. It gives users the ability to compare data across multiple systems, in one deployment, and in one consistent user interface. 

Technically, this feature imposes architectural constraints. First, from a reliability standpoint, when you access multiple data sources at the same time, you can’t assume all of them are available. And the failure of one data source shouldn’t affect access to data from another source. For example, if a PPDM database goes down, this shouldn’t affect access to WITSML data. This is one of the reasons why IVAAP has been designed as a “mesh” of multiple Java Virtual Machines. Even in the most basic deployment, each data source type (PPDM, WITSML, OSDU, etc.) has its own dedicated sandbox, and it’s the role of the IVAAP cluster to make these sandboxes cooperate to give users a seamless experience.

Sandboxing access to each data source type also provides a simple way to scale. Whether the components of IVAAP are deployed with Kubernetes or with Docker Compose, you can customize at deployment time how each data source type scales. 

The ability to access multiple data sources also permeates the way URLs are designed. The URLs of the IVAAP Data Backend are designed so that the cluster can easily route HTTP requests towards the right component. This routing doesn’t just affect the URL of REST services, but also the URL of real-time channels used in web sockets. These “mesh-friendly” URLs have stayed stable since the first version of IVAAP.
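To make the routing idea concrete, here is a minimal sketch in plain Java. The method and variable names are hypothetical, not the actual IVAAP cluster code, and it assumes the /ivaap/api/ds/&lt;dataSourceType&gt;/v1/ URL pattern visible in the service URLs shown later in this blog:

// Hypothetical sketch: extract the data source type from a mesh-friendly URL path,
// so the cluster can route the request to the matching sandbox (PPDM, WITSML, OSDU, etc.).
static String dataSourceTypeOf(String path) {
    String marker = "/ivaap/api/ds/";
    int start = path.indexOf(marker);
    if (start < 0) {
        return null; // not a data service URL
    }
    int typeStart = start + marker.length();
    int typeEnd = path.indexOf('/', typeStart);
    return typeEnd < 0 ? path.substring(typeStart) : path.substring(typeStart, typeEnd);
}

// dataSourceTypeOf("/ivaap/api/ds/witsml/v1/sources/123/wells") returns "witsml",
// and the cluster forwards the request to the WITSML node.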

2. Container Agnostic

An architecture where multiple JVMs run in parallel is complex for a developer to deploy. Yet 99% of development time is spent developing or modifying a service meant to execute within a single JVM. As a platform, IVAAP needed to be powerful, but also easy to use for the most common tasks. This is why it was designed to run on top of multiple containers.

The container used in production is made of a Play server and an Akka cluster. Each data source type is associated with a so-called “node” that is part of that cluster. The container used in development is an Apache Tomcat server, running Java Servlets. This is a configuration that most developers are already familiar with, and that IDEs support well. 

From a code point of view, it makes no difference whether a service will be deployed in Tomcat or in Play. But being “container agnostic” goes beyond the benefits of seamlessly switching between development and runtime environments. The services written with the IVAAP Backend SDK may also be deployed on containers such as Amazon Beanstalk or Google App Engine. JUnit itself is considered a container, making it easy to unit test REST services without having to launch an HTTP server.
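As a rough illustration of what “container agnostic” can mean in code, the sketch below shows services written against an abstraction of the request/response cycle, with thin adapters binding that abstraction to Play, Tomcat, or a JUnit test. The interfaces are hypothetical and simplified, not the actual IVAAP Backend SDK API:

// Hypothetical sketch of a container-agnostic service contract; not the actual IVAAP SDK API.
public interface BackendRequest {
    String parameter(String name);      // a query or path parameter
}

public interface BackendResponse {
    void sendJson(int status, String json);
}

public interface BackendService {
    void handle(BackendRequest request, BackendResponse response) throws Exception;
}

// The same BackendService class can be wrapped by a Servlet adapter for Tomcat,
// a Play adapter for production, or invoked directly from a JUnit test,
// which is what makes JUnit itself usable as a "container".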

Introduced later, the Admin Backend benefited from the SDK’s versatility. While this SDK was initially designed with an Akka Cluster in mind, it is also used by the Admin Backend running Apache Tomcat.

IVAAP is a feature-rich platform, with hundreds of REST services. One of the reasons INT was able to implement so many services is the versatility of IVAAP’s SDK. Abstracting away the underlying runtime environment was an early SDK decision, key to making IVAAP developers instantly productive.

3. Modular

When you develop microservices, there is one term that is the antithesis of the type of product you are creating: the monolithic application. To borrow from Wikipedia, “a monolithic application is a single unified software application which is self-contained and independent from other applications, but typically lacks flexibility.” A monolithic application is often used alongside proprietary data silos. From an architecture point of view, IVAAP is the opposite of a monolithic application: it attempts to free users from proprietary data silos by providing the flexibility that silos lack. This is where a modular architecture helps.

With a modular architecture, optional behavior is not driven by configuration alone; it can also depend on the presence or absence of a module. From a development perspective, it is sometimes easier to develop and test a module that implements an option than to modify (and risk breaking) already well-tested existing code.

Modularity is particularly useful when 99% of the product fits a customer’s needs, but the remaining 1% needs to be customized. With a modular architecture, the IVAAP code doesn’t need to be changed to implement a proprietary authentication mechanism. A custom authentication module can be deployed instead.

The benefits of modularity were well understood before INT started working on IVAAP. It’s really how it was implemented that turned out to have staying power. The IVAAP Backend SDK uses a simple Lookup system to let developers plug or unplug implementations. This Lookup system was actually the first line of code I wrote when I started working on IVAAP. In a nutshell, a single annotation governs how classes are registered into the lookup at startup. It’s an API that is incredibly powerful, but also simple to learn. The entire IVAAP Backend code was built around it.
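In spirit, the pattern looks like the sketch below. The annotation and class names are hypothetical, chosen for illustration; they are not the actual IVAAP Lookup API:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch of an annotation-driven lookup; names do not match the actual SDK.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ServiceProvider {
    Class<?> service();    // the interface this implementation is registered under
}

interface GreetingFinder {
    String greet(String name);
}

@ServiceProvider(service = GreetingFinder.class)
class DefaultGreetingFinder implements GreetingFinder {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// At startup, a classpath scan registers every @ServiceProvider class into the lookup.
// A consumer then asks the lookup for an implementation by interface, for example:
// GreetingFinder finder = Lookup.getDefault().lookup(GreetingFinder.class);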

4. Developer-Friendly Data Models

IVAAP supports multiple types of data sources. Many of these data sources contain well data, but they all have their own way of storing that data. The role of a connector’s implementation is to expose this data to the rest of IVAAP, using a standard data API.

A typical way to do this is to use the concept of a “Data Access Object.” To quote Wikipedia again, “a data access object (DAO) is a pattern that provides an abstract interface to some type of database or other persistence mechanism.” The IVAAP SDK follows this DAO concept, but with developers in mind.

A typical DAO implementation indicates which interfaces need to be implemented. A developer working on a connector would code towards these interfaces, then start testing the code. This approach has several problems. One of them is that developers cannot start testing their work until all interfaces have been implemented. The second is that there may be lots of interfaces to implement, further delaying the availability of that work.

IVAAP’s DAO approach introduces the concept of “finders” and “updaters.” Finders are in charge of performing “read-only” data accesses, while updaters are in charge of updating this data if needed. Finders and updaters are added to the lookup at startup. Unlike a classic DAO implementation, developers only need to plug in the finders and updaters that will actually be used, the ones that match data actually present in the data source they are working with.

For example, a WITSML store may contain “well risk” data, and INT’s WITSML connector implements a “well risk finder.” A PPDM database doesn’t contain “well risk” data, so INT’s PPDM connector doesn’t need to implement a “well risk finder.” By splitting the code requirements into fine-grained finders and updaters, the IVAAP SDK implements a DAO system that requires developers to write only the minimum amount of code.
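To make this concrete, here is a hedged sketch of how a connector only plugs in the finders that match its data. The interface and class names are hypothetical, not the actual IVAAP SDK types:

import java.util.List;

// Hypothetical sketch of the finder pattern; names do not match the actual IVAAP SDK.
interface WellRiskFinder {
    List<String> findWellRisks(String wellId);   // read-only access to "well risk" records
}

// The WITSML connector has well risk data, so it registers a finder for it in the lookup...
class WitsmlWellRiskFinder implements WellRiskFinder {
    public List<String> findWellRisks(String wellId) {
        return List.of("shallow gas", "lost circulation");   // placeholder data for illustration
    }
}

// ...while a PPDM connector simply doesn't register any WellRiskFinder:
// services that depend on well risks detect its absence in the lookup and respond accordingly.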

Paired with well-defined data models, the now battle-tested concept of finders and updaters has been a key element for new developers to learn the IVAAP SDK and develop connectors fast.

5. Circle of Trust

Going back to the idea of a “monolith,” one of the pitfalls of building one is that a monolithic application can often only talk to itself. Since IVAAP is based on microservices, it is, by design, a system that is easy to integrate with. There is, however, a constant hurdle to such integration: authentication and security.

When authentication is simple, many systems use the concept of a “service account” to consume microservices. But “service accounts” often break when multi-factor authentication is required. When integration with other systems is a concern, we found that microservices need to be able to clearly differentiate human interactions from automated interactions.

This is essentially the purpose of the Circle of Trust. When deploying IVAAP, the software components that require special access can authenticate themselves as such. For example, a Python or Shell script that scoops up the monthly activity report would identify itself as a member of the Circle of Trust to access this report.

As IVAAP matures, many of the solutions proposed today by INT to its customers involve the Circle of Trust. The Circle of Trust is a simple but secure way of integrating third-party software with IVAAP. While it is a recent introduction to IVAAP’s design, it has become a key tool to meet customer needs.

 

Conclusion

The designs above made IVAAP’s journey possible, but IVAAP’s journey is not over. What they all share is a focus on users. Whether these users are geoscientists or programmers, these designs were meant to empower them. Empowering doesn’t just mean catering to today’s needs; it also means anticipating tomorrow’s requirements and making them possible. These five designs have proven strong enough to be the gifts that keep on giving.

Visit us online at int.com/ivaap for a preview of IVAAP or for a demo of INT’s other data visualization products. 

For more information, please visit int.flywheelstaging.com or contact us at intinfo@int.com.


Filed Under: IVAAP Tagged With: Amazon Beanstalk, DAO, Google App Engine, HTTP, ivaap, JVM, OSDU, PPDM, REST, SDK, URL, WITSML

Jun 12 2023

How to Use the Nimbus Library to Authenticate Users with OpenID Connect

OpenID Connect (OIDC) is an authentication protocol that uses a third-party site to provide single sign-on capabilities to web applications. For example, before you can visualize OpenSDU data in IVAAP, your browser is redirected to a login page hosted by Microsoft or Amazon. After completing two-factor authentication, your browser is redirected back to IVAAP. The sequence of steps between IVAAP and external authentication servers follows the OpenID Connect protocol.

OIDC is built on top of OAuth2. It was created in 2014, superseding OpenID 2.0. Over the years, it has become a widely used standard for consumer and enterprise-grade applications. While OIDC is simpler than OpenID 2.0, most developers will still need to use a library to integrate it with their applications.

In IVAAP’s case, since the Admin Backend is written in Java, we use the Nimbus OIDC library, identified by this Maven configuration:

<dependency>
    <groupId>com.nimbusds</groupId>
    <artifactId>oauth2-oidc-sdk</artifactId>
    <version>10.1</version>
</dependency>

While this Nimbus library doesn’t provide a full implementation of the OpenID Connect workflow, it contains pieces that can be reused in your code. Since IVAAP is a platform, it comes with an SDK to plug in your own authentication. Nimbus fits the concept of an SDK where most of the work is already done for developers, and you only need to implement a few hooks. In our case, the hooks mainly define what to do when the /login, /callback, /refresh, and /logout services are called. Let’s dive a little further into how Nimbus helps developers implement these services.

The Login Service

The main purpose of the /login service is to redirect the browser to an external authentication page. For security reasons, the login URL changes slightly each time it’s called. It contains various pieces of information such as the scope, a callback URL, a client ID, a state, and a nonce. The scope, callback URL, and client ID typically don’t change, but the state and nonce are always new.

ClientID clientID = new ClientID(this.clientId);
URI callback = new URI(this.callbackURL);

// The state is generated here and kept (for example, in the session) so that
// the /callback service can verify it later.
State state = new State();
Nonce nonce = new Nonce();
Scope authScope = new Scope();
String[] split = this.scope.split(" ");
for (String currentToken : split) {
    authScope.add(currentToken);
}

AuthenticationRequest request = new AuthenticationRequest.Builder(
            ResponseType.CODE,
            authScope,
            clientID,
            callback)
            .endpointURI(new URI(this.authURL))
            .state(state)
            .nonce(nonce)
            .prompt(new Prompt("login"))
            .build();
return request.toURI();

Since OIDC was released, a more secure variation called PKCE (pronounced “pixy”) has been added. PKCE introduces a code challenge instead of relying on the client secret used in the /callback service. The same code looks like this when PKCE is used:

ClientID clientID = new ClientID(this.clientId);
URI callback = new URI(this.callbackURL);
// The code verifier is kept so that the /callback service can pass it to the token endpoint.
this.codeVerifier = new CodeVerifier();

// As above, the state is generated and kept for later verification in /callback.
State state = new State();
Nonce nonce = new Nonce();
Scope authScope = new Scope();
String[] split = this.scope.split(" ");
for (String currentToken : split) {
    authScope.add(currentToken);
}

AuthenticationRequest request = new AuthenticationRequest.Builder(
            ResponseType.CODE,
            authScope,
            clientID,
            callback)
            .endpointURI(new URI(this.authURL))
            .state(state)
            .codeChallenge(codeVerifier, CodeChallengeMethod.S256)
            .nonce(nonce)
            .prompt(new Prompt("login"))
            .build();
return request.toURI();

The Callback Service

When the /callback service is called after successful authentication, the callback URL contains a state and a code that identifies the authentication that was just performed. The following lines extract this code from the callback URL:

AuthenticationResponse response = AuthenticationResponseParser.parse(
                new URI(callbackUrl));
State state = response.getState();
AuthorizationCode code = response.toSuccessResponse().getAuthorizationCode();

The state should match the state created when the /login service was called.
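A minimal way to perform that check with the Nimbus State type, assuming the state generated by /login was kept (for example, in the user’s session) as expectedState:

// expectedState is the State created by the /login service; retrieving it from the
// session is application-specific and omitted here.
if (expectedState == null || !expectedState.equals(state)) {
    throw new IllegalStateException("State mismatch: the callback does not match the login request");
}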

This “authorization code” can be exchanged for authentication tokens by calling the OIDC token service:

URI tokenEndpoint = new URI(this.tokenURL);
URI callback = new URI(this.callbackURL);
AuthorizationGrant codeGrant = new AuthorizationCodeGrant(code, callback);
ClientID clientID = new ClientID(this.clientId);
Secret secret = new Secret(this.clientSecret);
Scope authScope = new Scope();
String[] split = this.scope.split(" ");
for (String currentToken : split) {
    authScope.add(currentToken);
}
ClientAuthentication clientAuth = new ClientSecretBasic(clientID, secret);
TokenRequest request = new TokenRequest(tokenEndpoint, clientAuth, codeGrant, authScope);
HTTPRequest toHTTPRequest = request.toHTTPRequest();
TokenResponse tokenResponse = OIDCTokenResponseParser.parse(toHTTPRequest.send());
OIDCTokenResponse successResponse = (OIDCTokenResponse) tokenResponse.toSuccessResponse();
JWT idToken = successResponse.getOIDCTokens().getIDToken();
AccessToken accessToken = successResponse.getOIDCTokens().getAccessToken();
RefreshToken refreshToken = successResponse.getOIDCTokens().getRefreshToken();

If PKCE is enabled, this code is simpler. It doesn’t require a client secret to be passed:

URI tokenEndpoint = new URI(this.tokenURL);
URI callback = new URI(this.callbackURL);
AuthorizationGrant codeGrant = new AuthorizationCodeGrant(code, callback, this.codeVerifier);
ClientID clientID = new ClientID(this.clientId);
TokenRequest request = new TokenRequest(tokenEndpoint, clientID, codeGrant);
HTTPRequest toHTTPRequest = request.toHTTPRequest();
TokenResponse tokenResponse = OIDCTokenResponseParser.parse(toHTTPRequest.send());
OIDCTokenResponse successResponse = (OIDCTokenResponse) tokenResponse.toSuccessResponse();
JWT idToken = successResponse.getOIDCTokens().getIDToken();
AccessToken accessToken = successResponse.getOIDCTokens().getAccessToken();
RefreshToken refreshToken = successResponse.getOIDCTokens().getRefreshToken();

The OpenID Connect token service gives us 3 tokens:

  • An access token
  • A JSON Web Token (JWT), also known as an ID token or bearer token
  • A refresh token

The access token is the token that typically grants access to data. It expires, and a new access token can be retrieved by passing the refresh token to the OIDC token service.

The JWT is the token that identifies the user. Unlike the access token, it doesn’t expire. While the JWT may be parsed to get user info, the access token is typically used instead: a user info OIDC service is called with the access token to get the user details.

URI userInfoURI = new URI(this.userInfoURL);
HTTPResponse httpResponse = new UserInfoRequest(userInfoURI, accessToken)
        .toHTTPRequest()
        .send();
UserInfoResponse userInfoResponse = UserInfoResponse.parse(httpResponse);
UserInfo userInfo = userInfoResponse.toSuccessResponse().getUserInfo();
String email = (String) userInfo.getClaim("email");

For OpenSDU, a more complex API involving claim verifiers needs to be used to get user details.

Set<String> claims = new LinkedHashSet<>();
claims.add("unique_name");
ConfigurableJWTProcessor<SecurityContext> jwtProcessor = new DefaultJWTProcessor<>();
jwtProcessor.setJWSTypeVerifier(new DefaultJOSEObjectTypeVerifier<>(new JOSEObjectType("JWT")));
JWKSource<SecurityContext> keySource = new RemoteJWKSet<>(...);
JWSAlgorithm expectedJWSAlg = JWSAlgorithm.RS256;
JWSKeySelector<SecurityContext> keySelector = new JWSVerificationKeySelector<>(expectedJWSAlg, keySource);
jwtProcessor.setJWTClaimsSetVerifier(new DefaultJWTClaimsVerifier<>(new JWTClaimsSet.Builder().issuer(...).build(),
        claims));
jwtProcessor.setJWSKeySelector(keySelector);
JWTClaimsSet claimsSet = jwtProcessor.process(accessToken.getValue(), null);
Map<String, Object> userInfo = claimsSet.toJSONObject();
String email = (String) userInfo.get("unique_name");

The Refresh Service

In IVAAP’s case, the UI’s role is to call the /refresh service before an access token expires. When this refresh service is called with the last issued refresh token, new access and refresh tokens are obtained by calling the OIDC token service again.

RefreshToken receivedRefreshToken = new RefreshToken(…);
AuthorizationGrant refreshTokenGrant = new RefreshTokenGrant(receivedRefreshToken);
URI tokenEndpoint = new URI(this.tokenURL);
ClientID clientID = new ClientID(this.clientId);
Secret secret = new Secret(this.clientSecret);
ClientAuthentication clientAuth = new ClientSecretBasic(clientID, secret);
Scope authScope = new Scope();
String[] split = this.scope.split(" ");
for (String currentToken : split) {
    authScope.add(currentToken);
}
TokenRequest request = new TokenRequest(tokenEndpoint, clientAuth, refreshTokenGrant, authScope);

// The HTTP response is parsed into a TokenResponse before the tokens can be extracted.
HTTPResponse httpResponse = request.toHTTPRequest().send();
TokenResponse tokenResponse = OIDCTokenResponseParser.parse(httpResponse);
AccessTokenResponse successResponse = tokenResponse.toSuccessResponse();
Tokens tokens = successResponse.getTokens();
AccessToken accessToken = tokens.getAccessToken();
RefreshToken refreshToken = tokens.getRefreshToken();

If PKCE is enabled, this code is simpler. It doesn’t require a client secret to be passed:

RefreshToken receivedRefreshToken = new RefreshToken(…);
AuthorizationGrant refreshTokenGrant = new RefreshTokenGrant(receivedRefreshToken);
URI tokenEndpoint = new URI(this.tokenURL);
ClientID clientID = new ClientID(this.clientId);
TokenRequest request = new TokenRequest(tokenEndpoint, clientID, refreshTokenGrant);
HTTPResponse httpResponse = request.toHTTPRequest().send();
TokenResponse tokenResponse = OIDCTokenResponseParser.parse(httpResponse);
AccessTokenResponse successResponse = tokenResponse.toSuccessResponse();
Tokens tokens = successResponse.getTokens();
AccessToken accessToken = tokens.getAccessToken();
RefreshToken refreshToken = tokens.getRefreshToken();

The Logout Service

As OpenID Connect doesn’t provide a standard for logging out, no Nimbus API generates the logout URL. The logout URL has to be built manually, depending on the OpenID provider (Microsoft or Amazon). This logout URL is sometimes listed in the content behind the discovery URL of an OpenID provider.
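As an illustration, and assuming your provider exposes an end_session_endpoint that accepts client_id and post_logout_redirect_uri parameters (a common convention, not something Nimbus or the OIDC core specification mandates), the logout URL can be assembled by hand using java.net.URLEncoder. The this.logoutURL and this.postLogoutRedirectURL fields are hypothetical names used for this sketch:

// Hedged sketch: build the logout URL manually from provider-specific values.
String logoutUrl = this.logoutURL                    // e.g. the provider's end_session_endpoint
        + "?client_id=" + URLEncoder.encode(this.clientId, StandardCharsets.UTF_8)
        + "&post_logout_redirect_uri=" + URLEncoder.encode(this.postLogoutRedirectURL, StandardCharsets.UTF_8);
return new URI(logoutUrl);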

Going Beyond

For simplicity’s sake, I didn’t include error handling in the code samples. While the OpenID Connect protocol is well defined, there are a few variations between cloud providers. For example, the fields stored in tokens may vary. This article also doesn’t describe what happens after the /callback service is called: once tokens are issued, how are they passed to the viewer? These details may be handled differently by each application. When I was tasked with integrating OpenID Connect, I found the Nimbus website clear and simple to use, showing sample code front and center. I highly recommend this library.

Visit us online at int.com/ivaap for a preview of IVAAP or for a demo of INT’s other data visualization products. 

For more information, please visit int.flywheelstaging.com or contact us at intinfo@int.com.

____________

ABOUT INT

INT software empowers energy companies to visualize their complex data (geoscience, well, surface reservoir, equipment in 2D/3D). INT offers a visualization platform (IVAAP) and libraries (GeoToolkit) that developers can use with their data ecosystem to deliver subsurface solutions (Exploration, Drilling, Production). INT’s powerful HTML5/JavaScript technology can be used for data aggregation, API services, and high-performance visualization of G&G and energy data in a browser. INT simplifies complex subsurface data visualization.

INT, the INT logo, and IVAAP are trademarks of Interactive Network Technologies, Inc., in the United States and/or other countries.


Filed Under: IVAAP Tagged With: ivaap, java, oauth2, OIDC, OpenID, PKCE, URL

May 01 2023

How the Admin Backend Provides Flexibility to IVAAP Customers

The P of IVAAP stands for Platform. As a platform, IVAAP is designed to be modified by INT’s own customers to meet their specific (and sometimes proprietary) needs. Over the years, most of the focus of my blog articles has been on the Data Backend. The Data Backend is the core component of IVAAP that accesses data and makes it available in a standardized form to the IVAAP viewer. The Data Backend is highly specialized for geoscience data. There is, however, another backend, the Admin Backend, that is more generic in nature and typically doesn’t get as much attention. The goal of this article is to shed some light on how this “other backend” can be customized or consumed to meet customer needs.

Roles of the Admin Backend

The Admin Backend has multiple roles. Some see the Admin Backend as a component managing IVAAP projects. An IVAAP project is essentially an arbitrary grouping of datasets, where each dataset is accessible through the Data Backend. Indeed, the Admin Backend manages projects and all their members. Each member is identified by a URL and often carries metadata, such as its name or location. The actual storage of this information is a PostgreSQL database. The IVAAP Admin Backend provides a simple REST API to the UI to manage project data.

Describing the Admin Backend as a store for projects doesn’t do it justice. It also manages many types of IVAAP entities such as connectors, cloud services, users, groups, dashboards, templates, formulas, and queries. Other data types are geoscience-related, such as curve dictionaries. The Admin Backend is also in charge of providing an audit trail when data is added, updated, or removed. All these features are implemented on top of a documented Java API so that INT’s customers can plug in their own implementations. Developers are not limited to the REST services that the Admin Backend provides; they can add their own. While the Admin SDK has multiple customization points, the use cases below are the most common.

Customization Use Cases

The first use case of the Admin SDK is the customization of authentication. By default, IVAAP supports two types of authentication: a simple OAuth2-based authentication and OpenID Connect. IVAAP customers often have their own authentication system, and IVAAP needs to use that system. To make this possible, the Admin SDK provides a way to customize how users are authenticated, and how sessions are managed and tracked.

The next use case is the customization of key services, typically collection services. For example, the Admin Backend has a service that lists all active users. This service is used by the UI when a template is shared with others. IVAAP customers can plug in their own service to list potential users, for example from an LDAP server instead of IVAAP’s own PostgreSQL database.

The third use case is the customization of entitlements. Each time a dataset is opened, the Admin Backend is queried to check whether the currently logged-in user has access to this dataset. The default implementation relies on group memberships, but customers can plug in their own rules. These rules can be fine-grained, for example making determinations at the well log level instead of the well level.
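As a purely hypothetical sketch of what such a rule can look like conceptually (these names are not the actual Admin SDK API), an entitlement check is a pluggable predicate keyed on the user and the dataset being opened:

// Hypothetical sketch of a pluggable entitlement rule; names do not match the actual Admin SDK.
interface EntitlementRule {
    boolean canOpen(String userId, String datasetUrl);
}

class WellLogLevelEntitlementRule implements EntitlementRule {
    public boolean canOpen(String userId, String datasetUrl) {
        // Fine-grained decision: deny individual well logs rather than entire wells.
        return !datasetUrl.contains("/welllogs/restricted/");   // placeholder rule for illustration
    }
}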

External Integration Use Cases

The integration of the Admin Backend with other systems goes beyond authentication. The Admin REST services are designed to be called either by a human or by a computer. Unlike humans, computers can’t easily log in to a system that requires two-factor authentication. This is why the Admin Backend provides a “Circle of Trust” REST API that allows computers to access its data without a login or password, through a secure exchange of keys instead. This feature opens new integration use cases.

The first use case for the “Circle of Trust” is the automated retrieval of user activities. Some INT customers require monitoring of user sessions to assess how much IVAAP is used. The REST API for listing user activities is straightforward to use and can be leveraged by a tool outside of IVAAP.

Another use case for the “Circle of Trust” is the automated registration of external workflows. INT customers might have hundreds of machine learning (ML) workflows that are hosted on various systems. With “Circle of Trust” credentials, the endpoints for these workflows can be registered automatically so that they appear as options to IVAAP users.

A Single SDK for Two Backends

The Admin Backend has about 200 REST services, and these services were developed with the same SDK as the Data Backend. The same INT developers who maintain the Data Backend also maintain the Admin Backend, with no additional training required. It’s not just INT that benefits; our customers benefit, too. Together, the Data Backend and the Admin Backend provide a unified experience for all Java developers customizing IVAAP servers.

Visit us online at int.com/ivaap for a preview of IVAAP or for a demo of INT’s other data visualization products. 

For more information, please visit int.flywheelstaging.com or contact us at intinfo@int.com.


Filed Under: IVAAP Tagged With: API, backend, developer, ivaap, java, OpenID, SDK, URL

Sep 14 2022

How IVAAP Maximizes Use of HATEOAS Links

Ever since the concept of web services first gained popularity, developers attempting to use these web services have faced two challenges: finding the right service to use, and writing the code to call that service. The goal of this article is to describe how IVAAP uses HATEOAS hypermedia links to address both problems. We’ll also try to highlight other use cases benefiting from the concept of hypermedia applied to microservices.

A Brief Description of HATEOAS

Most microservices developed today (including IVAAP’s) use a REST API. REST stands for “REpresentational State Transfer.” It’s a term coined by Roy Fielding, who is also the inventor of “Hypermedia as the Engine of Application State,” or HATEOAS. HATEOAS describes REST services that use hypermedia links to indicate how they relate to other services.

For example, in IVAAP, the metadata of a seismic dataset is typically accessible through a URL such as this one:

https://…/ivaap/api/ds/geofiles/v1/sources/a8b05811-6409-43bb-8902-c9142ab48cff/seismic/cG9zZWlkb24gdmRzIG4gc2VneS9Qb3NlZGlvbiBkZXB0aC9wc2RuMTFfVGJzZG1GX2Z1bGxfd19BR0NfTm92MTFfdmVsNV9kZXB0aF8zMmJpdHMueGd5

Unlike the JSON content returned by a classic REST service, the JSON content returned by this IVAAP service contains more than just the requested metadata. It also contains a “links” JSON node that leads to additional information about this seismic dataset.

A sample JSON output for the IVAAP “seismic metadata” service, as shown in Google Chrome Developer Tools.


In the example above, there are multiple HATEOAS links. One of them is the “Geometry” link. The purpose of the seismic geometry service is to expose the shape of seismic surveys. The URL of this service is:

https://…/ivaap/api/ds/geofiles/v1/sources/a8b05811-6409-43bb-8902-c9142ab48cff/seismic/cG9zZWlkb24gdmRzIG4gc2VneS9Qb3NlZGlvbiBkZXB0aC9wc2RuMTFfVGJzZG1GX2Z1bGxfd19BR0NfTm92MTFfdmVsNV9kZXB0aF8zMmJpdHMueGd5/geometry

This Geometry service is meant to be used by applications showing seismic datasets on a map. An application leveraging HATEOAS links would typically examine the “links” returned by the “seismic metadata” service to retrieve the URL of the associated “geometry” service. An application not leveraging HATEOAS links would hardcode the logic that /geometry needs to be added to the URL of the “seismic metadata” service to get the same result.
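Here is a hedged sketch of the first approach, using the JDK’s HttpClient and the org.json library. The exact shape of each entry in the “links” node (a “title” and an “href” field) is an assumption made for illustration, and seismicMetadataUrl stands for the metadata URL shown above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONArray;
import org.json.JSONObject;

// Fetch the seismic metadata, then follow the advertised "Geometry" link instead of building its URL.
HttpClient client = HttpClient.newHttpClient();
HttpResponse<String> metadata = client.send(
        HttpRequest.newBuilder(URI.create(seismicMetadataUrl)).build(),
        HttpResponse.BodyHandlers.ofString());

JSONArray links = new JSONObject(metadata.body()).getJSONArray("links");
String geometryUrl = null;
for (int i = 0; i < links.length(); i++) {
    JSONObject link = links.getJSONObject(i);
    if ("Geometry".equalsIgnoreCase(link.optString("title"))) {
        geometryUrl = link.getString("href");   // the URL advertised by the server
    }
}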

Both approaches are valid, but the HATEOAS approach brings multiple benefits that we are going to detail.

The “Broken Link” Issue Applied to Data

If you surfed in the ’90s, you are certainly familiar with the concept of “broken links.” Back in those days, websites were made of pages maintained by hand, and text on these web pages was often peppered with underlined words (often colored in blue) leading to another page. If the target page was moved, the link would stop working and the web surfer would be greeted by an unhelpful 404 error instead.

The lesson from these early days is that web page URLs are anything but permanent. The initial idea behind HATEOAS links is that this lesson can be applied to web services, too. If an application uses a hard-coded service URL to read data, this application will immediately stop working if that web service is moved. If each microservice describes the URL of related microservices, then the application can just follow URLs instead of using hard-coded versions. The maintenance of URLs becomes a server concern, and no longer a service consumer concern.

The main issue with this concept is that its purported benefits are unproven in the real world. To prove the benefit of this “forward compatibility” approach, you’d have to observe the life-cycle of many microservices (and consumers of these microservices) over a long period of time to determine whether the use of HATEOAS links was worth it. Taking IVAAP as an example, even though the IVAAP microservices use HATEOAS links, changing the URL of an IVAAP REST API is a rare event. One of the reasons is that there is no way to enforce the use of these links on the service consumer side. HATEOAS doesn’t provide a guarantee that no consumer hard-coded any URL. It is even sometimes faster to use hard-coded URLs, for example, to restore dashboards.

The second issue is that “broken links” are a narrow backward-compatibility concern, focused on URL changes only. While services may move, their API might also change, and HATEOAS doesn’t provide a way to address the backward compatibility of service APIs.

Discoverability

While HATEOAS links may help address issues associated with changing URLs, HATEOAS links really shine when it comes to discoverability. A component like the IVAAP Data Backend has hundreds of microservices. Even if each one of these services is documented, just finding out whether a service exists is a complex task. HATEOAS links clearly indicate all URLs related to the data being accessed, in a consistent manner.

IVAAP is a platform. It was designed so that INT customers can modify the user interface using the IVAAP Client SDK, and we strive to make it as easy as possible. HATEOAS links give contextual documentation of the services that are available for any server-side object that the UI is accessing. As a result, modifying the IVAAP UI doesn’t require client-side developers to discover the server-side REST API before getting started. Developers can be immediately productive.

Testing

Developing a web-based application like IVAAP has a bit of a chicken-and-egg problem. You need to start by developing the data services first, but you don’t really know how well they work until the UI consuming these services is complete.

To get ahead of the game, there are methods to unit test data services, but they are time-consuming to follow. Just building the right URL to test by hand takes time, especially with long URLs. And because data quality varies, bugs might be data-specific, so you need to test against a variety of datasets to make sure your data services are rock solid.

 

Following HATEOAS links with Postman.


A widely used tool to inspect and test web services is Postman. Postman “understands” HATEOAS links, so testing your work simply consists of following links within Postman, just as you would on an HTML-based website.

The most common use case of the IVAAP Data Backend SDK is when INT customers write a connector that accesses a proprietary data source. The testing steps of such a connector are typically very fast because they don’t require the UI to be ready. Most bugs can be found immediately, just using Postman.

Going Beyond: Automatic UI Generation

Discoverability and testability are well-known benefits of including HATEOAS links in a REST API. IVAAP also uses HATEOAS links to generate part of its user interface. For example, the tree that is shown to users when they open a well is server-driven, not client-driven. The IVAAP UI parses the HATEOAS links and builds a tree of nodes based on them.

Not all wells have the same level of detail. Some wells might only have a location, others might have a trajectory. The presence of relevant HATEOAS links is what tells the UI which data is available for that well. The IVAAP UI doesn’t need to understand what a trajectory is to include a trajectory node under a well node. The tree is generated automatically from HATEOAS links.

The nodes under the “AKM-11” well (left), as listed by the HATEOAS link for that well (right).


Not all HATEOAS links associated with an object are meant to be shown as nodes in the UI. By convention, only the HATEOAS links with the attribute “children” set to true should be shown. Customers who want to customize the nodes shown in the UI don’t need to write client-side code. They have complete control by just plugging their own code into the Data Backend.

The same technique is used to build the UI, allowing users to add datasets to their projects. The Data Backend advertises through HATEOAS links how data from a connector can be browsed, and the UI parses these HATEOAS links to build a matching user interface.

User interface generated when listing wells in a “mongo” connector.


Each data source has different capabilities, and this is reflected in the user interface. Some data sources might support search by name, paging, or sorting. For example, when search by name is supported on the server side, the IVAAP UI may propose a search box. IVAAP advertises querying capabilities to the user interface by including a “supportedQueries” attribute alongside its HATEOAS links.

A sample JSON output for the IVAAP “connector” service, as shown in Google Chrome Developer Tools.

Likewise, it is sometimes convenient to be able to edit the name of a dataset from the same user interface. Not all data sources support name editing, and it’s only when editing is supported by a connector that a relevant HATEOAS link should be included in the server responses.

A sample JSON output for the IVAAP “connector” service, as shown in Google Chrome Developer Tools.

In the response above, not only does the Data Backend advertise that data names can be edited, it also indicates that it supports data deletion.

This concept of automated UI generation using HATEOAS links is not a standard use. It requires the addition of attributes that are typically not seen in web services using HATEOAS links. These attributes are powerful tools, as they reduce the amount of work on the client side. The IVAAP REST API is actually designed to support more complex scenarios than the two use cases already mentioned.

A majority of the IVAAP REST services either return a collection of metadata or return the metadata of a single dataset. The JSON format of these two types of services is standardized across the IVAAP Data Backend. Because the services provide consistent JSON outputs, it is easy to write a generic UI that will browse through the entire tree of metadata and even allow editing. In other words, you can write a basic IVAAP client from scratch without much effort on the UI side.

A completely automated UI would not be limited by the search and editing capabilities advertised in HATEOAS links. Each IVAAP REST service comes with its own documentation. This documentation is accessible in the OpenAPI 3.0 format using a standard mechanism.

In this mechanism, if the URL of the service listing wells in a MongoDB database is: https://…/ivaap/api/ds/mongo/v1/sources/a8b05811-6409-43bb-8902-c9142ab48cff/wells/, the URL of its matching OpenAPI documentation would be: https://…/ivaap/api/ds/mongo/v1/sources/a8b05811-6409-43bb-8902-c9142ab48cff/wells/openapispecs

A completely automated UI could expose the search parameters described in this documentation, a bit like SwaggerEditor does. This wouldn’t be limited to search; the same principle can be applied to updating and deleting data.

 

A form generated automatically by SwaggerEditor from the OpenAPI specification of the wells service.

 

Going Beyond: Batch Support

Another feature enabled by HATEOAS links is the ability to fetch multiple aspects of a dataset in one HTTP call.

Microservices work best when they do only one thing at a time, but this means the IVAAP client needs to make multiple calls to the Data Backend to restore a dashboard. Currently, Google Chrome only allows up to 6 concurrent HTTP connections per host, sometimes forcing the client to “wait” for the availability of connections. This has a direct impact on the user experience.

To help with this, the IVAAP Data Backend provides a so-called “Batch” REST API to retrieve the content behind multiple URLs in one go. Other servers also have this feature, but what’s different about IVAAP’s Batch API is that it allows developers to leverage HATEOAS links.

For example, if you are building a data map and need to retrieve the metadata of a seismic dataset along with its outline, you would specify to the Batch REST API that you need to retrieve the content behind  https://…/ivaap/api/ds/geofiles/v1/sources/a8b05811-6409-43bb-8902-c9142ab48cff/seismic/cG9zZWlkb24gdmRzIG4gc2VneS9Qb3NlZGlvbiBkZXB0aC9wc2RuMTFfVGJzZG1GX2Z1bGxfd19BR0NfTm92MTFfdmVsNV9kZXB0aF8zMmJpdHMueGd5 as well as the content behind the associated “Geometry” HATEOAS link. This method of fetching multiple aspects of a dataset at once is much more expressive than passing multiple URLs to the Batch REST API.

Something that should be noted when it comes to performance is that we made the HATEOAS links an optional feature of IVAAP. Consumers of the IVAAP Data Backend API who don’t use these links can opt to reduce the size of the JSON payload between the client and the server. The default behavior of the Data Backend is to include HATEOAS links, but the collection services can be called in a way that excludes these links completely or only includes specific, named links.

Conclusion

HATEOAS links have been a part of the IVAAP Data Backend since day one. Over time, we found that they pack much more functionality than we initially thought. All these features have a common goal: facilitating the work of the UI developers and accelerating the delivery of software. While I used examples from IVAAP, the ideas in this article can easily be applied to your own data backend.

For more information or for a free demo of IVAAP, visit int.com/products/ivaap/.


Filed Under: IVAAP Tagged With: backend, HATEOAS, ivaap, metadata, microservices, URL, web services
