
Apr 18 2024

Celebrating a Decade of Innovation: GeoToolkit.JS Marks 10 Years of Empowering Subsurface Data Visualization

In the realm of subsurface data visualization, the 10th anniversary of GeoToolkit.JS marks a significant milestone. What began as a comprehensive set of high-performance tools and libraries has evolved into an indispensable resource for developers seeking to craft advanced, domain-specific software efficiently.

GeoToolkit™ encompasses a diverse feature set that caters to the complex needs of developers working with 2D/3D seismic, well log, schematics, contour, map, and chart data, among others. Its flexibility allows it to be embedded within existing energy applications or used to build new ones from the ground up, enabling fast, precise development. For end users in the oil and gas industry, whether geoscientists, engineers, or decision-makers, GeoToolkit.JS visualization libraries deliver the accurate, comprehensive insights they rely on to interpret complex subsurface data.

This anniversary is not just a celebration of a decade but a commemoration of more than 30 years of GeoToolkit’s presence in the industry, with versions spanning .NET, Java, JavaScript / TypeScript, and C++. Over this period, GeoToolkit has continuously evolved, refining its capabilities and pushing the boundaries of subsurface data visualization.

The journey of GeoToolkit has been paved with remarkable achievements. Its key benefits include support, in some versions of GeoToolkit, for subsurface data formats such as LAS, DLIS, WITSML, SEG-Y, and SEG-D. Moreover, it provides robust technical support, extensive online documentation, and a thriving Developer Community, fostering collaboration and knowledge exchange.

GeoToolkit’s high-performance tools and libraries stand out for their plug-and-play nature. With just a single line of code, developers can deploy these tools or use them as the foundation for crafting custom applications, significantly expediting time-to-market.

The extensive capabilities of GeoToolkit cover a broad spectrum of functionalities, from 3D seismic and reservoir data visualization using WebGL technology to 2D scatter plots for comparison, well log displays that can correlate thousands of wells, and the creation of PDF reports with multiple widgets for streamlined log header construction.

Furthermore, its extensive volume rendering capability facilitates high-quality visualizations of massive datasets, enabling seamless visualization of billions of cells. GeoToolkit also boasts robust support for diverse map layers, such as Google, ESRI, Bing Maps, ArcGIS GeoServices, OpenStreetMap, and more, making it an invaluable asset for comprehensive geospatial applications.

Here’s a glimpse of the evolution of GeoToolkit.JS showcasing its transformative capabilities:

Report Builder

Build PDF reports with multiple widgets, including custom log headers, with template saving and printing for efficient creation and sharing.


Well Log Correlation

Correlate thousands of wells effortlessly with well log displays, streamlining your analysis.


3D Reservoir Seismic Intersection

Intersect your 3D reservoir grid with seismic data and add well logs, horizons, and more.


Schematics Seismic

Combining wellbore architecture with seismic profiles, this tool lets you identify zones of interest relative to each hole section and compare cemented sections against the formations crossed.


GeoToolkit Demo Gallery

Dive deeper and experience the transformative capabilities of GeoToolkit with our Interactive GeoToolkit Demos.



“Without INT’s GeoToolkit, we wouldn’t be where we are right now with C-Fields. We have high expectations that this tool will become the standard in Field Development planning and that we will be able to accomplish a lot with this tool. There’s nothing in the market like C-Fields right now.”

—Francisco Caycedo, Regional Director Latin America, CAYROS


Amid this celebration, we want to express our deepest gratitude. To every individual involved in the development of GeoToolkit: your dedication, expertise, and unwavering commitment have been the cornerstone of our success. Your collective efforts have driven innovation, shaped the product, and paved the way for a decade of groundbreaking achievements.

To our clients, we extend our heartfelt thanks. Your trust, partnership, and continuous support have been the driving force behind GeoToolkit’s evolution. 

The 10th anniversary of GeoToolkit.JS showcases innovation and an unwavering commitment to empowering developers in the intricate landscape of subsurface data visualization. Here’s to a decade of achievements and many more years of pioneering advancements in the field.

To learn more about GeoToolkit.JS, please visit int.com/products/geotoolkit/ or contact us at intinfo@int.com.

Explore our Interactive GeoToolkit Demos
Request a 30-day trial


Jun 12 2023

How to Use the Nimbus Library to Authenticate Users with OpenID Connect

OpenID Connect (OIDC) is an authentication protocol that uses a third-party site to provide single-sign-on capabilities to web applications. For example, before you can visualize OSDU data in IVAAP, your browser is redirected to a login page hosted by Microsoft or Amazon. After two-factor authentication completes, your browser is redirected back to IVAAP. The sequence of steps between IVAAP and external authentication servers follows the OpenID Connect protocol.

OIDC is built on top of OAuth 2.0. It was created in 2014, superseding OpenID 2.0. Over the years, it has become a widely used standard for consumer and enterprise-grade applications. While OIDC is simpler than OpenID 2.0, most developers will still need a library to integrate it with their applications.

In IVAAP’s case, since the Admin Backend is written in Java, we use the Nimbus OAuth 2.0 SDK with OpenID Connect extensions, identified by this Maven configuration:

<dependency>
    <groupId>com.nimbusds</groupId>
    <artifactId>oauth2-oidc-sdk</artifactId>
    <version>10.1</version>
</dependency>

While this Nimbus library doesn’t provide a full implementation of the OpenID Connect workflow, it contains pieces that can be reused in your code. IVAAP being a platform, it comes with an SDK to plug your own authentication. Nimbus fits the concept of an SDK where most of the work is already done for developers, and you only need to implement a few hooks. In our case, the hooks identify mainly what to do when the /login, /callback, /refresh, and /logout services are called. Let’s dive a little further into how Nimbus helps developers implement these services.

The Login Service

The main purpose of the /login service is to redirect the browser to an external authentication page. The login URL changes slightly each time it’s called for security reasons. It contains various information such as the scope, a callback URL, a client ID, a state, and a nonce. The scope, callback URL, and client ID typically don’t change, but the state and nonce are always new.

ClientID clientID = new ClientID(this.clientId);
URI callback = new URI(this.callbackURL);

State state = new State(); // a new random state for each login attempt
Nonce nonce = new Nonce();
Scope authScope = new Scope();
String[] split = this.scope.split(" ");
for (String currentToken : split) {
    authScope.add(currentToken);
}

AuthenticationRequest request = new AuthenticationRequest.Builder(
        ResponseType.CODE,
        authScope,
        clientID,
        callback)
        .endpointURI(new URI(this.authURL))
        .state(state)
        .nonce(nonce)
        .prompt(new Prompt("login"))
        .build();
return request.toURI();

Since OIDC was released, a more secure variation called PKCE (pronounced pixy) has been added. PKCE introduces a code challenge instead of relying on a client secret used in the /callback service. The same code looks like this when PKCE is used:

ClientID clientID = new ClientID(this.clientId);
URI callback = new URI(this.callbackURL);
this.codeVerifier = new CodeVerifier();

State state = new State(); // a new random state for each login attempt
Nonce nonce = new Nonce();
Scope authScope = new Scope();
String[] split = this.scope.split(" ");
for (String currentToken : split) {
    authScope.add(currentToken);
}

AuthenticationRequest request = new AuthenticationRequest.Builder(
        ResponseType.CODE,
        authScope,
        clientID,
        callback)
        .endpointURI(new URI(this.authURL))
        .state(state)
        .codeChallenge(codeVerifier, CodeChallengeMethod.S256)
        .nonce(nonce)
        .prompt(new Prompt("login"))
        .build();
return request.toURI();

The Callback Service

When the /callback service is called after successful authentication, the callback URL contains a state and a code that identifies the authentication that was just performed. The following lines extract this code from the callback URL:

AuthenticationResponse response = AuthenticationResponseParser.parse(
                new URI(callbackUrl));
State state = response.getState();
AuthorizationCode code = response.toSuccessResponse().getAuthorizationCode();

The state should match the state created when the /login service was called.
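This state check is a guard against cross-site request forgery. As a minimal sketch in plain Java (not part of the Nimbus API), the comparison can be done in constant time; the `storedState` value below is a stand-in for whatever session storage your application uses:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class StateCheck {
    /** Compares the state echoed in the callback URL against the one
     *  generated by /login, using a constant-time comparison. */
    static boolean isStateValid(String storedState, String callbackState) {
        if (storedState == null || callbackState == null) {
            return false;
        }
        return MessageDigest.isEqual(
                storedState.getBytes(StandardCharsets.UTF_8),
                callbackState.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(isStateValid("abc123", "abc123")); // true
        System.out.println(isStateValid("abc123", "forged")); // false
    }
}
```

If the states don't match, the callback should be rejected before any token exchange takes place.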

This “authorization code” can be exchanged for authentication tokens by calling the OIDC token service.

URI tokenEndpoint = new URI(this.tokenURL);
URI callback = new URI(this.callbackURL);
AuthorizationGrant codeGrant = new AuthorizationCodeGrant(code, callback);
ClientID clientID = new ClientID(this.clientId);
Secret secret = new Secret(this.clientSecret);
Scope authScope = new Scope();
String[] split = this.scope.split(" ");
for (String currentToken : split) {
    authScope.add(currentToken);
}
ClientAuthentication clientAuth = new ClientSecretBasic(clientID, secret);
TokenRequest request = new TokenRequest(tokenEndpoint, clientAuth, codeGrant, authScope);
HTTPRequest httpRequest = request.toHTTPRequest();
TokenResponse tokenResponse = OIDCTokenResponseParser.parse(httpRequest.send());
OIDCTokenResponse successResponse = (OIDCTokenResponse) tokenResponse.toSuccessResponse();
JWT idToken = successResponse.getOIDCTokens().getIDToken();
AccessToken accessToken = successResponse.getOIDCTokens().getAccessToken();
RefreshToken refreshToken = successResponse.getOIDCTokens().getRefreshToken();

If PKCE is enabled, this code is simpler. It doesn’t require a client secret to be passed:

URI tokenEndpoint = new URI(this.tokenURL);
URI callback = new URI(this.callbackURL);
AuthorizationGrant codeGrant = new AuthorizationCodeGrant(code, callback, this.codeVerifier);
ClientID clientID = new ClientID(this.clientId);
TokenRequest request = new TokenRequest(tokenEndpoint, clientID, codeGrant);
HTTPRequest httpRequest = request.toHTTPRequest();
TokenResponse tokenResponse = OIDCTokenResponseParser.parse(httpRequest.send());
OIDCTokenResponse successResponse = (OIDCTokenResponse) tokenResponse.toSuccessResponse();
JWT idToken = successResponse.getOIDCTokens().getIDToken();
AccessToken accessToken = successResponse.getOIDCTokens().getAccessToken();
RefreshToken refreshToken = successResponse.getOIDCTokens().getRefreshToken();

The OpenID Connect token service gives us three tokens:

  • An access token
  • An ID token, encoded as a JSON Web Token (JWT)
  • A refresh token

The access token is the token that typically grants access to data. It expires, and a new access token can be retrieved by passing the refresh token to the OIDC token service.

The ID token is the token that identifies the user. While an ID token may be parsed to get user info, the access token is typically used instead: a user info OIDC service is called with the access token to get the user details.
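To make the structure of an ID token concrete, here is a small sketch in plain Java (no Nimbus involved) that decodes the claims segment of a JWT. The token in the example is fabricated for illustration, and note that this decoding skips signature verification entirely, which is exactly why production code uses a library:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {
    /** Returns the decoded JSON payload (claims) of a JWT.
     *  A JWT has three base64url segments: header.payload.signature.
     *  This does NOT verify the signature -- illustration only. */
    static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        byte[] json = Base64.getUrlDecoder().decode(parts[1]);
        return new String(json, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A fabricated token: header.payload.signature
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"email\":\"jane@example.com\"}"
                        .getBytes(StandardCharsets.UTF_8));
        String jwt = "eyJhbGciOiJub25lIn0." + payload + ".sig";
        System.out.println(decodePayload(jwt)); // {"email":"jane@example.com"}
    }
}
```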

URI userInfoURI = new URI(this.userInfoURL);
HTTPResponse httpResponse = new UserInfoRequest(userInfoURI, accessToken)
        .toHTTPRequest()
        .send();
UserInfoResponse userInfoResponse = UserInfoResponse.parse(httpResponse);
UserInfo userInfo = userInfoResponse.toSuccessResponse().getUserInfo();
String email = (String) userInfo.getClaim("email");

For OSDU, a more complex API involving claim verifiers needs to be used to get user details.

Set<String> claims = new LinkedHashSet<>();
claims.add("unique_name");
ConfigurableJWTProcessor<SecurityContext> jwtProcessor = new DefaultJWTProcessor<>();
jwtProcessor.setJWSTypeVerifier(new DefaultJOSEObjectTypeVerifier<>(new JOSEObjectType("JWT")));
JWKSource<SecurityContext> keySource = new RemoteJWKSet<>(...);
JWSAlgorithm expectedJWSAlg = JWSAlgorithm.RS256;
JWSKeySelector<SecurityContext> keySelector = new JWSVerificationKeySelector<>(expectedJWSAlg, keySource);
jwtProcessor.setJWTClaimsSetVerifier(new DefaultJWTClaimsVerifier<>(
        new JWTClaimsSet.Builder().issuer(...).build(), claims));
jwtProcessor.setJWSKeySelector(keySelector);
JWTClaimsSet claimsSet = jwtProcessor.process(accessToken.getValue(), null);
Map<String, Object> userInfo = claimsSet.toJSONObject();
String email = (String) userInfo.get("unique_name");

The Refresh Service

In IVAAP’s case, the UI’s role is to call the /refresh service before an access token expires. When this refresh service is called with the last issued refresh token, new access and refresh tokens are obtained by calling the OIDC token service again.

RefreshToken receivedRefreshToken = new RefreshToken(…);
AuthorizationGrant refreshTokenGrant = new RefreshTokenGrant(receivedRefreshToken);
URI tokenEndpoint = new URI(this.tokenURL);
ClientID clientID = new ClientID(this.clientId);
Secret secret = new Secret(this.clientSecret);
ClientAuthentication clientAuth = new ClientSecretBasic(clientID, secret);
Scope authScope = new Scope();
String[] split = this.scope.split(" ");
for (String currentToken : split) {
    authScope.add(currentToken);
}
TokenRequest request = new TokenRequest(tokenEndpoint, clientAuth, refreshTokenGrant, authScope);

HTTPResponse httpResponse = request.toHTTPRequest().send();
TokenResponse response = TokenResponse.parse(httpResponse); // parse the raw HTTP response
AccessTokenResponse successResponse = response.toSuccessResponse();
Tokens tokens = successResponse.getTokens();
AccessToken accessToken = tokens.getAccessToken();
RefreshToken refreshToken = tokens.getRefreshToken();

If PKCE is enabled, this code is simpler. It doesn’t require a client secret to be passed:

RefreshToken receivedRefreshToken = new RefreshToken(…);
AuthorizationGrant refreshTokenGrant = new RefreshTokenGrant(receivedRefreshToken);
URI tokenEndpoint = new URI(this.tokenURL);
ClientID clientID = new ClientID(this.clientId);
TokenRequest request = new TokenRequest(tokenEndpoint, clientID, refreshTokenGrant);
HTTPResponse httpResponse = request.toHTTPRequest().send();
TokenResponse response = TokenResponse.parse(httpResponse); // parse the raw HTTP response
AccessTokenResponse successResponse = response.toSuccessResponse();
Tokens tokens = successResponse.getTokens();
AccessToken accessToken = tokens.getAccessToken();
RefreshToken refreshToken = tokens.getRefreshToken();

The Logout Service

As OpenID Connect doesn’t provide a standard for logging out, no Nimbus API generates the logout URL. The logout URL has to be built manually depending on the OpenID provider (Microsoft or Amazon). This logout URL is sometimes provided by the content behind the discovery URL of an OpenID provider.
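As an illustration of building such a URL by hand, the sketch below follows the shape of Microsoft's v2.0 logout endpoint with a `post_logout_redirect_uri` parameter. The base URL and redirect are assumptions for the example; in practice, the endpoint should be read from the `end_session_endpoint` field of the provider's discovery document:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class LogoutUrl {
    /** Builds a provider-specific logout URL by hand. The baseUrl would
     *  typically come from the provider's discovery document. */
    static String buildLogoutUrl(String baseUrl, String postLogoutRedirect) {
        return baseUrl + "?post_logout_redirect_uri="
                + URLEncoder.encode(postLogoutRedirect, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Hypothetical values for illustration
        String url = buildLogoutUrl(
                "https://login.microsoftonline.com/common/oauth2/v2.0/logout",
                "https://example.com/ivaap/");
        System.out.println(url);
    }
}
```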

Going Beyond

For simplicity’s sake, the code samples above omit error handling. While the OpenID Connect protocol is well-defined, there are a few variations between cloud providers; for example, the fields stored in tokens may vary. This article also doesn’t describe what happens after the /callback service is called: once tokens are issued, how are they passed to the viewer? These details may be implemented differently by each application. When I was tasked with integrating OpenID Connect, I found the Nimbus website clear and simple to use, showing sample code front and center. I highly recommend this library.

Visit us online at int.com/ivaap for a preview of IVAAP or for a demo of INT’s other data visualization products. 

For more information, please visit int.flywheelstaging.com or contact us at intinfo@int.com.

____________

ABOUT INT

INT software empowers energy companies to visualize their complex data (geoscience, well, surface reservoir, equipment in 2D/3D). INT offers a visualization platform (IVAAP) and libraries (GeoToolkit) that developers can use with their data ecosystem to deliver subsurface solutions (Exploration, Drilling, Production). INT’s powerful HTML5/JavaScript technology can be used for data aggregation, API services, and high-performance visualization of G&G and energy data in a browser. INT simplifies complex subsurface data visualization.

INT, the INT logo, and IVAAP are trademarks of Interactive Network Technologies, Inc., in the United States and/or other countries.



May 01 2023

How the Admin Backend Provides Flexibility to IVAAP Customers

The P of IVAAP stands for Platform. As a platform, IVAAP is designed to be modified by INT’s own customers to meet their specific (and sometimes proprietary) needs. Over the years, most of the focus of my blog articles has been on the Data Backend. The Data Backend is the core component of IVAAP that accesses data and makes it available in a standardized form to the IVAAP viewer. The Data Backend is highly specialized for geoscience data. There is, however, another backend, the Admin Backend, which is more generic in nature and typically doesn’t get as much attention. The goal of this article is to shed some light on how this “other backend” can be customized or consumed to meet customer needs.

Roles of the Admin Backend

The Admin Backend has multiple roles. Some see the Admin Backend as a component managing IVAAP projects. An IVAAP project is essentially an arbitrary grouping of datasets, where each dataset is accessible through the Data Backend. Indeed, the Admin Backend manages projects and all their members. Each member is identified by a URL and often carries metadata, such as its name or location. The actual storage of this information is a PostgreSQL database. The IVAAP Admin Backend provides a simple REST API to the UI to manage project data.

Describing the Admin Backend as a store for projects doesn’t do it justice. It also manages many types of IVAAP entities such as connectors, cloud services, users, groups, dashboards, templates, formulas, and queries. Other data types are geoscience-related, such as curve dictionaries. The Admin Backend is also in charge of providing an audit trail when data is added, updated, or removed. All these features are implemented based on a documented Java API so that INT’s customers can plug in their own implementations. Developers are not limited to the REST services that the Admin provides; they can add their own. While the Admin SDK has multiple customization points, the use cases below are the most common.

Customization Use Cases

The first use case of the Admin SDK is the customization of authentication. By default, IVAAP supports two types of authentication: a simple OAuth2-based authentication and OpenID Connect. IVAAP customers often have their own authentication system, and IVAAP needs to use that system. To make this possible, the Admin SDK provides a way to customize how users are authenticated and how sessions are managed and tracked.

The next use case is the customization of key services, typically collection services. For example, the Admin Backend has a service that lists all active users. This service is used by the UI when a template is shared with others. IVAAP customers can plug their own service that will list potential users, for example, listing them from an LDAP server instead of IVAAP’s own PostgreSQL database.

The third use case is the customization of entitlements. Each time a dataset is opened, the Admin Backend is queried to check whether the currently logged-in user has access to this dataset. The default implementation relies on group memberships, but customers can plug in their own rules. These rules can be fine-grained, for example, making determinations at the well log level instead of the well level.

External Integration Use Cases

The integration of the Admin Backend with other systems goes beyond authentication. The Admin REST services are designed to be called either by a human or a computer. Unlike humans, computers can’t easily log in to a system that requires two-factor authentication. This is why the Admin provides a “Circle of Trust” REST API that allows computers to access its data without the need for a login or password, but rather by a secure exchange of keys. This feature opens new integration use cases.
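INT does not publish the details of this key exchange, but a common pattern for trusted machine-to-machine calls is to sign each request with a shared secret so the server can authenticate the caller without a login or password. The HMAC sketch below is a generic illustration of that pattern in plain Java, not IVAAP's actual "Circle of Trust" implementation:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RequestSigner {
    /** Signs a request payload with a shared secret (HMAC-SHA256).
     *  The server, holding the same secret, recomputes the signature
     *  to verify the caller's identity. */
    static String sign(String secret, String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(
                secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return Base64.getEncoder().encodeToString(
                mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical shared key and request descriptor
        String signature = sign("shared-key", "GET /activities 2023-05-01T00:00:00Z");
        System.out.println(signature);
    }
}
```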

The first use case for the “Circle of Trust” is the automated retrieval of user activities. Some INT customers require monitoring of user sessions to assess how much IVAAP is used. The REST API for listing user activities is straightforward to use and can be leveraged by a tool outside of IVAAP.

Another use case for the “Circle of Trust” is the automated registration of external workflows. INT customers might have hundreds of machine learning (ML) workflows that are hosted on various systems. With “Circle of Trust” credentials, the endpoints for these workflows can be registered automatically so that they appear as options to IVAAP users.

A Single SDK for Two Backends

The Admin Backend has about 200 REST services, and these services were developed with the same SDK as the Data Backend. The same INT developers who maintain the Data Backend also maintain the Admin Backend, with no additional training required. It’s not just INT that benefits; our customers benefit, too. Together, the Data Backend and the Admin Backend provide a unified experience for all Java developers customizing IVAAP servers.




Feb 17 2022

How Apache SIS Simplifies the Hidden Complexity of Coordinate Systems in IVAAP

See how Apache SIS helps IVAAP support our clients’ coordinate systems with less code.

With the recent release of IVAAP 2.9, now is a good time to reflect on the help we got along the way. One of the components that made IVAAP possible is the Apache SIS library. The goal of this blog article is to bring more visibility to this awesome Java library.

What Is the Apache SIS Library?

Apache SIS is an open-source library written in Java that makes it easier to develop geospatial applications. Many datasets have metadata designating their location on Earth, and these locations are relative to a datum and a map projection method. There are many datums and many map projection methods. Apache SIS facilitates their identification and the accurate conversion of coordinates between them.

What’s a Datum and What’s a Map Projection Method?

Most people are familiar with latitude and longitude coordinates. This geographic coordinate system has been used for maritime and land-based navigation for centuries. Since the late 1800s, the line defining 0º of longitude has been the Prime meridian, crossing the location of the Royal Observatory in Greenwich, England. This meridian defined one axis, from South to North. The equator defined the other axis, from West to East. The origin point of this system on the Earth’s surface is in the Gulf of Guinea, 600 km off the coast of West Africa.

The traditional geographic coordinate system (Source)

 

Similarly, a datum defines the origin point of the coordinate axes on the Earth’s surface and defines the direction of the axes. To account for the fact that the Earth is not a perfect sphere, a datum also describes the generalized shape of the Earth. For example, WGS 84 (World Geodetic System 1984) is a widely-used global datum based on latitudes and longitudes where the Earth is modeled as an oblate spheroid, centered around its center of mass.

The WGS 84 reference frame. The oblateness of the ellipsoid is exaggerated in this image. (Source)

 

WGS 84 is used by GPS receivers and the longitude 0º of this datum is actually 335 ft east of the Greenwich meridian.

While universal latitude and longitude coordinates are convenient, they are not universally practical because land masses drift. Satellite measurements show that the location of Houston relative to the WGS 84 datum changes by 1 inch each year. A local datum is a more pragmatic choice than a global datum because distances from a local point of reference are smaller and don’t change over the years when all locations are on the same tectonic plate. A local datum may also align its spheroid to closely fit the Earth’s surface in a particular area.

A map projection method indicates how the Earth’s surface is flattened into a plane in order to make a 2D map. The most widely known projection method was presented by Gerardus Mercator in 1569. This is a global cylindrical projection method. It preserves local directions and shapes but distorts sizes away from the equator.

An example of global cylindrical projection (Source)
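For intuition, the spherical form of the Mercator forward projection fits in a few lines of Java. This is a simplified sketch: production libraries such as Apache SIS use the ellipsoidal form with additional correction terms.

```java
public class Mercator {
    static final double R = 6378137.0; // WGS 84 equatorial radius, in meters

    /** Spherical Mercator forward projection:
     *  longitude/latitude in degrees -> planar x/y in meters.
     *  x = R * lon, y = R * ln(tan(pi/4 + lat/2)) */
    static double[] project(double lonDeg, double latDeg) {
        double lon = Math.toRadians(lonDeg);
        double lat = Math.toRadians(latDeg);
        double x = R * lon;
        double y = R * Math.log(Math.tan(Math.PI / 4 + lat / 2));
        return new double[] {x, y};
    }

    public static void main(String[] args) {
        double[] xy = project(-95.37, 29.76); // roughly Houston
        System.out.printf("x=%.0f m, y=%.0f m%n", xy[0], xy[1]);
    }
}
```

The formula makes the distortion visible: y grows without bound as latitude approaches the poles, which is why sizes are exaggerated away from the equator.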

 

In the US, the Lambert Conformal Conic projection has become a standard projection for mapping large areas. This is a projection that requires multiple parameters, defining the longitude and latitude of its center, a distance offset to this center, and the latitude of its southern and northern parallels.

An example of local conical projection (Source)

 

When a datum and a projection are used together, they define a projected coordinate reference system. While local systems limit distortions, they are only valid in a small area, known as the “area of use,” where a minimum level of accuracy is guaranteed.

A screenshot from INTViewer showing the area of use of NAD27 / Wyoming East Central, a derived projected coordinate reference system

 

How Does Apache SIS Help IVAAP?

To show geoscience datasets on one of IVAAP’s 2D map widgets, you need to use a common visualization coordinate reference system.

An IVAAP screenshot showing the location of wells on a map

 

This is where Apache SIS helps us: It understands the properties of both the data and visualization systems and is able to convert coordinates between them.

The math to perform these conversions is complex; it is not something you want to implement on your own. It requires specialized skills, both as a programmer and as a domain expert. Beyond the math, the number of datums and projection methods is mind-boggling, and many historical surveys are still in use today. For example, there are two datums used for making horizontal measurements in North America: the North American Datum of 1927 (NAD 27) and the North American Datum of 1983 (NAD 83). The two datums are based on two different ellipsoid models. As a result, the two datums have grid shifts of up to 100 meters, depending on location. IVAAP is able to visualize datasets that used NAD 27 as a reference, and it is Apache SIS that makes it possible to accurately reproject these coordinates into modern coordinate systems, accounting for their respective datum shifts.

The datum shift between NAD 27 and NAD 83 (Source)

 

The oil and gas industry is at the origin of some of these local coordinate systems. Many of today’s new oil fields are in remote areas, initially lacking a geographical survey. There is an organization called the “OGP Surveying and Positioning Committee” which keeps track of these coordinate systems. It is colloquially known as “EPSG” for historical reasons. It regularly provides a database of these coordinate systems to all its members. This database is used by IVAAP and Apache SIS provides a simple API to take advantage of it. Each record in this database has a numerical WKID (Well Known ID). To instantiate a projection method or a coordinate system defined in this database, you just need to prefix this id with the “EPSG:” string.

OperationMethod method = getCoordinateOperationFactory().getOperationMethod("EPSG:9807"); // Transverse Mercator method

CoordinateReferenceSystem crs = CRS.forCode("EPSG:32056"); // NAD27 / Wyoming East Central

 

The EPSG database itself is extensive, but it is common for INT customers to use unlisted coordinate reference systems, created for brand-new oil fields. In these cases, a WKT (Well Known Text) string can be used instead. This text is a human-readable description of a projection method or coordinate system. Apache SIS provides a clean API to parse WKTs. It also provides an API for formula-based projection methods that can’t be described by a WKT.

PROJCS["NAD27 / Wyoming East Central",
    GEOGCS["NAD27",
        DATUM["North_American_Datum_1927",
            SPHEROID["Clarke 1866",6378206.4,294.9786982139006,
                AUTHORITY["EPSG","7008"]],
            AUTHORITY["EPSG","6267"]],
        PRIMEM["Greenwich",0,
            AUTHORITY["EPSG","8901"]],
        UNIT["degree",0.0174532925199433,
            AUTHORITY["EPSG","9122"]],
        AUTHORITY["EPSG","4267"]],
    PROJECTION["Transverse_Mercator"],
    PARAMETER["latitude_of_origin",40.66666666666666],
    PARAMETER["central_meridian",-107.3333333333333],
    PARAMETER["scale_factor",0.999941177],
    PARAMETER["false_easting",500000],
    PARAMETER["false_northing",0],
    UNIT["US survey foot",0.3048006096012192,
        AUTHORITY["EPSG","9003"]],
    AXIS["X",EAST],
    AXIS["Y",NORTH],
    AUTHORITY["EPSG","32056"]]

The WKT of NAD27 / Wyoming East Central, with the WKID 32056

Why Did INT Choose Apache SIS Over Other Options?

INT had previous experience using GeoTools. Similarly to Apache SIS, GeoTools is a Java library dedicated to facilitating the implementation of geographical information systems. Being an older library, it covers much more ground than Apache SIS. For example, one of its components allows the parsing of shapefiles, something currently outside the scope of Apache’s library. As a matter of fact, the first versions of IVAAP used GeoTools for coordinate conversions.

One of the issues we encountered with GeoTools is that it provides only fine-grained Java conversion APIs. There are several paths to convert coordinates between two systems, and GeoTools leaves it to the developer to choose the best method. Choosing the “best” method without human interaction is complex; it depends on the extent of the data being manipulated and the “area of use” of each coordinate reference system involved. It also depends on the availability of well-known transformation algorithms between datums. In North America, the standard for transformations between datums was formerly known as NADCON. The rest of the world uses a standard known as NTv2. Apache SIS works with both datum shift standards, and may elect to use WGS 84 as a hub when no datum shift is applicable. An algorithm to pick the best method would have required a significant amount of code for INT to write and maintain. While Apache SIS allows fine-grained control over the different transformations used when converting from one coordinate reference system into another, it also provides a high-level API that performs this conversion and picks the best algorithm as part of its implementation. This high-level Java API matches the needs of IVAAP’s general-purpose conversion microservice. To pick the right algorithm, it only takes three parameters:

  • A definition of the “from” coordinate system
  • A definition of the “to” coordinate system
  • A description of the “extent” of the coordinates to convert
double x = …
double y = …
GeographicBoundingBox extentInLongLat = …
DirectPosition position = new DirectPosition2D(x, y);
CoordinateReferenceSystem fromCrs = CRS.forCode("EPSG:32056"); // NAD27 / Wyoming East Central
CoordinateReferenceSystem toCrs = CRS.forCode("EPSG:3737"); // NAD83 / Wyoming East Central
CoordinateReferenceSystem displayOrientedFromCrs = AbstractCRS.castOrCopy(fromCrs).forConvention(AxesConvention.DISPLAY_ORIENTED);
CoordinateReferenceSystem displayOrientedToCrs = AbstractCRS.castOrCopy(toCrs).forConvention(AxesConvention.DISPLAY_ORIENTED);
CoordinateOperation operation = CRS.findOperation(displayOrientedFromCrs, displayOrientedToCrs, extentInLongLat);
MathTransform mathTransform = operation.getMathTransform();
double[] coordinate = mathTransform.transform(position, position).getCoordinate();

Sample code to convert a single x, y position from “NAD27 / Wyoming East Central” to “NAD83 / Wyoming East Central”

We still use GeoTools for other parts, but as a general rule, the Apache SIS Java API tends to be simpler and more modern than GeoTools’ when it comes to manipulating coordinates and coordinate systems.

After three years of use, we are happy with our decision to move to Apache SIS. This library allows us to support more of our customers’ coordinate systems, with less code. We are also planning to use it to interpret the metadata of GeoTIFF files. The support has been excellent: when we needed help, the members of the Apache SIS development team were keen to assist. This is one of the reasons why INT felt we needed to give back to the open-source community. As a long-time member of OSDU, INT contributed to OSDU a coordinate conversion library built on top of Apache SIS. This library converts GeoJSON geometries and trajectory stations between different coordinate reference systems. Users can specify the transformation steps to be used in the conversion process, either through EPSG codes or WKTs. Behind the scenes, it is Apache SIS’s fine-grained API that is being used.


Filed Under: IVAAP Tagged With: apache, apache sis, ivaap, java, maps

Jan 12 2021

Comparing Storage APIs from Amazon, Microsoft and Google Clouds

One of the unique capabilities of IVAAP is that it works with the cloud infrastructure of multiple vendors. Whether your SEGY file is posted on Microsoft Azure Blob Storage, Amazon S3 or Google Cloud Storage, IVAAP will be capable of visualizing it.

It’s only when administrators register new connectors that vendor-specific details need to be entered. For all other users, the user interface is identical regardless of the data source. The REST API consumed by IVAAP’s HTML5 client is common to all connectors as well. The key component that does the hard work of “speaking the language of each cloud vendor and hiding its details from the other components” is the IVAAP Data Backend.

While the concept of “storage in the cloud” is similar across all three vendors, each provides a different API to achieve similar goals. In this article, we will compare how to implement four basic functionalities. Because the IVAAP Data Backend is written in Java, we’ll only compare Java APIs.
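The article does not show the Data Backend’s internal interfaces, but the kind of vendor-neutral abstraction it describes can be sketched as a small Java interface. This is a hypothetical sketch, not the actual IVAAP API: the interface and class names are invented for illustration, and each cloud connector would implement the interface using the vendor SDK calls shown below. An in-memory implementation is included only to make the sketch runnable; a real connector would also honor folder delimiters the way the listing snippets do.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical vendor-neutral contract; one implementation per cloud vendor. */
interface CloudObjectStore {
    boolean exists(String container, String name);
    Instant lastModified(String container, String name);
    InputStream open(String container, String name);
    List<String> listFolder(String container, String parentFolderPath);
}

/** In-memory implementation, just to make the sketch runnable. */
class InMemoryObjectStore implements CloudObjectStore {
    private final Map<String, byte[]> blobs = new HashMap<>();
    private final Map<String, Instant> timestamps = new HashMap<>();

    public void put(String container, String name, byte[] data) {
        blobs.put(container + "/" + name, data);
        timestamps.put(container + "/" + name, Instant.now());
    }

    @Override public boolean exists(String container, String name) {
        return blobs.containsKey(container + "/" + name);
    }

    @Override public Instant lastModified(String container, String name) {
        return timestamps.get(container + "/" + name);
    }

    @Override public InputStream open(String container, String name) {
        return new ByteArrayInputStream(blobs.get(container + "/" + name));
    }

    @Override public List<String> listFolder(String container, String parentFolderPath) {
        // Simplified: lists every key under the prefix, without collapsing subfolders.
        String prefix = container + "/" + parentFolderPath + "/";
        List<String> names = new ArrayList<>();
        for (String key : blobs.keySet()) {
            if (key.startsWith(prefix)) {
                names.add(key.substring(prefix.length()));
            }
        }
        return names;
    }
}
```

The point of such an interface is that everything above the Data Backend, including the REST API and the HTML5 client, only ever sees `CloudObjectStore`, never an SDK type.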

 

Checking that an Object or Blob Exists

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
String region = …
String bucketName = …
String keyName = …
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(Region.of(region))
        .build();
try {
    HeadObjectRequest request = HeadObjectRequest.builder().bucket(bucketName).key(keyName).build();
    s3Client.headObject(request);
    return true;
} catch (NoSuchKeyException e) {
    return false;
}

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String blobName = ...
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder().endpoint(endpoint).credential(credential);
BlobServiceClient client = builder.buildClient();
BlobContainerClient containerClient = client.getBlobContainerClient(containerName);
BlobClient blobClient = containerClient.getBlobClient(blobName);
return blobClient.exists();

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String blobName = ...
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Blob blob = storage.get(bucketName, blobName, BlobGetOption.fields(BlobField.ID));
return blob != null && blob.exists(); // get() returns null when the blob does not exist

 

Getting the Last Modification Date of an Object or Blob

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
String region = …
String bucketName = …
String keyName = …
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(Region.of(region))
        .build();
HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
        .bucket(bucketName)
        .key(keyName)
        .build();
HeadObjectResponse headObjectResponse = s3Client.headObject(headObjectRequest);
return headObjectResponse.lastModified();

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String blobName = …
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder()
.endpoint(endpoint)
.credential(credential);
BlobServiceClient client = builder.buildClient();
BlobClient blob = client.getBlobClient(containerName, blobName);
BlobProperties properties = blob.getProperties();
return properties.getLastModified();

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String blobName = …
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Blob blob = storage.get(bucketName, blobName, BlobGetOption.fields(Storage.BlobField.UPDATED));
return blob.getUpdateTime();

 

Getting an Input Stream out of an Object or Blob

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
String region = …
String bucketName = …
String keyName = …
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(Region.of(region))
        .build();
GetObjectRequest getObjectRequest = GetObjectRequest.builder()
        .bucket(bucketName)
        .key(keyName)
        .build();
return s3Client.getObject(getObjectRequest);

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String blobName = …
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder()
.endpoint(endpoint)
.credential(credential);
BlobServiceClient client = builder.buildClient();
BlobClient blob = client.getBlobClient(containerName, blobName);
return blob.openInputStream();

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String blobName = …
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Blob blob = storage.get(bucketName, blobName, BlobGetOption.fields(BlobField.values()));
return Channels.newInputStream(blob.reader());

 

Listing the Objects in a Bucket or Container While Taking into Account Folder Hierarchies

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
String region = …
String bucketName = …
String parentFolderPath = ...
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(Region.of(region))
        .build();
ListObjectsV2Request.Builder builder = ListObjectsV2Request.builder().bucket(bucketName).delimiter("/").prefix(parentFolderPath + "/");
ListObjectsV2Request request = builder.build();
ListObjectsV2Iterable paginator = s3Client.listObjectsV2Paginator(request);
Iterator<CommonPrefix> foldersIterator = paginator.commonPrefixes().iterator();
while (foldersIterator.hasNext()) {
…
}

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String parentFolderPath = ...
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder()
        .endpoint(endpoint)
        .credential(credential);
BlobServiceClient client = builder.buildClient();
BlobContainerClient containerClient = client.getBlobContainerClient(containerName);
Iterable<BlobItem> iterable = containerClient.listBlobsByHierarchy(parentFolderPath + "/");
for (BlobItem currentItem : iterable) {
   …
}

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String parentFolderPath = ...
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Page<Blob> blobs = storage.list(bucketName, BlobListOption.prefix(parentFolderPath + "/"), BlobListOption.currentDirectory());
for (Blob currentBlob : blobs.iterateAll()) {
 ...
}

 

Most developers will discover these APIs by leveraging their favorite search engine, and cloud APIs, driven by innovation and performance, become obsolete quickly. Amazon was the pioneer: much of the documentation still indexed by Google covers the v1 SDK, even though v2 has been available for more than two years (and wasn’t a complete replacement). This sometimes makes research challenging, even for the simplest needs. Microsoft migrated from v8 to v12 more recently and has a similar challenge to overcome. Being the most recent major player, the Google SDK is not dragged down much by obsolete articles.

The second way that developers will discover an API is by using the official documentation. I found that the Microsoft documentation is the most accessible. There is a definite feel that the Microsoft Azure documentation is treated as an important part of the product, with lots of high-quality sample code targeted at beginners.

The third way that developers discover an API is by using their IDE’s code completion. All cloud vendors make heavy use of the builder pattern. The builder pattern is a powerful way to provide options without breaking backward compatibility, but it slows down the self-discovery of the API. The Amazon S3 API also stays quite close to the HTTP protocol, using terminology such as “GetObjectRequest” and “HeadObjectRequest”. Microsoft had a higher-level API in v8 where you manipulated blobs directly. The v12 iteration moved away from this apparent simplicity by introducing the concept of blob clients instead. Microsoft offers a refreshing explanation of this transition. Overall, I found that the Google SDK tends to offer the simplest APIs for performing simple tasks.

There are more criteria than simplicity and discoverability when comparing APIs; versatility and performance are two of them. The Amazon S3 Java SDK is probably the most versatile because of the larger number of applications that have built on it. It even works with S3 clones such as MinIO Object Storage (and so does IVAAP). The space where there are still a lot of changes is asynchronous APIs. Asynchronous APIs tend to offer higher scalability and faster execution, but they can only be compared in specific use cases where they are actually needed. IVAAP makes heavy use of asynchronous APIs, especially to visualize seismic data. This area evolves rapidly and would deserve a more in-depth comparison in another article.

For more information on IVAAP, please visit int.com/products/ivaap/

 


Filed Under: IVAAP Tagged With: API, cloud, Google, ivaap, java, Microsoft

