Jan 04 2019

What We Learned about the Future of NetBeans from the Last 2 Years

It’s been two years since Oracle announced the donation of the NetBeans source code to the Apache Software Foundation. This move was much more than a licensing change: it was a new beginning for NetBeans.

The NetBeans IDE is well liked at INT. In fact, INTViewer is built on top of the NetBeans platform, and the IVAAP backend was written entirely with the NetBeans IDE. With the release of NetBeans 10, now is a good time to look back and take stock of the changes this transition to Apache has brought.

Better Licensing
Under Oracle’s stewardship, the NetBeans source code was available under two licenses: the Common Development and Distribution License (CDDL) and the GNU General Public License (GPL). The CDDL is not well known, and the GPL sometimes carries a stigma. The license move is a clear win for the platform, as Apache is appreciated in the Java community for its business-friendly license and ubiquitous libraries.

Open Governance
The Apache NetBeans project is still “incubating.” The incubation process allows the foundation to assess whether the donated code complies with legal standards and how well the community adheres to Apache’s guiding principles. These principles are centered on openness and collaboration. You can see them at work on NetBeans’ own website: all communications are recorded and shared. Using public mailing lists is actually a requirement of the Apache Foundation. No conversations behind closed doors. No secret agenda. When decisions are made, you can see how the consensus was built. The NetBeans project didn’t just get a “new home”; it inherited a renewed philosophy and a new process, moving from “open source” to “open governance.”

Ongoing Support
Oracle has been a significant contributor to NetBeans in the past. Despite the spin-off, Oracle’s contributions continue to this day: in June, the NetBeans project received a second code donation from Oracle, one that will enable Jakarta EE projects. Two years ago, observers worried that Oracle might be “abandoning” NetBeans to Apache. The last two years, however, have proved that Oracle still intends to invest resources in NetBeans.

The move to Apache was also a good opportunity to modernize the community’s tools: Bugzilla was retired, making room for JIRA, and self-hosted Mercurial was replaced by Git, hosted on GitHub. These changes make it easier for developers to contribute, giving the community much more freedom to control its future.

The timing of this transition wasn’t ideal. Effort that the NetBeans community would normally have spent supporting Java 9, 10, and 11 was instead spent meeting Apache’s legal requirements. The release of NetBeans 10 officially closes this chapter. The NetBeans developers deserve recognition for their efforts during these two years. As all Java developers can attest, transitioning any code base to Java 11 is a challenge. This was certainly even more true for a code base as large as NetBeans’.

What’s Next for NetBeans?
This chapter has yet to be written. Discussions point to frequent updates, perhaps every six months. Meanwhile, INT is working actively to integrate the NetBeans 10 platform into INTViewer. Personally, I feel that the NetBeans project is likely to attract a new crowd of developers: developers who have an itch to scratch. Since the users of the NetBeans IDE are developers themselves, there is a definite sense that filing bug reports or proposing new features won’t be enough to get things done. Pull requests make it easier than ever to submit changes, and I plan to scratch a few long-time itches myself; maybe INTViewer would benefit from some tweaks to the NetBeans window system. It’s time for all of us to use these newfound abilities.

Visit our products page for more information about INTViewer or IVAAP or contact us for a demo.


Filed Under: INTViewer, IVAAP Tagged With: INTViewer, ivaap, NetBeans

Nov 13 2018

Using Scopes in IVAAP: Smart Caching and Other Benefits for Developers

With the release of IVAAP 2.2 coming up, there are lots of new features to explore. As more data sources have been added, the Software Development Kit (SDK) of IVAAP’s backend has also grown. As developers get acquainted with IVAAP’s Application Programming Interface (API), one often-asked question concerns the presence of a scopeUUID parameter in several method declarations: What is this scope, and why is it useful?

Smart Caching

The purpose of IVAAP’s backend is to access data from many data sources and present it in a unified manner to the IVAAP HTML5 client. This backend is written in Java and accesses SQL databases, web services, and virtually any other type of data store. Performance is key, and some data stores are faster to access than others; web services, for example, tend to have much higher latency than SQL databases. In a web service configuration, a smart caching strategy is required so that the same HTTP calls are not made to the same data store twice while a service request is being fulfilled, regardless of its complexity. This is where the concept of scope comes in.

A scope is defined as a short-lived execution, and within this short lifetime, it’s generally safe to assume that two identical calls to the same data store will return the same result. The scopeUUID is the unique identifier of a scope. A scope is created automatically when an HTTP request is sent to a backend service and disposed of when that entire request has been fulfilled. Depending on the performance characteristics of a data source, developers working on a data connector have the option to reuse cached information across the lifetime of a scope.
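To make the idea concrete, here is a minimal sketch of scope-based caching. The class and method names are hypothetical, not the actual IVAAP SDK API; the point is simply that cached entries are keyed by the scopeUUID, so disposing of the scope discards every result fetched during that request.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical helper, not an IVAAP SDK class: one cache map per live
// scope, keyed by the scope's unique identifier.
public class ScopeCache {
    private final Map<UUID, Map<String, Object>> caches = new ConcurrentHashMap<>();

    /** Returns the cached value for this scope, computing it on first use. */
    @SuppressWarnings("unchecked")
    public <T> T get(UUID scopeUUID, String key, Supplier<T> fetch) {
        Map<String, Object> scoped =
            caches.computeIfAbsent(scopeUUID, id -> new ConcurrentHashMap<>());
        // Within a scope's short lifetime, two identical calls are assumed
        // to return the same result, so the second call hits the cache.
        return (T) scoped.computeIfAbsent(key, k -> fetch.get());
    }

    /** Called when the HTTP request completes and the scope is disposed of. */
    public void dispose(UUID scopeUUID) {
        caches.remove(scopeUUID);
    }
}
```

With a cache like this, a connector fetching the same well list twice while fulfilling one request would issue only a single HTTP call to the underlying data store.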

Scopes are needed because of the asynchronous nature of IVAAP’s execution architecture. Asynchronous code is less common than synchronous code; it’s more complex to write but tends to withstand higher workloads, which is a key requirement for IVAAP. If service executions were performed synchronously, the scope would be strongly associated with the current thread, and developers would typically cache the same information in a ThreadLocal variable instead.

Identifying the Source of a Method Call

While the most common use of scopes is caching, being able to identify the source of a method call can be quite useful at runtime. In a synchronous world, the Java call stack provides this information. But in an asynchronous world where messages are passed between actors, the call stack provides very little useful information. The IVAAP API allows you to build a useful call stack while developing. For example, if your backend code makes a call to an external web service but you’re not sure why, you can plug in your own AbstractScopesController class to control when the scope is created, then plug in your own AbstractActorsController class to track how actors spawn other actors within that scope.

Managing Transactions

A third typical use of scopes is to help with the management of transactions. When you retrieve a connection from a SQL connection pool, knowing the scope allows you to reuse the same connection instead of picking a fresh one from the pool. Transactions are tightly associated with connections, and without a scope, you’d need to pass connections around while a transaction is in play. Passing a scopeUUID instead avoids this code clutter, allowing you to easily implement APIs that are truly data-source agnostic.
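The connection-per-scope idea can be sketched as follows. This is a hypothetical illustration, not the IVAAP SDK’s actual implementation; the type parameter C stands in for java.sql.Connection so the sketch stays self-contained.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical helper: every data access within a scope shares one
// connection (and therefore one transaction) without the connection
// object being passed from method to method.
public class ScopedConnections<C> {
    private final Map<UUID, C> inUse = new ConcurrentHashMap<>();
    private final Supplier<C> pool; // stands in for a real connection pool

    public ScopedConnections(Supplier<C> pool) {
        this.pool = pool;
    }

    /** Same scopeUUID, same connection: the transaction stays intact. */
    public C connectionFor(UUID scopeUUID) {
        return inUse.computeIfAbsent(scopeUUID, id -> pool.get());
    }

    /** On scope disposal, a real implementation would commit and close here. */
    public C dispose(UUID scopeUUID) {
        return inUse.remove(scopeUUID);
    }
}
```

Because callers only need the scopeUUID to reach the right connection, methods can stay agnostic of whether the data source is transactional at all.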

Scope: Just More Code Clutter?

This brings me to the main objection to scopes: “Isn’t passing a scopeUUID code clutter of its own?” The answer lies in how a developer plans to use the IVAAP backend SDK. If you use this API to write synchronous code, the scopeUUID will appear to get in the way. If you use the same API to write asynchronous code, you’ll find that the internal maintenance of scopes doesn’t affect the implementation of your actors: the scopeUUID of a service request is passed automatically from one actor to the next, without the developer’s intervention. If your data source is fast enough that it doesn’t require caching, and if you don’t make use of transactions, your asynchronous code won’t need to be aware of scopes at all. It’s only when you have to troubleshoot the relationships between actors at runtime that you may come to appreciate their usefulness.

Visit our products page for more information about IVAAP or contact us for a demo.


Filed Under: IVAAP Tagged With: ivaap, SDK

Oct 24 2018

My Experience at INT with IVAAP: A First Look as a Developer

I started at INT a few weeks ago and my first task as a new INT developer was to add a data connector to IVAAP, INT’s HTML5 visualization framework for upstream E&P solutions.

As a new member of the software development team, I had no prior experience with development on this platform. To learn IVAAP and understand more about its software development kit, I used the IVAAP developer’s guide, which I found quite useful: it makes the key concepts behind IVAAP easy to understand.

With only a few years of experience with Java, I was surprised by the lookup system. IVAAP has a microservices REST architecture and is very modular in nature, and the lookup system is what ties all these modules together. It’s quite powerful, but it was something I had never encountered before.

Coding with IVAAP follows a simple model: each entity implementation consists of a POJO (Plain Old Java Object) class and its finder. This paradigm is consistent throughout the entire code base. Essentially, for this project, I plugged in only a few classes:

  • A data source type class
  • A data source class
  • A log curve class and its finder
  • A log curve data series class and its finder
  • A log curve data frame class and its finder

Implementing these classes essentially consists of following templates, where the public API provides hooks and the developer adds the implementation specific to their project. The public API is documented, making it clear what each method or class is meant to do.
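The POJO-plus-finder pattern described above can be sketched like this. The class names and fields are illustrative, not the real IVAAP SDK types: the POJO carries the data, and its finder is the hook where the data-source-specific retrieval logic lives.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the pattern, not actual IVAAP SDK classes.
public class LogCurveExample {

    /** The POJO: a plain carrier of log curve attributes. */
    public static class LogCurve {
        private final String name;
        private final String unit;

        public LogCurve(String name, String unit) {
            this.name = name;
            this.unit = unit;
        }

        public String getName() { return name; }
        public String getUnit() { return unit; }
    }

    /** The finder: where a connector would query its own data source. */
    public static class LogCurveFinder {
        public List<LogCurve> findByWell(String wellId) {
            // A real connector would query SQL, a web service, etc.
            // Hard-coded values keep the sketch self-contained.
            List<LogCurve> curves = new ArrayList<>();
            curves.add(new LogCurve("GR", "gAPI"));
            curves.add(new LogCurve("DT", "us/ft"));
            return curves;
        }
    }
}
```

Because the framework only talks to the finder, swapping data sources means rewriting the finder’s body, not the surrounding services.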

Even though this project is to be deployed on Linux, my development environment was Windows. It consisted of an Integrated Development Environment (the NetBeans IDE) and the Postman tool for testing individual services. This project accessed a SQL Server database, so I used DbVisualizer to browse the data.

Since this project only involved adding a data connector to IVAAP, I didn’t try to add new services, only to extend the data sources it supports. Building a connector on top of the existing web services allowed me to validate my work as it progressed. For example, after plugging in a new data source type, I could immediately verify that it worked as intended using Postman and following the HATEOAS links. The same was true when I plugged in a data source and each finder; there was no need to wait until all classes were plugged in to verify that the logic worked. I also found that the error management built into IVAAP helped me work efficiently, since the error reports made it easy to trace the actual issue.

The learning curve of the IVAAP software development kit is gradual; the API guides you. Unlike some of the frameworks I have worked with, no prior knowledge is necessary to get started. You can be effective from day one with just basic Java knowledge.

Visit our products page for more information about IVAAP or contact us for a demo.


Filed Under: IVAAP Tagged With: API, ivaap, microservices, SDK

May 17 2018

What Cloud Data Lakes Mean for Geoscience

With the explosion of storage capacity, cloud computing, and bandwidth availability, a trend has emerged in the oil and gas industry over the last few years: data that was previously aggregated and discarded is now maintained and stored, creating an opportunity for the industry. Coupled with new advances in machine learning and AI, this data availability is poised to drive more data-driven decision-making in well planning, drilling and completions, and well operations.

After our conversations with major operators over the last few months, we realized that the concept of a data lake is still fairly new and is perceived differently by different people. We thought this would be a great opportunity to explain what this technology approach is, what its benefits are, and how major operators can leverage INT’s enterprise data visualization platform to garner insights from geoscience data in the cloud.

What is a Data Lake?

The idea behind a data lake is that as businesses gather more data, that data cannot be used the same way it has been in the past. Databases are not a good fit for big data, not just because of its size but also because it cannot be normalized. A data lake is a large repository of raw data where each data point is stored in its native format, along with relevant metadata.

This approach solves a problem the oil and gas industry has faced for a long time. Despite industry efforts to standardize data formats, these formats are loosely followed and too limited to contain all the information related to a particular seismic survey or oil field. Keeping the data as-is makes the job of extracting valuable information difficult, and if you opt to normalize that data, you inevitably lose information. The data lake concept makes it possible to keep your original data while still allowing its exploitation.

How do you exploit data from a Data Lake?

When you add files to your lake, you carry along those files’ relevant metadata. When the right set of metadata is provided, you can search your data by that metadata. For example, if each document is geo-referenced, you can search for all documents relative to a geographical area.
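A toy sketch of this idea follows. The names and structure are illustrative, not a real data lake API: each entry keeps its file’s geo-reference alongside the untouched native file, and a bounding-box query returns every document inside a region of interest.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative catalog: the raw files stay in their native format;
// only the metadata extracted at ingestion time is indexed.
public class MetadataCatalog {

    public static class Entry {
        final String path;     // native-format file, stored as-is
        final double lat, lon; // geo-reference carried along at ingestion

        Entry(String path, double lat, double lon) {
            this.path = path;
            this.lat = lat;
            this.lon = lon;
        }
    }

    private final List<Entry> entries = new ArrayList<>();

    public void ingest(String path, double lat, double lon) {
        entries.add(new Entry(path, lat, lon));
    }

    /** All documents whose geo-reference falls inside the bounding box. */
    public List<String> search(double minLat, double maxLat,
                               double minLon, double maxLon) {
        List<String> hits = new ArrayList<>();
        for (Entry e : entries) {
            if (e.lat >= minLat && e.lat <= maxLat
                    && e.lon >= minLon && e.lon <= maxLon) {
                hits.add(e.path);
            }
        }
        return hits;
    }
}
```

A production data lake would index far richer metadata than a single coordinate, but the principle is the same: the query runs against the metadata, never against the raw SEG-Y or LAS bytes.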

How is the concept of Data Lake different from a search engine?

Search engines can’t really exploit files in SEG-Y or LAS formats. And even for well-known formats such as PDF, search engines have no awareness of which attributes of a PDF file are important to you. For example, if your metadata indicates that a file documents the characteristics of a well at a particular location, a data lake will allow you to find this file just by selecting the right region of interest.

How are cloud providers helping with Data Lakes?

Microsoft Azure, Amazon Web Services, and Google Cloud have developed a wide range of products and components to facilitate the creation and use of data lakes: nearly infinite storage, plus artificial intelligence and machine learning that provide a seamless way to ingest and consume your data. The technology behind such advanced indexing and analytics cannot be reproduced in-house; you need a world-class partner.

How is INT helping with Data Lakes?

INT helps in two ways. First, we have unique experience in the industry: having worked with so many industry players, we have acquired the knowledge required to read multiple data formats, even when these formats are not strictly followed. Our tools facilitate the extraction of the metadata required for the lake to function as intended, and not as a “swamp.”

And, of course, our visualization technology is what makes it all possible, all from the comfort of your browser. Our IVAAP Enterprise Cloud Viewer allows you to visualize the datasets and documents stored remotely in your data lake. You can start from a map and drill down all the way to the log curves of a well found on that map. In the same screen, you can review PDF reports for that well and navigate through the slices of the matching seismic survey as if it were stored locally.

For more information about our enterprise data visualization solutions, visit the IVAAP product page, or contact us.


Filed Under: IVAAP Tagged With: cloud, data lake, geoscience, ivaap
