And then I asked GPT to interpret a seismic line! Guess what it did!
Artificial intelligence has already tried to write poetry, plan my trips, and debate philosophy with a seriousness that is almost touching. So, as a follow-up to my previous post, I decided to try something different.
I asked ChatGPT to interpret a seismic line.
Not to draw cartoons. Not to mimic what seismic “looks like”. But to take an actual seismic section of a salt dome and propose an interpretation.
It felt like the closest thing to asking a machine to think geologically.
The Experiment: Putting AI in the Interpreter's Chair
I uploaded the AI-generated seismic line from my previous post, showing a classic salt pillow geometry: a broad uplift, strong drape above, steep flanks, and a chaotic, low-signal zone beneath.
Then I prompted GPT to:
- Identify the salt body
- Draw the top of salt
- Trace the major unconformities
- Divide the reflectors into megasequences
- Colour-code the result
- Annotate the structural relationships
In other words: do the work of a structural interpreter who has just arrived on site, coffee in hand, ready to map.
And it did. Not perfectly. Not physically. But recognizably.
Three Versions, Three Digital Personalities
I didn't get one answer; I got a spectrum. Each interpretation revealed something slightly different about how the model “sees” the data.
Version 1: The Junior Interpreter
A conservative attempt: top salt and one unconformity. It got the drape right but underestimated the number of depositional sequences. It was almost like a junior interpreter who is too afraid to make a bold pick, sticking to the most obvious features.
Version 2: The Confident Geoscientist
This one was more confident. It mapped more sequences, showed better structural conformity, and sketched a more realistic shape of the salt pillow. It even introduced subtle onlap patterns on the flanks, demonstrating a more nuanced understanding of stratigraphic relationships.
Version 3: The Workshop Lead
The most ambitious. GPT identified multiple megasequences, refined the base-salt contact, and filled the interpretation with colour-coded units that resemble a textbook example. This is the version you might see an experienced geoscientist sketch on a whiteboard during a team workshop.
The "How": Mimicry, Not Magic
So, what is actually happening here? Is the AI thinking?
No. All the interpretations are wrong in important ways: there are no wells, no concept of time-depth conversion, no treatment of velocity problems. AI is not “interpreting” in the human sense. It is not running physics, ray tracing, wave equations, or inversion. It is not calculating impedance contrasts or verifying whether the geometry can produce the recorded wavefield. What it is doing is something more primitive but still remarkable: it is mimicking the language of seismic interpretation.
It has learned the gestures, colour schemes, and structural patterns common in published seismic figures and geological diagrams. That is why the interpretations feel familiar even when the physics is missing.
The Human Lesson: Interpretation is a Hypothesis, Not a Drawing
This experiment reinforces the most essential lesson in our discipline:
Interpretation is not decoration.
Interpretation is hypothesis testing.
A horizon is not a line you draw because it “looks nice”. It is a geometric proposition about the Earth that must be able to produce the wavefield we observe. If it cannot, the interpretation is false, no matter how senior or confident the interpreter.
This is precisely why I am now building a Python forward modelling sandbox. The goal is to test interpretations, compare models, and, most importantly, let the wavefield be the judge. (A future post will cover that in detail).
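The sandbox itself will be the subject of that future post, but the core idea can already be sketched in a few lines: a minimal 1-D convolutional forward model that turns a layered interpretation into a synthetic trace. This is an illustrative sketch, not the sandbox code; the layer velocities and densities below are toy values I made up for a sediments-over-salt scenario.

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Ricker wavelet with peak frequency f (Hz), sampled at dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(velocity, density, dt=0.002, f=25.0):
    """Convolutional model: impedance -> reflectivity -> wavelet convolution."""
    impedance = np.asarray(velocity) * np.asarray(density)
    # Reflection coefficient at each layer boundary
    rc = np.diff(impedance) / (impedance[1:] + impedance[:-1])
    return np.convolve(rc, ricker(f, dt), mode="same")

# Toy three-layer model: sediments over salt over sediments (illustrative values)
vel = np.array([2200.0, 4500.0, 3000.0])  # m/s
rho = np.array([2.20, 2.15, 2.40])        # g/cc
trace = synthetic_trace(vel, rho)
```

Even this toy version makes the point: change the layer geometry or properties and the synthetic trace changes, so an interpretation can be confronted with the data rather than just drawn on top of it.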
The Bottom Line: What GPT Reveals About Our Future
When AI attempts to interpret seismic, three things become clear:
- It can recognize structural patterns because it has seen thousands of similar images.
- It cannot evaluate physical plausibility without an explicit geophysical model.
- If we give it physics-based forward models, it could evolve into a genuinely powerful partner in interpretation, not a replacement.
This technology doesn't replace interpreters yet. But it does put pressure on us to work more rigorously, to go beyond tracing lines and into the realm of testing hypotheses. Lazy interpreters will very soon be killed by AI.
Conclusion: A Digital Field Assistant
These images are not toys. They are a glimpse of how digital companions might assist future interpreters:
- Rapid structural sketching to kickstart a project.
- Scenario generation to explore multiple possibilities.
- Uncertainty visualization to highlight ambiguous zones.
- QC against physics-based forward models to validate our picks.
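To make the last point concrete, here is one minimal way such a QC could work, sketched under my own assumptions (the function name and the similarity metric are illustrative choices, not an established tool): score a pick by comparing the synthetic trace it predicts against the observed trace with a zero-lag normalized cross-correlation, where 1 means identical shape and values near 0 mean no resemblance.

```python
import numpy as np

def trace_similarity(observed, synthetic):
    """Zero-lag normalized cross-correlation between two equal-length traces."""
    o = (observed - observed.mean()) / observed.std()
    s = (synthetic - synthetic.mean()) / synthetic.std()
    return float(np.dot(o, s) / o.size)

# Illustrative check: a trace matches itself perfectly,
# and barely matches random noise
rng = np.random.default_rng(0)
t = np.sin(np.linspace(0, 20, 200))        # stand-in for an observed trace
self_score = trace_similarity(t, t)        # ~1.0
noise_score = trace_similarity(t, rng.standard_normal(200))
```

A real QC would of course need time alignment, amplitude calibration, and lateral consistency checks, but even a crude score like this turns "does my pick look right?" into a number the wavefield can veto.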
AI will not tell us what the Earth is. But it may help us see what the Earth cannot be.
And that is already incredibly valuable.
Welcome again to "The Geoscientist Blog".
More experiments are already running.