Seismic Interpretation Sandbox V1.0: Transforming Depth Sketches into Synthetic Seismic Sections

Throughout my career in geophysics I have spent a lot of time staring at seismic lines, mentally converting two way travel time back into a geological depth picture, and arguing for interpretations that often sit in the blind spot of the managers in charge. The core challenge for interpreters is that our conceptual models live in depth, while the data we work with is almost always in time.

For this personal project I decided to reverse the usual direction of thinking. Instead of going from time to depth, I start with a simple geological sketch in depth and generate a synthetic time section that looks realistic enough to interpret. The result is a lightweight tool I call Seismic Interpretation Sandbox V1.0. It acts as a small bridge, transforming colour coded depth models into seismic style images that you can actually pick and think about.

In a later version I would like to connect it to AI services such as OpenAI so the code can read more complex sketches and models directly. That kind of integration depends on paid external compute, so for now this prototype stays deliberately simple and entirely home made.

In this post I walk through how Seismic Interpretation Sandbox V1.0 works, the basic physics behind it, and show results from three test cases:

  • dipping layered strata
  • a normal fault with offset horizons
  • a salt dome that produces velocity pull up

This is very much a work in progress, but already useful as a quick way to explore depth ideas in the time domain. If you have comments or ideas for improvement I would be happy to hear them. This is just the beginning.

How the Seismic Sandbox works

The first version is intentionally simple and does not rely on heavy modelling software. Behind the scenes there are seven main steps that convert a conceptual depth model into a synthetic seismic section.

  1. Sketch the depth model
    The starting point is a PNG image created in any drawing tool. Each geological body or facies is drawn with a flat solid colour. The vertical axis of the image represents depth in kilometres and the horizontal axis represents distance in kilometres.
  2. Convert colours to facies
    The script reads the PNG, groups similar colours and assigns a facies id to each group. This gives a regular grid of facies values on the chosen depth and distance grid. In a future version it would be very natural to add an AI step here, so that the tool can clean up hand drawn sketches, correct small drafting mistakes and accelerate the conversion from drawing to model.
  3. Assign physical properties
    For every facies id I manually enter P wave velocity v and density rho, based on well data, outcrops or simple geological common sense. For the examples in this post I used typical values such as:
    • water: v = 1500 m/s, rho = 1000 kg/m³
    • compacted sand: v = 2200 to 2500 m/s, rho = 2200 to 2400 kg/m³
    • carbonates: v = 4500 m/s, rho = 2600 kg/m³
    • basement: v = 5500 m/s, rho = 2700 kg/m³
    • salt: v = 4500 m/s, rho = 2150 kg/m³
  4. Compute impedance and reflectivity in depth
    Acoustic impedance is computed as I = v * rho for every cell. At each interface between layer 1 and layer 2 the reflection coefficient is R = (I2 - I1) / (I2 + I1). This tells us how strong the reflection is at that depth.
  5. Convert depth to two way travel time
For each trace the script integrates downward through the model using delta t = 2 * delta z / v. This maps every depth sample to a two way travel time, placing the reflectivity series derived in depth onto a time axis.
  6. Generate the synthetic seismogram
    The reflectivity is resampled on a regular time grid and convolved with a Ricker wavelet to create a simple zero offset synthetic trace at each horizontal position.
  7. Style for interpretation
    Finally I apply some basic processing so that the section looks more like a real seismic line: lateral smoothing, automatic gain control and a symmetric grey scale with amplitude clipping as a first step.
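Steps 2 to 6 can be sketched in a few dozen lines of NumPy. This is a minimal illustration, not the actual sandbox code: the tiny two-facies model, the grid sizes and all variable names are assumptions for the example, and the facies grid is built in code rather than read from a PNG.

```python
# Minimal sketch of steps 2-6 (illustrative only; names and grid sizes are
# assumptions, and the facies grid is built in code instead of read from a PNG).
import numpy as np

def ricker(f, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f in Hz."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Facies grid in depth: water over compacted sand, flat interface at 1 km
nz, nx, dz = 200, 50, 10.0                 # 2 km deep, 10 m cells
facies = np.zeros((nz, nx), dtype=int)
facies[100:, :] = 1

# Per-facies properties: P wave velocity (m/s) and density (kg/m^3)
vel_tab = np.array([1500.0, 2500.0])
rho_tab = np.array([1000.0, 2400.0])
v, rho = vel_tab[facies], rho_tab[facies]

# Impedance and interface reflectivity in depth
imp = v * rho
rc = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1])

# Two way travel time at the base of each depth cell, trace by trace
twt = 2.0 * np.cumsum(dz / v, axis=0)

# Resample reflectivity onto a regular time grid and convolve with the wavelet
dt = 0.002                                 # 2 ms sampling
nt = int(np.ceil(twt.max() / dt)) + 1
section = np.zeros((nt, nx))
for ix in range(nx):
    it = np.round(twt[:-1, ix] / dt).astype(int)   # time index of each interface
    np.add.at(section[:, ix], it, rc[:, ix])
    section[:, ix] = np.convolve(section[:, ix], ricker(30.0, dt), mode="same")
```

The single water/sand interface at 1 km depth shows up as one band of wavelet energy at about 1.33 s two way time (2 × 1000 m / 1500 m/s). Steps 1 and 7, reading the PNG and the display styling, are left out of the sketch.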

In the figures below each example shows the input depth sketch at the top left, the facies and velocity models at the top right, and the resulting synthetic seismic section at the bottom.

Example 1: dipping layered strata

The first model is a baseline test. A water layer overlies compacted sands, a carbonate unit and crystalline basement, all dipping gently.

The synthetic section behaves exactly as a simple convolutional model should:

  • strong, continuous reflectors at each interface
  • systematic moveout of events in time that mirrors the dip in depth
  • no unexpected artefacts, which is what you want from a first test

Higher velocity layers such as the carbonates compress the time interval of everything below them: deeper reflectors appear closer together in time than they are in depth. It is a nice visual reminder of how much velocity controls the final seismic image.
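To put a number on that compression, using the velocities from the property table above and an interval thickness assumed purely for illustration, the same 500 m of rock occupies very different amounts of two way time depending on its velocity:

```python
# Two way time thickness of a 500 m interval (thickness assumed for illustration)
thickness = 500.0                              # metres
for name, v in [("sand", 2500.0), ("carbonate", 4500.0)]:
    twt_ms = 2.0 * thickness / v * 1000.0      # two way time in milliseconds
    print(f"{name}: {twt_ms:.0f} ms")          # sand: 400 ms, carbonate: 222 ms
```

Roughly half the time thickness for the same depth thickness, which is why the deeper part of the section looks squeezed.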

Example 2: normal fault with offset horizons

The second model reuses the same stratigraphy but introduces a normal fault and a gently curved horizon beneath it.

In the synthetic section:

  • horizons that were continuous in example 1 now show clear breaks at the fault
  • the curved horizon produces a gentle rollover in time
  • the fault itself is not a strong vertical reflector because its properties are similar to the surrounding rock

What we primarily see is the geometric effect of offset layers. This version still has some edge artefacts and does not attempt to model a detailed damage zone, but the basic kinematics of a normal fault are already captured.

Example 3: salt dome with velocity pull up

The third scenario introduces a salt body that rises from depth and deforms the overburden and underlying units.

The synthetic shows a few classic salt related effects:

  • a strong top salt reflector from the high impedance contrast
  • events beneath the dome pulled up in time relative to the flanks because the salt is fast
  • warped base salt and deeper horizons that resemble real subsalt imaging problems

Even with a very simple one way acoustic model, a smooth depth structure turns into a complex time pattern. That is exactly the kind of intuition this sandbox is meant to support.
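The pull up itself is easy to quantify with the same delta t = 2 * delta z / v relation. The depths and flank velocity below are assumptions chosen for illustration, not values taken from the model:

```python
# Hypothetical pull up estimate: flat reflector at 3 km depth, imaged through
# 1 km of salt (4500 m/s) versus an all-sand flank column (2500 m/s)
flank_twt = 2.0 * 3000.0 / 2500.0                       # 2.400 s on the flank
salt_twt = 2.0 * (2000.0 / 2500.0 + 1000.0 / 4500.0)    # 2.044 s through the salt
pullup_ms = (flank_twt - salt_twt) * 1000.0
print(f"pull up: {pullup_ms:.0f} ms")                   # about 356 ms earlier under the salt
```

A reflector that is perfectly flat in depth arrives about a third of a second early beneath the dome, which is the kind of pull up visible in the synthetic.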

Strengths and limitations of V1.0

These first experiments suggest that the sandbox is already a useful bridge between depth and time.

What works well

  • It quickly translates depth sketches into time sections.
  • It makes the impact of velocity variations very visible.
  • It captures the basic kinematics of structures like faults and salt domes.

Current limitations

  • The physics is purely acoustic and zero offset.
  • There are no diffractions, scattering or multiples yet.
  • Some pixel artefacts are inherited directly from the original PNG drawings.

The goal is not full waveform modelling. The aim is fast first order insight for interpreters who think in depth but read the world in time.

Next steps

As a side project the Seismic Interpretation Sandbox will keep evolving. Planned improvements include:

  • smoother rasterisation to reduce pixel artefacts
  • simple multiples and alternative wavelets
  • more complex structures such as trishear faults or layered evaporites
  • a web based version where you can upload your own depth sketch and generate a synthetic section in the browser

For now the tool runs locally on my machine. If you have ever sketched a subsurface model on paper, this sandbox is a small step toward seeing that sketch come alive as a seismic line.

If you have comments, ideas or examples you would like to test I would be very happy to hear from you.

If you enjoy this kind of work and want to support future iterations of the sandbox, you can do so at ko-fi.com/maxsegali.
