Gravity Is Not a Pretty Picture

So why do we keep treating it like one?

Gravity data does not show the subsurface. It does not outline bodies, trace contacts, or confirm shapes. It measures a potential field. Everything else is inference.

And yet, gravity results are often presented as if a curve or map were already a structure. This sandbox was built to slow that leap down and make the tradeoffs visible instead of hidden.

The scale is intentionally near surface and civil engineering oriented. Small bodies. Shallow depths. Short wavelengths. Strong ambiguity. These are not limitations of the method here. They are the conditions under which gravity becomes most uncomfortable and most honest.

What are you actually computing when you model gravity?

The sandbox represents a two dimensional geological cross section. You draw a subsurface body as a polygon and the model assumes that body extends infinitely perpendicular to the screen. This is the classic two dimensional infinite strike approximation.

The tool computes the vertical component of gravitational acceleration along an observation line and displays it in mGal.

Forward modelling only. Nothing is inverted. Nothing is solved. Every curve you see is the direct consequence of assumptions you can inspect, change, or deliberately break.

The curve is a synthetic observation. It is not a Bouguer anomaly and not field processed gravity. It is simply what the physics predicts given the model you chose.

The physics behind the curve

For a two dimensional body of infinite strike, the vertical gravity component at an observation point is:

gz(x) = 2G ∫body [ Δρ (y − yobs) / r² ] dA

where Δρ is the density contrast, dA is an area element of the cross section, and r is the distance from the observation point to that element.

Gravity depends on density contrast and geometry. Not on rock names. Not on interpretation confidence. Just contrast and distance.

In the sandbox, the polygon is discretized into small cells, so the integral becomes a numerical sum:

gz(x) ≈ 2G Σi [ Δρ Ai (yi − yobs) / ri² ]

The factor 2 and the r squared denominator are not cosmetic details. They are what make two dimensional gravity so sensitive to shallow mass and so forgiving of deep ambiguity. In three dimensions the denominator becomes r cubed, and the balance changes.
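That discretized sum fits in a few lines of NumPy. The sketch below is illustrative, not the sandbox's own code; the body geometry, cell count, and density contrast are made-up example values, and depth is taken positive downward with the observation line at the surface.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
MGAL = 1e5      # 1 m/s^2 = 1e5 mGal

def gz_2d(x_obs, z_obs, cx, cz, areas, drho):
    """Vertical gravity (mGal) along a profile for a 2D infinite-strike
    body discretized into cells with centres (cx, cz), areas (m^2),
    and uniform density contrast drho (kg/m^3). Depth is positive down."""
    gz = np.empty(len(x_obs))
    for k, x in enumerate(x_obs):
        r2 = (cx - x) ** 2 + (cz - z_obs) ** 2
        gz[k] = 2.0 * G * drho * np.sum(areas * (cz - z_obs) / r2)
    return gz * MGAL

# Example: a 10 m x 10 m low-density body (cavity-like) centred at
# x = 0, spanning 5-15 m depth, with drho = -500 kg/m^3
nx = nz = 20
CX, CZ = np.meshgrid(np.linspace(-5, 5, nx), np.linspace(5, 15, nz))
cell_area = (10.0 / nx) * (10.0 / nz)
x_line = np.linspace(-50.0, 50.0, 101)
g = gz_2d(x_line, 0.0, CX.ravel(), CZ.ravel(),
          np.full(CX.size, cell_area), -500.0)
```

Setting z_obs to a negative value (raising the sensor above the surface in this depth-positive convention) widens and flattens the curve, which is exactly the observation height effect described above.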

This is where many intuitive interpretations quietly break.

The uncomfortable part: controls and assumptions

You can change geometry, density, topography, and observation height. Each control feels innocent. None of them is.

Change the polygon shape and the curve changes. Change density contrast and the curve changes. Raise the observation height and the curve smooths. Remove mass above a reference depth and the anomaly reorganizes itself.

Which of these changes is geological and which is interpretive?

The reference depth level makes this explicit. It removes mass above a chosen depth. This is not a physical correction. It is a deliberate interpretive operator. It shows how easily we separate deep and shallow contributions and how subjective that separation really is.
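As an operator, the reference depth is nothing more than a mask applied before forward modelling. The function below is a hypothetical sketch of that idea, with made-up cell depths and contrasts:

```python
import numpy as np

def apply_reference_depth(cell_depths, cell_drho, z_ref):
    """Interpretive operator, not a physical correction: zero the
    density contrast of every cell shallower than z_ref, so only
    mass below the reference depth contributes to the anomaly."""
    drho = np.array(cell_drho, dtype=float)
    drho[np.asarray(cell_depths) < z_ref] = 0.0
    return drho

depths = np.array([2.0, 8.0, 20.0])    # example cell depths, m
drho = np.array([-300.0, -300.0, -300.0])
masked = apply_reference_depth(depths, drho, z_ref=5.0)
# the shallowest cell no longer contributes: [0., -300., -300.]
```

Moving z_ref redraws the boundary between "shallow" and "deep" mass, and the anomaly follows; the choice of that boundary is the interpretation.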

Filters that feel like insight

The sandbox lets you view the same anomaly through different lenses.

| View | What it does | What it makes louder |
| --- | --- | --- |
| Demeaned | Subtracts the mean value | Shape rather than offset |
| First derivative | dgz/dx | Lateral change and contacts |
| Second derivative | d²gz/dx² | Curvature and edges, plus noise |

None of these views adds information. They decide what you pay attention to.

Derivatives are powerful and dangerous. They reward sharpness and punish noise. They can feel like clarity while quietly amplifying assumptions.
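The amplification is easy to see numerically. The sketch below (illustrative names, made-up values) applies two passes of central differences to a smooth anomaly-like curve with 1% added noise; in the second derivative, the noise outgrows the signal's own curvature:

```python
import numpy as np

def demean(gz):
    """Subtract the mean: emphasizes shape over absolute offset."""
    return gz - np.mean(gz)

def derivative(gz, dx, order=1):
    """Repeated central differences via np.gradient."""
    out = np.asarray(gz, dtype=float)
    for _ in range(order):
        out = np.gradient(out, dx)
    return out

x = np.linspace(-50.0, 50.0, 201)
dx = x[1] - x[0]
smooth = np.exp(-(x / 15.0) ** 2)          # a smooth, anomaly-like bump
noisy = smooth + np.random.default_rng(0).normal(0.0, 0.01, x.size)

d2_clean = derivative(smooth, dx, order=2)
d2_noisy = derivative(noisy, dx, order=2)
# the 1% noise is amplified far more than the underlying curvature
```

None of these transforms invents data; they only redistribute emphasis, and the second derivative redistributes most of it toward whatever is sharpest, including noise.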

Matching a curve is easy. Explaining it is not.

In target matching mode, the sandbox generates a gravity curve from a hidden body. Your task is to reproduce it by changing geometry and density.

The misfit is measured as the root mean square (RMS) difference between your curve and the target. When the number is small, the match looks good.

But here is the question that matters: How many different bodies can produce the same curve?

The answer is not one. Multiple, radically different subsurface structures can achieve the same low misfit. A good fit is not a unique explanation. Gravity never promised that it would be.
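For a compact body the trade-off is exact: the anomaly depends on the product Δρ · A, not on either factor alone. The sketch below (a single-cell body, illustrative values) shows two different "geologies" producing literally the same curve:

```python
import numpy as np

G, MGAL = 6.674e-11, 1e5

def gz_cell(x, depth, area, drho):
    """Vertical gravity (mGal) of a single 2D cell at (0, depth)."""
    return 2.0 * G * drho * area * depth / (x**2 + depth**2) * MGAL

def rms_misfit(g_model, g_target):
    """Root mean square difference between two curves."""
    return float(np.sqrt(np.mean((g_model - g_target) ** 2)))

x = np.linspace(-50.0, 50.0, 101)
g_a = gz_cell(x, 10.0, 100.0, -500.0)   # 100 m^2 body, drho = -500
g_b = gz_cell(x, 10.0, 50.0, -1000.0)   # half the area, twice the contrast
# rms_misfit(g_a, g_b) is exactly zero: different bodies, identical curve
```

Depth versus size trades off the same way, only less cleanly: a deeper, larger body produces a broader version of the curve a shallow, smaller one produces, and noise blurs the distinction further.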

What this sandbox ultimately points to

Gravity methods are often associated with large scale exploration. Basins, regional trends, deep structures. At that scale, ambiguity is expected and tolerated.

At near surface and civil engineering scale, the opposite assumption is common: that gravity is either too crude, too ambiguous, or simply unnecessary. Other methods feel more direct, more visual, more reassuring.

But this is precisely where gravity becomes interesting.

Shallow problems are not simple problems. Cavities, weak zones, buried infrastructure, karst, backfill, voids, and heterogeneous ground conditions rarely present themselves as clean targets. They live in uncertainty. They interact with access constraints, safety, cost, and incomplete information.

In that context, gravity is not a stand-alone solution. It is a decision tool. It helps constrain what is plausible, where uncertainty concentrates, and where further investigation is worth the cost.

This sandbox is a small demonstration of that mindset. Not a finished workflow, not a turnkey product, but a way of thinking about subsurface problems where physics, assumptions, and interpretation remain visible.

The same approach can scale from a didactic model to real projects: integrating gravity with other data, designing surveys around specific engineering questions, and using forward modelling to test scenarios before committing to expensive interventions.

When gravity is treated not as an image, but as a constraint, it stops competing with other methods and starts guiding them.

Sometimes the most valuable contribution is not a definitive answer, but a clearer map of uncertainty before decisions are made.
