
Earth's Gravitational Field

Only a few physical systems are actually linear with respect to the model parameters. One such system from geophysics is that of the Earth's gravitational field. The Earth's gravitational field is determined by the density distribution of the Earth in the subsurface. Because the lithology of the Earth changes quite significantly, we are able to observe minute differences in the Earth's gravitational field on the surface of the Earth. From our understanding of gravity (Newton's Law of Gravitation), we know that the mathematical expression for gravity is:

d = \frac{K M}{r^2}

where d is a measure of the local gravitational acceleration, K is the universal gravitational constant, M is the local mass (density) of the rock in the subsurface and r is the distance from the mass to the observation point.

By discretizing the above expression, we are able to relate the discrete data observations on the surface of the Earth to the discrete model parameters (density) in the subsurface that we wish to know more about. For example, consider the case where we have 5 measurements on the surface of the Earth. In this case, our data vector d is a column vector of dimension (5×1). We also know that we only have five unknown masses M_j in the subsurface (unrealistic but used to demonstrate the concept), so that each observation is a sum of contributions from all five masses:

d_i = \sum_{j=1}^{5} \frac{K M_j}{r_{ij}^2}

where r_{ij} is the distance between the i-th observation point and the j-th mass. Thus, we can construct the linear system d = Gm relating the five unknown masses to the five data points as follows:

 d =
\begin{bmatrix}
d_1 \\
d_2 \\
d_3 \\
d_4 \\
d_5 \end{bmatrix},
 m =
\begin{bmatrix}
M_1 \\
M_2 \\
M_3 \\
M_4 \\
M_5
\end{bmatrix},
 G =
\begin{bmatrix}
\frac{K}{r_{11}^2} & \frac{K}{r_{12}^2} & \frac{K}{r_{13}^2} & \frac{K}{r_{14}^2} & \frac{K}{r_{15}^2} \\
\frac{K}{r_{21}^2} & \frac{K}{r_{22}^2} & \frac{K}{r_{23}^2} & \frac{K}{r_{24}^2} & \frac{K}{r_{25}^2} \\
\frac{K}{r_{31}^2} & \frac{K}{r_{32}^2} & \frac{K}{r_{33}^2} & \frac{K}{r_{34}^2} & \frac{K}{r_{35}^2} \\
\frac{K}{r_{41}^2} & \frac{K}{r_{42}^2} & \frac{K}{r_{43}^2} & \frac{K}{r_{44}^2} & \frac{K}{r_{45}^2} \\
\frac{K}{r_{51}^2} & \frac{K}{r_{52}^2} & \frac{K}{r_{53}^2} & \frac{K}{r_{54}^2} & \frac{K}{r_{55}^2}
\end{bmatrix}
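
To make the discretization concrete, here is a minimal numerical sketch in Python using NumPy. The observation locations, mass positions, and mass values are hypothetical, chosen only for illustration, and, following the scalar expression above, the sketch ignores the direction of the attraction:

```python
import numpy as np

# Hypothetical geometry: five observation points on the surface (z = 0)
# and five buried point masses. Coordinates are in metres and are
# purely illustrative.
obs = np.array([[0.0, 0.0], [100.0, 0.0], [200.0, 0.0],
                [300.0, 0.0], [400.0, 0.0]])
src = np.array([[50.0, -200.0], [150.0, -250.0], [250.0, -150.0],
                [350.0, -300.0], [450.0, -220.0]])

K = 6.674e-11  # universal gravitational constant (m^3 kg^-1 s^-2)

# r[i, j] is the distance from mass j to observation point i,
# so G[i, j] = K / r[i, j]^2, matching the matrix above.
r = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)
G = K / r**2

# Forward problem: predict the data d for a known model m (masses in kg).
m_true = np.array([1e9, 5e8, 2e9, 8e8, 1.5e9])
d = G @ m_true
print(d)
```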

Now, we can see that the system has five equations and five unknowns, the masses M_1 through M_5. To solve for the model parameters that fit our data, we might be able to invert the matrix G to directly convert the measurements into our model parameters. For example:

m = G^{-1} d

However, not all square matrices are invertible (G is almost never invertible). This is because we are not guaranteed to have enough information to uniquely determine the solution to the given equations unless we have independent measurements (i.e. each measurement adds unique information to the system). In most physical systems, we never have enough information to uniquely constrain our solutions, because the observation matrix does not contain unique equations. From a linear algebra perspective, the matrix G is rank deficient (i.e. it has zero eigenvalues), which means that G is not invertible. Further, if we add additional observations to our matrix (i.e. more equations), then the matrix G is no longer square, and even then we are not guaranteed to have full rank in the observation matrix. Therefore, most inverse problems are considered underdetermined, meaning that we do not have a unique solution to the inverse problem. If we have a full-rank system, then our solution may be unique. Overdetermined systems (more equations than unknowns) have other issues: typically no model fits all of the observations exactly.
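
Continuing the sketch above, this can be checked numerically: NumPy reports the rank and condition number of G, and a very large condition number means that G, even if formally invertible, cannot be inverted reliably in the presence of noise:

```python
# Diagnose whether G can be inverted reliably. A rank below 5 means G is
# singular; a huge condition number means a direct inverse would greatly
# amplify any errors in the data.
print("rank(G):", np.linalg.matrix_rank(G))
print("cond(G): %.3e" % np.linalg.cond(G))
```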

Because we cannot directly invert the observation matrix, we use methods from optimization to solve the inverse problem. To do so, we define a goal, also known as an objective function, for the inverse problem. The goal is a functional that measures how closely the predicted data from the recovered model fit the observed data. In the case where we have perfect data (i.e. no noise) and perfect physical understanding (i.e. we know the physics), the recovered model should fit the observed data perfectly. The standard objective function, \phi, is usually of the form:

\phi = \| d - Gm \|_2^2
which represents the L-2 norm of the misfit between the observed data and the predicted data from the model. We use the L-2 norm here as a generic measure of the distance between the predicted and observed data, but other norms may be used. The goal of the objective function is to minimize the difference between the predicted and observed data.
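
As a sketch, the objective function can be written as a short Python function (reusing G, d, and m_true from the example above; the trial model is hypothetical):

```python
def objective(m, G, d_obs):
    # phi(m) = || d_obs - G m ||_2^2, the squared L-2 norm of the misfit.
    residual = d_obs - G @ m
    return float(residual @ residual)

# A wrong trial model gives a positive misfit, while the true model
# fits perfectly here because the data are noise-free.
print(objective(np.ones(5) * 1e9, G, d))  # > 0
print(objective(m_true, G, d))            # ~ 0
```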

To minimize the objective function (i.e. solve the inverse problem), we compute the gradient of the objective function and set it to zero, using the same rationale as we would to minimize a function of only one variable. The gradient of the objective function is:

\nabla_{m} \phi = 2 G^T G m - 2 G^T d = 0

where G^T denotes the matrix transpose of G. This equation simplifies to:

G^T G m = G^T d

After rearrangement, this becomes:

m = (G^T G)^{-1} G^T d

This expression is known as the Normal Equation and gives us a possible solution to the inverse problem. It is equivalent to Ordinary Least Squares.
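
Continuing the sketch, the normal equation can be solved directly, although in practice one would usually prefer numpy.linalg.lstsq, which solves the same least-squares problem but also copes with rank-deficient G:

```python
# Solve G^T G m = G^T d directly (only safe when G^T G is well
# conditioned)...
m_est = np.linalg.solve(G.T @ G, G.T @ d)

# ...or solve the least-squares problem with an SVD-based routine,
# which also handles rank-deficient G.
m_lstsq, residuals, rank, svals = np.linalg.lstsq(G, d, rcond=None)
```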

Additionally, we usually know that our data contain random variations caused by random noise, or worse yet, coherent noise. In any case, errors in the observed data introduce errors in the recovered model parameters that we obtain by solving the inverse problem. To avoid these errors, we may want to constrain possible solutions to emphasize certain features in our models. This type of constraint is known as regularization.
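
One common form of regularization is Tikhonov (ridge) regularization, which penalizes the size of the model as well as the data misfit. The sketch below continues the example; the weight alpha is an arbitrary placeholder that would have to be tuned to the noise level and to the scale of G in a real problem:

```python
# Tikhonov regularization: minimize ||d - G m||^2 + alpha * ||m||^2.
# The added alpha * I term makes the system invertible even when
# G^T G is singular, at the cost of biasing the model toward zero.
alpha = 1e-30  # placeholder weight; must be tuned for a real problem
m_reg = np.linalg.solve(G.T @ G + alpha * np.eye(G.shape[1]), G.T @ d)
```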
