A framework for petrophysically and geologically guided geophysical inversion using a dynamic Gaussian mixture model prior

1 From a Posterior with Gaussian Distributions to a Least-Squares Objective Function

In this appendix, we develop the operations necessary to go from a posterior distribution formulation to an objective function in the case of the Tikhonov inverse problem.

We stated in section 2 that a least-squares term corresponds to a Gaussian prior. For a vector of parameters, the multivariate Gaussian distribution, denoted by $\mathcal{N}$, with mean $\mathbf{\mu}$ and covariance $\Sigma$ (denoted by $\sigma^2$ for a unidimensional Gaussian, with $\sigma$ being the standard deviation) is:

\begin{align} &\mathcal{N}(\mathbf{m}|\mathbf{\mu}, \Sigma) = \frac{\exp(-\frac{1}{2}||\Sigma^{-\frac{1}{2}}(\mathbf{m}-\mathbf{\mu})||_2^2)}{\sqrt{(2\pi)^n\det(\Sigma)}} \end{align}

Going from a posterior distribution to an objective function is done by taking the negative natural logarithm of the posterior. Taking the negative logarithm of a single Gaussian term gives:

\begin{align} &-\log(\mathcal{N}(\mathbf{m}|\mathbf{\mu}, \Sigma)) = \frac{1}{2}||\Sigma^{-\frac{1}{2}}(\mathbf{m}-\mathbf{\mu})||_2^2 + \log\left(\sqrt{(2\pi)^n\det(\Sigma)}\right) \end{align}

The first term obtained in equation 2 is recognizable as a least-squares misfit. The second term does not depend on $\mathbf{m}$ and is thus a constant. As constants play no role in the optimization, it is discarded from the objective function.
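As a quick numerical sanity check of equation 2 (ours, not part of the paper), the short Python snippet below, using only NumPy and SciPy, verifies that the negative log-density of a multivariate Gaussian equals the least-squares term plus an $\mathbf{m}$-independent constant.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Verify numerically that -log N(m | mu, Sigma) splits into a least-squares
# term plus a constant that does not depend on m (equation 2).
rng = np.random.default_rng(0)
n = 3
mu = rng.normal(size=n)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)          # a valid (symmetric positive-definite) covariance
m = rng.normal(size=n)

# Left-hand side: -log N(m | mu, Sigma) evaluated with SciPy
neg_log_pdf = -multivariate_normal(mean=mu, cov=Sigma).logpdf(m)

# Right-hand side: 1/2 ||Sigma^{-1/2}(m - mu)||^2 + log(sqrt((2*pi)^n det(Sigma)))
least_squares = 0.5 * (m - mu) @ np.linalg.solve(Sigma, m - mu)
constant = 0.5 * np.log((2 * np.pi) ** n * np.linalg.det(Sigma))

assert np.isclose(neg_log_pdf, least_squares + constant)
```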

To obtain the full Tikhonov geophysical objective function, the posterior distribution $\mathcal{P}(\mathbf{m}|\mathbf{d}_{\text{obs}})$ can be written, using Bayes' rule, as:

\begin{align} &\mathcal{P}(\mathbf{m}|\mathbf{d}_{\text{obs}}) = \frac{\mathcal{P}(\mathbf{d}_{\text{obs}}|\mathbf{m})\mathcal{P}(\mathbf{m})}{\mathcal{P}(\mathbf{d}_{\text{obs}})} \end{align}

with:

\begin{align} &\mathcal{P}(\mathbf{d}_{\text{obs}}|\mathbf{m}) = \mathcal{N}(\mathcal{F}\lbrack\mathbf{m}\rbrack|\mathbf{d}_{\text{obs}}, (W_d^TW_d)^{-1}) \end{align}
\begin{align} &\mathcal{P}(\mathbf{m}) = \mathcal{N}(\mathbf{m}|\mathbf{m}_{\text{ref}}, (\beta\alpha_sW_s^TW_s)^{-1})\,\mathcal{N}(\mathbf{m}|\mathbf{m}_{\text{ref}},(\beta L^TL)^{-1}) \end{align}

where the finite-difference operator $L$ summarizes the first or second derivatives in all directions with their respective $\{\alpha\}$ scales. We express the data misfit, smallness, and smoothness probability functions, respectively, as Gaussian distributions:

\begin{align} &\mathcal{N}(\mathbf{d}_{\text{obs}}|\mathbf{m}, (W_d^TW_d)^{-1}) = \frac{\exp(-\frac{1}{2}||W_d(\mathcal{F}\lbrack\mathbf{m}\rbrack-\mathbf{d}_{\text{obs}})||_2^2)}{\sqrt{(2\pi)^n\det(\lbrack W_d^TW_d\rbrack^{-1})}} \end{align}
\begin{align} &\mathcal{N}(\mathbf{m}|\mathbf{m}_{\text{ref}}, (\beta\alpha_sW_s^TW_s)^{-1}) = \frac{\exp(-\frac{1}{2}||W_s(\mathbf{m}-\mathbf{m}_{\text{ref}})||_2^2)^{\beta\alpha_s}}{\sqrt{(2\pi)^n\det(\lbrack \beta\alpha_sW_s^TW_s\rbrack^{-1})}} \end{align}
\begin{align} &\mathcal{N}(\mathbf{m}|\mathbf{m}_{\text{ref}}, (\beta L^TL)^{-1}) = \frac{\exp(-\frac{1}{2}||L(\mathbf{m}-\mathbf{m}_{\text{ref}})||_2^2)^{\beta}}{\sqrt{(2\pi)^n\det(\lbrack \beta L^TL\rbrack^{-1})}} \end{align}

The objective function is obtained by applying the negative natural logarithm to the posterior distribution described in equation 3. The summation form is simply a consequence of the fundamental property of the logarithm: multiplication becomes addition in this new space. It follows:

\begin{align} \Phi(\mathbf{m}) &= \frac{1}{2}||W_d(\mathcal{F}\lbrack\mathbf{m}\rbrack-\mathbf{d}_{\text{obs}})||_2^2 + \frac{\beta\alpha_s}{2}||W_s(\mathbf{m}-\mathbf{m}_{\text{ref}})||_2^2 \nonumber\\ &\quad + \frac{\beta}{2}||L(\mathbf{m}-\mathbf{m}_{\text{ref}})||_2^2 + \text{Constant} \end{align}

The constant term in equation 9 collects the constant terms of each Gaussian distribution and the term $\log(\mathcal{P}(\mathbf{d}_{\text{obs}}))$.
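To make the assembled objective function concrete, here is a minimal, self-contained sketch (our illustration, not the paper's implementation) that evaluates equation 9, dropping the constant term, for a toy linear forward operator on a 1D mesh; the helper `first_difference` and all parameter values are hypothetical choices made only for this example.

```python
import numpy as np

def first_difference(n):
    """Hypothetical helper: (n-1) x n first-order finite-difference matrix."""
    return np.diff(np.eye(n), axis=0)

def tikhonov_objective(m, m_ref, d_obs, F, W_d, beta, alpha_s, W_s, L):
    """Phi(m) of equation 9, without the constant term."""
    phi_d = 0.5 * np.sum((W_d @ (F(m) - d_obs)) ** 2)          # data misfit
    phi_s = 0.5 * alpha_s * np.sum((W_s @ (m - m_ref)) ** 2)   # smallness
    phi_x = 0.5 * np.sum((L @ (m - m_ref)) ** 2)               # smoothness
    return phi_d + beta * (phi_s + phi_x)

# Toy usage with a linear forward operator G, so F[m] = G @ m
n_cells, n_data = 20, 5
rng = np.random.default_rng(1)
G = rng.normal(size=(n_data, n_cells))
m, m_ref = rng.normal(size=n_cells), np.zeros(n_cells)
d_obs = G @ m + 0.01 * rng.normal(size=n_data)
W_d = np.eye(n_data) / 0.01            # data weighting: reciprocal of the uncertainties
W_s = np.eye(n_cells)
alpha_x = 1.0                          # directional scale folded into L, as in the text
L = np.sqrt(alpha_x) * first_difference(n_cells)
print(tikhonov_objective(m, m_ref, d_obs, lambda x: G @ x, W_d,
                         beta=1e-2, alpha_s=1e-4, W_s=W_s, L=L))
```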

This completes the detailed operation to go from a posterior distribution to an objective function formulation of the Tikhonov inverse problem.

2 Conjugate versus semi-conjugate prior for the means and variances of a single univariate Gaussian distribution

Figure 1: A comparison of the MLE estimates of the mean and variance for a single one-dimensional Gaussian distribution (no prior) with the semi-conjugate and full conjugate prior MAP estimates. The observed samples are shown in blue, along with their estimated MLE Gaussian distribution, and a prior distribution in gray. Using confidence values of unity in the prior is similar to having an equal number of samples from the prior and from the observed data sets. A synthetic prior sample set is represented as the gray histogram. The posterior distribution with a semi-conjugate prior is in black and can be seen as an average of the parameters of the observed and prior distributions. The posterior distribution with a full conjugate prior is in red. The red histogram is obtained by merging the observed samples with the synthetic prior samples; this histogram corresponds to the posterior distribution with a full conjugate prior. Note that its variance is larger than for either of the two original distributions; this results from the difference in the means. Legend: Hist. stands for histogram; Dist. stands for distribution; Obs. stands for observed samples.

In this section, we elaborate on the difference between a full and a semi-conjugate prior approach for estimating the parameters of a single, one-dimensional Gaussian distribution. After giving the necessary definitions for the full conjugate prior and describing how the MAP-EM updates are affected, we illustrate the difference between the semi- and full conjugate priors through the example displayed in Fig. 1.

We first need some definitions. The full conjugate prior for the mean and variance follows a Normal-Inverse-Gamma distribution (Murphy, 2012). As in section 3.2, we use the Normal-Inverse Chi-Squared re-parameterization:

\begin{align} &\mathcal{P}({\mu},\sigma^2) = \mathcal{N}\mathcal{X}^{-2}({\mu}, \sigma^2|{\mu}_{prior}, K, N,\sigma_{prior}^2) \end{align}

with the confidences $K$ in the prior mean and $N$ in the prior variance:

\begin{align} &K=({\kappa}{\pi}_{prior}V)^{-1}\\ &N={\nu}{\pi}_{prior}V \end{align}
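For readers who want to evaluate this prior numerically, the following is a minimal sketch (ours) of the Normal-Inverse Chi-Squared log-density, written with the generic parameters `mu0`, `kappa0`, `nu0`, and `sigma0sq` of the standard parameterization in Murphy (2012) rather than the specific $K$ and $N$ defined above; it relies on the standard equivalence between the scaled inverse chi-squared and inverse-gamma distributions.

```python
import numpy as np
from scipy.stats import norm, invgamma

def normal_inv_chi2_logpdf(mu, sigma2, mu0, kappa0, nu0, sigma0sq):
    """log N-X^-2(mu, sigma2 | mu0, kappa0, nu0, sigma0sq): a Normal on mu
    (with variance sigma2/kappa0) times a scaled inverse chi-squared on sigma2,
    written here as an inverse-gamma distribution."""
    log_p_mu = norm.logpdf(mu, loc=mu0, scale=np.sqrt(sigma2 / kappa0))
    # scaled-inv-chi2(nu0, sigma0sq) == inverse-gamma(a=nu0/2, scale=nu0*sigma0sq/2)
    log_p_sigma2 = invgamma.logpdf(sigma2, a=nu0 / 2.0, scale=nu0 * sigma0sq / 2.0)
    return log_p_mu + log_p_sigma2

# Example: prior centered on mu0 = 0 with unit prior variance and unit confidences
print(normal_inv_chi2_logpdf(mu=0.5, sigma2=1.2, mu0=0.0, kappa0=1.0, nu0=1.0, sigma0sq=1.0))
```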

The update of the means in the MAP-EM algorithm stays the same as for the semi-conjugate prior approach, as detailed in equation 32. On the contrary, the update of the variances now requires an additional term ${s_{\bar{\mu}}}_j^{(k)}$ to account for the difference between the observed and prior means:

\begin{align} &{\sigma_j^2}^{(k)} = \frac{V_{j}^{(k)}\,{{\sigma^2_{\bar{\mathbf{m}}}}_j}^{(k)} + \nu_j {\pi_j}_{prior} V\,{\sigma_j^2}_{prior} + {s_{\bar{\mu}}}_j^{(k)}}{V_{j}^{(k)} + \nu_j {\pi_j}_{prior} V} \end{align}

with:

\begin{align} &{\sigma^2_{\bar{\mathbf{m}}}}_j^{(k)} = \frac{1}{V_{j}^{(k)}} \sum_{i=1}^{n} v_i n_{ij}^{(k)}({m}_i-\bar{m}_j^{(k)})^2 \end{align}
\begin{align} &{s_{\bar{\mu}}}_j^{(k)} = \frac{\kappa_j V_{j}^{(k)}}{\kappa_j + V_{j}^{(k)}}(\bar{m}_j^{(k)}-{{\mu}_{j}}_{prior})^2 \end{align}
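The following minimal sketch (ours, not the paper's code) implements the variance update of equations 12-14 for a single Gaussian, with simplifications noted in the comments (unit cell volumes, unit responsibilities, and a prior reference volume $V$ taken equal to $V_j$); the confidence values in the example call are purely illustrative.

```python
import numpy as np

def full_conjugate_variance_update(m, mu_prior, sigma2_prior, kappa, nu, pi_prior=1.0):
    """Variance update of equations 12-14 for a single Gaussian: cluster index j is
    dropped, all responsibilities n_ij = 1, cell volumes v_i = 1, and the prior
    reference volume V is taken equal to V_j (simplifications for this demo only)."""
    v = np.ones_like(m)                                   # unit cell volumes v_i
    V_j = v.sum()                                         # V_j^(k) = sum_i v_i n_ij
    V = V_j                                               # demo simplification
    m_bar = np.sum(v * m) / V_j                           # weighted mean of the model values
    sigma2_m_bar = np.sum(v * (m - m_bar) ** 2) / V_j     # equation 13
    s_mu_bar = kappa * V_j / (kappa + V_j) * (m_bar - mu_prior) ** 2    # equation 14
    # equation 12: weighted average of the model and prior variances plus the mean-shift term
    return (V_j * sigma2_m_bar + nu * pi_prior * V * sigma2_prior + s_mu_bar) / (V_j + nu * pi_prior * V)

rng = np.random.default_rng(2)
model_values = rng.normal(loc=1.0, scale=0.5, size=200)
# Purely illustrative confidence values; see the discussion around Fig. 1 for their meaning.
print(full_conjugate_variance_update(model_values, mu_prior=0.0, sigma2_prior=0.25, kappa=1.0, nu=1.0))
```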

We illustrate those concepts with Fig. 1. Let us consider the following setup. We have a set of observations (in our framework, this was our geophysical model), represented through their histogram in blue. The Gaussian distribution in blue represents the MLE parameters (estimated without any prior information). We now add prior information in the form of a Gaussian distribution, shown in gray in Fig. 1. We set the confidence parameters κ and ν to unity, meaning that we have equal confidence in the observations and the prior (ζ is irrelevant here as we have only one Gaussian distribution). This equal confidence can be represented by having an equal number of samples from the prior as in the observed set. The histogram of this synthetic prior sample set is shown in gray.

The full conjugate prior approach can be understood as fitting a Gaussian distribution to the dataset formed by merging the observed and synthetic prior samples; this is represented in Fig. 1 in red. The full conjugate MAP distribution, also in red, is well centered between the observed and prior distributions, as expected, and so is the semi-conjugate distribution (in black). However, the variance of the red histogram, and thus of the posterior distribution with the full conjugate prior, is higher than the variance of either the observed or the prior distribution. This is due to the difference in the means of the two distributions, which the full conjugate prior approach accounts for (see equations 12 and 14). The semi-conjugate prior approach considers the means and variances independently (see equations 30 and 31). The MAP mean estimates are the same for both priors.
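A small numerical illustration (ours) of this variance inflation, assuming two synthetic Gaussian sample sets of equal size with different means: fitting a single Gaussian to the merged samples mimics the full conjugate behaviour, while averaging the two variances mimics the semi-conjugate behaviour with unit confidences.

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(loc=0.0, scale=1.0, size=5000)        # observed samples (blue in Fig. 1)
prior = rng.normal(loc=3.0, scale=1.0, size=5000)      # synthetic prior samples (gray)

merged = np.concatenate([obs, prior])
var_full_conjugate = merged.var()                      # fit on the merged samples (red)
var_semi_conjugate = 0.5 * (obs.var() + prior.var())   # average of the two variances (black)

print(f"observed variance:       {obs.var():.2f}")
print(f"prior variance:          {prior.var():.2f}")
print(f"semi-conjugate variance: {var_semi_conjugate:.2f}")   # stays close to 1
print(f"full conjugate variance: {var_full_conjugate:.2f}")   # ~1 + (mean difference / 2)^2, here ~3.25
```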

The semi-conjugate prior approach seems to be the better choice in the context of geophysical inversion. As our goal with this framework is to differentiate various geological units by guiding the geophysical model to reproduce certain features, the full conjugate prior approach is sometimes detrimental, as it can drive the variance further from the prior than what is currently seen in the model at each iteration.

3 Pseudocode algorithm

We give in algorithm 1 the details of our implementation of this framework. The optimization notions of inexact Gauss-Newton and backtracking line search are detailed in Ascher & Greif (2011), or in Haber (2014) for their geophysical applications.

Figure 2 graphically summarizes in a flowchart how our framework loops over the various datasets to produce a final geophysical model with the desired petrophysical distribution and geological features.
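Since neither algorithm 1 nor figure 2 is reproduced in this appendix, the following deliberately schematic Python skeleton (our reading of the flowchart, not algorithm 1 itself) only indicates how such a cyclic loop over the geophysical, petrophysical, and geological information might be organized; the three step functions are user-supplied placeholders, not part of the actual implementation.

```python
def cyclic_inversion_loop(m0, geophysical_step, petrophysical_step, geological_step,
                          converged, max_iterations=20):
    """Schematic outer loop: alternate between fitting the geophysical data, updating the
    petrophysical (GMM) distribution, and updating the geological (reference model and
    weights) information until a stopping criterion is met. All step functions are
    hypothetical callables supplied by the user."""
    m, aux = m0, None
    for _ in range(max_iterations):
        m = geophysical_step(m, aux)     # e.g. one inexact Gauss-Newton step with line search
        gmm = petrophysical_step(m)      # e.g. MAP-EM update of the Gaussian mixture
        aux = geological_step(m, gmm)    # e.g. new reference model, weights, trade-off parameters
        if converged(m):
            break
    return m, aux
```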

Figure 2: A visual pseudocode of our iterative cyclic framework to include geological and petrophysical information into geophysical inversion. The numbers correspond to the steps displayed in algorithm 1.

References
  1. Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. The MIT Press.
  2. Ascher, U., & Greif, C. (2011). A First Course in Numerical Methods. Society for Industrial and Applied Mathematics. doi:10.1137/9780898719987
  3. Haber, E. (2014). Computational methods in geophysical electromagnetics (Vol. 1). SIAM.