The sample command with method equal to simulate
fits the simulated random measurements in data_sim_table
and the simulated random priors in prior_sim_table.
The Lemmas on this page prove that, in the linear Gaussian case,
this gives the proper statistics for the posterior distribution
of the maximum likelihood estimate.
Note that the dismod_at case is neither linear
nor are all of its statistics Gaussian.
The maximum likelihood estimate @(@
\hat{\theta}
@)@ satisfies the
equation @(@
f^{(1)} ( \hat{\theta} ) = 0
@)@; i.e.,
@[@
\begin{array}{rcl}
0 & = & - ( y - A \hat{\theta} )^\R{T} V^{-1} A
\\
A^\R{T} V^{-1} y & = & A^\R{T} V^{-1} A \hat{\theta}
\\
\hat{\theta} & = & ( A^\R{T} V^{-1} A )^{-1} A^\R{T} V^{-1} y
\end{array}
@]@
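As a sanity check, the closed form above can be verified numerically.
The sketch below is not dismod_at code; the matrices
A, V and the data y are made-up values:
```python
# Hypothetical check that theta_hat = (A^T V^-1 A)^-1 A^T V^-1 y
# satisfies the first order condition; A, V, y are made-up values.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))        # design matrix: 5 data, 2 parameters
V = np.diag(rng.uniform(0.5, 2.0, 5))  # measurement noise covariance
y = rng.standard_normal(5)             # measurement vector

Vinv = np.linalg.inv(V)
theta_hat = np.linalg.solve(A.T @ Vinv @ A, A.T @ Vinv @ y)

# f^(1)(theta_hat) = -(y - A theta_hat)^T V^-1 A should be zero
gradient = -(y - A @ theta_hat) @ Vinv @ A
assert np.allclose(gradient, 0.0)
```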
Defining @(@
e = y - A \bar{\theta}
@)@, we have
@(@
\B{E} [ e ] = 0
@)@ and
@[@
\begin{array}{rcl}
\hat{\theta} & = &
( A^\R{T} V^{-1} A )^{-1} A^\R{T} V^{-1} ( A \bar{\theta} + e )
\\
\hat{\theta}
& = &
\bar{\theta} + ( A^\R{T} V^{-1} A )^{-1} A^\R{T} V^{-1} e
\end{array}
@]@
This expresses the estimate @(@
\hat{\theta}
@)@ as a deterministic
function of the noise @(@
e
@)@.
It follows from the last equation for @(@
\hat{\theta}
@)@ above,
and the fact that @(@
\B{E} [ e ] = 0
@)@,
that @(@
\B{E} [ \hat{\theta} ] = \bar{\theta}
@)@.
This completes the proof of the equation for the expected value
of @(@
\hat{\theta}
@)@ in the statement of the lemma.
It also follows, from the equation for @(@
\hat{\theta}
@)@ above, that
@[@
\begin{array}{rcl}
( \hat{\theta} - \bar{\theta} ) ( \hat{\theta} - \bar{\theta} )^\R{T}
& = &
( A^\R{T} V^{-1} A )^{-1} A^\R{T}
V^{-1} e e^\R{T} V^{-1}
A ( A^\R{T} V^{-1} A )^{-1}
\\
\B{E} [ ( \hat{\theta} - \bar{\theta} ) ( \hat{\theta} - \bar{\theta} )^\R{T} ]
& = &
( A^\R{T} V^{-1} A )^{-1}
\end{array}
@]@
where the second equality uses @(@
\B{E} [ e e^\R{T} ] = V
@)@.
This completes the proof of the equation for the covariance of
@(@
\hat{\theta} - \bar{\theta}
@)@ in the statement of the lemma.
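Both conclusions of the lemma can be checked by simulation.
The sketch below (again not dismod_at code; A, V and the true value @(@
\bar{\theta}
@)@ are made-up values) draws many noise vectors @(@
e
@)@ and compares the sample mean and covariance of @(@
\hat{\theta}
@)@ against the formulas above:
```python
# Hypothetical Monte Carlo check of E[theta_hat] = theta_bar and
# Cov(theta_hat) = (A^T V^-1 A)^-1; A, V, theta_bar are made up.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
V = np.diag(rng.uniform(0.5, 2.0, 5))
theta_bar = np.array([1.0, -2.0])

Vinv = np.linalg.inv(V)
H = np.linalg.inv(A.T @ Vinv @ A)      # claimed covariance of theta_hat

# each row of E is one draw e ~ N(0, V); each row of Theta is one theta_hat
E = rng.multivariate_normal(np.zeros(5), V, size=200_000)
Theta = (A @ theta_bar + E) @ (H @ A.T @ Vinv).T

assert np.allclose(Theta.mean(axis=0), theta_bar, atol=0.02)
assert np.allclose(np.cov(Theta.T), H, atol=0.02)
```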
Setting the derivative to zero, we find that the corresponding
maximum likelihood estimate @(@
\hat{\theta}
@)@ satisfies
@[@
\begin{array}{rcl}
( y - A \hat{\theta} )^\R{T} V^{-1} A
& = &
( B \hat{\theta} - z )^\R{T} P^{-1} B
\\
y^\R{T} V^{-1} A + z^\R{T} P^{-1} B
& = &
\hat{\theta}^\R{T} A^\R{T} V^{-1} A + \hat{\theta}^\R{T} B^\R{T} P^{-1} B
\\
\hat{\theta}
& = &
( A^\R{T} V^{-1} A + B^\R{T} P^{-1} B )^{-1}
( A^\R{T} V^{-1} y + B^\R{T} P^{-1} z )
\\
\hat{\theta}
& = &
( A^\R{T} V^{-1} A + B^\R{T} P^{-1} B )^{-1}
( A^\R{T} V^{-1} A \bar{\theta} + B^\R{T} P^{-1} z )
+
( A^\R{T} V^{-1} A + B^\R{T} P^{-1} B )^{-1} A^\R{T} V^{-1} e
\end{array}
@]@
The last step substitutes @(@
y = A \bar{\theta} + e
@)@; the first term is deterministic and the second term has mean zero.
It follows, again using @(@
\B{E} [ e e^\R{T} ] = V
@)@, that
@[@
\begin{array}{rcl}
\B{E} [ \hat{\theta} ]
& = &
( A^\R{T} V^{-1} A + B^\R{T} P^{-1} B )^{-1}
( A^\R{T} V^{-1} A \bar{\theta} + B^\R{T} P^{-1} z )
\\
\B{E} [
( \hat{\theta} - \B{E} [ \hat{\theta} ] )
( \hat{\theta} - \B{E} [ \hat{\theta} ] )^\R{T}
]
& = &
( A^\R{T} V^{-1} A + B^\R{T} P^{-1} B )^{-1}
A^\R{T} V^{-1} A
( A^\R{T} V^{-1} A + B^\R{T} P^{-1} B )^{-1}
\end{array}
@]@
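The same kind of simulation can check the case with a prior.
In the sketch below, B, P, z and the other values are made up,
and B is taken to be the identity:
```python
# Hypothetical Monte Carlo check of the mean and covariance formulas
# for the estimate with a prior; A, V, B, P, z, theta_bar are made up.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))
V = np.diag(rng.uniform(0.5, 2.0, 5))
B = np.eye(2)                          # prior acts directly on theta
P = np.diag([4.0, 4.0])                # prior covariance
z = np.array([0.5, -1.5])              # prior mean
theta_bar = np.array([1.0, -2.0])

Vinv, Pinv = np.linalg.inv(V), np.linalg.inv(P)
Minv = np.linalg.inv(A.T @ Vinv @ A + B.T @ Pinv @ B)

E = rng.multivariate_normal(np.zeros(5), V, size=200_000)
Theta = (A @ theta_bar + E) @ (Minv @ A.T @ Vinv).T + Minv @ B.T @ Pinv @ z

mean_formula = Minv @ (A.T @ Vinv @ A @ theta_bar + B.T @ Pinv @ z)
cov_formula = Minv @ A.T @ Vinv @ A @ Minv
assert np.allclose(Theta.mean(axis=0), mean_formula, atol=0.02)
assert np.allclose(np.cov(Theta.T), cov_formula, atol=0.02)
```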
Since the matrix @(@
B^\R{T} P^{-1} B
@)@ is positive definite, we have
@[@
A^\R{T} V^{-1} A \prec A^\R{T} V^{-1} A + B^\R{T} P^{-1} B
@]@
Replacing
@(@
A^\R{T} V^{-1} A
@)@ by
@(@
A^\R{T} V^{-1} A + B^\R{T} P^{-1} B
@)@
in the center of the previous expression
for the covariance of @(@
\hat{\theta}
@)@, we obtain
@[@
\B{E} [
( \hat{\theta} - \B{E} [ \hat{\theta} ] )
( \hat{\theta} - \B{E} [ \hat{\theta} ] )^\R{T}
]
\prec
( A^\R{T} V^{-1} A + B^\R{T} P^{-1} B )^{-1}
@]@
This completes the proof of this lemma.
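The strict ordering can also be confirmed numerically.
The sketch below (made-up matrices, same shapes as in the earlier
sketches) checks that the difference of the two sides is positive
definite:
```python
# Hypothetical check of Cov(theta_hat) < (A^T V^-1 A + B^T P^-1 B)^-1
# in the positive definite ordering; all matrices are made-up values.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))
Vinv = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, 5)))
Pinv = np.linalg.inv(np.diag([4.0, 4.0]))
B = np.eye(2)

Minv = np.linalg.inv(A.T @ Vinv @ A + B.T @ Pinv @ B)
cov = Minv @ (A.T @ Vinv @ A) @ Minv

# Minv - cov = Minv (B^T P^-1 B) Minv, so all eigenvalues are positive
assert np.all(np.linalg.eigvalsh(Minv - cov) > 0.0)
```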