Full Bayesian inference averages over the entire posterior distribution: the prediction is the expectation of the model output under the posterior, rather than the output of any single weight vector. Maximum a posteriori (MAP) estimation instead picks the single weight vector that maximises the posterior. If both the prior and the additive noise model are Gaussian, the posterior is also Gaussian. In the linear-Gaussian case the posterior mean and the MAP estimator coincide, because the mode of a Gaussian is its mean; this holds for any value of the noise standard deviation.
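The linear-Gaussian case above can be sketched numerically. The snippet below is a minimal illustration (the data, the prior precision `alpha`, and the noise level `sigma` are all invented for the example): with a Gaussian prior and Gaussian noise, the posterior over the weights is Gaussian, and its mean is exactly the MAP estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data (illustrative values).
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
sigma = 0.3          # noise standard deviation
alpha = 1.0          # Gaussian prior precision: w ~ N(0, alpha^{-1} I)
y = X @ w_true + sigma * rng.normal(size=n)

# Gaussian posterior N(m, S):
#   S^{-1} = alpha * I + X^T X / sigma^2
#   m      = S X^T y / sigma^2
S_inv = alpha * np.eye(d) + X.T @ X / sigma**2
m = np.linalg.solve(S_inv, X.T @ y / sigma**2)

# The MAP estimate maximises the log posterior; for a Gaussian posterior
# the maximiser is the mean m, so the gradient of the negative log
# posterior vanishes at m.
grad_at_m = alpha * m + X.T @ (X @ m - y) / sigma**2
```

Because the posterior is Gaussian, `grad_at_m` is (numerically) zero, confirming that the posterior mean and the MAP estimate are the same point.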


Since the posterior obtained from earlier data can serve as the prior for new data, we can use this prior knowledge to refine the MAP estimate and recover the complete posterior distribution. As shown above in part 'e', in high dimensions the Gaussian posterior mean is more accurate than the sample mean obtained from the linear function.
