Full Bayesian inference averages its output over the entire posterior distribution, whereas maximum a posteriori (MAP) estimation picks the single weight vector that maximises the posterior distribution. If both the prior and the additive noise model are Gaussian, then the posterior is also Gaussian. In the linear Gaussian case the posterior mean and the MAP estimator therefore coincide, because the mode of a Gaussian is its mean; this holds for any value of the noise standard deviation.
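The coincidence of the posterior mean and the MAP estimate can be checked numerically. The sketch below (my own illustration, with an assumed synthetic data set, prior precision alpha, and noise level sigma) computes the closed-form Gaussian posterior for a linear model and confirms that the gradient of the log posterior vanishes at the posterior mean, i.e. the mean is the mode:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-Gaussian data: y = X w_true + noise (assumed setup)
n, d = 50, 3
sigma = 0.5          # noise standard deviation
alpha = 1.0          # prior precision, w ~ N(0, alpha^-1 I)
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + sigma * rng.normal(size=n)

# Gaussian prior + Gaussian noise => Gaussian posterior N(mu, Sigma):
#   Sigma = (alpha*I + X^T X / sigma^2)^-1
#   mu    = Sigma @ X^T y / sigma^2
Sigma = np.linalg.inv(alpha * np.eye(d) + X.T @ X / sigma**2)
mu = Sigma @ X.T @ y / sigma**2

# The MAP estimate maximises the log posterior; for a Gaussian the mode
# equals the mean, so the gradient of the log posterior is zero at mu.
def neg_log_post_grad(w):
    return -(X.T @ (y - X @ w)) / sigma**2 + alpha * w

print(np.allclose(neg_log_post_grad(mu), 0.0))
```

Because the gradient is zero at the posterior mean for every choice of sigma, the MAP estimator and the posterior mean are the same vector in this model.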


If we have prior knowledge from a previous posterior distribution, we can use it as the new prior to improve the MAP estimate, and in this way recover the complete posterior distribution. As shown above in part 'e', the Gaussian has a mean of  in high dimensions, which is more accurate than the sample mean obtained from the linear function.
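Reusing a previous posterior as the new prior is exactly sequential Bayesian updating. A minimal sketch under the same Gaussian linear-model assumptions (the batch sizes, noise level, and true weights below are illustrative choices of mine, not taken from the essay):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sequential update: the posterior from one batch of data becomes the
# prior for the next batch.
d, sigma = 2, 0.3
w_true = np.array([0.8, -1.2])

# Start from a broad zero-mean Gaussian prior N(m, S).
m = np.zeros(d)
S = 10.0 * np.eye(d)

for _ in range(5):                       # five batches of fresh data
    X = rng.normal(size=(20, d))
    y = X @ w_true + sigma * rng.normal(size=20)
    # Conjugate Gaussian update: precisions add, means combine.
    S_new = np.linalg.inv(np.linalg.inv(S) + X.T @ X / sigma**2)
    m = S_new @ (np.linalg.inv(S) @ m + X.T @ y / sigma**2)
    S = S_new

print(m)  # the posterior mean concentrates around w_true
```

Each pass tightens the posterior covariance, so the estimate grows more accurate as evidence accumulates, which is the behaviour the paragraph above appeals to.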
