Bayes vs MLE: an estimation theory fairy tale
I found a neat little example in one of my introductory stats books about Bayesian versus maximum-likelihood estimation for the simple problem of estimating a binomial distribution given only one sample.
I was going to try to show the math, but since Blogger won’t actually render MathML, I’ll mostly hand-wave instead. [Fixed in Whisper. —ed.]
So let’s say we’re trying to estimate a binomial distribution parameterized by $\theta$, the probability of success. Think of a coin that comes up heads with probability $\theta$; we get to flip it exactly once, and we record the outcome as $x$, with $x = 1$ for heads and $x = 0$ for tails.
The maximum likelihood estimate for $\theta$ is the value that maximizes the likelihood $p(x \mid \theta) = \theta^x (1 - \theta)^{1 - x}$, which turns out to be simply $\hat\theta_{\mathrm{MLE}} = x$: whatever we saw on our single flip, we estimate that the coin comes up that way every time.
In the coin case it seems crazy to say, I saw one head, so I’m going to assume that the coin always turns up heads, but that’s because of our prior knowledge of how coins behave. If we’re given a black box with a button and two lights, and we press the button, and one of the lights comes on, then maybe estimating that that light always comes on when we press the button makes a little more sense.
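To make the hand-waving concrete, here’s a minimal sketch (mine, not the book’s) that brute-forces the single-observation Bernoulli likelihood on a grid and confirms that the maximizer really is the observation itself:

```python
# Brute-force the one-sample Bernoulli likelihood
# L(theta) = theta^x * (1 - theta)^(1 - x) on a grid of candidate
# values and confirm that it peaks at theta = x.
import numpy as np

thetas = np.linspace(0.0, 1.0, 1001)

for x in (0, 1):  # the one sample we observed
    likelihood = thetas**x * (1 - thetas) ** (1 - x)
    mle = thetas[np.argmax(likelihood)]
    print(f"x = {x}: MLE = {mle:.2f}")  # prints 0.00 and 1.00
```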
Finding the Bayesian estimate is slightly more complicated. Let’s use a uniform prior, $p(\theta) = 1$ for $\theta \in [0, 1]$. Our conditional distribution is $p(x \mid \theta) = \theta^x (1 - \theta)^{1 - x}$, so by Bayes’ rule the posterior is $p(\theta \mid x) \propto p(x \mid \theta)\, p(\theta)$, which works out to a $\mathrm{Beta}(x + 1,\ 2 - x)$ distribution.
Now if we were in the world of classification, we’d take the MAP estimate, which is a fancy way of saying the value with the biggest probability, or the mode of the distribution. Since we’re using a uniform prior, that would end up the same as the MLE. But we’re not. We’re in the world of real numbers, so we can take something better: the expected value, or the mean of the distribution. This is known as the Bayes estimate, and there are some decision-theoretic reasons for using it, but informally, it makes more sense than using the MAP estimate: you can take into account the entire shape of the distribution, not just the mode.
Using the Bayes estimate, we arrive at $\hat\theta_{\mathrm{Bayes}} = E[\theta \mid x] = (x + 1)/3$. After seeing a single head, that’s $2/3$: we lean toward heads, but we don’t leap all the way to certainty the way the MLE does.
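Here’s the same kind of grid sketch for the posterior mean (again my illustration, assuming the uniform prior above); the numerical integral and the closed form should both give $2/3$ after one head:

```python
# Compute the Bayes estimate (posterior mean) for one Bernoulli
# observation under a uniform prior, both by normalizing the
# unnormalized posterior on a grid and in closed form.
import numpy as np

x = 1  # we saw one head
thetas = np.linspace(0.0, 1.0, 100_001)
dtheta = thetas[1] - thetas[0]

posterior = thetas**x * (1 - thetas) ** (1 - x)  # flat prior drops out
posterior /= posterior.sum() * dtheta            # normalize to a density

print((thetas * posterior).sum() * dtheta)       # ~0.6667 (grid)
print((x + 1) / 3)                               # 0.6667 (closed form)
```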
Up till now we’ve been talking about “estimation theory”, i.e. the art of estimating shit. But estimation theory is basically decision theory in disguise, where your decision space is the same as your parameter space: you’re deciding on a value for $\theta$.
Now what’s cool about moving to the world of decision theory is that we can say: if I have to decide on a particular value for $\theta$, how much does it cost me when I’m wrong? Pick a loss function, say squared error $(\hat\theta - \theta)^2$, and for each possible true value of $\theta$ compute your estimator’s expected loss. That expected loss is called the risk.
And it turns out that, if you plot the risk for the MLE and for the Bayes estimate under different values of the true parameter $\theta$, the Bayes estimate wins almost everywhere: its risk, $(3\theta^2 - 3\theta + 1)/9$, sits below the MLE’s risk, $\theta(1 - \theta)$, except when $\theta$ is very close to $0$ or $1$ (the curves cross near $\theta \approx 0.09$ and $0.91$).
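If you’d rather not take my algebra on faith, here’s a Monte Carlo sketch (my own check, not the book’s figure): simulate many one-flip experiments at each true $\theta$ and compare the empirical risks against the closed forms above:

```python
# Monte Carlo check of the frequentist risk (expected squared error)
# of the MLE (theta_hat = x) and the Bayes estimate ((x + 1)/3) for a
# single Bernoulli(theta) observation, against the closed forms
# R_MLE = theta(1 - theta) and R_Bayes = (3 theta^2 - 3 theta + 1)/9.
import numpy as np

rng = np.random.default_rng(0)

for theta in np.linspace(0.0, 1.0, 11):
    x = rng.binomial(1, theta, size=200_000)  # many one-flip experiments
    risk_mle = np.mean((x - theta) ** 2)
    risk_bayes = np.mean(((x + 1) / 3 - theta) ** 2)
    print(f"theta = {theta:.1f}: "
          f"MLE risk = {risk_mle:.4f} (exact {theta * (1 - theta):.4f}), "
          f"Bayes risk = {risk_bayes:.4f} "
          f"(exact {(3 * theta**2 - 3 * theta + 1) / 9:.4f})")
```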
So that’s pretty cool. It seems like the Bayes estimate must be a superior estimate.
Of course, I set this whole thing up. Those “decision-theoretic reasons” for choosing the Bayes estimate I mentioned? Well, they’re theorems that show that the Bayes estimate minimizes risk. And, in fact, taking the posterior mean as the Bayes estimate is specific to squared-error loss. If we chose another loss function, we could come up with a potentially very different Bayes estimate.
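As a concrete illustration of that last point (mine, using SciPy, not something from the original post): after one head the posterior is $\mathrm{Beta}(2, 1)$, and switching from squared-error loss to absolute-error loss moves the Bayes estimate from the posterior mean to the posterior median:

```python
# Under squared-error loss the Bayes estimate is the posterior mean;
# under absolute-error loss it is the posterior median. For x = 1 the
# posterior is Beta(2, 1), i.e. p(theta | x) = 2 * theta on [0, 1].
from scipy.stats import beta

posterior = beta(2, 1)
print(posterior.mean())    # 2/3 ~ 0.6667: squared-error Bayes estimate
print(posterior.median())  # 1/sqrt(2) ~ 0.7071: absolute-error estimate
```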
But my intention wasn’t really to trick you into believing that Bayes estimates are awesome. (Though they are!) I wanted to show that:
- Bayes and classical approaches can come up with very different estimates, even with a uniform prior.
- If you cast things in decision-theoretic terms, you can make some real quantitative statements about different ways of estimating.
In the decision theory world, you can customize your estimates to minimize your particular costs in your particular situation. And that’s an idea that I think is very, very powerful.