A couple weeks ago I posted about flipping an unfair coin. There, I imagined a coin that came out of a flip the same way it went in 51% of the time. The coin didn’t prefer heads or tails in general, but if it started out heads it was slightly more likely to stay heads, and likewise for tails.
If you flip that coin once, it’s not bad – just a little bit away from fair. If you flip it twice, it’s so close to fair you’d have to do tens of thousands of repetitions of double-flips to see they were biased.
But what if instead of 51% chance to land the same way, the coin had 90% chance? Then how many times would you have to flip it to get pretty close to fair?
The trick to this is not to try to calculate all the strings of length six, for example, and see what the probability is for heads on the sixth toss, and then go to seven if six wasn’t good enough. Instead, flip it once, and make the second flip act on the uncertain state.
Here’s what I mean: We’ll write heads as a column vector like this:

$$\begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

and tails like this:

$$\begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

For an uncertain state, where we haven’t looked at the coin yet, we write:

$$\begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix}$$
Flipping is then the symmetric operator that takes us from the certain state to the 90-10 state. This is

$$F = \begin{pmatrix} 0.9 & 0.1 \\ 0.1 & 0.9 \end{pmatrix}$$
Flipping a coin that started out heads corresponds to the equation

$$F \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0.9 \\ 0.1 \end{pmatrix}$$
To flip twice, act with the operator again:

$$F^2 \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0.82 \\ 0.18 \end{pmatrix}$$
To flip $n$ times, evaluate

$$F^n \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
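As a quick sanity check, here is a small NumPy sketch of that matrix-power calculation (the variable names are my own):

```python
import numpy as np

# Flip operator: 90% chance to stay the same, 10% chance to switch.
F = np.array([[0.9, 0.1],
              [0.1, 0.9]])

heads = np.array([1.0, 0.0])  # pure heads state

# The probability distribution after n flips is F^n acting on the start state.
for n in [1, 2, 10]:
    state = np.linalg.matrix_power(F, n) @ heads
    print(n, state)
```

After one flip the state is (0.9, 0.1), after two it is (0.82, 0.18), and it keeps creeping toward (0.5, 0.5).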
Intuitively, one eigenvector of that operator is the 50-50 state. Since the operator is symmetric, if we begin with complete uncertainty, we must still have complete uncertainty after flipping. Evidently, the eigenvalue is one.
The eigenvectors of a symmetric operator on a real vector space are orthogonal¹, so the other eigenvector is

$$\begin{pmatrix} 1/2 \\ -1/2 \end{pmatrix}$$

which is not very meaningful by itself, but forms a basis along with the first eigenvector. Its eigenvalue is 0.8.
We can break any heads/tails probability down into a sum of these two eigenvectors. The first eigenvector is the fair state. The second eigenvector is what takes us away from the fair state. When its component is positive, we lean towards heads, and the higher its component the more we lean towards heads. When the component of this second eigenvector is negative, we lean towards tails.
The pure heads state can be written in the eigenbasis by

$$\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix} + \begin{pmatrix} 1/2 \\ -1/2 \end{pmatrix}$$
Flip $n$ times, and the component of the first eigenvector is unchanged because its eigenvalue is 1, but the second eigenvector’s component dies away exponentially, because it’s multiplied by 0.8 each time. That leaves us with

$$F^n \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix} + 0.8^n \begin{pmatrix} 1/2 \\ -1/2 \end{pmatrix}$$
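You can check this closed form against direct matrix powers numerically (a NumPy sketch; `fair` and `lean` are my names for the two eigenvectors):

```python
import numpy as np

F = np.array([[0.9, 0.1],
              [0.1, 0.9]])
heads = np.array([1.0, 0.0])

fair = np.array([0.5, 0.5])    # eigenvector with eigenvalue 1
lean = np.array([0.5, -0.5])   # eigenvector with eigenvalue 0.8

# The closed form fair + 0.8^n * lean should equal F^n applied to heads.
for n in range(1, 20):
    closed_form = fair + 0.8**n * lean
    direct = np.linalg.matrix_power(F, n) @ heads
    assert np.allclose(closed_form, direct)
print("closed form matches direct matrix powers")
```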
If we want the probability for heads to be within $\epsilon$ of 50%, we know

$$\frac{1}{2}(0.8)^n \leq \epsilon$$

so

$$n \geq \frac{\ln(2\epsilon)}{\ln(0.8)}$$

We’d have to flip 18 times to get down to 51%. We can see that the second eigenvalue, the one that was 0.8 here, comes from $0.9 - 0.1$. In general, if the chance to stay what you are is $p$, then this second eigenvalue will be $2p - 1$. If $p < 1/2$, then this eigenvalue is negative, and the component of the second eigenvector will switch sign each flip. We'll go from biased towards heads, to biased towards tails, and back again, flip-flopping with each coin toss.
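Solving that inequality for $n$ takes one line (a sketch, with $\epsilon = 0.01$ for the 51% threshold):

```python
import math

p_same = 0.9
lam = 2 * p_same - 1          # second eigenvalue, 0.8 here
eps = 0.01                    # want P(heads) <= 0.5 + eps, i.e. 51%

# Need (1/2) * lam^n <= eps, so n >= ln(2*eps) / ln(lam).
n = math.ceil(math.log(2 * eps) / math.log(lam))
print(n)  # 18

# Verify: P(heads) after n flips is 1/2 + (1/2) * lam^n.
assert 0.5 + 0.5 * lam**n <= 0.5 + eps
assert 0.5 + 0.5 * lam**(n - 1) > 0.5 + eps
```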
¹ If $u$ and $v$ are two eigenvectors of a symmetric operator $A$ with eigenvalues $\lambda_u$ and $\lambda_v$, then

$$u^\top A v = \lambda_v \, u^\top v$$

by acting with $A$ on the right. Since $A$ is symmetric, it can also act on the left, giving

$$u^\top A v = \lambda_u \, u^\top v$$

These expressions are the same, so

$$(\lambda_u - \lambda_v) \, u^\top v = 0$$

which forces $u^\top v = 0$ as long as $\lambda_u \neq \lambda_v$. If they are equal, we have a two-dimensional eigenspace, and can simply choose two orthogonal vectors inside it using Gram–Schmidt. Either way, once you find the first eigenvector, whatever is perpendicular will be another one.
Also, a real matrix may have complex eigenvalues, but they come in conjugate pairs, so once we found one real eigenvector of our 2×2 operator we knew the other eigenvalue had to be real as well.