Flip It One More Time

A couple weeks ago I posted about flipping an unfair coin. There, I imagined a coin that came out of a flip the same way it went in 51% of the time. The coin didn’t prefer heads or tails in general, but if it started out heads it was slightly more likely to stay heads, and likewise for tails.

If you flip that coin once, it’s not bad – just a little bit away from fair. If you flip it twice, it’s so close to fair you’d have to do tens of thousands of repetitions of double-flips to see that they were biased.

But what if instead of 51% chance to land the same way, the coin had 90% chance? Then how many times would you have to flip it to get pretty close to fair?

The trick is not to enumerate all the strings of, say, six flips, work out the probability of heads on the sixth toss, and then go to seven if six wasn’t good enough. Instead, flip once, and let the second flip act on the uncertain state left by the first.

Here’s what I mean: We’ll write heads as a column vector like this:

\left( \begin{array}{c} 1 \\ 0 \end{array}\right)

and tails like this:

\left( \begin{array}{c} 0 \\ 1 \end{array}\right)

For an uncertain state, where we haven’t looked at the coin yet, we write:

\left( \begin{array}{c} P_H \\ P_T \end{array}\right).

Flipping is then the symmetric operator that takes us from the certain state to the 90-10 state. This is

\left( \begin{array}{cc} .9 & .1 \\ .1 & .9 \end{array}\right)

Flipping a coin that started out heads corresponds to the equation

\left( \begin{array}{cc} .9 & .1 \\ .1 & .9 \end{array}\right)  \left( \begin{array}{c} 1 \\ 0 \end{array}\right) = \left( \begin{array}{c} .9 \\ .1 \end{array}\right)
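
If you’d rather check that numerically, here’s a minimal sketch in Python with NumPy (the names M and heads are mine, nothing standard):

import numpy as np

# The flip operator: 90% chance to stay the same, 10% chance to switch.
M = np.array([[0.9, 0.1],
              [0.1, 0.9]])

heads = np.array([1.0, 0.0])  # the certain heads state

print(M @ heads)  # [0.9 0.1]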

To flip twice, act with the operator again.

\left( \begin{array}{cc} .9 & .1 \\ .1 & .9 \end{array} \right)\left( \begin{array}{cc} .9 & .1 \\ .1 & .9 \end{array}\right) \left( \begin{array}{c} 1 \\ 0 \end{array}\right) = \left( \begin{array}{c} .82 \\ .18 \end{array}\right)

To flip n times, evaluate

\left( \begin{array}{cc} .9 & .1 \\ .1 & .9 \end{array}\right)^n \left( \begin{array}{c} 1 \\ 0 \end{array}\right)
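
In code this is just a matrix power; NumPy’s matrix_power does the repeated multiplication for us:

import numpy as np

M = np.array([[0.9, 0.1],
              [0.1, 0.9]])
heads = np.array([1.0, 0.0])

# Probabilities after n flips of a coin that started out heads.
for n in [1, 2, 5, 10, 20]:
    print(n, np.linalg.matrix_power(M, n) @ heads)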

Intuitively, one eigenvector of that operator is the 50-50 state. The operator treats heads and tails symmetrically, so if we begin with complete uncertainty, we must still have complete uncertainty after flipping. Evidently, the eigenvalue is one.

The eigenvectors of a symmetric operator on a real vector space are orthogonal¹, so the other eigenvector is

\left( \begin{array}{c} .5 \\ -.5 \end{array}\right),

which is not very meaningful by itself, but forms a basis along with the first eigenvector. Its eigenvalue is 0.8.
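
NumPy can find this decomposition for us; eigh is its eigensolver for symmetric matrices (it returns unit-length eigenvectors, so expect my (.5, -.5) scaled up by √2, possibly with flipped signs):

import numpy as np

M = np.array([[0.9, 0.1],
              [0.1, 0.9]])

eigenvalues, eigenvectors = np.linalg.eigh(M)
print(eigenvalues)   # [0.8 1. ]
print(eigenvectors)  # columns are the unit-norm eigenvectors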

We can break any heads/tails probability down into a sum of these two eigenvectors. The first eigenvector is the fair state. The second eigenvector is what takes us away from the fair state. When its component is positive, we lean towards heads, and the higher its component the more we lean towards heads. When the component of this second eigenvector is negative, we lean towards tails.

The pure heads state can be written in the eigenbasis as

\left( \begin{array}{c} 1 \\ 0 \end{array}\right) = \left( \begin{array}{c} .5 \\ .5 \end{array}\right)  + \left( \begin{array}{c} .5 \\ -.5 \end{array}\right).

Flip n times, and the component of the first eigenvector is unchanged because its eigenvalue is 1, but the second eigenvector’s component dies away exponentially, because it’s multiplied by 0.8 each time. That leaves us with

\left( \begin{array}{c} .5 + .5 \cdot .8^n \\ .5 - .5 \cdot .8^n \end{array} \right).
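
A quick sanity check that this closed form agrees with brute-force matrix powers:

import numpy as np

M = np.array([[0.9, 0.1],
              [0.1, 0.9]])
heads = np.array([1.0, 0.0])

for n in range(1, 6):
    brute = np.linalg.matrix_power(M, n) @ heads
    closed = np.array([0.5 + 0.5 * 0.8**n,
                       0.5 - 0.5 * 0.8**n])
    print(n, brute, closed)  # the two agree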

If we want the probability of heads to be within \epsilon of 50%, we need

.5 \cdot .8^n = \epsilon

and so

n = \ln(2\epsilon)/\ln(0.8)

We’d have to flip 18 times to get down to 51%. We can see that the second eigenvalue, the one that was .8 here, comes from .9 - .1. In general, if the chance to stay what you are is p, then this second eigenvalue will be 2p - 1. If p < .5, then this eigenvalue is negative, and the component of the second eigenvector will switch sign each flip. We'll go from biased towards heads, to biased towards tails, and back again, flip-flopping with each coin toss.
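
In code, with math.ceil to round up to a whole number of flips:

import math

epsilon = 0.01   # allowed deviation of P(heads) from 0.5, i.e. 51%
lam = 0.8        # the second eigenvalue, 2p - 1 with p = 0.9

n = math.log(2 * epsilon) / math.log(lam)
print(n, math.ceil(n))  # ~17.5, so 18 flips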


¹ If A and B are two eigenvectors of a symmetric operator M, with eigenvalues \alpha and \beta respectively, then

A^\top M B = A^\top (\beta B) = \beta \, A^\top B

by acting with M on B, to the right. Since M is symmetric, it can act on A, to the left, just as well, giving

A^\top M B = \alpha \, A^\top B.

These expressions are equal, so

\beta \, A^\top B = \alpha \, A^\top B,

which implies

A^\top B = 0

as long as \alpha \neq \beta. If they are equal, we have a two-dimensional eigenspace, and can simply choose two orthogonal vectors inside it using Gram-Schmidt. Either way, once you find the first eigenvector, whatever is perpendicular will be another one.

Also, a real matrix can have complex eigenvalues, but they come in conjugate pairs, so once we found one real eigenvalue (with its real eigenvector) we knew the other one had to be real as well.
