My goal here is to understand the error propagation formulas that many scientists look up in some reference book every once in a while when they have to do the data analysis for their next paper.

Suppose you are conducting two different measurements, the results of which you label $x$ and $y$. We’ll let the variables take any value, so

$-\infty < x < \infty, \qquad -\infty < y < \infty$,

and each of those results has a different chance of occurring. Write

$P(x_0 \le x \le x_0 + \mathrm{d}x) = f(x_0)\,\mathrm{d}x$

to define $f$, the probability density for the variable $x$. Then do the same thing to define $g$, the probability density for $y$.

The only requirements on $f$ and $g$ are that

$f(x) \ge 0, \qquad g(y) \ge 0,$

and

$\int_{-\infty}^{\infty} f(x)\,\mathrm{d}x = \int_{-\infty}^{\infty} g(y)\,\mathrm{d}y = 1.$

I allowed the measurements to be any number from minus infinity to plus infinity, but if you have something that’s restricted to some finite range, that’s no problem. Just set

$f(x) = 0$ for $x$ not in that range.

On the other hand, it’s not okay if your measurement is discrete (how many jelly beans are in the jar?), unless you’re willing to do some mathematical tricks (step function distributions, for example).
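As a concrete sketch of these requirements, here is a quick numerical check. The specific density (a unit Gaussian) is my own choice of example, standing in for a generic $f$:

```python
import numpy as np

# A Gaussian density standing in for a generic f (my choice of example).
def f(x, mu=0.0, sigma=1.0):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Check the two requirements: f >= 0 everywhere, and f integrates to 1.
x = np.linspace(-10.0, 10.0, 20001)
density = f(x)

nonnegative = bool(np.all(density >= 0))
total_probability = float(np.sum(density) * (x[1] - x[0]))  # Riemann sum

print(nonnegative, total_probability)
```

The grid is truncated at $\pm 10$, which works here only because the Gaussian tails are negligible by then; a heavier-tailed density would need a wider grid.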

You aren’t happy with just $x$ and $y$. Instead, you want to combine them in some way to find

$z = h(x, y)$,

(add them, multiply them, take some more complicated function of both, etc.) What is the probability distribution for this new statistic?

The new statistic $z$ is a number, so it also ranges from $-\infty$ to $\infty$. (It may have zero probability over part of that range.) What is the probability that $z = z_0$? Technically, it’s zero (or infinitesimal, if you like), but what is the probability density, $p(z_0)$?

There may be many ways to get $z = z_0$. For example, if

$z = x + y$, then

$x = t, \qquad y = z_0 - t$

does the trick for all $t$, meaning there are infinitely many solutions. The chance of getting $z_0$ comes from adding up all the various possibilities thusly:

$p(z_0) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x)\,g(y)\,\delta\bigl(z_0 - h(x,y)\bigr)\,\mathrm{d}x\,\mathrm{d}y,$

or, if you don’t like the Dirac delta function, think of it as a path integral

$p(z_0) = \int_C \frac{f(x)\,g(y)}{\left|\nabla h(x,y)\right|}\,\mathrm{d}s,$

with $\mathrm{d}s$ an element of length along the curve $C$ of constant $h(x,y) = z_0$ in its two-dimensional domain, so that

$\int_{-\infty}^{\infty} p(z_0)\,\mathrm{d}z_0 = 1.$

That gives the full probability distribution for $z$. Scientists generally report their data in terms of a mean and standard deviation, so you would next extract those statistics from their definitions and the distribution $p(z)$.
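If you'd rather not wrestle with delta functions at all, the same distribution can be approximated by brute force: draw many $(x, y)$ pairs, compute $z = h(x, y)$ for each, and histogram the results. The particular densities and the choice $h(x, y) = xy$ below are my own illustration, not anything from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two independent measurements; Gaussian densities chosen for illustration.
x = rng.normal(3.0, 0.5, n)
y = rng.normal(2.0, 0.4, n)

# Any combining function h(x, y) works; take the product as an example.
z = x * y

# A normalized histogram approximates the density p(z); the reported
# statistics come straight from the samples.
hist, edges = np.histogram(z, bins=200, density=True)
area = float(np.sum(hist * np.diff(edges)))  # should be 1 by construction

print(z.mean(), z.std(), area)
```

For this example the sample mean comes out near $3 \times 2 = 6$, as independence predicts for a product.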

Let’s work an example by adding two Gaussians. Suppose we have

$f(x) = \frac{1}{\sigma_x\sqrt{2\pi}}\,e^{-(x-\mu_x)^2/2\sigma_x^2}, \qquad g(y) = \frac{1}{\sigma_y\sqrt{2\pi}}\,e^{-(y-\mu_y)^2/2\sigma_y^2},$

where the $\mu$’s are the means and the $\sigma$’s are the standard deviations. Define

$z = x + y$.

What is the chance that, for example, $z = 5$? We could have

$x = 0,\ y = 5$, or $x = 1,\ y = 4$, or $x = 6.3,\ y = -1.3$, and so on.

They all contribute some probability. To find the total probability density, integrate over all possibilities, remembering that (assuming the measurements are independent)

$P(x, y) = f(x)\,g(y)$.

Also, instead of looking at the special case $z = 5$, we’ll keep things general and let $z$ be anything, so that

$p(z) = \int_{-\infty}^{\infty} f(x)\,g(z - x)\,\mathrm{d}x.$
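Before pushing through the algebra, it's easy to evaluate this integral over all possibilities numerically. The particular means and widths below are mine, chosen just to have concrete numbers:

```python
import numpy as np

# Example numbers (mine, just to make the integral concrete).
mu_x, sigma_x = 1.0, 0.6
mu_y, sigma_y = 2.0, 0.8

def gaussian(t, mu, sigma):
    return np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# p(z) = integral of f(x) g(z - x) over all x, done as a Riemann sum.
xs = np.linspace(-10.0, 15.0, 5001)
dx = xs[1] - xs[0]

def p(z):
    return float(np.sum(gaussian(xs, mu_x, sigma_x) * gaussian(z - xs, mu_y, sigma_y)) * dx)

# Sanity check: p should itself be a normalized density.
zs = np.linspace(-10.0, 15.0, 2001)
total = float(np.sum([p(z) for z in zs]) * (zs[1] - zs[0]))

print(p(3.0), total)
```

The grid limits are wide enough that the Gaussian tails contribute nothing; that's what lets a finite Riemann sum stand in for the infinite integral.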

Before continuing, note the identity

$a x^2 + b x + c = a\left(x + \frac{b}{2a}\right)^2 + \left(c - \frac{b^2}{4a}\right),$

meaning that any quadratic differs from a perfect square by only a constant. This is good because it means the exponential of any quadratic is a (not necessarily normalized) Gaussian:

$e^{-(a x^2 + b x + c)} = A\,e^{-a\left(x + \frac{b}{2a}\right)^2}$

with

$A = e^{b^2/4a - c}.$

Take the equation for $p(z)$, combine the exponents of the two exponentials using this identity, and we’ll have the exponential of a quadratic as the integrand. All we have to do is choose the appropriate values of $a$, $b$, and $c$. $A$ is a constant when integrating over $x$, so it goes on the outside of the integral (note that $A$ does have $z$ dependence). What’s left inside the integral is a new Gaussian, and since we’re integrating it from $-\infty$ to $\infty$ it turns into a number that can be found quite easily:

$\int_{-\infty}^{\infty} e^{-a\left(x + \frac{b}{2a}\right)^2}\,\mathrm{d}x = \sqrt{\frac{\pi}{a}}.$

At last you can just feel all that remaining pesky algebra stuff crunchy crunch crunching under the implacable fury of your pencil, and what comes out is simply

$p(z) = \frac{1}{\sigma_z\sqrt{2\pi}}\,e^{-(z-\mu_z)^2/2\sigma_z^2}$

with

$\mu_z = \mu_x + \mu_y, \qquad \sigma_z^2 = \sigma_x^2 + \sigma_y^2.$

So when you add two Gaussians, you get a new Gaussian, and the means and variances add. Isn’t that convenient?
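As a last check, the punchline is easy to verify by simulation (the particular means and widths are mine, for illustration): the sample mean and variance of $x + y$ should land on $\mu_x + \mu_y$ and $\sigma_x^2 + \sigma_y^2$.

```python
import numpy as np

rng = np.random.default_rng(42)
mu_x, sigma_x = 1.5, 0.3
mu_y, sigma_y = -0.5, 0.4
n = 1_000_000

# Draw the two measurements independently and add them.
z = rng.normal(mu_x, sigma_x, n) + rng.normal(mu_y, sigma_y, n)

print(z.mean(), mu_x + mu_y)                     # means add
print(z.var(), sigma_x ** 2 + sigma_y ** 2)      # variances add
```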
