My summer camp quantum students may or may not need these definite integrals. Anyway, I’ve been telling them to work on them in case they finish their problem sets early.

## 1

$$I_n = \int_0^\infty x^n e^{-x}\,dx$$

Integrating this by parts gives

$$I_n = \left[-x^n e^{-x}\right]_0^\infty + n\int_0^\infty x^{n-1} e^{-x}\,dx = n\,I_{n-1}.$$

It’s easy to see that $I_0 = \int_0^\infty e^{-x}\,dx = 1$, so the recursion relation is satisfied by $I_n = n!$. This result may be useful in examining the radial part of the wave function for a hydrogen atom, for example, or really anywhere that the asymptotic behavior of the wavefunction is exponential, because when that exponential behavior is factored away, a set of orthogonal polynomials is a likely resulting eigenbasis for the Hamiltonian. (Sorry to speak gobbledygook for people who haven’t studied quantum/differential equations.)
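Here’s a quick sympy sanity check of the closed form (my addition, not part of the original argument, assuming you have sympy installed):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Check that I_n = ∫_0^∞ x^n e^(-x) dx equals n! for small n.
for k in range(6):
    I_k = sp.integrate(x**k * sp.exp(-x), (x, 0, sp.oo))
    assert I_k == sp.factorial(k)
    print(f"I_{k} = {I_k}")
```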

The result is also generally useful in finding an analytic continuation of the factorial function. Although $\left(\tfrac{1}{2}\right)!$ is not well defined, I can certainly set $n = 1/2$ in the integral and get a sensible answer. When approximating $n!$ for large numbers, this form of the factorial function is frequently easier to work with.
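For instance, here’s a numerical check (my addition) that setting $n = 1/2$ in the integral reproduces $\Gamma(3/2) = \sqrt{\pi}/2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# (1/2)! as defined by the integral; equals Gamma(3/2) = sqrt(pi)/2.
value, _ = quad(lambda x: np.sqrt(x) * np.exp(-x), 0, np.inf)
print(value)                # ≈ 0.886227
print(gamma(1.5))           # same value
print(np.sqrt(np.pi) / 2)   # same value
```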

In order to lead into the next part, we’ll solve the same problem over again by a new method. Begin with the following relation:

$$\int_0^\infty e^{-ax}\,dx = \frac{1}{a}$$

Note that the definite integral is a function of $a$. Now apply the operator $-\frac{d}{da}$ to both sides.

Differentiating under the integral yields

$$\int_0^\infty x\,e^{-ax}\,dx = \frac{1}{a^2}.$$

Setting $a = 1$ would yield the desired result for $n = 1$. Instead, let’s now apply the same operator not once but $n$ times.

$$\left(-\frac{d}{da}\right)^n \int_0^\infty e^{-ax}\,dx = \left(-\frac{d}{da}\right)^n \frac{1}{a}$$

Which becomes

$$\int_0^\infty x^n\,e^{-ax}\,dx = \frac{n!}{a^{n+1}}.$$

Setting $a = 1$ recovers the result from integration by parts.
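Here’s a short symbolic check (mine, not part of the original derivation) that differentiating under the integral really does produce $n!/a^{n+1}$:

```python
import sympy as sp

a, x = sp.symbols('a x', positive=True)

for n in range(1, 5):
    # Left side: apply (-d/da)^n under the integral sign.
    lhs = sp.integrate((-1)**n * sp.diff(sp.exp(-a*x), a, n), (x, 0, sp.oo))
    # Right side: apply (-d/da)^n to 1/a, giving n!/a^(n+1).
    rhs = (-1)**n * sp.diff(1/a, a, n)
    assert sp.simplify(lhs - rhs) == 0
    print(n, rhs)
```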

## 2

$$\int_0^\infty x^n e^{-x^2}\,dx$$

Let us try a substitution: $u = x^2$. Then $du = 2x\,dx$. We get

$$\int_0^\infty x^n e^{-x^2}\,dx = \frac{1}{2}\int_0^\infty u^{(n-1)/2}\,e^{-u}\,du.$$

For odd $n$, the exponent $(n-1)/2$ is a non-negative integer, so we can apply the result from the previous section to get $\frac{1}{2}\left(\frac{n-1}{2}\right)!$.
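A quick numerical spot check of the odd-$n$ formula (my addition):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

# For odd n: ∫_0^∞ x^n e^(-x^2) dx = (1/2) * ((n-1)/2)!
for n in (1, 3, 5, 7):
    value, _ = quad(lambda x: x**n * np.exp(-x**2), 0, np.inf)
    print(n, round(value, 8), 0.5 * factorial((n - 1) // 2))
```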

For even $n$, let’s apply the trick of introducing a parameter, as we did in the previous problem (this is called parametric differentiation).

$$\int_0^\infty e^{-ax^2}\,dx = \frac{1}{2}\sqrt{\frac{\pi}{a}}$$

I don’t feel like writing up how to get this first result. One way to do it (the only one with which I am personally acquainted) is to square it, introducing a new integral over $y$. You then have a double integral over the quarter-plane. Changing to polar coordinates, the integral becomes a previously solved case, specifically, the same problem with $n = 1$.
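Here’s a sketch of that squaring trick in sympy (my reconstruction, not a transcript of the argument): the squared integral, written in polar coordinates over the quarter-plane, has the $n = 1$ integrand in its radial part.

```python
import sympy as sp

r, theta, a = sp.symbols('r theta a', positive=True)

# Square of ∫_0^∞ e^(-a x^2) dx, as a polar integral over the
# quarter-plane. The radial integrand r*e^(-a r^2) is the n = 1 case.
radial = sp.integrate(r * sp.exp(-a*r**2), (r, 0, sp.oo))   # 1/(2a)
squared = sp.integrate(radial, (theta, 0, sp.pi/2))         # pi/(4a)
print(sp.sqrt(squared))  # sqrt(pi/a)/2, matching the relation above
```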

Now apply the operator $\left(-\frac{d}{da}\right)^{n/2}$ ($n$ even). We get

$$\int_0^\infty x^n\,e^{-ax^2}\,dx = \frac{(n-1)!!}{2^{n/2+1}}\sqrt{\frac{\pi}{a^{n+1}}}.$$

Setting $a = 1$,

$$\int_0^\infty x^n\,e^{-x^2}\,dx = \frac{(n-1)!!}{2^{n/2+1}}\sqrt{\pi}.$$
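And a numerical check of the even-$n$ formula (my addition; the `double_factorial` helper is defined just for this check):

```python
import numpy as np
from scipy.integrate import quad

def double_factorial(k):
    """k!! with the convention (-1)!! = 1."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

# For even n: ∫_0^∞ x^n e^(-x^2) dx = (n-1)!! * sqrt(pi) / 2^(n/2 + 1)
for n in (0, 2, 4, 6):
    value, _ = quad(lambda x: x**n * np.exp(-x**2), 0, np.inf)
    expected = double_factorial(n - 1) * np.sqrt(np.pi) / 2**(n // 2 + 1)
    print(n, round(value, 8), round(expected, 8))
```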

These integrals are useful in quantum mechanics because the gaussian wavefunction is quite common. This is the ground state of the harmonic oscillator, for example. Gaussians frequently saturate the uncertainty principle (that is, they have $\Delta x\,\Delta p = \hbar/2$), and if we want to create a realistic but mathematically tractable wavefunction for a free particle propagating along some direction, for example to set up a scattering problem, a wavefunction that is gaussian in momentum space is a good bet.
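To see the saturation concretely, here’s a sympy sketch (mine; the width $\sigma$ is a free parameter I introduced) computing $\Delta x\,\Delta p$ for a normalized real gaussian:

```python
import sympy as sp

x, sigma, hbar = sp.symbols('x sigma hbar', positive=True)

# Normalized real gaussian wavefunction of width sigma.
psi = (2*sp.pi*sigma**2)**sp.Rational(-1, 4) * sp.exp(-x**2/(4*sigma**2))

# <x> = <p> = 0 by symmetry, so the variances are <x^2> and <p^2>.
x2 = sp.integrate(x**2 * psi**2, (x, -sp.oo, sp.oo))
# <p^2> = ∫ psi (-hbar^2 psi'') dx = hbar^2 ∫ (psi')^2 dx by parts.
p2 = hbar**2 * sp.integrate(sp.diff(psi, x)**2, (x, -sp.oo, sp.oo))

print(sp.simplify(sp.sqrt(x2 * p2)))  # hbar/2
```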

## 3

$$\int_0^{2\pi} \sin(mx)\,\sin(nx)\,dx$$

(assume $m$ and $n$ are positive integers).

We could use some trig identities to simplify this, or write it out in terms of complex exponentials. Before resorting to those, let’s see how far intuition can go.

When $m = n$, we’re integrating $\sin^2(mx)$ over a whole number of full periods, so the integral is just the length of the interval times the average value of the function. The sine and cosine functions have the same shape, and therefore so do their squares. So their squares have the same average value. But $\sin^2(mx) + \cos^2(mx) = 1$, so the average value of each function individually is $1/2$. So when $m = n$ the integral evaluates to $2\pi \cdot \frac{1}{2} = \pi$.
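A quick numerical confirmation (my addition) that the $m = n$ integral is $\pi$ no matter what $m$ is:

```python
import numpy as np
from scipy.integrate import quad

# ∫_0^{2π} sin^2(mx) dx = 2π * (average of sin^2) = 2π * 1/2 = π
for m in (1, 2, 5, 11):
    value, _ = quad(lambda x: np.sin(m*x)**2, 0, 2*np.pi)
    print(m, round(value, 8))  # π each time
```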

If we look at $\sin(x)$ and $\sin(2x)$, they clearly multiply to give zero. Here they are plotted from $0$ to $2\pi$ by Wolfram Alpha.

Calculus will tell us that the answer is zero for the integral whenever $m \neq n$. But unless one of $m$ or $n$ is zero, or there is some other special proportion between them, their orthogonality is not so obvious as in the above case. Consider $\sin(2x)$ and $\sin(3x)$. They look like this:

In some places they have the same sign, and in other places opposite signs. But it isn’t clear to me that all the places where the signs are opposite ought to cancel the places where the signs are the same. When we look at the product $\sin(2x)\sin(3x)$, we get

It’s a funny-looking function for which it is certainly plausible that the area above the axis is the same as the area below it, but it still feels somewhat miraculous that this is exact.
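The cancellation really is exact. Here’s a symbolic check (mine) over several pairs:

```python
import sympy as sp

x = sp.symbols('x')

# Orthogonality table: pi on the diagonal, 0 everywhere else.
for m in range(1, 5):
    row = [sp.integrate(sp.sin(m*x) * sp.sin(n*x), (x, 0, 2*sp.pi))
           for n in range(1, 5)]
    print(row)
```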

I have been trying to come up with a reason, based on what sines are and mean, that these functions should be orthogonal. I haven’t been able to do it yet.

Within the context of linear algebra, we can show that they are orthogonal without explicitly computing the integral. Both functions are eigenvectors of the operator $\frac{d^2}{dx^2}$ on the space of functions on $[0, 2\pi]$ with periodic boundary conditions. That operator is Hermitian. The two eigenvectors have different eigenvalues ($-m^2$ and $-n^2$), and because eigenvectors of a Hermitian operator with different eigenvalues are orthogonal, the sine functions must be orthogonal. So this is something I can prove, although I’m not quite ready to claim I understand it.
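The operator argument can also be made concrete numerically. Here’s a sketch (my addition) that discretizes $\frac{d^2}{dx^2}$ with periodic boundary conditions and checks the claims:

```python
import numpy as np

# Discretize d^2/dx^2 on [0, 2π) with periodic boundary conditions.
N = 200
x = np.linspace(0, 2*np.pi, N, endpoint=False)
h = x[1] - x[0]
I = np.eye(N)
D2 = (np.roll(I, 1, axis=0) - 2*I + np.roll(I, -1, axis=0)) / h**2

# The matrix is symmetric (Hermitian), as claimed.
print(np.allclose(D2, D2.T))  # True

# Sampled sines are (approximate) eigenvectors with distinct
# eigenvalues near -m^2 ...
for m in (2, 3):
    v = np.sin(m*x)
    print(m, np.max(np.abs(D2 @ v + m**2 * v)))  # small residual

# ... so they must be orthogonal, and indeed they are.
print(abs(np.dot(np.sin(2*x), np.sin(3*x))))  # ~0
```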
