Optical Theorem - Derivation

Derivation

The theorem can be derived rather directly from a treatment of a scalar wave. If a plane wave is incident along the positive z axis on an object, then the wave amplitude a great distance away from the scatterer is approximately given by

    \psi(\mathbf{r}) \approx e^{ikz} + f(\theta)\,\frac{e^{ikr}}{r}.

All higher terms, when squared, vanish more quickly than 1/r², and so are negligible a great distance away. Notice that for large values of z and small angles the binomial theorem gives us

    r = \sqrt{x^2 + y^2 + z^2} \approx z + \frac{x^2 + y^2}{2z}.

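This expansion is easy to sanity-check numerically. The sketch below is not part of the original derivation; the values of x, y, and z are arbitrary, chosen only to satisfy z ≫ x, y:

```python
# Quick check of the small-angle expansion
#   r = sqrt(x² + y² + z²) ≈ z + (x² + y²)/(2z)   for z >> x, y.
import numpy as np

x, y, z = 1.0, 2.0, 1000.0
r_exact = np.sqrt(x**2 + y**2 + z**2)
r_approx = z + (x**2 + y**2) / (2 * z)

# The error is the next binomial term, -(x² + y²)²/(8 z³) ≈ -3.1e-9 here.
print(r_exact - r_approx)
```
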
We would now like to use the fact that the intensity is proportional to the square of the amplitude ψ. Approximating the r in the denominator as z, we have

    |\psi|^2 \approx \left| e^{ikz} + \frac{f(\theta)}{z}\, e^{ikz} e^{ik(x^2 + y^2)/2z} \right|^2
             = 1 + \frac{f(\theta)}{z}\, e^{ik(x^2 + y^2)/2z} + \frac{f^*(\theta)}{z}\, e^{-ik(x^2 + y^2)/2z} + \frac{|f(\theta)|^2}{z^2}.

If we drop the |f(θ)|²/z² term and use the fact that c + c^* = 2\,\mathrm{Re}(c), we have

    |\psi|^2 \approx 1 + \frac{2}{z}\,\mathrm{Re}\!\left[ f(\theta)\, e^{ik(x^2 + y^2)/2z} \right].

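Dropping the quadratic term is justified because it falls off as 1/z² while the retained cross term falls off only as 1/z. A minimal numeric sketch makes this concrete; the amplitude f and the screen point (x, y) below are made-up values for illustration:

```python
# Check that |1 + c|² ≈ 1 + 2 Re(c) when |c| = |f|/z is small.
import numpy as np

k, z = 5.0, 100.0
f = 0.8 + 0.3j                     # made-up scattering amplitude f(θ)
x, y = 3.0, -2.0                   # made-up point on the screen
c = (f / z) * np.exp(1j * k * (x**2 + y**2) / (2 * z))

full = abs(1 + c)**2               # = 1 + 2 Re(c) + |c|²
approx = 1 + 2 * c.real
print(full - approx)               # = |c|² = |f|²/z² ≈ 7.3e-5, negligible
```
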
Now suppose we integrate over a screen in the x–y plane, at a distance which is small enough for the small-angle approximations to be appropriate, but large enough that we can integrate the intensity over x and y from −∞ to ∞ with negligible error. In optics, this is equivalent to including many fringes of the diffraction pattern. To further simplify matters, let us approximate f(θ) ≈ f(0). We quickly obtain

    \int |\psi|^2\, da \approx A + \frac{2}{z}\,\mathrm{Re}\!\left[ f(0) \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} e^{ik(x^2 + y^2)/2z}\, dx\, dy \right],

where A is the area of the surface integrated over. Although these are improper integrals, the exponentials can be treated as complex Gaussians and evaluated, so

    \int |\psi|^2\, da = A + \frac{2}{z}\,\mathrm{Re}\!\left[ f(0)\,\frac{2\pi i z}{k} \right] = A - \frac{4\pi}{k}\,\mathrm{Im}[f(0)],

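The one-dimensional complex Gaussian here is a Fresnel integral, ∫ e^{ikx²/2z} dx = √(2πiz/k), so the double integral gives 2πiz/k. The following sketch (not from the original text; k, z, and the truncation point X are illustrative choices) checks this against SciPy's Fresnel integrals:

```python
# Verify  ∫_{-∞}^{∞} exp(i k x²/2z) dx = sqrt(2π i z / k)
# using scipy.special.fresnel, which returns (S(u), C(u)) with
# S(u) = ∫₀ᵘ sin(πt²/2) dt and C(u) = ∫₀ᵘ cos(πt²/2) dt.
import numpy as np
from scipy.special import fresnel

k, z = 5.0, 100.0
a = k / (2 * z)                    # integrand is exp(i a x²)
X = 2000.0                         # truncation point for the improper integral

# Substituting t = x·sqrt(2a/π) maps exp(i a x²) onto the Fresnel form.
s, c = fresnel(X * np.sqrt(2 * a / np.pi))
numeric = 2 * np.sqrt(np.pi / (2 * a)) * (c + 1j * s)  # integrand is even

exact = np.sqrt(2j * np.pi * z / k)
print(numeric)   # ≈ (7.93 + 7.93j), oscillating toward the exact value as X grows
print(exact)     # ≈ (7.927 + 7.927j)
```
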
which is just the probability of reaching the screen if none were scattered, lessened by an amount (4π/k) Im[f(0)], which is therefore the effective scattering cross section of the scatterer:

    \sigma = \frac{4\pi}{k}\,\mathrm{Im}[f(0)].
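The result can also be checked independently against a partial-wave model, where the standard expressions f(0) = (1/k) Σ_l (2l+1) e^{iδ_l} sin δ_l and σ = (4π/k²) Σ_l (2l+1) sin² δ_l hold. The sketch below uses made-up phase shifts purely for illustration:

```python
# Check σ = (4π/k) Im f(0) against the partial-wave expressions.
import numpy as np

k = 1.3                            # wavenumber, arbitrary units
deltas = [0.7, 0.4, 0.1]           # made-up phase shifts δ_l for l = 0, 1, 2

# Forward amplitude: f(0) = (1/k) Σ (2l+1) e^{iδ_l} sin δ_l
f0 = sum((2*l + 1) * np.exp(1j*d) * np.sin(d)
         for l, d in enumerate(deltas)) / k

# Total cross section: σ = (4π/k²) Σ (2l+1) sin² δ_l
sigma = 4 * np.pi / k**2 * sum((2*l + 1) * np.sin(d)**2
                               for l, d in enumerate(deltas))

print(sigma)                       # ≈ 6.84
print(4 * np.pi / k * f0.imag)     # identical, as the optical theorem asserts
```
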
