Particle Filter - Sequential Importance Resampling (SIR)

Sequential importance resampling (SIR), the original particle filtering algorithm (Gordon et al. 1993), is a very commonly used particle filtering algorithm, which approximates the filtering distribution p(x_k|y_0,\dots,y_k) by a weighted set of P particles \{(w^{(L)}_k, x^{(L)}_k) : L = 1, \dots, P\}.

The importance weights w^{(L)}_k are approximations to the relative posterior probabilities (or densities) of the particles such that \sum_{L=1}^P w^{(L)}_k = 1.

SIR is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f(\cdot) can be approximated as a weighted average


\int f(x_k) p(x_k|y_0,\dots,y_k) dx_k \approx
\sum_{L=1}^P w^{(L)}_k f(x_k^{(L)}).
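
As a concrete illustration (a minimal Python/NumPy sketch, not part of the original text), the weighted average below estimates a posterior mean and variance from a hypothetical particle set; the particle values and the uniform weights are made-up placeholders standing in for x_k^{(L)} and w_k^{(L)}.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical particle approximation of p(x_k | y_0..y_k):
# particles[L] plays the role of x_k^(L), weights[L] of w_k^(L) (assumed normalized).
P = 1000
particles = rng.normal(loc=1.0, scale=2.0, size=P)
weights = np.full(P, 1.0 / P)

def weighted_expectation(f, particles, weights):
    # Approximate  int f(x_k) p(x_k | y_0..y_k) dx_k  by  sum_L w^(L)_k f(x_k^(L)).
    return np.sum(weights * f(particles))

posterior_mean = weighted_expectation(lambda x: x, particles, weights)
posterior_var = weighted_expectation(lambda x: x**2, particles, weights) - posterior_mean**2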

For a finite set of particles, the algorithm performance is dependent on the choice of the proposal distribution


\pi(x_k|x_{0:k-1},y_{0:k}). \,

The optimal proposal distribution is given as the target distribution


\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1},y_{k}). \,

However, the transition prior is often used as importance function, since it is easier to draw particles (or samples) and perform subsequent importance weight calculations:


\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1}). \,

Sequential importance resampling (SIR) filters with the transition prior as importance function are commonly known as the bootstrap filter and the condensation algorithm.
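
As a rough illustration of why the bootstrap choice is convenient, the sketch below assumes a scalar Gaussian random-walk model (the noise scales q and r are made-up values, not from the original text): drawing from the transition prior reduces to simulating the state dynamics, and the weight update reduces to evaluating the likelihood.

import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model for illustration only:
#   transition prior  p(x_k | x_{k-1}) = N(x_{k-1}, q^2)
#   likelihood        p(y_k | x_k)     = N(x_k, r^2)
q, r = 0.5, 1.0

def propose_from_transition_prior(particles_prev):
    # Draw x_k^(L) ~ p(x_k | x_{k-1}^(L)): the bootstrap proposal.
    return particles_prev + q * rng.normal(size=particles_prev.shape)

def bootstrap_weight_update(weights_prev, particles_k, y_k):
    # With the transition prior as proposal, the weight update reduces to
    # w_hat^(L)_k = w^(L)_{k-1} * p(y_k | x_k^(L)).
    likelihood = np.exp(-0.5 * ((y_k - particles_k) / r) ** 2) / (r * np.sqrt(2.0 * np.pi))
    w_hat = weights_prev * likelihood
    return w_hat / w_hat.sum()

Advancing one bootstrap step then amounts to calling propose_from_transition_prior followed by bootstrap_weight_update, with the resampling test covered below.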

Resampling is used to avoid the problem of degeneracy of the algorithm, that is, to avoid the situation in which all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the proper choice of resampling method. The stratified sampling proposed by Kitagawa (1996) is optimal in terms of variance.
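
A minimal sketch of stratified resampling, assuming normalized weights in a NumPy array; the function name is a placeholder, not a reference to any particular library.

import numpy as np

def stratified_resample(weights, rng=None):
    # Draw one uniform variate per stratum ((L-1)/P, L/P] and invert the
    # cumulative weight function; returns particle indices (with repetition).
    rng = np.random.default_rng() if rng is None else rng
    P = len(weights)
    positions = (np.arange(P) + rng.uniform(size=P)) / P
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

Using it amounts to replacing the current particle set by particles[stratified_resample(weights)] and resetting all weights to 1/P, as in step 5 of the algorithm below.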

A single step of sequential importance resampling is as follows:

1) For L = 1, \dots, P, draw samples from the proposal distribution

x^{(L)}_k \sim \pi(x_k|x^{(L)}_{0:k-1},y_{0:k})
2) For L = 1, \dots, P, update the importance weights up to a normalizing constant:

\hat{w}^{(L)}_k = w^{(L)}_{k-1}
\frac{p(y_k|x^{(L)}_k) p(x^{(L)}_k|x^{(L)}_{k-1})}
{\pi(x_k^{(L)}|x^{(L)}_{0:k-1},y_{0:k})}.
Note that when we use the transition prior as the importance function, this simplifies to the following:

\hat{w}^{(L)}_k = w^{(L)}_{k-1} p(y_k|x^{(L)}_k).
3) For L = 1, \dots, P, compute the normalized importance weights:

w^{(L)}_k = \frac{\hat{w}^{(L)}_k}{\sum_{J=1}^P \hat{w}^{(J)}_k}
4) Compute an estimate of the effective number of particles as

\hat{N}_\mathit{eff} = \frac{1}{\sum_{L=1}^P\left(w^{(L)}_k\right)^2}
5) If the effective number of particles is less than a given threshold, then perform resampling:
a) Draw particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For L = 1, \dots, P, set w^{(L)}_k = 1/P.
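
Putting steps 1-5 together, the sketch below (again Python/NumPy, an illustration under assumed interfaces rather than a definitive implementation) performs one SIR step. The model densities are passed in as callables since the text does not fix a particular model, and plain multinomial resampling is used in step 5a for brevity; the stratified scheme sketched above could be substituted.

import numpy as np

def sir_step(particles, weights, y_k, propose, transition_pdf, likelihood_pdf,
             proposal_pdf, n_threshold, rng=None):
    # One step of sequential importance resampling (steps 1-5 above).
    rng = np.random.default_rng() if rng is None else rng
    P = len(particles)

    # 1) Draw samples from the proposal distribution.
    new_particles = propose(particles, y_k)

    # 2) Update the importance weights up to a normalizing constant.
    w_hat = (weights * likelihood_pdf(y_k, new_particles)
             * transition_pdf(new_particles, particles)
             / proposal_pdf(new_particles, particles, y_k))

    # 3) Normalize the importance weights.
    new_weights = w_hat / w_hat.sum()

    # 4) Estimate the effective number of particles.
    n_eff = 1.0 / np.sum(new_weights ** 2)

    # 5) Resample when the effective sample size falls below the threshold.
    if n_eff < n_threshold:
        idx = rng.choice(P, size=P, p=new_weights)  # 5a) draw with weight-proportional probability
        new_particles = new_particles[idx]
        new_weights = np.full(P, 1.0 / P)           # 5b) reset all weights to 1/P

    return new_particles, new_weights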

The term Sampling Importance Resampling is also sometimes used when referring to SIR filters.
