
Unscented Kalman Filter

The unscented transform can be briefly summarized in 3 steps:

  1. Draw N deterministic samples from an input PDF.
  2. Transform these samples through a nonlinear function.
  3. Use the transformed points to compute the mean and covariance of the output PDF.

This example uses a motion model similar to the one in the EKF example from the previous post.
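The filter below calls motionModel and observationModel, which are defined in that earlier post. As a rough sketch only (assuming, as in that example, a state of \([x, y, \mathrm{yaw}, v]\), a control input of velocity and yaw rate, and a position-only observation), they might look something like this:

#include <Eigen/Dense>
#include <cmath>

// Sketch only: the actual models live in the EKF post.
// Assumed state: [x, y, yaw, v]; control: [v, yaw rate]; observation: [x, y].
Eigen::Vector4f motionModel(const Eigen::Vector4f& x, const Eigen::Vector2f& u, const float dt) {
    Eigen::Vector4f x_next;
    x_next << x(0) + u(0) * std::cos(x(2)) * dt,  // x position
              x(1) + u(0) * std::sin(x(2)) * dt,  // y position
              x(2) + u(1) * dt,                   // yaw
              u(0);                               // velocity
    return x_next;
}

Eigen::Vector2f observationModel(const Eigen::Vector4f& x) {
    return x.head<2>(); // observe position only
}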

State Predict

The state prediction is a three-step procedure:

  1. Generate sigma points
  2. Propagate points through the motion model
  3. Calculate mean and covariance of sigma points

State Update

The state update is a similar three-step procedure:

  1. Generate sigma points
  2. Propagate points through the observation model
  3. Calculate mean and covariance of sigma points

Generating Sigma Points

For a state with L dimensions, we need to generate 2L + 1 sigma points. We assume our state is Gaussian with a mean and covariance

\[\mathcal{N} (\mu_x, \Sigma_{xx})\]

We take the Cholesky decomposition of our covariance matrix to obtain the lower-triangular factor \(\boldsymbol{L}\) (not to be confused with the state dimension \(L\)).

\[\boldsymbol{L}\boldsymbol{L}^T = \Sigma_{xx}\]

The sigma points \(x_i\) are given by

\[x_0 = \mu_x\]

\(x_i = \mu_x + \sqrt{L + \lambda}\, \mathrm{col}_i(\boldsymbol{L})\) for \(i = 1, ..., L\)

\(x_{i} = \mu_x - \sqrt{L + \lambda}\, \mathrm{col}_{i-L}(\boldsymbol{L})\) for \(i = L + 1, ..., 2L\)

This formula is reflected in the generateSigmaPoints function. We use

\[\gamma = \sqrt{L + \lambda}\]
std::vector<Eigen::Vector4f> generateSigmaPoints(
    const Eigen::Vector4f& x_est, const Eigen::Matrix4f& P_est, const float gamma) {
    const size_t L = 4;
    std::vector<Eigen::Vector4f> sigma_points;
    sigma_points.reserve(2*L + 1);

    // x_0 is the mean itself
    sigma_points.push_back(x_est);

    // Cholesky decomposition: P_est = P_sqrt * P_sqrt^T
    Eigen::Matrix4f P_sqrt ( P_est.llt().matrixL() );

    // x_i = mu + gamma * col_i(L), for i = 1, ..., L
    for (size_t i = 0; i < L; ++i) {
        Eigen::Vector4f pt = x_est + gamma * P_sqrt.col(i);
        sigma_points.push_back(pt);
    }

    // x_i = mu - gamma * col_{i-L}(L), for i = L+1, ..., 2L
    for (size_t i = 0; i < L; ++i) {
        Eigen::Vector4f pt = x_est - gamma * P_sqrt.col(i);
        sigma_points.push_back(pt);
    }

    return sigma_points;
}

Recovering Mean and Covariance

After we pass our sigma points through some nonlinear function \(y_i = f(x_i)\), where \(f\) can be the motion or observation model, we need to compute the mean and covariance of the new set of sigma points.

The weight vector for recovering the mean is

\[w_{m, i} = \left\{\begin{matrix} \frac{ \lambda }{\lambda+ L} & i = 0\\ \frac{ 1 }{2 (L + \lambda)}& i = 1, ..., 2L \end{matrix}\right.\]

The weight vector for recovering the covariance is

\[w_{c, i} = \left\{\begin{matrix} \frac{ \lambda }{\lambda+ L} + ( 1 - \alpha^2 + \beta) & i = 0\\ \frac{ 1 }{2 (L + \lambda)}& i = 1, ..., 2L \end{matrix}\right.\]
std::tuple<std::vector<float>, std::vector<float>, float>
    setupWeights(float nx, float alpha, float kappa, float beta) {
    // nx = L, the state dimension
    const float lamb = std::pow(alpha, 2) * (nx + kappa) - nx;
    const float gamma = std::sqrt(nx + lamb);

    const size_t n_sigma = 2 * static_cast<size_t>(nx) + 1;
    std::vector<float> wm(n_sigma, 0.0f);
    std::vector<float> wc(n_sigma, 0.0f);

    // weights for the centre sigma point
    wm[0] = lamb / (lamb + nx);
    wc[0] = ( lamb / (lamb + nx) ) + (1 - std::pow(alpha, 2) + beta);

    // the remaining 2L sigma points share the same weight
    for (size_t i = 1; i < n_sigma; ++i) {
        wm[i] = 1.0f / (2 * (nx + lamb));
        wc[i] = 1.0f / (2 * (nx + lamb));
    }

    return {wm, wc, gamma};
}
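The scaling parameters \(\alpha\), \(\beta\), and \(\kappa\) are tuning knobs rather than fixed constants. Values along the lines of \(\alpha \approx 10^{-3}\), \(\beta = 2\) (commonly cited as optimal for Gaussian priors), and \(\kappa = 0\) are typical starting points; the exact values below are an assumption for illustration, not something prescribed by the filter.

// Example parameter choices (assumptions for illustration, not required values)
const float nx = 4.0f;      // L, the state dimension
const float alpha = 0.001f; // controls the spread of the sigma points
const float kappa = 0.0f;   // secondary scaling parameter
const float beta = 2.0f;    // prior-knowledge term; 2 for Gaussian priors

auto [wm, wc, gamma] = setupWeights(nx, alpha, kappa, beta);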

Using our weights we can calculate the mean and covariance as follows:

\[\mu_y = \sum_{i=0}^{2L} w_{m, i} y_i\] \[\Sigma_{yy} = \sum_{i=0}^{2L} w_{c, i} (y_i - \mu_y) (y_i - \mu_y)^T\]

The weighted recombination of the sigma points is very similar for the motion model and the observation model; in the code, the additive noise matrix (Q or R) is used as the starting value of the covariance sum. As in the EKF, we then use a gain matrix K to update our state estimate.
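Because the recombination only differs in dimensionality between the two models, it could also be factored into a small templated helper. This is not part of the post's code, just a sketch of that idea:

#include <Eigen/Dense>
#include <utility>
#include <vector>

// Sketch: weighted mean and covariance of a set of sigma points,
// seeded with the additive noise matrix (Q for predict, R for update).
template <int N>
std::pair<Eigen::Matrix<float, N, 1>, Eigen::Matrix<float, N, N>>
recoverMeanAndCovariance(const std::vector<Eigen::Matrix<float, N, 1>>& pts,
                         const std::vector<float>& wm, const std::vector<float>& wc,
                         const Eigen::Matrix<float, N, N>& noise) {
    Eigen::Matrix<float, N, 1> mean = Eigen::Matrix<float, N, 1>::Zero();
    for (size_t i = 0; i < pts.size(); ++i)
        mean += wm[i] * pts[i];

    Eigen::Matrix<float, N, N> cov = noise;
    for (size_t i = 0; i < pts.size(); ++i) {
        Eigen::Matrix<float, N, 1> diff = pts[i] - mean;
        cov += wc[i] * diff * diff.transpose();
    }
    return {mean, cov};
}

The implementation below keeps these loops inline instead, which makes the correspondence with the equations easier to follow.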

void unscentedKalmanFilter(Eigen::Vector4f& x_est, Eigen::Matrix4f& P_est,
        const Eigen::Vector2f& u, const Eigen::Vector2f& z,
        const Eigen::Matrix4f& Q, const Eigen::Matrix2f& R,
        const float dt, const float gamma,
        const std::vector<float>& wm,
        const std::vector<float>& wc) {
    std::vector<Eigen::Vector4f> sigma_points;

    // STATE PREDICT
    // generate and propagate sigma points through the motion model
    sigma_points = generateSigmaPoints(x_est, P_est, gamma);
    for (auto& pt : sigma_points)
        pt = motionModel(pt, u, dt);

    // recombine sigma points to get predict mean and cov
    Eigen::Vector4f x_pred = Eigen::Vector4f::Zero(4);
    for (size_t i = 0; i < sigma_points.size(); ++i)
        x_pred += wm[i] * sigma_points[i];

    Eigen::Matrix4f P_pred = Q;
    for (size_t i = 0; i < sigma_points.size(); ++i) {
        Eigen::Vector4f diff = (sigma_points[i] - x_pred);
        P_pred += wc[i] * diff * diff.transpose();
    }

    // STATE CORRECT
    Eigen::Vector2f z_pred = observationModel(x_pred);
    Eigen::Vector2f y = z - z_pred;

    // generate sigma points, transform to observation domain
    sigma_points = generateSigmaPoints(x_pred, P_pred, gamma);
    std::vector<Eigen::Vector2f> z_sigma_points;
    for (auto& pt : sigma_points)
        z_sigma_points.emplace_back( observationModel(pt) );

    Eigen::Vector2f z_mean = Eigen::Vector2f::Zero(2);
    for (size_t i = 0; i < sigma_points.size(); ++i)
        z_mean += wm[i] * z_sigma_points[i];

    Eigen::Matrix2f cov_zz = R; // innovation covariance, seeded with measurement noise
    for (size_t i = 0; i < z_sigma_points.size(); ++i) {
        Eigen::Vector2f diff = (z_sigma_points[i] - z_mean);
        cov_zz += wc[i] * diff * diff.transpose();
    }

    Eigen::Matrix<float, 4, 2> cov_xz = Eigen::Matrix<float, 4, 2>::Zero();
    for (size_t i = 0; i < z_sigma_points.size(); ++i) {
        Eigen::Vector2f diffz = (z_sigma_points[i] - z_mean);
        Eigen::Vector4f diffx = (sigma_points[i] - x_pred);
        cov_xz += wc[i] * diffx * diffz.transpose();
    }

    auto K = cov_xz * cov_zz.inverse(); // (4x2) * (2,2) => (4,2)
    x_est = x_pred + K * y; // (4,1) + (4,2) * (2,1)
    P_est = P_pred - K * cov_zz * K.transpose(); // (4,4) - (4,2)*(2,2)*(2,4)
}
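To show how the pieces fit together, here is a rough driver sketch. The noise matrices, timestep, control input, and the stand-in measurement are placeholder assumptions; in a real run u would come from odometry and z from a sensor.

int main() {
    const float dt = 0.1f; // assumed timestep
    auto [wm, wc, gamma] = setupWeights(4.0f, 0.001f, 0.0f, 2.0f);

    Eigen::Vector4f x_est = Eigen::Vector4f::Zero();
    Eigen::Matrix4f P_est = Eigen::Matrix4f::Identity();
    Eigen::Matrix4f Q = 0.1f * Eigen::Matrix4f::Identity(); // process noise (assumed)
    Eigen::Matrix2f R = Eigen::Matrix2f::Identity();        // measurement noise (assumed)

    for (int k = 0; k < 100; ++k) {
        Eigen::Vector2f u(1.0f, 0.1f);                // constant velocity / yaw-rate input
        Eigen::Vector2f z = observationModel(x_est);  // stand-in for a real sensor reading
        unscentedKalmanFilter(x_est, P_est, u, z, Q, R, dt, gamma, wm, wc);
    }
    return 0;
}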

Sources

  • https://www.seas.harvard.edu/courses/cs281/papers/unscented.pdf
This post is licensed under CC BY 4.0 by the author.