Extended Kalman Filter

The problem with the standard Kalman filter is that it assumes linear state transitions and measurements. In reality, state transitions and measurements are rarely linear. For example, a robot that moves with constant translational and rotational velocity travels along a circular arc, which cannot be described by a linear state transition. The extended Kalman filter relaxes the linearity assumption.

We suppose that the state transition probability and the measurement probability are governed by nonlinear functions.

$$x_{t} = g(u_{t}, x_{t-1}) + \epsilon_{t} \\ z_{t} = h(x_{t}) + \delta_{t}$$

The function $g$ replaces the matrices $A_{t}$ and $B_{t}$, and the function $h$ replaces the matrix $C_{t}$. Unfortunately, with arbitrary nonlinear functions the belief is no longer a Gaussian; in fact, performing the belief update exactly is usually impossible in this case. The EKF (extended Kalman filter) instead calculates a Gaussian approximation to the true belief.
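To make this concrete, here is a minimal sketch of what nonlinear $g$ and $h$ might look like, assuming a planar robot with state $(x, y, \theta)$, a control $u = (v, \omega)$ of translational and rotational velocities (the circular-motion example above), and a range-bearing sensor observing a known landmark. The state, control, and sensor choices are illustrative assumptions, not something specified in the text.

```python
import numpy as np

def g(u, x_prev, dt=1.0):
    """Hypothetical nonlinear motion model: a robot with state (x, y, theta)
    driven by velocities (v, w) moves along a circular arc."""
    x, y, theta = x_prev
    v, w = u
    if abs(w) < 1e-9:  # near-zero rotation: straight-line limit of the arc
        return np.array([x + v * dt * np.cos(theta),
                         y + v * dt * np.sin(theta),
                         theta])
    return np.array([x - (v / w) * np.sin(theta) + (v / w) * np.sin(theta + w * dt),
                     y + (v / w) * np.cos(theta) - (v / w) * np.cos(theta + w * dt),
                     theta + w * dt])

def h(x, landmark=np.array([5.0, 3.0])):
    """Hypothetical nonlinear measurement model: range and bearing to a
    known landmark; both are nonlinear in the state."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    return np.array([np.hypot(dx, dy),            # range
                     np.arctan2(dy, dx) - x[2]])  # bearing relative to heading
```

The noise terms $\epsilon_{t}$ and $\delta_{t}$ would be added on top of these deterministic functions.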

Linearization

The key idea underlying the EKF approximation is called linearization. If we were to use Monte Carlo methods to propagate the belief through $g$, we would have to pass hundreds of thousands of points through $g$ and then compute their mean and covariance, which is extremely inefficient. Instead, the EKF approximates $g$ and $h$ by linear functions using a first-order Taylor expansion.

Taylor Expansion

Taylor expansion constructs a linear approximation to a function $g$ from $g$'s value and slope. The slope is given by the following expression.

$$g'(u_{t}, x_{t-1}) = \frac{\partial g(u_{t}, x_{t-1})}{\partial x_{t-1}}$$

The value and slope of $g$ depend on the arguments $u_{t}$ and $x_{t-1}$, so we should choose the most probable values of those arguments to perform the Taylor expansion with. Since the most probable value of $x_{t-1}$ under the posterior is its mean $\mu_{t-1}$, $g$ is approximated by its value at $\mu_{t-1}$ and $u_{t}$.

$$g(u_{t}, x_{t-1}) \approx g(u_{t}, \mu_{t-1}) + g'(u_{t}, \mu_{t-1})(x_{t-1} - \mu_{t-1}) = g(u_{t}, \mu_{t-1}) + G_{t}(x_{t-1} - \mu_{t-1})$$

Notice that $G_{t}$ is an $n \times n$ matrix, with $n$ denoting the dimension of the state. This matrix is often called the Jacobian. The value of the Jacobian depends on $u_{t}$ and $\mu_{t-1}$, so the matrix changes over time. The EKF applies the exact same linearization to the measurement function $h$. Since the measurement update happens after the prediction step, the expansion point here is the predicted mean $\overline{\mu}_{t}$.

$$h(x_{t}) \approx h(\overline{\mu}_{t}) + h'(\overline{\mu}_{t})(x_{t} - \overline{\mu}_{t}) = h(\overline{\mu}_{t}) + H_{t}(x_{t} - \overline{\mu}_{t})$$
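To see the Taylor expansion in code, the sketch below approximates the Jacobians $G_{t}$ and $H_{t}$ by central finite differences around the expansion points $\mu_{t-1}$ and $\overline{\mu}_{t}$. Real EKF implementations usually derive these Jacobians analytically; `numerical_jacobian` is a hypothetical helper illustrating the "slope" being computed, not a library function.

```python
import numpy as np

def numerical_jacobian(f, x0, eps=1e-6):
    """Approximate the Jacobian of f at x0 with central finite differences,
    i.e. the 'slope' used by the first-order Taylor expansion."""
    x0 = np.asarray(x0, dtype=float)
    f0 = np.asarray(f(x0))
    J = np.zeros((f0.size, x0.size))
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = eps
        J[:, i] = (np.asarray(f(x0 + step)) - np.asarray(f(x0 - step))) / (2 * eps)
    return J

# G_t linearizes g around the previous mean; H_t linearizes h around the
# predicted mean (available only after the prediction step):
#   G_t = numerical_jacobian(lambda x: g(u, x), mu_prev)
#   H_t = numerical_jacobian(h, mu_bar)
```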

EKF Algorithm

Now, if we put everything together, we have the EKF algorithm. Equation (1) performs the prediction step.

$$\tag{1} \overline{\mu}_{t} = g(u_{t}, \mu_{t-1}) \\ \overline{\Sigma}_{t} = G_{t}\Sigma_{t-1}G^{T}_{t} + R_{t}$$

Equation (2) performs the Kalman gain calculation.

$$\tag{2} K_{t} = \overline{\Sigma}_{t}H_{t}^{T}(H_{t}\overline{\Sigma}_{t}H_{t}^{T} + Q_{t})^{-1}$$

Equation (3) performs the measurement incorporation step.

$$\tag{3} \mu_{t} = \overline{\mu}_{t} + K_{t}(z_{t} - h(\overline{\mu}_{t})) \\ \Sigma_{t} = (I - K_{t}H_{t})\overline{\Sigma}_{t}$$

Finally,

$$\text{return}\;\mu_{t}, \Sigma_{t}$$
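Translating equations (1) through (3) directly into code, one EKF iteration might look like the sketch below. The signature, passing the models $g$, $h$ and Jacobian callbacks `G_fn`, `H_fn` explicitly along with the noise covariances $R$ and $Q$, is an assumption made for illustration.

```python
import numpy as np

def ekf_step(mu_prev, Sigma_prev, u, z, g, h, G_fn, H_fn, R, Q):
    """One EKF iteration following equations (1)-(3); G_fn(u, mu) and
    H_fn(mu) return the Jacobians of g and h at the given points."""
    # (1) Prediction: pass the mean through g, propagate covariance with G_t
    mu_bar = g(u, mu_prev)
    G = G_fn(u, mu_prev)
    Sigma_bar = G @ Sigma_prev @ G.T + R

    # (2) Kalman gain, with h linearized at the predicted mean
    H = H_fn(mu_bar)
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)

    # (3) Measurement incorporation
    mu = mu_bar + K @ (z - h(mu_bar))
    Sigma = (np.eye(mu.size) - K @ H) @ Sigma_bar

    return mu, Sigma
```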