Kalman Filter
Last updated
The Kalman filter implements belief computation for continuous state; it is not applicable to discrete or hybrid state spaces. The filter represents the belief by its moments parameterization: at time $t$, the belief is represented by the mean $\mu_t$ and the covariance $\Sigma_t$. Posteriors are Gaussian if the following three properties hold in addition to the Markov assumption.
The state transition probability $p(x_t \mid u_t, x_{t-1})$ must be a linear function in its arguments with added Gaussian noise:

$$
x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t \tag{1}
$$
The state $x_t$ and control $u_t$ are vectors; in our notation, they are column (vertical) vectors.
Here $A_t$ and $B_t$ are matrices: $A_t$ is of size $n \times n$ and $B_t$ is of size $n \times m$, where $n$ is the dimension of the state vector and $m$ is the dimension of the control vector. The random variable $\varepsilon_t$ has the same dimension as the state vector; its mean is zero and its covariance will be denoted $R_t$. It models the uncertainty introduced by the state transition. The mean of the posterior state is given by equation (1) and its covariance by $R_t$.
Before we perform the measurement update, we need to compute the Kalman gain $K_t$ from equation (3b), which specifies the degree to which the measurement is incorporated into the new state estimate. Then we use the gain to obtain the new state.
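Written out in the standard form (the original equation numbering is not preserved here, so the gain equation is the one the text calls 3b), the gain and measurement update are:

```latex
\begin{align*}
K_t &= \bar{\Sigma}_t C_t^T \left( C_t \bar{\Sigma}_t C_t^T + Q_t \right)^{-1} \\
\mu_t &= \bar{\mu}_t + K_t \left( z_t - C_t \bar{\mu}_t \right) \\
\Sigma_t &= \left( I - K_t C_t \right) \bar{\Sigma}_t
\end{align*}
```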
For simplicity's sake, I will omit the time dependence for the transformation matrices. Let's define three matrices `A`, `B`, and `C` using numpy. Note that `A` is the state transition model, `B` is the control input model, and `C` is the observation model.
If we were to express them in matrix form, they would look like the following.
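The concrete matrices did not survive in this copy of the post, so as a stand-in, here is a minimal 2-D position model (my own assumption): identity dynamics, identity control input, and direct observation of the state.

```python
import numpy as np

# Hypothetical 2-D position model -- the post's actual values are
# not preserved, so identities are used as a minimal assumption.
A = np.eye(2)  # state transition model, n x n
B = np.eye(2)  # control input model, n x m
C = np.eye(2)  # observation model, k x n
```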
Now we can put everything together and construct a Kalman filter algorithm.
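A minimal sketch of one predict/update cycle is given below. The function name and signature are my own; `R` and `Q` are the process and measurement noise covariances from the text.

```python
import numpy as np

def kalman_filter(x, x_cov, u, z, A, B, C, R, Q):
    """One predict/update cycle of the Kalman filter.

    x, x_cov : previous state mean and covariance
    u        : control input
    z        : measurement
    R, Q     : process and measurement noise covariances
    """
    # Prediction: propagate the belief through the linear dynamics.
    x_pred = A @ x + B @ u
    cov_pred = A @ x_cov @ A.T + R

    # Kalman gain: how strongly the measurement corrects the prediction.
    K = cov_pred @ C.T @ np.linalg.inv(C @ cov_pred @ C.T + Q)

    # Measurement update, using the innovation z - C @ x_pred.
    x_new = x_pred + K @ (z - C @ x_pred)
    cov_new = (np.eye(len(x)) - K @ C) @ cov_pred
    return x_new, cov_new
```

With identity models and equal noise, a single step pulls the estimate two-thirds of the way from the prediction toward the measurement, since the predicted covariance (2) is twice the measurement covariance (1).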
The measurement probability $p(z_t \mid x_t)$ must also be linear in its arguments, with added Gaussian noise:

$$
z_t = C_t x_t + \delta_t \tag{2}
$$
Here $C_t$ is a matrix of size $k \times n$, where $k$ is the dimension of the measurement vector $z_t$. The vector $\delta_t$ describes the measurement noise; its distribution is a multivariate Gaussian with zero mean and covariance $Q_t$.
The initial belief must be normally distributed, with mean $\mu_0$ and covariance $\Sigma_0$.
Given the arguments $\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, and $z_t$, we have the following update rules.
The predicted belief $\bar{\mu}_t$ and $\bar{\Sigma}_t$ is calculated to represent the belief $\overline{bel}(x_t)$ one time step later, but before incorporating the measurement $z_t$.
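In the standard form, consistent with the models $A_t$, $B_t$, and $R_t$ defined above, the prediction step is:

```latex
\begin{align*}
\bar{\mu}_t &= A_t \mu_{t-1} + B_t u_t \\
\bar{\Sigma}_t &= A_t \Sigma_{t-1} A_t^T + R_t
\end{align*}
```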
The key concept here is the innovation, which is the difference between the actual measurement and the expected measurement, denoted by $z_t - C_t \bar{\mu}_t$.
Suppose our robot starts at a known coordinate and we don't apply any external control to it. Let `x` denote the initial state mean, `x_cov` the initial state covariance, `u` the control input (zero here), and `u_cov` the process noise covariance. Let `Z` denote the measurement and `z_cov` the measurement noise covariance.
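The concrete numbers are not preserved in this copy, so the values below are my own illustrative assumptions: a 2-D position starting at the origin, no control, and a single noisy measurement. The predict/update equations from the text are applied inline.

```python
import numpy as np

# Hypothetical values -- the post's actual numbers are not preserved.
x = np.zeros(2)            # initial state mean
x_cov = np.eye(2)          # initial state covariance
u = np.zeros(2)            # no external control
u_cov = np.eye(2) * 0.1    # process noise covariance (R)
Z = np.array([0.5, -0.2])  # a single measurement
z_cov = np.eye(2) * 0.2    # measurement noise covariance (Q)

A = B = C = np.eye(2)      # identity models for this toy setup

# Prediction step.
x_pred = A @ x + B @ u
cov_pred = A @ x_cov @ A.T + u_cov

# Kalman gain and measurement update.
K = cov_pred @ C.T @ np.linalg.inv(C @ cov_pred @ C.T + z_cov)
x = x_pred + K @ (Z - C @ x_pred)
x_cov = (np.eye(2) - K @ C) @ cov_pred
# x is now pulled most of the way toward Z, since the predicted
# covariance (1.1) dominates the measurement covariance (0.2).
```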