Mislocalization Heatmap

General Math Concepts

Joint Distribution

The joint distribution of two random variables $X$ and $Y$ is written as follows.

$$p(x, y) = p(X = x \text{ and } Y = y)$$

If they are independent,

$$p(x, y) = p(x)\,p(y)$$

If one is conditioned on the other,

$$p(x \mid y) = p(X = x \mid Y = y)$$

Theorem of Total Probability

$$p(x) = \sum_y p(x \mid y)\,p(y) = \int p(x \mid y)\,p(y)\,dy$$
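As a quick numerical illustration of the discrete case (all numbers below are hypothetical):

```python
import numpy as np

# Hypothetical example: Y is the weather (sunny, rainy) and X is the
# event that the robot's solar panel charges. All numbers are made up.
p_y = np.array([0.7, 0.3])            # p(y): sunny, rainy
p_x_given_y = np.array([0.9, 0.2])    # p(X = charges | y) for each y

# Theorem of total probability: p(x) = sum_y p(x | y) p(y)
p_x = np.sum(p_x_given_y * p_y)
print(p_x)  # 0.9*0.7 + 0.2*0.3 = 0.69
```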

Bayes' Rule

$$p(x \mid y) = \frac{p(y \mid x)\,p(x)}{p(y)}$$

Since $x$ and $y$ are conditioned on each other, we can use the theorem of total probability to expand the denominator $p(y)$ in Bayes' rule.

In discrete form,

$$p(x \mid y) = \frac{p(y \mid x)\,p(x)}{\sum_{x'} p(y \mid x')\,p(x')}$$

In integral form,

$$p(x \mid y) = \frac{p(y \mid x)\,p(x)}{\int p(y \mid x')\,p(x')\,dx'}$$
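The discrete form can be sketched in a few lines of Python. The prior and likelihood values below are hypothetical:

```python
import numpy as np

# Hypothetical example: a prior over two states and the likelihood of
# an observed measurement y under each state. Numbers are made up.
prior = np.array([0.5, 0.5])          # p(x)
likelihood = np.array([0.8, 0.1])     # p(y | x) for the observed y

# Bayes' rule in discrete form: the denominator is just the
# normalizer that makes the posterior sum to one.
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()
print(posterior)  # approximately [0.889, 0.111]
```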

Prior & Posterior Distribution

If $x$ is a quantity that we would like to infer from $y$, then

  • $p(x)$ is called the prior probability distribution.

  • $p(x \mid y)$ is called the posterior probability distribution over $X$.

  • $p(y \mid x)$ is called the generative model.

In general, $Y$ is called the data, e.g. laser range finder measurements or control actions.

Glossary

  • Let $x_t$ denote the robot state at time $t$.

  • Let $u_t$ denote the control action we apply to the robot at time $t$.

  • Let $z_t$ denote the measurement at time $t$.

State Transition Probability

The state transition probability describes the likelihood of arriving at a new state $x_t$ given the previous state $x_{t-1}$ and the control action $u_t$.

$$p(x_t \mid x_{t-1}, u_t)$$

Measurement Probability

The measurement probability describes the likelihood of observing a measurement $z_t$ given the current state $x_t$.

$$p(z_t \mid x_t)$$
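One common concrete choice for a range sensor is a Gaussian likelihood centered on the range predicted from the state. The sketch below is an illustration of that choice; the noise level is a hypothetical assumption, not something fixed by the theory:

```python
import math

# A Gaussian measurement model p(z_t | x_t): the likelihood of a
# measured range z given the range predicted from the state.
# sigma (the sensor noise) is a hypothetical value.
def measurement_probability(z, predicted_range, sigma=0.5):
    """Gaussian likelihood of measuring z given the expected range."""
    return math.exp(-0.5 * ((z - predicted_range) / sigma) ** 2) / (
        sigma * math.sqrt(2.0 * math.pi)
    )

# A measurement close to the prediction is more likely than one far away.
print(measurement_probability(2.1, 2.0) > measurement_probability(3.0, 2.0))  # True
```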

Belief

A belief reflects the robot's internal knowledge about the state of the environment. A belief distribution assigns a probability to each possible hypothesis with regard to the true state. Belief distributions are posterior probabilities over state variables conditioned on the available data.

Using zero indices, a belief is described as follows.

$$bel(x_t) = p(x_t \mid z_{0:t}, u_{0:t})$$

For each time step, before we incorporate the measurement data, we would like to make a prediction. The prediction belief is described as follows.

$$\overline{bel}(x_t) = p(x_t \mid z_{0:t-1}, u_{0:t})$$

Calculating a belief from a prediction belief is called correction or measurement update.

$$\text{measurement update}: \overline{bel}(x_t) \to bel(x_t)$$

Bayes Filter

The general Bayes filter involves two steps.

  1. Generate a prediction of the current state $x_t$ using the previous state $x_{t-1}$ and the control action $u_t$.

  2. Perform a correction, also known as the measurement update, by incorporating $z_t$.

Prediction

Formally speaking, it is impossible to know the true state $x_t$; at best, we can describe what we know about the current or previous state as a probability density function, denoted $bel(x_t)$.

$$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\;bel(x_{t-1})\,dx_{t-1} = \text{control\_update}(u_t, x_{t-1})$$

The prediction step is also called the control update.
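The control update can be sketched on a discrete 1-D grid of states, where the integral above becomes a sum over previous cells. The motion model below (move one cell right with probability 0.8, stay put otherwise) is hypothetical:

```python
import numpy as np

# A minimal sketch of the control update on a discrete 1-D grid.
# For each cell, sum the probability mass that stays and the mass
# that moves in from the cell to the left. Mass that would move off
# the right edge of the grid is simply lost in this toy model.
def control_update(bel, p_move=0.8):
    bel_bar = np.zeros_like(bel)
    for i in range(len(bel)):
        stay = bel[i] * (1.0 - p_move)
        moved = bel[i - 1] * p_move if i > 0 else 0.0
        bel_bar[i] = stay + moved
    return bel_bar

bel = np.array([1.0, 0.0, 0.0])   # certain the robot starts in cell 0
print(control_update(bel))         # [0.2, 0.8, 0.0]
```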

Measurement Update

As noted before, the final belief is a probability density function that tells you the probability of the random variable $X_t$ taking the value $x_t$.

$$bel(x_t) = \frac{p(z_t \mid x_t)\;\overline{bel}(x_t)}{\int p(z_t \mid x'_t)\;\overline{bel}(x'_t)\;dx'_t} = \text{measurement\_update}(z_t, x_t)$$
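On the same discrete grid, the measurement update is just an elementwise multiply followed by renormalization. The predicted belief and likelihood values below are hypothetical:

```python
import numpy as np

# A minimal sketch of the measurement update on a discrete 1-D grid:
# multiply the predicted belief by the measurement likelihood for each
# cell, then renormalize so the belief sums to one.
def measurement_update(z_likelihood, bel_bar):
    bel = z_likelihood * bel_bar
    return bel / bel.sum()

bel_bar = np.array([0.2, 0.8, 0.0])        # hypothetical predicted belief
z_likelihood = np.array([0.1, 0.6, 0.1])   # hypothetical p(z_t | x_t) per cell
print(measurement_update(z_likelihood, bel_bar))  # [0.04, 0.96, 0.0]
```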

MCL Particle Filter

The Monte Carlo Localization (MCL) particle filter is an algorithm derived from the Bayes filter, suitable for representing beliefs that cannot be modeled by a Gaussian or other parametric models.

A parametric model is a class of probability distributions that has a finite number of parameters.

Particle filters represent beliefs with a set of particles. The algorithm usually involves four major steps.

  1. Initialize a set of $M$ particles.

  2. Iterate through each particle, for $m = 1$ to $m = M$.

    1. Perform the control update on particle $p_m$.

    2. Perform the measurement update on particle $p_m$.

    3. Compute the weight $w_m$ of the particle.

    4. Add $p_m$ to a sample set.

  3. Iterate $M$ times.

    1. Draw $p_m$ from the sample set, with replacement, with probability proportional to $w_m$.

    2. Add $p_m$ to the final sample set.

  4. Return the final sample set, which should have length $M$.

Repeat steps 2 to 4 for subsequent control and measurement updates.
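One iteration of the four steps above might be sketched as follows. The 1-D corridor state, Gaussian motion noise, and the measurement likelihood function are all hypothetical choices made for illustration:

```python
import random

# A minimal sketch of one MCL iteration on a 1-D corridor.
def mcl_step(particles, u, z, measurement_likelihood, motion_noise=0.1):
    weighted = []
    for p in particles:
        # Step 2.1: control update -- sample from the motion model.
        p_new = p + u + random.gauss(0.0, motion_noise)
        # Steps 2.2-2.3: measurement update -- weight by p(z | x).
        w = measurement_likelihood(z, p_new)
        weighted.append((p_new, w))
    # Step 3: resample M particles, with replacement, with probability
    # proportional to weight. Step 4: return the resampled set.
    states = [s for s, _ in weighted]
    weights = [w for _, w in weighted]
    return random.choices(states, weights=weights, k=len(particles))

# Usage: particles spread over the corridor, a measurement suggesting
# the robot is near x = 5.0 (likelihood model is made up).
likelihood = lambda z, x: 1.0 / (1.0 + (z - x) ** 2)
particles = [random.uniform(0.0, 10.0) for _ in range(200)]
particles = mcl_step(particles, u=0.0, z=5.0, measurement_likelihood=likelihood)
print(len(particles))  # 200
```

After resampling, particles cluster around states consistent with the measurement, which is how the filter concentrates belief without assuming any parametric form.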
