Mislocalization Heatmap

General Math Concepts

Joint Distribution

The joint distribution of two random variables $X$ and $Y$ is written as follows.

$$p(x, y) = p(X = x \text{ and } Y = y)$$

If they are independent,

$$p(x, y) = p(x)\,p(y)$$

The probability of $X$ conditioned on $Y$ is written as

$$p(x \mid y) = p(X = x \mid Y = y)$$

Theorem of Total Probability

$$p(x) = \sum_y p(x \mid y)\,p(y) = \int p(x \mid y)\,p(y)\,dy$$

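As a quick numeric illustration of the discrete form (the two weather states and all probabilities below are invented for this example):

```python
# Invented two-state example: marginalize p(x) = sum_y p(x | y) p(y).
p_y = {"sunny": 0.7, "rainy": 0.3}           # p(y)
p_x_given_y = {"sunny": 0.9, "rainy": 0.2}   # p(x | y) for one fixed outcome x

p_x = sum(p_x_given_y[y] * p_y[y] for y in p_y)
print(p_x)  # 0.9 * 0.7 + 0.2 * 0.3 = 0.69
```
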
Bayes' Rule

$$p(x \mid y) = \frac{p(y \mid x)\,p(x)}{p(y)}$$

Using the theorem of total probability, we can expand the denominator $p(y)$ and express Bayes' rule without it.

In discrete form,

$$p(x \mid y) = \frac{p(y \mid x)\,p(x)}{\sum_{x^\prime} p(y \mid x^\prime)\,p(x^\prime)}$$

In integral form,

$$p(x \mid y) = \frac{p(y \mid x)\,p(x)}{\int p(y \mid x^\prime)\,p(x^\prime)\,dx^\prime}$$

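Continuing the invented example above, a minimal sketch of the discrete form, where the denominator is computed by the total-probability expansion:

```python
# Invented example: infer the weather x from a fixed observation y.
prior = {"sunny": 0.7, "rainy": 0.3}        # p(x)
likelihood = {"sunny": 0.9, "rainy": 0.2}   # p(y | x)

# p(y) = sum_{x'} p(y | x') p(x')
evidence = sum(likelihood[x] * prior[x] for x in prior)

posterior = {x: likelihood[x] * prior[x] / evidence for x in prior}
print(posterior)  # {'sunny': 0.913..., 'rainy': 0.086...}
```
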
Prior & Posterior Distribution

If $x$ is a quantity that we would like to infer from $y$, then

  • $p(x)$ is called the prior probability distribution.

  • $p(x \mid y)$ is called the posterior probability distribution over $X$.

  • $p(y \mid x)$ is called the generative model.

In general, $y$ is called the data, e.g. laser range finder measurements or control actions.

Glossary

  • Let $x_t$ denote the robot state at time $t$.

  • Let $u_t$ denote the control action we apply to the robot at time $t$.

  • Let $z_t$ denote the measurement at time $t$.

State Transition Probability

The state transition probability describes the likelihood of arriving at a new state $x_t$ given the previous state $x_{t-1}$ and control action $u_t$.

$$p(x_t \mid x_{t-1}, u_t)$$

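For concreteness, here is one possible transition model (an assumption for illustration, not from the text): a robot on a 1-D line whose commanded displacement $u_t$ is corrupted by additive Gaussian noise.

```python
import math

MOTION_NOISE = 0.5  # assumed standard deviation of the motion noise

def transition_prob(x_t, x_prev, u_t):
    """p(x_t | x_{t-1}, u_t): Gaussian density centered at x_{t-1} + u_t."""
    z = (x_t - (x_prev + u_t)) / MOTION_NOISE
    return math.exp(-0.5 * z * z) / (MOTION_NOISE * math.sqrt(2 * math.pi))
```
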
Measurement Probability

The measurement probability describes the likelihood of observing a set of measurements $z_t$ given the current state $x_t$.

$$p(z_t \mid x_t)$$

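A matching sketch for the same hypothetical 1-D robot: a sensor that reads the position directly with additive Gaussian noise (again, an assumed model for illustration).

```python
import math

MEAS_NOISE = 0.8  # assumed standard deviation of the sensor noise

def measurement_prob(z_t, x_t):
    """p(z_t | x_t): Gaussian density centered at the true position x_t."""
    z = (z_t - x_t) / MEAS_NOISE
    return math.exp(-0.5 * z * z) / (MEAS_NOISE * math.sqrt(2 * math.pi))
```
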
Belief

A belief reflects the robot's internal knowledge about the state of the environment. A belief distribution assigns a probability to each possible hypothesis with regard to the true state. Belief distributions are posterior probabilities over state variables conditioned on the available data.

Using zero indices, a belief is described as follows.

$$bel(x_t) = p(x_t \mid z_{0:t}, u_{0:t})$$

For each time step, before we incorporate the measurement data, we would like to make a prediction. The prediction belief is described as follows.

$$\overline{bel}(x_t) = p(x_t \mid z_{0:t-1}, u_{0:t})$$

Calculating a belief from a prediction belief is called correction or measurement update.

$$\text{measurement update}: \overline{bel}(x_t) \to bel(x_t)$$

Bayes Filter

The general Bayes filter involves two steps.

  1. Generate a prediction of the current state $x_t$ using the previous state $x_{t-1}$ and control action $u_t$.

  2. Perform a correction, also known as a measurement update, by incorporating the measurement $z_t$.

Prediction

Formally speaking, it is impossible to know the true state $x_t$; at best we can describe what we know about the current or previous state as a probability density function, denoted $bel(x_t)$.

$$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\,bel(x_{t-1})\,dx_{t-1} = \text{control\_update}(u_t, x_{t-1})$$

The prediction step is also called the control update.

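A minimal sketch of the control update over a discretized 1-D state space, approximating the integral above with a sum over grid cells (grid, dx, and transition_prob are assumptions carried over from the earlier sketches):

```python
def control_update(bel_prev, grid, u_t, dx):
    """Approximate bel_bar(x_t) = integral of p(x_t | u_t, x_{t-1}) bel(x_{t-1})."""
    return [
        sum(transition_prob(x_t, x_prev, u_t) * b
            for x_prev, b in zip(grid, bel_prev)) * dx
        for x_t in grid
    ]
```
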
Measurement Update

As noted before, the final belief is a probability density function that tells you the probability of the random variable $X$ taking the value $x_t$.

$$bel(x_t) = \frac{p(z_t \mid x_t)\,\overline{bel}(x_t)}{\int p(z_t \mid x_t^\prime)\,\overline{bel}(x_t^\prime)\,dx_t^\prime} = \text{measurement\_update}(z_t, x_t)$$

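And the corresponding measurement update on the same assumed grid, with the denominator computed as a normalizer:

```python
def measurement_update(z_t, bel_bar, grid, dx):
    """Reweight the prediction by p(z_t | x_t), then normalize."""
    unnormalized = [measurement_prob(z_t, x_t) * b
                    for x_t, b in zip(grid, bel_bar)]
    eta = sum(unnormalized) * dx  # the integral in the denominator
    return [u / eta for u in unnormalized]
```
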
MCL Particle Filter

The Monte Carlo Localization (MCL) particle filter is an algorithm derived from the Bayes filter, suitable for representing beliefs that cannot be modeled by Gaussian or other parametric models.

A parametric model is a class of probability distributions that has a finite number of parameters.

Particle filters represent beliefs by a cluster of particles. The algorithm usually involves four major steps; a minimal code sketch follows the list.

  1. Initialize a set of $M$ particles.

  2. Iterate through each particle, for $m = 1$ to $m = M$.

    1. Perform the control update on particle $p_m$.

    2. Perform the measurement update on particle $p_m$.

    3. Compute the weight $w_m$ of the particle.

    4. Add $p_m$ to a sample set.

  3. Iterate $M$ times.

    1. Draw $p_m$ from the sample set, with replacement, with probability proportional to $w_m$.

    2. Add $p_m$ to the final sample set.

  4. Return the final sample set, which should have length $M$.

Repeat steps 2 to 4 for subsequent control and measurement updates.

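Below is a minimal sketch of one MCL iteration for the hypothetical 1-D robot used throughout. It reuses the assumed MOTION_NOISE and measurement_prob from the earlier sketches; steps 2.1 to 2.3 collapse into a sampled motion step plus a weight computation.

```python
import random

def mcl_step(particles, u_t, z_t):
    """One MCL iteration: control update, weighting, then resampling."""
    sample_set, weights = [], []

    # Step 2: move each particle with sampled motion noise, then weight it
    # by the measurement model.
    for p in particles:
        p_new = p + u_t + random.gauss(0.0, MOTION_NOISE)  # control update
        sample_set.append(p_new)
        weights.append(measurement_prob(z_t, p_new))       # measurement weight

    # Steps 3-4: draw M particles with replacement, proportional to weight.
    return random.choices(sample_set, weights=weights, k=len(particles))

# Step 1: initialize M particles, e.g. uniformly over an assumed 0..10 workspace.
particles = [random.uniform(0.0, 10.0) for _ in range(1000)]
particles = mcl_step(particles, u_t=1.0, z_t=3.2)  # repeat per time step
```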