Monte Carlo Localization


Algorithm

MCL (Monte Carlo Localization) is applicable to both the local and the global localization problem. It represents the belief $bel(x_t)$ by a set of particles. The algorithm itself is a small modification of the particle filter algorithm discussed previously.

$$
\begin{aligned}
&\overline{\chi}_t = \chi_t = \emptyset \\
&\text{for $m=1$ to $M$ do:} \\
&\qquad x^m_t = \text{sample\_motion\_model}(u_t, x^m_{t-1}) \\
&\qquad w^m_t = \text{measurement\_model}(z_t, x^m_t, m) \\
&\qquad \overline{\chi}_t = \overline{\chi}_t + \langle x^m_t, w^m_t \rangle \\
&\text{endfor} \\
&\text{for $m=1$ to $M$ do:} \\
&\qquad \text{draw $i$ with probability} \propto w^i_t \\
&\qquad \text{add $x^i_t$ to $\chi_t$} \\
&\text{endfor} \\
&\text{return $\chi_t$}
\end{aligned}
$$

The algorithm is obtained by substituting the appropriate motion and perceptual models into the particle filter algorithm. MCL represents the belief by a set of $M$ particles $\chi_t = \{x^1_t, x^2_t, \dots, x^M_t\}$. The motion model takes a control command and produces a new state for each particle. The measurement model assigns an importance weight to each particle based on the likelihood of the measurement given the predicted state and the map of the environment, i.e. the landmarks. Both models can be implemented by any of the motion and measurement models described in the Robot Perception section, as the sketch below illustrates.
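As a concrete illustration, here is a minimal Python sketch of one MCL update. The callbacks `sample_motion_model` and `measurement_model` are placeholders for whichever models you plug in; their names and signatures are assumptions for this sketch, not part of the original listing.

```python
import random

def mcl_update(particles, u, z, m, sample_motion_model, measurement_model):
    """One MCL step: propagate each particle, weight it, then resample.

    particles -- list of states x_{t-1}^m, e.g. (x, y, theta) tuples
    u, z, m   -- control, measurement, and map for time t
    """
    M = len(particles)
    # Prediction + correction: build the weighted set chi_bar_t.
    predicted = [sample_motion_model(u, x) for x in particles]
    weights = [measurement_model(z, x, m) for x in predicted]
    # Resampling: draw M particles with probability proportional to weight.
    return random.choices(predicted, weights=weights, k=M)
```

`random.choices` draws with replacement, which matches the resampling step of the listing above.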

Properties

MCL can approximate almost any distribution of practical importance. Increasing the total number of particles increases the accuracy of the approximation. The number of particles $M$ is a parameter that enables the user to trade off the accuracy of the computation against the computational resources necessary to run MCL. A common strategy for setting $M$ is to keep sampling until the next pair of $u_t$ and $z_t$ has arrived, as sketched below.
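A rough Python sketch of that strategy, assuming a hypothetical non-blocking `next_data_ready()` check for the arrival of the next $(u_t, z_t)$ pair; the bounds `M_min` and `M_max` are illustrative safeguards, not part of the text.

```python
import random

def mcl_update_anytime(particles, u, z, m, sample_motion_model,
                       measurement_model, next_data_ready,
                       M_min=100, M_max=100_000):
    """Resample until new data arrives, so M adapts to the time budget."""
    predicted = [sample_motion_model(u, x) for x in particles]
    weights = [measurement_model(z, x, m) for x in predicted]

    chi = []
    for _ in range(M_max):  # hard cap in case data is slow to arrive
        if len(chi) >= M_min and next_data_ready():
            break  # the next (u_t, z_t) pair is here; stop sampling
        chi.append(random.choices(predicted, weights=weights, k=1)[0])
    return chi
```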

Random Particle Augmented MCL

MCL, in its present form, solves the global localization problem but cannot recover from robot kidnapping, or global localization failures. We can solve this problem by introducing random particles into the set on every iteration. The questions are how many random particles to add and when to add them.

One idea is to add particles based on some estimate of the localization performance. We need to monitor the probability of sensor measurements,

$$
p(z_t \mid z_{1:t-1}, u_{1:t}, m)
$$

and relate it to the average measurement probability. By definition, an importance weight is a stochastic estimate of this probability, so the average weight over all particles approximates the desired probability:

$$
\frac{1}{M}\sum_{m=1}^{M} w^m_t \approx p(z_t \mid z_{1:t-1}, u_{1:t}, m)
$$

There exist multiple reasons why the measurement probability may be low. The amount of sensor noise might be unnaturally high, or the particles may still be spread out during a global localization phase. For these reasons, it is a good idea to maintain a short-term average of the measurement likelihood, and relate it to the long-term average when determining the number of random samples.

The re-sampling itself proceeds in the familiar way, but the probability of adding a random sample takes into consideration the divergence between the short-term and the long-term average of the measurement likelihood. If the short-term likelihood is better than or equal to the long-term likelihood, no random sample is added. However, if the short-term likelihood is much worse than the long-term one, random samples are added in proportion to the quotient of these values. In this way, a sudden decay in measurement likelihood induces an increased number of random samples.

$$
\begin{aligned}
&\text{static $\omega_{slow}$, $\omega_{fast}$} \\
&\overline{\chi}_t = \chi_t = \emptyset, \quad \omega_{avg} = 0 \\
&\text{for $m=1$ to $M$ do:} \\
&\qquad x^m_t = \text{sample\_motion\_model}(u_t, x^m_{t-1}) \\
&\qquad w^m_t = \text{measurement\_model}(z_t, x^m_t, m) \\
&\qquad \overline{\chi}_t = \overline{\chi}_t + \langle x^m_t, w^m_t \rangle \\
&\qquad \omega_{avg} = \omega_{avg} + \frac{w^m_t}{M} \\
&\text{endfor} \\
&\omega_{slow} = \omega_{slow} + \alpha_{slow}(\omega_{avg} - \omega_{slow}) \\
&\omega_{fast} = \omega_{fast} + \alpha_{fast}(\omega_{avg} - \omega_{fast}) \\
&\text{for $m=1$ to $M$ do:} \\
&\qquad \text{with probability $\max\{0.0,\, 1.0 - \frac{\omega_{fast}}{\omega_{slow}}\}$: add a random particle to $\chi_t$} \\
&\qquad \text{else: draw $i$ with probability} \propto w^i_t \text{ and add $x^i_t$ to $\chi_t$} \\
&\text{endfor} \\
&\text{return $\chi_t$}
\end{aligned}
$$

The algorithm requires that $0 \leq \alpha_{slow} \ll \alpha_{fast}$. The parameters $\alpha_{slow}$ and $\alpha_{fast}$ are decay rates for the exponential filters that estimate the long-term and short-term averages, respectively. During the re-sampling process, a random sample is added with the following probability.

$$
\max\left\{0.0,\; 1.0 - \frac{\omega_{fast}}{\omega_{slow}}\right\}
$$
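To tie the pieces together, here is a minimal Python sketch of Augmented MCL under the same assumed callbacks as the earlier sketch; `random_particle()` (e.g. a pose drawn uniformly over the map) and the default $\alpha$ values are illustrative assumptions, not values prescribed by the text.

```python
import random

class AugmentedMCL:
    """Sketch of Random Particle Augmented MCL (assumed helper callbacks)."""

    def __init__(self, sample_motion_model, measurement_model, random_particle,
                 alpha_slow=0.001, alpha_fast=0.1):
        # Must satisfy 0 <= alpha_slow << alpha_fast.
        self.sample_motion_model = sample_motion_model
        self.measurement_model = measurement_model
        self.random_particle = random_particle  # e.g. uniform pose over the map
        self.alpha_slow = alpha_slow
        self.alpha_fast = alpha_fast
        self.w_slow = 0.0  # the "static" long-term average
        self.w_fast = 0.0  # the "static" short-term average

    def update(self, particles, u, z, m):
        M = len(particles)
        predicted = [self.sample_motion_model(u, x) for x in particles]
        weights = [self.measurement_model(z, x, m) for x in predicted]
        w_avg = sum(weights) / M

        # Exponential filters tracking long-term and short-term likelihood.
        self.w_slow += self.alpha_slow * (w_avg - self.w_slow)
        self.w_fast += self.alpha_fast * (w_avg - self.w_fast)

        # Inject random particles when the short-term likelihood drops well
        # below the long-term average (guarding against division by zero).
        p_rand = max(0.0, 1.0 - self.w_fast / self.w_slow) if self.w_slow > 0 else 0.0

        chi = []
        for _ in range(M):
            if random.random() < p_rand:
                chi.append(self.random_particle())
            else:
                chi.append(random.choices(predicted, weights=weights, k=1)[0])
        return chi
```

A sudden drop in $\omega_{fast}$ relative to $\omega_{slow}$ raises `p_rand`, so random particles are injected exactly when the measurement likelihood decays, which is what lets the filter recover from kidnapping.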