Monte Carlo Localization
MCL (Monte Carlo Localization) is applicable to both the local and the global localization problems. It represents the belief by a set of particles. The algorithm itself is a small modification of the particle filter algorithm discussed previously.
The algorithm is obtained by substituting the appropriate probabilistic motion model and perceptual model into the particle filter algorithm. MCL represents the belief $bel(x_t)$ by a set of $M$ particles $\mathcal{X}_t = \{x_t^{[1]}, x_t^{[2]}, \dots, x_t^{[M]}\}$. The motion model takes a control command $u_t$ and produces a new state for each particle. The measurement model assigns an importance weight $w_t^{[m]}$ to each particle based on the likelihood of the measurement $z_t$ given the newly predicted state and the map environment, i.e. the landmarks. The motion and measurement models can be implemented by any of the motion/measurement models described in the Robot Perception section.
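As a minimal sketch, one MCL iteration could look like the following Python snippet. The `sample_motion_model` and `measurement_model` arguments are hypothetical placeholders for whichever motion and measurement models from the Robot Perception section are used.

```python
import random

def mcl(particles, u, z, m, sample_motion_model, measurement_model):
    """One iteration of Monte Carlo Localization (sketch).

    particles: list of states x_{t-1}^[i]
    u: control command u_t
    z: measurement z_t
    m: map of the environment (e.g. landmark positions)
    sample_motion_model(u, x, m): samples x_t ~ p(x_t | u_t, x_{t-1})
    measurement_model(z, x, m):   returns the likelihood p(z_t | x_t, m)
    """
    predicted, weights = [], []

    # Prediction + correction: move every particle with the motion model,
    # then weight it by how well it explains the current measurement.
    for x in particles:
        x_new = sample_motion_model(u, x, m)
        w = measurement_model(z, x_new, m)
        predicted.append(x_new)
        weights.append(w)

    # Resampling: draw M particles with probability proportional to weight.
    return random.choices(predicted, weights=weights, k=len(particles))
```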
MCL can approximate almost any distribution of practical importance. Increasing the total number of particles increases the accuracy of the approximation. The number of particles $M$ is a parameter that enables the user to trade off the accuracy of the computation and the computational resources necessary to run MCL. A common strategy for setting $M$ is to keep sampling until the next pair of $u_t$ and $z_t$ has arrived.
MCL, in its present form, solves the global localization problem but cannot recover from robot kidnapping failures. We can address this problem by injecting random particles into the particle set on every iteration. The question is how many random particles to add and when to add them.
One idea is to add particles based on some estimate of the localization performance. We need to monitor the probability of sensor measurements, $p(z_t \mid z_{1:t-1}, u_{1:t}, m)$, and relate it to the average measurement probability. By definition, an importance weight $w_t^{[m]}$ is a stochastic estimate of this probability, so the average value $\frac{1}{M} \sum_{m=1}^{M} w_t^{[m]}$ approximates the desired probability.
There exist multiple reasons why the measurement probability may be low. The amount of sensor noise might be unnaturally high, or the particles may still be spread out during a global localization phase. For these reasons, it is a good idea to maintain a short-term average of the measurement likelihood, and relate it to the long-term average when determining the number of random samples.
Otherwise, the re-sampling proceeds in the familiar way. The probability of adding a random sample takes into consideration the divergence between the short-term and long-term averages of the measurement likelihood. If the short-term likelihood is better than or equal to the long-term likelihood, no random sample is added. However, if the short-term likelihood is much worse than the long-term one, random samples are added in proportion to the quotient of these values. In this way, a sudden decay in measurement likelihood induces an increased number of random samples.
The algorithm requires that $0 \le \alpha_{\text{slow}} \ll \alpha_{\text{fast}}$. The parameters $\alpha_{\text{slow}}$ and $\alpha_{\text{fast}}$ are decay rates for the exponential filters that estimate the long-term and short-term averages respectively:

$$w_{\text{slow}} \leftarrow w_{\text{slow}} + \alpha_{\text{slow}} (w_{\text{avg}} - w_{\text{slow}}), \qquad w_{\text{fast}} \leftarrow w_{\text{fast}} + \alpha_{\text{fast}} (w_{\text{avg}} - w_{\text{fast}})$$

During the re-sampling process, a random sample is added with the following probability:

$$\max\left\{ 0,\; 1 - \frac{w_{\text{fast}}}{w_{\text{slow}}} \right\}$$
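To make this concrete, here is a rough Python sketch of the augmented re-sampling step under the definitions above. The `sample_motion_model`, `measurement_model`, and `sample_random_state` helpers are hypothetical placeholders, and the default $\alpha$ values are illustrative; the only requirement from the text is $0 \le \alpha_{\text{slow}} \ll \alpha_{\text{fast}}$.

```python
import random

def augmented_mcl(particles, u, z, m, filters,
                  sample_motion_model, measurement_model, sample_random_state,
                  alpha_slow=0.001, alpha_fast=0.1):
    """One iteration of MCL with random particle injection (sketch).

    filters: dict holding the running averages 'w_slow' and 'w_fast'
    sample_random_state(m): draws a random pose in the map (hypothetical helper)
    """
    M = len(particles)
    predicted, weights = [], []

    # Prediction + correction, as in plain MCL.
    for x in particles:
        x_new = sample_motion_model(u, x, m)
        w = measurement_model(z, x_new, m)
        predicted.append(x_new)
        weights.append(w)

    # Short-term and long-term exponential filters of the average likelihood.
    w_avg = sum(weights) / M
    filters['w_slow'] += alpha_slow * (w_avg - filters['w_slow'])
    filters['w_fast'] += alpha_fast * (w_avg - filters['w_fast'])

    # Probability of injecting a random particle instead of resampling one.
    if filters['w_slow'] > 0:
        p_random = max(0.0, 1.0 - filters['w_fast'] / filters['w_slow'])
    else:
        p_random = 0.0

    new_particles = []
    for _ in range(M):
        if random.random() < p_random:
            # Sudden decay in measurement likelihood -> add random particles.
            new_particles.append(sample_random_state(m))
        else:
            new_particles.append(random.choices(predicted, weights=weights, k=1)[0])
    return new_particles
```

If the short-term average stays at or above the long-term average, `p_random` is zero and the step reduces to the familiar re-sampling; a sudden drop in measurement likelihood raises `p_random` and more random particles are injected.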