World Applied Sciences Journal 8 (4): 422-428, 2010 ISSN 1818-4952 © IDOSI Publications, 2010
Robot Movement Optimization Using Localization Algorithms
1Omid Panah, 2Amir Panah, 1Amin Panah and 1Abolfazl Akbari
1Department of Computer Engineering, Islamic Azad University Ayatollah Amoli Branch, Amol, Iran
2Department of Computer Engineering, Islamic Azad University Qazvin Branch, Qazvin, Iran
Abstract: The majority of localization algorithms start at a known position and add internal movement data and external environment data to this position each cycle. If the robot is replaced or the sensor data quality is too low, these algorithms are usually not able to recover to a useful position estimation. Members of these so-called local approaches are the linear least squares estimator and the Kalman filter. Robots equipped with global localization algorithms like Markov localization and the particle filter are able to localize themselves even under global uncertainty. This article focuses on local and global localization, static environments and passive approaches. Active approaches have to be discussed along with decision making, and to be able to cope with dynamic environments, map building is necessary; both topics are outside the scope of this work.

Key words: Kalman Filter · LLSQ · Markov Localization · Particle Filter

INTRODUCTION

Localization is the estimation of the location of the robot within the environment based on observations. These observations typically consist of a mixture of odometric information about the robot's movements and information obtained from the robot's proximity sensors or cameras.
First, problems which may occur while performing a localization task are presented. After introducing a taxonomy for localization algorithms, four different types of them are discussed: the linear least squares filter, the Kalman filter, Markov localization and the particle filter. They are ordered by their complexity and by historic development. First, the simplest filter, the linear least squares filter, is described. Afterwards, the Kalman filter and its derivatives are shown. Next, Markov localization is presented. Finally, the particle filter, which is based upon the principles of Markov localization, is explained. It is noteworthy that the Kalman filter can be used within a Markov localization or particle filter algorithm as an extension to get better results [2]. After all algorithms are discussed, they are compared via a simulation example.

Linear Least Squares Filter: The linear least squares filter (LLSQ) is a simple state estimator. Based upon an observation history, outliers are muted. The dimensions of the state are not correlated; hence, the x and y values of a position are treated independently, so that a faulty value in one dimension has no influence on the trustworthiness of its counterpart in the other dimension. The advantage of the LLSQ for localization is that the interval between the positions does not have to be constant. This filter can only be used for local localization. The position of a robot is represented by the state vector s (1.1).
It consists of the two values x and y and the direction φ:

s = \begin{bmatrix} x & y & \varphi \end{bmatrix}    (1.1)

For non-deterministically moving objects like robots, second-order polynomial equations are used to calculate the elements of the state vector at time t (1.2, 1.3, 1.4). For the ball and other non-self-steered objects, a polynomial equation of first order can be used (e.g. x_t = a + bt).

x_t = a + bt + ct^2    (1.2)
y_t = d + et + ft^2    (1.3)
\varphi_t = g + ht + it^2    (1.4)

Equation (1.5) can be used to calculate the three elements a, b and c. Looking at the matrix notation (1.6) of this equation, its parts can be identified as the time matrix A, the value history p_x for x and the result vector r_x containing a, b and c.
Corresponding Author: Omid Panah, Department of Computer Engineering, Islamic Azad University Ayatollah Amoli Branch, Amol, Iran
A r_x = p_x    (1.5)

\begin{pmatrix} 1 & t_0 & t_0^2 \\ 1 & t_1 & t_1^2 \\ \vdots & \vdots & \vdots \\ 1 & t_{n-1} & t_{n-1}^2 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{n-1} \end{pmatrix}    (1.6)

r_x = (A^T A)^{-1} A^T p_x    (1.7)
The transformed equation to calculate r_x can be adapted to calculate the result vectors for y and φ by replacing the history vector. The same time matrix A is used to calculate all three result vectors. The length n of the history defines the adaptation rate of the estimation to new circumstances. The resulting algorithm fills the data matrix with the incoming sensor updates in a FIFO manner [1]: the old values are shifted by one position, resulting in the removal of the oldest one. As soon as enough sensor/position updates have been collected, that is, once the array is filled and the number of iterations j is equal to or larger than the size of the array n, the state can be estimated. The result vector r_x is determined using the matrix transformation described in (1.7). The components of r_x are then needed for the state estimation at time t.
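To make the procedure concrete, the following minimal sketch (in Python with NumPy; the paper itself gives no code, so the function name, sample history and query time are assumptions) fits one state dimension over its FIFO history:

```python
# A minimal sketch of the LLSQ estimator described above, assuming NumPy;
# each dimension (x, y, phi) is fitted independently with its own history.
import numpy as np

def llsq_estimate(times, values, t_query):
    """Fit one state dimension with a second-order polynomial, Equations (1.5)-(1.7)."""
    # Time matrix A from Equation (1.6): one row [1, t_i, t_i^2] per history entry.
    A = np.column_stack([np.ones_like(times), times, times**2])
    # Solve r = (A^T A)^(-1) A^T p (Equation (1.7)); lstsq is the numerically
    # stable way to compute this pseudo-inverse solution.
    r, *_ = np.linalg.lstsq(A, values, rcond=None)
    a, b, c = r
    return a + b * t_query + c * t_query**2

# Usage: a noisy x-history over a queue of n = 5 samples; note that the
# intervals between the samples need not be constant.
t = np.array([0.0, 1.0, 2.1, 3.0, 4.2])
x = np.array([0.1, 2.0, 4.2, 5.9, 8.5])
print(llsq_estimate(t, x, 5.0))  # extrapolated position estimate at t = 5.0
```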
Kalman Filter: The Kalman filter (KF), or discrete Kalman filter, is a recursive state estimator which can deal with incomplete and noisy data. The algorithm is divided into two update phases: (a) measurement update and (b) prediction update [3]. In (a), new sensor values are used to refine the prediction, whereas (b) predicts the new state estimation using the refined prediction of phase (a).
Unlike the linear least squares filter, which basically uses the mean of the last n observations to determine the new state, the Kalman filter generates and adapts a world transition model [4]. Over time, its state estimations become more robust against measurement outliers. Traditionally, the KF is used for local localization problems.

Kalman Model: The model underlying the KF (Figure 1.1) is divided into a visible and a hidden part. The visible part consists of the system input and the sensor readings, whereas the hidden part contains the state vector, the transition models, etc. As can be seen, the estimated state at time k is based upon the previous state estimation x_{k-1} and the system input or control vector u_k. The sensor observation vector y_k is derived from the current state x_k.
Fig. 1.1: Model Underlying the Kalman Filter
x_k = A x_{k-1} + B u_k + w_k    (1.8)
y_k = C x_k + v_k    (1.9)
Equations (1.8) and (1.9) describe Figure 1.1. The first one defines the recursive state estimator x_k ∈ R^n, where n is the dimension of the state vector and k is the time index representing the time in multiples of Δt. It is a combination of the transition of the last state, the system input and noise. The transition from one state to the next is defined in the matrix A, which has a dimension of n×n. Next, to determine the relation between the system input u_k and the state, the matrix B with a dimension of n×o is used. The constant o gives the dimension of u, where u ∈ R^o. The system input u_k represents, for example, motion commands. Finally, to model uncertainty, a system noise w_k is added [5]. The noise is white and with normal probability distribution p(w) ~ N(0, Q).
Equation (1.9) gives the relation between the sensor observation vector y_k (y_k ∈ R^m, where m is the dimension of the sensor vector) and the state vector x_k. This relation is stored in C; this matrix has a dimension of m×n. Similarly to w_k, the variable v_k adds system noise to this relation. This noise is independent of w, but also white and with normal probability distribution p(v) ~ N(0, R).
The matrices Q and R are the process and the measurement covariance matrices. Next to their task of adding system noise, they can be used to define a priori known relations between e.g. sensor readings. Their dimensions are n×n for Q and m×m for R.

Kalman Iteration Process: This process is divided into the "Time Update" or prediction phase and the "Measurement Update" or correction phase, which are processed periodically (see Figure 1.2).
In the first phase, the state is predicted based upon the current internal state of the filter. The second phase corrects this prediction using the noisy measurements.
Fig. 1.2: Kalman Iteration Process

Time Update:

\hat{x}_k = A x_{k-1} + B u_k    (1.10)
\hat{P}_k = A P_{k-1} A^T + Q    (1.11)

Measurement Update:

K_k = \hat{P}_k C^T (C \hat{P}_k C^T + R)^{-1}    (1.12)
P_k = (I - K_k C) \hat{P}_k    (1.13)
x_k = \hat{x}_k + K_k (y_k - C \hat{x}_k)    (1.14)
For prediction of the estimation x̂ of the state vector x, only two components of the original equation (compare with Equation (1.8)) are used: the state transition and the system input. The intermediate error covariance matrix P̂ with the dimension n×n describes the estimated accuracy of the state estimation [6]. In Equation (1.11), the trustworthiness of the prediction is lowered by the process covariance matrix Q. The Kalman gain K describes the proportion between how much the predicted state can be trusted and to which extent the new measurements have to be integrated. Ideally, the predicted state is accurate enough that the noisy sensor data is not needed for precise state estimation. P̂ is updated with the Kalman gain in Equation (1.13), resulting in the error covariance matrix P. Thus, K and P recursively influence each other. I is the identity matrix with dimension n×n. Finally, the new state x is obtained by adding to the predicted state x̂ the deviation of the actual and the predicted measurement in proportion to the Kalman gain.
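As a hedged illustration of Equations (1.10) through (1.14), one complete Kalman iteration could be written as follows; the function and variable names are assumptions, not the authors' implementation, and x_pred and P_pred play the roles of x̂_k and P̂_k:

```python
# A minimal sketch of one Kalman iteration, Equations (1.10)-(1.14), assuming NumPy.
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    # Time update: predict the state and lower the trustworthiness of the
    # prediction by the process covariance Q, Equations (1.10) and (1.11).
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Measurement update: Kalman gain, Equation (1.12).
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    # Error covariance update, Equation (1.13).
    P_new = (np.eye(P.shape[0]) - K @ C) @ P_pred
    # Correct the prediction with the weighted measurement residual, Equation (1.14).
    x_new = x_pred + K @ (y - C @ x_pred)
    return x_new, P_new
```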
Non-Linear Kalman Filters: The extended Kalman filter (EKF) replaces the linear transition model A x_{k-1} + B u_k and the linear measurement model C x_k with the non-linear functions g and h. Thus, the resulting belief of the estimation is no longer represented by a Gaussian distribution.

x_k = g(u_k, x_{k-1}) + w_k    (1.15)
y_k = h(x_k) + v_k    (1.16)

The linearization of the non-linear functions is done using Taylor approximation and Jacobians [7]. The matrices A and B are represented by the Jacobian G_k and the matrix C by H_k.
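Analogously to the linear case above, one EKF iteration could be sketched as below, assuming the caller supplies g, h and functions G and H returning their Jacobians; all names are illustrative assumptions rather than the authors' implementation:

```python
# Hedged EKF sketch for Equations (1.15) and (1.16): g and h are the non-linear
# models, G and H return their Jacobians at the given linearization point.
import numpy as np

def ekf_step(x, P, u, y, g, h, G, H, Q, R):
    # Prediction through the non-linear motion model g.
    x_pred = g(u, x)
    Gk = G(u, x)                      # Jacobian of g at the previous state
    P_pred = Gk @ P @ Gk.T + Q
    # Correction through the non-linear measurement model h.
    Hk = H(x_pred)                    # Jacobian of h at the predicted state
    K = P_pred @ Hk.T @ np.linalg.inv(Hk @ P_pred @ Hk.T + R)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(P.shape[0]) - K @ Hk) @ P_pred
    return x_new, P_new
```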
Markov Localization: The linear least squares filter and the Kalman filter are only able to represent one state estimation. If, e.g. based upon the current sensor readings, the robot can be at two different positions but not in between, other approaches have to be used. An intuitive approach is to use topological or grid maps of the environment. For each possible position, the probability that the robot is there is calculated based upon the previous distribution and the actual sensor readings. Finally, the area with the highest belief is selected as the estimated position. In case of global uncertainty (several unconnected areas, each with a high probability), the robot has to continue moving until the probabilities for all but one area have decreased and, therefore, only one possible estimation of the robot position remains. Like the Kalman filter, Markov localization (ML) is based upon the Markov assumption. Thus, all past information is already inside the model through recursively applying the algorithm and sensor data on one set of locations.

\mathrm{Bel}(L_0 = \ell) = P(L_0 = \ell)    (1.17)
\mathrm{Bel}(L_t = \ell) = \sum_{\ell'} P(L_t = \ell \mid L_{t-1} = \ell', a_{t-1}) \, \mathrm{Bel}(L_{t-1} = \ell')    (1.18)
\mathrm{Bel}(L_t = \ell) = P(s \mid \ell) \, \mathrm{Bel}(L_t = \ell)    (1.19)
\mathrm{Bel}(L_t = \ell) = \mathrm{Bel}(L_t = \ell) / P(s \mid L_t)    (1.20)
Initially, all locations have a uniform belief (Equation (1.17)). In case the starting position is known, the initialization of the localization beliefs can be adapted to this a priori knowledge. In each cycle, the location probabilities are updated. This is done not only using sensor readings but also, as an extension to the intuitive approach, actions executed by the system. Equation (1.18) applies the actions. For example, if the robot moves one step right, all position estimations can be moved accordingly. After the sensor inputs have been applied (Equation (1.19)), all beliefs have to be normalized (Equation (1.20)).
The sum of the beliefs over all locations has to be one. The normalization step can be skipped if no new sensor inputs are available. For each element that has to be considered for localization, another dimension is added to the state space. Thus, for the typical case of x, y and φ and a grid representation of the locations, we have a three-dimensional grid (see Figure 1.3).
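To illustrate the cycle of Equations (1.17) through (1.20), the sketch below runs one update on a toy one-dimensional grid (the state space described above is really the three-dimensional x, y, φ grid); the world size, motion matrix and likelihood values are invented for the example:

```python
# Toy grid update for Markov localization, assuming NumPy; bel holds one
# belief value per grid cell.
import numpy as np

def ml_update(bel, motion_model, sensor_likelihood):
    bel = motion_model @ bel          # action update, Equation (1.18)
    bel = sensor_likelihood * bel     # sensor update, Equation (1.19)
    return bel / bel.sum()            # normalization, Equation (1.20)

bel = np.full(5, 0.2)                             # uniform prior, Equation (1.17)
motion = np.roll(np.eye(5), 1, axis=0)            # moves one cell right (wraps around)
likelihood = np.array([0.1, 0.1, 0.6, 0.1, 0.1])  # sensor favors cell 2
print(ml_update(bel, motion, likelihood))
```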
Even with optimization techniques, the time needed to process this large array is problematic. As mentioned above, the space can be reduced using topological maps, but this may not be possible for all applications. Furthermore, even topological maps can get huge.
Fig. 1.3: Three-Dimensional Grid for Markov Localization

This localization method can easily be used not only to observe the robot's own position, but also to observe several moving entities like the ball and other robots. Each area which is rated with a high belief represents at least one of these entities. Thus, it is a powerful tool for sensor data integration.

Particle Filter: As mentioned above, Markov localization can only be optimized to some extent. For larger environments, it is certain that either the precision or the calculation time becomes problematic. To overcome this, the number of cells has to be reduced without deterioration of the desired precision. The particle filter, or Monte Carlo Localization (MCL), replaces the dense grid representation by a much lower number of particles. These particles are state vectors representing the belief that the true location is exactly here. The basic concept of the algorithm is a Monte Carlo approximation with Sequential Importance Sampling (SIS).
Initially, random particles are generated and their weight is set to the reciprocal of the number of particles. In each cycle, the weight of a particle is calculated by multiplying its old weight with the probability that this particle represents the real state given the newest sensor readings. After normalization of the weights, the real state can be estimated using them. The SIS step (multiplying the old weight with the new probability) leads to degeneration. After several cycles, particles with a low probability converge towards zero. Even if the sensor readings suggest that one of these particles should increase its weight, the previously strong particles will override this and stay strongly weighted. Thus, once a degree of certainty of the estimation has been reached, the model cannot change its estimation any more. To counter this, the weight of each particle is here determined each round independently of the previous one. After estimation of the state, a new set of particles is drawn from the actual set. This new set is used as the basis for the next cycle. The particles are drawn in such a way that more important ones are more likely to be picked. Thus, areas with a high probability gain more weight by replicating their particles, whereas areas with a low probability thin out. Once an area has no particle left, another source of degeneration appears: due to the lack of appropriate particles, local maxima will dominate the estimation. During initialization, all of the N particles are generated by distributing them uniformly over the state space (Equation (1.21)).
At the beginning of every cycle, the cycle index k is increased and temporary particles x̃_k(i) are generated out of the state space under the conditions of the particle from the previous cycle x_{k-1}(i) and the control command u_k (Equation (1.22)). Using the actual sensor data y_k, the weight w̃_k(i) for each particle is calculated (Equation (1.23)). The normalized weights (Equation (1.24)) are used to calculate the state estimation for x_k under the condition of all previous sensor readings y_{1:k} (Equation (1.25)). In preparation for the resampling step, all tuples {x̃_k(i), w̃_k(i)} have to be sorted in descending order, with the highest weight first. During resampling, a random number j is drawn N times in such a way that tuples with a higher weight are preferred (Equation (1.26)).

x_0(i) \sim p(x_0)    (1.21)
\tilde{x}_k(i) \sim p(x_k \mid u_k, x_{k-1}(i))    (1.22)
\tilde{w}_k(i) = p(y_k \mid \tilde{x}_k(i))    (1.23)
\tilde{w}_k(i) = \tilde{w}_k(i) / \sum_{j=1}^{N} \tilde{w}_k(j)    (1.24)
E(g(x_k \mid y_{1:k})) = \sum_{j=1}^{N} g(\tilde{x}_k(j)) \, \tilde{w}_k(j)    (1.25)
\{x_k(i), 1/N\} \leftarrow \{\tilde{x}_k(j), \tilde{w}_k(j)\}    (1.26)

For each j, the corresponding x̃_k(j) is drawn and assigned to the final state x_k(i). All final state vectors are assigned a uniform weight. To avoid local maxima, T particles are finally replaced with newly generated, uniformly distributed ones; T is much smaller than N, typically by a factor of 100 or more.
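A compact sketch of one such MCL cycle is given below; the motion-sampling and likelihood functions are placeholders and the re-injection bounds are invented. Note that when the resampling indices are drawn proportionally to the weights, as here, the explicit descending sort mentioned above is not needed:

```python
# Hedged particle filter sketch for Equations (1.22)-(1.26), assuming NumPy;
# particles is an (N, d) array of state vectors.
import numpy as np
rng = np.random.default_rng(0)

def pf_step(particles, u, y, sample_motion, likelihood, T=10):
    N, d = particles.shape
    temp = sample_motion(particles, u)          # temporary particles, Equation (1.22)
    w = likelihood(y, temp)                     # sensor weights, Equation (1.23)
    w = w / w.sum()                             # normalization, Equation (1.24)
    estimate = (temp * w[:, None]).sum(axis=0)  # weighted state estimate, Equation (1.25)
    idx = rng.choice(N, size=N, p=w)            # resampling, Equation (1.26):
    new = temp[idx]                             # heavier particles replicate
    # Re-inject T uniformly distributed particles to avoid local maxima
    # (the bounds of the state space are invented here).
    new[rng.choice(N, size=T, replace=False)] = rng.uniform(-1.0, 1.0, size=(T, d))
    return new, estimate
```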
Comparison Experiment: To compare the four localization algorithms discussed in this work, a simulation has been carried out. The aim of this simulation is to track a single object, which is observed by a non-moving single vision system mounted above the area. The object to be tracked is a robot, which travels at a constant speed a total of two meters. The outputs of the vision system are the absolute position coordinates x and y, but no direction φ. The standard deviation of the perceived position is six
centimeters from the real position. The robot moves at constant speed and during the simulation run, 100 data points are collected. Thus, the robot travels 2 cm per time frame. The localization algorithms have to estimate the real robot position using only the few unreliable sensor readings. Additional data like odometry, multiple objects and motion control have been left out intentionally.

Configuration: The following list gives only information about the parameters and optional elements for each algorithm.

Linear Least Squares Algorithm: It has a queue length of 5 steps. It uses a polynomial equation of second order to approximate the position.

Kalman Filter: The matrices are initialized as follows:
A = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},  B = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix},  P_0 = \begin{pmatrix} 3 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix},

C = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix},  Q = 0.14 \cdot I_4,  R = \begin{pmatrix} 3600 & 0 \\ 0 & 3600 \end{pmatrix},  u_0 = [0],  x_0 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}

Fig. 1.4: Distance Between the Real Position and the Estimated Positions
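For illustration only, these values could be plugged into the kalman_step sketch from the Kalman filter section; the matrix values remain a best-effort reading of the original configuration, the state is read as (x, y, x-velocity, y-velocity) and the measurement y is invented:

```python
# Hypothetical instantiation of the experiment's KF configuration; all values
# are assumptions reconstructed from the text, not the authors' exact setup.
import numpy as np

A = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])    # constant-velocity transition
B = np.zeros((4, 1))                # no control input in this experiment
C = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])    # the vision system observes x and y only
Q = 0.14 * np.eye(4)                # process noise
R = 3600. * np.eye(2)               # sensor variance: sigma = 60 mm (6 cm)
x = np.array([0., 0., 1., 0.])      # initial state
P = 3. * np.eye(4)                  # initial error covariance
u = np.array([0.])

y = np.array([2.3, 0.4])            # one (invented) vision reading
x, P = kalman_step(x, P, u, y, A, B, C, Q, R)
```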
Markov Localization: The grid is configured to have a cell size of 5×5 cm². As action update, the probability for the vicinity of the estimated position is increased. This is done by applying a multivariate distribution with σ much larger than the sensor error.

Particle Filter: 1000 particles are used. Further, in each cycle, 10 particles are reinjected. No action update is used.

Results and Interpretation: Figure 1.4 and Table 1.1 show error statistics for the path estimations depicted in Figure 1.5. The path given by the sensor (Figure 1.5(b)) is perceived by the vision system using the real path of the robot (Figure 1.5(a)).
Sensor data is also used as a benchmark for the localization algorithms: only if a localization is on average better than the perceived sensor data is the effort worthwhile.
Fig. 1.5: Comparison of Localization Algorithms

Although the estimated path of the linear least squares algorithm sometimes moves back and forth (Figure 1.5(c)), the overall performance is reasonably good. It is second best compared to the benchmark and, in the category of the average distance to the real position, it is the best [8]. Also, its maximum error is the lowest in the test.
Table 1.1: Statistics for the Comparison Experiment

             Max. Error   Min. Error   Avg. Error   Better than Sensor
Sensor data  176.6        3.6          68.6         0%
LLSQ         100.1        9.5          51.1         67%
KF           126.6        13.2         57.7         54%
ML           189.9        28.9         110.8        22%
MCL          153.6        5.5          51.8         76%
The Kalman filter follows the real path even with these faulty sensor readings (Figure 1.5(d)). The delayed adaptation to course changes is systematic; to overcome it, additional information like motion commands is needed to apply the EKF. According to the table, the KF is second worst in the overall performance, only outperforming the Markov localization. But if the shape of the path is more important than the error distance to the real path, the KF is the best choice. The worst result in this test is that of the Markov localization. While Figure 1.5(e) suggests the same interpretation as for the KF, the error distances in Figure 1.4 show that the estimated path is on average twice as bad as the other estimations. Finally, the particle filter produces results similar to the LLSQ. If the state vector were extended by the direction φ, the MCL and the other two algorithms would outperform the LLSQ: in the LLSQ, each dimension is treated independently, while the other algorithms put the dimensions into relation [9]. Thus, if there is a main movement direction, sensor readings which suddenly change the direction to the opposite are more likely to be discarded. Because no code optimizations were done, timings of the different algorithms are not shown here. However, it can generally be said that the LLSQ is the fastest, shortly followed by the KF. The MCL is slower by a power of ten, but this number varies for different numbers of particles. The slowest algorithm is the ML.

Summary: The simulation experiment presented above showed that there cannot be such a thing as a general optimal localization algorithm. The choice of the best algorithm for a given application depends on several parameters, like available performance, type of objects to be tracked, robustness against outliers, or flexibility towards course changes. Table 1.2 gives an overview of the strengths and weaknesses of the different algorithms.
Table 1.2: Comparison of the Localization Algorithms

                        LLSQ       KF         EKF        ML    MCL
Measurements            landmarks  landmarks  landmarks  raw   raw
Noise                   Gaussian   Gaussian   Gaussian   any   any
Optional sensor inputs  no         no         no         yes   yes
Global Localization     no         no         no         yes   yes
Multi-Hypothesis        no         no         no         yes   yes
Efficiency (time)       ++         ++         ++         -     +
Efficiency (memory)     ++         ++         ++         -     +
Resolution              ++         ++         ++         -     +
Robustness              -          ++         ++         -     +
Flexibility             ++         -          ++         -     +
The first five rows show clearly that the ability to deal with multiple hypotheses is strongly related to the ability to handle raw data with any type of noise. Further, multiple hypotheses are needed for global localization. Optional sensor data is necessary for the case that the vision system needs 5 to 10 cycles to generate new landmarks. If the algorithm demands all sensor data in every round, this either leads to a decrease of cycles to fit the worst case, or the landmarks from the vision system have to be approximated if no new data is available. The next three rows are about efficiency. LLSQ, KF and EKF are more efficient than ML and MCL: they only store a small state space. ML, with the largest state space, also takes longest and its precision is always lower bounded by the cell size. The last two rows show robustness and flexibility. Here, the EKF is at its best. The KF and the LLSQ serve either robustness or flexibility, while ML has problems serving either of the two. In this overview, the MCL is always good. Thus, for most applications, it is the algorithm of choice.
REFERENCES

1. Novak, G., 2002. Multi Agent Systems Robot Soccer. PhD thesis, Technische Universität Wien, Institut für Handhabungsgeräte und Robotertechnik.
2. Vetterling, W., W. Press, S. Teukolsky and B. Flannery, 1998. Numerical Recipes in C. Cambridge University Press, New York, ISBN 0-521-43108-5.
3. Kalman, R.E., 1960. A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82(Series D): 35-45.
4. Fox, D. and S. Thrun, 2006. Probabilistic Robotics. MIT Press, ISBN 0-262-20162-3.
5. Negenborn, R., 2003. Robot Localization and Kalman Filters: On Finding Your Position in a Noisy World. Master's thesis, Utrecht University, Institute of Information and Computing Sciences.
6. Welch, G. and G. Bishop, 1995. An Introduction to the Kalman Filter. Technical Report, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
7. Maybeck, P.S., 1979. Stochastic Models, Estimation and Control, Volume 141 of Mathematics in Science and Engineering. ISBN 0124807038.
8. Gutmann, J.S., W. Burgard, D. Fox and K. Konolige, 1998. An experimental comparison of localization methods. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems.
9. Gutmann, J.S. and D. Fox, 2002. An experimental comparison of localization methods continued. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems.