## A practical approach to Kalman filter and how to implement it

I have for a long time been interested in Kalman filters and how they work; I also used a Kalman filter for my balancing robot, but I never explained how it was actually implemented. In fact, I had never taken the time to sit down with a pen and a piece of paper and do the math myself, so I did not really know how it was implemented.

It turned out to be a good thing, as I actually discovered a mistake in the original code, but I will get back to that later.

I actually wrote about the Kalman filter as my master assignment in high school back in December 2011, but there I only used the Kalman filter to calculate the true voltage of a DC signal modulated by known Gaussian white noise. My assignment can be found in the following zip file: http://www.tkjelectronics.dk/uploads/Kalman_SRP.zip. It is in Danish, but you can probably use Google Translate to translate some of it. If you have any specific questions regarding the assignment, then ask in the comments below.

Okay, but back to the subject. As I said, I had never taken the time to sit down and do the math for a Kalman filter based on an accelerometer and a gyroscope. It was not as hard as I expected, but I must confess that I still have not studied the deeper theory behind it and why it actually works. But for me, and most people out there, implementing the filter is more interesting than the deeper theory behind why the equations work.

Before we begin, you must have some basic knowledge about matrices, such as multiplication and transposition of matrices. If not, please take a look at the following websites:

- http://en.wikipedia.org/wiki/Matrix_multiplication#Matrix_product_.28two_matrices.29
- http://www.mathwarehouse.com/algebra/matrix/multiply-matrix.php
- http://en.wikipedia.org/wiki/Transpose
- http://en.wikipedia.org/wiki/Covariance_matrix

For those of you who do not know what a Kalman filter is, it is an algorithm which uses a series of measurements observed over time – in this context from an accelerometer and a gyroscope. These measurements will contain noise that contributes to the error of the measurement. The Kalman filter will then try to estimate the state of the system, based on the current and previous states, and this estimate tends to be more precise than the measurements alone.

In this context the problem is that the accelerometer is in general very noisy when it is used to measure the gravitational acceleration, since the robot is moving back and forth. The problem with the gyro is that it drifts over time – just like a spinning wheel gyro will start to fall over when it loses speed.

In short, you can only trust the gyroscope in the short term, while you can only trust the accelerometer in the long term.

There is actually a very easy way to deal with this by using a complementary filter, which basically just consists of a digital low-pass filter on the accelerometer readings and a digital high-pass filter on the gyroscope readings. It is not as accurate as the Kalman filter, but other people have successfully built balancing robots using a fine-tuned complementary filter.

More information about gyroscopes, accelerometers and complementary filters can be found in this pdf. A comparison between a complementary filter and a Kalman filter can be found in the following blog post.
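
The complementary filter just mentioned can be sketched like this in C. This is a minimal sketch of my own; the function name, signature, and the `alpha` constant are assumptions, not from the articles linked above:

```c
#include <assert.h>
#include <math.h>

/* Complementary filter sketch: blends the integrated gyro rate (trusted in
 * the short term) with the accelerometer angle (trusted in the long term).
 * alpha is a tuning constant close to 1. */
static double complementary_filter(double angle, double gyroRate,
                                   double accelAngle, double dt,
                                   double alpha) {
    /* High-pass side: integrate the gyro rate onto the previous angle. */
    double gyroAngle = angle + gyroRate * dt;
    /* Low-pass side: mix in a small fraction of the accelerometer angle. */
    return alpha * gyroAngle + (1.0 - alpha) * accelAngle;
}
```

With `alpha` close to 1 the gyro dominates in the short term, while the accelerometer slowly pulls the estimate back – exactly the low-pass/high-pass split described above.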

The Kalman filter operates by producing a statistically optimal estimate of the system state based upon the measurement(s). To do this it needs to know the noise of the input to the filter, called the measurement noise, but also the noise of the system itself, called the process noise. For this to work the noise has to be Gaussian distributed with a mean of zero; luckily for us, most random noise has this characteristic.

For more information about the theory behind the filter take a look at the following pages:

- http://en.wikipedia.org/wiki/Kalman_filter
- http://www.cs.unc.edu/~welch/media/pdf/kalman_intro.pdf
- http://academic.csuohio.edu/simond/courses/eec644/kalman.pdf

**The system state $\boldsymbol{x}_k$**

The rest of this article might seem very confusing to some, but I promise you, if you grab a pen and a piece of paper and try to follow along, it is not that hard if you are reasonably good at math.

If you, like me, do not have a calculator or computer program that can work with matrices, then I recommend the free online calculator Wolfram Alpha. I used it for all the calculations in this article.

I will use the same notation as the Wikipedia article, but I would like to note that when the matrices are constant and do not depend on the current time, you do not have to write the $k$ subscript after them. So for instance $\boldsymbol{F}_k$ can be simplified to $\boldsymbol{F}$.

Also, I would like to give a small explanation of the other aspects of the notation.

First I will make a note about what is called the *previous state*:

$$\boldsymbol{\hat{x}}_{k-1 \mid k-1}$$

which is the previous estimated state based on the previous state and the estimates of the states before it.

The next is the *a priori state*:

$$\boldsymbol{\hat{x}}_{k \mid k-1}$$

A priori means the estimate of the state matrix at the current time $k$ based on the previous state of the system and the estimates of the states before it.

The last one is called the *a posteriori state*:

$$\boldsymbol{\hat{x}}_{k \mid k}$$

which is the estimate of the state at time $k$ given observations up to and including time $k$.

The problem is that the system state itself is hidden and can only be observed through the observation $\boldsymbol{z}_k$. This is also called a Hidden Markov model.

This means that the state will be based upon the state at time $k$ and all the previous states. That also means that you can not trust the estimate of the state before the Kalman filter has stabilized – take a look at the graph on the front page of my assignment.

The hat over $\hat{x}$ means that it is the estimate of the state, unlike just a single $x$, which means the true state – the one we are trying to estimate.

So the notation for the state at time $k$ is:

$$\boldsymbol{\hat{x}}_{k \mid k}$$

The state of the system at time $k$ is given by:

$$\boldsymbol{x}_k = \boldsymbol{F}\boldsymbol{x}_{k-1} + \boldsymbol{B}\boldsymbol{u}_k + \boldsymbol{w}_k$$

where $\boldsymbol{x}_k$ is the state matrix, which is given by:

$$\boldsymbol{x}_k = \begin{bmatrix} \theta \\ \dot{\theta}_b \end{bmatrix}_k$$

As you can see the output of the filter will be the angle but also the bias based upon the measurements from the accelerometer and gyroscope. The bias is the amount the gyro has drifted. This means that one can get the true rate by subtracting the bias from the gyro measurement.

The next is the $\boldsymbol{F}$ matrix, which is the state transition model applied to the previous state $\boldsymbol{x}_{k-1}$.

In this case $\boldsymbol{F}$ is defined as:

$$\boldsymbol{F} = \begin{bmatrix} 1 & -\Delta t \\ 0 & 1 \end{bmatrix}$$

I know that the $-\Delta t$ might seem confusing, but it will make sense later (take a look at my comment).

The next is the control input $\boldsymbol{u}_k$; in this case it is the gyroscope measurement in degrees per second (°/s) at time $k$, also called the rate $\dot{\theta}$. We will actually rewrite the state equation as:

$$\boldsymbol{x}_k = \boldsymbol{F}\boldsymbol{x}_{k-1} + \boldsymbol{B}\dot{\theta}_k + \boldsymbol{w}_k$$

The next thing is the $\boldsymbol{B}$ matrix, which is called the control-input model and is defined as:

$$\boldsymbol{B} = \begin{bmatrix} \Delta t \\ 0 \end{bmatrix}$$

This makes perfect sense, as you will get the angle when you multiply the rate by the delta time, and since we can not calculate the bias directly based on the rate, we set the bottom of the matrix to 0.

$\boldsymbol{w}_k$ is the process noise, which is Gaussian distributed with zero mean and covariance $\boldsymbol{Q}_k$ at time $k$:

$$\boldsymbol{w}_k \sim N\left(0, \boldsymbol{Q}_k\right)$$

$\boldsymbol{Q}_k$ is the process noise covariance matrix, in this case the covariance matrix of the state estimate of the accelerometer and bias. We will consider the estimates of the bias and the accelerometer to be independent, so it is actually just equal to the variance of the estimate of the accelerometer and bias.

The final matrix is defined as so:

$$\boldsymbol{Q}_k = \begin{bmatrix} Q_\theta & 0 \\ 0 & Q_{\dot{\theta}_b} \end{bmatrix}\Delta t$$

As you can see, the $\boldsymbol{Q}_k$ covariance matrix depends on the current time $k$, so the accelerometer variance $Q_\theta$ and the variance of the bias $Q_{\dot{\theta}_b}$ are multiplied by the delta time $\Delta t$.

This makes sense, as the process noise will be larger the longer it has been since the last update of the state. For instance, the gyro could have drifted.

You will have to know these constants for the Kalman filter to work.

Note that the larger the values are, the more noise you allow in the estimate of the state. So for instance, if the estimated angle starts to drift, you have to increase the value of $Q_{\dot{\theta}_b}$. On the other hand, if the estimate tends to be slow, you are trusting the estimate of the angle too much and should try to decrease the value of $Q_\theta$ to make it more responsive.

**The measurement $\boldsymbol{z}_k$**

Now we will take a look at the observation or measurement $\boldsymbol{z}_k$ of the true state $\boldsymbol{x}_k$. The observation is given by:

$$\boldsymbol{z}_k = \boldsymbol{H}\boldsymbol{x}_k + \boldsymbol{v}_k$$

As you can see, the measurement $\boldsymbol{z}_k$ is given by the current state $\boldsymbol{x}_k$ multiplied by the $\boldsymbol{H}$ matrix plus the measurement noise $\boldsymbol{v}_k$.

$\boldsymbol{H}$ is called the observation model and is used to map the true state space into the observed space. The true state can not be observed. Since the measurement is just the angle measurement from the accelerometer, $\boldsymbol{H}$ is given by:

$$\boldsymbol{H} = \begin{bmatrix} 1 & 0 \end{bmatrix}$$

The noise of the measurement also has to be Gaussian distributed, with zero mean and $R$ as the covariance:

$$\boldsymbol{v}_k \sim N\left(0, R\right)$$

But as $R$ is not a matrix, the measurement noise is just equal to the variance of the measurement, since the covariance of a variable with itself is equal to the variance. See this page for more information.

Now we can define $R$ as so:

$$R = E\left[\boldsymbol{v}_k\,\boldsymbol{v}_k^T\right] = var(\boldsymbol{v}_k)$$

More information about covariance can be found on Wikipedia and in my assignment.

We will assume that the measurement noise is the same and does not depend on the time $k$:

$$var(\boldsymbol{v}_k) = var(\boldsymbol{v})$$

Note that if you set the measurement noise variance $var(\boldsymbol{v})$ too high, the filter will respond really slowly, as it trusts new measurements less; but if it is too small, the value might overshoot and be noisy, since we then trust the accelerometer measurements too much.

So to round up, you have to find the process noise variances $Q_\theta$ and $Q_{\dot{\theta}_b}$ and the variance of the measurement noise $var(\boldsymbol{v})$. There are multiple ways to find them, but that is out of the scope of this article.

**The Kalman filter equations**

Okay, now to the equations we will use to estimate the true state of the system at time $k$, $\boldsymbol{\hat{x}}_{k \mid k}$. Some clever guys came up with the equations found below to estimate the state of the system.

The equations can be written more compactly, but I prefer to have them stretched out, so it is easier to implement and to understand the different steps.

**Predict**

In the first two equations we will try to predict the current state and the error covariance matrix at time $k$. First the filter will try to estimate the current state based on all the previous states and the gyro measurement:

$$\boldsymbol{\hat{x}}_{k \mid k-1} = \boldsymbol{F}\boldsymbol{\hat{x}}_{k-1 \mid k-1} + \boldsymbol{B}\dot{\theta}_k$$

That is also why it is called a control input, since we use it as an extra input to estimate the state at the current time $k$, called the a priori state $\boldsymbol{\hat{x}}_{k \mid k-1}$, as described in the beginning of the article.

The next thing is that we will try to estimate the a priori error covariance matrix $\boldsymbol{P}_{k \mid k-1}$ based on the previous error covariance matrix $\boldsymbol{P}_{k-1 \mid k-1}$, which is defined as:

$$\boldsymbol{P}_{k \mid k-1} = \boldsymbol{F}\boldsymbol{P}_{k-1 \mid k-1}\boldsymbol{F}^T + \boldsymbol{Q}_k$$

This matrix is used to estimate how much we trust the current values of the estimated state. The smaller it is, the more we trust the current estimated state. The principle of the equation above is actually pretty easy to understand: it is pretty obvious that the error covariance will increase since we last updated the estimate of the state, therefore we multiply the error covariance matrix by the state transition model $\boldsymbol{F}$ and its transpose $\boldsymbol{F}^T$ and add the current process noise $\boldsymbol{Q}_k$ at time $k$.

The error covariance matrix in our case is a 2×2 matrix:

$$\boldsymbol{P} = \begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}$$

**Update**

The first thing we will do is compute the difference between the measurement $\boldsymbol{z}_k$ and the a priori state $\boldsymbol{\hat{x}}_{k \mid k-1}$; this is also called the innovation:

$$\boldsymbol{\tilde{y}}_k = \boldsymbol{z}_k - \boldsymbol{H}\boldsymbol{\hat{x}}_{k \mid k-1}$$

The observation model $\boldsymbol{H}$ is used to map the a priori state $\boldsymbol{\hat{x}}_{k \mid k-1}$ into the observed space, which is the measurement from the accelerometer; therefore the innovation is not a matrix.

The next thing we will do is calculate what is called the innovation covariance:

$$\boldsymbol{S}_k = \boldsymbol{H}\boldsymbol{P}_{k \mid k-1}\boldsymbol{H}^T + R$$

What it does is try to predict how much we should trust the measurement, based on the a priori error covariance matrix $\boldsymbol{P}_{k \mid k-1}$ and the measurement covariance $R$. The observation model $\boldsymbol{H}$ is used to map the a priori error covariance matrix into observed space.

The bigger the value of the measurement noise $R$, the larger the value of $\boldsymbol{S}_k$; this means that we do not trust the incoming measurement that much.

In this case $\boldsymbol{S}_k$ is not a matrix and is just written as:

$$S_k = P_{00} + R$$

The next step is to calculate the Kalman gain. The Kalman gain is used to indicate how much we trust the innovation and is defined as:

$$\boldsymbol{K}_k = \boldsymbol{P}_{k \mid k-1}\boldsymbol{H}^T\boldsymbol{S}_k^{-1}$$

You can see that if we do not trust the innovation that much, the innovation covariance $\boldsymbol{S}_k$ will be high, and if we trust the estimate of the state, the error covariance matrix $\boldsymbol{P}$ will be small; the Kalman gain will therefore be small – and the opposite if we trust the innovation but do not trust the estimate of the current state.

If you take a deeper look you can see that the transpose of the observation model $\boldsymbol{H}^T$ is used to map the state error covariance matrix $\boldsymbol{P}_{k \mid k-1}$ into observed space. We then compare the error covariance matrix with the innovation covariance by multiplying by the inverse of $\boldsymbol{S}_k$.

This makes sense, as we use the observation model to extract data from the state error covariance and compare that with the current estimate of the innovation covariance.

Note that if you do not know the state at startup you can set the error covariance matrix like so:

$$\boldsymbol{P} = \begin{bmatrix} L & 0 \\ 0 & L \end{bmatrix}$$

where $L$ represents a large number.

For my balancing robot I know the starting angle, and I find the bias of the gyro at startup by calibrating, so I assume that the state is known at startup, and I therefore initialize the error covariance matrix like so:

$$\boldsymbol{P} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

Take a look at my calibration routine for more information.

In this case the Kalman gain is a 2×1 matrix:

$$\boldsymbol{K}_k = \begin{bmatrix} K_0 \\ K_1 \end{bmatrix}$$

Now we can update the a posteriori estimate of the current state:

$$\boldsymbol{\hat{x}}_{k \mid k} = \boldsymbol{\hat{x}}_{k \mid k-1} + \boldsymbol{K}_k\,\boldsymbol{\tilde{y}}_k$$

This is done by adding the a priori state $\boldsymbol{\hat{x}}_{k \mid k-1}$ to the Kalman gain multiplied by the innovation $\boldsymbol{\tilde{y}}_k$.

Remember that the innovation $\boldsymbol{\tilde{y}}_k$ is the difference between the measurement $\boldsymbol{z}_k$ and the estimated a priori state $\boldsymbol{\hat{x}}_{k \mid k-1}$, so the innovation can be both positive and negative.

A little simplified, the equation can be understood as follows: we simply correct the estimate of the a priori state $\boldsymbol{\hat{x}}_{k \mid k-1}$, which was calculated using the previous state and the gyro measurement, with the measurement – in this case the accelerometer.

The last thing we will do is update the a posteriori error covariance matrix:

$$\boldsymbol{P}_{k \mid k} = \left(\boldsymbol{I} - \boldsymbol{K}_k\boldsymbol{H}\right)\boldsymbol{P}_{k \mid k-1}$$

where $\boldsymbol{I}$ is called the identity matrix and is defined as:

$$\boldsymbol{I} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

What the filter is doing is basically self-correcting the error covariance matrix based on how much we corrected the estimate. This makes sense, as we corrected the state based on the a priori error covariance matrix $\boldsymbol{P}_{k \mid k-1}$ and the innovation covariance $\boldsymbol{S}_k$.

**Implementing the filter**

In this section I will use the equations from above to implement the filter in simple C++ code that can be used for balancing robots, quadcopters and other applications where you need to compute the angle, bias or rate.

In case you want the code next to you, it can be found at github: https://github.com/TKJElectronics/KalmanFilter.

I will simply write the equations at the top of each step and then simplify them; after that I will show how it can be done in C, and finally I will link to the calculations done in Wolfram Alpha at the bottom of each step, as I used it to do the calculations.
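
Before the individual steps, it may help to see all the variables the snippets operate on gathered in one place. The grouping and initial values below are my own; the names follow the code in the steps:

```c
#include <assert.h>

/* Tuning constants (example values; tune for your own IMU). */
float Q_angle = 0.001f;    /* process noise variance for the angle */
float Q_gyroBias = 0.003f; /* process noise variance for the gyro bias */
float R_measure = 0.03f;   /* measurement noise variance */

/* Filter state. */
float angle = 0.0f; /* angle estimate - set to the starting angle at startup */
float bias = 0.0f;  /* gyro bias estimate */
float rate;         /* unbiased rate */

float P[2][2] = {{0.0f, 0.0f}, {0.0f, 0.0f}}; /* error covariance matrix */
float K[2];         /* Kalman gain */
float y;            /* innovation */
float S;            /* innovation covariance */
```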

**Step 1:**

$$\boldsymbol{\hat{x}}_{k \mid k-1} = \boldsymbol{F}\boldsymbol{\hat{x}}_{k-1 \mid k-1} + \boldsymbol{B}\dot{\theta}_k$$

$$\hat{\theta}_{k \mid k-1} = \hat{\theta}_{k-1 \mid k-1} + \Delta t\left(\dot{\theta}_k - \hat{\dot{\theta}}_{b,k-1 \mid k-1}\right)$$

$$\hat{\dot{\theta}}_{b,k \mid k-1} = \hat{\dot{\theta}}_{b,k-1 \mid k-1}$$

As you can see, the a priori estimate of the angle $\hat{\theta}_{k \mid k-1}$ is equal to the estimate of the previous state $\hat{\theta}_{k-1 \mid k-1}$ plus the unbiased rate times the delta time $\Delta t$.

Since we can not directly measure the bias the estimate of the a priori bias is just equal to the previous one.

This can be written in C like so:

```c
rate = newRate - bias;
angle += dt * rate;
```

Note that I calculate the unbiased rate, so it can be used by the user as well.

Wolfram Alpha links:

**Step 2:**

$$\boldsymbol{P}_{k \mid k-1} = \boldsymbol{F}\boldsymbol{P}_{k-1 \mid k-1}\boldsymbol{F}^T + \boldsymbol{Q}_k$$

which, written out for each element of $\boldsymbol{P}$, becomes:

$$\boldsymbol{P}_{k \mid k-1} = \begin{bmatrix} P_{00} + \Delta t\left(\Delta t\,P_{11} - P_{01} - P_{10} + Q_\theta\right) & P_{01} - \Delta t\,P_{11} \\ P_{10} - \Delta t\,P_{11} & P_{11} + Q_{\dot{\theta}_b}\,\Delta t \end{bmatrix}$$

The equations above can be written in C like so:

```c
P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + Q_angle);
P[0][1] -= dt * P[1][1];
P[1][0] -= dt * P[1][1];
P[1][1] += Q_gyroBias * dt;
```

Note that this is the part of the code where there was an error in the original code that I used.

Wolfram Alpha links:

**Step 3:**

$$\boldsymbol{\tilde{y}}_k = \boldsymbol{z}_k - \boldsymbol{H}\boldsymbol{\hat{x}}_{k \mid k-1} = z_k - \hat{\theta}_{k \mid k-1}$$

The innovation can be calculated in C like so:

```c
y = newAngle - angle;
```

Wolfram Alpha links:

**Step 4:**

$$\boldsymbol{S}_k = \boldsymbol{H}\boldsymbol{P}_{k \mid k-1}\boldsymbol{H}^T + R = P_{00} + R$$

Again the C code is pretty simple:

```c
S = P[0][0] + R_measure;
```

Wolfram Alpha links:

**Step 5:**

$$\boldsymbol{K}_k = \boldsymbol{P}_{k \mid k-1}\boldsymbol{H}^T\boldsymbol{S}_k^{-1} = \begin{bmatrix} P_{00} \\ P_{10} \end{bmatrix} S_k^{-1}$$

Note that in other cases $\boldsymbol{S}_k$ can be a matrix, and then you can not just simply divide by $\boldsymbol{S}_k$; instead you have to calculate the inverse of the matrix. See the following page for more information on how to do so.

The C implementation looks like this:

```c
K[0] = P[0][0] / S;
K[1] = P[1][0] / S;
```

Wolfram Alpha links:
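
As mentioned in step 5, if $\boldsymbol{S}_k$ had been a matrix you would need a matrix inverse instead of a scalar division. For the 2×2 case this can be sketched like so; the helper is my own, not part of the original library:

```c
#include <assert.h>
#include <math.h>

/* Invert a 2x2 matrix in place via the adjugate formula.
 * Returns 0 on success, -1 if the matrix is (near) singular. */
static int invert2x2(float m[2][2]) {
    float det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
    if (fabsf(det) < 1e-12f)
        return -1;
    float inv00 =  m[1][1] / det;
    float inv01 = -m[0][1] / det;
    float inv10 = -m[1][0] / det;
    float inv11 =  m[0][0] / det;
    m[0][0] = inv00; m[0][1] = inv01;
    m[1][0] = inv10; m[1][1] = inv11;
    return 0;
}
```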

**Step 6:**

$$\boldsymbol{\hat{x}}_{k \mid k} = \boldsymbol{\hat{x}}_{k \mid k-1} + \boldsymbol{K}_k\,\boldsymbol{\tilde{y}}_k$$

Yet again the equations end up pretty short, and can be written like so in C:

```c
angle += K[0] * y;
bias  += K[1] * y;
```

**Step 7:**

$$\boldsymbol{P}_{k \mid k} = \left(\boldsymbol{I} - \boldsymbol{K}_k\boldsymbol{H}\right)\boldsymbol{P}_{k \mid k-1}$$

Remember that we decrease the error covariance matrix again, since the error of the estimate of the state has been decreased.

The C code looks like this:

```c
float P00_temp = P[0][0];
float P01_temp = P[0][1];

P[0][0] -= K[0] * P00_temp;
P[0][1] -= K[0] * P01_temp;
P[1][0] -= K[1] * P00_temp;
P[1][1] -= K[1] * P01_temp;
```

Wolfram Alpha links:

Note that I have found that the following variances work well for most IMUs:

```c
float Q_angle = 0.001;
float Q_gyroBias = 0.003;
float R_measure = 0.03;
```

Remember that it’s very important to set the starting angle at startup if you need to use the output right away. For more information, see the calibration routine for my balancing robot.
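
To see how the seven steps fit together, here is a condensed sketch of one complete filter cycle. The function name and signature are my own and not necessarily identical to the library's; the step bodies are the ones derived above:

```c
#include <assert.h>
#include <math.h>

static float Q_angle = 0.001f, Q_gyroBias = 0.003f, R_measure = 0.03f;
static float angle = 0.0f, bias = 0.0f;
static float P[2][2] = {{0.0f, 0.0f}, {0.0f, 0.0f}};

/* One predict + update cycle: newAngle from the accelerometer (degrees),
 * newRate from the gyro (degrees per second), dt in seconds. */
static float kalman_get_angle(float newAngle, float newRate, float dt) {
    /* Step 1: project the state ahead using the unbiased rate. */
    float rate = newRate - bias;
    angle += dt * rate;

    /* Step 2: project the error covariance ahead. */
    P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + Q_angle);
    P[0][1] -= dt * P[1][1];
    P[1][0] -= dt * P[1][1];
    P[1][1] += Q_gyroBias * dt;

    /* Steps 3-4: innovation and innovation covariance. */
    float y = newAngle - angle;
    float S = P[0][0] + R_measure;

    /* Step 5: Kalman gain. */
    float K0 = P[0][0] / S;
    float K1 = P[1][0] / S;

    /* Step 6: correct the state with the innovation. */
    angle += K0 * y;
    bias  += K1 * y;

    /* Step 7: update the error covariance. */
    float P00_temp = P[0][0];
    float P01_temp = P[0][1];
    P[0][0] -= K0 * P00_temp;
    P[0][1] -= K0 * P01_temp;
    P[1][0] -= K1 * P00_temp;
    P[1][1] -= K1 * P01_temp;

    return angle;
}
```

Call it once per loop iteration, e.g. `angle = kalman_get_angle(accAngle, gyroRate, dt);`.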

In case you missed it, here is the library I wrote; it can be used by any microcontroller that supports floating point math. The source code can be found at github: https://github.com/TKJElectronics/KalmanFilter.

If you prefer a video explanation about the Kalman filter, I recommend the following video series: http://www.youtube.com/watch?v=FkCT_LV9Syk.

Note that you can not use the library if you need to represent something in a full 3D orientation, as Euler angles suffer from what is called gimbal lock; you will need to use quaternions to do that, but that is a whole other story. For now, take a look at the following page.

This is all for now. I hope that you will find it helpful; if you do, or if you have any questions, feel free to post a comment below – it supports LaTeX syntax as well, if you need to write equations.

If you spot any errors please let me know as well.

**Comments**

Great article! However, I compared the results of the matrix and C code equations and found a mistake:

In Step 2 you unnecessarily multiply $Q$ by $\Delta t$ while solving the equations.

So in C code:

(1) Should be: `P[0][0] += dt * (dt*P[1][1] - P[0][1] - P[1][0]) + Q_angle;`

(4) Should be: `P[1][1] += Q_gyroBias;`

@Krzychu

I also noticed the mistake with Q_angle. The accelerometer does not accumulate noise and should therefore not be multiplied by dt. But I do think Q_gyroBias should be multiplied by dt: if dt is not constant, the gyro drift will be different each timestep.

Did you do any measurements with your code to see if you get a better signal with your calculations?

@Krzychu

Since I cannot delete my comment, I will have to respond to my own comment. On further investigation, I realized that you were right about the gyro bias as well. The time step is taken care of by the $\boldsymbol{F}$ matrix when transitioning from the previous $\boldsymbol{P}$ covariance matrix to the current a priori $\boldsymbol{P}$ matrix.

Thanks, this is by far the best explanation of a Kalman filter I have seen. I am working on a collision avoidance protocol for a quadcopter and will be using 8 sensors (a pair of ultrasonic and infrared sensors on each arm). I want to implement a Kalman filter for this; please share your ideas on this.