
A practical approach to Kalman filter and how to implement it

September 10th, 2012

I have for a long time been interested in Kalman filters and how they work. I also used a Kalman filter for my balancing robot, but I never explained how it was actually implemented. In fact, I had never taken the time to sit down with a pen and a piece of paper and do the math myself, so I did not really know how it was implemented.
It turned out to be a good thing, as I actually discovered a mistake in the original code, but I will get back to that later.

I actually wrote about the Kalman filter in my final assignment in high school back in December 2011. But there I only used the Kalman filter to estimate the true voltage of a DC signal corrupted by known Gaussian white noise. My assignment can be found in the following zip file: http://www.tkjelectronics.dk/uploads/Kalman_SRP.zip. It is in Danish, but you can probably use Google Translate to translate some of it. If you have any specific questions regarding the assignment, then ask in the comments below.

Okay, but back to the subject. As I said, I had never taken the time to sit down and do the math for a Kalman filter based on an accelerometer and a gyroscope. It was not as hard as I expected, but I must confess that I still have not studied the deeper theory behind it and why it actually works. But for me, and most people out there, it is more interesting to implement the filter than to dig into the deeper theory behind why the equations work.

Before we begin you must have some basic knowledge about matrices, like multiplication and transposition of matrices. If not, then please take a look at the following websites:

For those of you who do not know what a Kalman filter is, it is an algorithm which uses a series of measurements observed over time, in this context an accelerometer and a gyroscope. These measurements will contain noise that contributes to the error of the measurement. The Kalman filter will then try to estimate the state of the system, based on the current and previous states, and this estimate tends to be more precise than the measurements alone.

In this context the problem is that the accelerometer is in general very noisy when it is used to measure the gravitational acceleration, since the robot is moving back and forth. The problem with the gyro is that it drifts over time, just like a spinning wheel gyro will start to fall over when it loses speed.
In short you can say that you can only trust the gyroscope in the short term, while you can only trust the accelerometer in the long term.

There is actually a very easy way to deal with this by using a complementary filter, which basically just consists of a digital low-pass filter on the accelerometer readings and a digital high-pass filter on the gyroscope readings. It is not as accurate as the Kalman filter, but other people have successfully built balancing robots using a well-tuned complementary filter.
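
To make the comparison concrete, a minimal sketch of such a complementary filter could look like the function below (the 0.98/0.02 weights are just typical example values, not something taken from a tuned robot):

// Sketch of a complementary filter: high-pass the integrated gyro rate and
// low-pass the accelerometer angle. gyroRate is in °/s, accAngle is in degrees and
// dt is the time since the last update in seconds.
double complementaryFilter(double angle, double gyroRate, double accAngle, double dt) {
    return 0.98 * (angle + gyroRate * dt) + 0.02 * accAngle;
}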

More information about gyroscopes, accelerometers and complementary filters can be found in this pdf. A comparison between a complementary filter and a Kalman filter can be found in the following blog post.

The Kalman filter operates by producing a statistically optimal estimate of the system state based upon the measurement(s). To do this it needs to know the noise of the input to the filter, called the measurement noise, but also the noise of the system itself, called the process noise. For this to work the noise has to be Gaussian distributed with a mean of zero; luckily for us most random noise has this characteristic.
For more information about the theory behind the filter take a look at the following pages:

The system state \boldsymbol{x}_k
The rest of this article might seem very confusing for some, but I promise you that if you grab a pen and a piece of paper and try to follow along, it is not that hard if you are reasonably good at math.

If you, like me, do not have a calculator or computer program that can work with matrices, then I recommend the free online calculator Wolfram Alpha. I used it for all the calculations in this article.

I will use the same notation as the Wikipedia article, but I would like to note that when the matrices are constant and do not depend on the current time, you do not have to write the k after them. So for instance F_k can be simplified to F.

I would also like to give a small explanation of the other aspects of the notation.
First I will make a note about what is called the previous state:

\boldsymbol{\hat{x}}_{k-1 | k-1}

This is the previous estimated state: the estimate at time k-1 based on the previous state and the estimates of the states before it.

The next is the a priori state:

\boldsymbol{\hat{x}}_{k | k-1}

The a priori state is the estimate of the state at the current time k based only on the previous state of the system and the estimates of the states before it, i.e. before the measurement at time k has been taken into account.

The last one is called the a posteriori state:

\boldsymbol{\hat{x}}_{k | k}

This is the estimate of the state at time k given observations up to and including time k.

The problem is that the system state itself is hidden and can only be observed through observation z_k. This is also called a Hidden Markov model.

This means that the state estimate will be based upon the state at time k and all the previous states. That also means that you can not trust the estimate of the state before the Kalman filter has stabilized; take a look at the graph on the front page of my assignment.

The hat over the \hat{x} means that it is the estimate of the state, unlike a plain x, which denotes the true state, the one we are trying to estimate.
So the notation for the true state at time k is:

\boldsymbol{x}_k

The state of the system at time k is given by:

\boldsymbol{x}_k = \boldsymbol{F}x_{k-1} + \boldsymbol{B}u_k + w_k

Where x_k is the state vector, which is given by:

\boldsymbol{x}_k = \begin{bmatrix} \theta \\ \dot{\theta}_b \end{bmatrix}_k

As you can see, the output of the filter is the angle \theta but also the bias \dot{\theta}_b, based upon the measurements from the accelerometer and gyroscope. The bias is the amount the gyro has drifted. This means that one can get the true rate by subtracting the bias from the gyro measurement.

The next is the F matrix, which is the state transition model that is applied to the previous state x_{k-1}.

In this case F is defined as:

\boldsymbol{F} = \begin{bmatrix} 1 & -\Delta t \\ 0 & 1 \end{bmatrix}

I know that the -\Delta t might seem confusing, but it will make sense later (take a look at my comment).

The next is the control input u_k; in this case it is the gyroscope measurement in degrees per second (°/s) at time k, also called the rate \dot{\theta}. We will actually rewrite the state equation as:

\boldsymbol{x}_k = \boldsymbol{F}x_{k-1} + \boldsymbol{B}{\dot{\theta}_k} + w_k

The next thing is the B matrix, which is called the control-input model and is defined as:

\boldsymbol{B} = \begin{bmatrix} \Delta t \\ 0 \end{bmatrix}

This makes perfect sense, as you will get the angle \theta when you multiply the rate \dot{\theta} by the delta time \Delta t, and since we can not calculate the bias directly based on the rate, we set the bottom of the matrix to 0.

w_k is the process noise, which is Gaussian distributed with a zero mean and with covariance Q at time k:

\boldsymbol{w}_k \sim N\left ( 0, \boldsymbol{Q}_k \right )

Q_k is the process noise covariance matrix, in this case the covariance matrix of the state estimate of the accelerometer and bias. We will consider the estimates of the bias and the accelerometer to be independent, so Q_k is actually just equal to the variance of the estimate of the accelerometer and the variance of the estimate of the bias.
The final matrix is defined as:

\boldsymbol{Q}_k = \begin{bmatrix} Q_\theta & 0 \\ 0 & Q_{\dot{\theta}_b} \end{bmatrix}\Delta t

As you can see, the Q_k covariance matrix depends on the current time k, so the accelerometer variance Q_\theta and the variance of the bias Q_{\dot{\theta}_b} are multiplied by the delta time \Delta t.
This makes sense, as the process noise gets larger the longer it has been since the last update of the state. For instance, the gyro could have drifted.
You will have to know these constants for the Kalman filter to work.
Note that the larger the values, the more noise you allow in the estimate of the state. So for instance, if the estimated angle starts to drift, you have to increase the value of Q_{\dot{\theta}_b}. On the other hand, if the estimate tends to lag behind, you are trusting the estimate of the angle too much and should try to increase the value of Q_\theta to make the filter more responsive.

The measurement \boldsymbol{z}_k
Now we will take a look at the observation or measurement z_k of the true state x_k. The observation z_k is given by:

\boldsymbol{z}_k = \boldsymbol{H}x_{k} + v_k

As you can see the measurement z_k is given by the current state x_k multiplied by the H matrix plus the measurement noise v_k.

H is called the observation model and is used to map the true state space into the observed space. The true state can not be observed. Since the measurement is just the measurement from the accelerometer, H is given by:

\boldsymbol{H} = \begin{bmatrix} 1 & 0 \end{bmatrix}

The noise of the measurement also has to be Gaussian distributed, with a zero mean and R as the covariance:

\boldsymbol{v}_k \sim N\left ( 0, \boldsymbol{R} \right )

But as R is not a matrix in this case, the measurement noise is just equal to the variance of the measurement, since the covariance of a variable with itself is equal to the variance. See this page for more information.
Now we can define R as:

\boldsymbol{R} = E\left[ \boldsymbol{v}_k \, \boldsymbol{v}_k^T \right] = var(v_k)

More information about covariance can be found on Wikipedia and in my assignment.

We will assume that the measurement noise is the same and does not depend on the time k:

var(v_k) = var(v)

Note that if you set the measurement noise variance var(v) too high the filter will respond very slowly, as it trusts new measurements less, but if it is too small the value might overshoot and be noisy, since we then trust the accelerometer measurements too much.

So to round up, you have to find the process noise variances Q_\theta and Q_{\dot{\theta}_b} and the variance of the measurement noise var(v). There are multiple ways to find them, but it is outside the scope of this article; a rough sketch of one possible approach is shown below.
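
As a rough sketch of one possible approach (not necessarily the way I found the values used later in this article), you can log the accelerometer angle while the IMU is lying completely still and compute the sample variance of the readings; that gives a starting point for var(v), and the process noise variances can then be tuned by hand from there.

#include <stddef.h>

// Sketch: compute the sample variance of n logged accelerometer angles
// taken while the IMU is stationary - a starting point for var(v).
double sampleVariance(const double *samples, size_t n) {
    double mean = 0;
    for (size_t i = 0; i < n; i++)
        mean += samples[i];
    mean /= (double)n;

    double var = 0;
    for (size_t i = 0; i < n; i++)
        var += (samples[i] - mean) * (samples[i] - mean);
    return var / (double)(n - 1); // unbiased sample variance
}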

The Kalman filter equations
Okay, now for the equations we will use to estimate the true state of the system at time k, \hat{x}_k. Some clever guys came up with the equations found below to estimate the state of the system.
The equations can be written more compactly, but I prefer to have them stretched out, so it is easier to implement and to understand the different steps.

Predict
In the first two equations we will try to predict the current state and the error covariance matrix at time k. First the filter will try to estimate the current state based on all the previous states and the gyro measurement:

\boldsymbol{\hat{x}}_{k | k-1} = \boldsymbol{F}\hat{x}_{k-1 | k-1} + \boldsymbol{B}{\dot{\theta}_k}

That is also why it is called a control input, since we use it as an extra input to estimate the state at the current time k called the a priori state \hat{x}_{k | k-1} as described in the beginning of the article.

The next thing is that we will try to estimate the a priori error covariance matrix P_{k | k-1} based on the previous error covariance matrix P_{k-1 | k-1}, which is defined as:

\boldsymbol{P}_{k | k-1} = \boldsymbol{F}\boldsymbol{P}_{k-1 | k-1}\boldsymbol{F}^T + \boldsymbol{Q}_k

This matrix is used to estimate how much we trust the current values of the estimated state; the smaller it is, the more we trust the current estimated state. The principle of the equation above is actually pretty easy to understand: it is pretty obvious that the error covariance will have increased since we last updated the estimate of the state, therefore we multiply the error covariance matrix by the state transition model F and its transpose F^T and add the current process noise Q_k at time k.

The error covariance matrix P in our case is a 2×2 matrix:

\boldsymbol{P} = \begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}

Update
The first thing we will do is compute the difference between the measurement z_k and the a priori state \hat{x}_{k | k-1}; this is also called the innovation:

\boldsymbol{\tilde{y}}_k = \boldsymbol{z}_k - \boldsymbol{H}\hat{x}_{k | k-1}

The observation model H is used to map the a priori state \hat{x}_{k | k-1} into the observed space, which is the measurement from the accelerometer; therefore the innovation is not a matrix:

\boldsymbol{\tilde{y}}_k = \begin{bmatrix} \boldsymbol{\tilde{y}} \end{bmatrix}_k

The next thing we will do is calculate what is called the innovation covariance:

\boldsymbol{S}_k = \boldsymbol{H} \boldsymbol{P}_{k | k-1} \boldsymbol{H}^T + \boldsymbol{R}

What it does is that it tries to predict how much we should trust the measurement based on the a priori error covariance matrix P_{k | k-1} and the measurement covariance matrix R. The observation model H is used to map the a priori error covariance matrix P_{k | k-1} into observed space.

The bigger the measurement noise, the larger the value of S; this means that we do not trust the incoming measurement that much.
In this case S is not a matrix and is just written as:

\boldsymbol{S}_k = \begin{bmatrix} \boldsymbol{S} \end{bmatrix}_k

The next step is to calculate the Kalman gain. The Kalman gain is used to indicate how much we trust the innovation and is defined as:

\boldsymbol{K}_k = \boldsymbol{P}_{k | k-1} \boldsymbol{H}^T \boldsymbol{S}_k^{-1}

You can see that if we do not trust the innovation that much, the innovation covariance S will be high, and if we trust the estimate of the state, the error covariance matrix P will be small; the Kalman gain will therefore be small. The opposite happens if we trust the innovation but do not trust the estimate of the current state.

If you take a deeper look you can see that the transpose of the observation model H is used to map the state error covariance matrix P into observed space. We then compare the error covariance matrix with the innovation covariance by multiplying with the inverse of the innovation covariance S.

This makes sense, as we use the observation model H to extract data from the state error covariance and compare that with the current estimate of the innovation covariance.

Note that if you do not know the state at startup you can set the error covariance matrix like so:

\boldsymbol{P} = \begin{bmatrix} L & 0 \\ 0 & L \end{bmatrix}

Where L represents a large number.

For my balancing robot I know the starting angle, and I find the bias of the gyro at startup by calibrating, so I can assume that the state is known at startup. Therefore I initialize the error covariance matrix like so:

\boldsymbol{P} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}

Take a look at my calibration routine for more information.
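
In the C code below the error covariance matrix is simply stored as a 2x2 array, so the two initializations above translate into something like this (a small sketch; 1000 is just an example of a large number L):

double P[2][2];

// Known state at startup (my case): start with all zeros.
P[0][0] = 0; P[0][1] = 0;
P[1][0] = 0; P[1][1] = 0;

// Unknown state at startup: put a large number L on the diagonal instead, e.g.:
// P[0][0] = 1000; P[0][1] = 0;
// P[1][0] = 0;    P[1][1] = 1000;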

In this case the Kalman gain is a 2×1 matrix:

\boldsymbol{K} = \begin{bmatrix} K_0 \\ K_1 \end{bmatrix}

Now we can update the a posteriori estimate of the current state:

\boldsymbol{\hat{x}}_{k | k} = \boldsymbol{\hat{x}}_{k | k-1} + \boldsymbol{K}_k \; \boldsymbol{\tilde{y}}_k

This is done by adding the a priori state \hat{x}_{k | k-1} to the Kalman gain multiplied by the innovation \tilde{y}_k.
Remember that the innovation \tilde{y}_k is the difference between the measurement z_k and the estimated a priori state \hat{x}_{k | k-1}, so the innovation can be both positive and negative.

A little simplified, the equation can be understood as follows: we simply correct the estimate of the a priori state \hat{x}_{k | k-1}, which was calculated using the previous state and the gyro measurement, with the measurement, in this case the accelerometer.

The last thing we will do is update the a posteriori error covariance matrix:

\boldsymbol{P}_{k | k} = (\boldsymbol{I} - \boldsymbol{K}_k \boldsymbol{H}) \boldsymbol{P}_{k | k-1}

Where I is called the identity matrix and is defined as:

\boldsymbol{I} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

What the filter is doing is basically self-correcting the error covariance matrix based on how much we corrected the estimate. This makes sense, as we corrected the state based on the a priori error covariance matrix P_{k | k-1}, but also the innovation covariance S_k.

Implementing the filter
In this section I will use the equations from above to implement the filter in simple C++ code that can be used for balancing robots, quadcopters and other applications where you need to compute the angle, bias or rate.

In case you want the code next to you, it can be found at github: https://github.com/TKJElectronics/KalmanFilter.

I will simply write the equations at the top of each step and then simplify them; after that I will show how it can be done in C, and finally I will link to the calculations done in Wolfram Alpha at the bottom of each step, as I used them to do the calculations.

Step 1:

\boldsymbol{\hat{x}}_{k | k-1} = \boldsymbol{F}\hat{x}_{k-1 | k-1} + \boldsymbol{B}{\dot{\theta}_k}

\begin{bmatrix} \theta \\ \dot{\theta}_b \end{bmatrix}_{k | k-1} = \begin{bmatrix} 1 & -\Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \theta \\ \dot{\theta}_b \end{bmatrix}_{k-1 | k-1} + \begin{bmatrix} \Delta t \\ 0 \end{bmatrix} \dot{\theta}_k

 = \begin{bmatrix} \theta - \dot{\theta}_b \Delta t \\ \dot{\theta}_b \end{bmatrix}_{k-1 | k-1} + \begin{bmatrix} \Delta t \\ 0 \end{bmatrix} \dot{\theta}_k

 = \begin{bmatrix} \theta - \dot{\theta}_b \Delta t + \dot{\theta} \Delta t \\ \dot{\theta}_b \end{bmatrix}

 = \begin{bmatrix} \theta + \Delta t (\dot{\theta} - \dot{\theta}_b) \\ \dot{\theta}_b \end{bmatrix}

As you can see, the a priori estimate of the angle \hat{\theta}_{k | k-1} is equal to the estimate of the previous state \hat{\theta}_{k-1 | k-1} plus the unbiased rate times the delta time \Delta t.
Since we can not directly measure the bias, the a priori estimate of the bias is just equal to the previous one.

This can be written in C like so:

rate = newRate - bias;
angle += dt * rate;

Note that I calculate the unbiased rate first, so it can be used by the user as well.

Wolfram Alpha links:

Step 2:

\boldsymbol{P}_{k | k-1} = \boldsymbol{F}\boldsymbol{P}_{k-1 | k-1}\boldsymbol{F}^T + \boldsymbol{Q}_k

\begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}_{k | k-1} = \begin{bmatrix} 1 & -\Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}_{k-1 | k-1} \begin{bmatrix} 1 & 0 \\ -\Delta t & 1 \end{bmatrix} + \begin{bmatrix} Q_\theta & 0 \\ 0 & Q_{\dot{\theta}_b} \end{bmatrix}\Delta t

= \begin{bmatrix} P_{00} - \Delta t P_{10} & P_{01} - \Delta t P_{11} \\ P_{10} & P_{11} \end{bmatrix}_{k-1 | k-1} \begin{bmatrix} 1 & 0 \\ -\Delta t & 1 \end{bmatrix} + \begin{bmatrix} Q_\theta & 0 \\ 0 & Q_{\dot{\theta}_b} \end{bmatrix}\Delta t

= \begin{bmatrix} P_{00} -\Delta t P_{10} -\Delta t (P_{01} - \Delta t P_{11}) & P_{01} -\Delta t P_{11} \\ P_{10} -\Delta t P_{11} & P_{11} \end{bmatrix}_{k-1 | k-1} + \begin{bmatrix} Q_\theta & 0 \\ 0 & Q_{\dot{\theta}_b} \end{bmatrix}\Delta t

= \begin{bmatrix} P_{00} -\Delta t P_{10} -\Delta t (P_{01} - \Delta t P_{11}) + Q_\theta \Delta t & P_{01} -\Delta t P_{11} \\ P_{10} -\Delta t P_{11} & P_{11} + Q_{\dot{\theta}_b} \Delta t \end{bmatrix}

= \begin{bmatrix} P_{00} + \Delta t (\Delta t P_{11} - P_{01} - P_{10} + Q_\theta) & P_{01} -\Delta t P_{11} \\ P_{10} -\Delta t P_{11} & P_{11} + Q_{\dot{\theta}_b} \Delta t \end{bmatrix}

The equations above can be written in C like so:

P[0][0] += dt * (dt*P[1][1] - P[0][1] - P[1][0] + Q_angle);
P[0][1] -= dt * P[1][1];
P[1][0] -= dt * P[1][1];
P[1][1] += Q_gyroBias * dt;

Note that this is the part of the code where there was an error in the original code that I used.

Wolfram Alpha links:

Step 3:

\boldsymbol{\tilde{y}}_k = \boldsymbol{z}_k - \boldsymbol{H}\hat{x}_{k | k-1}

= \boldsymbol{z}_k - \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \theta \\ \dot{\theta}_b \end{bmatrix}_{k | k-1}

= \boldsymbol{z}_k - \theta_{k | k-1}

The innovation can be calculated in C like so:

y = newAngle - angle;
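
Here newAngle is the angle computed from the accelerometer. Exactly how it is computed depends on how the sensor is mounted and which axis you want the angle about, but as a rough sketch (assuming you want the angle about the X-axis from the raw accY and accZ readings) it could look like this:

#include <math.h>

// Sketch: accelerometer angle about the X-axis in degrees.
// accY and accZ are the raw (or scaled) accelerometer readings.
double getAccAngle(double accY, double accZ) {
    return atan2(accY, accZ) * 57.2957795; // convert from radians to degrees
}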

Wolfram Alpha links:

Step 4:

\boldsymbol{S}_k = \boldsymbol{H} \boldsymbol{P}_{k | k-1} \boldsymbol{H}^T + \boldsymbol{R}

= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}_{k | k-1} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \boldsymbol{R}

= {P_{00}}_{k | k-1} + \boldsymbol{R}

= {P_{00}}_{k | k-1} + var(v)

Again the C code is pretty simple:

S = P[0][0] + R_angle;

Wolfram Alpha links:

Step 5:

\boldsymbol{K}_k = \boldsymbol{P}_{k | k-1} \boldsymbol{H}^T \boldsymbol{S}_k^{-1}

\begin{bmatrix} K_0 \\ K_1 \end{bmatrix}_k = \begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}_{k | k-1} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \boldsymbol{S}_k^{-1}

= \begin{bmatrix} P_{00} \\ P_{10} \end{bmatrix}_{k | k-1} \boldsymbol{S}_k^{-1}

= \dfrac{\begin{bmatrix} P_{00} \\ P_{10}\end{bmatrix}_{k | k-1}}{\boldsymbol{S}_k}

Note that in other cases S can be a matrix, and then you can not just simply divide P by S; instead you have to calculate the inverse of the matrix, as shown in the short sketch after the C code below. See the following page for more information on how to do so.

The C implementation looks like this:

K[0] = P[0][0] / S;
K[1] = P[1][0] / S;
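
As a side note to the matrix inverse mentioned above: for a 2x2 matrix the inverse can be written in closed form, so a minimal sketch (assuming the determinant is non-zero) could look like this:

// Sketch: invert a 2x2 matrix A into Ainv.
// Returns 0 on success and -1 if the determinant is zero, i.e. no inverse exists.
int invert2x2(const double A[2][2], double Ainv[2][2]) {
    double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    if (det == 0)
        return -1;
    Ainv[0][0] =  A[1][1] / det;
    Ainv[0][1] = -A[0][1] / det;
    Ainv[1][0] = -A[1][0] / det;
    Ainv[1][1] =  A[0][0] / det;
    return 0;
}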

Wolfram Alpha links:

Step 6:

\boldsymbol{\hat{x}}_{k | k} = \boldsymbol{\hat{x}}_{k | k-1} + \boldsymbol{K}_k \; \boldsymbol{\tilde{y}}_k

\begin{bmatrix} \theta \\ \dot{\theta}_b \end{bmatrix}_{k | k} = \begin{bmatrix} \theta \\ \dot{\theta}_b \end{bmatrix}_{k | k-1} + \begin{bmatrix} K_0 \\ K_1 \end{bmatrix}_k \boldsymbol{\tilde{y}}_k

= \begin{bmatrix} \theta \\ \dot{\theta}_b \end{bmatrix}_{k | k-1} + \begin{bmatrix} K_0 \; \boldsymbol{\tilde{y}} \\ K_1 \; \boldsymbol{\tilde{y}} \end{bmatrix}_k

Yet again the equations end up pretty short, and can be written like so in C:

angle += K[0] * y;
bias += K[1] * y;

Step 7:

\boldsymbol{P}_{k | k} = (\boldsymbol{I} - \boldsymbol{K}_k \boldsymbol{H}) \boldsymbol{P}_{k | k-1}

\begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}_{k | k} = \left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} K_0 \\ K_1 \end{bmatrix}_k \begin{bmatrix} 1 & 0 \end{bmatrix}\right) \begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}_{k | k-1}

= \left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} K_0 & 0 \\ K_1 & 0 \end{bmatrix}_k \right) \begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}_{k | k-1}

= \begin{bmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{bmatrix}_{k | k-1} - \begin{bmatrix} K_0 \; P_{00} & K_0 \; P_{01} \\ K_1 \; P_{00} & K_1 \; P_{01} \end{bmatrix}

Remember that we decrease the error covariance matrix again, since the error of the estimate of the state has been decreased.
The C code looks like this:

double P00_temp = P[0][0];
double P01_temp = P[0][1];

P[0][0] -= K[0] * P00_temp;
P[0][1] -= K[0] * P01_temp;
P[1][0] -= K[1] * P00_temp;
P[1][1] -= K[1] * P01_temp;

Wolfram Alpha links:

Note that I have found that the following variances work perfectly for most IMUs:

const double Q_angle = 0.001;
const double Q_gyroBias = 0.003;
const double R_angle = 0.03;
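
Putting the seven steps together, a minimal sketch of a complete update function could look like the code below. It simply strings the snippets from the steps above together (the actual library on GitHub wraps the same steps in a C++ class, so treat this as an illustration rather than the library's exact API):

double angle = 0, bias = 0;        // state estimate: angle and gyro bias
double P[2][2] = {{0, 0}, {0, 0}}; // error covariance matrix (see the initialization note above)

// Uses the constants Q_angle, Q_gyroBias and R_angle defined above.
// newAngle: accelerometer angle in degrees, newRate: gyro rate in °/s,
// dt: time since the last call in seconds. Returns the filtered angle.
double getAngle(double newAngle, double newRate, double dt) {
    // Step 1: predict the state using the gyro measurement
    double rate = newRate - bias;
    angle += dt * rate;

    // Step 2: predict the error covariance
    P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + Q_angle);
    P[0][1] -= dt * P[1][1];
    P[1][0] -= dt * P[1][1];
    P[1][1] += Q_gyroBias * dt;

    // Step 3: innovation
    double y = newAngle - angle;

    // Step 4: innovation covariance
    double S = P[0][0] + R_angle;

    // Step 5: Kalman gain
    double K[2];
    K[0] = P[0][0] / S;
    K[1] = P[1][0] / S;

    // Step 6: correct the state estimate with the measurement
    angle += K[0] * y;
    bias += K[1] * y;

    // Step 7: update the error covariance (temporaries, since P[0][0] and P[0][1] are reused)
    double P00_temp = P[0][0];
    double P01_temp = P[0][1];
    P[0][0] -= K[0] * P00_temp;
    P[0][1] -= K[0] * P01_temp;
    P[1][0] -= K[1] * P00_temp;
    P[1][1] -= K[1] * P01_temp;

    return angle;
}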

Remember that it’s very important to set the target angle at startup if you need to use the output at startup. For more information, see the calibration routine for my balancing robot.
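
If you are curious what such a startup calibration could look like, here is a rough sketch (not the actual routine from the robot; readGyroRate() and readAccAngle() are hypothetical helper functions that return the raw gyro rate in °/s and the accelerometer angle in degrees):

// Rough sketch of a startup calibration: average the gyro while the robot is
// standing still to get the initial bias, and seed the angle with the accelerometer.
void calibrateAtStartup(double *startAngle, double *gyroBias) {
    const int samples = 100;
    double sum = 0;
    for (int i = 0; i < samples; i++)
        sum += readGyroRate();      // hypothetical function returning the gyro rate
    *gyroBias = sum / samples;      // average gyro reading while stationary
    *startAngle = readAccAngle();   // hypothetical function returning the accelerometer angle
}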

In case you missed it, here is the library I wrote; it can be used by any microcontroller that supports floating point math. The source code can be found at github: https://github.com/TKJElectronics/KalmanFilter.

If you prefer a video explanation about the Kalman filter, I recommend the following video series: http://www.youtube.com/watch?v=FkCT_LV9Syk.

Note that you can not use the library if you need to represent a full 3D orientation, as Euler angles suffer from what is called gimbal lock; you will need to use quaternions to do that, but that is a whole other story. For now take a look at the following page.

This is all for now. I hope that you will find it helpful; if you do, or if you have any questions, feel free to post a comment below. It supports LaTeX syntax as well, if you need to write equations.
If you spot any errors please let me know as well.

  1. June 11th, 2014 at 16:35 | #1

    @James
    1. Simply don’t add PI at these lines: https://github.com/TKJElectronics/Example-Sketch-for-IMU-including-Kalman-filter/blob/master/IMU/ITG3205_ADXL345/ITG3205_ADXL345.ino#L117 and https://github.com/TKJElectronics/Example-Sketch-for-IMU-including-Kalman-filter/blob/master/IMU/ITG3205_ADXL345/ITG3205_ADXL345.ino#L123.
    2. Simply put it on one of its sides. One of the axes should then read 1g while the others should read 0g. If they don't, this will be your zero value. Repeat this until you have this behavior on all three axes.

    @faadhil
    1. Please read: http://arduino.cc/forum/index.php/topic,58048.0.html where I have described what is going on.
    2. Because the gyro and accelerometer are not correctly orientated on the PCB, so you will have to compensate for this in the code.

    @faadhil
    Please see the following example: https://github.com/TKJElectronics/Example-Sketch-for-IMU-including-Kalman-filter/tree/master/IMU/MPU6050_HMC5883L.

    @Jesse
    I am actually building one as well. I will write a blog post about it once it is finished.
    The code is available here: https://github.com/Lauszus/BalancingRobotFullSize and it is already balancing as we speak ;)

  2. dsl
    July 2nd, 2014 at 22:28 | #2

    Hi, I think there's a small bug in your implementation of step 7. On lines 75 and 76 you refer to the a priori error covariance matrix (computed in lines 73 and 74), but elements P[0][0] and P[0][1] have already been updated to their a posteriori values. Could you double check? I could be reading this wrong.

    Otherwise, this is super helpful. Great read, thanks.

  3. klaus
    July 10th, 2014 at 12:02 | #3

    Hi, I have simulated the whole thing with Matlab. If I “turn” my virtual object (changing the gyro and acc values) the estimated angle is correct, but the estimated drift is totally wrong. Did you watch the drift estimation in your model? I think there might be an error in the model?

  4. Manh Hoang
    July 12th, 2014 at 10:26 | #4

    rate = newRate - bias;
    angle += dt * rate;

    I can not understand how we can write this from the matrix. Could you please explain more?

    Thanks in advance.

  5. July 14th, 2014 at 12:30 | #5

    @dsl
    You are absolutely correct. I will update it soon; I am moving to San Francisco to study next semester, so I haven't got much time right now.

    @klaus
    I just tried to output the unbiased rate using getRate() and it looks alright to me. Could you try to do it in hardware and check as well?

    @Manh Hoang
    The two lines were swapped. Please have a look now.

  6. klaus
    July 14th, 2014 at 14:52 | #6

    Please have a look at my simulation http://www.mikrocontroller.net/topic/338563
    Maybe you detect an error in my simulation?

  7. Douglas Santana
    July 17th, 2014 at 15:17 | #7

    Hi Lauszus! In the definition of “Qk” where you say “accelerometer noise”, I think you should say “gyroscope noise”. Please take a look at it. Congratulations, nice article!

  8. Thamanoon
    July 29th, 2014 at 02:57 | #8

    Thank you for your explanation. I implemented your Kalman filter on my STM32F3Discovery board; it works, but I don't know whether it is a good way to do it or not.
    http://www.electoday.com/index.php/topic,11432.0.html
    Thanks!

  9. Robert
    July 31st, 2014 at 10:22 | #9

    Hi Lauszus, thanks for this great practical tutorial! I implemented your code successfully. For anyone who is interested in how the Kalman filter equations are derived in a more or less intuitive manner, this paper is a great source to begin with: http://www.cl.cam.ac.uk/~rmf25/papers/Understanding%20the%20Basis%20of%20the%20Kalman%20Filter.pdf
    Again, thanks Lauszus!

  10. xeonqq
    August 7th, 2014 at 15:30 | #10

    Hi, Lauszus
    I used your kalman filter code also for my MPU-6050
    I printed out the angles over the serial port and compared the plot of the raw angle from the accelerometer with the angles from the complementary filter and the Kalman filter. I found that the Kalman filter has a relatively big time delay compared to the complementary filter; do you know what parameters influence the delay of the Kalman filter?
    plot is as follows:
    https://docs.google.com/document/d/1_UjrBrdR2eVgniOkUlND6guSIYaW88QAqSPfV8z3JKg/edit?usp=sharing

  11. Rocky
    August 31st, 2014 at 10:35 | #11

    Since the angle \theta calculated by the accelerometer has no bias, why should we subtract the gyro bias in the system equation? What's more, these are two independent sensors.

  12. soodabeh
    September 1st, 2014 at 08:15 | #12

    Thank you for your perfect article, dear Lauszus :)
    Can I implement it in Java for Android devices?
    I mean, are the sensors the same as on robots?
    Have you seen any version of this code for Java?

    thank you again :)

  13. Mansuk, Yang
    September 2nd, 2014 at 14:53 | #13

    First of all, I would like to express deep thanks to you. I tested MPU6050_HMC5883L.ino with a GY-68. Pitch and roll look like they are working correctly, but yaw does not seem to work.

    There is also something I could not understand in the updateMPU6050 routine:

    void updateMPU6050() {
      while (i2cRead(MPU6050, 0x3B, i2cData, 14)); // Get accelerometer and gyroscope values
      accX = ((i2cData[0] << 8) | i2cData[1]);
      accY = -((i2cData[2] << 8) | i2cData[3]);
      accZ = ((i2cData[4] << 8) | i2cData[5]);
      tempRaw = (i2cData[6] << 8) | i2cData[7];
      gyroX = -(i2cData[8] << 8) | i2cData[9];
      gyroY = (i2cData[10] << 8) | i2cData[11];
      gyroZ = -(i2cData[12] << 8) | i2cData[13];
    }

    gyroX and gyroZ have a minus sign in the code just above. Is that correct?

    I also changed double to int for accX, accY, accZ, gyroX, gyroY and gyroZ, and then it does not work.

  14. ary kusuma ningsih
    September 6th, 2014 at 08:42 | #14

    Hi Lauszus! Thank you so much for the explanations! It helps me a lot but still I have to do more paper research regarding this topic. Have a nice day :)
