While working on a literature review of distributed estimation algorithms, I think it is insightful to list several of them side by side to see how they have developed.

  • Consensus KF

    This is the earliest algorithm along this line of investigation. Each node runs a consensus protocol until the exchanged quantities converge, and the converged values are used to replicate the centralized KF locally, so in theory every node ends up with the same estimate. One has to note, however, that the consensus iteration only converges asymptotically, i.e., the convergence time is infinite (see the average-consensus sketch after this list).

  • Diffusion KF

    This algorithm improves the local estimate by taking a convex combination of the estimates of neighboring nodes (the combination step is sketched after this list). Without a corresponding covariance update, the so-called covariance term in this algorithm is not necessarily the true error covariance. On the other hand, the convergence criterion of the algorithm is the same as that of the algorithm without the diffusion step, which leaves the effect of the diffusion step somewhat unclear.

  • Consensus+Innovations KF

    This algorithm was proposed about three years ago. Its main goal is to get rid of the infinite convergence time of the consensus KF (its update form is sketched below). On closer inspection, however, each local node keeps a parameter of the overall covariance, which is only possible because both the system model and the communication topology are time-invariant.
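
To make the consensus KF concrete, here is a minimal sketch of the average-consensus primitive it builds on. The ring graph, step size, and initial values are assumptions made up for illustration; the point is that the iteration only approaches the network average asymptotically, which is exactly the infinite convergence time mentioned above.

```python
# Minimal average-consensus sketch (hypothetical 4-node ring, illustration only).
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix of the ring
deg = A.sum(axis=1)                         # node degrees
eps = 0.25                                  # step size, must be < 1 / max degree
x = np.array([1.0, 4.0, 2.0, 7.0])          # initial local values
target = x.mean()

for k in range(100):                        # any finite number of rounds
    x = x + eps * (A @ x - deg * x)         # x_i += eps * sum_j (x_j - x_i)

print(x, "vs. network average", target)     # close to, but never exactly, 3.5
```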
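
The diffusion KF replaces the consensus iteration with a single combination per time step. Below is a sketch of that combination step alone; the weights, neighbor sets, and intermediate estimates are made-up placeholders, and the local measurement update that would produce the intermediate estimates is omitted.

```python
# Sketch of the diffusion combination step only (all values are assumptions).
import numpy as np

def diffuse(psi, neighbors, weights):
    """Replace each node's intermediate estimate psi[i] with a convex
    combination of the intermediate estimates in its closed neighborhood."""
    combined = {}
    for i, nbrs in neighbors.items():
        w = weights[i]
        assert abs(sum(w) - 1.0) < 1e-12 and min(w) >= 0.0   # convex weights
        combined[i] = sum(c * psi[j] for c, j in zip(w, nbrs))
    return combined

# Toy example: three nodes on a line, uniform weights over each closed neighborhood.
psi = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), 2: np.array([2.0, 2.0])}
nbrs = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
wts = {0: [0.5, 0.5], 1: [1 / 3, 1 / 3, 1 / 3], 2: [0.5, 0.5]}
print(diffuse(psi, nbrs, wts))
```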

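For the consensus+innovations KF, a rough sketch of the update form is given below: each node combines a consensus term on the differences to its neighbors' estimates with a local innovation term, so only a single exchange per measurement is needed. The matrices and the gains alpha and beta are placeholders of my own, not the gains derived in the original paper.

```python
# Rough consensus+innovations update form (gains and matrices are assumptions).
import numpy as np

def ci_update(x_hat_i, neighbor_estimates, y_i, F, H_i, alpha, beta):
    """One consensus+innovations step at node i: prediction with F, a consensus
    term over neighboring estimates, and a local innovation term."""
    consensus = sum(x_hat_i - x_hat_j for x_hat_j in neighbor_estimates)
    innovation = y_i - H_i @ x_hat_i
    return F @ x_hat_i - beta * consensus + alpha * (H_i.T @ innovation)

# Toy call with a 2-dimensional state and a scalar measurement (values made up).
F = np.eye(2)
H = np.array([[1.0, 0.0]])
x_i = np.array([0.5, 0.5])
neighbors = [np.array([1.0, 0.0]), np.array([0.2, 0.4])]
y = np.array([0.8])
print(ci_update(x_i, neighbors, y, F, H, alpha=0.3, beta=0.2))
```
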
The algorithm that we are working on now avoids all of the disadvantages mentioned above!
