Large-scale distributed Kalman filtering via an optimization approach

Publication: 6285366

arXiv: 1704.03125
MaRDI QID: Q6285366
FDO: Q6285366

Mathias Hudoba de Badyn, Mehran Mesbahi

Publication date: 10 April 2017

Abstract: Large-scale distributed systems, such as sensor networks, often need to achieve filtering and consensus on an estimated parameter from high-dimensional measurements. Running a Kalman filter on every node in such a network is computationally intensive; in particular, the matrix inversion in the Kalman gain update step is expensive. In this paper, we extend previous results in distributed Kalman filtering and large-scale machine learning to propose a gradient descent step for updating an estimate of the error covariance matrix; this is then embedded and analyzed in the context of distributed Kalman filtering. We provide properties of the resulting filters, in addition to a number of applications throughout the paper.
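The central idea in the abstract, avoiding the explicit matrix inversion in the Kalman gain update by iterating a gradient step related to the error covariance estimate, can be illustrated with a minimal sketch. The sketch below is an illustrative stand-in, not the paper's algorithm: it replaces the gain computation K = P H^T (H P H^T + R)^{-1} with a few gradient-descent iterations on the quadratic objective 0.5 * ||K S - P H^T||_F^2, where S = H P H^T + R, and all names and parameters (gain_via_gradient_descent, kalman_step, n_steps) are assumptions made for illustration.

    import numpy as np

    def gain_via_gradient_descent(P, H, R, n_steps=50):
        # Approximate K = P H^T (H P H^T + R)^{-1} without forming the inverse:
        # minimize f(K) = 0.5 * ||K S - P H^T||_F^2 with S = H P H^T + R,
        # whose unique minimizer is the exact Kalman gain.
        S = H @ P @ H.T + R                    # innovation covariance
        B = P @ H.T                            # right-hand side of K S = B
        K = np.zeros_like(B)                   # start from the zero gain
        eta = 1.0 / np.linalg.norm(S, 2) ** 2  # safe step size: 1 / ||S||_2^2
        for _ in range(n_steps):
            K -= eta * (K @ S - B) @ S         # gradient of f with respect to K
        return K

    def kalman_step(x, P, z, A, H, Q, R):
        # One predict/update cycle using the iteratively computed gain.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        K = gain_via_gradient_descent(P_pred, H, R)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
        return x_new, P_new

Under these assumptions, each gradient iteration costs only matrix multiplications rather than a matrix inversion, which is the kind of trade-off the abstract motivates; the exact update rule, step size, and distributed embedding used by the authors are given in the paper itself.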
