In this work, we study a security problem in consensus-based distributed estimation. Even without knowledge of the system model, a replay attacker can select an arbitrary subset of sensors and falsify their measurements by replaying previously recorded sensor readings. We prove that for a stable system, the estimation error covariance under any such attack strategy is not only bounded but also re-enters a steady state. The Kullback-Leibler (K-L) divergence, which characterizes the detection performance, is upper bounded as well, so the attack is $\epsilon$-stealthy. When the system is unstable, by contrast, we prove that the trace of the estimation error covariance is lower bounded by an exponential function of time, which implies that the adversary can severely degrade the estimation performance. To guarantee that all replay attacks are detectable, we propose a criterion for designing the system parameters. Moreover, we analyze the case in which the adversary waits to attack until the current measurements are close to the initial part of the recording. Finally, numerical simulations are provided to illustrate the theoretical results.
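As a side illustration (not part of the paper's derivation), the $\epsilon$-stealthiness condition can be checked numerically: an attack is $\epsilon$-stealthy when the K-L divergence between the attacked and nominal innovation (residual) distributions stays below $\epsilon$. For Gaussian residuals this divergence has a closed form; the sketch below assumes both distributions are multivariate Gaussian, with all function and variable names chosen here for illustration.

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """Closed-form K-L divergence D( N(mu0,S0) || N(mu1,S1) )
    between two k-dimensional Gaussians."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)        # covariance mismatch
                  + d @ S1_inv @ d             # mean shift
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def is_eps_stealthy(mu_atk, S_atk, mu_nom, S_nom, eps):
    """Hypothetical stealthiness test: divergence of the attacked
    residual distribution from the nominal one is at most eps."""
    return gaussian_kl(mu_atk, S_atk, mu_nom, S_nom) <= eps
```

For example, a replayed measurement stream that shifts the residual mean by one standard deviation in a scalar channel gives a divergence of $0.5$, so such an attack would be $\epsilon$-stealthy only for detectors with threshold $\epsilon \ge 0.5$.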
Jiahao Huang, Wen Yang, Daniel W. C. Ho, Fangfei Li, Yang Tang