Recently, Jadbabaie et al. presented a social learning model in which agents update their beliefs by combining Bayesian posteriors based on personal observations with weighted averages of their neighbors' beliefs. For a network with fixed topology, they gave sufficient conditions under which all agents learn the true state almost surely. In this paper, we extend the model to networks with time-varying topologies. Under certain assumptions on the weights and connectivity, we prove that agents eventually make correct forecasts of upcoming signals and that the beliefs of all agents reach a consensus. In addition, if no state is observationally equivalent to the true state from the point of view of all agents, we show that the consensus belief eventually reflects the true state.
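The update rule described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the function `social_learning_step`, the array shapes, and the specific likelihoods and weights are all assumptions made for the example. Each agent places its self-weight on its own Bayesian posterior and the remaining weight on neighbors' current beliefs; under time-varying topologies, the weight matrix would simply change from step to step.

```python
import numpy as np

def social_learning_step(beliefs, signals, likelihoods, weights):
    """One belief update in an assumed Jadbabaie-et-al.-style model.

    beliefs     : (n_agents, n_states) array, each row a probability vector
    signals     : length-n_agents list of observed signal indices
    likelihoods : per-agent (n_states, n_signals) arrays, P(signal | state)
    weights     : (n_agents, n_agents) row-stochastic influence matrix
                  (time-varying topologies would supply a new matrix each step)
    """
    n_agents, _ = beliefs.shape
    new_beliefs = np.zeros_like(beliefs)
    for i in range(n_agents):
        # Bayesian posterior from agent i's private signal
        like = likelihoods[i][:, signals[i]]
        posterior = beliefs[i] * like
        posterior /= posterior.sum()
        # Convex combination: self-weight on the posterior,
        # remaining weight on neighbors' current beliefs
        new_beliefs[i] = weights[i, i] * posterior
        for j in range(n_agents):
            if j != i:
                new_beliefs[i] += weights[i, j] * beliefs[j]
    return new_beliefs

# Toy run: 2 agents, 2 states, informative signals favoring state 0
beliefs = np.array([[0.5, 0.5], [0.5, 0.5]])
likelihoods = [np.array([[0.8, 0.2], [0.2, 0.8]])] * 2
weights = np.array([[0.5, 0.5], [0.5, 0.5]])
updated = social_learning_step(beliefs, [0, 0], likelihoods, weights)
```

Because each row of `weights` sums to one and each Bayesian posterior is normalized, every updated belief remains a probability vector; repeated signals drawn from the true state push the shared component of the beliefs toward it.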