Francesco Locatello, Stefan Bauer, Mario Lučić, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
The goal of unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision. In this paper, we summarize the results of (Locatello et al. 2019b) and focus on their implications for practitioners. We discuss the theoretical result showing that unsupervised learning of disentangled representations is fundamentally impossible without inductive biases, and the practical challenges this entails. Finally, we comment on our experimental findings, highlighting the limitations of state-of-the-art approaches and directions for future research.