This work addresses the problem of semantic scene understanding under foggy road conditions. Although marked progress has been made in semantic scene understanding in recent years, it has mainly concentrated on clear-weather outdoor scenes. Extending semantic segmentation methods to adverse weather conditions such as fog is crucial for outdoor applications like self-driving cars. In this paper, we propose a novel method that uses purely synthetic data to improve performance on unseen real-world foggy scenes captured in the streets of Zurich and its surroundings. Our results highlight the potential and power of photo-realistic synthetic images for training and especially fine-tuning deep neural networks. Our contributions are threefold: 1) we create a purely synthetic, high-quality foggy dataset of 25,000 unique outdoor scenes, which we call Foggy Synscapes and plan to release publicly; 2) we show that with this data we outperform previous approaches on real-world foggy test data; 3) we show that a combination of our data and previously used data can further improve performance on real-world foggy data.