Neural networks are powerful tools for tackling difficult tabular data modeling problems. However, they are far less interpretable than alternatives such as linear regression or decision trees, where how the model processes the data can be read more or less directly off the learned parameters. Neural network architectures are significantly more complex, and therefore harder to interpret. At the same time, it is important to interpret any model used in production, to verify that it is not relying on shortcuts or spurious correlations that can lead to poor behavior once deployed.
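To make the contrast concrete, here is a minimal sketch (with made-up data) of what "reading the model off the learned parameters" means for linear regression: after fitting a one-variable model y ≈ w·x + b by ordinary least squares, the weight w directly states how much the prediction changes per unit of x. The `fit_line` helper and the data are illustrative, not from any particular library.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = w*x + b in one variable."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Toy data generated from y = 2x + 1 exactly.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
w, b = fit_line(xs, ys)
print(w, b)  # → 2.0 1.0: "each unit of x adds 2.0 to the prediction"
```

A fitted neural network offers no such direct reading: its parameters are weight matrices whose individual entries have no standalone meaning, which is why separate interpretation methods are needed.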