Knowledge-augmented deep learning is a paradigm in which domain knowledge is identified and integrated into deep models. Conventional methods typically employ task-specific approaches to gather external knowledge from various sources. In contrast, large language models are extensively pre-trained and can serve as a comprehensive source of external knowledge. In this paper, we propose CoT-KA, a Chain-of-Thought-based method for knowledge augmentation in deep learning. Unlike conventional augmentation methods, CoT-KA requires neither an additional knowledge retrieval model nor a knowledge reasoning model. Our results demonstrate that CoT-KA outperforms both pure CoT-based methods and the non-augmented method on the majority of eleven publicly available benchmarks spanning a variety of reasoning tasks.
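The core idea stated above, using an LLM's chain-of-thought rationales as the external knowledge that augments a deep model's input, can be illustrated with a minimal sketch. The function names (`generate_cot`, `augment_with_cot`), the separator tokens, and the stubbed LLM call are all illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the CoT-KA idea: treat CoT rationales generated by an
# LLM as external knowledge and append them to the original input before
# it is fed to a deep model. No retrieval or reasoning model is involved.

def generate_cot(question: str) -> list[str]:
    """Hypothetical stand-in for an LLM call that samples CoT rationales.

    A real implementation would prompt an LLM with few-shot
    chain-of-thought exemplars and sample several reasoning paths.
    """
    return [f"Reasoning sketch {i} for: {question}" for i in range(2)]


def augment_with_cot(question: str) -> str:
    """Build the knowledge-augmented input: question plus rationales.

    The [KNOWLEDGE] / [SEP] markers are illustrative; any input format
    the downstream deep model was trained with would do.
    """
    rationales = generate_cot(question)
    return question + " [KNOWLEDGE] " + " [SEP] ".join(rationales)


augmented = augment_with_cot("Is the Eiffel Tower taller than Big Ben?")
```

The augmented string would then be passed to an ordinary fine-tuned deep model, which is what lets CoT-KA skip the separate retrieval and reasoning components that conventional augmentation pipelines require.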