This paper proposes a multi-task deep neural network (MT-DNN) architecture for the multi-label learning problem, in which the learning of each label is cast as a binary classification task: the positive class means "an instance owns this label" and the negative class means "an instance does not own this label". Multi-label learning is thereby transformed into multiple binary classification tasks. Since a deep neural network (DNN) architecture can learn good intermediate representations shared across tasks, we generalize the single classification task of a traditional DNN into multiple binary classification tasks by defining the output layer with a negative-class node and a positive-class node for each label. After a pretraining process similar to that of deep belief nets, we redefine the label assignment error of the MT-DNN and apply the back-propagation algorithm to fine-tune the network. To evaluate the proposed model, we carry out image annotation experiments on two public image datasets, containing 2,000 and 30,000 images respectively. The experiments demonstrate that the proposed model achieves state-of-the-art performance.
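The output-layer construction described above can be sketched as follows. This is a minimal illustrative forward pass, not the paper's implementation: all dimensions, weights, and function names are hypothetical, and the per-label (negative, positive) node pair is realized as a two-way softmax over each pair of output units.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 10 input features,
# 8 shared hidden units, 3 labels.
n_in, n_hidden, n_labels = 10, 8, 3

# Shared hidden layer: the intermediate representation common to all tasks.
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)

# Output layer: one (negative, positive) node pair per label,
# i.e. 2 * n_labels output units in total.
W2 = rng.normal(scale=0.1, size=(n_hidden, 2 * n_labels))
b2 = np.zeros(2 * n_labels)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Return P(label is present | x) for each label."""
    h = sigmoid(x @ W1 + b1)                     # shared representation
    logits = (h @ W2 + b2).reshape(n_labels, 2)  # (neg, pos) per label
    # Softmax over each (neg, pos) pair: one binary task per label.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    return probs[:, 1]                           # positive-class probability

x = rng.normal(size=n_in)
p = forward(x)
print(p.shape)  # (3,) — one probability per label
```

Each label's pair of output nodes behaves as an independent binary classifier, while the hidden layer below it is shared, which is what makes the formulation multi-task.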