Deniz Erdoğmuş, Y. N. Rao, José C. Príncipe
Supervised adaptive system training is traditionally performed with available pairs of input-output data, and the system weights are fixed after this training procedure. Recently, in the context of machine learning, where the desired outputs are discrete-valued, the idea of exploiting unlabeled samples to improve classification performance has been proposed. We introduce an information-theoretic framework based on density divergence minimization to obtain extended training algorithms. Our goal is to provide a theoretical framework upon which efficient algorithms can be built to this end.
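To make the idea concrete, the following is a minimal sketch, not the paper's actual algorithm: a logistic classifier trained on a few labeled pairs, where unlabeled samples contribute through a divergence penalty that pulls the model's predicted label density toward an assumed class prior. The KL-to-prior penalty, the `lam` weight, and the finite-difference optimizer are all illustrative assumptions, chosen only to show how a density-divergence term can extend a purely supervised loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def semi_supervised_loss(w, X_lab, y_lab, X_unl, prior=0.5, lam=0.1, eps=1e-12):
    """Cross-entropy on labeled pairs plus a KL divergence pulling the
    mean predicted label density on unlabeled inputs toward a class prior.
    (Illustrative surrogate, not the divergence used in the paper.)"""
    p = sigmoid(X_lab @ w)
    ce = -np.mean(y_lab * np.log(p + eps) + (1 - y_lab) * np.log(1 - p + eps))
    q = np.mean(sigmoid(X_unl @ w))  # model's marginal P(y=1) on unlabeled data
    kl = (q * np.log((q + eps) / prior)
          + (1 - q) * np.log((1 - q + eps) / (1 - prior)))
    return ce + lam * kl

def train(X_lab, y_lab, X_unl, steps=200, lr=0.5, h=1e-5):
    """Plain finite-difference gradient descent on the combined loss."""
    w = np.zeros(X_lab.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(w.size):
            e = np.zeros_like(w)
            e[i] = h
            grad[i] = (semi_supervised_loss(w + e, X_lab, y_lab, X_unl)
                       - semi_supervised_loss(w - e, X_lab, y_lab, X_unl)) / (2 * h)
        w -= lr * grad
    return w

# Toy data: two Gaussian blobs; only a handful of points carry labels.
X_pos = rng.normal(+1.5, 1.0, size=(50, 2))
X_neg = rng.normal(-1.5, 1.0, size=(50, 2))
X_lab = np.vstack([X_pos[:5], X_neg[:5]])
y_lab = np.array([1] * 5 + [0] * 5)
X_unl = np.vstack([X_pos[5:], X_neg[5:]])

w = train(X_lab, y_lab, X_unl)
acc = np.mean((sigmoid(X_unl @ w) > 0.5) == np.array([1] * 45 + [0] * 45))
```

The supervised term alone only sees ten points; the divergence term lets the remaining ninety unlabeled samples shape the decision boundary, which is the mechanism the abstract describes.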