Jinfu Fan, Yang Yu, Zhongjie Wang, Jinyi Gu
Partial Label Learning (PLL) is a weakly supervised learning framework in which each training instance is associated with a set of candidate labels, only one of which is the ground truth. The goal is to identify the true label of each training instance. Most existing PLL algorithms disambiguate the candidate labels directly without correcting the disambiguated results, leaving them vulnerable to instances that are easily misjudged. In this paper, GraphDCN, a novel disambiguation correction network with an inductive graph representation learning model, is proposed. GraphDCN consists of a disambiguation model and a correction model. For a given instance, the disambiguation model fits its underlying ground-truth label using the candidate label distributions of the instances connected to it, while the correction model maximizes the distance between the disambiguated labels and the non-candidate labels and uses label probability thresholds to correct disambiguated labels that may be wrong. As training proceeds, the disambiguation and correction models alternately and iteratively boost each other's performance. Moreover, for the implementation of the disambiguation model, a partial cross-entropy formulation is proposed that estimates the ground-truth label loss by updating the ambiguity confidence matrix, and is proven to converge in the PLL setting. Experimental results demonstrate the superior performance of GraphDCN.
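The partial cross-entropy idea described above can be sketched as follows: a cross-entropy loss computed against a confidence distribution that is restricted to each instance's candidate labels and re-estimated from the current model outputs. This is a minimal illustration under assumed conventions; the function names, the simple renormalization scheme, and the binary candidate mask are assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_confidence(probs, candidate_mask):
    """Re-estimate the ambiguity confidence matrix: zero out
    non-candidate labels and renormalize each row to sum to 1.
    probs: (n, k) model label probabilities.
    candidate_mask: (n, k) binary mask of candidate labels."""
    conf = probs * candidate_mask
    return conf / conf.sum(axis=1, keepdims=True)

def partial_cross_entropy(probs, candidate_mask, confidence):
    """Cross-entropy against the candidate-restricted confidence
    distribution (a sketch of the 'partial cross entropy' idea)."""
    conf = confidence * candidate_mask
    conf = conf / conf.sum(axis=1, keepdims=True)
    return -(conf * np.log(probs + 1e-12)).sum(axis=1).mean()

# Example: two instances, three labels; each row of the mask marks
# that instance's candidate label set.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.6, 0.3]])
mask = np.array([[1, 1, 0],
                 [0, 1, 1]])
conf = update_confidence(probs, mask)
loss = partial_cross_entropy(probs, mask, conf)
```

In a training loop, `update_confidence` and the loss minimization would alternate, which mirrors the iterative boosting between disambiguation and correction described in the abstract.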