Lingtao Meng, Qiuyu Zhang, Rui Yang, Yibo Huang
Deep hashing enhances image retrieval accuracy by integrating hash encoding with deep neural networks. However, existing unsupervised deep hashing methods rely primarily on the rotational invariance of images to construct triplets, yielding triplets that are unsatisfactory in both reliability and quantity. Additionally, some methods fail to adequately consider the relative similarity information between samples. To overcome these limitations, we propose a novel unsupervised deep triplet hashing method for image retrieval (abbreviated as UDTrHash). UDTrHash exploits the extremal cosine similarities of images' deep features to construct more reliable triplets of the first type, and expands the formed triplets through data augmentation strategies to obtain a larger number of triplets. Furthermore, we design a new triplet loss function to enhance the discriminative ability of the generated hash codes. Extensive experiments demonstrate that UDTrHash outperforms existing state-of-the-art hashing methods on three public benchmark datasets, including MIRFlickr25K.
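The triplet-construction idea described above can be illustrated with a minimal sketch: for each anchor image, the most cosine-similar deep feature is taken as the positive and the least similar as the negative, and a hinge-style triplet loss rewards the anchor being closer to the positive than to the negative by a margin. The function names, the margin value, and the loss form below are our own illustrative assumptions, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors (plain Python lists).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mine_triplet(features, anchor_idx):
    # Extremal-similarity mining (illustrative): the sample with the highest
    # cosine similarity to the anchor becomes the positive, the one with the
    # lowest becomes the negative.
    sims = [(i, cosine(features[anchor_idx], f))
            for i, f in enumerate(features) if i != anchor_idx]
    pos = max(sims, key=lambda t: t[1])[0]
    neg = min(sims, key=lambda t: t[1])[0]
    return anchor_idx, pos, neg

def triplet_loss(fa, fp, fn, margin=0.5):
    # Hinge on cosine similarities: zero once the positive beats the
    # negative by at least `margin` (margin value is an assumption).
    return max(0.0, margin - cosine(fa, fp) + cosine(fa, fn))
```

For example, with features `[[1, 0], [0.9, 0.1], [0, 1], [-1, 0]]` and anchor index 0, the positive is index 1 and the negative is index 3, and that well-separated triplet incurs zero loss.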