Grasp detection learning techniques are crucial for robotic manipulation: they transfer knowledge learned by robots to new real-world objects, enabling robots to grasp unknown objects with ease. However, most previous works have not adequately exploited spatial information features, leading to subpar grasping performance. Designing a grasp detection network that effectively utilizes spatial information, efficiently encodes inter-channel relationships, and captures long-range dependencies to enhance robot grasping performance remains a challenging problem. To address this, we introduce EDCoA-net, a novel grasp detection network built on an encoder-decoder architecture. Within this network, we propose a new module, the CoRA module, which integrates the idea of residual connections with Coordinate Attention to enhance the expressive capability of the learned features while simultaneously encoding channel relationships and long-range dependencies. We evaluate the network on the publicly available Jacquard grasping dataset, where it achieves a high accuracy of 95.4%, demonstrating the performance of EDCoA-net. We further verify the efficacy of the CoRA module through a series of ablation experiments.
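The abstract describes the CoRA module as Coordinate Attention combined with a residual connection. The sketch below illustrates that combination for a single feature map in plain NumPy; it is an assumption-laden toy (the actual EDCoA-net layer shapes, reduction ratio, and normalization are not given in the abstract), with the 1x1 convolutions simplified to channel-mixing matrix multiplies and the weights (`w_reduce`, `w_h`, `w_w`) being hypothetical names.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention_residual(x, w_reduce, w_h, w_w):
    """Toy coordinate attention with a residual connection, loosely
    following the CoRA idea sketched in the abstract (NOT the paper's
    exact layer). x: (C, H, W) feature map. Hypothetical weight shapes:
    w_reduce: (C_mid, C); w_h, w_w: (C, C_mid), standing in for 1x1 convs."""
    C, H, W = x.shape
    # Directional pooling captures long-range context along each axis
    pool_h = x.mean(axis=2)                        # (C, H): average over W
    pool_w = x.mean(axis=1)                        # (C, W): average over H
    # Shared channel-reduction transform on the concatenated descriptors
    f = np.concatenate([pool_h, pool_w], axis=1)   # (C, H + W)
    f = np.maximum(w_reduce @ f, 0.0)              # (C_mid, H + W), ReLU
    f_h, f_w = f[:, :H], f[:, H:]
    # Per-direction attention maps in (0, 1), encoding channel relations
    a_h = sigmoid(w_h @ f_h)                       # (C, H)
    a_w = sigmoid(w_w @ f_w)                       # (C, W)
    # Re-weight the input, then add the residual path
    y = x * a_h[:, :, None] * a_w[:, None, :]
    return x + y
```

With all weights set to zero, both attention maps collapse to 0.5 everywhere, so the output reduces to `x + 0.25 * x`; the residual path guarantees the identity signal always survives, which is the usual motivation for wrapping an attention block in a residual connection.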
Zhenning Zhou, Xiaoxiao Zhu, Qixin Cao