Md. Rabiul Islam, Andrew Vargo, Motoi Iwata, Masakazu Iwamura, Koichi Kise
A major drawback of deep learning (DL) algorithms is that they require large, labeled data sets to achieve peak performance. A DL technique that can overcome this constraint is self-supervised learning, which comes in two forms: non-contrastive self-supervised learning (SSL) and contrastive self-supervised learning (contrastive learning). This paper evaluates a contrastive learning method, the simple framework for contrastive learning of visual representations (SimCLR), on the task of fine-grained reading detection. We employ in-the-wild electrooculography (EOG) data sets that capture eye movement behaviors to evaluate the SimCLR method and compare it against SSL and purely supervised methods. The results show maximum performance gains of 3.02 and 3.96 percentage points over the SSL and purely supervised methods, respectively, given an equal amount of training data. In addition, the SimCLR method shows a data efficiency of about 80%. These results offer system designers and researchers a direction for handling the scarcity of large labeled data sets when developing DL models that help improve user reading habits through eye movement behaviors.
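As a rough illustration of the contrastive objective that SimCLR optimizes, the following sketch implements the NT-Xent (normalized temperature-scaled cross-entropy) loss in NumPy. This is not the authors' implementation; the function name, temperature value, and pairing convention (rows 2k and 2k+1 are the two augmented views of example k) are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """Sketch of the NT-Xent loss used by SimCLR.

    z: array of shape (2N, d); rows 2k and 2k+1 hold the embeddings of
    two augmented views of the same example (pairing is an assumption).
    """
    # L2-normalize embeddings so dot products are cosine similarities
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = z.shape[0]
    # Exclude each row's similarity with itself from the softmax
    np.fill_diagonal(sim, -np.inf)
    # Index of each row's positive partner: 0<->1, 2<->3, ...
    pos = np.arange(n) ^ 1
    # Log-softmax over each row, then pick the positive-pair entry
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()
```

Minimizing this loss pulls the two views of the same example together while pushing apart all other embeddings in the batch, which is what lets SimCLR learn representations from unlabeled EOG signals before any labeled fine-tuning.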