JOURNAL ARTICLE

Gaze estimation via self-attention augmented convolutions

Abstract

Although deep learning methods have recently boosted the accuracy of appearance-based gaze estimation, there is still room for improvement in the network architectures designed for this task. We therefore propose a novel network architecture grounded in self-attention augmented convolutions to improve the quality of the features learned while training a shallower residual network. The rationale is that the self-attention mechanism can help outperform deeper architectures by learning dependencies between distant regions in full-face images. It can also produce better, more spatially aware feature representations from the face and eye images before gaze regression. We dub our framework ARes-gaze; it employs our Attention-augmented ResNet (ARes-14) as twin convolutional backbones. In our experiments, the proposed framework decreased the average angular error by 2.38% compared to state-of-the-art methods on the MPIIFaceGaze data set and ranked second on the EyeDiap data set. Notably, it was the only framework to achieve high accuracy on both data sets simultaneously.
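
To make the idea of a self-attention augmented convolution concrete, the sketch below combines a standard convolution with multi-head self-attention over spatial positions and concatenates the two feature maps, in the spirit of attention-augmented convolutions. This is an illustrative PyTorch sketch only: the class name, channel splits, and hyper-parameters are assumptions for demonstration and are not the exact ARes-14 layer described in the paper.

```python
import torch
import torch.nn as nn


class AttentionAugmentedConv(nn.Module):
    """Convolution whose output channels are split between a standard
    convolution and multi-head self-attention over spatial positions.
    Names and hyper-parameters are illustrative, not those of ARes-14."""

    def __init__(self, in_ch, out_ch, kernel_size=3, dk=40, dv=40, heads=4):
        super().__init__()
        assert dk % heads == 0 and dv % heads == 0
        self.dk, self.dv, self.heads = dk, dv, heads
        # Standard convolution produces the remaining output channels.
        self.conv = nn.Conv2d(in_ch, out_ch - dv, kernel_size,
                              padding=kernel_size // 2)
        # 1x1 convolution computes queries, keys and values jointly.
        self.qkv = nn.Conv2d(in_ch, 2 * dk + dv, kernel_size=1)
        # 1x1 convolution mixes the attention heads back together.
        self.attn_out = nn.Conv2d(dv, dv, kernel_size=1)

    def forward(self, x):
        b, _, h, w = x.shape
        conv_out = self.conv(x)

        q, k, v = torch.split(self.qkv(x), [self.dk, self.dk, self.dv], dim=1)

        # Reshape to (batch, heads, positions, channels-per-head).
        def split_heads(t, d):
            return t.view(b, self.heads, d // self.heads, h * w).transpose(2, 3)

        q = split_heads(q, self.dk) * (self.dk // self.heads) ** -0.5
        k = split_heads(k, self.dk)
        v = split_heads(v, self.dv)

        # Scaled dot-product attention across all spatial positions,
        # which lets the layer relate distant face regions directly.
        attn = torch.softmax(q @ k.transpose(-2, -1), dim=-1)
        out = (attn @ v).transpose(2, 3).reshape(b, self.dv, h, w)
        out = self.attn_out(out)

        # Concatenate convolutional and attentional feature maps.
        return torch.cat([conv_out, out], dim=1)


if __name__ == "__main__":
    layer = AttentionAugmentedConv(in_ch=64, out_ch=128)
    feats = layer(torch.randn(2, 64, 28, 28))
    print(feats.shape)  # torch.Size([2, 128, 28, 28])
```

In a twin-backbone setup such as the one described in the abstract, layers like this would replace some of the plain convolutions in each ResNet branch (one for the face image, one for the eye region) before the resulting features are fused for gaze regression; the exact placement and channel budget used by ARes-14 are not specified here.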

Keywords:
Gaze estimation, Convolutional neural network, Deep learning, Artificial intelligence, Computer vision, Machine learning, Pattern recognition

Metrics

Cited by: 2
FWCI (Field-Weighted Citation Impact): 0.14
References: 42
Citation Normalized Percentile: 0.49


Topics

Gaze Tracking and Assistive Technology
Physical Sciences →  Computer Science →  Human-Computer Interaction
Visual Attention and Saliency Detection
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Advanced Computing and Algorithms
Social Sciences →  Social Sciences →  Urban Studies