JOURNAL ARTICLE

Eye Semantic Segmentation Using Ensemble of Deep Convolutional Neural Networks

Abstract

Eye semantic segmentation is a fundamental task in many applications, such as identification and medical imaging. In this study, three encoder-decoder architectures based on convolutional neural networks are applied to segment the eye. A simple encoder-decoder architecture can generate only coarse segmentation results, whereas fine details such as eyelashes can be captured by the U-Net and SegNet architectures; however, their overall results are sometimes worse than those of the simple architecture. To resolve this problem, we introduce a deep convolutional neural network-based ensemble technique for eye segmentation. The results from these architectures are combined to yield good results at both the coarse and fine levels. In the proposed technique, a trainable mask function is applied to achieve an optimal ensemble of coarse-level and fine-level results. Our dataset comprises 64 eye images captured in different environments, with different camera settings, people, and eye conditions. Experimental results show that our ensemble technique improves on the results of the conventional architectures, reaching an average accuracy of 96.33% for three-class segmentation.
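The mask-based ensembling described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the blending rule (a per-pixel convex combination of the two probability maps, followed by an argmax), the class count, and the toy image size are all illustrative assumptions; in the proposed method the mask itself would be produced by a trained network.

```python
# Minimal sketch of mask-based ensembling of two segmentation outputs.
# Assumption (not from the paper): the coarse and fine models each emit
# per-pixel class probabilities, blended per pixel with a mask weight
# m in [0, 1]:  ensemble = m * fine + (1 - m) * coarse,  then argmax.

def ensemble_segmentation(coarse, fine, mask):
    """Blend two per-pixel class-probability maps with a mask.

    coarse, fine: H x W x C nested lists of class probabilities.
    mask:         H x W list of weights in [0, 1] favoring the fine map.
    Returns an H x W label map (argmax over the blended probabilities).
    """
    labels = []
    for row_c, row_f, row_m in zip(coarse, fine, mask):
        out_row = []
        for probs_c, probs_f, m in zip(row_c, row_f, row_m):
            blended = [m * pf + (1.0 - m) * pc
                       for pc, pf in zip(probs_c, probs_f)]
            # Pick the class with the highest blended probability.
            out_row.append(max(range(len(blended)), key=blended.__getitem__))
        labels.append(out_row)
    return labels

# Toy 1x2 "image", 3 classes (e.g. skin / sclera / iris).
coarse = [[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]]
fine   = [[[0.2, 0.1, 0.7], [0.1, 0.7, 0.2]]]
mask   = [[1.0, 0.0]]  # trust the fine model at pixel 0, the coarse at pixel 1
print(ensemble_segmentation(coarse, fine, mask))  # [[2, 1]]
```

A mask of all ones reduces the ensemble to the fine model and all zeros to the coarse one; training the mask lets the ensemble pick whichever source is more reliable at each pixel.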

Keywords:
Computer science, Segmentation, Artificial intelligence, Convolutional neural network, Encoder, Pattern recognition (psychology), Image segmentation, Deep learning, Computer vision, Scale-space segmentation

Metrics

Cited by: 2
FWCI (Field-Weighted Citation Impact): 0.00
References: 18
Citation Normalized Percentile: 0.25
Topics

Retinal Imaging and Analysis
Health Sciences →  Medicine →  Radiology, Nuclear Medicine and Imaging
Gaze Tracking and Assistive Technology
Physical Sciences →  Computer Science →  Human-Computer Interaction
Visual Attention and Saliency Detection
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition