JOURNAL ARTICLE

EEG-Driven Image Reconstruction with Saliency-Guided Diffusion Models

Abstract

Existing EEG-driven image reconstruction methods often overlook spatial attention mechanisms, limiting fidelity and semantic coherence. To address this, we propose a dual-conditioning framework that combines EEG embeddings with spatial saliency maps to enhance image generation. Our approach leverages the Adaptive Thinking Mapper (ATM) for EEG feature extraction and fine-tunes Stable Diffusion 2.1 via Low-Rank Adaptation (LoRA) to align neural signals with visual semantics, while a ControlNet branch conditions generation on saliency maps for spatial control. Evaluated on THINGS-EEG, our method significantly improves both low- and high-level image features over existing approaches while aligning strongly with human visual attention. These results demonstrate that attentional priors resolve EEG ambiguities, enabling high-fidelity reconstructions with applications in medical diagnostics and neuroadaptive interfaces, and advancing neural decoding through efficient adaptation of pre-trained diffusion models.
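The adaptation strategy named in the abstract, LoRA, fine-tunes a frozen pretrained layer by adding a trainable low-rank update rather than modifying the original weights. The sketch below illustrates the core idea in NumPy; the dimensions, rank, and scaling values are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 768, 768   # illustrative size of a frozen attention projection
rank, alpha = 8, 16      # hypothetical LoRA rank and scaling factor

W = rng.standard_normal((d_in, d_out))        # frozen pretrained weight
A = rng.standard_normal((d_in, rank)) * 0.01  # trainable down-projection
B = np.zeros((rank, d_out))                   # trainable up-projection, zero-init

def lora_forward(x):
    # Frozen path plus scaled low-rank update; only A and B receive gradients.
    return x @ W + (alpha / rank) * (x @ A @ B)

x = rng.standard_normal((4, d_in))            # batch of 4 token embeddings
y = lora_forward(x)

# Because B starts at zero, the adapter contributes nothing before training,
# so the adapted layer initially reproduces the frozen layer exactly.
assert np.allclose(y, x @ W)
```

Zero-initializing the up-projection is the standard LoRA choice: it guarantees the fine-tuned model starts from the pretrained behavior, which is why the abstract can describe the method as an efficient adaptation of a pre-trained diffusion model rather than retraining it.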


© 2026 ScienceGate Book Chapters — All rights reserved.