DISSERTATION

Integrating top-down and bottom-up visual attention

Navalpakkam, Vidhya (author)

Year: 2015 | University: University of Southern California (Digital Library)

Abstract

Visual attention -- the brain's mechanism for selecting important visual information -- is influenced by a combination of bottom-up factors (sudden, unexpected visual events that differ spatio-temporally from their surroundings) and top-down (goal-relevant) factors. Although both are crucial for real-world applications such as robot navigation and visual surveillance, most existing models are either purely bottom-up or purely top-down. In this thesis, we present a new model that integrates top-down and bottom-up attention. We begin with a broad perspective on how a task specification (e.g., "who is doing what to whom") influences attention during scene understanding. We propose and partially implement a general-purpose architecture illustrating how different bottom-up and top-down components of visual processing, such as the gist, the saliency map, object detection and recognition modules, working memory, long-term memory, and the task-relevance map, may interact and interface with one another to guide attention to salient and relevant scene locations. Next, we investigate the specifics of how bottom-up and top-down influences may be integrated while searching for a target against a distracting background. We probe the granularity of information integration within feature dimensions such as color, size, and luminance. Our eye-tracking experiments show that bottom-up responses encoding feature dimensions can be modulated not by just one but by several top-down gain control signals, revealing a high granularity of integration. Finally, we investigate the computational principles underlying this integration. We derive a formal theory of optimal integration of bottom-up salience with top-down knowledge about target and distractor features, such that the target's salience relative to the distractors is maximized, thereby accelerating search.
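The final result described above (choosing top-down gains so that the target's salience relative to the distractors is maximized) can be illustrated with a small numerical sketch. The feature values, the specific gain rule, and the function names below are illustrative assumptions for this sketch, not taken from the dissertation itself:

```python
import numpy as np

# Toy bottom-up responses of three feature channels (e.g. color, size,
# luminance) to the target and to the average distractor. Values are
# made up for illustration.
s_target = np.array([0.9, 0.5, 0.4])      # target drives channel 0 strongly
s_distractor = np.array([0.3, 0.5, 0.6])  # distractors dominate channel 2

def salience(responses, gains):
    """Salience as a gain-weighted sum of feature-channel responses."""
    return float(np.dot(gains, responses))

def snr(gains):
    """Target salience relative to distractor salience -- the quantity
    the optimal-integration idea maximizes."""
    return salience(s_target, gains) / salience(s_distractor, gains)

# Purely bottom-up baseline: uniform gains, no top-down knowledge.
uniform = np.ones(3)

# Top-down gains from knowledge of target vs. distractor features: boost
# channels where the target out-responds the distractors, suppress the
# rest (one simple choice consistent with maximizing relative salience).
topdown = s_target / s_distractor
topdown *= 3 / topdown.sum()  # keep total gain comparable to baseline

print(f"relative target salience, uniform gains:  {snr(uniform):.3f}")
print(f"relative target salience, top-down gains: {snr(topdown):.3f}")
```

With the uniform gains the target is only slightly more salient than the distractors; reweighting the channels by how well each one discriminates target from distractors raises the target's relative salience, which is the mechanism the theory credits for faster search.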

Keywords:
Salience (neuroscience), Gaze-contingency paradigm, Salient, Granularity, Visual search, Eye tracking, Feature (linguistics), Visualization, Eye movement, Visual attention

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 0

Topics

Visual Attention and Saliency Detection (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Gaze Tracking and Assistive Technology (Physical Sciences → Computer Science → Human-Computer Interaction)
Tactile and Sensory Interactions (Life Sciences → Neuroscience → Cognitive Neuroscience)

Related Documents

BOOK-CHAPTER

Visual-Tactile Bottom-Up and Top-Down Attention

Qiong Wu, Chunlin Li, Satoshi Takahashi, Jinglong Wu

Series: Advances in Bioinformatics and Biomedical Engineering | Year: 2012 | Pages: 183-191
JOURNAL ARTICLE

Models of bottom-up and top-down visual attention

Itti, Laurent

Journal: CaltechTHESIS (California Institute of Technology) | Year: 2005
JOURNAL ARTICLE

A Top–Down and Bottom–Up Component of Visual Attention

Gerald S. Wasserman, Amanda R. Bolbecker, Jia Li, Corrinne C. M. Lim-Kessler

Journal: Cognitive Computation | Year: 2010 | Vol: 3 (1) | Pages: 294-302