JOURNAL ARTICLE

Modeling visual search in naturalistic virtual reality environments

Angela Radulescu, Bas van Opheusden, Fred Callaway, Thomas L. Griffiths, James Hillis

Year: 2020 · Journal: Journal of Vision · Vol: 20 (11) · Pages: 1401 · Publisher: Association for Research in Vision and Ophthalmology

Abstract

Visual search is a ubiquitous human behavior and a canonical example of selectively sampling sensory information to attain a goal. Previous research has studied optimality in visual search with artificial laboratory tasks (Najemnik & Geisler, 2005; Yang et al., 2016). To understand how people search in naturalistic environments, we conducted a study of visual search in virtual reality. Participants (N=21) viewed scenes generated with the Unity game engine through a head-mounted display equipped with an eye-tracker. On each of 300 trials, participants were shown a target object and teleported into a cluttered virtual room, where they searched for the object from a fixed viewpoint. They had 8 seconds to identify the target among 60-100 distractors. Participants found the target on 76% of trials, with a median response time on successful trials of 2.89s (IQR: 1.99-4.44s). To understand what features drive people's search, we annotated gaze samples with semantic scene information such as the identity, shape, color, and texture of the object at the center of gaze. Concretely, we used each object's asset (3D mesh and texture) to compute low-dimensional shape and color representations. We found that people's gaze is primarily directed at task-relevant objects (i.e., targets or distractors), and that the distractors people look at are close in representational space to the target. Furthermore, this distance decreased over time, suggesting that representational similarity guides eye movements. We discuss these results in the context of a meta-level Markov Decision Process model (Callaway et al., 2018), which frames visual search as optimal information sampling under computational constraints.
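The analysis described above compares fixated distractors to the target in a low-dimensional feature space. As an illustrative sketch (not the authors' actual pipeline), the per-fixation distance computation might look like the following, where the feature vectors, their dimensionality, and all values are hypothetical:

```python
import numpy as np

def representational_distance(gaze_features, target_features):
    """Euclidean distance in feature space between each fixated object
    and the search target. Feature vectors (e.g., concatenated
    low-dimensional shape and color codes) are assumed inputs."""
    gaze = np.asarray(gaze_features, dtype=float)
    target = np.asarray(target_features, dtype=float)
    return np.linalg.norm(gaze - target, axis=1)

# Toy example: three successive fixations in a hypothetical 2-D
# shape/color feature space, drifting toward the target over the trial.
fixations = [[4.0, 3.0], [2.0, 1.0], [0.5, 0.0]]
target = [0.0, 0.0]
d = representational_distance(fixations, target)
print(d)  # distances shrink across fixations: [5.0, ~2.236, 0.5]
```

A decreasing trend in these distances across fixation order, as reported in the abstract, would indicate that gaze is increasingly drawn to target-like distractors as the trial unfolds.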

Keywords:
Visual search, Gaze, Computer science, Computer vision, Object, Artificial intelligence, Context, Virtual reality, Task, Human-computer interaction

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.12

Topics

Image Retrieval and Classification Techniques
Advanced Image and Video Retrieval Techniques
