JOURNAL ARTICLE

AUTONOMOUS CAMERA CONTROL BY NEURAL MODELS IN ROBOTIC VISION SYSTEMS

Abstract

Recently there has been growing interest in creating large-scale simulations of certain areas of the brain. The areas receiving the most focus are visual in nature, and such simulations may provide a means to compute some of the complex visual functions that have plagued AI researchers for decades, such as robust object recognition. Additionally, with the recent introduction of cheap computational hardware capable of several teraflops, real-time robotic vision systems will likely be implemented using simplified neural models based on their slower, more realistic counterparts. This paper presents a series of small neural networks that can be integrated into a neural model of the human retina to automatically control the white-balance and exposure parameters of a standard video camera, optimizing the input for the computational processing performed by the neural model. Results of a sample implementation, including a comparison with proprietary methods, are presented. One strong advantage that these integrated subnetworks hold over proprietary mechanisms is that ‘attention’ signals could be used to selectively optimize the areas of the image most relevant to the task at hand.
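The attention-driven parameter control the abstract describes can be sketched, in outline, as a feedback loop that weights the exposure error by a per-pixel attention map, so that attended regions dominate the adjustment. This is an illustrative sketch only; all function and parameter names here are assumptions, not the paper's actual neural subnetworks.

```python
# Illustrative sketch of attention-weighted auto-exposure control.
# The paper implements this with small neural subnetworks; here a simple
# proportional control loop stands in for the same idea.

def weighted_mean_luminance(image, attention):
    """Mean pixel luminance (0..1), weighted by a per-pixel attention map."""
    total = sum(p * w
                for row_p, row_w in zip(image, attention)
                for p, w in zip(row_p, row_w))
    weight = sum(w for row in attention for w in row)
    return total / weight

def update_exposure(exposure, image, attention, target=0.5, gain=0.1):
    """Nudge the exposure setting toward the target mean luminance
    of the attended regions of the image."""
    error = target - weighted_mean_luminance(image, attention)
    return exposure * (1.0 + gain * error)
```

With a uniform attention map this reduces to ordinary global auto-exposure; concentrating the attention weights on a task-relevant region reproduces the selective optimization the abstract claims as an advantage over proprietary in-camera mechanisms.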

Keywords:
Computer science; Artificial intelligence; Artificial neural network; Computer vision; Visual object recognition; Engineering

Metrics

Cited by: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 15
Citation normalized percentile: 0.05

Topics

Visual Attention and Saliency Detection
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
CCD and CMOS Imaging Sensors
Physical Sciences → Engineering → Electrical and Electronic Engineering
Visual perception and processing mechanisms
Life Sciences → Neuroscience → Cognitive Neuroscience