Abstract

Event cameras differ from conventional frame-based cameras in that each pixel responds independently and asynchronously to brightness changes. Instead of absolute brightness measurements reported as entire frames at regular time intervals, data from event cameras come as a stream of spatially and temporally sparse brightness-change events. Event cameras have several characteristics favorable to maritime computer vision tasks, including high dynamic range and high temporal resolution. In this work, we apply Asynet, a sparse convolutional neural network-based object detection model, to maritime event data sets we collected in the field. To address the limited size of our data set, we propose fine-tuning from weights pretrained on the Neuromorphic Caltech101 (N-Caltech101) data set and then applying a combination of augmentation techniques drawn from traditional image-based computer vision along with event-specific augmentations. Empirical findings show that simple image-based augmentation strategies are enough to significantly boost the performance of the Asynet model.
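To make the two augmentation families concrete, the sketch below shows one image-style augmentation (horizontal flip) and one event-specific augmentation (timestamp jitter) applied to a generic event stream. This is an illustrative example, not the paper's implementation: the (x, y, timestamp, polarity) row layout, the sensor resolution, and the jitter scale are all assumptions.

```python
import numpy as np

# Assumed encoding: events as an (N, 4) float array of (x, y, t, polarity)
# rows, a common representation for event-camera streams.
WIDTH, HEIGHT = 304, 240  # assumed sensor resolution, for illustration only

def hflip_events(events, width=WIDTH):
    """Image-style augmentation: mirror event x-coordinates horizontally."""
    out = events.copy()
    out[:, 0] = width - 1 - out[:, 0]
    return out

def jitter_timestamps(events, scale=0.05, rng=None):
    """Event-specific augmentation: add Gaussian noise to timestamps
    (relative to the stream's time span), then re-sort so the stream
    remains temporally ordered."""
    rng = np.random.default_rng() if rng is None else rng
    out = events.copy()
    span = out[:, 2].max() - out[:, 2].min()
    out[:, 2] += rng.normal(0.0, scale * span, size=len(out))
    return out[np.argsort(out[:, 2], kind="stable")]
```

The key difference from frame-based augmentation is that event streams carry a time axis, so transforms must preserve temporal ordering, which is why the jitter step re-sorts the stream.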

Keywords:
Event cameras, Object detection, Convolutional neural networks, Neuromorphic engineering, Data augmentation, Maritime computer vision
