The biological nervous system can learn complex sensory features necessary for an animal's survival from a single training sample in real time, while simultaneously processing, learning from, and reacting to novel input stimuli. Building intelligent systems capable of solving diverse tasks has always been a driving force of engineering. However, hard-coded, task-specific algorithms adapt poorly to new data and generalise poorly to different scenarios. It is in this pursuit of self-learning algorithms that we look to biology for inspiration.

The past decade has seen a surge in machine learning research, driven primarily by the success of Deep Learning. Artificial Neural Networks (ANNs) have demonstrated their ability to solve a broad range of problems across various domains. For time-sensitive tasks, intelligent systems such as unmanned vehicles and robotic systems must always be ready to react within a fraction of a second to a sudden and important stimulus. To achieve such high temporal precision, systems must sample a stream of largely uneventful environmental data at a high rate. When employed for these tasks, ANNs demand power-intensive computational resources, such as GPUs, to handle the vast quantities of data produced by high-resolution spatial and temporal sensors. Moreover, deep learning systems require sequential and synchronous operations at each layer for both inference and training, which impacts the system's overall latency.

To address this, Neuromorphic Engineering, an evolving research field, offers low-power alternatives to conventional technologies at the sensor level. Neuromorphic sensors draw inspiration from biological neural pathways, which evolved under pressure to develop fast, robust, and power-efficient information processing systems. Compared to conventional sensors, these sensors can reduce the amount of data generated by orders of magnitude.
Pairing neuromorphic sensors with compatible bio-inspired algorithms has yielded impressive results on specific learning tasks while significantly reducing power consumption compared to traditional approaches. Nevertheless, much remains to be done to achieve comparable performance and adaptability across diverse domains.

This research aims to enhance neuromorphic processing capabilities by bridging the divide between deep learning and neuromorphic systems. This thesis introduces a novel class of end-to-end, event-driven computational architectures. These architectures can learn hierarchical spatio-temporal features through an online learning method that relies neither on gradient-based error backpropagation nor on access to non-local information at each neuron. The proposed architectures embody neuromorphic principles, facilitating asynchronous and sparse event-driven processing across multiple layers. Our work introduces a foundational architecture adaptable to various modalities and learning paradigms. We then extend it to a convolutional architecture designed for event-based visual and tactile pattern recognition tasks. We also present event-driven architectures designed to perform reinforcement learning on event-based data. These architectures support end-to-end online training in various settings, including unsupervised, supervised, and reinforcement learning. In these architectures, communication among nodes is event-based, and computation is event-driven and asynchronous. The development of these event-driven neural architectures allows event-based signal processing solely when patterns of interest arise in the input data, potentially bringing continual, on-chip online learning to low-power edge devices.
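To give a rough, informal illustration of the two principles just named, event-driven computation and learning from purely local information, the sketch below implements a leaky integrate-and-fire unit that updates its state only when an input event arrives and adjusts its weights with a simple Hebbian-style rule using only quantities available at the neuron itself. The class name, constants, and the specific update rule are illustrative assumptions for exposition, not the architectures proposed in this thesis.

```python
import numpy as np


class EventDrivenNeuron:
    """Toy leaky integrate-and-fire unit with a purely local weight update.

    The unit performs no computation between events: membrane decay for the
    eventless interval is applied lazily when the next event arrives.
    """

    def __init__(self, n_inputs, threshold=1.0, tau=20.0, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 0.5, n_inputs)  # input weights
        self.w_sum = self.w.sum()                 # kept constant by renormalisation
        self.threshold = threshold                # firing threshold
        self.tau = tau                            # membrane time constant (ms)
        self.lr = lr                              # local learning rate
        self.v = 0.0                              # membrane potential
        self.t_last = 0.0                         # time of the previous event

    def on_event(self, t, source):
        """Process one input event from channel `source` at time `t`.

        Returns True if the unit emits an output event (spikes).
        """
        # Lazily decay the potential over the elapsed, eventless interval.
        self.v *= np.exp(-(t - self.t_last) / self.tau)
        self.t_last = t
        self.v += self.w[source]
        if self.v >= self.threshold:
            self.v = 0.0  # reset after the output event
            # Local update: strengthen the input that triggered the spike,
            # then renormalise the total weight. No backpropagated error or
            # non-local information is used.
            self.w[source] += self.lr
            self.w *= self.w_sum / self.w.sum()
            return True
        return False
```

A stream of timestamped events on channel 0 then drives the unit asynchronously, e.g. `neuron.on_event(t=3.2, source=0)`, with output events produced only when the input pattern accumulates past threshold.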