JOURNAL ARTICLE

OvA-INN: Continual Learning with Invertible Neural Networks

Abstract

In Continual Learning, the objective is to learn several tasks one after the other without access to the data of previous tasks. Several solutions have been proposed to tackle this problem, but they usually assume that the user knows which task to perform on a particular sample at test time, or they rely on storing small samples of previous data; moreover, most of them suffer from a substantial drop in accuracy when updated with batches of only one class at a time. In this article, we propose a new method, OvA-INN, which is able to learn one class at a time without storing any previous data. To achieve this, we train a dedicated Invertible Neural Network for each class to extract the features relevant for computing the likelihood of that class. At test time, we predict the class of a sample by identifying the network that reports the highest likelihood. With this method, we show that we can take advantage of pretrained models by stacking an Invertible Network on top of a feature extractor. In this way, we are able to outperform state-of-the-art approaches that rely on feature learning for Continual Learning on the MNIST and CIFAR-100 datasets. In our experiments, we reach 72% accuracy on CIFAR-100 after training our model one class at a time.
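As a rough illustration of the One-vs-All scoring rule described in the abstract, the sketch below fits one density model per class, one class at a time and without revisiting earlier data, and classifies a sample by the model assigning the highest log-likelihood. For simplicity it replaces each class's Invertible Neural Network with a closed-form affine flow (equivalent to a Gaussian model); `AffineFlow`, `predict`, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class AffineFlow:
    """Per-class invertible affine map z = A(x - mu); a stand-in for an INN.
    Log-likelihood via change of variables: log N(z; 0, I) + log|det A|."""
    def fit(self, X):
        self.mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        # A = cov^{-1/2} whitens the class data (closed-form "training")
        vals, vecs = np.linalg.eigh(cov)
        self.A = vecs @ np.diag(vals ** -0.5) @ vecs.T
        self.logdet = -0.5 * np.log(vals).sum()  # log|det A|
        return self

    def log_likelihood(self, X):
        Z = (X - self.mu) @ self.A.T
        d = X.shape[1]
        log_base = -0.5 * (Z ** 2).sum(axis=1) - 0.5 * d * np.log(2 * np.pi)
        return log_base + self.logdet

def predict(flows, X):
    # One-vs-All rule: pick the class whose network reports the
    # highest likelihood for the sample.
    scores = np.stack([f.log_likelihood(X) for f in flows.values()])
    classes = list(flows.keys())
    return [classes[i] for i in scores.argmax(axis=0)]

rng = np.random.default_rng(0)
# Two toy "classes" in a 4-d feature space, learned one at a time,
# with no replay of earlier classes' data.
flows = {}
flows[0] = AffineFlow().fit(rng.normal(0.0, 1.0, size=(200, 4)))
flows[1] = AffineFlow().fit(rng.normal(5.0, 1.0, size=(200, 4)))
print(predict(flows, np.array([[0.1, 0.0, -0.2, 0.1],
                               [5.2, 4.9, 5.1, 5.0]])))  # prints [0, 1]
```

In the paper's setting, the input `X` would be features from a pretrained extractor and each `AffineFlow` would be a trained Invertible Neural Network; the argmax-over-likelihoods decision rule is the same.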

Keywords:
MNIST database, Computer science, Artificial intelligence, Class, Machine learning, Artificial neural network, Feature extractor, Feature, Test data, Sample

Metrics

Cited By: 10
FWCI (Field Weighted Citation Impact): 1.17
Refs: 67
Citation Normalized Percentile: 0.82

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
COVID-19 diagnosis using AI
Health Sciences →  Medicine →  Radiology, Nuclear Medicine and Imaging

Related Documents

JOURNAL ARTICLE

Continual learning with invertible generative models

Jary Pomponi, Simone Scardapane, Aurelio Uncini

Journal:   Neural Networks Year: 2023 Vol: 164 Pages: 606-616
BOOK-CHAPTER

Continual Robot Learning with Constructive Neural Networks

Axel Großmann, Riccardo Poli

Book: Lecture Notes in Computer Science Year: 1998 Pages: 95-108
JOURNAL ARTICLE

Continual Learning with Columnar Spiking Neural Networks

D. Larionov, N. Bazenkov, M. Kiselev

Journal:   Optical Memory and Neural Networks Year: 2025 Vol: 34 (S1) Pages: S58-S71