JOURNAL ARTICLE

Linking Sparse Coding Dictionaries for Representation Learning

Abstract

Sparsity is a desirable property, as our natural environment can be described by a small number of structural primitives. Strong evidence indicates that the brain's representation is both explicit and sparse, which makes it metabolically efficient by reducing the cost of code transmission. Current machine learning practice, by contrast, is dominated by end-to-end classification pipelines. The brain has no single classification objective function optimized by back-propagation; instead, it is highly modular and learns from local information using local learning rules. In this work, we show that an unsupervised, biologically inspired sparse coding algorithm can produce a sparse representation whose classification accuracy is on par with that of standard supervised learning algorithms. Leveraging multi-modality, we show that the embedding space can be linked across multiple heterogeneous modalities. Finally, we present a sparse coding model that controls the latent space and yields a sparse, disentangled representation while maintaining high classification accuracy.
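To make the abstract's central notion concrete: sparse coding represents a signal as a linear combination of a few dictionary atoms. The sketch below is a minimal, self-contained illustration using ISTA (iterative shrinkage-thresholding) to solve the standard lasso-style sparse coding objective; it is not the paper's algorithm, and the function name, dictionary size, and regularization weight are illustrative choices, not taken from the article.

```python
import numpy as np

def sparse_code_ista(D, x, lam=0.05, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA.

    D : (n_features, n_atoms) dictionary with unit-norm columns.
    x : (n_features,) signal to encode.
    Returns the sparse code vector a.
    """
    # Step size 1/L, where L is the Lipschitz constant of the
    # gradient of the quadratic term: the largest eigenvalue of D^T D.
    L = np.linalg.norm(D, ord=2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)        # gradient of 0.5*||x - D a||^2
        z = a - grad / L                # plain gradient step
        # Soft-thresholding: the proximal operator of lam*||a||_1,
        # which zeroes out small coefficients and yields sparsity.
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

# Toy example: a signal built from 2 of 8 random unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)          # normalize atoms to unit norm
x = 1.5 * D[:, 2] - 0.8 * D[:, 5]       # ground-truth support: atoms 2 and 5
a = sparse_code_ista(D, x)
```

After running, `a` is a sparse vector whose largest-magnitude entries sit on the atoms that generated `x`; such codes are what the article's pipeline would feed to a downstream classifier.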

Keywords:
Neural coding; Sparse approximation; Machine learning; Artificial intelligence; Feature learning; Embedding; Pattern recognition; Modular design

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.00
References: 22
Citation normalized percentile: 0.21


Topics

Neural dynamics and brain function
Life Sciences →  Neuroscience →  Cognitive Neuroscience
Blind Source Separation Techniques
Physical Sciences →  Computer Science →  Signal Processing
Fractal and DNA sequence analysis
Life Sciences →  Biochemistry, Genetics and Molecular Biology →  Molecular Biology


© 2026 ScienceGate Book Chapters — All rights reserved.