Sparsity is a desirable property, as our natural environment can be described by a small number of structural primitives. Strong evidence suggests that the brain's representations are both explicit and sparse, which makes them metabolically efficient by reducing the cost of signal transmission. In contrast, standard machine learning practice favors end-to-end classification pipelines optimized by back-propagation. The brain has no single classification objective function optimized in this way; instead, it is highly modular and learns from local information using local learning rules. In this work, we show that an unsupervised, biologically inspired sparse coding algorithm can produce a sparse representation whose classification accuracy is on par with that of standard supervised learning algorithms. We further leverage multi-modality to link the embedding space to multiple heterogeneous modalities. Finally, we demonstrate a sparse coding model that controls the latent space and yields a sparse, disentangled representation while maintaining high classification accuracy.
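The two-stage idea described above, an unsupervised sparse representation followed by a simple supervised readout, can be sketched as follows. This is an illustrative example, not the paper's algorithm: it uses scikit-learn's L1-penalized dictionary learning on the digits dataset as a stand-in for the biologically inspired sparse coding model, and a logistic regression as the classifier.

```python
# Illustrative sketch (assumed stand-in, not the paper's method):
# learn a sparse dictionary-based representation without labels,
# then train a simple linear classifier on the sparse codes.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unsupervised stage: sparse coding with an L1 penalty.
dico = MiniBatchDictionaryLearning(
    n_components=64,
    alpha=0.5,
    transform_algorithm="lasso_lars",
    transform_alpha=0.5,
    random_state=0,
)
Z_tr = dico.fit(X_tr).transform(X_tr)  # sparse codes for training images
Z_te = dico.transform(X_te)

# Fraction of coefficients that are exactly zero (the "sparsity" of the code).
sparsity = float(np.mean(Z_tr == 0))

# Supervised stage: a linear readout on top of the frozen sparse codes.
clf = LogisticRegression(max_iter=2000).fit(Z_tr, y_tr)
acc = clf.score(Z_te, y_te)
```

Raising `transform_alpha` makes the codes sparser at the cost of reconstruction fidelity; the claim in the abstract is that a well-chosen sparse code loses little classification accuracy relative to end-to-end training.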