Abstract

Convolutional Neural Networks (CNNs) are inherently equivariant under translations; however, they lack an equivalent built-in mechanism for handling other transformations such as rotations. Existing solutions require redesigning standard networks with filters mapped from combinations of predefined bases involving complex analytical functions. Such formulations are hard to implement, and the restrictions imposed by the choice of basis can lead to model weights that are sub-optimal for the primary deep learning task (e.g., classification). We propose the Implicitly Equivariant Network (IEN), which induces approximate equivariance in the different layers of a standard CNN by optimizing a multi-objective loss function. We show for ResNet models on Rot-MNIST and Rot-TinyImageNet that, even with its simple formulation, IEN performs on par with or better than steerable networks. IEN also facilitates the construction of heterogeneous filter groups, allowing the number of channels in CNNs to be reduced by over 30%. Further, we demonstrate that, for the hard problem of visual object tracking, IEN outperforms the state-of-the-art rotation-equivariant tracking method while providing faster inference.
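The page does not reproduce the IEN objective itself. Purely as an illustration of the general idea of a multi-objective loss that encourages equivariance, the sketch below shows one way a rotation-equivariance penalty could be combined with a task loss in PyTorch. All names here (`features_fn`, `model.features`, `lam`), the MSE form of the penalty, and the single 90-degree rotation are assumptions for illustration, not the paper's actual formulation.

```python
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def equivariance_loss(features_fn, x, angle=90.0):
    """Penalize the gap between 'rotate then embed' and 'embed then rotate'.

    features_fn : maps an image batch (N, C, H, W) to an intermediate
                  CNN feature map. 90-degree steps keep TF.rotate exact
                  (no interpolation artifacts).
    """
    f_of_rot = features_fn(TF.rotate(x, angle))   # features of the rotated input
    rot_of_f = TF.rotate(features_fn(x), angle)   # rotated features of the input
    return F.mse_loss(f_of_rot, rot_of_f)

def total_loss(model, x, y, lam=0.1):
    # Multi-objective training loss: the primary task loss plus a weighted
    # equivariance penalty. `model.features` and `lam` are hypothetical names.
    logits = model(x)
    return F.cross_entropy(logits, y) + lam * equivariance_loss(model.features, x)
```

Per the abstract, IEN induces approximate equivariance in multiple layers of the network; the single-layer, single-angle version above is only meant to convey the mechanism of trading off an equivariance penalty against the primary objective.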

Keywords:
Equivariant map, Convolutional neural network, Rotation, MNIST, Object tracking, Basis (linear algebra), Inference, Artificial neural network, Pattern recognition

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 32
Citation Normalized Percentile: 0.04

Topics

Video Surveillance and Tracking Methods
Human Pose and Action Recognition
Advanced Vision and Imaging
(all under Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)