Authors: Mark Basting (1); Robert-Jan Bruintjes (1); Thaddäus Wiedemer (2,3); Matthias Kümmerer (2); Matthias Bethge (2,4) and Jan van Gemert (1)
Affiliations:
(1) Computer Vision Lab, Delft University of Technology, The Netherlands
(2) Bethgelab, University of Tübingen, Geschwister-Scholl-Platz, Tübingen, Germany
(3) Machine Learning, Max-Planck-Institute for Intelligent Systems, Max-Planck-Ring, Tübingen, Germany
(4) Tübingen AI Center, Maria-von-Linden-Straße, Tübingen, Germany
Keyword(s):
Convolutional Neural Networks, Scale, Scale-Equivariance, Scale Learning.
Abstract:
Objects can take up an arbitrary number of pixels in an image: objects come in different sizes, and photographs of these objects may be taken at various distances from the camera. These pixel-size variations are problematic for CNNs, which learn separate filters for scaled variants of the same object, preventing learning across scales. This is addressed by scale-equivariant approaches that share features across a set of pre-determined, fixed internal scales. These works, however, give little guidance on how to best choose the internal scales when the underlying distribution of object sizes in the dataset, the scale distribution, is unknown. In this work we investigate learning the internal scale distribution in scale-equivariant CNNs, allowing them to adapt to unknown data scale distributions. We show that our method can learn the internal scales on various data scale distributions and can adapt the internal scales of current scale-equivariant approaches.
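To make the idea of learnable internal scales concrete, the sketch below shows one way a convolution layer could share filter coefficients across a small set of trainable scales. This is not the paper's implementation: the class name LearnableScaleConv, the simple Gaussian-derivative basis, and the max-pooling over scale responses are illustrative assumptions; published scale-equivariant networks typically use richer steerable bases and a full scale-equivariant convolution structure.

```python
# Minimal sketch (assumptions noted above), PyTorch >= 1.10.
# Filter coefficients are shared across scales; the internal scales themselves
# (log_sigmas) are nn.Parameters, so gradients adapt them to the data's scale distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_basis(sigma, size=7):
    """2D Gaussian and its x/y derivatives on a fixed grid; differentiable w.r.t. sigma."""
    r = torch.arange(size, dtype=sigma.dtype, device=sigma.device) - (size - 1) / 2
    yy, xx = torch.meshgrid(r, r, indexing="ij")
    g = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.stack([g, -xx / sigma ** 2 * g, -yy / sigma ** 2 * g])  # (3, size, size)


class LearnableScaleConv(nn.Module):
    """One filter bank expressed in a continuous basis, evaluated at learnable internal scales."""

    def __init__(self, in_ch, out_ch, n_scales=3, size=7):
        super().__init__()
        # Basis coefficients shared across all internal scales.
        self.coeffs = nn.Parameter(torch.randn(out_ch, in_ch, 3) * 0.1)
        # Internal scales are trainable (log-parameterised so they stay positive).
        self.log_sigmas = nn.Parameter(torch.log(torch.linspace(1.0, 2.0, n_scales)))
        self.size = size

    def forward(self, x):
        responses = []
        for log_sigma in self.log_sigmas:
            basis = gaussian_basis(torch.exp(log_sigma), self.size)   # (3, k, k)
            w = torch.einsum("oib,bhw->oihw", self.coeffs, basis)     # filters at this scale
            responses.append(F.conv2d(x, w, padding=self.size // 2))
        # Pool responses over the scale axis (max-projection is one common choice).
        return torch.stack(responses, dim=0).max(dim=0).values


if __name__ == "__main__":
    layer = LearnableScaleConv(3, 8)
    out = layer(torch.randn(2, 3, 32, 32))
    print(out.shape)  # torch.Size([2, 8, 32, 32])
```

Because the basis is a continuous function of sigma, gradients flow back into the internal scales, which is the property that lets the layer adapt its scales to an unknown data scale distribution.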