Cohen, T. S., Geiger, M., and Weiler, M. (2019b). A general theory of equivariant CNNs on homogeneous spaces. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E. B., and Garnett, R., editors, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 9142–9153.
Condurache, A. P. and Mertins, A. (2012). Sparse representations and invariant sequence-feature extraction for event detection. VISAPP 2012, 1.
DeVries, T. and Taylor, G. W. (2017). Improved regularization of convolutional neural networks with cutout. CoRR, abs/1708.04552.
Diaconu, N. and Worrall, D. E. (2019a). Affine self convolution. CoRR, abs/1911.07704.
Diaconu, N. and Worrall, D. E. (2019b). Learning to convolve: A generalized weight-tying approach. In ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 1586–1595.
Esteves, C., Allen-Blanchette, C., Makadia, A., and Daniilidis, K. (2018a). Learning SO(3) equivariant representations with spherical CNNs. In ECCV 2018, Munich, Germany, September 8-14, 2018, Proceedings, Part XIII, pages 54–70.
Esteves, C., Allen-Blanchette, C., Zhou, X., and Daniilidis, K. (2018b). Polar transformer networks. In ICLR 2018.
Falkner, S., Klein, A., and Hutter, F. (2018). BOHB: Robust
and efficient hyperparameter optimization at scale. In
Proceedings of the 35th International Conference on
Machine Learning, pages 1436–1445.
Finzi, M., Welling, M., and Wilson, A. G. (2021). A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. CoRR, abs/2104.09459.
Freeman, W. T. and Adelson, E. H. (1991). The design
and use of steerable filters. IEEE Trans. Pattern Anal.
Mach. Intell., 13(9):891–906.
Fuchs, F., Worrall, D. E., Fischer, V., and Welling, M. (2020). SE(3)-transformers: 3D roto-translation equivariant attention networks. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H., editors, NeurIPS 2020, December 6-12, 2020, virtual.
Fuchs, F. B., Wagstaff, E., Dauparas, J., and Posner, I. (2021). Iterative SE(3)-transformers. CoRR, abs/2102.13419.
Ghosh, R. and Gupta, A. K. (2019). Scale steerable filters for locally scale-invariant convolutional neural networks. CoRR, abs/1906.03861.
Han, S., Pool, J., Tran, J., and Dally, W. J. (2015). Learning both weights and connections for efficient neural network. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, NeurIPS 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1135–1143.
Hutchinson, M., Lan, C. L., Zaidi, S., Dupont, E., Teh, Y. W., and Kim, H. (2020). LieTransformer: Equivariant self-attention for Lie groups. CoRR, abs/2012.10885.
Jaderberg, M., Simonyan, K., Zisserman, A., and Kavukcuoglu, K. (2015). Spatial transformer networks. In NeurIPS 2015, pages 2017–2025. Curran Associates, Inc.
Kanazawa, A., Sharma, A., and Jacobs, D. W. (2014). Locally scale-invariant convolutional neural networks. CoRR, abs/1412.5104.
Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y., editors, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Kondor, R., Lin, Z., and Trivedi, S. (2018). Clebsch-Gordan nets: a fully Fourier space spherical convolutional neural network. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pages 10138–10147.
Krizhevsky, A. (2009). Learning multiple layers of features from tiny images. Technical report.
Laptev, D., Savinov, N., Buhmann, J. M., and Pollefeys, M. (2016). TI-POOLING: transformation-invariant pooling for feature learning in convolutional neural networks. In CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 289–297.
Larochelle, H., Erhan, D., Courville, A. C., Bergstra, J.,
and Bengio, Y. (2007). An empirical evaluation of
deep architectures on problems with many factors of
variation. In ICML 2007, Corvallis, Oregon, USA,
June 20-24, 2007, pages 473–480.
LeCun, Y., Bengio, Y., and Hinton, G. E. (2015). Deep
learning. Nature, 521(7553):436–444.
Lee, N., Ajanthan, T., and Torr, P. H. S. (2019). SNIP: single-shot network pruning based on connection sensitivity. In ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Marcos, D., Kellenberger, B., Lobry, S., and Tuia, D. (2018). Scale equivariance in CNNs with vector fields. CoRR, abs/1807.11783.
Marcos, D., Volpi, M., Komodakis, N., and Tuia, D. (2017).
Rotation equivariant vector field networks. In ICCV
2017, Venice, Italy, October 22-29, 2017, pages 5058–
5067. IEEE Computer Society.
Müller, F. and Mertins, A. (2009). Invariant-integration method for robust feature extraction in speaker-independent speech recognition. In INTERSPEECH 2009, Brighton, United Kingdom, September 6-10, 2009, pages 2975–2978.
Müller, F. and Mertins, A. (2010). Invariant integration features combined with speaker-adaptation methods. In INTERSPEECH 2010, Makuhari, Chiba, Japan, September 26-30, 2010, pages 2622–2625.
Müller, F. and Mertins, A. (2011). Contextual invariant-integration features for improved speaker-independent speech recognition. Speech Communication, 53(6):830–841.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
Noether, E. (1916). Der Endlichkeitssatz der Invarianten endlicher Gruppen [The finiteness theorem for invariants of finite groups]. Mathematische Annalen, 77:89–92.
Improving the Sample-complexity of Deep Classification Networks with Invariant Integration