and augmentation anchoring. In International Conference on Learning Representations.
Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., and Raffel, C. A. (2019). MixMatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems.
Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q. V. (2018). AutoAugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501.
Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. V. (2020). RandAugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.
DeVries, T. and Taylor, G. W. (2017). Improved regularization of convolutional neural networks with Cutout. arXiv preprint arXiv:1708.04552.
Ghiasi, G., Lin, T.-Y., and Le, Q. V. (2018). DropBlock: A regularization method for convolutional networks. In Advances in Neural Information Processing Systems.
Ghorban, F., Hasan, N., Velten, J., and Kummert, A. (2021). Improving FM-GAN through Mixup manifold regularization. In International Symposium on Circuits and Systems.
Ghorban, F., Marín, J., Su, Y., Colombo, A., and Kummert, A. (2018). Aggregated channels network for real-time pedestrian detection. In International Conference on Machine Vision.
Goodfellow, I., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
Hendrycks, D., Mu, N., Cubuk, E. D., Zoph, B., Gilmer, J., and Lakshminarayanan, B. (2020). AugMix: A simple method to improve robustness and uncertainty under data shift. In International Conference on Learning Representations.
Ho, D., Liang, E., Chen, X., Stoica, I., and Abbeel, P. (2019). Population based augmentation: Efficient learning of augmentation policy schedules. In International Conference on Machine Learning.
Iscen, A., Tolias, G., Avrithis, Y., and Chum, O. (2019). Label propagation for deep semi-supervised learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Kamnitsas, K., Castro, D., Le Folgoc, L., Walker, I., Tanno, R., Rueckert, D., Glocker, B., Criminisi, A., and Nori, A. (2018). Semi-supervised learning via compact latent space clustering. In International Conference on Machine Learning.
Ke, Z., Wang, D., Yan, Q., Ren, J., and Lau, R. W. (2019). Dual student: Breaking the limits of the teacher in semi-supervised learning. In Proceedings of the IEEE International Conference on Computer Vision.
Krizhevsky, A. (2009). Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto.
Kuo, C.-W., Ma, C.-Y., Huang, J.-B., and Kira, Z. (2020). FeatMatch: Feature-based augmentation for semi-supervised learning. In European Conference on Computer Vision.
Laine, S. and Aila, T. (2016). Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242.
Li, W., Wang, Z., Li, J., Polson, J., Speier, W., and Arnold, C. W. (2019). Semi-supervised learning based on generative adversarial network: A comparison between good GAN and bad GAN approach. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.
Lim, S., Kim, I., Kim, T., Kim, C., and Kim, S. (2019). Fast AutoAugment. arXiv preprint arXiv:1905.00397.
Luo, Y., Zhu, J., Li, M., Ren, Y., and Zhang, B. (2018). Smooth neighbors on teacher graphs for semi-supervised learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Ma, Y., Mao, X., Chen, Y., and Li, Q. (2020). Mixing up real samples and adversarial samples for semi-supervised learning. In International Joint Conference on Neural Networks.
Mayer, C., Paul, M., and Timofte, R. (2021). Adversarial feature distribution alignment for semi-supervised learning. Computer Vision and Image Understanding.
Miyato, T., Maeda, S.-i., Koyama, M., and Ishii, S. (2018). Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Nair, V., Alonso, J. F., and Beltramelli, T. (2019). RealMix: Towards realistic semi-supervised deep learning algorithms. arXiv preprint arXiv:1912.08766.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. (2011). Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
Qiao, S., Shen, W., Zhang, Z., Wang, B., and Yuille, A. (2018). Deep co-training for semi-supervised image recognition. In Proceedings of the European Conference on Computer Vision.
Sohn, K., Berthelot, D., Li, C.-L., Zhang, Z., Carlini, N., Cubuk, E. D., Kurakin, A., Zhang, H., and Raffel, C. (2020). FixMatch: Simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685.
Tarvainen, A. and Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems.
Tokozume, Y., Ushiku, Y., and Harada, T. (2018). Between-class learning for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Verma, V., Lamb, A., Beckham, C., Najafi, A., Mitliagkas, I., Lopez-Paz, D., and Bengio, Y. (2019a). Manifold Mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning.
Verma, V., Lamb, A., Kannala, J., Bengio, Y., and Lopez-Paz, D. (2019b). Interpolation consistency training for semi-supervised learning. In Proceedings of the International Joint Conference on Artificial Intelligence.
CGT: Consistency Guided Training in Semi-Supervised Learning