First, we experimented under the assumption that the unlabeled data are in-class, as in conventional SSL problems. Then, we examined how SST performs on out-of-class unlabeled data.
For the conventional SSL problems, we achieved competitive results on several datasets, and our method can be combined with conventional algorithms to improve performance. Whether the accuracy of SST saturates depends on the dataset; nonetheless, SST showed performance improvements as the amount of data increased. In addition, the results of the experiments combining SST with other algorithms indicate further potential for performance gains.
For the new SSL problems, SST showed no performance degradation even when the model was trained on in-class data together with out-of-class unlabeled data. When the threshold of the selection network is lowered in the new SSL problem, performance degrades. However, the output of the selection network follows different trends for in-class and out-of-class samples. By setting a threshold that excludes out-of-class data, SST prevented out-of-class samples from being added to the new training dataset. This implies that, in a real environment, erroneous samples in the unlabeled data can be kept out of the training set.
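The thresholded selection step described above can be sketched as follows. This is only an illustrative sketch, not the implementation used in the paper; the names classifier, selection_net, and unlabeled_loader, as well as the 0.5 default threshold, are assumptions introduced here for exposition.

import torch

@torch.no_grad()
def build_pseudo_labeled_set(classifier, selection_net, unlabeled_loader, threshold=0.5):
    # Hypothetical sketch: keep an unlabeled sample only when the selection
    # network's in-class confidence exceeds the threshold, so out-of-class
    # samples are not added to the new training set.
    selected_inputs, pseudo_labels = [], []
    for x in unlabeled_loader:
        logits = classifier(x)                   # class predictions for pseudo-labels
        score = torch.sigmoid(selection_net(x))  # in-class confidence, shape (N, 1)
        keep = score.squeeze(1) > threshold      # boolean mask of accepted samples
        if keep.any():
            selected_inputs.append(x[keep])
            pseudo_labels.append(logits[keep].argmax(dim=1))
    if not selected_inputs:
        return None
    return torch.cat(selected_inputs), torch.cat(pseudo_labels)

With a sufficiently high threshold, the returned set contains only samples the selection network judges to be in-class, which mirrors the behavior reported above for out-of-class unlabeled data.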
ACKNOWLEDGEMENTS
This work was supported by an IITP grant funded by the Korea government (MSIT) (No. 2019-0-01367).