seen in Figure 12, can be used as tools to help determine a final threshold value when designing and developing watchdog-guarded networks.
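Curves of this kind can be reproduced with a short threshold sweep, as in the minimal sketch below. This is our illustration, not the authors' code: the gamma-distributed RMSE values and the 5% false-alarm budget are stand-in assumptions for the real watchdog autoencoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reconstruction RMSEs; in practice these come from running
# the trained watchdog autoencoder on held-out in-distribution and
# out-of-distribution data.
rmse_in = rng.gamma(2.0, 0.05, size=1000)   # in-distribution: small errors
rmse_out = rng.gamma(2.0, 0.15, size=1000)  # out-of-distribution: larger errors

thresholds = np.linspace(0.0, 1.0, 201)
detection = np.array([(rmse_out > t).mean() for t in thresholds])
false_alarm = np.array([(rmse_in > t).mean() for t in thresholds])

# Pick the smallest threshold whose false-alarm rate drops below 5%
# (an illustrative operating point; any point on the curve may be chosen).
ok = false_alarm < 0.05
tau = thresholds[np.argmax(ok)]
print(f"threshold = {tau:.3f}, detection rate = {detection[ok][0]:.2%}")
```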
5 CONCLUSION
An initial proof-of-concept neural network watchdog is proposed to help improve the performance of classifiers on various datasets. The approach also applies transparently to regression neural networks. The choice of RMSE threshold is ultimately determined by the desired tradeoff between detection and false alarm rates. Alternatively, the RMSE can be reported to users as a measure of how close an input lies to the in-distribution manifold in feature space defined by the watchdog autoencoder.
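A minimal sketch of this guarded-inference step follows. It is our illustration under stated assumptions: `autoencoder` and `classifier` are already-trained Keras-style models, and `tau` is a threshold chosen from a sweep like the one above.

```python
import numpy as np

def watchdog_guard(x, autoencoder, classifier, tau):
    """Gate one input through the watchdog before classifying it.

    The watchdog autoencoder reconstructs x; a large reconstruction
    RMSE indicates x lies far from the in-distribution manifold.
    Returns (label, rmse), with label None if the input is rejected.
    """
    x_hat = autoencoder.predict(x[None, ...], verbose=0)[0]
    rmse = float(np.sqrt(np.mean((x - x_hat) ** 2)))
    if rmse > tau:              # too far from the in-distribution manifold
        return None, rmse       # reject: flag as out-of-distribution
    probs = classifier.predict(x[None, ...], verbose=0)[0]
    return int(np.argmax(probs)), rmse
```

Returning the RMSE alongside the label supports the alternative use noted above: even for accepted inputs, it gives users a graded measure of closeness to the in-distribution manifold.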