[Figure 8 plot: "Euclidean Distance"; y-axis: Distance (0–80000), x-axis: Layers (0–5); series: Near_Min, Medium_Min, Far_Min, Near_Max, Medium_Max, Far_Max.]
Figure 8: Euclidean distance among representative points (class 2). The first entry on the x-axis is the input, followed by the five layers of AlexNet. Near_Min and Near_Max are the minimum and maximum distance values from the near region; the minimum and maximum values for the medium and far regions are shown analogously.
[Figure 9 plot: "Euclidean Distance"; y-axis: Distance (0–80000), x-axis: Layers (0–5); series: Near_Min, Medium_Min, Far_Min, Near_Max, Medium_Max, Far_Max.]
Figure 9: Euclidean distance among representative points (between classes). The first entry on the x-axis is the input, followed by the five layers of AlexNet. Near_Min and Near_Max are the minimum and maximum distance values from the near region; the minimum and maximum values for the medium and far regions are shown analogously.
representative points from each class and observe the effect of the nonlinear transformations on the input data by measuring the change in angle and distance between these points. We observed that same-class data are bunched together and different-class data remain well separated, despite the fact that all data points move closer together irrespective of class.
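The layer-wise measurement described above can be reproduced in outline with a short script. The following is a minimal sketch, assuming PyTorch and torchvision's pretrained AlexNet; the representative points are stood in for by random tensors (in the paper they are ImageNet images), and names such as layer_representations and distance_and_angle are illustrative, not taken from the authors' code.

# Minimal sketch (not the authors' code): track Euclidean distance and angle
# between two representative points at the input and after each of AlexNet's
# five convolutional blocks, assuming PyTorch and torchvision are installed.
import torch
import torch.nn.functional as F
import torchvision.models as models

alexnet = models.alexnet(pretrained=True).eval()

def layer_representations(x):
    """Flattened representation of x at the input and after each conv block
    (recorded after the ReLU that follows each of the five convolutions)."""
    reps = [x.flatten(1)]
    out = x
    for layer in alexnet.features:
        out = layer(out)
        if isinstance(layer, torch.nn.ReLU):
            reps.append(out.flatten(1))
    return reps

def distance_and_angle(a, b):
    """Euclidean distance and angle (degrees) between two flattened vectors."""
    dist = torch.norm(a - b)
    cos = F.cosine_similarity(a, b, dim=0).clamp(-1.0, 1.0)
    return dist.item(), torch.rad2deg(torch.acos(cos)).item()

# Stand-ins for two representative points; in the paper these would be
# preprocessed ImageNet images (e.g. a near and a far point of a class).
point_a = torch.randn(1, 3, 224, 224)
point_b = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    for i, (a, b) in enumerate(zip(layer_representations(point_a),
                                   layer_representations(point_b))):
        d, theta = distance_and_angle(a.squeeze(0), b.squeeze(0))
        print(f"stage {i}: distance = {d:.1f}, angle = {theta:.1f} deg")

The six printed stages (0 through 5) correspond to the x-axis of Figures 8 and 9: the input followed by the five convolutional layers of AlexNet.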