REFERENCES
Anwar, S., Hwang, K., and Sung, W. (2015). Fixed point optimization of deep convolutional neural networks for object recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Ciresan, D., Meier, U., Masci, J., Gambardella, L. M., and Schmidhuber, J. (2011). Flexible, high performance convolutional neural networks for image classification. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, 2:1237–1242.
Courbariaux, M., Bengio, Y., and David, J.-P. (2015). BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems.
Courbariaux, M., Bengio, Y., and David, J.-P. (2014). Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024.
Dipert, B., Bier, J., Rowen, C., Dashwood, J., Laroche, D., Ors, A., and Thompson, M. (2016 (accessed February 20, 2017)). Deep learning for object recognition: DSP and specialized processor optimizations. http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/documents/pages/cnn-dsps.
Esser, S., Appuswamy, R., Merolla, P., Arthur, J., and Modha, D. (2015). Backpropagation for energy-efficient neuromorphic computing. In Advances in Neural Information Processing Systems.
Google (2016 (accessed February 22, 2017)). TensorFlow. https://www.tensorflow.org.
Gupta, S., Agrawal, A., Gopalakrishnan, K., and Narayanan, P. (2015). Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning.
Gysel, P. (2016). Ristretto: Hardware-oriented approximation of convolutional neural networks. arXiv preprint arXiv:1605.06402.
Gysel, P., Motamedi, M., and Ghiasi, S. (2016). Hardware-oriented approximation of convolutional neural networks. arXiv preprint arXiv:1604.03168.
Han, S., Liu, X., Mao, H., Pu, J., Pedram, A., Horowitz, M. A., and Dally, W. J. (2016a). EIE: Efficient inference engine on compressed deep neural network. arXiv preprint arXiv:1602.01528.
Han, S., Mao, H., and Dally, W. J. (2016b). Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149.
He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385.
Hinton, G. E. and Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507.
Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. (2016a). Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830.
Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. (2016b). Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061.
Hwang, K. and Sung, W. (2014). Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In IEEE Workshop on Signal Processing Systems (SiPS).
Iandola, F. N., Moskewicz, M. W., Ashraf, K., Han, S., Dally, W. J., and Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105.
Lin, D., Talathi, S., and Annapureddy, V. (2016). Fixed point quantization of deep convolutional networks. ICLR 2016.
Pietron, M., Wielgosz, M., and Wiatr, K. (2016a). Formal analysis of HTM spatial pooler performance under predefined operation conditions. In International Joint Conference on Rough Sets.
Pietron, M., Wielgosz, M., and Wiatr, K. (2016b). Parallel implementation of spatial pooler in hierarchical temporal memory. In International Conference on Agents and Artificial Intelligence.
Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. (2016). XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279.
Soudry, D., Hubara, I., and Meir, R. (2014). Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In Advances in Neural Information Processing Systems.
TensorFlow (2017 (accessed July 12, 2017)). TensorFlow quantization. http://www.tensorflow.org/performance/quantization.
Vanhoucke, V., Senior, A., and Mao, M. (2011). Improving the speed of neural networks on CPUs. In Proceedings of the Deep Learning and Unsupervised Feature Learning NIPS Workshop.
Vedaldi, A. and Lenc, K. (2015). MatConvNet – convolutional neural networks for MATLAB. In Proceedings of the ACM International Conference on Multimedia.
Wielgosz, M. and Pietron, M. (2017). Using spatial pooler of hierarchical temporal memory to classify noisy videos with predefined complexity. Neurocomputing.
Wielgosz, M., Pietron, M., and Wiatr, K. (2016). OpenCL-accelerated object classification in video streams using spatial pooler of hierarchical temporal memory. International Journal of Advanced Computer Science and Applications (IJACSA).
Li, F., Zhang, B., and Liu, B. (2016). Ternary weight networks. arXiv preprint arXiv:1605.04711.