Bartels, B. (2012). Strategies to the prediction, mitigation
and management of product obsolescence. Wiley.
Cooper, G. F. (1990). The computational complexity
of probabilistic inference using Bayesian belief net-
works. Artificial intelligence, 42(2):393–405.
Ezziane, Z. (2006). Applications of artificial intelligence in
bioinformatics: A review. Expert Systems with Appli-
cations, 30(1):2–10.
Grady, L. J. and Polimeni, J. (2010). Discrete calculus:
Applied analysis on graphs for computational science.
Springer Science & Business Media.
Graves, A. (2013). Generating sequences with recurrent
neural networks. arXiv preprint arXiv:1308.0850.
Graves, A. and Jaitly, N. (2014). Towards end-to-end
speech recognition with recurrent neural networks. In
Proceedings of the 31st International Conference on
Machine Learning (ICML-14), pages 1764–1772.
Graves, A., Liwicki, M., Fernández, S., Bertolami, R.,
Bunke, H., and Schmidhuber, J. (2009). A novel
connectionist system for unconstrained handwriting
recognition. Pattern Analysis and Machine Intelli-
gence, IEEE Transactions on, 31(5):855–868.
Graves, A., Mohamed, A.-r., and Hinton, G. (2013). Speech
recognition with deep recurrent neural networks. In
Acoustics, Speech and Signal Processing (ICASSP),
2013 IEEE International Conference on, pages 6645–
6649. IEEE.
Gregor, K., Danihelka, I., Graves, A., Rezende, D. J.,
and Wierstra, D. (2015). DRAW: A recurrent neu-
ral network for image generation. arXiv preprint
arXiv:1502.04623.
Hinton, G. E. and Sejnowski, T. J. (1986). Learning and
relearning in Boltzmann machines. Parallel distributed
processing: Explorations in the microstructure of cog-
nition, 1:282–317.
Hofmann, T. (2001). Unsupervised learning by probabilis-
tic latent semantic analysis. Machine learning, 42(1-
2):177–196.
Hopfield, J. J. (1982). Neural networks and physical sys-
tems with emergent collective computational abili-
ties. Proceedings of the national academy of sciences,
79(8):2554–2558.
Ji, L., Liu, Q., and Liao, X. (2014). On reaching group
consensus for linearly coupled multi-agent networks.
Information Sciences, 287:1–12.
Jordan, M. I. and Rumelhart, D. E. (1992). Forward models:
Supervised learning with a distal teacher. Cognitive
science, 16(3):307–354.
Kani, S. P. and Ardehali, M. (2011). Very short-term wind
speed prediction: a new artificial neural network–
Markov chain model. Energy Conversion and Man-
agement, 52(1):738–745.
Kim, Y. and Srivastava, J. (2007). Impact of social influence
in e-commerce decision making. In Proceedings of
the ninth international conference on Electronic com-
merce, pages 293–302. ACM.
Kisch, H. R. and Motta, C. L. R. (2015). Model of a neu-
ron network in human brains for learning assistance
in e-learning environments. In Proceedings of the 7th
International Conference on Computer Supported Ed-
ucation, pages 407–415.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012).
ImageNet classification with deep convolutional neural
networks. In Advances in neural information process-
ing systems, pages 1097–1105.
Le, Q. V., Jaitly, N., and Hinton, G. E. (2015). A simple
way to initialize recurrent networks of rectified linear
units. arXiv preprint arXiv:1504.00941.
Liao, X. and Ji, L. (2014). On pinning group consensus
for dynamical multi-agent networks with general con-
nected topology. Neurocomputing, 135:262–267.
Luong, M.-T., Sutskever, I., Le, Q. V., Vinyals, O., and
Zaremba, W. (2014). Addressing the rare word prob-
lem in neural machine translation. arXiv preprint
arXiv:1410.8206.
Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H.,
and Koper, R. (2011). Recommender systems in tech-
nology enhanced learning. In Recommender systems
handbook, pages 387–415. Springer.
Ochs, P., Ranftl, R., Brox, T., and Pock, T. (2016). Tech-
niques for gradient-based bilevel optimization with
non-smooth lower level problems. Journal of Math-
ematical Imaging and Vision, pages 1–20.
Poria, S., Cambria, E., Winterstein, G., and Huang, G.-B.
(2014). Sentic patterns: Dependency-based rules for
concept-level sentiment analysis. Knowledge-Based
Systems, 69:45–63.
Rosenblatt, F. (1958). The perceptron: a probabilistic model
for information storage and organization in the brain.
Psychological review, 65(6):386.
Scheffer, M., Bascompte, J., Bjordam, T. K., Carpenter,
S. R., Clarke, L., Folke, C., Marquet, P., Mazzeo, N.,
Meerhoff, M., Sala, O., et al. (2015). Dual thinking
for scientists. Ecology and Society, 20(2).
Sporns, O. and Koetter, R. (2004). Motifs in brain networks.
PLoS Biology, 2(2):e411.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I.,
and Salakhutdinov, R. (2014). Dropout: A simple way
to prevent neural networks from overfitting. The Jour-
nal of Machine Learning Research, 15(1):1929–1958.
Suhrer, S. J., Wiederstein, M., Gruber, M., and Sippl,
M. J. (2009). COPS: a novel workbench for explo-
rations in fold space. Nucleic acids research, 37(suppl
2):W539–W544.
Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Se-
quence to sequence learning with neural networks. In
Advances in neural information processing systems,
pages 3104–3112.
Real World Examples of Agent based Decision Support Systems for Deep Learning based on Complex Feed Forward Neural Networks