Real World Examples of Agent based Decision Support Systems for Deep Learning based on Complex Feed Forward Neural Networks

Harald R. Kisch, Claudia L. R. Motta

2017

Abstract

Nature frequently shows us phenomena that in many cases are not fully understood. To research these phenomena, we use computer-simulation approaches. This article presents a model-based approach for simulating human brain functions in order to create recurrent machine-learning map fractals that enable the investigation of any previously trained problem. On top of a neural network in which each neuron is modeled with biological capabilities such as collection, association, operation, definition and transformation, a thinking model for imagination and reasoning is exemplified in this research. This research illustrates the technical complexity of our dual thinking process in a mathematical and computational way and describes two examples in which an adaptive, self-regulating learning process was applied to real-world problems. In conclusion, this research exemplifies how a previously researched conceptual model (the SLA process) can be used to make progress in simulating the complex systematics of human thinking processes, and it gives an overview of the next major steps toward using artificial intelligence to simulate natural learning.



Paper Citation


in Harvard Style

Kisch H. and Motta C. (2017). Real World Examples of Agent based Decision Support Systems for Deep Learning based on Complex Feed Forward Neural Networks. In Proceedings of the 2nd International Conference on Complexity, Future Information Systems and Risk - Volume 1: COMPLEXIS, ISBN 978-989-758-244-8, pages 94-101. DOI: 10.5220/0006307000940101


in Bibtex Style

@conference{complexis17,
author={Harald R. Kisch and Claudia L. R. Motta},
title={Real World Examples of Agent based Decision Support Systems for Deep Learning based on Complex Feed Forward Neural Networks},
booktitle={Proceedings of the 2nd International Conference on Complexity, Future Information Systems and Risk - Volume 1: COMPLEXIS},
year={2017},
pages={94-101},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006307000940101},
isbn={978-989-758-244-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 2nd International Conference on Complexity, Future Information Systems and Risk - Volume 1: COMPLEXIS
TI - Real World Examples of Agent based Decision Support Systems for Deep Learning based on Complex Feed Forward Neural Networks
SN - 978-989-758-244-8
AU - Kisch H.
AU - Motta C.
PY - 2017
SP - 94
EP - 101
DO - 10.5220/0006307000940101
ER -