derived from the nature and time scale of the signal. As such, the matrix accommodates four data types: structured, panel, visual, and sequential. Each data type is matched to a particular ANN architecture within the scope of supervised learning. Future research could enlarge the criteria set of this classification approach and extend the study to unsupervised learning architectures.
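As a reading aid, the matrix can be expressed as a simple lookup from data type to architecture family. The Python sketch below is illustrative only: the names ARCHITECTURE_MATRIX and suggest_architecture, and the specific pairings (structured to MLP, panel to RNN, visual to CNN, sequential to LSTM/GRU), are assumptions made for this sketch rather than the paper's definitive assignments.

    # Minimal sketch of a data-type-to-architecture matrix as a lookup table.
    # The pairings below are illustrative assumptions, not the paper's exact
    # assignments.
    ARCHITECTURE_MATRIX = {
        "structured": "MLP",       # assumed: tabular features, no time axis
        "panel": "RNN",            # assumed: cross-sectional series over time
        "visual": "CNN",           # assumed: spatially structured signals
        "sequential": "LSTM/GRU",  # assumed: ordered, time-dependent signals
    }

    def suggest_architecture(data_type: str) -> str:
        """Return the supervised ANN family assumed to suit a data type."""
        try:
            return ARCHITECTURE_MATRIX[data_type.lower()]
        except KeyError as err:
            raise ValueError(f"unknown data type: {data_type!r}") from err

    print(suggest_architecture("visual"))  # -> CNN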
REFERENCES
Aghdam, H. H., & Heravi, E. J. (2017). Guide to Convolutional Neural Networks: A Practical Application to Traffic-Sign Detection and Classification. Cham: Springer.
Baldi, P., & Pollastri, G. (2003). The Principled Design of Large-Scale Recursive Neural Network Architectures-DAG-RNNs and the Protein Structure Prediction Problem. Journal of Machine Learning Research, 4, 575-602.
Blinkov, S. M., & Glezer, I. I. (1998). The Human Brain in Figures and Tables: A Quantitative Handbook. New York: Plenum.
Broomhead, D. S., & Lowe, D. (1988). Radial basis functions, multi-variable functional interpolation and adaptive networks. Royal Signals and Radar Establishment, Memorandum No. 4148.
Chollet, F. (2018). Deep Learning with Python. Shelter Island, NY: Manning.
Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Costa, F., Frasconi, P., Lombardo, V., & Soda, G. (2003). Towards Incremental Parsing of Natural Language Using Recursive Neural Networks. Applied Intelligence, 19(1-2), 9-25.
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211.
Frasconi, P., Gori, M., & Sperduti, A. (1998). A General Framework for Adaptive Processing of Data Structures. IEEE Transactions on Neural Networks, 9(5), 768-786.
Gers, F. A., Schmidhuber, J., & Cummins, F. (2000). Learning to Forget: Continual Prediction with LSTM. Neural Computation, 12(10), 2451-2471.
Gerven, M. van, & Bohte, S. (2018). Artificial neural networks as models of neural information processing. Frontiers in Computational Neuroscience.
Goldstein, J. R., Sobotka, T., & Jasilioniene, A. (2009). The End of "Lowest-Low" Fertility? Population and Development Review, 35(4), 663-699.
Graves, A., Liwicki, M., Fernández, S., Bertolami, R., Bunke, H., & Schmidhuber, J. (2009). A Novel Connectionist System for Improved Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5), 855-868.
Greff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R., & Schmidhuber, J. (2017). LSTM: A Search Space Odyssey. IEEE Transactions on Neural Networks and Learning Systems, 28(10), 2222-2232.
Gregory, R. L. (1987). Perception. In R. L. Gregory & O. L. Zangwill (Eds.), The Oxford Companion to the Mind (pp. 598-601). Oxford: Oxford University Press.
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
Hsieh, T.-J., Hsiao, H.-F., & Yeh, C. (2011). Forecasting stock markets using wavelet transforms and recurrent neural networks. Applied Soft Computing, 11(2), 2510-2525.
Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1), 215-243.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25, 1097-1105.
LeCun, Y. (2013). LeNet-5, convolutional neural networks.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
McCulloch, W. S., & Pitts, W. (1943). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 5, 115-133.
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386-408.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning Internal Representations by Error Propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1 (pp. 318-362). Cambridge, MA: MIT Press.
Schwenker, F., Kestler, H. A., & Palm, G. (2001). Three learning phases for radial-basis-function networks. Neural Networks, 14(4-5), 439-458.
Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A. Y., & Potts, C. (2013). Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (pp. 1631-1642).
Sporns, O. (2011). Networks of the Brain. Cambridge, MA: The MIT Press.
Vázquez, F. (2018, March 23). A "weird" introduction to Deep Learning. Retrieved April 10, 2018, from https://towardsdatascience.com/
Veen, F. van (2016, September 14). The Neural Network Zoo. Retrieved April 10, 2018, from the Asimov Institute: http://www.asimovinstitute.org/neural-network-zoo/
Werbos, P. J. (1975). Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University.
Zell, A. (1994). Simulation of Neural Networks. Addison-Wesley.