4 CONCLUSIONS
It has become commonplace in industry as well as in
academia to argue that work is set to disappear
through the impact of mass automation and the rise of
increasingly powerful AI (Ford 2015, Poitevin 2017).
The picture we have sketched in this article stands in
contrast with that view: rather than envisaging a
wholesale replacement of human work, we foresee that
a fruitful collaboration between humans and machines
will characterize the future of AI.
As argued at length in Section 2, there are good
reasons to believe that a great part of the ML and DL
industry's success is owed to sheer data volume;
however, regulatory changes, scientific evidence from
human psychology, as well as business considerations
strongly point towards an untapped market for
machines that can learn in small-data, privacy-aware
contexts.
We also need to be careful to distinguish between DL
and the overall AI landscape, which is much more
varied than many observers take it to be: as outlined
in Section 3 through a fairly general industry use case,
there are promising approaches to marrying the
inference abilities of machines with the prior
knowledge of humans.
Developing further tools for concept learning is a
major opportunity to deploy scalable AI systems for
humans and with humans: if we look at AI through
the lens of the probabilistic framework we champion,
it is easy to see, pace Joy 2001, that the future does
indeed need us.
ACKNOWLEDGEMENTS
The authors are immensely grateful for the help of the
editors and reviewers in improving and shaping the
final version of the article.
REFERENCES
Abadi, M. et al., 2015. TensorFlow: Large-Scale Machine
Learning on Heterogeneous Distributed Systems.
TensorFlow White Paper.
ACM, 2019. Fathers of the Deep Learning Revolution
Receive ACM A.M. Turing Award. Retrieved from
https://amturing.acm.org/
Brynjolfsson, E., Hu, Y. and Simester, D., 2011. Goodbye
pareto principle, hello long tail: The effect of search
costs on the concentration of product sales.
Management Science, 57(8), pp.1373-1386.
Chollet, F., 2017. Retrieved from https://twitter.com/
fchollet/status/942733414788190209?lang=en
Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung,
R., Nel, P. and Malhotra, S., 2018. Notes from the AI
frontier: Applications and value of deep learning.
McKinsey Global Institute.
Costa, T., 2014. Personalization and the rise of
individualized experiences. Forrester Research.
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K. and Fei-Fei,
L., 2009. ImageNet: A Large-Scale Hierarchical
Image Database. In CVPR09.
Ford, M., 2015. The rise of the robots: Technology and the
threat of mass unemployment. Oneworld publications.
Ghemawat, S., Gobioff, H. and Leung, S.T., 2003. The
Google file system. In Proceedings of the 19th ACM
Symposium on Operating Systems Principles, Bolton
Landing, NY.
Goodman, N. D., Tenenbaum, J. B. and The ProbMods
Contributors. 2016. Probabilistic Models of Cognition
(2nd ed.). Retrieved 2019-4-15 from https://
probmods.org/
Goodman, N. D. and Stuhlmüller, A., 2014. The Design
and Implementation of Probabilistic Programming
Languages. Retrieved from http://dippl.org
Hagstroem, M., Roggendorf, M., Saleh, R. and Sharma, J.,
2017. A smarter way to jump into data lakes.
McKinsey Quarterly.
Halevy, A., Norvig, P. and Pereira, F., 2009. The
unreasonable effectiveness of data. IEEE Intelligent
Systems, 24(2), pp. 8-12.
Hartnett, K., 2018. To Build Truly Intelligent Machines,
Teach Them Cause and Effect. Quanta Magazine.
Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S.,
Gani, A. and Ullah Khan, S., 2015. The rise of "big
data" on cloud computing: Review and open research
issues. Information Systems, 47, pp. 98-115.
doi:10.1016/j.is.2014.07.006
Henke, N., Bughin, J., Chui, M., Manyika, J., Saleh, T.,
Wiseman, B. and Sethupathy, G., 2016. The age of
analytics: Competing in a data-driven world.
McKinsey Global Institute, 4.
Hestness, J., Narang, S., Ardalani, N., Diamos, G., Jun, H.,
Kianinejad, H., Patwary, M., Ali, M., Yang, Y. and
Zhou, Y., 2017. Deep learning scaling is predictable,
empirically. arXiv preprint arXiv:1712.00409.
Hinton, G. E., Krizhevsky, A., Sutskever, I. and
Srivastava, N., 2013. System and method for
addressing overfitting in a neural network. US Patent
US9406017B2.
Hochreiter, S. and Schmidhuber, J., 1997. Long short-term
memory. Neural Computation, 9(8), pp. 1735-1780.
doi:10.1162/neco.1997.9.8.1735. PMID 9377276.
Joy, B., 2001. Why the Future Doesn't Need Us. Wired.
Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2012.
ImageNet classification with deep convolutional neural
networks. In Advances in neural information
processing systems (pp. 1097-1105).
Lake, B.M., Ullman, T.D., Tenenbaum, J.B. and Gershman,
S.J., 2017. Building machines that learn and think like
people. Behavioral and Brain Sciences, 40.