Authors:
Jorge López 1; Andrey Laputenko 2; Natalia Kushik 1; Nina Yevtushenko 3 and Stanislav N. Torgaev 2
Affiliations:
1 SAMOVAR, CNRS, Télécom SudParis, Université Paris-Saclay, 9 rue Charles Fourier, 91000 Évry, France
2 Department of Information Technologies, Tomsk State University, 36 Lenin street, 634050 Tomsk, Russia
3 Department of Information Technologies, Tomsk State University, 36 Lenin street, 634050 Tomsk, Russia and Ivannikov Institute for System Programming of the Russian Academy of Sciences, 25 Alexander Solzhenitsyn street, 109004 Moscow, Russia
Keyword(s):
Supervised Machine Learning, Digital Circuits, Constrained Devices, Deep Learning.
Related Ontology Subjects/Areas/Topics:
Data Communication Networking; Distributed and Mobile Software Systems; Enterprise Information Systems; Internet of Things; Parallel and High Performance Computing; Sensor Networks; Software Agents and Internet Computing; Software and Architectures; Software Engineering; Telecommunications
Abstract:
Computationally constrained devices are devices built for specific tasks that typically possess low resources / computational power. At the same time, recent advances in machine learning, e.g., deep learning or hierarchical/cascade compositions of machine learning models, which make it possible to accurately predict or classify values of interest such as quality, trust, etc., require high computational power. Often, such complex machine learning configurations are made possible by advances in processing units, e.g., Graphics Processing Units (GPUs). Computationally constrained devices can also benefit from such advances, and an immediate question arises: how? This paper is devoted to answering this question. Our approach uses scalable representations of 'trained' models through the synthesis of logic circuits. Furthermore, we showcase how a cascade machine learning composition can be achieved using 'traditional' digital electronic devices. To validate our approach, we present a set of preliminary experimental studies showing that different circuit realizations clearly outperform (in terms of processing speed and resource consumption) current machine learning software implementations.
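The core idea sketched in the abstract, representing a 'trained' model as a logic circuit, can be illustrated with a minimal example: a trained model over binary inputs defines a Boolean function, whose truth table (or list of minterms) is exactly a combinational-circuit specification. The sketch below is an illustration of this principle only, not the paper's actual synthesis procedure; the three-input threshold unit and its weights are hypothetical.

```python
# Hedged sketch: enumerating a tiny 'trained' linear threshold model into
# a truth table, i.e., a combinational logic-circuit specification.
from itertools import product

# Hypothetical weights/bias of a trained 3-input threshold unit (assumption).
weights = [0.6, -0.4, 0.8]
bias = -0.5

def predict(x):
    # Standard threshold unit: output 1 if the weighted sum exceeds 0.
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)

# Enumerate all binary inputs; the minterm list below is a sum-of-products
# description that standard logic-synthesis tools can minimize and map to gates.
truth_table = {x: predict(x) for x in product((0, 1), repeat=3)}
minterms = [x for x, y in truth_table.items() if y == 1]
print(minterms)  # the Boolean function realized by this trained unit
```

Once a model is frozen into such a Boolean function, inference reduces to propagation through gates, which is where the speed and resource advantages over software implementations come from; cascading several such circuits mirrors the cascade composition of models mentioned above.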