Scalable Supervised Machine Learning Apparatus for Computationally Constrained Devices

Authors: Jorge López 1 ; Andrey Laputenko 2 ; Natalia Kushik 1 ; Nina Yevtushenko 3 and Stanislav N. Torgaev 2

Affiliations: 1 SAMOVAR, CNRS, Télécom SudParis, Université Paris-Saclay, 9 rue Charles Fourier, 91000 Évry, France ; 2 Department of Information Technologies, Tomsk State University, 36 Lenin street, 634050 Tomsk, Russia ; 3 Department of Information Technologies, Tomsk State University, 36 Lenin street, 634050 Tomsk, Russia, and Ivannikov Institute for System Programming of the Russian Academy of Sciences, 25 Alexander Solzhenitsyn street, 109004 Moscow, Russia

Keyword(s): Supervised Machine Learning, Digital Circuits, Constrained Devices, Deep Learning.

Related Ontology Subjects/Areas/Topics: Data Communication Networking ; Distributed and Mobile Software Systems ; Enterprise Information Systems ; Internet of Things ; Parallel and High Performance Computing ; Sensor Networks ; Software Agents and Internet Computing ; Software and Architectures ; Software Engineering ; Telecommunications

Abstract: Computationally constrained devices are devices with typically low resources / computational power, built for specific tasks. At the same time, recent advances in machine learning, e.g., deep learning or hierarchical/cascade compositions of machines, which allow values of interest such as quality, trust, etc. to be accurately predicted or classified, require high computational power. Often, such complex machine learning configurations are made possible by advances in processing units, e.g., Graphical Processing Units (GPUs). Computationally constrained devices can also benefit from such advances, and an immediate question arises: how? This paper is devoted to answering that question. Our approach uses scalable representations of ‘trained’ models through the synthesis of logic circuits. Furthermore, we showcase how a cascade machine learning composition can be achieved using ‘traditional’ digital electronic devices. To validate our approach, we present a set of preliminary experimental studies that show how different circuit apparatus clearly outperform (in terms of processing speed and resource consumption) current machine learning software implementations.
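
For illustration only (not taken from the paper): a minimal Python sketch of one way a trained model over Boolean features could be reduced to a truth table, which standard logic-synthesis tooling could then minimise and map onto gates for a constrained device. The decision-tree model, toy data, and scikit-learn usage are assumptions made for this example; the paper's actual synthesis flow is described in the full text.

    # Sketch: exhaustively tabulate a small trained classifier over Boolean inputs.
    from itertools import product
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical training data: 4 Boolean features, binary label ("at least 3 ones").
    X = [list(bits) for bits in product([0, 1], repeat=4)]
    y = [int(sum(bits) >= 3) for bits in X]

    model = DecisionTreeClassifier().fit(X, y)

    # Enumerating every input combination yields a truth table that exactly
    # represents the trained model; a logic-synthesis tool (e.g., Espresso/ABC)
    # could then turn it into a combinational circuit.
    truth_table = {bits: int(model.predict([list(bits)])[0])
                   for bits in product([0, 1], repeat=4)}

    for bits, out in truth_table.items():
        print("".join(map(str, bits)), "->", out)
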

CC BY-NC-ND 4.0

Paper citation in several formats:
López, J.; Laputenko, A.; Kushik, N.; Yevtushenko, N. and Torgaev, S. (2018). Scalable Supervised Machine Learning Apparatus for Computationally Constrained Devices. In Proceedings of the 13th International Conference on Software Technologies - ICSOFT; ISBN 978-989-758-320-9; ISSN 2184-2833, SciTePress, pages 518-528. DOI: 10.5220/0006908905520562

@conference{icsoft18,
author={Jorge López and Andrey Laputenko and Natalia Kushik and Nina Yevtushenko and Stanislav N. Torgaev},
title={Scalable Supervised Machine Learning Apparatus for Computationally Constrained Devices},
booktitle={Proceedings of the 13th International Conference on Software Technologies - ICSOFT},
year={2018},
pages={518-528},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006908905520562},
isbn={978-989-758-320-9},
issn={2184-2833},
}

TY - CONF
JO - Proceedings of the 13th International Conference on Software Technologies - ICSOFT
TI - Scalable Supervised Machine Learning Apparatus for Computationally Constrained Devices
SN - 978-989-758-320-9
IS - 2184-2833
AU - López, J.
AU - Laputenko, A.
AU - Kushik, N.
AU - Yevtushenko, N.
AU - Torgaev, S.
PY - 2018
SP - 518
EP - 528
DO - 10.5220/0006908905520562
PB - SciTePress