
Authors: Silvestre Malta 1,2; Pedro Pinto 1,3 and Manuel Fernández Veiga 2

Affiliations: 1 ADiT-Lab, ESTG - Instituto Politécnico de Viana do Castelo, Portugal ; 2 University of Vigo and AtlanTTic Research Center, Spain ; 3 INESC TEC, R. Dr. Roberto Frias, Porto, Portugal

Keyword(s): Neural Networks, Machine Learning, NLP, LSTM, RNN, GRU, CNN, Word2Vec, Mobility Prediction, Training Time Optimization.

Abstract: The process of building and deploying Machine Learning (ML) models includes several phases, and the training phase is one of the most time-consuming. ML models with time series datasets can be used to predict users' positions, behaviours or mobility patterns, which implies paths crossing well-defined positions; thus, in these cases, syntactic similarity can be used to reduce the training time of these models. This paper uses the case study of a Mobile Network Operator (MNO) where user mobility is predicted through ML, and the use of syntactic similarity with the Word2Vec (W2V) framework is tested with Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models. Experimental results show that by using the W2V framework in these architectures, training time is reduced on average by between 22% and 43%. An improvement of about 3 percentage points on average in the validation accuracy of mobility prediction is also obtained.
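The core idea in the abstract can be sketched as follows: each user's mobility trace is a sequence of cell-tower identifiers, which can be treated like a sentence of words, so Word2Vec-style embeddings can replace a wide one-hot input layer and shrink the model that must be trained. This is a minimal illustrative sketch, not the paper's implementation; the trace data, embedding values and dimension below are hypothetical stand-ins for vectors that would come from a pretrained W2V model.

```python
import numpy as np

# Toy mobility traces: each user's path is a "sentence" of cell-tower IDs,
# so embeddings can be trained on them the way Word2Vec trains on words.
traces = [
    ["cellA", "cellB", "cellC"],
    ["cellA", "cellB", "cellD"],
    ["cellC", "cellB", "cellA"],
]

# Build the vocabulary of positions and an index for lookup.
vocab = sorted({cell for trace in traces for cell in trace})
idx = {cell: i for i, cell in enumerate(vocab)}

# Stand-in for pretrained W2V vectors (random values for illustration);
# in the paper's approach these would be produced by the W2V framework.
rng = np.random.default_rng(0)
emb_dim = 2  # dense dimension, smaller than the one-hot size len(vocab)
embedding = rng.normal(size=(len(vocab), emb_dim))

def encode(trace):
    """Map a path of cell IDs to its dense embedding matrix, replacing a
    one-hot input of width len(vocab) at every time step."""
    return embedding[[idx[cell] for cell in trace]]

x = encode(traces[0])
print(x.shape)  # (3, 2): 3 time steps, 2-dimensional dense vectors
```

The dense matrix `x` is what would be fed to the RNN/GRU/LSTM/CNN input layer; because its width is fixed by the embedding dimension rather than the vocabulary size, the downstream network has fewer input weights to fit, which is the mechanism behind the reported training-time reduction.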

CC BY-NC-ND 4.0


Paper citation in several formats:
Malta, S.; Pinto, P. and Veiga, M. (2021). Using Syntactic Similarity to Shorten the Training Time of Deep Learning Models using Time Series Datasets: A Case Study. In Proceedings of the 2nd International Conference on Deep Learning Theory and Applications - DeLTA; ISBN 978-989-758-526-5; ISSN 2184-9277, SciTePress, pages 93-100. DOI: 10.5220/0010515700930100

@conference{delta21,
author={Silvestre Malta and Pedro Pinto and Manuel Fernández Veiga},
title={Using Syntactic Similarity to Shorten the Training Time of Deep Learning Models using Time Series Datasets: A Case Study},
booktitle={Proceedings of the 2nd International Conference on Deep Learning Theory and Applications - DeLTA},
year={2021},
pages={93-100},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010515700930100},
isbn={978-989-758-526-5},
issn={2184-9277},
}

TY - CONF

JO - Proceedings of the 2nd International Conference on Deep Learning Theory and Applications - DeLTA
TI - Using Syntactic Similarity to Shorten the Training Time of Deep Learning Models using Time Series Datasets: A Case Study
SN - 978-989-758-526-5
IS - 2184-9277
AU - Malta, S.
AU - Pinto, P.
AU - Veiga, M.
PY - 2021
SP - 93
EP - 100
DO - 10.5220/0010515700930100
PB - SciTePress