Introducing the Hidden Neural Markov Chain Framework

Authors: Elie Azeraf 1,2; Emmanuel Monfrini 2; Emmanuel Vignon 1 and Wojciech Pieczynski 1

Affiliations: 1 SAMOVAR, CNRS, Telecom SudParis, Institut Polytechnique de Paris, Evry, France; 2 Watson Department, IBM GBS, avenue de l’Europe, Bois-Colombes, France

Keyword(s): Hidden Markov Model, Entropic Forward-Backward, Recurrent Neural Network, Sequence Labeling, Hidden Neural Markov Chain.

Abstract: Nowadays, neural network models achieve state-of-the-art results in many areas, such as computer vision and speech processing. For sequential data, especially for Natural Language Processing (NLP) tasks, Recurrent Neural Networks (RNNs) and their extensions, the Long Short-Term Memory (LSTM) network and the Gated Recurrent Unit (GRU), are among the most used models, processing sequences “term-to-term”. However, while many works extend and improve the RNN, few have focused on developing other ways of processing sequential data with neural networks in a “term-to-term” fashion. This paper proposes the original Hidden Neural Markov Chain (HNMC) framework, a new family of sequential neural models. They are based not on the RNN but on the Hidden Markov Model (HMM), a probabilistic graphical model. This neural extension is possible thanks to the recent Entropic Forward-Backward algorithm for HMM restoration. We propose three different models: the classic HNMC, the HNMC2, and the HNMC-CN. After describing the construction of our models, we compare them with classic RNN and Bidirectional RNN (BiRNN) models on several sequence labeling tasks: Chunking, Part-Of-Speech Tagging, and Named Entity Recognition. In every experiment, whatever the architecture or embedding method used, one of our proposed models achieves the best results. This shows the potential of this new sequential neural framework, which can open the way to new models and might eventually compete with the prevalent BiLSTM and BiGRU.
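As background to the abstract: the Entropic Forward-Backward algorithm is a reformulation of the classic forward-backward smoothing recursion for HMMs, and it is this reformulation that makes the neural extension possible. The sketch below shows only the classic recursion, to fix ideas; the function name, variable names, and toy numbers are illustrative assumptions, not code from the paper.

import numpy as np

def forward_backward(pi, A, B, obs):
    """Classic forward-backward smoothing for a discrete HMM.

    pi  : (K,)   initial distribution, pi[i] = p(x_1 = i)
    A   : (K, K) transitions, A[i, j] = p(x_t = j | x_{t-1} = i)
    B   : (K, V) emissions,   B[i, v] = p(y_t = v | x_t = i)
    obs : (T,)   observed symbol indices
    Returns a (T, K) array of posterior marginals p(x_t | y_1..T).
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))

    # Forward pass, renormalized at each step for numerical stability.
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()

    # Backward pass.
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()

    # Pointwise product, renormalized per time step: p(x_t | y_1..T).
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Toy usage (2 states, 2 symbols; numbers are arbitrary):
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward_backward(pi, A, B, np.array([0, 0, 1])))

The entropic variant recovers the same marginals from, roughly, quantities of the form p(x_t | y_t) rather than the emission matrix B, which is what allows each factor to be parameterized with a neural network and yields the HNMC family compared in the paper.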

CC BY-NC-ND 4.0


Paper citation in several formats:
Azeraf, E.; Monfrini, E.; Vignon, E. and Pieczynski, W. (2021). Introducing the Hidden Neural Markov Chain Framework. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-484-8; ISSN 2184-433X, SciTePress, pages 1013-1020. DOI: 10.5220/0010303310131020

@conference{icaart21,
author={Elie Azeraf and Emmanuel Monfrini and Emmanuel Vignon and Wojciech Pieczynski},
title={Introducing the Hidden Neural Markov Chain Framework},
booktitle={Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2021},
pages={1013-1020},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010303310131020},
isbn={978-989-758-484-8},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Introducing the Hidden Neural Markov Chain Framework
SN - 978-989-758-484-8
IS - 2184-433X
AU - Azeraf, E.
AU - Monfrini, E.
AU - Vignon, E.
AU - Pieczynski, W.
PY - 2021
SP - 1013
EP - 1020
DO - 10.5220/0010303310131020
PB - SciTePress
ER -