Authors:
Xiaoyang Gao¹ and Ryutaro Ichise²
Affiliations:
¹ Peking University, China; ² National Institute of Informatics and National Institute of Advanced Industrial Science and Technology, Japan
Keyword(s):
NLP, Word Embeddings, Deep Learning, Neural Network.
Related Ontology Subjects/Areas/Topics:
Applications; Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Health Engineering and Technology Applications; Human-Computer Interaction; Knowledge Engineering and Ontology Development; Knowledge-Based Systems; Methodologies and Methods; Natural Language Processing; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Symbolic Systems; Theory and Methods
Abstract:
Continuous-representation language models have gained popularity in many NLP tasks. To measure the similarity of two words, we calculate the cosine similarity of their vectors. However, the quality of word embeddings depends on the selected corpus. For Word2Vec, we observe that the vectors of many word pairs lie far apart from each other; synonyms with low occurrence counts or with multiple meanings lie even further apart. In these cases, cosine similarity is not an appropriate measure of how similar the words are. Moreover, the structures of most language models are not as deep as one might suppose. Based on these observations, we implement a mixed deep neural network with two kinds of architectures. We show that word embeddings can be adjusted in both unsupervised and supervised ways. Remarkably, this approach improves the cases mentioned above, substantially increasing the similarity scores of almost all synonym pairs. The networks are also easy to train and can be adapted to specific tasks by changing the training target and dataset.
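For concreteness, below is a minimal sketch (not code from the paper) of the cosine-similarity measure the abstract refers to, in plain NumPy; the 300-dimensional random vectors and the word pair "movie"/"film" are illustrative placeholders for embeddings that would normally be loaded from a trained Word2Vec model:

import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical pre-trained embeddings (in practice, loaded from a
# Word2Vec model trained on the selected corpus).
embeddings = {
    "movie": np.random.rand(300),
    "film": np.random.rand(300),
}

# For low-frequency or polysemous synonyms, this score can come out
# unexpectedly low, which is the failure case the paper targets.
print(cosine_similarity(embeddings["movie"], embeddings["film"]))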