Authors:
Sadam Al-Azani
and
El-Sayed M. El-Alfy
Affiliation:
College of Computer Sciences and Engineering, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
Keyword(s):
Multimodal Recognition, Sentiment Analysis, Opinion Mining, Gender Recognition, Machine Learning.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Data Manipulation; Data Mining; Databases and Information Systems Integration; Enterprise Information Systems; Evolutionary Computing; Health Engineering and Technology Applications; Human-Computer Interaction; Knowledge Discovery and Information Retrieval; Knowledge-Based Systems; Machine Learning; Methodologies and Methods; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Symbolic Systems; Vision and Perception
Abstract:
Sentiment analysis has recently attracted immense attention from the social media research community. Until recently, the focus was mainly on textual features, before new directions were proposed for integrating other modalities. Moreover, combining gender classification with sentiment recognition is a more challenging problem and enables new business models for directed decision making. This paper explores a sentiment and gender classification system for Arabic speakers using audio, textual, and visual modalities. A video corpus is constructed and processed. Different features are extracted for each modality and then evaluated individually and in different combinations using two machine learning classifiers: support vector machines and logistic regression. Promising results are obtained, with more than 90% accuracy achieved when using support vector machines with audio-visual or audio-text-visual features.
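The pipeline described in the abstract can be sketched as feature-level (early) fusion followed by the two named classifiers. This is a minimal illustrative sketch, not the paper's implementation: the per-modality feature matrices below are random stand-ins for the actual audio, textual, and visual features extracted from the corpus, and the dimensions and label counts are assumptions.

```python
# Sketch of early fusion for multimodal sentiment/gender classification.
# The audio/text/visual matrices are hypothetical stand-ins for real
# extracted features; in the paper these come from an Arabic video corpus.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200  # hypothetical number of video clips

audio = rng.normal(size=(n, 13))    # e.g. acoustic features per clip
text = rng.normal(size=(n, 50))     # e.g. textual features per transcript
visual = rng.normal(size=(n, 64))   # e.g. visual features per clip
y = rng.integers(0, 2, size=n)      # sentiment (or gender) labels

# Early fusion: concatenate the modality features into one vector per clip.
X = np.hstack([audio, text, visual])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Evaluate the two classifiers named in the abstract on the fused features.
for clf in (SVC(), LogisticRegression(max_iter=1000)):
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    print(f"{clf.__class__.__name__}: accuracy = {acc:.2f}")
```

Single-modality or pairwise (e.g. audio-visual) runs follow the same pattern by changing which matrices are concatenated into `X`.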