Binali H., Wu C., Potdar V., 2010. Computational
Approaches for Emotion Detection in Text, 4th IEEE
International Conference on Digital Ecosystems and
Technologies, vol. 37, no. 5, pp. 498–527.
Breazeal C., Aryananda L., 2002. Recognition of affective
communicative intent in robot-directed speech, Auton.
Robots, vol. 12, no. 1, pp. 83–104.
Brooks M., Naylor P., Gudnason J., 2006. A Quantitative
Assessment of Group Delay Methods for Identifying
Glottal Closures in Voiced Speech. IEEE Transactions
on Audio, Speech and Language Processing, vol. 14,
no. 2, pp. 456-466.
Iliev A., 2012. Emotion Recognition from Speech: Using
In-depth Analysis of Glottal and Prosodic Features,
Various Classifiers, Speech Corpora and Noise
Conditions, Lambert Academic Publishing, 168 p.
Iliev A., Stanchev P., 2018. Information Retrieval and
Recommendation Using Emotion from Speech Signal,
in: 2018 IEEE Conference on Multimedia Information
Processing and Retrieval, Miami, FL, USA, April 10-
12, pp. 222-225, DOI: 10.1109/MIPR.2018.00054.
Iliev A., Stanchev P., 2017. Smart multifunctional digital
content ecosystem using emotion analysis of voice,
18th International Conference on Computer Systems
and Technologies CompSysTech’17, Ruse, Bulgaria,
June 22-24, ACM, ISBN 978-1-4503-5234-5, vol.
1369, pp. 58-64.
Marinova D., Iliev A., Pavlov R., Zlatkov L., 2018.
Towards Increasing and Personalizing of User
Experience in the Digital Culture Ecosystem,
International Journal of Applied Engineering
Research, ISSN 0973-4562, vol. 13, no. 6, pp. 4227-
423.
McCallum A., Nigam K., 1998. A Comparison of Event
Models for Naive Bayes Text Classification,
AAAI/ICML-98 Work. Learn. Text Categ., pp. 41–48.
Moore E., Clements M., Peifer J., Weisser L., 2003.
Investigating the Role of Glottal Features in Classifying
Clinical Depression. 25th Annual International
Conference of the IEEE EMBS, pp. 2849-2852.
O’Shaughnessy D., 2000. Speech Communications –
Human and Machine. IEEE Press.
Plutchik R., 1980. Emotion: A psychoevolutionary
synthesis. Harpercollins College Division.
Pramod R., Vijayarajan V., 2017. Extraction of Emotions
from Speech - A Survey, International Journal of
Applied Engineering Research, ISSN 0973-4562, vol.
12, no. 16, pp. 5760-5767.
Quatieri T., 2002. Discrete-Time Speech Signal Processing
Principles and Practice, Prentice Hall.
Rabiner L., Schafer R., 1978. Digital Processing of Speech
Signals, Prentice Hall.
Rothenberg M., 1973. A New Inverse-Filtering Technique
for Deriving the Glottal Air Flow Waveform during
Voicing. Journal of the Acoustical Society of America,
vol. 53, pp. 1632-1645.
Schuller B., Rigoll G., Lang M., 2003. Hidden Markov
model-based speech emotion recognition, in 2003 IEEE
International Conference on Acoustics, Speech, and
Signal Processing, vol. 2, pp. II–1–4.
Slaney M., McRoberts G., 2003. Baby ears: a recognition
system for affective vocalizations, Speech Commun.,
vol. 39, pp. 367–384.
Stanchev P., Marinova D., Iliev A., 2017. Enhanced User
Experience and Behavioral Patterns for Digital Cultural
Ecosystems, The 9th International Conference on
Management of Digital EcoSystems (MEDES’17),
Bangkok, Thailand, Nov. 7-10, ACM, ISBN 978-1-
4503-4895-9, pp. 288-293.
Wong D., Markel J., Gray A., 1979. Least Squares Glottal
Inverse Filtering from the Acoustical Speech
Waveform. IEEE Transactions on Acoustics, Speech,
and Signal Processing, vol. ASSP-27, no. 4, pp. 350-
355.
Zhang Z., Coutinho E., Deng J., Schuller B., 2014.
Cooperative Learning and its Application to Emotion
Recognition from Speech, IEEE/ACM Trans. Audio,
Speech, Lang. Process., vol. 23, no. 1, pp. 1–1.