6 CONCLUSIONS
In this work, we have demonstrated the potential of composing a bouquet of a lead flower and supporting flowers by attending to a set of unqualified sentiment expressions streamed over an RNN. We confirmed that a GRU-based RNN improved our flower prediction quality by about ten percentage points over a standard RNN, and that feeding the RNN with spatiotemporal sentiment context proved particularly beneficial for short-text sequences. In contrast, bidirectional propagation of the sentiment word vectors entering the RNN was less intuitive and contributed only an inconsequential prediction gain. Our proposed workflow is simple, offers on average high prediction recall across hundreds of mix choices for a mainline bouquet, and conveys a more cohesive emotional message composed of semantically related sentiments.
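To make the architectural difference behind the GRU gain concrete, the sketch below contrasts a plain Elman recurrence with a GRU step. This is a minimal illustration only, not the implementation used in our experiments; all variable names, dimensions, and weight initializations here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elman_step(x, h, Wx, Wh, b):
    # Plain RNN: the new state is a single tanh blend of input and prior state.
    return np.tanh(Wx @ x + Wh @ h + b)

def gru_step(x, h, params):
    # GRU adds update (z) and reset (r) gates that decide, per dimension,
    # how much of the previous state is kept versus overwritten.
    Wz, Uz, Wr, Ur, Wc, Uc = params
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    c = np.tanh(Wc @ x + Uc @ (r * h))        # candidate state
    return (1 - z) * h + z * c                # interpolate old and candidate

rng = np.random.default_rng(0)
d_in, d_h = 4, 3          # toy sizes for a sentiment word vector and state
x = rng.normal(size=d_in)
h = np.zeros(d_h)
params = [rng.normal(scale=0.1, size=(d_h, d_in)) if i % 2 == 0
          else rng.normal(scale=0.1, size=(d_h, d_h))
          for i in range(6)]
h_rnn = elman_step(x, h, params[0], params[1], np.zeros(d_h))
h_gru = gru_step(x, h, params)
print(h_rnn.shape, h_gru.shape)
```

The gating is what lets a GRU carry a sentiment signal across many time steps without it being overwritten at every input, which is consistent with the gain we observed on longer mixed-sentiment sequences.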
To the best of our knowledge, the work presented here is the first to apply computational linguistic modeling to the language of flowers. We contend that floriography is an important NLP discipline to pursue, both for its deeply rooted historical impact on societal culture and for its prospect of influencing areas of critical theory and sentiment analysis. In its current state, the corpus we used in this paper is small and challenging, but we anticipate that the language will expand its sentiment translation to thousands of flowering plants and further merit our statistically reasoned system. A direct progression of our work is to evolve toward a task that matches flower-sentiment pairs from unstructured full text rather than from a set of prescribed sentiment phrases; this would have profound practical importance, impacting a much broader scope of application domains that include cryptography and secure communication.
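One plausible baseline for the full-text matching task above is nearest-neighbor search in word-vector space: embed the free text, then map it to the closest prescribed sentiment phrase and its flower. The sketch below illustrates this under stated assumptions; the toy vectors, vocabulary, and flower dictionary are hypothetical placeholders, not our corpus.

```python
import numpy as np

# Hypothetical 2-d word vectors standing in for learned embeddings.
vecs = {
    "love": np.array([1.0, 0.1]),
    "devotion": np.array([0.9, 0.2]),
    "grief": np.array([-0.8, 0.5]),
    "sorrow": np.array([-0.9, 0.4]),
}
# Hypothetical prescribed sentiment phrases mapped to flowers.
flower_of = {"devotion": "heliotrope", "sorrow": "marigold"}

def embed(text):
    # Average the vectors of known words: a crude sentence embedding.
    words = [w for w in text.lower().split() if w in vecs]
    return np.mean([vecs[w] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_flower(text):
    # Pick the prescribed sentiment nearest to the free text, then its flower.
    sent = max(flower_of, key=lambda s: cosine(embed(text), vecs[s]))
    return flower_of[sent]

print(match_flower("a message of love"))
```

A real system would replace the averaged toy vectors with trained embeddings and the dictionary lookup with our flower-prediction model, but the retrieval skeleton would be the same.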
ACKNOWLEDGEMENTS
We would like to thank the anonymous reviewers for
their insightful suggestions and feedback.
ICAART 2018 - 10th International Conference on Agents and Artificial Intelligence