Table 6: Coherence accuracy by survey (%).

Bad      Normal   Excellent
36.11    25.00    38.89
5 CONCLUSIONS
Through the development of the project, the analysis of the metrics, and the comparison with the other models, we conclude that our model produces good text generation results but requires high processing power to be trained. Because of this limitation, the model could not be trained with the desired parameters, which may be the cause of some incoherence in the generated text.
The CNN and LSTM models performed well within the GAN architecture for text generation with sentiments. A benefit of using convolutional networks is their capacity for feature extraction, which makes the discriminator and classifier work more precise. In the case of the LSTM generator, the information retained at each step of the generation gives the resulting text good coherence and quality.
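To make this architecture concrete, the following is a minimal sketch in PyTorch of how an LSTM generator and a CNN discriminator/classifier can be combined for sentiment-aware adversarial text generation. The hyperparameters and module names are illustrative assumptions, not the exact configuration used in the experiments.

import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    # Illustrative generator: the LSTM state carries information across
    # steps, which is what helps keep the generated text coherent.
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)
        x, state = self.lstm(x, state)
        return self.out(x), state

class CNNDiscriminator(nn.Module):
    # Illustrative discriminator/classifier: convolutions extract n-gram
    # features, shared by a real/fake head and a sentiment head.
    def __init__(self, vocab_size=5000, embed_dim=64, n_filters=32,
                 kernel_sizes=(3, 4, 5), n_sentiments=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, n_filters, k) for k in kernel_sizes)
        feat_dim = n_filters * len(kernel_sizes)
        self.real_fake = nn.Linear(feat_dim, 1)
        self.sentiment = nn.Linear(feat_dim, n_sentiments)

    def forward(self, tokens):
        x = self.embed(tokens).transpose(1, 2)          # (batch, embed, seq)
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        feats = torch.cat(feats, dim=1)                 # extracted features
        return self.real_fake(feats), self.sentiment(feats)

# Quick shape check with random token ids.
gen, disc = LSTMGenerator(), CNNDiscriminator()
tokens = torch.randint(0, 5000, (8, 20))
logits, _ = gen(tokens)
rf, sent = disc(tokens)
print(logits.shape, rf.shape, sent.shape)  # (8, 20, 5000) (8, 1) (8, 2)

The two linear heads on the pooled convolutional features correspond to the adversarial (real/fake) judgment and the sentiment classification described above.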
A worthwhile future upgrade to this work is the exchange of the internal models, similarly to GPT-3-based models (de Rivero et al., 2021). Despite the good performance obtained, it can still be improved, for example, by replacing the LSTM generator with a transformer-based generator or by applying transfer learning from a CNN (Rodríguez et al., 2021).
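As an illustration of that suggested upgrade, the sketch below shows a small causal transformer generator that could stand in for the LSTM generator in the previous sketch. It is an assumed, simplified alternative, not the implementation of the cited works.

import torch
import torch.nn as nn

class TransformerGenerator(nn.Module):
    # Illustrative causal transformer generator (assumed hyperparameters).
    def __init__(self, vocab_size=5000, embed_dim=64, n_heads=4,
                 n_layers=2, max_len=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.pos = nn.Embedding(max_len, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):
        seq_len = tokens.size(1)
        pos = torch.arange(seq_len, device=tokens.device)
        x = self.embed(tokens) + self.pos(pos)
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        x = self.encoder(x, mask=mask)
        return self.out(x)

gen = TransformerGenerator()
tokens = torch.randint(0, 5000, (8, 20))
print(gen(tokens).shape)  # (8, 20, 5000)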
REFERENCES
Banerjee, S. and Lavie, A. (2005). METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In IEEvaluation@ACL.
Cai, P., Chen, X., Jin, P., Wang, H., and Li, T. (2021). Distributional discrepancy: A metric for unconditional text generation. Knowl. Based Syst., 217.
Chen, J., Wu, Y., Jia, C., Zheng, H., and Huang, G. (2020). Customizable text generation via conditional text generative adversarial network. Neurocomputing, 416.
de Rivero, M., Tirado, C., and Ugarte, W. (2021). Formalstyler: GPT based model for formal style transfer based on formality and meaning preservation. In IC3K.
Firdaus, M., Chauhan, H., Ekbal, A., and Bhattacharyya, P. (2020). Emosen: Generating sentiment and emotion controlled responses in a multimodal dialogue system. IEEE Transactions on Affective Computing.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. C., and Bengio, Y. (2014). Generative adversarial networks. CoRR, abs/1406.2661.
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Comput., 9(8).
Huszar, F. (2015). How (not) to train your generative model: Scheduled sampling, likelihood, adversary? CoRR, abs/1511.05101.
Li, Y., Pan, Q., Wang, S., Yang, T., and Cambria, E. (2018). A generative model for category text generation. Inf. Sci., 450.
Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Workshop on Text Summarization Branches Out of ACL.
Liu, Z., Wang, J., and Liang, Z. (2020). CatGAN: Category-aware generative adversarial networks with hierarchical evolutionary learning for category text generation. In AAAI.
Montahaei, E., Alihosseini, D., and Baghshah, M. S. (2021). DGSAN: Discrete generative self-adversarial network. Neurocomputing, 448.
Newman, N. (2019). Journalism, media and technology trends and predictions 2018.
Papineni, K., Roukos, S., Ward, T., and Zhu, W. (2002). BLEU: A method for automatic evaluation of machine translation. In ACL, pages 311–318.
Rizzo, G. and Van, T. H. M. (2020). Adversarial text generation with context adapted global knowledge and a self-attentive discriminator. Inf. Process. Manag., 57(6).
Rodríguez, M., Pastor, F., and Ugarte, W. (2021). Classification of fruit ripeness grades using a convolutional neural network and data augmentation. In IEEE FRUCT.
Wang, K. and Wan, X. (2018). SentiGAN: Generating sentimental texts via mixture adversarial networks. In IJCAI.
Wu, Y. and Wang, J. (2020). Text generation service model based on truth-guided SeqGAN. IEEE Access, 8:11880–11886.
Yan, Y., Shen, G., Zhang, S., Huang, T., Deng, Z., and Yun, U. (2021). Sequence generative adversarial nets with a conditional discriminator. Neurocomputing, 429.
Yu, L., Zhang, W., Wang, J., and Yu, Y. (2017). SeqGAN: Sequence generative adversarial nets with policy gradient. In AAAI.
Zia, T. and Zahid, U. (2019). Long short-term memory recurrent neural network architectures for Urdu acoustic modeling. Int. J. Speech Technol., 22(1).