5 CONCLUSIONS
In this work, we presented a new way to train GANs, using XAI explanations to guide the training of the generator. The idea is to extract the most critical features from the images and provide them to the generator during training. Through quantitative experiments, we demonstrated that the proposed method improves the quality of the generated images: we obtained an increase of up to 37.8% in the quality of the artificial images from the MNIST dataset, with up to 4.94% more variability compared to traditional methods. This significant gain came with little increase in processing time; for example, we obtained a 30.9% decrease in FID with only a 4.51% increase in processing time. Although no single combination of methods was best across all datasets, the proposed method always improved image quality or variability.
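To make the idea of guidance concrete, the sketch below shows one plausible way an XAI attribution map could steer a generator: up-weighting the reconstruction error on pixels the explainer marks as important. The function name `explanation_weighted_loss`, the additive `1 + alpha * saliency` weighting, and the `alpha` parameter are illustrative assumptions, not the exact formulation used in this work.

```python
import numpy as np

def explanation_weighted_loss(fake, real, saliency, alpha=1.0):
    """Pixel-wise L1 loss where pixels that an XAI attribution map
    (saliency, values in [0, 1]) marks as important are weighted more.
    Hypothetical illustration of explanation-guided training."""
    weights = 1.0 + alpha * saliency          # critical pixels count more
    return float(np.mean(weights * np.abs(fake - real)))

rng = np.random.default_rng(0)
real = rng.random((8, 8))                     # stand-in for a real image
fake = rng.random((8, 8))                     # stand-in for a generated image
saliency = np.zeros((8, 8))
saliency[2:6, 2:6] = 1.0                      # pretend the center is "critical"

base = explanation_weighted_loss(fake, real, np.zeros_like(saliency))
guided = explanation_weighted_loss(fake, real, saliency)
# guided >= base: errors on "critical" pixels are penalized more strongly,
# pushing the generator to reproduce the features the explainer highlighted
```

In a full training loop, `saliency` would come from an attribution method such as a gradient-based saliency map computed on the discriminator or an auxiliary classifier, recomputed per batch.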
In future work, we intend to conduct new tests with different combinations of GAN models and different ways to extract information from the images. We believe that improving generator training is still a little-explored area with much room for progress. We also intend to analyze how stable the proposed method is compared to traditional methods. Finally, we intend to investigate the relevance of artificial images for data augmentation problems.
ACKNOWLEDGEMENTS
This research was funded in part by: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) – Finance Code 001; National Council for Scientific and Technological Development - CNPq (#313643/2021-0 and #311404/2021-9); the State of Minas Gerais Research Foundation - FAPEMIG (Grant #APQ-00578-18); São Paulo Research Foundation - FAPESP (Grant #2022/03020-1).
X-GAN: Generative Adversarial Networks Training Guided with Explainable Artificial Intelligence