2017; Karras et al., 2021). A study by Radford et al. (2016) explored the arithmetic properties of vectors in the latent space and showed that GANs learn meaningful semantics in this early hidden representation. A prior study (Bau et al., 2019) demonstrated that the generator synthesizes specific visual concepts in its intermediate layers. Despite these findings, little is known about how changes in the latent space affect the generated output.
In this paper, we present and evaluate a meta-analysis of the GAN latent space. We propose MAGAN, an algorithm for Meta-Analysis of GANs' latent space. We explore the GAN latent space by studying the arithmetic behind its vectors and discovering how a modification of a latent vector affects the generated output. We found that the specific latent vector fed to the generator gives an insight into what the generated output will be. In other words, we can control the output ahead of time and generate a desired result, such as coats or trousers.
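As a minimal illustration of this idea (a sketch under our own assumptions, not the full MAGAN method), the snippet below uses a hypothetical generator G; in practice G would be a trained network, and an untrained MLP stands in here only so the code runs end to end:

```python
import torch
import torch.nn as nn

# Hypothetical generator: in practice this would be the trained network;
# here an untrained MLP stands in so the snippet runs end to end.
latent_dim = 100
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),    # 28x28 grayscale output
)

torch.manual_seed(42)                       # fix randomness for repeatability
z = torch.randn(1, latent_dim)              # one specific latent vector

with torch.no_grad():
    img = G(z).view(28, 28)                 # G is deterministic given z

# Because G(z) depends only on z, storing a vector whose output we have
# inspected lets us regenerate the same item (e.g., a coat or trousers)
# by feeding that vector back to the generator.
```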
The organization of this paper is as follows. The first part gives a brief introduction to GANs, a literature review, and the contributions of this work. Section 2 provides background on GANs and discusses the motivation for using them. Section 3 presents the proposed MAGAN algorithm as well as the model components. Section 4 presents the meta-analysis of the latent space and the experimental results. Finally, Section 5 concludes the paper by summarizing the discussed work.
2 BACKGROUND AND MOTIVATION
In this section, we give a brief overview of the fundamental concepts and key notions of GANs. A GAN is a neural network framework used for unsupervised learning. It consists of two components that compete against each other in a min-max game. One component, the discriminator (D), distinguishes between real and fake samples, while the other, the generator (G), produces samples that resemble the real data in an attempt to fool D. The concept is summarized in Figure 1: G takes a sample from the latent space as input and generates fake samples, whereas D receives two inputs, real samples (from the dataset) and fake samples (generated by G). The role of D is to separate real samples from fake ones. GANs are trained in an alternating fashion, and the two models should maintain similar skill levels throughout training.
Figure 1: Design of the GAN architecture.
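The two components in Figure 1 can be made concrete with a minimal sketch. The sizes below (28x28 grayscale images, a 100-dimensional latent space) are our own assumptions, not something the GAN framework fixes:

```python
import torch.nn as nn

latent_dim, img_dim = 100, 28 * 28   # assumed sizes (e.g., Fashion-MNIST)

# Generator G: maps a latent vector z to a fake sample.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, img_dim), nn.Tanh(),        # outputs scaled to [-1, 1]
)

# Discriminator D: maps a sample (real or fake) to a probability of "real".
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
```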
Since the two networks have distinct objective functions, each attempts to optimize its own. G aims to minimize the value function, whereas D aims to maximize it, so that the overall optimization is:
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \quad (1)$$
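Equation (1) translates directly into alternating stochastic updates. The sketch below, reusing the G, D, and latent_dim from the sketch above, performs one such update pair; it follows the literal form of Eq. (1), whereas practical implementations often maximize log D(G(z)) instead:

```python
import torch
import torch.nn.functional as F

opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)

def gan_step(real):                        # real: (batch, img_dim) in [-1, 1]
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # D step: ascend log D(x) + log(1 - D(G(z))) (here via BCE descent).
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()                   # block gradients into G
    d_loss = F.binary_cross_entropy(D(real), ones) \
           + F.binary_cross_entropy(D(fake), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # G step: descend log(1 - D(G(z))), mirroring Eq. (1) literally
    # (the non-saturating variant maximizes log D(G(z)) instead).
    z = torch.randn(batch, latent_dim)
    g_loss = torch.log(1 - D(G(z)) + 1e-8).mean()
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```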
GANs have attracted rapidly growing attention in the deep learning field due to several benefits over more conventional generative models. Conventional generative models impose limitations on the generator architecture, whereas GANs can train any kind of generator network (Doersch, 2016; Goodfellow, 2017; Kingma and Welling, 2013). GANs also produce better output than other conventional generative models: while VAEs are unable to produce sharp images, GANs can represent any form of probability density (Goodfellow, 2017). Lastly, there are no limitations on the dimension of the latent variable.
These benefits have allowed GANs to produce synthetic data of the highest quality, particularly image data. In addition, GANs can be used for data augmentation, especially when data are scarce. Furthermore, interpolation in the latent space is one of the most intriguing outcomes of GAN training. Simple vector arithmetic properties emerge, and when the latent vectors are altered, the semantic qualities of the resulting images change (Radford et al., 2015). The latent space of GANs also enables dimensionality reduction and novel applications. For instance, a robust classifier might be created using adversarial examples determined by changes in the latent space (Jalal et al., 2017). Hence, the ability to perform interpolation, and the interpretability of the latent space, motivate this work: a meta-analysis of the latent space that studies the effects of arithmetic modifications of latent vectors on the generated output.
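For concreteness, the interpolation and vector arithmetic described above can be expressed in a few lines, again reusing the hypothetical G and latent_dim from the earlier sketches; the semantic effects of course only appear with a trained model:

```python
import torch

torch.manual_seed(0)
z1, z2 = torch.randn(1, latent_dim), torch.randn(1, latent_dim)

# Linear interpolation: with a well-trained G, the images along this
# path morph smoothly between the outputs for z1 and z2.
with torch.no_grad():
    path = [G((1 - a) * z1 + a * z2) for a in torch.linspace(0, 1, 8)]

# Vector arithmetic (Radford et al., 2015): moving along a latent
# direction shifts semantic attributes of the generated image.
direction = z2 - z1
with torch.no_grad():
    shifted = G(z1 + 0.5 * direction)
```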