tor to design shoes or fabrics that will appeal to consumers. With this ability to imitate without actually copying, GANs could have far-reaching implications for industrial design and copyright protection.
With regard to the limitations of the command systems, namely sparsity and noise, two lines of research have been conducted; their common ideas can be summarized as follows:
1. For the problem of data sparsity, data augmentation (Sandy et al., ), implemented by capturing the distribution of real data under the minimax game, is the main adaptation strategy.
2. For the issue of data noise, adversarial perturbations and training based on adversarial sampling are often used as a solution (Mayer and Timofte, 2020).
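As an illustration of such adversarial perturbations, the sketch below applies a sign-based (FGSM-style) perturbation to an input vector; this particular technique and the function name are our illustrative choices, not details given in the text above.

```python
def fgsm_perturb(x, grad, eps=0.1):
    """Perturb input x in the direction of the loss gradient's sign.

    x, grad: lists of floats; eps: perturbation budget.
    Illustrative sketch of an adversarial disturbance.
    """
    sign = [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in grad]
    return [xi + eps * s for xi, s in zip(x, sign)]

# A training loop would then treat the perturbed samples as extra
# "hard" examples, making the model more robust to input noise.
```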
In this article, we take a closer look at GANs and the different variations of their loss functions (Brownlee, 2020), in order to gain better insight into how a GAN works while addressing unexpected performance issues. The standard GAN loss function, also known as the min-max loss (Brownlee, 2020), will be used to train the two models: the generator tries to minimize this function while the discriminator tries to maximize it. The rest of this paper is organized as follows. In Section 2 we briefly compare and position our solution with respect to other proposals found in the literature. Section 3 describes the problem addressed. In Section 4, we describe our proposed method, which can potentially be applied to any discriminator model whose loss is a sum of a real part and a fake part.
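To make the min-max loss and its "real part plus fake part" structure concrete, here is a minimal pure-Python sketch; the function names and the use of per-sample binary cross-entropy are our assumptions, not the paper's code:

```python
import math

def bce(pred, target, eps=1e-7):
    # binary cross-entropy for a single prediction, clamped into (0, 1)
    pred = min(max(pred, eps), 1 - eps)
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

def discriminator_loss(d_real, d_fake):
    # sum of the real part (labels = 1) and the fake part (labels = 0)
    real_part = sum(bce(p, 1.0) for p in d_real) / len(d_real)
    fake_part = sum(bce(p, 0.0) for p in d_fake) / len(d_fake)
    return real_part + fake_part

def generator_loss(d_fake):
    # the generator tries to make the discriminator output 1 on fakes
    return sum(bce(p, 1.0) for p in d_fake) / len(d_fake)
```

With an undecided discriminator (all outputs 0.5), `discriminator_loss` evaluates to 2 log 2, the well-known value at the minimax equilibrium.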
2 STATE-OF-THE-ART
Generative Adversarial Networks (GANs) refer to a family of generative models that seek to discover the underlying distribution behind a certain data-generating process. A GAN is described as two models in competition which, when trained, are able to generate samples indiscernible from those drawn from the true data distribution. This distribution is discovered through an adversarial competition between a generator and a discriminator. The two models are trained such that the discriminator strives to distinguish between generated and true examples, while the generator seeks to confuse the discriminator by producing data that are as realistic and compelling as possible. This generative model pits two neural networks, D and G, against each other; they will be called hereafter the discriminator and the generator, respectively. In this section, we present a brief review of the existing literature on generative adversarial networks. GANs were first formulated in (Goodfellow et al., 2020) (Mao and Li, 2021), a work that demonstrated their potential as a generative model.
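The adversarial competition between D and G described above is commonly formalized as the minimax objective (Goodfellow et al., 2020):

```latex
\min_{G}\,\max_{D}\; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

where $p_{\mathrm{data}}$ is the real data distribution and $p_{z}$ is the prior on the latent noise $z$ fed to the generator.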
GANs became popular for image synthesis following the successful use of deep convolutional layers (Mao and Li, 2021) (noa, 2015).
Classical Algorithms. Classical image processing algorithms are unsupervised algorithms that improve low-light images through well-founded mathematical models. They are efficient and computationally simple, but they are not robust enough and require manual calibration to be usable in certain conditions (Tanaka et al., 2019).
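As a concrete instance of such a classical, model-based method, the sketch below applies simple gamma correction to brighten a low-light image; gamma correction is our illustrative choice (the text does not name a specific algorithm), and the hand-picked gamma value is exactly the kind of manual calibration referred to above.

```python
def gamma_correct(pixels, gamma=0.5):
    """Brighten normalized pixel intensities in [0, 1].

    gamma < 1 lifts dark regions; the value must be tuned by hand
    for each capture condition, which is the method's main drawback.
    """
    return [p ** gamma for p in pixels]

dark = [0.04, 0.09, 0.25]
print(gamma_correct(dark))  # each pixel is brightened toward 1
```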
Implicit Model for Generation. Apart from descriptive models, another popular branch of deep generative models consists of black-box models that map latent variables to signals via a top-down CNN, such as the Generative Adversarial Network (GAN) (Goodfellow et al., 2020) and its variants. These models have achieved remarkable success in generating realistic images, and they learn the generator network with the help of an auxiliary discriminator network.
Adversarial Networks. Generative Adversarial Networks (Goodfellow et al., 2020) have proven to perform sufficiently well for many supervised and unsupervised learning problems. In (Zhu et al., 2017) the authors propose a model through which the need for paired images is eliminated and image translation between two domains can be done through a cycle-consistency loss. These techniques have been applied to many other tasks, including dehazing, super-resolution, etc. Lately, they have been applied to low-light image enhancement in EnlightenGAN (Jiang et al., 2019) with promising results, and this has motivated our GAN model. Generative adversarial networks (Goodfellow et al., 2020) have also benefited from convolutional decoder networks for the generator network module. Denton et al. (Denton et al., 2015) used a Laplacian pyramid of adversarial generators and discriminators to synthesize images at multiple resolutions. This work generated compelling high-resolution images and could also condition on class labels for controllable generation. Radford (Alec et al., 2015) used a standard convolutional decoder, but developed a highly effective and stable architecture incorporating batch normalization.
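The cycle-consistency idea of (Zhu et al., 2017) can be sketched as follows; the mappings G and F stand in for the two domain translators (hypothetical placeholders here), and the L1 distance is the usual choice:

```python
def l1(a, b):
    # mean absolute difference between two equally sized vectors
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, x, y):
    """G maps domain X -> Y, F maps Y -> X (placeholder translators).

    Translating forward and then back should recover the original
    sample, removing the need for paired training images.
    """
    return l1(F(G(x)), x) + l1(G(F(y)), y)

# With perfect inverse mappings the loss vanishes:
identity = lambda v: list(v)
print(cycle_consistency_loss(identity, identity, [1.0, 2.0], [3.0]))  # 0.0
```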
Fully Connected GANs. The first GAN architectures used fully connected neural networks for both the generator and discriminator (Goodfellow et al., 2020). This type of architecture was applied to relatively simple image data sets: Kaggle MNIST (handwritten digits), CIFAR-10 (natural images), and the Toronto Face Data Set (TFD).
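A fully connected generator of the kind used in these early architectures can be sketched as below; the layer sizes (a 100-dimensional noise vector mapped through one hidden layer to a flat 784-pixel, MNIST-shaped output) are typical choices of ours, not figures taken from the text, and the random fixed weights stand in for trained parameters.

```python
import math, random

def dense(x, n_out, rng):
    # fully connected layer with small random weights (illustration only)
    return [sum(rng.uniform(-0.05, 0.05) * xi for xi in x)
            for _ in range(n_out)]

def relu(x):
    return [max(0.0, v) for v in x]

def tanh(x):
    return [math.tanh(v) for v in x]

def generator(z, rng):
    """Map a noise vector z to a flat 28x28 'image' in [-1, 1]."""
    h = relu(dense(z, 128, rng))     # hidden layer
    return tanh(dense(h, 784, rng))  # output layer, one value per pixel

rng = random.Random(0)
z = [rng.gauss(0.0, 1.0) for _ in range(100)]  # latent noise
img = generator(z, rng)
print(len(img))  # 784
```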
Impact of Hyperparameters on the Generative Adversarial Networks Behavior