To reduce oscillation during training, the model retains an image buffer that stores previously generated images (Shrivastava, Pfister, Tuzel, Susskind, Wang and Webb 2016).
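The buffer idea can be sketched as follows. This is a minimal illustration (the class name and capacity of 50 are assumptions, not the paper's code): the discriminator is shown a mix of freshly generated images and older ones drawn from the buffer, which damps oscillation.

```python
import random

# Minimal sketch of an image history buffer (hypothetical class name).
# The discriminator queries it instead of always seeing the newest
# generator outputs, which stabilizes adversarial training.
class ImageBuffer:
    def __init__(self, capacity=50):
        self.capacity = capacity
        self.images = []

    def query(self, image):
        # While the buffer is not full, store the image and return it.
        if len(self.images) < self.capacity:
            self.images.append(image)
            return image
        # Otherwise, with probability 0.5, swap a stored image
        # for the new one and return the old image instead.
        if random.random() < 0.5:
            idx = random.randrange(self.capacity)
            old = self.images[idx]
            self.images[idx] = image
            return old
        return image
```

In use, the discriminator is trained on `buffer.query(fake_image)` rather than on `fake_image` directly.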
4.2 Qualitative Evaluation
In this paper, an experiment is conducted on the WASH_INK_DATASET_FIN dataset to convert ordinary photographs into ink paintings; for asymmetric image style transfer, the experiment is carried out on CycleGAN, a network with strong current performance (Ren 2020). Since the introduction of CycleGAN, conditional information has been handled in discriminator networks mainly in the following ways: first, the conditional information is combined with the input at the input layer; second, the conditional information is connected with the feature vector in a hidden layer of the discriminator; finally, the discriminator is made to reconstruct the conditional information rather than receiving it as an input. In the last case, the discriminator must learn both to judge the authenticity of an image and to perform the additional task of image classification.
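The third strategy can be illustrated with a toy two-headed discriminator. This is a sketch, not the paper's actual network: random linear layers with an assumed 3-class condition stand in for the real architecture, showing that one shared feature extractor feeds both an authenticity score and a class prediction.

```python
import numpy as np

# Toy sketch of a discriminator that reconstructs the conditional
# information (a class label) instead of receiving it as input.
rng = np.random.default_rng(0)
W_feat = rng.standard_normal((8, 16))   # shared feature extractor
w_adv = rng.standard_normal(16)         # real/fake (authenticity) head
W_cls = rng.standard_normal((16, 3))    # auxiliary class head (3 classes assumed)

def discriminate(x):
    h = np.tanh(x @ W_feat)                    # shared hidden features
    real_fake = 1.0 / (1.0 + np.exp(-h @ w_adv))  # authenticity score in (0, 1)
    class_logits = h @ W_cls                   # reconstructed condition (logits)
    return real_fake, class_logits

score, logits = discriminate(rng.standard_normal(8))
```

Training then combines an adversarial loss on `score` with a classification loss on `logits`, so both tasks shape the shared features.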
When evaluating the quality of the generated images, evaluation methods fall into two categories: subjective and objective. Subjective evaluation means that the experimenters or a third party judge the generated images according to their subjective impressions. Objective evaluation selects a quantitative index to measure the quality of the generated images.
Subjective Analysis
The ink-style results produced under the CycleGAN framework are shown in the Figure. After translation under the proposed framework, the images generated by the model in this paper retain more picture detail and convert only the colour, painting style, and other features associated with the target domain. No deformation appears: the shading levels change naturally, the strokes are smooth and orderly, the dry and wet tones are harmonious, and the overall spatial layout of the picture is good, coping well with changes in the three elements of composition, ink, and brushwork.
Quantitative Analysis
FID (Fréchet Inception Distance) was selected as the evaluation index for the translated images. It measures the distance between the Inception feature vectors of the real images and those of the generated images, and thus represents how close the two feature distributions are within the same domain. FID has good discriminative ability: the smaller the FID, the closer the feature distribution of the generated images is to the target feature distribution, and the better the generator performs [16]. Conversely, a higher score indicates worse quality, with the score varying roughly linearly with quality. This paper uses GAN as the baseline. Table 2 shows the FID scores of the GAN model and the CycleGAN model on the image style transfer task. The results indicate that CycleGAN achieves a lower FID than GAN, from which it can be concluded that CycleGAN performs well on the style transfer task.
Table 2: FID for GAN and CycleGAN.
Model GAN CycleGAN
FID 52.6906 48.2173
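The FID values above can be computed from Inception feature vectors as sketched below. This is an illustrative implementation, not the paper's evaluation code: random arrays stand in for actual Inception features, and SciPy's `sqrtm` is assumed available for the matrix square root.

```python
import numpy as np
from scipy.linalg import sqrtm  # assumes SciPy is available

# Fréchet distance between Gaussians fitted to two feature sets:
# FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrt(S1 @ S2))
def fid(feats_real, feats_gen):
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # discard tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical feature sets yield an FID of (numerically) zero, and the score grows as the two feature distributions drift apart, matching the interpretation above.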
5 CONCLUSION
Methods for image style transfer with neural networks continue to emerge, and applications in various directions and fields are being explored constantly. In this paper, a CycleGAN framework for transferring ordinary images into the ink-painting style is proposed. A deep neural network is used to learn the cross-domain mapping between images without the need for paired training images.
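The mapping learned without paired images rests on CycleGAN's cycle-consistency constraint. A toy sketch (hypothetical linear "generators", not the paper's networks): G maps domain X to Y, F maps Y back to X, and the reconstruction error F(G(x)) - x is penalized.

```python
import numpy as np

def G(x):
    return 2.0 * x + 1.0       # stand-in generator X -> Y

def F(y):
    return (y - 1.0) / 2.0     # stand-in generator Y -> X

def cycle_loss(x):
    # L1 cycle-consistency loss ||F(G(x)) - x||_1, averaged over pixels
    return float(np.mean(np.abs(F(G(x)) - x)))

x = np.array([0.5, -1.2, 3.0])
loss = cycle_loss(x)  # F inverts G exactly here, so the loss is near zero
```

In the real model this term, combined with the adversarial losses in both directions, is what removes the need for paired training data.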
The innovation of this paper lies in combining a modern network architecture with traditional ink painting, realizing one-click transfer from a real photograph to an ink painting. The results meet the requirements and have a certain artistic quality.
This study also has some shortcomings, which point to directions for future research: 1. Larger-scale training sets should be added to address the single colour palette and poor tonal transitions of the generated images. 2. Although the model has been shown to be feasible to a certain extent, its structure and training methods have not been modified, so it retains certain limitations.
REFERENCES
Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., and Webb, R. (2016). Learning from simulated and unsupervised images through adversarial training. arXiv preprint arXiv:1612.07828.
Chen, J. C. (2020). Image style transfer of Chinese painting based on neural network [D]. Hangzhou Electronic Science and Technology University.