reproduce natural-looking colored images with high
accuracy, even for shoelaces, which are difficult to
convert because of their complex structure.
In addition, as mentioned in Section 2.1, we
confirmed that areas where characteristic lines had
not been drawn were complemented and
automatically colored. For the sneakers and sandals
that were judged to have a poor appearance in the
line-drawing results, the missing characteristic lines
were complemented during the coloring process,
yielding colored images with a good appearance.
This result indicates that the final appearance should
be judged not by the line-drawing output shown in
Figure 12, but by the automatically colored image
shown in Figure 14.
Figure 14: Experimental results of coloring.
5 CONCLUSION
In this study, we proposed a method for automatically
generating line drawings that capture an object's
features using simple operations. The proposed
method generated an outline of the characteristic
lines from a contour-only line drawing using a model
obtained by training pix2pix on training images to
which four processes were applied: acquisition of a
contour-only line drawing, blurring, projection
transformation, and image-size normalization based
on the bounding rectangle. Our method then applied
pix2pix again to this outline to generate the final line
drawing containing the characteristic lines.
Colored illustrations can then be generated from the
line drawing by applying pix2pix, which has already
been proposed for coloring line drawings. In addition,
the level of detail of the lines and of the coloring
can be adjusted by changing the degree of blurring in
the blurring process.
In the experiments, we evaluated line drawings
with characteristic lines generated from contour-only
line drawings and their colored images generated
from the line drawings. In addition, we examined how
the acquired images were changed by adjusting the
degree of blurring. As a result, we observed that if the
degree of blur was weak, noise would be mixed in
with the line drawing, making it look bad. However,
when the degree of blurring was increased by
increasing the kernel size, the number of lines that
captured the features was reduced, and noiseless line
drawings were obtained. By making increasing the
degree of blurring, the number of lines that captured
the features in the generated line drawing increased.
In this study, contour lines covering part of the
subject were input as the starting point for line-
drawing generation. In the future, it will be necessary
to survey designers and others to determine what type
of line drawing is appropriate as a starting point for
line completion. Because the experiments used only
shoe images, which is not representative of practical
use, we would like to verify the method on a variety
of practical images. In addition, the obtained results
should be evaluated quantitatively.
ACKNOWLEDGEMENTS
This work was supported by JSPS KAKENHI (Grant
Number JP19K12045).
REFERENCES
Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. (2017).
Image-to-image translation with conditional
adversarial networks, Proceedings of the IEEE
Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 1125–1134.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A., and
Bengio, Y. (2014). Generative adversarial networks,
Advances in Neural Information Processing Systems
27 (NIPS).
Mirza, M. and Osindero, S. (2014). Conditional generative
adversarial nets, arXiv preprint arXiv:1411.1784.
Radford, A., Metz, L., and Chintala, S. (2016).
Unsupervised representation learning with deep
convolutional generative adversarial networks, In 4th
International Conference on Learning Representations
(ICLR'16).
Larsen, A. B. L., Sønderby, S. K., Larochelle, H., and
Winther, O. (2016). Autoencoding beyond pixels using
a learned similarity metric, In 33rd International
Conference on Machine Learning (ICML'16), pp.
2341–2349.