ACKNOWLEDGEMENTS
This work was supported by the Hasler Foundation,
project number 16015.
APPENDIX
The detailed network architecture is presented in this section. The original source code, as well as the entire set of generated images, is available at https://github.com/jdespraz/deep_generative_networks
Table 2: Detailed generative network architecture (G).
LAYER                  DIM
Input Vector           1 × 3200
Gaussian Noise
Locally Connected 1D   1 × 1
Reshape                128 × 5 × 5
Upsampling             2 × 2
Convolution 2D         512 × 2 × 2
Batch Normalization
Convolution 2D         512 × 2 × 2
Batch Normalization
Upsampling             2 × 2
Convolution 2D         256 × 3 × 3
Batch Normalization
Convolution 2D         256 × 3 × 3
Batch Normalization
Upsampling             2 × 2
Convolution 2D         256 × 3 × 3
Batch Normalization
Convolution 2D         256 × 3 × 3
Batch Normalization
Upsampling             2 × 2
Convolution 2D         128 × 3 × 3
Batch Normalization
Convolution 2D         128 × 3 × 3
Batch Normalization
Upsampling             2 × 2
Convolution 2D         128 × 3 × 3
Batch Normalization
Convolution 2D         128 × 3 × 3
Batch Normalization
Upsampling             2 × 2
Convolution 2D         64 × 3 × 3
Batch Normalization
Convolution 2D         64 × 3 × 3
Batch Normalization
Convolution 2D         3 × 3 × 3
Batch Normalization
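To make the layer sequence of Table 2 concrete, the following is a minimal Keras/TensorFlow sketch of a generator with the same structure. It is an illustration rather than the original implementation: the activation functions and their placement, the padding mode, the Gaussian-noise standard deviation, and the substitution of a Dense layer of the same width for the 1 × 1 locally connected 1D layer are assumptions, and the sketch uses channels-last ordering whereas Table 2 lists channels-first shapes (e.g., 128 × 5 × 5).

# Minimal sketch of the generator G in Table 2; details not given in the
# table (activations, padding, noise level, locally connected layer) are
# assumptions and are marked as such below.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_generator(latent_dim=3200, noise_stddev=0.1):
    model = models.Sequential(name="G")
    model.add(layers.Input(shape=(latent_dim,)))
    model.add(layers.GaussianNoise(noise_stddev))  # stddev value is assumed
    model.add(layers.Dense(latent_dim))            # stand-in for Locally Connected 1D (1 x 1)
    model.add(layers.Reshape((5, 5, 128)))         # 5 * 5 * 128 = 3200

    # (filters, kernel size) for each upsampling block; every block contains
    # two convolution + batch-normalization pairs, as listed in Table 2.
    blocks = [(512, 2), (256, 3), (256, 3), (128, 3), (128, 3), (64, 3)]
    for filters, kernel in blocks:
        model.add(layers.UpSampling2D(size=(2, 2)))
        for _ in range(2):
            model.add(layers.Conv2D(filters, kernel, padding="same"))
            model.add(layers.BatchNormalization())
            model.add(layers.Activation("relu"))   # activation choice/placement assumed

    # Final 3-channel convolution and batch normalization; Table 2 does not
    # specify an output activation, so none is applied here.
    model.add(layers.Conv2D(3, 3, padding="same"))
    model.add(layers.BatchNormalization())
    return model


if __name__ == "__main__":
    build_generator().summary()  # final feature map: 320 x 320 x 3

With these assumptions, build_generator().summary() reports a final feature map of 320 × 320 × 3, consistent with six successive 2 × 2 upsamplings of the initial 5 × 5 grid.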