Generating Images from Caption and Vice Versa via CLIP-Guided Generative Latent Space Search
Federico Galatolo, Mario Cimino, Gigliola Vaglini
2021
Abstract
In this research work, we present CLIP-GLaSS, a novel zero-shot framework for generating an image (or a caption) corresponding to a given caption (or image). CLIP-GLaSS is based on the CLIP neural network, which, given an image and a descriptive caption, produces similar embeddings. Conversely, CLIP-GLaSS takes a caption (or an image) as input and generates the image (or caption) whose CLIP embedding is most similar to that of the input. This optimal image (or caption) is produced by a generative network, after its latent space is explored by a genetic algorithm. Promising results are shown, based on experiments with the image generators BigGAN and StyleGAN2 and with the text generator GPT2.
Paper Citation
in Harvard Style
Galatolo F., Cimino M. and Vaglini G. (2021). Generating Images from Caption and Vice Versa via CLIP-Guided Generative Latent Space Search. In Proceedings of the International Conference on Image Processing and Vision Engineering - Volume 1: IMPROVE, ISBN 978-989-758-511-1, pages 166-174. DOI: 10.5220/0010503701660174
in BibTeX Style
@conference{improve21,
author={Federico Galatolo and Mario Cimino and Gigliola Vaglini},
title={Generating Images from Caption and Vice Versa via CLIP-Guided Generative Latent Space Search},
booktitle={Proceedings of the International Conference on Image Processing and Vision Engineering - Volume 1: IMPROVE},
year={2021},
pages={166-174},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010503701660174},
isbn={978-989-758-511-1},
}
in EndNote Style
TY - CONF
JO - Proceedings of the International Conference on Image Processing and Vision Engineering - Volume 1: IMPROVE
TI - Generating Images from Caption and Vice Versa via CLIP-Guided Generative Latent Space Search
SN - 978-989-758-511-1
AU - Galatolo F.
AU - Cimino M.
AU - Vaglini G.
PY - 2021
SP - 166
EP - 174
DO - 10.5220/0010503701660174
ER -