Authors:
Duway Nicolas Lesmes-Leon 1,2; Miro Miranda 1,2; Maria Caroprese 3; Gillian Lovell 3; Andreas Dengel 2,1 and Sheraz Ahmed 2
Affiliations:
1 Department of Computer Science, University of Kaiserslautern-Landau (RPTU), Kaiserslautern, Germany; 2 German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany; 3 Sartorius, Royston, U.K.
Keyword(s):
Cell Microscopy, GAN, Generative AI, Instance Segmentation.
Abstract:
Data scarcity and annotation costs limit the quantitation of cell microscopy images. Data acquisition, preparation, and annotation are costly and time-consuming. Additionally, cell annotation is an error-prone task that requires personnel with specialized knowledge. Generative artificial intelligence offers an alternative that alleviates these limitations by generating realistic images from an unknown underlying data distribution. Still, extra effort is needed, since data annotation remains a task independent of the generative process. In this work, we assess whether generative models learn meaningful instance segmentation-related features, and their potential to produce realistic annotated images. We present a single-channel grayscale segmentation mask pipeline that differentiates overlapping objects while minimizing the number of labels. Additionally, we propose a modified version of the established StyleGAN2 generator that synthesizes images and segmentation masks simultaneously without additional components. We tested our generative pipeline on LIVECell and TissueNet, two benchmark cell segmentation datasets. Furthermore, we augmented a deep learning segmentation network with synthetic samples and demonstrated improved or on-par performance compared to its non-augmented version. Our results support that the features learned by generative models are relevant in the annotation context. With adequate data preparation and regularization, generative models are capable of producing realistic annotated samples cost-effectively.
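The single-channel, label-minimizing encoding described above can be illustrated with a small sketch. The idea, under our assumptions (the abstract does not give the authors' exact algorithm), is to treat overlapping instances as neighbors in a conflict graph and reuse gray values via greedy graph coloring, so touching objects always receive different labels while the total number of distinct labels stays small:

```python
import numpy as np

def encode_instances(instances):
    """Encode a list of 2-D boolean instance masks (equal shapes) into one
    single-channel grayscale mask. Overlapping instances get different label
    values via greedy graph coloring; disjoint instances may reuse a value,
    keeping the label count small. Illustrative sketch only, not the paper's
    exact pipeline.
    """
    n = len(instances)
    # Conflict graph: instances i and j conflict if their masks overlap.
    overlap = [[bool(np.logical_and(instances[i], instances[j]).any())
                for j in range(n)] for i in range(n)]
    colors = [0] * n  # 0 is reserved for background; labels start at 1
    for i in range(n):
        used = {colors[j] for j in range(n) if overlap[i][j] and colors[j]}
        c = 1
        while c in used:
            c += 1
        colors[i] = c
    mask = np.zeros(instances[0].shape, dtype=np.uint8)
    for inst, c in zip(instances, colors):
        # In overlap regions the later instance's label wins; since the two
        # labels differ, the boundary remains recoverable.
        mask[inst] = c
    return mask
```

A variant of this scheme (e.g. dilating masks before the overlap test so merely touching objects also conflict) yields the same property: adjacent instances are separable in a single grayscale channel.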