Authors:
Xareni Galindo 1; Thierno Barry 1; Pauline Guyot 2; Charlotte Rivière 2,3,4; Rémi Galland 1 and Florian Levet 1,5
Affiliations:
1 CNRS, Interdisciplinary Institute for Neuroscience, IINS, UMR 5297, University of Bordeaux, Bordeaux, France
2 Univ. Lyon, Université Claude Bernard Lyon 1, CNRS, Institut Lumière Matière, UMR 5306, 69622, Villeurbanne, France
3 Institut Universitaire de France (IUF), France
4 Institut Convergence PLAsCAN, Centre de Cancérologie de Lyon, INSERM U1052-CNRS, UMR5286, Univ. Lyon, Université Claude Bernard Lyon 1, Centre Léon Bérard, Lyon, France
5 CNRS, INSERM, Bordeaux Imaging Center, BIC, UAR3420, US 4, University of Bordeaux, Bordeaux, France
Keyword(s):
Bioimaging, Deep Learning, Microscopy, Nuclei, 3D, Image Processing, GAN.
Abstract:
Nuclei segmentation is an important task in cell biology analysis that requires accurate and reliable methods, especially for complex images with low signal-to-noise ratio and crowded cell populations. In this context, deep learning-based methods such as Stardist have emerged as the best-performing solutions for segmenting nuclei. Unfortunately, the performance of such methods relies on the availability of vast libraries of hand-annotated ground truth datasets, which are especially tedious to create for 3D cell cultures, in which nuclei tend to overlap. In this work, we present a workflow to segment nuclei in 3D under such conditions, when no specific ground truth exists. It combines a robust 2D segmentation method, Stardist 2D, which has been trained on thousands of already available ground truth images, with the generation of pairs of 3D masks and synthetic fluorescence volumes through a conditional GAN. This allows training a Stardist 3D model with 3D ground truth masks
and synthetic volumes that mimic our fluorescence ones. This strategy makes it possible to segment 3D data for which no ground truth is available, alleviating the need for manual annotation and improving on the results obtained by training Stardist with the original ground truth data.
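A central practical step in this kind of workflow is turning per-slice 2D instance masks (such as those produced by Stardist 2D) into a 3D label volume. As a rough, illustrative sketch only, here is a minimal greedy overlap-linking heuristic in NumPy; the function name and the linking rule are assumptions of this sketch, not the paper's method (the paper instead trains Stardist 3D on cGAN-generated synthetic volumes paired with 3D masks).

```python
import numpy as np

def link_slices(slice_labels):
    """Greedily link per-slice 2D instance labels into 3D instances.

    slice_labels: list of 2D integer label arrays (0 = background),
    one per z-slice, all the same shape.
    Returns a 3D int32 volume where each nucleus keeps one id across slices.
    """
    out = np.zeros((len(slice_labels),) + slice_labels[0].shape, dtype=np.int32)
    next_id = 1
    for z, sl in enumerate(slice_labels):
        for lab in np.unique(sl):
            if lab == 0:
                continue  # skip background
            mask = sl == lab
            best_id, best_overlap = 0, 0
            if z > 0:
                # ids already assigned in the previous slice under this mask
                prev = out[z - 1][mask]
                ids, counts = np.unique(prev[prev > 0], return_counts=True)
                if ids.size:
                    k = int(np.argmax(counts))
                    best_id, best_overlap = int(ids[k]), int(counts[k])
            if best_overlap > 0:
                gid = best_id          # continue an existing 3D instance
            else:
                gid = next_id          # start a new 3D instance
                next_id += 1
            out[z][mask] = gid
    return out
```

A real pipeline would add safeguards (minimum overlap thresholds, handling of splits and merges), which is precisely where such heuristics fail on overlapping nuclei and why a learned 3D model is preferable.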