
a pixel is far enough from the meeting border, all layers see data similar to the training dataset. Closer to the meeting border, the deepest layer can already see some of the mixed data, and moving closer still, the mixed data can potentially affect more and more layers. In Table 4 we show the micro-averaged F1 score for the selected bands. Here, BN denotes the areas around the meeting border that can only affect the deepest, 5th (bottleneck) level. L4 denotes the areas that can affect the bottleneck and one higher level, while the remaining higher levels stay unaffected. L1 denotes the areas where all layers can potentially see mixed data. The areas where all layers see only the original type of data are marked as out.
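To make the band definition concrete, the following is a minimal sketch of how such bands could be derived from a border mask with a distance transform. The per-band radii are illustrative assumptions (in practice they would follow from each level's receptive field), not values taken from our experiments.

import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical outer radius (in pixels) of each band: a pixel
# closer to the border than RADII["L1"] can feed mixed data to
# every level; one between RADII["L4"] and RADII["BN"] can only
# reach the bottleneck.
RADII = {"L1": 16, "L2": 32, "L3": 64, "L4": 128, "BN": 256}

def band_masks(border):
    """border: boolean mask marking the meeting border pixels.

    Returns one boolean mask per band, innermost (L1) first,
    plus 'out' for pixels where no layer sees mixed data."""
    dist = distance_transform_edt(~border)  # distance to nearest border pixel
    masks, inner = {}, -1.0  # -1 so border pixels themselves land in L1
    for name in ("L1", "L2", "L3", "L4", "BN"):
        outer = RADII[name]
        masks[name] = (dist > inner) & (dist <= outer)
        inner = outer
    masks["out"] = dist > inner  # all layers see only original data
    return masks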
Table 4: Micro-averaged F1 score for the different models on different parts of the mixed dataset, by affected layers.

            L*a*b*    RGB       sRGB
Sigmoid
  out       0.8765    0.8476    0.8623
  BN        0.7487    0.6993    0.7304
  L4        0.6926    0.6406    0.6689
  L3        0.6532    0.6061    0.6170
  L2        0.6221    0.5634    0.5494
  L1        0.6073    0.5158    0.5076
Softmax
  out       0.8776    0.8308    0.8608
  BN        0.7636    0.6904    0.7377
  L4        0.7154    0.6208    0.6823
  L3        0.6818    0.5995    0.6301
  L2        0.6482    0.5727    0.5749
  L1        0.6214    0.5556    0.5548
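The per-band scores in Table 4 amount to a masked evaluation; a minimal sketch is given below (function and argument names are ours, chosen for illustration).

from sklearn.metrics import f1_score

def band_f1(pred, target, mask):
    """Micro-averaged F1 over the pixels selected by `mask`.

    pred, target: integer class-label maps of the same shape;
    mask: one boolean band mask, e.g. from band_masks() above."""
    return f1_score(target[mask], pred[mask], average="micro")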
All models show similar performance in the outer regions, but moving closer to the meeting border of healthy and diseased parts, we see performance degradation as more layers are affected by the mixed data. The advantage of using the CIE L*a*b* color space is larger here than on the original dataset. For this color space, softmax is preferable to sigmoid as the final activation function. Interestingly, for the sRGB and linear RGB color spaces, the models using sigmoid perform better in the outer regions, but they lose this advantage near the mixed data.
6 CONCLUSIONS
In this paper, we compared the performance of U-Net models of the same architecture and structure, trained on the same dataset but using different color spaces and output activation functions. We also investigated the performance on a dataset that differs significantly from the training data. A deeper examination of the results shows how each layer affects the prediction.
Experimental results show that a perceptually uniform, device-independent color space, CIE L*a*b*, which separates the lightness and color information, has an advantage over the traditionally used, gamma-encoded sRGB color space and also over the physically uniform RGB representation that is used in image processing.
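For reference, this conversion can be performed with scikit-image; the following is a minimal sketch, not necessarily the exact implementation used in our pipeline.

import numpy as np
from skimage.color import rgb2lab

def to_lab(srgb_uint8):
    """Convert an (H, W, 3) uint8 sRGB image to CIE L*a*b*.

    rgb2lab performs the sRGB gamma decoding and the
    device-independent XYZ -> L*a*b* transform; the result has
    L* in [0, 100] and a*, b* roughly in [-128, 127]."""
    return rgb2lab(srgb_uint8.astype(np.float64) / 255.0)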
ACKNOWLEDGEMENTS
We thank Dr. József Maléth from the Department of Medicine, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary, for generously providing the microscopy images as source data that was essential for this study.
This research was supported by project TKP2021-NVA-09. Project no. TKP2021-NVA-09 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme.
REFERENCES
Ahmed, A. A., Abouzid, M., and Kaczmarek, E. (2022).
Deep learning approaches in histopathology. Cancers,
14(21):5264.
Chang, Y. H., Thibault, G., Madin, O., Azimi, V., Meyers,
C., Johnson, B., Link, J., Margolin, A., and Gray, J. W.
(2017). Deep learning based nucleus classification in
pancreas histological images. In 2017 39th Annual
International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC), pages 672–
675.
Du, G., Cao, X., Liang, J., Chen, X., and Zhan, Y. (2020).
Medical image segmentation based on u-net: A re-
view. Journal of Imaging Science and Technology,
64(2):020508–1–020508–12.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep resid-
ual learning for image recognition. In 2016 IEEE Con-
ference on Computer Vision and Pattern Recognition
(CVPR), pages 770–778.
Iizuka, O., Kanavati, F., Kato, K., Rambeau, M., Arihiro,
K., and Tsuneki, M. (2020). Deep learning models for
histopathological classification of gastric and colonic
epithelial tumours. Scientific Reports, 10(1).
Ronneberger, O., Fischer, P., and Brox, T. (2015). U-
net: Convolutional networks for biomedical image
segmentation. In Navab, N., Hornegger, J., Wells,
W. M., and Frangi, A. F., editors, Medical Image Com-
puting and Computer-Assisted Intervention – MICCAI
2015, pages 234–241, Cham. Springer International
Publishing.
Shelhamer, E., Long, J., and Darrell, T. (2017). Fully con-
volutional networks for semantic segmentation. IEEE
Transactions on Pattern Analysis and Machine Intel-
ligence, 39(4):640–651.