of the patterns that result from the logical AND operation of $I_1$ and $I_2$. The binary overlap can then be calculated as:

$$bo(I_1, I_2) = \frac{2 \cdot N_{I_1 \cap I_2}}{N_{I_1} + N_{I_2}} \tag{4}$$
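As an illustration, the following is a minimal sketch of Eq. (4) for two binary masks, assuming they are given as boolean (or 0/1) NumPy arrays of the same shape; the function name is ours and not part of the paper's code.

```python
import numpy as np

def binary_overlap(i1: np.ndarray, i2: np.ndarray) -> float:
    """Binary overlap of two binary masks, Eq. (4):
    bo(I1, I2) = 2 * N_{I1 AND I2} / (N_{I1} + N_{I2})."""
    i1 = i1.astype(bool)
    i2 = i2.astype(bool)
    n_intersection = np.logical_and(i1, i2).sum()  # N_{I1 ∩ I2}
    return 2.0 * n_intersection / (i1.sum() + i2.sum())
```

Note that this quantity coincides with the Dice coefficient between the two masks, so it ranges from 0 (no overlap) to 1 (identical masks).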
4.4 Results Analysis
The binary correlation and binary overlap between the masks from our method and the others are shown in Table 1. These measures indicate the discrepancies between our method and the other approaches. The results show that all methods work well for H&E staining. Our method can, however, remove all the background in the image resulting from a dirty staining, $I_2$. For the image resulting from a weak staining, $I_3$, MobileNet predicts less tissue, whereas EfficientNet finds more tissue. Otsu and MobileNet treat the empty area in $I_4$ as foreground, whereas EfficientNet and our method recognize the empty area and consider only the tissue as foreground. Due to the empty area and the white part inside it, Otsu misses most of the tissue in $I_4$ and $I_6$. Our method removes the small holes predicted by MobileNet in $I_5$, $I_6$, and $I_7$.
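To illustrate the kind of morphological post-processing referred to above (removing small holes left in a predicted mask), a minimal sketch using scikit-image is given below; the function name, the area thresholds, and the exact choice of operations are illustrative assumptions rather than the pipeline used in this work.

```python
import numpy as np
from skimage.morphology import remove_small_holes, remove_small_objects

def clean_tissue_mask(mask: np.ndarray,
                      hole_area: int = 256,
                      object_area: int = 256) -> np.ndarray:
    """Fill small holes inside a binary tissue mask and drop tiny foreground specks."""
    mask = mask.astype(bool)
    mask = remove_small_holes(mask, area_threshold=hole_area)  # fill holes smaller than hole_area pixels
    mask = remove_small_objects(mask, min_size=object_area)    # drop components smaller than object_area pixels
    return mask
```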
5 CONCLUSIONS
In this paper, we have proposed a solution for the construction of a tissue mask as a pre-processing step for tissue classification. The masking is based on a tissue segmentation task, which applies a combination of mathematical morphology operations to the results of the U-Net architecture. Several experiments with our method on different types of staining show that it performs well and leads to better results for the patching. For PAS staining, a few parts of the tissue are still missing; further filter and parameter optimization is therefore needed to identify the tissue parts as completely as possible. Furthermore, we aim to automatically extract the parameters from the images. This will require further analysis of a larger number of images.
ACKNOWLEDGEMENTS
This work is partially supported by the Chinese Scholarship Council (CSC No.202106280008). We would like to thank the LUMC (Leiden University Medical Center) for providing the research data.
REFERENCES
Cai, F. and Verbeek, F. J. (2015). Dam-based rolling ball
with fuzzy-rough constraints, a new background sub-
traction algorithm for image analysis in microscopy.
In 2015 International Conference on Image Process-
ing Theory, Tools and Applications (IPTA), pages
298–303. IEEE.
Chen, P. and Yang, L. (2019). Tissueloc: Whole slide digital
pathology image tissue localization. J. Open Source
Software, 4(33):1148.
He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017).
Mask r-cnn. In Proceedings of the IEEE international
conference on computer vision, pages 2961–2969.
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D.,
Wang, W., Weyand, T., Andreetto, M., and Adam,
H. (2017). Mobilenets: Efficient convolutional neu-
ral networks for mobile vision applications. arXiv
preprint arXiv:1704.04861.
Khened, M., Kori, A., Rajkumar, H., Krishnamurthi, G.,
and Srinivasan, B. (2021). A generalized deep learn-
ing framework for whole-slide image segmentation
and analysis. Scientific reports, 11(1):1–14.
Li, X., Li, C., Rahaman, M. M., Sun, H., Li, X., Wu, J.,
Yao, Y., and Grzegorzek, M. (2022). A comprehensive
review of computer-aided whole-slide image analy-
sis: from datasets to feature extraction, segmentation,
classification and detection approaches. Artificial In-
telligence Review, pages 1–70.
Long, J., Shelhamer, E., and Darrell, T. (2015). Fully con-
volutional networks for semantic segmentation. In
Proceedings of the IEEE conference on computer vi-
sion and pattern recognition, pages 3431–3440.
Lu, M. Y., Williamson, D. F., Chen, T. Y., Chen, R. J., Bar-
bieri, M., and Mahmood, F. (2021). Data-efficient
and weakly supervised computational pathology on
whole-slide images. Nature biomedical engineering,
5(6):555–570.
Neuner, C., Coras, R., Blümcke, I., Popp, A., Schlaf-
fer, S. M., Wirries, A., Buchfelder, M., and Jabari,
S. (2021). A whole-slide image managing library
based on fastai for deep learning in the context of
histopathology: Two use-cases explained. Applied
Sciences, 12(1):13.
Otsu, N. (1979). A threshold selection method from gray-
level histograms. IEEE transactions on systems, man,
and cybernetics, 9(1):62–66.
Pizer, S., Johnston, R., Ericksen, J., Yankaskas, B., and
Muller, K. (1990). Contrast-limited adaptive his-
togram equalization: speed and effectiveness. In
[1990] Proceedings of the First Conference on Visu-
alization in Biomedical Computing, pages 337–345.
Riasatian, A., Rasoolijaberi, M., Babaei, M., and Tizhoosh,
H. R. (2020). A comparative study of u-net topologies
for background removal in histopathology images. In
2020 International Joint Conference on Neural Net-
works (IJCNN), pages 1–8. IEEE.
Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI, pages 234–241.