
The MoNuSeg dataset consists of 30 images for training and 14 images for evaluation. Of the training data, 24 images are allocated for training and 6 images are reserved for validation. The test data is used as is for evaluation and includes lung and brain cells that are not present in the training data, rendering them unseen data.
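For reproducibility, the 24/6 split can be drawn with a fixed seed; the following is a minimal sketch under the assumption of a simple seeded shuffle (the exact splitting procedure is not specified here), with hypothetical image IDs.

```python
import random

# Hypothetical IDs for the 30 MoNuSeg training images.
all_train_ids = [f"train_{i:02d}" for i in range(30)]

def split_train_val(ids, n_val=6, seed=0):
    """Shuffle with a fixed seed and hold out n_val images for validation."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    return ids[n_val:], ids[:n_val]  # 24 for training, 6 for validation

train_ids, val_ids = split_train_val(all_train_ids)
assert len(train_ids) == 24 and len(val_ids) == 6
```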
For all experiments, we repeat training with four different seed values and report the average accuracy. As preprocessing, we resize all images to 256 × 256 pixels. The learning rate is set to 1 × 10⁻³, the batch size is 2, the optimizer is Adam, and the number of epochs is 200.
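As a concrete reference for these hyperparameters, a training loop could be configured as follows; this is a minimal sketch assuming a PyTorch implementation, where `model`, `train_loader`, and the loss choice are placeholders rather than the authors' exact code.

```python
import torch
from torch import nn, optim
from torchvision import transforms

# Preprocessing: resize every image to 256 x 256 pixels (applied in the Dataset).
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

def train(model, train_loader, num_epochs=200, lr=1e-3, device="cuda"):
    """Adam at 1e-3 for 200 epochs; batch size 2 is set in the DataLoader.

    The whole run is repeated with four different seeds and accuracy averaged.
    """
    model = model.to(device)
    optimizer = optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # masks as integer class-index maps
    for _ in range(num_epochs):
        for images, masks in train_loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
```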
We used an NVIDIA RTX A6000 GPU. The number of quantized vectors is set to 512. The evaluation metric is intersection over union (IoU); we report the IoU for each class and the mean IoU (mIoU) across all classes.
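The metric itself is standard; for completeness, here is a minimal sketch of per-class IoU and mIoU over integer label maps, assuming NumPy arrays of class indices:

```python
import numpy as np

def iou_per_class(pred, target, num_classes=2):
    """IoU for each class: |pred ∩ target| / |pred ∪ target| on the class masks."""
    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        union = np.logical_or(pred_c, target_c).sum()
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

def mean_iou(pred, target, num_classes=2):
    """mIoU: mean of the per-class IoUs, skipping classes absent from both maps."""
    return float(np.nanmean(iou_per_class(pred, target, num_classes)))
```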
We compared the proposed method with U-Net and UCTransNet. The rationale is that the proposed method uses U-Net or UCTransNet as an encoder and makes predictions using quantized vectors based on the encoder's output; in other words, the same feature extractor is used up to the point of segmentation prediction.
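To make the role of the codebook concrete, the sketch below illustrates a VQ-VAE-style nearest-neighbor lookup over 512 quantized vectors, which is the general technique implied here; the exact prediction head, feature dimension, and tensor shapes are assumptions, not the authors' verbatim formulation.

```python
import torch

def quantize_features(features, codebook):
    """Replace each pixel's feature with its nearest quantized (codebook) vector.

    features: (B, C, H, W) output of the encoder (U-Net or UCTransNet).
    codebook: (K, C) learnable quantized vectors; K = 512 in the experiments.
    """
    B, C, H, W = features.shape
    flat = features.permute(0, 2, 3, 1).reshape(-1, C)  # (B*H*W, C)
    dists = torch.cdist(flat, codebook)                 # distances to all K vectors
    indices = dists.argmin(dim=1)                       # id of the nearest vector
    quantized = codebook[indices].reshape(B, H, W, C)
    return quantized.permute(0, 3, 1, 2), indices       # (B, C, H, W), (B*H*W,)

# Toy usage with an assumed 64-dim feature map; a segmentation head would then
# predict class labels from the quantized features.
codebook = torch.randn(512, 64)
features = torch.randn(2, 64, 64, 64)  # batch of 2, matching the training setup
quantized, ids = quantize_features(features, codebook)
```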
4.2 Domain Generalization on Chase, Stare, and Drive Datasets
The results of DG on the Chase, Stare, and Drive datasets are shown in Table 1. The method with the highest accuracy is shown in orange, while the second-highest accuracy is in blue. When the Drive and Stare datasets were used for training and the Chase dataset for evaluation, the proposed method (U-Net+ours) using U-Net as a feature extractor exhibited a 1.80% improvement in mIoU compared to the original U-Net, with a specific improvement of 3.69% in the blood vessel area. Additionally, the proposed method (UCTransNet+ours) using UCTransNet as a feature extractor demonstrated a 2.41% improvement in mIoU compared to the original UCTransNet, with a specific improvement of 5.01% in the blood vessel area. When the Drive and Chase datasets were used for training and the Stare dataset for evaluation, the proposed method (U-Net+ours) using U-Net as a feature extractor showed a 1.20% improvement in mIoU compared to the original U-Net, with a specific improvement of 2.47% in the blood vessel area. Additionally, the proposed method (UCTransNet+ours) using UCTransNet as a feature extractor achieved a 0.32% improvement in mIoU compared to the original UCTransNet, with a specific improvement of 0.76% in the blood vessel area.
Table 1: IoU and standard deviation on the Chase, Stare, and Drive datasets. Orange indicates the highest accuracy, and blue indicates the second-highest accuracy.

test dataset  method             background     blood vessels  mIoU
Chase         U-Net              95.33 (±0.35)  43.56 (±4.14)  69.44 (±2.15)
Chase         U-Net + ours       95.24 (±0.48)  47.25 (±1.82)  71.24 (±1.13)
Chase         UCTransNet         95.27 (±0.37)  45.92 (±3.84)  70.59 (±1.97)
Chase         UCTransNet + ours  95.06 (±0.39)  50.93 (±1.65)  73.00 (±0.92)
Stare         U-Net              95.81 (±0.86)  56.70 (±6.79)  76.26 (±3.73)
Stare         U-Net + ours       95.75 (±0.71)  59.17 (±4.16)  77.46 (±2.36)
Stare         UCTransNet         95.80 (±0.85)  56.71 (±5.66)  76.25 (±3.20)
Stare         UCTransNet + ours  95.67 (±0.71)  57.47 (±4.17)  76.57 (±2.36)
Drive         U-Net              95.56 (±0.53)  57.86 (±1.85)  76.71 (±1.17)
Drive         U-Net + ours       95.75 (±0.40)  59.84 (±3.06)  77.80 (±1.57)
Drive         UCTransNet         95.62 (±0.49)  58.35 (±2.10)  76.99 (±1.28)
Drive         UCTransNet + ours  95.58 (±0.46)  59.10 (±1.58)  77.34 (±1.01)
When the Stare and Chase datasets were used for training and the Drive dataset for evaluation, the proposed method (U-Net+ours) using U-Net as a feature extractor showed a 1.09% improvement in mIoU compared to the original U-Net, with a specific improvement of 1.98% in the blood vessel area. Additionally, the proposed method (UCTransNet+ours) using UCTransNet as a feature extractor showed a 0.35% improvement in mIoU compared to the original UCTransNet, with a specific improvement of 0.75% in the blood vessel area. These improvements indicate that DG is effectively achieved.
Additionally, segmentation results are shown in Figure 4. The top three rows display the results on the Chase, Stare, and Drive datasets; the areas highlighted in red boxes show significant improvements. For the Chase dataset, the vascular regions in the red box of the input image appear slightly darker, and the original U-Net and UCTransNet predict them as background. In contrast, our methods (U-Net+ours and UCTransNet+ours) extract category information independently of the domain, bridging the domain gap: they densely capture blood-vessel category information and predict these regions correctly. For the Stare dataset, focusing on the red box areas, the original U-Net and UCTransNet predict disconnected blood vessels, whereas the proposed methods (U-Net+ours and UCTransNet+ours) predict connected blood vessels, effectively extracting category information independently of the domain. For the Drive dataset, in the red box areas, the original U-Net and UCTransNet predict the blood vessels as thin or disconnected. In contrast, the proposed methods predict thicker blood vessels and connect previously disconnected vessels, successfully extracting domain-independent information.
4.3 Domain Generalization on MoNuSeg
The results of DG on the MoNuSeg dataset are shown in Table 2. The method with the highest accuracy is highlighted in orange, while the method with the second-highest accuracy is shown in blue. Compar-