
Table 3: Comparison of GFSS accuracy on the COCO-20^i dataset across various methods. Our method (WFIL) uses the final combination of IL+LoRA+SVD+Scaling.

Method                                                   Base (%)  Novel (%)  Total (%)
CAPL (Tian et al., 2022a)                                   44.61       7.05      35.46
Harmonizing Base and Novel Classes (Liu et al., 2023b)      46.89       8.83      37.48
PKL and OFP for GFS-Seg (Huang et al., 2023)                46.36      11.04      37.71
POP (Liu et al., 2023a)                                     54.78      18.07      45.71
WFIL (Our Method)                                           54.51      21.93      46.46
tialization and weight merging techniques. Our proposed method achieved state-of-the-art accuracy in Generalized Few-Shot Segmentation when the number of novel classes is large or when the dataset is diverse and challenging.
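To make the weight-merging step more concrete, the following is a minimal sketch, assuming a LoRA-style low-rank update that is re-projected onto its top singular components via SVD and added to the base weights with a scaling factor. The names (W_base, A, B, rank, alpha) and the exact merging order are illustrative assumptions, not the paper's exact procedure.

# A minimal sketch (not the paper's exact procedure) of merging a LoRA-style
# low-rank update into base weights using SVD truncation and scaling.
# All names (W_base, A, B, rank, alpha) are illustrative assumptions.
import numpy as np

def merge_lora_svd(W_base, A, B, rank, alpha):
    """Merge the low-rank update B @ A into W_base after SVD truncation."""
    delta = B @ A                        # low-rank update, same shape as W_base
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    S[rank:] = 0.0                       # keep only the top-`rank` singular values
    delta_r = (U * S) @ Vt               # rank-truncated update
    return W_base + alpha * delta_r      # scaled weight merging

# Illustrative usage with random weights (shapes chosen arbitrarily).
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))          # e.g. classifier weights
A = rng.normal(size=(4, 512))            # LoRA down-projection
B = rng.normal(size=(256, 4))            # LoRA up-projection
W_merged = merge_lora_svd(W, A, B, rank=2, alpha=0.5)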
REFERENCES
Dong, N. and Xing, E. P. (2018). Few-shot semantic segmentation with prototype learning. In British Machine Vision Conference.

Finn, C., Abbeel, P., and Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. In Precup, D. and Teh, Y. W., editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126–1135. PMLR.

Hajimiri, S., Boudiaf, M., Ayed, I. B., and Dolz, J. (2023). A strong baseline for generalized few-shot semantic segmentation.

Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. (2021). LoRA: Low-rank adaptation of large language models.

Huang, K., Wang, F., Xi, Y., and Gao, Y. (2023). Prototypical kernel learning and open-set foreground perception for generalized few-shot semantic segmentation.

Li, G., Jampani, V., Sevilla-Lara, L., Sun, D., Kim, J., and Kim, J. (2021). Adaptive prototype learning and allocation for few-shot segmentation.

Liu, S.-A., Zhang, Y., Qiu, Z., Xie, H., Zhang, Y., and Yao, T. (2023a). Learning orthogonal prototypes for generalized few-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Liu, W., Wu, Z., Zhao, Y., Fang, Y., Foo, C.-S., Cheng, J., and Lin, G. (2023b). Harmonizing base and novel classes: A class-contrastive approach for generalized few-shot segmentation.

Liu, Y., Liu, N., Yao, X., and Han, J. (2022a). Intermediate prototype mining transformer for few-shot semantic segmentation.

Liu, Y., Zhang, X., Zhang, S., and He, X. (2022b). Part-aware prototype network for few-shot semantic segmentation.

Lu, Z., He, S., Li, D., Song, Y.-Z., and Xiang, T. (2023). Prediction calibration for generalized few-shot semantic segmentation. IEEE Transactions on Image Processing, 32:3311–3323.

Lu, Z., He, S., Zhu, X., Zhang, L., Song, Y.-Z., and Xiang, T. (2021). Simpler is better: Few-shot semantic segmentation with classifier weight transformer.

Rakelly, K., Shelhamer, E., Darrell, T., Efros, A., and Levine, S. (2018). Conditional networks for few-shot semantic segmentation.

Ravi, S. and Larochelle, H. (2017). Optimization as a model for few-shot learning. In International Conference on Learning Representations.

Shaban, A., Bansal, S., Liu, Z., Essa, I., and Boots, B. (2017). One-shot learning for semantic segmentation.

Siam, M., Oreshkin, B., and Jagersand, M. (2019). Adaptive masked proxies for few-shot segmentation.

Tian, Z., Lai, X., Jiang, L., Liu, S., Shu, M., Zhao, H., and Jia, J. (2022a). Generalized few-shot semantic segmentation.

Tian, Z., Zhao, H., Shu, M., Yang, Z., Li, R., and Jia, J. (2022b). Prior guided feature enrichment network for few-shot segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(2):1050–1065.

Wang, H., Zhang, X., Hu, Y., Yang, Y., Cao, X., and Zhen, X. (2020). Few-shot semantic segmentation with democratic attention networks. In European Conference on Computer Vision.

Wang, K., Liew, J. H., Zou, Y., Zhou, D., and Feng, J. (2019). PANet: Few-shot image semantic segmentation with prototype alignment. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9196–9205.

Yang, B., Liu, C., Li, B., Jiao, J., and Ye, Q. (2020). Prototype mixture models for few-shot semantic segmentation.

Zhang, B., Xiao, J., and Qin, T. (2021). Self-guided and cross-guided learning for few-shot segmentation.

Zhang, G., Kang, G., Yang, Y., and Wei, Y. (2022). Few-shot segmentation via cycle-consistent transformer.