
Debnath, A. K., Lopez de Compadre, R. L., Debnath, G., Shusterman, A. J., and Hansch, C. (1991). Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of medicinal chemistry, 34(2):786–797.
Dobson, P. D. and Doig, A. J. (2003). Distinguishing enzyme structures from non-enzymes without alignments. Journal of molecular biology, 330(4):771–783.
Feng, P., Ma, J., Li, T., Ma, X., Xi, N., and Lu, D. (2020). Android malware detection based on call graph via graph neural network. In 2020 International Conference on Networking and Network Applications (NaNA), pages 368–374. IEEE.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Gu, T., Dolan-Gavitt, B., and Garg, S. (2019). Badnets: Identifying vulnerabilities in the machine learning model supply chain.
Guo, L., Yin, H., Chen, T., Zhang, X., and Zheng, K. (2021). Hierarchical hyperedge embedding-based representation learning for group recommendation. ACM Transactions on Information Systems (TOIS), 40(1):1–27.
Jia, J., Cao, X., and Gong, N. Z. (2021). Intrinsic certified robustness of bagging against data poisoning attacks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 7961–7969.
Jia, J., Cao, X., Wang, B., and Gong, N. Z. (2019). Certified robustness for top-k predictions against adversarial perturbations via randomized smoothing. arXiv preprint arXiv:1912.09899.
Jiang, B. and Li, Z. (2022). Defending against backdoor attack on graph neural network by explainability. arXiv preprint arXiv:2209.02902.
Jiang, C., He, Y., Chapman, R., and Wu, H. (2022). Camouflaged poisoning attack on graph neural networks. In Proceedings of the 2022 International Conference on Multimedia Retrieval, pages 451–461.
Kipf, T. N. and Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Kwon, H., Yoon, H., and Park, K.-W. (2019). Selective poisoning attack on deep neural network to induce fine-grained recognition error. In 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), pages 136–139. IEEE.
Liu, Z., Chen, C., Yang, X., Zhou, J., Li, X., and Song, L. (2020). Heterogeneous graph neural networks for malicious account detection.
Meguro, R., Kato, H., Narisada, S., Hidano, S., Fukushima, K., Suganuma, T., and Hiji, M. (2024). Gradient-based clean label backdoor attack to graph neural networks. In ICISSP, pages 510–521.
Qiu, R., Huang, Z., Li, J., and Yin, H. (2020). Exploiting cross-session information for session-based recommendation with graph neural networks. ACM Transactions on Information Systems (TOIS), 38(3):1–23.
Riesen, K. and Bunke, H. (2008). IAM graph database repository for graph based pattern recognition and machine learning. In Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR International Workshop, SSPR & SPR 2008, Orlando, USA, December 4-6, 2008. Proceedings, pages 287–297. Springer.
Shafahi, A., Huang, W. R., Najibi, M., Suciu, O., Studer, C., Dumitras, T., and Goldstein, T. (2018). Poison frogs! Targeted clean-label poisoning attacks on neural networks. Advances in Neural Information Processing Systems, 31.
Wale, N., Watson, I. A., and Karypis, G. (2008). Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14:347–375.
Wang, B., Jia, J., Cao, X., and Gong, N. Z. (2021). Certified robustness of graph neural networks against adversarial structural perturbation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1645–1653.
Wang, S., Chen, Z., Ni, J., Yu, X., Li, Z., Chen, H., and Yu, P. S. (2019). Adversarial defense framework for graph neural network. arXiv preprint arXiv:1905.03679.
Weber, M., Xu, X., Karlaš, B., Zhang, C., and Li, B. (2023). Rab: Provable robustness against backdoor attacks. In 2023 IEEE Symposium on Security and Privacy (SP), pages 1311–1328. IEEE.
Yang, J., Ma, W., Zhang, M., Zhou, X., Liu, Y., and Ma, S. (2021). Legalgnn: Legal information enhanced graph neural network for recommendation. ACM Transactions on Information Systems (TOIS), 40(2):1–29.
Zhang, M., Hu, L., Shi, C., and Wang, X. (2020). Adversarial label-flipping attack and defense for graph neural networks. In 2020 IEEE International Conference on Data Mining (ICDM), pages 791–800. IEEE.
Zhang, X. and Zitnik, M. (2020). Gnnguard: Defending graph neural networks against adversarial attacks. Advances in Neural Information Processing Systems, 33:9263–9275.
Zhang, Y., Albarghouthi, A., and D’Antoni, L. (2022). Bagflip: A certified defense against data poisoning. Advances in Neural Information Processing Systems, 35:31474–31483.
Zhang, Z., Jia, J., Wang, B., and Gong, N. Z. (2021). Backdoor attacks to graph neural networks. In Proceedings of the 26th ACM Symposium on Access Control Models and Technologies, pages 15–26.
Zügner, D., Akbarnejad, A., and Günnemann, S. (2018). Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2847–2856.