method's reliability. We verified accuracy for 1 to 100
layers and found that our model's performance remains
stable as the number of layers increases. Moreover,
unlike GCN, the node representations do not converge
as layers are added. This demonstrates that decoupling
the propagation from the transformation can alleviate
the over-smoothing problem.
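The decoupled design discussed above, where features are transformed once and then propagated repeatedly with a residual connection (in the spirit of the APPNP and DAGNN works cited below), can be illustrated with a minimal NumPy sketch. The function name and the teleport parameter `alpha` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def decoupled_propagation(adj, features, transform, num_hops, alpha=0.1):
    """Illustrative decoupled scheme: apply the feature transformation
    once, then propagate many times. The residual (teleport) term keeps
    a fraction `alpha` of the initial prediction at every hop, so
    representations do not collapse as the number of hops grows."""
    # Symmetrically normalize the adjacency matrix with self-loops.
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    h0 = transform(features)      # transformation happens exactly once
    h = h0
    for _ in range(num_hops):     # propagation is a separate, deep step
        h = (1 - alpha) * (a_norm @ h) + alpha * h0
    return h
```

Because the transformation is applied only once, adding propagation hops does not stack learned layers, and the `alpha * h0` term anchors each node to its own initial representation even after 100 hops.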
5 CONCLUSIONS
We propose DAP-GCN, a decoupled graph convolutional
network with a dual adaptive propagation mechanism
that applies to both homophilic and heterophilic
networks. DAP-GCN extracts class-aware information
by learning class similarity matrices from attribute
information and topological information. These
matrices adaptively modify the network's propagation
process according to the class similarity between
nodes, and information is extracted adaptively at
different layers. DAP-GCN primarily addresses the
heterophily problem while also effectively mitigating
the over-smoothing problem. Experiments on real-world
datasets show that DAP-GCN outperforms current
methods on both homophilic and heterophilic graphs.
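The learning procedure for the class similarity matrices is not reproduced here; as a hedged sketch of the reweighting idea only (hypothetical function names, assuming soft class predictions for each node are already available), one class-aware propagation step could look like:

```python
import numpy as np

def class_similarity_propagation(adj, soft_labels, features):
    """Illustrative sketch (not the paper's exact method): reweight each
    existing edge by the class similarity of its endpoints, so that
    aggregation draws mainly from same-class neighbours even on
    heterophilic graphs."""
    # Pairwise class similarity from soft class predictions; s[i, j] is
    # high when nodes i and j likely belong to the same class.
    s = soft_labels @ soft_labels.T
    # Keep only existing edges, reweighted by class similarity.
    weighted = adj * s
    # Row-normalize, guarding against rows left with no weight.
    deg = weighted.sum(axis=1)
    deg[deg == 0] = 1.0
    return (weighted / deg[:, None]) @ features
```

In this toy form, an edge to a neighbour of a clearly different class receives near-zero weight, which is one concrete way class similarity can adaptively change the propagation process between nodes.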
REFERENCES
T. N. Kipf and M. Welling, 2016. Semi-supervised
classification with graph convolutional networks.
arXiv:1609.02907.
D. Chen, Y. Lin, W. Li, P. Li, J. Zhou, and X. Sun, 2020.
Measuring and relieving the over-smoothing problem
for graph neural networks from the topological view.
In Proceedings of the AAAI Conference on Artificial
Intelligence, vol. 34, no. 4, pp. 3438–3445.
M. Liu, H. Gao, and S. Ji, 2020. Towards deeper graph
neural networks. In Proceedings of the 26th ACM
SIGKDD International Conference on Knowledge
Discovery & Data Mining, pp. 338–348.
J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D.
Koutra, 2020. Beyond homophily in graph neural
networks: Current limitations and effective designs. In
Advances in Neural Information Processing Systems 33.
H. Pei, B. Wei, K. C. Chang, Y. Lei, and B. Yang, 2020.
Geom-GCN: Geometric graph convolutional networks.
In Proceedings of the 8th International Conference on
Learning Representations.
Y. Rong, W. Huang, T. Xu, and J. Huang, 2019. DropEdge:
Towards deep graph convolutional networks on node
classification. arXiv:1907.10903.
J. Gasteiger, A. Bojchevski, and S. Günnemann, 2018.
Predict then propagate: Graph neural networks meet
personalized PageRank. arXiv:1810.05997.
J. Tang, J. Sun, C. Wang, and Z. Yang, 2009. Social
influence analysis in large-scale networks. In
Proceedings of the 15th ACM SIGKDD International
Conference on Knowledge Discovery & Data Mining,
pp. 807–816.
G. M. Namata, B. London, L. Getoor, and B. Huang, 2012.
Query-driven active surveying for collective
classification. In 10th International Workshop on
Mining and Learning with Graphs, p. 8.
B. Perozzi, R. Al-Rfou, and S. Skiena, 2014. DeepWalk:
Online learning of social representations. In
Proceedings of the 20th ACM SIGKDD International
Conference on Knowledge Discovery & Data Mining,
pp. 701–710.
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P.
Liò, and Y. Bengio, 2017. Graph attention networks.
arXiv:1710.10903.
E. Chien, J. Peng, P. Li, and O. Milenkovic, 2021. Adaptive
universal generalized PageRank graph neural network.
In Proceedings of the 9th International Conference on
Learning Representations.
X. Wang, M. Zhu, D. Bo, P. Cui, C. Shi, and J. Pei, 2020.
AM-GCN: Adaptive multi-channel graph
convolutional networks. In Proceedings of the 26th
ACM SIGKDD International Conference on
Knowledge Discovery & Data Mining, pp. 1243–1253.
D. P. Kingma and J. Ba, 2015. Adam: A method for
stochastic optimization. In Proceedings of the 3rd
International Conference on Learning Representations.
H. Gao, Z. Wang, and S. Ji, 2018. Large-scale learnable
graph convolutional networks. In Proceedings of the
24th ACM SIGKDD International Conference on
Knowledge Discovery & Data Mining, pp. 1416–1424.