Boosting Graph Contrastive Learning via Adaptive Sampling

Abstract

Contrastive Learning (CL) is a prominent technique for self-supervised representation learning, which aims to contrast semantically similar (i.e., positive) and dissimilar (i.e., negative) pairs of examples under different augmented views. Recently, CL has shown unprecedented potential for learning expressive graph representations without external supervision. In graph CL, negative nodes are typically sampled uniformly from the augmented views to formulate the contrastive objective. However, this uniform negative sampling strategy limits the expressive power of contrastive models: not all negative nodes provide sufficiently meaningful knowledge for effective contrastive representation learning, and negative nodes that are semantically similar to the anchor are undesirably repelled from it, degrading model performance. To address these limitations, we devise an Adaptive Sampling strategy termed ‘AdaS’. The proposed AdaS framework can be trained to adaptively encode the importance of different negative nodes, so as to encourage learning from the most informative graph nodes. Meanwhile, an auxiliary polarization regularizer is proposed to suppress the adverse impact of false negatives and enhance the discrimination ability of AdaS. Experimental results on a variety of real-world datasets firmly verify the effectiveness of AdaS in improving the performance of graph CL.
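The abstract does not give the paper's exact formulation, but the general idea of weighting negatives adaptively and polarizing those weights can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the class name AdaptiveSamplingLoss, the pairwise MLP scorer, and the w·(1−w) polarization term are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSamplingLoss(nn.Module):
    """Illustrative InfoNCE-style loss where each negative pair receives a
    learned importance weight, plus a polarization term that pushes the
    weights toward 0 or 1 to down-weight likely false negatives."""

    def __init__(self, dim, tau=0.5, reg_coef=1.0):
        super().__init__()
        # Hypothetical scorer: a small MLP over the element-wise product
        # of anchor and candidate-negative embeddings.
        self.scorer = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )
        self.tau = tau
        self.reg_coef = reg_coef

    def forward(self, z1, z2):
        # z1, z2: (N, dim) node embeddings from two augmented views,
        # with row i of z1 and row i of z2 forming the positive pair.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        sim = torch.exp(z1 @ z2.t() / self.tau)       # (N, N) similarities
        pos = sim.diagonal()                          # positive-pair terms

        # Importance weight in (0, 1) for every candidate negative pair.
        pairwise = z1.unsqueeze(1) * z2.unsqueeze(0)  # (N, N, dim)
        w = torch.sigmoid(self.scorer(pairwise)).squeeze(-1)
        mask = ~torch.eye(z1.size(0), dtype=torch.bool, device=z1.device)

        neg = (w * sim * mask).sum(dim=1)             # weighted negatives
        loss = -torch.log(pos / (pos + neg)).mean()

        # Polarization regularizer: w * (1 - w) is minimized at w in {0, 1},
        # encouraging near-binary keep/discard decisions per negative.
        polar = (w * (1 - w) * mask).sum() / mask.sum()
        return loss + self.reg_coef * polar
```

Under this sketch, negatives the scorer deems uninformative or semantically close to the anchor receive weights near zero and contribute little repulsion, while the polarization term keeps the weights decisive rather than collapsing to a uniform value.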

Publication
IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
Sheng Wan
PostDoc @ NUST

My research interests include Graph Neural Networks, Contrastive Learning, and Hyperspectral Image Processing.

Shirui Pan
Professor | ARC Future Fellow

My research interests include data mining, machine learning, and graph analysis.