Multiple Kernel Representation Learning on Networks


Learning representations of nodes in a low-dimensional space is a crucial task with numerous applications in network analysis, including link prediction, node classification, and visualization. Two popular approaches to this problem are matrix factorization and random walk-based models.

In this work, we aim to bring together the best of both worlds for learning node representations. In particular, we propose a weighted matrix factorization model that encodes random walk-based information about the nodes of the network. The benefit of this novel formulation is that it enables us to utilize kernel functions without computing the exact proximity matrix, enhancing the expressiveness of existing matrix decomposition methods with kernels while alleviating their computational complexity. We further extend the approach with a multiple kernel learning formulation that provides the flexibility of learning the kernel as a linear combination of a dictionary of kernels in a data-driven fashion. We perform an empirical evaluation on real-world networks, showing that the proposed model outperforms baseline node embedding algorithms in downstream machine learning tasks.


An implementation of the project in C++ is available at the GitHub repository.


Abdulkadir Çelikkanat, Yanning Shen and Fragkiskos D. Malliaros


A. Celikkanat and F. D. Malliaros, Kernel Node Embeddings, 7th IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2019

A. Celikkanat, Y. Shen and F. D. Malliaros, Multiple Kernel Representation Learning on Networks, IEEE Transactions on Knowledge and Data Engineering, 2022