Learning Network Representation Through Reinforcement Learning
Siqi Shen, Yongquan Fu, Adele Lu Jia, Huayou Su, Qinglin Wang, Chengsong Wang, Yong Dou
Network Representation Learning embeds each node of a network into a low-dimensional real-valued vector that can be used for downstream tasks such as link prediction and recommendation. Many existing approaches use unsupervised or (semi-)supervised methods to explore the network topology and learn representations from it. In contrast, we propose reinforcement learning network representations (RLNet), which uses reinforcement learning to learn how to explore the network and thereby obtain network representations. Guided by reward signals, RLNet learns an actor whose policy determines the network navigation actions. RLNet parameterizes this policy with node representations, and the representations are learned jointly with the policy. Through experiments on multiple datasets, we show that RLNet obtains better results than state-of-the-art methods on link prediction tasks.
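To make the idea of a policy parameterized by node representations concrete, the following is a minimal sketch, not the paper's algorithm: it assumes a softmax policy over a node's neighbors scored by embedding dot products, a toy graph, a sparse hand-picked reward for reaching a target node, and a REINFORCE-style update. All names and the reward definition are illustrative assumptions.

```python
# Sketch of policy-gradient "network navigation" with embeddings as policy
# parameters (assumptions: toy graph, softmax-over-neighbors policy,
# reach-the-target reward, plain REINFORCE without a baseline).
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph as an adjacency list (illustrative only).
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
num_nodes, dim, lr = len(graph), 8, 0.05

# Node representations double as policy parameters: the policy scores each
# neighbor by the dot product of the current node's and the neighbor's vectors.
emb = 0.1 * rng.standard_normal((num_nodes, dim))

def policy(node):
    nbrs = graph[node]
    scores = emb[nbrs] @ emb[node]
    probs = np.exp(scores - scores.max())
    return nbrs, probs / probs.sum()

for episode in range(2000):
    node, target = 0, 4                      # navigate from node 0 toward node 4
    trajectory = []                          # (current node, chosen neighbor index)
    for _ in range(6):                       # bounded walk length
        nbrs, probs = policy(node)
        idx = rng.choice(len(nbrs), p=probs)
        trajectory.append((node, idx))
        node = nbrs[idx]
        if node == target:
            break
    reward = 1.0 if node == target else 0.0  # sparse reward: did the walk reach the target?

    # REINFORCE: increase the log-probability of actions taken on rewarded walks,
    # which updates the embeddings (i.e., the representations) directly.
    for src, idx in trajectory:
        nbrs, probs = policy(src)
        src_vec = emb[src].copy()
        # Gradient of log pi(a | src) w.r.t. the source embedding.
        emb[src] += lr * reward * (emb[nbrs[idx]] - probs @ emb[nbrs])
        # Gradient of log pi(a | src) w.r.t. each neighbor embedding.
        for j, nb in enumerate(nbrs):
            coeff = (1.0 if j == idx else 0.0) - probs[j]
            emb[nb] += lr * reward * coeff * src_vec
```

In an actual system the reward would come from the downstream objective (e.g., a link-prediction signal) rather than a hand-picked target node, but the loop above illustrates how navigation actions and node representations can be learned together from reward alone.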