
Released

Conference Paper

Cheshire: An Online Algorithm for Activity Maximization in Social Networks

MPS-Authors

Gomez Rodriguez, Manuel
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society;

Fulltext (public)

arXiv:1703.02059.pdf
(Preprint), 3MB

Citation

Zarezade, A., De, A., Rabiee, H., & Gomez Rodriguez, M. (2017). Cheshire: An Online Algorithm for Activity Maximization in Social Networks. In 55th Annual Allerton Conference on Communications, Control, and Computing. Retrieved from http://arxiv.org/abs/1703.02059.


Cite as: https://hdl.handle.net/21.11116/0000-0000-C6C0-7
Abstract
User engagement in social networks depends critically on the number of online actions their users take in the network. Can we design an algorithm that finds when to incentivize users to take actions to maximize the overall activity in a social network? In this paper, we model the number of online actions over time using multidimensional Hawkes processes, derive an alternate representation of these processes based on stochastic differential equations (SDEs) with jumps and, exploiting this alternate representation, address the above question from the perspective of stochastic optimal control of SDEs with jumps. We find that the optimal level of incentivized actions depends linearly on the current level of overall actions. Moreover, the coefficients of this linear relationship can be found by solving a matrix Riccati differential equation, which can be solved efficiently, and a first order differential equation, which has a closed form solution. As a result, we are able to design an efficient online algorithm, Cheshire, to sample the optimal times of the users' incentivized actions. Experiments on both synthetic and real data gathered from Twitter show that our algorithm is able to consistently maximize the number of online actions more effectively than the state of the art.
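The abstract's central structural result — the optimal incentive intensity is linear in the current level of activity, and incentivized action times are sampled online — can be illustrated with a toy simulation. The sketch below is a hedged illustration, not the paper's Cheshire algorithm: it uses a one-dimensional Hawkes process with an exponential kernel, and the feedback coefficient `c` is held constant as a stand-in for the time-varying coefficients the paper obtains from the matrix Riccati differential equation. The function name and all parameter values are invented for this example.

```python
import math
import random

def simulate_controlled_hawkes(mu=0.2, alpha=0.5, beta=1.0, c=0.5,
                               T=50.0, seed=0):
    """Simulate a 1-D Hawkes process with exponential kernel plus an
    incentive intensity that is linear in the current activity level,
    using Ogata's thinning algorithm.

    mu:    baseline intensity of organic actions
    alpha: jump in intensity caused by each action (self-excitation)
    beta:  decay rate of the exponential kernel
    c:     feedback coefficient (constant stand-in for the Riccati solution)
    """
    rng = random.Random(seed)
    t, lam_excess = 0.0, 0.0          # excess intensity above baseline
    organic, incentivized = [], []
    while t < T:
        lam_org = mu + lam_excess      # organic intensity at time t
        lam_bar = (1.0 + c) * lam_org  # dominating bound: intensity only decays
        w = rng.expovariate(lam_bar)   # candidate waiting time under the bound
        lam_excess *= math.exp(-beta * w)  # decay excess intensity over the wait
        t += w
        if t >= T:
            break
        lam_org = mu + lam_excess      # true intensity at the candidate time
        lam_inc = c * lam_org          # incentive intensity, linear in activity
        u = rng.random() * lam_bar
        if u < lam_org:                # accept as an organic action
            organic.append(t)
            lam_excess += alpha
        elif u < lam_org + lam_inc:    # accept as an incentivized action
            incentivized.append(t)
            lam_excess += alpha        # incentivized actions also excite others
        # otherwise the candidate is thinned (rejected)
    return organic, incentivized
```

With these parameters the process is subcritical (each action triggers on average `(1 + c) * alpha / beta = 0.75 < 1` further actions), so the simulation terminates quickly; the returned lists give the sampled times of organic and incentivized actions.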