Conference Paper

PAC-Bayesian Analysis of Contextual Bandits


Seldin, Y.
Department of Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society


Cite as:
Seldin, Y., Auer, P., Laviolette, F., Shawe-Taylor, J., & Ortner, R. (2012). PAC-Bayesian Analysis of Contextual Bandits. In Advances in Neural Information Processing Systems 24 (pp. 1683-1691). Red Hook, NY, USA: Curran.

Abstract:
We derive an instantaneous (per-round) data-dependent regret bound for stochastic multiarmed bandits with side information (also known as contextual bandits). The scaling of our regret bound with the number of states (contexts) N goes as \sqrt{N I_{ρ_t}(S;A)}, where I_{ρ_t}(S;A) is the mutual information between states and actions (the side information) used by the algorithm at round t. If the algorithm uses all the side information, the regret bound scales as \sqrt{N \ln K}, where K is the number of actions (arms). However, if the side information I_{ρ_t}(S;A) is not fully used, the regret bound is significantly tighter. In the extreme case, when I_{ρ_t}(S;A) = 0, the dependence on the number of states reduces from linear to logarithmic. Our analysis makes it possible to provide the algorithm with a large amount of side information, let the algorithm decide which side information is relevant for the task, and penalize the algorithm only for the side information that it actually uses. We also present an algorithm for multiarmed bandits with side information whose computational complexity is linear in the number of actions.
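
To make the quantity driving the bound concrete, the following Python sketch computes the mutual information I_{ρ_t}(S;A) for a given policy. It is not from the paper; the uniform state distribution, the function name, and the example dimensions (N = 100 states, K = 10 actions) are assumptions made purely for illustration.

    import numpy as np

    # Sketch: the mutual information I(S;A) that appears in the regret bound,
    # assuming a uniform state distribution p(s) = 1/N and a policy given as a
    # row-stochastic N x K matrix rho[s, a] = probability of action a in state s.
    def mutual_information(rho: np.ndarray) -> float:
        """I(S;A) in nats under a uniform state distribution."""
        N, K = rho.shape
        p_s = np.full(N, 1.0 / N)          # p(s): uniform over states
        joint = p_s[:, None] * rho         # p(s, a) = p(s) * rho(a | s)
        p_a = joint.sum(axis=0)            # marginal p(a)
        mask = joint > 0                   # skip zero-probability terms
        indep = (p_s[:, None] * p_a[None, :])[mask]
        return float(np.sum(joint[mask] * np.log(joint[mask] / indep)))

    # The two extremes named in the abstract:
    N, K = 100, 10
    uniform_policy = np.full((N, K), 1.0 / K)          # ignores the context
    print(mutual_information(uniform_policy))          # ~0.0: I(S;A) = 0
    deterministic = np.zeros((N, K))
    deterministic[np.arange(N), np.arange(N) % K] = 1  # fully context-dependent
    print(mutual_information(deterministic))           # ln K ~ 2.30: full use

The two extremes reproduce the regimes described in the abstract: a context-independent policy gives I(S;A) = 0, while a deterministic context-dependent policy attains ln K, the value at which the bound scales as \sqrt{N \ln K}.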