Closing the Gap between Bandit and Full-Information Online Optimization: High-Probability Regret Bound

Alexander Rakhlin, Ambuj Tewari, and Peter Bartlett

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2007-109
August 26, 2007

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-109.pdf

We demonstrate a modification of the algorithm of Dani et al. for the online linear optimization problem in the bandit setting, which allows us to achieve an O(\sqrt{T \ln T}) regret bound that holds with high probability against an adaptive adversary, in contrast to the in-expectation result of Dani et al. against an oblivious adversary. We obtain the same dependence on the dimension as that exhibited by Dani et al. The results of this paper rest firmly on those of Dani et al. and on the remarkable technique of Auer et al. for obtaining high-probability bounds via optimistic estimates. This paper answers an open question: it eliminates the gap between the high-probability bounds obtained in the full-information and bandit settings.
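
For reference, the quantity being bounded is the standard regret of online linear optimization. The following is a minimal statement in our own notation (the decision set \mathcal{K} \subset \mathbb{R}^d and loss vectors f_t are our labels, not the report's, and the exact polynomial dependence on the dimension d is left unspecified here):

% Regret of a player choosing x_t from a decision set K against
% loss vectors f_t selected by an adaptive adversary:
\[
  \mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t^{\top} x_t
  \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t^{\top} x .
\]
% The report's guarantee: for any confidence level delta, with
% probability at least 1 - delta (the delta-dependence is absorbed
% into constants and logarithmic factors),
\[
  \mathrm{Regret}_T \;=\; O\bigl(\mathrm{poly}(d)\,\sqrt{T \ln T}\bigr).
\]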

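The "optimistic estimates" device of Auer et al. is easiest to see in their K-armed bandit algorithm Exp3.P: each importance-weighted reward estimate is shifted upward just enough that the estimated cumulative rewards dominate the true ones with high probability, which is what upgrades an in-expectation regret bound to a high-probability one. Below is a minimal runnable sketch of that device in the simpler K-armed setting; it is not the report's linear-bandit algorithm, and the constants follow the shape of the Exp3.P tuning rather than its exact values.

import math
import random

def exp3p(reward_fn, K, T, delta=0.05, seed=0):
    """Sketch of Exp3.P (Auer et al.) on a K-armed bandit.

    reward_fn(t, i) -> reward of arm i at round t, assumed in [0, 1].
    The key line is the optimistic estimate: every arm's
    importance-weighted estimate is shifted up by alpha / (p_i * sqrt(K*T)),
    so cumulative estimates upper-bound true cumulative rewards with
    high probability.  Constants are illustrative, not exactly tuned.
    """
    rng = random.Random(seed)
    alpha = 2.0 * math.sqrt(math.log(K * T / delta))
    gamma = min(0.5, math.sqrt(K * math.log(K) / T))
    # Exp3.P starts weights above 1 so early optimistic shifts are
    # not dwarfed by the initialization.
    w = [math.exp((alpha * gamma / 3.0) * math.sqrt(T / K))] * K
    total = 0.0
    for t in range(1, T + 1):
        W = sum(w)
        # Mix the exponential-weights distribution with uniform exploration.
        p = [(1 - gamma) * wi / W + gamma / K for wi in w]
        i_t = rng.choices(range(K), weights=p)[0]
        x = reward_fn(t, i_t)
        total += x
        for i in range(K):
            xhat = (x / p[i]) if i == i_t else 0.0       # unbiased estimate
            optimistic = xhat + alpha / (p[i] * math.sqrt(K * T))  # upward shift
            w[i] *= math.exp((gamma / (3.0 * K)) * optimistic)
    return total

if __name__ == "__main__":
    # Toy check: arm 0 pays 0.9 on average, the others 0.1.
    def reward_fn(t, i):
        return 1.0 if random.random() < (0.9 if i == 0 else 0.1) else 0.0
    print(exp3p(reward_fn, K=5, T=2000))

The upward shift is applied to every arm, played or not, so the weights of under-explored arms keep growing; this is what makes the cumulative estimates valid upper confidence bounds and lets a martingale argument control the deviation of the true regret from its estimate.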

BibTeX citation:

@techreport{Rakhlin:EECS-2007-109,
    Author = {Rakhlin, Alexander and Tewari, Ambuj and Bartlett, Peter},
    Title = {Closing the Gap between Bandit and Full-Information Online Optimization: High-Probability Regret Bound},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2007},
    Month = {Aug},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-109.html},
    Number = {UCB/EECS-2007-109},
    Abstract = {We demonstrate a modification of the algorithm of Dani et al. for the online linear optimization problem in the bandit setting, which allows us to achieve an O(\sqrt{T \ln T}) regret bound that holds with high probability against an adaptive adversary, in contrast to the in-expectation result of Dani et al. against an oblivious adversary. We obtain the same dependence on the dimension as that exhibited by Dani et al. The results of this paper rest firmly on those of Dani et al. and on the remarkable technique of Auer et al. for obtaining high-probability bounds via optimistic estimates. This paper answers an open question: it eliminates the gap between the high-probability bounds obtained in the full-information and bandit settings.}
}

EndNote citation:

%0 Report
%A Rakhlin, Alexander
%A Tewari, Ambuj
%A Bartlett, Peter
%T Closing the Gap between Bandit and Full-Information Online Optimization: High-Probability Regret Bound
%I EECS Department, University of California, Berkeley
%D 2007
%8 August 26
%@ UCB/EECS-2007-109
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-109.html
%F Rakhlin:EECS-2007-109