Multitask Learning with Expert Advice

Jacob Duncan Abernethy and Peter Bartlett and Alexander Rakhlin

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2007-20

January 28, 2007

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-20.pdf

We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, on each task, our forecaster would like to compete against the best expert chosen from a small set of experts. We describe the "ideal" algorithm and its performance bound, but show that the computation it requires is as hard as computing a matrix permanent. We then present an efficient algorithm based on mixing priors and prove a bound that is nearly as good for the case where tasks are presented sequentially. Finally, we consider the harder case where the task may change arbitrarily from round to round, and develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.
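The multitask forecasters in this report build on the standard framework of prediction with expert advice. As background only, here is a minimal sketch of the exponentially weighted average forecaster, the usual baseline in this framework; this is not the paper's multitask algorithm, and the learning rate eta and the [0, 1] loss encoding are illustrative assumptions.

    import math

    def exponential_weights(loss_rounds, eta=0.5):
        # loss_rounds: one list per round, each giving per-expert losses in [0, 1].
        # Returns (cumulative expected forecaster loss, final distribution over experts).
        n = len(loss_rounds[0])
        log_w = [0.0] * n                 # log-weights, one per expert
        total = 0.0
        for losses in loss_rounds:
            m = max(log_w)                # subtract the max to stabilize exponentials
            w = [math.exp(lw - m) for lw in log_w]
            s = sum(w)
            p = [wi / s for wi in w]      # current distribution over experts
            total += sum(pi * li for pi, li in zip(p, losses))
            # Multiplicative update: down-weight each expert by its loss this round.
            log_w = [lw - eta * li for lw, li in zip(log_w, losses)]
        m = max(log_w)
        s = sum(math.exp(lw - m) for lw in log_w)
        return total, [math.exp(lw - m) / s for lw in log_w]

    # Example: expert 0 is consistently best, so the weights concentrate on it.
    rounds = [[0.0, 1.0, 0.5], [0.1, 0.9, 0.5], [0.0, 1.0, 0.5]]
    total, p = exponential_weights(rounds)

The cumulative loss of this forecaster exceeds that of the best single expert by at most a logarithmic term in the number of experts; the report's question is how to get a comparable guarantee when the comparator is, per task, the best expert from a small shared set.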


BibTeX citation:

@techreport{Abernethy:EECS-2007-20,
    Author= {Abernethy, Jacob Duncan and Bartlett, Peter and Rakhlin, Alexander},
    Title= {Multitask Learning with Expert Advice},
    Year= {2007},
    Month= {Jan},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-20.html},
    Number= {UCB/EECS-2007-20},
    Abstract= {We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, on each task, our forecaster would like to compete against the best expert chosen from a small set of experts. We describe the "ideal" algorithm and its performance bound, but show that the computation it requires is as hard as computing a matrix permanent. We then present an efficient algorithm based on mixing priors and prove a bound that is nearly as good for the case where tasks are presented sequentially. Finally, we consider the harder case where the task may change arbitrarily from round to round, and develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.},
}

EndNote citation:

%0 Report
%A Abernethy, Jacob Duncan 
%A Bartlett, Peter 
%A Rakhlin, Alexander 
%T Multitask Learning with Expert Advice
%I EECS Department, University of California, Berkeley
%D 2007
%8 January 28
%@ UCB/EECS-2007-20
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-20.html
%F Abernethy:EECS-2007-20