A closer look at adaptive regret

Mitya Adamskiy, Wouter Koolen, Alexey Chernov, Vladimir Vovk

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

For the prediction with expert advice setting, we consider methods of constructing algorithms with low adaptive regret. The adaptive regret of an algorithm on a time interval [t_1, t_2] is the loss of the algorithm on that interval minus the loss of the best expert on the same interval. Adaptive regret measures how well the algorithm approximates the best expert locally, and it therefore lies somewhere between the classical regret (measured over all outcomes) and the tracking regret, where the algorithm is compared to a good sequence of experts.
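
As a minimal formalization (notation ours, assuming a loss function \ell_t at each time t and N experts, none of which is taken from the paper itself), the adaptive regret on [t_1, t_2] can be written as

    R_{[t_1,t_2]} = \sum_{t=t_1}^{t_2} \ell_t(\mathrm{algorithm}) \;-\; \min_{1 \le i \le N} \sum_{t=t_1}^{t_2} \ell_t(\mathrm{expert}\ i).

Taking [t_1, t_2] to be the whole sequence of outcomes recovers the classical regret.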

We investigate two existing intuitive methods to derive algorithms with low adaptive regret, one based on specialist experts and the other based on restarts. Quite surprisingly, we show that both methods lead to the same algorithm, namely Fixed Share, which is known for its tracking regret. Our main result is a thorough analysis of the adaptive regret of Fixed Share. We obtain the exact worst-case adaptive regret for Fixed Share, from which the classical tracking bounds can be derived. We also prove that Fixed Share is optimal, in the sense that no algorithm can have a better adaptive regret bound.
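
For orientation, below is a minimal sketch of the standard Fixed Share weight update (due to Herbster and Warmuth); the learning rate eta, the switching rate alpha, and the loss interface are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def fixed_share_weights(losses, eta=1.0, alpha=0.01):
        """Run the Fixed Share update over a (T, N) array of expert losses.

        Returns the weight vector used at each round. Sketch only: eta
        (learning rate) and alpha (switching rate) are illustrative
        defaults, not values prescribed by the paper.
        """
        T, N = losses.shape
        w = np.full(N, 1.0 / N)                 # uniform prior over experts
        history = []
        for t in range(T):
            history.append(w.copy())            # weights used for round t's prediction
            w = w * np.exp(-eta * losses[t])    # exponential-weights (loss) update
            w /= w.sum()                        # renormalise
            w = (1 - alpha) * w + alpha / N     # share a fraction alpha uniformly
        return np.array(history)

The uniform sharing step keeps every expert's weight bounded away from zero, which is the mechanism that lets the algorithm recover quickly and compete with the locally best expert on any interval.
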
Original language: English
Title of host publication: Proceedings of the Twenty Third International Conference on Algorithmic Learning Theory
Editors: Nader Bshouty, Gilles Stoltz, Nicolas Vayatis, Thomas Zeugmann
Place of publication: Berlin
Publisher: Springer
Pages: 290-304
Number of pages: 15
Volume: 7568
Publication status: Accepted/In press - 2012

Publication series

Name: Lecture Notes in Artificial Intelligence
Publisher: Springer

Keywords

  • online learning
  • adaptive regret
  • Fixed Share
  • specialist experts
