Matthew Streeter
Authored Publications
Delay-Tolerant Algorithms for Asynchronous Distributed Online Learning
Advances in Neural Information Processing Systems (NIPS) (2014)
We analyze new online gradient descent algorithms for distributed systems with large delays between gradient computations and the corresponding updates. Using insights from adaptive gradient methods, we develop algorithms that adapt not only to the sequence of gradients, but also to the precise update delays that occur. We first give an impractical algorithm that achieves a regret bound that precisely quantifies the impact of the delays. We then analyze AdaptiveRevision, an algorithm that is efficiently implementable and achieves comparable guarantees. The key algorithmic technique is appropriately and efficiently revising the learning rate used for previous gradient steps. Experimental results show that when the delays grow large (1000 updates or more), our new algorithms perform significantly better than standard adaptive gradient methods.
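As a rough illustration of the setting only (this is plain delayed AdaGrad on a toy objective, not the paper's AdaptiveRevision algorithm; the function names and objective are invented for the example), consider per-coordinate adaptive updates where each gradient is applied a fixed number of steps after the iterate it was computed at:

import collections
import numpy as np

def delayed_adagrad(grad_fn, dim, steps, delay, alpha=0.1, eps=1e-8):
    # AdaGrad-style per-coordinate steps in which each gradient is applied
    # `delay` iterations after the iterate it was computed at. A toy model
    # of the asynchronous setting; NOT the paper's AdaptiveRevision.
    x = np.zeros(dim)
    accum = np.zeros(dim)            # running sum of squared gradients
    in_flight = collections.deque()  # gradients computed but not yet applied
    for _ in range(steps):
        in_flight.append(grad_fn(x))   # computed at the current iterate...
        if len(in_flight) > delay:     # ...applied only `delay` steps later
            g = in_flight.popleft()
            accum += g * g
            x -= alpha * g / np.sqrt(accum + eps)
    return x

# Toy objective f(x) = 0.5 * ||x - 1||^2, so the gradient is x - 1.
print(delayed_adagrad(lambda x: x - 1.0, dim=5, steps=2000, delay=100))

With large delays, the step sizes this sketch uses are based on stale information; per the abstract, AdaptiveRevision's key technique is to revise the learning rate applied to previous gradient steps, which this simple simulation does not attempt.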
No-Regret Algorithms for Unconstrained Online Convex Optimization
Advances in Neural Information Processing Systems (NIPS) (2012)
Some of the most compelling applications of online convex optimization, including online prediction and classification, are unconstrained: the natural feasible set is R^n. Existing algorithms fail to achieve sub-linear regret in this setting unless constraints on the comparator point x* are known in advance. We present algorithms that, without such prior knowledge, offer near-optimal regret bounds with respect to any choice of x*. In particular, regret with respect to x* = 0 is constant. We then prove lower bounds showing that our guarantees are near-optimal in this setting.
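The flavor of such comparator-adaptive guarantees can be seen in a one-dimensional coin-betting learner. This is a later, closely related technique in the style of Orabona and Pál, sketched under my own naming; it is not the algorithm from this paper:

import numpy as np

def coin_betting_1d(grads, initial_wealth=1.0):
    # Krichevsky-Trofimov coin betting: play a signed fraction of the
    # current "wealth". Wealth never goes negative, so the loss relative
    # to playing x* = 0 is at most initial_wealth (a constant), yet the
    # plays can grow without any prior bound on |x*|.
    wealth = initial_wealth
    grad_sum = 0.0
    plays = []
    for t, g in enumerate(grads, start=1):
        assert abs(g) <= 1.0, "assumes gradients bounded by 1"
        beta = -grad_sum / t      # KT betting fraction, always in (-1, 1)
        x = beta * wealth         # bet a fraction of current wealth
        plays.append(x)
        wealth -= g * x           # round-t reward is -g * x
        grad_sum += g
    return np.array(plays)

# An adversary that always pushes one way: the plays grow toward the
# far-away comparator even though no bound on x* was given in advance.
xs = coin_betting_1d(np.full(200, -1.0))
print(xs[0], xs[-1])

Because wealth stays nonnegative, total loss relative to playing 0 is bounded by the constant initial_wealth, matching the flavor of the constant-regret-at-zero guarantee described in the abstract.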
Open Problem: Better Bounds for Online Logistic Regression
COLT/ICML Joint Open Problem Session, JMLR: Workshop and Conference Proceedings (2012)
Known algorithms applied to online logistic regression on a feasible set of L2 diameter D achieve regret bounds like O(e^D log T) in one dimension, but we show a bound of O(sqrt(D) + log T) is possible in a binary 1-dimensional problem. Thus, we pose the following question: Is it possible to achieve a regret bound for online logistic regression that is O(poly(D) log(T))? Even if this is not possible in general, it would be interesting to have a bound that reduces to our bound in the one-dimensional case.
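For concreteness, the setting behind the question is the standard one (this formalization and the usual normalization assumptions are supplied here for the reader; they are not text from the note):

\ell_t(w) = \log\bigl(1 + \exp(-y_t \langle x_t, w \rangle)\bigr), \qquad \|x_t\|_2 \le 1, \; y_t \in \{-1, +1\},

\mathrm{Regret}_T = \sum_{t=1}^{T} \ell_t(w_t) - \min_{w^* \in \mathcal{W}} \sum_{t=1}^{T} \ell_t(w^*), \qquad \mathrm{diam}_2(\mathcal{W}) = D.

The e^D factor in the known bound comes from exp-concavity-based methods such as Online Newton Step: the exp-concavity constant of the logistic loss on a set of diameter D degrades exponentially in D.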
Adaptive Bound Optimization for Online Convex Optimization
Proceedings of the 23rd Annual Conference on Learning Theory (COLT) (2010)
We introduce a new online convex optimization algorithm that adaptively chooses its regularization function based on the loss functions observed so far. This is in contrast to previous algorithms that use a fixed regularization function such as L2-squared, and modify it only via a single time-dependent parameter. Our algorithm's regret bounds are worst-case optimal, and for certain realistic classes of loss functions they are much better than existing bounds. These bounds are problem-dependent, which means they can exploit the structure of the actual problem instance. Critically, however, our algorithm does not need to know this structure in advance. Rather, we prove competitive guarantees that show the algorithm provides a bound within a constant factor of the best possible bound (of a certain functional form) in hindsight.
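A minimal sketch of the kind of adaptivity involved (a per-coordinate step size driven by the observed gradients, in the AdaGrad style this line of work analyzes; the toy problem and names are invented for the example):

import numpy as np

def per_coordinate_sgd(grad_fn, x0, steps, alpha=0.5, eps=1e-8):
    # Step size alpha / sqrt(sum of past squared gradients), maintained
    # separately per coordinate: the effective regularization adapts to
    # the observed loss sequence instead of being fixed in advance.
    x = np.array(x0, dtype=float)
    accum = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        accum += g * g
        x -= alpha * g / (np.sqrt(accum) + eps)
    return x

# Badly scaled toy problem f(x) = 0.5 * (100 * x[0]**2 + x[1]**2): each
# coordinate gets a step size matched to its own gradient scale, whereas a
# single global rate would either diverge on x[0] or crawl on x[1].
grad = lambda x: np.array([100.0 * x[0], x[1]])
print(per_coordinate_sgd(grad, [1.0, 1.0], steps=500))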
Tighter Bounds for Multi-Armed Bandits with Expert Advice
Proceedings of the 22nd Annual Conference on Learning Theory (COLT) (2009)