Improved Regret Guarantees for Online Mirror Descent using a Portfolio of Mirror Maps

arXiv: 2602.13177v1
Authors

Swati Gupta, Jai Moondra, Mohit Singh

Abstract

Online Mirror Descent (OMD) and its variants give a flexible framework for online convex optimization (OCO) in which performance depends crucially on the choice of mirror map. While the geometries underlying Online Projected Gradient Descent (OPGD) and Online Exponentiated Gradient (OEG), both special cases of OMD, are well understood, how to construct an optimal mirror map for a given constraint set and a general family of loss functions (e.g., sparse losses) remains a challenging open question. Motivated by the goal of parameterizing a near-optimal set of mirror maps, we consider a simpler question: is it possible to obtain polynomial gains in regret by using mirror maps for geometries that interpolate between $L_1$ and $L_2$, gains that are unattainable by restricting to only OEG ($L_1$) or OPGD ($L_2$)? Our main result answers this question positively. We show that mirror maps based on block norms adapt better to the sparsity of loss functions than previous $L_p$ (for $p \in [1, 2]$) interpolations. In particular, we construct a family of online convex optimization instances in $\mathbb{R}^d$ on which block norm-based mirror maps achieve a provable polynomial (in $d$) improvement in regret over OEG and OPGD for sparse loss functions. We then turn to the setting in which the sparsity level of the loss functions is unknown. In this case, the choice of geometry itself becomes an online decision problem. We first show that naively switching between OEG and OPGD can incur linear regret, highlighting the intrinsic difficulty of geometry selection. To overcome this issue, we propose a meta-algorithm based on multiplicative weights that dynamically selects among a family of uniform block norms. We show that this approach effectively tunes OMD to the sparsity of the losses, yielding adaptive regret guarantees. Overall, our results demonstrate that online mirror-map selection can significantly enhance the ability of OMD to exploit sparsity in online convex optimization.
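The multiplicative-weights meta-algorithm mentioned in the abstract maintains a weight per candidate (here, per geometry or mirror map) and exponentially down-weights candidates that incur high loss. A minimal sketch of this selection scheme, with generic "experts" standing in for the paper's family of block-norm instantiations of OMD (the learning rate and the bounded-loss assumption are illustrative, not the paper's):

```python
import numpy as np

def mw_select(expert_losses, eta=0.5):
    """Multiplicative-weights selection among k experts over T rounds.

    expert_losses: (T, k) array; entry [t, i] is the loss expert i
    would have incurred at round t (losses assumed to lie in [0, 1]).
    Returns the per-round expected loss of the meta-algorithm.
    """
    T, k = expert_losses.shape
    w = np.ones(k)                            # uniform initial weights
    meta_losses = np.empty(T)
    for t in range(T):
        p = w / w.sum()                       # probability of playing each expert
        meta_losses[t] = p @ expert_losses[t]
        w *= np.exp(-eta * expert_losses[t])  # exponential down-weighting
    return meta_losses

# Toy run: expert 0 is consistently better, so the weights should
# concentrate on it and the meta-algorithm's regret should stay small.
rng = np.random.default_rng(0)
T, k = 200, 3
losses = rng.uniform(0.4, 1.0, size=(T, k))
losses[:, 0] = rng.uniform(0.0, 0.3, size=T)  # the strong expert
meta = mw_select(losses)
best = losses.sum(axis=0).min()
print(meta.sum() - best)  # regret of the meta-algorithm vs. the best expert
```

The same skeleton applies when each "expert" is a full OMD run under a different block norm: the meta-layer only needs each base learner's realized loss per round.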

Paper Summary

Problem
The main problem this research paper addresses is the challenge of designing an optimal mirror map for Online Mirror Descent (OMD) in online convex optimization (OCO). The goal is to minimize regret, which measures the difference between the player's total loss and the minimum total loss of a clairvoyant oracle that plays the best fixed decision in hindsight.
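The regret notion above can be made concrete with a small simulation: run a standard online learner against a sequence of linear losses and compare its cumulative loss to that of the best fixed point in hindsight. The sketch below uses online projected gradient descent on the probability simplex with a textbook step size; this is a generic illustration of the regret definition, not the paper's construction.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u > (css - 1) / (np.arange(len(v)) + 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0)

rng = np.random.default_rng(1)
d, T = 10, 500
G = rng.normal(size=(T, d))           # loss gradients g_t of linear losses <g_t, x>
eta = 1.0 / np.sqrt(T)                # standard O(1/sqrt(T)) step size

x = np.ones(d) / d                    # start at the uniform point
player_loss = 0.0
for g in G:
    player_loss += g @ x
    x = project_simplex(x - eta * g)  # gradient step followed by projection

# For linear losses, the best fixed point in hindsight is a simplex vertex.
best_loss = G.sum(axis=0).min()
regret = player_loss - best_loss
print(regret)
```

The printed regret grows sublinearly in T, which is exactly the guarantee whose constants (and polynomial dependence on d) the paper improves by choosing the geometry adaptively.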
Key Innovation
The key innovation of this work is the development of a portfolio of mirror maps based on block norms that adapt to the sparsity of loss functions. This approach is shown to provide significantly improved performance over standard algorithms like Online Projected Gradient Descent (OPGD) and Online Exponentiated Gradient (OEG).
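A block (group) norm partitions the d coordinates into blocks, takes the L2 norm inside each block, and sums the results, so it interpolates between L1 (blocks of size 1) and L2 (one block of size d). A minimal sketch with uniform consecutive blocks; the partition choice and example vector are illustrative, not taken from the paper:

```python
import numpy as np

def block_norm(x, block_size):
    """Sum of L2 norms over consecutive blocks of the given size.

    block_size=1 recovers the L1 norm; block_size=len(x) recovers L2.
    Assumes block_size divides len(x) for simplicity.
    """
    blocks = x.reshape(-1, block_size)
    return np.linalg.norm(blocks, axis=1).sum()

x = np.array([3.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.0])
print(block_norm(x, 1))  # L1 norm: 3 + 4 + 12 = 19
print(block_norm(x, 8))  # L2 norm: sqrt(9 + 16 + 144) = 13
print(block_norm(x, 2))  # intermediate: 5 + 0 + 0 + 12 = 17
```

Varying the block size gives a one-parameter family of geometries between L1 and L2, which is the knob the paper's meta-algorithm tunes to the (unknown) sparsity level of the losses.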
Practical Impact
The practical impact of this research is the ability to design more efficient online optimization algorithms that can adapt to the geometry of the OCO problem and exploit sparsity in loss functions. This can lead to improved performance in various applications, such as online learning, recommendation systems, and resource allocation.
Analogy / Intuitive Explanation
Imagine you're trying to find the shortest path in a complex network. The traditional approach would be to use a fixed geometry, like a Euclidean or entropic map, which might not be optimal for the specific problem. The new approach is like having a portfolio of maps that can adapt to the network's structure, allowing you to switch between them at runtime to find the shortest path. This is similar to how the block norm mirror maps adapt to the sparsity of loss functions in online convex optimization.
Paper Information
Categories:
math.OC cs.DS cs.LG
