Beyond the Worst-case: Data-dependent Rates in Learning and Optimization (completed)

Worst-case complexity bounds are increasingly insufficient to explain the (often superior) real-world performance of optimization and learning algorithms. We consider data-dependent rates, approximation guarantees, and complexity bounds that are much more in line with actual performance.

🧑‍🎓 Project Members

Sebastian Pokutta
Principal Investigator
pokutta (at) zib.de

🪙 Funding

This project was funded by the Berlin Mathematics Research Center MATH+ (project ID AA3-7), itself funded by the German Research Foundation (DFG) under Germany's Excellence Strategy (EXC-2046/1, project ID 390685689), from January 2021 to December 2022.

🔬 Project Description

Worst-case complexity bounds are increasingly insufficient to explain the (often superior) real-world performance of optimization and learning algorithms. We consider data-dependent rates, approximation guarantees, and complexity bounds that are much more in line with actual performance. We are particularly interested in exploiting properties of the feasible region to obtain algorithms that adapt to the problem structure.

One interesting instance of this is Carathéodory’s theorem, which states that any point in the convex hull of n points in d-dimensional space is a convex combination of at most d+1 of those points. If approximate solutions are allowed, however, one can compute much smaller combinations that are close enough to the target point: the number of points required depends on the diameter of the convex hull and the desired accuracy rather than on the dimension.
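As a rough illustration (not taken from the project, and with names of our own choosing), the following Python sketch builds such a small convex combination with the Frank–Wolfe method: each iteration adds at most one new point, so after T iterations the approximation is supported on at most T+1 points.

```python
# Minimal sketch: approximate Caratheodory via Frank-Wolfe (illustrative only).
import numpy as np

def approx_caratheodory(points, target, iterations):
    """Approximate `target` (assumed to lie in conv(points)) by a convex
    combination supported on at most `iterations + 1` of the points."""
    n, d = points.shape
    weights = np.zeros(n)
    weights[0] = 1.0                       # start at an arbitrary vertex
    x = points[0].astype(float).copy()
    for t in range(iterations):
        grad = x - target                  # gradient of 0.5 * ||x - target||^2
        i = int(np.argmin(points @ grad))  # linear minimization over the points
        gamma = 2.0 / (t + 2.0)            # standard Frank-Wolfe step size
        weights *= 1.0 - gamma
        weights[i] += gamma
        x = (1.0 - gamma) * x + gamma * points[i]
    return weights, x

# Usage: approximate a point inside the hull of 500 points in R^20.
rng = np.random.default_rng(0)
P = rng.standard_normal((500, 20))
y = rng.dirichlet(np.ones(500)) @ P        # a point in the convex hull
w, x = approx_caratheodory(P, y, iterations=50)
print(np.count_nonzero(w), np.linalg.norm(x - y))   # few atoms, small error
```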

This has applications to linear programming. Linear programs are one of the most common classes of optimization problems in scientific computing. It is currently unknown whether they can be solved in strongly polynomial time, and the search for algorithms that are more efficient in practice is an active research field. Applied to the dual of a linear program, Carathéodory’s theorem states that an optimal dual solution is a convex combination of at most d+1 of the constraints, and the approximate version yields near-optimal dual solutions with even smaller support. Since real-world linear programs are rarely completely pathological, techniques such as this one, which exploit the regularity of the data, are an interesting research direction.
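As a small illustration of this dual sparsity (not part of the project's code), the snippet below solves a random linear program with many constraints using SciPy's HiGHS interface and checks that the optimal dual vector is supported on only a few constraints; the attribute names used here (e.g. `ineqlin.marginals`) follow recent SciPy versions.

```python
# Illustration only: dual sparsity of a random LP with many constraints.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, m = 10, 1000                        # 10 variables, 1000 inequality constraints
A = rng.standard_normal((m, d))
b = rng.uniform(1.0, 2.0, size=m)      # x = 0 is feasible; region is bounded w.h.p.
c = rng.standard_normal(d)

res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * d, method="highs")
duals = -res.ineqlin.marginals         # marginals of A_ub x <= b_ub are nonpositive
support = np.flatnonzero(duals > 1e-9)
print(len(support))                    # typically at most d constraints carry dual weight
```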

We also study the particular case of packing linear programs, in which all input data (constraint coefficients, right-hand sides, and objective coefficients) are nonnegative real numbers. We show that the 1-fair packing problem (packing proportional fairness) and its dual can be solved efficiently by exploiting the geometry of the input polytope.
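For concreteness, here is a minimal sketch of the 1-fair packing problem, assuming the proportionally fair objective max sum_i log(x_i) subject to Ax <= 1, x >= 0 with nonnegative A. It uses cvxpy purely as a generic convex solver to state the problem; it does not reproduce the fast first-order methods from the paper below.

```python
# Minimal sketch of 1-fair (proportionally fair) packing:
#   maximize sum_i log(x_i)  subject to  A x <= 1,  x >= 0,  A nonnegative.
# cvxpy is used only as a generic solver, not as the project's algorithm.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
m, n = 30, 50
A = rng.uniform(0.0, 1.0, size=(m, n))      # packing data: nonnegative entries

x = cp.Variable(n, pos=True)
problem = cp.Problem(cp.Maximize(cp.sum(cp.log(x))), [A @ x <= 1])
problem.solve()

print(problem.value, float(x.value.min()))  # objective value and smallest allocation
```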

A more technical overview can be found here.

[Figure: Packing Proportional Fairness]

📝 Publications and preprints

  1. Criado, F., Martínez-Rubio, D., and Pokutta, S. (2022). Fast Algorithms for Packing Proportional Fairness and Its Dual. Proceedings of Conference on Neural Information Processing Systems. [arXiv] [poster]
    [BibTeX]
    @inproceedings{2021_CriadoMartinezrubioPokutta_Packingproportionalfairness,
      year = {2022},
      booktitle = {Proceedings of Conference on Neural Information Processing Systems},
      archiveprefix = {arXiv},
      eprint = {2109.03678},
      primaryclass = {math.OC},
      author = {Criado, Francisco and Martínez-Rubio, David and Pokutta, Sebastian},
      title = {Fast Algorithms for Packing Proportional Fairness and Its Dual},
      poster = {https://pokutta.com/slides/20211105_fairpacking-poster.pdf}
    }