Sparsity and Sample-size Efficiency in Structured Learning (completed)

In this project, we study algorithms that promote sparsity. We develop PageRank optimization algorithms that scale with solution sparsity and investigate Riemannian optimization using manifold geometry. Additionally, we develop algorithms for efficient fair resource allocation based on established fairness axioms.

🧑‍🎓 Project Members

Sebastian Pokutta
Principal Investigator
pokutta (at) zib.de
David Martínez-Rubio
martinez-rubio (at) zib.de

🪙 Funding

This project was funded by the Berlin Mathematics Research Center MATH+ (project ID AA5-1), itself funded by the German Research Foundation (DFG) under Germany's Excellence Strategy (EXC-2046/1, project ID 390685689), from January 2022 to December 2023.

🔬 Project Description

In this project, we investigate the properties of sparse solutions induced by learning algorithms that promote sparsity. In particular, we view sparsity as a consequence of both the model and the implicit sparsity bias of the chosen optimization method. A related question is sample-size efficiency: by implicitly restricting the degrees of freedom, these algorithms also tend to be far more sample-efficient. Another example of exploiting sparsity is PageRank: we study state-of-the-art PageRank optimization algorithms whose running time scales with the sparsity of the solution rather than with the size of the whole graph. Riemannian optimization is a related field that exploits the geometry of Riemannian constraints to keep iterates intrinsically on these manifolds, which are lower-dimensional than the embedding space.
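A classic illustration of running time scaling with solution sparsity is the local push method for approximate personalized PageRank of Andersen, Chung, and Lang. The sketch below is our own minimal (non-lazy) variant, not code from the project's papers; the graph representation and parameter names are illustrative choices:

```python
from collections import defaultdict, deque

def approximate_ppr(graph, seed, alpha=0.15, eps=1e-4):
    """Approximate personalized PageRank by local push.

    graph: dict mapping node -> list of neighbours (undirected).
    Only nodes whose residual exceeds eps times their degree are ever
    touched, so the work scales with the support of the solution,
    not with the size of the whole graph.
    """
    p = defaultdict(float)   # current PageRank estimate
    r = defaultdict(float)   # residual (not-yet-distributed) mass
    r[seed] = 1.0
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        deg = len(graph[u])
        if deg == 0 or r[u] < eps * deg:
            continue
        # Push: keep an alpha fraction at u, spread the rest to neighbours.
        p[u] += alpha * r[u]
        share = (1.0 - alpha) * r[u] / deg
        r[u] = 0.0
        for v in graph[u]:
            r[v] += share
            if r[v] >= eps * len(graph[v]):
                queue.append(v)
    return dict(p)
```

Each push removes at least an `alpha * eps` fraction of residual mass, which bounds the total number of pushes independently of the graph size.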

We also study algorithms that can quickly approximate the optimal fair resource allocation, given a notion of fairness for certain problems. For instance, under the fairness axioms of Pareto optimality, independence of irrelevant alternatives, affine invariance, and symmetry, the optimal fair allocation can be posed as an optimization problem. This line of research studies how to find allocations that efficiently approximate this or other fair allocations.

[Figure: Riemannian Optimization Scheme]
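Under those four axioms, the optimal allocation is the proportionally fair (Nash bargaining) point: for packing constraints Ax ≤ b, x > 0 it maximizes the sum of log x_i. A minimal sketch of this optimization problem, with hypothetical problem data and SciPy's generic SLSQP solver standing in for the fast specialized methods studied in the project:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical packing instance: two resources shared by three agents,
# allocations x must satisfy A @ x <= b, x > 0.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
n = A.shape[1]

# Proportional fairness: maximize sum(log x_i), i.e. minimize its negative.
res = minimize(
    fun=lambda x: -np.sum(np.log(x)),
    x0=np.full(n, 0.1),
    jac=lambda x: -1.0 / x,
    bounds=[(1e-9, None)] * n,
    constraints=[{"type": "ineq",          # "ineq" means fun(x) >= 0
                  "fun": lambda x: b - A @ x,
                  "jac": lambda x: -A}],
    method="SLSQP",
)
x = res.x  # approximately [2/3, 1/3, 2/3]
```

The middle agent competes for both resources, so the proportionally fair point gives it less than the other two, while no agent is ever allocated zero (the log objective forbids it).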

💬 Talks and posters

Conference and workshop talks

Nov 2023
Bounding Geometric Penalties in First-order Riemannian Optimization by Christophe Roux
Seminar "Modern Methods in Applied Stochastics and Nonparametric Statistics"

Research seminar talks

Apr 2024
Bounding Geometric Penalties in Riemannian Optimization by Christophe Roux
CISPA Research seminar, Saarbrücken

Poster presentations

Jul 2024
Convergence and Trade-Offs in Riemannian Gradient Descent and Riemannian Proximal Point by Christophe Roux
41st International Conference on Machine Learning (ICML), Vienna

📝 Publications and preprints

  1. Martínez-Rubio, D., Roux, C., and Pokutta, S. (2024). Convergence and Trade-offs in Riemannian Gradient Descent and Riemannian Proximal Point. Proceedings of the International Conference on Machine Learning. [URL] [arXiv]
    [BibTeX]
    @inproceedings{2024_MartinezrubioRouxPokutta_Riemanniangradientdescent,
      year = {2024},
      booktitle = {Proceedings of the International Conference on Machine Learning},
      url = {https://proceedings.mlr.press/v235/marti-nez-rubio24a.html},
      archiveprefix = {arXiv},
      eprint = {2403.10429},
      primaryclass = {math.OC},
      author = {Martínez-Rubio, David and Roux, Christophe and Pokutta, Sebastian},
      title = {Convergence and Trade-offs in Riemannian Gradient Descent and Riemannian Proximal Point}
    }
  2. Martínez-Rubio, D., and Pokutta, S. (2023). Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties. Proceedings of the Conference on Learning Theory. [URL] [arXiv] [poster]
    [BibTeX]
    @inproceedings{2022_MartinezrubioPokutta_Acceleratedriemannian,
      year = {2023},
      booktitle = {Proceedings of the Conference on Learning Theory},
      url = {https://proceedings.mlr.press/v195/martinez-rubio23a/martinez-rubio23a.pdf},
      archiveprefix = {arXiv},
      eprint = {2211.14645},
      primaryclass = {math.OC},
      author = {Martínez-Rubio, David and Pokutta, Sebastian},
      title = {Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties},
      poster = {https://pokutta.com/slides/20221203_poster_neurips_riemannian.pdf}
    }
  3. Criscitiello, C., Martínez-Rubio, D., and Boumal, N. (2023). Open Problem: Polynomial Linearly-convergent Method for G-convex Optimization? Proceedings of the Conference on Learning Theory. [arXiv]
    [BibTeX]
    @inproceedings{2023_CriscitielloMartinezrubioBoumal_Polynomialconvergence,
      year = {2023},
      booktitle = {Proceedings of the Conference on Learning Theory},
      archiveprefix = {arXiv},
      eprint = {2307.12743},
      primaryclass = {math.OC},
      author = {Criscitiello, Christopher and Martínez-Rubio, David and Boumal, Nicolas},
      title = {Open Problem: Polynomial Linearly-convergent Method for G-convex Optimization?}
    }
  4. Martínez-Rubio, D., Roux, C., Criscitiello, C., and Pokutta, S. (2023). Accelerated Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties. [arXiv]
    [BibTeX]
    @misc{2023_MartinezrubioRouxCriscitielloPokutta_Riemannianminmax,
      archiveprefix = {arXiv},
      eprint = {2305.16186},
      primaryclass = {math.OC},
      year = {2023},
      author = {Martínez-Rubio, David and Roux, Christophe and Criscitiello, Christopher and Pokutta, Sebastian},
      title = {Accelerated Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties}
    }
  5. Martínez-Rubio, D., Wirth, E., and Pokutta, S. (2023). Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond. Proceedings of the Conference on Learning Theory. [arXiv]
    [BibTeX]
    @inproceedings{2023_MartinezrubioWirthPokutta_Sparseapproximatepagerank,
      year = {2023},
      booktitle = {Proceedings of the Conference on Learning Theory},
      archiveprefix = {arXiv},
      eprint = {2303.12875},
      primaryclass = {math.OC},
      author = {Martínez-Rubio, David and Wirth, Elias and Pokutta, Sebastian},
      title = {Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond}
    }
  6. Scieur, D., Kerdreux, T., Martínez-Rubio, D., d’Aspremont, A., and Pokutta, S. (2023). Strong Convexity of Sets in Riemannian Manifolds. [arXiv]
    [BibTeX]
    @misc{2023_ScieurEtAl_StrongconvexityRiemannianmanifolds,
      archiveprefix = {arXiv},
      eprint = {2312.03583},
      primaryclass = {math.OC},
      year = {2023},
      author = {Scieur, Damien and Kerdreux, Thomas and Martínez-Rubio, David and d'Aspremont, Alexandre and Pokutta, Sebastian},
      title = {Strong Convexity of Sets in Riemannian Manifolds}
    }
  7. Criado, F., Martínez-Rubio, D., and Pokutta, S. (2022). Fast Algorithms for Packing Proportional Fairness and Its Dual. Proceedings of the Conference on Neural Information Processing Systems. [arXiv] [poster]
    [BibTeX]
    @inproceedings{2021_CriadoMartinezrubioPokutta_Packingproportionalfairness,
      year = {2022},
      booktitle = {Proceedings of the Conference on Neural Information Processing Systems},
      archiveprefix = {arXiv},
      eprint = {2109.03678},
      primaryclass = {math.OC},
      author = {Criado, Francisco and Martínez-Rubio, David and Pokutta, Sebastian},
      title = {Fast Algorithms for Packing Proportional Fairness and Its Dual},
      poster = {https://pokutta.com/slides/20211105_fairpacking-poster.pdf}
    }