Sparsity and Sample-size Efficiency in Structured Learning (completed)
In this project, we study algorithms that promote sparsity. We develop PageRank optimization algorithms whose running time scales with the sparsity of the solution, and we investigate Riemannian optimization methods that exploit manifold geometry. Additionally, we develop algorithms for efficient fair resource allocation based on established fairness axioms.
🧑‍🎓 Project Members
David Martínez-Rubio, martinez-rubio (at) zib.de
🪙 Funding
This project was funded from January 2022 to December 2023 by the Berlin Mathematics Research Center MATH+ (project ID AA5-1), itself funded by the German Research Foundation (DFG) under Germany's Excellence Strategy (EXC-2046/1, project ID 390685689).
🔬 Project Description
In this project, we investigate the properties of sparse solutions induced by learning algorithms that promote sparsity. In particular, we view sparsity as a consequence both of the model and of the implicit sparsity bias of the chosen optimization method. A related question is sample-size efficiency: by implicitly restricting the degrees of freedom, these algorithms also tend to require far fewer samples. Another example of sparsity exploitation is PageRank: we study state-of-the-art PageRank optimization algorithms whose running time scales with the sparsity of the solution rather than with the size of the whole graph, as sketched below. Riemannian optimization is a related field that exploits the geometry of Riemannian constraints to keep iterates intrinsically on manifolds that are lower-dimensional than the embedding space.
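The publications below develop accelerated and sparse variants of such local methods. As a point of reference only, here is a minimal sketch of the classic "push" algorithm of Andersen, Chung, and Lang for approximate personalized PageRank, whose total work is bounded by O(1/(ε·α)) regardless of the size of the graph; the function name `push_ppr` and the dict-of-lists graph encoding are illustrative choices, not the project's code.

```python
from collections import defaultdict, deque

def push_ppr(neighbors, seed, alpha=0.15, eps=1e-6):
    """Approximate personalized PageRank from `seed` via local push updates.

    `neighbors` maps each node to a list of its neighbors (undirected graph;
    every node is assumed to appear as a key). Returns a sparse dict `p`; on
    termination every residual satisfies r[v] < eps * deg(v), and the total
    work is O(1 / (eps * alpha)), independent of the graph size.
    """
    p = defaultdict(float)   # sparse approximate PageRank vector
    r = defaultdict(float)   # sparse residual of unprocessed probability mass
    r[seed] = 1.0
    queue = deque([seed])    # candidate nodes whose residual may be large
    while queue:
        u = queue.popleft()
        deg_u = len(neighbors[u])
        if deg_u == 0:                 # dangling node: settle its mass locally
            p[u] += r[u]
            r[u] = 0.0
            continue
        if r[u] < eps * deg_u:
            continue                   # stale queue entry, nothing to push
        mass = r[u]
        p[u] += alpha * mass           # keep an alpha-fraction of the mass at u
        r[u] = 0.0
        share = (1.0 - alpha) * mass / deg_u
        for v in neighbors[u]:         # spread the rest over the neighbors
            r[v] += share
            if r[v] >= eps * len(neighbors[v]):
                queue.append(v)
    return dict(p)

# Toy usage on a 4-node path graph; only nodes near the seed are ever touched.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(push_ppr(graph, seed=0, alpha=0.15, eps=1e-4))
```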
We also study algorithms that quickly approximate the optimal fair resource allocation, given a notion of fairness. For instance, under the fairness axioms of Pareto optimality, independence of irrelevant alternatives, affine invariance, and symmetry, the optimal fair allocation can be posed as an explicit optimization problem (see the formulation below). This line of research concerns finding allocations that efficiently approximate this and other fair allocations.
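Concretely, these four axioms single out the proportionally fair (Nash bargaining) allocation, which maximizes the sum of logarithms of the allocated utilities. In the packing setting of the NeurIPS 2022 paper listed below, a standard formulation (the 1/n normalization is one common convention) reads

$$\max_{x \in \mathbb{R}^n,\; x \ge 0,\; Ax \le \mathbf{1}} \quad \frac{1}{n} \sum_{i=1}^{n} \log x_i,$$

where $A$ is a nonnegative matrix encoding the packing constraints.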
💬 Talks and posters
Conference and workshop talks
- Nov 2023: Bounding Geometric Penalties in First-order Riemannian Optimization, by Christophe Roux.
  Seminar "Modern Methods in Applied Stochastics and Nonparametric Statistics", Berlin
Research seminar talks
- Apr 2024: Bounding Geometric Penalties in Riemannian Optimization, by Christophe Roux.
  Research Seminar, Saarbrücken
📝 Publications and preprints
- Martínez-Rubio, D., Roux, C., and Pokutta, S. (2024). Convergence and Trade-offs in Riemannian Gradient Descent and Riemannian Proximal Point. Proceedings of International Conference on Machine Learning.
[arXiv]
[BibTeX]
- Martínez-Rubio, D., and Pokutta, S. (2023). Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties. Proceedings of Annual Conference on Learning Theory.
[URL]
[arXiv]
[poster]
[BibTeX]
- Criscitiello, C., Martínez-Rubio, D., and Boumal, N. (2023). Open Problem: Polynomial Linearly-convergent Method for G-convex Optimization? Proceedings of Annual Conference on Learning Theory.
[arXiv]
[BibTeX]
- Martínez-Rubio, D., Roux, C., Criscitiello, C., and Pokutta, S. (2023). Accelerated Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties.
[arXiv]
[BibTeX]
- Martínez-Rubio, D., Wirth, E., and Pokutta, S. (2023). Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond. Proceedings of Annual Conference on Learning Theory.
[arXiv]
[BibTeX]
- Scieur, D., Kerdreux, T., Martínez-Rubio, D., d'Aspremont, A., and Pokutta, S. (2023). Strong Convexity of Sets in Riemannian Manifolds.
[arXiv]
[BibTeX]
- Criado, F., Martínez-Rubio, D., and Pokutta, S. (2022). Fast Algorithms for Packing Proportional Fairness and Its Dual. Proceedings of Conference on Neural Information Processing Systems.
[arXiv]
[poster]
[BibTeX]