We research and develop algorithms for continuous optimization, with a particular emphasis on solving high-dimensional optimization problems with first-order methods. We prove convergence rates and guarantees for a variety of settings, such as:
Conditional gradient (a.k.a. Frank-Wolfe) methods, which perform constrained optimization by accessing the feasible set via a linear optimization oracle.
Proximal methods, which yield efficient algorithms for nonsmooth optimization and, in practice, also reshape smooth ones.
Online learning algorithms, which play a sequential game: in each round the algorithm makes a prediction and receives a loss, with the goal of minimizing the overall regret; via reductions, they have numerous applications to optimization.
Accelerated methods, which combine online learning tools with other optimization techniques to exploit and improve the convergence of simpler algorithms, often to optimality.
And block-iterative algorithms which leverage parallelism and distributed computation for faster algorithmic performance.
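To illustrate the linear-optimization-oracle access mentioned above, here is a minimal sketch of the vanilla Frank-Wolfe method in Python with the standard open-loop step size. The `grad`/`lmo` callables and the probability-simplex example are illustrative assumptions for this sketch, not code from our software (e.g., FrankWolfe.jl):

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=100):
    """Vanilla Frank-Wolfe with the open-loop step size 2/(t+2).

    grad: callable returning the gradient of the objective at x.
    lmo:  linear optimization oracle; returns argmin over the feasible set
          of <c, v> for a direction c.
    x0:   feasible starting point.
    """
    x = x0
    for t in range(steps):
        v = lmo(grad(x))                 # single call to the linear optimization oracle
        gamma = 2.0 / (t + 2.0)          # open-loop (agnostic) step size
        x = (1 - gamma) * x + gamma * v  # convex combination stays feasible
    return x

# Example: minimize ||x - b||^2 over the probability simplex.
# The LMO over the simplex returns the vertex with the smallest gradient entry.
b = np.array([0.1, 0.7, 0.2])
lmo = lambda c: np.eye(len(c))[np.argmin(c)]
grad = lambda x: 2 * (x - b)
x_star = frank_wolfe(grad, lmo, np.ones(3) / 3, steps=500)
```

Note that the iterate never needs a projection: it remains feasible by construction, since each step is a convex combination of feasible points, which is exactly what makes oracle access to the feasible set sufficient.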
Braun, G., Guzmán, C., and Pokutta, S. (2024). Corrigendum in “Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization Via Information Theory.” IEEE Transactions on Information Theory. DOI: 10.1109/TIT.2024.3357200[BibTeX]
Carderera, A., Besançon, M., and Pokutta, S. (2024). Scalable Frank-Wolfe on Generalized Self-concordant Functions Via Simple Steps. SIAM Journal on Optimization.[summary][slides][poster][code][BibTeX]
Martínez-Rubio, D., Roux, C., and Pokutta, S. (2024). Convergence and Trade-offs in Riemannian Gradient Descent and Riemannian Proximal Point.[arXiv][BibTeX]
Besançon, M., Garcia, J. D., Legat, B., and Sharma, A. (2023). Flexible Differentiable Optimization Via Model Transformations. INFORMS Journal on Computing. DOI: 10.1287/ijoc.2022.0283[URL][arXiv][BibTeX]
Wirth, E., Pena, J., and Pokutta, S. (2023). Accelerated Affine-invariant Convergence Rates of the Frank-Wolfe Algorithm with Open-loop Step-sizes.[arXiv][BibTeX]
Kreimeier, T., Pokutta, S., Walther, A., and Woodstock, Z. (2023). On a Frank-Wolfe Approach for Abs-smooth Functions. Optimization Methods and Software.[arXiv][BibTeX]
Criscitiello, C., Martínez-Rubio, D., and Boumal, N. (2023). Open Problem: Polynomial Linearly-convergent Method for G-convex Optimization? Proceedings of Annual Workshop on Computational Learning Theory.[arXiv][BibTeX]
Deza, A., Onn, S., Pokutta, S., and Pournin, L. (2023). Kissing Polytopes.[arXiv][BibTeX]
Martínez-Rubio, D., Roux, C., Criscitiello, C., and Pokutta, S. (2023). Accelerated Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties. Proceedings of Optimization for Machine Learning (NeurIPS Workshop OPT 2023).[arXiv][BibTeX]
Martínez-Rubio, D., Wirth, E., and Pokutta, S. (2023). Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond. Proceedings of Annual Workshop on Computational Learning Theory.[arXiv][BibTeX]
Scieur, D., Kerdreux, T., Martínez-Rubio, D., d’Aspremont, A., and Pokutta, S. (2023). Strong Convexity of Sets in Riemannian Manifolds.[arXiv][BibTeX]
Woodstock, Z., and Pokutta, S. (2023). Splitting the Conditional Gradient Algorithm.[arXiv][BibTeX]
Wirth, E., Kerdreux, T., and Pokutta, S. (2023). Acceleration of Frank-Wolfe Algorithms with Open Loop Step-sizes. Proceedings of International Conference on Artificial Intelligence and Statistics.[arXiv][BibTeX]
Wirth, E., Kera, H., and Pokutta, S. (2023). Approximate Vanishing Ideal Computations at Scale. Proceedings of International Conference on Learning Representations.[arXiv][slides][BibTeX]
Wilken, S. E., Besançon, M., Kratochvíl, M., Kuate, C. A. F., Trefois, C., Gu, W., and Ebenhöh, O. (2022). Interrogating the Effect of Enzyme Kinetics on Metabolism Using Differentiable Constraint-based Models. Metabolic Engineering.[BibTeX]
Búi, M. N., Combettes, P. L., and Woodstock, Z. (2022). Block-activated Algorithms for Multicomponent Fully Nonsmooth Minimization. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5428–5432. DOI: 10.1109/ICASSP43922.2022.9747479[URL][BibTeX]
Besançon, M., Carderera, A., and Pokutta, S. (2022). FrankWolfe.jl: A High-Performance and Flexible Toolbox for Frank-Wolfe Algorithms and Conditional Gradients. INFORMS Journal on Computing.[arXiv][summary][slides][code][BibTeX]
Braun, G., Pokutta, S., and Weismantel, R. (2022). Alternating Linear Minimization: Revisiting von Neumann’s Alternating Projections.[arXiv][slides][video][BibTeX]
Criado, F., Martínez-Rubio, D., and Pokutta, S. (2022). Fast Algorithms for Packing Proportional Fairness and Its Dual. Proceedings of Conference on Neural Information Processing Systems.[arXiv][poster][BibTeX]
Braun, G., Carderera, A., Combettes, C., Hassani, H., Karbasi, A., Mokhtari, A., and Pokutta, S. (2022). Conditional Gradient Methods.[arXiv][BibTeX]
Hunkenschröder, C., Pokutta, S., and Weismantel, R. (2022). Optimizing a Low-dimensional Convex Function Over a High-dimensional Cube. SIAM Journal on Optimization.[arXiv][BibTeX]
Macdonald, J., Besançon, M., and Pokutta, S. (2022). Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings. Proceedings of International Conference on Machine Learning.[arXiv][poster][video][BibTeX]
Martínez-Rubio, D., and Pokutta, S. (2022). Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties. Proceedings of Optimization for Machine Learning (NeurIPS Workshop OPT 2022).[URL][arXiv][poster][BibTeX]
Wirth, E., and Pokutta, S. (2022). Conditional Gradients for the Approximately Vanishing Ideal. Proceedings of International Conference on Artificial Intelligence and Statistics.[arXiv][summary][poster][code][BibTeX]
Kerdreux, T., Roux, C., d’Aspremont, A., and Pokutta, S. (2021). Linear Bandits on Uniformly Convex Sets. Journal of Machine Learning Research, 22(284), 1–23.[URL][arXiv][summary][BibTeX]
Braun, G., and Pokutta, S. (2021). Dual Prices for Frank–Wolfe Algorithms.[arXiv][BibTeX]
Roux, C., Wirth, E., Pokutta, S., and Kerdreux, T. (2021). Efficient Online-bandit Strategies for Minimax Learning Problems.[arXiv][BibTeX]
Braun, G., Pokutta, S., and Zink, D. (2019). Affine Reductions for LPs and SDPs. Mathematical Programming, 173, 281–312. DOI: 10.1007/s10107-017-1221-9[arXiv][BibTeX]
Braun, G., Pokutta, S., and Zink, D. (2019). Lazifying Conditional Gradient Algorithms. The Journal of Machine Learning Research, 20(71), 1–42.[URL][arXiv][BibTeX]
Braun, G., Pokutta, S., Tu, D., and Wright, S. (2019). Blended Conditional Gradients: the Unconditioning of Conditional Gradients. Proceedings of International Conference on Machine Learning, 97, 735–743.[URL][arXiv][summary][slides][poster][code][BibTeX]
Braun, G., Pokutta, S., and Roy, A. (2018). Strong Reductions for Extended Formulations. Mathematical Programming, 172, 591–620. DOI: 10.1007/s10107-018-1316-y[arXiv][BibTeX]
Braun, G., Brown-Cohen, J., Huq, A., Pokutta, S., Raghavendra, P., Roy, A., Weitz, B., and Zink, D. (2017). The Matching Problem Has No Small Symmetric SDP. Mathematical Programming, 165, 643–662. DOI: 10.1007/s10107-016-1098-z[arXiv][BibTeX]
Braun, G., Guzmán, C., and Pokutta, S. (2017). Unifying Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization. IEEE Transactions on Information Theory, 63(7), 4709–4724. DOI: 10.1109/TIT.2017.2701343[arXiv][BibTeX]
Braun, G., Jain, R., Lee, T., and Pokutta, S. (2017). Information-theoretic Approximations of the Nonnegative Rank. Computational Complexity, 26, 147–197. DOI: 10.1007/s00037-016-0125-z[URL][BibTeX]
Braun, G., Fiorini, S., and Pokutta, S. (2016). Average Case Polyhedral Complexity of the Maximum Stable Set Problem. Mathematical Programming, 160(1), 407–431. DOI: 10.1007/s10107-016-0989-3[arXiv][BibTeX]
Braun, G., and Pokutta, S. (2016). Common Information and Unique Disjointness. Algorithmica, 76(3), 597–629. DOI: 10.1007/s00453-016-0132-0[URL][BibTeX]
Braun, G., and Pokutta, S. (2016). An Efficient High-probability Algorithm for Linear Bandits.[arXiv][BibTeX]
Braun, G., Fiorini, S., Pokutta, S., and Steurer, D. (2015). Approximation Limits of Linear Programs (beyond Hierarchies). Mathematics of Operations Research, 40(3), 756–772. DOI: 10.1287/moor.2014.0694[arXiv][BibTeX]
Braun, G., Pokutta, S., and Xie, Y. (2015). Info-greedy Sequential Adaptive Compressed Sensing. IEEE Journal of Selected Topics in Signal Processing, 9(4), 601–611. DOI: 10.1109/JSTSP.2015.2400428[arXiv][BibTeX]
Braun, G., and Pokutta, S. (2015). The Matching Polytope Does Not Admit Fully-polynomial Size Relaxation Schemes. Proceedings of Symposium on Discrete Algorithms, 837–846. DOI: 10.1137/1.9781611973730.57[arXiv][BibTeX]
Braun, G., and Pokutta, S. (2012). An Algebraic Approach to Symmetric Extended Formulations. Proceedings of International Symposium on Combinatorial Optimization, 7422, 141–152. DOI: 10.1007/978-3-642-32147-4_14[arXiv][BibTeX]