## What we are interested in

Machine learning has revolutionized the analysis of data and has led to new insights and applications. However, one of the key shortcomings of the field is its reliance on black-box models that lack explainability. A central aim of our research is to develop interpretable machine learning algorithms, using techniques such as adversarial optimization and sparsity. We also study the interplay between learning and optimization: we develop new algorithms for training machine learning models, and we exploit machine learning models to improve the optimization process itself. Finally, we aim to identify the theoretical reasons behind the success of such models.
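Sparsity is one concrete route to interpretability: a model that uses only a handful of features is easier to inspect. As a minimal illustrative sketch (our own toy example, not the group's implementation), the Frank-Wolfe algorithm, which appears in several of the publications below, produces sparse iterates when run over an L1 ball, since each iterate is a convex combination of a few vertices:

```python
import numpy as np

# Toy sparse regression: min ||Ax - b||^2  subject to  ||x||_1 <= r.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.0, -1.0]           # ground truth uses only two features
b = A @ x_true

r = 2.0                                 # radius of the L1 ball
x = np.zeros(20)
for t in range(200):
    grad = 2 * A.T @ (A @ x - b)
    i = np.argmax(np.abs(grad))         # linear minimization oracle on the L1 ball
    s = np.zeros(20)
    s[i] = -r * np.sign(grad[i])        # best vertex of the ball for this gradient
    gamma = 2 / (t + 2)                 # standard Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s     # convex combination keeps x feasible

print("nonzero coordinates:", np.count_nonzero(np.abs(x) > 1e-6))
```

Because every update mixes in a single vertex of the L1 ball, the support of `x` grows only by the coordinates the oracle actually selects, which is what makes the resulting model sparse and hence easier to interpret.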

## Researchers

## Projects

- Adaptive Algorithms Through Machine Learning: Exploiting Interactions in Integer Programming (MATH+ EF1-9)
- Expanding Merlin-Arthur Classifiers: Interpretable Neural Networks Through Interactive Proof Systems (MATH+ EF1-67)
- Learning to Schedule Heuristics in IP
- Globally Optimal Neural Network Training (SPP 2298, project number 463910157)
- AI-Based High-Resolution Forest Monitoring

## Publications

- Aigner, K.-M., Bärmann, A., Braun, K., Liers, F., Pokutta, S., Schneider, O., Sharma, K., and Tschuppik, S. (2023). Data-driven Distributionally Robust Optimization Over Time.
*INFORMS Journal on Optimization*. [arXiv] [BibTeX]

- Kruser, J., Sharma, K., Holl, J., and Nohadani, O. (2023). Identifying Patterns of Medical Intervention in Acute Respiratory Failure: A Retrospective Observational Study.
*Critical Care Explorations*. [BibTeX]

- Thuerck, D., Sofranac, B., Pfetsch, M., and Pokutta, S. (2023).
*Learning Cuts Via Enumeration Oracles*. [arXiv] [BibTeX]

- Bienstock, D., Muñoz, G., and Pokutta, S. (2023). Principled Deep Neural Network Training Through Linear Programming.
*Discrete Optimization*. [URL] [arXiv] [summary] [BibTeX]

- Zimmer, M., Spiegel, C., and Pokutta, S. (2023).
*Sparse Model Soups: A Recipe for Improved Pruning Via Model Averaging*. [arXiv] [BibTeX]

- Zimmer, M., Spiegel, C., and Pokutta, S. (2023). How I Learned to Stop Worrying and Love Retraining.
*Proceedings of International Conference on Learning Representations*. [arXiv] [code] [BibTeX]

- Deza, A., Pokutta, S., and Pournin, L. (2022).
*The Complexity of Geometric Scaling*. [arXiv] [BibTeX]

- Kossen, T., Hirzel, M. A., Madai, V. I., Boenisch, F., Hennemuth, A., Hildebrand, K., Pokutta, S., Sharma, K., Hilbert, A., Sobesky, J., Galinovic, I., Khalil, A. A., Fiebach, J. B., and Frey, D. (2022). Towards Sharing Brain Images: Differentially Private TOF-MRA Images with Segmentation Labels Using Generative Adversarial Networks.
*Frontiers in Artificial Intelligence*. DOI: 10.3389/frai.2022.813842 [BibTeX]

- Macdonald, J., Besançon, M., and Pokutta, S. (2022). Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings.
*Proceedings of International Conference on Machine Learning*. [arXiv] [poster] [video] [BibTeX]

- Nohadani, O., and Sharma, K. (2022). Optimization Under Connected Uncertainty.
*INFORMS Journal on Optimization*. DOI: 10.1287/ijoo.2021.0067 [BibTeX]

- Tsuji, K., Tanaka, K., and Pokutta, S. (2022). Pairwise Conditional Gradients without Swap Steps and Sparser Kernel Herding.
*Proceedings of International Conference on Machine Learning*. [arXiv] [summary] [slides] [code] [video] [BibTeX]

- Wäldchen, S., Huber, F., and Pokutta, S. (2022). Training Characteristic Functions with Reinforcement Learning: XAI-methods Play Connect Four.
*Proceedings of International Conference on Machine Learning*. [arXiv] [poster] [video] [BibTeX]

- Wäldchen, S., Sharma, K., Zimmer, M., Turan, B., and Pokutta, S. (2022).
*Merlin-Arthur Classifiers: Formal Interpretability with Interactive Black Boxes*. [arXiv] [BibTeX]

- Zimmer, M., Spiegel, C., and Pokutta, S. (2022).
*Compression-aware Training of Neural Networks Using Frank-Wolfe*. [arXiv] [BibTeX]

- Ziemke, T., Sering, L., Vargas Koch, L., Zimmer, M., Nagel, K., and Skutella, M. (2021). Flows Over Time As Continuous Limits of Packet-based Network Simulations.
*Transportation Research Procedia*, *52*, 123–130. DOI: 10.1016/j.trpro.2021.01.014 [URL] [BibTeX]

- Combettes, C., Spiegel, C., and Pokutta, S. (2020).
*Projection-free Adaptive Gradients for Large-scale Optimization*. [arXiv] [summary] [code] [BibTeX]

- Pokutta, S., Spiegel, C., and Zimmer, M. (2020).
*Deep Neural Network Training with Frank-Wolfe*. [arXiv] [summary] [code] [BibTeX]

- Goess, A., Martin, A., Pokutta, S., and Sharma, K.
*Norm-induced Cuts: Optimization with Lipschitzian Black-box Functions*. [URL] [BibTeX]