What we are interested in
Machine learning has revolutionized data analysis and has led to new insights and applications. However, a key shortcoming of the field is its reliance on black-box models that lack explainability. A central aim of our research is to develop interpretable machine learning algorithms, using techniques such as adversarial optimization and sparsity. We also focus on the interplay between learning and optimization: we develop new algorithms for training machine learning models, and we exploit machine learning models to improve the optimization process itself. Finally, we aim to identify the theoretical reasons behind the success of such models.
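As a small illustration of how sparsity can yield interpretability (a sketch for this page, not the group's actual code), the example below fits a linear model with an L1 penalty via proximal gradient descent (ISTA). The L1 term drives most coefficients to exactly zero, so the few surviving features directly explain the prediction; the data and parameter choices here are synthetic and purely illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam=0.05, steps=500):
    """Minimize 0.5/n * ||Xw - y||^2 + lam * ||w||_1 with ISTA."""
    n, d = X.shape
    w = np.zeros(d)
    # Step size = 1 / Lipschitz constant of the smooth part.
    step = n / (np.linalg.norm(X, 2) ** 2)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[[0, 3, 7]] = [2.0, -1.5, 1.0]        # only three features matter
y = X @ true_w + 0.05 * rng.normal(size=200)

w = lasso_ista(X, y)
print("nonzero coefficients:", np.flatnonzero(w))
```

The learned weight vector is sparse: nearly all coefficients are exactly zero, and the model's behavior can be read off from the handful of features it retains.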
Members
Kartikey Sharma
kartikey.sharma (at) zib.de
Sai Ganesh Nagarajan
nagarajan (at) zib.de
Adnan Mahmud
mahmud (at) zib.de
Berkant Turan
turan (at) zib.de
Christoph Graczyk
graczyk (at) zib.de
Christophe Roux
roux (at) zib.de
Felix Prause
prause (at) zib.de
Ingo Meise
meise (at) zib.de
Kartikeya Chitranshi
chitranshi (at) zib.de
Konrad Mundinger
mundinger (at) zib.de
Max Zimmer
zimmer (at) zib.de
Megi Andoni
andoni (at) zib.de
Moritz Wagner
wagner (at) zib.de
Nico Pelleriti
pelleriti (at) zib.de
Shpresim Sadiku
sadiku (at) zib.de
Projects
- Adaptive Algorithms Through Machine Learning: Exploiting Interactions in Integer Programming (MATH+ EF1-9)
- Expanding Merlin-Arthur Classifiers: Interpretable Neural Networks Through Interactive Proof Systems (MATH+ EF1-67)
- Learning to Schedule Heuristics in IP
- Globally Optimal Neural Network Training (SPP 2298, project number 463910157)
- AI-Based High-Resolution Forest Monitoring
Publications
- Deza, A., Pokutta, S., and Pournin, L. (2024). The Complexity of Geometric Scaling. Operations Research Letters, 52.
DOI: 10.1016/j.orl.2023.11.010
[arXiv]
[BibTeX]
- Göß, A., Martin, A., Pokutta, S., and Sharma, K. (2024). Norm-induced Cuts: Optimization with Lipschitzian Black-box Functions.
[URL]
[arXiv]
[BibTeX]
- Wäldchen, S., Sharma, K., Zimmer, M., Turan, B., and Pokutta, S. (2024). Merlin-Arthur Classifiers: Formal Interpretability with Interactive Black Boxes. Proceedings of International Conference on Artificial Intelligence and Statistics.
[arXiv]
[BibTeX]
- Zimmer, M., Spiegel, C., and Pokutta, S. (2024). Sparse Model Soups: A Recipe for Improved Pruning Via Model Averaging. Proceedings of International Conference on Learning Representations.
[arXiv]
[BibTeX]
- Aigner, K.-M., Bärmann, A., Braun, K., Liers, F., Pokutta, S., Schneider, O., Sharma, K., and Tschuppik, S. (2023). Data-driven Distributionally Robust Optimization Over Time. INFORMS Journal on Optimization.
[arXiv]
[BibTeX]
- Kruser, J., Sharma, K., Holl, J., and Nohadani, O. (2023). Identifying Patterns of Medical Intervention in Acute Respiratory Failure: A Retrospective Observational Study. Critical Care Explorations.
[BibTeX]
- Thuerck, D., Sofranac, B., Pfetsch, M., and Pokutta, S. (2023). Learning Cuts Via Enumeration Oracles. Proceedings of Conference on Neural Information Processing Systems.
[arXiv]
[BibTeX]
- Bienstock, D., Muñoz, G., and Pokutta, S. (2023). Principled Deep Neural Network Training Through Linear Programming. Discrete Optimization.
[URL]
[arXiv]
[summary]
[BibTeX]
- Zimmer, M., Andoni, M., Spiegel, C., and Pokutta, S. (2023). PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs.
[arXiv]
[code]
[BibTeX]
- Zimmer, M., Spiegel, C., and Pokutta, S. (2023). How I Learned to Stop Worrying and Love Retraining. Proceedings of International Conference on Learning Representations.
[arXiv]
[code]
[BibTeX]
- Kossen, T., Hirzel, M. A., Madai, V. I., Boenisch, F., Hennemuth, A., Hildebrand, K., Pokutta, S., Sharma, K., Hilbert, A., Sobesky, J., Galinovic, I., Khalil, A. A., Fiebach, J. B., and Frey, D. (2022). Towards Sharing Brain Images: Differentially Private TOF-MRA Images with Segmentation Labels Using Generative Adversarial Networks. Frontiers in Artificial Intelligence.
DOI: 10.3389/frai.2022.813842
[BibTeX]
- Macdonald, J., Besançon, M., and Pokutta, S. (2022). Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings. Proceedings of International Conference on Machine Learning.
[arXiv]
[poster]
[video]
[BibTeX]
- Nohadani, O., and Sharma, K. (2022). Optimization Under Connected Uncertainty. INFORMS Journal on Optimization.
DOI: 10.1287/ijoo.2021.0067
[BibTeX]
- Tsuji, K., Tanaka, K., and Pokutta, S. (2022). Pairwise Conditional Gradients without Swap Steps and Sparser Kernel Herding. Proceedings of International Conference on Machine Learning.
[arXiv]
[summary]
[slides]
[code]
[video]
[BibTeX]
- Wäldchen, S., Huber, F., and Pokutta, S. (2022). Training Characteristic Functions with Reinforcement Learning: XAI-methods Play Connect Four. Proceedings of International Conference on Machine Learning.
[arXiv]
[poster]
[video]
[BibTeX]
- Zimmer, M., Spiegel, C., and Pokutta, S. (2022). Compression-aware Training of Neural Networks Using Frank-Wolfe.
[arXiv]
[BibTeX]
- Ziemke, T., Sering, L., Vargas Koch, L., Zimmer, M., Nagel, K., and Skutella, M. (2021). Flows Over Time As Continuous Limits of Packet-based Network Simulations. Transportation Research Procedia, 52, 123–130.
DOI: 10.1016/j.trpro.2021.01.014
[URL]
[BibTeX]
- Combettes, C., Spiegel, C., and Pokutta, S. (2020). Projection-free Adaptive Gradients for Large-scale Optimization.
[arXiv]
[summary]
[code]
[BibTeX]
- Pokutta, S., Spiegel, C., and Zimmer, M. (2020). Deep Neural Network Training with Frank-Wolfe.
[arXiv]
[summary]
[code]
[BibTeX]