
Berkant Turan
I am working on improving the interpretability of AI, focusing on the Merlin-Arthur classifier. This model uses two feature selectors, one cooperative and one adversarial, to generate saliency maps with provable quality guarantees. My objectives are to fine-tune its architecture, explore its adversarial components, and evaluate its effectiveness on complex datasets.
📬 Contact
- office
- Room 3024 at ZIB
- turan (at) zib.de
- homepage
- b-turan.github.io
- languages
- German, English, and Turkish
🎓 Curriculum vitae
- since 2023
- Member of BMS
- since 2022
- Researcher at ZIB
- 2022
- Research Assistant at ZIB
- May 2022
- M.Sc. in Scientific Computing at TUB
- Oct 2018
- B.Sc. in Engineering Science at TUB
📝 Publications and preprints
Conference proceedings
- Głuch, G., Turan, B., Nagarajan, S. G., and Pokutta, S. (2025). The Good, the Bad and the Ugly: Meta-Analysis of Watermarks, Transferable Attacks and Adversarial Defenses. Proceedings of the Conference on Neural Information Processing Systems, 38.
[arXiv]
[BibTeX]
@inproceedings{2024_GrzegorzTuranNagarajanPokutta_Watermarksadversarialdefenses:1,
  year = {2025},
  booktitle = {Proceedings of the Conference on Neural Information Processing Systems},
  month = sep,
  volume = {38},
  archiveprefix = {arXiv},
  eprint = {2410.08864},
  arxiv = {arXiv:2410.08864},
  primaryclass = {cs.LG},
  author = {Głuch, Grzegorz and Turan, Berkant and Nagarajan, Sai Ganesh and Pokutta, Sebastian},
  title = {The Good, the Bad and the Ugly: Meta-Analysis of Watermarks, Transferable Attacks and Adversarial Defenses}
}
- Turan, B., Asadulla, S., Steinmann, D., Stammer, W., and Pokutta, S. (2025, July 10). Neural Concept Verifier: Scaling Prover-Verifier Games Via Concept Encodings. Proceedings of the Actionable Interpretability Workshop at ICML 2025.
DOI: 10.48550/arXiv.2507.07532
[arXiv]
[BibTeX]
@inproceedings{2025_TuranEtAl_Neuralconceptverifier_2507-07532,
  year = {2025},
  booktitle = {Proceedings of the Actionable Interpretability Workshop at ICML 2025},
  month = jul,
  doi = {10.48550/arXiv.2507.07532},
  archiveprefix = {arXiv},
  eprint = {2507.07532},
  arxiv = {arXiv:2507.07532},
  primaryclass = {cs.LG},
  author = {Turan, Berkant and Asadulla, Suhrab and Steinmann, David and Stammer, Wolfgang and Pokutta, Sebastian},
  title = {Neural Concept Verifier: Scaling Prover-Verifier Games Via Concept Encodings},
  date = {2025-07-10}
}
- Pauls, J., Zimmer, M., Turan, B., Saatchi, S., Ciais, P., Pokutta, S., and Gieseke, F. (2025). Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation. Proceedings of the International Conference on Machine Learning, 267.
DOI: 10.48550/arXiv.2501.19328
[arXiv]
[BibTeX]
@inproceedings{2025_JanEtAl_Temporalcanopyheight,
  year = {2025},
  booktitle = {Proceedings of the International Conference on Machine Learning},
  month = may,
  volume = {267},
  doi = {10.48550/arXiv.2501.19328},
  archiveprefix = {arXiv},
  eprint = {2501.19328},
  arxiv = {arXiv:2501.19328},
  primaryclass = {cs.LG},
  author = {Pauls, Jan and Zimmer, Max and Turan, Berkant and Saatchi, Sassan and Ciais, Philippe and Pokutta, Sebastian and Gieseke, Fabian},
  title = {Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation},
  date = {2025-01-31}
}
- Głuch, G., Turan, B., Nagarajan, S. G., and Pokutta, S. (2025, March). The Good, the Bad and the Ugly: Meta-Analysis of Watermarks, Transferable Attacks and Adversarial Defenses. Proceedings of the ICLR 2025: Workshop on GenAI Watermarking @ ICLR 2025.
[arXiv]
[BibTeX]
@inproceedings{2024_GrzegorzTuranNagarajanPokutta_Watermarksadversarialdefenses,
  year = {2025},
  booktitle = {Proceedings of the ICLR 2025: Workshop on GenAI Watermarking @ ICLR 2025},
  month = mar,
  archiveprefix = {arXiv},
  eprint = {2410.08864},
  arxiv = {arXiv:2410.08864},
  primaryclass = {cs.LG},
  author = {Głuch, Grzegorz and Turan, Berkant and Nagarajan, Sai Ganesh and Pokutta, Sebastian},
  title = {The Good, the Bad and the Ugly: Meta-Analysis of Watermarks, Transferable Attacks and Adversarial Defenses}
}
- Wäldchen, S., Sharma, K., Turan, B., Zimmer, M., and Pokutta, S. (2024, January). Interpretability Guarantees with Merlin-Arthur Classifiers. Proceedings of the International Conference on Artificial Intelligence and Statistics.
[arXiv]
[BibTeX]
@inproceedings{2022_WaeldchenEtAl_Interpretabilityguarantees,
  year = {2024},
  booktitle = {Proceedings of the International Conference on Artificial Intelligence and Statistics},
  month = jan,
  archiveprefix = {arXiv},
  eprint = {2206.00759},
  arxiv = {arXiv:2206.00759},
  primaryclass = {cs.LG},
  author = {Wäldchen, Stephan and Sharma, Kartikey and Turan, Berkant and Zimmer, Max and Pokutta, Sebastian},
  title = {Interpretability Guarantees with Merlin-Arthur Classifiers}
}
🔬 Projects
This project develops advanced AI methods to monitor forests using satellite imagery, including radar and optical data. It creates scalable techniques for detailed, high-resolution global maps to monitor canopy height, biomass, and track forest disturbances.
Existing approaches for interpreting Neural Network classifiers that highlight features relevant for a decision are based solely on heuristics. This project introduces a theory that bounds feature quality without assumptions on the classifier model by relating classification to Interactive Proof Systems.
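The prover-verifier idea behind this project can be illustrated with a toy sketch. Everything here is a hedged assumption for illustration, not the paper's architecture: the prover names (an honest "Merlin" and an adversarial "Morgana"), the single-feature masks, and the linear verifier are all made up for this example. Each prover reveals a small feature subset, and the verifier "Arthur" must classify from that subset alone.

```python
# Conceptual sketch of a Merlin-Arthur style interaction (illustrative only;
# all names and the linear verifier are assumptions, not the authors' model).
from itertools import combinations

def arthur(mask, weights):
    """Verifier: classify from the revealed feature subset (a tuple of names)."""
    score = sum(weights.get(f, 0.0) for f in mask)
    return 1 if score > 0 else 0

def merlin(features, label, weights, k=1):
    """Honest prover: reveal the k features that best support the true label."""
    sign = 1 if label == 1 else -1
    return max(combinations(sorted(features), k),
               key=lambda m: sign * sum(weights.get(f, 0.0) for f in m))

def morgana(features, label, weights, k=1):
    """Adversarial prover: reveal the k features most likely to mislead Arthur."""
    sign = -1 if label == 1 else 1
    return max(combinations(sorted(features), k),
               key=lambda m: sign * sum(weights.get(f, 0.0) for f in m))

# Toy data: each example is (feature set, label); weights are Arthur's "model".
data = [({"stripes", "tail"}, 1), ({"spots", "tail"}, 0)]
weights = {"stripes": 1.0, "spots": -1.0, "tail": 0.0}

# Completeness: Arthur is correct when Merlin selects the features.
completeness = all(arthur(merlin(x, y, weights), weights) == y for x, y in data)
# Soundness: Arthur should stay correct even against Morgana.
soundness = all(arthur(morgana(x, y, weights), weights) == y for x, y in data)
print(completeness, soundness)  # → True False
```

In this toy run, completeness holds, but Morgana fools Arthur by revealing only the uninformative "tail" feature; this is exactly the failure mode that training the verifier against both a cooperative and an adversarial selector is meant to rule out, yielding bounds on feature quality rather than heuristic saliency.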
💬 Talks and posters
Research seminar talks
- Oct 2024
- Interpretability Guarantees with Merlin-Arthur Classifiers
Research Seminar of the Machine Learning and Data Engineering Group at WWU, Münster
Poster presentations
- Jul 2025
- Neural Concept Verifier: Scaling Prover-Verifier Games Via Concept Encodings
Actionable Interpretability Workshop at ICML 2025 (AIW), Vancouver
- Jul 2025
- Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation
42nd International Conference on Machine Learning (ICML), Vancouver
- May 2025
- The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses
7th DOxML Conference, Kyoto
- Apr 2025
- The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses
ICLR 2025: Workshop on GenAI Watermarking @ ICLR 2025 (ICLR), Singapore
- Jul 2024
- Unified Taxonomy in AI Safety: Watermarks, Adversarial Defenses, and Transferable Attacks
TF2M Workshop, Vienna
- May 2024
- Interpretability Guarantees with Merlin-Arthur Classifiers
27th AISTATS Conference, València
- Jul 2023
- Extending Merlin-Arthur Classifiers for Improved Interpretability
The 1st World Conference on eXplainable Artificial Intelligence
📅 Event Attendance
- Jul 2025
- Actionable Interpretability Workshop at ICML 2025 (AIW), Vancouver
- Jul 2025
- 42nd International Conference on Machine Learning (ICML), Vancouver
- May 2025
- 7th DOxML Conference, Kyoto
- May 2025
- 28th AISTATS Conference, Phuket
- Apr 2025
- ICLR 2025: Workshop on GenAI Watermarking @ ICLR 2025 (ICLR), Singapore
- Apr 2025
- 13th International Conference on Learning Representations (ICLR), Singapore
- Jul 2024
- 41st International Conference on Machine Learning (ICML), Vienna
- Jul 2024
- TF2M Workshop, Vienna
- May 2024
- 27th AISTATS Conference, València
- Sep 2023
- 5th Computational Optimization at Work (CO@Work), Berlin
👨‍🏫 Teaching
- winter 2019
- Tutor for Statics and Elementary Strength of Materials at TUB
- summer 2018
- Tutor for Energy-Based Methods in Mechanics at TUB
- summer 2018
- Tutor for Continuum Mechanics at TUB
- winter 2018
- Tutor for Kinematics and Dynamics at TUB
- summer 2017
- Tutor for Statics and Elementary Strength of Materials at TUB
- winter 2017
- Tutor for Energy-Based Methods in Mechanics at TUB
- winter 2017
- Tutor for Continuum Mechanics at TUB
- summer 2016
- Tutor for Kinematics and Dynamics at TUB
- summer 2015
- Tutor for Statics and Elementary Strength of Materials at TUB
📝 Organization and outreach
- Jul 2025
- Lange Nacht der Wissenschaften at Zuse Institute Berlin
- Jun 2024
- Lange Nacht der Wissenschaften at Zuse Institute Berlin
- 2024
- Support in the organization of ZIB40 events
- Jun 2023
- Lange Nacht der Wissenschaften at Zuse Institute Berlin