Florian Mai
Welcome
I am a Junior Research Group Leader in the CAISA group at the University of Bonn, as part of the Lamarr Institute for Machine Learning and Artificial Intelligence.
My current research focuses on AI alignment and safety issues, exploring how to ensure that current and future advanced AI systems are beneficial and safe for humanity.
Read more about my research and background →
News
Registration is now open for the International Conference on Large-Scale AI Risks, taking place 26–28 May 2025 in Leuven, Belgium. I helped organize this event and look forward to seeing you there!
I am participating in a panel discussion on trustworthy AI at the Deutsches Museum Bonn.
Our paper “Superalignment with Dynamic Human Values” was accepted at the BiAlign Workshop at ICLR 2025!
I started as a Junior Research Group Leader at the University of Bonn as part of the CAISA lab headed by Prof. Lucie Flek. My research will focus on AI safety topics like value alignment, and on reasoning and planning approaches for LLMs.
I am starting a short-term scholarship at the CAISA lab at the University of Bonn, funded through the DAAD AInet program! Our project will focus on drafting a new approach to the [scalable oversight problem](https://aisafety.info/questions/8EL8/What-is-scalable-oversight).
Selected Publications
ICLR 2025 Workshop on Bidirectional Human-AI Alignment (BiAlign), 2025
This paper sketches a roadmap for training a superhuman reasoning model that decomposes complex tasks into subtasks amenable to human-level guidance, addressing scalable oversight and dynamic human values in AI alignment.
COLM, 2024
We propose a method to learn planning for language modeling using unlabeled data.
ACL, 2023
We propose an efficient all-MLP architecture with the same inductive biases as Transformers.