This workshop brings together leading researchers to investigate the safety and security vulnerabilities of large language models (LLMs). As the threat landscape evolves—driven by ever-larger model scales, ubiquitous deployment, and increasingly agentic behaviour—there is a pressing need for principled mitigation strategies grounded in empirical evidence. By providing a focused forum for rigorous discussion and collaboration, the workshop aims to sharpen our collective understanding of emerging risks and to catalyse robust, technically sound defences.
The workshop will last 1.5 hours and will consist of a keynote followed by a poster session combined with networking among participants.
Expected discussion themes include:
TBA
TBA
We invite posters for work previously accepted at one of the following venues or at an associated LLM Safety/Security workshop (Submission link):
First come, first served: if the number of submitted posters exceeds the venue's capacity, posters submitted earlier will be given priority.
October 28, 2025
October 31, 2025
December 2, 2025
For questions about the workshop, please contact egor dot zverev at ist.ac.at
ELLIS UnConference LLM Safety and Security Workshop