Swabha Swayamdipta
Gabilan Assistant Professor • USC Viterbi CS • Associate Director of USC Center for AI and Society • Amazon Visiting Academic • USC NLP

My goal is to design robust and reliable frameworks that allow AI systems to be broadly and safely deployed, especially in applications with societal implications. This corresponds to three directions:
- Safety-Critical, Robust, and Reliable Frameworks for Evaluation:
- What cannot be measured cannot be improved. How can we reliably compare the generative capabilities of language models, and ensure our assessments are robust? How can we tell whether measured performance translates to application safety, especially when there are societal implications? How can we evaluate new capabilities in LLMs when we do not necessarily know the correct answer?
- Understanding the Mechanisms that Drive Language Technologies:
- Even the most reliable evaluation may not reveal much about the mechanisms driving powerful yet opaque models. What do model geometries reveal about the processes underlying our models, and how can we improve models through different design choices? Are models, by design, limited to making certain choices that can uniquely identify them?
-
- Human and AI Collaboration:
- AI technologies are designed by humans and for humans; the future of AI involves cooperation and collaboration with humans. How can we tell when a general-purpose model will reliably serve the custom utility of a human user? Where can these technologies complement human capabilities, and where can they not?
These challenges require novel and creative techniques for redesigning generative evaluation to keep pace with model performance, bringing together a broad array of empirical research with the theoretical fundamentals underlying language models.
news
Mar 31, 2025 | Co-organizing The Futures of Language Models and Transformers this week with Sasha Rush, as part of the Special Program on LLMs (Part 2). |
Jan 14, 2025 | I’ll be spending most of spring at the Simons Institute, attending the Special Program on LLMs (Part 2). Come say hi if you are in Berkeley! |
Jan 06, 2025 | Starting a new 20% role as an Amazon Visiting Academic in AWS Bedrock. |
Dec 18, 2024 | DILL lab undergrads, Aryan Gulati and Ryan Wang received CRA Outstanding Undergraduate Researcher Awards! |
Nov 14, 2024 | We won an outstanding paper award at EMNLP 2024 for our work on OATH frames! |
selected publications
See here for a full list.
- Findings of EMNLP, 2024
- COLM
- Proceedings of ACL, 2024
- ICLR
- EMNLP
- Findings of ACL, 2023
- ACL
- Findings of EMNLP, 2022
- Findings of EMNLP, 2022
- Proc. of NAACL, 2022
- Proc. of NAACL, 2022
- Proc. of ICML, 2022
- NeurIPS
- Proc. of ACL, 2021
- Proc. of EACL, 2021
- Proc. of EMNLP, 2020
- ACL
- ICML