I am a postdoc in the BauLab at Northeastern University and a member of the NSF National Deep Inference Fabric (NDIF) team, working on open-source interfaces for interpretability research. Previously, I was a PhD student in the GroNLP Lab at the University of Groningen and part of the Dutch InDeep consortium, where I wrote a thesis on actionable interpretability for machine translation. Before that, I was an applied scientist intern at AWS AI Labs NYC, a research scientist at Aindo, and a founding member of the AI Student Society in Trieste.
My research aims to bridge the gap between advances in interpretability research on large language models (LLMs) and downstream applications that improve the transparency and trustworthiness of such models. I am also very passionate about open-source collaboration, and I believe that good tools play a fundamental role in scientific discovery. For this reason, I contribute to the development of NDIF's nnsight interpretability toolkit and lead the development of inseq, a library for attributional analyses of generative language models.