Education
- PhD in Computer Science, Massachusetts Institute of Technology, 2021–present.
- MPhil in Advanced Computer Science, University of Cambridge, 2020. Graduated with Distinction.
- BSc (Hons) in Artificial Intelligence and Computer Science, University of Edinburgh, 2019. Graduated with First-Class Honours.
Work experience
- 06/2023–09/2023: Research Intern (MIT-IBM Watson AI Lab, Cambridge, Massachusetts)
- Worked on instruction tuning for LLMs
- Supervised by Akash Srivastava
- 10/2020–07/2021: Research Intern (Mila, Montreal, Canada)
- Worked on developing and understanding word representations
- Supervised by Siva Reddy
- 06/2018–08/2018: Intern Software Developer (Canon Medical Systems Corporation, Edinburgh, UK)
- Worked on a virtual reality application using Direct3D and SteamVR
Publications
- Lucas Torroba Hennigen, Hunter Lang, Han Guo, Yoon Kim. On the Duality between Gradient Transformations and Adapters. arXiv preprint 2025.
- Lucas Torroba Hennigen*, Shannon Shen*, Aniruddha Nrusimha, Bernhard Gapp, David Sontag, Yoon Kim. Towards Verifiable Text Generation with Symbolic References. COLM 2024.
- Li Du, Afra Amini, Lucas Torroba Hennigen, Xinyan Velocity Yu, Jason Eisner, Holden Lee, Ryan Cotterell. Principled Gradient-based Markov Chain Monte Carlo for Text Generation. ICML 2024.
- Lucas Torroba Hennigen, Yoon Kim. Deriving Language Models from Masked Language Models. ACL 2023.
- Li Du, Lucas Torroba Hennigen, Tiago Pimentel, Clara Meister, Jason Eisner, Ryan Cotterell. A Measure-Theoretic Characterization of Tight Language Models. ACL 2023.
- Kevin Du, Lucas Torroba Hennigen, Niklas Stoehr, Alexander Warstadt, Ryan Cotterell. Generalizing Backpropagation for Gradient-based Interpretability. ACL 2023.
- Niklas Stoehr, Lucas Torroba Hennigen, Josef Valvoda, Robert West, Ryan Cotterell, Aaron Schein. An Ordinal Latent Variable Model of Conflict Intensity. ACL 2023.
- Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, Philip Greengard, Leonid Karlinsky, Rogerio Feris, David Daniel Cox, Zhangyang Wang, Yoon Kim. Learning to Grow Pretrained Models for Efficient Transformer Training. ICLR 2023.
- Karolina Stańczak*, Lucas Torroba Hennigen*, Adina Williams, Ryan Cotterell, Isabelle Augenstein. A Latent-Variable Model for Intrinsic Probing. AAAI 2023.
- Karolina Stańczak, Edoardo Ponti, Lucas Torroba Hennigen, Ryan Cotterell, Isabelle Augenstein. Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models. NAACL 2022.
- Alexander Immer*, Lucas Torroba Hennigen*, Vincent Fortuin, Ryan Cotterell. Probing as Quantifying the Inductive Bias of Pre-trained Representations. ACL 2022.
- Niklas Stoehr, Lucas Torroba Hennigen, Samin Ahbab, Robert West, Ryan Cotterell. Classifying Dyads for Militarized Conflict Analysis. EMNLP 2021.
- Lucas Torroba Hennigen, Adina Williams, Ryan Cotterell. Intrinsic Probing Through Dimension Selection. EMNLP 2020.
- Or Honovich*, Lucas Torroba Hennigen*, Omri Abend, Shay B. Cohen. Machine Reading of Historical Events. ACL 2020.
Awards
- Michael Athans Fellowship (Funding for 1st year of PhD)
- Mitacs Globalink Research Award (Awarded funding for an in-person internship at Mila, but was unable to claim it due to COVID-19)
- Graduate Tutors Prize for Distinction in Master's Degree (Graduated from MPhil with a Distinction)
- Class Prize for Artificial Intelligence and Computer Science (Graduated in 1st place in Edinburgh AI & CS cohort; 1 recipient)
- Howe Prize for Top Performance in UG4 Artificial Intelligence (Highest aggregate grade in 4th year undergraduate AI cohort; 1 recipient)
- British Computing Society Prize for Top Performing Student in the Professional Issues Course (Highest grade in 3rd year Professional Issues course; 2 recipients)