About
Hi, I'm Alicia Yi Sun. I am a researcher at FAIR NYC, Meta's fundamental AI research group. Most recently, I have been working on memory‑augmented architectures and pre‑training paradigms that reduce hallucinations and improve factuality in language models. Previously, I worked on post‑training safety alignment.
Before joining Meta, I completed my PhD at MIT with Kalyan Veeramachaneni in the Data to AI Lab at LIDS. Broadly, I design sequential algorithms that are robust, reliable, and aligned with societal values. Prior to MIT, I studied Computer Science at Washington University in St. Louis, working with Kilian Q. Weinberger.
Recent Projects
Semi‑Parametric Language Models
I'm interested in how to better represent knowledge in non‑parametric memory, and in how to decouple memory‑intensive long‑tail knowledge from reasoning capabilities at pre‑training time.
Recent Publications
2025
- Improving Factuality with Explicit Working Memory
  Mingda Chen, Yang Li, Karthik Padthe, Rulin Shao, Alicia Sun, Jacob Kahn, Luke Zettlemoyer, Gargi Ghosh, Scott Wen-tau Yih
  ACL 2025
2024
- Chained Tuning Leads to Biased Forgetting
  Megan Ung*, Alicia Yi Sun*, Sam Bell, Levent Sagun, Adina Williams
  Next Gen AI Safety Workshop @ ICML 2024
2022
- Using Natural Sentence Prompts for Understanding Biases in Language Models
  Sarah Alnegheimish, Alicia Guo, Alicia Yi Sun
  NAACL 2022
- The Backfire Effects of Fairness Constraints
  Alicia Yi Sun, Alfredo Cuesta‑Infante, Kalyan Veeramachaneni
  Responsible Decision Making in Dynamic Environments Workshop @ ICML 2022
2021
- Towards Reducing Biases in Combining Multiple Experts Online
  Alicia Yi Sun, Ivan Ramirez, Alfredo Cuesta‑Infante, Kalyan Veeramachaneni
2019
- Learning Vine Copula For Synthetic Data Generation
  Alicia Yi Sun, Alfredo Cuesta‑Infante, Kalyan Veeramachaneni