I'm currently a postdoctoral researcher at Princeton University in the Intelligent Performance and Adaptation Laboratory. I am supervised by Professors Jordan A. Taylor and Jonathan D. Cohen.
When I first read Oliver Sacks’ The Man Who Mistook His Wife for a Hat, I felt that particular kind of curiosity that borders on awe: the realization that the mind can be both astonishingly capable and surprisingly fragile—and that the same brain that builds a coherent world can also, under the right conditions, build a world that is confidently wrong. The cases portrayed in the book hinted that everyday perception, memory, and intelligence are not inevitabilities, but achievements—constructed moment by moment by underlying algorithms we can study. We, as humans, hallucinate our own realities. I wonder if other intelligent systems do as well.
That sense of wonder has stayed with me, but my motivation has also always been practical. I’m drawn to questions where understanding the mechanism can plausibly make life better. Early on, that meant asking: How do people learn efficiently? What makes practice actually work? Why do some training schedules produce durable skill and others wash out? These questions matter everywhere—education, rehabilitation, expertise, and any domain where we’re trying to help people build competence without wasting time or burning out.
My work sits at the intersection of learning, memory, and computation. At its center is a simple tension: the same machinery that makes intelligence powerful also makes it vulnerable. Intelligent systems, human or artificial, generalize by compressing experience, reusing representations, and letting similarity guide inference. Most of the time that's exactly what you want. But it also creates predictable failure modes: interference between similar skills, distortions in recall, confident confabulations, and behavior that drifts as uncertainty accumulates.
That’s one reason my research spans both motor skill learning (how we acquire and retain skilled actions) and declarative memory (how we store and retrieve events and facts). The surface behavior is different, but the computational problem is shared: a learner must decide what should be treated as “the same thing” and what should be kept separate. When that decision is right, generalization is fast and flexible. When it’s wrong, we see systematic errors—and those errors often reveal the structure of the underlying representation.
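As a toy illustration of this shared computational problem (my own sketch, not from the text above): one classic way to model "what counts as the same thing" is an exponential generalization gradient, where transfer between two experiences decays with their distance in a representation space. The feature vectors and the `sensitivity` parameter here are illustrative assumptions.

```python
import math

def similarity(a, b, sensitivity=1.0):
    """Exponential generalization gradient: transfer between two
    experiences decays with their distance in representation space."""
    distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return math.exp(-sensitivity * distance)

# Two "skills" represented as points in a hypothetical feature space.
skill_a = (0.0, 0.0)
skill_b_near = (0.5, 0.0)  # similar: strong transfer, but also interference
skill_b_far = (5.0, 0.0)   # dissimilar: little transfer, kept separate

print(similarity(skill_a, skill_b_near))  # high: treated as "the same thing"
print(similarity(skill_a, skill_b_far))   # low: kept separate
```

When the gradient groups the right experiences together, generalization is fast; when it groups the wrong ones, it predicts exactly the systematic interference and distortion errors described above.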
Recently, that same lens has pulled me toward AI safety and reliability. As modern AI systems become more integrated into the fabric of our society, confident failures matter more than ever. Hallucinations, goal drift, brittle generalization, and long-horizon derailments feel, to me, like the modern version of the puzzle I first met in Sacks: how can a system be so competent—and yet fail in ways that are structured rather than random? My goal is not to draw a shallow analogy between humans and AI, but to bring a cognitive scientist’s toolkit to these failures: define them precisely, measure them over time, connect them to representation and inference, and test interventions that reduce them.
At the same time, I'm increasingly interested in the other side of the coin: how to leverage AI to accelerate scientific discovery. Current models are poor at generating genuinely new ideas, but they are very good at finding patterns in data. I want to build tools that play to that strength while remaining useful to human scientists in their own right. To that end, I'm building a tool for mapping experimental design spaces, starting with motor adaptation and with the longer-term goal of making the process fully automated.
Education
- Ph.D. in Experimental Psychology — University of California, San Diego (2019-2024)
- M.A. in Psychology — University of California, San Diego (2019-2021)
- B.S. in Neuroscience, B.S. in Psychology — Michigan State University (2013-2017)
Research Interests
- Successful Generalization
- Failures in Generalization
- AI Safety & Reliability
- Control & Performance
- Design Spaces