From Few-Shot Learning to Data-Efficient Intelligence
DOI: https://doi.org/10.1609/aaai.v40i47.41356
Abstract
Modern artificial intelligence performs impressively in data-rich settings but still struggles to learn and adapt from only a few examples—a capability central to human intelligence. My research seeks to understand and enable data-efficient generalization, unifying principles across few-shot learning, meta-learning, in-context learning in large language models (LLMs), and adaptive agent behavior. First, I revisit few-shot learning from a foundational perspective, showing why conventional supervised learning breaks down under sparse data and how prior knowledge enables reliable adaptation. I then discuss how these principles extend to real-world scenarios such as scientific discovery and cold-start recommendation, where data are scarce, costly, or dynamically evolving. Finally, I explore how LLMs perform in-context learning and how their adaptive behaviors connect to meta-learning mechanisms. Building on these insights, I develop data-efficient, preference-adaptive agents that quickly align to user needs with minimal interaction. This talk presents a cohesive view of data-efficient intelligence and outlines future directions toward more reliable, human-like learning systems.
Published
2026-03-14
How to Cite
Wang, Y. (2026). From Few-Shot Learning to Data-Efficient Intelligence. Proceedings of the AAAI Conference on Artificial Intelligence, 40(47), 39834–39835. https://doi.org/10.1609/aaai.v40i47.41356
Issue
Section
New Faculty Highlights