NeurIPS 2024 Workshop on
Adaptive Foundation Models
Overview
In the rapidly evolving landscape of AI, adaptive foundation models mark a shift toward systems that continually learn, adapt, and evolve in response to new information, changing environments, and user preferences. Equipped with continual weight updates, compute- and memory-efficient finetuning, and personalized adaptation, these models are poised to change how AI interacts with the world.
For instance, imagine a model that continually learns from current news events, adapting to the ever-changing global landscape by integrating up-to-date knowledge. Such models could provide more accurate forecasts, adapting to new trends as they emerge. Moreover, the integration of retrieval-augmented generation (RAG) into foundation models can ensure that generated content is not only relevant, but also reflects the most current knowledge. In addition, personalization has emerged as an essential feature of generative models: personalized LLMs aim to align model responses with an individual user's preferences, enhancing their interactions; similarly, personalized text-to-image diffusion models unlock creative applications that incorporate user-specific subjects and tailor images to a user's style. These capabilities rely on techniques for adapting foundation models, including fine-tuning, prompt tuning, and in-context/few-shot learning.
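As a concrete illustration of the compute- and memory-efficient finetuning mentioned above, the sketch below shows a LoRA-style low-rank adapter; it is a minimal illustration, and all dimensions and variable names are hypothetical:

```python
import numpy as np

# Minimal LoRA-style adapter sketch (hypothetical shapes).
# Instead of updating the full d_out x d_in weight W, we train two small
# matrices A (r x d_in) and B (d_out x r), so the adapted layer computes
# y = (W + B @ A) x with only r * (d_in + d_out) trainable parameters.

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8   # assumed layer sizes; r is the adapter rank

W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init 0

def adapted_forward(x):
    # With B initialized to zero, the adapter initially leaves the
    # pretrained model's output unchanged.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
full_params = d_out * d_in           # parameters updated by full finetuning
lora_params = r * (d_in + d_out)     # parameters updated by the adapter
print(f"trainable params: {lora_params} vs full fine-tuning: {full_params}")
```

Because only `A` and `B` receive gradients, the memory and compute cost of adaptation scales with the rank `r` rather than with the full weight matrix.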
This workshop aims to explore cutting-edge advances in adaptive foundation models, focusing on methodologies across vision, language, and multi-modal domains. Hosting this workshop at NeurIPS aligns with the conference's mission to advance the frontiers of machine learning, given the recent surge of approaches and paradigms for adapting foundation models in the real world. The workshop will bring together interdisciplinary researchers from core ML/DL, efficient ML, computer vision, and NLP. Topics include but are not limited to: