NeurIPS 2024 Workshop on
Adaptive Foundation Models

Overview

In the rapidly evolving landscape of AI, adaptive foundation models represent a groundbreaking shift toward AI systems that can continually learn, adapt, and evolve in response to new information, changing environments, and user preferences. By supporting continual weight updates, compute- and memory-efficient fine-tuning, and personalized adaptation, these models are poised to revolutionize how AI interacts with the world.

For instance, imagine a model that continually learns from current news events, adapting to the ever-changing global landscape by integrating up-to-date knowledge. Such models could provide more accurate forecasts, adapting to new trends as they emerge. Moreover, the integration of retrieval-augmented generation (RAG) into foundation models can ensure that generated content is not only relevant, but also reflects the most current knowledge. In addition, personalization has emerged as an essential feature of generative models: personalized LLMs aim to align model responses with an individual user's preferences, enhancing their interactions; similarly, personalized text-to-image diffusion models unlock creative applications that incorporate user-specific subjects and tailor images to a user's style. These capabilities rely on techniques for adapting foundation models, including fine-tuning, prompt tuning, and in-context/few-shot learning.

This workshop aims to explore cutting-edge advancements in adaptive foundation models, focusing on methodologies across vision, language, and multi-modal domains. Hosting this workshop at NeurIPS aligns with the conference's mission to advance the frontiers of machine learning: a number of new approaches and paradigms for adapting foundation models in the real world have emerged recently. The workshop will bring together interdisciplinary researchers from core ML/DL, efficient ML, computer vision, and NLP. Topics include but are not limited to:

Topics

Continual Weight Updates

Techniques and challenges in updating model weights continually to adapt to new information without forgetting previously learned knowledge.
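One widely used technique in this space is experience replay, which mitigates catastrophic forgetting by mixing fresh examples with a sample of past data at each update. The sketch below is a minimal illustration of that idea (the buffer size, replay fraction, and reservoir-sampling scheme are illustrative assumptions, not a prescribed method):

```python
import random

# Illustrative sketch of experience replay for continual weight updates:
# each training batch mixes new examples with a sample from a buffer of
# past data, so updates on new information revisit old knowledge.
random.seed(0)

replay_buffer = []   # stores (input, label) pairs from earlier data
BUFFER_SIZE = 1000   # assumed capacity for this sketch

def reservoir_add(example, seen):
    # Reservoir sampling keeps a uniform random sample of all data seen so far.
    if len(replay_buffer) < BUFFER_SIZE:
        replay_buffer.append(example)
    else:
        j = random.randrange(seen)
        if j < BUFFER_SIZE:
            replay_buffer[j] = example

def training_batch(new_examples, replay_fraction=0.5):
    # Combine new examples with replayed old ones for one update step.
    k = int(len(new_examples) * replay_fraction)
    replayed = random.sample(replay_buffer, min(k, len(replay_buffer)))
    return list(new_examples) + replayed

# Simulate a stream of 50 past examples, then build a mixed batch.
for i in range(50):
    reservoir_add((i, i % 2), seen=i + 1)

batch = training_batch([(100, 0), (101, 1)])
print(len(batch))  # 2 new examples + 1 replayed = 3
```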

Efficient Fine-Tuning

Strategies to fine-tune models in a resource-efficient manner, enabling broader application without compromising performance.
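A representative example of this line of work is low-rank adaptation (LoRA), which freezes the pretrained weights and trains only a small low-rank update. The following is a minimal numerical sketch of the parameter savings (the dimensions and rank are arbitrary choices for illustration):

```python
import numpy as np

# Illustrative sketch of low-rank adaptation (LoRA): instead of updating a
# full weight matrix W (d_out x d_in), train a low-rank update B @ A with
# rank r << min(d_out, d_in), sharply reducing trainable parameters.
rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 8

W = rng.normal(size=(d_out, d_in))           # frozen pretrained weights
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, init 0

def adapted_forward(x):
    # Effective weight is W + B @ A, but the sum is never materialized.
    return W @ x + B @ (A @ x)

full_params = d_out * d_in          # 262,144 parameters in W
lora_params = r * (d_out + d_in)    # 8,192 parameters in A and B
print(lora_params / full_params)    # 0.03125: ~3% of the full matrix
```

Because B is initialized to zero, the adapted model exactly matches the frozen model at the start of fine-tuning.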

Token/Prompt Tuning

Exploration of lightweight methods to adapt large models to specific tasks or domains through token or prompt modifications.
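In soft prompt tuning, for instance, a small matrix of learnable "virtual token" embeddings is prepended to the frozen input embeddings, and only those prompt embeddings are updated. A minimal sketch of the mechanics (the prompt length and model dimension are assumptions for illustration):

```python
import numpy as np

# Illustrative sketch of soft prompt tuning: learnable virtual-token
# embeddings are prepended to the frozen token embeddings; during
# adaptation, gradients flow only into prompt_embeds.
rng = np.random.default_rng(0)
n_prompt, d_model = 20, 768

prompt_embeds = rng.normal(scale=0.02, size=(n_prompt, d_model))  # trainable

def with_soft_prompt(token_embeds):
    # token_embeds: (seq_len, d_model) rows from the frozen embedding table.
    return np.concatenate([prompt_embeds, token_embeds], axis=0)

tokens = rng.normal(size=(10, d_model))
print(with_soft_prompt(tokens).shape)  # (30, 768): 20 prompt + 10 real tokens
```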

In-Context Learning/Few-Shot Learning

Mechanisms that allow models to learn from context within a limited interaction and to acquire new concepts or tasks from very few examples.

Personalized Adaptation

Techniques for customizing models to individual user preferences, tasks, or domains, ensuring more relevant and effective interactions.

Retrieval-Augmented Generation

Integration of external knowledge sources to enhance the generation capabilities of models, facilitating more informed and contextually relevant outputs.
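The core loop can be sketched in a few lines: retrieve the most relevant document for a query, then prepend it to the prompt before generation. This toy version uses bag-of-words cosine similarity in place of a real dense retriever, and the documents and prompt format are invented for illustration:

```python
from collections import Counter
import math

# Toy sketch of retrieval-augmented generation: fetch the most similar
# document (bag-of-words cosine similarity stands in for a dense
# retriever) and prepend it to the prompt passed to a generator.
docs = [
    "The 2024 eclipse crossed North America in April.",
    "Transformers use attention to mix token representations.",
]

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query):
    q = bow(query)
    return max(docs, key=lambda d: cosine(q, bow(d)))

def augmented_prompt(query):
    # The retrieved context grounds the model's answer in external knowledge.
    return f"Context: {retrieve(query)}\nQuestion: {query}"

print(retrieve("What do transformers use attention for?"))
```

A production system would swap in an embedding model and a vector index, but the augment-then-generate structure is the same.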

Multimodal Learning

Techniques for integrating data from multiple modalities (e.g., text, images, robot interactions) into a unified framework, yielding rich interactivity.

Invited Speakers



Organizers

If you have any questions, please reach out to: neurips2024-adaptive-foundation@googlegroups.com