Tutorial Bundle: Parameter-Efficient and Prompt Learning for Speech and Language Foundation Models (Parts 1-3), ICASSP 2024
Introduction and Motivation for Studying Parameter-Efficient Learning
To be presented by Dr. Huck Yang
Background: Large-scale Pre-trained and Foundation Models
Definition and Theory of parameter-efficient learning
Basics of Pre-trained Model Representation Error Analysis
Editing Models with Task Arithmetic (see the sketch below)
Advanced Settings of Task Vectors
Multimodal Weights Merging
BERT + HuBERT for ASR
ViT + AST for Acoustic Modeling
In-Context Learning
Frozen Model Adaptation through long context windows
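
For the task-arithmetic topics above, the core operation is simple tensor algebra over checkpoints: a task vector is the element-wise difference between fine-tuned and pre-trained weights, and a model is edited or merged by adding scaled task vectors back to the pre-trained weights. The following is a minimal sketch of that idea; the toy state dicts and the 0.5 scaling coefficients are illustrative placeholders, not values from the tutorial.

import torch

def task_vector(pretrained, finetuned):
    # Task vector = fine-tuned weights minus pre-trained weights, per tensor.
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vectors(pretrained, vectors, scales):
    # Edit the pre-trained model by adding scaled task vectors (task arithmetic).
    merged = {k: v.clone() for k, v in pretrained.items()}
    for vec, s in zip(vectors, scales):
        for k in merged:
            merged[k] += s * vec[k]
    return merged

# Toy "checkpoints" with matching shapes; real ones would be model state_dicts.
pretrained  = {"w": torch.randn(4, 4)}
finetuned_a = {"w": pretrained["w"] + 0.1 * torch.randn(4, 4)}  # e.g., task A fine-tune
finetuned_b = {"w": pretrained["w"] + 0.1 * torch.randn(4, 4)}  # e.g., task B fine-tune

vec_a = task_vector(pretrained, finetuned_a)
vec_b = task_vector(pretrained, finetuned_b)
merged = apply_task_vectors(pretrained, [vec_a, vec_b], scales=[0.5, 0.5])

The multimodal weight-merging items above follow the same recipe, applied to checkpoints whose parameters (or aligned subsets of them) share matching shapes.
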
New Approaches on Neural Model Reprogramming
To be presented by Dr. Pin-Yu Chen, IBM Research AI
Reprogramming for Medical Images and DNA with a 1B+ LLM (ICML 2023)
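
Model reprogramming reuses a frozen pre-trained model for a new domain by learning only a small input transformation and an output label mapping. Below is a minimal, hypothetical PyTorch sketch of that pattern; the class name, toy frozen classifier, and shapes are illustrative assumptions, not code from the ICML 2023 work referenced above.

import torch
import torch.nn as nn

class InputReprogramming(nn.Module):
    # Frozen pre-trained model + trainable additive input "program" + label mapping.
    def __init__(self, frozen_model, input_shape, n_source, n_target):
        super().__init__()
        self.frozen_model = frozen_model
        for p in self.frozen_model.parameters():
            p.requires_grad = False                      # foundation model stays frozen
        self.delta = nn.Parameter(torch.zeros(*input_shape))  # trainable input perturbation
        # Source-to-target label mapping, relaxed here to a linear map over logits.
        self.label_map = nn.Linear(n_source, n_target, bias=False)

    def forward(self, x):
        source_logits = self.frozen_model(x + self.delta)  # reprogrammed input
        return self.label_map(source_logits)               # target-task logits

# Illustrative usage: a toy frozen 10-class "source" classifier reused for 2 classes.
frozen = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))
reprog = InputReprogramming(frozen, input_shape=(1, 32, 32), n_source=10, n_target=2)
logits = reprog(torch.randn(8, 1, 32, 32))   # only delta and label_map receive gradients
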
Prompting Large Language Models
To be presented by Cheng-Han Chiang and Prof. Hung-yi Lee
Connection between prompting and parameter-efficient learning
Prompting large language models for reasoning
ReAct, Plan-and-Solve, and Tree-of-Thought prompting
Faithfulness and robustness of LLM reasoning
Using LLMs for tool use
Automatic evaluation using large language models via prompting
LLM evaluation and G-Eval
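
For the automatic-evaluation topics above, the basic pattern is to prompt an LLM with the item to be judged, an evaluation criterion, and a required output format, then parse the returned score (the idea behind G-Eval-style evaluators). The sketch below illustrates that pattern only; the prompt wording and the generate callable are placeholders, not the tutorial's actual prompts or any specific API.

# Minimal sketch of prompting an LLM as an automatic evaluator (G-Eval style).
from typing import Callable

EVAL_PROMPT = """You will be given a source text and a summary.
Rate the summary's coherence on a scale of 1 to 5.
First reason step by step, then output a final line "Score: <1-5>".

Source: {source}
Summary: {summary}
"""

def evaluate_summary(source: str, summary: str, generate: Callable[[str], str]) -> int:
    # Build the evaluation prompt, query the LLM, and parse the score line.
    reply = generate(EVAL_PROMPT.format(source=source, summary=summary))
    for line in reply.splitlines():
        if line.strip().lower().startswith("score:"):
            return int(line.split(":")[1].strip())
    raise ValueError("No score found in the model's reply")

# Usage with a stub "LLM" that always answers with a fixed score.
print(evaluate_summary("Long article ...", "Short summary ...",
                       generate=lambda prompt: "The summary is clear.\nScore: 4"))
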
Parameter-Efficient Learning for Speech Processing
To be presented by Kai-Wei Chang and Prof. Hung-yi Lee
Adapting text-based Large Language Models for Speech Processing
Adapting text LLMs (e.g., LLaMA) for spoken language modeling
Prompting and Instruction Tuning on Speech Pre-trained Models
Semantic and acoustic tokens for speech language models
Prompting and instruction tuning for various speech processing tasks
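
One concrete instance of the prompting topics above is prompt tuning on a frozen speech encoder: a handful of trainable prompt vectors are prepended to the input feature sequence, and only those vectors plus a small task head are updated. The following PyTorch sketch is illustrative; the module names, toy encoder, and dimensions are assumptions rather than the tutorial's code.

import torch
import torch.nn as nn

class SpeechPromptTuning(nn.Module):
    # Trainable prompt vectors prepended to the input of a frozen speech encoder.
    def __init__(self, frozen_encoder, dim, n_prompts, n_classes):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                       # freeze the pre-trained encoder
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)  # trainable prompts
        self.head = nn.Linear(dim, n_classes)             # small trainable task head

    def forward(self, feats):                             # feats: (batch, time, dim)
        batch = feats.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        hidden = self.encoder(torch.cat([prompts, feats], dim=1))
        return self.head(hidden.mean(dim=1))              # utterance-level prediction

# Illustrative usage with a toy frozen Transformer "encoder".
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
model = SpeechPromptTuning(encoder, dim=64, n_prompts=8, n_classes=5)
logits = model(torch.randn(2, 100, 64))   # only the prompts and the head are trained
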
Conclusion and Open Questions
To be presented by Prof. Hung-yi Lee
Lessons learned: a signal processor wandering in the land of large-scale models
Available resources and code for research in parameter-efficient learning