SPS Members: Free
IEEE Members: $11.00
Non-members: $15.00
Length: 01:36:16
Date: 17 Dec 2024

In today's landscape of natural language processing (NLP) and speech processing, developing applications often begins with fine-tuning a foundation model. However, teaching a foundation model new skills is not as straightforward as it seems. Despite the sophistication of current models, introducing new capabilities often impairs their original abilities, a phenomenon known as catastrophic forgetting. Experience replay is a common remedy, but it requires access to the original training data, which is not publicly available for models such as LLaMA, making continual training difficult. This talk will delve into recent research on fine-tuning language models, including their spoken counterparts, with a focus on preserving their initial capabilities. The talk will also present benchmarks related to the continual fine-tuning of foundation models.
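To make the experience-replay idea mentioned above concrete, here is a minimal, illustrative sketch of mixing a fraction of replay examples (drawn from the model's original training distribution) into each fine-tuning batch so that gradients keep covering the old skills. The toy model, synthetic data, and mixing ratio are placeholders for illustration only, not the setup discussed in the talk.

```python
# Minimal sketch of experience replay during fine-tuning (illustrative only).
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
random.seed(0)

vocab_size, hidden, seq_len = 100, 32, 8
# Toy stand-in for a language model: embed tokens, flatten, predict next token.
model = nn.Sequential(
    nn.Embedding(vocab_size, hidden),
    nn.Flatten(),
    nn.Linear(hidden * seq_len, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def random_example():
    # (input token ids, next-token target) -- synthetic data for illustration.
    return torch.randint(0, vocab_size, (seq_len,)), torch.randint(0, vocab_size, (1,)).item()

new_task_data = [random_example() for _ in range(256)]   # data for the new skill
replay_buffer = [random_example() for _ in range(256)]   # samples approximating the original distribution

replay_ratio = 0.25   # fraction of each batch drawn from the replay buffer (assumed value)
batch_size = 16

for step in range(100):
    n_replay = int(batch_size * replay_ratio)
    batch = (random.sample(new_task_data, batch_size - n_replay)
             + random.sample(replay_buffer, n_replay))
    xs = torch.stack([x for x, _ in batch])
    ys = torch.tensor([y for _, y in batch])
    loss = loss_fn(model(xs), ys)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design choice is the replay ratio: too little replay and the original capabilities degrade, too much and the new skill is learned slowly. The difficulty the abstract highlights is that for models whose pretraining data is not released, a faithful replay buffer is hard to build in the first place.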
