UNIFIED PROMPT LEARNING MAKES PRE-TRAINED LANGUAGE MODELS BETTER FEW-SHOT LEARNERS

Feihu Jin (Institute of Automation, Chinese Academy of Sciences); Jinliang Lu (Institute of Automation, Chinese Academy of Sciences); Jiajun Zhang (Institute of Automation, Chinese Academy of Sciences)

06 Jun 2023

Language prompting induces a pre-trained language model (PLM) to produce a textual output by inserting a discrete or continuous prompt into each input instance during training, and it achieves remarkable performance in few-shot learning scenarios. However, current prompt-based methods either apply the same task-specific prompt to every instance, discarding instance-dependent information, or generate a separate prompt for each instance, lacking the shared information of the task. Intuitively, a good prompt should reflect both task-specific and instance-dependent information. In this paper, we propose an efficient few-shot learning method that dynamically decides, according to the characteristics of each task, the degree to which task-specific and instance-dependent information are incorporated, enriching the prompt with both. Extensive experiments on a wide range of natural language understanding tasks demonstrate that our approach obtains significant improvements over prompt-based fine-tuning baselines in the few-shot setting while tuning only about 0.1% of the parameters. In particular, our approach outperforms existing state-of-the-art efficient few-shot learning methods on several natural language understanding tasks.
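To make the idea of a prompt that carries both task-level and instance-level information concrete, the following is a minimal, hypothetical sketch (not the authors' released implementation): a shared task prompt is combined with an instance-dependent prompt generated from the input, and a learned gate decides how much of each source to keep before the result is prepended to the frozen PLM's input embeddings. The class name, mean pooling, and sigmoid gating are illustrative assumptions; the paper's exact architecture and weighting scheme may differ.

import torch
import torch.nn as nn


class UnifiedPrompt(nn.Module):
    """Hypothetical unified soft prompt: task-specific + instance-dependent."""

    def __init__(self, hidden_size: int, prompt_length: int):
        super().__init__()
        # Task-specific prompt, shared across all instances of the task.
        self.task_prompt = nn.Parameter(torch.randn(prompt_length, hidden_size) * 0.02)
        # Lightweight generator for an instance-dependent prompt.
        self.instance_proj = nn.Linear(hidden_size, prompt_length * hidden_size)
        # Gate deciding the mixing degree between the two prompt sources.
        self.gate = nn.Linear(hidden_size, 1)
        self.prompt_length = prompt_length
        self.hidden_size = hidden_size

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_size) embeddings of one instance.
        pooled = input_embeds.mean(dim=1)                        # (batch, hidden)
        instance_prompt = self.instance_proj(pooled).view(
            -1, self.prompt_length, self.hidden_size)            # (batch, L, hidden)
        alpha = torch.sigmoid(self.gate(pooled)).unsqueeze(-1)   # (batch, 1, 1)
        task_prompt = self.task_prompt.unsqueeze(0).expand_as(instance_prompt)
        # Unified prompt: convex combination of task- and instance-level prompts.
        unified = alpha * task_prompt + (1.0 - alpha) * instance_prompt
        # Prepend the unified prompt to the (frozen) PLM input embeddings.
        return torch.cat([unified, input_embeds], dim=1)

Only the prompt parameters (task_prompt, instance_proj, gate) would be tuned while the PLM stays frozen, which is consistent with the roughly 0.1% tunable-parameter budget described in the abstract.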
