FINDADAPTNET: FIND AND INSERT ADAPTERS BY LEARNED LAYER IMPORTANCE
Junwei Huang (Carnegie Mellon University); Karthik Ganesan (Carnegie Mellon University); Soumi Maiti (Carnegie Mellon University); Young Min Kim (Carnegie Mellon University); Xuankai Chang (Carnegie Mellon University); Paul Pu Liang (Carnegie Mellon University); Shinji Watanabe (Carnegie Mellon University)
Adapters are lightweight bottleneck modules that help customize pre-trained self-supervised learning (SSL) models to new tasks. However, searching for the appropriate layers in which to insert adapters in large models is difficult, because the number of candidate layers yields a vast search space (2^N possibilities for N layers). In this paper, we propose a technique that automatically inserts adapters for downstream automatic speech recognition (ASR) and spoken language understanding (SLU) tasks. Our approach is based on two-stage training. First, we train the model on a specific downstream task with additional shallow learnable layers and weight parameters that produce a weighted summation over the output of each SSL layer, following the SUPERB baseline [1]. This first stage identifies the most important layers via their learned weights. In the second stage, we insert adapters into those most important layers, retaining both performance and neural architecture search efficiency. On the CommonVoice dataset [2], we obtain a 20.6% absolute improvement in Word Error Rate (WER) on Welsh over the conventional method, which inserts adapter modules into the highest layers without search. On the SLURP SLU task, our method yields a 4.0% intent accuracy improvement over the same conventional baseline.
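The two-stage procedure described above can be sketched as follows. This is a minimal illustrative NumPy mock-up, not the authors' implementation: all class and function names (`WeightedLayerSum`, `top_k_layers`, `BottleneckAdapter`) and the specific initialization choices are assumptions for exposition.

```python
import numpy as np

def softmax(w):
    """Numerically stable softmax over a 1-D weight vector."""
    e = np.exp(w - w.max())
    return e / e.sum()

class WeightedLayerSum:
    """Stage 1: learnable scalar weights over the N frozen SSL layer
    outputs, trained jointly with a shallow downstream head
    (SUPERB-style weighted summation)."""
    def __init__(self, num_layers):
        self.w = np.zeros(num_layers)  # updated by downstream training

    def combine(self, layer_outputs):
        # layer_outputs: array of shape (num_layers, time, dim)
        a = softmax(self.w)
        return np.tensordot(a, layer_outputs, axes=1)  # (time, dim)

def top_k_layers(w, k):
    """Stage 2 selection: the k layers with the largest learned weights
    are the ones that receive adapters."""
    a = softmax(w)
    return sorted(np.argsort(a)[-k:].tolist())

class BottleneckAdapter:
    """Lightweight adapter: down-projection, nonlinearity,
    up-projection, plus a residual connection."""
    def __init__(self, dim, bottleneck, rng):
        self.W_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        self.W_up = rng.normal(0.0, 0.02, (bottleneck, dim))

    def __call__(self, h):
        z = np.maximum(h @ self.W_down, 0.0)  # ReLU bottleneck
        return h + z @ self.W_up              # residual output, same shape

# Example: suppose stage-1 training assigned high weight to layers 3 and 7
# of a 12-layer SSL encoder; adapters would then be inserted only there.
rng = np.random.default_rng(0)
w = np.zeros(12)
w[3], w[7] = 2.0, 3.0
selected = top_k_layers(w, k=2)   # [3, 7]
adapter = BottleneckAdapter(dim=768, bottleneck=64, rng=rng)
h = rng.normal(size=(50, 768))    # (time, dim) hidden states of one layer
out = adapter(h)                  # adapted hidden states, shape (50, 768)
```

Inserting adapters only into the selected layers avoids enumerating all 2^N subsets while still targeting the layers that stage-1 training found most task-relevant.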