LEARNING TO DETECT NOVEL AND FINE-GRAINED ACOUSTIC SEQUENCES USING PRETRAINED AUDIO REPRESENTATIONS
Vasudha Kowtha (Apple); Miquel Espi (Apple); Jonathan J Huang (Apple); Yichi Zhang (Apple); Carlos Avendano (Apple)
SPS
This work investigates pretrained audio representations for few-shot sound event detection. We specifically address the few-shot detection of novel acoustic sequences, that is, sound events with semantically meaningful temporal structure, without assuming access to non-target audio. We develop procedures for pretraining suitable representations, along with methods that transfer them to our few-shot learning scenario. Our experiments evaluate the general-purpose utility of the pretrained representations on AudioSet, and the utility of the proposed few-shot methods on tasks constructed from real-world acoustic sequences. We find that our pretrained embeddings are well suited to the proposed task and enable multiple aspects of our few-shot framework.