Text Classification in the Wild: A Large-Scale Long-Tailed Name Normalization Dataset
Jiexing Qi (Shanghai Jiao Tong University); Shuhao Li (Shanghai Jiao Tong University); Zhixin Guo (Shanghai Jiao Tong University); Yusheng Huang (Shanghai Jiao Tong University); Chenghu Zhou (Shanghai Jiao Tong University); Weinan Zhang (Shanghai Jiao Tong University); Xinbing Wang (Shanghai Jiao Tong University); Zhouhan Lin (Shanghai Jiao Tong University)
SPS
Real-world data usually exhibits a long-tailed distribution, with a few frequent labels and many rare, few-shot labels. Institution name normalization is a natural application exhibiting this phenomenon: there are many institutions around the world, and their names appear in the publicly available literature with enormous variation. In this work, we first collect a large-scale institution name normalization dataset containing over 25k classes whose frequencies are naturally long-tail distributed. We construct our test set from four subsets: many-, medium-, and few-shot sets, as well as a zero-shot open set, which isolate the few-shot and zero-shot learning scenarios from the massive many-shot classes. We also replicate several important baseline methods on our data, covering a wide range from search-based methods to neural network methods built on the pretrained BERT model.
Further, we propose a specially pretrained, BERT-based model that shows better out-of-distribution generalization on the few-shot and zero-shot test sets. Compared to other datasets focusing on the long-tailed phenomenon, our dataset has one order of magnitude more training data than the largest existing long-tailed datasets and is naturally long-tailed rather than manually synthesized. We believe it provides an important and distinct scenario for studying this problem. To the best of our knowledge, this is the first natural language dataset that focuses on this long-tailed and open classification problem.
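The frequency-based test splits described above can be sketched as follows. This is a minimal illustration, not the authors' code: the thresholds (at least 100 training examples for many-shot, at most 20 for few-shot) are assumed values chosen for demonstration, not the cutoffs used in the paper.

```python
from collections import Counter

def split_by_frequency(train_labels, many_min=100, few_max=20):
    """Partition classes into many-, medium-, and few-shot buckets
    by their frequency in the training labels. Thresholds are
    illustrative assumptions, not the dataset's actual cutoffs."""
    counts = Counter(train_labels)
    many = {c for c, n in counts.items() if n >= many_min}
    few = {c for c, n in counts.items() if n <= few_max}
    medium = set(counts) - many - few
    return many, medium, few

# Toy example: three institution classes with skewed frequencies.
labels = ["MIT"] * 150 + ["CNRS"] * 50 + ["SJTU"] * 5
many, medium, few = split_by_frequency(labels)
```

A zero-shot open set would additionally hold out classes that never appear in `train_labels` at all, so the model must handle labels unseen during training.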