SELF SUPERVISED BERT FOR LEGAL TEXT CLASSIFICATION
Arghya Pal (Monash University); Sailaja Rajanala (Monash University Malaysia); Raphael CW Phan (Monash University); KokSheik Wong (Monash University Malaysia)
Critical BERT-based text classification tasks, such as legal text classification, require large amounts of accurately labeled data. Legal text classification faces two non-trivial problems: labeling legal data is a sensitive process that can only be carried out by skilled professionals, and legal text is prone to privacy issues, so not all of the data can be made available in the public domain. This limits the diversity of the available textual data, and to account for this data paucity, we propose a self-supervision approach to train Legal-BERT classifiers. We exploit the BERT text classifier’s knowledge of the class boundaries and perform gradient ascent w.r.t. the class logits, generating synthetic latent texts through activation maximization. The main advantages over existing state-of-the-art methods are that our model is easy to train, requires little data because it uses the synthesized texts as fake samples, and has lower variance, which helps it generate texts with good sample quality and diversity. We show the efficacy of the proposed method on the ECHR Violation (Multi-Label) Dataset and the Overruling Task Dataset.
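The core idea of activation maximization described above can be sketched as follows: starting from a random sequence of continuous input embeddings (a "latent text"), perform gradient ascent on the logit of a chosen class of a trained BERT classifier. The snippet below is a minimal illustrative sketch, not the authors' implementation; the checkpoint name, sequence length, step count, learning rate, and the use of HuggingFace's BertForSequenceClassification are assumptions.

```python
# Minimal sketch of activation maximization over BERT input embeddings.
# Assumptions: HuggingFace transformers, a fine-tuned classification head,
# and illustrative hyperparameters (seq_len, steps, lr).
import torch
from transformers import BertForSequenceClassification


def synthesize_latent_text(model, target_class, seq_len=64, steps=200, lr=0.1):
    """Gradient ascent on the target-class logit w.r.t. a latent text
    (a sequence of continuous input embeddings)."""
    model.eval()
    for p in model.parameters():          # keep the classifier fixed
        p.requires_grad_(False)

    hidden = model.config.hidden_size
    # Random continuous "latent text" to be optimized.
    latent = torch.randn(1, seq_len, hidden, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(inputs_embeds=latent).logits   # shape (1, num_classes)
        loss = -logits[0, target_class]               # ascend the target logit
        loss.backward()
        optimizer.step()

    return latent.detach()


# Usage (hypothetical legal-domain checkpoint):
# model = BertForSequenceClassification.from_pretrained(
#     "nlpaueb/legal-bert-base-uncased", num_labels=2)
# fake_sample = synthesize_latent_text(model, target_class=1)
```

In this sketch the classifier parameters are frozen and only the latent embeddings are updated, so the synthesized samples reflect the classifier's learned class boundaries rather than any change to the model itself.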