04 May 2020

Two important sequence tasks are sequence modeling and sequence labeling. Sequence modeling involves estimating the probabilities of sequences, e.g. language modeling. It remains difficult to improve language modeling with additional relevant tags, e.g. part-of-speech (POS) tags. For sequence labeling, it is worthwhile to explore task-dependent semi-supervised learning, which leverages a mix of labeled and unlabeled data, beyond pre-training. In this paper, we propose to upgrade conditional random fields (CRFs) into a joint generative model of observation and label sequences, called joint random fields (JRFs). Specifically, we reuse the potential function of the original CRF as the potential function that defines the joint distribution. This development from CRFs to JRFs benefits both modeling and labeling of sequence data, as shown in our experiments. For example, the JRF model (using POS tags) outperforms traditional language models and avoids the need for hypothesized labels produced by a standalone POS tagger. For sequence labeling, task-dependent semi-supervised learning with JRFs consistently outperforms the CRF baseline and self-training on POS tagging, chunking and NER.
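
To make the CRF-to-JRF construction concrete, a minimal sketch of the two distributions follows; the potential u(x, y) over an observation sequence x and label sequence y, and the normalizers, are our own notation inferred from the abstract, not formulas quoted from the paper.

p_{\text{CRF}}(y \mid x) = \frac{\exp u(x, y)}{Z(x)}, \qquad Z(x) = \sum_{y'} \exp u(x, y')
p_{\text{JRF}}(x, y) = \frac{\exp u(x, y)}{Z}, \qquad Z = \sum_{x'} \sum_{y'} \exp u(x', y')

The same potential scores observation-label pairs in both cases; what changes is the normalization, which in the JRF runs over observation sequences as well as label sequences, turning the discriminative CRF into a generative model of both x and y.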
