
EMCLR: Expectation Maximization Contrastive Learning Representations

Meng Liu (Shanghai Jiao Tong University); Ran Yi (Shanghai Jiao Tong University); Lizhuang Ma (Shanghai Jiao Tong University)

06 Jun 2023

One of the bottlenecks of self-supervised contrastive learning is the degenerate constant solution, in which all samples are mapped to a single point in representation space. To prevent such collapse, the mainstream paradigm relies on negative samples, forcing negative pairs apart. However, this approach incurs O(N^2) time and space complexity, limiting extensibility, scalability, and efficiency. We observe that current negative-requiring objectives can be decomposed into alignment and uniformity terms, and that the uniformity term dominates the O(N^2) complexity. To reduce this complexity, inspired by the classical EM algorithm, we derive for each batch an embedding matrix with an optimally uniform distribution and discard the uniformity term from the objective. Specifically, given the stacked embedding matrices of two views, we first compute the optimal solution for one view with the proposed algorithm, and then align the embedding matrix with the obtained optimal solution. This learning paradigm avoids model collapse without ad-hoc negative pairs and reduces the quadratic complexity to linear. Extensive experiments on CIFAR-10/100 and STL-10 show that the proposed method achieves comparable results with O(N) complexity.
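As a rough illustration of the complexity argument only (not the paper's actual algorithm), the sketch below contrasts an alignment-only loss against the standard pairwise uniformity loss in PyTorch. The names alignment_loss, uniformity_loss, and the precomputed target matrix are illustrative assumptions; the EM-style construction of the optimally uniform target is not reproduced here because the abstract does not specify it.

    import torch
    import torch.nn.functional as F

    def alignment_loss(z1, target):
        # O(N): each row of z1 is pulled toward the corresponding row of a
        # precomputed (detached) target matrix; no pairwise terms are needed.
        z1 = F.normalize(z1, dim=1)
        target = F.normalize(target, dim=1).detach()
        return (z1 - target).pow(2).sum(dim=1).mean()

    def uniformity_loss(z, t=2.0):
        # O(N^2): the pairwise Gaussian-potential uniformity term
        # (Wang & Isola, 2020) that the abstract argues can be discarded.
        z = F.normalize(z, dim=1)
        sq_dists = torch.pdist(z, p=2).pow(2)
        return sq_dists.mul(-t).exp().mean().log()

The alignment term touches each of the N embeddings once, whereas the uniformity term evaluates all N(N-1)/2 pairs, which is where the quadratic cost in negative-based objectives comes from.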
