
GOING DEEPER WITH NEURAL NETWORKS WITHOUT SKIP CONNECTIONS

Oyebade Oyedotun, Abd El Rahman Shabayek, Djamila Aouada, Bjorn Ottersten

Length: 10:36
27 Oct 2020

We propose the training of very deep neural networks (DNNs) without shortcut connections, known as PlainNets. Training such networks is notoriously hard due to: (1) the well-known challenge of vanishing and exploding activations, and (2) the less studied ‘near singularity’ problem. We argue that if these two problems are tackled together, training deeper PlainNets becomes easier. Accordingly, we propose training very deep PlainNets by leveraging Leaky Rectified Linear Units (LReLUs), parameter constraints and strategic parameter initialization. Our approach is simple and allows us to successfully train very deep PlainNets of up to 100 layers without employing shortcut connections. We evaluate the approach on five challenging datasets: MNIST, CIFAR-10, CIFAR-100, SVHN and ImageNet. We report the best known results on ImageNet using a PlainNet, with top-1 and top-5 error rates of 24.3% and 7.9%, respectively.
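The three ingredients named in the abstract (LReLU activations, a parameter constraint, and strategic parameter initialization) can be sketched in PyTorch as below. This is a minimal illustrative sketch only: the max-norm weight constraint, the LReLU slope, and the Kaiming-style initialization are assumptions, since the abstract does not specify the paper's exact constraint or initialization scheme.

# Sketch of a "PlainNet" stack (no shortcut connections) combining LReLU activations,
# a weight-norm constraint, and a careful initialization. The specific choices here
# (max-norm constraint, slope 0.1, Kaiming init) are illustrative assumptions.
import torch
import torch.nn as nn

class PlainBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(negative_slope=0.1)  # slope value is an assumption
        # "Strategic" initialization: Kaiming init matched to the LReLU slope (assumption)
        nn.init.kaiming_normal_(self.conv.weight, a=0.1, nonlinearity='leaky_relu')

    def forward(self, x):
        # No identity/shortcut branch: the block is a plain conv + activation
        return self.act(self.conv(x))

def constrain_weights(model, max_norm=1.0):
    # Illustrative parameter constraint: clip each output filter's L2 norm (assumption)
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, nn.Conv2d):
                m.weight.renorm_(2, 0, max_norm)  # p=2, per output filter (dim=0)

# Example: a 100-layer plain stack of such blocks
model = nn.Sequential(*[PlainBlock(64) for _ in range(100)])

In such a setup, constrain_weights(model) would be called after each optimizer step so that the weight norms stay bounded throughout training.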
