With the advent of Neural Architecture Search (NAS), the automatic design of artificial neural networks has become possible. Among NAS methods, Differentiable Architecture Search (DARTS) has made significant progress due to its high computational efficiency. However, it suffers from poor stability and a pronounced performance drop caused by bi-level optimization and hard pruning, and it produces only a single best architecture per search. To alleviate these problems, we design a three-stage framework with a path-wise weight-sharing derivation. We first prune the supernet with differentiable methods, keeping the top-k operations on each edge instead of one. The pruned supernet is then trained with our path-wise weight-sharing method. At the derivation stage, the best candidate operations are selected by Evolutionary Search based on the validation accuracy of paths. Our weight-sharing derivation is shown to be effective in improving search stability and alleviating the performance drop, and it also allows us to search for a large number of architectures with different parameter sizes in a single run. Comprehensive experiments on CIFAR-10 and ImageNet show that we find a group of state-of-the-art architectures (97.61% accuracy on CIFAR-10 and 76.4% on ImageNet).
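
To make the three stages concrete, below is a minimal NumPy sketch of the search flow under assumed settings; it is not the authors' implementation. The names NUM_EDGES, NUM_OPS, TOP_K, and validation_accuracy are hypothetical placeholders, and the path-wise weight-sharing training of Stage 2 is only modeled by a toy scoring function rather than actual supernet training.

import numpy as np

rng = np.random.default_rng(0)

NUM_EDGES = 8   # edges in the searched cell (illustrative value)
NUM_OPS = 7     # candidate operations per edge (illustrative value)
TOP_K = 3       # operations kept per edge after differentiable pruning

# Stage 1 (differentiable pruning): pretend alpha holds the learned
# architecture parameters of a DARTS-style supernet and keep the
# top-k operations on every edge instead of a single one.
alpha = rng.normal(size=(NUM_EDGES, NUM_OPS))
kept_ops = np.argsort(-alpha, axis=1)[:, :TOP_K]   # shape (NUM_EDGES, TOP_K)

# Stage 2 (path-wise weight sharing) would retrain the pruned supernet;
# here its effect is only modeled by the toy scoring function below.
def validation_accuracy(path):
    """Hypothetical proxy for evaluating a path (one kept op per edge)."""
    return float(np.mean([alpha[e, op] for e, op in enumerate(path)]))

def random_path():
    return [int(rng.choice(kept_ops[e])) for e in range(NUM_EDGES)]

def mutate(path, prob=0.2):
    child = list(path)
    for e in range(NUM_EDGES):
        if rng.random() < prob:
            child[e] = int(rng.choice(kept_ops[e]))
    return child

# Stage 3 (derivation): evolutionary search over paths of the pruned
# supernet, ranked by (proxy) validation accuracy.
population = [random_path() for _ in range(20)]
for _ in range(30):
    population.sort(key=validation_accuracy, reverse=True)
    parents = population[:10]
    children = [mutate(parents[int(rng.integers(len(parents)))]) for _ in range(10)]
    population = parents + children

best = max(population, key=validation_accuracy)
print("best path (operation index per edge):", best)

Because the population at the end contains many distinct high-scoring paths, a search of this form can return a group of architectures of different sizes rather than a single one, which is the property the abstract highlights.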
