21 Apr 2023

A widely adopted practice for improving the performance of deep neural networks is to increase their depth and number of parameters, since this increases a network's information capacity. Restraining energy consumption, however, is another community expectation, and because energy consumption depends on the number of operations and the memory involved during computation, achieving high performance at low energy consumption appears to be a conflicting goal. This phenomenon can be observed with UNeXt, which consumes significantly less energy on account of its smaller parameter count but fails to achieve adequate performance. In this paper, we address this trade-off by introducing an attention mechanism within a lightweight network to compensate for its reduced information capacity, focusing attention on the relevant regions at each layer. The resulting Energy-efficient, Lightweight, and computationally Thin Network (ELiTNet) is proposed for semantic segmentation tasks and demonstrated on the segmentation of retinal arteries and the optic disc in digital color fundus images. Experiments compare against the SUMNet, U-Net, UNeXt, and ResUNet++ architectures on five publicly available datasets: HRF, DRIVE, AMD, IDRiD, and REFUGE. We demonstrate that the proposed method consumes 2.2x less energy, uses 155.2x fewer parameters, and requires 41.22x fewer GFLOPs, while ELiTNet maintains the best performance with an F1-score of 97.22% and a Jaccard index of 94.74% on the IDRiD dataset.
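To make the core idea concrete, the sketch below shows one way a per-layer attention gate can be embedded in a lightweight convolutional block: a depthwise-separable convolution keeps the parameter and FLOP budget small, and a single-channel sigmoid map re-weights the features so the block concentrates on relevant spatial regions. This is a minimal illustration under assumed design choices, not the authors' ELiTNet implementation; the class name LightAttnBlock and all layer sizes are hypothetical.

```python
# Minimal sketch: spatial attention inside a lightweight conv block.
# Hypothetical illustration of the abstract's idea, not the ELiTNet code.
import torch
import torch.nn as nn


class LightAttnBlock(nn.Module):
    """Depthwise-separable conv block gated by a 1x1 spatial attention map."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise + pointwise convolutions keep the parameter count low.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        # A single-channel attention map highlights relevant spatial regions.
        self.attn = nn.Sequential(
            nn.Conv2d(out_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = torch.relu(self.bn(self.pointwise(self.depthwise(x))))
        return feat * self.attn(feat)  # re-weight features per location


if __name__ == "__main__":
    block = LightAttnBlock(3, 16)
    y = block(torch.randn(1, 3, 64, 64))  # e.g., a fundus image patch
    print(y.shape)  # torch.Size([1, 16, 64, 64])
```

Stacking such gated blocks in an encoder-decoder would give a segmentation network in the spirit described above, trading raw capacity for learned spatial focus.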
