
Benchmarking Convolutional Neural Network Inference on Low-Power Edge Devices

Oscar Ferraz (IT, Dep. of Electrical and Computer Engineering, University of Coimbra, Portugal); Helder Araujo (University of Coimbra); Vitor Silva (IT, Dep. of Electrical and Computer Engineering, University of Coimbra, Portugal); Gabriel Falcao (IT, University of Coimbra, Portugal)

07 Jun 2023

The massive adoption of IoT devices, recent improvements in the efficiency of AI systems, and the increase in edge computational power have accelerated the deployment of edge AI systems. Implementing these systems on low-power embedded devices scattered across the edge of a network reduces latency and cost compared with traditional cloud-based AI computing systems. Motivated by the availability of low-complexity AI models and of low-power embedded systems on the market, this paper provides a comparative study of the inference performance of convolutional neural networks across different edge devices, exploiting low-power GPUs and dedicated AI hardware. The benchmarks achieve 864 inferences/s on the Jetson AGX Xavier board running a pre-trained SqueezeNet, with a power efficiency of 52.6 inferences/s per Watt. The dedicated Movidius neural stick requires only 1.5 W to process 24.2 inferences/s.
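The abstract does not specify the software stack used for benchmarking, so the following is only an illustrative sketch of how inference throughput (inferences/s) on a pre-trained SqueezeNet could be measured, assuming a PyTorch/torchvision environment; the model variant, warm-up count, and run count are arbitrary choices for the example.

```python
import time
import torch
from torchvision import models

# Load a pre-trained SqueezeNet and move it to the GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
model = model.to(device).eval()

# Dummy ImageNet-sized input; batch size 1 mimics single-image edge inference.
x = torch.randn(1, 3, 224, 224, device=device)

with torch.no_grad():
    # Warm-up iterations so one-time initialisation does not skew the timing.
    for _ in range(20):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()

    # Timed run: throughput = completed inferences / elapsed seconds.
    n_runs = 500
    start = time.perf_counter()
    for _ in range(n_runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{n_runs / elapsed:.1f} inferences/s")
```

Dividing the measured throughput by the board's average power draw (e.g. from an external power meter or the board's on-board sensors) yields the inferences/s per Watt figure quoted above.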
