In-Sensor & Neuromorphic Computing are all you need for Efficient Computer Vision

Gourav Datta (University of Southern California); Zeyu Liu (University of Southern California); Md Abdullah-Al Kaiser (University of Southern California); Souvik Kundu (Intel Labs); Joe Mathai (Information Sciences Institute); Zihan Yin (University of Southern California); Ajey Jacob (University of Southern California); Akhilesh Jaiswal (University of Southern California); Peter A. Beerel (University of Southern California)

09 Jun 2023

Due to their high activation sparsity and use of accumulates (ACs) instead of expensive multiply-and-accumulates (MACs), neuromorphic spiking neural networks (SNNs) have emerged as a promising low-power alternative to traditional deep neural networks (DNNs) for several computer vision (CV) applications. However, most existing SNNs require multiple time steps for acceptable inference accuracy, which hinders real-time deployment and increases spiking activity and, consequently, energy consumption. Recent works have proposed direct encoding, which feeds the analog pixel values directly into the first layer of the SNN, to significantly reduce the number of time steps. Although the MAC overhead of the first layer under direct encoding is negligible for deep SNNs and SNN-based CV processing is efficient, the data transfer between the image sensor and the downstream processing consumes significant bandwidth and may dominate the total system energy. To mitigate this concern, we propose an in-sensor computing hardware-software co-design framework for SNNs targeting image recognition tasks. Our approach reduces the bandwidth between sensing and processing by 12-96x and the resulting total energy by 2.32x compared to traditional CV processing, at the cost of a 3.8% reduction in accuracy on ImageNet.
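To make the direct-encoding idea concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes leaky integrate-and-fire (LIF) neurons with a hard threshold and soft reset, and all class and parameter names (LIFSpike, DirectEncodedSNN, threshold, leak) are illustrative. The key point it shows is that only the first convolution sees analog pixels (and thus performs MACs); every later layer receives binary spikes, so its synaptic operations reduce to accumulates.

```python
# Sketch of direct encoding for an SNN (assumptions: PyTorch, LIF neurons,
# hard firing threshold, soft reset; names are illustrative).
import torch
import torch.nn as nn

class LIFSpike(nn.Module):
    """Leaky integrate-and-fire neuron with a hard threshold."""
    def __init__(self, threshold=1.0, leak=0.5):
        super().__init__()
        self.threshold = threshold
        self.leak = leak
        self.mem = None  # membrane potential, carried across time steps

    def reset(self):
        self.mem = None

    def forward(self, x):
        if self.mem is None:
            self.mem = torch.zeros_like(x)
        self.mem = self.leak * self.mem + x            # leaky integration
        spikes = (self.mem >= self.threshold).float()  # fire on threshold
        self.mem = self.mem - spikes * self.threshold  # soft reset
        return spikes

class DirectEncodedSNN(nn.Module):
    """First layer consumes analog pixels (MACs); later layers receive
    binary spikes, so their synaptic operations reduce to ACs."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)   # analog input -> MACs
        self.lif1 = LIFSpike()
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)  # spike input -> ACs
        self.lif2 = LIFSpike()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, img, time_steps=4):
        self.lif1.reset(); self.lif2.reset()
        out = 0.0
        for _ in range(time_steps):
            # Direct encoding: the same analog frame is applied at every
            # time step; conv1 produces the current that drives spiking.
            s1 = self.lif1(self.conv1(img))
            s2 = self.lif2(self.conv2(s1))
            out = out + self.fc(self.pool(s2).flatten(1))
        return out / time_steps  # rate-averaged logits

# Usage sketch:
# model = DirectEncodedSNN()
# logits = model(torch.rand(1, 3, 32, 32), time_steps=4)
```

Note that the hard threshold is non-differentiable, so training such a network typically relies on a surrogate gradient; the sketch above covers inference only.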
