  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
  • Length: 00:08:41
20 Sep 2021

Under extreme conditions such as excessive light, insufficient light, or high-speed motion, vehicle detection with frame-based cameras remains challenging. Event cameras can capture frame and event data asynchronously, which greatly helps object detection under the aforementioned extreme conditions. We propose a fusion network with an Attention Fusion module for vehicle detection that jointly exploits the features of both frame and event data. The frame and event data are fed separately into a symmetric framework based on Gaussian YOLOv3, which models the bounding box (bbox) coordinates of YOLOv3 as Gaussian parameters and predicts the localization uncertainty of each bbox with a redesigned cross-entropy loss function for the bbox. The feature maps of these Gaussian parameters and the confidence maps at each layer are deeply fused in the Attention Fusion module. Finally, the feature maps of the frame and event data are concatenated and passed to the detection layer to improve detection accuracy. Experimental results show that the proposed method outperforms state-of-the-art methods that use only a traditional frame-based network, as well as joint networks that combine event and frame information.
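The Gaussian YOLOv3 branch described above turns each bbox coordinate into a mean and a variance, so the variance can serve as a localization-uncertainty estimate. As a hedged illustration, here is a minimal PyTorch sketch of that parameterization trained with the standard Gaussian negative log-likelihood; the paper's redesigned cross-entropy bbox loss is not reproduced here, and the function name, tensor layouts, and sigmoid bounding are assumptions made for the example.

    import math
    import torch

    def gaussian_bbox_nll(pred, target, eps=1e-9):
        # pred:   (N, 8) -> (mu_x, mu_y, mu_w, mu_h, sig_x, sig_y, sig_w, sig_h)
        # target: (N, 4) -> ground-truth (x, y, w, h); all names are illustrative.
        mu, raw_sigma = pred[:, :4], pred[:, 4:]
        sigma = torch.sigmoid(raw_sigma) + eps  # keep each predicted variance positive
        # Per-coordinate Gaussian NLL: a small sigma only lowers the loss when mu is
        # accurate, so sigma doubles as the bbox localization uncertainty.
        nll = 0.5 * torch.log(2 * math.pi * sigma) + (target - mu) ** 2 / (2 * sigma)
        return nll.sum(dim=1).mean()

    # Usage with dummy predictions for 4 boxes (shapes assumed, not from the paper):
    loss = gaussian_bbox_nll(torch.randn(4, 8), torch.rand(4, 4))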

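The abstract names an Attention Fusion module that deeply fuses the Gaussian-parameter feature maps and confidence maps of the two branches at each layer, but does not describe its internals. The sketch below assumes a simple squeeze-and-excitation style channel gate over the concatenated frame and event features; the class name, reduction ratio, and shapes are illustrative assumptions rather than the paper's design.

    import torch
    import torch.nn as nn

    class AttentionFusion(nn.Module):
        # Assumed fusion block: re-weight the concatenated frame/event channels
        # with a learned gate, then project back to the original channel count.
        def __init__(self, channels):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),                    # (B, 2C, 1, 1)
                nn.Conv2d(2 * channels, channels // 4, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // 4, 2 * channels, 1),
                nn.Sigmoid(),                               # per-channel weights
            )
            self.reduce = nn.Conv2d(2 * channels, channels, 1)

        def forward(self, frame_feat, event_feat):
            x = torch.cat([frame_feat, event_feat], dim=1)  # (B, 2C, H, W)
            x = x * self.gate(x)                            # attention re-weighting
            return self.reduce(x)                           # back to C channels

    # Usage: fuse 256-channel feature maps from the two symmetric branches.
    fuse = AttentionFusion(256)
    fused = fuse(torch.randn(1, 256, 52, 52), torch.randn(1, 256, 52, 52))

A channel gate like this would let the network emphasize whichever stream is more informative (e.g., events under motion blur, frames in good lighting) before the concatenated features reach the detection layer.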