POST-STIMULUS TIME-DEPENDENT EVENT DESCRIPTOR

Shane Harrigan, Sonya Coleman, Dermot Kerr, Pratheepan Yogarajah, Zheng Fang, Chengdong Wu

Length: 14:26
28 Oct 2020

Event-based image processing is a relatively new domain in the field of computer vision. Much research has been carried out on adapting event-based data to comply with established techniques from frame-based computer vision. In contrast, this paper presents a descriptor designed specifically for direct use with event-based data; it can therefore be considered a pure event-based vision descriptor, as it only uses events emitted from event-based vision devices without transforming the data to accommodate frame-based vision techniques. This novel descriptor is known as the Post-stimulus Time-dependent Event Descriptor (P-TED). P-TED comprises two features extracted from event data which describe motion and the underlying pattern of transmission, respectively. Furthermore, a framework is presented which leverages the P-TED descriptor to classify motions within event data. This framework is compared against another state-of-the-art event-based vision descriptor as well as an established frame-based approach.
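To make the data model concrete, the Python sketch below shows how a descriptor might operate directly on a raw event stream in the common (x, y, timestamp, polarity) format produced by event-based vision devices. The two feature functions are illustrative placeholders for a motion cue and a transmission-pattern cue; they are assumptions for exposition only and are not the P-TED features defined in the paper.

    import numpy as np

    def load_events():
        """Return a toy event stream; columns are x, y, t (microseconds), polarity.
        Placeholder data, standing in for output of a real event camera."""
        rng = np.random.default_rng(0)
        n = 1000
        x = rng.integers(0, 128, n)
        y = rng.integers(0, 128, n)
        t = np.sort(rng.integers(0, 1_000_000, n))
        p = rng.choice([-1, 1], n)
        return np.stack([x, y, t, p], axis=1)

    def motion_feature(events):
        """Illustrative motion cue (not the paper's feature): mean spatial
        displacement between consecutive events, scaled by their temporal spacing."""
        dx = np.diff(events[:, 0])
        dy = np.diff(events[:, 1])
        dt = np.diff(events[:, 2]).clip(min=1)  # avoid division by zero
        return np.mean(np.hypot(dx, dy) / dt)

    def transmission_feature(events, bins=16):
        """Illustrative transmission-pattern cue (not the paper's feature):
        normalised histogram of inter-event intervals."""
        dt = np.diff(events[:, 2])
        hist, _ = np.histogram(dt, bins=bins, density=True)
        return hist

    if __name__ == "__main__":
        ev = load_events()
        # Concatenate the two cues into a single descriptor vector, which a
        # downstream classifier could consume to label motions.
        descriptor = np.concatenate([[motion_feature(ev)], transmission_feature(ev)])
        print("descriptor length:", descriptor.shape[0])

The key point the sketch illustrates is that no frame reconstruction step is needed: both cues are computed from the event tuples themselves, which is what distinguishes a pure event-based descriptor from frame-based adaptations.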
