Lecture 10 Oct 2023

Video super-resolution aims to restore low-resolution videos to their high-resolution counterparts. Existing methods typically rely on optical flow to capture inter-frame information, but optical flow assumes linear motion and is sensitive to rapid lighting changes. Event cameras asynchronously output event streams with high temporal resolution, which can reflect nonlinear motion and are robust to lighting changes. Inspired by these characteristics, we propose an Event-driven Bidirectional Video Super-Resolution (EBVSR) framework. First, we propose an event-assisted temporal alignment module that uses events to estimate nonlinear motion for aligning adjacent frames, complementing flow-based methods. Second, we build an event-based frame synthesis module that improves the network's robustness to lighting changes through a bidirectional cross-modal fusion design. Experimental results on synthetic and real-world datasets demonstrate the superiority of our method. The code is available at https://github.com/DachunKai/EBVSR.
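
To make the bidirectional cross-modal fusion idea concrete, below is a minimal PyTorch sketch (not the authors' implementation; all module and argument names are hypothetical) of how frame features propagated in the forward and backward temporal directions might be fused with event features for a single frame.

# Minimal sketch, assuming frame features propagated forward/backward in time
# and an event feature map (e.g., from a voxel-grid event representation),
# all with the same spatial size and channel count. Illustrative only.
import torch
import torch.nn as nn

class BidirectionalCrossModalFusion(nn.Module):
    """Fuse forward/backward frame features with event features for one frame."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Project event features into the frame feature space.
        self.event_proj = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Merge forward-propagated, backward-propagated, and event features.
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * channels, channels, kernel_size=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, feat_fwd, feat_bwd, feat_event):
        # All inputs: (N, C, H, W). Events are projected, then the three
        # streams are concatenated and fused with a residual connection.
        ev = self.event_proj(feat_event)
        fused = self.fuse(torch.cat([feat_fwd, feat_bwd, ev], dim=1))
        return fused + feat_fwd

if __name__ == "__main__":
    n, c, h, w = 1, 64, 32, 32
    fusion = BidirectionalCrossModalFusion(channels=c)
    out = fusion(torch.randn(n, c, h, w), torch.randn(n, c, h, w), torch.randn(n, c, h, w))
    print(out.shape)  # torch.Size([1, 64, 32, 32])

The residual connection here is only one plausible design choice; the paper's actual fusion strategy may differ, and the repository linked above is the authoritative reference.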
