Working towards transparent application of machine learning in video processing
Luka Murn, Marc Gorriz Blanch, Maria Santamaria, Fiona Rivera, Marta Mrak
In recent years, innovations in broadcast technology applied to video production and delivery have been largely driven by the proliferation of Artificial Intelligence (AI) and Machine Learning (ML). For example, ML models leveraging trained neural networks are providing breakthroughs in video compression codecs for optimal delivery of high-quality video and in automated image enhancement tools (such as auto-colourisation and increased resolution). Our work on interpretable AI aims at opening up the black boxes of such neural networks to examine the inner workings that underpin these advanced technologies. Providing insight into what the networks have learned offers opportunities for the systems that incorporate them to be further optimized and used in a more efficient and trustworthy manner. Our research aims not only to explain and verify the outputs of AI models, but also to facilitate the development of models that require fewer computing resources to support edge computing.
The presented demo introduces a series of principles to follow as guidelines towards transparency in applying ML models in video processing. More specifically, the demo will interactively show video coding and enhancement applications that embody these principles.