CNN ORIENTED COMPLEXITY REDUCTION OF VVC INTRA ENCODER
Alexandre Tissier, Wassim Hamidouche, Jarno Vanne, Franck Galpin, Daniel Menard
The Joint Video Experts Team (JVET) is currently developing the next-generation MPEG/ITU video coding standard, Versatile Video Coding (VVC), with the ultimate goal of doubling the coding efficiency over the state-of-the-art HEVC standard. The latest version of the VVC reference encoder, VTM6.1, improves intra coding efficiency by 24% over the HEVC reference encoder HM16.20, but at the expense of a 27-fold increase in encoding time. The complexity overhead of VVC primarily stems from its novel block partitioning scheme, which complements the Quad-Tree (QT) split with Multi-Type Tree (MTT) partitioning in order to better fit the local variations of the video signal. This work reduces the block partitioning complexity of VTM6.1 through the use of Convolutional Neural Networks (CNNs). For each 64×64 Coding Unit (CU), a CNN is trained to predict a probability vector that is used to speed up the block partitioning search during encoding. Our solution is shown to reduce the intra encoding complexity of VTM6.1 by 51.5% with a bitrate increase of only 1.45%.
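To make the idea concrete, the sketch below illustrates one possible form such a predictor could take: a small CNN that maps a 64×64 luma CU to a vector of split probabilities, which the encoder could then threshold to prune unlikely partitions. The architecture, the output length NUM_SPLIT_PROBS, and the pruning threshold are illustrative assumptions, not the network or decision rule used in the paper.

```python
# Minimal sketch, assuming a PyTorch-style CNN that outputs per-split probabilities.
# All layer sizes and NUM_SPLIT_PROBS are hypothetical placeholders.
import torch
import torch.nn as nn

NUM_SPLIT_PROBS = 480  # assumed length of the partition probability vector


class PartitionProbCNN(nn.Module):
    def __init__(self, num_probs: int = NUM_SPLIT_PROBS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),              # -> 64 x 4 x 4
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, num_probs),
            nn.Sigmoid(),  # each entry read as an independent split probability
        )

    def forward(self, cu_luma: torch.Tensor) -> torch.Tensor:
        # cu_luma: (batch, 1, 64, 64) normalized luma samples of one CU
        return self.head(self.features(cu_luma))


if __name__ == "__main__":
    # Example: predict split probabilities for one 64x64 CU and mask unlikely splits.
    model = PartitionProbCNN()
    cu = torch.rand(1, 1, 64, 64)           # placeholder luma block in [0, 1]
    probs = model(cu)                        # (1, NUM_SPLIT_PROBS)
    threshold = 0.2                          # hypothetical pruning threshold
    likely_splits = probs > threshold        # mask of partitions worth evaluating
    print(likely_splits.sum().item(), "splits kept out of", probs.numel())
```

In such a scheme, the encoder would only evaluate the rate-distortion cost of partitions whose predicted probability exceeds the threshold, trading a small bitrate increase for a large reduction in the number of splits tested.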