CTGAN: Cloud Transformer Generative Adversarial Network
Gi-Luen Huang, Pei-Yuan Wu
SPS
Prior research has demonstrated that multi-hypothesis motion-compensated prediction (MCP) can theoretically provide better prediction quality than single-reference MCP, thereby improving compression efficiency in video coding. However, existing multi-hypothesis MCP methods typically require either additional rate cost to transmit the motion vectors, or significant decoding complexity to conduct the motion search at the decoder end. In this work, we propose a novel scheme that materializes multi-hypothesis MCP with no additional rate cost and no extra motion search on either the encoder or the decoder side. Various approaches to synthesizing the available multiple references into the inter prediction are presented. We experimentally demonstrate that the proposed scheme provides considerable and consistent coding gains across a wide range of operating points.
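The core idea behind multi-hypothesis MCP can be illustrated with a minimal sketch (not the paper's actual synthesis method): the inter prediction is formed by combining several motion-compensated reference blocks instead of using a single one, and averaging hypotheses with independent prediction errors reduces the error variance. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def single_hypothesis(ref_block):
    # Single-reference MCP: the prediction is one motion-compensated block.
    return ref_block

def multi_hypothesis(ref_blocks, weights=None):
    # Multi-hypothesis MCP (sketch): weighted combination of several
    # motion-compensated reference blocks; equal weights by default.
    blocks = np.stack(ref_blocks).astype(np.float64)
    if weights is None:
        weights = np.full(len(ref_blocks), 1.0 / len(ref_blocks))
    return np.tensordot(np.asarray(weights, dtype=np.float64), blocks, axes=1)

# Toy experiment: each reference equals the true block plus independent noise,
# standing in for independent motion-compensation errors.
rng = np.random.default_rng(0)
true_block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
refs = [true_block + rng.normal(0, 4, size=(8, 8)) for _ in range(2)]

mse_single = np.mean((single_hypothesis(refs[0]) - true_block) ** 2)
mse_multi = np.mean((multi_hypothesis(refs) - true_block) ** 2)
```

Averaging two hypotheses with independent zero-mean errors roughly halves the expected squared error, which is the theoretical advantage the abstract refers to; the practical challenge the paper addresses is obtaining the extra hypotheses without extra rate or decoder-side search.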