Grand Challenge on Short Video Coding

Introduction

Video coding technologies play an increasingly critical role in the mobile internet era, where the demand for high-quality video content is enormous. After High Efficiency Video Coding (HEVC) [1] was successfully standardized in 2013, experts from the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) began collaborating in 2018 on a new-generation video coding standard named Versatile Video Coding (VVC).

Short video, a form of user-generated content, has become extremely popular in recent years. It serves a wide range of applications, including research, news, entertainment, and advertising. Each day, billions of short videos are distributed to mobile users, and tens of millions of new short videos are produced and uploaded to mobile apps such as Tik Tok and Vigo Video. These short videos typically require a second round of compression for storage and transmission. In addition, short videos may have distinctive characteristics, such as frequent scene changes and special effects. Compression technologies beyond the existing and ongoing video coding standards are therefore worth exploring further.

This challenge focuses on identifying effective technologies beyond the state of the art for short video coding. A set of short videos and target bitrates will be provided to participants. Compressed bitstreams, decoded YUV files, and executable encoder and decoder binaries must be provided by the submission deadline. In addition, each participant will be asked to submit a document describing the proposed scheme, and authors may be invited to present their paper orally at the meeting.

Two tracks are supported in this challenge: encoder optimization and new coding tools. For the encoder optimization track, the submitted bitstreams must be decodable by an unmodified baseline decoder. For the new coding tools track, both the encoder and the decoder may be modified, preserving existing tools while incorporating new ones. No formal complexity constraints apply. For both tracks, participants are requested to report the objective coding gains in terms of BD-Rate, together with sample runtime and complexity.
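
For reference, the sketch below illustrates one common way to compute BD-Rate (the Bjøntegaard delta rate) from four rate/PSNR points per codec, using a cubic fit in the log-rate domain. It is not an official challenge script, and the rate/PSNR values in the example are made up for illustration.

    import numpy as np

    def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
        """Average bitrate difference (%) of the test codec relative to the anchor."""
        # Fit a cubic polynomial of log-rate as a function of PSNR for each codec.
        fit_a = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)
        fit_t = np.polyfit(psnr_test, np.log10(rate_test), 3)

        # Integrate both fits over the overlapping PSNR range.
        lo = max(min(psnr_anchor), min(psnr_test))
        hi = min(max(psnr_anchor), max(psnr_test))
        int_a = np.polyval(np.polyint(fit_a), hi) - np.polyval(np.polyint(fit_a), lo)
        int_t = np.polyval(np.polyint(fit_t), hi) - np.polyval(np.polyint(fit_t), lo)

        # Average log-rate difference, converted back to a percentage
        # (negative values mean bitrate savings over the anchor).
        avg_log_diff = (int_t - int_a) / (hi - lo)
        return (10 ** avg_log_diff - 1) * 100

    # Example with made-up rate (kbps) / PSNR (dB) points for four rate points:
    print(bd_rate([800, 1200, 1800, 2700], [34.1, 36.0, 37.8, 39.5],
                  [760, 1150, 1700, 2600], [34.2, 36.1, 37.9, 39.6]))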

Task

The challenge asks participants to compress 8 videos at 536x960 resolution. Two anchors are provided by the organizers: one generated with the High Efficiency Video Coding (HEVC, ITU-T Rec. H.265 | ISO/IEC 23008-2) reference software HM version 16.20, and one with the Versatile Video Coding (VVC) reference software VTM version 4.0. Participating teams may implement their solutions on top of either HM or VTM. The submitted results should have bitrates similar to the corresponding anchors, with no more than +/-3% deviation; a sketch of this check is given below.
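
As an illustration only, the following sketch checks the +/-3% bitrate constraint, assuming the bitrate is derived from the bitstream file size, the number of coded frames, and the frame rate. The file names, frame count, and frame rate are placeholders, not the actual test conditions.

    import os

    def bitrate_kbps(bitstream_path, num_frames, fps):
        """Bitrate in kbps implied by the bitstream file size and sequence duration."""
        bits = os.path.getsize(bitstream_path) * 8
        return bits * fps / num_frames / 1000.0

    def within_tolerance(test_kbps, anchor_kbps, tol=0.03):
        """True if the test bitrate deviates from the anchor by no more than tol."""
        return abs(test_kbps - anchor_kbps) / anchor_kbps <= tol

    # Placeholder file names, frame count, and frame rate.
    anchor = bitrate_kbps("anchor.bin", num_frames=300, fps=30)
    test = bitrate_kbps("submission.bin", num_frames=300, fps=30)
    print(f"test={test:.1f} kbps, anchor={anchor:.1f} kbps, "
          f"ok={within_tolerance(test, anchor)}")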

Tentative Timetable

  • May 10th, 2019: Release of the test dataset and anchor results; opening of test result submission. Participants should register for the challenge with the contact person to obtain links for test data access and result uploading.
  • Jul 31st, 2019: Participants upload compressed bitstreams, decoded YUV files, and executable encoder and decoder binaries.
  • Aug 7th, 2019: Participants submit a paper describing the techniques in their solutions.
  • Aug 25th, 2019: Regular paper acceptance notification.
  • Sep 5th, 2019: Camera-ready regular paper submission deadline.

Evaluation

For the encoder optimization track, coding efficiency and sample runtime are the two most important aspects. For the new coding tools track, objective measurements including PSNR, SSIM, and VMAF will be used. Subjective evaluation will be performed offline on a set of selected videos.
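
As an illustration, the sketch below computes the average luma PSNR between two 8-bit YUV 4:2:0 files at the challenge resolution. It is not the official measurement tool, and SSIM and VMAF would typically be computed with existing implementations such as FFmpeg or the VMAF reference software.

    import numpy as np

    def yuv420_luma_frames(path, width, height):
        """Yield the luma plane of each frame from an 8-bit YUV 4:2:0 file."""
        frame_bytes = width * height * 3 // 2
        with open(path, "rb") as f:
            while True:
                buf = f.read(frame_bytes)
                if len(buf) < frame_bytes:
                    break
                yield np.frombuffer(buf[:width * height],
                                    dtype=np.uint8).reshape(height, width)

    def average_luma_psnr(ref_path, dec_path, width=536, height=960):
        """Average per-frame luma PSNR (dB) between reference and decoded YUV files."""
        psnrs = []
        for ref, dec in zip(yuv420_luma_frames(ref_path, width, height),
                            yuv420_luma_frames(dec_path, width, height)):
            mse = np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
            psnrs.append(100.0 if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse))
        return sum(psnrs) / len(psnrs)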

In addition to these criteria, intellectual merit and the quality of the paper presentations will also be taken into consideration when deciding the winners. The final decision will be made at the meeting by an invited team of experts. New normative coding tools built on top of VVC are preferred.

Awards

Bytedance will sponsor the awards of this grand challenge.

Organizers

Jizheng Xu, Senior Scientist of Bytedance
Li Zhang, Head of Video Coding Standard Team of Bytedance

Contact

For registration, test data, and submission information, please contact Li Zhang <lizhang.idm@bytedance.com>.


Copyright © 2019 Ningbo Institute of Technology, Zhejiang University