Accepted Special Sessions

1. Special session: Processing and coding for immersive video

Session chairs:

Marek Domański, Poznań University of Technology, Poznań, Poland, marek.domanski@put.poznan.pl
Gauthier Lafruit, Free University of Brussels, Brussels, Belgium, Gauther.Lafruit@ulc.ac.be

Description: The session will provide an opportunity to report and discuss recent research results in immersive video, i.e., scene representations that allow a viewer to watch video in any direction and/or to virtually walk around a scene. Contributions presenting original research results relevant to this area are welcome. The topics of the session include, but are not limited to: immersive video (e.g., 3DoF+, 6DoF) representation and compression, including recent standardization-related activities; free-viewpoint television; depth estimation and depth/color refinement for immersive video; virtual view synthesis/rendering techniques able to produce a viewport from views and patches corresponding to diverse virtual camera positions; etc.
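
As a purely illustrative aside on the virtual view synthesis topic listed above, the sketch below shows a minimal depth-image-based rendering step in Python: pixels of one source view are back-projected using their depth and forward-warped into a virtual camera. All function and variable names are hypothetical, the camera model is a bare pinhole, and occlusion handling and hole filling are omitted.

```python
import numpy as np

def synthesize_view(color, depth, K, R, t):
    """Forward-warp a color+depth view to a virtual camera (hypothetical,
    simplified sketch: pinhole model, no occlusion handling or hole filling).

    color : (H, W, 3) uint8 source view
    depth : (H, W) float32 depth in metres for each source pixel
    K     : (3, 3) shared camera intrinsic matrix
    R, t  : rotation (3, 3) and translation (3,) from source to virtual camera
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project to 3-D points in the source camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # Transform into the virtual camera frame and re-project.
    pts_v = R @ pts + t.reshape(3, 1)
    proj = K @ pts_v
    uv = (proj[:2] / np.maximum(proj[2:], 1e-6)).round().astype(int)

    # Z-buffered splat of source pixels into the virtual viewport.
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    src = color.reshape(-1, 3)
    for i in range(uv.shape[1]):
        x, y = uv[0, i], uv[1, i]
        if 0 <= x < W and 0 <= y < H and pts_v[2, i] < zbuf[y, x]:
            zbuf[y, x] = pts_v[2, i]
            out[y, x] = src[i]
    return out
```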

2. Special session: AVS3 Video Coding Standard

Session chairs:

Siwei Ma, Professor, Peking University, China, swma@pku.edu.cn
Xiaozhen Zheng, Senior Engineer at SZ DJI Technologies, China, xiaozhen.zheng@dji.com

Description: The AVS3 video coding standard (AVS3) is a new-generation standard developed by the China AVS working group. Compared with previous video coding standards such as AVS2 and HEVC, AVS3 achieves significant coding efficiency improvements, especially for 4K/8K ultra-high-definition video, and is very promising for future 4K/8K and VR applications. This special session will present an essential introduction to AVS3, including its key coding tools and a complexity analysis for its implementation.
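
As background for the coding-efficiency comparisons mentioned above, the following sketch illustrates how such gains are commonly quantified, using a simplified Bjøntegaard delta-rate (BD-rate) computation. The rate/PSNR points are invented for illustration and are not AVS3 results.

```python
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Simplified Bjontegaard delta-rate: average bitrate difference (%) of the
    test codec versus the anchor over their overlapping quality range.
    Illustrative sketch only; the data points below are made up."""
    lr_a, lr_t = np.log10(rates_anchor), np.log10(rates_test)

    # Fit cubic polynomials: log-rate as a function of PSNR.
    pa = np.polyfit(psnr_anchor, lr_a, 3)
    pt = np.polyfit(psnr_test, lr_t, 3)

    # Integrate both fits over the common PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    int_t = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)

    # Average log-rate difference, converted to a percentage.
    avg_diff = (int_t - int_a) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# Example with invented rate (kbps) / PSNR (dB) points:
anchor = ([1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5])
test = ([800, 1600, 3200, 6400], [34.2, 36.8, 39.3, 41.8])
print(f"BD-rate: {bd_rate(anchor[0], anchor[1], test[0], test[1]):.1f}%")
```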

3. Special session: Neural Network Technology in Future Image/Video Coding

Session chairs:

Heming Sun, Assistant Professor, Waseda University, hemingsun@aoni.waseda.jp
Jing Zhou, Researcher, Fujitsu, zhoujing@cn.fujitsu.com
Jiro Katto, Professor, Waseda University, katto@waseda.jp

Description: With the explosion of video data, video compression has become essential. To improve compression capability, many new coding features have been adopted in past compression standards. However, the basic coding structure of these standards has remained fixed for more than a decade. On the other hand, neural network-based methods have recently shown strong performance in image and video compression thanks to their powerful non-linear mapping and the feature extraction performed by deep learned layers. Therefore, this session seeks neural network-based proposals that can raise the performance of future video coding.
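
To make the idea of neural network-based compression concrete, here is a minimal, hypothetical PyTorch sketch of a learned image codec: a convolutional analysis transform, quantization approximated by additive uniform noise during training, and a synthesis transform. The architecture and layer sizes are assumptions for illustration only, not a proposal solicited by this session.

```python
import torch
import torch.nn as nn

class TinyImageCodec(nn.Module):
    """Minimal learned image-compression sketch: a convolutional analysis
    transform (encoder), quantization approximated by additive uniform noise
    during training, and a synthesis transform (decoder). All sizes are
    illustrative assumptions."""

    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):
        y = self.encoder(x)                      # latent representation
        if self.training:
            # Additive uniform noise as a differentiable stand-in for rounding.
            y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)
        else:
            y_hat = torch.round(y)               # hard quantization at test time
        return self.decoder(y_hat), y_hat

# One toy distortion-only training step (a real codec adds a rate term).
model = TinyImageCodec()
x = torch.rand(1, 3, 64, 64)
x_hat, _ = model(x)
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
```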

4. Special Session: Tools currently discussed in the standardization of Versatile Video Coding

Session chairs:

Jonathan Pfaff, Fraunhofer, HHI, jonathan.pfaff@hhi.fraunhofer.de
Shan Liu, Tencent, shanl@tencent.com
Andrew Segall, Sharp Labs of America, asegall@sharplabs.com
Detlev Marpe, Fraunhofer HHI, detlev.marpe@hhi.fraunhofer.de

Description: The demand for streaming and storing video is rising while transmission capacity and memory remain limited. This discrepancy is one of the driving forces behind research on video coding technologies with higher compression efficiency. Currently, experts of the Joint Video Experts Team (JVET) are collaboratively working on the design of the Versatile Video Coding (VVC) standard. During the design of VVC, several new ideas in the area of video compression have emerged. The aim of this special session is to provide insight into some of these ideas, with a particular focus on coding tools proposed at the latest stage of the VVC project.
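
As one concrete example of the kind of tool design explored during VVC development, the sketch below lists the split modes a toy encoder could try for a block, in the spirit of quadtree-plus-multi-type-tree partitioning. The rules shown are a loose simplification for illustration and do not reproduce the normative VVC partitioning constraints.

```python
def allowed_splits(w, h, min_size=8):
    """List the split modes a toy encoder could try for a w x h block, loosely
    inspired by VVC's quadtree-plus-multi-type-tree partitioning. Illustrative
    simplification; the normative VVC rules are far more detailed."""
    modes = {"no_split": [(w, h)]}
    if w == h and w >= 2 * min_size:
        modes["quad"] = [(w // 2, h // 2)] * 4
    if w >= 2 * min_size:
        modes["binary_vertical"] = [(w // 2, h)] * 2
    if h >= 2 * min_size:
        modes["binary_horizontal"] = [(w, h // 2)] * 2
    if w >= 4 * min_size:
        modes["ternary_vertical"] = [(w // 4, h), (w // 2, h), (w // 4, h)]
    if h >= 4 * min_size:
        modes["ternary_horizontal"] = [(w, h // 4), (w, h // 2), (w, h // 4)]
    return modes

# For a 32x16 block these toy rules allow binary splits in both directions and
# a vertical ternary split, but no quad split and no horizontal ternary split.
for mode, sub_blocks in allowed_splits(32, 16).items():
    print(mode, sub_blocks)
```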

