Paper

Deep Frame Prediction for Video Coding

Volume Number: 30
Issue Number: 7
Pages: 1843–1855
Publication Date: June 2019
Author(s): H. Choi and I. V. Bajić


Abstract

We propose a novel frame prediction method using a deep neural network (DNN), with the goal of improving video coding efficiency. The proposed DNN makes use of decoded frames, available at both the encoder and the decoder, to predict the texture of the current coding block. Unlike conventional inter-prediction, the proposed method does not require any motion information to be transferred between the encoder and the decoder. Still, both uni-directional and bi-directional prediction are possible with the proposed DNN, enabled by a temporal index channel used in addition to the color channels. In this paper, we develop a DNN trained jointly for uni-directional and bi-directional prediction, as well as separate networks for each prediction type, and compare the efficacy of the two approaches. The proposed DNNs are compared with conventional motion-compensated prediction in the latest video coding standard, High Efficiency Video Coding (HEVC), in terms of BD-bitrate. The experiments show that the proposed joint DNN (for both uni-directional and bi-directional prediction) reduces the luminance bitrate by about 4.4%, 2.4%, and 2.3% in the low delay P, low delay, and random access configurations, respectively. In addition, using separately trained DNNs brings further bit savings of about 0.3%-0.5%.
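The abstract notes that a temporal index channel, stacked alongside the color channels, lets a single network handle both uni-directional (past references only) and bi-directional (past and future references) prediction. The sketch below illustrates one plausible way such an input tensor could be assembled; the function name, patch shapes, and offset encoding are illustrative assumptions, not the paper's actual preprocessing.

```python
import numpy as np

def build_dnn_input(ref_frames, ref_indices, target_index):
    """Stack reference-frame patches with a per-frame temporal index channel.

    ref_frames:   list of (H, W, C) decoded patches (hypothetical shapes)
    ref_indices:  display-order indices of the reference frames
    target_index: display-order index of the frame being predicted

    Each reference contributes its color channels plus one constant channel
    encoding its temporal offset from the target, so one network can tell a
    past-only (uni-directional) input from a past+future (bi-directional) one.
    """
    planes = []
    for frame, idx in zip(ref_frames, ref_indices):
        h, w, _ = frame.shape
        offset = np.full((h, w, 1), float(idx - target_index), dtype=np.float32)
        planes.append(np.concatenate([frame.astype(np.float32), offset], axis=-1))
    return np.concatenate(planes, axis=-1)

# Bi-directional example: one past and one future reference for frame 5.
past = np.zeros((64, 64, 1), dtype=np.float32)    # hypothetical luma patch
future = np.ones((64, 64, 1), dtype=np.float32)   # hypothetical luma patch
x = build_dnn_input([past, future], ref_indices=[4, 6], target_index=5)
print(x.shape)  # (64, 64, 4): two luma planes + two temporal index planes
```

For uni-directional prediction the same function would be called with past references only (e.g. indices 3 and 4), and the strictly negative offsets in the index channels signal that configuration to the network.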
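The coding-gain figures above are reported as BD-bitrate, the Bjøntegaard-delta rate between two rate-distortion curves. As a reference point, a common implementation of that metric fits a cubic polynomial to log-rate as a function of PSNR for each codec and integrates the difference over the overlapping PSNR range; the sample rate/PSNR points below are made up for illustration and are not the paper's measurements.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjøntegaard-delta rate (%): average bitrate change of the test codec
    relative to the reference codec at equal quality. Negative = bit savings."""
    lr_ref = np.log(np.asarray(rates_ref, dtype=float))
    lr_test = np.log(np.asarray(rates_test, dtype=float))
    # Cubic fit of log-rate vs. PSNR for each codec.
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    # Integrate over the overlapping PSNR interval.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyint(p_ref)
    int_test = np.polyint(p_test)
    avg_ref = (np.polyval(int_ref, hi) - np.polyval(int_ref, lo)) / (hi - lo)
    avg_test = (np.polyval(int_test, hi) - np.polyval(int_test, lo)) / (hi - lo)
    return (np.exp(avg_test - avg_ref) - 1.0) * 100.0

# Synthetic example: test codec needs 10% fewer bits at every quality level.
psnr = [30.0, 32.0, 34.0, 36.0]
rates_ref = np.array([100.0, 200.0, 400.0, 800.0])   # kbps, illustrative
rates_test = 0.9 * rates_ref
print(round(bd_rate(rates_ref, psnr, rates_test, psnr), 2))  # -10.0
```

A BD-rate of about -4.4%, as reported for the joint DNN in the low delay P configuration, thus means the proposed codec spends roughly 4.4% fewer luminance bits than the HEVC anchor at the same quality.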