Interview Motion Compensated Joint Decoding for Compressively Sampled Multiview Video Streams
In this paper, we design a novel multiview video encoding/decoding architecture for wireless multiview video streaming applications, e.g., 360-degree video and Internet of Things (IoT) multimedia sensing, based on distributed video coding and compressed sensing principles. Specifically, we focus on joint decoding of independently encoded, compressively sampled multiview video streams. We first propose a novel side-information (SI) generation method based on a new interview motion compensation algorithm for multiview video joint reconstruction at the decoder. We then propose a technique that fuses the received measurements with measurements resampled from the generated SI to perform the final recovery. Based on the proposed joint reconstruction method, we also derive a blind video quality estimation technique that can be used to adapt the video encoding rate at the sensors online so as to guarantee desired quality levels in multiview video streaming. Extensive simulations based on real multiview video traces show the effectiveness of the proposed fusion reconstruction method aided by SI generated through interview motion compensation. They also show that the blind quality estimation algorithm accurately estimates the reconstruction quality.
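To make the fusion step concrete, the following is a minimal sketch, not the authors' algorithm: it simulates a compressively sampled frame block, a side-information (SI) estimate standing in for interview motion compensation, and a recovery that combines the received measurements with the SI via regularized least squares. All signal sizes, the random sampling matrix, and the fusion weight `w` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64   # length of a vectorized frame block (assumption)
m = 32   # number of compressive measurements (assumption)

# True frame block and a simulated SI estimate of it
# (in the paper, SI comes from interview motion compensation).
x = rng.standard_normal(n)
x_si = x + 0.1 * rng.standard_normal(n)

# Random measurement matrix used by the compressive sampler.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Measurements received from the encoder, and measurements
# resampled from the generated SI with the same matrix.
y = Phi @ x
y_si = Phi @ x_si

# Fused recovery: fit the received measurements while staying
# close to the SI estimate. w trades off the two terms (assumption).
w = 0.5
A = np.vstack([Phi, w * np.eye(n)])
b = np.concatenate([y, w * x_si])
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because the measurement system alone is underdetermined (m < n), anchoring the solution to the SI estimate in this way typically yields a much smaller reconstruction error than recovery from the received measurements alone.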
N. Cen et al., "Interview Motion Compensated Joint Decoding for Compressively Sampled Multiview Video Streams," IEEE Transactions on Multimedia, vol. 19, no. 6, pp. 1117–1126, Institute of Electrical and Electronics Engineers (IEEE), Jun. 2017.
The definitive version is available at https://doi.org/10.1109/TMM.2017.2653770
Keywords and Phrases
360 Degrees Video; Compressed Sensing (CS); Internet of Things (IoT); Multiview Video Streaming
Article - Journal
© 2017 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
This work is based upon material supported in part by the U.S. National Science Foundation under Grant CNS1422874, and in part by the U.S. Office of Naval Research under Grant N00014-16-1-2213 and Grant ARMY W911NF-17-1-0034.
This paper was presented in part at the Picture Coding Symposium, San Jose, CA, December 2013.