JOURNAL ARTICLE

Efficient view-temporal prediction structures for multi-view video coding

P.-K. Park, Kwan-Jung Oh, Yo-Sung Ho

Year: 2008   Journal: Electronics Letters   Vol: 44 (2)   Pages: 102-104   Publisher: Institution of Engineering and Technology

Abstract

To compress multi-view video, spatial redundancy between adjacent view sequences must be eliminated in addition to temporal redundancy. View-temporal prediction structures are proposed that can be adapted to the varying characteristics of multi-view videos. The proposed prediction structures achieve better coding performance than the reference prediction structure used in the standardisation of multi-view video coding.
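As a loose illustration of the idea in the abstract (not the authors' actual structure), the sketch below enumerates prediction references in a simple view-temporal scheme: anchor frames predict from the neighbouring view, while non-anchor frames predict temporally from the surrounding anchors of their own view. The GOP size and the specific reference choices are assumptions made here for illustration only.

```python
# Hypothetical sketch of a view-temporal prediction structure (assumed
# parameters; not the structure proposed in the paper). Frames are
# indexed by (view, time); gop_size is the temporal anchor spacing.

def prediction_refs(view: int, time: int, gop_size: int) -> list[tuple[int, int]]:
    """Frames that frame (view, time) may use as prediction references."""
    if time % gop_size == 0:
        # Anchor frame: inter-view prediction from the neighbouring view
        # (anchors of view 0 are intra-coded, so they have no references).
        return [(view - 1, time)] if view > 0 else []
    # Non-anchor frame: bi-directional temporal prediction from the
    # surrounding anchor frames of the same view.
    anchor = (time // gop_size) * gop_size
    return [(view, anchor), (view, anchor + gop_size)]

# Example: a 2-view sequence with a GOP size of 4.
for v in range(2):
    for t in range(5):
        print((v, t), "->", prediction_refs(v, t, gop_size=4))
```

Adjusting where inter-view references are allowed (anchor frames only versus every frame) is the kind of trade-off such structures tune per sequence: more inter-view references improve compression for closely spaced cameras but add decoding dependencies across views.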

Keywords:
Computer science, Redundancy (engineering), Coding, Algorithmic efficiency, Computer vision, Pattern recognition, Artificial intelligence, Mathematics

Metrics

Cited By: 18
FWCI (Field-Weighted Citation Impact): 1.68
Refs: 1
Citation Normalized Percentile: 0.85

Topics

Video Coding and Compression Technologies
Physical Sciences → Computer Science → Signal Processing
Advanced Data Compression Techniques
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Image and Video Quality Assessment
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition

Related Documents

JOURNAL ARTICLE

Temporal Prediction Structure for Multi-view Video Coding

Hyo-Sun Yoon, Miyoung Kim

Journal: Journal of Korea Multimedia Society   Year: 2012   Vol: 15 (9)   Pages: 1093-1101

JOURNAL ARTICLE

Efficient Inter-View Prediction Structure for Multi-View High Efficiency Video Coding

Tao Yan

Journal: International Journal of Performability Engineering   Year: 2019

JOURNAL ARTICLE

Efficient Multi-view Video Coding using Multi-view Depth Map

Shinya Shimizu, Hideaki Kimata, Yoshiyuki Yashima, Masayuki Tanimoto

Journal: The Journal of The Institute of Image Information and Television Engineers   Year: 2009   Vol: 63 (4)   Pages: 524-532