In traditional video, the viewpoint of the camera is determined at capture time. Free-viewpoint video uses multiple cameras to capture the scene; however, the viewpoint is not restricted to those of the existing cameras: the user can freely select virtual viewpoints in between cameras, where no physical camera is present. The goal of free-viewpoint video is to render realistic images of captured real-world dynamic scenes from arbitrary viewpoints.
In this thesis we investigate a pipeline for high-quality free-viewpoint video. Our contributions are threefold: (1) we implement the modules that comprise a free-viewpoint video pipeline, namely camera calibration, depth estimation and free-viewpoint rendering; (2) we analyse the quality and robustness of the implemented modules, perform measurements using objective quality metrics and assess the implementation's suitability for achieving real-time performance; (3) we present variations of the rendering algorithms that improve the quality of the rendered viewpoints, including a novel cross-checking constraint that removes artefacts while preserving "correct" pixels in the rendered view.
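To give a flavour of what a cross-checking constraint looks like in this setting, the following is a minimal sketch of a classic left-right consistency check on disparity maps, a common form of cross-checking in depth estimation. This is an illustrative example only; the thesis's own constraint operates on rendered views and may differ in detail. All function and parameter names here are hypothetical.

```python
import numpy as np

def cross_check(disp_left, disp_right, max_diff=1.0):
    """Left-right disparity consistency check (illustrative sketch).

    A pixel in the left view with disparity d should map to a pixel
    in the right view whose disparity agrees with d; pixels that fail
    this test (typically occlusions or mismatches) are marked invalid
    so they do not produce artefacts in the rendered view.
    """
    h, w = disp_left.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = int(round(x - d))  # matched column in the right view
            if 0 <= xr < w and abs(d - disp_right[y, xr]) <= max_diff:
                valid[y, x] = True
    return valid
```

Pixels rejected by the check can then be discarded or in-painted from another camera, removing erroneous depth values while leaving consistent ("correct") pixels untouched.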