In computer vision, silhouette extraction plays an important role. Many applications need to extract people or objects, e.g. video surveillance, object tracking, human or object detection, 3D reconstruction, and mixed reality applications. Since silhouette extraction is often only a small part of these applications, it must be performed as fast as possible, at least in real-time applications, leaving valuable execution time for the rest of the application.
General-purpose computing on graphics processing units (GPGPU) has become increasingly popular in recent years. With development frameworks like CUDA and OpenCL, the parallel processing power of GPUs has never been easier to utilize. This makes it possible to develop real-time applications that process high-resolution video.
In this thesis we explore and implement a silhouette extraction algorithm on graphics processing units using OpenCL for the best possible performance. We evaluate and measure different GPU optimizations to improve runtime, and show that with these optimizations the GPU implementation achieves high performance compared to a CPU implementation.