Colon cancer accounts for almost 10% of all cancer cases worldwide and is the fourth most common cause of cancer death globally. Many of these cases could be prevented by early screening and removal of colon polyps, common precursors of colon cancer. In this respect, capsule endoscopy is a non-invasive screening method with the potential to significantly reduce both the cost of screening and the discomfort caused to patients by traditional endoscopy examinations. However, the financial cost of evaluating the recorded video footage, together with the limited availability of specialists, currently prevents the deployment of capsule endoscopy for mass screening.

With this work, we investigate solutions for automating the evaluation of capsule endoscopy video sequences using machine learning, image recognition, and the extraction of global image features. Rather than focusing on a single approach, we build tools that can be used to conduct further experiments with different methods and algorithms. We present the prototype of an integrated software solution that can be used for collecting videos from hospitals, annotating videos, tracking objects in video sequences, building training and testing datasets, training classifiers, and, eventually, testing and evaluating the generated classifiers.

We evaluate our software by training classifiers based on three different image recognition approaches. We also test the generated classifiers with different datasets and thereby assess the feasibility of each approach for recognizing colon polyps. Our main conclusion is that state-of-the-art image recognition methods, such as detectors based on Haar features or Histograms of Oriented Gradients, are not suitable for detecting lesions in the intestine because of the enormous variety of possible appearances and orientations of such lesions.
Global image features, such as the Joint Composite Descriptor, on the other hand lead to very promising results. Performing leave-one-out cross-validation with all 20 videos of the ASU-Mayo Clinic polyp database, our system achieves a weighted average precision of 93.9% and a weighted average recall of 98.5%.
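For clarity, the "weighted average" precision and recall reported above average the per-class scores weighted by each class's support, so the majority (non-polyp) class dominates on imbalanced videos. The following minimal Python sketch illustrates the computation; the per-frame labels are hypothetical examples, not values from the ASU-Mayo data:

```python
from collections import Counter

def weighted_scores(y_true, y_pred):
    """Support-weighted average precision and recall over all classes."""
    n_total = len(y_true)
    support = Counter(y_true)          # number of true instances per class
    precision = recall = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == cls)
        predicted = sum(1 for p in y_pred if p == cls)
        cls_precision = tp / predicted if predicted else 0.0
        weight = n / n_total           # class support as a fraction of frames
        precision += weight * cls_precision
        recall += weight * (tp / n)
    return precision, recall

# Hypothetical per-frame labels for one left-out video (1 = polyp, 0 = normal)
y_true = [1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1, 0, 1, 1]
p, r = weighted_scores(y_true, y_pred)   # → (0.75, 0.75)
```

In leave-one-out cross-validation, this computation is repeated once per video, with the remaining 19 videos used for training each time.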