The field of medicine has a long history of adopting new technology. Video equipment and sensors are used to visualize areas of interest inside the human body, allowing doctors to make diagnoses based on the resulting images. However, a doctor's detection rate for diseases and abnormalities depends heavily on the experience and state of mind of the doctor performing the examination. Computer-aided detection systems are designed to help improve this detection rate, and many of them use or experiment with machine learning. Deep convolutional neural networks, a type of machine learning model, have been shown to be highly effective at image detection, classification, and analysis. However, these networks require large datasets to train properly. Transfer learning is a training technique in which we take a pre-trained machine learning model and transfer some of the knowledge attained in another application domain over to a new model. This way, we can train a model on a small dataset in much less time. Transfer learning works well in this respect, but it has many configuration parameters, called hyperparameters, which are often left unoptimized. Our work addresses the lack of automatic hyperparameter optimization for transfer learning by running experiments with a well-known hyperparameter optimization method and by building a system for running those experiments. First, we chose to focus on the field of gastroenterology, using two publicly available datasets of images from the gastrointestinal tract. Next, we applied a specific transfer learning method and selected hyperparameters suitable for automatic optimization. As the optimization method, we chose Bayesian optimization because of its reputation as one of the best methods for hyperparameter optimization. However, Bayesian optimization has hyperparameters of its own, and it also comes in several variants.
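The transfer-learning idea described above, freezing a backbone trained elsewhere and fitting only a small new classification head on the target data, can be illustrated with a minimal NumPy sketch. This is a conceptual toy, not the thesis's implementation: the random projection standing in for a pre-trained CNN backbone, the dataset sizes, and the learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: in a real setting this would be a deep CNN
# trained on a large dataset (e.g. ImageNet); here a fixed random projection
# stands in for its frozen layers.
W_pretrained = rng.normal(size=(64, 16))

def extract_features(x):
    # Frozen backbone: these weights are NOT updated during fine-tuning.
    return np.tanh(x @ W_pretrained)

# Small target dataset (think: a few labelled GI-tract images, flattened).
X = rng.normal(size=(200, 64))
true_w = rng.normal(size=16)
y = (extract_features(X) @ true_w > 0).astype(float)

# New classification head, trained from scratch on the small dataset.
w = np.zeros(16)
lr = 0.5  # learning rate: one typical hyperparameter to tune
feats = extract_features(X)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))   # sigmoid prediction
    w -= lr * feats.T @ (p - y) / len(y)     # logistic-regression gradient step

acc = ((feats @ w > 0) == (y > 0.5)).mean()
print(f"training accuracy of new head: {acc:.2f}")
```

In a real pipeline the frozen backbone would be a network pre-trained on a large image corpus, and choices such as the learning rate, how many layers to freeze, and how long to train are typical of the hyperparameters that such a setup exposes.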
To limit the scope of the thesis, we used standard Bayesian optimization with default parameters. We built a system for automatically running experiments with three different hyperparameter optimization strategies, and with this system we ran a set of experiments for each dataset. Of the three strategies, one succeeded in achieving a high validation accuracy, while the others were considered failures. Compared to the baselines, our best models were around 10% better. With these experiments, we demonstrated that automatic hyperparameter optimization is an effective strategy for increasing performance in transfer learning, and that the best hyperparameters are nontrivial to select manually.
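As a rough illustration of the optimization method, the following is a minimal Bayesian optimization loop in plain NumPy: a Gaussian-process surrogate with an RBF kernel and an expected-improvement acquisition function, minimizing a toy one-dimensional stand-in for validation error as a function of a single hyperparameter. The kernel length scale, the grid, and the toy objective are illustrative assumptions, not the thesis's configuration.

```python
import math
import numpy as np

def objective(lr):
    # Stand-in for an expensive training run: validation error as a
    # function of one hyperparameter (e.g. the learning rate).
    return (lr - 0.3) ** 2

def rbf(a, b, ls=0.15):
    # Squared-exponential kernel with prior variance 1.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP regression: posterior mean and variance at the query points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.einsum('ij,ji->i', Ks.T @ Kinv, Ks)
    return mu, np.maximum(var, 1e-12)

X = np.array([0.05, 0.95])               # two initial evaluations
y = np.array([objective(x) for x in X])
grid = np.linspace(0.0, 1.0, 201)        # candidate hyperparameter values

for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    sd = np.sqrt(var)
    best = y.min()
    # Expected improvement (for minimization) balances exploring
    # uncertain regions against exploiting the current best.
    z = (best - mu) / sd
    ei = (best - mu) * np.vectorize(norm_cdf)(z) + sd * np.vectorize(norm_pdf)(z)
    x_next = grid[np.argmax(ei)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print(f"best hyperparameter found: {X[np.argmin(y)]:.3f}")
```

Each loop iteration spends one "training run" where the surrogate model predicts the most improvement, which is why Bayesian optimization is attractive when every evaluation is a full transfer-learning training job.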