The recent surge in research towards general artificial intelligence has produced a wealth of promising techniques which, when applied to traditional robotics tasks, can give rise to new and interesting algorithms. This thesis proposes and implements a method of applying Reinforcement Learning (RL) to three-dimensional octree navigation. Octrees are spatial models used in robotics for navigation and collision avoidance. 3D navigation with octrees is applicable to tasks such as autonomous Search and Rescue quadcopter drones, or any other robotics task involving movement in three dimensions. Octree navigation has traditionally relied on hand-crafted pathfinding algorithms; in this work we present the first known application of RL to 3D navigation with octrees. The proposed method uses sampling-based observations and continuous action spaces, and applies Hindsight Experience Replay (HER), a data augmentation technique, to increase sample efficiency. Along with an implementation of the method, we design and implement a handful of simulated environments for evaluating performance on simple navigation tasks. The experiments indicate that the combination of sparse rewards and continuous observations is beneficial compared to alternative setups. However, the agents achieve low success rates when trained and evaluated on the navigation tasks, and further study is necessary to determine whether RL is a viable approach to 3D octree navigation. Regardless, this thesis can serve as a baseline for future research and shed light on a new potential application for machine learning.
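
The abstract names Hindsight Experience Replay as the data augmentation technique used to cope with sparse rewards. As an illustration only, a minimal sketch of the "future"-strategy goal relabelling that HER performs could look as follows; the transition dictionary format and the `reward_fn` interface are assumptions for this sketch, not the thesis's actual implementation:

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Augment an episode with HER 'future'-strategy relabelled transitions.

    episode: list of dicts with keys 'obs', 'action', 'achieved_goal', 'goal'.
    reward_fn(achieved_goal, goal): sparse reward, e.g. 0.0 on success, -1.0 otherwise.
    Returns the original transitions plus up to k relabelled copies per step,
    each pretending that a goal achieved later in the episode was the intended one.
    """
    augmented = []
    for t, tr in enumerate(episode):
        # Keep the original transition, rewarded against the true goal.
        augmented.append({**tr, "reward": reward_fn(tr["achieved_goal"], tr["goal"])})
        # Sample goals from states actually reached at this step or later.
        future = episode[t:]
        for _ in range(min(k, len(future))):
            new_goal = random.choice(future)["achieved_goal"]
            augmented.append({**tr, "goal": new_goal,
                              "reward": reward_fn(tr["achieved_goal"], new_goal)})
    return augmented
```

Because relabelled transitions frequently "succeed" at their substituted goal, the replay buffer contains far more non-zero learning signal than the sparse true-goal reward alone would provide, which is the sample-efficiency gain the abstract refers to.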