Autonomous manufacturing and assembly require precise estimation of the pose (position and orientation) of the object to be handled. If geometric information about the object is available, such as a CAD model, these estimates can be obtained by fitting the model to captured images of the object. This is typically done by mapping robust, characteristic features visible on the object to its geometry; by matching these features in images of the object, the pose can be estimated. Because image features rely on texture and colour, they are difficult to use on uniformly coloured objects and on objects made of non-Lambertian materials. To circumvent this, Directional Chamfer Matching (DCM) can be used to compare the edges of a model to edges detected in images, so that the object’s pose can be estimated from geometric information alone.

This thesis describes an implementation of a simulated robot system in which a manipulator must obtain the location of an object using camera information. The system features a full pose estimation procedure in which DCM is combined with a Levenberg-Marquardt optimisation algorithm. In addition, the manipulator is equipped with two cameras, a task controller and a motion planner to facilitate capturing images from multiple views.

The robustness of the pose estimation procedure is compared to that of a similar procedure based on the Iterative Closest Point algorithm. Different strategies for generating template databases are also explored, and the effect of camera distance on estimate quality is measured. Both one-image and two-image estimation are tested. The system experiments show that the mean and standard deviation of the estimation errors are below 3 mm for position and below 0.05 radians for orientation.
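To make the idea behind directional chamfer matching concrete, the following is a minimal brute-force sketch of the cost it evaluates: each model edge point is matched to the nearest image edge point, with an extra penalty for mismatched edge orientations. The function name, the brute-force search, and the weight `lam` are illustrative assumptions, not the thesis implementation (which would typically use a precomputed directional distance transform for speed).

```python
import numpy as np

def dcm_cost(model_pts, model_dirs, image_pts, image_dirs, lam=1.0):
    """Directional chamfer cost (illustrative sketch).

    model_pts, image_pts: (N, 2) arrays of 2D edge-point coordinates.
    model_dirs, image_dirs: (N,) arrays of edge orientations in radians.
    lam: weight trading off orientation error against distance (assumed).
    """
    total = 0.0
    for p, d in zip(model_pts, model_dirs):
        # Euclidean distance from this model edge point to every image edge point
        dists = np.linalg.norm(image_pts - p, axis=1)
        # Orientation difference, wrapped to [0, pi/2] since edges are undirected
        ang = np.abs(image_dirs - d) % np.pi
        ang = np.minimum(ang, np.pi - ang)
        # Jointly minimise distance plus weighted orientation mismatch
        total += np.min(dists + lam * ang)
    return total / len(model_pts)
```

A pose estimator would project the CAD model's edges under a candidate pose, evaluate this cost against the detected image edges, and let an optimiser such as Levenberg-Marquardt drive the cost toward zero; identical edge sets yield a cost of exactly zero.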