As the world's population ages, the demand for health services continues to increase, and robotics can be part of the solution to this problem. One area of medicine in particular, ultrasound, can benefit from robotics, especially for the doctors who would otherwise suffer long-term health implications. The primary purpose of this study is to construct a low-cost, practical system for use in the context of an ultrasound procedure. An equally important goal is to determine whether the proposed solution, which is based on what the patient feels, performs well compared to a more traditional Human-Computer Interaction (HCI) approach. In general, the functions introduced here can be used at the start of, during, and after an ultrasound diagnostic procedure. The system developed provides a simulation of the ultrasound robot and the patient, collision detection between them, and emotion detection from the patient's facial expressions. It consists of several computer vision-based methods running in real time. The experiments investigated the individual components, communication, collision detection, and emotion detection, before putting them all together and comparing the complete solution to the HCI approach. In the end, because of robot limitations, no verification of the system as a whole was completed.