Mutation Testing (MT) is a technique for evaluating how well software is tested. MT makes small changes to the software, and the goal is to see whether the current test cases are able to distinguish the mutants from the original software. If mutants are not distinguished, it is likely that the software was not tested well enough. However, apart from trivial software, making changes to software might have dangerous side effects on the host where the test cases are executed. For example, a program that manipulates files might end up deleting or overwriting important files in the file system if such a program is arbitrarily mutated with MT. For programs written in Java, it is possible to execute MT in a sandbox to avoid these types of problems. But how often do such problems happen in practice? What is the overhead of using such a sandbox? Are there ways to improve MT to reduce the negative impact of these side effects? In this thesis, we investigate whether and how often mutants cause undesirable side effects. We carried out MT sessions for ten different large real-world projects downloaded from SourceForge, and wrote tools to analyze the results and run MT in a sandbox. The data from these experiments are used to study several correlations among the factors that affect MT applied to real-world software where unwanted side effects of the testing phase can be harmful. We identified some types of MT operators that have a higher probability of causing harmful side effects. These operators could be removed from MT analyses and tools.
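
To illustrate the kind of hazard discussed above, the following is a minimal sketch (not taken from the thesis) of a Java method that manipulates files, together with a hand-applied relational-operator mutant; the class and method names are hypothetical.

    import java.io.File;

    public class LogCleaner {

        // Original behavior: delete only log files older than the cutoff.
        static void cleanOldLogs(File dir, long cutoffMillis) {
            File[] files = dir.listFiles();
            if (files == null) return;
            for (File f : files) {
                if (f.lastModified() < cutoffMillis) {
                    f.delete();
                }
            }
        }

        // Mutant: a relational-operator mutation flips "<" into ">=".
        // A weak test suite may fail to kill this mutant, yet merely
        // executing it deletes the recent files instead of the old ones:
        // the kind of harmful side effect investigated in this thesis.
        static void cleanOldLogsMutant(File dir, long cutoffMillis) {
            File[] files = dir.listFiles();
            if (files == null) return;
            for (File f : files) {
                if (f.lastModified() >= cutoffMillis) {
                    f.delete();
                }
            }
        }
    }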
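
One way such mutants can be contained in Java is sketched below, using the standard java.lang.SecurityManager API (deprecated in recent Java releases); the class name is hypothetical, and this is only one possible sandboxing approach, not necessarily the one implemented by the thesis tools.

    // Minimal sketch of a sandbox that vetoes file deletions attempted
    // while test cases run against a mutant. A complete sandbox would
    // also guard other dangerous operations (writes, network, exec).
    public class MtSandbox extends SecurityManager {
        @Override
        public void checkDelete(String file) {
            // Abort the mutant's test run instead of losing the file.
            throw new SecurityException("mutant attempted to delete: " + file);
        }
    }

    // Installed once before executing the test suite on a mutant:
    //     System.setSecurityManager(new MtSandbox());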