To estimate the masses of the supermassive black holes found at the centres of active galactic nuclei (AGNs) from spectroscopic data, the contamination from starlight in the surrounding host galaxy must be determined and subtracted. I characterise the ability of a spectral decomposition code I have written, based on the Levenberg-Marquardt algorithm, to accurately recover the host galaxy contribution to optical spectra of AGNs. To do this, I create a catalogue of 43,200 synthetic AGN spectra with varying amounts of continuum emission, iron emission and host galaxy emission, effectively simulating different types of AGNs. I add different levels of noise to the spectra before they are modelled by the decomposition code, in order to study the effect of noise on the results. I find that the host galaxy contribution is accurately recovered in spectra where the host galaxy supplies a fraction of 0.75 of the observed flux (measured at 6000 Å). As this contribution becomes relatively weaker, it becomes increasingly difficult to determine accurately, and the host galaxy emission is then significantly overestimated. I conclude that the spectral decomposition method is most applicable to low-luminosity Seyfert galaxies, though its applicability ultimately depends on the accuracy and precision required by the user. Increasing the signal-to-noise ratio (S/N) from 5 to 50 does not affect the accuracy with which the host galaxy contribution is determined. However, owing to the increasing uncertainties and degeneracies between the different spectral components at low S/N, the spectral decomposition method used in this thesis should only be applied to spectra with S/N > 10 per pixel.
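The procedure summarised above can be sketched in miniature: build a synthetic spectrum as the sum of a continuum, an iron blend and a host galaxy component, add Gaussian noise at a chosen S/N, and fit the components back with Levenberg-Marquardt least squares. The templates, parameter values and functional forms below are toy assumptions for illustration only, not the actual templates or code used in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(42)
wave = np.linspace(4000.0, 7000.0, 600)  # wavelength grid in Angstrom

def continuum(wave, amp, slope):
    """Power-law AGN continuum, normalised at 6000 Angstrom."""
    return amp * (wave / 6000.0) ** slope

def iron_blend(wave, amp):
    """Crude stand-in for the iron emission: a broad Gaussian near 4600 A."""
    return amp * np.exp(-0.5 * ((wave - 4600.0) / 300.0) ** 2)

def host_galaxy(wave, amp):
    """Toy host template: flat stellar continuum with a Mg b absorption dip."""
    return amp * (1.0 - 0.3 * np.exp(-0.5 * ((wave - 5175.0) / 40.0) ** 2))

def model(params, wave):
    c_amp, c_slope, fe_amp, gal_amp = params
    return (continuum(wave, c_amp, c_slope)
            + iron_blend(wave, fe_amp)
            + host_galaxy(wave, gal_amp))

# One synthetic spectrum with a host fraction of ~0.75 at 6000 A,
# plus Gaussian noise corresponding to S/N ~ 25 per pixel.
true_params = np.array([1.0, -1.5, 0.3, 3.0])
clean = model(true_params, wave)
noise = clean / 25.0
spectrum = clean + rng.normal(0.0, noise)

# Levenberg-Marquardt fit of the component parameters.
def residuals(params):
    return (model(params, wave) - spectrum) / noise

fit = least_squares(residuals, x0=[0.5, -1.0, 0.1, 1.0], method="lm")

# Host galaxy fraction of the total flux at 6000 A, as measured in the thesis.
i6000 = np.argmin(np.abs(wave - 6000.0))
gal_frac = host_galaxy(wave, fit.x[3])[i6000] / model(fit.x, wave)[i6000]
print(f"recovered host fraction at 6000 A: {gal_frac:.2f}")
```

At high host fractions the fit recovers the input well; repeating this over a grid of component mixtures and noise levels is, in spirit, how the catalogue of synthetic spectra is used to map out where the decomposition remains reliable.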