In this thesis, we develop and evaluate procedures for converting between different lexical semantic representations. We implement three methods for converting from vector space models to graph models: the threshold method, the kNN method, and the variable-$k$ method. We also implement a single procedure for converting from graph models to word embedding models. We then evaluate our conversion procedures and their results comprehensively: we perform extensive intrinsic evaluation against several gold-standard data sets, as well as extrinsic evaluation of our converted graph models, showing that they perform well at two real-world tasks: word sense induction/disambiguation and hypernym discovery.

The use of word embeddings has become widespread, in large part because of the increasing use of deep learning methods. However, training high-quality word embeddings is time-consuming and requires large corpora. An increasing number of pre-trained word embedding models have therefore been made available, for instance by the Nordic Language Processing Laboratory. These pre-trained word embeddings are useful in many natural language processing tasks, but many other algorithms are graph-based. We show that such graph-based algorithms can nevertheless exploit high-quality word embeddings by converting word embedding models to graphs.
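To give a concrete flavour of the vector-to-graph direction, the following is a minimal sketch of a kNN-style conversion: each word becomes a node, linked to its $k$ most cosine-similar neighbours. The function name, the toy vocabulary, and the two-dimensional vectors are illustrative only and do not come from the thesis.

```python
import numpy as np

def knn_graph(vectors, words, k=2):
    """Build a k-nearest-neighbour graph from word vectors.

    Each word is linked to its k most cosine-similar neighbours.
    Returns the graph as an adjacency dict: word -> list of neighbours.
    """
    V = np.asarray(vectors, dtype=float)
    # Normalise rows so that dot products equal cosine similarities.
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    sims = V @ V.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-similarity
    graph = {}
    for i, word in enumerate(words):
        # Indices of the k highest-similarity neighbours of word i.
        neighbours = np.argsort(sims[i])[::-1][:k]
        graph[word] = [words[j] for j in neighbours]
    return graph

# Toy example: two animal words and two vehicle words.
words = ["cat", "dog", "car", "truck"]
vectors = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
print(knn_graph(vectors, words, k=1))
# → {'cat': ['dog'], 'dog': ['cat'], 'car': ['truck'], 'truck': ['car']}
```

A threshold-style conversion would instead connect every pair of words whose cosine similarity exceeds a fixed cutoff, so node degree varies with local density rather than being fixed at $k$.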