Unwanted behaviour in the form of threats of violence is common in online discussions. Such behaviour can be found in, e.g., comment fields on Facebook, YouTube, and online newspapers. The increasing use of these kinds of discussion arenas generates a great deal of work for moderators, who in the worst case must manually go through comments and remove those containing undesired content. In this project, we use a corpus of YouTube comments. The task is to classify comments as containing violent threats or not. The comments in the corpus are manually annotated as "threat" or "non-threat". To address this task, we use deep learning techniques in combination with word embeddings. We have systematically explored the effects of a range of choices regarding architecture and parameterization. Our results show that threat detection using convolutional neural networks does not outperform earlier work on the same task.