Abstract
Targeted Sentiment Analysis attempts to extract sentiment targets and the sentiment polarity towards these targets, as explicitly expressed in text. It is a difficult task: a single sentence may contain multiple sentiment targets, and there may be conflicting sentiments towards the same target. Sentiment may also be expressed through nuances and combinations of words at different positions in the sentence. State-of-the-art models for Targeted Sentiment Analysis therefore require large amounts of data. In this thesis we explore approaches to Targeted Sentiment Analysis in scenarios where a) we have a large annotated dataset, b) we have a very limited amount of annotated data, and c) we have no annotated data for the target language and domain. Given a large monolingual dataset, we provide a state-of-the-art model based on the multilingual BERT (M-BERT) pretrained language model. Given more limited data, we show how bilingual training data allows for noteworthy improvements over monolingual training. Given a scenario with no labeled data for the target domain and language, we demonstrate the cross-lingual performance of M-BERT for the Norwegian and English language pair. We isolate and compare the effects of domain and language differences, and demonstrate machine translation of text as a viable option for Targeted Sentiment Analysis.