In this work we present a CRF-based sequence-labeling system for negation scope resolution that relies heavily on syntactic information extracted from dependency graphs. The negation models are trained on two corpora that differ both in text domain and in style of scope annotation, allowing us to present parallel, comparative results for system configurations that draw information from similar yet conceptually different sources. We evaluate the performance of the system on several levels. First, we assess the utility of syntactic features and of label sets of varying granularity, showing how the benefits of richer configurations vary across corpora. Second, we compare our best-performing configuration to similar systems, showing that our approach outperforms all previously reported CRF-based systems on the same corpora. Finally, we evaluate configurations based on the different negation models as components of a simple engine for Sentiment Analysis.
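To make the idea of dependency-based features for a token-level sequence labeler concrete, the sketch below computes, for each token, its distance to a negation cue in the dependency graph, alongside surface and POS features. This is an illustrative toy example, not the paper's implementation: the sentence, token fields, and feature names are all assumptions made for demonstration.

```python
# Illustrative sketch (NOT the system described in the paper): token-level
# feature extraction over a dependency graph for CRF-style negation scope
# labeling. All field and feature names here are hypothetical.

from collections import deque

# A toy dependency-parsed sentence: "I do not like it".
# Each token records its surface form, POS tag, and the index of its
# syntactic head (-1 marks the root).
SENTENCE = [
    {"form": "I",    "pos": "PRP", "head": 3},
    {"form": "do",   "pos": "VBP", "head": 3},
    {"form": "not",  "pos": "RB",  "head": 3},
    {"form": "like", "pos": "VB",  "head": -1},
    {"form": "it",   "pos": "PRP", "head": 3},
]
CUE_INDEX = 2  # "not" is the negation cue


def dependency_distances(tokens, start):
    """Breadth-first search over the dependency graph (edges treated as
    undirected), returning the graph distance from `start` to each token."""
    adj = {i: set() for i in range(len(tokens))}
    for i, tok in enumerate(tokens):
        if tok["head"] >= 0:
            adj[i].add(tok["head"])
            adj[tok["head"]].add(i)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist


def token_features(tokens, cue_index):
    """Build one feature dict per token, combining surface, POS, and
    syntactic (dependency-distance-to-cue) information, as could be fed
    to a CRF toolkit for scope labeling."""
    dist = dependency_distances(tokens, cue_index)
    feats = []
    for i, tok in enumerate(tokens):
        feats.append({
            "form": tok["form"].lower(),
            "pos": tok["pos"],
            "dist_to_cue": dist.get(i, -1),
            "is_cue": i == cue_index,
        })
    return feats


for f in token_features(SENTENCE, CUE_INDEX):
    print(f)
```

In a full system, feature dicts like these (extended with lemmas, dependency labels, paths to the cue, and so on) would be passed to a CRF implementation that learns in-scope versus out-of-scope tags per token.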