In this thesis, we present a study of English noun–noun compound analysis that takes a holistic perspective on the problem. The holistic nature of our work manifests itself in three respects. First, of the five compound analysis tasks, we focus primarily on compound interpretation and identification, but we also create a resource for compound bracketing and reflect on (the need for) compound sense disambiguation in our work. Second, we part company with past natural language processing (NLP) studies on compound analysis and resituate the problem within general-purpose whole-sentence meaning representation frameworks. Specifically, we introduce a new approach (and a new resource) that derives the semantic interpretation of noun–noun compounds from linguistic resources that represent the semantics of phrasal or sentential structures (viz. NomBank and PCEDT), in contradistinction to the more isolated, compound-centric perspective of much past work. Third, we empirically determine the utility of distributional semantic models and of neural networks for classifying the relations holding between compound constituents. Our experimental setup is systematically varied to account for different properties of the models we use, both in isolation and in combination.

Overall, this thesis stands at the intersection of many recent, and not-so-recent, developments in NLP. In addition to investigating several properties of word embeddings and neural networks, we also experiment with transfer learning and multi-task learning for compound interpretation, two machine learning techniques that have recently attracted considerable attention in NLP research. Further, we use manually annotated, well-established meaning representation resources to shed new light on noun–noun compounds, and we call for critical reflection on some of the long-held views pertaining to their analysis in NLP.
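To make the classification setting concrete, the sketch below frames compound interpretation as supervised relation classification over distributional representations of the two constituents: each compound is represented by the concatenation of its modifier and head vectors, and a softmax classifier is trained over a relation inventory. Everything here is invented for illustration; the toy embeddings, the three-way label set (SOURCE, LOCATION, MATERIAL), and the training compounds are assumptions, not the models, relation inventories (NomBank, PCEDT), or data actually used in the thesis.

```python
import numpy as np

# Toy, hand-crafted 4-dimensional constituent embeddings (purely illustrative;
# real experiments would use pretrained distributional word embeddings).
EMB = {
    "olive":   [1.0, 0.1, 0.0, 0.0],
    "oil":     [0.9, 0.2, 0.1, 0.0],
    "kitchen": [0.0, 1.0, 0.1, 0.0],
    "table":   [0.1, 0.9, 0.0, 0.1],
    "steel":   [0.0, 0.0, 1.0, 0.2],
    "knife":   [0.1, 0.0, 0.9, 0.1],
}

RELATIONS = ["SOURCE", "LOCATION", "MATERIAL"]  # hypothetical label inventory

# Training compounds as (modifier, head, relation) triples.
DATA = [
    ("olive", "oil", "SOURCE"),
    ("kitchen", "table", "LOCATION"),
    ("steel", "knife", "MATERIAL"),
]

def featurize(mod, head):
    # Represent a compound by concatenating its constituents' vectors.
    return np.array(EMB[mod] + EMB[head])

X = np.stack([featurize(m, h) for m, h, _ in DATA])
y = np.array([RELATIONS.index(r) for _, _, r in DATA])

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(X.shape[1], len(RELATIONS)))
b = np.zeros(len(RELATIONS))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# A few steps of batch gradient descent on the cross-entropy loss.
for _ in range(200):
    p = softmax(X @ W + b)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0        # dL/dlogits for cross-entropy
    W -= 0.5 * X.T @ grad / len(y)
    b -= 0.5 * grad.mean(axis=0)

def predict(mod, head):
    scores = featurize(mod, head) @ W + b
    return RELATIONS[int(np.argmax(scores))]
```

A neural-network variant would simply insert one or more hidden layers between the concatenated input and the softmax output; the featurization and training objective stay the same.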