A variety of model selection criteria have been developed, both general and specific. Most of these aim at selecting a single model with good overall properties, formulated for example via average prediction quality or via the shortest estimated overall distance to the true model, in a suitable sense. The Akaike, Bayesian and deviance information criteria (AIC, BIC, DIC), along with many variations, are prominent examples of such methods and are in frequent use. These methods are, however, not concerned with the actual use of the selected model, which varies with context and application.
The present paper takes the view that the model selector should instead focus on the parameter singled out for interest; in particular, a model which gives good precision for one estimand may be worse when used for inference about another. We develop a method which, for a given focus parameter, estimates the precision of any submodel-based estimator. The framework is that of large-sample likelihood inference. Using an unbiased estimate of limiting risk, we propose a focused information criterion for model selection, the FIC. We investigate and discuss properties of the method, establish some connections to the AIC, and illustrate its use in a variety of situations.
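The point that the best submodel depends on the estimand of interest can be seen in a small simulation, sketched below. This is a hypothetical toy illustration, not the paper's FIC formula: two nested regression submodels (a narrow one omitting a covariate `z2`, and the full one) are compared by Monte Carlo mean squared error for two different focus parameters, the coefficients `b1` and `b2`. All numerical values are assumptions chosen for the illustration.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's example): the submodel
# with smallest mean squared error depends on which parameter is the focus.
rng = np.random.default_rng(0)
n, reps = 50, 1000
sigma = 1.0
b0, b1, b2 = 0.0, 1.0, 0.4          # assumed "true" values for the simulation
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)             # roughly orthogonal to z1 by construction

X_full = np.column_stack([np.ones(n), z1, z2])    # intercept + z1 + z2
X_narrow = np.column_stack([np.ones(n), z1])      # intercept + z1 only

err = {("narrow", "b1"): [], ("full", "b1"): [],
       ("narrow", "b2"): [], ("full", "b2"): []}
for _ in range(reps):
    y = b0 + b1 * z1 + b2 * z2 + rng.normal(0.0, sigma, n)
    cf = np.linalg.lstsq(X_full, y, rcond=None)[0]
    cn = np.linalg.lstsq(X_narrow, y, rcond=None)[0]
    err[("full", "b1")].append(cf[1] - b1)
    err[("narrow", "b1")].append(cn[1] - b1)
    err[("full", "b2")].append(cf[2] - b2)
    err[("narrow", "b2")].append(0.0 - b2)  # narrow model implicitly sets b2 = 0

# Monte Carlo mean squared error of each submodel estimator, per focus.
mse = {k: float(np.mean(np.square(v))) for k, v in err.items()}
print(mse)
```

With these assumed values, both submodels estimate the focus `b1` about equally well (the narrow model carries only a small omitted-variable bias), while for the focus `b2` the narrow model's implied estimate of zero incurs squared bias `b2**2 = 0.16`, far exceeding the full model's variance. A single overall criterion cannot register this distinction; a focused one can.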