The traditional use of model selection methods in practice is to proceed as if the final selected model had been chosen in advance, without acknowledging the additional uncertainty introduced by model selection. This often leads to underreporting of variability and overly optimistic confidence intervals. We build a general large-sample likelihood apparatus in which limiting distributions and risk properties of estimators post-selection, as well as of model average estimators, are precisely described, explicitly taking modelling bias into account. This allows a drastic reduction of complexity, as competing model averaging schemes may be developed, discussed and compared inside a statistical prototype experiment where only a few crucial quantities matter. In particular, we offer a frequentist view on Bayesian model averaging methods and establish a link to generalised ridge estimators. Our work also leads to new model selection criteria. The methods are illustrated with real-data applications.
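For orientation, a model average estimator of a focus parameter mu has the generic form below (a standard display of the idea, not a formula quoted from this work): each candidate model S supplies its own estimate, and data-driven weights combine them, with model selection recovered as the special case of an indicator weight on a single chosen model.

\[
  \hat\mu \;=\; \sum_{S} c(S \mid \text{data})\, \hat\mu_S ,
  \qquad \sum_{S} c(S \mid \text{data}) = 1 .
\]

The following is a minimal computational sketch of this averaging idea, not the apparatus developed here: nested linear models are fitted by least squares and combined with smoothed-AIC weights w(S) proportional to exp(-AIC_S/2). The simulated data, the choice of candidate submodels and the smoothed-AIC weighting scheme are all illustrative assumptions.

    # Minimal sketch (illustrative, not the paper's scheme): average a focus
    # parameter over nested linear models using smoothed-AIC weights.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 3))            # three candidate covariates
    beta_true = np.array([1.0, 0.5, 0.0])  # third effect is truly absent
    y = X @ beta_true + rng.normal(size=n)

    def fit_ols(Xs, y):
        """Least-squares fit; returns coefficients and Gaussian log-likelihood."""
        coef, _, _, _ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ coef
        sigma2 = resid @ resid / len(y)
        loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
        return coef, loglik

    # Candidate models: covariate 0 is always included; 1 and 2 are optional.
    submodels = [(0,), (0, 1), (0, 2), (0, 1, 2)]
    aics, mu_hats = [], []
    for S in submodels:
        coef, loglik = fit_ols(X[:, S], y)
        aics.append(2 * len(S) - 2 * loglik)
        # Focus parameter: the coefficient of covariate 0 in each submodel.
        mu_hats.append(coef[S.index(0)])

    # Smoothed-AIC weights w(S) ~ exp(-AIC_S / 2), normalised to sum to one.
    aics = np.array(aics)
    weights = np.exp(-0.5 * (aics - aics.min()))
    weights /= weights.sum()

    # Model average estimator: weighted combination of per-model estimates.
    mu_avg = float(weights @ np.array(mu_hats))
    print(dict(zip(submodels, np.round(weights, 3))), round(mu_avg, 3))

The point of the sketch is only the structure: every model averaging scheme of this form is determined by its weight function c(S | data), which is why the limiting behaviour of such estimators can be studied through a small number of crucial quantities.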