Regression analysis is a widely used approach for modelling the relationship between a dependent variable and independent variables. When estimating the coefficients of a regression model, the least squares estimator is often used, as it has the smallest variance among linear unbiased estimators. When the goal is prediction, however, one may accept some bias in order to reduce the variance, which can further reduce the prediction error (the bias-variance tradeoff). A series of shrinkage methods have been developed to this end. This thesis contrasts the shrinkage effects of a selection of methods: three post-selection shrinkage approaches (global, parameterwise and joint shrinkage), the Lasso, and boosting. The prediction performance of these methods is compared in the case of linear effects only (case study 1) and in the case of a combination of linear and nonlinear effects (case study 2). In case 1, a simulation study compares the prediction performance of the different methods under four scenarios. The analysis shows that when the data contain little information (small sample size and large unexplainable variability), the mean squared prediction errors (MSPEs) of the models fitted by the Lasso and by boosting are smaller than that of the model fitted by least squares. The model with parameterwise shrinkage factors (PSF) predicts slightly better than the model with global shrinkage factors (GSF). Boosting with the number of iterations chosen by cross-validation produces a model with many variables, whereas with fewer iterations it tends to select only the relevant variables.
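The idea behind post-selection shrinkage can be sketched as follows. This is a minimal illustration, not the thesis code: a global shrinkage factor is estimated as the slope obtained when regressing the outcome on the cross-validated linear predictor of the fitted model; the simulated data, sample sizes, and variable names are all assumptions chosen for illustration.

```python
# Sketch: estimating a global shrinkage factor (GSF) by leave-one-out
# cross-validation. The intercept is omitted for simplicity; the data
# below are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 4                          # small sample, so shrinkage is expected to help
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=2.0, size=n)   # large unexplainable variability

def ols(X, y):
    """Least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Leave-one-out cross-validated linear predictor eta_cv
eta_cv = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    b = ols(X[mask], y[mask])
    eta_cv[i] = X[i] @ b

# GSF: slope of y regressed on the cross-validated linear predictor
c = float(eta_cv @ y / (eta_cv @ eta_cv))
beta_shrunk = c * ols(X, y)           # every coefficient shrunk by the same factor
print(f"global shrinkage factor c = {c:.3f}")
```

Parameterwise shrinkage generalizes this step: instead of a single slope, the outcome is regressed jointly on the componentwise cross-validated contributions, yielding one shrinkage factor per coefficient.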
In case 2, the effects of the independent variables are modelled by fractional polynomial (FP) functions, and the prediction performances of several methods are compared on an artificial (ART) dataset: multivariable model-building with FP (MFP), FP with shrinkage factors, boosting with FP base-learners, a combination of MFP and the Lasso (referred to as MFP-Lasso), and a combination of MFP and boosting (referred to as MFP-boosting). Only small differences in MSPE are found between GSF, PSF and joint shrinkage factors (JSF). The two novel approaches (MFP-boosting and MFP-Lasso) yield similar results. Boosting with the FP base-learners implemented in the R package mboost predicts worse than the other methods.
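For readers unfamiliar with fractional polynomials, the core selection step can be sketched as below. This is an illustrative assumption-laden sketch, not the thesis implementation: a first-degree FP (FP1) picks, from the standard FP power set, the single power transformation of a positive covariate that gives the smallest residual sum of squares; the simulated data and function names are invented for the example.

```python
# Sketch: selecting the best FP1 transformation of a positive covariate
# by least squares over the standard fractional polynomial power set.
import numpy as np

FP_POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # power 0 denotes log(x) by convention

def fp1(x, power):
    """FP1 transform of a positive covariate x."""
    return np.log(x) if power == 0 else x ** power

def best_fp1(x, y):
    """Return (power, rss) of the FP1 transform with the smallest RSS."""
    best = None
    for p in FP_POWERS:
        Z = np.column_stack([np.ones_like(x), fp1(x, p)])   # intercept + transform
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        rss = float(resid @ resid)
        if best is None or rss < best[1]:
            best = (p, rss)
    return best

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 5.0, size=200)
y = np.log(x) + rng.normal(scale=0.1, size=200)   # true nonlinear effect is log(x)
power, rss = best_fp1(x, y)
print(f"selected FP1 power: {power}")
```

MFP extends this one-variable step to a multivariable procedure that cycles over covariates, selecting for each one an FP function (or exclusion) while adjusting for the others.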