In statistical modelling, there is often genuine interest in learning the most reasonable, parsimonious, and interpretable model that fits the data. We turn our attention to the problem of variable selection in the context of ordinary linear regression. Model selection is a widely covered topic, so our focus is on the Bayesian approach, emphasising the selection of variables through inference on posterior model probabilities. The appeal of Bayesian methods is that they reduce the selection problem to one of estimation, rather than a true search of the variable space for the model that optimises a certain criterion. The I-prior (Bergsma, 2019) on the regression coefficients further enhances this appeal: as we show through simulation studies and several real-world examples, it is well suited to settings with multicollinearity. The goal of this chapter is to detail a simple, data-driven, Bayesian approach which works well for model selection purposes.
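To make the idea of selection via model probabilities concrete, the following is a minimal sketch, not the I-prior method of this chapter: it enumerates all subsets of predictors in a linear regression, approximates each model's marginal likelihood with BIC, and converts these to posterior model probabilities under a uniform model prior. The function names (`bic_linear`, `posterior_model_probs`) and the simulated data are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def bic_linear(X, y):
    """BIC of an OLS fit with intercept; columns of X are the predictors."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X]) if X.shape[1] else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    k = Xd.shape[1] + 1  # regression coefficients plus the error variance
    return n * np.log(rss / n) + k * np.log(n)

def posterior_model_probs(X, y):
    """Approximate posterior probabilities over all 2^p predictor subsets,
    using exp(-BIC/2) as the marginal likelihood and a uniform model prior."""
    p = X.shape[1]
    models, bics = [], []
    for r in range(p + 1):
        for subset in combinations(range(p), r):
            models.append(subset)
            bics.append(bic_linear(X[:, list(subset)], y))
    bics = np.array(bics)
    logpost = -0.5 * (bics - bics.min())  # shift for numerical stability
    probs = np.exp(logpost)
    probs /= probs.sum()
    return dict(zip(models, probs))

# Illustrative data: y depends on the first predictor only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + rng.normal(size=200)
probs = posterior_model_probs(X, y)
best = max(probs, key=probs.get)
```

Selection is thus recast as estimation: rather than optimising a criterion over a search path, one reads off the model with the highest posterior probability (or averages over models weighted by these probabilities).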