Model Regularization – The Bayesian Way

  • Expert Thomas Alcock
  • Date 15. July 2020
  • Topic Data Science, Statistics & Methods
  • Format Blog
  • Category Technology

Introduction

Sometimes impromptu discussions about statistical methods happen here at STATWORX. In one such discussion, a colleague of mine jokingly declared Bayesian statistics unnecessary. This made me ask myself: Why would I ever use Bayesian models in the context of a standard regression problem? Surely, existing approaches such as Ridge Regression are just as good, if not better. However, the Bayesian approach has the advantage that it lets you regularize your model to prevent overfitting **and** meaningfully interpret the regularization parameters. Contrary to the usual way of looking at ridge regression, the regularization parameters are no longer abstract numbers but can be interpreted through the Bayesian paradigm as quantities derived from prior beliefs. In this post, I’ll show you the formal similarity between a generalized ridge estimator and its Bayesian equivalent.

A (very brief) primer on Bayesian Stats

To understand the Bayesian regression estimator, a minimal amount of knowledge about Bayesian statistics is necessary, so here’s what you need to know (if you don’t already): In Bayesian statistics we think about model parameters (i.e. regression coefficients) probabilistically. In other words, the data given to us are fixed, and the parameters are considered random. This runs counter to the standard, frequentist perspective, in which the underlying model parameters are treated as fixed, while the data are considered random realizations of the stochastic process driven by those fixed model parameters. The end goal of Bayesian analysis is to find the posterior distribution, which you may remember from Bayes’ rule:

    \[p(\theta|y) = \frac{p(y|\theta) p(\theta)}{p(y)}\]


While p(y|\theta) is our likelihood and p(y) is a normalizing constant, p(\theta) is our prior, which does not depend on the data, y. In classical statistics, p(\theta) is set to 1 (an improper reference prior), so that when the posterior ‘probability’ is maximized, really just the likelihood is maximized, because it’s the only part that still depends on \theta. However, in Bayesian statistics we use an actual probability distribution in place of p(\theta), a Normal distribution for example. So let’s consider the case of a regression problem, and we’ll assume that our target, y, **and** our prior follow Normal distributions. This leads us to **conjugate** Bayesian analysis, in which we can neatly write down an equation for the posterior distribution. In many cases this is actually not possible, and for this reason Markov Chain Monte Carlo methods were invented to sample from the posterior – taking a frequentist approach, ironically.

We’ll make the usual assumption about the data: y_i is i.i.d. N(\bold{x_i \beta}, \sigma^2) for all observations i. This gives us our standard likelihood for the Normal distribution. Now we can specify the priors for the parameters we’re trying to estimate, (\beta, \sigma^2). If we choose a Normal prior (conditional on the variance, \sigma^2) for the vector of weights \beta, i.e. N(b_0, \sigma^2 B_0), and an inverse-Gamma prior over the variance parameter, it can be shown that the posterior distribution of \beta is Normally distributed with mean

    \[\hat\beta_{Bayesian} = (B_0^{-1} + X'X)^{-1}(B_0^{-1} b_0 + X'X \hat\beta)\]


If you’re interested in a proof of this result check out Jackman (2009, p.526).

Let’s look at it piece by piece:

- \hat\beta is our standard OLS estimator, (X'X)^{-1}X'y.
- b_0 is the mean vector of the (multivariate Normal) prior distribution; it lets us specify what we think the average values of each of our model parameters are.
- B_0 is the covariance matrix of the prior and contains our respective uncertainties about the model parameters. The inverse of the variance is called the **precision**.

What we can see from the equation is that the mean of our posterior is a **precision weighted average** of our prior mean (information not based on data) and the OLS estimator (based solely on the data). The second set of parentheses shows that we take the precision weighted prior mean, B_0^{-1} b_0, and add it to the weighted OLS estimator, X'X\hat\beta. Imagine for a moment that B_0^{-1} = 0. Then

    \[\hat\beta_{Bayesian} = (X'X)^{-1}(X'X \hat\beta) = \hat\beta\]


This would mean that we are **infinitely** uncertain about our prior beliefs, so the mean vector of our prior distribution would vanish from the expression, contributing nothing to our posterior! Likewise, if our uncertainty decreases (and the precision thus increases), the prior mean, b_0, contributes more to the posterior mean.
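
To make the formula tangible, here is a minimal R sketch that computes the posterior mean exactly as written above; the design matrix `X`, response `y`, prior mean `b0` and prior covariance `B0` are placeholders you would supply yourself:

```r
# A minimal sketch: posterior mean of beta under the conjugate
# Normal prior N(b0, sigma^2 * B0), computed directly from the formula above.
bayes_posterior_mean <- function(X, y, b0, B0) {
  XtX      <- crossprod(X)                   # X'X
  beta_ols <- solve(XtX, crossprod(X, y))    # (X'X)^-1 X'y, the OLS estimator
  B0_inv   <- solve(B0)                      # prior precision matrix
  solve(B0_inv + XtX, B0_inv %*% b0 + XtX %*% beta_ols)
}
```

Passing a `B0` with huge diagonal entries (i.e. a precision close to zero) returns essentially the OLS estimator, mirroring the thought experiment above.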

After this short primer on Bayesian statistics, we can now formally compare the Ridge estimator with the above Bayesian estimator. But first, we need to take a look at a more general version of the Ridge estimator.

Generalizing the Ridge estimator

A standard tool in many regression problems, the Ridge estimator is derived by minimizing the following penalized least squares loss function:

    \[L(\beta,\lambda) = \frac{1}{2}\sum(y-X\beta)^2 + \frac{1}{2} \lambda ||\beta||^2\]


While minimizing this gives us the standard Ridge estimator you have probably seen in textbooks on the subject, there’s a slightly more general version of this loss function:

    \[L(\beta,\lambda,\mu) = \frac{1}{2}\sum(y-X\beta)^2 + \frac{1}{2} \lambda ||\beta - \mu||^2\]


Let’s derive the estimator by first re-writing the loss function in terms of matrices:

    \[\begin{aligned}L(\beta,\lambda,\mu) &= \frac{1}{2}(y - X \beta)^{T}(y - X \beta) + \frac{1}{2} \lambda||\beta - \mu||^2 \\&= \frac{1}{2} y^Ty - \beta^T X^T y + \frac{1}{2} \beta^T X^T X \beta + \frac{1}{2} \lambda||\beta - \mu||^2\end{aligned}\]


Differentiating with respect to the parameter vector, we end up with this expression for the gradient:

    \[\nabla_{\beta} L (\beta, \lambda, \mu) = -X^T y + X^T X \beta + \lambda (\beta - \mu)\]
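
Setting this gradient to zero yields the first-order condition

    \[-X^T y + X^T X \beta + \lambda (\beta - \mu) = 0 \quad \Longleftrightarrow \quad (X^T X + \lambda I) \beta = X^T y + \lambda \mu\]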


Solving this first-order condition for \beta, we get the generalized ridge estimator:

    \[\hat\beta_{Ridge} = (X'X + \lambda I )^{-1}(\lambda \mu + X'y)\]


The standard Ridge estimator can be recovered by setting \mu=0. Usually we regard \lambda as an abstract parameter that regulates the penalty size, and \mu as a vector of values (one for each predictor) that increases the loss the further the coefficients deviate from them. When \mu=0, the coefficients are simply pulled towards zero.
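
To see what this looks like in practice, here is a short R sketch of the generalized ridge estimator, written straight from the formula above (the function name and its defaults are mine, not from the original analysis):

```r
# Generalized ridge estimator: (X'X + lambda * I)^-1 (lambda * mu + X'y).
# Setting mu to a zero vector recovers the standard ridge estimator.
generalized_ridge <- function(X, y, lambda, mu = rep(0, ncol(X))) {
  solve(crossprod(X) + lambda * diag(ncol(X)), lambda * mu + crossprod(X, y))
}
```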

Let’s take a look at how the estimator behaves when the parameters \mu and \lambda change. We’ll define a meaningful ‘prior’ for our example and then vary the penalty parameter. As an example, we’ll use the `diamonds` dataset from the `ggplot2` package and model the price as a linear function of the number of carats in each diamond and the depth, table, x, y and z attributes.
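
The exact grid of penalties and the ‘prior’ values behind the plots are not reproduced in the post, but a sketch of the setup could look like this, reusing the `generalized_ridge()` helper from above (the values of `mu` and the lambda grid are illustrative placeholders):

```r
library(ggplot2)  # provides the diamonds dataset

# Design matrix and target
X <- as.matrix(diamonds[, c("carat", "depth", "table", "x", "y", "z")])
y <- diamonds$price

# An illustrative 'prior' vector and a grid of penalty sizes
mu      <- c(5000, 0, 0, 0, 0, 0)   # e.g. a strong belief about the carat effect
lambdas <- 10^seq(0, 6, length.out = 50)

# Coefficient paths with and without the 'prior'
paths_prior <- sapply(lambdas, function(l) generalized_ridge(X, y, l, mu))
paths_zero  <- sapply(lambdas, function(l) generalized_ridge(X, y, l))
```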

![penalty_plots: coefficient paths as the penalty size increases, with and without a ‘prior’](penalty_plots.tiff)

As we can see from the plot, both with and without a prior the coefficient estimates change rapidly for the first few increases in the penalty size. We also see the ‘shrinkage’ effect in the upper plot: as the penalty increases, the coefficients tend towards zero, some faster than others. The plot on the right shows how the coefficients change when we set a sensible ‘prior’. The coefficients still change, but they now tend towards the ‘prior’ we specified. This is because \lambda penalizes deviations from \mu, which means that larger values for the penalty pull the coefficients towards \mu. You might be asking yourself how this compares to the Bayesian estimator. Let’s find out!

Comparing the Ridge and Bayesian Estimator

Now that we’ve seen both the Ridge and the Bayesian estimators, it’s time to compare them. We discovered that the Bayesian estimator contains the OLS estimator. Since we know its form, let’s substitute it and see what happens:

    \[\begin{aligned}\hat\beta_{Bayesian} &= (X'X + B_0^{-1})^{-1}(B_0^{-1} b_0 + X'X \hat\beta) \\&= (X'X + B_0^{-1})^{-1}(B_0^{-1} b_0 + X'X (X'X)^{-1}X'y) \\&= (X'X + B_0^{-1})^{-1}(B_0^{-1} b_0 + X'y)\end{aligned}\]

 
This form makes the analogy much clearer:

- \lambda I corresponds to B_0^{-1}, the matrix of precisions. In other words, since I is the identity matrix, the ridge estimator assumes no covariances between the regression coefficients and a constant precision across all coefficients (recall that \lambda is a scalar).
- \lambda \mu corresponds to B_0^{-1} b_0, which makes sense, since the vector b_0 is the mean of our prior distribution; it pulls the estimator towards it, just as the penalty pulls the coefficients towards \mu. This ‘pull’ depends on the uncertainty captured by B_0^{-1}, or \lambda I in the ridge estimator.

That’s all well and good, but let’s see how changing the uncertainty in the Bayesian case compares to the behaviour of the ridge estimator. Using the same data and the same model specification as above, we’ll set the covariance matrix B_0 equal to \lambda I and then vary \lambda. Remember, smaller values of \lambda now imply a greater contribution of the prior (less uncertainty), and increasing them therefore makes the prior less important.
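
In code, this simply means plugging B_0 = \lambda I into the posterior mean formula from the primer; a sketch, again with an illustrative prior mean vector `b0` and reusing `bayes_posterior_mean()` from above:

```r
# Posterior means for a grid of lambda values with B0 = lambda * I.
# Small lambda -> high precision, prior dominates; large lambda -> diffuse prior.
b0          <- rep(0, ncol(X))   # prior mean vector, here all zeros
lambdas     <- 10^seq(-6, 2, length.out = 50)
bayes_paths <- sapply(lambdas, function(l) {
  bayes_posterior_mean(X, y, b0, B0 = l * diag(ncol(X)))
})
```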

![bayes_penalty_plots: posterior mean coefficients as the prior precision changes, for a zero and a non-zero prior mean](bayes_penalty_plots.tiff)

The above plots match our understanding so far: with a prior mean of zeros, the coefficients are shrunken towards zero when the prior dominates, i.e. when the precision is high, just as in the ridge regression case. And when a non-zero prior mean is set, the coefficients tend towards it as the precision increases. So much for the coefficients, but what about the performance? Let’s have a look!

Performance comparison

Lastly, we’ll compare the predictive performance of the two models. Although we could treat the parameters in the model as hyperparameters to be tuned, this would defeat the purpose of using prior knowledge. Instead, let’s choose a prior specification for both models and then compare the performance on a hold-out set (30% of the data). While we can use the simple X\hat\beta as our predictor for the Ridge model, the Bayesian model provides us with a full posterior predictive distribution which we can sample from to get model predictions. To estimate the model I used the `brms` package.
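
The actual code is on our GitHub page; as a rough sketch of what such a comparison could look like in R (the prior, the split and the seed are assumptions for illustration, not necessarily the exact choices behind the table):

```r
library(brms)

# 70/30 train / hold-out split
set.seed(42)
idx   <- sample(nrow(diamonds), floor(0.7 * nrow(diamonds)))
train <- diamonds[idx, ]
test  <- diamonds[-idx, ]

# Bayesian linear model with a Normal prior on the regression coefficients
fit_bayes <- brm(
  price ~ carat + depth + table + x + y + z,
  data  = train,
  prior = set_prior("normal(0, 1000)", class = "b")
)

# Mean of the posterior predictive distribution on the hold-out set
pred_bayes <- predict(fit_bayes, newdata = test)[, "Estimate"]

# Error metrics for the Bayesian model
rmse <- sqrt(mean((test$price - pred_bayes)^2))
mae  <- mean(abs(test$price - pred_bayes))
mape <- 100 * mean(abs((test$price - pred_bayes) / test$price))
```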

| Model                 |    RMSE |     MAE | MAPE (%) |
| :-------------------- | ------: | ------: | -------: |
| Bayesian Linear Model | 1625.38 | 1091.36 |    44.15 |
| Ridge Estimator       | 1756.01 | 1173.50 |    43.44 |

Overall both models perform similarly, although some error metrics slightly favor one model over the other. Judging by these errors, we could certainly improve our models by specifying a more appropriate probability distribution for our target variable. After all, prices cannot be negative, yet our models can and do produce negative predictions.
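
For instance, one option in `brms` would be to swap the Gaussian likelihood for a strictly positive one; a one-line change, shown here as a suggestion rather than as part of the original analysis (it continues the sketch above and reuses `train`):

```r
# Model log-normally distributed prices so that predictions are always positive
fit_lognormal <- brm(
  price ~ carat + depth + table + x + y + z,
  data   = train,
  family = lognormal()
)
```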

Recap

In this post, I’ve shown you how the ridge estimator compares to the Bayesian conjugate linear model. Now you understand the connection between the two models and how a Bayesian approach can provide a more readily interpretable way of regularizing your model. Normally \lambda would be considered a penalty size, but now it can be interpreted as a measure of prior uncertainty. Similarly, the parameter vector \mu in the extended ridge model can be seen as a vector of prior means for our model parameters. Going further with the Bayesian approach, we can also use prior distributions to incorporate expert knowledge into the estimation process. This regularizes the model and allows external information to be included in it. If you are interested in the code, check it out at our [GitHub page](https://github.com/STATWORX/blog/tree/master/bayesian_regularization)!

References

– Jackman S. 2009. Bayesian Analysis for the Social Sciences. West Sussex: Wiley.
