
Model Regularization – The Bayesian Way

  • Data Science
  • Statistics & Methods
15. July 2020

Thomas Alcock
Team AI Development

Introduction

Here at STATWORX, impromptu discussions about statistical methods happen from time to time. In one such discussion, one of my colleagues declared (albeit jokingly) that Bayesian statistics is unnecessary. This made me ask myself: Why would I ever use Bayesian models in the context of a standard regression problem? Surely, existing approaches such as Ridge Regression are just as good, if not better. However, the Bayesian approach has the advantage that it lets you regularize your model to prevent overfitting **and** meaningfully interpret the regularization parameters. Contrary to the usual way of looking at ridge regression, the regularization parameters are no longer abstract numbers but can be interpreted through the Bayesian paradigm as derived from prior beliefs. In this post, I'll show you the formal similarity between a generalized ridge estimator and its Bayesian equivalent.

A (very brief) primer on Bayesian Stats

To understand the Bayesian regression estimator, a minimal amount of knowledge about Bayesian statistics is necessary, so here's what you need to know (if you don't already): In Bayesian statistics we think about model parameters (e.g. regression coefficients) probabilistically. In other words, the data given to us is fixed, and the parameters are considered random. This runs counter to the standard, frequentist perspective, in which the underlying model parameters are treated as fixed, while the data are considered random realizations of the stochastic process driven by those fixed model parameters. The end goal of Bayesian analysis is to find the posterior distribution, which you may remember from Bayes' rule:

$$p(\theta|y) = \frac{p(y|\theta) p(\theta)}{p(y)}$$

While $p(y|\theta)$ is our likelihood and $p(y)$ is a normalizing constant, $p(\theta)$ is our prior, which does not depend on the data, $y$. In classical statistics, $p(\theta)$ is set to 1 (an improper reference prior), so that when the posterior 'probability' is maximized, really just the likelihood is maximized, because it's the only part that still depends on $\theta$. However, in Bayesian statistics we use an actual probability distribution in place of $p(\theta)$, a Normal distribution for example. So let's consider the case of a regression problem, and we'll assume that our target, $y$, **and** our prior follow normal distributions. This leads us to **conjugate** Bayesian analysis, in which we can neatly write down an equation for the posterior distribution. In many cases this is actually not possible, and for this reason Markov Chain Monte Carlo methods were invented to sample from the posterior - taking a frequentist approach, ironically.

We'll make the usual assumption about the data: $y_i$ is i.i.d. $N(\mathbf{x}_i \beta, \sigma^2)$ for all observations $i$. This gives us our standard likelihood for the Normal distribution. Now we can specify the prior for the parameters we're trying to estimate, $(\beta, \sigma^2)$. If we choose a Normal prior (conditional on the variance, $\sigma^2$) for the vector of weights $\beta$, i.e. $N(b_0, \sigma^2 B_0)$, and an inverse-Gamma prior over the variance parameter, it can be shown that the posterior distribution for $\beta$ is Normally distributed with mean

$$\hat\beta_{Bayesian} = (B_0^{-1} + X'X)^{-1}(B_0^{-1} b_0 + X'X \hat\beta)$$

If you're interested in a proof of this result check out Jackman (2009, p.526).

Let's look at it piece by piece:

- $\hat\beta$ is our standard OLS estimator, $(X'X)^{-1}X'y$
- $b_0$ is the mean vector of the (multivariate normal) prior distribution, so it lets us specify what we think the average values of each of our model parameters are
- $B_0$ is the covariance matrix and contains our respective uncertainties about the model parameters. The inverse of the variance is called the **precision**

What we can see from the equation is that the mean of our posterior is a **precision weighted average** of our prior mean (information not based on data) and the OLS estimator (based solely on the data). The second term in parentheses indicates that we are taking the precision-weighted prior mean, $B_0^{-1} b_0$, and adding it to the weighted OLS estimator, $X'X\hat\beta$. Imagine for a moment that $B_0^{-1} = 0$. Then

$$\hat\beta_{Bayesian} = (X'X)^{-1}(X'X \hat\beta) = \hat\beta$$

This would mean that we are **infinitely** uncertain about our prior beliefs, so the mean vector of our prior distribution would vanish, contributing nothing to our posterior! Likewise, if our uncertainty decreases (and the precision thus increases), the prior mean, $b_0$, contributes more to the posterior mean.
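To make the formula above concrete, here is a minimal sketch in R (the function name and variables are my own for illustration, not taken from the original code):

```r
# Posterior mean of the conjugate Bayesian linear model:
# a precision-weighted average of the prior mean b0 and the OLS estimator
bayes_posterior_mean <- function(X, y, b0, B0) {
  XtX      <- crossprod(X)                 # X'X
  beta_ols <- solve(XtX, crossprod(X, y))  # (X'X)^{-1} X'y
  B0_inv   <- solve(B0)                    # prior precision matrix
  solve(B0_inv + XtX, B0_inv %*% b0 + XtX %*% beta_ols)
}
```

With a very diffuse prior (a precision matrix close to zero), this reduces to the OLS estimator, exactly as described above.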

After this short primer on Bayesian statistics, we can now formally compare the Ridge estimator with the above Bayesian estimator. But first, we need to take a look at a more general version of the Ridge estimator.

Generalizing the Ridge estimator

A standard tool in many regression problems, the Ridge estimator is derived by solving a penalized least squares problem with the following loss function:

$$L(\beta,\lambda) = \frac{1}{2}\sum(y-X\beta)^2 + \frac{1}{2} \lambda ||\beta||^2$$

While minimizing this gives us the standard Ridge estimator you have probably seen in textbooks on the subject, there's a slightly more general version of this loss function:

$$L(\beta,\lambda,\mu) = \frac{1}{2}\sum(y-X\beta)^2 + \frac{1}{2} \lambda ||\beta - \mu||^2$$

Let's derive the estimator by first re-writing the loss function in terms of matrices:

$$\begin{aligned}L(\beta,\lambda,\mu) &= \frac{1}{2}(y - X \beta)^{T}(y - X \beta) + \frac{1}{2} \lambda||\beta - \mu||^2 \\&= \frac{1}{2} y^Ty - \beta^T X^T y + \frac{1}{2} \beta^T X^T X \beta +\frac{1}{2} \lambda||\beta - \mu||^2\end{aligned}$$

Differentiating with respect to the parameter vector, we end up with this expression for the gradient:

$$\nabla_{\beta} L (\beta, \lambda, \mu) = -X^T y + X^T X \beta + \lambda (\beta - \mu)$$

Setting the gradient to zero and solving for $\beta$, we get this expression for the generalized Ridge estimator:

$$\hat\beta_{Ridge} = (X'X + \lambda I )^{-1}(\lambda \mu + X'y)$$

The standard Ridge estimator can be recovered by setting $\mu=0$. Usually, we regard $\lambda$ as an abstract parameter that regulates the penalty size, and $\mu$ as a vector of values (one for each predictor) from which deviations of the coefficients increase the loss. When $\mu=0$, the coefficients are simply pulled towards zero.

Let's take a look at how the estimator behaves when the parameters $\mu$ and $\lambda$ change. We'll define a meaningful 'prior' for our example and then vary the penalty parameter. As an example, we'll use the `diamonds` dataset from the `ggplot2` package and model the price as a linear function of the carat, depth, table, x, y and z attributes of each diamond.
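The exact code behind the plots isn't reproduced here (see the GitHub page mentioned at the end); a minimal sketch of the setup could look like this in R, where the standardization and the 'prior' values are illustrative assumptions and the intercept is omitted for simplicity:

```r
library(ggplot2)  # provides the diamonds dataset

# generalized Ridge estimator: (X'X + lambda*I)^{-1} (lambda*mu + X'y)
ridge_general <- function(X, y, lambda, mu = rep(0, ncol(X))) {
  solve(crossprod(X) + lambda * diag(ncol(X)), lambda * mu + crossprod(X, y))
}

X <- scale(as.matrix(diamonds[, c("carat", "depth", "table", "x", "y", "z")]))
y <- diamonds$price

# coefficient paths over a grid of penalties, without and with a 'prior' mu
lambdas     <- 10^seq(0, 6, length.out = 25)
coefs_zero  <- sapply(lambdas, function(l) ridge_general(X, y, l))
coefs_prior <- sapply(lambdas, function(l)
  ridge_general(X, y, l, mu = c(5000, 0, 0, 1000, 1000, 1000)))  # illustrative 'prior'
```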

As we can see from the plots, both with and without a prior, the coefficient estimates change rapidly for the first few increases in the penalty size. The first plot also shows the familiar 'shrinkage' effect: as the penalty increases, the coefficients tend towards zero, some faster than others. The second plot shows how the coefficients change when we set a sensible 'prior'. The coefficients still change, but they now tend towards the 'prior' we specified. This is because $\lambda$ penalizes deviations from $\mu$, which means that larger values for the penalty pull the coefficients towards $\mu$. You might be asking yourself how this compares to the Bayesian estimator. Let's find out!

Comparing the Ridge and Bayesian Estimator

Now that we've seen both the Ridge and the Bayesian estimators, it's time to compare them. We discovered that the Bayesian estimator contains the OLS estimator. Since we know its form, let's substitute it and see what happens:

$$\begin{aligned}\hat\beta_{Bayesian} &= (X'X + B_0^{-1})^{-1}(B_0^{-1} b_0 + X'X \hat\beta) \\&= (X'X + B_0^{-1})^{-1}(B_0^{-1} b_0 + X'X (X'X)^{-1}X'y) \\&= (X'X + B_0^{-1})^{-1}(B_0^{-1} b_0 + X'y)\end{aligned}$$

This form makes the analogy much clearer:

- $\lambda I$ corresponds to $B_0^{-1}$, the matrix of precisions. In other words, since $I$ is the identity matrix, the ridge estimator assumes no covariances between the regression coefficients and a constant precision across all coefficients (recall that $\lambda$ is a scalar)
- $\lambda \mu$ corresponds to $B_0^{-1} b_0$, which makes sense, since the vector $b_0$ is the mean of our prior distribution, which essentially pulls the estimator towards it, just like $\mu$ 'shrinks' the coefficients towards its values. This 'pull' depends on the uncertainty captured by $B_0^{-1}$ or $\lambda I$ in the ridge estimator.

That's all well and good, but let's see how changing the uncertainty in the Bayesian case compares to the behaviour of the Ridge estimator. Using the same data and the same model specification as above, we'll set the prior covariance matrix $B_0$ equal to $\lambda I$ and then vary $\lambda$. Remember, smaller values of $\lambda$ now imply a greater contribution of the prior (less uncertainty), and increasing them makes the prior less important.
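As a quick numerical sanity check, the correspondence from the list above can be verified directly: with $b_0 = \mu$ and a prior precision of $\lambda I$ (i.e. $B_0 = \lambda^{-1} I$), the Bayesian posterior mean and the generalized Ridge estimator coincide. This sketch reuses the hypothetical functions defined earlier:

```r
lambda <- 100
mu     <- c(5000, 0, 0, 1000, 1000, 1000)  # illustrative 'prior' from above

beta_ridge <- ridge_general(X, y, lambda = lambda, mu = mu)
beta_bayes <- bayes_posterior_mean(X, y, b0 = mu, B0 = diag(ncol(X)) / lambda)

all.equal(as.numeric(beta_ridge), as.numeric(beta_bayes))  # TRUE up to numerical error
```

Note that parameterizing the prior covariance as $B_0 = \lambda I$, as in the plots, corresponds to a Ridge penalty of $1/\lambda$, which is why smaller values of $\lambda$ now mean a stronger prior.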

The above plots match our understanding so far: with a prior mean of zeros, the coefficients are shrunk towards zero when the prior dominates, i.e. when the precision is high, just as in the ridge regression case. And when a prior mean is set, the coefficients tend towards it as the precision increases. So much for the coefficients, but what about predictive performance? Let's have a look!

Performance comparison

Lastly, we'll compare the predictive performance of the two models. Although we could treat the parameters in the model as hyperparameters to be tuned, this would defeat the purpose of using prior knowledge. Instead, let's choose a prior specification for both models and then compare the performance on a hold-out set (30% of the data). While we can use the simple $X\hat\beta$ as our predictor for the Ridge model, the Bayesian model provides us with a full posterior predictive distribution which we can sample from to get model predictions. To estimate the model I used the `brms` package.
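The exact prior specification isn't reproduced here, but a minimal sketch of fitting and evaluating the Bayesian model with `brms` might look like this (the prior values, number of chains and iterations are illustrative assumptions, and the default Gaussian family is used):

```r
library(brms)

set.seed(42)
train_idx <- sample(nrow(diamonds), size = 0.7 * nrow(diamonds))
train <- diamonds[train_idx, ]
test  <- diamonds[-train_idx, ]

# Bayesian linear model with an illustrative normal prior on the coefficients
fit_bayes <- brm(
  price ~ carat + depth + table + x + y + z,
  data   = train,
  prior  = prior(normal(0, 1000), class = "b"),
  chains = 2, iter = 2000
)

# point predictions: average over draws from the posterior predictive distribution
pred_bayes <- colMeans(posterior_predict(fit_bayes, newdata = test))

rmse <- sqrt(mean((test$price - pred_bayes)^2))
mae  <- mean(abs(test$price - pred_bayes))
```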

| Model                 |    RMSE |     MAE | MAPE (%) |
| :-------------------- | ------: | ------: | -------: |
| Bayesian Linear Model | 1625.38 | 1091.36 |    44.15 |
| Ridge Estimator       | 1756.01 | 1173.50 |    43.44 |

Overall, both models perform similarly, although some error metrics slightly favor one model over the other. Judging by these errors, we could certainly improve our models by specifying a more appropriate probability distribution for our target variable. After all, prices cannot be negative, yet our models can and do produce negative predictions.

Recap

In this post, I've shown you how the Ridge estimator compares to the Bayesian conjugate linear model. Now you understand the connection between the two models and how a Bayesian approach can provide a more readily interpretable way of regularizing your model. Normally, $\lambda$ is considered an abstract penalty size, but it can now be interpreted as a prior precision, i.e. the inverse of our prior uncertainty. Similarly, the parameter vector $\mu$ can be seen as the vector of prior means for our model parameters in the extended Ridge model. Beyond that, the Bayesian approach lets us use prior distributions to incorporate expert knowledge into the estimation process. This regularizes the model and allows external information to be taken into account. If you are interested in the code, check it out at our GitHub page!

References

- Jackman S. 2009. Bayesian Analysis for the Social Sciences. West Sussex: Wiley.
