XGBoost Tree vs. Linear

Fabian Müller Blog, Data Science

Introduction

One of the highlights of this year's H2O World was a Kaggle Grandmaster Panel. The attendees, Gilberto Titericz (Airbnb), Mathias Müller (H2O.ai), Dmitry Larko (H2O.ai), Marios Michailidis (H2O.ai), and Mark Landry (H2O.ai), answered various questions about Kaggle and data science in general.

One of the questions from the audience was which tools and algorithms the Grandmasters use most frequently. As expected, every single one of them named the gradient boosting implementation XGBoost (Chen and Guestrin 2016). This is not surprising, since XGBoost is at the moment probably the most widely used algorithm in data science.

The popularity of XGBoost manifests itself in numerous blog posts, including tutorials for R and Python, guides on hyperparameter tuning for XGBoost, and even instructions for running XGBoost with Nvidia's CUDA GPU support.

At STATWORX, we also frequently leverage XGBoost's power for external and internal projects (see our Sales Forecasting Automotive Use Case). One question we recently asked ourselves was: how big is the difference between the two base learners (also called boosters) offered by XGBoost? This post tries to answer this question in a more systematic way.

Weak Learner

Gradient boosting can be interpreted as a combination of single models (so-called base learners or weak learners) into an ensemble model (Natekin and Knoll 2013).
In theory, any base learner can be used within the boosting framework, but some base learners have proven themselves to be particularly useful: linear and penalized models (Hastie et al. 2008), (B-/P-)splines (Huang and Yang 2004), and especially decision trees (James et al. 2013). Among the less frequently used base learners are random effects (Tutz and Groll 2009), radial basis functions (Gomez-Verdejo et al. 2002), Markov random fields (Dietterich et al. 2004), and wavelets (Dubossarsky et al. 2016).

Chen and Guestrin (2016) describe XGBoost as an additive function, given the data D = \{(x_i, y_i)\}, of the following form:

    \[\hat{y}_i = \Phi(x_i) = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in F\]

In their original paper, f_k(x) \forall k = 1, \ldots, K is defined as a classification or regression tree (CART). Apart from that, the alert reader of the technical documentation knows that the functional form of f_k(x) can be altered via the booster argument in R/Python:

# Example of XGBoost for regression in R with trees (CART)
xgboost(data = train_DMatrix,
        objective = "reg:linear",
        eval_metric = "rmse",
        booster = "gbtree",
        nrounds = 100)

One can choose between decision trees (gbtree and dart) and linear models (gblinear). Unfortunately, there is only limited literature comparing different base learners for boosting (see, for example, Joshi et al. 2002). To our knowledge, no systematic comparison is available for the special case of XGBoost.
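To make the practical difference between the two booster types concrete, here is a minimal, self-contained sketch of gradient boosting with squared loss on one-dimensional data, written in plain Python for illustration (it is not the xgboost implementation, and all function names are our own). One variant uses a decision stump as base learner, the other a simple least-squares line:

```python
import random

def fit_stump(x, r):
    """Depth-1 regression tree: pick the split threshold minimizing squared
    error on the residuals r, predict the residual mean on each side."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - ml) ** 2 for ri in left)
               + sum((ri - mr) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi: ml if xi <= t else mr

def fit_linear(x, r):
    """Least-squares line fit to the residuals r (the gblinear analogue)."""
    n = len(x)
    mx, mr = sum(x) / n, sum(r) / n
    var = sum((xi - mx) ** 2 for xi in x) or 1e-12
    b = sum((xi - mx) * (ri - mr) for xi, ri in zip(x, r)) / var
    a = mr - b * mx
    return lambda xi: a + b * xi

def boost(x, y, base_fit, K=50, eta=0.3):
    """Additive model F(x) = sum_k eta * f_k(x): each f_k is fit to the
    current residuals, mirroring the formula above."""
    fs, pred = [], [0.0] * len(x)
    for _ in range(K):
        r = [yi - pi for yi, pi in zip(y, pred)]
        f = base_fit(x, r)
        fs.append(f)
        pred = [pi + eta * f(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(eta * f(xi) for f in fs)

random.seed(0)
x = [i / 10 for i in range(100)]
y = [xi ** 2 + random.gauss(0, 0.5) for xi in x]  # non-linear target

for name, fit in [("tree", fit_stump), ("linear", fit_linear)]:
    F = boost(x, y, fit)
    rmse = (sum((yi - F(xi)) ** 2 for xi, yi in zip(x, y)) / len(x)) ** 0.5
    print(name, round(rmse, 3))
```

On this non-linear target, the stump-based ensemble can bend with the data, while a sum of lines remains a line, so the linear booster can do no better than an ordinary least-squares fit.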

Simulation and Setup

In order to compare linear with tree base learners, we propose the following Monte Carlo simulation:

1) Draw a random number n from a uniform distribution on [100, 2500].
2) Simulate four datasets, two for classification and two for regression, each having n observations.
3) On each dataset, train a boosting model with tree and linear base learners, respectively.
4) Calculate an appropriate error metric for each model on each dataset.

Repeat the outlined procedure m = 100 times.
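The outlined procedure can be sketched as a simple loop. The snippet below is a plain-Python skeleton with a stand-in data generator and scorer, purely for illustration (the actual study uses caret's simulation functions and XGBoost in R; all names here are our own):

```python
import random
import statistics

def simulate_dataset(n):
    """Stand-in for caret's generators: n observations of a noisy,
    non-linear regression target."""
    x = [random.uniform(0, 10) for _ in range(n)]
    y = [xi ** 2 - 3 * xi + random.gauss(0, 1) for xi in x]
    return x, y

def run_study(train_and_score, m=100, seed=42):
    """Steps 1-4 of the procedure, repeated m times for both boosters."""
    random.seed(seed)
    results = {"tree": [], "linear": []}
    for _ in range(m):
        n = random.randint(100, 2500)        # 1) draw n ~ U[100, 2500]
        data = simulate_dataset(n)           # 2) simulate a dataset
        for booster in ("tree", "linear"):   # 3) fit both base learners
            results[booster].append(train_and_score(data, booster))  # 4) score
    return results

# Usage with a trivial stand-in scorer (the RMSE of predicting the mean);
# a real run would train and evaluate an XGBoost model here:
res = run_study(lambda data, booster: statistics.pstdev(data[1]), m=10)
print({b: round(statistics.mean(v), 2) for b, v in res.items()})
```

The resulting error distributions per booster type are then what Figures 1 and 2 summarize.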

For the data simulation, we use the functions twoClassSim(), LPH07_1(), LPH07_2(), and SLC14_1() from the caret package. In addition to the relevant features, a varying number of (correlated) random features is added. Note that, in order to mimic real-life data, all data-generating processes involve non-linear components. For further details, we refer the reader to the caret package documentation.

For each dataset, we apply the same (random) splitting strategy: 70% of the data goes to training, 15% is used for validation, and the remaining 15% is used for testing. Regarding hyperparameter tuning, we use a grid-search strategy in combination with 10-fold cross-validation on the training data. Regardless of the base learner type, the L1 (alpha) and L2 (lambda) regularization parameters were tuned using a shared parameter space.
For tree boosting, the learning rate (eta) was held constant at 0.3 while tuning the optimal tree size (max_depth). Finally, we used a fixed number of 1000 boosting iterations (nrounds) in combination with ten early stopping rounds (early_stopping_rounds) on the validation frame. The final performance was evaluated by applying the model with the best cross-validated parameters to the test dataset.
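The 70/15/15 splitting strategy can be sketched as follows (a generic Python helper for illustration, not the original R code; the function name is our own):

```python
import random

def train_val_test_split(rows, seed=1):
    """Randomly partition rows into 70% train, 15% validation, 15% test."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)          # random, reproducible ordering
    n_train = len(rows) * 70 // 100
    n_val = len(rows) * 15 // 100
    train = [rows[i] for i in idx[:n_train]]
    val = [rows[i] for i in idx[n_train:n_train + n_val]]
    test = [rows[i] for i in idx[n_train + n_val:]]
    return train, val, test

tr, va, te = train_val_test_split(list(range(1000)))
print(len(tr), len(va), len(te))  # 700 150 150
```

The validation part plays the role of the early-stopping frame, while tuning itself runs as 10-fold cross-validation inside the training part.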

Results

Figure 1 and Figure 2 show the distributions of the out-of-sample classification performance (AUC) and regression error (RMSE) across datasets. Associated summary statistics can be found in Table 1.

Table 1: Error summary statistics by dataset and base learner

Base learner | Dataset | Type           | Error metric | Average error | Error std.
Linear       | 1       | Classification | AUC          | 0.904         | 0.031
Tree         | 1       | Classification | AUC          | 0.934         | 0.090
Linear       | 2       | Classification | AUC          | 0.733         | 0.087
Tree         | 2       | Classification | AUC          | 0.730         | 0.062
Linear       | 3       | Regression     | RMSE         | 45.182        | 2.915
Tree         | 3       | Regression     | RMSE         | 17.207        | 9.067
Linear       | 4       | Regression     | RMSE         | 17.383        | 1.454
Tree         | 4       | Regression     | RMSE         | 6.595         | 3.104

For the first dataset, the models using tree learners are on average better than the models with linear learners. However, the tree models exhibit a greater variance. The relationships are reversed for the second dataset. On average, the linear models are slightly better and the tree models exhibit a lower variance.

[Figure 1: Distribution of out-of-sample classification performance (AUC) by dataset and base learner]

In contrast to the classification case, there is a substantial difference in performance in favor of the tree models for both regression datasets. For the third dataset, the tree models are on average better than their linear counterparts, but the variance of their results is substantially higher. The results are similar for the fourth dataset: the tree models are again better on average, but exhibit a higher variance.

[Figure 2: Distribution of out-of-sample regression error (RMSE) by dataset and base learner]

Summary

The results from a Monte Carlo simulation with 100 repetitions indicate that XGBoost with tree and linear base learners yields comparable results for classification problems, while tree learners are superior for regression problems. Based on this result, there is no single recommendation as to which model specification to use when trying to minimize model bias. In addition, tree-based XGBoost models suffer from higher estimation variance compared to their linear counterparts. This finding is probably related to the more complex parameter space of tree models. The complete code can be found on GitHub.

References

About the Author

Fabian Müller

I am the Head of Data Science at STATWORX, responsible for our data science teams and key accounts. In my spare time, I'm into sports and fast cars.