## Introduction

One of the highlights of this year's H2O World was a Kaggle Grandmaster Panel. The panelists, Gilberto Titericz (Airbnb), Mathias Müller (H2O.ai), Dmitry Larko (H2O.ai), Marios Michailidis (H2O.ai), and Mark Landry (H2O.ai), answered various questions about Kaggle and data science in general.

One of the questions from the audience was which tools and algorithms the Grandmasters use most frequently. As expected, every single one of them named the gradient boosting implementation XGBoost (Chen and Guestrin 2016). This is not surprising, since XGBoost is probably the most widely used algorithm in data science at the moment.

The popularity of XGBoost manifests itself in various blog posts, including tutorials for R and Python, guides to hyperparameter tuning for XGBoost, and even posts on using XGBoost with Nvidia's CUDA GPU support.

At STATWORX, we also frequently leverage XGBoost's power for external and internal projects. One question that we recently asked ourselves was how big the difference between the two **base learners** (also called *boosters*) offered by XGBoost really is. This post tries to answer this question in a more systematic way.

## Weak Learner

Gradient boosting can be interpreted as a combination of single models (so-called *base learners* or *weak learners*) into an ensemble model (Natekin and Knoll 2013).

In theory, any base learner can be used in the boosting framework, but some base learners have proven themselves particularly useful: linear and penalized models (Hastie et al. 2008), (B/P-)splines (Huang and Yang 2004), and especially decision trees (James et al. 2013). Among the less frequently used base learners are random effects (Tutz and Groll 2009), radial basis functions (Gomez-Verdejo et al. 2002), Markov random fields (Dietterich et al. 2004), and wavelets (Dubossarsky et al. 2016).

Chen and Guestrin (2016) describe XGBoost as an additive function, given the data $\mathcal{D} = \{(x_i, y_i)\}$, of the following form:

$$\hat{y}_i = \phi(x_i) = \sum_{k=1}^{K} f_k(x_i), \qquad f_k \in \mathcal{F}$$

In their original paper, $f_k(x)$ is defined as a *classification or regression tree* (CART). Apart from that, the alert reader of the technical documentation knows that one can alter the functional form of $f_k(x)$ by using the `booster` argument in R/Python:

```
# Example of XGBoost for regression in R with trees (CART)
library(xgboost)

xgboost(data = train_DMatrix,
        objective = "reg:linear",
        eval_metric = "rmse",
        booster = "gbtree",
        nrounds = 100)
```

One can choose between decision trees (`gbtree` and `dart`) and linear models (`gblinear`).
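For illustration, switching to a linear base learner only requires changing this argument; a minimal sketch, mirroring the regression example above:

```
# Same regression setup as above, but with a linear base learner
xgboost(data = train_DMatrix,
        objective = "reg:linear",
        eval_metric = "rmse",
        booster = "gblinear",
        nrounds = 100)
```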

Unfortunately, there is only limited literature on the comparison of different base learners for boosting (see for example Joshi et al. 2002). To our knowledge, for the special case of XGBoost no systematic comparison is available.

## Simulation and Setup

In order to compare linear with tree base learners, we propose the following Monte Carlo simulation:

1) Draw a random number of observations $n$ from a uniform distribution.

2) Simulate four datasets, two for classification and two for regression, each having $n$ observations.

3) On each dataset, train a boosting model with tree and linear base learners, respectively.

4) Calculate an appropriate error metric for each model on each dataset.

Repeat the outlined procedure 100 times (a sketch of the resulting loop is given below).
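A minimal skeleton of this loop in R, assuming hypothetical helper functions `simulate_datasets()` and `fit_and_score()` for steps 2 to 4; the bounds of the uniform distribution are placeholders, since the exact range is not stated here:

```
# Skeleton of the Monte Carlo simulation; helper functions are hypothetical
set.seed(1234)
n_min <- 100    # placeholder lower bound for n (actual range not stated)
n_max <- 10000  # placeholder upper bound for n

results <- list()
for (i in seq_len(100)) {                          # 100 repetitions
  n <- round(runif(1, min = n_min, max = n_max))   # step 1: draw n
  datasets <- simulate_datasets(n)                 # step 2: four caret datasets
  for (booster in c("gbtree", "gblinear")) {       # step 3: both base learners
    results[[length(results) + 1]] <-
      lapply(datasets, fit_and_score, booster = booster)  # step 4: error metrics
  }
}
```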

As for simulation, we use the functions `twoClassSim()`, `LPH07_1()`, `LPH07_2()`, and `SLC14_1()` from the `caret` package. In addition to the relevant features, a varying number of (correlated) random features was added. Note that, in order to match real-life data, all data generating processes involve non-linear components. For further details, we advise the reader to take a look at the `caret` package documentation.
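As a sketch, the four data-generating calls could look as follows; the sample size, the numbers of noise and correlated variables, and the correlation value are illustrative, not the exact settings of the study:

```
library(caret)

n <- 1000  # illustrative sample size

# Two classification datasets
class_data_1 <- twoClassSim(n, noiseVars = 10, corrVars = 5, corrValue = 0.5)
class_data_2 <- LPH07_1(n, class = TRUE, noiseVars = 10, corrVars = 5, corrValue = 0.5)

# Two regression datasets
reg_data_1 <- LPH07_2(n, noiseVars = 10, corrVars = 5, corrValue = 0.5)
reg_data_2 <- SLC14_1(n, noiseVars = 10, corrVars = 5, corrValue = 0.5)
```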

For each dataset, we apply the same (random) splitting strategy, where 70% of the data goes to training, 15% is used for validation, and the last 15% is used for testing. Regarding hyperparameter tuning, we use a grid-search strategy in combination with 10-fold cross-validation on the training data. Regardless of the base learner type, $\alpha$ (`alpha`) and $\lambda$ (`lambda`) regularization were tuned using a shared parameter space.
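What such a shared grid and the cross-validation could look like with `xgb.cv()`, with illustrative grid values:

```
# Illustrative shared grid for L1 (alpha) and L2 (lambda) regularization
param_grid <- expand.grid(alpha = c(0, 0.01, 0.1, 1),
                          lambda = c(0, 0.01, 0.1, 1))

# 10-fold cross-validation for a single parameter combination
cv <- xgb.cv(params = list(booster = "gbtree",
                           objective = "reg:linear",
                           alpha = param_grid$alpha[1],
                           lambda = param_grid$lambda[1]),
             data = train_DMatrix,
             nrounds = 1000,
             nfold = 10,
             metrics = "rmse")
```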

For tree boosting, the learning rate (`eta`) was held constant at 0.3 while tuning the optimal tree size (`max_depth`). Finally, we used a fixed number of 1000 boosting iterations (`nrounds`) in combination with ten early stopping rounds (`early_stopping_rounds`) on the validation frame. The final performance was evaluated by applying the model with the best cross-validated parameters to the test dataset.
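In xgboost's R interface, this setup can be sketched with `xgb.train()`; `valid_DMatrix` is an assumed name for the 15% validation split, and the `max_depth` value stands in for the best one found in the grid search:

```
# Sketch: up to 1000 boosting rounds with early stopping on the validation frame
model <- xgb.train(params = list(booster = "gbtree",
                                 objective = "reg:linear",
                                 eval_metric = "rmse",
                                 eta = 0.3,
                                 max_depth = 6),  # assumed best grid value
                   data = train_DMatrix,
                   nrounds = 1000,
                   watchlist = list(valid = valid_DMatrix),
                   early_stopping_rounds = 10)
```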

## Results

Figure 1 and Figure 2 show the distributions of the out-of-sample classification performance (AUC) and regression errors (RMSE) for the respective datasets. Associated summary statistics can be found in Table 1.

##### Table 1: Error summary statistics by datasets and base learners

Base learner | Dataset | Type | Error metric | Average error | Error std. |
---|---|---|---|---|---|
Linear | 1 | Classification | AUC | 0.904 | 0.031 |
Tree | 1 | Classification | AUC | 0.934 | 0.090 |
Linear | 2 | Classification | AUC | 0.733 | 0.087 |
Tree | 2 | Classification | AUC | 0.730 | 0.062 |
Linear | 3 | Regression | RMSE | 45.182 | 2.915 |
Tree | 3 | Regression | RMSE | 17.207 | 9.067 |
Linear | 4 | Regression | RMSE | 17.383 | 1.454 |
Tree | 4 | Regression | RMSE | 6.595 | 3.104 |

For the first dataset, the models using tree learners are on average better than the models with linear learners. However, the tree models exhibit a greater variance. The relationships are reversed for the second dataset. On average, the linear models are slightly better and the tree models exhibit a lower variance.

In contrast to the classification case, for both regression datasets there is a substantial difference in performance in favor of the tree models. For the third dataset, the tree models are on average better than their linear counterparts, but the variance of their results is substantially higher. The results are similar for the fourth dataset: the tree models are again better on average, but feature a higher variation.

## Summary

The results from a Monte Carlo simulation with 100 artificial datasets indicate that XGBoost with tree and linear base learners yields comparable results for classification problems, while tree learners are superior for regression problems. Based on this result, there is no single recommendation as to which model specification one should use when trying to minimize model bias. In addition, tree-based XGBoost models suffer from higher estimation variance compared to their linear counterparts. This finding is probably related to the more complex hyperparameter space of tree models. The complete code can be found on GitHub.

## References

- Chen, Tianqi, and Carlos Guestrin. 2016. “XGBoost: A Scalable Tree Boosting System.” https://arxiv.org/pdf/1603.02754.pdf.
- Dietterich, Thomas G., Pedro Domingos, Lise Getoor, Stephen Muggleton, and Prasad Tadepalli. 2008. “Structured Machine Learning: The Next Ten Years.” http://www.doc.ic.ac.uk/~shm/Papers/strucml.pdf.
- Dubossarsky, E., J.H. Friedman, J.T. Ormerod, and M.P. Wand. 2016. “Wavelet-Based Gradient Boosting.” http://www.matt-wand.utsacademics.info/publicns/Dubossarsky16.pdf.
- Gómez-Verdejo, Vanessa, Jerónimo Arenas-García, Manuel Ortega-Moral, and Aníbal R. Figueiras-Vidal. 2002. “Designing RBF Classifiers for Weighted Boosting.” http://www.tsc.uc3m.es/~jarenas/papers/international_conferences/2005_IJCNN_RBF_boosting.pdf.
- Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2008. “The Elements of Statistical Learning.” Springer.
- Huang, Jianhua Z., and Lijian Yang. 2004. “Identification of Nonlinear Additive Autoregressive Models.” http://lijianyang.com/mypapers/splinelag.pdf.
- James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2013. “An Introduction to Statistical Learning.” Springer.
- Natekin, Alexey, and Alois Knoll. 2013. “Gradient Boosting Machines, a Tutorial.” https://www.frontiersin.org/articles/10.3389/fnbot.2013.00021/full.
- Tutz, Gerhard, and Andreas Groll. 2009. “Generalized Linear Mixed Models Based on Boosting.” http://www.fm.mathematik.uni-muenchen.de/download/publications/glmm_boost.pdf.