
XGBoost Tree vs. Linear

  • Expert: Fabian Müller
  • Date: 12 January 2018
  • Topics: Machine Learning, Python, R
  • Format: Blog
  • Category: Technology

Introduction

One of the highlights of this year's H2O World was a Kaggle Grandmaster Panel. The panelists, Gilberto Titericz (Airbnb), Mathias Müller (H2O.ai), Dmitry Larko (H2O.ai), Marios Michailidis (H2O.ai), and Mark Landry (H2O.ai), answered various questions about Kaggle and data science in general.

One of the questions from the audience was which tools and algorithms the Grandmasters use most frequently. As expected, every single one of them named the gradient boosting implementation XGBoost (Chen and Guestrin 2016). This is not surprising, since XGBoost is currently probably the most widely used algorithm in data science.

The popularity of XGBoost manifests itself in numerous blog posts, including tutorials for R and Python, guides to hyperparameter tuning, and even posts on using XGBoost with Nvidia's CUDA GPU support.

At STATWORX, we also frequently leverage XGBoost's power for external and internal projects. One question we recently asked ourselves was how big the difference between the two base learners (also called boosters) offered by XGBoost actually is. This post tries to answer that question in a systematic way.

Weak Learner

Gradient boosting can be interpreted as a combination of single models (so-called base learners or weak learners) into an ensemble model (Natekin and Knoll 2013).
In theory, any base learner can be used in the boosting framework, but some have proven themselves to be particularly useful: linear and penalized models (Hastie et al. 2008), (B/P-)splines (Huang and Yang 2004), and especially decision trees (James et al. 2013). Among the less frequently used base learners are random effects (Tutz and Groll 2009), radial basis functions (Gomez-Verdejo et al. 2002), Markov random fields (Dietterich et al. 2004), and wavelets (Dubossarsky et al. 2016).

Chen and Guestrin (2016) describe XGBoost as an additive function, given the data $D = \{(x_i, y_i)\}$, of the following form:

    \[\hat{y}_i = \Phi(x_i) = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in \mathcal{F}\]
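
The ensemble is learned by minimizing a regularized objective, which in the notation of Chen and Guestrin (2016) reads:

    \[\mathcal{L}(\Phi) = \sum_{i} l(\hat{y}_i, y_i) + \sum_{k} \Omega(f_k), \quad \Omega(f) = \gamma T + \frac{1}{2} \lambda \lVert w \rVert^2\]

where $l$ is a differentiable loss function, $T$ the number of leaves of a tree, and $w$ its leaf weights; the lambda (and, in the implementation, alpha) regularization parameters tuned later in this post act on exactly these weights.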

In their original paper, $f_k(x)$ for all $k = 1, \dots, K$ is defined as a classification and regression tree (CART). Apart from that, the alert reader of the technical documentation knows that the functional form of $f_k(x)$ can be altered via the booster argument in R/Python:

# Example of XGBoost for regression in R with tree base learners (CART)
xgboost(data = train_DMatrix,
        nrounds = 100,
        objective = "reg:linear",
        eval_metric = "rmse",
        booster = "gbtree")

One can choose between decision trees (gbtree and dart) and linear models (gblinear). Unfortunately, there is only limited literature comparing different base learners for boosting (see, for example, Joshi et al. 2002). To our knowledge, no systematic comparison is available for the special case of XGBoost.
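
For illustration, switching to the linear base learner only requires changing the booster argument; a minimal sketch, reusing the hypothetical train_DMatrix from above:

# Same regression setup, but with a linear base learner; alpha and
# lambda now regularize the linear coefficients instead of leaf weights
xgboost(data = train_DMatrix,
        nrounds = 100,
        objective = "reg:linear",
        eval_metric = "rmse",
        booster = "gblinear")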

Simulation and Setup

In order to compare linear with tree base learners, we propose the following Monte Carlo simulation:

1) Draw a random number n from a uniform distribution on [100, 2500].
2) Simulate four datasets, two for classification and two for regression, each having n observations.
3) On each dataset, train a boosting model with tree and linear base learners, respectively.
4) Calculate an appropriate error metric for each model on each dataset.

Repeat the outlined procedure m = 100 times.

As for simulation, we use the functions twoClassSim(), LPH07_1(), LPH07_2(), and SLC14_1() from the caret package. In addition to the relevant features, a varying number of (correlated) random features was added. Note that, in order to resemble real-life data, all data-generating processes involve non-linear components. For further details, we refer the reader to the caret package documentation.
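
A minimal sketch of the simulation step in R, assuming the caret functions named above (the noise and correlation settings shown are illustrative, not the exact values used in the study):

library(caret)

set.seed(42)

# 1) Draw a random sample size n from [100, 2500]
n <- round(runif(1, min = 100, max = 2500))

# 2) Simulate two classification and two regression datasets,
#    each with additional correlated noise features
class_data_1 <- twoClassSim(n = n, noiseVars = 10, corrVars = 5, corrValue = 0.6)
class_data_2 <- LPH07_1(n = n, noiseVars = 10, corrVars = 5, corrValue = 0.6, class = TRUE)
reg_data_1   <- LPH07_2(n = n, noiseVars = 10, corrVars = 5, corrValue = 0.6)
reg_data_2   <- SLC14_1(n = n, noiseVars = 10, corrVars = 5, corrValue = 0.6)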

For each dataset, we apply the same (random) splitting strategy: 70% of the data goes to training, 15% is used for validation, and the remaining 15% is used for testing. Regarding hyperparameter tuning, we use a grid-search strategy in combination with 10-fold cross-validation on the training data. Regardless of the base learner type, the L1 (alpha) and L2 (lambda) regularization parameters were tuned using a shared parameter space.
For tree boosting, the learning rate (eta) was held constant at 0.3 while tuning the optimal tree size (max_depth). Finally, we used a fixed number of 1000 boosting iterations (nrounds) in combination with ten early stopping rounds (early_stopping_rounds) on the validation frame. The final performance was evaluated by applying the model with the best cross-validated parameters to the test dataset.
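
Sketched in R, the training step for a single dataset and parameter combination might look as follows (the grid-search loop is omitted; train_DMatrix, valid_DMatrix, and test_DMatrix are assumed to be xgb.DMatrix objects built from the 70/15/15 split):

library(xgboost)

params <- list(booster     = "gbtree",      # or "gblinear"
               objective   = "reg:linear",
               eval_metric = "rmse",
               eta         = 0.3,           # held constant for tree boosting
               max_depth   = 6,             # tuned via grid search
               alpha       = 0,             # L1 regularization (tuned)
               lambda      = 1)             # L2 regularization (tuned)

fit <- xgb.train(params    = params,
                 data      = train_DMatrix,
                 nrounds   = 1000,
                 watchlist = list(valid = valid_DMatrix),
                 early_stopping_rounds = 10)

# Final performance: apply the tuned model to the held-out test set
pred <- predict(fit, test_DMatrix)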

Results

Figure 1 and Figure 2 show the distributions of out-of-sample classification performance (AUC) and regression errors (RMSE) for the classification and regression datasets, respectively. Associated summary statistics can be found in Table 1.

Table 1: Error summary statistics by dataset and base learner

Base learner | Dataset | Type           | Error metric | Average error | Error std. dev.
Linear       | 1       | Classification | AUC          | 0.904         | 0.031
Tree         | 1       | Classification | AUC          | 0.934         | 0.090
Linear       | 2       | Classification | AUC          | 0.733         | 0.087
Tree         | 2       | Classification | AUC          | 0.730         | 0.062
Linear       | 3       | Regression     | RMSE         | 45.182        | 2.915
Tree         | 3       | Regression     | RMSE         | 17.207        | 9.067
Linear       | 4       | Regression     | RMSE         | 17.383        | 1.454
Tree         | 4       | Regression     | RMSE         | 6.595         | 3.104

For the first dataset, the models using tree learners are on average better than the models with linear learners, though the tree models exhibit greater variance. The relationship is reversed for the second dataset: on average, the linear models are slightly better, and the tree models exhibit lower variance.

[Figure 1: Distribution of out-of-sample classification performance (AUC) by dataset and base learner]

In contrast to the classification case, there is a substantial performance difference in favor of the tree models for both regression datasets. For the third dataset, the tree models are on average better than their linear counterparts, although the variance of their results is substantially higher. The results are similar for the fourth dataset: the tree models are again better on average, but show higher variation.

[Figure 2: Distribution of out-of-sample regression errors (RMSE) by dataset and base learner]

Summary

The results from a Monte Carlo simulation with 100 artificial datasets indicate that XGBoost with tree and linear base learners yields comparable results for classification problems, while tree learners are superior for regression problems. Based on this result, there is no single recommendation for which base learner to use when trying to minimize model bias. In addition, tree-based XGBoost models suffer from higher estimation variance compared to their linear counterparts, a finding that is probably related to the richer hyperparameter space of tree models. The complete code can be found on GitHub.

