
xgboost: eXtreme Gradient Boosting

An optimized general purpose gradient boosting library. The library is parallelized using OpenMP. It implements machine learning algorithms under the gradient boosting framework, including generalized linear models and gradient boosted regression trees.
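
For a quick impression of the interface, here is a minimal sketch using the Python wrapper (assuming the package is built and installed; the data and parameter values are illustrative):

```python
import numpy as np
import xgboost as xgb

# Illustrative data: 100 examples, 10 features, binary labels.
X = np.random.rand(100, 10)
y = np.random.randint(2, size=100)
dtrain = xgb.DMatrix(X, label=y)

# 'gbtree' selects gradient boosted regression trees;
# 'gblinear' would select the generalized linear booster instead.
params = {'booster': 'gbtree', 'objective': 'binary:logistic',
          'max_depth': 3, 'eta': 0.1}
bst = xgb.train(params, dtrain, num_boost_round=10)
preds = bst.predict(dtrain)
```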

Contributors: https://github.com/tqchen/xgboost/graphs/contributors

Tutorial and Documentation: https://github.com/tqchen/xgboost/wiki

Questions and Issues: https://github.com/tqchen/xgboost/issues

Example Code: Learning to use xgboost by examples

Notes on the Code: Code Guide

Learning about the model: Introduction to Boosted Trees

  • These slides were made by Tianqi Chen to introduce gradient boosting from a statistical view.
  • They present boosted tree learning as formal functional-space optimization of a defined objective.
  • The model presented is the one used by xgboost for boosted trees.
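
In the notation of those slides, the regularized objective that xgboost optimizes takes the form

```latex
\mathrm{obj}(\theta) = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k),
\qquad
\Omega(f) = \gamma T + \tfrac{1}{2}\lambda \lVert w \rVert^{2}
```

where l is a differentiable loss, the f_k are the individual regression trees, T is the number of leaves in a tree, and w is its vector of leaf weights.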

What's New

  • Thanks to Bing Xu, XGBoost.jl allows you to use xgboost from Julia
  • See the updated demo folder for feature walkthrough
  • Thanks to Tong He, the new R package is available

Features

  • Sparse feature format:
    • The sparse feature format allows easy handling of missing values and improves computational efficiency (see the sketch after this list).
  • Push the limit on a single machine:
    • An efficient implementation that optimizes both memory and computation.
  • Speed: XGBoost is very fast.
    • In demo/higgs/speedtest.py, on the Kaggle Higgs dataset, it is faster than sklearn.ensemble.GradientBoostingClassifier (on our machine, about 20 times faster using 4 threads).
  • The gradient boosting algorithm is laid out to support user-defined objectives (see the sketch after this list).
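
As a sketch of the missing-value handling and the user-defined objective in the Python wrapper (the -999.0 marker and the squared_error helper are illustrative, not part of the library):

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 5)
X[X < 0.1] = -999.0             # mark some entries as missing
y = np.random.rand(200)

# 'missing' tells xgboost which value encodes an absent feature,
# so incomplete data needs no manual imputation.
dtrain = xgb.DMatrix(X, label=y, missing=-999.0)

def squared_error(preds, dtrain):
    """User-defined objective: return the gradient and hessian
    of the loss with respect to the predictions."""
    labels = dtrain.get_label()
    grad = preds - labels        # d/dpred of 0.5 * (pred - label)^2
    hess = np.ones_like(preds)   # second derivative is 1
    return grad, hess

params = {'max_depth': 2, 'eta': 0.1}
bst = xgb.train(params, dtrain, num_boost_round=10, obj=squared_error)
```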

Build

  • Run bash build.sh (you can also type make)
  • If your compiler does not come with OpenMP support, the build will print a warning telling you that the code will compile in single-thread mode, and you will get a single-threaded xgboost.
  • You may get an error: -lgomp is not found
    • You can type make no_omp=1, which will give you a single-threaded xgboost.
    • Alternatively, you can upgrade your compiler to build the multi-threaded version.
  • Windows (VS 2010): see the windows folder
    • In principle, you add all the cpp files listed in the Makefile to the project, then build.

Version

  • This is version xgboost-0.3; the code has been refactored from 0.2x to be cleaner and more flexible.
  • This version of xgboost is not compatible with 0.2x, due to the large number of changes in code structure.
    • This means that model and buffer files from previous versions cannot be loaded in xgboost-0.3.
  • For legacy 0.2x code, refer to Here
  • Change log in CHANGES.md

XGBoost in Graphlab Create
