CUDA Accelerated Tree Construction Algorithms
This plugin adds GPU accelerated tree construction algorithms to XGBoost.
Usage
Specify the 'updater' parameter as one of the following algorithms.
Algorithms
| updater | Description |
|---|---|
| grow_gpu | The standard XGBoost tree construction algorithm. Performs exact search for splits. Slower and uses considerably more memory than 'grow_gpu_hist' |
| grow_gpu_hist | Equivalent to the XGBoost fast histogram algorithm. Faster and uses considerably less memory. Splits may be less accurate. |
Supported parameters
| parameter | grow_gpu | grow_gpu_hist |
|---|---|---|
| subsample | ✔ | ✔ |
| colsample_bytree | ✔ | ✔ |
| colsample_bylevel | ✔ | ✔ |
| max_bin | ✖ | ✔ |
| gpu_id | ✔ | ✔ |
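As an illustrative, hedged sketch (the parameter values below are arbitrary and only meant to show which options each updater accepts), the sampling parameters combine with the exact GPU updater like this:

```python
param = {
    'updater': 'grow_gpu',      # exact GPU algorithm; 'max_bin' is not supported here
    'subsample': 0.8,           # row subsampling
    'colsample_bytree': 0.8,    # column subsampling per tree
    'colsample_bylevel': 0.9,   # column subsampling per level
    'gpu_id': 0,                # device ordinal, defaults to 0
}
```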
All algorithms currently use only a single GPU. The device ordinal can be selected using the 'gpu_id' parameter, which defaults to 0.
This plugin currently works with the CLI and Python versions of XGBoost.
Python example:
param['gpu_id'] = 1
param['max_bin'] = 16
param['updater'] = 'grow_gpu_hist'
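For context, the same parameters in a fuller, hedged training sketch (the demo data paths are assumptions, relative to the repository root; substitute any DMatrix):

```python
import xgboost as xgb

# The agaricus demo files ship with the xgboost repository; any DMatrix works here.
dtrain = xgb.DMatrix('demo/data/agaricus.txt.train')
dtest = xgb.DMatrix('demo/data/agaricus.txt.test')

param = {
    'objective': 'binary:logistic',
    'updater': 'grow_gpu_hist',   # GPU fast histogram algorithm
    'max_bin': 16,
    'gpu_id': 1,                  # second GPU; the default is 0
    'max_depth': 6,
}

bst = xgb.train(param, dtrain, num_boost_round=10, evals=[(dtest, 'test')])
```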
Benchmarks
To run benchmarks on synthetic data for binary classification:
$ python benchmark/benchmark.py
Training time on 1,000,000 rows x 50 columns with 500 boosting iterations on an i7-6700K CPU @ 4.00GHz and a Pascal Titan X.
| Updater | Time (s) |
|---|---|
| grow_gpu_hist | 11.09 |
| grow_fast_histmaker (histogram XGBoost - CPU) | 41.75 |
| grow_gpu | 193.90 |
| grow_colmaker (standard XGBoost - CPU) | 720.12 |
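For reference, a minimal sketch of how such a comparison could be reproduced in Python (this is an assumption-laden outline, not the actual benchmark script; it assumes scikit-learn is available for the synthetic data and uses smaller settings than the table above):

```python
import time
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic binary classification data (smaller than the benchmark above).
X, y = make_classification(n_samples=100000, n_features=50, random_state=7)
dtrain = xgb.DMatrix(X, label=y)

for updater in ['grow_gpu_hist', 'grow_gpu', 'grow_colmaker']:
    param = {'objective': 'binary:logistic', 'updater': updater, 'max_depth': 6}
    start = time.time()
    xgb.train(param, dtrain, num_boost_round=50)
    print('%s: %.2f s' % (updater, time.time() - start))
```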
See here for additional performance benchmarks of the 'grow_gpu' updater.
Test
To run tests:
$ python -m nose test/python/
Dependencies
A CUDA capable GPU with compute capability 3.5 or higher (the algorithm depends on shuffle and vote instructions introduced in Kepler).
Building the plug-in requires CUDA Toolkit 7.5 or later.
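If you are unsure of your GPU's compute capability, one optional way to check it from Python is through the third-party 'pycuda' package (this is only a convenience sketch; pycuda is not required by xgboost):

```python
import pycuda.driver as cuda  # optional third-party package, not an xgboost dependency

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print('GPU %d: %s, compute capability %d.%d' % (i, dev.name(), major, minor))
```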
Build
Using cmake
To use the plugin, xgboost must be built with the option PLUGIN_UPDATER_GPU=ON. CMake will prepare a build system appropriate for your platform.
On Linux, from the xgboost directory:
$ mkdir build
$ cd build
$ cmake .. -DPLUGIN_UPDATER_GPU=ON
$ make
If 'make' fails, try invoking it again; the build can occasionally fail because of the order in which targets are built.
On Windows, you may also need to specify a 64-bit generator, so the cmake command becomes:
$ cmake .. -G"Visual Studio 12 2013 Win64" -DPLUGIN_UPDATER_GPU=ON
You may also be able to use a later version of Visual Studio, depending on whether the CUDA toolkit supports it. CMake will generate an xgboost.sln solution file in the build directory. Build this solution in Release mode, and check that it is being built as x64; if not, make sure the CMake generator is set correctly.
Using make
The plugin also supports the usual 'make' flow for building the GPU-enabled tree construction plugins. This is currently only tested on Linux. From the xgboost directory:
# make sure CUDA SDK bin directory is in the 'PATH' env variable
$ make PLUGIN_UPDATER_GPU=ON
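After either build flow, a quick smoke test from Python can confirm the GPU updater is usable. This assumes the Python package has also been installed against the freshly built library (e.g. from 'python-package/'); it is only an illustrative check:

```python
import numpy as np
import xgboost as xgb

# Tiny random binary classification problem, just enough to exercise the GPU updater.
X = np.random.rand(1000, 10)
y = np.random.randint(2, size=1000)
dtrain = xgb.DMatrix(X, label=y)

param = {'objective': 'binary:logistic', 'updater': 'grow_gpu'}
xgb.train(param, dtrain, num_boost_round=5)
print('grow_gpu updater ran successfully')
```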
For Developers!
Parts of the GPU plugin code-base now have googletest unit-tests under 'tests/'. They can be built and run along with the other unit-tests under 'tests/cpp' using:
# make sure CUDA SDK bin directory is in the 'PATH' env variable
# the next two commands need only be executed once
$ source ./dmlc-core/scripts/travis/travis_setup_env.sh
$ make -f dmlc-core/scripts/packages.mk gtest
$ make PLUGIN_UPDATER_GPU=ON GTEST_PATH=${CACHE_PREFIX} test
Changelog
2017/5/31
- Faster version of the grow_gpu plugin
- Added support for building gpu plugin through 'make' flow too
2017/5/5
- Histogram performance improvements
- Fix gcc build issues
2017/4/25
- Add fast histogram algorithm
- Fix Linux build
- Add 'gpu_id' parameter
Author
Rory Mitchell
Please report bugs to the xgboost/issues page. You can tag me with @RAMitchell.
Otherwise I can be contacted at r.a.mitchell.nz at gmail.