* [CI] Assign larger /dev/shm to NCCL
* Use the CUDA 10.2 artifact to run multi-GPU Python tests
* Add CUDA 10.0 -> 11.0 cross-version test; remove CUDA 10.0 target
* [CI] Move lint to a separate script
* [CI] Improved lintr launcher
* Add lintr as a separate action
* Add custom parsing logic to print out logs
* Fix lintr issues in demos
* Run R demos
* Fix CRAN checks
* Install XGBoost into R env before running lintr
* Install devtools (needed to run demos)
* [jvm-packages] Add gpu_hist tree method
* Change updater from hist to grow_quantile_histmaker
* Add GPU scheduling
* Pass correct parameters to the XGBoost library
* Remove debug info
* Add use.cuda option to the POM
* Add CI for gpu_hist in the JVM packages
* Add GPU unit tests
* Use a GPU node to build the JVM packages
* Use nvidia-docker
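The group of commits above enables the gpu_hist tree method in the JVM packages. Below is a minimal sketch of how a user might request it from XGBoost4J-Spark, assuming a CUDA-enabled xgboost4j-spark build and a Spark cluster whose executors can see a GPU; the toy DataFrame and the parameter values are placeholders, not part of this change:

```scala
// Sketch only: assumes a CUDA-enabled xgboost4j-spark build and GPU-capable executors.
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession
import ml.dmlc.xgboost4j.scala.spark.XGBoostClassifier

object GpuHistSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("gpu_hist-sketch").getOrCreate()
    import spark.implicits._

    // Tiny stand-in training set; real data would come from your pipeline.
    val trainDf = Seq(
      (Vectors.dense(1.0, 2.0), 0.0),
      (Vectors.dense(3.0, 4.0), 1.0)
    ).toDF("features", "label")

    val xgbParams = Map(
      "objective"   -> "binary:logistic",
      "tree_method" -> "gpu_hist",  // GPU histogram updater added for the JVM packages
      "num_round"   -> 10,
      "num_workers" -> 1
    )

    val model = new XGBoostClassifier(xgbParams)
      .setFeaturesCol("features")
      .setLabelCol("label")
      .fit(trainDf)

    model.transform(trainDf).show()
    spark.stop()
  }
}
```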
* Add CLI interface to create_jni.py using argparse
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
* [R] Add a compatibility layer to load Booster from an old RDS
* Modify QuantileHistMaker::LoadConfig() to be backward compatible with 1.1.x
* Add a big warning about compatibility in QuantileHistMaker::LoadConfig()
* Add testing suite
* Discourage use of saveRDS() in CRAN doc
* Set a minimal reducer message size, so that the same data size is received from the parent each time.
* When a parent reads from a child, check that it has received at least the minimal reduce size.
Fix bug: rewrite the minimal reducer size check so that each transfer is 1 to N times the minimal reduce size.
Assume the minimal reduce size is X; the logic is:
1: each child uploads a message of total_size bytes
2: each parent receives at least X bytes, up to total_size
3: the parent reduces X, NxX, or total_size bytes
4: the parent sends X, NxX, or total_size bytes to its own parent
5: the parent's parent receives at least X bytes, up to total_size, then reduces X, NxX, or total_size bytes
6: the parent's parent sends X, NxX, or total_size bytes back down to its children
7: the parent receives X, NxX, or total_size bytes and sends them to its children
8: each child receives X, NxX, or total_size bytes.
Throughout the process, every transfer is a (1~N)xX-byte message, capped at total_size.
If X is larger than total_size, allreduce always reduces the whole message and passes it down.
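A minimal sketch of the chunking rule described above; `reducibleBytes`, `buffered`, and `alreadySent` are hypothetical names used only for illustration, not the actual rabit C++ allreduce code:

```scala
// Illustration of the minimal-reduce-size chunking rule; not the rabit implementation.
object MinReduceChunk {
  /** Bytes that may be reduced/forwarded now: a whole multiple of the minimal
    * reduce size X, unless the remaining tail of the message is smaller than X.
    */
  def reducibleBytes(buffered: Long, alreadySent: Long,
                     minSize: Long, totalSize: Long): Long = {
    val remaining = totalSize - alreadySent      // tail of the message still to process
    if (buffered >= remaining) remaining         // final chunk: take whatever is left
    else (buffered / minSize) * minSize          // otherwise only whole multiples of X
  }

  // With X = 2048 and a 10 000-byte message:
  //   reducibleBytes(5000, 0, 2048, 10000)    == 4096  (2 * X)
  //   reducibleBytes(1900, 8192, 2048, 10000) == 1808  (final tail, smaller than X)
}
```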
* Follow the style check rules
* Fix the cpplint check
* Fix allreduce_base header include order
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
* Publish artifacts only on the master and release branches
* Build CUDA only for Compute Capability 7.5 when building PRs
* Run all Windows jobs in a single worker image
* Build nightly XGBoost4J SNAPSHOT JARs with Scala 2.12 only
* Show skipped Python tests on Windows
* Make Graphviz optional for Python tests
* Add back C++ tests
* Unstash xgboost_cpp_tests
* Fix label to CUDA 10.1
* Install CuPy for CUDA 10.1
* Install jsonschema
* Address reviewer's feedback
* Add interval accuracy (see the sketch at the end of this list)
* De-virtualize AFT functions
* Lint
* Refactor AFT metric using GPU-CPU reducer
* Fix R build
* Fix build on Windows
* Fix copyright header
* Clang-tidy
* Fix crashing demo
* Fix typos in comment; explain GPU ID
* Remove unnecessary #include
* Add C++ test for interval accuracy
* Fix a bug in accuracy metric: use log pred
* Refactor AFT objective using GPU-CPU Transform
* Lint
* Fix lint
* Use Ninja to speed up build
* Use time, not /usr/bin/time
* Add cpu_build worker class, with concurrency = 1
* Use concurrency = 1 only for CUDA build
* concurrency = 1 for clang-tidy
* Address reviewer's feedback
* Update link to AFT paper
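The AFT-related commits above add an interval accuracy metric for censored labels. Below is a rough sketch of what such a metric computes, using hypothetical names (`IntervalAccuracy`, `CensoredLabel`); XGBoost's own implementation may differ in details such as comparing predictions in log scale:

```scala
// Rough sketch of an interval-accuracy style metric for censored labels:
// a prediction counts as accurate when it falls inside [lower, upper]
// (upper may be infinite for right-censored rows). Illustration only.
object IntervalAccuracy {
  final case class CensoredLabel(lower: Double, upper: Double)

  def accuracy(preds: Seq[Double], labels: Seq[CensoredLabel]): Double = {
    require(preds.length == labels.length, "prediction/label length mismatch")
    val hits = preds.zip(labels).count { case (p, l) => p >= l.lower && p <= l.upper }
    hits.toDouble / preds.length
  }
}
```

For example, `IntervalAccuracy.accuracy(Seq(2.0, 7.0), Seq(CensoredLabel(1.0, 3.0), CensoredLabel(10.0, Double.PositiveInfinity)))` returns 0.5, since only the first prediction falls inside its label interval.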