* Ensure RMM is 0.18 or later
* Add `use_rmm` flag to the global configuration (see the usage sketch below)
* Modify XGBCachingDeviceAllocatorImpl to skip CUB when use_rmm=True
* Update the demo
* [CI] Pin NumPy to 1.19.4, since NumPy 1.19.5 doesn't work with the latest version of SHAP
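A minimal sketch of how the new flag fits together with RMM, assuming RMM 0.18+ is installed and using the global configuration API introduced further below; the updated demo may differ in detail:

```python
import rmm
import xgboost as xgb

# Hand GPU memory management over to an RMM pool so XGBoost and other
# RAPIDS libraries can draw from the same pool of device memory.
rmm.reinitialize(pool_allocator=True)

# With use_rmm=True, XGBoost routes device allocations through RMM and the
# CUB caching allocator is skipped.
xgb.set_config(use_rmm=True)

# Any subsequent GPU training (e.g. tree_method="gpu_hist") now allocates
# from the RMM pool.
```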
* Enable loading models from XGBoost <1.0.0 trained with objective='binary:logitraw'
* Add binary:logitraw in model compatibility testing suite
* Feedback from @trivialfis: Override ProbToMargin() for LogisticRaw
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
* Add management functions for global configuration: XGBSetGlobalConfig(), XGBGetGlobalConfig().
* Add Python interface: set_config(), get_config(), and config_context() (see the example below)
* Add unit tests for Python
* Add R interface: xgb.set.config(), xgb.get.config()
* Add unit tests for R
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
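The C functions XGBSetGlobalConfig() and XGBGetGlobalConfig() operate on a JSON document, while the Python wrappers expose the same settings as keyword arguments. A short sketch of the Python interface, assuming the documented `verbosity` global parameter:

```python
import xgboost as xgb

# Set a global option.
xgb.set_config(verbosity=2)

# Read back the full global configuration as a dictionary.
config = xgb.get_config()
assert config["verbosity"] == 2

# Temporarily override options inside a scope; the previous values are
# restored when the context manager exits.
with xgb.config_context(verbosity=0):
    assert xgb.get_config()["verbosity"] == 0
assert xgb.get_config()["verbosity"] == 2
```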
* [CI] Add noLD test
* Make noLD test only trigger with a PR comment
* [CI] Don't install stringi
* Add the Titanic example as a unit test
* Document trigger
* Add to index
* Clarify that it needs to be a review comment
* Disable JSON serialization for now.
* Multi-class classification checkpoints at every iteration, which brings significant overhead.
Revert: 90355b4f007ae
* Set R tests to use the binary serialization format.
* Change DefaultEvalMetric of classification from error to logloss (see the example below for keeping the old metric)
* Change default binary metric in plugin/example/custom_obj.cc
* Set old error metric in python tests
* Set old error metric in R tests
* Fix missed eval metrics and typos in R tests
* Fix setting eval_metric twice in R tests
* Add a warning when eval_metric is not set for classification
* Fix Dask tests
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
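Since the default metric for binary classification changes from classification error to log loss, the old behaviour can be restored by setting the metric explicitly. A minimal example on random data, for illustration only:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 4)
y = np.random.randint(2, size=100)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    # "error" was the old default; setting it explicitly restores the previous
    # behaviour and avoids the new warning about an unset eval_metric.
    "eval_metric": "error",
}
bst = xgb.train(params, dtrain, num_boost_round=5, evals=[(dtrain, "train")])
```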
* [R] Fix empty tests and test warnings
* [R] Remove stringi dependency (fix #5905)
* Fix R lint check
* [R] Fix automatic conversion to factor in R < 4.0.0 in xgb.model.dt.tree
* Add `R` Makefile variable
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
* Add SHAP summary plot using ggplot2
* Update xgb.plot.shap
* Update example in xgb.plot.shap documentation
* Update logic, add tests
* Whitespace fixes
* Whitespace fixes for test_helpers
* Add namespace qualifier for the sd() function
* Explicitly declare variables that are automatically evaluated by data.table
* Fix R lint
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
* [CI] Move lint to a separate script
* [CI] Improved lintr launcher
* Add lintr as a separate action
* Add custom parsing logic to print out logs
* Fix lintr issues in demos
* Run R demos
* Fix CRAN checks
* Install XGBoost into R env before running lintr
* Install devtools (needed to run demos)
* [R] Add a compatibility layer to load Booster from an old RDS
* Modify QuantileHistMaker::LoadConfig() to be backward compatible with 1.1.x
* Add a big warning about compatibility in QuantileHistMaker::LoadConfig()
* Add testing suite
* Discourage use of saveRDS() in CRAN doc
* [R-package] Replace uses of T and F with TRUE and FALSE
* Enable linting
* Remove skip
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
* Set output margin to True for custom objectives in Python and R (see the sketch after this list).
* Add a demo for writing multi-class custom objective function.
* Run tests on selected demos.
* Add bindings for serialization.
* Change `xgb.save.raw` to perform full serialization instead of saving only the model.
* Add `xgb.load.raw` for deserialization.
* Run devtools.
* Simplify DropTrees calling logic
* Add `training` parameter for prediction method.
* [Breaking]: Add `training` to C API.
* Update the R and Python custom objective code paths accordingly.
* Correct comment.
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
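Because custom objectives now receive raw margin scores (the output margin is set to True), the objective itself must apply any link function. A hedged sketch of a binary logistic objective written this way, on random data; this is not the exact demo added by this change:

```python
import numpy as np
import xgboost as xgb

def logistic_obj(preds, dtrain):
    # `preds` are raw, untransformed margins, so apply the sigmoid here.
    labels = dtrain.get_label()
    probs = 1.0 / (1.0 + np.exp(-preds))
    grad = probs - labels
    hess = probs * (1.0 - probs)
    return grad, hess

X = np.random.rand(100, 4)
y = np.random.randint(2, size=100)
dtrain = xgb.DMatrix(X, label=y)

# Pass the custom objective via `obj`; gradients come from logistic_obj.
bst = xgb.train({"max_depth": 2}, dtrain, num_boost_round=5, obj=logistic_obj)
```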
This PR fixes an issue where tree weights in DART were ignored when computing feature contributions (see the example below).
* Fix ellpack page source link.
* Apply tree weights when computing contributions.
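A quick way to see the effect of the fix, sketched with random data and the standard prediction API: with a DART booster, the per-row sum of contributions (including the bias column) should now match the margin prediction, because per-tree weights are applied.

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 4)
y = np.random.rand(200)
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({"booster": "dart", "rate_drop": 0.1}, dtrain, num_boost_round=20)

contribs = bst.predict(dtrain, pred_contribs=True)   # shape: (n_rows, n_features + 1)
margin = bst.predict(dtrain, output_margin=True)

# Before the fix, weighted DART trees broke this identity.
print(np.allclose(contribs.sum(axis=1), margin, atol=1e-4))
```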
**Symptom** Apple Clang's implementation of `std::shuffle` doesn't work
correctly when it is run with the random bit generator from the R package:
```cpp
CustomGlobalRandomEngine::result_type
CustomGlobalRandomEngine::operator()() {
  return static_cast<result_type>(
      std::floor(unif_rand() * CustomGlobalRandomEngine::max()));
}
```
Minimal reproduction of the failure (compile using Apple Clang 10.0):
```cpp
std::vector<int> feature_set(100);
std::iota(feature_set.begin(), feature_set.end(), 0);  // initialize with 0, 1, 2, ..., 99
std::shuffle(feature_set.begin(), feature_set.end(), common::GlobalRandom());
// feature_set is still 0, 1, 2, ..., 99: the contents didn't get shuffled at all!
```
Note that this bug is platform-dependent; it does not appear when GCC or
upstream LLVM Clang is used.
**Diagnosis** Apple Clang's `std::shuffle` expects 32-bit integer
inputs, whereas `CustomGlobalRandomEngine::operator()` produces 64-bit
integers.
**Fix** Have `CustomGlobalRandomEngine::operator()` produce 32-bit integers.
Closes #3523.