Compare commits

..

327 Commits

Author SHA1 Message Date
Hui Liu
1ce5029a96 fix memory type 2024-01-26 15:45:04 -08:00
Hui Liu
420f8d6fde merge v2.0.3 from upstream 2024-01-25 07:40:06 -08:00
Hui Liu
dc7ee041cc use __HIPCC__ for device code 2024-01-24 12:32:51 -08:00
Hui Liu
7dc152450e workaround memoryType and change rccl config 2024-01-11 13:13:14 -08:00
Philip Hyunsu Cho
82d846bbeb Update change_scala_version.py to also change scala.version property (#9897) 2023-12-18 23:49:41 -08:00
Philip Hyunsu Cho
71d330afdc Bump version to 2.0.3 (#9895) 2023-12-14 17:54:05 -08:00
Philip Hyunsu Cho
3acbd8692b [jvm-packages] Fix POM for xgboost-jvm metapackage (#9893)
* [jvm-packages] Fix POM for xgboost-jvm metapackage

* Add script for updating the Scala version
2023-12-14 16:50:34 -08:00
Philip Hyunsu Cho
ad524f76ab [backport] [CI] Upload libxgboost4j.dylib (M1) to S3 bucket (#9887)
* [CI] Set up CI for Mac M1 (#9699)

* [CI] Improve CI for Mac M1 (#9748)

* [CI] Build libxgboost4j.dylib with CMAKE_OSX_DEPLOYMENT_TARGET (#9749)

* [CI] Upload libxgboost4j.dylib (M1) to S3 bucket (#9886)
2023-12-13 16:05:40 -08:00
Jiaming Yuan
d2d1751c03 [backport][py] Use the first found native library. (#9860) (#9879) 2023-12-13 14:20:30 +08:00
Jiaming Yuan
e4ee4e79dc [backport][sklearn] Fix loading model attributes. (#9808) (#9880) 2023-12-13 14:20:04 +08:00
Philip Hyunsu Cho
41ce8f28b2 [jvm-packages] Add Scala version suffix to xgboost-jvm package (#9776)
* Update JVM script (#9714)

* Bump version to 2.0.2; revamp pom.xml

* Update instructions in prepare_jvm_release.py

* Fix formatting
2023-11-08 10:17:26 -08:00
Jiaming Yuan
0ffc52e05c [backport] Fix using categorical data with the ranker. (#9753) (#9778) 2023-11-09 01:20:52 +08:00
Hui Liu
82d81bca94 rm hip.h files 2023-10-30 21:54:00 -07:00
Hui Liu
6ec5cf26fc enable 3 more tests 2023-10-30 15:27:02 -07:00
Hui Liu
1ec57fd1a3 enable ROCm support, rm un-necessary code 2023-10-30 12:39:30 -07:00
Hui Liu
d0774a78e4 add hip to config 2023-10-30 12:01:24 -07:00
Hui Liu
8d160a206e add jvm rocm support 2023-10-30 11:49:47 -07:00
Hui Liu
a41bc0975c rocm enable for v2.0.1, rm setup.py 2023-10-27 18:53:16 -07:00
Hui Liu
782b73f2bb rocm enable for v2.0.1 2023-10-27 18:50:28 -07:00
Philip Hyunsu Cho
a408254c2f Use sys.base_prefix instead of sys.prefix (#9711)
* Use sys.base_prefix instead of sys.prefix

* Update libpath.py too
2023-10-23 23:31:40 -07:00
Philip Hyunsu Cho
22e891dafa [jvm-packages] Remove hard dependency on libjvm (#9698) (#9705) 2023-10-23 21:21:14 -07:00
Philip Hyunsu Cho
89530c80a7 [CI] Build libxgboost4j.dylib for Intel Mac (#9704) 2023-10-23 20:45:01 -07:00
Philip Hyunsu Cho
946ab53b57 Fix libpath logic for Windows (#9687) 2023-10-19 10:42:46 -07:00
Philip Hyunsu Cho
afd03a6934 Fix build for AppleClang 11 (#9684) 2023-10-18 09:35:59 -07:00
Jiaming Yuan
f7da938458 [backport][pyspark] Support stage-level scheduling (#9519) (#9686)
Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2023-10-18 14:05:08 +08:00
Philip Hyunsu Cho
6ab6577511 Fix build for GCC 8.x (#9670) 2023-10-12 23:36:41 -07:00
Philip Hyunsu Cho
8c57558d74 [backport] [CI] Pull CentOS 7 images from NGC (#9666) (#9668) 2023-10-13 14:09:54 +08:00
Jiaming Yuan
58aa98a796 Bump version to 2.0.1. (#9660) 2023-10-13 08:47:32 +08:00
Jiaming Yuan
92273b39d8 [backport] Add support for cgroupv2. (#9651) (#9656) 2023-10-12 11:39:27 +08:00
Jiaming Yuan
e824b18bf6 [backport] Support pandas 2.1.0. (#9557) (#9655) 2023-10-12 11:29:59 +08:00
Jiaming Yuan
66ee89d8b4 [backport] Workaround Apple clang issue. (#9615) (#9636) 2023-10-08 15:42:15 +08:00
Jiaming Yuan
54d1d72d01 [backport] Use array interface for testing numpy arrays. (#9602) (#9635) 2023-10-08 11:45:49 +08:00
Jiaming Yuan
032bcc57f9 [backport][R] Fix method name. (#9577) (#9592) 2023-09-19 02:08:46 +08:00
Jiaming Yuan
ace7713201 [backport] Fix default metric configuration. (#9575) (#9590) 2023-09-18 23:40:43 +08:00
Jiaming Yuan
096047c547 Make 2.0 release. (#9567) 2023-09-12 00:20:49 +08:00
Jiaming Yuan
e75dd75bb2 [backport] [pyspark] support gpu transform (#9542) (#9559)
---------

Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2023-09-07 17:21:09 +08:00
Jiaming Yuan
4d387cbfbf [backport] [pyspark] rework transform to reuse same code (#9292) (#9558)
Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2023-09-07 15:26:24 +08:00
Jiaming Yuan
3fde9361d7 [backport] Fix inplace predict with fallback when base margin is used. (#9536) (#9548)
- Copy meta info from proxy DMatrix.
- Use `std::call_once` to emit less warnings.
2023-09-05 23:38:06 +08:00
Jiaming Yuan
b67c2ed96d [backport] [CI] bump setup-r action version. (#9544) (#9551) 2023-09-05 22:10:30 +08:00
Jiaming Yuan
177fd79864 [backport] Fix read the doc configuration. [skip ci] (#9549) 2023-09-05 17:32:00 +08:00
Jiaming Yuan
06487d3896 [backport] Fix GPU categorical split memory allocation. (#9529) (#9535) 2023-08-29 21:14:43 +08:00
Jiaming Yuan
e50ccc4d3c [R] Fix integer inputs with NA. (#9522) (#9534) 2023-08-29 19:52:13 +08:00
Jiaming Yuan
add57f8880 [backport] Delay the check for vector leaf. (#9509) (#9533) 2023-08-29 18:25:59 +08:00
Jiaming Yuan
a0d3573c74 [backport] Fix device dispatch for linear updater. (#9507) (#9532) 2023-08-29 15:10:43 +08:00
Jiaming Yuan
4301558a57 Make 2.0.0 RC1. (#9492) 2023-08-17 16:16:51 +08:00
Bobby Wang
68be454cfa [pyspark] hotfix for GPU setup validation (#9495)
* [pyspark] fix a bug of validating gpu configuration

---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-08-17 16:01:39 +08:00
Jiaming Yuan
5188e27513 Fix version parsing with rc release. (#9493) 2023-08-16 22:44:58 +08:00
Jiaming Yuan
f380c10a93 Use hint for find nccl. (#9490) 2023-08-16 16:08:41 +08:00
Sean Yang
12fe2fc06c Fix federated learning demos and tests (#9488) 2023-08-16 15:25:05 +08:00
Jiaming Yuan
b2e93d2742 [doc] Quick note for the device parameter. [skip ci] (#9483) 2023-08-16 13:35:55 +08:00
Jiaming Yuan
c061e3ae50 [jvm-packages] Bump rapids version. (#9482) 2023-08-15 16:26:42 -07:00
James Lamb
b82e78c169 [R] remove commented-out code (#9481) 2023-08-15 13:44:08 +08:00
Boris
8463107013 Updated versions. Reorganised dependencies. (#9479) 2023-08-14 14:28:28 -07:00
Jiaming Yuan
19b59938b7 Convert input to str for hypothesis note. (#9480) 2023-08-15 02:27:58 +08:00
James Lamb
e3f624d8e7 [R] remove more uses of default values in internal functions (#9476) 2023-08-14 22:18:33 +08:00
James Lamb
2c84daeca7 [R] [doc] remove documentation index entries for internal functions (#9477) 2023-08-14 22:18:02 +08:00
Bobby Wang
344f90b67b [jvm-packages] throw exception when tree_method=approx and device=cuda (#9478)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-08-14 17:52:14 +08:00
Jiaming Yuan
05d7000096 Handle special characters in JSON model dump. (#9474) 2023-08-14 15:49:00 +08:00
github-actions[bot]
f03463c45b [CI] Update RAPIDS to latest stable (#9464)
* [CI] Update RAPIDS to latest stable

* [CI] Use CMake 3.26.4

---------

Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2023-08-13 18:54:37 -07:00
Jiaming Yuan
fd4335d0bf [doc] Document the current status of some features. (#9469) 2023-08-13 23:42:27 +08:00
Jiaming Yuan
801116c307 Test scikit-learn model IO with gblinear. (#9459) 2023-08-13 23:41:49 +08:00
Jiaming Yuan
bb56183396 Normalize file system path. (#9463) 2023-08-11 21:26:46 +08:00
Jiaming Yuan
bdc1a3c178 Fix pyspark parameter. (#9460)
- Don't pass the `use_gpu` parameter to the learner.
- Fix GPU approx with PySpark.
2023-08-11 19:07:50 +08:00
James Lamb
428f6cbbe2 [R] remove default values in internal booster manipulation functions (#9461) 2023-08-11 15:07:18 +08:00
ShaneConneely
d638535581 Update README.md (#9462) 2023-08-11 04:02:04 +08:00
James Lamb
44bd2981b2 [R] remove default values in internal utility functions (#9457) 2023-08-10 21:40:59 +08:00
James Lamb
9dbb71490c [Doc] fix typos in documentation (#9458) 2023-08-10 19:26:36 +08:00
James Lamb
4359356d46 [R] [CI] use lintr 3.1.0 (#9456) 2023-08-10 17:49:16 +08:00
Jiaming Yuan
1caa93221a Use realloc for histogram cache and expose the cache limit. (#9455) 2023-08-10 14:05:27 +08:00
Jiaming Yuan
a57371ef7c Fix links in R doc. (#9450) 2023-08-10 02:38:14 +08:00
Jiaming Yuan
f05a23b41c Use weakref instead of id for DataIter cache. (#9445)
- Fix case where Python reuses id from freed objects.
- Small optimization to column matrix with QDM by using `realloc` instead of copying data.
2023-08-10 00:40:06 +08:00
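A minimal, standalone illustration of the failure mode this commit guards against (plain Python, not the repository's cache code): CPython may reuse the id of a freed object, while a weak reference clears itself once the object is gone.

import weakref

class DataBatch:              # stand-in for a user-defined iterator object
    pass

a = DataBatch()
stale_id = id(a)
del a                         # the object is freed ...
b = DataBatch()               # ... and CPython may hand the same id to a new
                              # object, so an id()-keyed cache could return
                              # stale data for b

ref = weakref.ref(b)          # a weak reference tracks the object itself
assert ref() is b
del b
assert ref() is None          # and is cleared once the object is collected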
Bobby Wang
d495a180d8 [pyspark] add logs for training (#9449) 2023-08-09 18:32:23 +08:00
joshbrowning2358
7f854848d3 Update R docs based on deprecated parameters/behaviour (#9437) 2023-08-09 17:04:28 +08:00
Jiaming Yuan
f05294a6f2 Fix clang warnings. (#9447)
- static function in header. (which is marked as unused due to translation unit
visibility).
- Implicit copy operator is deprecated.
- Unused lambda capture.
- Moving a temporary variable prevents copy elision.
2023-08-09 15:34:45 +08:00
Philip Hyunsu Cho
819098a48f [R] Handle UTF-8 paths on Windows (#9448) 2023-08-08 21:29:19 -07:00
Jiaming Yuan
c1b2cff874 [CI] Check compiler warnings. (#9444) 2023-08-08 12:02:45 -07:00
Philip Hyunsu Cho
7ce090e775 Handle UTF-8 paths correctly on Windows platform (#9443)
* Fix round-trip serialization with UTF-8 paths

* Add compiler version check

* Add comment to C API functions

* Add Python tests

* [CI] Updatre MacOS deployment target

* Use std::filesystem instead of dmlc::TemporaryDirectory
2023-08-07 23:27:25 -07:00
Jiaming Yuan
97fd5207dd Use lambda function in ParallelFor2D. (#9441) 2023-08-08 14:04:46 +08:00
Jiaming Yuan
54029a59af Bound the size of the histogram cache. (#9440)
- A new histogram collection with a limit in size.
- Unify histogram building logic between hist, multi-hist, and approx.
2023-08-08 03:21:26 +08:00
Philip Hyunsu Cho
5bd163aa25 Explicitly specify libcudart_static in CMake config (#9436) 2023-08-05 14:15:44 -07:00
Philip Hyunsu Cho
7fc57f3974 Remove Koffie Labs from Sponsors list (#9434) 2023-08-04 06:52:27 -07:00
Rong Ou
bde1ebc209 Switch back to the GPUIDX macro (#9438) 2023-08-04 15:14:31 +08:00
Philip Hyunsu Cho
1aabc690ec [Doc] Clarify the output behavior of reg:logistic (#9435) 2023-08-03 20:42:07 -07:00
jinmfeng001
04c99683c3 Change training stage from ResultStage to ShuffleMapStage (#9423) 2023-08-03 23:40:04 +08:00
Jiaming Yuan
1332ff787f Unify the code path between local and distributed training. (#9433)
This removes the need for a local histogram space during distributed training, which cuts the cache size by half.
2023-08-03 21:46:36 +08:00
Hendrik Makait
f958e32683 Raise if expected workers are not alive in xgboost.dask.train (#9421) 2023-08-03 20:14:07 +08:00
Jiaming Yuan
7129988847 Accept only keyword arguments in data iterator. (#9431) 2023-08-03 12:44:16 +08:00
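A hedged sketch of the DataIter calling convention after this change, using synthetic in-memory batches rather than the external-memory use case: the input_data callback is fed with keyword arguments only.

import numpy as np
import xgboost as xgb

class BatchIter(xgb.DataIter):
    # Yields pre-built (X, y) batches; batch contents are illustrative.
    def __init__(self, batches):
        self._batches = batches
        self._pos = 0
        super().__init__()

    def next(self, input_data):
        if self._pos == len(self._batches):
            return 0                      # no more batches
        X, y = self._batches[self._pos]
        input_data(data=X, label=y)       # keyword arguments only
        self._pos += 1
        return 1

    def reset(self):
        self._pos = 0

batches = [(np.random.rand(64, 4), np.random.rand(64)) for _ in range(4)]
Xy = xgb.QuantileDMatrix(BatchIter(batches))
booster = xgb.train({"tree_method": "hist"}, Xy, num_boost_round=5)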
Jiaming Yuan
e93a274823 Small cleanup for histogram routines. (#9427)
* Small cleanup for histogram routines.

- Extract hist train param from GPU hist.
- Make histogram const after construction.
- Unify parameter names.
2023-08-02 18:28:26 +08:00
Rong Ou
c2b85ab68a Clean up MGPU C++ tests (#9430) 2023-08-02 14:31:18 +08:00
Jiaming Yuan
a9da2e244a [CI] Update github actions. (#9428) 2023-08-01 23:03:53 +08:00
Jiaming Yuan
912e341d57 Initial GPU support for the approx tree method. (#9414) 2023-07-31 15:50:28 +08:00
Bobby Wang
8f0efb4ab3 [jvm-packages] automatically set the max/min direction for best score (#9404) 2023-07-27 11:09:55 +08:00
Rong Ou
7579905e18 Retry switching to per-thread default stream (#9416) 2023-07-26 07:09:12 +08:00
Nicholas Hilton
54579da4d7 [doc] Fix typo in prediction.rst (#9415)
Typo for `pred_contribs` and `pred_interactions`
2023-07-26 07:03:04 +08:00
Jiaming Yuan
3a9996173e Revert "Switch to per-thread default stream (#9396)" (#9413)
This reverts commit f7f673b00c.
2023-07-24 12:03:28 -07:00
Bobby Wang
1b657a5513 [jvm-packages] set device to cuda when tree method is "gpu_hist" (#9412) 2023-07-24 18:32:25 +08:00
Jiaming Yuan
a196443a07 Implement sketching with Hessian on GPU. (#9399)
- Prepare for implementing approx on GPU.
- Unify the code path between weighted and uniform sketching on DMatrix.
2023-07-24 15:43:03 +08:00
Jiaming Yuan
851cba931e Define best_iteration only if early stopping is used. (#9403)
* Define `best_iteration` only if early stopping is used.

This is the behavior specified by the document but not honored in the actual code.

- Don't set the attributes if there's no early stopping.
- Clean up the code for callbacks, and replace assertions with proper exceptions.
- Assign the attributes when early stopping `save_best` is used.
- Turn the attributes into Python properties.

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-07-24 12:43:35 +08:00
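A short sketch of the documented behavior (synthetic data, illustrative parameters): best_iteration is populated only when early stopping is configured, so code should not rely on it otherwise.

import numpy as np
import xgboost as xgb

X, y = np.random.rand(256, 8), np.random.randint(0, 2, 256)

# With early stopping the attribute is defined.
clf = xgb.XGBClassifier(n_estimators=100, early_stopping_rounds=5)
clf.fit(X[:200], y[:200], eval_set=[(X[200:], y[200:])], verbose=False)
print(clf.best_iteration)

# Without early stopping it is expected to be absent, so guard the access.
plain = xgb.XGBClassifier(n_estimators=20)
plain.fit(X[:200], y[:200])
best = getattr(plain, "best_iteration", None)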
Jiaming Yuan
01e00efc53 [breaking] Remove support for single string feature info. (#9401)
- Input must be a sequence of strings.
- Improve validation error message.
2023-07-24 11:06:30 +08:00
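A quick before/after sketch of the breaking change (synthetic data; the type strings are illustrative): feature names and types must now be passed as a sequence with one entry per column, not as a single string.

import numpy as np
import xgboost as xgb

X = np.random.rand(32, 3)
dtrain = xgb.DMatrix(X, label=np.random.randint(0, 2, 32))

dtrain.feature_names = ["f0", "f1", "f2"]            # one name per feature
dtrain.feature_types = ["float", "float", "float"]   # one type per feature
# dtrain.feature_types = "float"                     # a bare string is rejected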
Jiaming Yuan
275da176ba Document for device ordinal. (#9398)
- Rewrite GPU demos. notebook is converted to script to avoid committing additional png plots.
- Add GPU demos into the sphinx gallery.
- Add RMM demos into the sphinx gallery.
- Test for firing threads with different device ordinals.
2023-07-22 15:26:29 +08:00
Jiaming Yuan
22b0a55a04 Remove hist builder class. (#9400)
* Remove hist build class.

* Cleanup this stateless class.

* Add comment to thread block.
2023-07-22 10:43:12 +08:00
Jiaming Yuan
0de7c47495 Fix metric serialization. (#9405) 2023-07-22 08:39:21 +08:00
Jiaming Yuan
dbd5309b55 Fix warning message for device. (#9402) 2023-07-20 23:30:04 +08:00
Rong Ou
f7f673b00c Switch to per-thread default stream (#9396) 2023-07-20 08:21:00 +08:00
Jiaming Yuan
7a0ccfbb49 Add compute 90. (#9397) 2023-07-19 13:42:38 +08:00
Jiaming Yuan
0897477af0 Remove unmaintained jvm readme and dev scripts. (#9395) 2023-07-18 18:23:43 +08:00
Philip Hyunsu Cho
e082718c66 [CI] Build pip wheel with RMM support (#9383) 2023-07-18 01:52:26 -07:00
Jiaming Yuan
6e18d3a290 [pyspark] Handle the device parameter in pyspark. (#9390)
- Handle the new `device` parameter in PySpark.
- Deprecate the old `use_gpu` parameter.
2023-07-18 08:47:03 +08:00
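A hedged sketch of the user-facing change, assuming pyspark is installed and using illustrative column names: the new device parameter replaces the deprecated use_gpu flag on the PySpark estimators.

from xgboost.spark import SparkXGBClassifier

# Deprecated style:
# clf = SparkXGBClassifier(features_col="features", label_col="label", use_gpu=True)

# New style: select hardware through `device`.
clf = SparkXGBClassifier(features_col="features", label_col="label", device="cuda")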
Philip Hyunsu Cho
2a0ff209ff [CI] Block CI from running for dependabot PRs (#9394) 2023-07-17 10:53:57 -07:00
Jiaming Yuan
f4fb2be101 [jvm-packages] Add the new device parameter. (#9385) 2023-07-17 18:40:39 +08:00
Jiaming Yuan
2caceb157d [jvm-packages] Reduce log verbosity for GPU tests. (#9389) 2023-07-17 13:25:46 +08:00
Jiaming Yuan
b342ef951b Make feature validation immutable. (#9388) 2023-07-16 06:52:55 +08:00
Jiaming Yuan
0a07900b9f Fix integer overflow. (#9380) 2023-07-15 21:11:02 +08:00
Jiaming Yuan
16eb41936d Handle the new device parameter in dask and demos. (#9386)
* Handle the new `device` parameter in dask and demos.

- Check no ordinal is specified in the dask interface.
- Update demos.
- Update dask doc.
- Update the condition for QDM.
2023-07-15 19:11:20 +08:00
Jiaming Yuan
9da5050643 Turn warning messages into Python warnings. (#9387) 2023-07-15 07:46:43 +08:00
Jiaming Yuan
04aff3af8e Define the new device parameter. (#9362) 2023-07-13 19:30:25 +08:00
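A minimal sketch of the unified device parameter (values are illustrative): a single string selects the device, optionally with an ordinal, instead of separate GPU-specific settings.

import numpy as np
import xgboost as xgb

X, y = np.random.rand(128, 4), np.random.rand(128)
dtrain = xgb.DMatrix(X, label=y)

params = {"tree_method": "hist", "device": "cpu"}   # or "cuda", "cuda:0"
booster = xgb.train(params, dtrain, num_boost_round=10)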
Cássia Sampaio
2d0cd2817e [doc] Fux learning_to_rank.rst (#9381)
just adding one missing bracket
2023-07-13 11:00:24 +08:00
jinmfeng001
a1367ea1f8 Set feature_names and feature_types in jvm-packages (#9364)
* 1. Add parameters to set feature names and feature types
2. Save feature names and feature types to native json model

* Change serialization and deserialization format to ubj.
2023-07-12 15:18:46 +08:00
Rong Ou
3632242e0b Support column split with GPU quantile (#9370) 2023-07-11 12:15:56 +08:00
Jiaming Yuan
97ed944209 Unify the hist tree method for different devices. (#9363) 2023-07-11 10:04:39 +08:00
Jiaming Yuan
20c52f07d2 Support exporting cut values (#9356) 2023-07-08 15:32:41 +08:00
edumugi
c3124813e8 Support numpy vertical split (#9365) 2023-07-08 13:18:12 +08:00
Jiaming Yuan
59787b23af Allow empty page in external memory. (#9361) 2023-07-08 09:24:35 +08:00
Rong Ou
15ca12a77e Fix NCCL test hang (#9367) 2023-07-07 11:21:35 +08:00
Jiaming Yuan
41c6813496 Preserve order of saved updaters config. (#9355)
- Save the updater sequence as an array instead of object.
- Warn only once.

The compatibility is kept, but we should be able to break it as the config is not loaded
in pickle model and it's declared to be not stable.
2023-07-05 20:20:07 +08:00
Jiaming Yuan
b572a39919 [doc] Fix removed reference. (#9358) 2023-07-05 16:49:25 +08:00
Jiaming Yuan
645037e376 Improve test coverage with predictor configuration. (#9354)
* Improve test coverage with predictor configuration.

- Test with ext memory.
- Test with QDM.
- Test with dart.
2023-07-05 15:17:22 +08:00
Oliver Holworthy
6c9c8a9001 Enable Installation of Python Package with System lib in a Virtual Environment (#9349) 2023-07-05 05:46:17 +08:00
Boris
bb2de1fd5d xgboost4j-gpu_2.12-2.0.0: added libxgboost4j.so back. (#9351) 2023-07-04 03:31:33 +08:00
Jiaming Yuan
d0916849a6 Remove unused weight from buffer for cat features. (#9341) 2023-07-04 01:07:09 +08:00
Jiaming Yuan
6155394a06 Update news for 1.7.6 [skip ci] (#9350) 2023-07-04 01:04:34 +08:00
Jiaming Yuan
e964654b8f [skl] Enable cat feature without specifying tree method. (#9353) 2023-07-03 22:06:17 +08:00
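A hedged sketch of the scikit-learn interface change (synthetic data): a categorical column can be used with enable_categorical=True and the default tree method, with no explicit tree_method required.

import numpy as np
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({
    "color": pd.Categorical(np.random.choice(["red", "green", "blue"], 200)),
    "size": np.random.rand(200),
})
y = np.random.randint(0, 2, 200)

clf = xgb.XGBClassifier(enable_categorical=True)    # no tree_method given
clf.fit(df, y)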
Jiaming Yuan
39390cc2ee [breaking] Remove the predictor param, allow fallback to prediction using DMatrix. (#9129)
- A `DeviceOrd` struct is implemented to indicate the device. It will eventually replace the `gpu_id` parameter.
- The `predictor` parameter is removed.
- Fallback to `DMatrix` when `inplace_predict` is not available.
- The heuristic for choosing a predictor is only used during training.
2023-07-03 19:23:54 +08:00
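A small sketch of the behavior after removing the predictor parameter (synthetic data, illustrative params): prediction follows the device setting, and the in-place path and the DMatrix path agree, with the latter serving as the fallback.

import numpy as np
import xgboost as xgb

X, y = np.random.rand(128, 4), np.random.rand(128)
booster = xgb.train({"tree_method": "hist", "device": "cpu"},
                    xgb.DMatrix(X, label=y), num_boost_round=10)

pred_inplace = booster.inplace_predict(X)        # fast path, no DMatrix copy
pred_dmatrix = booster.predict(xgb.DMatrix(X))   # always-available fallback
np.testing.assert_allclose(pred_inplace, pred_dmatrix, rtol=1e-6)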
Rong Ou
3a0f787703 Support column split in GPU predictor (#9343) 2023-07-03 04:05:34 +08:00
Rong Ou
f90771eec6 Fix device communicator dependency (#9346) 2023-06-29 10:34:30 +08:00
Jiaming Yuan
f4798718c7 Use hist as the default tree method. (#9320) 2023-06-27 23:04:24 +08:00
Jiaming Yuan
bc267dd729 Use ptr from mmap for GHistIndexMatrix and ColumnMatrix. (#9315)
* Use ptr from mmap for `GHistIndexMatrix` and `ColumnMatrix`.

- Define a resource for holding various types of memory pointers.
- Define ref vector for holding resources.
- Swap the underlying resources for GHist and ColumnM.
- Add documentation for current status.
- s390x support is removed. It should work if you can compile XGBoost, all the old workaround code does is to get GCC to compile.
2023-06-27 19:05:46 +08:00
jasjung
96c3071a8a [doc] Update learning_to_rank.rst (#9336) 2023-06-27 13:56:18 +08:00
Jiaming Yuan
cfa9c42eb4 Fix callback in AFT viz demo. (#9333)
* Fix callback in AFT viz demo.

- Update the callback function.
- Add lint check.
2023-06-26 22:35:02 +08:00
Jiaming Yuan
6efe7c129f [doc] Update reference in R vignettes. (#9323) 2023-06-26 18:32:11 +08:00
amdsc21
2e7e9d3b2d update rocgputreeshap branch 2023-06-23 19:50:08 +02:00
amdsc21
3e0c7d1dee new url for rocgputreeshap 2023-06-23 19:46:45 +02:00
amdsc21
2f47a1ebe6 rm warp-primitives 2023-06-22 21:43:00 +02:00
Jiaming Yuan
54da4b3185 Cleanup to prepare for using mmap pointer in external memory. (#9317)
- Update SparseDMatrix comment.
- Use a pointer in the bitfield. We will replace the `std::vector<bool>` in `ColumnMatrix` with bitfield.
- Clean up the page source. The timer is removed as it's inaccurate once we swap the mmap pointer into the page.
2023-06-22 06:43:11 +08:00
Jiaming Yuan
4066d68261 [doc] Clarify early stopping. (#9304) 2023-06-20 17:56:47 +08:00
Jiaming Yuan
6d22ea793c Test QDM with sparse data on CPU. (#9316) 2023-06-19 21:27:03 +08:00
Jiaming Yuan
ee6809e642 Use mmap for external memory. (#9282)
- Have basic infrastructure for mmap.
- Release file write handle.
2023-06-19 18:52:55 +08:00
Rong Ou
d8beb517ed Support bitwise allreduce in NCCL communicator (#9300) 2023-06-17 01:56:50 +08:00
George Othon
2718ff530c [doc] Variable 'label' is not defined in the pyspark application example (#9302) 2023-06-16 05:06:52 +08:00
amdsc21
5ca7daaa13 merge latest changes 2023-06-15 21:39:14 +02:00
Jacek Laskowski
0df1272695 [docs] How to build the docs using conda (#9276)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-06-15 07:39:26 +08:00
Rong Ou
e70810be8a Refactor device communicator to make allreduce more flexible (#9295) 2023-06-14 03:53:03 +08:00
Philip Hyunsu Cho
c2f0486d37 [CI] Run two pipeline loaders for responsiveness (#9294) 2023-06-12 09:52:40 -07:00
Jake Blitch
aad1313154 Fix community.rst typos. (#9291) 2023-06-11 09:09:27 +08:00
ZHAOKAI WANG
2b76061659 remove redundant method in expand_entry (#9283) 2023-06-10 05:18:21 +08:00
amdsc21
5f78360949 merge changes Jun092023 2023-06-09 22:41:33 +02:00
Jiaming Yuan
152e2fb072 Unify test helpers for creating ctx. (#9274) 2023-06-10 03:35:22 +08:00
Jiaming Yuan
ea0deeca68 Disable dense optimization in hist for distributed training. (#9272) 2023-06-10 02:31:34 +08:00
github-actions[bot]
8c1065f645 [CI] Update RAPIDS to latest stable (#9278)
Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
2023-06-09 09:55:08 -07:00
Jiaming Yuan
1fcc26a6f8 Set ndcg to default for LTR. (#8822)
- Add document.
- Add tests.
- Use `ndcg` with `topk` as default.
2023-06-09 23:31:33 +08:00
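A minimal learning-to-rank sketch (synthetic data, illustrative group sizes): with a rank objective, ndcg is now the default evaluation metric; an explicit cutoff such as "ndcg@8" can still be set via eval_metric.

import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y = np.random.randint(0, 4, 100)        # graded relevance labels
qid = np.repeat(np.arange(10), 10)      # 10 queries with 10 documents each

ranker = xgb.XGBRanker(objective="rank:ndcg", n_estimators=20)
ranker.fit(X, y, qid=qid)
scores = ranker.predict(X)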
amdsc21
35cde3b1b2 remove some hip.h 2023-06-07 04:48:09 +02:00
amdsc21
ce345c30a8 remove some hip.h 2023-06-07 03:39:01 +02:00
amdsc21
af8845405a sync Jun 5 2023-06-07 02:43:21 +02:00
amdsc21
9ee1852d4e restore device helper 2023-06-02 02:55:13 +02:00
Your Name
6ecd7903f2 Merge branch 'master' into sync-condition-2023Jun01 2023-06-01 15:58:31 -07:00
Your Name
42867a4805 sync Jun 1 2023-06-01 15:55:06 -07:00
amdsc21
c5b575e00e fix host __assert_fail 2023-05-24 19:40:24 +02:00
amdsc21
1354138b7d Merge branch 'master' into sync-condition-2023May15 2023-05-24 17:44:16 +02:00
amdsc21
b994a38b28 Merge branch 'master' into sync-condition-2023May15 2023-05-23 01:07:50 +02:00
amdsc21
3a834c4992 change workflow 2023-05-20 07:04:06 +02:00
amdsc21
b22644fc10 add hip.h 2023-05-20 01:25:33 +02:00
amdsc21
7663d47383 Merge branch 'master' into sync-condition-2023May15 2023-05-19 20:30:35 +02:00
amdsc21
88fc8badfa Merge branch 'master' into sync-condition-2023May15 2023-05-17 19:55:50 +02:00
amdsc21
8cad8c693c sync up May15 2023 2023-05-15 18:59:18 +02:00
amdsc21
b066accad6 fix lambdarank_obj 2023-05-02 21:06:22 +02:00
amdsc21
b324d51f14 fix array_interface.h half type 2023-05-02 20:50:50 +02:00
amdsc21
65097212b3 fix IterativeDeviceDMatrix, support HIP 2023-05-02 20:20:11 +02:00
amdsc21
4a24ca2f95 fix helpers.h, enable HIP 2023-05-02 20:04:23 +02:00
amdsc21
83e6fceb5c fix lambdarank_obj.cc, support HIP 2023-05-02 19:03:18 +02:00
amdsc21
e4538cb13c fix, to support hip 2023-05-02 17:43:11 +02:00
amdsc21
5446c501af merge 23Mar01 2023-05-02 00:05:58 +02:00
amdsc21
313a74b582 add Shap Magic to check if use cat 2023-05-01 21:55:14 +02:00
amdsc21
65d83e288f fix device query 2023-04-19 19:53:26 +02:00
amdsc21
f645cf51c1 Merge branch 'master' into sync-condition-2023Apr11 2023-04-17 18:33:00 +02:00
amdsc21
db8420225b fix RCCL 2023-04-12 01:09:14 +02:00
amdsc21
843fdde61b sync Apr 11 2023 2023-04-11 20:03:25 +02:00
amdsc21
08bc4b0c0f Merge branch 'master' into sync-condition-2023Apr11 2023-04-11 19:38:38 +02:00
amdsc21
6825d986fd move Dockerfile to ci 2023-04-11 19:34:23 +02:00
paklui
d155ec77f9 building docker for xgboost-amd-condition 2023-03-30 13:36:39 -07:00
amdsc21
991738690f Merge branch 'sync-condition-2023Mar27' into amd-condition 2023-03-30 05:16:36 +02:00
amdsc21
aeb3fd1c95 Merge branch 'master' into sync-condition-2023Mar27 2023-03-30 05:15:55 +02:00
amdsc21
141a062e00 Merge branch 'sync-condition-2023Mar27' into amd-condition 2023-03-30 00:47:16 +02:00
amdsc21
acad01afc9 sync Mar 29 2023-03-30 00:46:50 +02:00
amdsc21
f289e5001d Merge branch 'sync-condition-2023Mar27' into amd-condition 2023-03-28 00:24:12 +02:00
amdsc21
06d9b998ce fix CAPI BuildInfo 2023-03-28 00:14:18 +02:00
amdsc21
c50cc424bc sync Mar 27 2023 2023-03-27 18:54:41 +02:00
amdsc21
8c77e936d1 tune grid size 2023-03-26 17:45:19 +02:00
amdsc21
18034a4291 tune histogram 2023-03-26 01:42:51 +01:00
amdsc21
7ee4734d3a rm device_helpers.hip.h from cu 2023-03-26 00:24:11 +01:00
amdsc21
ee582f03c3 rm device_helpers.hip.h from cuh 2023-03-25 23:35:57 +01:00
amdsc21
f3286bac04 rm warp header 2023-03-25 23:01:44 +01:00
amdsc21
3ee3bea683 fix warp header 2023-03-25 22:37:37 +01:00
amdsc21
5098735698 Merge branch 'condition-sync-Mar24-23' into hui-condition 2023-03-25 05:28:40 +01:00
amdsc21
e74b3bbf3c fix macro 2023-03-25 05:17:39 +01:00
amdsc21
22525c002a fix macro 2023-03-25 05:08:30 +01:00
amdsc21
80961039d7 fix macro 2023-03-25 05:00:55 +01:00
amdsc21
1474789787 add new file 2023-03-25 04:54:02 +01:00
amdsc21
1dc138404a initial merge, fix linalg.h 2023-03-25 04:48:47 +01:00
amdsc21
e1d050f64e initial merge, fix linalg.h 2023-03-25 04:37:43 +01:00
amdsc21
7fbc561e17 initial merge 2023-03-25 04:31:55 +01:00
amdsc21
d97be6f396 enable last 3 tests 2023-03-25 04:05:05 +01:00
amdsc21
f1211cffca enable last 3 tests 2023-03-25 00:45:52 +01:00
amdsc21
e0716afabf fix objective/objective.cc, CMakeFile and setup.py 2023-03-23 20:22:34 +01:00
amdsc21
595cd81251 add max shared mem workaround 2023-03-19 20:08:42 +01:00
amdsc21
0325ce0bed update gputreeshap 2023-03-19 20:07:36 +01:00
amdsc21
a79a35c22c add warp size 2023-03-15 22:00:26 +01:00
amdsc21
4484c7f073 disable Optin Shared Mem 2023-03-15 02:10:16 +01:00
amdsc21
8207015e48 fix ../tests/cpp/common/test_span.h 2023-03-14 22:19:06 +01:00
amdsc21
364df7db0f fix ../tree/gpu_hist/evaluate_splits.hip bugs, size 64 2023-03-14 06:17:21 +01:00
amdsc21
a2bab03205 fix aft_obj.hip 2023-03-13 23:19:59 +01:00
amdsc21
b71c1b50de fix macro, no ! 2023-03-12 23:02:28 +01:00
amdsc21
fa2336fcfd sort bug fix 2023-03-12 07:09:10 +01:00
amdsc21
7d96758382 macro format 2023-03-11 06:57:24 +01:00
amdsc21
b0dacc5a80 fix bug 2023-03-11 03:47:23 +01:00
amdsc21
f64152bf97 add helpers.hip 2023-03-11 02:56:50 +01:00
amdsc21
b4dbe7a649 fix isnan 2023-03-11 02:39:58 +01:00
amdsc21
e5b6219a84 typo 2023-03-11 02:30:27 +01:00
amdsc21
3a07b1edf8 complete test porting 2023-03-11 02:17:05 +01:00
amdsc21
9bf16a2ca6 testing porting 2023-03-11 01:38:54 +01:00
amdsc21
332f6a89a9 more tests 2023-03-11 01:33:48 +01:00
amdsc21
204d0c9a53 add hip tests 2023-03-11 00:38:16 +01:00
amdsc21
e961016e71 rm HIPCUB 2023-03-10 22:21:37 +01:00
amdsc21
f0b8c02f15 merge latest changes 2023-03-10 22:10:20 +01:00
amdsc21
5e8b1842b9 fix Pointer Attr 2023-03-10 19:06:02 +01:00
amdsc21
9f072b50ba fix __popc 2023-03-10 17:14:31 +01:00
amdsc21
e1ddb5ae58 fix macro XGBOOST_USE_HIP 2023-03-10 07:11:05 +01:00
amdsc21
643e2a7b39 fix macro XGBOOST_USE_HIP 2023-03-10 07:09:41 +01:00
amdsc21
bde3107c3e fix macro XGBOOST_USE_HIP 2023-03-10 07:01:25 +01:00
amdsc21
5edfc1e2e9 finish ellpack_page.cc 2023-03-10 06:41:25 +01:00
amdsc21
c073417d0c finish aft_obj.cu 2023-03-10 06:39:03 +01:00
amdsc21
9bbbeb3f03 finish multiclass_obj.cu 2023-03-10 06:35:46 +01:00
amdsc21
4bde2e3412 finish multiclass_obj.cu 2023-03-10 06:35:21 +01:00
amdsc21
58a9fe07b6 finish multiclass_obj.cu 2023-03-10 06:35:06 +01:00
amdsc21
41407850d5 finish rank_obj.cu 2023-03-10 06:29:08 +01:00
amdsc21
968a1db4c0 finish regression_obj.cu 2023-03-10 06:07:53 +01:00
amdsc21
ad710e4888 finish hinge.cu 2023-03-10 06:04:59 +01:00
amdsc21
4e3c699814 finish adaptive.cu 2023-03-10 06:02:48 +01:00
amdsc21
757de84398 finish quantile.cu 2023-03-10 05:55:51 +01:00
amdsc21
d27f9dfdce finish host_device_vector.cu 2023-03-10 05:45:38 +01:00
amdsc21
14cc438a64 finish stats.cu 2023-03-10 05:38:16 +01:00
amdsc21
911a5d8a60 finish hist_util.cu 2023-03-10 05:32:38 +01:00
amdsc21
54b076b40f finish common.cu 2023-03-10 05:20:29 +01:00
amdsc21
91a5ef762e finish common.cu 2023-03-10 05:19:41 +01:00
amdsc21
8fd2af1c8b finish numeric.cu 2023-03-10 05:16:23 +01:00
amdsc21
bb6adda8a3 finish c_api.cu 2023-03-10 05:12:51 +01:00
amdsc21
a76ccff390 finish c_api.cu 2023-03-10 05:11:20 +01:00
amdsc21
61c0b19331 finish ellpack_page_source.cu 2023-03-10 05:06:36 +01:00
amdsc21
fa9f69dd85 finish sparse_page_dmatrix.cu 2023-03-10 05:04:57 +01:00
amdsc21
080fc35c4b finish ellpack_page_raw_format.cu 2023-03-10 05:02:35 +01:00
amdsc21
ccce4cf7e1 finish data.cu 2023-03-10 05:00:57 +01:00
amdsc21
713ab9e1a0 finish sparse_page_source.cu 2023-03-10 04:42:56 +01:00
amdsc21
134cbfddbe finish gradient_index.cu 2023-03-10 04:40:33 +01:00
amdsc21
6e2c5be83e finish array_interface.cu 2023-03-10 04:36:04 +01:00
amdsc21
185dbce21f finish ellpack_page.cu 2023-03-10 04:26:09 +01:00
amdsc21
49732359ef finish iterative_dmatrix.cu 2023-03-10 03:47:00 +01:00
amdsc21
ec9f500a49 finish proxy_dmatrix.cu 2023-03-10 03:40:07 +01:00
amdsc21
53244bef6f finish simple_dmatrix.cu 2023-03-10 03:38:09 +01:00
amdsc21
f0febfbcac finish gpu_predictor.cu 2023-03-10 01:29:54 +01:00
amdsc21
1c58ff61d1 finish fit_stump.cu 2023-03-10 00:46:29 +01:00
amdsc21
1530c03f7d finish constraints.cu 2023-03-09 22:43:51 +01:00
amdsc21
309268de02 finish updater_gpu_hist.cu 2023-03-09 22:40:44 +01:00
amdsc21
500428cc0f finish row_partitioner.cu 2023-03-09 22:31:11 +01:00
amdsc21
495816f694 finished gradient_based_sampler.cu 2023-03-09 22:26:08 +01:00
amdsc21
df42dd2c53 finished evaluator.cu 2023-03-09 22:22:05 +01:00
amdsc21
f55243fda0 finish evaluate_splits.cu 2023-03-09 22:15:10 +01:00
amdsc21
1e09c21456 finished feature_groups.cu 2023-03-09 21:31:00 +01:00
amdsc21
0ed5d3c849 finished histogram.cu 2023-03-09 21:28:37 +01:00
amdsc21
f67e7de7ef finished communicator.cu 2023-03-09 21:02:48 +01:00
amdsc21
5044713388 finished updater_gpu_coordinate.cu 2023-03-09 20:53:54 +01:00
amdsc21
c875f0425f finished rank_metric.cu 2023-03-09 20:48:31 +01:00
amdsc21
4fd08b6c32 finished survival_metric.cu 2023-03-09 20:41:52 +01:00
amdsc21
b9d86d44d6 finish multiclass_metric.cu 2023-03-09 20:37:16 +01:00
amdsc21
a56055225a fix auc.cu 2023-03-09 20:29:38 +01:00
amdsc21
6eba0a56ec fix CMakeLists.txt 2023-03-09 18:57:14 +01:00
amdsc21
00c24a58b1 finish elementwise_metric.cu 2023-03-08 22:50:07 +01:00
amdsc21
6fa248b75f try elementwise_metric.cu 2023-03-08 22:42:48 +01:00
amdsc21
946f9e9802 fix gbtree.cc 2023-03-08 21:44:20 +01:00
amdsc21
4c4e5af29c port elementwise_metric.cu 2023-03-08 21:39:56 +01:00
amdsc21
7e1b06417b finish gbtree.cu porting 2023-03-08 21:09:56 +01:00
amdsc21
cdd7794641 add unused option 2023-03-08 20:37:53 +01:00
amdsc21
cd743a1ae9 fix DispatchRadixSort 2023-03-08 20:31:23 +01:00
amdsc21
a45005863b fix DispatchScan 2023-03-08 20:15:33 +01:00
amdsc21
bdcb036592 add context.hip 2023-03-08 07:34:19 +01:00
amdsc21
7a3a9b682a add device_helpers.hip.h 2023-03-08 07:18:33 +01:00
amdsc21
0a711662c3 add device_helpers.hip.h 2023-03-08 07:10:32 +01:00
amdsc21
312e58ec99 enable rocm, fix common.h 2023-03-08 06:45:03 +01:00
amdsc21
ca8f4e7993 enable rocm, fix stats.cuh 2023-03-08 06:43:06 +01:00
amdsc21
60795f22de enable rocm, fix linalg_op.cuh 2023-03-08 06:42:20 +01:00
amdsc21
05fdca893f enable rocm, fix cuda_pinned_allocator.h 2023-03-08 06:39:40 +01:00
amdsc21
d8cc93f3f2 enable rocm, fix algorithm.cuh 2023-03-08 06:38:35 +01:00
amdsc21
62c4efac51 enable rocm, fix transform.h 2023-03-08 06:37:34 +01:00
amdsc21
ba9e00d911 enable rocm, fix hist_util.cuh 2023-03-08 06:36:15 +01:00
amdsc21
d3be67ad8e enable rocm, fix quantile.cuh 2023-03-08 06:32:09 +01:00
amdsc21
2eb0b6aae4 enable rocm, fix threading_utils.cuh 2023-03-08 06:30:52 +01:00
amdsc21
327f1494f1 enable rocm, fix cuda_context.cuh 2023-03-08 06:29:45 +01:00
amdsc21
fa92aa56ee enable rocm, fix device_adapter.cuh 2023-03-08 06:26:31 +01:00
amdsc21
427f6c2a1a enable rocm, fix simple_dmatrix.cuh 2023-03-08 06:24:34 +01:00
amdsc21
270c7b4802 enable rocm, fix row_partitioner.cuh 2023-03-08 06:22:25 +01:00
amdsc21
0fc1f640a9 enable rocm, fix nccl_device_communicator.cuh 2023-03-08 06:18:13 +01:00
amdsc21
762fd9028d enable rocm, fix device_communicator_adapter.cuh 2023-03-08 06:13:29 +01:00
amdsc21
f2009533e1 rm hip.h 2023-03-08 06:04:01 +01:00
amdsc21
53b5cd73f2 add hip flags 2023-03-08 03:42:51 +01:00
amdsc21
52b05d934e add hip 2023-03-08 03:32:19 +01:00
amdsc21
840f15209c add HIP flags, common 2023-03-08 03:11:49 +01:00
amdsc21
1e1c7fd8d5 add HIP flags, c_api 2023-03-08 01:34:37 +01:00
amdsc21
f5f800c80d add HIP flags 2023-03-08 01:33:38 +01:00
amdsc21
6b7be96373 add HIP flags 2023-03-08 01:22:25 +01:00
amdsc21
75712b9c3c enable HIP flags 2023-03-08 01:10:07 +01:00
amdsc21
ed45aa2816 Merge branch 'master' into dev-hui 2023-03-08 00:39:33 +01:00
amdsc21
f286ae5bfa add hip rocthrust hipcub 2023-03-07 06:35:00 +01:00
amdsc21
f13a7f8d91 add submodules 2023-03-07 05:44:24 +01:00
amdsc21
c51a1c9aae rename hip.cc to hip 2023-03-07 05:39:53 +01:00
amdsc21
30de728631 fix hip.cc 2023-03-07 05:11:42 +01:00
amdsc21
75fa15b36d add hip support 2023-03-07 04:02:49 +01:00
amdsc21
eb30cb6293 add hip support 2023-03-07 03:49:52 +01:00
amdsc21
cafbfce51f add hip.h 2023-03-07 03:46:26 +01:00
amdsc21
6039a71e6c add hip structure 2023-03-07 02:17:19 +01:00
628 changed files with 17650 additions and 8184 deletions

View File

@@ -51,14 +51,14 @@ jobs:
id: extract_branch
if: |
(github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
matrix.os == 'windows-latest'
(matrix.os == 'windows-latest' || matrix.os == 'macos-11')
- name: Publish artifact xgboost4j.dll to S3
run: |
cd lib/
Rename-Item -Path xgboost4j.dll -NewName xgboost4j_${{ github.sha }}.dll
dir
python -m awscli s3 cp xgboost4j_${{ github.sha }}.dll s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/ --acl public-read
python -m awscli s3 cp xgboost4j_${{ github.sha }}.dll s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/libxgboost4j/ --acl public-read
if: |
(github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
matrix.os == 'windows-latest'
@@ -66,6 +66,19 @@ jobs:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
- name: Publish artifact libxgboost4j.dylib to S3
run: |
cd lib/
mv -v libxgboost4j.dylib libxgboost4j_${{ github.sha }}.dylib
ls
python -m awscli s3 cp libxgboost4j_${{ github.sha }}.dylib s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/libxgboost4j/ --acl public-read
if: |
(github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
matrix.os == 'macos-11'
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
- name: Test XGBoost4J (Core, Spark, Examples)
run: |

View File

@@ -255,3 +255,44 @@ jobs:
shell: bash -l {0}
run: |
pytest -s -v -rxXs --durations=0 ./tests/test_distributed/test_with_spark
python-system-installation-on-ubuntu:
name: Test XGBoost Python package System Installation on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Set up Python 3.8
uses: actions/setup-python@v4
with:
python-version: 3.8
- name: Install ninja
run: |
sudo apt-get update && sudo apt-get install -y ninja-build
- name: Build XGBoost on Ubuntu
run: |
mkdir build
cd build
cmake .. -GNinja
ninja
- name: Copy lib to system lib
run: |
cp lib/* "$(python -c 'import sys; print(sys.base_prefix)')/lib"
- name: Install XGBoost in Virtual Environment
run: |
cd python-package
pip install virtualenv
virtualenv venv
source venv/bin/activate && \
pip install -v . --config-settings use_system_libxgboost=True && \
python -c 'import xgboost'

View File

@@ -25,7 +25,7 @@ jobs:
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@50d1eae9b8da0bb3f8582c59a5b82225fa2fe7f2 # v2.3.1
- uses: r-lib/actions/setup-r@11a22a908006c25fe054c4ef0ac0436b1de3edbe # v2.6.4
with:
r-version: ${{ matrix.config.r }}
@@ -64,7 +64,7 @@ jobs:
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@50d1eae9b8da0bb3f8582c59a5b82225fa2fe7f2 # v2.3.1
- uses: r-lib/actions/setup-r@11a22a908006c25fe054c4ef0ac0436b1de3edbe # v2.6.4
with:
r-version: ${{ matrix.config.r }}

View File

@@ -27,7 +27,7 @@ jobs:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@99c53751e09b9529366343771cc321ec74e9bd3d # tag=v2.0.6
uses: ossf/scorecard-action@08b4669551908b1024bb425080c797723083c031 # tag=v2.2.0
with:
results_file: results.sarif
results_format: sarif
@@ -41,7 +41,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@6673cd052c4cd6fcf4b4e6e60ea986c889389535 # tag=v3.0.0
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # tag=v3.1.2
with:
name: SARIF file
path: results.sarif
@@ -49,6 +49,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@5f532563584d71fdef14ee64d17bafb34f751ce5 # tag=v1.0.26
uses: github/codeql-action/upload-sarif@7b6664fa89524ee6e3c3e9749402d5afd69b3cd8 # tag=v2.14.1
with:
sarif_file: results.sarif

.gitignore vendored (1 change)
View File

@@ -48,6 +48,7 @@ Debug
*.Rproj
./xgboost.mpi
./xgboost.mock
*.bak
#.Rbuildignore
R-package.Rproj
*.cache*

.gitmodules vendored (3 changes)
View File

@@ -5,3 +5,6 @@
[submodule "gputreeshap"]
path = gputreeshap
url = https://github.com/rapidsai/gputreeshap.git
[submodule "rocgputreeshap"]
path = rocgputreeshap
url = https://github.com/ROCmSoftwarePlatform/rocgputreeshap

View File

@@ -32,4 +32,3 @@ formats:
python:
install:
- requirements: doc/requirements.txt
system_packages: true

View File

@@ -15,4 +15,3 @@
address = {New York, NY, USA},
keywords = {large-scale machine learning},
}

View File

@@ -1,5 +1,5 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(xgboost LANGUAGES CXX C VERSION 2.0.0)
project(xgboost LANGUAGES CXX C VERSION 2.0.3)
include(cmake/Utils.cmake)
list(APPEND CMAKE_MODULE_PATH "${xgboost_SOURCE_DIR}/cmake/modules")
cmake_policy(SET CMP0022 NEW)
@@ -14,8 +14,24 @@ endif ((${CMAKE_VERSION} VERSION_GREATER 3.13) OR (${CMAKE_VERSION} VERSION_EQUA
message(STATUS "CMake version ${CMAKE_VERSION}")
if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.0)
message(FATAL_ERROR "GCC version must be at least 5.0!")
# Check compiler versions
# Use recent compilers to ensure that std::filesystem is available
if(MSVC)
if(MSVC_VERSION LESS 1920)
message(FATAL_ERROR "Need Visual Studio 2019 or newer to build XGBoost")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "8.1")
message(FATAL_ERROR "Need GCC 8.1 or newer to build XGBoost")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "11.0")
message(FATAL_ERROR "Need Xcode 11.0 (AppleClang 11.0) or newer to build XGBoost")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "9.0")
message(FATAL_ERROR "Need Clang 9.0 or newer to build XGBoost")
endif()
endif()
include(${xgboost_SOURCE_DIR}/cmake/FindPrefetchIntrinsics.cmake)
@@ -42,7 +58,7 @@ option(ENABLE_ALL_WARNINGS "Enable all compiler warnings. Only effective for GCC
option(LOG_CAPI_INVOCATION "Log all C API invocations for debugging" OFF)
option(GOOGLE_TEST "Build google tests" OFF)
option(USE_DMLC_GTEST "Use google tests bundled with dmlc-core submodule" OFF)
option(USE_DEVICE_DEBUG "Generate CUDA device debug info." OFF)
option(USE_DEVICE_DEBUG "Generate CUDA/HIP device debug info." OFF)
option(USE_NVTX "Build with cuda profiling annotations. Developers only." OFF)
set(NVTX_HEADER_DIR "" CACHE PATH "Path to the stand-alone nvtx header")
option(RABIT_MOCK "Build rabit with mock" OFF)
@@ -50,10 +66,15 @@ option(HIDE_CXX_SYMBOLS "Build shared library and hide all C++ symbols" OFF)
option(KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR "Output build artifacts in CMake binary dir" OFF)
## CUDA
option(USE_CUDA "Build with GPU acceleration" OFF)
option(USE_PER_THREAD_DEFAULT_STREAM "Build with per-thread default stream" ON)
option(USE_NCCL "Build with NCCL to enable distributed GPU support." OFF)
option(BUILD_WITH_SHARED_NCCL "Build with shared NCCL library." OFF)
set(GPU_COMPUTE_VER "" CACHE STRING
"Semicolon separated list of compute versions to be built against, e.g. '35;61'")
## HIP
option(USE_HIP "Build with GPU acceleration" OFF)
option(USE_RCCL "Build with RCCL to enable distributed GPU support." OFF)
option(BUILD_WITH_SHARED_RCCL "Build with shared RCCL library." OFF)
## Copied From dmlc
option(USE_HDFS "Build with HDFS support" OFF)
option(USE_AZURE "Build with AZURE support" OFF)
@@ -76,6 +97,7 @@ option(ADD_PKGCONFIG "Add xgboost.pc into system." ON)
if (USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
message(SEND_ERROR "Do not enable `USE_DEBUG_OUTPUT' with release build.")
endif (USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
if (USE_NCCL AND NOT (USE_CUDA))
message(SEND_ERROR "`USE_NCCL` must be enabled with `USE_CUDA` flag.")
endif (USE_NCCL AND NOT (USE_CUDA))
@@ -85,6 +107,17 @@ endif (USE_DEVICE_DEBUG AND NOT (USE_CUDA))
if (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_NCCL=ON to enable BUILD_WITH_SHARED_NCCL.")
endif (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
if (USE_RCCL AND NOT (USE_HIP))
message(SEND_ERROR "`USE_RCCL` must be enabled with `USE_HIP` flag.")
endif (USE_RCCL AND NOT (USE_HIP))
if (USE_DEVICE_DEBUG AND NOT (USE_HIP))
message(SEND_ERROR "`USE_DEVICE_DEBUG` must be enabled with `USE_HIP` flag.")
endif (USE_DEVICE_DEBUG AND NOT (USE_HIP))
if (BUILD_WITH_SHARED_RCCL AND (NOT USE_RCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_RCCL=ON to enable BUILD_WITH_SHARED_RCCL.")
endif (BUILD_WITH_SHARED_RCCL AND (NOT USE_RCCL))
if (JVM_BINDINGS AND R_LIB)
message(SEND_ERROR "`R_LIB' is not compatible with `JVM_BINDINGS' as they both have customized configurations.")
endif (JVM_BINDINGS AND R_LIB)
@@ -98,9 +131,15 @@ endif (USE_AVX)
if (PLUGIN_LZ4)
message(SEND_ERROR "The option 'PLUGIN_LZ4' is removed from XGBoost.")
endif (PLUGIN_LZ4)
if (PLUGIN_RMM AND NOT (USE_CUDA))
message(SEND_ERROR "`PLUGIN_RMM` must be enabled with `USE_CUDA` flag.")
endif (PLUGIN_RMM AND NOT (USE_CUDA))
if (PLUGIN_RMM AND NOT (USE_HIP))
message(SEND_ERROR "`PLUGIN_RMM` must be enabled with `USE_HIP` flag.")
endif (PLUGIN_RMM AND NOT (USE_HIP))
if (PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
message(SEND_ERROR "`PLUGIN_RMM` must be used with GCC or Clang compiler.")
endif (PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
@@ -153,6 +192,24 @@ if (USE_CUDA)
find_package(CUDAToolkit REQUIRED)
endif (USE_CUDA)
if (USE_HIP)
set(USE_OPENMP ON CACHE BOOL "HIP requires OpenMP" FORCE)
# `export CXX=' is ignored by CMake HIP.
set(CMAKE_HIP_HOST_COMPILER ${CMAKE_CXX_COMPILER})
message(STATUS "Configured HIP host compiler: ${CMAKE_HIP_HOST_COMPILER}")
enable_language(HIP)
find_package(hip REQUIRED)
find_package(rocthrust REQUIRED)
find_package(hipcub REQUIRED)
set(CMAKE_HIP_FLAGS "${CMAKE_HIP_FLAGS} -I${HIP_INCLUDE_DIRS} -I${HIP_INCLUDE_DIRS}/hip")
set(CMAKE_HIP_FLAGS "${CMAKE_HIP_FLAGS} -Wunused-result -w")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D__HIP_PLATFORM_AMD__")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -I${HIP_INCLUDE_DIRS}")
add_subdirectory(${PROJECT_SOURCE_DIR}/rocgputreeshap)
endif (USE_HIP)
if (FORCE_COLORED_OUTPUT AND (CMAKE_GENERATOR STREQUAL "Ninja") AND
((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") OR
(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")))
@@ -192,6 +249,10 @@ if (USE_NCCL)
find_package(Nccl REQUIRED)
endif (USE_NCCL)
if (USE_RCCL)
find_package(rccl REQUIRED)
endif (USE_RCCL)
# dmlc-core
msvc_use_static_runtime()
if (FORCE_SHARED_CRT)
@@ -216,6 +277,11 @@ endif (RABIT_BUILD_MPI)
add_subdirectory(${xgboost_SOURCE_DIR}/src)
target_link_libraries(objxgboost PUBLIC dmlc)
# Link -lstdc++fs for GCC 8.x
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS "9.0")
target_link_libraries(objxgboost PUBLIC stdc++fs)
endif()
# Exports some R specific definitions and objects
if (R_LIB)
add_subdirectory(${xgboost_SOURCE_DIR}/R-package)
@@ -231,6 +297,15 @@ add_subdirectory(${xgboost_SOURCE_DIR}/plugin)
if (PLUGIN_RMM)
find_package(rmm REQUIRED)
# Patch the rmm targets so they reference the static cudart
# Remove this patch once RMM stops specifying cudart requirement
# (since RMM is a header-only library, it should not specify cudart in its CMake config)
get_target_property(rmm_link_libs rmm::rmm INTERFACE_LINK_LIBRARIES)
list(REMOVE_ITEM rmm_link_libs CUDA::cudart)
list(APPEND rmm_link_libs CUDA::cudart_static)
set_target_properties(rmm::rmm PROPERTIES INTERFACE_LINK_LIBRARIES "${rmm_link_libs}")
get_target_property(rmm_link_libs rmm::rmm INTERFACE_LINK_LIBRARIES)
endif (PLUGIN_RMM)
#-- library

NEWS.md (17 changes)
View File

@@ -3,6 +3,23 @@ XGBoost Change Log
This file records the changes in xgboost library in reverse chronological order.
## 1.7.6 (2023 Jun 16)
This is a patch release for bug fixes. The CRAN package for the R binding is kept at 1.7.5.
### Bug Fixes
* Fix distributed training with mixed dense and sparse partitions. (#9272)
* Fix monotone constraints on CPU with large trees. (#9122)
* [spark] Make the spark model have the same UID as its estimator (#9022)
* Optimize prediction with `QuantileDMatrix`. (#9096)
### Document
* Improve doxygen (#8959)
* Update the cuDF pip index URL. (#9106)
### Maintenance
* Fix tests with pandas 2.0. (#9014)
## 1.7.5 (2023 Mar 30)
This is a patch release for bug fixes.

View File

@@ -1,8 +1,8 @@
Package: xgboost
Type: Package
Title: Extreme Gradient Boosting
Version: 2.0.0.1
Date: 2022-10-18
Version: 2.0.3.1
Date: 2023-12-14
Authors@R: c(
person("Tianqi", "Chen", role = c("aut"),
email = "tianqi.tchen@gmail.com"),

View File

@@ -70,7 +70,7 @@ cb.print.evaluation <- function(period = 1, showsd = TRUE) {
i == env$begin_iteration ||
i == env$end_iteration) {
stdev <- if (showsd) env$bst_evaluation_err else NULL
msg <- format.eval.string(i, env$bst_evaluation, stdev)
msg <- .format_eval_string(i, env$bst_evaluation, stdev)
cat(msg, '\n')
}
}
@@ -380,7 +380,9 @@ cb.early.stop <- function(stopping_rounds, maximize = FALSE,
if ((maximize && score > best_score) ||
(!maximize && score < best_score)) {
best_msg <<- format.eval.string(i, env$bst_evaluation, env$bst_evaluation_err)
best_msg <<- .format_eval_string(
i, env$bst_evaluation, env$bst_evaluation_err
)
best_score <<- score
best_iteration <<- i
best_ntreelimit <<- best_iteration * env$num_parallel_tree
@@ -511,7 +513,7 @@ cb.cv.predict <- function(save_models = FALSE) {
if (save_models) {
env$basket$models <- lapply(env$bst_folds, function(fd) {
xgb.attr(fd$bst, 'niter') <- env$end_iteration - 1
xgb.Booster.complete(xgb.handleToBooster(fd$bst), saveraw = TRUE)
xgb.Booster.complete(xgb.handleToBooster(handle = fd$bst, raw = NULL), saveraw = TRUE)
})
}
}
@@ -659,7 +661,7 @@ cb.gblinear.history <- function(sparse = FALSE) {
} else { # xgb.cv:
cf <- vector("list", length(env$bst_folds))
for (i in seq_along(env$bst_folds)) {
dmp <- xgb.dump(xgb.handleToBooster(env$bst_folds[[i]]$bst))
dmp <- xgb.dump(xgb.handleToBooster(handle = env$bst_folds[[i]]$bst, raw = NULL))
cf[[i]] <- as.numeric(grep('(booster|bias|weigh)', dmp, invert = TRUE, value = TRUE))
if (sparse) cf[[i]] <- as(cf[[i]], "sparseVector")
}
@@ -754,7 +756,7 @@ xgb.gblinear.history <- function(model, class_index = NULL) {
#
# Format the evaluation metric string
format.eval.string <- function(iter, eval_res, eval_err = NULL) {
.format_eval_string <- function(iter, eval_res, eval_err = NULL) {
if (length(eval_res) == 0)
stop('no evaluation results')
enames <- names(eval_res)

View File

@@ -140,7 +140,7 @@ check.custom.eval <- function(env = parent.frame()) {
# Update a booster handle for an iteration with dtrain data
xgb.iter.update <- function(booster_handle, dtrain, iter, obj = NULL) {
xgb.iter.update <- function(booster_handle, dtrain, iter, obj) {
if (!identical(class(booster_handle), "xgb.Booster.handle")) {
stop("booster_handle must be of xgb.Booster.handle class")
}
@@ -163,7 +163,7 @@ xgb.iter.update <- function(booster_handle, dtrain, iter, obj = NULL) {
# Evaluate one iteration.
# Returns a named vector of evaluation metrics
# with the names in a 'datasetname-metricname' format.
xgb.iter.eval <- function(booster_handle, watchlist, iter, feval = NULL) {
xgb.iter.eval <- function(booster_handle, watchlist, iter, feval) {
if (!identical(class(booster_handle), "xgb.Booster.handle"))
stop("class of booster_handle must be xgb.Booster.handle")
@@ -234,7 +234,7 @@ generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
y <- factor(y)
}
}
folds <- xgb.createFolds(y, nfold)
folds <- xgb.createFolds(y = y, k = nfold)
} else {
# make simple non-stratified folds
kstep <- length(rnd_idx) %/% nfold
@@ -251,7 +251,7 @@ generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
# Creates CV folds stratified by the values of y.
# It was borrowed from caret::createFolds and simplified
# by always returning an unnamed list of fold indices.
xgb.createFolds <- function(y, k = 10) {
xgb.createFolds <- function(y, k) {
if (is.numeric(y)) {
## Group the numeric data based on their magnitudes
## and sample within those groups.

View File

@@ -1,7 +1,6 @@
# Construct an internal xgboost Booster and return a handle to it.
# internal utility function
xgb.Booster.handle <- function(params = list(), cachelist = list(),
modelfile = NULL, handle = NULL) {
xgb.Booster.handle <- function(params, cachelist, modelfile, handle) {
if (typeof(cachelist) != "list" ||
!all(vapply(cachelist, inherits, logical(1), what = 'xgb.DMatrix'))) {
stop("cachelist must be a list of xgb.DMatrix objects")
@@ -12,7 +11,7 @@ xgb.Booster.handle <- function(params = list(), cachelist = list(),
## A filename
handle <- .Call(XGBoosterCreate_R, cachelist)
modelfile <- path.expand(modelfile)
.Call(XGBoosterLoadModel_R, handle, modelfile[1])
.Call(XGBoosterLoadModel_R, handle, enc2utf8(modelfile[1]))
class(handle) <- "xgb.Booster.handle"
if (length(params) > 0) {
xgb.parameters(handle) <- params
@@ -44,7 +43,7 @@ xgb.Booster.handle <- function(params = list(), cachelist = list(),
# Convert xgb.Booster.handle to xgb.Booster
# internal utility function
xgb.handleToBooster <- function(handle, raw = NULL) {
xgb.handleToBooster <- function(handle, raw) {
bst <- list(handle = handle, raw = raw)
class(bst) <- "xgb.Booster"
return(bst)
@@ -129,7 +128,12 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
stop("argument type must be xgb.Booster")
if (is.null.handle(object$handle)) {
object$handle <- xgb.Booster.handle(modelfile = object$raw, handle = object$handle)
object$handle <- xgb.Booster.handle(
params = list(),
cachelist = list(),
modelfile = object$raw,
handle = object$handle
)
} else {
if (is.null(object$raw) && saveraw) {
object$raw <- xgb.serialize(object$handle)
@@ -475,7 +479,7 @@ predict.xgb.Booster <- function(object, newdata, missing = NA, outputmargin = FA
#' @export
predict.xgb.Booster.handle <- function(object, ...) {
bst <- xgb.handleToBooster(object)
bst <- xgb.handleToBooster(handle = object, raw = NULL)
ret <- predict(bst, ...)
return(ret)

View File

@@ -88,7 +88,7 @@ xgb.DMatrix <- function(data, info = list(), missing = NA, silent = FALSE, nthre
# get dmatrix from data, label
# internal helper method
xgb.get.DMatrix <- function(data, label = NULL, missing = NA, weight = NULL, nthread = NULL) {
xgb.get.DMatrix <- function(data, label, missing, weight, nthread) {
if (inherits(data, "dgCMatrix") || is.matrix(data)) {
if (is.null(label)) {
stop("label must be provided when data is a matrix")

View File

@@ -135,9 +135,6 @@ xgb.cv <- function(params = list(), data, nrounds, nfold, label = NULL, missing
check.custom.obj()
check.custom.eval()
#if (is.null(params[['eval_metric']]) && is.null(feval))
# stop("Either 'eval_metric' or 'feval' must be provided for CV")
# Check the labels
if ((inherits(data, 'xgb.DMatrix') && is.null(getinfo(data, 'label'))) ||
(!inherits(data, 'xgb.DMatrix') && is.null(label))) {
@@ -161,10 +158,6 @@ xgb.cv <- function(params = list(), data, nrounds, nfold, label = NULL, missing
folds <- generate.cv.folds(nfold, nrow(data), stratified, cv_label, params)
}
# Potential TODO: sequential CV
#if (strategy == 'sequential')
# stop('Sequential CV strategy is not yet implemented')
# verbosity & evaluation printing callback:
params <- c(params, list(silent = 1))
print_every_n <- max(as.integer(print_every_n), 1L)
@@ -194,7 +187,13 @@ xgb.cv <- function(params = list(), data, nrounds, nfold, label = NULL, missing
# create the booster-folds
# train_folds
dall <- xgb.get.DMatrix(data, label, missing, nthread = params$nthread)
dall <- xgb.get.DMatrix(
data = data,
label = label,
missing = missing,
weight = NULL,
nthread = params$nthread
)
bst_folds <- lapply(seq_along(folds), function(k) {
dtest <- slice(dall, folds[[k]])
# code originally contributed by @RolandASc on stackoverflow
@@ -202,7 +201,12 @@ xgb.cv <- function(params = list(), data, nrounds, nfold, label = NULL, missing
dtrain <- slice(dall, unlist(folds[-k]))
else
dtrain <- slice(dall, train_folds[[k]])
handle <- xgb.Booster.handle(params, list(dtrain, dtest))
handle <- xgb.Booster.handle(
params = params,
cachelist = list(dtrain, dtest),
modelfile = NULL,
handle = NULL
)
list(dtrain = dtrain, bst = handle, watchlist = list(train = dtrain, test = dtest), index = folds[[k]])
})
rm(dall)
@@ -223,8 +227,18 @@ xgb.cv <- function(params = list(), data, nrounds, nfold, label = NULL, missing
for (f in cb$pre_iter) f()
msg <- lapply(bst_folds, function(fd) {
xgb.iter.update(fd$bst, fd$dtrain, iteration - 1, obj)
xgb.iter.eval(fd$bst, fd$watchlist, iteration - 1, feval)
xgb.iter.update(
booster_handle = fd$bst,
dtrain = fd$dtrain,
iter = iteration - 1,
obj = obj
)
xgb.iter.eval(
booster_handle = fd$bst,
watchlist = fd$watchlist,
iter = iteration - 1,
feval = feval
)
})
msg <- simplify2array(msg)
bst_evaluation <- rowMeans(msg)

View File

@@ -142,6 +142,7 @@ xgb.ggplot.shap.summary <- function(data, shap_contrib = NULL, features = NULL,
#'
#' @return A data.table containing the observation ID, the feature name, the
#' feature value (normalized if specified), and the SHAP contribution value.
#' @noRd
prepare.ggplot.shap.data <- function(data_list, normalize = FALSE) {
data <- data_list[["data"]]
shap_contrib <- data_list[["shap_contrib"]]
@@ -170,6 +171,7 @@ prepare.ggplot.shap.data <- function(data_list, normalize = FALSE) {
#' @param x Numeric vector
#'
#' @return Numeric vector with mean 0 and sd 1.
#' @noRd
normalize <- function(x) {
loc <- mean(x, na.rm = TRUE)
scale <- stats::sd(x, na.rm = TRUE)
@@ -181,7 +183,7 @@ normalize <- function(x) {
# ... the plots
# cols number of columns
# internal utility function
multiplot <- function(..., cols = 1) {
multiplot <- function(..., cols) {
plots <- list(...)
num_plots <- length(plots)

View File

@@ -35,7 +35,12 @@ xgb.load <- function(modelfile) {
if (is.null(modelfile))
stop("xgb.load: modelfile cannot be NULL")
handle <- xgb.Booster.handle(modelfile = modelfile)
handle <- xgb.Booster.handle(
params = list(),
cachelist = list(),
modelfile = modelfile,
handle = NULL
)
# re-use modelfile if it is raw so we do not need to serialize
if (typeof(modelfile) == "raw") {
warning(
@@ -45,9 +50,9 @@ xgb.load <- function(modelfile) {
" `xgb.unserialize` instead. "
)
)
bst <- xgb.handleToBooster(handle, modelfile)
bst <- xgb.handleToBooster(handle = handle, raw = modelfile)
} else {
bst <- xgb.handleToBooster(handle, NULL)
bst <- xgb.handleToBooster(handle = handle, raw = NULL)
}
bst <- xgb.Booster.complete(bst, saveraw = TRUE)
return(bst)

View File

@@ -86,8 +86,7 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
text <- xgb.dump(model = model, with_stats = TRUE)
}
if (length(text) < 2 ||
sum(grepl('leaf=(\\d+)', text)) < 1) {
if (length(text) < 2 || !any(grepl('leaf=(\\d+)', text))) {
stop("Non-tree model detected! This function can only be used with tree models.")
}

View File

@@ -136,7 +136,7 @@ get.leaf.depth <- function(dt_tree) {
# list of paths to each leaf in a tree
paths <- lapply(paths_tmp$vpath, names)
# combine into a resulting path lengths table for a tree
data.table(Depth = sapply(paths, length), ID = To[Leaf == TRUE])
data.table(Depth = lengths(paths), ID = To[Leaf == TRUE])
}, by = Tree]
}

View File

@@ -193,7 +193,7 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
#' hence allows us to see which features have a negative / positive contribution
#' on the model prediction, and whether the contribution is different for larger
#' or smaller values of the feature. We effectively try to replicate the
#' \code{summary_plot} function from https://github.com/slundberg/shap.
#' \code{summary_plot} function from https://github.com/shap/shap.
#'
#' @inheritParams xgb.plot.shap
#'
@@ -202,7 +202,7 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
#'
#' @examples # See \code{\link{xgb.plot.shap}}.
#' @seealso \code{\link{xgb.plot.shap}}, \code{\link{xgb.ggplot.shap.summary}},
#' \url{https://github.com/slundberg/shap}
#' \url{https://github.com/shap/shap}
xgb.plot.shap.summary <- function(data, shap_contrib = NULL, features = NULL, top_n = 10, model = NULL,
trees = NULL, target_class = NULL, approxcontrib = FALSE, subsample = NULL) {
# Only ggplot implementation is available.
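
For readers unfamiliar with the function whose documentation is touched here, a hedged usage sketch (agaricus data, arbitrary hyperparameters; the ggplot backend requires ggplot2 to be installed):

```r
library(xgboost)
data(agaricus.train, package = "xgboost")
x <- as.matrix(agaricus.train$data)
bst <- xgboost(data = x, label = agaricus.train$label, max_depth = 3,
               eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic")

# beeswarm-style summary of per-observation SHAP contributions
# for the 10 most important features
xgb.plot.shap.summary(data = x, model = bst, top_n = 10)
```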


@@ -43,6 +43,6 @@ xgb.save <- function(model, fname) {
}
model <- xgb.Booster.complete(model, saveraw = FALSE)
fname <- path.expand(fname)
.Call(XGBoosterSaveModel_R, model$handle, fname[1])
.Call(XGBoosterSaveModel_R, model$handle, enc2utf8(fname[1]))
return(TRUE)
}


@@ -363,8 +363,13 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
is_update <- NVL(params[['process_type']], '.') == 'update'
# Construct a booster (either a new one or load from xgb_model)
handle <- xgb.Booster.handle(params, append(watchlist, dtrain), xgb_model)
bst <- xgb.handleToBooster(handle)
handle <- xgb.Booster.handle(
params = params,
cachelist = append(watchlist, dtrain),
modelfile = xgb_model,
handle = NULL
)
bst <- xgb.handleToBooster(handle = handle, raw = NULL)
# extract parameters that can affect the relationship b/w #trees and #iterations
num_class <- max(as.numeric(NVL(params[['num_class']], 1)), 1)
@@ -390,10 +395,21 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
for (f in cb$pre_iter) f()
xgb.iter.update(bst$handle, dtrain, iteration - 1, obj)
xgb.iter.update(
booster_handle = bst$handle,
dtrain = dtrain,
iter = iteration - 1,
obj = obj
)
if (length(watchlist) > 0)
bst_evaluation <- xgb.iter.eval(bst$handle, watchlist, iteration - 1, feval) # nolint: object_usage_linter
if (length(watchlist) > 0) {
bst_evaluation <- xgb.iter.eval( # nolint: object_usage_linter
booster_handle = bst$handle,
watchlist = watchlist,
iter = iteration - 1,
feval = feval
)
}
xgb.attr(bst$handle, 'niter') <- iteration - 1
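
As with the `xgb.cv` change above, only internal call sites gain named arguments; a typical user-level `xgb.train` call, sketched here with the bundled agaricus data, is unaffected:

```r
library(xgboost)
data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)

# metrics for every DMatrix in the watchlist are reported after each boosting round
bst <- xgb.train(
  params = list(objective = "binary:logistic", max_depth = 2, eta = 1, nthread = 2),
  data = dtrain,
  nrounds = 5,
  watchlist = list(train = dtrain, eval = dtest)
)
```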


@@ -10,7 +10,13 @@ xgboost <- function(data = NULL, label = NULL, missing = NA, weight = NULL,
save_period = NULL, save_name = "xgboost.model",
xgb_model = NULL, callbacks = list(), ...) {
merged <- check.booster.params(params, ...)
dtrain <- xgb.get.DMatrix(data, label, missing, weight, nthread = merged$nthread)
dtrain <- xgb.get.DMatrix(
data = data,
label = label,
missing = missing,
weight = weight,
nthread = merged$nthread
)
watchlist <- list(train = dtrain)

R-package/configure

@@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.71 for xgboost 2.0.0.
# Generated by GNU Autoconf 2.71 for xgboost 2.0.3.
#
#
# Copyright (C) 1992-1996, 1998-2017, 2020-2021 Free Software Foundation,
@@ -607,8 +607,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='xgboost'
PACKAGE_TARNAME='xgboost'
PACKAGE_VERSION='2.0.0'
PACKAGE_STRING='xgboost 2.0.0'
PACKAGE_VERSION='2.0.3'
PACKAGE_STRING='xgboost 2.0.3'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@@ -1225,7 +1225,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures xgboost 2.0.0 to adapt to many kinds of systems.
\`configure' configures xgboost 2.0.3 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@@ -1287,7 +1287,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of xgboost 2.0.0:";;
short | recursive ) echo "Configuration of xgboost 2.0.3:";;
esac
cat <<\_ACEOF
@@ -1367,7 +1367,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
xgboost configure 2.0.0
xgboost configure 2.0.3
generated by GNU Autoconf 2.71
Copyright (C) 2021 Free Software Foundation, Inc.
@@ -1533,7 +1533,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by xgboost $as_me 2.0.0, which was
It was created by xgboost $as_me 2.0.3, which was
generated by GNU Autoconf 2.71. Invocation command line was
$ $0$ac_configure_args_raw
@@ -3412,7 +3412,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by xgboost $as_me 2.0.0, which was
This file was extended by xgboost $as_me 2.0.3, which was
generated by GNU Autoconf 2.71. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@@ -3467,7 +3467,7 @@ ac_cs_config_escaped=`printf "%s\n" "$ac_cs_config" | sed "s/^ //; s/'/'\\\\\\\\
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config='$ac_cs_config_escaped'
ac_cs_version="\\
xgboost config.status 2.0.0
xgboost config.status 2.0.3
configured by $0, generated by GNU Autoconf 2.71,
with options \\"\$ac_cs_config\\"


@@ -2,7 +2,7 @@
AC_PREREQ(2.69)
AC_INIT([xgboost],[2.0.0],[],[xgboost],[])
AC_INIT([xgboost],[2.0.3],[],[xgboost],[])
: ${R_HOME=`R RHOME`}
if test -z "${R_HOME}"; then


@@ -44,7 +44,7 @@ treeInteractions <- function(input_tree, input_max_depth) {
# Remove non-interactions (same variable)
interaction_list <- lapply(interaction_list, unique) # remove same variables
interaction_length <- sapply(interaction_list, length)
interaction_length <- lengths(interaction_list)
interaction_list <- interaction_list[interaction_length > 1]
interaction_list <- unique(lapply(interaction_list, sort))
return(interaction_list)


@@ -1,18 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.ggplot.R
\name{normalize}
\alias{normalize}
\title{Scale feature value to have mean 0, standard deviation 1}
\usage{
normalize(x)
}
\arguments{
\item{x}{Numeric vector}
}
\value{
Numeric vector with mean 0 and sd 1.
}
\description{
This is used to compare multiple features on the same plot.
Internal utility function
}


@@ -1,27 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.ggplot.R
\name{prepare.ggplot.shap.data}
\alias{prepare.ggplot.shap.data}
\title{Combine and melt feature values and SHAP contributions for sample
observations.}
\usage{
prepare.ggplot.shap.data(data_list, normalize = FALSE)
}
\arguments{
\item{data_list}{List containing 'data' and 'shap_contrib' returned by
\code{xgb.shap.data()}.}
\item{normalize}{Whether to standardize feature values to have mean 0 and
standard deviation 1 (useful for comparing multiple features on the same
plot). Default \code{FALSE}.}
}
\value{
A data.table containing the observation ID, the feature name, the
feature value (normalized if specified), and the SHAP contribution value.
}
\description{
Conforms to data format required for ggplot functions.
}
\details{
Internal utility function.
}


@@ -67,12 +67,12 @@ Each point (observation) is coloured based on its feature value. The plot
hence allows us to see which features have a negative / positive contribution
on the model prediction, and whether the contribution is different for larger
or smaller values of the feature. We effectively try to replicate the
\code{summary_plot} function from https://github.com/slundberg/shap.
\code{summary_plot} function from https://github.com/shap/shap.
}
\examples{
# See \code{\link{xgb.plot.shap}}.
}
\seealso{
\code{\link{xgb.plot.shap}}, \code{\link{xgb.ggplot.shap.summary}},
\url{https://github.com/slundberg/shap}
\url{https://github.com/shap/shap}
}


@@ -47,6 +47,7 @@ OBJECTS= \
$(PKGROOT)/src/data/data.o \
$(PKGROOT)/src/data/sparse_page_raw_format.o \
$(PKGROOT)/src/data/ellpack_page.o \
$(PKGROOT)/src/data/file_iterator.o \
$(PKGROOT)/src/data/gradient_index.o \
$(PKGROOT)/src/data/gradient_index_page_source.o \
$(PKGROOT)/src/data/gradient_index_format.o \
@@ -68,6 +69,8 @@ OBJECTS= \
$(PKGROOT)/src/tree/updater_quantile_hist.o \
$(PKGROOT)/src/tree/updater_refresh.o \
$(PKGROOT)/src/tree/updater_sync.o \
$(PKGROOT)/src/tree/hist/param.o \
$(PKGROOT)/src/tree/hist/histogram.o \
$(PKGROOT)/src/linear/linear_updater.o \
$(PKGROOT)/src/linear/updater_coordinate.o \
$(PKGROOT)/src/linear/updater_shotgun.o \
@@ -82,6 +85,7 @@ OBJECTS= \
$(PKGROOT)/src/common/charconv.o \
$(PKGROOT)/src/common/column_matrix.o \
$(PKGROOT)/src/common/common.o \
$(PKGROOT)/src/common/error_msg.o \
$(PKGROOT)/src/common/hist_util.o \
$(PKGROOT)/src/common/host_device_vector.o \
$(PKGROOT)/src/common/io.o \


@@ -47,6 +47,7 @@ OBJECTS= \
$(PKGROOT)/src/data/data.o \
$(PKGROOT)/src/data/sparse_page_raw_format.o \
$(PKGROOT)/src/data/ellpack_page.o \
$(PKGROOT)/src/data/file_iterator.o \
$(PKGROOT)/src/data/gradient_index.o \
$(PKGROOT)/src/data/gradient_index_page_source.o \
$(PKGROOT)/src/data/gradient_index_format.o \
@@ -68,6 +69,8 @@ OBJECTS= \
$(PKGROOT)/src/tree/updater_quantile_hist.o \
$(PKGROOT)/src/tree/updater_refresh.o \
$(PKGROOT)/src/tree/updater_sync.o \
$(PKGROOT)/src/tree/hist/param.o \
$(PKGROOT)/src/tree/hist/histogram.o \
$(PKGROOT)/src/linear/linear_updater.o \
$(PKGROOT)/src/linear/updater_coordinate.o \
$(PKGROOT)/src/linear/updater_shotgun.o \
@@ -82,6 +85,7 @@ OBJECTS= \
$(PKGROOT)/src/common/charconv.o \
$(PKGROOT)/src/common/column_matrix.o \
$(PKGROOT)/src/common/common.o \
$(PKGROOT)/src/common/error_msg.o \
$(PKGROOT)/src/common/hist_util.o \
$(PKGROOT)/src/common/host_device_vector.o \
$(PKGROOT)/src/common/io.o \


@@ -120,11 +120,25 @@ XGB_DLL SEXP XGDMatrixCreateFromMat_R(SEXP mat, SEXP missing, SEXP n_threads) {
ctx.nthread = asInteger(n_threads);
std::int32_t threads = ctx.Threads();
if (is_int) {
xgboost::common::ParallelFor(nrow, threads, [&](xgboost::omp_ulong i) {
for (size_t j = 0; j < ncol; ++j) {
data[i * ncol + j] = is_int ? static_cast<float>(iin[i + nrow * j]) : din[i + nrow * j];
auto v = iin[i + nrow * j];
if (v == NA_INTEGER) {
data[i * ncol + j] = std::numeric_limits<float>::quiet_NaN();
} else {
data[i * ncol + j] = static_cast<float>(v);
}
}
});
} else {
xgboost::common::ParallelFor(nrow, threads, [&](xgboost::omp_ulong i) {
for (size_t j = 0; j < ncol; ++j) {
data[i * ncol + j] = din[i + nrow * j];
}
});
}
DMatrixHandle handle;
CHECK_CALL(XGDMatrixCreateFromMat_omp(BeginPtr(data), nrow, ncol,
asReal(missing), &handle, threads));
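
Viewed from the R side, the intent of this fix is that an integer matrix containing `NA` now yields missing entries instead of the raw `NA_INTEGER` sentinel cast to float. A minimal sketch of the behaviour being targeted:

```r
library(xgboost)

x_int <- matrix(c(1L, NA, 3L, 4L, 5L, 6L), nrow = 3)
m_int <- xgb.DMatrix(x_int)

# the NA cell is treated as missing, exactly as it is for the
# equivalent numeric (double) matrix
m_num <- xgb.DMatrix(matrix(as.numeric(x_int), nrow = 3))
```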


@@ -32,7 +32,7 @@ namespace common {
bool CheckNAN(double v) {
return ISNAN(v);
}
#if !defined(XGBOOST_USE_CUDA)
#if !defined(XGBOOST_USE_CUDA) && !defined(XGBOOST_USE_HIP)
double LogGamma(double v) {
return lgammafn(v);
}


@@ -85,9 +85,18 @@ test_that("dart prediction works", {
rnorm(100)
set.seed(1994)
booster_by_xgboost <- xgboost(data = d, label = y, max_depth = 2, booster = "dart",
rate_drop = 0.5, one_drop = TRUE,
eta = 1, nthread = 2, nrounds = nrounds, objective = "reg:squarederror")
booster_by_xgboost <- xgboost(
data = d,
label = y,
max_depth = 2,
booster = "dart",
rate_drop = 0.5,
one_drop = TRUE,
eta = 1,
nthread = 2,
nrounds = nrounds,
objective = "reg:squarederror"
)
pred_by_xgboost_0 <- predict(booster_by_xgboost, newdata = d, ntreelimit = 0)
pred_by_xgboost_1 <- predict(booster_by_xgboost, newdata = d, ntreelimit = nrounds)
expect_true(all(matrix(pred_by_xgboost_0, byrow = TRUE) == matrix(pred_by_xgboost_1, byrow = TRUE)))
@@ -97,14 +106,14 @@ test_that("dart prediction works", {
set.seed(1994)
dtrain <- xgb.DMatrix(data = d, info = list(label = y))
booster_by_train <- xgb.train(params = list(
booster_by_train <- xgb.train(
params = list(
booster = "dart",
max_depth = 2,
eta = 1,
rate_drop = 0.5,
one_drop = TRUE,
nthread = 1,
tree_method = "exact",
objective = "reg:squarederror"
),
data = dtrain,
@@ -399,7 +408,7 @@ test_that("colsample_bytree works", {
xgb.importance(model = bst)
# If colsample_bytree works properly, a variety of features should be used
# in the 100 trees
expect_gte(nrow(xgb.importance(model = bst)), 30)
expect_gte(nrow(xgb.importance(model = bst)), 28)
})
test_that("Configuration works", {


@@ -56,6 +56,42 @@ test_that("xgb.DMatrix: basic construction", {
expect_equal(raw_fd, raw_dgc)
})
test_that("xgb.DMatrix: NA", {
n_samples <- 3
x <- cbind(
x1 = sample(x = 4, size = n_samples, replace = TRUE),
x2 = sample(x = 4, size = n_samples, replace = TRUE)
)
x[1, "x1"] <- NA
m <- xgb.DMatrix(x)
xgb.DMatrix.save(m, "int.dmatrix")
x <- matrix(as.numeric(x), nrow = n_samples, ncol = 2)
colnames(x) <- c("x1", "x2")
m <- xgb.DMatrix(x)
xgb.DMatrix.save(m, "float.dmatrix")
iconn <- file("int.dmatrix", "rb")
fconn <- file("float.dmatrix", "rb")
expect_equal(file.size("int.dmatrix"), file.size("float.dmatrix"))
bytes <- file.size("int.dmatrix")
idmatrix <- readBin(iconn, "raw", n = bytes)
fdmatrix <- readBin(fconn, "raw", n = bytes)
expect_equal(length(idmatrix), length(fdmatrix))
expect_equal(idmatrix, fdmatrix)
close(iconn)
close(fconn)
file.remove("int.dmatrix")
file.remove("float.dmatrix")
})
test_that("xgb.DMatrix: saving, loading", {
# save to a local file
dtest1 <- xgb.DMatrix(test_data, label = test_label)
@@ -72,6 +108,7 @@ test_that("xgb.DMatrix: saving, loading", {
tmp <- c("0 1:1 2:1", "1 3:1", "0 1:1")
tmp_file <- tempfile(fileext = ".libsvm")
writeLines(tmp, tmp_file)
expect_true(file.exists(tmp_file))
dtest4 <- xgb.DMatrix(paste(tmp_file, "?format=libsvm", sep = ""), silent = TRUE)
expect_equal(dim(dtest4), c(3, 4))
expect_equal(getinfo(dtest4, 'label'), c(0, 1, 0))


@@ -189,7 +189,7 @@ test_that("SHAPs sum to predictions, with or without DART", {
tol <- 1e-5
expect_equal(rowSums(shap), pred, tol = tol)
expect_equal(apply(shapi, 1, sum), pred, tol = tol)
expect_equal(rowSums(shapi), pred, tol = tol)
for (i in seq_len(nrow(d)))
for (f in list(rowSums, colSums))
expect_equal(f(shapi[i, , ]), shap[i, ], tol = tol)
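
The property this test exercises can be checked directly from R: the per-feature SHAP contributions, including the bias column, sum to the untransformed margin prediction. A hedged sketch with a small binary model:

```r
library(xgboost)
data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
bst <- xgb.train(
  params = list(objective = "binary:logistic", max_depth = 2, eta = 1, nthread = 2),
  data = dtrain,
  nrounds = 5
)

shap <- predict(bst, dtrain, predcontrib = TRUE)     # n x (n_features + 1); last column is BIAS
margin <- predict(bst, dtrain, outputmargin = TRUE)  # raw log-odds, before the logistic link
stopifnot(isTRUE(all.equal(unname(rowSums(shap)), margin, tolerance = 1e-5)))
```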


@@ -76,8 +76,6 @@ test_that("Models from previous versions of XGBoost can be loaded", {
name <- m[3]
is_rds <- endsWith(model_file, '.rds')
is_json <- endsWith(model_file, '.json')
cpp_warning <- capture.output({
# Expect an R warning when a model is loaded from RDS and it was generated by version < 1.1.x
if (is_rds && compareVersion(model_xgb_ver, '1.1.1.1') < 0) {
booster <- readRDS(model_file)
@@ -94,14 +92,4 @@ test_that("Models from previous versions of XGBoost can be loaded", {
run_booster_check(booster, name)
}
})
cpp_warning <- paste0(cpp_warning, collapse = ' ')
if (is_rds && compareVersion(model_xgb_ver, '1.1.1.1') >= 0) {
# Expect a C++ warning when a model is loaded from RDS and it was generated by old XGBoost`
m <- grepl(paste0('.*If you are loading a serialized model ',
'\\(like pickle in Python, RDS in R\\).*',
'for more details about differences between ',
'saving model and serializing.*'), cpp_warning, perl = TRUE)
expect_true(length(m) > 0 && all(m))
}
})
})


@@ -0,0 +1,21 @@
context("Test Unicode handling")
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
train <- agaricus.train
test <- agaricus.test
set.seed(1994)
test_that("Can save and load models with Unicode paths", {
nrounds <- 2
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = nrounds, objective = "binary:logistic",
eval_metric = "error")
tmpdir <- tempdir()
lapply(c("모델.json", "がうる・ぐら.json", "类继承.ubj"), function(x) {
path <- file.path(tmpdir, x)
xgb.save(bst, path)
bst2 <- xgb.load(path)
expect_equal(predict(bst, test$data), predict(bst2, test$data))
})
})


@@ -13,7 +13,10 @@ test_that("updating the model works", {
watchlist <- list(train = dtrain, test = dtest)
# no-subsampling
p1 <- list(objective = "binary:logistic", max_depth = 2, eta = 0.05, nthread = 2)
p1 <- list(
objective = "binary:logistic", max_depth = 2, eta = 0.05, nthread = 2,
updater = "grow_colmaker,prune"
)
set.seed(11)
bst1 <- xgb.train(p1, dtrain, nrounds = 10, watchlist, verbose = 0)
tr1 <- xgb.model.dt.tree(model = bst1)


@@ -51,24 +51,24 @@ A *categorical* variable has a fixed number of different values. For instance, i
>
> Type `?factor` in the console for more information.
To answer the question above we will convert *categorical* variables to `numeric` one.
To answer the question above we will convert *categorical* variables to `numeric` ones.
### Conversion from categorical to numeric variables
#### Looking at the raw data
In this Vignette we will see how to transform a *dense* `data.frame` (*dense* = few zeroes in the matrix) with *categorical* variables to a very *sparse* matrix (*sparse* = lots of zero in the matrix) of `numeric` features.
In this Vignette we will see how to transform a *dense* `data.frame` (*dense* = the majority of the matrix is non-zero) with *categorical* variables to a very *sparse* matrix (*sparse* = lots of zero entries in the matrix) of `numeric` features.
The method we are going to see is usually called [one-hot encoding](https://en.wikipedia.org/wiki/One-hot).
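
As a tiny, standalone illustration of what one-hot encoding does to a single categorical column (toy data, not taken from the vignette):

```r
treatment <- factor(c("Placebo", "Treated", "Placebo"))

# a no-intercept model matrix expands the factor into one binary
# indicator column per level (printed attributes omitted)
model.matrix(~ 0 + treatment)
#   treatmentPlacebo treatmentTreated
# 1                1                0
# 2                0                1
# 3                1                0
```
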
The first step is to load `Arthritis` dataset in memory and wrap it with `data.table` package.
The first step is to load the `Arthritis` dataset in memory and wrap it with the `data.table` package.
```{r, results='hide'}
data(Arthritis)
df <- data.table(Arthritis, keep.rownames = FALSE)
```
> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance for large dataset is [best in class](https://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `Pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of **XGBoost** **R** package use `data.table`.
> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance for large dataset is [best in class](https://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `Pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of **XGBoost's** **R** package use `data.table`.
The first thing we want to do is to have a look to the first few lines of the `data.table`:
@@ -95,19 +95,19 @@ We will add some new *categorical* features to see if it helps.
##### Grouping per 10 years
For the first feature we create groups of age by rounding the real age.
For the first features we create groups of age by rounding the real age.
Note that we transform it to `factor` so the algorithm treat these age groups as independent values.
Note that we transform it to `factor` so the algorithm treats these age groups as independent values.
Therefore, 20 is not closer to 30 than 60. To make it short, the distance between ages is lost in this transformation.
Therefore, 20 is not closer to 30 than 60. In other words, the distance between ages is lost in this transformation.
```{r}
head(df[, AgeDiscret := as.factor(round(Age / 10, 0))])
```
##### Random split into two groups
##### Randomly split into two groups
Following is an even stronger simplification of the real age with an arbitrary split at 30 years old. We choose this value **based on nothing**. We will see later if simplifying the information based on arbitrary values is a good strategy (you may already have an idea of how well it will work...).
The following is an even stronger simplification of the real age with an arbitrary split at 30 years old. I choose this value **based on nothing**. We will see later if simplifying the information based on arbitrary values is a good strategy (you may already have an idea of how well it will work...).
```{r}
head(df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))])
@@ -119,7 +119,7 @@ These new features are highly correlated to the `Age` feature because they are s
For many machine learning algorithms, using correlated features is not a good idea. It may sometimes make prediction less accurate, and most of the time make interpretation of the model almost impossible. GLM, for instance, assumes that the features are uncorrelated.
Fortunately, decision tree algorithms (including boosted trees) are very robust to these features. Therefore we have nothing to do to manage this situation.
Fortunately, decision tree algorithms (including boosted trees) are very robust to these features. Therefore we don't have to do anything to manage this situation.
##### Cleaning data
@@ -144,7 +144,7 @@ We will use the [dummy contrast coding](https://stats.oarc.ucla.edu/r/library/r-
The purpose is to transform each value of each *categorical* feature into a *binary* feature `{0, 1}`.
For example, the column `Treatment` will be replaced by two columns, `TreatmentPlacebo`, and `TreatmentTreated`. Each of them will be *binary*. Therefore, an observation which has the value `Placebo` in column `Treatment` before the transformation will have after the transformation the value `1` in the new column `TreatmentPlacebo` and the value `0` in the new column `TreatmentTreated`. The column `TreatmentPlacebo` will disappear during the contrast encoding, as it would be absorbed into a common constant intercept column.
For example, the column `Treatment` will be replaced by two columns, `TreatmentPlacebo`, and `TreatmentTreated`. Each of them will be *binary*. Therefore, an observation which has the value `Placebo` in column `Treatment` before the transformation will have the value `1` in the new column `TreatmentPlacebo` and the value `0` in the new column `TreatmentTreated` after the transformation. The column `TreatmentPlacebo` will disappear during the contrast encoding, as it would be absorbed into a common constant intercept column.
Column `Improved` is excluded because it will be our `label` column, the one we want to predict.
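
Concretely, the encoding step and the label vector described above can be sketched as follows, assuming the `df` data.table built earlier in the vignette (column names follow the Arthritis dataset):

```r
library(Matrix)

# one binary column per categorical level; Improved stays out of the features
sparse_matrix <- sparse.model.matrix(Improved ~ . - 1, data = df)
head(sparse_matrix)

# label: TRUE (1) when the treatment markedly improved the illness
output_vector <- df[, Improved] == "Marked"
```
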
@@ -176,13 +176,9 @@ bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 4,
```
You can see some `train-error: 0.XXXXX` lines followed by a number. It decreases. Each line shows how well the model explains your data. Lower is better.
You can see some `train-logloss: 0.XXXXX` lines followed by a number. It decreases. Each line shows how well the model explains the data. Lower is better.
A small value for training error may be a symptom of [overfitting](https://en.wikipedia.org/wiki/Overfitting), meaning the model will not accurately predict the future values.
> Here you can see the numbers decrease until line 7 and then increase.
>
> It probably means we are overfitting. To fix that I should reduce the number of rounds to `nrounds = 4`. I will let things like that because I don't really care for the purpose of this example :-)
A small value for training error may be a symptom of [overfitting](https://en.wikipedia.org/wiki/Overfitting), meaning the model will not accurately predict unseen values.
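
One common way to act on this symptom is to monitor a held-out set and stop boosting once its metric stops improving. A sketch using `early_stopping_rounds`, assuming `dtrain` and `dtest` are `xgb.DMatrix` objects from a train/test split (values are illustrative):

```r
bst <- xgb.train(
  params = list(objective = "binary:logistic", max_depth = 4, eta = 1, nthread = 2),
  data = dtrain,
  nrounds = 50,
  watchlist = list(train = dtrain, test = dtest),
  early_stopping_rounds = 3  # stop once the test metric has not improved for 3 rounds
)
print(bst$best_iteration)
```
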
Feature importance
------------------
@@ -199,64 +195,35 @@ importance <- xgb.importance(feature_names = colnames(sparse_matrix), model = bs
head(importance)
```
> The column `Gain` provide the information we are looking for.
> The column `Gain` provides the information we are looking for.
>
> As you can see, features are classified by `Gain`.
`Gain` is the improvement in accuracy brought by a feature to the branches it is on. The idea is that before adding a new split on a feature X to the branch there was some wrongly classified elements, after adding the split on this feature, there are two new branches, and each of these branch is more accurate (one branch saying if your observation is on this branch then it should be classified as `1`, and the other branch saying the exact opposite).
`Gain` is the improvement in accuracy brought by a feature to the branches it is on. The idea is that before adding a new split on a feature X to the branch there were some wrongly classified elements; after adding the split on this feature, there are two new branches, and each of these branches is more accurate (one branch saying if your observation is on this branch then it should be classified as `1`, and the other branch saying the exact opposite).
`Cover` measures the relative quantity of observations concerned by a feature.
`Cover` is related to the second order derivative (or Hessian) of the loss function with respect to a particular variable; thus, a large value indicates a variable has a large potential impact on the loss function and so is important.
`Frequency` is a simpler way to measure the `Gain`. It just counts the number of times a feature is used in all generated trees. You should not use it (unless you know why you want to use it).
#### Improvement in the interpretability of feature importance data.table
We can go deeper in the analysis of the model. In the `data.table` above, we have discovered which features counts to predict if the illness will go or not. But we don't yet know the role of these features. For instance, one of the question we may want to answer would be: does receiving a placebo treatment helps to recover from the illness?
One simple solution is to count the co-occurrences of a feature and a class of the classification.
For that purpose we will execute the same function as above but using two more parameters, `data` and `label`.
```{r}
importanceRaw <- xgb.importance(feature_names = colnames(sparse_matrix), model = bst, data = sparse_matrix, label = output_vector)
# Cleaning for better display
importanceClean <- importanceRaw[, `:=`(Cover = NULL, Frequency = NULL)]
head(importanceClean)
```
> In the table above we have removed two not needed columns and select only the first lines.
First thing you notice is the new column `Split`. It is the split applied to the feature on a branch of one of the tree. Each split is present, therefore a feature can appear several times in this table. Here we can see the feature `Age` is used several times with different splits.
How the split is applied to count the co-occurrences? It is always `<`. For instance, in the second line, we measure the number of persons under 61.5 years with the illness gone after the treatment.
The two other new columns are `RealCover` and `RealCover %`. In the first column it measures the number of observations in the dataset where the split is respected and the label marked as `1`. The second column is the percentage of the whole population that `RealCover` represents.
Therefore, according to our findings, getting a placebo doesn't seem to help but being younger than 61 years may help (seems logic).
> You may wonder how to interpret the `< 1.00001` on the first line. Basically, in a sparse `Matrix`, there is no `0`, therefore, looking for one hot-encoded categorical observations validating the rule `< 1.00001` is like just looking for `1` for this feature.
### Plotting the feature importance
All these things are nice, but it would be even better to plot the results.
```{r, fig.width=8, fig.height=5, fig.align='center'}
xgb.plot.importance(importance_matrix = importance)
```
Feature have automatically been divided in 2 clusters: the interesting features... and the others.
Running this line of code, you should get a bar chart showing the importance of the 6 features (containing the same data as the output we saw earlier, but displaying it visually for easier consumption). Note that `xgb.ggplot.importance` is also available for all the ggplot2 fans!
> Depending of the dataset and the learning parameters you may have more than two clusters. Default value is to limit them to `10`, but you can increase this limit. Look at the function documentation for more information.
According to the plot above, the most important features in this dataset to predict if the treatment will work are :
* the Age ;
* having received a placebo or not ;
* the sex is third but already included in the not interesting features group ;
* then we see our generated features (AgeDiscret). We can see that their contribution is very low.
* An individual's age;
* Having received a placebo or not;
* Gender;
* Our generated feature AgeDiscret. We can see that its contribution is very low.
### Do these results make sense?
@@ -270,53 +237,53 @@ c2 <- chisq.test(df$Age, output_vector)
print(c2)
```
Pearson correlation between Age and illness disappearing is **`r round(c2$statistic, 2 )`**.
The Pearson correlation between Age and illness disappearing is **`r round(c2$statistic, 2 )`**.
```{r, warning=FALSE, message=FALSE}
c2 <- chisq.test(df$AgeDiscret, output_vector)
print(c2)
```
Our first simplification of Age gives a Pearson correlation is **`r round(c2$statistic, 2)`**.
Our first simplification of Age gives a Pearson correlation of **`r round(c2$statistic, 2)`**.
```{r, warning=FALSE, message=FALSE}
c2 <- chisq.test(df$AgeCat, output_vector)
print(c2)
```
The perfectly random split I did between young and old at 30 years old have a low correlation of **`r round(c2$statistic, 2)`**. It's a result we may expect as may be in my mind > 30 years is being old (I am 32 and starting feeling old, this may explain that), but for the illness we are studying, the age to be vulnerable is not the same.
The perfectly random split we did between young and old at 30 years old has a low correlation of **2.36**. This suggests that, for the particular illness we are studying, the age at which someone is vulnerable to this disease is likely very different from 30.
Morality: don't let your *gut* lower the quality of your model.
Moral of the story: don't let your *gut* lower the quality of your model.
In *data science* expression, there is the word *science* :-)
In *data science*, there is the word *science* :-)
Conclusion
----------
As you can see, in general *destroying information by simplifying it won't improve your model*. **Chi2** just demonstrates that.
But in more complex cases, creating a new feature based on existing one which makes link with the outcome more obvious may help the algorithm and improve the model.
But in more complex cases, creating a new feature from an existing one may help the algorithm and improve the model.
The case studied here is not enough complex to show that. Check [Kaggle website](http://www.kaggle.com/) for some challenging datasets. However it's almost always worse when you add some arbitrary rules.
The case studied here is not complex enough to show that. Check [Kaggle website](https://www.kaggle.com/) for some challenging datasets.
Moreover, you can notice that even if we have added some not useful new features highly correlated with other features, the boosting tree algorithm have been able to choose the best one, which in this case is the Age.
Moreover, you can see that even if we have added some new features which are not very useful/highly correlated with other features, the boosting tree algorithm was still able to choose the best one (which in this case is the Age).
Linear model may not be that smart in this scenario.
Linear models may not perform as well.
Special Note: What about Random Forests™?
-----------------------------------------
As you may know, [Random Forests](https://en.wikipedia.org/wiki/Random_forest) algorithm is cousin with boosting and both are part of the [ensemble learning](https://en.wikipedia.org/wiki/Ensemble_learning) family.
As you may know, the [Random Forests](https://en.wikipedia.org/wiki/Random_forest) algorithm is cousin with boosting and both are part of the [ensemble learning](https://en.wikipedia.org/wiki/Ensemble_learning) family.
Both trains several decision trees for one dataset. The *main* difference is that in Random Forests, trees are independent and in boosting, the tree `N+1` focus its learning on the loss (<=> what has not been well modeled by the tree `N`).
Both train several decision trees for one dataset. The *main* difference is that in Random Forests, trees are independent and in boosting, the `N+1`-st tree focuses its learning on the loss (<=> what has not been well modeled by the tree `N`).
This difference have an impact on a corner case in feature importance analysis: the *correlated features*.
This difference can have an impact on an edge case in feature importance analysis: *correlated features*.
Imagine two features perfectly correlated, feature `A` and feature `B`. For one specific tree, if the algorithm needs one of them, it will choose randomly (true in both boosting and Random Forests).
However, in Random Forests this random choice will be done for each tree, because each tree is independent from the others. Therefore, approximatively, depending of your parameters, 50% of the trees will choose feature `A` and the other 50% will choose feature `B`. So the *importance* of the information contained in `A` and `B` (which is the same, because they are perfectly correlated) is diluted in `A` and `B`. So you won't easily know this information is important to predict what you want to predict! It is even worse when you have 10 correlated features...
However, in Random Forests this random choice will be done for each tree, because each tree is independent from the others. Therefore, approximately (and depending on your parameters) 50% of the trees will choose feature `A` and the other 50% will choose feature `B`. So the *importance* of the information contained in `A` and `B` (which is the same, because they are perfectly correlated) is diluted in `A` and `B`. So you won't easily know this information is important to predict what you want to predict! It is even worse when you have 10 correlated features...
In boosting, when a specific link between feature and outcome have been learned by the algorithm, it will try to not refocus on it (in theory it is what happens, reality is not always that simple). Therefore, all the importance will be on feature `A` or on feature `B` (but not both). You will know that one feature have an important role in the link between the observations and the label. It is still up to you to search for the correlated features to the one detected as important if you need to know all of them.
In boosting, when a specific link between feature and outcome have been learned by the algorithm, it will try to not refocus on it (in theory it is what happens, reality is not always that simple). Therefore, all the importance will be on feature `A` or on feature `B` (but not both). You will know that one feature has an important role in the link between the observations and the label. It is still up to you to search for the correlated features to the one detected as important if you need to know all of them.
If you want to try Random Forests algorithm, you can tweak XGBoost parameters!
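
For instance, a rough, illustrative random-forest-like configuration grows many trees in a single boosting round, each on a row and column subsample, with no shrinkage (`dtrain` is assumed to be an `xgb.DMatrix`):

```r
rf_params <- list(
  objective = "binary:logistic",
  num_parallel_tree = 100,  # grow 100 trees in the single round
  subsample = 0.8,          # row subsampling (bagging)
  colsample_bynode = 0.8,   # column subsampling at each split
  eta = 1,                  # no shrinkage
  max_depth = 6,
  nthread = 2
)
rf <- xgb.train(params = rf_params, data = dtrain, nrounds = 1)
```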


@@ -18,13 +18,11 @@
publisher={Institute of Mathematical Statistics}
}
@misc{
Bache+Lichman:2013 ,
author = "K. Bache and M. Lichman",
year = "2013",
title = "{UCI} Machine Learning Repository",
url = "http://archive.ics.uci.edu/ml/",
url = "https://archive.ics.uci.edu/",
institution = "University of California, Irvine, School of Information and Computer Sciences"
}


@@ -48,7 +48,6 @@ Become a sponsor and get a logo here. See details at [Sponsoring the XGBoost Pro
<a href="https://www.nvidia.com/en-us/" target="_blank"><img src="https://raw.githubusercontent.com/xgboost-ai/xgboost-ai.github.io/master/images/sponsors/nvidia.jpg" alt="NVIDIA" width="72" height="72"></a>
<a href="https://www.intel.com/" target="_blank"><img src="https://images.opencollective.com/intel-corporation/2fa85c1/logo/256.png" width="72" height="72"></a>
<a href="https://getkoffie.com/?utm_source=opencollective&utm_medium=github&utm_campaign=xgboost" target="_blank"><img src="https://images.opencollective.com/koffielabs/f391ab8/logo/256.png" width="72" height="72"></a>
### Backers
[[Become a backer](https://opencollective.com/xgboost#backer)]


@@ -90,8 +90,8 @@ function(format_gencode_flags flags out)
endif()
# Set up architecture flags
if(NOT flags)
if (CUDA_VERSION VERSION_GREATER_EQUAL "11.1")
set(flags "50;60;70;80")
if (CUDA_VERSION VERSION_GREATER_EQUAL "11.8")
set(flags "50;60;70;80;90")
elseif (CUDA_VERSION VERSION_GREATER_EQUAL "11.0")
set(flags "50;60;70;80")
elseif(CUDA_VERSION VERSION_GREATER_EQUAL "10.0")
@@ -133,6 +133,11 @@ function(xgboost_set_cuda_flags target)
$<$<COMPILE_LANGUAGE:CUDA>:-Xcompiler=${OpenMP_CXX_FLAGS}>
$<$<COMPILE_LANGUAGE:CUDA>:-Xfatbin=-compress-all>)
if (USE_PER_THREAD_DEFAULT_STREAM)
target_compile_options(${target} PRIVATE
$<$<COMPILE_LANGUAGE:CUDA>:--default-stream per-thread>)
endif (USE_PER_THREAD_DEFAULT_STREAM)
if (CMAKE_VERSION VERSION_GREATER_EQUAL "3.18")
set_property(TARGET ${target} PROPERTY CUDA_ARCHITECTURES ${CMAKE_CUDA_ARCHITECTURES})
endif (CMAKE_VERSION VERSION_GREATER_EQUAL "3.18")
@@ -172,9 +177,27 @@ function(xgboost_set_cuda_flags target)
set_target_properties(${target} PROPERTIES
CUDA_STANDARD 17
CUDA_STANDARD_REQUIRED ON
CUDA_SEPARABLE_COMPILATION OFF)
CUDA_SEPARABLE_COMPILATION OFF
CUDA_RUNTIME_LIBRARY Static)
endfunction(xgboost_set_cuda_flags)
# Set HIP related flags to target.
function(xgboost_set_hip_flags target)
if (USE_DEVICE_DEBUG)
target_compile_options(${target} PRIVATE
$<$<AND:$<CONFIG:DEBUG>,$<COMPILE_LANGUAGE:HIP>>:-G>)
endif (USE_DEVICE_DEBUG)
target_compile_definitions(${target} PRIVATE -DXGBOOST_USE_HIP=1)
target_include_directories(${target} PRIVATE ${xgboost_SOURCE_DIR}/rocgputreeshap)
target_include_directories(${target} PRIVATE ${xgboost_SOURCE_DIR}/warp-primitives/include)
set_target_properties(${target} PROPERTIES
HIP_STANDARD 17
HIP_STANDARD_REQUIRED ON
HIP_SEPARABLE_COMPILATION OFF)
endfunction(xgboost_set_hip_flags)
macro(xgboost_link_nccl target)
if (BUILD_STATIC_LIB)
target_include_directories(${target} PUBLIC ${NCCL_INCLUDE_DIR})
@@ -187,6 +210,20 @@ macro(xgboost_link_nccl target)
endif (BUILD_STATIC_LIB)
endmacro(xgboost_link_nccl)
macro(xgboost_link_rccl target)
if(BUILD_STATIC_LIB)
target_include_directories(${target} PUBLIC ${RCCL_INCLUDE_DIR}/rccl)
target_compile_definitions(${target} PUBLIC -DXGBOOST_USE_RCCL=1)
target_link_directories(${target} PUBLIC ${HIP_LIB_INSTALL_DIR})
target_link_libraries(${target} PUBLIC ${RCCL_LIBRARY})
else()
target_include_directories(${target} PRIVATE ${RCCL_INCLUDE_DIR}/rccl)
target_compile_definitions(${target} PRIVATE -DXGBOOST_USE_RCCL=1)
target_link_directories(${target} PUBLIC ${HIP_LIB_INSTALL_DIR})
target_link_libraries(${target} PRIVATE ${RCCL_LIBRARY})
endif()
endmacro()
# compile options
macro(xgboost_target_properties target)
set_target_properties(${target} PROPERTIES
@@ -209,6 +246,10 @@ macro(xgboost_target_properties target)
-Xcompiler=-Wall -Xcompiler=-Wextra -Xcompiler=-Wno-expansion-to-defined,
-Wall -Wextra -Wno-expansion-to-defined>
)
target_compile_options(${target} PUBLIC
$<IF:$<COMPILE_LANGUAGE:HIP>,
-Wall -Wextra >
)
endif(ENABLE_ALL_WARNINGS)
target_compile_options(${target}
@@ -274,8 +315,13 @@ macro(xgboost_target_link_libraries target)
if (USE_CUDA)
xgboost_set_cuda_flags(${target})
target_link_libraries(${target} PUBLIC CUDA::cudart_static)
endif (USE_CUDA)
if (USE_HIP)
xgboost_set_hip_flags(${target})
endif (USE_HIP)
if (PLUGIN_RMM)
target_link_libraries(${target} PRIVATE rmm::rmm)
endif (PLUGIN_RMM)
@@ -284,6 +330,10 @@ macro(xgboost_target_link_libraries target)
xgboost_link_nccl(${target})
endif (USE_NCCL)
if(USE_RCCL)
xgboost_link_rccl(${target})
endif()
if (USE_NVTX)
target_link_libraries(${target} PRIVATE CUDA::nvToolsExt)
endif (USE_NVTX)


@@ -52,11 +52,11 @@ endif (BUILD_WITH_SHARED_NCCL)
find_path(NCCL_INCLUDE_DIR
NAMES nccl.h
PATHS $ENV{NCCL_ROOT}/include ${NCCL_ROOT}/include)
HINTS ${NCCL_ROOT}/include $ENV{NCCL_ROOT}/include)
find_library(NCCL_LIBRARY
NAMES ${NCCL_LIB_NAME}
PATHS $ENV{NCCL_ROOT}/lib/ ${NCCL_ROOT}/lib)
HINTS ${NCCL_ROOT}/lib $ENV{NCCL_ROOT}/lib/)
message(STATUS "Using nccl library: ${NCCL_LIBRARY}")


@@ -3,6 +3,8 @@
set(USE_OPENMP @USE_OPENMP@)
set(USE_CUDA @USE_CUDA@)
set(USE_NCCL @USE_NCCL@)
set(USE_HIP @USE_HIP@)
set(USE_RCCL @USE_RCCL@)
set(XGBOOST_BUILD_STATIC_LIB @BUILD_STATIC_LIB@)
include(CMakeFindDependencyMacro)
@@ -15,6 +17,9 @@ if (XGBOOST_BUILD_STATIC_LIB)
if(USE_CUDA)
find_dependency(CUDA)
endif()
if(USE_HIP)
find_dependency(HIP)
endif()
# nccl should be linked statically if xgboost is built as static library.
endif (XGBOOST_BUILD_STATIC_LIB)


@@ -4,13 +4,13 @@ python mapfeat.py
# split train and test
python mknfold.py machine.txt 1
# training and output the models
../../xgboost machine.conf
../../../xgboost machine.conf
# output predictions of test data
../../xgboost machine.conf task=pred model_in=0002.model
../../../xgboost machine.conf task=pred model_in=0002.model
# print the boosters of 0002.model in dump.raw.txt
../../xgboost machine.conf task=dump model_in=0002.model name_dump=dump.raw.txt
../../../xgboost machine.conf task=dump model_in=0002.model name_dump=dump.raw.txt
# print the boosters of 0002.model in dump.nice.txt with feature map
../../xgboost machine.conf task=dump model_in=0002.model fmap=featmap.txt name_dump=dump.nice.txt
../../../xgboost machine.conf task=dump model_in=0002.model fmap=featmap.txt name_dump=dump.nice.txt
# cat the result
cat dump.nice.txt


@@ -106,7 +106,7 @@ Please send pull requests if you find ones that are missing here.
- Prarthana Bhat, 2nd place winner in [DYD Competition](https://datahack.analyticsvidhya.com/contest/date-your-data/). Link to [Solution](https://github.com/analyticsvidhya/DateYourData/blob/master/Prathna_Bhat_Model.R).
## Talks
- [XGBoost: A Scalable Tree Boosting System](http://datascience.la/xgboost-workshop-and-meetup-talk-with-tianqi-chen/) (video+slides) by Tianqi Chen at the Los Angeles Data Science meetup
- XGBoost: A Scalable Tree Boosting System ([video](https://www.youtube.com/watch?v=Vly8xGnNiWs) + [slides](https://speakerdeck.com/datasciencela/tianqi-chen-xgboost-overview-and-latest-news-la-meetup-talk)) by Tianqi Chen at the Los Angeles Data Science meetup
## Tutorials


@@ -11,17 +11,27 @@ import numpy as np
import xgboost as xgb
plt.rcParams.update({'font.size': 13})
plt.rcParams.update({"font.size": 13})
# Function to visualize censored labels
def plot_censored_labels(X, y_lower, y_upper):
def replace_inf(x, target_value):
def plot_censored_labels(
X: np.ndarray, y_lower: np.ndarray, y_upper: np.ndarray
) -> None:
def replace_inf(x: np.ndarray, target_value: float) -> np.ndarray:
x[np.isinf(x)] = target_value
return x
plt.plot(X, y_lower, 'o', label='y_lower', color='blue')
plt.plot(X, y_upper, 'o', label='y_upper', color='fuchsia')
plt.vlines(X, ymin=replace_inf(y_lower, 0.01), ymax=replace_inf(y_upper, 1000),
label='Range for y', color='gray')
plt.plot(X, y_lower, "o", label="y_lower", color="blue")
plt.plot(X, y_upper, "o", label="y_upper", color="fuchsia")
plt.vlines(
X,
ymin=replace_inf(y_lower, 0.01),
ymax=replace_inf(y_upper, 1000.0),
label="Range for y",
color="gray",
)
# Toy data
X = np.array([1, 2, 3, 4, 5]).reshape((-1, 1))
@@ -33,11 +43,11 @@ y_upper = np.array([INF, INF, 20, 50, INF])
plt.figure(figsize=(5, 4))
plot_censored_labels(X, y_lower, y_upper)
plt.ylim((6, 200))
plt.legend(loc='lower right')
plt.title('Toy data')
plt.xlabel('Input feature')
plt.ylabel('Label')
plt.yscale('log')
plt.legend(loc="lower right")
plt.title("Toy data")
plt.xlabel("Input feature")
plt.ylabel("Label")
plt.yscale("log")
plt.tight_layout()
plt.show(block=True)
@@ -46,54 +56,83 @@ grid_pts = np.linspace(0.8, 5.2, 1000).reshape((-1, 1))
# Train AFT model using XGBoost
dmat = xgb.DMatrix(X)
dmat.set_float_info('label_lower_bound', y_lower)
dmat.set_float_info('label_upper_bound', y_upper)
params = {'max_depth': 3, 'objective':'survival:aft', 'min_child_weight': 0}
dmat.set_float_info("label_lower_bound", y_lower)
dmat.set_float_info("label_upper_bound", y_upper)
params = {"max_depth": 3, "objective": "survival:aft", "min_child_weight": 0}
accuracy_history = []
def plot_intermediate_model_callback(env):
"""Custom callback to plot intermediate models"""
# Compute y_pred = prediction using the intermediate model, at current boosting iteration
y_pred = env.model.predict(dmat)
# "Accuracy" = the number of data points whose ranged label (y_lower, y_upper) includes
# the corresponding predicted label (y_pred)
acc = np.sum(np.logical_and(y_pred >= y_lower, y_pred <= y_upper)/len(X) * 100)
class PlotIntermediateModel(xgb.callback.TrainingCallback):
"""Custom callback to plot intermediate models."""
def __init__(self) -> None:
super().__init__()
def after_iteration(
self,
model: xgb.Booster,
epoch: int,
evals_log: xgb.callback.TrainingCallback.EvalsLog,
) -> bool:
"""Run after training is finished."""
# Compute y_pred = prediction using the intermediate model, at current boosting
# iteration
y_pred = model.predict(dmat)
# "Accuracy" = the number of data points whose ranged label (y_lower, y_upper)
# includes the corresponding predicted label (y_pred)
acc = np.sum(
np.logical_and(y_pred >= y_lower, y_pred <= y_upper) / len(X) * 100
)
accuracy_history.append(acc)
# Plot ranged labels as well as predictions by the model
plt.subplot(5, 3, env.iteration + 1)
plt.subplot(5, 3, epoch + 1)
plot_censored_labels(X, y_lower, y_upper)
y_pred_grid_pts = env.model.predict(xgb.DMatrix(grid_pts))
plt.plot(grid_pts, y_pred_grid_pts, 'r-', label='XGBoost AFT model', linewidth=4)
plt.title('Iteration {}'.format(env.iteration), x=0.5, y=0.8)
y_pred_grid_pts = model.predict(xgb.DMatrix(grid_pts))
plt.plot(
grid_pts, y_pred_grid_pts, "r-", label="XGBoost AFT model", linewidth=4
)
plt.title("Iteration {}".format(epoch), x=0.5, y=0.8)
plt.xlim((0.8, 5.2))
plt.ylim((1 if np.min(y_pred) < 6 else 6, 200))
plt.yscale('log')
plt.yscale("log")
return False
res = {}
res: xgb.callback.TrainingCallback.EvalsLog = {}
plt.figure(figsize=(12, 13))
bst = xgb.train(params, dmat, 15, [(dmat, 'train')], evals_result=res,
callbacks=[plot_intermediate_model_callback])
bst = xgb.train(
params,
dmat,
15,
[(dmat, "train")],
evals_result=res,
callbacks=[PlotIntermediateModel()],
)
plt.tight_layout()
plt.legend(loc='lower center', ncol=4,
plt.legend(
loc="lower center",
ncol=4,
bbox_to_anchor=(0.5, 0),
bbox_transform=plt.gcf().transFigure)
bbox_transform=plt.gcf().transFigure,
)
plt.tight_layout()
# Plot negative log likelihood over boosting iterations
plt.figure(figsize=(8, 3))
plt.subplot(1, 2, 1)
plt.plot(res['train']['aft-nloglik'], 'b-o', label='aft-nloglik')
plt.xlabel('# Boosting Iterations')
plt.legend(loc='best')
plt.plot(res["train"]["aft-nloglik"], "b-o", label="aft-nloglik")
plt.xlabel("# Boosting Iterations")
plt.legend(loc="best")
# Plot "accuracy" over boosting iterations
# "Accuracy" = the number of data points whose ranged label (y_lower, y_upper) includes
# the corresponding predicted label (y_pred)
plt.subplot(1, 2, 2)
plt.plot(accuracy_history, 'r-o', label='Accuracy (%)')
plt.xlabel('# Boosting Iterations')
plt.legend(loc='best')
plt.plot(accuracy_history, "r-o", label="Accuracy (%)")
plt.xlabel("# Boosting Iterations")
plt.legend(loc="best")
plt.tight_layout()
plt.show()


@@ -53,15 +53,7 @@ int main() {
// configure the training
// available parameters are described here:
// https://xgboost.readthedocs.io/en/latest/parameter.html
safe_xgboost(XGBoosterSetParam(booster, "tree_method", use_gpu ? "gpu_hist" : "hist"));
if (use_gpu) {
// set the GPU to use;
// this is not necessary, but provided here as an illustration
safe_xgboost(XGBoosterSetParam(booster, "gpu_id", "0"));
} else {
// avoid evaluating objective and metric on a GPU
safe_xgboost(XGBoosterSetParam(booster, "gpu_id", "-1"));
}
safe_xgboost(XGBoosterSetParam(booster, "device", use_gpu ? "cuda" : "cpu"));
safe_xgboost(XGBoosterSetParam(booster, "objective", "binary:logistic"));
safe_xgboost(XGBoosterSetParam(booster, "min_child_weight", "1"));


@@ -18,43 +18,45 @@ def main(client):
# The Veterans' Administration Lung Cancer Trial
# The Statistical Analysis of Failure Time Data by Kalbfleisch J. and Prentice R (1980)
CURRENT_DIR = os.path.dirname(__file__)
df = dd.read_csv(os.path.join(CURRENT_DIR, os.pardir, 'data', 'veterans_lung_cancer.csv'))
df = dd.read_csv(
os.path.join(CURRENT_DIR, os.pardir, "data", "veterans_lung_cancer.csv")
)
# DaskDMatrix acts like normal DMatrix, works as a proxy for local
# DMatrix scatter around workers.
# For AFT survival, you'd need to extract the lower and upper bounds for the label
# and pass them as arguments to DaskDMatrix.
y_lower_bound = df['Survival_label_lower_bound']
y_upper_bound = df['Survival_label_upper_bound']
X = df.drop(['Survival_label_lower_bound',
'Survival_label_upper_bound'], axis=1)
dtrain = DaskDMatrix(client, X, label_lower_bound=y_lower_bound,
label_upper_bound=y_upper_bound)
y_lower_bound = df["Survival_label_lower_bound"]
y_upper_bound = df["Survival_label_upper_bound"]
X = df.drop(["Survival_label_lower_bound", "Survival_label_upper_bound"], axis=1)
dtrain = DaskDMatrix(
client, X, label_lower_bound=y_lower_bound, label_upper_bound=y_upper_bound
)
# Use train method from xgboost.dask instead of xgboost. This
# distributed version of train returns a dictionary containing the
# resulting booster and evaluation history obtained from
# evaluation metrics.
params = {'verbosity': 1,
'objective': 'survival:aft',
'eval_metric': 'aft-nloglik',
'learning_rate': 0.05,
'aft_loss_distribution_scale': 1.20,
'aft_loss_distribution': 'normal',
'max_depth': 6,
'lambda': 0.01,
'alpha': 0.02}
output = xgb.dask.train(client,
params,
dtrain,
num_boost_round=100,
evals=[(dtrain, 'train')])
bst = output['booster']
history = output['history']
params = {
"verbosity": 1,
"objective": "survival:aft",
"eval_metric": "aft-nloglik",
"learning_rate": 0.05,
"aft_loss_distribution_scale": 1.20,
"aft_loss_distribution": "normal",
"max_depth": 6,
"lambda": 0.01,
"alpha": 0.02,
}
output = xgb.dask.train(
client, params, dtrain, num_boost_round=100, evals=[(dtrain, "train")]
)
bst = output["booster"]
history = output["history"]
# you can pass output directly into `predict` too.
prediction = xgb.dask.predict(client, bst, dtrain)
print('Evaluation history: ', history)
print("Evaluation history: ", history)
# Uncomment the following line to save the model to the disk
# bst.save_model('survival_model.json')
@@ -62,7 +64,7 @@ def main(client):
return prediction
if __name__ == '__main__':
if __name__ == "__main__":
# or use other clusters for scaling
with LocalCluster(n_workers=7, threads_per_worker=4) as cluster:
with Client(cluster) as client:


@@ -25,21 +25,23 @@ def main(client):
# distributed version of train returns a dictionary containing the
# resulting booster and evaluation history obtained from
# evaluation metrics.
output = xgb.dask.train(client,
{'verbosity': 1,
'tree_method': 'hist'},
output = xgb.dask.train(
client,
{"verbosity": 1, "tree_method": "hist"},
dtrain,
num_boost_round=4, evals=[(dtrain, 'train')])
bst = output['booster']
history = output['history']
num_boost_round=4,
evals=[(dtrain, "train")],
)
bst = output["booster"]
history = output["history"]
# you can pass output directly into `predict` too.
prediction = xgb.dask.predict(client, bst, dtrain)
print('Evaluation history:', history)
print("Evaluation history:", history)
return prediction
if __name__ == '__main__':
if __name__ == "__main__":
# or use other clusters for scaling
with LocalCluster(n_workers=7, threads_per_worker=4) as cluster:
with Client(cluster) as client:


@@ -13,33 +13,38 @@ from xgboost import dask as dxgb
from xgboost.dask import DaskDMatrix
def using_dask_matrix(client: Client, X, y):
# DaskDMatrix acts like normal DMatrix, works as a proxy for local
# DMatrix scatter around workers.
def using_dask_matrix(client: Client, X: da.Array, y: da.Array) -> da.Array:
# DaskDMatrix acts like normal DMatrix, works as a proxy for local DMatrix scatter
# around workers.
dtrain = DaskDMatrix(client, X, y)
# Use train method from xgboost.dask instead of xgboost. This
# distributed version of train returns a dictionary containing the
# resulting booster and evaluation history obtained from
# evaluation metrics.
output = xgb.dask.train(client,
{'verbosity': 2,
# Use train method from xgboost.dask instead of xgboost. This distributed version
# of train returns a dictionary containing the resulting booster and evaluation
# history obtained from evaluation metrics.
output = xgb.dask.train(
client,
{
"verbosity": 2,
"tree_method": "hist",
# Golden line for GPU training
'tree_method': 'gpu_hist'},
"device": "cuda",
},
dtrain,
num_boost_round=4, evals=[(dtrain, 'train')])
bst = output['booster']
history = output['history']
num_boost_round=4,
evals=[(dtrain, "train")],
)
bst = output["booster"]
history = output["history"]
# you can pass output directly into `predict` too.
prediction = xgb.dask.predict(client, bst, dtrain)
print('Evaluation history:', history)
print("Evaluation history:", history)
return prediction
def using_quantile_device_dmatrix(client: Client, X, y):
"""`DaskQuantileDMatrix` is a data type specialized for `gpu_hist` and `hist` tree
methods for reducing memory usage.
def using_quantile_device_dmatrix(client: Client, X: da.Array, y: da.Array) -> da.Array:
"""`DaskQuantileDMatrix` is a data type specialized for `hist` tree methods for
reducing memory usage.
.. versionadded:: 1.2.0
@@ -52,17 +57,19 @@ def using_quantile_device_dmatrix(client: Client, X, y):
# the `ref` argument of `DaskQuantileDMatrix`.
dtrain = dxgb.DaskQuantileDMatrix(client, X, y)
output = xgb.dask.train(
client, {"verbosity": 2, "tree_method": "gpu_hist"}, dtrain, num_boost_round=4
client,
{"verbosity": 2, "tree_method": "hist", "device": "cuda"},
dtrain,
num_boost_round=4,
)
prediction = xgb.dask.predict(client, output, X)
return prediction
if __name__ == '__main__':
if __name__ == "__main__":
# `LocalCUDACluster` is used for assigning GPU to XGBoost processes. Here
# `n_workers` represents the number of GPUs since we use one GPU per worker
# process.
# `n_workers` represents the number of GPUs since we use one GPU per worker process.
with LocalCUDACluster(n_workers=2, threads_per_worker=4) as cluster:
with Client(cluster) as client:
# generate some random data for demonstration
@@ -71,7 +78,7 @@ if __name__ == '__main__':
X = da.random.random(size=(m, n), chunks=10000)
y = da.random.random(size=(m,), chunks=10000)
print('Using DaskQuantileDMatrix')
print("Using DaskQuantileDMatrix")
from_ddqdm = using_quantile_device_dmatrix(client, X, y)
print('Using DMatrix')
print("Using DMatrix")
from_dmatrix = using_dask_matrix(client, X, y)


@@ -21,7 +21,8 @@ def main(client):
y = da.random.random(m, partition_size)
regressor = xgboost.dask.DaskXGBRegressor(verbosity=1)
regressor.set_params(tree_method='gpu_hist')
# set the device to CUDA
regressor.set_params(tree_method="hist", device="cuda")
# assigning client here is optional
regressor.client = client
@@ -31,13 +32,13 @@ def main(client):
bst = regressor.get_booster()
history = regressor.evals_result()
print('Evaluation history:', history)
print("Evaluation history:", history)
# returned prediction is always a dask array.
assert isinstance(prediction, da.Array)
return bst # returning the trained model
if __name__ == '__main__':
if __name__ == "__main__":
# With dask cuda, one can scale up XGBoost to arbitrary GPU clusters.
# `LocalCUDACluster` used here is only for demonstration purpose.
with LocalCUDACluster() as cluster:


@@ -1,5 +0,0 @@
# GPU Acceleration Demo
`cover_type.py` shows how to train a model on the [forest cover type](https://archive.ics.uci.edu/ml/datasets/covertype) dataset using GPU acceleration. The forest cover type dataset has 581,012 rows and 54 features, making it time consuming to process. We compare the run-time and accuracy of the GPU and CPU histogram algorithms.
`shap.ipynb` demonstrates using GPU acceleration to compute SHAP values for feature importance.


@@ -0,0 +1,8 @@
:orphan:
GPU Acceleration Demo
=====================
This is a collection of demonstration scripts to showcase the basic usage of GPU. Please
see :doc:`/gpu/index` for more info. There are other demonstrations for distributed GPU
training using dask or spark.


@@ -1,41 +1,49 @@
"""
Using xgboost on GPU devices
============================
Shows how to train a model on the `forest cover type
<https://archive.ics.uci.edu/ml/datasets/covertype>`_ dataset using GPU
acceleration. The forest cover type dataset has 581,012 rows and 54 features, making it
time consuming to process. We compare the run-time and accuracy of the GPU and CPU
histogram algorithms.
In addition, the demo showcases using GPU with other GPU-related libraries including
cupy and cuml. These libraries are not strictly required.
"""
import time
import cupy as cp
from cuml.model_selection import train_test_split
from sklearn.datasets import fetch_covtype
from sklearn.model_selection import train_test_split
import xgboost as xgb
# Fetch dataset using sklearn
cov = fetch_covtype()
X = cov.data
y = cov.target
X, y = fetch_covtype(return_X_y=True)
X = cp.array(X)
y = cp.array(y)
y -= y.min()
# Create 0.75/0.25 train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, train_size=0.75,
random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, train_size=0.75, random_state=42
)
# Specify sufficient boosting iterations to reach a minimum
num_round = 3000
# Leave most parameters as default
param = {'objective': 'multi:softmax', # Specify multiclass classification
'num_class': 8, # Number of possible output classes
'tree_method': 'gpu_hist' # Use GPU accelerated algorithm
}
# Convert input data from numpy to XGBoost format
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
gpu_res = {} # Store accuracy result
tmp = time.time()
clf = xgb.XGBClassifier(device="cuda", n_estimators=num_round)
# Train model
xgb.train(param, dtrain, num_round, evals=[(dtest, 'test')], evals_result=gpu_res)
print("GPU Training Time: %s seconds" % (str(time.time() - tmp)))
start = time.time()
clf.fit(X_train, y_train, eval_set=[(X_test, y_test)])
gpu_res = clf.evals_result()
print("GPU Training Time: %s seconds" % (str(time.time() - start)))
# Repeat for CPU algorithm
tmp = time.time()
param['tree_method'] = 'hist'
cpu_res = {}
xgb.train(param, dtrain, num_round, evals=[(dtest, 'test')], evals_result=cpu_res)
print("CPU Training Time: %s seconds" % (str(time.time() - tmp)))
clf = xgb.XGBClassifier(device="cpu", n_estimators=num_round)
start = time.time()
cpu_res = clf.evals_result()
print("CPU Training Time: %s seconds" % (str(time.time() - start)))

File diff suppressed because one or more lines are too long


@@ -0,0 +1,55 @@
"""
Use GPU to speedup SHAP value computation
=========================================
Demonstrates using GPU acceleration to compute SHAP values for feature importance.
"""
import shap
from sklearn.datasets import fetch_california_housing
import xgboost as xgb
# Fetch dataset using sklearn
data = fetch_california_housing()
print(data.DESCR)
X = data.data
y = data.target
num_round = 500
param = {
"eta": 0.05,
"max_depth": 10,
"tree_method": "hist",
"device": "cuda",
}
# GPU accelerated training
dtrain = xgb.DMatrix(X, label=y, feature_names=data.feature_names)
model = xgb.train(param, dtrain, num_round)
# Compute shap values using GPU with xgboost
model.set_param({"device": "cuda"})
shap_values = model.predict(dtrain, pred_contribs=True)
# Compute shap interaction values using GPU
shap_interaction_values = model.predict(dtrain, pred_interactions=True)
# shap will call the GPU accelerated version as long as the device parameter is set to
# "cuda"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# visualize the first prediction's explanation
shap.force_plot(
explainer.expected_value,
shap_values[0, :],
X[0, :],
feature_names=data.feature_names,
matplotlib=True,
)
# Show a summary of feature importance
shap.summary_plot(shap_values, X, plot_type="bar", feature_names=data.feature_names)


@@ -1,9 +1,9 @@
'''
"""
Demo for using and defining callback functions
==============================================
.. versionadded:: 1.3.0
'''
"""
import argparse
import os
import tempfile
@@ -17,10 +17,11 @@ import xgboost as xgb
class Plotting(xgb.callback.TrainingCallback):
'''Plot evaluation result during training. Only for demonstration purpose as it's quite
"""Plot evaluation result during training. Only for demonstration purpose as it's quite
slow to draw.
'''
"""
def __init__(self, rounds):
self.fig = plt.figure()
self.ax = self.fig.add_subplot(111)
@@ -31,16 +32,16 @@ class Plotting(xgb.callback.TrainingCallback):
plt.ion()
def _get_key(self, data, metric):
return f'{data}-{metric}'
return f"{data}-{metric}"
def after_iteration(self, model, epoch, evals_log):
'''Update the plot.'''
"""Update the plot."""
if not self.lines:
for data, metric in evals_log.items():
for metric_name, log in metric.items():
key = self._get_key(data, metric_name)
expanded = log + [0] * (self.rounds - len(log))
self.lines[key], = self.ax.plot(self.x, expanded, label=key)
(self.lines[key],) = self.ax.plot(self.x, expanded, label=key)
self.ax.legend()
else:
# https://pythonspot.com/matplotlib-update-plot/
@@ -55,8 +56,8 @@ class Plotting(xgb.callback.TrainingCallback):
def custom_callback():
'''Demo for defining a custom callback function that plots evaluation result during
training.'''
"""Demo for defining a custom callback function that plots evaluation result during
training."""
X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)
@@ -69,14 +70,16 @@ def custom_callback():
# Pass it to the `callbacks` parameter as a list.
xgb.train(
{
'objective': 'binary:logistic',
'eval_metric': ['error', 'rmse'],
'tree_method': 'gpu_hist'
"objective": "binary:logistic",
"eval_metric": ["error", "rmse"],
"tree_method": "hist",
"device": "cuda",
},
D_train,
evals=[(D_train, 'Train'), (D_valid, 'Valid')],
evals=[(D_train, "Train"), (D_valid, "Valid")],
num_boost_round=num_boost_round,
callbacks=[plotting])
callbacks=[plotting],
)
def check_point_callback():
@@ -89,10 +92,10 @@ def check_point_callback():
if i == 0:
continue
if as_pickle:
path = os.path.join(tmpdir, 'model_' + str(i) + '.pkl')
path = os.path.join(tmpdir, "model_" + str(i) + ".pkl")
else:
path = os.path.join(tmpdir, 'model_' + str(i) + '.json')
assert(os.path.exists(path))
path = os.path.join(tmpdir, "model_" + str(i) + ".json")
assert os.path.exists(path)
X, y = load_breast_cancer(return_X_y=True)
m = xgb.DMatrix(X, y)
@@ -100,31 +103,36 @@ def check_point_callback():
with tempfile.TemporaryDirectory() as tmpdir:
# Use callback class from xgboost.callback
# Feel free to subclass/customize it to suit your need.
check_point = xgb.callback.TrainingCheckPoint(directory=tmpdir,
iterations=rounds,
name='model')
xgb.train({'objective': 'binary:logistic'}, m,
check_point = xgb.callback.TrainingCheckPoint(
directory=tmpdir, iterations=rounds, name="model"
)
xgb.train(
{"objective": "binary:logistic"},
m,
num_boost_round=10,
verbose_eval=False,
callbacks=[check_point])
callbacks=[check_point],
)
check(False)
# This version of checkpoint saves everything including parameters and
# model. See: doc/tutorials/saving_model.rst
check_point = xgb.callback.TrainingCheckPoint(directory=tmpdir,
iterations=rounds,
as_pickle=True,
name='model')
xgb.train({'objective': 'binary:logistic'}, m,
check_point = xgb.callback.TrainingCheckPoint(
directory=tmpdir, iterations=rounds, as_pickle=True, name="model"
)
xgb.train(
{"objective": "binary:logistic"},
m,
num_boost_round=10,
verbose_eval=False,
callbacks=[check_point])
callbacks=[check_point],
)
check(True)
if __name__ == '__main__':
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--plot', default=1, type=int)
parser.add_argument("--plot", default=1, type=int)
args = parser.parse_args()
check_point_callback()
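As a complement to the callback demo above, here is a small hypothetical sketch of the one behaviour it does not exercise: `after_iteration` returning `True` to request an early stop. The class name and the round limit are invented for illustration.

```python
import xgboost as xgb
from sklearn.datasets import load_breast_cancer


class StopAfter(xgb.callback.TrainingCallback):
    """Hypothetical callback: request a stop once `limit` rounds have completed."""

    def __init__(self, limit: int) -> None:
        self._limit = limit
        super().__init__()

    def after_iteration(self, model, epoch, evals_log) -> bool:
        # Returning True asks XGBoost to stop training after this iteration.
        return epoch + 1 >= self._limit


X, y = load_breast_cancer(return_X_y=True)
dtrain = xgb.DMatrix(X, y)
booster = xgb.train(
    {"objective": "binary:logistic"},
    dtrain,
    num_boost_round=100,
    callbacks=[StopAfter(5)],
)
print(booster.num_boosted_rounds())  # expected to be well below 100
```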


@@ -63,7 +63,8 @@ def load_cat_in_the_dat() -> tuple[pd.DataFrame, pd.Series]:
params = {
"tree_method": "gpu_hist",
"tree_method": "hist",
"device": "cuda",
"n_estimators": 32,
"colsample_bylevel": 0.7,
}


@@ -58,13 +58,13 @@ def main() -> None:
# Specify `enable_categorical` to True, also we use onehot encoding based split
# here for demonstration. For details see the document of `max_cat_to_onehot`.
reg = xgb.XGBRegressor(
tree_method="gpu_hist", enable_categorical=True, max_cat_to_onehot=5
tree_method="hist", enable_categorical=True, max_cat_to_onehot=5, device="cuda"
)
reg.fit(X, y, eval_set=[(X, y)])
# Pass in already encoded data
X_enc, y_enc = make_categorical(100, 10, 4, True)
reg_enc = xgb.XGBRegressor(tree_method="gpu_hist")
reg_enc = xgb.XGBRegressor(tree_method="hist", device="cuda")
reg_enc.fit(X_enc, y_enc, eval_set=[(X_enc, y_enc)])
reg_results = np.array(reg.evals_result()["validation_0"]["rmse"])
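For context, a self-contained sketch of the same `enable_categorical` flag applied to a pandas `category` column; the synthetic data and column names are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(1994)
X = pd.DataFrame(
    {
        # A pandas categorical column is treated as a categorical feature.
        "cat": pd.Categorical(rng.choice(["a", "b", "c"], size=256)),
        "num": rng.normal(size=256),
    }
)
y = rng.normal(size=256)

reg = xgb.XGBRegressor(tree_method="hist", enable_categorical=True)
reg.fit(X, y)
print(reg.predict(X.head()))
```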


@@ -22,7 +22,10 @@ import xgboost
def make_batches(
n_samples_per_batch: int, n_features: int, n_batches: int, tmpdir: str,
n_samples_per_batch: int,
n_features: int,
n_batches: int,
tmpdir: str,
) -> List[Tuple[str, str]]:
files: List[Tuple[str, str]] = []
rng = np.random.RandomState(1994)
@@ -38,6 +41,7 @@ def make_batches(
class Iterator(xgboost.DataIter):
"""A custom iterator for loading files in batches."""
def __init__(self, file_paths: List[Tuple[str, str]]):
self._file_paths = file_paths
self._it = 0
@@ -82,10 +86,11 @@ def main(tmpdir: str) -> xgboost.Booster:
missing = np.NaN
Xy = xgboost.DMatrix(it, missing=missing, enable_categorical=False)
# Other tree methods including ``hist`` and ``gpu_hist`` also work, see tutorial in
# ``approx`` is also supported, but less efficient due to sketching. GPU behaves
# differently than CPU tree methods as it uses a hybrid approach. See tutorial in
# doc for details.
booster = xgboost.train(
{"tree_method": "approx", "max_depth": 2},
{"tree_method": "hist", "max_depth": 4},
Xy,
evals=[(Xy, "Train")],
num_boost_round=10,
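To make the iterator protocol above concrete, below is a minimal self-contained sketch of a `DataIter` feeding in-memory batches into an external-memory `DMatrix`; the class name, batch sizes, and cache location are illustrative assumptions.

```python
import os
import tempfile

import numpy as np
import xgboost


class InMemoryIter(xgboost.DataIter):
    """Hypothetical iterator feeding pre-made in-memory batches to XGBoost."""

    def __init__(self, batches, cache_dir: str) -> None:
        self._batches = batches
        self._it = 0
        # `cache_prefix` tells XGBoost where to place the external-memory cache.
        super().__init__(cache_prefix=os.path.join(cache_dir, "cache"))

    def next(self, input_data) -> int:
        """Hand the next batch to XGBoost through the `input_data` callback."""
        if self._it == len(self._batches):
            return 0  # no more batches
        X, y = self._batches[self._it]
        input_data(data=X, label=y)
        self._it += 1
        return 1

    def reset(self) -> None:
        """Rewind so XGBoost can make another pass over the data."""
        self._it = 0


rng = np.random.RandomState(1994)
batches = [(rng.randn(256, 8), rng.randn(256)) for _ in range(4)]
with tempfile.TemporaryDirectory() as tmpdir:
    Xy = xgboost.DMatrix(InMemoryIter(batches, tmpdir), missing=np.nan)
    booster = xgboost.train({"tree_method": "hist", "max_depth": 4}, Xy, num_boost_round=10)
```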


@@ -0,0 +1,214 @@
"""
Getting started with learning to rank
=====================================
.. versionadded:: 2.0.0
This is a demonstration of using XGBoost for learning to rank tasks using the
MSLR_10k_letor dataset. For more information about the dataset, please visit its
`description page <https://www.microsoft.com/en-us/research/project/mslr/>`_.
This is a two-part demo: the first part contains a basic example of using XGBoost to
train on relevance degree, and the second part simulates click data and enables
position-debiasing training.
For an overview of learning to rank in XGBoost, please see
:doc:`Learning to Rank </tutorials/learning_to_rank>`.
"""
from __future__ import annotations
import argparse
import json
import os
import pickle as pkl
import numpy as np
import pandas as pd
from sklearn.datasets import load_svmlight_file
import xgboost as xgb
from xgboost.testing.data import RelDataCV, simulate_clicks, sort_ltr_samples
def load_mlsr_10k(data_path: str, cache_path: str) -> RelDataCV:
"""Load the MSLR10k dataset from data_path and cache a pickle object in cache_path.
Returns
-------
A list of tuples [(X, y, qid), ...].
"""
root_path = os.path.expanduser(data_path)
cacheroot_path = os.path.expanduser(cache_path)
cache_path = os.path.join(cacheroot_path, "MSLR_10K_LETOR.pkl")
# Use only the Fold1 for demo:
# Train, Valid, Test
# {S1,S2,S3}, S4, S5
fold = 1
if not os.path.exists(cache_path):
fold_path = os.path.join(root_path, f"Fold{fold}")
train_path = os.path.join(fold_path, "train.txt")
valid_path = os.path.join(fold_path, "vali.txt")
test_path = os.path.join(fold_path, "test.txt")
X_train, y_train, qid_train = load_svmlight_file(
train_path, query_id=True, dtype=np.float32
)
y_train = y_train.astype(np.int32)
qid_train = qid_train.astype(np.int32)
X_valid, y_valid, qid_valid = load_svmlight_file(
valid_path, query_id=True, dtype=np.float32
)
y_valid = y_valid.astype(np.int32)
qid_valid = qid_valid.astype(np.int32)
X_test, y_test, qid_test = load_svmlight_file(
test_path, query_id=True, dtype=np.float32
)
y_test = y_test.astype(np.int32)
qid_test = qid_test.astype(np.int32)
data = RelDataCV(
train=(X_train, y_train, qid_train),
test=(X_test, y_test, qid_test),
max_rel=4,
)
with open(cache_path, "wb") as fd:
pkl.dump(data, fd)
with open(cache_path, "rb") as fd:
data = pkl.load(fd)
return data
def ranking_demo(args: argparse.Namespace) -> None:
"""Demonstration for learning to rank with relevance degree."""
data = load_mlsr_10k(args.data, args.cache)
# Sort data according to query index
X_train, y_train, qid_train = data.train
sorted_idx = np.argsort(qid_train)
X_train = X_train[sorted_idx]
y_train = y_train[sorted_idx]
qid_train = qid_train[sorted_idx]
X_test, y_test, qid_test = data.test
sorted_idx = np.argsort(qid_test)
X_test = X_test[sorted_idx]
y_test = y_test[sorted_idx]
qid_test = qid_test[sorted_idx]
ranker = xgb.XGBRanker(
tree_method="hist",
device="cuda",
lambdarank_pair_method="topk",
lambdarank_num_pair_per_sample=13,
eval_metric=["ndcg@1", "ndcg@8"],
)
ranker.fit(
X_train,
y_train,
qid=qid_train,
eval_set=[(X_test, y_test)],
eval_qid=[qid_test],
verbose=True,
)
def click_data_demo(args: argparse.Namespace) -> None:
"""Demonstration for learning to rank with click data."""
data = load_mlsr_10k(args.data, args.cache)
train, test = simulate_clicks(data)
assert test is not None
assert train.X.shape[0] == train.click.size
assert test.X.shape[0] == test.click.size
assert test.score.dtype == np.float32
assert test.click.dtype == np.int32
X_train, clicks_train, y_train, qid_train = sort_ltr_samples(
train.X,
train.y,
train.qid,
train.click,
train.pos,
)
X_test, clicks_test, y_test, qid_test = sort_ltr_samples(
test.X,
test.y,
test.qid,
test.click,
test.pos,
)
class ShowPosition(xgb.callback.TrainingCallback):
def after_iteration(
self,
model: xgb.Booster,
epoch: int,
evals_log: xgb.callback.TrainingCallback.EvalsLog,
) -> bool:
config = json.loads(model.save_config())
ti_plus = np.array(config["learner"]["objective"]["ti+"])
tj_minus = np.array(config["learner"]["objective"]["tj-"])
df = pd.DataFrame({"ti+": ti_plus, "tj-": tj_minus})
print(df)
return False
ranker = xgb.XGBRanker(
n_estimators=512,
tree_method="hist",
device="cuda",
learning_rate=0.01,
reg_lambda=1.5,
subsample=0.8,
sampling_method="gradient_based",
# LTR specific parameters
objective="rank:ndcg",
# - Enable bias estimation
lambdarank_unbiased=True,
# - normalization (1 / (norm + 1))
lambdarank_bias_norm=1,
# - Focus on the top 12 documents
lambdarank_num_pair_per_sample=12,
lambdarank_pair_method="topk",
ndcg_exp_gain=True,
eval_metric=["ndcg@1", "ndcg@3", "ndcg@5", "ndcg@10"],
callbacks=[ShowPosition()],
)
ranker.fit(
X_train,
clicks_train,
qid=qid_train,
eval_set=[(X_test, y_test), (X_test, clicks_test)],
eval_qid=[qid_test, qid_test],
verbose=True,
)
ranker.predict(X_test)
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Demonstration of learning to rank using XGBoost."
)
parser.add_argument(
"--data",
type=str,
help="Root directory of the MSLR-WEB10K data.",
required=True,
)
parser.add_argument(
"--cache",
type=str,
help="Directory for caching processed data.",
required=True,
)
args = parser.parse_args()
ranking_demo(args)
click_data_demo(args)
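Since the full demo above requires downloading MSLR-WEB10K, here is a tiny hypothetical sketch of the same `XGBRanker` interface on synthetic data; the feature count, query sizes, and parameters are placeholders rather than recommended settings.

```python
import numpy as np
import xgboost as xgb

# Three queries with eight candidate documents each; relevance degrees in {0, 1, 2, 3}.
rng = np.random.default_rng(2023)
X = rng.normal(size=(24, 4))
y = rng.integers(0, 4, size=24)
qid = np.repeat([0, 1, 2], 8)  # rows must be grouped (sorted) by query id

ranker = xgb.XGBRanker(
    tree_method="hist",
    objective="rank:ndcg",
    lambdarank_pair_method="topk",
    eval_metric=["ndcg@1", "ndcg@8"],
)
ranker.fit(X, y, qid=qid)
scores = ranker.predict(X)  # one relevance score per document
```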


@@ -28,17 +28,18 @@ BATCHES = 32
class IterForDMatrixDemo(xgboost.core.DataIter):
'''A data iterator for XGBoost DMatrix.
"""A data iterator for XGBoost DMatrix.
`reset` and `next` are required for any data iterator; other functions here
are utilities for demonstration purposes.
'''
"""
def __init__(self):
'''Generate some random data for demonstration.
"""Generate some random data for demonstration.
Actual data can be anything that is currently supported by XGBoost.
'''
"""
self.rows = ROWS_PER_BATCH
self.cols = COLS
rng = cupy.random.RandomState(1994)
@@ -59,27 +60,26 @@ class IterForDMatrixDemo(xgboost.core.DataIter):
return cupy.concatenate(self._weights)
def data(self):
'''Utility function for obtaining current batch of data.'''
"""Utility function for obtaining current batch of data."""
return self._data[self.it]
def labels(self):
'''Utility function for obtaining current batch of label.'''
"""Utility function for obtaining current batch of label."""
return self._labels[self.it]
def weights(self):
return self._weights[self.it]
def reset(self):
'''Reset the iterator'''
"""Reset the iterator"""
self.it = 0
def next(self, input_data):
'''Yield next batch of data.'''
"""Yield next batch of data."""
if self.it == len(self._data):
# Return 0 when there's no more batch.
return 0
input_data(data=self.data(), label=self.labels(),
weight=self.weights())
input_data(data=self.data(), label=self.labels(), weight=self.weights())
self.it += 1
return 1
@@ -103,18 +103,19 @@ def main():
assert m_with_it.num_col() == m.num_col()
assert m_with_it.num_row() == m.num_row()
# Tree method must be one of `hist` or `gpu_hist`. We use `gpu_hist` for GPU
# input here.
# Tree method must be `hist`.
reg_with_it = xgboost.train(
{"tree_method": "gpu_hist"}, m_with_it, num_boost_round=rounds
{"tree_method": "hist", "device": "cuda"}, m_with_it, num_boost_round=rounds
)
predict_with_it = reg_with_it.predict(m_with_it)
reg = xgboost.train({"tree_method": "gpu_hist"}, m, num_boost_round=rounds)
reg = xgboost.train(
{"tree_method": "hist", "device": "cuda"}, m, num_boost_round=rounds
)
predict = reg.predict(m)
numpy.testing.assert_allclose(predict_with_it, predict, rtol=1e6)
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -7,6 +7,11 @@ Quantile Regression
The script is inspired by this awesome example in sklearn:
https://scikit-learn.org/stable/auto_examples/ensemble/plot_gradient_boosting_quantile.html
.. note::
The feature is only supported by the Python package. In addition, quantile
crossing can happen due to limitations in the algorithm.
"""
import argparse
from typing import Dict
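A brief sketch of the quantile objective this demo covers, fitting several quantiles in one model; as the note says, predictions for different quantiles may still cross. The synthetic data and quantile levels are illustrative assumptions.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(1994)
X = rng.normal(size=(512, 4))
y = X.sum(axis=1) + rng.normal(scale=0.5, size=512)

Xy = xgb.QuantileDMatrix(X, y)
booster = xgb.train(
    {
        "objective": "reg:quantileerror",
        # Fit several quantiles in a single model.
        "quantile_alpha": np.array([0.05, 0.5, 0.95]),
        "tree_method": "hist",
    },
    Xy,
    num_boost_round=32,
)
pred = booster.predict(Xy)  # shape (n_samples, n_quantiles); quantiles may still cross
```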


@@ -24,7 +24,7 @@ def main():
Xy = xgb.DMatrix(X_train, y_train)
evals_result: xgb.callback.EvaluationMonitor.EvalsLog = {}
booster = xgb.train(
{"tree_method": "gpu_hist", "max_depth": 6},
{"tree_method": "hist", "max_depth": 6, "device": "cuda"},
Xy,
num_boost_round=n_rounds,
evals=[(Xy, "Train")],
@@ -87,7 +87,7 @@ def main():
np.testing.assert_allclose(
np.array(prune_result["Original"]["rmse"]),
np.array(prune_result["Train"]["rmse"]),
atol=1e-5
atol=1e-5,
)

demo/nvflare/.gitignore

@@ -0,0 +1 @@
!config


@@ -0,0 +1,23 @@
{
"format_version": 2,
"executors": [
{
"tasks": [
"train"
],
"executor": {
"path": "trainer.XGBoostTrainer",
"args": {
"server_address": "localhost:9091",
"world_size": 2,
"server_cert_path": "server-cert.pem",
"client_key_path": "client-key.pem",
"client_cert_path": "client-cert.pem",
"use_gpus": false
}
}
}
],
"task_result_filters": [],
"task_data_filters": []
}


@@ -0,0 +1,22 @@
{
"format_version": 2,
"server": {
"heart_beat_timeout": 600
},
"task_data_filters": [],
"task_result_filters": [],
"workflows": [
{
"id": "server_workflow",
"path": "controller.XGBoostController",
"args": {
"port": 9091,
"world_size": 2,
"server_key_path": "server-key.pem",
"server_cert_path": "server-cert.pem",
"client_cert_path": "client-cert.pem"
}
}
],
"components": []
}


@@ -6,7 +6,7 @@ This directory contains a demo of Horizontal Federated Learning using
## Training with CPU only
To run the demo, first build XGBoost with the federated learning plugin enabled (see the
[README](../../plugin/federated/README.md)).
[README](../../../plugin/federated/README.md)).
Install NVFlare (note that currently NVFlare only supports Python 3.8):
```shell


@@ -70,8 +70,7 @@ class XGBoostTrainer(Executor):
param = {'max_depth': 2, 'eta': 1, 'objective': 'binary:logistic'}
if self._use_gpus:
self.log_info(fl_ctx, f'Training with GPU {rank}')
param['tree_method'] = 'gpu_hist'
param['gpu_id'] = rank
param['device'] = f"cuda:{rank}"
# Specify validations set to watch performance
watchlist = [(dtest, 'eval'), (dtrain, 'train')]


@@ -16,7 +16,7 @@ split -n l/${world_size} --numeric-suffixes=1 -a 1 ../../data/agaricus.txt.test
nvflare poc -n 2 --prepare
mkdir -p /tmp/nvflare/poc/admin/transfer/horizontal-xgboost
cp -fr config custom /tmp/nvflare/poc/admin/transfer/horizontal-xgboost
cp -fr ../config custom /tmp/nvflare/poc/admin/transfer/horizontal-xgboost
cp server-*.pem client-cert.pem /tmp/nvflare/poc/server/
for (( site=1; site<=world_size; site++ )); do
cp server-cert.pem client-*.pem /tmp/nvflare/poc/site-"$site"/


@@ -6,7 +6,7 @@ This directory contains a demo of Vertical Federated Learning using
## Training with CPU only
To run the demo, first build XGBoost with the federated learning plugin enabled (see the
[README](../../plugin/federated/README.md)).
[README](../../../plugin/federated/README.md)).
Install NVFlare (note that currently NVFlare only supports Python 3.8):
```shell


@@ -16,7 +16,7 @@ class SupportedTasks(object):
class XGBoostTrainer(Executor):
def __init__(self, server_address: str, world_size: int, server_cert_path: str,
client_key_path: str, client_cert_path: str):
client_key_path: str, client_cert_path: str, use_gpus: bool):
"""Trainer for federated XGBoost.
Args:
@@ -32,6 +32,7 @@ class XGBoostTrainer(Executor):
self._server_cert_path = server_cert_path
self._client_key_path = client_key_path
self._client_cert_path = client_cert_path
self._use_gpus = use_gpus
def execute(self, task_name: str, shareable: Shareable, fl_ctx: FLContext,
abort_signal: Signal) -> Shareable:
@@ -81,6 +82,8 @@ class XGBoostTrainer(Executor):
'objective': 'binary:logistic',
'eval_metric': 'auc',
}
if self._use_gpus:
self.log_info(fl_ctx, 'GPUs are not currently supported by vertical federated XGBoost')
# specify validations set to watch performance
watchlist = [(dtest, "eval"), (dtrain, "train")]


@@ -56,7 +56,7 @@ fi
nvflare poc -n 2 --prepare
mkdir -p /tmp/nvflare/poc/admin/transfer/vertical-xgboost
cp -fr config custom /tmp/nvflare/poc/admin/transfer/vertical-xgboost
cp -fr ../config custom /tmp/nvflare/poc/admin/transfer/vertical-xgboost
cp server-*.pem client-cert.pem /tmp/nvflare/poc/server/
for (( site=1; site<=world_size; site++ )); do
cp server-cert.pem client-*.pem /tmp/nvflare/poc/site-"${site}"/


@@ -1,47 +0,0 @@
Using XGBoost with RAPIDS Memory Manager (RMM) plugin (EXPERIMENTAL)
====================================================================
[RAPIDS Memory Manager (RMM)](https://github.com/rapidsai/rmm) library provides a collection of
efficient memory allocators for NVIDIA GPUs. It is now possible to use XGBoost with memory
allocators provided by RMM, by enabling the RMM integration plugin.
The demos in this directory highlights one RMM allocator in particular: **the pool sub-allocator**.
This allocator addresses the slow speed of `cudaMalloc()` by allocating a large chunk of memory
upfront. Subsequent allocations will draw from the pool of already allocated memory and thus avoid
the overhead of calling `cudaMalloc()` directly. See
[this GTC talk slides](https://on-demand.gputechconf.com/gtc/2015/presentation/S5530-Stephen-Jones.pdf)
for more details.
Before running the demos, ensure that XGBoost is compiled with the RMM plugin enabled. To do this,
run CMake with option `-DPLUGIN_RMM=ON` (`-DUSE_CUDA=ON` also required):
```
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DPLUGIN_RMM=ON
make -j4
```
CMake will attempt to locate the RMM library in your build environment. You may choose to build
RMM from the source, or install it using the Conda package manager. If CMake cannot find RMM, you
should specify the location of RMM with the CMake prefix:
```
# If using Conda:
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DPLUGIN_RMM=ON -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
# If using RMM installed with a custom location
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DPLUGIN_RMM=ON -DCMAKE_PREFIX_PATH=/path/to/rmm
```
# Informing XGBoost about RMM pool
When XGBoost is compiled with RMM, most of the large size allocation will go through RMM
allocators, but some small allocations in performance critical areas are using a different
caching allocator so that we can have better control over memory allocation behavior.
Users can override this behavior and force the use of rmm for all allocations by setting
the global configuration ``use_rmm``:
``` python
with xgb.config_context(use_rmm=True):
clf = xgb.XGBClassifier(tree_method="gpu_hist")
```
Depending on the choice of memory pool size or type of allocator, this may have negative
performance impact.
* [Using RMM with a single GPU](./rmm_singlegpu.py)
* [Using RMM with a local Dask cluster consisting of multiple GPUs](./rmm_mgpu_with_dask.py)


@@ -0,0 +1,51 @@
Using XGBoost with RAPIDS Memory Manager (RMM) plugin (EXPERIMENTAL)
====================================================================
`RAPIDS Memory Manager (RMM) <https://github.com/rapidsai/rmm>`__ library provides a
collection of efficient memory allocators for NVIDIA GPUs. It is now possible to use
XGBoost with memory allocators provided by RMM, by enabling the RMM integration plugin.
The demos in this directory highlight one RMM allocator in particular: **the pool
sub-allocator**. This allocator addresses the slow speed of ``cudaMalloc()`` by
allocating a large chunk of memory upfront. Subsequent allocations will draw from the pool
of already allocated memory and thus avoid the overhead of calling ``cudaMalloc()``
directly. See `this GTC talk slides
<https://on-demand.gputechconf.com/gtc/2015/presentation/S5530-Stephen-Jones.pdf>`_ for
more details.
Before running the demos, ensure that XGBoost is compiled with the RMM plugin enabled. To do this,
run CMake with option ``-DPLUGIN_RMM=ON`` (``-DUSE_CUDA=ON`` also required):
.. code-block:: sh
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DPLUGIN_RMM=ON
make -j$(nproc)
CMake will attempt to locate the RMM library in your build environment. You may choose to build
RMM from the source, or install it using the Conda package manager. If CMake cannot find RMM, you
should specify the location of RMM with the CMake prefix:
.. code-block:: sh
# If using Conda:
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DPLUGIN_RMM=ON -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
# If using RMM installed with a custom location
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DPLUGIN_RMM=ON -DCMAKE_PREFIX_PATH=/path/to/rmm
********************************
Informing XGBoost about RMM pool
********************************
When XGBoost is compiled with RMM, most of the large allocations will go through RMM
allocators, but some small allocations in performance-critical areas use a different
caching allocator so that we can have better control over memory allocation behavior.
Users can override this behavior and force the use of RMM for all allocations by setting
the global configuration ``use_rmm``:
.. code-block:: python
with xgb.config_context(use_rmm=True):
clf = xgb.XGBClassifier(tree_method="hist", device="cuda")
Depending on the choice of memory pool size or type of allocator, this may have negative
performance impact.
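A minimal sketch of the flow described above, assuming the `rmm` Python package is available and XGBoost was built with the RMM plugin; the dataset and training parameters are illustrative.

```python
import rmm
from sklearn.datasets import make_classification

import xgboost as xgb

# Create an RMM pool before XGBoost makes any GPU allocations.
rmm.reinitialize(pool_allocator=True)

X, y = make_classification(n_samples=10_000, n_informative=5, n_classes=3)
dtrain = xgb.DMatrix(X, label=y)

# Route all XGBoost allocations (including the small cached ones) through RMM.
with xgb.config_context(use_rmm=True):
    booster = xgb.train(
        {
            "objective": "multi:softprob",
            "num_class": 3,
            "tree_method": "hist",
            "device": "cuda",
        },
        dtrain,
        num_boost_round=10,
    )
```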


@@ -1,3 +1,7 @@
"""
Using rmm with Dask
===================
"""
import dask
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
@@ -11,25 +15,33 @@ def main(client):
# xgb.set_config(use_rmm=True)
X, y = make_classification(n_samples=10000, n_informative=5, n_classes=3)
# In practice one should prefer loading the data with dask collections instead of using
# `from_array`.
# In practice one should prefer loading the data with dask collections instead of
# using `from_array`.
X = dask.array.from_array(X)
y = dask.array.from_array(y)
dtrain = xgb.dask.DaskDMatrix(client, X, label=y)
params = {'max_depth': 8, 'eta': 0.01, 'objective': 'multi:softprob', 'num_class': 3,
'tree_method': 'gpu_hist', 'eval_metric': 'merror'}
output = xgb.dask.train(client, params, dtrain, num_boost_round=100,
evals=[(dtrain, 'train')])
bst = output['booster']
history = output['history']
for i, e in enumerate(history['train']['merror']):
print(f'[{i}] train-merror: {e}')
params = {
"max_depth": 8,
"eta": 0.01,
"objective": "multi:softprob",
"num_class": 3,
"tree_method": "hist",
"eval_metric": "merror",
"device": "cuda",
}
output = xgb.dask.train(
client, params, dtrain, num_boost_round=100, evals=[(dtrain, "train")]
)
bst = output["booster"]
history = output["history"]
for i, e in enumerate(history["train"]["merror"]):
print(f"[{i}] train-merror: {e}")
if __name__ == '__main__':
# To use RMM pool allocator with a GPU Dask cluster, just add rmm_pool_size option to
# LocalCUDACluster constructor.
with LocalCUDACluster(rmm_pool_size='2GB') as cluster:
if __name__ == "__main__":
# To use RMM pool allocator with a GPU Dask cluster, just add rmm_pool_size option
# to LocalCUDACluster constructor.
with LocalCUDACluster(rmm_pool_size="2GB") as cluster:
with Client(cluster) as client:
main(client)


@@ -1,3 +1,7 @@
"""
Using rmm on a single node device
=================================
"""
import rmm
from sklearn.datasets import make_classification
@@ -16,7 +20,8 @@ params = {
"eta": 0.01,
"objective": "multi:softprob",
"num_class": 3,
"tree_method": "gpu_hist",
"tree_method": "hist",
"device": "cuda",
}
# XGBoost will automatically use the RMM pool allocator
bst = xgb.train(params, dtrain, num_boost_round=100, evals=[(dtrain, "train")])


@@ -0,0 +1,79 @@
import argparse
import pathlib
import re
import shutil
def main(args):
if args.scala_version == "2.12":
scala_ver = "2.12"
scala_patchver = "2.12.18"
elif args.scala_version == "2.13":
scala_ver = "2.13"
scala_patchver = "2.13.11"
else:
raise ValueError(f"Unsupported Scala version: {args.scala_version}")
# Clean artifacts
if args.purge_artifacts:
for target in pathlib.Path("jvm-packages/").glob("**/target"):
if target.is_dir():
print(f"Removing {target}...")
shutil.rmtree(target)
# Update pom.xml
for pom in pathlib.Path("jvm-packages/").glob("**/pom.xml"):
print(f"Updating {pom}...")
with open(pom, "r", encoding="utf-8") as f:
lines = f.readlines()
with open(pom, "w", encoding="utf-8") as f:
replaced_scalaver = False
replaced_scala_binver = False
for line in lines:
for artifact in [
"xgboost-jvm",
"xgboost4j",
"xgboost4j-gpu",
"xgboost4j-spark",
"xgboost4j-spark-gpu",
"xgboost4j-flink",
"xgboost4j-example",
]:
line = re.sub(
f"<artifactId>{artifact}_[0-9\\.]*",
f"<artifactId>{artifact}_{scala_ver}",
line,
)
# Only replace the first occurrence of scala.version
if not replaced_scalaver:
line, nsubs = re.subn(
r"<scala.version>[0-9\.]*",
f"<scala.version>{scala_patchver}",
line,
)
if nsubs > 0:
replaced_scalaver = True
# Only replace the first occurrence of scala.binary.version
if not replaced_scala_binver:
line, nsubs = re.subn(
r"<scala.binary.version>[0-9\.]*",
f"<scala.binary.version>{scala_ver}",
line,
)
if nsubs > 0:
replaced_scala_binver = True
f.write(line)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--purge-artifacts", action="store_true")
parser.add_argument(
"--scala-version",
type=str,
required=True,
help="Version of Scala to use in the JVM packages",
choices=["2.12", "2.13"],
)
parsed_args = parser.parse_args()
main(parsed_args)


@@ -2,7 +2,6 @@ import argparse
import errno
import glob
import os
import platform
import re
import shutil
import subprocess
@@ -21,12 +20,14 @@ def normpath(path):
else:
return normalized
def cp(source, target):
source = normpath(source)
target = normpath(target)
print("cp {0} {1}".format(source, target))
shutil.copy(source, target)
def maybe_makedirs(path):
path = normpath(path)
print("mkdir -p " + path)
@@ -36,6 +37,7 @@ def maybe_makedirs(path):
if e.errno != errno.EEXIST:
raise
@contextmanager
def cd(path):
path = normpath(path)
@@ -47,18 +49,22 @@ def cd(path):
finally:
os.chdir(cwd)
def run(command, **kwargs):
print(command)
subprocess.check_call(command, shell=True, **kwargs)
def get_current_git_tag():
out = subprocess.check_output(["git", "tag", "--points-at", "HEAD"])
return out.decode().split("\n")[0]
def get_current_commit_hash():
out = subprocess.check_output(["git", "rev-parse", "HEAD"])
return out.decode().split("\n")[0]
def get_current_git_branch():
out = subprocess.check_output(["git", "log", "-n", "1", "--pretty=%d", "HEAD"])
m = re.search(r"release_[0-9\.]+", out.decode())
@@ -66,38 +72,49 @@ def get_current_git_branch():
raise ValueError("Expected branch name of form release_xxx")
return m.group(0)
def retrieve(url, filename=None):
print(f"{url} -> {filename}")
return urlretrieve(url, filename)
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--release-version", type=str, required=True,
help="Version of the release being prepared")
parser.add_argument(
"--release-version",
type=str,
required=True,
help="Version of the release being prepared",
)
args = parser.parse_args()
if sys.platform != "darwin" or platform.machine() != "x86_64":
raise NotImplementedError("Please run this script using an Intel Mac")
version = args.release_version
expected_git_tag = "v" + version
current_git_tag = get_current_git_tag()
if current_git_tag != expected_git_tag:
if not current_git_tag:
raise ValueError(f"Expected git tag {expected_git_tag} but current HEAD has no tag. "
f"Run: git checkout {expected_git_tag}")
raise ValueError(f"Expected git tag {expected_git_tag} but current HEAD is at tag "
f"{current_git_tag}. Run: git checkout {expected_git_tag}")
raise ValueError(
f"Expected git tag {expected_git_tag} but current HEAD has no tag. "
f"Run: git checkout {expected_git_tag}"
)
raise ValueError(
f"Expected git tag {expected_git_tag} but current HEAD is at tag "
f"{current_git_tag}. Run: git checkout {expected_git_tag}"
)
commit_hash = get_current_commit_hash()
git_branch = get_current_git_branch()
print(f"Using commit {commit_hash} of branch {git_branch}, git tag {current_git_tag}")
print(
f"Using commit {commit_hash} of branch {git_branch}, git tag {current_git_tag}"
)
with cd("jvm-packages/"):
print("====copying pure-Python tracker====")
for use_cuda in [True, False]:
xgboost4j = "xgboost4j-gpu" if use_cuda else "xgboost4j"
cp("../python-package/xgboost/tracker.py", f"{xgboost4j}/src/main/resources")
cp(
"../python-package/xgboost/tracker.py",
f"{xgboost4j}/src/main/resources",
)
print("====copying resources for testing====")
with cd("../demo/CLI/regression"):
@@ -115,7 +132,12 @@ def main():
cp(file, f"{xgboost4j_spark}/src/test/resources")
print("====Creating directories to hold native binaries====")
for os_ident, arch in [("linux", "x86_64"), ("windows", "x86_64"), ("macos", "x86_64")]:
for os_ident, arch in [
("linux", "x86_64"),
("windows", "x86_64"),
("macos", "x86_64"),
("macos", "aarch64"),
]:
output_dir = f"xgboost4j/src/main/resources/lib/{os_ident}/{arch}"
maybe_makedirs(output_dir)
for os_ident, arch in [("linux", "x86_64")]:
@@ -123,52 +145,98 @@ def main():
maybe_makedirs(output_dir)
print("====Downloading native binaries from CI====")
nightly_bucket_prefix = "https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds"
maven_repo_prefix = "https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/release/ml/dmlc"
nightly_bucket_prefix = (
"https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds"
)
maven_repo_prefix = (
"https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/release/ml/dmlc"
)
retrieve(url=f"{nightly_bucket_prefix}/{git_branch}/xgboost4j_{commit_hash}.dll",
filename="xgboost4j/src/main/resources/lib/windows/x86_64/xgboost4j.dll")
retrieve(
url=f"{nightly_bucket_prefix}/{git_branch}/libxgboost4j/xgboost4j_{commit_hash}.dll",
filename="xgboost4j/src/main/resources/lib/windows/x86_64/xgboost4j.dll",
)
retrieve(
url=f"{nightly_bucket_prefix}/{git_branch}/libxgboost4j/libxgboost4j_{commit_hash}.dylib",
filename="xgboost4j/src/main/resources/lib/macos/x86_64/libxgboost4j.dylib",
)
retrieve(
url=f"{nightly_bucket_prefix}/{git_branch}/libxgboost4j/libxgboost4j_m1_{commit_hash}.dylib",
filename="xgboost4j/src/main/resources/lib/macos/aarch64/libxgboost4j.dylib",
)
with tempfile.TemporaryDirectory() as tempdir:
# libxgboost4j.so for Linux x86_64, CPU only
zip_path = os.path.join(tempdir, "xgboost4j_2.12.jar")
extract_dir = os.path.join(tempdir, "xgboost4j")
retrieve(url=f"{maven_repo_prefix}/xgboost4j_2.12/{version}/"
retrieve(
url=f"{maven_repo_prefix}/xgboost4j_2.12/{version}/"
f"xgboost4j_2.12-{version}.jar",
filename=zip_path)
filename=zip_path,
)
os.mkdir(extract_dir)
with zipfile.ZipFile(zip_path, "r") as t:
t.extractall(extract_dir)
cp(os.path.join(extract_dir, "lib", "linux", "x86_64", "libxgboost4j.so"),
"xgboost4j/src/main/resources/lib/linux/x86_64/libxgboost4j.so")
cp(
os.path.join(extract_dir, "lib", "linux", "x86_64", "libxgboost4j.so"),
"xgboost4j/src/main/resources/lib/linux/x86_64/libxgboost4j.so",
)
# libxgboost4j.so for Linux x86_64, GPU support
zip_path = os.path.join(tempdir, "xgboost4j-gpu_2.12.jar")
extract_dir = os.path.join(tempdir, "xgboost4j-gpu")
retrieve(url=f"{maven_repo_prefix}/xgboost4j-gpu_2.12/{version}/"
retrieve(
url=f"{maven_repo_prefix}/xgboost4j-gpu_2.12/{version}/"
f"xgboost4j-gpu_2.12-{version}.jar",
filename=zip_path)
filename=zip_path,
)
os.mkdir(extract_dir)
with zipfile.ZipFile(zip_path, "r") as t:
t.extractall(extract_dir)
cp(os.path.join(extract_dir, "lib", "linux", "x86_64", "libxgboost4j.so"),
"xgboost4j-gpu/src/main/resources/lib/linux/x86_64/libxgboost4j.so")
cp(
os.path.join(extract_dir, "lib", "linux", "x86_64", "libxgboost4j.so"),
"xgboost4j-gpu/src/main/resources/lib/linux/x86_64/libxgboost4j.so",
)
print("====Next Steps====")
print("1. Gain upload right to Maven Central repo.")
print("1-1. Sign up for a JIRA account at Sonatype: ")
print("1-2. File a JIRA ticket: "
print(
"1-2. File a JIRA ticket: "
"https://issues.sonatype.org/secure/CreateIssue.jspa?issuetype=21&pid=10134. Example: "
"https://issues.sonatype.org/browse/OSSRH-67724")
print("2. Store the Sonatype credentials in .m2/settings.xml. See insturctions in "
"https://central.sonatype.org/publish/publish-maven/")
print("3. Now on a Mac machine, run:")
print(" GPG_TTY=$(tty) mvn deploy -Prelease -DskipTests")
print("4. Log into https://oss.sonatype.org/. On the left menu panel, click Staging "
"Repositories. Visit the URL https://oss.sonatype.org/content/repositories/mldmlc-1085 "
"https://issues.sonatype.org/browse/OSSRH-67724"
)
print(
"2. Store the Sonatype credentials in .m2/settings.xml. See instructions in "
"https://central.sonatype.org/publish/publish-maven/"
)
print(
"3. Now on a Linux machine, run the following to build Scala 2.12 artifacts. "
"Make sure to use an Internet connection with fast upload speed:"
)
print(
" # Skip native build, since we have all needed native binaries from CI\n"
" export MAVEN_SKIP_NATIVE_BUILD=1\n"
" GPG_TTY=$(tty) mvn deploy -Prelease -DskipTests"
)
print(
"4. Log into https://oss.sonatype.org/. On the left menu panel, click Staging "
"Repositories. Visit the URL https://oss.sonatype.org/content/repositories/mldmlc-xxxx "
"to inspect the staged JAR files. Finally, press Release button to publish the "
"artifacts to the Maven Central repository.")
"artifacts to the Maven Central repository. The top-level metapackage should be "
"named xgboost-jvm_2.12."
)
print(
"5. Remove the Scala 2.12 artifacts and build Scala 2.13 artifacts:\n"
" export MAVEN_SKIP_NATIVE_BUILD=1\n"
" python dev/change_scala_version.py --scala-version 2.13 --purge-artifacts\n"
" GPG_TTY=$(tty) mvn deploy -Prelease-cpu-only,scala-2.13 -DskipTests"
)
print(
"6. Go to https://oss.sonatype.org/ to release the Scala 2.13 artifacts. "
"The top-level metapackage should be named xgboost-jvm_2.13."
)
if __name__ == "__main__":
main()

doc/.gitignore

@@ -6,3 +6,5 @@ doxygen
parser.py
*.pyc
web-data
# generated by doxygen
tmp


@@ -1,70 +1,76 @@
# Understand your dataset with XGBoost
Understand your dataset with XGBoost
====================================
## Introduction
Introduction
------------
The purpose of this vignette is to show you how to use **XGBoost** to
discover and understand your own dataset better.
The purpose of this Vignette is to show you how to use **XGBoost** to discover and understand your own dataset better.
This Vignette is not about predicting anything (see [XGBoost presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)). We will explain how to use **XGBoost** to highlight the *link* between the *features* of your data and the *outcome*.
This vignette is not about predicting anything (see [XGBoost
presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)).
We will explain how to use **XGBoost** to highlight the *link* between
the *features* of your data and the *outcome*.
Package loading:
```r
require(xgboost)
require(Matrix)
require(data.table)
if (!require('vcd')) install.packages('vcd')
```
if (!require('vcd')) {
install.packages('vcd')
}
> The **VCD** package is used only for one of its embedded datasets.
Preparation of the dataset
--------------------------
### Numeric VS categorical variables
## Preparation of the dataset
### Numeric vs. categorical variables
**XGBoost** manages only `numeric` vectors.
What to do when you have *categorical* data?
A *categorical* variable has a fixed number of different values. For instance, if a variable called *Colour* can have only one of these three values, *red*, *blue* or *green*, then *Colour* is a *categorical* variable.
A *categorical* variable has a fixed number of different values. For
instance, if a variable called *Colour* can have only one of these three
values, *red*, *blue* or *green*, then *Colour* is a *categorical*
variable.
> In **R**, a *categorical* variable is called `factor`.
>
> Type `?factor` in the console for more information.
To answer the question above we will convert *categorical* variables to `numeric` one.
To answer the question above we will convert *categorical* variables to
`numeric` ones.
### Conversion from categorical to numeric variables
#### Looking at the raw data
In this Vignette we will see how to transform a *dense* `data.frame` (*dense* = few zeroes in the matrix) with *categorical* variables to a very *sparse* matrix (*sparse* = lots of zero in the matrix) of `numeric` features.
In this Vignette we will see how to transform a *dense* `data.frame`
(*dense* = the majority of the matrix is non-zero) with *categorical*
variables to a very *sparse* matrix (*sparse* = lots of zero entries in
the matrix) of `numeric` features.
The method we are going to see is usually called [one-hot encoding](http://en.wikipedia.org/wiki/One-hot).
The method we are going to see is usually called [one-hot
encoding](https://en.wikipedia.org/wiki/One-hot).
The first step is to load `Arthritis` dataset in memory and wrap it with `data.table` package.
The first step is to load the `Arthritis` dataset in memory and wrap it
with the `data.table` package.
```r
data(Arthritis)
df <- data.table(Arthritis, keep.rownames = FALSE)
```
> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance for large dataset is [best in class](http://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `Pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of **XGBoost** **R** package use `data.table`.
> `data.table` is 100% compliant with **R** `data.frame` but its syntax
> is more consistent and its performance for large dataset is [best in
> class](https://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly)
> (`dplyr` from **R** and `Pandas` from **Python**
> [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)).
> Some parts of **XGBoost**'s **R** package use `data.table`.
The first thing we want to do is to have a look to the first lines of the `data.table`:
The first thing we want to do is to have a look at the first few lines
of the `data.table`:
```r
head(df)
```
```
## ID Treatment Sex Age Improved
## 1: 57 Treated Male 27 Some
## 2: 46 Treated Male 29 None
@@ -72,16 +78,11 @@ head(df)
## 4: 17 Treated Male 32 Marked
## 5: 36 Treated Male 46 Marked
## 6: 23 Treated Male 58 Marked
```
Now we will check the format of each column.
```r
str(df)
```
```
## Classes 'data.table' and 'data.frame': 84 obs. of 5 variables:
## $ ID : int 57 46 77 17 36 23 75 39 33 55 ...
## $ Treatment: Factor w/ 2 levels "Placebo","Treated": 2 2 2 2 2 2 2 2 2 2 ...
@@ -89,14 +90,14 @@ str(df)
## $ Age : int 27 29 30 32 46 58 59 59 63 63 ...
## $ Improved : Ord.factor w/ 3 levels "None"<"Some"<..: 2 1 1 3 3 3 1 3 1 1 ...
## - attr(*, ".internal.selfref")=<externalptr>
```
2 columns have `factor` type, one has `ordinal` type.
> `ordinal` variable :
>
> * can take a limited number of values (like `factor`) ;
> * these values are ordered (unlike `factor`). Here these ordered values are: `Marked > Some > None`
> - can take a limited number of values (like `factor`) ;
> - these values are ordered (unlike `factor`). Here these ordered
> values are: `Marked > Some > None`
#### Creation of new features based on old ones
@@ -104,18 +105,16 @@ We will add some new *categorical* features to see if it helps.
##### Grouping per 10 years
For the first feature we create groups of age by rounding the real age.
For the first feature we create groups of age by rounding the real age.
Note that we transform it to `factor` so the algorithm treat these age groups as independent values.
Note that we transform it to `factor` so the algorithm treats these age
groups as independent values.
Therefore, 20 is not closer to 30 than 60. To make it short, the distance between ages is lost in this transformation.
Therefore, 20 is not closer to 30 than 60. In other words, the distance
between ages is lost in this transformation.
```r
head(df[, AgeDiscret := as.factor(round(Age / 10, 0))])
```
```
## ID Treatment Sex Age Improved AgeDiscret
## 1: 57 Treated Male 27 Some 3
## 2: 46 Treated Male 29 None 3
@@ -123,18 +122,17 @@ head(df[,AgeDiscret := as.factor(round(Age/10,0))])
## 4: 17 Treated Male 32 Marked 3
## 5: 36 Treated Male 46 Marked 5
## 6: 23 Treated Male 58 Marked 6
```
##### Random split in two groups
##### Randomly split into two groups
Following is an even stronger simplification of the real age with an arbitrary split at 30 years old. I choose this value **based on nothing**. We will see later if simplifying the information based on arbitrary values is a good strategy (you may already have an idea of how well it will work...).
The following is an even stronger simplification of the real age with an
arbitrary split at 30 years old. I choose this value **based on
nothing**. We will see later if simplifying the information based on
arbitrary values is a good strategy (you may already have an idea of how
well it will work…).
```r
head(df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))])
```
```
## ID Treatment Sex Age Improved AgeDiscret AgeCat
## 1: 57 Treated Male 27 Some 3 Young
## 2: 46 Treated Male 29 None 3 Young
@@ -142,330 +140,336 @@ head(df[,AgeCat:= as.factor(ifelse(Age > 30, "Old", "Young"))])
## 4: 17 Treated Male 32 Marked 3 Old
## 5: 36 Treated Male 46 Marked 5 Old
## 6: 23 Treated Male 58 Marked 6 Old
```
##### Risks in adding correlated features
These new features are highly correlated to the `Age` feature because they are simple transformations of this feature.
These new features are highly correlated to the `Age` feature because
they are simple transformations of this feature.
For many machine learning algorithms, using correlated features is not a good idea. It may sometimes make prediction less accurate, and most of the time make interpretation of the model almost impossible. GLM, for instance, assumes that the features are uncorrelated.
For many machine learning algorithms, using correlated features is not a
good idea. It may sometimes make prediction less accurate, and most of
the time make interpretation of the model almost impossible. GLM, for
instance, assumes that the features are uncorrelated.
Fortunately, decision tree algorithms (including boosted trees) are very robust to these features. Therefore we have nothing to do to manage this situation.
Fortunately, decision tree algorithms (including boosted trees) are very
robust to these features. Therefore we don't have to do anything to
manage this situation.
##### Cleaning data
We remove ID as there is nothing to learn from this feature (it would just add some noise).
We remove ID as there is nothing to learn from this feature (it would
just add some noise).
```r
df[, ID := NULL]
```
We will list the different values for the column `Treatment`:
```r
levels(df[, Treatment])
```
```
## [1] "Placebo" "Treated"
```
#### One-hot encoding
#### Encoding categorical features
Next step, we will transform the categorical data to dummy variables.
This is the [one-hot encoding](http://en.wikipedia.org/wiki/One-hot) step.
Several encoding methods exist, e.g., [one-hot
encoding](https://en.wikipedia.org/wiki/One-hot) is a common approach.
We will use the [dummy contrast
coding](https://stats.oarc.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/)
which is popular because it produces “full rank” encoding (also see
[this blog post by Max
Kuhn](http://appliedpredictivemodeling.com/blog/2013/10/23/the-basics-of-encoding-categorical-data-for-predictive-models)).
The purpose is to transform each value of each *categorical* feature in a *binary* feature `{0, 1}`.
The purpose is to transform each value of each *categorical* feature
into a *binary* feature `{0, 1}`.
For example, the column `Treatment` will be replaced by two columns, `Placebo`, and `Treated`. Each of them will be *binary*. Therefore, an observation which has the value `Placebo` in column `Treatment` before the transformation will have after the transformation the value `1` in the new column `Placebo` and the value `0` in the new column `Treated`. The column `Treatment` will disappear during the one-hot encoding.
For example, the column `Treatment` will be replaced by two columns,
`TreatmentPlacebo`, and `TreatmentTreated`. Each of them will be
*binary*. Therefore, an observation which has the value `Placebo` in
column `Treatment` before the transformation will have the value `1` in
the new column `TreatmentPlacebo` and the value `0` in the new column
`TreatmentTreated` after the transformation. The column
`TreatmentPlacebo` will disappear during the contrast encoding, as it
would be absorbed into a common constant intercept column.
Column `Improved` is excluded because it will be our `label` column, the one we want to predict.
Column `Improved` is excluded because it will be our `label` column, the
one we want to predict.
```r
sparse_matrix <- sparse.model.matrix(Improved~.-1, data = df)
sparse_matrix <- sparse.model.matrix(Improved ~ ., data = df)[, -1]
head(sparse_matrix)
```
```
## 6 x 10 sparse Matrix of class "dgCMatrix"
##
## 1 . 1 1 27 1 . . . . 1
## 2 . 1 1 29 1 . . . . 1
## 3 . 1 1 30 1 . . . . 1
## 4 . 1 1 32 1 . . . . .
## 5 . 1 1 46 . . 1 . . .
## 6 . 1 1 58 . . . 1 . .
```
## 6 x 9 sparse Matrix of class "dgCMatrix"
## TreatmentTreated SexMale Age AgeDiscret3 AgeDiscret4 AgeDiscret5 AgeDiscret6
## 1 1 1 27 1 . . .
## 2 1 1 29 1 . . .
## 3 1 1 30 1 . . .
## 4 1 1 32 1 . . .
## 5 1 1 46 . . 1 .
## 6 1 1 58 . . . 1
## AgeDiscret7 AgeCatYoung
## 1 . 1
## 2 . 1
## 3 . 1
## 4 . .
## 5 . .
## 6 . .
> Formulae `Improved~.-1` used above means transform all *categorical* features but column `Improved` to binary values. The `-1` is here to remove the first column which is full of `1` (this column is generated by the conversion). For more information, you can type `?sparse.model.matrix` in the console.
> Formula `Improved ~ .` used above means transform all *categorical*
> features but column `Improved` to binary values. The `-1` column
> selection removes the intercept column which is full of `1` (this
> column is generated by the conversion). For more information, you can
> type `?sparse.model.matrix` in the console.
Create the output `numeric` vector (not as a sparse `Matrix`):
```r
output_vector = df[,Improved] == "Marked"
```
output_vector <- df[, Improved] == "Marked"
1. set `Y` vector to `0`;
2. set `Y` to `1` for rows where `Improved == Marked` is `TRUE` ;
3. return `Y` vector.
Build the model
---------------
## Build the model
The code below is very usual. For more information, you can look at the documentation of `xgboost` function (or at the vignette [XGBoost presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)).
The code below is very usual. For more information, you can look at the
documentation of `xgboost` function (or at the vignette [XGBoost
presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)).
```r
bst <- xgboost(data = sparse_matrix, label = output_vector, max.depth = 4,
bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 4,
eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic")
```
```
## [0] train-error:0.202381
## [1] train-error:0.166667
## [2] train-error:0.166667
## [3] train-error:0.166667
## [4] train-error:0.154762
## [5] train-error:0.154762
## [6] train-error:0.154762
## [7] train-error:0.166667
## [8] train-error:0.166667
## [9] train-error:0.166667
```
## [1] train-logloss:0.485466
## [2] train-logloss:0.438534
## [3] train-logloss:0.412250
## [4] train-logloss:0.395828
## [5] train-logloss:0.384264
## [6] train-logloss:0.374028
## [7] train-logloss:0.365005
## [8] train-logloss:0.351233
## [9] train-logloss:0.341678
## [10] train-logloss:0.334465
You can see some `train-error: 0.XXXXX` lines followed by a number. It decreases. Each line shows how well the model explains your data. Lower is better.
You can see some `train-logloss: 0.XXXXX` lines followed by a number. It
decreases. Each line shows how well the model explains the data. Lower
is better.
A model which fits too well may [overfit](http://en.wikipedia.org/wiki/Overfitting) (meaning it copy/paste too much the past, and won't be that good to predict the future).
A small value for training error may be a symptom of
[overfitting](https://en.wikipedia.org/wiki/Overfitting), meaning the
model will not accurately predict unseen values.
> Here you can see the numbers decrease until line 7 and then increase.
>
> It probably means we are overfitting. To fix that I should reduce the number of rounds to `nrounds = 4`. I will let things like that because I don't really care for the purpose of this example :-)
Feature importance
------------------
## Feature importance
## Measure feature importance
### Build the feature importance data.table
In the code below, `sparse_matrix@Dimnames[[2]]` represents the column names of the sparse matrix. These names are the original values of the features (remember, each binary column == one value of one *categorical* feature).
Remember, each binary column corresponds to a single value of one of the
*categorical* features.
```r
importance <- xgb.importance(feature_names = sparse_matrix@Dimnames[[2]], model = bst)
importance <- xgb.importance(feature_names = colnames(sparse_matrix), model = bst)
head(importance)
```
```
## Feature Gain Cover Frequency
## 1: Age 0.622031651 0.67251706 0.67241379
## 2: TreatmentPlacebo 0.285750607 0.11916656 0.10344828
## 3: SexMale 0.048744054 0.04522027 0.08620690
## 4: AgeDiscret6 0.016604647 0.04784637 0.05172414
## 5: AgeDiscret3 0.016373791 0.08028939 0.05172414
## 6: AgeDiscret4 0.009270558 0.02858801 0.01724138
```
## 1: Age 0.622031769 0.67251696 0.67241379
## 2: TreatmentTreated 0.285750540 0.11916651 0.10344828
## 3: SexMale 0.048744022 0.04522028 0.08620690
## 4: AgeDiscret6 0.016604639 0.04784639 0.05172414
## 5: AgeDiscret3 0.016373781 0.08028951 0.05172414
## 6: AgeDiscret4 0.009270557 0.02858801 0.01724138
> The column `Gain` provide the information we are looking for.
> The column `Gain` provides the information we are looking for.
>
> As you can see, features are classified by `Gain`.
`Gain` is the improvement in accuracy brought by a feature to the branches it is on. The idea is that before adding a new split on a feature X to the branch, there were some wrongly classified elements; after adding the split on this feature, there are two new branches, and each of these branches is more accurate (one branch saying that if your observation is on this branch then it should be classified as `1`, and the other branch saying the exact opposite).
`Cover` is related to the second order derivative (or Hessian) of the loss function with respect to a particular variable; thus, a large value indicates that a variable has a large potential impact on the loss function and so is important.
`Frequency` is a simpler way to measure the `Gain`. It just counts the number of times a feature is used in all generated trees. You should not use it (unless you know why you want to use it).
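If you are curious where these aggregated measures come from, the per-split statistics of every tree can be inspected with `xgb.model.dt.tree()`. The short sketch below assumes the `bst` model and `sparse_matrix` built earlier in this vignette.

```r
# Each row describes one node of one tree: `Quality` is the gain of the split and
# `Cover` the (Hessian-weighted) number of observations flowing through the node.
tree_dt <- xgb.model.dt.tree(feature_names = colnames(sparse_matrix), model = bst)
head(tree_dt)
```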
#### Improvement in the interpretability of feature importance data.table
We can go deeper into the analysis of the model. In the `data.table` above, we have discovered which features count to predict if the illness will go away or not. But we don't yet know the role of these features. For instance, one of the questions we may want to answer is: does receiving a placebo treatment help to recover from the illness?
One simple solution is to count the co-occurrences of a feature and a class of the classification.
For that purpose we will execute the same function as above but using two more parameters, `data` and `label`.
```r
importanceRaw <- xgb.importance(feature_names = sparse_matrix@Dimnames[[2]], model = bst, data = sparse_matrix, label = output_vector)
# Cleaning for better display
importanceClean <- importanceRaw[,`:=`(Cover=NULL, Frequency=NULL)]
head(importanceClean)
```
```
## Feature Split Gain RealCover RealCover %
## 1: TreatmentPlacebo -1.00136e-05 0.28575061 7 0.2500000
## 2: Age 61.5 0.16374034 12 0.4285714
## 3: Age 39 0.08705750 8 0.2857143
## 4: Age 57.5 0.06947553 11 0.3928571
## 5: SexMale -1.00136e-05 0.04874405 4 0.1428571
## 6: Age 53.5 0.04620627 10 0.3571429
```
> In the table above we have removed two unneeded columns and selected only the first rows.
The first thing you notice is the new column `Split`. It is the split applied to the feature on a branch of one of the trees. Each split is present, therefore a feature can appear several times in this table. Here we can see the feature `Age` is used several times with different splits.
How is the split applied to count the co-occurrences? It is always `<`. For instance, in the second line, we measure the number of persons under 61.5 years with the illness gone after the treatment.
The two other new columns are `RealCover` and `RealCover %`. The first column measures the number of observations in the dataset where the split is respected and the label is marked as `1`. The second column is the percentage of the whole population that `RealCover` represents.
Therefore, according to our findings, getting a placebo doesn't seem to help, but being younger than 61 years may help (which seems logical).
> You may wonder how to interpret the `< 1.00001` on the first line. Basically, in a sparse `Matrix`, there is no `0`; therefore, looking for one-hot-encoded categorical observations validating the rule `< 1.00001` is like just looking for `1` for this feature.
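To make `RealCover` more concrete, the toy sketch below counts the same kind of co-occurrence by hand for the placebo column. It is only an illustration: it assumes `sparse_matrix` and `output_vector` from earlier, and the column name is simply taken from the table above.

```r
# Count the observations where the one-hot column equals 1 and the label equals 1,
# then express that count as a share of all positive observations.
placebo        <- sparse_matrix[, "TreatmentPlacebo"]
real_cover     <- sum(placebo == 1 & output_vector == 1)
real_cover_pct <- real_cover / sum(output_vector == 1)
c(RealCover = real_cover, `RealCover %` = real_cover_pct)
```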
### Plotting the feature importance
All these things are nice, but it would be even better to plot the results.
```r
xgb.plot.importance(importance_matrix = importance)
```

<img src="discoverYourData_files/figure-markdown_strict/unnamed-chunk-12-1.png" style="display: block; margin: auto;" />

Running this line of code, you should get a bar chart showing the importance of the 6 features (containing the same data as the output we saw earlier, but displaying it visually for easier consumption). Note that `xgb.ggplot.importance` is also available for all the ggplot2 fans!
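For instance, a minimal sketch of the ggplot2 variant (using the same `importance` table computed above) could look like the following; the returned object is an ordinary `ggplot`, so you can restyle it however you like.

```r
library(ggplot2)

gg <- xgb.ggplot.importance(importance_matrix = importance)
gg + ggtitle("Feature importance (Gain)")
```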
Features have automatically been divided in 2 clusters: the interesting features... and the others.

> Depending on the dataset and the learning parameters you may have more than two clusters. The default is to limit them to `10`, but you can increase this limit. Look at the function documentation for more information.

According to the plot above, the most important features in this dataset to predict if the treatment will work are:
- An individual's age;
- Having received a placebo or not;
- Gender;
- Our generated feature AgeDiscret. We can see that its contribution is very low.
### Do these results make sense?
Let's check some **Chi2** between each of these features and the label.
Higher **Chi2** means better correlation.
```r
c2 <- chisq.test(df$Age, output_vector)
print(c2)
```
```
##
## Pearson's Chi-squared test
##
## data: df$Age and output_vector
## X-squared = 35.475, df = 35, p-value = 0.4458
```
The chi-squared statistic between Age and the illness disappearing is **35.48**.
```r
c2 <- chisq.test(df$AgeDiscret, output_vector)
print(c2)
```
```
##
## Pearson's Chi-squared test
##
## data: df$AgeDiscret and output_vector
## X-squared = 8.2554, df = 5, p-value = 0.1427
```
Our first simplification of Age gives a chi-squared statistic of **8.26**.
```r
c2 <- chisq.test(df$AgeCat, output_vector)
print(c2)
```
```
##
## Pearson's Chi-squared test with Yates' continuity correction
##
## data: df$AgeCat and output_vector
## X-squared = 2.3571, df = 1, p-value = 0.1247
```
The perfectly random split we did between young and old at 30 years old has a low chi-squared statistic of **2.36**. This suggests that, for the particular illness we are studying, the age at which someone is vulnerable to this disease is likely very different from 30.
Moral of the story: don't let your *gut* lower the quality of your model.
In *data science*, there is the word *science* :-)
## Conclusion
As you can see, in general *destroying information by simplifying it won't improve your model*. **Chi2** just demonstrates that.
But in more complex cases, creating a new feature from an existing one which makes the link with the outcome more obvious may help the algorithm and improve the model.
The case studied here is not complex enough to show that. Check the [Kaggle website](https://www.kaggle.com/) for some challenging datasets. However, it is almost always worse when you add some arbitrary rules.
Moreover, you can see that even though we added some new features which are not very useful and are highly correlated with other features, the boosting tree algorithm was still able to choose the best one (which in this case is Age).
Linear models may not perform as well in this scenario.
## Special Note: What about Random Forests™?
As you may know, the [Random Forests](https://en.wikipedia.org/wiki/Random_forest) algorithm is a cousin of boosting; both are part of the [ensemble learning](https://en.wikipedia.org/wiki/Ensemble_learning) family.
Both train several decision trees for one dataset. The *main* difference is that in Random Forests, trees are independent, while in boosting the tree `N+1` focuses its learning on what has not been well modeled by the tree `N` (i.e. on the remaining loss).
This difference has an impact on an edge case in feature importance analysis: *correlated features*.
Imagine two features perfectly correlated, feature `A` and feature `B`. For one specific tree, if the algorithm needs one of them, it will choose randomly (true in both boosting and Random Forests).
However, in Random Forests this random choice will be made for each tree, because each tree is independent from the others. Therefore, approximately (and depending on your parameters) 50% of the trees will choose feature `A` and the other 50% will choose feature `B`. So the *importance* of the information contained in `A` and `B` (which is the same, because they are perfectly correlated) is diluted between `A` and `B`. You won't easily see that this information is important for predicting what you want to predict! It is even worse when you have 10 correlated features...
In boosting, when a specific link between feature and outcome has been learned by the algorithm, it will try not to refocus on it (in theory that is what happens; reality is not always that simple). Therefore, all the importance will be on feature `A` or on feature `B` (but not both). You will know that one feature has an important role in the link between the observations and the label. It is still up to you to search for the features correlated to the one detected as important if you need to know all of them.
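One way to see this behaviour for yourself is the small experiment sketched below (not part of the original text): duplicate a column so that two features are perfectly correlated, retrain, and inspect how the importance is attributed. It assumes the `sparse_matrix` and `output_vector` objects from earlier.

```r
# Add an exact copy of the Age column under a new name, then retrain the model.
age_copy <- sparse_matrix[, "Age", drop = FALSE]
colnames(age_copy) <- "AgeCopy"
dup_matrix <- cbind(sparse_matrix, age_copy)

bst_dup <- xgboost(data = dup_matrix, label = output_vector, max_depth = 4,
                   eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic")

# With boosting, most of the importance should land on either Age or AgeCopy,
# rather than being split evenly between the two.
xgb.importance(model = bst_dup)
```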
If you want to try the Random Forests algorithm, you can tweak XGBoost parameters!
**Warning**: this is still an experimental parameter.
For instance, to compute a model with 1000 trees, with a 0.5 factor on sampling rows and columns:
```r
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
train <- agaricus.train
test <- agaricus.test
#Random Forest - 1000 trees
bst <- xgboost(
    data = train$data
    , label = train$label
    , max_depth = 4
    , num_parallel_tree = 1000
    , subsample = 0.5
    , colsample_bytree = 0.5
    , nrounds = 1
    , objective = "binary:logistic"
)
```

```
## [1] train-logloss:0.456201
```
```r
#Boosting - 3 rounds
bst <- xgboost(
    data = train$data
    , label = train$label
    , max_depth = 4
    , nrounds = 3
    , objective = "binary:logistic"
)
```

```
## [1] train-logloss:0.444882
## [2] train-logloss:0.302428
## [3] train-logloss:0.212847
```
> Note that for the Random Forest example the parameter `nrounds` is set to `1`.
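As a hypothetical variation (not shown in the original text): since `num_parallel_tree` controls how many trees are grown per boosting round, combining it with `nrounds > 1` gives a boosted random forest, for example:

```r
# Boosted forest: 10 parallel trees per round, boosted for 3 rounds (30 trees in total).
bst <- xgboost(
    data = train$data
    , label = train$label
    , max_depth = 4
    , num_parallel_tree = 10
    , subsample = 0.5
    , colsample_bytree = 0.5
    , nrounds = 3
    , objective = "binary:logistic"
)
```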
> [**Random Forests**](https://www.stat.berkeley.edu/~breiman/RandomForests/cc_papers.htm) is a trademark of Leo Breiman and Adele Cutler and is licensed exclusively to Salford Systems for the commercial release of the software.

View File

@@ -119,7 +119,7 @@ An up-to-date version of the CUDA toolkit is required.
.. note:: Checking your compiler version
CUDA is really picky about supported compilers, a table for the compatible compilers for the latests CUDA version on Linux can be seen `here <https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html>`_.
CUDA is really picky about supported compilers, a table for the compatible compilers for the latest CUDA version on Linux can be seen `here <https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html>`_.
Some distros package a compatible ``gcc`` version with CUDA. If you run into compiler errors with ``nvcc``, try specifying the correct compiler with ``-DCMAKE_CXX_COMPILER=/path/to/correct/g++ -DCMAKE_C_COMPILER=/path/to/correct/gcc``. On Arch Linux, for example, both binaries can be found under ``/opt/cuda/bin/``.
@@ -259,7 +259,7 @@ There are several ways to build and install the package from source:
import sys
import pathlib
libpath = pathlib.Path(sys.prefix).joinpath("lib", "libxgboost.so")
libpath = pathlib.Path(sys.base_prefix).joinpath("lib", "libxgboost.so")
assert libpath.exists()
Then pass ``use_system_libxgboost=True`` option to ``pip install``:

View File

@@ -33,6 +33,8 @@ DMatrix
.. doxygengroup:: DMatrix
:project: xgboost
.. _c_streaming:
Streaming
---------

View File

@@ -19,7 +19,6 @@ import sys
import tarfile
import urllib.request
import warnings
from subprocess import call
from urllib.error import HTTPError
from sh.contrib import git
@@ -148,12 +147,20 @@ extensions = [
sphinx_gallery_conf = {
# path to your example scripts
"examples_dirs": ["../demo/guide-python", "../demo/dask", "../demo/aft_survival"],
"examples_dirs": [
"../demo/guide-python",
"../demo/dask",
"../demo/aft_survival",
"../demo/gpu_acceleration",
"../demo/rmm_plugin"
],
# path to where to save gallery generated output
"gallery_dirs": [
"python/examples",
"python/dask-examples",
"python/survival-examples",
"python/gpu-examples",
"python/rmm-examples",
],
"matplotlib_animations": True,
}

View File

@@ -32,7 +32,7 @@ GitHub Actions is also used to build Python wheels targeting MacOS Intel and App
``python_wheels`` pipeline sets up environment variables prefixed ``CIBW_*`` to indicate the target
OS and processor. The pipeline then invokes the script ``build_python_wheels.sh``, which in turns
calls ``cibuildwheel`` to build the wheel. The ``cibuildwheel`` is a library that sets up a
suitable Python environment for each OS and processor target. Since we don't have Apple Silion
suitable Python environment for each OS and processor target. Since we don't have Apple Silicon
machine in GitHub Actions, cross-compilation is needed; ``cibuildwheel`` takes care of the complex
task of cross-compiling a Python wheel. (Note that ``cibuildwheel`` will call
``pip wheel``. Since XGBoost has a native library component, we created a customized build
@@ -131,7 +131,7 @@ set up a credential pair in order to provision resources on AWS. See
Worker Image Pipeline
=====================
Building images for worker machines used to be a chore: you'd provision an EC2 machine, SSH into it, and
manually install the necessary packages. This process is not only laborous but also error-prone. You may
manually install the necessary packages. This process is not only laborious but also error-prone. You may
forget to install a package or change a system configuration.
No more. Now we have an automated pipeline for building images for worker machines.

View File

@@ -16,8 +16,10 @@ C++ Coding Guideline
* Each line of text may contain up to 100 characters.
* The use of C++ exceptions is allowed.
- Use C++11 features such as smart pointers, braced initializers, lambda functions, and ``std::thread``.
- Use C++17 features such as smart pointers, braced initializers, lambda functions, and ``std::thread``.
- Use Doxygen to document all the interface code.
- We have some comments around symbols imported by headers, some of those are hinted by `include-what-you-use <https://include-what-you-use.org>`_. It's not required.
- We use clang-tidy and clang-format. You can check their configuration in the root directory of the XGBoost source tree.
- We have a series of automatic checks to ensure that all of our codebase complies with the Google style. Before submitting your pull request, you are encouraged to run the style checks on your machine. See :ref:`running_checks_locally`.
***********************
@@ -98,7 +100,7 @@ two automatic checks to enforce coding style conventions. To expedite the code r
Linter
======
We use `pylint <https://github.com/PyCQA/pylint>`_ and `cpplint <https://github.com/cpplint/cpplint>`_ to enforce style convention and find potential errors. Linting is especially useful for Python, as we can catch many errors that would have otherwise occured at run-time.
We use `pylint <https://github.com/PyCQA/pylint>`_ and `cpplint <https://github.com/cpplint/cpplint>`_ to enforce style convention and find potential errors. Linting is especially useful for Python, as we can catch many errors that would have otherwise occurred at run-time.
To run this check locally, run the following command from the top level source tree:

View File

@@ -11,7 +11,7 @@ General Development Process
---------------------------
Everyone in the community is welcomed to send patches, documents, and propose new directions to the project. The key guideline here is to enable everyone in the community to get involved and participate the decision and development. When major changes are proposed, an RFC should be sent to allow discussion by the community. We encourage public discussion, archivable channels such as issues and discuss forum, so that everyone in the community can participate and review the process later.
Code reviews are one of the key ways to ensure the quality of the code. High-quality code reviews prevent technical debt for long-term and are crucial to the success of the project. A pull request needs to be reviewed before it gets merged. A committer who has the expertise of the corresponding area would moderate the pull request and the merge the code when it is ready. The corresponding committer could request multiple reviewers who are familiar with the area of the code. We encourage contributors to request code reviews themselves and help review each other's code -- remember everyone is volunteering their time to the community, high-quality code review itself costs as much as the actual code contribution, you could get your code quickly reviewed if you do others the same favor.
Code reviews are one of the key ways to ensure the quality of the code. High-quality code reviews prevent technical debt for long-term and are crucial to the success of the project. A pull request needs to be reviewed before it gets merged. A committer who has the expertise of the corresponding area would moderate the pull request and then merge the code when it is ready. The corresponding committer could request multiple reviewers who are familiar with the area of the code. We encourage contributors to request code reviews themselves and help review each other's code -- remember everyone is volunteering their time to the community, high-quality code review itself costs as much as the actual code contribution, you could get your code quickly reviewed if you do others the same favor.
The community should strive to reach a consensus on technical decisions through discussion. We expect committers and PMCs to moderate technical discussions in a diplomatic way, and provide suggestions with clear technical reasoning when necessary.
@@ -25,11 +25,11 @@ Committers are individuals who are granted the write access to the project. A co
- Quality of contributions: High-quality, readable code contributions indicated by pull requests that can be merged without a substantial code review. History of creating clean, maintainable code and including good test cases. Informative code reviews to help other contributors that adhere to a good standard.
- Community involvement: active participation in the discussion forum, promote the projects via tutorials, talks and outreach. We encourage committers to collaborate broadly, e.g. do code reviews and discuss designs with community members that they do not interact physically.
The Project Management Committee(PMC) consists group of active committers that moderate the discussion, manage the project release, and proposes new committer/PMC members. Potential candidates are usually proposed via an internal discussion among PMCs, followed by a consensus approval, i.e. least 3 +1 votes, and no vetoes. Any veto must be accompanied by reasoning. PMCs should serve the community by upholding the community practices and guidelines XGBoost a better community for everyone. PMCs should strive to only nominate new candidates outside of their own organization.
The Project Management Committee(PMC) consists of a group of active committers that moderate the discussion, manage the project release, and proposes new committer/PMC members. Potential candidates are usually proposed via an internal discussion among PMCs, followed by a consensus approval, i.e. least 3 +1 votes, and no vetoes. Any veto must be accompanied by reasoning. PMCs should serve the community by upholding the community practices and guidelines in order to make XGBoost a better community for everyone. PMCs should strive to only nominate new candidates outside of their own organization.
The PMC is in charge of the project's `continuous integration (CI) <https://en.wikipedia.org/wiki/Continuous_integration>`_ and testing infrastructure. Currently, we host our own Jenkins server at https://xgboost-ci.net. The PMC shall appoint committer(s) to manage the CI infrastructure. The PMC may accept 3rd-party donations and sponsorships that would defray the cost of the CI infrastructure. See :ref:`donation_policy`.
Reviewers
---------
Reviewers are individuals who actively contributed to the project and are willing to participate in the code review of new contributions. We identify reviewers from active contributors. The committers should explicitly solicit reviews from reviewers. High-quality code reviews prevent technical debt for long-term and are crucial to the success of the project. A pull request to the project has to be reviewed by at least one reviewer in order to be merged.
Reviewers are individuals who actively contributed to the project and are willing to participate in the code review of new contributions. We identify reviewers from active contributors. The committers should explicitly solicit reviews from reviewers. High-quality code reviews prevent technical debt for the long-term and are crucial to the success of the project. A pull request to the project has to be reviewed by at least one reviewer in order to be merged.

View File

@@ -8,23 +8,83 @@ Documentation and Examples
:backlinks: none
:local:
*********
Documents
*********
*************
Documentation
*************
* Python and C documentation is built using `Sphinx <http://www.sphinx-doc.org/en/master/>`_.
* Each document is written in `reStructuredText <http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_.
* You can build document locally to see the effect, by running
* The documentation is the ``doc/`` directory.
* You can build it locally using ``make html`` command.
.. code-block:: bash
make html
inside the ``doc/`` directory. The online document is hosted by `Read the Docs <https://readthedocs.org/>`__ where the imported project is managed by `Hyunsu Cho <https://github.com/hcho3>`__ and `Jiaming Yuan <https://github.com/trivialfis>`__.
Run ``make help`` to learn about the other commands.
The online document is hosted by `Read the Docs <https://readthedocs.org/>`__ where the imported project is managed by `Hyunsu Cho <https://github.com/hcho3>`__ and `Jiaming Yuan <https://github.com/trivialfis>`__.
=========================================
Build the Python Docs using pip and Conda
=========================================
#. Create a conda environment.
.. code-block:: bash
conda create -n xgboost-docs --yes python=3.10
.. note:: Python 3.10 is required by `xgboost_ray <https://github.com/ray-project/xgboost_ray>`__ package.
#. Activate the environment
.. code-block:: bash
conda activate xgboost-docs
#. Install required packages (in the current environment) using ``pip`` command.
.. code-block:: bash
pip install -r requirements.txt
.. note::
It is currently not possible to install the required packages using ``conda``
due to ``xgboost_ray`` being unavailable in conda channels.
.. code-block:: bash
conda install --file requirements.txt --yes -c conda-forge
#. (optional) Install `graphviz <https://www.graphviz.org/>`__
.. code-block:: bash
conda install graphviz --yes
#. Eventually, build the docs.
.. code-block:: bash
make html
You should see the following messages in the console:
.. code-block:: console
$ make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v6.2.1
...
The HTML pages are in _build/html.
Build finished. The HTML pages are in _build/html.
********
Examples
********
* Use cases and examples will be in `demo <https://github.com/dmlc/xgboost/tree/master/demo>`_.
* Use cases and examples are in `demo <https://github.com/dmlc/xgboost/tree/master/demo>`_ directory.
* We are super excited to hear about your story. If you have blog posts,
tutorials, or code solutions using XGBoost, please tell us, and we will add
a link in the example pages.

View File

@@ -29,7 +29,7 @@ The Project Management Committee (PMC) of the XGBoost project appointed `Open So
All expenses incurred for hosting CI will be submitted to the fiscal host with receipts. Only the expenses in the following categories will be approved for reimbursement:
* Cloud exprenses for the cloud test farm (https://buildkite.com/xgboost)
* Cloud expenses for the cloud test farm (https://buildkite.com/xgboost)
* Cost of domain https://xgboost-ci.net
* Monthly cost of using BuildKite
* Hosting cost of the User Forum (https://discuss.xgboost.ai)

View File

@@ -169,7 +169,7 @@ supply a specified SANITIZER_PATH.
How to use sanitizers with CUDA support
=======================================
Runing XGBoost on CUDA with address sanitizer (asan) will raise memory error.
Running XGBoost on CUDA with address sanitizer (asan) will raise memory error.
To use asan with CUDA correctly, you need to configure asan via ASAN_OPTIONS
environment variable:

View File

@@ -63,7 +63,7 @@ XGBoost supports missing values by default.
In tree algorithms, branch directions for missing values are learned during training.
Note that the gblinear booster treats missing values as zeros.
When the ``missing`` parameter is specifed, values in the input predictor that is equal to
When the ``missing`` parameter is specified, values in the input predictor that is equal to
``missing`` will be treated as missing and removed. By default it's set to ``NaN``.
**************************************

View File

@@ -14,53 +14,46 @@ Most of the algorithms in XGBoost including training, prediction and evaluation
Usage
=====
Specify the ``tree_method`` parameter as ``gpu_hist``. For details around the ``tree_method`` parameter, see :doc:`tree method </treemethod>`.
Supported parameters
--------------------
GPU accelerated prediction is enabled by default for the above mentioned ``tree_method`` parameters but can be switched to CPU prediction by setting ``predictor`` to ``cpu_predictor``. This could be useful if you want to conserve GPU memory. Likewise when using CPU algorithms, GPU accelerated prediction can be enabled by setting ``predictor`` to ``gpu_predictor``.
The device ordinal (which GPU to use if you have many of them) can be selected using the
``gpu_id`` parameter, which defaults to 0 (the first device reported by CUDA runtime).
To enable GPU acceleration, specify the ``device`` parameter as ``cuda``. In addition, the device ordinal (which GPU to use if you have multiple devices in the same node) can be specified using the ``cuda:<ordinal>`` syntax, where ``<ordinal>`` is an integer that represents the device ordinal. XGBoost defaults to 0 (the first device reported by CUDA runtime).
The GPU algorithms currently work with CLI, Python, R, and JVM packages. See :doc:`/install` for details.
.. code-block:: python
:caption: Python example
param['gpu_id'] = 0
param['tree_method'] = 'gpu_hist'
params = dict()
params["device"] = "cuda"
params["tree_method"] = "hist"
Xy = xgboost.QuantileDMatrix(X, y)
xgboost.train(params, Xy)
.. code-block:: python
:caption: With Scikit-Learn interface
XGBRegressor(tree_method='gpu_hist', gpu_id=0)
:caption: With the Scikit-Learn interface
XGBRegressor(tree_method="hist", device="cuda")
GPU-Accelerated SHAP values
=============================
XGBoost makes use of `GPUTreeShap <https://github.com/rapidsai/gputreeshap>`_ as a backend for computing shap values when the GPU predictor is selected.
XGBoost makes use of `GPUTreeShap <https://github.com/rapidsai/gputreeshap>`_ as a backend for computing shap values when the GPU is used.
.. code-block:: python
model.set_param({"predictor": "gpu_predictor"})
shap_values = model.predict(dtrain, pred_contribs=True)
booster.set_param({"device": "cuda:0"})
shap_values = booster.predict(dtrain, pred_contribs=True)
shap_interaction_values = model.predict(dtrain, pred_interactions=True)
See examples `here
<https://github.com/dmlc/xgboost/tree/master/demo/gpu_acceleration>`__.
See :ref:`sphx_glr_python_gpu-examples_tree_shap.py` for a worked example.
Multi-node Multi-GPU Training
=============================
XGBoost supports fully distributed GPU training using `Dask <https://dask.org/>`_, ``Spark`` and ``PySpark``. For getting started with Dask see our tutorial :doc:`/tutorials/dask` and worked examples `here <https://github.com/dmlc/xgboost/tree/master/demo/dask>`__, also Python documentation :ref:`dask_api` for complete reference. For usage with ``Spark`` using Scala see :doc:`/jvm/xgboost4j_spark_gpu_tutorial`. Lastly for distributed GPU training with ``PySpark``, see :doc:`/tutorials/spark_estimator`.
XGBoost supports fully distributed GPU training using `Dask <https://dask.org/>`_, ``Spark`` and ``PySpark``. For getting started with Dask see our tutorial :doc:`/tutorials/dask` and worked examples :doc:`/python/dask-examples/index`, also Python documentation :ref:`dask_api` for complete reference. For usage with ``Spark`` using Scala see :doc:`/jvm/xgboost4j_spark_gpu_tutorial`. Lastly for distributed GPU training with ``PySpark``, see :doc:`/tutorials/spark_estimator`.
Memory usage
============
The following are some guidelines on the device memory usage of the `gpu_hist` tree method.
The following are some guidelines on the device memory usage of the ``hist`` tree method on GPU.
Memory inside xgboost training is generally allocated for two reasons - storing the dataset and working memory.
@@ -73,12 +66,13 @@ If you are getting out-of-memory errors on a big dataset, try the or :py:class:`
CPU-GPU Interoperability
========================
XGBoost models trained on GPUs can be used on CPU-only systems to generate predictions. For information about how to save and load an XGBoost model, see :doc:`/tutorials/saving_model`.
The model can be used on any device regardless of the one used to train it. For instance, a model trained using GPU can still work on a CPU-only machine and vice versa. For more information about model serialization, see :doc:`/tutorials/saving_model`.
Developer notes
===============
The application may be profiled with annotations by specifying USE_NTVX to cmake. Regions covered by the 'Monitor' class in CUDA code will automatically appear in the nsight profiler when `verbosity` is set to 3.
The application may be profiled with annotations by specifying ``USE_NTVX`` to cmake. Regions covered by the 'Monitor' class in CUDA code will automatically appear in the nsight profiler when `verbosity` is set to 3.
**********
References

View File

@@ -3,10 +3,10 @@ Installation Guide
##################
XGBoost provides binary packages for some language bindings. The binary packages support
the GPU algorithm (``gpu_hist``) on machines with NVIDIA GPUs. Please note that **training
with multiple GPUs is only supported for Linux platform**. See :doc:`gpu/index`. Also we
have both stable releases and nightly builds, see below for how to install them. For
building from source, visit :doc:`this page </build>`.
the GPU algorithm (``device=cuda:0``) on machines with NVIDIA GPUs. Please note that
**training with multiple GPUs is only supported for Linux platform**. See
:doc:`gpu/index`. Also we have both stable releases and nightly builds, see below for how
to install them. For building from source, visit :doc:`this page </build>`.
.. contents:: Contents
@@ -189,7 +189,7 @@ This will check out the latest stable version from the Maven Central.
For the latest release version number, please check `release page <https://github.com/dmlc/xgboost/releases>`_.
To enable the GPU algorithm (``tree_method='gpu_hist'``), use artifacts ``xgboost4j-gpu_2.12`` and ``xgboost4j-spark-gpu_2.12`` instead (note the ``gpu`` suffix).
To enable the GPU algorithm (``device='cuda'``), use artifacts ``xgboost4j-gpu_2.12`` and ``xgboost4j-spark-gpu_2.12`` instead (note the ``gpu`` suffix).
.. note:: Windows not supported in the JVM package
@@ -325,4 +325,4 @@ The SNAPSHOT JARs are hosted by the XGBoost project. Every commit in the ``maste
You can browse the file listing of the Maven repository at https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/list.html.
To enable the GPU algorithm (``tree_method='gpu_hist'``), use artifacts ``xgboost4j-gpu_2.12`` and ``xgboost4j-spark-gpu_2.12`` instead (note the ``gpu`` suffix).
To enable the GPU algorithm (``device='cuda'``), use artifacts ``xgboost4j-gpu_2.12`` and ``xgboost4j-spark-gpu_2.12`` instead (note the ``gpu`` suffix).

View File

@@ -23,8 +23,8 @@ Installation
:local:
:backlinks: none
Checkout the :doc:`Installation Guide </install>` for how to install jvm package, or
:doc:`Building from Source </build>` on how to build it form source.
Checkout the :doc:`Installation Guide </install>` for how to install the jvm package, or
:doc:`Building from Source </build>` on how to build it from the sources.
********
Contents

View File

@@ -129,7 +129,7 @@ With parameters and data, you are able to train a booster model.
booster.saveModel("model.bin");
* Generaing model dump with feature map
* Generating model dump with feature map
.. code-block:: java

View File

@@ -121,7 +121,7 @@ To train a XGBoost model for classification, we need to claim a XGBoostClassifie
"objective" -> "multi:softprob",
"num_class" -> 3,
"num_round" -> 100,
"tree_method" -> "gpu_hist",
"device" -> "cuda",
"num_workers" -> 1)
val featuresNames = schema.fieldNames.filter(name => name != labelName)
@@ -130,15 +130,14 @@ To train a XGBoost model for classification, we need to claim a XGBoostClassifie
.setFeaturesCol(featuresNames)
.setLabelCol(labelName)
The available parameters for training a XGBoost model can be found in :doc:`here </parameter>`.
Similar to the XGBoost4J-Spark package, in addition to the default set of parameters,
XGBoost4J-Spark-GPU also supports the camel-case variant of these parameters to be
consistent with Spark's MLlib naming convention.
The ``device`` parameter is for informing XGBoost that CUDA devices should be used instead of CPU. Unlike the single-node mode, GPUs are managed by spark instead of by XGBoost. Therefore, explicitly specified device ordinal like ``cuda:1`` is not support.
The available parameters for training a XGBoost model can be found in :doc:`here </parameter>`. Similar to the XGBoost4J-Spark package, in addition to the default set of parameters, XGBoost4J-Spark-GPU also supports the camel-case variant of these parameters to be consistent with Spark's MLlib naming convention.
Specifically, each parameter in :doc:`this page </parameter>` has its equivalent form in
XGBoost4J-Spark-GPU with camel case. For example, to set ``max_depth`` for each tree, you can pass
parameter just like what we did in the above code snippet (as ``max_depth`` wrapped in a Map), or
you can do it through setters in XGBoostClassifer:
XGBoost4J-Spark-GPU with camel case. For example, to set ``max_depth`` for each tree, you
can pass parameter just like what we did in the above code snippet (as ``max_depth``
wrapped in a Map), or you can do it through setters in XGBoostClassifer:
.. code-block:: scala

View File

@@ -34,6 +34,20 @@ General Parameters
- Which booster to use. Can be ``gbtree``, ``gblinear`` or ``dart``; ``gbtree`` and ``dart`` use tree based models while ``gblinear`` uses linear functions.
* ``device`` [default= ``cpu``]
.. versionadded:: 2.0.0
- Device for XGBoost to run. User can set it to one of the following values:
+ ``cpu``: Use CPU.
+ ``cuda``: Use a GPU (CUDA device).
+ ``cuda:<ordinal>``: ``<ordinal>`` is an integer that specifies the ordinal of the GPU (which GPU do you want to use if you have more than one devices).
+ ``gpu``: Default GPU device selection from the list of available and supported devices. Only ``cuda`` devices are supported currently.
+ ``gpu:<ordinal>``: Default GPU device selection from the list of available and supported devices. Only ``cuda`` devices are supported currently.
For more information about GPU acceleration, see :doc:`/gpu/index`. In distributed environments, ordinal selection is handled by distributed frameworks instead of XGBoost. As a result, using ``cuda:<ordinal>`` will result in an error. Use ``cuda`` instead.
* ``verbosity`` [default=1]
- Verbosity of printing messages. Valid values are 0 (silent), 1 (warning), 2 (info), 3
@@ -44,7 +58,7 @@ General Parameters
* ``validate_parameters`` [default to ``false``, except for Python, R and CLI interface]
- When set to True, XGBoost will perform validation of input parameters to check whether
a parameter is used or not.
a parameter is used or not. A warning is emitted when there's unknown parameter.
* ``nthread`` [default to maximum number of threads available if not set]
@@ -55,10 +69,6 @@ General Parameters
- Flag to disable default metric. Set to 1 or ``true`` to disable.
* ``num_feature`` [set automatically by XGBoost, no need to be set by user]
- Feature dimension used in boosting, set to maximum dimension of the feature
Parameters for Tree Booster
===========================
* ``eta`` [default=0.3, alias: ``learning_rate``]
@@ -99,7 +109,7 @@ Parameters for Tree Booster
- ``gradient_based``: the selection probability for each training instance is proportional to the
*regularized absolute value* of gradients (more specifically, :math:`\sqrt{g^2+\lambda h^2}`).
``subsample`` may be set to as low as 0.1 without loss of model accuracy. Note that this
sampling method is only supported when ``tree_method`` is set to ``gpu_hist``; other tree
sampling method is only supported when ``tree_method`` is set to ``hist`` and the device is ``cuda``; other tree
methods only support ``uniform`` sampling.
* ``colsample_bytree``, ``colsample_bylevel``, ``colsample_bynode`` [default=1]
@@ -131,26 +141,15 @@ Parameters for Tree Booster
* ``tree_method`` string [default= ``auto``]
- The tree construction algorithm used in XGBoost. See description in the `reference paper <http://arxiv.org/abs/1603.02754>`_ and :doc:`treemethod`.
- XGBoost supports ``approx``, ``hist`` and ``gpu_hist`` for distributed training. Experimental support for external memory is available for ``approx`` and ``gpu_hist``.
- Choices: ``auto``, ``exact``, ``approx``, ``hist``, ``gpu_hist``, this is a
combination of commonly used updaters. For other updaters like ``refresh``, set the
parameter ``updater`` directly.
- Choices: ``auto``, ``exact``, ``approx``, ``hist``, this is a combination of commonly
used updaters. For other updaters like ``refresh``, set the parameter ``updater``
directly.
- ``auto``: Use heuristic to choose the fastest method.
- For small dataset, exact greedy (``exact``) will be used.
- For larger dataset, approximate algorithm (``approx``) will be chosen. It's
recommended to try ``hist`` and ``gpu_hist`` for higher performance with large
dataset.
(``gpu_hist``)has support for ``external memory``.
- Because old behavior is always use exact greedy in single machine, user will get a
message when approximate algorithm is chosen to notify this choice.
- ``auto``: Same as the ``hist`` tree method.
- ``exact``: Exact greedy algorithm. Enumerates all split candidates.
- ``approx``: Approximate greedy algorithm using quantile sketch and gradient histogram.
- ``hist``: Faster histogram optimized approximate greedy algorithm.
- ``gpu_hist``: GPU implementation of ``hist`` algorithm.
* ``scale_pos_weight`` [default=1]
@@ -163,7 +162,8 @@ Parameters for Tree Booster
- ``grow_colmaker``: non-distributed column-based construction of trees.
- ``grow_histmaker``: distributed tree construction with row-based data splitting based on global proposal of histogram counting.
- ``grow_quantile_histmaker``: Grow tree using quantized histogram.
- ``grow_gpu_hist``: Grow tree with GPU.
- ``grow_gpu_hist``: Enabled when ``tree_method`` is set to ``hist`` along with ``device=cuda``.
- ``grow_gpu_approx``: Enabled when ``tree_method`` is set to ``approx`` along with ``device=cuda``.
- ``sync``: synchronizes trees in all distributed nodes.
- ``refresh``: refreshes tree's statistics and/or leaf values based on the current data. Note that no random subsampling of data rows is performed.
- ``prune``: prunes the splits where loss < min_split_loss (or gamma) and nodes that have depth greater than ``max_depth``.
@@ -183,7 +183,7 @@ Parameters for Tree Booster
* ``grow_policy`` [default= ``depthwise``]
- Controls a way new nodes are added to the tree.
- Currently supported only if ``tree_method`` is set to ``hist``, ``approx`` or ``gpu_hist``.
- Currently supported only if ``tree_method`` is set to ``hist`` or ``approx``.
- Choices: ``depthwise``, ``lossguide``
- ``depthwise``: split at nodes closest to the root.
@@ -195,22 +195,10 @@ Parameters for Tree Booster
* ``max_bin``, [default=256]
- Only used if ``tree_method`` is set to ``hist``, ``approx`` or ``gpu_hist``.
- Only used if ``tree_method`` is set to ``hist`` or ``approx``.
- Maximum number of discrete bins to bucket continuous features.
- Increasing this number improves the optimality of splits at the cost of higher computation time.
* ``predictor``, [default= ``auto``]
- The type of predictor algorithm to use. Provides the same results but allows the use of GPU or CPU.
- ``auto``: Configure predictor based on heuristics.
- ``cpu_predictor``: Multicore CPU prediction algorithm.
- ``gpu_predictor``: Prediction using GPU. Used when ``tree_method`` is ``gpu_hist``.
When ``predictor`` is set to default value ``auto``, the ``gpu_hist`` tree method is
able to provide GPU based prediction without copying training data to GPU memory.
If ``gpu_predictor`` is explicitly specified, then all data is copied into GPU, only
recommended for performing prediction tasks.
* ``num_parallel_tree``, [default=1]
- Number of parallel trees constructed during each iteration. This option is used to support boosted random forest.
@@ -238,6 +226,15 @@ Parameters for Tree Booster
- ``one_output_per_tree``: One model for each target.
- ``multi_output_tree``: Use multi-target trees.
* ``max_cached_hist_node``, [default = 65536]
Maximum number of cached nodes for CPU histogram.
.. versionadded:: 2.0.0
- For most of the cases this parameter should not be set except for growing deep trees
on CPU.
.. _cat-param:
Parameters for Categorical Feature
@@ -332,7 +329,7 @@ Parameters for Linear Booster (``booster=gblinear``)
- Choice of algorithm to fit linear model
- ``shotgun``: Parallel coordinate descent algorithm based on shotgun algorithm. Uses 'hogwild' parallelism and therefore produces a nondeterministic solution on each run.
- ``coord_descent``: Ordinary coordinate descent algorithm. Also multithreaded but still produces a deterministic solution.
- ``coord_descent``: Ordinary coordinate descent algorithm. Also multithreaded but still produces a deterministic solution. When the ``device`` parameter is set to ``cuda`` or ``gpu``, a GPU variant would be used.
* ``feature_selector`` [default= ``cyclic``]
@@ -357,7 +354,7 @@ Specify the learning task and the corresponding learning objective. The objectiv
- ``reg:squarederror``: regression with squared loss.
- ``reg:squaredlogerror``: regression with squared log loss :math:`\frac{1}{2}[log(pred + 1) - log(label + 1)]^2`. All input labels are required to be greater than -1. Also, see metric ``rmsle`` for possible issue with this objective.
- ``reg:logistic``: logistic regression.
- ``reg:logistic``: logistic regression, output probability
- ``reg:pseudohubererror``: regression with Pseudo Huber loss, a twice differentiable alternative to absolute loss.
- ``reg:absoluteerror``: Regression with L1 error. When tree model is used, leaf value is refreshed after tree construction. If used in distributed training, the leaf value is calculated as the mean value from all workers, which is not guaranteed to be optimal.

Some files were not shown because too many files have changed in this diff Show More