Compare commits

...

522 Commits

Author SHA1 Message Date
Jiaming Yuan
096047c547 Make 2.0 release. (#9567) 2023-09-12 00:20:49 +08:00
Jiaming Yuan
e75dd75bb2 [backport] [pyspark] support gpu transform (#9542) (#9559)
---------

Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2023-09-07 17:21:09 +08:00
Jiaming Yuan
4d387cbfbf [backport] [pyspark] rework transform to reuse same code (#9292) (#9558)
Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2023-09-07 15:26:24 +08:00
Jiaming Yuan
3fde9361d7 [backport] Fix inplace predict with fallback when base margin is used. (#9536) (#9548)
- Copy meta info from proxy DMatrix.
- Use `std::call_once` to emit fewer warnings.
2023-09-05 23:38:06 +08:00
Jiaming Yuan
b67c2ed96d [backport] [CI] bump setup-r action version. (#9544) (#9551) 2023-09-05 22:10:30 +08:00
Jiaming Yuan
177fd79864 [backport] Fix read the doc configuration. [skip ci] (#9549) 2023-09-05 17:32:00 +08:00
Jiaming Yuan
06487d3896 [backport] Fix GPU categorical split memory allocation. (#9529) (#9535) 2023-08-29 21:14:43 +08:00
Jiaming Yuan
e50ccc4d3c [R] Fix integer inputs with NA. (#9522) (#9534) 2023-08-29 19:52:13 +08:00
Jiaming Yuan
add57f8880 [backport] Delay the check for vector leaf. (#9509) (#9533) 2023-08-29 18:25:59 +08:00
Jiaming Yuan
a0d3573c74 [backport] Fix device dispatch for linear updater. (#9507) (#9532) 2023-08-29 15:10:43 +08:00
Jiaming Yuan
4301558a57 Make 2.0.0 RC1. (#9492) 2023-08-17 16:16:51 +08:00
Bobby Wang
68be454cfa [pyspark] hotfix for GPU setup validation (#9495)
* [pyspark] fix a bug in validating the gpu configuration

---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-08-17 16:01:39 +08:00
Jiaming Yuan
5188e27513 Fix version parsing with rc release. (#9493) 2023-08-16 22:44:58 +08:00
Jiaming Yuan
f380c10a93 Use hint for find nccl. (#9490) 2023-08-16 16:08:41 +08:00
Sean Yang
12fe2fc06c Fix federated learning demos and tests (#9488) 2023-08-16 15:25:05 +08:00
Jiaming Yuan
b2e93d2742 [doc] Quick note for the device parameter. [skip ci] (#9483) 2023-08-16 13:35:55 +08:00
Jiaming Yuan
c061e3ae50 [jvm-packages] Bump rapids version. (#9482) 2023-08-15 16:26:42 -07:00
James Lamb
b82e78c169 [R] remove commented-out code (#9481) 2023-08-15 13:44:08 +08:00
Boris
8463107013 Updated versions. Reorganised dependencies. (#9479) 2023-08-14 14:28:28 -07:00
Jiaming Yuan
19b59938b7 Convert input to str for hypothesis note. (#9480) 2023-08-15 02:27:58 +08:00
James Lamb
e3f624d8e7 [R] remove more uses of default values in internal functions (#9476) 2023-08-14 22:18:33 +08:00
James Lamb
2c84daeca7 [R] [doc] remove documentation index entries for internal functions (#9477) 2023-08-14 22:18:02 +08:00
Bobby Wang
344f90b67b [jvm-packages] throw exception when tree_method=approx and device=cuda (#9478)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-08-14 17:52:14 +08:00
Jiaming Yuan
05d7000096 Handle special characters in JSON model dump. (#9474) 2023-08-14 15:49:00 +08:00
github-actions[bot]
f03463c45b [CI] Update RAPIDS to latest stable (#9464)
* [CI] Update RAPIDS to latest stable

* [CI] Use CMake 3.26.4

---------

Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2023-08-13 18:54:37 -07:00
Jiaming Yuan
fd4335d0bf [doc] Document the current status of some features. (#9469) 2023-08-13 23:42:27 +08:00
Jiaming Yuan
801116c307 Test scikit-learn model IO with gblinear. (#9459) 2023-08-13 23:41:49 +08:00
Jiaming Yuan
bb56183396 Normalize file system path. (#9463) 2023-08-11 21:26:46 +08:00
Jiaming Yuan
bdc1a3c178 Fix pyspark parameter. (#9460)
- Don't pass the `use_gpu` parameter to the learner.
- Fix GPU approx with PySpark.
2023-08-11 19:07:50 +08:00
James Lamb
428f6cbbe2 [R] remove default values in internal booster manipulation functions (#9461) 2023-08-11 15:07:18 +08:00
ShaneConneely
d638535581 Update README.md (#9462) 2023-08-11 04:02:04 +08:00
James Lamb
44bd2981b2 [R] remove default values in internal utility functions (#9457) 2023-08-10 21:40:59 +08:00
James Lamb
9dbb71490c [Doc] fix typos in documentation (#9458) 2023-08-10 19:26:36 +08:00
James Lamb
4359356d46 [R] [CI] use lintr 3.1.0 (#9456) 2023-08-10 17:49:16 +08:00
Jiaming Yuan
1caa93221a Use realloc for histogram cache and expose the cache limit. (#9455) 2023-08-10 14:05:27 +08:00
Jiaming Yuan
a57371ef7c Fix links in R doc. (#9450) 2023-08-10 02:38:14 +08:00
Jiaming Yuan
f05a23b41c Use weakref instead of id for DataIter cache. (#9445)
- Fix case where Python reuses id from freed objects.
- Small optimization to column matrix with QDM by using `realloc` instead of copying data.
2023-08-10 00:40:06 +08:00
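The id-reuse hazard behind the first bullet is easy to reproduce in plain Python: a cache keyed by `id()` can silently alias a new object, while a weak reference dies with its referent. A minimal illustration (not xgboost code; the `Data` class is hypothetical):

```python
import weakref

class Data:
    pass

a = Data()
key = id(a)
del a          # the allocator may now hand the same address to a new object
b = Data()     # so id(b) == key is possible, and a cache keyed by id would hit

c = Data()
ref = weakref.ref(c)   # a weakref key instead becomes dead with its referent
del c
assert ref() is None   # the stale cache entry is unambiguously invalid
```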
Bobby Wang
d495a180d8 [pyspark] add logs for training (#9449) 2023-08-09 18:32:23 +08:00
joshbrowning2358
7f854848d3 Update R docs based on deprecated parameters/behaviour (#9437) 2023-08-09 17:04:28 +08:00
Jiaming Yuan
f05294a6f2 Fix clang warnings. (#9447)
- Static function in a header (which is marked as unused due to translation-unit
visibility).
- Implicit copy operator is deprecated.
- Unused lambda capture.
- Moving a temporary variable prevents copy elision.
2023-08-09 15:34:45 +08:00
Philip Hyunsu Cho
819098a48f [R] Handle UTF-8 paths on Windows (#9448) 2023-08-08 21:29:19 -07:00
Jiaming Yuan
c1b2cff874 [CI] Check compiler warnings. (#9444) 2023-08-08 12:02:45 -07:00
Philip Hyunsu Cho
7ce090e775 Handle UTF-8 paths correctly on Windows platform (#9443)
* Fix round-trip serialization with UTF-8 paths

* Add compiler version check

* Add comment to C API functions

* Add Python tests

* [CI] Update macOS deployment target

* Use std::filesystem instead of dmlc::TemporaryDirectory
2023-08-07 23:27:25 -07:00
Jiaming Yuan
97fd5207dd Use lambda function in ParallelFor2D. (#9441) 2023-08-08 14:04:46 +08:00
Jiaming Yuan
54029a59af Bound the size of the histogram cache. (#9440)
- A new histogram collection with a limit in size.
- Unify histogram building logic between hist, multi-hist, and approx.
2023-08-08 03:21:26 +08:00
Philip Hyunsu Cho
5bd163aa25 Explicitly specify libcudart_static in CMake config (#9436) 2023-08-05 14:15:44 -07:00
Philip Hyunsu Cho
7fc57f3974 Remove Koffie Labs from Sponsors list (#9434) 2023-08-04 06:52:27 -07:00
Rong Ou
bde1ebc209 Switch back to the GPUIDX macro (#9438) 2023-08-04 15:14:31 +08:00
Philip Hyunsu Cho
1aabc690ec [Doc] Clarify the output behavior of reg:logistic (#9435) 2023-08-03 20:42:07 -07:00
jinmfeng001
04c99683c3 Change training stage from ResultStage to ShuffleMapStage (#9423) 2023-08-03 23:40:04 +08:00
Jiaming Yuan
1332ff787f Unify the code path between local and distributed training. (#9433)
This removes the need for a local histogram space during distributed training, which cuts the cache size by half.
2023-08-03 21:46:36 +08:00
Hendrik Makait
f958e32683 Raise if expected workers are not alive in xgboost.dask.train (#9421) 2023-08-03 20:14:07 +08:00
Jiaming Yuan
7129988847 Accept only keyword arguments in data iterator. (#9431) 2023-08-03 12:44:16 +08:00
Jiaming Yuan
e93a274823 Small cleanup for histogram routines. (#9427)
* Small cleanup for histogram routines.

- Extract hist train param from GPU hist.
- Make histogram const after construction.
- Unify parameter names.
2023-08-02 18:28:26 +08:00
Rong Ou
c2b85ab68a Clean up MGPU C++ tests (#9430) 2023-08-02 14:31:18 +08:00
Jiaming Yuan
a9da2e244a [CI] Update github actions. (#9428) 2023-08-01 23:03:53 +08:00
Jiaming Yuan
912e341d57 Initial GPU support for the approx tree method. (#9414) 2023-07-31 15:50:28 +08:00
Bobby Wang
8f0efb4ab3 [jvm-packages] automatically set the max/min direction for best score (#9404) 2023-07-27 11:09:55 +08:00
Rong Ou
7579905e18 Retry switching to per-thread default stream (#9416) 2023-07-26 07:09:12 +08:00
Nicholas Hilton
54579da4d7 [doc] Fix typo in prediction.rst (#9415)
Typo for `pred_contribs` and `pred_interactions`
2023-07-26 07:03:04 +08:00
Jiaming Yuan
3a9996173e Revert "Switch to per-thread default stream (#9396)" (#9413)
This reverts commit f7f673b00c.
2023-07-24 12:03:28 -07:00
Bobby Wang
1b657a5513 [jvm-packages] set device to cuda when tree method is "gpu_hist" (#9412) 2023-07-24 18:32:25 +08:00
Jiaming Yuan
a196443a07 Implement sketching with Hessian on GPU. (#9399)
- Prepare for implementing approx on GPU.
- Unify the code path between weighted and uniform sketching on DMatrix.
2023-07-24 15:43:03 +08:00
Jiaming Yuan
851cba931e Define best_iteration only if early stopping is used. (#9403)
* Define `best_iteration` only if early stopping is used.

This is the behavior specified by the document but not honored in the actual code.

- Don't set the attributes if there's no early stopping.
- Clean up the code for callbacks, and replace assertions with proper exceptions.
- Assign the attributes when early stopping `save_best` is used.
- Turn the attributes into Python properties.

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-07-24 12:43:35 +08:00
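A sketch of the 2.0 behavior with the scikit-learn interface, assuming synthetic data:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 10)), rng.normal(size=500)
X_train, X_valid, y_train, y_valid = X[:400], X[400:], y[:400], y[400:]

reg = xgb.XGBRegressor(n_estimators=100, early_stopping_rounds=5)
reg.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])
print(reg.best_iteration)   # defined: early stopping was configured

reg2 = xgb.XGBRegressor(n_estimators=10).fit(X_train, y_train)
# reg2.best_iteration       # no early stopping: accessing this now raises
```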
Jiaming Yuan
01e00efc53 [breaking] Remove support for single string feature info. (#9401)
- Input must be a sequence of strings.
- Improve validation error message.
2023-07-24 11:06:30 +08:00
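The change concerns `feature_names`/`feature_types`; a quick sketch of what 2.0 accepts (names are illustrative):

```python
import numpy as np
import xgboost as xgb

dtrain = xgb.DMatrix(np.zeros((4, 2)))
dtrain.feature_names = ["f0", "f1"]          # OK: a sequence of strings
dtrain.feature_types = ["float", "float"]    # OK: one entry per feature
# dtrain.feature_names = "f0"                # rejected in 2.0: a bare string
```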
Jiaming Yuan
275da176ba Document for device ordinal. (#9398)
- Rewrite GPU demos. The notebook is converted to a script to avoid committing additional png plots.
- Add GPU demos into the sphinx gallery.
- Add RMM demos into the sphinx gallery.
- Test for firing threads with different device ordinals.
2023-07-22 15:26:29 +08:00
Jiaming Yuan
22b0a55a04 Remove hist builder class. (#9400)
* Remove hist builder class.

* Clean up this stateless class.

* Add comment to thread block.
2023-07-22 10:43:12 +08:00
Jiaming Yuan
0de7c47495 Fix metric serialization. (#9405) 2023-07-22 08:39:21 +08:00
Jiaming Yuan
dbd5309b55 Fix warning message for device. (#9402) 2023-07-20 23:30:04 +08:00
Rong Ou
f7f673b00c Switch to per-thread default stream (#9396) 2023-07-20 08:21:00 +08:00
Jiaming Yuan
7a0ccfbb49 Add compute 90. (#9397) 2023-07-19 13:42:38 +08:00
Jiaming Yuan
0897477af0 Remove unmaintained jvm readme and dev scripts. (#9395) 2023-07-18 18:23:43 +08:00
Philip Hyunsu Cho
e082718c66 [CI] Build pip wheel with RMM support (#9383) 2023-07-18 01:52:26 -07:00
Jiaming Yuan
6e18d3a290 [pyspark] Handle the device parameter in pyspark. (#9390)
- Handle the new `device` parameter in PySpark.
- Deprecate the old `use_gpu` parameter.
2023-07-18 08:47:03 +08:00
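For the PySpark estimators this means `device="cuda"` replaces the deprecated `use_gpu=True`. A hedged sketch (column names are illustrative):

```python
from xgboost.spark import SparkXGBClassifier

# 2.0 style: request GPU training with device instead of the deprecated use_gpu.
clf = SparkXGBClassifier(
    features_col="features",   # illustrative column names
    label_col="label",
    device="cuda",
)
# model = clf.fit(train_df)    # train_df: a Spark DataFrame with those columns
```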
Philip Hyunsu Cho
2a0ff209ff [CI] Block CI from running for dependabot PRs (#9394) 2023-07-17 10:53:57 -07:00
Jiaming Yuan
f4fb2be101 [jvm-packages] Add the new device parameter. (#9385) 2023-07-17 18:40:39 +08:00
Jiaming Yuan
2caceb157d [jvm-packages] Reduce log verbosity for GPU tests. (#9389) 2023-07-17 13:25:46 +08:00
Jiaming Yuan
b342ef951b Make feature validation immutable. (#9388) 2023-07-16 06:52:55 +08:00
Jiaming Yuan
0a07900b9f Fix integer overflow. (#9380) 2023-07-15 21:11:02 +08:00
Jiaming Yuan
16eb41936d Handle the new device parameter in dask and demos. (#9386)
* Handle the new `device` parameter in dask and demos.

- Check no ordinal is specified in the dask interface.
- Update demos.
- Update dask doc.
- Update the condition for QDM.
2023-07-15 19:11:20 +08:00
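A sketch of the dask-side convention, assuming a GPU-backed cluster:

```python
import xgboost as xgb
from dask import array as da

def fit(client):
    """Sketch: client is a dask.distributed Client backed by GPU workers."""
    X = da.random.random((1000, 10), chunks=(100, 10))
    y = X.sum(axis=1)
    # Pass device="cuda" without an ordinal; the dask interface rejects
    # "cuda:1" because each worker selects its own GPU.
    reg = xgb.dask.DaskXGBRegressor(tree_method="hist", device="cuda")
    reg.client = client
    reg.fit(X, y)
    return reg
```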
Jiaming Yuan
9da5050643 Turn warning messages into Python warnings. (#9387) 2023-07-15 07:46:43 +08:00
Jiaming Yuan
04aff3af8e Define the new device parameter. (#9362) 2023-07-13 19:30:25 +08:00
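This is the headline parameter change of the 2.0 series: `device` replaces `gpu_id` (and the `gpu_hist`-style parameters) for selecting hardware. A minimal sketch with synthetic data:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(256, 8)), rng.normal(size=256)

# Pick the algorithm with tree_method and the hardware with device.
reg = xgb.XGBRegressor(tree_method="hist", device="cuda")  # or "cpu", "cuda:1"
reg.fit(X, y)
```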
Cássia Sampaio
2d0cd2817e [doc] Fix learning_to_rank.rst (#9381)
Just adding one missing bracket.
2023-07-13 11:00:24 +08:00
jinmfeng001
a1367ea1f8 Set feature_names and feature_types in jvm-packages (#9364)
* 1. Add parameters to set feature names and feature types
2. Save feature names and feature types to native json model

* Change serialization and deserialization format to ubj.
2023-07-12 15:18:46 +08:00
Rong Ou
3632242e0b Support column split with GPU quantile (#9370) 2023-07-11 12:15:56 +08:00
Jiaming Yuan
97ed944209 Unify the hist tree method for different devices. (#9363) 2023-07-11 10:04:39 +08:00
Jiaming Yuan
20c52f07d2 Support exporting cut values (#9356) 2023-07-08 15:32:41 +08:00
edumugi
c3124813e8 Support numpy vertical split (#9365) 2023-07-08 13:18:12 +08:00
Jiaming Yuan
59787b23af Allow empty page in external memory. (#9361) 2023-07-08 09:24:35 +08:00
Rong Ou
15ca12a77e Fix NCCL test hang (#9367) 2023-07-07 11:21:35 +08:00
Jiaming Yuan
41c6813496 Preserve order of saved updaters config. (#9355)
- Save the updater sequence as an array instead of object.
- Warn only once.

Compatibility is kept, but we should be able to break it, as the config is not loaded
in pickled models and it is declared unstable.
2023-07-05 20:20:07 +08:00
Jiaming Yuan
b572a39919 [doc] Fix removed reference. (#9358) 2023-07-05 16:49:25 +08:00
Jiaming Yuan
645037e376 Improve test coverage with predictor configuration. (#9354)
* Improve test coverage with predictor configuration.

- Test with ext memory.
- Test with QDM.
- Test with dart.
2023-07-05 15:17:22 +08:00
Oliver Holworthy
6c9c8a9001 Enable Installation of Python Package with System lib in a Virtual Environment (#9349) 2023-07-05 05:46:17 +08:00
Boris
bb2de1fd5d xgboost4j-gpu_2.12-2.0.0: added libxgboost4j.so back. (#9351) 2023-07-04 03:31:33 +08:00
Jiaming Yuan
d0916849a6 Remove unused weight from buffer for cat features. (#9341) 2023-07-04 01:07:09 +08:00
Jiaming Yuan
6155394a06 Update news for 1.7.6 [skip ci] (#9350) 2023-07-04 01:04:34 +08:00
Jiaming Yuan
e964654b8f [skl] Enable cat feature without specifying tree method. (#9353) 2023-07-03 22:06:17 +08:00
Jiaming Yuan
39390cc2ee [breaking] Remove the predictor param, allow fallback to prediction using DMatrix. (#9129)
- A `DeviceOrd` struct is implemented to indicate the device. It will eventually replace the `gpu_id` parameter.
- The `predictor` parameter is removed.
- Fallback to `DMatrix` when `inplace_predict` is not available.
- The heuristic for choosing a predictor is only used during training.
2023-07-03 19:23:54 +08:00
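A sketch of how prediction looks without the `predictor` parameter (synthetic data for illustration):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(128, 4)), rng.normal(size=128)
booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, y), num_boost_round=8)

# Prediction follows the booster's device; if inplace prediction cannot
# consume the input, xgboost now falls back to the DMatrix path instead of
# consulting a predictor parameter.
pred = booster.inplace_predict(X)
pred_dm = booster.predict(xgb.DMatrix(X))   # the explicit DMatrix path
```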
Rong Ou
3a0f787703 Support column split in GPU predictor (#9343) 2023-07-03 04:05:34 +08:00
Rong Ou
f90771eec6 Fix device communicator dependency (#9346) 2023-06-29 10:34:30 +08:00
Jiaming Yuan
f4798718c7 Use hist as the default tree method. (#9320) 2023-06-27 23:04:24 +08:00
Jiaming Yuan
bc267dd729 Use ptr from mmap for GHistIndexMatrix and ColumnMatrix. (#9315)
* Use ptr from mmap for `GHistIndexMatrix` and `ColumnMatrix`.

- Define a resource for holding various types of memory pointers.
- Define ref vector for holding resources.
- Swap the underlying resources for GHist and ColumnM.
- Add documentation for current status.
- s390x support is removed. It should still work if you can compile XGBoost; all the old workaround code did was get GCC to compile.
2023-06-27 19:05:46 +08:00
jasjung
96c3071a8a [doc] Update learning_to_rank.rst (#9336) 2023-06-27 13:56:18 +08:00
Jiaming Yuan
cfa9c42eb4 Fix callback in AFT viz demo. (#9333)
* Fix callback in AFT viz demo.

- Update the callback function.
- Add lint check.
2023-06-26 22:35:02 +08:00
Jiaming Yuan
6efe7c129f [doc] Update reference in R vignettes. (#9323) 2023-06-26 18:32:11 +08:00
Jiaming Yuan
54da4b3185 Cleanup to prepare for using mmap pointer in external memory. (#9317)
- Update SparseDMatrix comment.
- Use a pointer in the bitfield. We will replace the `std::vector<bool>` in `ColumnMatrix` with bitfield.
- Clean up the page source. The timer is removed as it's inaccurate once we swap the mmap pointer into the page.
2023-06-22 06:43:11 +08:00
Jiaming Yuan
4066d68261 [doc] Clarify early stopping. (#9304) 2023-06-20 17:56:47 +08:00
Jiaming Yuan
6d22ea793c Test QDM with sparse data on CPU. (#9316) 2023-06-19 21:27:03 +08:00
Jiaming Yuan
ee6809e642 Use mmap for external memory. (#9282)
- Have basic infrastructure for mmap.
- Release file write handle.
2023-06-19 18:52:55 +08:00
Rong Ou
d8beb517ed Support bitwise allreduce in NCCL communicator (#9300) 2023-06-17 01:56:50 +08:00
George Othon
2718ff530c [doc] Variable 'label' is not defined in the pyspark application example (#9302) 2023-06-16 05:06:52 +08:00
Jacek Laskowski
0df1272695 [docs] How to build the docs using conda (#9276)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-06-15 07:39:26 +08:00
Rong Ou
e70810be8a Refactor device communicator to make allreduce more flexible (#9295) 2023-06-14 03:53:03 +08:00
Philip Hyunsu Cho
c2f0486d37 [CI] Run two pipeline loaders for responsiveness (#9294) 2023-06-12 09:52:40 -07:00
Jake Blitch
aad1313154 Fix community.rst typos. (#9291) 2023-06-11 09:09:27 +08:00
ZHAOKAI WANG
2b76061659 remove redundant method in expand_entry (#9283) 2023-06-10 05:18:21 +08:00
Jiaming Yuan
152e2fb072 Unify test helpers for creating ctx. (#9274) 2023-06-10 03:35:22 +08:00
Jiaming Yuan
ea0deeca68 Disable dense optimization in hist for distributed training. (#9272) 2023-06-10 02:31:34 +08:00
github-actions[bot]
8c1065f645 [CI] Update RAPIDS to latest stable (#9278)
Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
2023-06-09 09:55:08 -07:00
Jiaming Yuan
1fcc26a6f8 Set ndcg to default for LTR. (#8822)
- Add document.
- Add tests.
- Use `ndcg` with `topk` as default.
2023-06-09 23:31:33 +08:00
Philip Hyunsu Cho
e4dd6051a0 Use good commit message when updating Rapids 2023-06-08 19:30:25 -07:00
Philip Hyunsu Cho
2ec2ecf013 Allow admin to manually trigger update_rapids workflow 2023-06-08 19:21:36 -07:00
Philip Hyunsu Cho
181dee13e9 Update update_rapids.yml 2023-06-08 19:11:49 -07:00
Rong Ou
ff122d61ff More tests for cpu predictor with column split (#9270) 2023-06-08 22:47:19 +08:00
ZHAOKAI WANG
84d3fcb7ea Fix cpu_predictor categorical feature dispatch (#9256) 2023-06-08 01:24:04 +08:00
dependabot[bot]
e229692572 Bump maven-surefire-plugin from 3.1.0 to 3.1.2 in /jvm-packages (#9265)
Bumps [maven-surefire-plugin](https://github.com/apache/maven-surefire) from 3.1.0 to 3.1.2.
- [Release notes](https://github.com/apache/maven-surefire/releases)
- [Commits](https://github.com/apache/maven-surefire/compare/surefire-3.1.0...surefire-3.1.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-surefire-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-07 20:53:20 +08:00
dependabot[bot]
4a5802ed2c Bump maven-project-info-reports-plugin in /jvm-packages (#9268)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.4.4 to 3.4.5.
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.4.4...maven-project-info-reports-plugin-3.4.5)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-07 19:07:36 +08:00
Jiaming Yuan
0cba2cdbb0 Support linalg data structures in check device. (#9243) 2023-06-06 09:47:24 +08:00
Jiaming Yuan
fc8110ef79 Remove document and demo in RABIT. (#9246) 2023-06-06 08:20:10 +08:00
Boris
7f9cb921f4 Rearranged maven profiles so that scala-2.13 artifacts are published without gpu-related libraries (#9253) 2023-06-05 13:52:10 -07:00
dependabot[bot]
a474a66573 Bump maven-release-plugin from 3.0.0 to 3.0.1 in /jvm-packages (#9252)
Bumps [maven-release-plugin](https://github.com/apache/maven-release) from 3.0.0 to 3.0.1.
- [Release notes](https://github.com/apache/maven-release/releases)
- [Commits](https://github.com/apache/maven-release/compare/maven-release-3.0.0...maven-release-3.0.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-release-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 21:29:59 +08:00
Rong Ou
962a20693f More support for column split in cpu predictor (#9244)
- Added column split support to `PredictInstance` and `PredictLeaf`.
- Refactoring of tests.
2023-06-05 08:05:38 +08:00
Philip Hyunsu Cho
3bf0f145bb Update update_rapids.yml 2023-06-03 13:12:12 -07:00
Philip Hyunsu Cho
a1fad72ab3 Update outdated build badges (#9232) 2023-06-02 08:22:25 -07:00
Philip Hyunsu Cho
288539ac78 [CI] Automatically bump Rapids version in containers (#9234)
* [CI] Use RAPIDS 23.04

* [CI] Remove outdated filters in dependabot

* [CI] Automatically bump Rapids version in containers

* Automate pull request
2023-06-02 08:17:41 -07:00
Jiaming Yuan
9fbde21e9d Rework the precision metric. (#9222)
- Rework the precision metric for both CPU and GPU.
- Mention it in the document.
- Cleanup old support code for GPU ranking metric.
- Deterministic GPU implementation.

* Drop support for classification.

* Cleanup error message.
2023-06-02 20:49:43 +08:00
Philip Hyunsu Cho
db8288121d Revert "Publishing scala-2.13 artifacts to the maven S3 repo. (#9224)" (#9233)
This reverts commit bb2a17b90c.
2023-06-01 14:39:39 -07:00
Boris
bb2a17b90c Publishing scala-2.13 artifacts to the maven S3 repo. (#9224) 2023-06-01 10:45:18 -07:00
dependabot[bot]
e93b805a75 Bump scala.version from 2.12.17 to 2.12.18 in /jvm-packages (#9230)
Bumps `scala.version` from 2.12.17 to 2.12.18.

Updates `scala-compiler` from 2.12.17 to 2.12.18
- [Release notes](https://github.com/scala/scala/releases)
- [Commits](https://github.com/scala/scala/compare/v2.12.17...v2.12.18)

Updates `scala-library` from 2.12.17 to 2.12.18
- [Release notes](https://github.com/scala/scala/releases)
- [Commits](https://github.com/scala/scala/compare/v2.12.17...v2.12.18)

---
updated-dependencies:
- dependency-name: org.scala-lang:scala-compiler
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.scala-lang:scala-library
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-01 10:44:43 -07:00
ZHAOKAI WANG
fa2ab1f021 TreeRefresher note word spelling modification (#9223) 2023-05-31 20:27:27 +08:00
Jiaming Yuan
aba4559c4f [doc] Update dask demo. (#9201) 2023-05-31 05:01:02 +08:00
Jiaming Yuan
7f20eaed93 [doc] Troubleshoot nccl shared memory. [skip ci] (#9206) 2023-05-31 05:00:02 +08:00
Jiaming Yuan
62e9387cd5 [ci] Update PySpark version. (#9214) 2023-05-31 03:00:44 +08:00
Jiaming Yuan
17fd3f55e9 Optimize adapter element counting on GPU. (#9209)
- Implement a simple `IterSpan` for passing iterators with size.
- Use shared memory for column size counts.
- Use one thread for each sample in row count to reduce atomic operations.
2023-05-30 23:28:43 +08:00
Jiaming Yuan
097f11b6e0 Support CUDA f16 without transformation. (#9207)
- Support f16 from cupy.
- Include CUDA header explicitly.
- Cleanup cmake nvtx support.
2023-05-30 20:54:31 +08:00
dependabot[bot]
6f83d9c69a Bump maven-project-info-reports-plugin in /jvm-packages (#9219)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.4.3 to 3.4.4.
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.4.3...maven-project-info-reports-plugin-3.4.4)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-30 19:10:07 +08:00
Jiaming Yuan
ae7450ce54 Skip optional synchronization in thrust. (#9212) 2023-05-30 17:23:09 +08:00
Jean Lescut-Muller
ddec0f378c [doc] Show derivative of the custom objective (#9213)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-05-30 04:07:12 +08:00
Bobby Wang
320323f533 [pyspark] add parameters in the ctor of all estimators. (#9202)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-05-29 05:58:16 +08:00
Jiaming Yuan
03bc6e6427 Remove unused variables. (#9210)
- Remove unused variables.
- Remove signed comparison warnings.
2023-05-28 05:24:15 +08:00
dependabot[bot]
d563d6d8f4 Bump scala-collection-compat_2.12 from 2.9.0 to 2.10.0 in /jvm-packages (#9208)
Bumps [scala-collection-compat_2.12](https://github.com/scala/scala-collection-compat) from 2.9.0 to 2.10.0.
- [Release notes](https://github.com/scala/scala-collection-compat/releases)
- [Commits](https://github.com/scala/scala-collection-compat/compare/v2.9.0...v2.10.0)

---
updated-dependencies:
- dependency-name: org.scala-lang.modules:scala-collection-compat_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-28 00:22:28 +08:00
Boris
a01df102c9 Scala 2.13 support. (#9099)
1. Updated the test logic
2. Added smoke tests for Spark examples.
3. Added integration tests for Spark with Scala 2.13
2023-05-27 19:34:02 +08:00
Jiaming Yuan
8c174ef2d3 [CI] Update images that are not related to binary release. (#9205)
* [CI] Update images that are not related to the binary release.

- Update clang-tidy, prefer tools from the Ubuntu repository.
- Update GPU image to 22.04.
- Small cleanup to the tidy script.
- Remove gpu_jvm, which seems to be unused.
2023-05-27 17:40:46 +08:00
michael-gendy-mention-me
c5677a2b2c Remove type: ignore hints (#9197) 2023-05-27 07:48:28 +08:00
Jiaming Yuan
053aababd4 Avoid thrust logical operation. (#9199)
The Thrust implementation of `thrust::all_of/any_of/none_of` adopts an early-stopping
strategy to bail out early by dividing the input into small batches. This is not ideal
for data validation, as we expect all data to be valid. The strategy leads to excessive
kernel launches and stream synchronization.

* Use reduce from dh instead.
2023-05-27 01:36:58 +08:00
dependabot[bot]
614f47c477 Bump flink-clients from 1.17.0 to 1.17.1 in /jvm-packages (#9203)
Bumps [flink-clients](https://github.com/apache/flink) from 1.17.0 to 1.17.1.
- [Commits](https://github.com/apache/flink/compare/release-1.17.0...release-1.17.1)

---
updated-dependencies:
- dependency-name: org.apache.flink:flink-clients
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-26 18:42:24 +08:00
Rong Ou
5b69534b43 Support column split in multi-target hist (#9171) 2023-05-26 16:56:05 +08:00
Rong Ou
acd363033e Fix running MGPU gtests (#9200) 2023-05-26 05:26:38 +08:00
dependabot[bot]
5d99b441d5 Bump scalatest_2.12 from 3.2.15 to 3.2.16 in /jvm-packages/xgboost4j (#9160)
Bumps [scalatest_2.12](https://github.com/scalatest/scalatest) from 3.2.15 to 3.2.16.
- [Release notes](https://github.com/scalatest/scalatest/releases)
- [Commits](https://github.com/scalatest/scalatest/compare/release-3.2.15...release-3.2.16)

---
updated-dependencies:
- dependency-name: org.scalatest:scalatest_2.12
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-24 09:09:25 +08:00
dependabot[bot]
e38e94ba4d Bump rapids-4-spark_2.12 from 23.04.0 to 23.04.1 in /jvm-packages (#9158)
Bumps rapids-4-spark_2.12 from 23.04.0 to 23.04.1.

---
updated-dependencies:
- dependency-name: com.nvidia:rapids-4-spark_2.12
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-24 07:15:46 +08:00
dependabot[bot]
d6d83c818f Bump maven-assembly-plugin from 3.5.0 to 3.6.0 in /jvm-packages (#9163)
Bumps [maven-assembly-plugin](https://github.com/apache/maven-assembly-plugin) from 3.5.0 to 3.6.0.
- [Commits](https://github.com/apache/maven-assembly-plugin/compare/maven-assembly-plugin-3.5.0...maven-assembly-plugin-3.6.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-assembly-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-23 13:56:12 -07:00
dependabot[bot]
22b0fc0992 Bump maven-source-plugin from 3.2.1 to 3.3.0 in /jvm-packages (#9184)
Bumps [maven-source-plugin](https://github.com/apache/maven-source-plugin) from 3.2.1 to 3.3.0.
- [Commits](https://github.com/apache/maven-source-plugin/compare/maven-source-plugin-3.2.1...maven-source-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-source-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-24 03:29:44 +08:00
dependabot[bot]
e67a0b8599 Bump maven-checkstyle-plugin from 3.2.2 to 3.3.0 in /jvm-packages (#9192)
Bumps [maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.2.2 to 3.3.0.
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.2.2...maven-checkstyle-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-24 01:43:47 +08:00
Jiaming Yuan
3913ff470f Import data lazily during tests. (#9176) 2023-05-23 03:58:31 +08:00
Bobby Wang
6274fba0a5 [pyspark] support tuning (#9172) 2023-05-19 14:39:26 +08:00
Bobby Wang
caf326d508 [pyspark] Refactor and typing support for models (#9156) 2023-05-17 16:38:51 +08:00
Bobby Wang
cb370c4f7d [jvm] separate spark.version for cpu and gpu (#9166) 2023-05-17 07:12:20 +08:00
Stephan T. Lavavej
7375bd058b Fix IndexTransformIter. (#9155) 2023-05-12 21:25:54 +08:00
Stephan T. Lavavej
59edfdb315 Fix typo: _defined => defined (#9153) 2023-05-11 16:34:45 -07:00
Stephan T. Lavavej
779b82c098 Avoid redefining macros. (#9154) 2023-05-11 15:59:25 -07:00
Rong Ou
603f8ce2fa Support hist in the partition builder under column split (#9120) 2023-05-11 05:24:29 +08:00
Rong Ou
52311dcec9 Fix multi-threaded gtests (#9148) 2023-05-10 19:15:32 +08:00
Jiaming Yuan
e4129ed6ee [jvm-packages] Remove akka in tester. (#9149) 2023-05-10 14:10:58 +08:00
dependabot[bot]
2ab6660943 Bump maven-surefire-plugin in /jvm-packages/xgboost4j-spark (#9131)
Bumps [maven-surefire-plugin](https://github.com/apache/maven-surefire) from 3.0.0 to 3.1.0.
- [Release notes](https://github.com/apache/maven-surefire/releases)
- [Commits](https://github.com/apache/maven-surefire/compare/surefire-3.0.0...surefire-3.1.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-surefire-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-10 12:21:36 +08:00
dependabot[bot]
d21e7e5f82 Bump maven-gpg-plugin from 3.0.1 to 3.1.0 in /jvm-packages (#9136)
Bumps [maven-gpg-plugin](https://github.com/apache/maven-gpg-plugin) from 3.0.1 to 3.1.0.
- [Commits](https://github.com/apache/maven-gpg-plugin/compare/maven-gpg-plugin-3.0.1...maven-gpg-plugin-3.1.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-gpg-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-10 10:21:36 +08:00
Philip Hyunsu Cho
0cd4382d72 Fix config-settings handling in pip install (#9115)
* Fix config_settings handling in pip install

* Fix formatting

* Fix flag use_system_libxgboost

* Add setuptools to doc requirements.txt

* Fix mypy
2023-05-09 17:54:20 -07:00
Jiaming Yuan
09b44915e7 [doc] Replace recommonmark with myst-parser. (#9125) 2023-05-10 08:11:36 +08:00
Jiaming Yuan
85988a3178 Wait for data CUDA stream instead of sync. (#9144)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-05-09 09:52:21 +08:00
Uriya Harpeness
a075aa24ba Move python tool configurations to pyproject.toml, and add the python 3.11 classifier. (#9112) 2023-05-06 02:59:06 +08:00
Jiaming Yuan
55968ed3fa Fix monotone constraints on CPU. (#9122) 2023-05-06 01:07:54 +08:00
Rong Ou
250b22dd22 Fix nvflare horizontal demo (#9124) 2023-05-05 16:48:22 +08:00
Jiaming Yuan
47b3cb6fb7 Remove unused parameters in RABIT. (#9108) 2023-05-05 05:26:24 +08:00
Philip Hyunsu Cho
07b2d5a26d Add useful links to pyproject.toml (#9114) 2023-05-02 12:47:15 -07:00
Jiaming Yuan
08ce495b5d Use Booster context in DMatrix. (#8896)
- Pass context from booster to DMatrix.
- Use context instead of integer for `n_threads`.
- Check the consistency configuration for `max_bin`.
- Test for all combinations of initialization options.
2023-04-28 21:47:14 +08:00
Jiaming Yuan
1f9a57d17b [Breaking] Require format to be specified in input URI. (#9077)
Previously, we used `libsvm` as the default when the format was not specified. However,
the dmlc data parser is not particularly robust against errors, and the most common type
of error is an undefined format.

In addition, we now recommend that users use other data loaders instead. We will
continue to maintain the parsers, as they are currently used for many internal tests,
including federated learning.
2023-04-28 19:45:15 +08:00
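A sketch of the new requirement (`train.txt` is an illustrative path):

```python
import xgboost as xgb

# 2.0 style: the format must be spelled out in the URI query.
dtrain = xgb.DMatrix("train.txt?format=libsvm")
# xgb.DMatrix("train.txt")   # now an error: no default format is assumed
```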
Bobby Wang
e922004329 [doc] fix the cudf installation [skip ci] (#9106) 2023-04-28 19:43:58 +08:00
Jiaming Yuan
17ff471616 Optimize array interface input. (#9090) 2023-04-28 18:01:58 +08:00
Rong Ou
fb941262b4 Add demo for vertical federated learning (#9103) 2023-04-28 16:03:21 +08:00
Jiaming Yuan
e206b899ef Rework MAP and Pairwise for LTR. (#9075) 2023-04-28 02:39:12 +08:00
Jiaming Yuan
0e470ef606 Optimize prediction with QuantileDMatrix. (#9096)
- Reduce overhead in `FVecDrop`.
- Reduce overhead caused by `HostVector()` calls.
2023-04-28 00:51:41 +08:00
Jiaming Yuan
fa267ad093 [CI] Freeze R version to 4.2.0 with MSVC. (#9104) 2023-04-27 22:48:31 +08:00
Jiaming Yuan
96d3f8a6f3 [doc] Update document. (#9098)
- Mention flink is still under construction.
- Update doxygen version.
- Fix warnings from doxygen about defgroup title and mismatched parameter name.
2023-04-27 19:29:03 +08:00
Rong Ou
511d4996b5 Rely on gRPC to generate random port (#9102) 2023-04-27 09:48:26 +08:00
Jiaming Yuan
101a2e643d [jvm-packages] Bump rapids version. (#9097) 2023-04-27 09:46:46 +08:00
Scott Gustafson
353ed5339d Convert `DaskXGBClassifier.classes_` to an array (#8452)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-04-27 02:23:35 +08:00
Boris
0e7377ba9c Updated flink 1.8 -> 1.17. Added smoke tests for Flink (#9046) 2023-04-26 18:41:11 +08:00
Rong Ou
a320b402a5 More refactoring to take advantage of collective aggregators (#9081) 2023-04-26 03:36:09 +08:00
dependabot[bot]
49ccae7fb9 Bump spark.version from 3.1.1 to 3.4.0 in /jvm-packages (#9039)
Bumps `spark.version` from 3.1.1 to 3.4.0.

Updates `spark-mllib_2.12` from 3.1.1 to 3.4.0

Updates `spark-core_2.12` from 3.1.1 to 3.4.0

Updates `spark-sql_2.12` from 3.1.1 to 3.4.0

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.spark:spark-core_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.spark:spark-sql_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-26 01:32:06 +08:00
Bobby Wang
17add4776f [pyspark] Don't stack for non feature columns (#9088) 2023-04-25 23:09:12 +08:00
dependabot[bot]
a2cc78c1fb Bump scala.version from 2.12.8 to 2.12.17 in /jvm-packages (#9083)
Bumps `scala.version` from 2.12.8 to 2.12.17.

Updates `scala-compiler` from 2.12.8 to 2.12.17
- [Release notes](https://github.com/scala/scala/releases)
- [Commits](https://github.com/scala/scala/compare/v2.12.8...v2.12.17)

Updates `scala-library` from 2.12.8 to 2.12.17
- [Release notes](https://github.com/scala/scala/releases)
- [Commits](https://github.com/scala/scala/compare/v2.12.8...v2.12.17)

---
updated-dependencies:
- dependency-name: org.scala-lang:scala-compiler
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.scala-lang:scala-library
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 21:30:38 +08:00
Bobby Wang
339f21e1bf [pyspark] fix a type hint with old pyspark release (#9079) 2023-04-24 20:04:14 +08:00
Bobby Wang
d237378452 [jvm-packages] Clean up the dependencies after removing scala versioned tracker (#9078) 2023-04-24 17:49:08 +08:00
Jiaming Yuan
c512c3f46b [jvm-packages] Bump rapids version. (#9056) 2023-04-22 15:46:44 +08:00
Rong Ou
8dbe0510de More collective aggregators (#9060) 2023-04-22 03:32:05 +08:00
Jiaming Yuan
7032981350 Fix timer annotation. (#9057) 2023-04-21 22:53:58 +08:00
austinzh
3b742dc4f1 Stop using Rabit in prediction (#9054) 2023-04-21 19:38:07 +08:00
dependabot[bot]
39b0fde0e7 Bump kryo from 5.4.0 to 5.5.0 in /jvm-packages (#9070)
Bumps [kryo](https://github.com/EsotericSoftware/kryo) from 5.4.0 to 5.5.0.
- [Release notes](https://github.com/EsotericSoftware/kryo/releases)
- [Commits](https://github.com/EsotericSoftware/kryo/compare/kryo-parent-5.4.0...kryo-parent-5.5.0)

---
updated-dependencies:
- dependency-name: com.esotericsoftware:kryo
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-21 18:16:34 +08:00
dependabot[bot]
ee84e22c8d Bump maven-checkstyle-plugin from 3.2.1 to 3.2.2 in /jvm-packages (#9073)
Bumps [maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.2.1 to 3.2.2.
- [Release notes](https://github.com/apache/maven-checkstyle-plugin/releases)
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.2.1...maven-checkstyle-plugin-3.2.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-21 18:16:08 +08:00
Jiaming Yuan
b908680bec Fix race condition in cpp metric tests. (#9058) 2023-04-21 05:24:10 +08:00
Philip Hyunsu Cho
a5cd2412de Replace setup.py with pyproject.toml (#9021)
* Create pyproject.toml
* Implement a custom build backend (see below) in the packager directory. Build logic from setup.py has been refactored and migrated into the new backend.
* Tested: pip wheel . (build wheel), python -m build --sdist . (source distribution)
2023-04-20 13:51:39 -07:00
Jiaming Yuan
a7b3dd3176 Fix compiler warnings. (#9055) 2023-04-21 02:26:47 +08:00
dependabot[bot]
2acd78b44b Bump maven-project-info-reports-plugin in /jvm-packages/xgboost4j (#9049)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.4.2 to 3.4.3.
- [Release notes](https://github.com/apache/maven-project-info-reports-plugin/releases)
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.4.2...maven-project-info-reports-plugin-3.4.3)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-21 00:10:45 +08:00
Emil Ejbyfeldt
a84a1fde02 [jvm-packages] Update scalatest to 3.2.15 (#8925)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-04-20 22:16:56 +08:00
Jiaming Yuan
564df59204 [breaking] [jvm-packages] Remove scala-implemented tracker. (#9045) 2023-04-20 16:29:35 +08:00
Rong Ou
42d100de18 Make sure metrics work with federated learning (#9037) 2023-04-19 15:39:11 +08:00
Jiaming Yuan
ef13dd31b1 Rework the NDCG objective. (#9015) 2023-04-18 21:16:06 +08:00
Rong Ou
ba9d24ff7b Make sure metrics work with column-wise distributed training (#9020) 2023-04-18 03:48:23 +08:00
WeichenXu
191d0aa5cf [spark] Make spark model have the same UID as its estimator (#9022)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2023-04-14 02:53:30 +08:00
Philip Hyunsu Cho
8e0f320db3 [CI] Don't run CI automatically for dependabot (#9034) 2023-04-13 08:19:56 -07:00
Jiaming Yuan
fe9dff339c Convert federated learner test into test suite. (#9018)
* Convert federated learner test into test suite.

- Add specialization to learning to rank.
2023-04-11 09:52:55 +08:00
Jiaming Yuan
2c8d735cb3 Fix tests with pandas 2.0. (#9014)
* Fix tests with pandas 2.0.

- `is_categorical` is replaced by `is_categorical_dtype`.
- one hot encoding returns boolean type instead of integer type.
2023-04-11 00:17:34 +08:00
Sarah Charlotte Johnson
ebd64f6e22 [doc] Update Dask deployment options (#9008) 2023-04-07 01:09:15 +08:00
Jiaming Yuan
1cf4d93246 Convert federated tests into test suite. (#9006)
- Add specialization for learning to rank.
2023-04-04 01:29:47 +08:00
Rong Ou
15e073ca9d Make objectives work with vertical distributed and federated learning (#9002) 2023-04-03 17:07:42 +08:00
Jiaming Yuan
720a8c3273 [doc] Remove parameter type in Python doc strings. (#9005) 2023-04-01 04:04:30 +08:00
Jiaming Yuan
4caca2947d Improve helper script for making release. [skip ci] (#9004)
* Merge source tarball generation script.
* Generate Python source wheel.
* Generate hashes and release note.
2023-03-31 23:14:58 +08:00
Jiaming Yuan
bcb55d3b6a Portable macro definition. (#8999) 2023-03-31 20:48:59 +08:00
Jiaming Yuan
bac22734fb Remove ntree limit in python package. (#8345)
- Remove `ntree_limit`. The parameter has been deprecated since 1.4.0.
- The SHAP package compatibility is broken.
2023-03-31 19:01:55 +08:00
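A minimal sketch of the replacement, `iteration_range`, with synthetic data:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 4)), rng.normal(size=64)
booster = xgb.train({}, xgb.DMatrix(X, y), num_boost_round=50)

# Was: booster.predict(dtest, ntree_limit=32). Now a half-open range of
# boosting iterations:
pred = booster.predict(xgb.DMatrix(X), iteration_range=(0, 32))
```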
Jiaming Yuan
b647403baa Update release news. [skip ci] (#9000) 2023-03-31 03:52:09 +08:00
Jiaming Yuan
cd05e38533 [doc][R] Update link. (#8998) 2023-03-30 19:09:07 +08:00
Jiaming Yuan
d062a9e009 Define pair generation strategies for LTR. (#8984) 2023-03-30 12:00:35 +08:00
Rong Ou
d385cc64e2 Fix aft_loss_distribution documentation (#8995) 2023-03-29 19:13:23 -07:00
Jiaming Yuan
a58055075b [dask] Return the first valid booster instead of all valid ones. (#8993)
* [dask] Return the first valid booster instead of all valid ones.

- Reduce memory footprint of the returned model.

2023-03-30 03:16:18 +08:00
Philip Hyunsu Cho
6676c28cbc [CI] Fix Windows wheel to be compatible with Poetry (#8991)
* [CI] Fix Windows wheel to be compatible with Poetry

* Typo

* Eagerly scan globs to avoid patching same file twice
2023-03-28 21:32:54 -07:00
Rong Ou
ff26cd3212 More tests for column split and vertical federated learning (#8985)
Added some more tests for the learner and fit_stump, for both column-wise distributed learning and vertical federated learning.

Also moved the `IsRowSplit` and `IsColumnSplit` methods from the `DMatrix` to the `MetaInfo` since in some places we only have access to the `MetaInfo`. Added a new convenience method `IsVerticalFederatedLearning`.

Some refactoring of the testing fixtures.
2023-03-28 16:40:26 +08:00
Jiaming Yuan
401ce5cf5e Run linters with the multi output demo. (#8966) 2023-03-28 00:47:28 +08:00
Jiaming Yuan
acc110c251 [MT-TREE] Support prediction cache and model slicing. (#8968)
- Fix prediction range.
- Support prediction cache in mt-hist.
- Support model slicing.
- Make the booster a Python iterable by defining `__iter__`.
- Cleanup removed/deprecated parameters.
- A new field in the output model `iteration_indptr` for pointing to the ranges of trees for each iteration.
2023-03-27 23:10:54 +08:00
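A sketch of the slicing and iteration behavior described above (synthetic data):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 4)), rng.normal(size=64)
booster = xgb.train({}, xgb.DMatrix(X, y), num_boost_round=10)

first_half = booster[0:5]          # slice: a booster over iterations [0, 5)
per_round = [b for b in booster]   # __iter__: one single-iteration booster each
```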
Jiaming Yuan
c2b3a13e70 [breaking][skl] Remove parameter serialization. (#8963)
- Remove parameter serialization in the scikit-learn interface.

The scikit-learn interface `save_model` will save only the model and discard all
hyper-parameters. This is to align with the native XGBoost interface, which distinguishes
between hyper-parameters and model parameters.

With the scikit-learn interface, model parameters are attributes of the estimator. For
instance, `n_features_in_`, `n_classes_` are always accessible with
`estimator.n_features_in_` and `estimator.n_classes_`, but not with the
`estimator.get_params`.

- Define a `load_model` method for classifier to load its own attributes.

- Set n_estimators to None by default.
2023-03-27 21:34:10 +08:00
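A sketch of the resulting round-trip behavior (synthetic data; the file name is illustrative):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 4)), rng.integers(0, 2, size=64)

clf = xgb.XGBClassifier(n_estimators=8, max_depth=3).fit(X, y)
clf.save_model("clf.json")     # model only; hyper-parameters are discarded

clf2 = xgb.XGBClassifier()
clf2.load_model("clf.json")
print(clf2.n_classes_)                  # model parameters survive as attributes
print(clf2.get_params()["max_depth"])   # back to the default, not 3
```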
dependabot[bot]
90645c4957 Bump maven-resources-plugin from 3.3.0 to 3.3.1 in /jvm-packages (#8980)
Bumps [maven-resources-plugin](https://github.com/apache/maven-resources-plugin) from 3.3.0 to 3.3.1.
- [Release notes](https://github.com/apache/maven-resources-plugin/releases)
- [Commits](https://github.com/apache/maven-resources-plugin/compare/maven-resources-plugin-3.3.0...maven-resources-plugin-3.3.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-resources-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 16:03:45 +08:00
dependabot[bot]
43878b10b6 Bump maven-deploy-plugin in /jvm-packages/xgboost4j-spark-gpu (#8973)
Bumps [maven-deploy-plugin](https://github.com/apache/maven-deploy-plugin) from 3.0.0 to 3.1.1.
- [Release notes](https://github.com/apache/maven-deploy-plugin/releases)
- [Commits](https://github.com/apache/maven-deploy-plugin/compare/maven-deploy-plugin-3.0.0...maven-deploy-plugin-3.1.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-deploy-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 12:47:13 +08:00
dependabot[bot]
cff50fe3ef Bump hadoop.version from 3.3.4 to 3.3.5 in /jvm-packages (#8962)
Bumps `hadoop.version` from 3.3.4 to 3.3.5.

Updates `hadoop-hdfs` from 3.3.4 to 3.3.5

Updates `hadoop-common` from 3.3.4 to 3.3.5

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-hdfs
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-23 16:12:04 +08:00
Jiaming Yuan
21a52c7f98 [doc] Add introduction and notes for the sklearn interface. (#8948) 2023-03-23 13:30:42 +08:00
Jiaming Yuan
bf88dadb61 [doc] Fix callback example. (#8944) 2023-03-23 03:27:04 +08:00
Jiaming Yuan
15a2724ff7 Removed outdated configuration serialization logic. (#8942)
- `saved_params` is empty.
- `saved_configs_` contains `num_round`, which is not used anywhere inside xgboost.
2023-03-23 01:31:46 +08:00
Jiaming Yuan
151882dd26 Initial support for multi-target tree. (#8616)
* Implement multi-target for hist.

- Add new hist tree builder.
- Move data fetchers for tests.
- Dispatch function calls in gbm base on the tree type.
2023-03-22 23:49:56 +08:00
Jiaming Yuan
ea04d4c46c [doc] [dask] Troubleshooting NCCL errors. (#8943) 2023-03-22 22:17:26 +08:00
Jiaming Yuan
a551bed803 Remove duplicated learning rate parameter. (#8941) 2023-03-22 20:51:14 +08:00
Jiaming Yuan
a05799ed39 Specify char type in JSON. (#8949)
char is defined as signed on x86 but unsigned on arm64

- Use `std::int8_t` instead of char.
- Fix include when clang is pretending to be gcc.
2023-03-22 19:13:44 +08:00
Jiaming Yuan
5891f752c8 Rework the MAP metric. (#8931)
- The new implementation is more strict as only binary labels are accepted. The previous implementation converts values greater than 1 to 1.
- Deterministic GPU. (no atomic add).
- Fix top-k handling.
- Precise definition of MAP. (There are other variants on how to handle top-k).
- Refactor GPU ranking tests.
2023-03-22 17:45:20 +08:00
Rong Ou
b240f055d3 Support vertical federated learning (#8932) 2023-03-22 14:25:26 +08:00
Philip Hyunsu Cho
8dc1e4b3ea Improve doxygen (#8959)
* Remove Sphinx build from GH Action

* Build Doxygen as part of RTD build

* Add jQuery
2023-03-21 09:22:11 -07:00
dependabot[bot]
34092d7fd0 Bump maven-release-plugin in /jvm-packages/xgboost4j-spark (#8952)
Bumps [maven-release-plugin](https://github.com/apache/maven-release) from 2.5.3 to 3.0.0.
- [Release notes](https://github.com/apache/maven-release/releases)
- [Commits](https://github.com/apache/maven-release/compare/maven-release-2.5.3...maven-release-3.0.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-release-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 15:34:43 +08:00
Jiaming Yuan
9b6cc0ed07 Refactor hist to prepare for multi-target builder. (#8928)
- Extract the builder from the updater class. We need a new builder for multi-target.
- Extract `UpdateTree`, it can be reused for different builders. Eventually, other tree
  updaters can use it as well.
2023-03-17 17:21:04 +08:00
Philip Hyunsu Cho
36263dd109 [jvm-packages] Use akka 2.6 (#8920) 2023-03-16 20:06:42 -07:00
Quentin Fiard
55ed50c860 Fix a few typos in the C API tutorial (#8926) 2023-03-16 20:24:03 +08:00
Jiaming Yuan
a093770f36 Partitioner for multi-target tree. (#8922) 2023-03-16 18:49:34 +08:00
Jiaming Yuan
26209a42a5 Define git attributes for renormalization. (#8921) 2023-03-16 02:43:11 +08:00
Philip Hyunsu Cho
a2cdba51ce Use hi-res SVG logo (#8923) 2023-03-15 10:02:38 -07:00
dependabot[bot]
fd016e43c6 Bump maven-surefire-plugin from 2.22.2 to 3.0.0 in /jvm-packages (#8917)
Bumps [maven-surefire-plugin](https://github.com/apache/maven-surefire) from 2.22.2 to 3.0.0.
- [Release notes](https://github.com/apache/maven-surefire/releases)
- [Commits](https://github.com/apache/maven-surefire/compare/surefire-2.22.2...surefire-3.0.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-surefire-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-15 18:51:46 +08:00
Jiaming Yuan
f186c87cf9 Check inf in data for all types of DMatrix. (#8911) 2023-03-15 11:24:35 +08:00
Jiaming Yuan
72e8331eab Reimplement the NDCG metric. (#8906)
- Add support for non-exp gain.
- Cache the DMatrix object to avoid re-calculating the IDCG.
- Make GPU implementation deterministic. (no atomic add)
2023-03-15 03:26:17 +08:00
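For reference, a from-scratch sketch of the NDCG@k computation with the exponential gain the metric defaults to (an illustration, not xgboost's implementation):

```python
import numpy as np

def dcg_at_k(labels: np.ndarray, k: int, exp_gain: bool = True) -> float:
    labels = labels[:k]
    gain = 2.0**labels - 1.0 if exp_gain else labels.astype(float)
    discount = 1.0 / np.log2(np.arange(2, labels.size + 2))
    return float((gain * discount).sum())

def ndcg_at_k(y_true: np.ndarray, y_score: np.ndarray, k: int) -> float:
    ranked = y_true[np.argsort(-y_score)]   # labels ordered by prediction
    ideal = np.sort(y_true)[::-1]           # best possible ordering
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked, k) / idcg if idcg > 0 else 0.0

print(ndcg_at_k(np.array([3, 2, 0, 1]), np.array([0.2, 0.9, 0.1, 0.4]), k=4))
```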
Jiaming Yuan
8685556af2 Implement hist evaluator for multi-target tree. (#8908) 2023-03-15 01:42:51 +08:00
Jiaming Yuan
95e2baf7c2 [doc] Fix typo [skip ci] (#8907) 2023-03-15 00:55:17 +08:00
Jiaming Yuan
910ce580c8 Clear all cache after model load. (#8904) 2023-03-14 22:09:36 +08:00
Jiaming Yuan
c400fa1e8d Predictor for vector leaf. (#8898) 2023-03-14 19:07:10 +08:00
Jiaming Yuan
8be6095ece Implement NDCG cache. (#8893) 2023-03-13 22:16:31 +08:00
Jiaming Yuan
9bade7203a Remove public access to tree model param. (#8902)
* Make tree model param a private member.
* Number of features and targets are immutable after construction.

This is to reduce the number of places where we can run configuration.
2023-03-13 20:55:10 +08:00
Jiaming Yuan
5ba3509dd3 Define multi expand entry. (#8895) 2023-03-13 19:31:05 +08:00
Jiaming Yuan
bbee355b45 [doc][dask] Note on reproducible result. [skip ci] (#8903) 2023-03-13 19:30:35 +08:00
Jiaming Yuan
3689695d16 [CI] Run RMM gtests. (#8900)
* [CI] Run RMM gtests.

* Update test-cpp-gpu.sh

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-03-12 03:14:31 +08:00
Jiaming Yuan
36a7396658 Replace dmlc any with std any. (#8892) 2023-03-11 06:11:04 +08:00
Rong Ou
79efcd37f5 Pick up dmlc-core fix for CSV parser (#8897) 2023-03-11 04:51:43 +08:00
Jiaming Yuan
2aa838c75e Define multi-strategy parameter. (#8890) 2023-03-11 02:58:01 +08:00
Jiaming Yuan
6deaec8027 Pass obj info by reference instead of by value. (#8889)
- Pass obj info into tree updater as const pointer.

This way we don't have to initialize the learner model param before configuring the gbm,
hence breaking up the dependency between configurations.
2023-03-11 01:38:28 +08:00
Jiaming Yuan
54e001bbf4 [doc][dask] Reference examples from coiled. [skip ci] (#8891) 2023-03-09 20:03:24 -08:00
Jiaming Yuan
c5c8f643f2 Remove the cub submodule. (#8888)
XGBoost now uses CTK 11.8 for binary packages, so there's no need to maintain a cub
submodule anymore.
2023-03-09 19:43:02 -08:00
Jiaming Yuan
5feee8d4a9 Define core multi-target regression tree structure. (#8884)
- Define a new tree struct embedded in the `RegTree`.
- Provide dispatching functions in `RegTree`.
- Fix some C++17 warnings about the use of nodiscard (currently we disable the warning on
  the CI).
- Use uint32_t instead of size_t for `bst_target_t` as it has a defined size and can be used
  as part of dmlc parameter.
- Hide the `Segment` struct inside the categorical split matrix.
2023-03-09 19:03:06 +08:00
Jiaming Yuan
46dfcc7d22 Define a new ranking parameter. (#8887) 2023-03-09 17:46:24 +08:00
Krzysztof Dyba
e8a69013e6 [R] update predict docs (#8886) 2023-03-09 05:58:39 +08:00
Jiaming Yuan
8c16da8863 [doc] Add note for rabit port. [skip ci] (#8879) 2023-03-08 19:00:10 +08:00
dependabot[bot]
85c3334c2b Bump hadoop-common from 3.2.4 to 3.3.4 in /jvm-packages (#8882)
Bumps hadoop-common from 3.2.4 to 3.3.4.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 13:15:39 +08:00
Jiaming Yuan
f236640427 Support F order for the tensor type. (#8872)
- Add F order support for tensor and view.
- Use parameter pack for automatic type cast. (avoid excessive static cast for shape).
2023-03-08 03:27:49 +08:00
dependabot[bot]
f53055f75e Bump maven-assembly-plugin from 3.4.2 to 3.5.0 in /jvm-packages (#8837)
Bumps [maven-assembly-plugin](https://github.com/apache/maven-assembly-plugin) from 3.4.2 to 3.5.0.
- [Release notes](https://github.com/apache/maven-assembly-plugin/releases)
- [Commits](https://github.com/apache/maven-assembly-plugin/compare/maven-assembly-plugin-3.4.2...maven-assembly-plugin-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-assembly-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 02:20:40 +08:00
Jiaming Yuan
f7ce0ec0df Upgrade gcc toolchain to 9.x. (#8878)
* Use new tool chain.

* Use gcc-9.

* Use cmake from system.

* Don't link leak.
2023-03-07 08:25:23 -08:00
dependabot[bot]
2b2eb0d0f1 Bump scala-maven-plugin in /jvm-packages/xgboost4j-spark-gpu (#8877)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:33:33 +08:00
dependabot[bot]
5eabcae27b Bump scala-maven-plugin from 4.8.0 to 4.8.1 in /jvm-packages/xgboost4j (#8876)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:33:16 +08:00
dependabot[bot]
d06b1fc26e Bump scala-maven-plugin in /jvm-packages/xgboost4j-example (#8875)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:32:06 +08:00
dependabot[bot]
ffa5eb2aa4 Bump scala-maven-plugin in /jvm-packages/xgboost4j-gpu (#8874)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:31:50 +08:00
dependabot[bot]
0f6c502d36 Bump scala-maven-plugin in /jvm-packages/xgboost4j-spark (#8873)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:31:23 +08:00
Jiaming Yuan
7eba285a1e Support sklearn cross validation for ranker. (#8859)
* Support sklearn cross validation for ranker.

- Add a convention for X to include a special `qid` column.

sklearn utilities consider only `X`, `y` and `sample_weight` for supervised learning
algorithms, but we need an additional qid array for ranking.

It's important to be able to support the cross validation function in sklearn since all
other tuning functions like grid search are based on cross validation.
2023-03-07 00:22:08 +08:00
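A minimal sketch of the `qid`-column convention described above, assuming the sklearn-estimator interface from this change (the data here is synthetic, and rows must arrive sorted by query id):

```python
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(32, 3)), columns=["f0", "f1", "f2"])
X["qid"] = np.repeat([0, 1, 2, 3], 8)  # the special column carrying query ids
y = rng.integers(0, 4, size=32)

ranker = xgb.XGBRanker(tree_method="hist", n_estimators=8)
# GroupKFold keeps every row of a query inside a single fold, which ranking needs.
scores = cross_val_score(ranker, X, y, cv=GroupKFold(n_splits=2), groups=X["qid"])
print(scores)
```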
Jiaming Yuan
cad7401783 Disable gcc parallel extension if openmp is not available. (#8871)
`<parallel/algorithm>` internally includes the `<omp.h>` header, which leads to an error
when OpenMP is not available.
2023-03-06 22:51:06 +08:00
Jiaming Yuan
228a46e8ad Support learning rate for zero-hessian objectives. (#8866) 2023-03-06 20:33:28 +08:00
Jiaming Yuan
173096a6a7 Discover libasan.so.6. (#8864) 2023-03-06 18:56:54 +08:00
Jiaming Yuan
6a892ce281 Specify src path for isort. (#8867) 2023-03-06 17:30:27 +08:00
Jiaming Yuan
4d665b3fb0 Restore clang tidy test. (#8861) 2023-03-03 13:47:04 -08:00
Rong Ou
2dc22e7aad Take advantage of C++17 features (#8858)
---------

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-03-04 00:24:13 +08:00
Rory Mitchell
69a50248b7 Fix scope of feature set pointers (#8850)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-03-02 12:37:14 +08:00
mzzhang95
6cef9a08e9 [pyspark] Update eval_metric validation to support list of strings (#8826) 2023-03-02 08:24:12 +08:00
Jiaming Yuan
803d5e3c4c Update c++ requirement to 17 for the R package. (#8860) 2023-03-01 14:49:39 -08:00
Rong Ou
a5852365fd Update dmlc-core to get C++17 deprecation warning (#8855) 2023-03-01 12:30:59 -08:00
Rong Ou
7cbaee9916 Support column split in approx tree method (#8847) 2023-03-02 03:59:07 +08:00
Philip Hyunsu Cho
6d8afb2218 [CI] Require C++17 + CMake 3.18; Use CUDA 11.8 in CI (#8853)
* Update to C++17

* Turn off unity build

* Update CMake to 3.18

* Use MSVC 2022 + CUDA 11.8

* Re-create stack for worker images

* Allocate more disk space for Windows

* Temporarily disable clang-tidy

* RAPIDS now requires Python 3.10+

* Unpin cuda-python

* Use latest NCCL

* Use Ubuntu 20.04 in RMM image

* Mark failing mgpu test as xfail
2023-03-01 09:22:24 -08:00
Jiaming Yuan
d54ef56f6f Fix cache with gc (#8851)
- Make DMatrixCache thread-safe.
- Remove the use of thread-local memory.
2023-03-01 00:39:06 +08:00
Rong Ou
d9688f93c7 Support column-split in row partitioner (#8828) 2023-02-26 04:43:35 +08:00
Mauro Leggieri
90c0633a28 Fixes compilation errors on MSVC x86 targets (#8823) 2023-02-26 03:20:28 +08:00
Rong Ou
a65ad0bd9c Support column split in histogram builder (#8811) 2023-02-17 22:37:01 +08:00
dependabot[bot]
40fd3d6d5f Bump maven-javadoc-plugin in /jvm-packages/xgboost4j-gpu (#8815)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 3.4.1 to 3.5.0.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-3.4.1...maven-javadoc-plugin-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-17 16:39:16 +08:00
dependabot[bot]
6ce9a35f55 Bump maven-javadoc-plugin from 3.4.1 to 3.5.0 in /jvm-packages/xgboost4j (#8813)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 3.4.1 to 3.5.0.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-3.4.1...maven-javadoc-plugin-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-17 15:04:06 +08:00
dependabot[bot]
d62daa0b32 Bump maven-javadoc-plugin from 3.4.1 to 3.5.0 in /jvm-packages (#8814)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 3.4.1 to 3.5.0.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-3.4.1...maven-javadoc-plugin-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 23:16:11 +08:00
Jiaming Yuan
c0afdb6786 Fix CPU bin compression with categorical data. (#8809)
* Fix CPU bin compression with categorical data.

* The bug causes the maximum category to be less than 256 or the maximum number of bins when
the input data is dense.
2023-02-16 04:20:34 +08:00
Jiaming Yuan
cce4af4acf Initial support for quantile loss. (#8750)
- Add support for Python.
- Add objective.
2023-02-16 02:30:18 +08:00
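A short, hedged sketch of the new objective as exposed in the Python package (objective and parameter names as they appear in the 2.0 documentation; the toy data is made up):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = X[:, 0] + rng.normal(size=256)

# Train against the pinball (quantile) loss; quantile_alpha picks the quantile.
booster = xgb.train(
    {"objective": "reg:quantileerror", "quantile_alpha": 0.75},
    xgb.DMatrix(X, label=y),
    num_boost_round=16,
)
```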
Jiaming Yuan
282b1729da Specify the number of threads for parallel sort. (#8735)
* Specify the number of threads for parallel sort.

- Pass context object into argsort.
- Replace macros with inline functions.
2023-02-16 00:20:19 +08:00
Jiaming Yuan
c7c485d052 Extract fit intercept. (#8793) 2023-02-15 22:41:31 +08:00
Jiaming Yuan
594371e35b Fix CPP lint. (#8807) 2023-02-15 20:16:35 +08:00
Jiaming Yuan
e62167937b [CI] Update action cache for jvm tests. (#8806) 2023-02-15 18:43:48 +08:00
Rong Ou
74572b5d45 Add convenience method for allgather (#8804) 2023-02-15 11:37:11 +08:00
WeichenXu
f27a7258c6 Fix feature types param (#8772)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2023-02-14 02:16:42 +08:00
Jiaming Yuan
52d0230b58 Fix merge conflict. (#8791) 2023-02-13 23:43:42 +08:00
Jiaming Yuan
81b2ee1153 Pass DMatrix into metric for caching. (#8790) 2023-02-13 22:15:05 +08:00
Jiaming Yuan
31d3ec07af Extract device algorithms. (#8789) 2023-02-13 20:53:53 +08:00
Jiaming Yuan
457f704e3d Add quantile metric. (#8761) 2023-02-13 19:07:40 +08:00
Jiaming Yuan
d11a0044cf Generalize prediction cache. (#8783)
* Extract most of the functionality into `DMatrixCache`.
* Move API entry to independent file to reduce dependency on `predictor.h` file.
* Add test.

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-02-13 12:36:43 +08:00
Rong Ou
ed91e775ec Fix quantile tests running on multi-gpus (#8775)
* Fix quantile tests running on multi-gpus

* Run some gtests with multiple GPUs

* fix mgpu test naming

* Instruct NCCL to print extra logs

* Allocate extra space in /dev/shm to enable NCCL

* use gtest_skip to skip mgpu tests

---------

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2023-02-12 17:00:26 -08:00
Jiaming Yuan
225b3158f6 Support custom metric in sklearn ranker. (#8786) 2023-02-12 13:14:07 +08:00
Jiaming Yuan
17b709acb9 Rename ranking utils to threading utils. (#8785) 2023-02-12 05:41:18 +08:00
Jiaming Yuan
70c9b885ef Extract floating point rounding routines. (#8771) 2023-02-12 04:26:41 +08:00
Jiaming Yuan
e9c178f402 [doc] Document update [skip ci] (#8784)
- Remove version specifics in cat demo.
- Remove aws yarn.
- Update faq.
- Stop mentioning MPI.
- Update sphinx inventory links.
- Fix typo.
2023-02-12 04:25:22 +08:00
Jiaming Yuan
8a16944664 Fix ranking with quantile dmatrix and group weight. (#8762) 2023-02-10 20:32:35 +08:00
Dai-Jie (Jay) Wu
ad0ccc6e4f [doc] fix inconsistent doc and minor typo for external memory (#8773) 2023-02-10 01:05:34 +08:00
Jiaming Yuan
199c421d60 Send default configuration from metric to objective. (#8760) 2023-02-09 20:18:07 +08:00
Jiaming Yuan
5f76edd296 Extract make metric name from ranking metric. (#8768)
- Extract the metric parsing routine from ranking.
- Add a test.
- Accept null for string view.
2023-02-09 18:30:21 +08:00
Jiaming Yuan
4ead65a28c Increase timeout limit for linear. (#8767) 2023-02-09 18:20:12 +08:00
Rong Ou
cbf98cb9c6 Add Allgather to collective communicator (#8765)
* Add Allgather to collective communicator
2023-02-09 11:31:22 +08:00
Jiaming Yuan
48cefa012e Support multiple alphas for segmented quantile. (#8758) 2023-02-07 17:17:59 +08:00
Jiaming Yuan
c4802bfcd0 Cleanup booster param types. (#8756) 2023-02-07 15:52:19 +08:00
Jiaming Yuan
7b3d473593 [doc] Add demo for inference using individual tree. (#8752) 2023-02-07 04:40:18 +08:00
Jiaming Yuan
28bb01aa22 Extract optional weight. (#8747)
- Extract optional weight from common.h to reduce dependency on this header.
- Add test.
2023-02-07 03:11:53 +08:00
Jiaming Yuan
0f37a01dd9 Require black formatter for the python package. (#8748) 2023-02-07 01:53:33 +08:00
Jiaming Yuan
a2e433a089 Fix empty DMatrix with categorical features. (#8739) 2023-02-07 00:40:11 +08:00
Rory Mitchell
7214a45e83 Fix different number of features in gpu_hist evaluator. (#8754) 2023-02-06 23:15:16 +08:00
Rong Ou
66191e9926 Support cpu quantile sketch with column-wise data split (#8742) 2023-02-05 14:26:24 +08:00
Jiaming Yuan
c1786849e3 Use array interface for CSC matrix. (#8672)
* Use array interface for CSC matrix.

Use array interface for CSC matrix and align the interface with CSR and dense.

- Fix nthread issue in the R package DMatrix.
- Unify the behavior of handling `missing` with other inputs.
- Unify the behavior of handling `missing` around R, Python, Java, and Scala DMatrix.
- Expose `num_non_missing` to the JVM interface.
- Deprecate old CSR and CSC constructors.
2023-02-05 01:59:46 +08:00
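A sketch of the user-visible effect in Python, assuming the existing `missing`/`nthread` arguments of `DMatrix`: CSC input now flows through the same array-interface path as CSR and dense, so those arguments behave consistently across formats.

```python
import numpy as np
import scipy.sparse
import xgboost as xgb

# A random CSC matrix; `missing` and `nthread` are honored the same way
# they are for CSR and dense inputs after this change.
csc = scipy.sparse.random(100, 10, density=0.2, format="csc", random_state=0)
m = xgb.DMatrix(csc, missing=np.nan, nthread=4)
print(m.num_row(), m.num_col())
```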
BenEfrati
213b5602d9 Add sample_weight to eval_metric (#8706) 2023-02-05 00:06:38 +08:00
Philip Hyunsu Cho
dd79ab846f [CI] Fix failing arm build (#8751)
* Always install Conda env into /opt/python; use Mamba

* Change ownership of Conda env to buildkite-agent user

* Use unique name

* Fix
2023-02-03 22:32:48 -08:00
Jiaming Yuan
0e61ba57d6 Fix GPU L1 error. (#8749) 2023-02-04 03:02:00 +08:00
Hamel Husain
16ef016ba7 [CI] Use bash -l {0} as the default in GitHub Actions (#8741) 2023-01-31 15:00:29 +08:00
James Lamb
0d8248ddcd [R] discourage use of regex for fixed string comparisons (#8736) 2023-01-30 18:47:21 +08:00
Jiaming Yuan
1325ba9251 Support primitive types of pyarrow-backed pandas dataframe. (#8653)
Categorical data (dictionary) is not supported at the moment.
2023-01-30 17:53:29 +08:00
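For illustration, a minimal sketch (assuming pandas >= 1.5 with pyarrow installed; only primitive dtypes are covered, not dictionary/categorical columns):

```python
import pandas as pd
import xgboost as xgb

df = pd.DataFrame(
    {
        "a": pd.array([1, 2, None], dtype="int64[pyarrow]"),
        "b": pd.array([0.5, 1.5, 2.5], dtype="float64[pyarrow]"),
    }
)
m = xgb.DMatrix(df, label=[0, 1, 1])  # None maps to a missing value
```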
Jiaming Yuan
3760cede0f Consistent use of context to specify number of threads. (#8733)
- Use context in all tests.
- Use context in R.
- Use context in C API DMatrix initialization. (0 threads is used as the default).
2023-01-30 15:25:31 +08:00
Jiaming Yuan
21a28f2cc5 Small refactor for hist builder. (#8698)
- Use span instead of vector as parameter. No perf change as the builder works on pointers.
- Use const pointer for reg tree.
2023-01-30 14:06:41 +08:00
Rong Ou
8af98e30fc Use in-memory communicator to test quantile (#8710) 2023-01-27 23:28:28 +08:00
James Lamb
96e6b6beba [ci] remove unused imports in tests (#8707) 2023-01-25 14:10:29 +08:00
Philip Hyunsu Cho
d29e45371f [R-package] Alter xgb.train() to accept multiple eval metrics as a list (#8657) 2023-01-24 17:14:14 -08:00
James Lamb
0f4d52a864 [R] add tests on print.xgb.DMatrix() (#8704) 2023-01-22 06:44:14 +08:00
Jiaming Yuan
9fb12b20a4 Cleanup the callback module. (#8702)
- Cleanup pylint markers.
- Run formatter.
- Update examples of using callback.
2023-01-22 00:13:49 +08:00
Jiaming Yuan
34eee56256 Fix compiler warnings. (#8703)
Fix warnings about signed/unsigned comparisons.
2023-01-21 15:16:23 +08:00
Jiaming Yuan
e49e0998c0 Extract CPU sampling routines. (#8697) 2023-01-19 23:28:18 +08:00
Jiaming Yuan
7a068af1a3 Workaround CUDA warning. (#8696) 2023-01-19 09:16:08 +08:00
James Lamb
6933240837 [python-package] remove unused functions in xgboost.data (#8695) 2023-01-19 08:02:54 +08:00
Jiaming Yuan
4416452f94 Return single thread from context when called inside omp region. (#8693) 2023-01-18 09:23:37 +08:00
Jiaming Yuan
31b9cbab3d Make sure input numpy array is aligned. (#8690)
- use `np.require` to specify that the alignment is required.
- scipy csr as well.
- validate input pointer in `ArrayInterface`.
2023-01-18 08:12:13 +08:00
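A sketch of the `np.require` idiom referenced above; the misaligned buffer is constructed artificially for demonstration:

```python
import numpy as np

# A float64 view that starts one byte into the buffer, hence misaligned.
raw = bytearray(1 + 10 * 8)
unaligned = np.frombuffer(raw, dtype=np.float64, count=10, offset=1)
print(unaligned.flags["ALIGNED"])  # typically False

# np.require copies only when a requested requirement is not already met.
aligned = np.require(unaligned, requirements=["ALIGNED", "C"])
print(aligned.flags["ALIGNED"])  # True
```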
Jiaming Yuan
175986b739 [doc] Add missing document for pyspark ranker. [skip ci] (#8692) 2023-01-18 07:52:18 +08:00
Rong Ou
78396f8a6e Initial support for column-split cpu predictor (#8676) 2023-01-18 06:33:13 +08:00
James Lamb
980233e648 [R] remove XGBoosterPredict_R (fixes #8687) (#8689) 2023-01-17 14:19:01 +08:00
Jiaming Yuan
247946a875 Cache transformed data in QuantileDMatrix for efficiency. (#8666) 2023-01-17 06:02:40 +08:00
James Lamb
06ba285f71 [R] fix OpenMP detection on macOS (#8684) 2023-01-17 05:01:26 +08:00
Jiaming Yuan
43152657d4 Extract JSON type check. (#8677)
- Reuse it in `GetMissing`.
- Add test.
2023-01-17 03:11:07 +08:00
Jiaming Yuan
9f598efc3e Rename context in Metric. (#8686) 2023-01-17 01:10:13 +08:00
Jiaming Yuan
d6018eb4b9 Remove all use of DeviceQuantileDMatrix. (#8665) 2023-01-17 00:04:10 +08:00
Jiaming Yuan
0ae8df9a65 Define default ctors for gpair. (#8660)
* Define default ctors for gpair.

Fix clang warning:

Definition of implicit copy assignment operator for 'GradientPairInternal<float>' is
deprecated because it has a user-declared copy constructor
2023-01-16 22:52:13 +08:00
dependabot[bot]
a9c6199723 Bump maven-project-info-reports-plugin in /jvm-packages (#8662)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.4.1 to 3.4.2.
- [Release notes](https://github.com/apache/maven-project-info-reports-plugin/releases)
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.4.1...maven-project-info-reports-plugin-3.4.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-16 04:57:28 +08:00
dependabot[bot]
37d4482e3e Bump maven-checkstyle-plugin from 3.2.0 to 3.2.1 in /jvm-packages (#8661)
Bumps [maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.2.0 to 3.2.1.
- [Release notes](https://github.com/apache/maven-checkstyle-plugin/releases)
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.2.0...maven-checkstyle-plugin-3.2.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-16 02:02:03 +08:00
James Lamb
e227abc57a [R] avoid leaving test files behind (#8685) 2023-01-15 23:34:54 +08:00
Jiaming Yuan
e7d612d22c [R] Fix threads used to create DMatrix in predict. (#8681) 2023-01-15 03:09:08 +08:00
James Lamb
292df67824 [R] remove unused define XGBOOST_CUSTOMIZE_LOGGER (#8647) 2023-01-15 02:29:25 +08:00
Jiaming Yuan
f7a2f52136 [R] Get CXX flags from R CMD config. (#8669) 2023-01-14 16:48:21 +08:00
Jiaming Yuan
07cf3d3e53 Fix threads in DMatrix slice. (#8667) 2023-01-14 07:16:57 +08:00
Jiaming Yuan
e27cda7626 [CI] Skip pyspark sparse tests. (#8675) 2023-01-14 05:37:00 +08:00
Jiaming Yuan
b2b6a8aa39 [R] fix CSR input. (#8673) 2023-01-14 01:32:41 +08:00
Bobby Wang
72ec0c5484 [pyspark] support pred_contribs (#8633) 2023-01-11 16:51:12 +08:00
Jiaming Yuan
cfa994d57f Multi-target support for L1 error. (#8652)
- Add matrix support to the median function.
- Iterate through each target for quantile computation.
2023-01-11 05:51:14 +08:00
Jiaming Yuan
badeff1d74 Init estimation for regression. (#8272) 2023-01-11 02:04:56 +08:00
Jiaming Yuan
1b58d81315 [doc] Document Python inputs. (#8643) 2023-01-10 15:39:32 +08:00
Bobby Wang
4e12f3e1bc [Breaking][jvm-packages] Bump rapids version to 22.12.0 (#8648)
* [jvm-packages] Bump rapids version to 22.12.0

This PR bumps the spark version to 3.1.1 and the rapids version
to 22.12.0, which means the latest xgboost can't run
with the old rapids packages.
2023-01-07 18:59:17 +08:00
Jiaming Yuan
06a1cb6e03 Release news for patch releases including upcoming 1.7.3. [skip ci] (#8645) 2023-01-06 16:19:16 +08:00
Emre Batuhan Baloğlu
2b88099c74 [doc] Update custom_metric_obj.rst (#8626) 2023-01-06 05:08:25 +08:00
Jiaming Yuan
e68a152d9e Do not return internal value for get_params. (#8634) 2023-01-05 17:48:26 +08:00
Jiaming Yuan
26c9882e23 Fix loading GPU pickle with a CPU-only xgboost distribution. (#8632)
We can handle loading the pickle on a CPU-only machine if XGBoost is built with CUDA
enabled (Linux and Windows PyPI packages), but not if the distribution is CPU-only (macOS
PyPI package).
2023-01-05 02:14:30 +08:00
Bobby Wang
d3ad0524e7 [pyspark] Re-work _fit function (#8630) 2023-01-04 18:21:57 +08:00
Jiaming Yuan
beefd28471 Split up SHAP from RegTree. (#8612)
* Split up SHAP from `RegTree`.

Simplify the tree interface.
2023-01-04 18:17:47 +08:00
Jiaming Yuan
d308124910 Refactor PySpark tests. (#8605)
- Convert classifier tests to pytest tests.
- Replace hardcoded tests.
2023-01-04 17:05:16 +08:00
James Lamb
fa44a33ee6 remove unused variables in JSON-parsing code (#8627) 2023-01-04 15:50:33 +08:00
Jiaming Yuan
6eaddaa9c3 [CI] Fix CI with updated dependencies. (#8631)
* [CI] Fix CI with updated dependencies.

- Fix fetching the iris dataset in the jvm package.

* Skip SHAP test for now.

* Revert "Skip SHAP test for now."

This reverts commit 9aa28b4d8aee53fa95d92d2a879c6783ff4b2faa.

* Catch all exceptions.
2023-01-03 21:04:04 -08:00
Jiaming Yuan
8d545ab2a2 Implement fit stump. (#8607) 2023-01-04 04:14:51 +08:00
dependabot[bot]
20e6087579 Bump kryo from 5.3.0 to 5.4.0 in /jvm-packages (#8629)
Bumps [kryo](https://github.com/EsotericSoftware/kryo) from 5.3.0 to 5.4.0.
- [Release notes](https://github.com/EsotericSoftware/kryo/releases)
- [Commits](https://github.com/EsotericSoftware/kryo/compare/kryo-parent-5.3.0...kryo-parent-5.4.0)

---
updated-dependencies:
- dependency-name: com.esotericsoftware:kryo
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-03 18:44:39 +08:00
James Lamb
dd72af2620 [CI] fix git errors related to directory ownership (#8628) 2023-01-01 16:05:44 -08:00
James Lamb
9a98c3726c [R] [CI] add more linting checks (#8624) 2022-12-29 18:20:36 +08:00
James Lamb
b05abfc494 [CI] remove unused cpp test helper function (#8625) 2022-12-28 02:47:52 +08:00
Rong Ou
3ceeb8c61c Add data split mode to DMatrix MetaInfo (#8568) 2022-12-25 20:37:37 +08:00
Rong Ou
77b069c25d Support bitwise allreduce operations in the communicator (#8623) 2022-12-25 06:40:05 +08:00
James Lamb
c7e82b5914 [R] enforce lintr checks (fixes #8012) (#8613) 2022-12-25 05:02:56 +08:00
James Lamb
f489d824ca [R] remove unused imports in tests (#8614) 2022-12-25 03:45:47 +08:00
Jiaming Yuan
c430ae52f3 Fix mypy errors with the latest numpy. (#8617) 2022-12-21 01:42:05 -08:00
Philip Hyunsu Cho
5bf9e79413 [CI] Disable gtest with RMM (#8620) 2022-12-21 01:41:34 -08:00
Jiaming Yuan
c6a8754c62 Define CUDA Context. (#8604)
We will transition to a non-default, non-blocking CUDA stream.
2022-12-20 15:15:07 +08:00
James Lamb
e01639548a [R] remove unused compiler flag RABIT_CUSTOMIZE_MSG_ (#8610) 2022-12-17 19:36:35 +08:00
James Lamb
17ce1f26c8 [R] address some lintr warnings (#8609) 2022-12-17 18:36:14 +08:00
James Lamb
53e6e32718 [R] resolve assignment_linter warnings (#8599) 2022-12-17 01:22:41 +08:00
Jiaming Yuan
f6effa1734 Support Series and Python primitives in inplace_predict and QDM (#8547) 2022-12-17 00:15:15 +08:00
Jiaming Yuan
a10e4cba4e Fix linalg iterator. (#8603) 2022-12-16 23:05:03 +08:00
Jiaming Yuan
38887a1876 Fix windows build on buildkite. (#8602) 2022-12-16 21:12:24 +08:00
Jiaming Yuan
43a647a4dd Fix inference with categorical feature. (#8591) 2022-12-15 17:57:26 +08:00
Esteban Djeordjian
7dc3e95a77 Added ranges for alpha and lambda in docs (#8597) 2022-12-15 16:51:04 +08:00
dependabot[bot]
0c38ca7f6e Bump nexus-staging-maven-plugin from 1.6.7 to 1.6.13 in /jvm-packages (#8600) 2022-12-15 08:44:05 +00:00
Jiaming Yuan
001e663d42 Set enable_categorical to True in predict. (#8592) 2022-12-15 05:27:06 +08:00
James Lamb
7a07dcf651 [R] resolve line_length_linter warnings (#8565)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2022-12-14 21:04:24 +08:00
dependabot[bot]
eac980fbfc Bump maven-checkstyle-plugin from 3.1.2 to 3.2.0 in /jvm-packages (#8594)
Bumps [maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.1.2 to 3.2.0.
- [Release notes](https://github.com/apache/maven-checkstyle-plugin/releases)
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.1.2...maven-checkstyle-plugin-3.2.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-14 19:46:03 +08:00
James Lamb
06ea6c7e79 [python] remove unnecessary conversions between data structures (#8546)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2022-12-14 18:32:02 +08:00
dependabot[bot]
f64871c74a Bump spark.version from 3.0.1 to 3.0.3 in /jvm-packages (#8593)
Bumps `spark.version` from 3.0.1 to 3.0.3.

Updates `spark-mllib_2.12` from 3.0.1 to 3.0.3

Updates `spark-core_2.12` from 3.0.1 to 3.0.3

Updates `spark-sql_2.12` from 3.0.1 to 3.0.3

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-core_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-sql_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-14 17:23:48 +08:00
Jiaming Yuan
40343c8ee1 Test dask demos. (#8557)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-12-13 18:37:31 +08:00
Rong Ou
15a88ceef0 Fix deprecated CUB calls in CUDA 12.0 (#8578) 2022-12-12 17:02:30 +08:00
Philip Hyunsu Cho
35d8447282 [CI] Use conda-forge channel in conda (#8583) 2022-12-11 23:25:29 -08:00
Rong Ou
42e6fbb0db Fix sklearn test that calls a removed field (#8579) 2022-12-09 13:06:44 -08:00
Jiaming Yuan
deb3edf562 Support list and tuple for QDM. (#8542) 2022-12-10 01:14:44 +08:00
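A minimal sketch of what this enables, assuming the Python `QuantileDMatrix` introduced in 1.7:

```python
import xgboost as xgb

X = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]  # a plain nested list
y = (0.0, 1.0, 0.0)                       # a plain tuple
m = xgb.QuantileDMatrix(X, label=y)
print(m.num_row(), m.num_col())
```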
Jiaming Yuan
8824b40961 Update date in release script. [skip ci] (#8574) 2022-12-09 23:16:10 +08:00
Rong Ou
0caf2be684 Update NVFlare demo to work with the latest release (#8576) 2022-12-09 02:48:20 +08:00
James Lamb
ffee35e0f0 [R] [ci] remove dependency on {devtools} (#8563) 2022-12-09 01:21:28 +08:00
James Lamb
fbe40d00d8 [R] resolve brace_linter warnings (#8564) 2022-12-08 23:01:00 +08:00
Bobby Wang
40a1a2ffa8 [pyspark] check use_qdm across all the workers (#8496) 2022-12-08 18:09:17 +08:00
dependabot[bot]
5aeb8f7009 Bump maven-gpg-plugin from 1.5 to 3.0.1 in /jvm-packages (#8571) 2022-12-08 06:59:11 +00:00
dependabot[bot]
f592a5125b Bump flink.version from 1.7.2 to 1.8.3 in /jvm-packages (#8561) 2022-12-07 20:53:22 +00:00
dependabot[bot]
27aea6c7b5 Bump maven-surefire-plugin from 2.19.1 to 2.22.2 in /jvm-packages (#8562) 2022-12-07 17:56:05 +00:00
Gianfrancesco Angelini
5540019373 feat(py, plot_importance): + values_format as arg (#8540) 2022-12-08 00:47:28 +08:00
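A hedged sketch of the new argument, assuming the `{v}` placeholder convention used by `plot_importance` (training data here is synthetic):

```python
import numpy as np
import xgboost as xgb
from matplotlib import pyplot as plt

rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 3)), rng.integers(0, 2, size=64)
booster = xgb.train({"objective": "binary:logistic"}, xgb.DMatrix(X, label=y), 8)

# values_format controls how the importance numbers are rendered on the bars.
xgb.plot_importance(booster, values_format="{v:.2f}")
plt.show()
```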
François Bobot
8c6630c310 Typo in model schema (#8543)
categorical -> categories
2022-12-07 22:56:59 +08:00
Matthew Rocklin
b7ffdcdbb9 Properly await async method client.wait_for_workers (#8558)
* Properly await async method client.wait_for_workers

* ignore mypy error.

Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-12-07 21:49:30 +08:00
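A sketch of the corrected usage, assuming an asynchronous distributed client as in the fix:

```python
import asyncio
from dask.distributed import Client, LocalCluster

async def main() -> None:
    async with LocalCluster(asynchronous=True, n_workers=2) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            # On an asynchronous client this is a coroutine and must be awaited.
            await client.wait_for_workers(n_workers=2)

asyncio.run(main())
```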
dependabot[bot]
4f1e453ff5 Bump maven-project-info-reports-plugin in /jvm-packages (#8560)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 2.2 to 3.4.1.
- [Release notes](https://github.com/apache/maven-project-info-reports-plugin/releases)
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-2.2...maven-project-info-reports-plugin-3.4.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-07 14:33:29 +08:00
Jiaming Yuan
3e26107a9c Rename and extract Context. (#8528)
* Rename `GenericParameter` to `Context`.
* Rename header file to reflect the change.
* Rename all references.
2022-12-07 04:58:54 +08:00
James Lamb
05fc6f3ca9 [R] [ci] move linting code out of package (#8545) 2022-12-07 03:18:17 +08:00
Jiaming Yuan
e38fe21e0d Cleanup regression objectives. (#8539) 2022-12-07 01:05:42 +08:00
dependabot[bot]
7774bf628e Bump scalatest-maven-plugin from 1.0 to 2.2.0 in /jvm-packages (#8509)
Bumps [scalatest-maven-plugin](https://github.com/scalatest/scalatest-maven-plugin) from 1.0 to 2.2.0.
- [Release notes](https://github.com/scalatest/scalatest-maven-plugin/releases)
- [Commits](https://github.com/scalatest/scalatest-maven-plugin/commits/release-2.2.0)

---
updated-dependencies:
- dependency-name: org.scalatest:scalatest-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 21:34:22 +08:00
dependabot[bot]
4a99c9bdb8 Bump commons-lang3 from 3.9 to 3.12.0 in /jvm-packages (#8548)
Bumps commons-lang3 from 3.9 to 3.12.0.

---
updated-dependencies:
- dependency-name: org.apache.commons:commons-lang3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 20:13:46 +08:00
Jiaming Yuan
d99bdd1b1e [CI] Fix github action mismatched glibcxx. (#8551)
* [CI] Fix github action mismatched glibcxx.

Split up the Linux test to use the toolchain from conda forge.
2022-12-06 17:42:15 +08:00
dependabot[bot]
ed1a4f3205 Bump maven-source-plugin from 2.2.1 to 3.2.1 in /jvm-packages (#8549)
Bumps [maven-source-plugin](https://github.com/apache/maven-source-plugin) from 2.2.1 to 3.2.1.
- [Release notes](https://github.com/apache/maven-source-plugin/releases)
- [Commits](https://github.com/apache/maven-source-plugin/compare/maven-source-plugin-2.2.1...maven-source-plugin-3.2.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-source-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 14:10:19 +08:00
dependabot[bot]
e85c9b987b Bump maven-site-plugin from 3.0 to 3.12.1 in /jvm-packages (#8533)
Bumps [maven-site-plugin](https://github.com/apache/maven-site-plugin) from 3.0 to 3.12.1.
- [Release notes](https://github.com/apache/maven-site-plugin/releases)
- [Commits](https://github.com/apache/maven-site-plugin/compare/maven-site-plugin-3.0...maven-site-plugin-3.12.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-site-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 11:35:53 +08:00
Jiaming Yuan
7ac52e674f [doc] Update model schema. (#8538)
* Update model schema with `num_target`.
2022-12-06 11:35:07 +08:00
dependabot[bot]
2790e3091f Bump maven-assembly-plugin from 2.6 to 3.4.2 in /jvm-packages (#8521)
Bumps [maven-assembly-plugin](https://github.com/apache/maven-assembly-plugin) from 2.6 to 3.4.2.
- [Release notes](https://github.com/apache/maven-assembly-plugin/releases)
- [Commits](https://github.com/apache/maven-assembly-plugin/compare/maven-assembly-plugin-2.6...maven-assembly-plugin-3.4.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-assembly-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 04:07:04 +08:00
dependabot[bot]
0c1769b3a5 Bump maven-javadoc-plugin in /jvm-packages/xgboost4j (#8534)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 2.10.3 to 3.4.1.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-2.10.3...maven-javadoc-plugin-3.4.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 02:30:46 +08:00
dependabot[bot]
67752e3967 Bump scala-maven-plugin from 3.2.2 to 4.8.0 in /jvm-packages (#8532)
Bumps scala-maven-plugin from 3.2.2 to 4.8.0.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 00:58:30 +08:00
Jiaming Yuan
8afcecc025 [doc] Fix outdated document [skip ci] (#8527)
* [doc] Fix document around categorical parameters. [skip ci]

* note on validate parameter [skip ci]

* Fix dask doc as well [skip ci]
2022-12-06 00:56:17 +08:00
Jiaming Yuan
e143a4dd7e [pyspark] Refactor local tests. (#8525)
- Use pytest fixture for spark session.
- Replace hardcoded results.
2022-12-05 23:49:54 +08:00
Philip Hyunsu Cho
42c5ee5588 [jvm-packages] Bump version of akka packages (#8524) 2022-12-05 22:45:00 +08:00
Jiaming Yuan
e3bf5565ab Extract transform iterator. (#8498) 2022-12-05 21:37:07 +08:00
Jiaming Yuan
d8544e4d9e [R] Remove unused assert definition. (#8526) 2022-12-05 20:29:03 +08:00
dependabot[bot]
d8d2eefa63 Bump junit from 4.13.1 to 4.13.2 in /jvm-packages/xgboost4j-gpu (#8516)
Bumps [junit](https://github.com/junit-team/junit4) from 4.13.1 to 4.13.2.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.13.1.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.13.1...r4.13.2)

---
updated-dependencies:
- dependency-name: junit:junit
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 19:11:33 +08:00
dependabot[bot]
8e8d3ac708 Bump kryo from 4.0.2 to 5.3.0 in /jvm-packages (#8503)
Bumps [kryo](https://github.com/EsotericSoftware/kryo) from 4.0.2 to 5.3.0.
- [Release notes](https://github.com/EsotericSoftware/kryo/releases)
- [Commits](https://github.com/EsotericSoftware/kryo/compare/kryo-parent-4.0.2...kryo-parent-5.3.0)

---
updated-dependencies:
- dependency-name: com.esotericsoftware:kryo
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 18:01:59 +08:00
dependabot[bot]
3bfe90c183 Bump exec-maven-plugin in /jvm-packages/xgboost4j-gpu (#8531)
Bumps [exec-maven-plugin](https://github.com/mojohaus/exec-maven-plugin) from 1.6.0 to 3.1.0.
- [Release notes](https://github.com/mojohaus/exec-maven-plugin/releases)
- [Commits](https://github.com/mojohaus/exec-maven-plugin/compare/exec-maven-plugin-1.6.0...exec-maven-plugin-3.1.0)

---
updated-dependencies:
- dependency-name: org.codehaus.mojo:exec-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 17:41:41 +08:00
dependabot[bot]
a903241fbf Bump maven-javadoc-plugin in /jvm-packages/xgboost4j-gpu (#8530)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 2.10.3 to 3.4.1.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-2.10.3...maven-javadoc-plugin-3.4.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 16:32:08 +08:00
Bobby Wang
f1e9bbcee5 [breaking] [jvm-packages] change DeviceQuantileDMatrix into QuantileDMatrix (#8461) 2022-12-05 12:23:21 +08:00
Rong Ou
78d65a1928 Initial support for column-wise data split (#8468) 2022-12-04 01:37:51 +08:00
dependabot[bot]
c0609b98f1 Bump exec-maven-plugin from 1.6.0 to 3.1.0 in /jvm-packages/xgboost4j (#8518)
Bumps [exec-maven-plugin](https://github.com/mojohaus/exec-maven-plugin) from 1.6.0 to 3.1.0.
- [Release notes](https://github.com/mojohaus/exec-maven-plugin/releases)
- [Commits](https://github.com/mojohaus/exec-maven-plugin/compare/exec-maven-plugin-1.6.0...exec-maven-plugin-3.1.0)

---
updated-dependencies:
- dependency-name: org.codehaus.mojo:exec-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:24:22 -08:00
dependabot[bot]
ba0ed255ef Bump maven-jar-plugin from 3.0.2 to 3.3.0 in /jvm-packages/xgboost4j-gpu (#8512)
Bumps [maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.0.2 to 3.3.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.0.2...maven-jar-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:23:03 -08:00
dependabot[bot]
1d8bb7332f Bump maven-resources-plugin in /jvm-packages/xgboost4j (#8515)
Bumps [maven-resources-plugin](https://github.com/apache/maven-resources-plugin) from 3.1.0 to 3.3.0.
- [Release notes](https://github.com/apache/maven-resources-plugin/releases)
- [Commits](https://github.com/apache/maven-resources-plugin/compare/maven-resources-plugin-3.1.0...maven-resources-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-resources-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:22:26 -08:00
dependabot[bot]
dcc92a6703 Bump maven-jar-plugin from 3.0.2 to 3.3.0 in /jvm-packages/xgboost4j (#8517)
Bumps [maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.0.2 to 3.3.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.0.2...maven-jar-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:21:42 -08:00
dependabot[bot]
fcafd3a777 Bump maven-jar-plugin from 3.0.2 to 3.3.0 in /jvm-packages (#8506)
Bumps [maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.0.2 to 3.3.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.0.2...maven-jar-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:21:01 -08:00
dependabot[bot]
b23e97f8b0 Bump maven-resources-plugin from 3.1.0 to 3.3.0 in /jvm-packages (#8504)
Bumps [maven-resources-plugin](https://github.com/apache/maven-resources-plugin) from 3.1.0 to 3.3.0.
- [Release notes](https://github.com/apache/maven-resources-plugin/releases)
- [Commits](https://github.com/apache/maven-resources-plugin/compare/maven-resources-plugin-3.1.0...maven-resources-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-resources-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:20:37 -08:00
Bobby Wang
8e41ad24f5 [pyspark] sort qid for SparkRanker (#8497)
* [pyspark] sort qid for SparkRanker

* resolve comments
2022-12-01 16:40:35 -08:00
dependabot[bot]
f747e05eac Bump maven-deploy-plugin from 2.8.2 to 3.0.0 in /jvm-packages (#8502)
Bumps [maven-deploy-plugin](https://github.com/apache/maven-deploy-plugin) from 2.8.2 to 3.0.0.
- [Release notes](https://github.com/apache/maven-deploy-plugin/releases)
- [Commits](https://github.com/apache/maven-deploy-plugin/compare/maven-deploy-plugin-2.8.2...maven-deploy-plugin-3.0.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-deploy-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-01 16:39:59 -08:00
Philip Hyunsu Cho
2546d139d6 [jvm-packages] Add missing commons-lang3 dependency to xgboost4j-gpu (#8508)
* [jvm-packages] Add missing commons-lang3 dependency to xgboost4j-gpu

* Update commons-lang3
2022-12-01 16:27:11 -08:00
Philip Hyunsu Cho
7c6f2346d3 [jvm-packages] Configure dependabot properly (#8507)
* [jvm-packages] Configure dependabot properly

* Allow automatic updates for Scala and Spark within the same major version
2022-12-01 16:26:47 -08:00
Philip Hyunsu Cho
f550109641 Bump some old dependencies of JVM packages (#8456) 2022-11-30 23:04:08 -08:00
Philip Hyunsu Cho
9a98e79649 [jvm-packages] Set up dependabot (#8501) 2022-11-30 22:46:17 -08:00
Rong Ou
a8255ea678 Add an in-memory collective communicator (#8494) 2022-12-01 00:24:12 +08:00
Jiaming Yuan
157e98edf7 Support half type from cupy. (#8487) 2022-11-30 17:56:42 +08:00
Jiaming Yuan
addaa63732 Support null value in CUDA array interface. (#8486)
* Support null value in CUDA array interface.

- Fix for potential null value in array interface.
- Fix incorrect check on mask stride.

* Simple tests.

* Extract mask.
2022-11-28 17:48:25 -08:00
Jiaming Yuan
3fc1046fd3 Reduce compiler warnings on CPU-only build. (#8483) 2022-11-29 00:04:16 +08:00
Jiaming Yuan
d666ba775e Support all pandas nullable integer types. (#8480)
- Enumerate all pandas integer types.
- Tests for `None`, `nan`, and `pd.NA`
2022-11-28 22:38:16 +08:00
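A minimal sketch of the supported inputs (dtype strings as pandas spells them; the frame is made up):

```python
import pandas as pd
import xgboost as xgb

# Any nullable integer dtype (Int8 ... UInt64) is accepted; pd.NA, None,
# and float("nan") are all treated as missing values.
df = pd.DataFrame(
    {
        "a": pd.array([1, pd.NA, 3], dtype="Int32"),
        "b": pd.array([None, 2, 4], dtype="UInt8"),
    }
)
m = xgb.DMatrix(df)
print(m.num_row(), m.num_col())
```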
Jiaming Yuan
f2209c1fe4 Don't shuffle columns in categorical tests. (#8446) 2022-11-28 20:28:06 +08:00
WeichenXu
67ea1c3435 [pyspark] Make QDM optional based on cuDF check (#8471) 2022-11-27 14:58:54 +08:00
Jiaming Yuan
8f97c92541 Support half type for pandas. (#8481) 2022-11-24 12:47:40 +08:00
Jiaming Yuan
e07245f110 Take datatable as row major input. (#8472)
* Take datatable as row major input.

Try to avoid a transform with a dense table.
2022-11-24 09:20:13 +08:00
Jiaming Yuan
284dcf8d22 Add script for change version. (#8443)
- Replace jvm regex replacement script with mvn command.
- Replace cmake script for python version with python script.
- Automate the rest of the manual steps.

The script can handle dev branch, rc release, and formal release version.
2022-11-24 00:06:39 +08:00
Jiaming Yuan
5f1a6fca0d [R] Use new interface for creating DMatrix from CSR. (#8455)
* [R] Use new interface for creating DMatrix from CSR.

- CSC is still using the old API.

The old API is not aware of the `nthread` parameter, which makes DMatrix use all available
threads during construction and during transformations like `SparsePage` -> `CSCPage`.
2022-11-23 21:36:43 +08:00
Nick Becker
58d211545f explain cpu/gpu interop and link to model IO tutorial (#8450) 2022-11-23 20:58:28 +08:00
Bobby Wang
2dde65f807 [ci] reduce pyspark test time (#8324) 2022-11-21 16:58:00 +08:00
Joyce
3b8a0e08f7 feat: use commit hash instead of version in actions workflows (#8460)
Signed-off-by: Joyce Brum <joycebrum@google.com>

Signed-off-by: Joyce Brum <joycebrum@google.com>
2022-11-17 22:04:11 +08:00
Rong Ou
30b1a26fc0 Remove unused page size constant (#8457) 2022-11-17 11:41:39 +08:00
Otto von Sperling
812d577597 Fix inline code blocks in 'spark_estimator.rst' (#8465) 2022-11-15 05:47:58 +08:00
Robert Maynard
16f96b6cfb Work with newer thrust and libcudacxx (#8454)
* Thrust 1.17 removes the experimental/pinned_allocator.

When xgboost is brought into a large project, it can
be compiled against Thrust 1.17+, which doesn't offer
this experimental allocator.

To ensure that xgboost keeps working in all environments going
forward, we provide an xgboost-namespaced version of
the pinned_allocator that previously lived in Thrust.
2022-11-11 04:22:53 +08:00
Gavin Zhang
0c6266bc4a SO_DOMAIN is not supported on IBM i; use getsockname instead (#8437)
Co-authored-by: GavinZhang <zhanggan@cn.ibm.com>
2022-11-10 23:54:57 +08:00
Jiaming Yuan
9dd8d70f0e Fix mypy errors. (#8444) 2022-11-09 13:19:11 +08:00
Jiaming Yuan
0252d504d8 Fix R package build on CI. (#8445)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-11-09 12:18:36 +08:00
Jiaming Yuan
a83748eb45 [CI] Revise R tests. (#8430)
- Use the standard package check (check on the tarball instead of the source tree).
- Run commands in parallel.
- Cleanup dependencies installation.
- Replace makefile.
- Documentation.
- Test using the image from rhub.
2022-11-09 09:12:13 +08:00
Rong Ou
4449e30184 Always link federated proto statically (#8442) 2022-11-09 07:47:38 +08:00
Jiaming Yuan
ca0f7f2714 [doc] Update C tutorial. [skip ci] (#8436)
- Use rst references instead of doxygen links.
- Replace deprecated functions.

- Add SaveModel; put free step last [skip ci]

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-11-09 07:14:12 +08:00
Jiaming Yuan
0b36f8fba1 [R] Fix CRAN test notes. (#8428)
- Limit the number of used CPU cores in examples.
- Add a note for the constraint.
- Bring back the cleanup script.
2022-11-09 02:03:30 +08:00
Rong Ou
8e76f5f595 Use DataSplitMode to configure data loading (#8434)
* Use `DataSplitMode` to configure data loading
2022-11-08 16:21:50 +08:00
Jiaming Yuan
0d3da9869c Require isort on all Python files. (#8420) 2022-11-08 12:59:06 +08:00
James Lamb
bf8de227a9 [CI] remove unused import in python tests (#8409) 2022-11-03 22:27:25 +08:00
James Lamb
b1b2524dbb add files from python tests to .gitignore (#8410) 2022-11-03 07:57:45 +08:00
Rong Ou
99fa8dad2d Add back xgboost.rabit for backwards compatibility (#8408)
* Add back xgboost.rabit for backwards compatibility

* fix mypy errors

* Fix lint

* Use FutureWarning

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-11-01 21:47:41 -07:00
Philip Hyunsu Cho
0db903b471 Fix formatting in NEWS.md [skip ci] 2022-10-31 15:42:31 -07:00
Jiaming Yuan
917cbc0699 1.7 release note. [skip ci] (#8374)
* Draft for 1.7 release note. [skip ci]

* Wording [skip ci]

* Update with backports [skip ci]

* Apply suggestions from code review [skip ci]

* Apply suggestions from code review [skip ci]

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Update NEWS.md [skip ci]

Co-authored-by: Rory Mitchell <r.a.mitchell.nz@gmail.com>

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
Co-authored-by: Rory Mitchell <r.a.mitchell.nz@gmail.com>
2022-10-31 09:32:33 -07:00
Jiaming Yuan
2ed3c29c8a [CI] Cleanup github action tests. (#8397)
- Merge doxygen build with sphinx.
- Use mamba on non-windows Github Action.
2022-10-29 06:04:27 +08:00
Joyce
7174d60ed2 Fix Scorecard Github Action not working (#8402)
* chore: create security policy

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* chore: only latest release on security policy

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* chore: security policy support on a best-effort basis

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* Use dedicated e-mail address for security reporting

* fix: upgrade scorecard action version

Signed-off-by: Joyce Brum <joycebrum@google.com>

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>
Signed-off-by: Joyce Brum <joycebrum@google.com>
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-10-28 16:25:43 -04:00
Jiaming Yuan
a408c34558 Update JSON parser demo with categorical feature. (#8401)
- Parse categorical features in the Python example.
- Add tests.
- Update document.
2022-10-28 20:57:43 +08:00
Jiaming Yuan
cfd2a9f872 Extract dask and spark test into distributed test. (#8395)
- Move test files.
- Run spark and dask separately to prevent conflicts.
- Gather common code into the testing module.
2022-10-28 16:24:32 +08:00
Jiaming Yuan
f73520bfff Bump development version to 2.0. (#8390) 2022-10-28 15:21:19 +08:00
Christian Clauss
ae27e228c4 xrange() was removed in Python 3 in favor of range() (#8371) 2022-10-27 16:36:14 +08:00
Yizhi Liu
5699f60a88 Type fix for WebAssembly: use bst_ulong instead of size_t for ncol in CSR conversion. (#8369) 2022-10-26 19:21:45 +08:00
Jiaming Yuan
a2593e60bf Speedup R test on github. (#8388) 2022-10-26 18:02:27 +08:00
Jiaming Yuan
786aa27134 [doc] Additional notes for release [skip ci] (#8367) 2022-10-26 17:55:15 +08:00
Jiaming Yuan
cf70864fa3 Move Python testing utilities into xgboost module. (#8379)
- Add typehints.
- Fixes for pylint.

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-26 16:56:11 +08:00
Jiaming Yuan
7e53189e7c [pyspark] Improve tutorial on enabling GPU support. (#8385)
- Quote the databricks doc on how to manage dependencies.
- Some wording changes.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-10-26 15:45:54 +08:00
Thomas Stanley
ba9cc43464 Fix acronym (#8386) 2022-10-26 06:22:30 +08:00
Philip Hyunsu Cho
8bb55949ef Fix building XGBoost with libomp 15 (#8384) 2022-10-25 12:01:11 -07:00
Jiaming Yuan
d0b99bdd95 [pyspark] Add type hint to basic utilities. (#8375) 2022-10-25 17:26:25 +08:00
Jiaming Yuan
1d2f6de573 remove travis status [skip ci] (#8382) 2022-10-24 14:37:33 +08:00
Jiaming Yuan
a3b8bca46a Remove travis configuration file. [skip ci] (#8381) 2022-10-23 02:49:29 +08:00
Jiaming Yuan
bb5e18c29c Fix CUDA async stream. (#8380) 2022-10-22 23:13:28 +08:00
Christian Clauss
5761f27e5e Use ==/!= to compare constant literals (str, bytes, int, float, tuple) (#8372) 2022-10-22 21:53:03 +08:00
Jiaming Yuan
99467f3999 [doc] Cleanup outdated documents for GPU. [skip ci] (#8378) 2022-10-21 20:13:31 +08:00
956 changed files with 52638 additions and 31991 deletions

.clang-tidy

@@ -1,4 +1,4 @@
-Checks: 'modernize-*,-modernize-make-*,-modernize-use-auto,-modernize-raw-string-literal,-modernize-avoid-c-arrays,-modernize-use-trailing-return-type,google-*,-google-default-arguments,-clang-diagnostic-#pragma-messages,readability-identifier-naming'
+Checks: 'modernize-*,-modernize-use-nodiscard,-modernize-concat-nested-namespaces,-modernize-make-*,-modernize-use-auto,-modernize-raw-string-literal,-modernize-avoid-c-arrays,-modernize-use-trailing-return-type,google-*,-google-default-arguments,-clang-diagnostic-#pragma-messages,readability-identifier-naming'
 CheckOptions:
 - { key: readability-identifier-naming.ClassCase, value: CamelCase }
 - { key: readability-identifier-naming.StructCase, value: CamelCase }

.gitattributes (new file)

@@ -0,0 +1,18 @@
* text=auto
*.c text eol=lf
*.h text eol=lf
*.cc text eol=lf
*.cuh text eol=lf
*.cu text eol=lf
*.py text eol=lf
*.txt text eol=lf
*.R text eol=lf
*.scala text eol=lf
*.java text eol=lf
*.sh text eol=lf
*.rst text eol=lf
*.md text eol=lf
*.csv text eol=lf

.github/dependabot.yml (new file)

@@ -0,0 +1,31 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
  - package-ecosystem: "maven"
    directory: "/jvm-packages"
    schedule:
      interval: "daily"
  - package-ecosystem: "maven"
    directory: "/jvm-packages/xgboost4j"
    schedule:
      interval: "daily"
  - package-ecosystem: "maven"
    directory: "/jvm-packages/xgboost4j-gpu"
    schedule:
      interval: "daily"
  - package-ecosystem: "maven"
    directory: "/jvm-packages/xgboost4j-example"
    schedule:
      interval: "daily"
  - package-ecosystem: "maven"
    directory: "/jvm-packages/xgboost4j-spark"
    schedule:
      interval: "daily"
  - package-ecosystem: "maven"
    directory: "/jvm-packages/xgboost4j-spark-gpu"
    schedule:
      interval: "daily"

(GitHub Actions workflow for the JVM packages; file name not shown)

@@ -15,16 +15,16 @@ jobs:
         os: [windows-latest, ubuntu-latest, macos-11]
     steps:
-    - uses: actions/checkout@v2
+    - uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
       with:
         submodules: 'true'
-    - uses: actions/setup-python@v2
+    - uses: actions/setup-python@7f80679172b057fc5e90d70d197929d454754a5a # v4.3.0
       with:
         python-version: '3.8'
         architecture: 'x64'
-    - uses: actions/setup-java@v1
+    - uses: actions/setup-java@d202f5dbf7256730fb690ec59f6381650114feb2 # v3.6.0
       with:
         java-version: 1.8
@@ -34,13 +34,13 @@
         python -m pip install awscli
     - name: Cache Maven packages
-      uses: actions/cache@v2
+      uses: actions/cache@6998d139ddd3e68c71e9e398d8e40b71a2f39812 # v3.2.5
       with:
         path: ~/.m2
         key: ${{ runner.os }}-m2-${{ hashFiles('./jvm-packages/pom.xml') }}
-        restore-keys: ${{ runner.os }}-m2
+        restore-keys: ${{ runner.os }}-m2-${{ hashFiles('./jvm-packages/pom.xml') }}
-    - name: Test XGBoost4J
+    - name: Test XGBoost4J (Core)
       run: |
         cd jvm-packages
         mvn test -B -pl :xgboost4j_2.12
@@ -67,7 +67,7 @@
         AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
-    - name: Test XGBoost4J-Spark
+    - name: Test XGBoost4J (Core, Spark, Examples)
       run: |
         rm -rfv build/
         cd jvm-packages
@@ -75,3 +75,13 @@
       if: matrix.os == 'ubuntu-latest' # Distributed training doesn't work on Windows
       env:
         RABIT_MOCK: ON
+    - name: Build and Test XGBoost4J with scala 2.13
+      run: |
+        rm -rfv build/
+        cd jvm-packages
+        mvn -B clean install test -Pdefault,scala-2.13
+      if: matrix.os == 'ubuntu-latest' # Distributed training doesn't work on Windows
+      env:
+        RABIT_MOCK: ON


@@ -19,7 +19,7 @@ jobs:
matrix:
os: [macos-11]
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install system packages
@@ -45,7 +45,7 @@ jobs:
matrix:
os: [ubuntu-latest]
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install system packages
@@ -66,30 +66,30 @@ jobs:
c-api-demo:
name: Test installing XGBoost lib + building the C API demo
runs-on: ${{ matrix.os }}
+defaults:
+run:
+shell: bash -l {0}
strategy:
fail-fast: false
matrix:
os: ["ubuntu-latest"]
python-version: ["3.8"]
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
-- name: Install system packages
-run: |
-sudo apt-get install -y --no-install-recommends ninja-build
-- uses: conda-incubator/setup-miniconda@v2
+- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
-auto-update-conda: true
-python-version: ${{ matrix.python-version }}
-activate-environment: test
+cache-downloads: true
+cache-env: true
+environment-name: cpp_test
+environment-file: tests/ci_build/conda_env/cpp_test.yml
- name: Display Conda env
-shell: bash -l {0}
run: |
conda info
conda list
- name: Build and install XGBoost static library
-shell: bash -l {0}
run: |
mkdir build
cd build
@@ -97,7 +97,6 @@ jobs:
ninja -v install
cd -
- name: Build and run C API demo with static
-shell: bash -l {0}
run: |
pushd .
cd demo/c-api/
@@ -109,15 +108,14 @@ jobs:
cd ..
rm -rf ./build
popd
- name: Build and install XGBoost shared library
-shell: bash -l {0}
run: |
cd build
cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja
ninja -v install
cd -
- name: Build and run C API demo with shared
-shell: bash -l {0}
run: |
pushd .
cd demo/c-api/
@@ -130,14 +128,14 @@ jobs:
./tests/ci_build/verify_link.sh ./demo/c-api/build/basic/api-demo
./tests/ci_build/verify_link.sh ./demo/c-api/build/external-memory/external-memory-demo
-lint:
+cpp-lint:
runs-on: ubuntu-latest
name: Code linting for C++
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
-- uses: actions/setup-python@v2
+- uses: actions/setup-python@7f80679172b057fc5e90d70d197929d454754a5a # v4.3.0
with:
python-version: "3.8"
architecture: 'x64'
@@ -146,68 +144,15 @@ jobs:
python -m pip install wheel setuptools cpplint pylint
- name: Run lint
run: |
-LINT_LANG=cpp make lint
+python3 dmlc-core/scripts/lint.py xgboost cpp R-package/src
+python3 dmlc-core/scripts/lint.py --exclude_path \
+python-package/xgboost/dmlc-core \
+python-package/xgboost/include \
+python-package/xgboost/lib \
+python-package/xgboost/rabit \
+python-package/xgboost/src \
+--pylint-rc python-package/.pylintrc \
+xgboost \
+cpp \
+include src python-package
-doxygen:
-runs-on: ubuntu-latest
-name: Generate C/C++ API doc using Doxygen
-steps:
-- uses: actions/checkout@v2
-with:
-submodules: 'true'
-- uses: actions/setup-python@v2
-with:
-python-version: "3.8"
-architecture: 'x64'
-- name: Install system packages
-run: |
-sudo apt-get install -y --no-install-recommends doxygen graphviz ninja-build
-python -m pip install wheel setuptools
-python -m pip install awscli
-- name: Run Doxygen
-run: |
-mkdir build
-cd build
-cmake .. -DBUILD_C_DOC=ON -GNinja
-ninja -v doc_doxygen
-- name: Extract branch name
-shell: bash
-run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
-id: extract_branch
-if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
-- name: Publish
-run: |
-cd build/
-tar cvjf ${{ steps.extract_branch.outputs.branch }}.tar.bz2 doc_doxygen/
-python -m awscli s3 cp ./${{ steps.extract_branch.outputs.branch }}.tar.bz2 s3://xgboost-docs/doxygen/ --acl public-read
-if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
-env:
-AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
-AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
-sphinx:
-runs-on: ubuntu-latest
-name: Build docs using Sphinx
-steps:
-- uses: actions/checkout@v2
-with:
-submodules: 'true'
-- uses: actions/setup-python@v2
-with:
-python-version: "3.8"
-architecture: 'x64'
-- name: Install system packages
-run: |
-sudo apt-get install -y --no-install-recommends graphviz
-python -m pip install wheel setuptools
-python -m pip install -r doc/requirements.txt
-- name: Extract branch name
-shell: bash
-run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
-id: extract_branch
-if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
-- name: Run Sphinx
-run: |
-make -C doc html
-env:
-SPHINX_GIT_BRANCH: ${{ steps.extract_branch.outputs.branch }}


@@ -5,6 +5,10 @@ on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
+defaults:
+run:
+shell: bash -l {0}
jobs:
python-mypy-lint:
runs-on: ubuntu-latest
@@ -12,150 +16,125 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest]
-python-version: ["3.8"]
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
-- uses: conda-incubator/setup-miniconda@v2
+- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
-auto-update-conda: true
-python-version: ${{ matrix.python-version }}
-activate-environment: python_lint
+cache-downloads: true
+cache-env: true
+environment-name: python_lint
environment-file: tests/ci_build/conda_env/python_lint.yml
- name: Display Conda env
-shell: bash -l {0}
run: |
conda info
conda list
- name: Run mypy
-shell: bash -l {0}
run: |
python tests/ci_build/lint_python.py --format=0 --type-check=1 --pylint=0
- name: Run formatter
-shell: bash -l {0}
run: |
python tests/ci_build/lint_python.py --format=1 --type-check=0 --pylint=0
- name: Run pylint
-shell: bash -l {0}
run: |
python tests/ci_build/lint_python.py --format=0 --type-check=0 --pylint=1
-python-sdist-test:
+python-sdist-test-on-Linux:
+# Mismatched glibcxx version between system and conda forge.
runs-on: ${{ matrix.os }}
name: Test installing XGBoost Python source package on ${{ matrix.os }}
strategy:
matrix:
-os: [ubuntu-latest, macos-11, windows-latest]
+os: [ubuntu-latest]
+steps:
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
+with:
+submodules: 'true'
+- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
+with:
+cache-downloads: true
+cache-env: true
+environment-name: sdist_test
+environment-file: tests/ci_build/conda_env/sdist_test.yml
+- name: Display Conda env
+run: |
+conda info
+conda list
+- name: Build and install XGBoost
+run: |
+cd python-package
+python --version
+python -m build --sdist
+pip install -v ./dist/xgboost-*.tar.gz --config-settings use_openmp=False
+cd ..
+python -c 'import xgboost'
+python-sdist-test:
+# Use system toolchain instead of conda toolchain for macos and windows.
+# MacOS has linker error if clang++ from conda-forge is used
+runs-on: ${{ matrix.os }}
+name: Test installing XGBoost Python source package on ${{ matrix.os }}
+strategy:
+matrix:
+os: [macos-11, windows-latest]
python-version: ["3.8"]
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install osx system dependencies
if: matrix.os == 'macos-11'
run: |
brew install ninja libomp
-- name: Install Ubuntu system dependencies
-if: matrix.os == 'ubuntu-latest'
-run: |
-sudo apt-get install -y --no-install-recommends ninja-build
-- uses: conda-incubator/setup-miniconda@v2
+- uses: conda-incubator/setup-miniconda@35d1405e78aa3f784fe3ce9a2eb378d5eeb62169 # v2.1.1
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
activate-environment: test
+- name: Install build
+run: |
+conda install -c conda-forge python-build
- name: Display Conda env
-shell: bash -l {0}
run: |
conda info
conda list
- name: Build and install XGBoost
-shell: bash -l {0}
run: |
cd python-package
python --version
-python setup.py sdist
+python -m build --sdist
pip install -v ./dist/xgboost-*.tar.gz
cd ..
python -c 'import xgboost'
-python-tests-on-win:
-name: Test XGBoost Python package on ${{ matrix.config.os }}
-runs-on: ${{ matrix.config.os }}
-strategy:
-matrix:
-config:
-- {os: windows-latest, python-version: '3.8'}
-steps:
-- uses: actions/checkout@v2
-with:
-submodules: 'true'
-- uses: conda-incubator/setup-miniconda@v2
-with:
-auto-update-conda: true
-python-version: ${{ matrix.config.python-version }}
-activate-environment: win64_env
-environment-file: tests/ci_build/conda_env/win64_cpu_test.yml
-- name: Display Conda env
-shell: bash -l {0}
-run: |
-conda info
-conda list
-- name: Build XGBoost on Windows
-shell: bash -l {0}
-run: |
-mkdir build_msvc
-cd build_msvc
-cmake .. -G"Visual Studio 17 2022" -DCMAKE_CONFIGURATION_TYPES="Release" -A x64 -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON
-cmake --build . --config Release --parallel $(nproc)
-- name: Install Python package
-shell: bash -l {0}
-run: |
-cd python-package
-python --version
-python setup.py bdist_wheel --universal
-pip install ./dist/*.whl
-- name: Test Python package
-shell: bash -l {0}
-run: |
-pytest -s -v ./tests/python
python-tests-on-macos:
name: Test XGBoost Python package on ${{ matrix.config.os }}
runs-on: ${{ matrix.config.os }}
-timeout-minutes: 90
+timeout-minutes: 60
strategy:
matrix:
config:
-- {os: macos-11, python-version "3.8" }
+- {os: macos-11}
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
-- uses: conda-incubator/setup-miniconda@v2
+- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
-auto-update-conda: true
-python-version: ${{ matrix.config.python-version }}
-activate-environment: macos_test
+cache-downloads: true
+cache-env: true
+environment-name: macos_test
environment-file: tests/ci_build/conda_env/macos_cpu_test.yml
- name: Display Conda env
-shell: bash -l {0}
run: |
conda info
conda list
- name: Build XGBoost on macos
-shell: bash -l {0}
run: |
brew install ninja
@@ -164,17 +143,156 @@ jobs:
# Set prefix, to use OpenMP library from Conda env
# See https://github.com/dmlc/xgboost/issues/7039#issuecomment-1025038228
# to learn why we don't use libomp from Homebrew.
-cmake .. -GNinja -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
+cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
ninja
- name: Install Python package
-shell: bash -l {0}
run: |
cd python-package
python --version
-python setup.py install
+pip install -v .
- name: Test Python package
+run: |
+pytest -s -v -rxXs --durations=0 ./tests/python
+- name: Test Dask Interface
+run: |
+pytest -s -v -rxXs --durations=0 ./tests/test_distributed/test_with_dask
+python-tests-on-win:
+name: Test XGBoost Python package on ${{ matrix.config.os }}
+runs-on: ${{ matrix.config.os }}
+timeout-minutes: 60
+strategy:
+matrix:
+config:
+- {os: windows-latest, python-version: '3.8'}
+steps:
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
+with:
+submodules: 'true'
+- uses: conda-incubator/setup-miniconda@35d1405e78aa3f784fe3ce9a2eb378d5eeb62169 # v2.1.1
+with:
+auto-update-conda: true
+python-version: ${{ matrix.config.python-version }}
+activate-environment: win64_env
+environment-file: tests/ci_build/conda_env/win64_cpu_test.yml
+- name: Display Conda env
+run: |
+conda info
+conda list
+- name: Build XGBoost on Windows
+run: |
+mkdir build_msvc
+cd build_msvc
+cmake .. -G"Visual Studio 17 2022" -DCMAKE_CONFIGURATION_TYPES="Release" -A x64 -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON
+cmake --build . --config Release --parallel $(nproc)
+- name: Install Python package
+run: |
+cd python-package
+python --version
+pip wheel -v . --wheel-dir dist/
+pip install ./dist/*.whl
+- name: Test Python package
+run: |
+pytest -s -v -rxXs --durations=0 ./tests/python
+python-tests-on-ubuntu:
+name: Test XGBoost Python package on ${{ matrix.config.os }}
+runs-on: ${{ matrix.config.os }}
+timeout-minutes: 90
+strategy:
+matrix:
+config:
+- {os: ubuntu-latest, python-version: "3.8"}
+steps:
+- uses: actions/checkout@v2
+with:
+submodules: 'true'
+- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
+with:
+cache-downloads: true
+cache-env: true
+environment-name: linux_cpu_test
+environment-file: tests/ci_build/conda_env/linux_cpu_test.yml
+- name: Display Conda env
+run: |
+conda info
+conda list
+- name: Build XGBoost on Ubuntu
+run: |
+mkdir build
+cd build
+cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
+ninja
+- name: Install Python package
+run: |
+cd python-package
+python --version
+pip install -v .
+- name: Test Python package
+run: |
+pytest -s -v -rxXs --durations=0 ./tests/python
+- name: Test Dask Interface
+run: |
+pytest -s -v -rxXs --durations=0 ./tests/test_distributed/test_with_dask
+- name: Test PySpark Interface
shell: bash -l {0}
run: |
-pytest -s -v ./tests/python
+pytest -s -v -rxXs --durations=0 ./tests/test_distributed/test_with_spark
+python-system-installation-on-ubuntu:
+name: Test XGBoost Python package System Installation on ${{ matrix.os }}
+runs-on: ${{ matrix.os }}
+strategy:
+matrix:
+os: [ubuntu-latest]
+steps:
+- uses: actions/checkout@v2
+with:
+submodules: 'true'
+- name: Set up Python 3.8
+uses: actions/setup-python@v4
+with:
+python-version: 3.8
+- name: Install ninja
+run: |
+sudo apt-get update && sudo apt-get install -y ninja-build
+- name: Build XGBoost on Ubuntu
+run: |
+mkdir build
+cd build
+cmake .. -GNinja
+ninja
+- name: Copy lib to system lib
+run: |
+cp lib/* "$(python -c 'import sys; print(sys.base_prefix)')/lib"
+- name: Install XGBoost in Virtual Environment
+run: |
+cd python-package
+pip install virtualenv
+virtualenv venv
+source venv/bin/activate && \
+pip install -v . --config-settings use_system_libxgboost=True && \
+python -c 'import xgboost'


@@ -17,11 +17,11 @@ jobs:
- os: macos-latest
platform_id: macosx_arm64
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Setup Python
-uses: actions/setup-python@v2
+uses: actions/setup-python@7f80679172b057fc5e90d70d197929d454754a5a # v4.3.0
with:
python-version: "3.8"
- name: Build wheels


@@ -1,4 +1,4 @@
-# Run R tests with noLD R. Only triggered by a pull request review
+# Run expensive R tests with the help of rhub. Only triggered by a pull request review
# See discussion at https://github.com/dmlc/xgboost/pull/6378
name: XGBoost-R-noLD
@@ -7,9 +7,6 @@ on:
pull_request_review_comment:
types: [created]
-env:
-R_PACKAGES: c('XML', 'igraph', 'data.table', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic')
permissions:
contents: read # to fetch code (actions/checkout)
@@ -18,26 +15,22 @@ jobs:
if: github.event.comment.body == '/gha run r-nold-test' && contains('OWNER,MEMBER,COLLABORATOR', github.event.comment.author_association)
timeout-minutes: 120
runs-on: ubuntu-latest
-container: rhub/debian-gcc-devel-nold
+container:
+image: rhub/debian-gcc-devel-nold
steps:
- name: Install git and system packages
shell: bash
run: |
-apt-get update && apt-get install -y git libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libxml2-dev
+apt update && apt install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libglpk-dev libxml2-dev libharfbuzz-dev libfribidi-dev git -y
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install dependencies
-shell: bash
+shell: bash -l {0}
run: |
-cat > install_libs.R <<EOT
-install.packages(${{ env.R_PACKAGES }},
-repos = 'http://cloud.r-project.org',
-dependencies = c('Depends', 'Imports', 'LinkingTo'))
-EOT
-/tmp/R-devel/bin/Rscript install_libs.R
+/tmp/R-devel/bin/Rscript -e "source('./R-package/tests/helper_scripts/install_deps.R')"
- name: Run R tests
shell: bash


@@ -3,7 +3,6 @@ name: XGBoost-R-Tests
on: [push, pull_request]
env:
-R_PACKAGES: c('XML', 'data.table', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic')
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
permissions:
@@ -22,41 +21,32 @@ jobs:
RSPM: ${{ matrix.config.rspm }}
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
-- uses: r-lib/actions/setup-r@v2
+- uses: r-lib/actions/setup-r@11a22a908006c25fe054c4ef0ac0436b1de3edbe # v2.6.4
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
-uses: actions/cache@v2
+uses: actions/cache@937d24475381cd9c75ae6db12cb4e79714b926ed # v3.0.11
with:
path: ${{ env.R_LIBS_USER }}
-key: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
+key: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
-restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
+restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
- name: Install dependencies
shell: Rscript {0}
run: |
-install.packages(${{ env.R_PACKAGES }},
-repos = 'http://cloud.r-project.org',
-dependencies = c('Depends', 'Imports', 'LinkingTo'))
+source("./R-package/tests/helper_scripts/install_deps.R")
-- name: Install igraph on Windows
-shell: Rscript {0}
-if: matrix.config.os == 'windows-latest'
-run: |
-install.packages('igraph', type='binary')
- name: Run lintr
run: |
-cd R-package
-R CMD INSTALL .
-# Disable lintr errors for now: https://github.com/dmlc/xgboost/issues/8012
-Rscript tests/helper_scripts/run_lint.R || true
+MAKEFLAGS="-j$(nproc)" R CMD INSTALL R-package/
+Rscript tests/ci_build/lint_r.R $(pwd)
-test-with-R:
+test-R-on-Windows:
runs-on: ${{ matrix.config.os }}
name: Test R on OS ${{ matrix.config.os }}, R ${{ matrix.config.r }}, Compiler ${{ matrix.config.compiler }}, Build ${{ matrix.config.build }}
strategy:
@@ -64,95 +54,82 @@ jobs:
matrix:
config:
- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'autotools'}
-- {os: windows-latest, r: 'release', compiler: 'msvc', build: 'cmake'}
+- {os: windows-latest, r: '4.2.0', compiler: 'msvc', build: 'cmake'}
+- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'cmake'}
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
RSPM: ${{ matrix.config.rspm }}
steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
-- uses: r-lib/actions/setup-r@v2
+- uses: r-lib/actions/setup-r@11a22a908006c25fe054c4ef0ac0436b1de3edbe # v2.6.4
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
-uses: actions/cache@v2
+uses: actions/cache@937d24475381cd9c75ae6db12cb4e79714b926ed # v3.0.11
with:
path: ${{ env.R_LIBS_USER }}
-key: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
+key: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
-restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
+restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
-- name: Install dependencies
-shell: Rscript {0}
-if: matrix.config.os != 'windows-latest'
-run: |
-install.packages(${{ env.R_PACKAGES }},
-repos = 'http://cloud.r-project.org',
-dependencies = c('Depends', 'Imports', 'LinkingTo'))
-- name: Install binary dependencies
-shell: Rscript {0}
-if: matrix.config.os == 'windows-latest'
-run: |
-install.packages(${{ env.R_PACKAGES }},
-type = 'binary',
-repos = 'http://cloud.r-project.org',
-dependencies = c('Depends', 'Imports', 'LinkingTo'))
-- uses: actions/setup-python@v2
+- uses: actions/setup-python@7f80679172b057fc5e90d70d197929d454754a5a # v4.3.0
with:
python-version: "3.8"
architecture: 'x64'
-- name: Test R
-run: |
-python tests/ci_build/test_r_package.py --compiler='${{ matrix.config.compiler }}' --build-tool='${{ matrix.config.build }}'
-test-R-CRAN:
-runs-on: ubuntu-latest
-strategy:
-fail-fast: false
-matrix:
-config:
-- {r: 'release'}
-steps:
-- uses: actions/checkout@v2
-with:
-submodules: 'true'
-- uses: r-lib/actions/setup-r@v2
-with:
-r-version: ${{ matrix.config.r }}
- uses: r-lib/actions/setup-tinytex@v2
-- name: Install system packages
-run: |
-sudo apt-get update && sudo apt-get install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev pandoc pandoc-citeproc libglpk-dev
-- name: Cache R packages
-uses: actions/cache@v2
-with:
-path: ${{ env.R_LIBS_USER }}
-key: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
-restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
- name: Install dependencies
shell: Rscript {0}
run: |
-install.packages(${{ env.R_PACKAGES }},
-repos = 'http://cloud.r-project.org',
-dependencies = c('Depends', 'Imports', 'LinkingTo'))
-install.packages('igraph', repos = 'http://cloud.r-project.org', dependencies = c('Depends', 'Imports', 'LinkingTo'))
+source("./R-package/tests/helper_scripts/install_deps.R")
-- name: Check R Package
+- name: Test R
run: |
-# Print stacktrace upon success of failure
-make Rcheck || tests/ci_build/print_r_stacktrace.sh fail
-tests/ci_build/print_r_stacktrace.sh success
+python tests/ci_build/test_r_package.py --compiler='${{ matrix.config.compiler }}' --build-tool="${{ matrix.config.build }}" --task=check
+test-R-on-Debian:
+name: Test R package on Debian
+runs-on: ubuntu-latest
+container:
+image: rhub/debian-gcc-devel
+steps:
+- name: Install system dependencies
+run: |
+# Must run before checkout to have the latest git installed.
+# No need to add pandoc, the container has it figured out.
+apt update && apt install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libglpk-dev libxml2-dev libharfbuzz-dev libfribidi-dev git -y
+- name: Trust git cloning project sources
+run: |
+git config --global --add safe.directory "${GITHUB_WORKSPACE}"
+- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
+with:
+submodules: 'true'
+- name: Install dependencies
+shell: bash -l {0}
+run: |
+/tmp/R-devel/bin/Rscript -e "source('./R-package/tests/helper_scripts/install_deps.R')"
+- name: Test R
+shell: bash -l {0}
+run: |
+python3 tests/ci_build/test_r_package.py --r=/tmp/R-devel/bin/R --build-tool=autotools --task=check
+- uses: dorny/paths-filter@v2
+id: changes
+with:
+filters: |
+r_package:
+- 'R-package/**'
+- name: Run document check
+if: steps.changes.outputs.r_package == 'true'
+run: |
+python3 tests/ci_build/test_r_package.py --r=/tmp/R-devel/bin/R --task=doc


@@ -27,21 +27,21 @@ jobs:
persist-credentials: false
- name: "Run analysis"
-uses: ossf/scorecard-action@865b4092859256271290c77adbd10a43f4779972 # tag=v2.0.3
+uses: ossf/scorecard-action@08b4669551908b1024bb425080c797723083c031 # tag=v2.2.0
with:
results_file: results.sarif
results_format: sarif
# Publish the results for public repositories to enable scorecard badges. For more details, see
# https://github.com/ossf/scorecard-action#publishing-results.
# For private repositories, `publish_results` will automatically be set to `false`, regardless
# of the value entered here.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
-uses: actions/upload-artifact@6673cd052c4cd6fcf4b4e6e60ea986c889389535 # tag=v3.0.0
+uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # tag=v3.1.2
with:
name: SARIF file
path: results.sarif
@@ -49,6 +49,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
-uses: github/codeql-action/upload-sarif@5f532563584d71fdef14ee64d17bafb34f751ce5 # tag=v1.0.26
+uses: github/codeql-action/upload-sarif@7b6664fa89524ee6e3c3e9749402d5afd69b3cd8 # tag=v2.14.1
with:
sarif_file: results.sarif

.github/workflows/update_rapids.yml (new file)

@@ -0,0 +1,44 @@
name: update-rapids
on:
workflow_dispatch:
schedule:
- cron: "0 20 * * *" # Run once daily
permissions:
pull-requests: write
contents: write
defaults:
run:
shell: bash -l {0}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # To use GitHub CLI
jobs:
update-rapids:
name: Check latest RAPIDS
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Check latest RAPIDS and update conftest.sh
run: |
bash tests/buildkite/update-rapids.sh
- name: Create Pull Request
uses: peter-evans/create-pull-request@v5
if: github.ref == 'refs/heads/master'
with:
add-paths: |
tests/buildkite
branch: create-pull-request/update-rapids
base: master
title: "[CI] Update RAPIDS to latest stable"
commit-message: "[CI] Update RAPIDS to latest stable"

.gitignore

@@ -48,6 +48,7 @@ Debug
*.Rproj
./xgboost.mpi
./xgboost.mock
+*.bak
#.Rbuildignore
R-package.Rproj
*.cache*
@@ -137,5 +138,15 @@ credentials.csv
.metals
.bloop
-# hypothesis python tests
-.hypothesis
+# python tests
+demo/**/*.txt
+*.dmatrix
+.hypothesis
+__MACOSX/
+model*.json
+# R tests
+*.libsvm
+*.rds
+Rplots.pdf
+*.zip

.gitmodules

@@ -2,9 +2,6 @@
path = dmlc-core
url = https://github.com/dmlc/dmlc-core
branch = main
-[submodule "cub"]
-path = cub
-url = https://github.com/NVlabs/cub
[submodule "gputreeshap"]
path = gputreeshap
url = https://github.com/rapidsai/gputreeshap.git


@@ -32,4 +32,3 @@ formats:
python:
install:
- requirements: doc/requirements.txt
-system_packages: true


@@ -1,53 +0,0 @@
sudo: required
dist: bionic
env:
global:
- secure: "lqkL5SCM/CBwgVb1GWoOngpojsa0zCSGcvF0O3/45rBT1EpNYtQ4LRJ1+XcHi126vdfGoim/8i7AQhn5eOgmZI8yAPBeoUZ5zSrejD3RUpXr2rXocsvRRP25Z4mIuAGHD9VAHtvTdhBZRVV818W02pYduSzAeaY61q/lU3xmWsE="
- secure: "mzms6X8uvdhRWxkPBMwx+mDl3d+V1kUpZa7UgjT+dr4rvZMzvKtjKp/O0JZZVogdgZjUZf444B98/7AvWdSkGdkfz2QdmhWmXzNPfNuHtmfCYMdijsgFIGLuD3GviFL/rBiM2vgn32T3QqFiEJiC5StparnnXimPTc9TpXQRq5c="
jobs:
include:
- os: linux
arch: s390x
env: TASK=s390x_test
# dependent brew packages
# the dependencies from homebrew is installed manually from setup script due to outdated image from travis.
addons:
homebrew:
update: false
apt:
packages:
- unzip
before_install:
- source tests/travis/travis_setup_env.sh
install:
- source tests/travis/setup.sh
script:
- tests/travis/run_test.sh
cache:
directories:
- ${HOME}/.cache/usr
- ${HOME}/.cache/pip
before_cache:
- tests/travis/travis_before_cache.sh
after_failure:
- tests/travis/travis_after_failure.sh
after_success:
- tree build
- bash <(curl -s https://codecov.io/bash) -a '-o src/ src/*.c'
notifications:
email:
on_success: change
on_failure: always


@@ -15,4 +15,3 @@
address = {New York, NY, USA},
keywords = {large-scale machine learning},
}


@@ -1,5 +1,5 @@
-cmake_minimum_required(VERSION 3.14 FATAL_ERROR)
+cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
-project(xgboost LANGUAGES CXX C VERSION 1.7.0)
+project(xgboost LANGUAGES CXX C VERSION 2.0.0)
include(cmake/Utils.cmake)
list(APPEND CMAKE_MODULE_PATH "${xgboost_SOURCE_DIR}/cmake/modules")
cmake_policy(SET CMP0022 NEW)
@@ -14,8 +14,24 @@ endif ((${CMAKE_VERSION} VERSION_GREATER 3.13) OR (${CMAKE_VERSION} VERSION_EQUA
message(STATUS "CMake version ${CMAKE_VERSION}")
-if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.0)
-message(FATAL_ERROR "GCC version must be at least 5.0!")
+# Check compiler versions
+# Use recent compilers to ensure that std::filesystem is available
+if(MSVC)
+if(MSVC_VERSION LESS 1920)
+message(FATAL_ERROR "Need Visual Studio 2019 or newer to build XGBoost")
+endif()
+elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
+if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "8.1")
+message(FATAL_ERROR "Need GCC 8.1 or newer to build XGBoost")
+endif()
+elseif(CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
+if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "11.0")
+message(FATAL_ERROR "Need Xcode 11.0 (AppleClang 11.0) or newer to build XGBoost")
+endif()
+elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
+if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "9.0")
+message(FATAL_ERROR "Need Clang 9.0 or newer to build XGBoost")
+endif()
endif()
include(${xgboost_SOURCE_DIR}/cmake/FindPrefetchIntrinsics.cmake)
@@ -47,11 +63,12 @@ option(USE_NVTX "Build with cuda profiling annotations. Developers only." OFF)
set(NVTX_HEADER_DIR "" CACHE PATH "Path to the stand-alone nvtx header")
option(RABIT_MOCK "Build rabit with mock" OFF)
option(HIDE_CXX_SYMBOLS "Build shared library and hide all C++ symbols" OFF)
+option(KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR "Output build artifacts in CMake binary dir" OFF)
## CUDA
option(USE_CUDA "Build with GPU acceleration" OFF)
+option(USE_PER_THREAD_DEFAULT_STREAM "Build with per-thread default stream" ON)
option(USE_NCCL "Build with NCCL to enable distributed GPU support." OFF)
option(BUILD_WITH_SHARED_NCCL "Build with shared NCCL library." OFF)
-option(BUILD_WITH_CUDA_CUB "Build with cub in CUDA installation" OFF)
set(GPU_COMPUTE_VER "" CACHE STRING
"Semicolon separated list of compute versions to be built against, e.g. '35;61'")
## Copied From dmlc
@@ -115,9 +132,6 @@ endif (ENABLE_ALL_WARNINGS)
if (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
message(SEND_ERROR "Cannot build a static library libxgboost.a when R or JVM packages are enabled.")
endif (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
-if (PLUGIN_RMM AND (NOT BUILD_WITH_CUDA_CUB))
-message(SEND_ERROR "Cannot build with RMM using cub submodule.")
-endif (PLUGIN_RMM AND (NOT BUILD_WITH_CUDA_CUB))
if (PLUGIN_FEDERATED)
if (CMAKE_CROSSCOMPILING)
message(SEND_ERROR "Cannot cross compile with federated learning support")
@@ -153,9 +167,7 @@ if (USE_CUDA)
format_gencode_flags("${GPU_COMPUTE_VER}" GEN_CODE)
add_subdirectory(${PROJECT_SOURCE_DIR}/gputreeshap)
-if ((${CMAKE_CUDA_COMPILER_VERSION} VERSION_GREATER_EQUAL 11.4) AND (NOT BUILD_WITH_CUDA_CUB))
-set(BUILD_WITH_CUDA_CUB ON)
-endif ()
+find_package(CUDAToolkit REQUIRED)
endif (USE_CUDA)
if (FORCE_COLORED_OUTPUT AND (CMAKE_GENERATOR STREQUAL "Ninja") AND
@@ -168,11 +180,24 @@ find_package(Threads REQUIRED)
if (USE_OPENMP)
if (APPLE)
-# Require CMake 3.16+ on Mac OSX, as previous versions of CMake had trouble locating
-# OpenMP on Mac. See https://github.com/dmlc/xgboost/pull/5146#issuecomment-568312706
-cmake_minimum_required(VERSION 3.16)
-endif (APPLE)
-find_package(OpenMP REQUIRED)
+find_package(OpenMP)
+if (NOT OpenMP_FOUND)
+# Try again with extra path info; required for libomp 15+ from Homebrew
+execute_process(COMMAND brew --prefix libomp
+OUTPUT_VARIABLE HOMEBREW_LIBOMP_PREFIX
+OUTPUT_STRIP_TRAILING_WHITESPACE)
+set(OpenMP_C_FLAGS
+"-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include")
+set(OpenMP_CXX_FLAGS
+"-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include")
+set(OpenMP_C_LIB_NAMES omp)
+set(OpenMP_CXX_LIB_NAMES omp)
+set(OpenMP_omp_LIBRARY ${HOMEBREW_LIBOMP_PREFIX}/lib/libomp.dylib)
+find_package(OpenMP REQUIRED)
+endif ()
+else ()
+find_package(OpenMP REQUIRED)
+endif ()
endif (USE_OPENMP)
#Add for IBM i
if (${CMAKE_SYSTEM_NAME} MATCHES "OS400")
@@ -223,6 +248,15 @@ add_subdirectory(${xgboost_SOURCE_DIR}/plugin)
if (PLUGIN_RMM)
find_package(rmm REQUIRED)
+# Patch the rmm targets so they reference the static cudart
+# Remove this patch once RMM stops specifying cudart requirement
+# (since RMM is a header-only library, it should not specify cudart in its CMake config)
+get_target_property(rmm_link_libs rmm::rmm INTERFACE_LINK_LIBRARIES)
+list(REMOVE_ITEM rmm_link_libs CUDA::cudart)
+list(APPEND rmm_link_libs CUDA::cudart_static)
+set_target_properties(rmm::rmm PROPERTIES INTERFACE_LINK_LIBRARIES "${rmm_link_libs}")
+get_target_property(rmm_link_libs rmm::rmm INTERFACE_LINK_LIBRARIES)
endif (PLUGIN_RMM)
#-- library
@@ -263,8 +297,13 @@ if (JVM_BINDINGS)
xgboost_target_defs(xgboost4j)
endif (JVM_BINDINGS)
-set_output_directory(runxgboost ${xgboost_SOURCE_DIR})
-set_output_directory(xgboost ${xgboost_SOURCE_DIR}/lib)
+if (KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR)
+set_output_directory(runxgboost ${xgboost_BINARY_DIR})
+set_output_directory(xgboost ${xgboost_BINARY_DIR}/lib)
+else ()
+set_output_directory(runxgboost ${xgboost_SOURCE_DIR})
+set_output_directory(xgboost ${xgboost_SOURCE_DIR}/lib)
+endif ()
# Ensure these two targets do not build simultaneously, as they produce outputs with conflicting names
add_dependencies(xgboost runxgboost)

Makefile (deleted)

@@ -1,145 +0,0 @@
ifndef DMLC_CORE
DMLC_CORE = dmlc-core
endif
ifndef RABIT
RABIT = rabit
endif
ROOTDIR = $(CURDIR)
# workarounds for some buggy old make & msys2 versions seen in windows
ifeq (NA, $(shell test ! -d "$(ROOTDIR)" && echo NA ))
$(warning Attempting to fix non-existing ROOTDIR [$(ROOTDIR)])
ROOTDIR := $(shell pwd)
$(warning New ROOTDIR [$(ROOTDIR)] $(shell test -d "$(ROOTDIR)" && echo " is OK" ))
endif
MAKE_OK := $(shell "$(MAKE)" -v 2> /dev/null)
ifndef MAKE_OK
$(warning Attempting to recover non-functional MAKE [$(MAKE)])
MAKE := $(shell which make 2> /dev/null)
MAKE_OK := $(shell "$(MAKE)" -v 2> /dev/null)
endif
$(warning MAKE [$(MAKE)] - $(if $(MAKE_OK),checked OK,PROBLEM))
include $(DMLC_CORE)/make/dmlc.mk
# set compiler defaults for OSX versus *nix
# let people override either
OS := $(shell uname)
ifeq ($(OS), Darwin)
ifndef CC
export CC = $(if $(shell which clang), clang, gcc)
endif
ifndef CXX
export CXX = $(if $(shell which clang++), clang++, g++)
endif
else
# linux defaults
ifndef CC
export CC = gcc
endif
ifndef CXX
export CXX = g++
endif
endif
export CFLAGS= -DDMLC_LOG_CUSTOMIZE=1 -std=c++14 -Wall -Wno-unknown-pragmas -Iinclude $(ADD_CFLAGS)
CFLAGS += -I$(DMLC_CORE)/include -I$(RABIT)/include -I$(GTEST_PATH)/include
ifeq ($(TEST_COVER), 1)
CFLAGS += -g -O0 -fprofile-arcs -ftest-coverage
else
CFLAGS += -O3 -funroll-loops
endif
ifndef LINT_LANG
LINT_LANG= "all"
endif
# specify tensor path
.PHONY: clean all lint clean_all doxygen rcpplint pypack Rpack Rbuild Rcheck
build/%.o: src/%.cc
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -MM -MT build/$*.o $< >build/$*.d
$(CXX) -c $(CFLAGS) $< -o $@
# The should be equivalent to $(ALL_OBJ) except for build/cli_main.o
amalgamation/xgboost-all0.o: amalgamation/xgboost-all0.cc
$(CXX) -c $(CFLAGS) $< -o $@
rcpplint:
python3 dmlc-core/scripts/lint.py xgboost ${LINT_LANG} R-package/src
lint: rcpplint
python3 dmlc-core/scripts/lint.py --exclude_path python-package/xgboost/dmlc-core \
python-package/xgboost/include python-package/xgboost/lib \
python-package/xgboost/make python-package/xgboost/rabit \
python-package/xgboost/src --pylint-rc ${PWD}/python-package/.pylintrc xgboost \
${LINT_LANG} include src python-package
ifeq ($(TEST_COVER), 1)
cover: check
@- $(foreach COV_OBJ, $(COVER_OBJ), \
gcov -pbcul -o $(shell dirname $(COV_OBJ)) $(COV_OBJ) > gcov.log || cat gcov.log; \
)
endif
clean:
$(RM) -rf build lib bin *~ */*~ */*/*~ */*/*/*~ */*.o */*/*.o */*/*/*.o #xgboost
$(RM) -rf build_tests *.gcov tests/cpp/xgboost_test
if [ -d "R-package/src" ]; then \
cd R-package/src; \
$(RM) -rf rabit src include dmlc-core amalgamation *.so *.dll; \
cd $(ROOTDIR); \
fi
clean_all: clean
cd $(DMLC_CORE); "$(MAKE)" clean; cd $(ROOTDIR)
cd $(RABIT); "$(MAKE)" clean; cd $(ROOTDIR)
# create pip source dist (sdist) pack for PyPI
pippack: clean_all
cd python-package; python setup.py sdist; mv dist/*.tar.gz ..; cd ..
# Script to make a clean installable R package.
Rpack: clean_all
rm -rf xgboost xgboost*.tar.gz
cp -r R-package xgboost
rm -rf xgboost/src/*.o xgboost/src/*.so xgboost/src/*.dll
rm -rf xgboost/src/*/*.o
rm -rf xgboost/demo/*.model xgboost/demo/*.buffer xgboost/demo/*.txt
rm -rf xgboost/demo/runall.R
cp -r src xgboost/src/src
cp -r include xgboost/src/include
cp -r amalgamation xgboost/src/amalgamation
mkdir -p xgboost/src/rabit
cp -r rabit/include xgboost/src/rabit/include
cp -r rabit/src xgboost/src/rabit/src
rm -rf xgboost/src/rabit/src/*.o
mkdir -p xgboost/src/dmlc-core
cp -r dmlc-core/include xgboost/src/dmlc-core/include
cp -r dmlc-core/src xgboost/src/dmlc-core/src
cp ./LICENSE xgboost
cat R-package/src/Makevars.in|sed '2s/.*/PKGROOT=./' > xgboost/src/Makevars.in
cat R-package/src/Makevars.win|sed '2s/.*/PKGROOT=./' > xgboost/src/Makevars.win
rm -f xgboost/src/Makevars.win-e # OSX sed create this extra file; remove it
bash R-package/remove_warning_suppression_pragma.sh
bash xgboost/remove_warning_suppression_pragma.sh
rm xgboost/remove_warning_suppression_pragma.sh
rm xgboost/CMakeLists.txt
rm -rfv xgboost/tests/helper_scripts/
R ?= R
Rbuild: Rpack
$(R) CMD build xgboost
rm -rf xgboost
Rcheck: Rbuild
$(R) CMD check --as-cran xgboost*.tar.gz
-include build/*.d
-include build/*/*.d

NEWS.md

@@ -3,6 +3,225 @@ XGBoost Change Log
This file records the changes in xgboost library in reverse chronological order.
## 1.7.6 (2023 Jun 16)
This is a patch release for bug fixes. The CRAN package for the R binding is kept at 1.7.5.
### Bug Fixes
* Fix distributed training with mixed dense and sparse partitions. (#9272)
* Fix monotone constraints on CPU with large trees. (#9122)
* [spark] Make the spark model have the same UID as its estimator (#9022)
* Optimize prediction with `QuantileDMatrix`. (#9096)
### Document
* Improve doxygen (#8959)
* Update the cuDF pip index URL. (#9106)
### Maintenance
* Fix tests with pandas 2.0. (#9014)
## 1.7.5 (2023 Mar 30)
This is a patch release for bug fixes.
* The C++ requirement is updated to C++17, and CUDA 11.8 is now used as the default CUDA toolkit. (#8860, #8855, #8853)
* Fix import for pyspark ranker. (#8692)
* Fix Windows binary wheel to be compatible with Poetry (#8991)
* Fix GPU hist with column sampling. (#8850)
* Make sure iterative DMatrix is properly initialized. (#8997)
* [R] Update link in document. (#8998)
## 1.7.4 (2023 Feb 16)
This is a patch release for bug fixes.
* [R] Fix OpenMP detection on macOS. (#8684)
* [Python] Make sure input numpy array is aligned. (#8690)
* Fix feature interaction with column sampling in gpu_hist evaluator. (#8754)
* Fix GPU L1 error. (#8749)
* [PySpark] Fix feature types param (#8772)
* Fix ranking with quantile dmatrix and group weight. (#8762)
## 1.7.3 (2023 Jan 6)
This is a patch release for bug fixes.
* [Breaking] XGBoost Sklearn estimator method `get_params` no longer returns internally configured values. (#8634)
* Fix linalg iterator, which may crash the L1 error. (#8603)
* Fix loading pickled GPU model with a CPU-only XGBoost build. (#8632)
* Fix inference with unseen categories with categorical features. (#8591, #8602)
* CI fixes. (#8620, #8631, #8579)
## v1.7.2 (2022 Dec 8)
This is a patch release for bug fixes.
* Work with newer thrust and libcudacxx (#8432)
* Support null value in CUDA array interface namespace. (#8486)
* Use `getsockname` instead of `SO_DOMAIN` on AIX. (#8437)
* [pyspark] Make QDM optional based on a cuDF check (#8471)
* [pyspark] sort qid for SparkRanker. (#8497)
* [dask] Properly await async method client.wait_for_workers. (#8558)
* [R] Fix CRAN test notes. (#8428)
* [doc] Fix outdated document [skip ci]. (#8527)
* [CI] Fix github action mismatched glibcxx. (#8551)
## v1.7.1 (2022 Nov 3)
This is a patch release to incorporate the following hotfix:
* Add back xgboost.rabit for backwards compatibility (#8411)
## v1.7.0 (2022 Oct 20)
We are excited to announce the feature-packed XGBoost 1.7 release. The release note walks through some of the major new features first, then summarizes other improvements and language-binding-specific changes.
### PySpark
XGBoost 1.7 features initial support for PySpark integration. The new interface is adapted from the existing PySpark XGBoost interface developed by Databricks, with additional features like `QuantileDMatrix` and the rapidsai plugin (GPU pipeline) support. The new Spark XGBoost Python estimators not only benefit from PySpark ML facilities for powerful distributed computing but also enjoy the rest of the Python ecosystem. Users can define a custom objective, callbacks, and metrics in Python and use them with this interface on distributed clusters. The support is labeled as experimental, with more features to come in future releases. For a brief introduction, please visit the tutorial on XGBoost's [document page](https://xgboost.readthedocs.io/en/latest/tutorials/spark_estimator.html). (#8355, #8344, #8335, #8284, #8271, #8283, #8250, #8231, #8219, #8245, #8217, #8200, #8173, #8172, #8145, #8117, #8131, #8088, #8082, #8085, #8066, #8068, #8067, #8020, #8385)
Due to its initial support status, the new interface has some limitations; categorical features and multi-output models are not yet supported.
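As an illustration, here is a minimal sketch of the estimator API, assuming a local PySpark session; the toy data and column names are purely for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from xgboost.spark import SparkXGBClassifier

spark = SparkSession.builder.master("local[4]").getOrCreate()

# Hypothetical toy data; the estimator expects a vector features column.
df = spark.createDataFrame(
    [(1.0, 2.0, 0), (2.0, 1.0, 1), (3.0, 0.5, 0), (0.5, 3.0, 1)],
    ["f0", "f1", "label"],
)
train_df = VectorAssembler(inputCols=["f0", "f1"], outputCol="features").transform(df)

clf = SparkXGBClassifier(
    features_col="features",
    label_col="label",
    num_workers=1,  # number of parallel training tasks
)
model = clf.fit(train_df)
model.transform(train_df).show()
```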
### Development of categorical data support
More progress on the experimental support for categorical features. In 1.7, XGBoost can handle missing values in categorical features and gains a new parameter, `max_cat_threshold`, which limits the number of categories considered in split evaluation. The parameter is enabled when the partitioning algorithm is used and helps prevent over-fitting. Also, the sklearn interface can now accept the `feature_types` parameter, so categorical features can be used with data types other than dataframes. (#8280, #7821, #8285, #8080, #7948, #7858, #7853, #8212, #7957, #7937, #7934)
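For example, a minimal sketch with the sklearn interface; the dataframe and its columns are hypothetical:

```python
import pandas as pd
import xgboost as xgb

# Hypothetical data: a pandas categorical column (including a missing
# value, which 1.7 handles natively) plus a numeric column.
X = pd.DataFrame(
    {
        "color": pd.Categorical(["red", "green", None, "blue"] * 32),
        "size": list(range(128)),
    }
)
y = [0, 1, 0, 1] * 32

clf = xgb.XGBClassifier(
    tree_method="hist",
    enable_categorical=True,
    max_cat_threshold=64,  # cap on categories considered per split
)
clf.fit(X, y)
```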
### Experimental support for federated learning and new communication collective
An exciting addition to XGBoost is the experimental federated learning support. Federated learning is implemented with a gRPC federated server that aggregates allreduce calls, and federated clients that train on local data and use existing tree methods (approx, hist, gpu_hist). Currently, this only supports horizontal federated learning (samples are split across participants, and each participant has all the features and labels). Future plans include vertical federated learning (features split across participants), and stronger privacy guarantees with homomorphic encryption and differential privacy. See [Demo with NVFlare integration](demo/nvflare/README.md) for example usage with nvflare.
As part of the work, XGBoost 1.7 has replaced the old rabit module with the new collective module as the network communication interface, with added support for runtime backend selection. In previous versions, the backend was defined at compile time and could not be changed once built. In this new release, users can choose between `rabit` and `federated`. (#8029, #8351, #8350, #8342, #8340, #8325, #8279, #8181, #8027, #7958, #7831, #7879, #8257, #8316, #8242, #8057, #8203, #8038, #7965, #7930, #7911)
The feature is available in the public PyPI binary package for testing.
### Quantile DMatrix
Before 1.7, XGBoost had an internal data structure called `DeviceQuantileDMatrix` (and its distributed version). We have now extended its support to CPU and renamed it to `QuantileDMatrix`. This data structure is used for optimizing memory usage for the `hist` and `gpu_hist` tree methods. The new feature helps reduce CPU memory usage significantly, especially for dense data. The new `QuantileDMatrix` can be initialized from both CPU and GPU data, and regardless of where the data comes from, the constructed instance can be used by both the CPU algorithm and GPU algorithm, including training and prediction (with some overhead of conversion if the device of the data and the training algorithm don't match). Also, a new parameter `ref` is added to `QuantileDMatrix`, which can be used to construct validation/test datasets. Lastly, it's set as default in the scikit-learn interface when a supported tree method is specified by users. (#7889, #7923, #8136, #8215, #8284, #8268, #8220, #8346, #8327, #8130, #8116, #8103, #8094, #8086, #7898, #8060, #8019, #8045, #7901, #7912, #7922)
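A minimal sketch of the new interface, using synthetic data for illustration:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1024, 16)), rng.normal(size=1024)
X_valid, y_valid = rng.normal(size=(256, 16)), rng.normal(size=256)

# The training data is quantized once, up front, instead of inside `train`.
Xy = xgb.QuantileDMatrix(X, y)
# The validation set reuses the quantile cuts of the training set via `ref`.
Xy_valid = xgb.QuantileDMatrix(X_valid, y_valid, ref=Xy)

booster = xgb.train(
    {"tree_method": "hist"}, Xy, num_boost_round=8, evals=[(Xy_valid, "valid")]
)
```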
### Mean absolute error
The mean absolute error is a new member of the collection of objectives in XGBoost. It's noteworthy since MAE has a zero Hessian, which is unusual for XGBoost, as XGBoost relies on Newton optimization. Without valid Hessian values, the convergence speed can be slow. As part of the support for MAE, we added line searches into the XGBoost training algorithm to overcome the difficulty of training without valid Hessian values. In the future, we will extend the line search to other objectives where it's appropriate, for faster convergence speed. (#8343, #8107, #7812, #8380)
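For illustration, a short sketch training with the new objective through the sklearn interface (synthetic data; the objective name used by the Python package is `reg:absoluteerror`):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(512, 8)), rng.normal(size=512)

# MAE has a zero Hessian, so the line search described above is applied
# after each tree is built.
reg = xgb.XGBRegressor(objective="reg:absoluteerror", tree_method="hist")
reg.fit(X, y)
```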
### XGBoost on Browser
With the help of the [pyodide](https://github.com/pyodide/pyodide) project, you can now run XGBoost on browsers. (#7954, #8369)
### Experimental IPv6 Support for Dask
With the growing adoption of the new internet protocol, XGBoost joined the club. In the latest release, the Dask interface can be used on IPv6 clusters; see XGBoost's Dask tutorial for details. (#8225, #8234)
### Optimizations
We have new optimizations for both the `hist` and `gpu_hist` tree methods to make XGBoost's training even more efficient.
* Hist
Hist now supports optional by-column histogram build, which is automatically configured based on various conditions of the input data. This helps the XGBoost CPU hist algorithm scale better with different shapes of training datasets. (#8233, #8259) Also, the histogram build kernel can now better utilize CPU registers. (#8218)
* GPU Hist
GPU hist performance is significantly improved for wide datasets. GPU hist now supports batched node build, which reduces kernel latency and increases throughput. The improvement is particularly significant when growing deep trees with the default ``depthwise`` policy. (#7919, #8073, #8051, #8118, #7867, #7964, #8026)
### Breaking Changes
Breaking changes made in the 1.7 release are summarized below.
- The `grow_local_histmaker` updater is removed. This updater is rarely used in practice and has no test. We decided to remove it and have XGBoost focus on other, more efficient algorithms. (#7992, #8091)
- Single precision histogram is removed due to its lack of accuracy caused by significant floating point error. In some cases the error can be difficult to detect due to log-scale operations, which makes the parameter dangerous to use. (#7892, #7828)
- Deprecated CUDA architectures are no longer supported in the release binaries. (#7774)
- As part of the federated learning development, the `rabit` module is replaced with the new `collective` module. It's a drop-in replacement with added runtime backend selection, see the federated learning section for more details (#8257)
### General new features and improvements
Before diving into package-specific changes, some general new features other than those listed at the beginning are summarized here.
* Users of `DMatrix` and `QuantileDMatrix` can get the data back from XGBoost. In previous versions, only getters for meta info like labels were available. The new method is available in Python (`DMatrix::get_data`) and C; see the sketch after this list. (#8269, #8323)
* In previous versions, the GPU histogram tree method might generate phantom gradients for missing values due to floating point error. We fixed this in this release, and XGBoost is now much better equipped to handle floating point errors when training on GPU. (#8274, #8246)
* Parameter validation is no longer experimental. (#8206)
* C pointer parameters and JSON parameters are rigorously checked. (#8254, #8254)
* Improved handling of JSON model input. (#7953, #7918)
* Support IBM i OS (#7920, #8178)
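A small sketch of the `get_data` method mentioned above (synthetic data; the method returns the feature matrix as a SciPy CSR matrix):

```python
import numpy as np
import scipy.sparse
import xgboost as xgb

X = np.random.default_rng(0).normal(size=(100, 4))
Xy = xgb.DMatrix(X, label=np.zeros(100))

# Retrieve the stored feature matrix back from XGBoost.
csr = Xy.get_data()
assert isinstance(csr, scipy.sparse.csr_matrix)
assert csr.shape == (100, 4)
```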
### Fixes
Some noteworthy bug fixes that are not related to specific language binding are listed in this section.
* Rename misspelled config parameter for pseudo-Huber (#7904)
* Fix feature weights with nested column sampling. (#8100)
* Fix loading DMatrix binary in distributed env. (#8149)
* Force auc.cc to be statically linked for unusual compiler platforms. (#8039)
* New logic for detecting libomp on macos (#8384).
### Python Package
* Python 3.8 is now the minimum required Python version. (#8071)
* More progress on type hint support. Except for the new PySpark interface, the XGBoost module is fully typed. (#7742, #7945, #8302, #7914, #8052)
* XGBoost now validates the feature names in `inplace_predict`, which also affects the predict function in scikit-learn estimators as it uses `inplace_predict` internally. (#8359)
* Users can now get the data from `DMatrix` using `DMatrix::get_data` or `QuantileDMatrix::get_data`.
* Show `libxgboost.so` path in build info. (#7893)
* Raise import error when using the sklearn module while scikit-learn is missing. (#8049)
* Use `config_context` in the sklearn interface. (#8141)
* Validate features for inplace prediction. (#8359)
* Pandas dataframe handling is refactored to reduce data fragmentation. (#7843)
* Support more pandas nullable types (#8262)
* Remove pyarrow workaround. (#7884)
* Binary wheel size
We aim to enable as many features as possible in XGBoost's default binary distribution on PyPI (package installed with pip), but there's an upper limit on the size of the binary wheel. In 1.7, XGBoost reduces the size of the wheel by pruning unused CUDA architectures. (#8179, #8152, #8150)
* Fixes
Some noteworthy fixes are listed here:
- Fix the Dask interface with the latest cupy. (#8210)
- Check cuDF lazily to avoid potential errors with cuda-python. (#8084)
* Fix potential error in the DMatrix constructor on 32-bit platforms. (#8369)
* Maintenance work
- The linter script is moved from dmlc-core to XGBoost, with added support for formatting, mypy, and parallel runs, along with some fixes. (#7967, #8101, #8216)
- We now require the use of `isort` and `black` for selected files. (#8137, #8096)
- Code cleanups. (#7827)
- Deprecate `use_label_encoder` in XGBClassifier. The label encoder itself was already deprecated and removed in the previous version; this change only affects the indicator parameter. (#7822)
- Remove the use of distutils. (#7770)
- Refactor and fixes for tests (#8077, #8064, #8078, #8076, #8013, #8010, #8244, #7833)
* Documents
- [dask] Fix potential error in demo. (#8079)
- Improved documentation for the ranker. (#8356, #8347)
- Indicate lack of py-xgboost-gpu on Windows (#8127)
- Clarification for feature importance. (#8151)
- Simplify Python getting started example (#8153)
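Two of the Python-facing changes above are easiest to see in code. First, a sketch of the new feature-name validation reached through the scikit-learn interface (the data here is made up, and the exact error message may differ):

```python
import pandas as pd
import xgboost as xgb

# Made-up toy data with named columns.
X = pd.DataFrame({"f0": [0.0, 1.0, 2.0, 3.0], "f1": [1.0, 0.0, 1.0, 0.0]})
y = [0, 1, 0, 1]
clf = xgb.XGBClassifier(n_estimators=2, max_depth=2).fit(X, y)

clf.predict(X)  # matching column names: works as before

# Renamed columns are now rejected instead of silently predicting
# on misaligned features; a ValueError is expected here.
try:
    clf.predict(X.rename(columns={"f1": "f2"}))
except ValueError as err:
    print("validation caught:", err)
```

Second, `config_context` scopes global configuration to a block, which the sklearn interface now uses internally; user code can apply the same pattern:

```python
# Global verbosity is lowered only inside this block.
with xgb.config_context(verbosity=0):
    clf.fit(X, y)
```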
### R Package
We summarize improvements for the R package briefly here:
* Feature info, including names and types, is now passed to DMatrix in preparation for categorical feature support. (#804)
* XGBoost 1.7 can now gracefully load old R models from RDS for better compatibility with third-party tuning libraries. (#7864)
* The R package now can be built with parallel compilation, along with fixes for warnings in CRAN tests. (#8330)
* Emit error early if DiagrammeR is missing (#8037)
* Fix R package Windows build. (#8065)
### JVM Packages
The consistency between the JVM packages and other language bindings is greatly improved in 1.7; improvements range from the model serialization format to the default values of hyper-parameters.
* Java package now supports feature names and feature types for DMatrix in preparation for categorical feature support. (#7966)
* Models trained by the JVM packages can now be safely used with other language bindings. (#7896, #7907)
* Users can specify the model format when saving models with a stream. (#7940, #7955)
* The default values for training parameters are now sourced from XGBoost directly, which helps keep the JVM packages consistent with the other packages. (#7938)
* Set the correct objective if the user doesn't explicitly set it (#7781)
* Auto-detection of MUSL is replaced by system properties (#7921)
* Improved error message for launching tracker. (#7952, #7968)
* Fix a race condition in parameter configuration. (#8025)
* [Breaking] `timeoutRequestWorkers` is now removed. With the support for barrier mode, this parameter is no longer needed. (#7839)
* Dependencies updates. (#7791, #8157, #7801, #8240)
### Documents
- Documentation for the C interface is greatly improved and is now displayed at the [sphinx document page](https://xgboost.readthedocs.io/en/latest/c.html). Thanks to the breathe project, you can view the C API just like the Python API; a minimal configuration sketch follows this list. (#8300)
- We now avoid using XGBoost's internal text parser in demos and recommend that users use dedicated libraries for loading data whenever feasible. (#7753)
- Python survival training demos are now displayed at [sphinx gallery](https://xgboost.readthedocs.io/en/latest/python/survival-examples/index.html). (#8328)
- Some typos, links, format, and grammar fixes. (#7800, #7832, #7861, #8099, #8163, #8166, #8229, #8028, #8214, #7777, #7905, #8270, #8309, d70e59fef, #7806)
- Updated winning solution under readme.md (#7862)
- New security policy. (#8360)
- The GPU documentation is overhauled, as we now consider CUDA support to be feature-complete. (#8378)
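For readers wiring up the same breathe-based setup in their own projects, here is a minimal `conf.py` sketch; the project name and XML path below are hypothetical, not XGBoost's actual configuration:

```python
# conf.py -- minimal Sphinx setup for rendering Doxygen XML via breathe.
extensions = ["breathe"]

# Hypothetical paths: point breathe at the XML that `doxygen` generated.
breathe_projects = {"myproject": "./doxyxml"}
breathe_default_project = "myproject"
```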
### Maintenance
* Code refactoring and cleanups. (#7850, #7826, #7910, #8332, #8204)
* Reduce compiler warnings. (#7768, #7916, #8046, #8059, #7974, #8031, #8022)
* Compiler workarounds. (#8211, #8314, #8226, #8093)
* Dependencies update. (#8001, #7876, #7973, #8298, #7816)
* Remove warnings emitted in previous versions. (#7815)
* Small fixes made during development. (#8008)
### CI and Tests
* We overhauled the CI infrastructure to reduce the CI cost and ease the maintenance burden. Jenkins is replaced with Buildkite for better automation, which enables finer control of test runs to reduce overall cost. We also refactored some of the existing tests to reduce their runtime, reduced the size of Docker images, and removed multi-GPU C++ tests. Lastly, `pytest-timeout` is added as an optional dependency for running Python tests to keep the test time in check. (#7772, #8291, #8286, #8276, #8306, #8287, #8243, #8313, #8235, #8288, #8303, #8142, #8092, #8333, #8312, #8348)
* New documentation on how to reproduce the CI environment. (#7971, #8297)
* Improved automation for JVM release. (#7882)
* GitHub Action security-related updates. (#8263, #8267, #8360)
* Other fixes and maintenance work. (#8154, #7848, #8069, #7943)
* Small updates and fixes to GitHub action pipelines. (#8364, #8321, #8241, #7950, #8011)
## v1.6.1 (2022 May 9)
This is a patch release for bug fixes and Spark barrier mode support. The R package is unchanged.

View File

@@ -16,7 +16,6 @@ target_compile_definitions(xgboost-r
   -DDMLC_LOG_BEFORE_THROW=0
   -DDMLC_DISABLE_STDIN=1
   -DDMLC_LOG_CUSTOMIZE=1
-  -DRABIT_CUSTOMIZE_MSG_
   -DRABIT_STRICT_CXX98_)
 target_include_directories(xgboost-r
   PRIVATE
@@ -31,7 +30,7 @@ if (USE_OPENMP)
 endif (USE_OPENMP)
 set_target_properties(
   xgboost-r PROPERTIES
-  CXX_STANDARD 14
+  CXX_STANDARD 17
   CXX_STANDARD_REQUIRED ON
   POSITION_INDEPENDENT_CODE ON)

View File

@@ -1,8 +1,8 @@
 Package: xgboost
 Type: Package
 Title: Extreme Gradient Boosting
-Version: 1.7.0.1
-Date: 2022-10-18
+Version: 2.0.0.1
+Date: 2023-09-11
 Authors@R: c(
   person("Tianqi", "Chen", role = c("aut"),
          email = "tianqi.tchen@gmail.com"),
@@ -54,10 +54,8 @@ Suggests:
     Ckmeans.1d.dp (>= 3.3.1),
     vcd (>= 1.3),
     testthat,
-    lintr,
     igraph (>= 1.0.1),
     float,
-    crayon,
     titanic
 Depends:
     R (>= 3.3.0)
@@ -66,5 +64,6 @@ Imports:
     methods,
     data.table (>= 1.9.6),
     jsonlite (>= 1.0),
-RoxygenNote: 7.1.1
-SystemRequirements: GNU make, C++14
+RoxygenNote: 7.2.3
+Encoding: UTF-8
+SystemRequirements: GNU make, C++17

View File

@@ -1,9 +1,9 @@
-Copyright (c) 2014 by Tianqi Chen and Contributors
+Copyright (c) 2014-2023, Tianqi Chen and XGBoost Contributors
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
     http://www.apache.org/licenses/LICENSE-2.0
 Unless required by applicable law or agreed to in writing, software

View File

@@ -114,7 +114,7 @@ cb.evaluation.log <- function() {
     if (is.null(mnames) || any(mnames == ""))
       stop("bst_evaluation must have non-empty names")
-    mnames <<- gsub('-', '_', names(env$bst_evaluation))
+    mnames <<- gsub('-', '_', names(env$bst_evaluation), fixed = TRUE)
     if (!is.null(env$bst_evaluation_err))
       mnames <<- c(paste0(mnames, '_mean'), paste0(mnames, '_std'))
   }
@@ -185,7 +185,7 @@ cb.reset.parameters <- function(new_params) {
   if (typeof(new_params) != "list")
     stop("'new_params' must be a list")
-  pnames <- gsub("\\.", "_", names(new_params))
+  pnames <- gsub(".", "_", names(new_params), fixed = TRUE)
   nrounds <- NULL
   # run some checks in the beginning
@@ -300,9 +300,9 @@ cb.early.stop <- function(stopping_rounds, maximize = FALSE,
   if (length(env$bst_evaluation) == 0)
     stop("For early stopping, watchlist must have at least one element")
-  eval_names <- gsub('-', '_', names(env$bst_evaluation))
+  eval_names <- gsub('-', '_', names(env$bst_evaluation), fixed = TRUE)
   if (!is.null(metric_name)) {
-    metric_idx <<- which(gsub('-', '_', metric_name) == eval_names)
+    metric_idx <<- which(gsub('-', '_', metric_name, fixed = TRUE) == eval_names)
     if (length(metric_idx) == 0)
       stop("'metric_name' for early stopping is not one of the following:\n",
            paste(eval_names, collapse = ' '), '\n')
@@ -319,7 +319,7 @@ cb.early.stop <- function(stopping_rounds, maximize = FALSE,
   # maximize is usually NULL when not set in xgb.train and built-in metrics
   if (is.null(maximize))
-    maximize <<- grepl('(_auc|_map|_ndcg)', metric_name)
+    maximize <<- grepl('(_auc|_map|_ndcg|_pre)', metric_name)
   if (verbose && NVL(env$rank, 0) == 0)
     cat("Will train until ", metric_name, " hasn't improved in ",
@@ -511,7 +511,7 @@ cb.cv.predict <- function(save_models = FALSE) {
   if (save_models) {
     env$basket$models <- lapply(env$bst_folds, function(fd) {
       xgb.attr(fd$bst, 'niter') <- env$end_iteration - 1
-      xgb.Booster.complete(xgb.handleToBooster(fd$bst), saveraw = TRUE)
+      xgb.Booster.complete(xgb.handleToBooster(handle = fd$bst, raw = NULL), saveraw = TRUE)
     })
   }
 }
@@ -544,9 +544,11 @@ cb.cv.predict <- function(save_models = FALSE) {
 #'
 #' @return
 #' Results are stored in the \code{coefs} element of the closure.
-#' The \code{\link{xgb.gblinear.history}} convenience function provides an easy way to access it.
+#' The \code{\link{xgb.gblinear.history}} convenience function provides an easy
+#' way to access it.
 #' With \code{xgb.train}, it is either a dense of a sparse matrix.
-#' While with \code{xgb.cv}, it is a list (an element per each fold) of such matrices.
+#' While with \code{xgb.cv}, it is a list (an element per each fold) of such
+#' matrices.
 #'
 #' @seealso
 #' \code{\link{callbacks}}, \code{\link{xgb.gblinear.history}}.
@@ -558,7 +560,7 @@ cb.cv.predict <- function(save_models = FALSE) {
 #' # without considering the 2nd order interactions:
 #' x <- model.matrix(Species ~ .^2, iris)[,-1]
 #' colnames(x)
-#' dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"))
+#' dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"), nthread = 2)
 #' param <- list(booster = "gblinear", objective = "reg:logistic", eval_metric = "auc",
 #'               lambda = 0.0003, alpha = 0.0003, nthread = 2)
 #' # For 'shotgun', which is a default linear updater, using high eta values may result in
@@ -583,19 +585,19 @@ cb.cv.predict <- function(save_models = FALSE) {
 #'
 #' # For xgb.cv:
 #' bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 100, eta = 0.8,
 #'               callbacks = list(cb.gblinear.history()))
 #' # coefficients in the CV fold #3
 #' matplot(xgb.gblinear.history(bst)[[3]], type = 'l')
 #'
 #'
 #' #### Multiclass classification:
 #' #
-#' dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1)
+#' dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1, nthread = 1)
 #' param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
-#'               lambda = 0.0003, alpha = 0.0003, nthread = 2)
+#'               lambda = 0.0003, alpha = 0.0003, nthread = 1)
 #' # For the default linear updater 'shotgun' it sometimes is helpful
 #' # to use smaller eta to reduce instability
-#' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 70, eta = 0.5,
+#' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 50, eta = 0.5,
 #'                  callbacks = list(cb.gblinear.history()))
 #' # Will plot the coefficient paths separately for each class:
 #' matplot(xgb.gblinear.history(bst, class_index = 0), type = 'l')
@@ -609,13 +611,15 @@ cb.cv.predict <- function(save_models = FALSE) {
 #' matplot(xgb.gblinear.history(bst, class_index = 0)[[1]], type = 'l')
 #'
 #' @export
-cb.gblinear.history <- function(sparse=FALSE) {
+cb.gblinear.history <- function(sparse = FALSE) {
   coefs <- NULL
   init <- function(env) {
-    if (!is.null(env$bst)) { # xgb.train:
-    } else if (!is.null(env$bst_folds)) { # xgb.cv:
-    } else stop("Parent frame has neither 'bst' nor 'bst_folds'")
+    # xgb.train(): bst will be present
+    # xgb.cv(): bst_folds will be present
+    if (is.null(env$bst) && is.null(env$bst_folds)) {
+      stop("Parent frame has neither 'bst' nor 'bst_folds'")
+    }
   }
   # convert from list to (sparse) matrix
@@ -655,7 +659,7 @@ cb.gblinear.history <- function(sparse=FALSE) {
   } else { # xgb.cv:
     cf <- vector("list", length(env$bst_folds))
     for (i in seq_along(env$bst_folds)) {
-      dmp <- xgb.dump(xgb.handleToBooster(env$bst_folds[[i]]$bst))
+      dmp <- xgb.dump(xgb.handleToBooster(handle = env$bst_folds[[i]]$bst, raw = NULL))
       cf[[i]] <- as.numeric(grep('(booster|bias|weigh)', dmp, invert = TRUE, value = TRUE))
       if (sparse) cf[[i]] <- as(cf[[i]], "sparseVector")
     }

View File

@@ -38,11 +38,11 @@ check.booster.params <- function(params, ...) {
     stop("params must be a list")
   # in R interface, allow for '.' instead of '_' in parameter names
-  names(params) <- gsub("\\.", "_", names(params))
+  names(params) <- gsub(".", "_", names(params), fixed = TRUE)
   # merge parameters from the params and the dots-expansion
   dot_params <- list(...)
-  names(dot_params) <- gsub("\\.", "_", names(dot_params))
+  names(dot_params) <- gsub(".", "_", names(dot_params), fixed = TRUE)
   if (length(intersect(names(params),
                        names(dot_params))) > 0)
     stop("Same parameters in 'params' and in the call are not allowed. Please check your 'params' list.")
@@ -82,7 +82,7 @@ check.booster.params <- function(params, ...) {
   # interaction constraints parser (convert from list of column indices to string)
   if (!is.null(params[['interaction_constraints']]) &&
-      typeof(params[['interaction_constraints']]) != "character"){
+      typeof(params[['interaction_constraints']]) != "character") {
     # check input class
     if (!identical(class(params[['interaction_constraints']]), 'list')) stop('interaction_constraints should be class list')
     if (!all(unique(sapply(params[['interaction_constraints']], class)) %in% c('numeric', 'integer'))) {
@@ -140,7 +140,7 @@ check.custom.eval <- function(env = parent.frame()) {
 # Update a booster handle for an iteration with dtrain data
-xgb.iter.update <- function(booster_handle, dtrain, iter, obj = NULL) {
+xgb.iter.update <- function(booster_handle, dtrain, iter, obj) {
   if (!identical(class(booster_handle), "xgb.Booster.handle")) {
     stop("booster_handle must be of xgb.Booster.handle class")
   }
@@ -163,7 +163,7 @@ xgb.iter.update <- function(booster_handle, dtrain, iter, obj) {
 # Evaluate one iteration.
 # Returns a named vector of evaluation metrics
 # with the names in a 'datasetname-metricname' format.
-xgb.iter.eval <- function(booster_handle, watchlist, iter, feval = NULL) {
+xgb.iter.eval <- function(booster_handle, watchlist, iter, feval) {
   if (!identical(class(booster_handle), "xgb.Booster.handle"))
     stop("class of booster_handle must be xgb.Booster.handle")
@@ -234,7 +234,7 @@ generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
         y <- factor(y)
       }
     }
-    folds <- xgb.createFolds(y, nfold)
+    folds <- xgb.createFolds(y = y, k = nfold)
   } else {
     # make simple non-stratified folds
     kstep <- length(rnd_idx) %/% nfold
@@ -251,8 +251,7 @@ generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
 # Creates CV folds stratified by the values of y.
 # It was borrowed from caret::createFolds and simplified
 # by always returning an unnamed list of fold indices.
-xgb.createFolds <- function(y, k = 10)
-{
+xgb.createFolds <- function(y, k) {
   if (is.numeric(y)) {
     ## Group the numeric data based on their magnitudes
     ## and sample within those groups.

View File

@@ -1,7 +1,6 @@
 # Construct an internal xgboost Booster and return a handle to it.
 # internal utility function
-xgb.Booster.handle <- function(params = list(), cachelist = list(),
-                               modelfile = NULL, handle = NULL) {
+xgb.Booster.handle <- function(params, cachelist, modelfile, handle) {
   if (typeof(cachelist) != "list" ||
       !all(vapply(cachelist, inherits, logical(1), what = 'xgb.DMatrix'))) {
     stop("cachelist must be a list of xgb.DMatrix objects")
@@ -12,7 +11,7 @@ xgb.Booster.handle <- function(params, cachelist, modelfile, handle) {
     ## A filename
     handle <- .Call(XGBoosterCreate_R, cachelist)
     modelfile <- path.expand(modelfile)
-    .Call(XGBoosterLoadModel_R, handle, modelfile[1])
+    .Call(XGBoosterLoadModel_R, handle, enc2utf8(modelfile[1]))
     class(handle) <- "xgb.Booster.handle"
     if (length(params) > 0) {
       xgb.parameters(handle) <- params
@@ -44,7 +43,7 @@ xgb.Booster.handle <- function(params, cachelist, modelfile, handle) {
 # Convert xgb.Booster.handle to xgb.Booster
 # internal utility function
-xgb.handleToBooster <- function(handle, raw = NULL) {
+xgb.handleToBooster <- function(handle, raw) {
   bst <- list(handle = handle, raw = raw)
   class(bst) <- "xgb.Booster"
   return(bst)
@@ -129,7 +128,12 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
     stop("argument type must be xgb.Booster")
   if (is.null.handle(object$handle)) {
-    object$handle <- xgb.Booster.handle(modelfile = object$raw, handle = object$handle)
+    object$handle <- xgb.Booster.handle(
+      params = list(),
+      cachelist = list(),
+      modelfile = object$raw,
+      handle = object$handle
+    )
   } else {
     if (is.null(object$raw) && saveraw) {
       object$raw <- xgb.serialize(object$handle)
@@ -214,6 +218,10 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
 #' Since it quadratically depends on the number of features, it is recommended to perform selection
 #' of the most important features first. See below about the format of the returned results.
 #'
+#' The \code{predict()} method uses as many threads as defined in \code{xgb.Booster} object (all by default).
+#' If you want to change their number, then assign a new number to \code{nthread} using \code{\link{xgb.parameters<-}}.
+#' Note also that converting a matrix to \code{\link{xgb.DMatrix}} uses multiple threads too.
+#'
 #' @return
 #' The return type is different depending whether \code{strict_shape} is set to \code{TRUE}. By default,
 #' for regression or binary classification, it returns a vector of length \code{nrows(newdata)}.
@@ -328,8 +336,9 @@ predict.xgb.Booster <- function(object, newdata, missing = NA, outputmargin = FA
                                 predleaf = FALSE, predcontrib = FALSE, approxcontrib = FALSE, predinteraction = FALSE,
                                 reshape = FALSE, training = FALSE, iterationrange = NULL, strict_shape = FALSE, ...) {
   object <- xgb.Booster.complete(object, saveraw = FALSE)
   if (!inherits(newdata, "xgb.DMatrix"))
-    newdata <- xgb.DMatrix(newdata, missing = missing)
+    newdata <- xgb.DMatrix(newdata, missing = missing, nthread = NVL(object$params[["nthread"]], -1))
   if (!is.null(object[["feature_names"]]) &&
       !is.null(colnames(newdata)) &&
       !identical(object[["feature_names"]], colnames(newdata)))
@@ -470,7 +479,7 @@ predict.xgb.Booster <- function(object, newdata, missing = NA, outputmargin = FA
 #' @export
 predict.xgb.Booster.handle <- function(object, ...) {
-  bst <- xgb.handleToBooster(object)
+  bst <- xgb.handleToBooster(handle = object, raw = NULL)
   ret <- predict(bst, ...)
   return(ret)
@@ -629,7 +638,7 @@ xgb.attributes <- function(object) {
 #' @export
 xgb.config <- function(object) {
   handle <- xgb.get.handle(object)
-  .Call(XGBoosterSaveJsonConfig_R, handle);
+  .Call(XGBoosterSaveJsonConfig_R, handle)
 }
 #' @rdname xgb.config
@@ -671,7 +680,7 @@ xgb.config <- function(object) {
   if (is.null(names(p)) || any(nchar(names(p)) == 0)) {
     stop("parameter names cannot be empty strings")
   }
-  names(p) <- gsub("\\.", "_", names(p))
+  names(p) <- gsub(".", "_", names(p), fixed = TRUE)
   p <- lapply(p, function(x) as.character(x)[1])
   handle <- xgb.get.handle(object)
   for (i in seq_along(p)) {

View File

@@ -18,7 +18,7 @@
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #' xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
 #' dtrain <- xgb.DMatrix('xgb.DMatrix.data')
 #' if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')
@@ -36,19 +36,37 @@ xgb.DMatrix <- function(data, info = list(), missing = NA, silent = FALSE, nthre
     cnames <- colnames(data)
   } else if (inherits(data, "dgCMatrix")) {
     handle <- .Call(
-      XGDMatrixCreateFromCSC_R, data@p, data@i, data@x, nrow(data), as.integer(NVL(nthread, -1))
+      XGDMatrixCreateFromCSC_R,
+      data@p,
+      data@i,
+      data@x,
+      nrow(data),
+      missing,
+      as.integer(NVL(nthread, -1))
     )
     cnames <- colnames(data)
   } else if (inherits(data, "dgRMatrix")) {
     handle <- .Call(
-      XGDMatrixCreateFromCSR_R, data@p, data@j, data@x, ncol(data), as.integer(NVL(nthread, -1))
+      XGDMatrixCreateFromCSR_R,
+      data@p,
+      data@j,
+      data@x,
+      ncol(data),
+      missing,
+      as.integer(NVL(nthread, -1))
    )
     cnames <- colnames(data)
   } else if (inherits(data, "dsparseVector")) {
     indptr <- c(0L, as.integer(length(data@i)))
     ind <- as.integer(data@i) - 1L
     handle <- .Call(
-      XGDMatrixCreateFromCSR_R, indptr, ind, data@x, length(data), as.integer(NVL(nthread, -1))
+      XGDMatrixCreateFromCSR_R,
+      indptr,
+      ind,
+      data@x,
+      length(data),
+      missing,
+      as.integer(NVL(nthread, -1))
     )
   } else {
     stop("xgb.DMatrix does not support construction from ", typeof(data))
@@ -70,13 +88,13 @@ xgb.DMatrix <- function(data, info = list(), missing = NA, silent = FALSE, nthre
 # get dmatrix from data, label
 # internal helper method
-xgb.get.DMatrix <- function(data, label = NULL, missing = NA, weight = NULL, nthread = NULL) {
+xgb.get.DMatrix <- function(data, label, missing, weight, nthread) {
   if (inherits(data, "dgCMatrix") || is.matrix(data)) {
     if (is.null(label)) {
       stop("label must be provided when data is a matrix")
     }
     dtrain <- xgb.DMatrix(data, label = label, missing = missing, nthread = nthread)
-    if (!is.null(weight)){
+    if (!is.null(weight)) {
       setinfo(dtrain, "weight", weight)
     }
   } else {
@@ -110,7 +128,7 @@ xgb.get.DMatrix <- function(data, label, missing, weight, nthread) {
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
 #'
 #' stopifnot(nrow(dtrain) == nrow(train$data))
 #' stopifnot(ncol(dtrain) == ncol(train$data))
@@ -138,7 +156,7 @@ dim.xgb.DMatrix <- function(x) {
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
 #' dimnames(dtrain)
 #' colnames(dtrain)
 #' colnames(dtrain) <- make.names(1:ncol(train$data))
@@ -193,7 +211,7 @@ dimnames.xgb.DMatrix <- function(x) {
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' labels <- getinfo(dtrain, 'label')
 #' setinfo(dtrain, 'label', 1-labels)
@@ -218,7 +236,7 @@ getinfo.xgb.DMatrix <- function(object, name, ...) {
   }
   if (name == "feature_name" || name == "feature_type") {
     ret <- .Call(XGDMatrixGetStrFeatureInfo_R, object, name)
-  } else if (name != "nrow"){
+  } else if (name != "nrow") {
     ret <- .Call(XGDMatrixGetInfo_R, object, name)
   } else {
     ret <- nrow(object)
@@ -249,7 +267,7 @@ getinfo.xgb.DMatrix <- function(object, name, ...) {
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' labels <- getinfo(dtrain, 'label')
 #' setinfo(dtrain, 'label', 1-labels)
@@ -328,7 +346,6 @@ setinfo.xgb.DMatrix <- function(object, name, info, ...) {
     return(TRUE)
   }
   stop("setinfo: unknown info name ", name)
-  return(FALSE)
 }
@@ -345,7 +362,7 @@ setinfo.xgb.DMatrix <- function(object, name, info, ...) {
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' dsub <- slice(dtrain, 1:42)
 #' labels1 <- getinfo(dsub, 'label')
@@ -401,7 +418,7 @@ slice.xgb.DMatrix <- function(object, idxset, ...) {
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' dtrain
 #' print(dtrain, verbose=TRUE)
@@ -418,7 +435,7 @@ print.xgb.DMatrix <- function(x, verbose = FALSE, ...) {
   cat(infos)
   cnames <- colnames(x)
   cat(' colnames:')
-  if (verbose & !is.null(cnames)) {
+  if (verbose && !is.null(cnames)) {
     cat("\n'")
     cat(cnames, sep = "','")
     cat("'")

View File

@@ -7,7 +7,7 @@
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #' xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
 #' dtrain <- xgb.DMatrix('xgb.DMatrix.data')
 #' if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')

View File

@@ -48,8 +48,8 @@
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' data(agaricus.test, package='xgboost')
-#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
-#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label))
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
+#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
 #' nrounds = 4
@@ -65,8 +65,12 @@
 #' new.features.test <- xgb.create.features(model = bst, agaricus.test$data)
 #'
 #' # learning with new features
-#' new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
-#' new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
+#' new.dtrain <- xgb.DMatrix(
+#'   data = new.features.train, label = agaricus.train$label, nthread = 2
+#' )
+#' new.dtest <- xgb.DMatrix(
+#'   data = new.features.test, label = agaricus.test$label, nthread = 2
+#' )
 #' watchlist <- list(train = new.dtrain)
 #' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
@@ -79,7 +83,7 @@
 #'                accuracy.after, "!\n"))
 #'
 #' @export
-xgb.create.features <- function(model, data, ...){
+xgb.create.features <- function(model, data, ...) {
   check.deprecation(...)
   pred_with_leaf <- predict(model, data, predleaf = TRUE)
   cols <- lapply(as.data.frame(pred_with_leaf), factor)

View File

@@ -75,9 +75,11 @@
 #' @details
 #' The original sample is randomly partitioned into \code{nfold} equal size subsamples.
 #'
-#' Of the \code{nfold} subsamples, a single subsample is retained as the validation data for testing the model, and the remaining \code{nfold - 1} subsamples are used as training data.
+#' Of the \code{nfold} subsamples, a single subsample is retained as the validation data for testing the model,
+#' and the remaining \code{nfold - 1} subsamples are used as training data.
 #'
-#' The cross-validation process is then repeated \code{nrounds} times, with each of the \code{nfold} subsamples used exactly once as the validation data.
+#' The cross-validation process is then repeated \code{nrounds} times, with each of the
+#' \code{nfold} subsamples used exactly once as the validation data.
 #'
 #' All observations are used for both training and validation.
 #'
@@ -110,17 +112,17 @@
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #' cv <- xgb.cv(data = dtrain, nrounds = 3, nthread = 2, nfold = 5, metrics = list("rmse","auc"),
 #'              max_depth = 3, eta = 1, objective = "binary:logistic")
 #' print(cv)
 #' print(cv, verbose=TRUE)
 #'
 #' @export
-xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing = NA,
-                   prediction = FALSE, showsd = TRUE, metrics=list(),
+xgb.cv <- function(params = list(), data, nrounds, nfold, label = NULL, missing = NA,
+                   prediction = FALSE, showsd = TRUE, metrics = list(),
                    obj = NULL, feval = NULL, stratified = TRUE, folds = NULL, train_folds = NULL,
-                   verbose = TRUE, print_every_n=1L,
+                   verbose = TRUE, print_every_n = 1L,
                    early_stopping_rounds = NULL, maximize = NULL, callbacks = list(), ...) {
   check.deprecation(...)
@@ -133,9 +135,6 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
   check.custom.obj()
   check.custom.eval()
-  #if (is.null(params[['eval_metric']]) && is.null(feval))
-  #  stop("Either 'eval_metric' or 'feval' must be provided for CV")
   # Check the labels
   if ((inherits(data, 'xgb.DMatrix') && is.null(getinfo(data, 'label'))) ||
       (!inherits(data, 'xgb.DMatrix') && is.null(label))) {
@@ -159,10 +158,6 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
     folds <- generate.cv.folds(nfold, nrow(data), stratified, cv_label, params)
   }
-  # Potential TODO: sequential CV
-  #if (strategy == 'sequential')
-  #  stop('Sequential CV strategy is not yet implemented')
   # verbosity & evaluation printing callback:
   params <- c(params, list(silent = 1))
   print_every_n <- max(as.integer(print_every_n), 1L)
@@ -192,7 +187,13 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
   # create the booster-folds
   # train_folds
-  dall <- xgb.get.DMatrix(data, label, missing)
+  dall <- xgb.get.DMatrix(
+    data = data,
+    label = label,
+    missing = missing,
+    weight = NULL,
+    nthread = params$nthread
+  )
   bst_folds <- lapply(seq_along(folds), function(k) {
     dtest <- slice(dall, folds[[k]])
     # code originally contributed by @RolandASc on stackoverflow
@@ -200,7 +201,12 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
       dtrain <- slice(dall, unlist(folds[-k]))
     else
       dtrain <- slice(dall, train_folds[[k]])
-    handle <- xgb.Booster.handle(params, list(dtrain, dtest))
+    handle <- xgb.Booster.handle(
+      params = params,
+      cachelist = list(dtrain, dtest),
+      modelfile = NULL,
+      handle = NULL
+    )
     list(dtrain = dtrain, bst = handle, watchlist = list(train = dtrain, test = dtest), index = folds[[k]])
   })
   rm(dall)
@@ -221,8 +227,18 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
     for (f in cb$pre_iter) f()
     msg <- lapply(bst_folds, function(fd) {
-      xgb.iter.update(fd$bst, fd$dtrain, iteration - 1, obj)
-      xgb.iter.eval(fd$bst, fd$watchlist, iteration - 1, feval)
+      xgb.iter.update(
+        booster_handle = fd$bst,
+        dtrain = fd$dtrain,
+        iter = iteration - 1,
+        obj = obj
+      )
+      xgb.iter.eval(
+        booster_handle = fd$bst,
+        watchlist = fd$watchlist,
+        iter = iteration - 1,
+        feval = feval
+      )
     })
     msg <- simplify2array(msg)
     bst_evaluation <- rowMeans(msg)

View File

@@ -38,7 +38,7 @@
 #' cat(xgb.dump(bst, with_stats = TRUE, dump_format='json'))
 #'
 #' @export
-xgb.dump <- function(model, fname = NULL, fmap = "", with_stats=FALSE,
+xgb.dump <- function(model, fname = NULL, fmap = "", with_stats = FALSE,
                      dump_format = c("text", "json"), ...) {
   check.deprecation(...)
   dump_format <- match.arg(dump_format)

View File

@@ -4,7 +4,7 @@
 #' @rdname xgb.plot.importance
 #' @export
 xgb.ggplot.importance <- function(importance_matrix = NULL, top_n = NULL, measure = NULL,
-                                  rel_to_first = FALSE, n_clusters = c(1:10), ...) {
+                                  rel_to_first = FALSE, n_clusters = seq_len(10), ...) {
   importance_matrix <- xgb.plot.importance(importance_matrix, top_n = top_n, measure = measure,
                                            rel_to_first = rel_to_first, plot = FALSE, ...)
@@ -142,6 +142,7 @@ xgb.ggplot.shap.summary <- function(data, shap_contrib = NULL, features = NULL,
 #'
 #' @return A data.table containing the observation ID, the feature name, the
 #'   feature value (normalized if specified), and the SHAP contribution value.
+#' @noRd
 prepare.ggplot.shap.data <- function(data_list, normalize = FALSE) {
   data <- data_list[["data"]]
   shap_contrib <- data_list[["shap_contrib"]]
@@ -170,6 +171,7 @@ prepare.ggplot.shap.data <- function(data_list, normalize = FALSE) {
 #' @param x Numeric vector
 #'
 #' @return Numeric vector with mean 0 and sd 1.
+#' @noRd
 normalize <- function(x) {
   loc <- mean(x, na.rm = TRUE)
   scale <- stats::sd(x, na.rm = TRUE)
@@ -181,7 +183,7 @@ normalize <- function(x) {
 # ...   the plots
 # cols  number of columns
 # internal utility function
-multiplot <- function(..., cols = 1) {
+multiplot <- function(..., cols) {
   plots <- list(...)
   num_plots <- length(plots)

View File

@@ -82,7 +82,7 @@
 #'
 #' @export
 xgb.importance <- function(feature_names = NULL, model = NULL, trees = NULL,
-                           data = NULL, label = NULL, target = NULL){
+                           data = NULL, label = NULL, target = NULL) {
   if (!(is.null(data) && is.null(label) && is.null(target)))
     warning("xgb.importance: parameters 'data', 'label' and 'target' are deprecated")
@@ -104,7 +104,11 @@ xgb.importance <- function(feature_names = NULL, model = NULL, trees = NULL,
     XGBoosterFeatureScore_R, model$handle, jsonlite::toJSON(args, auto_unbox = TRUE, null = "null")
   )
   names(results) <- c("features", "shape", "weight")
-  n_classes <- if (length(results$shape) == 2) { results$shape[2] } else { 0 }
+  if (length(results$shape) == 2) {
+    n_classes <- results$shape[2]
+  } else {
+    n_classes <- 0
+  }
   importance <- if (n_classes == 0) {
     data.table(Feature = results$features, Weight = results$weight)[order(-abs(Weight))]
   } else {

View File

@@ -35,7 +35,12 @@ xgb.load <- function(modelfile) {
   if (is.null(modelfile))
     stop("xgb.load: modelfile cannot be NULL")
-  handle <- xgb.Booster.handle(modelfile = modelfile)
+  handle <- xgb.Booster.handle(
+    params = list(),
+    cachelist = list(),
+    modelfile = modelfile,
+    handle = NULL
+  )
   # re-use modelfile if it is raw so we do not need to serialize
   if (typeof(modelfile) == "raw") {
     warning(
@@ -45,9 +50,9 @@ xgb.load <- function(modelfile) {
         " `xgb.unserialize` instead. "
       )
     )
-    bst <- xgb.handleToBooster(handle, modelfile)
+    bst <- xgb.handleToBooster(handle = handle, raw = modelfile)
   } else {
-    bst <- xgb.handleToBooster(handle, NULL)
+    bst <- xgb.handleToBooster(handle = handle, raw = NULL)
   }
   bst <- xgb.Booster.complete(bst, saveraw = TRUE)
   return(bst)

View File

@@ -62,7 +62,7 @@
 #'
 #' @export
 xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
-                              trees = NULL, use_int_id = FALSE, ...){
+                              trees = NULL, use_int_id = FALSE, ...) {
   check.deprecation(...)
   if (!inherits(model, "xgb.Booster") && !is.character(text)) {
@@ -82,12 +82,11 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
     stop("trees: must be a vector of integers.")
   }
-  if (is.null(text)){
+  if (is.null(text)) {
     text <- xgb.dump(model = model, with_stats = TRUE)
   }
-  if (length(text) < 2 ||
-      sum(grepl('leaf=(\\d+)', text)) < 1) {
+  if (length(text) < 2 || !any(grepl('leaf=(\\d+)', text))) {
     stop("Non-tree model detected! This function can only be used with tree models.")
   }

View File

@@ -136,7 +136,7 @@ get.leaf.depth <- function(dt_tree) {
     # list of paths to each leaf in a tree
     paths <- lapply(paths_tmp$vpath, names)
     # combine into a resulting path lengths table for a tree
-    data.table(Depth = sapply(paths, length), ID = To[Leaf == TRUE])
+    data.table(Depth = lengths(paths), ID = To[Leaf == TRUE])
   }, by = Tree]
 }

View File

@@ -102,7 +102,9 @@ xgb.plot.importance <- function(importance_matrix = NULL, top_n = NULL, measure
     original_mar <- par()$mar
     # reset margins so this function doesn't have side effects
-    on.exit({par(mar = original_mar)})
+    on.exit({
+      par(mar = original_mar)
+    })
     mar <- original_mar
     if (!is.null(left_margin))

View File

@@ -61,7 +61,7 @@
 #'
 #' @export
 xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5, plot_width = NULL, plot_height = NULL,
-                                 render = TRUE, ...){
+                                 render = TRUE, ...) {
   if (!requireNamespace("DiagrammeR", quietly = TRUE)) {
     stop("DiagrammeR is required for xgb.plot.multi.trees")
   }
@@ -97,9 +97,9 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5,
     , by = .(abs.node.position, Feature)
   ][, .(Text = paste0(
     paste0(
-      Feature[1:min(length(Feature), features_keep)],
+      Feature[seq_len(min(length(Feature), features_keep))],
      " (",
-      format(Quality[1:min(length(Quality), features_keep)], digits = 5),
+      format(Quality[seq_len(min(length(Quality), features_keep))], digits = 5),
      ")"
    ),
    collapse = "\n"

View File

@@ -143,7 +143,7 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
     y <- shap_contrib[, f][ord]
     x_lim <- range(x, na.rm = TRUE)
     y_lim <- range(y, na.rm = TRUE)
-    do_na <- plot_NA && any(is.na(x))
+    do_na <- plot_NA && anyNA(x)
     if (do_na) {
       x_range <- diff(x_lim)
       loc_na <- min(x, na.rm = TRUE) + x_range * pos_NA
@@ -193,7 +193,7 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
 #' hence allows us to see which features have a negative / positive contribution
 #' on the model prediction, and whether the contribution is different for larger
 #' or smaller values of the feature. We effectively try to replicate the
-#' \code{summary_plot} function from https://github.com/slundberg/shap.
+#' \code{summary_plot} function from https://github.com/shap/shap.
 #'
 #' @inheritParams xgb.plot.shap
 #'
@@ -202,7 +202,7 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
 #'
 #' @examples # See \code{\link{xgb.plot.shap}}.
 #' @seealso \code{\link{xgb.plot.shap}}, \code{\link{xgb.ggplot.shap.summary}},
-#'   \url{https://github.com/slundberg/shap}
+#'   \url{https://github.com/shap/shap}
 xgb.plot.shap.summary <- function(data, shap_contrib = NULL, features = NULL, top_n = 10, model = NULL,
                                   trees = NULL, target_class = NULL, approxcontrib = FALSE, subsample = NULL) {
   # Only ggplot implementation is available.
@@ -272,8 +272,8 @@ xgb.shap.data <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
       imp <- xgb.importance(model = model, trees = trees, feature_names = colnames(data))
     }
     top_n <- top_n[1]
-    if (top_n < 1 | top_n > 100) stop("top_n: must be an integer within [1, 100]")
-    features <- imp$Feature[1:min(top_n, NROW(imp))]
+    if (top_n < 1 || top_n > 100) stop("top_n: must be an integer within [1, 100]")
+    features <- imp$Feature[seq_len(min(top_n, NROW(imp)))]
   }
   if (is.character(features)) {
     features <- match(features, colnames(data))

View File

@@ -34,7 +34,7 @@
 #' The branches that also used for missing values are marked as bold
 #' (as in "carrying extra capacity").
 #'
-#' This function uses \href{http://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR.
+#' This function uses \href{https://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR.
 #'
 #' @return
 #'
@@ -68,7 +68,7 @@
 #'
 #' @export
 xgb.plot.tree <- function(feature_names = NULL, model = NULL, trees = NULL, plot_width = NULL, plot_height = NULL,
-                          render = TRUE, show_node_id = FALSE, ...){
+                          render = TRUE, show_node_id = FALSE, ...) {
   check.deprecation(...)
   if (!inherits(model, "xgb.Booster")) {
     stop("model: Has to be an object of class xgb.Booster")

View File

@@ -43,6 +43,6 @@ xgb.save <- function(model, fname) {
   }
   model <- xgb.Booster.complete(model, saveraw = FALSE)
   fname <- path.expand(fname)
-  .Call(XGBoosterSaveModel_R, model$handle, fname[1])
+  .Call(XGBoosterSaveModel_R, model$handle, enc2utf8(fname[1]))
   return(TRUE)
 }

View File

@@ -18,17 +18,37 @@
 #' 2.1. Parameters for Tree Booster
 #'
 #' \itemize{
-#' \item \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1} when it is added to the current approximation. Used to prevent overfitting by making the boosting process more conservative. Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model more robust to overfitting but slower to compute. Default: 0.3
-#' \item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. the larger, the more conservative the algorithm will be.
+#' \item{ \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1}
+#' when it is added to the current approximation.
+#' Used to prevent overfitting by making the boosting process more conservative.
+#' Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model
+#' more robust to overfitting but slower to compute. Default: 0.3}
+#' \item{ \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree.
+#' the larger, the more conservative the algorithm will be.}
 #' \item \code{max_depth} maximum depth of a tree. Default: 6
-#' \item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
-#' \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
+#' \item{\code{min_child_weight} minimum sum of instance weight (hessian) needed in a child.
+#' If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight,
+#' then the building process will give up further partitioning.
+#' In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node.
+#' The larger, the more conservative the algorithm will be. Default: 1}
+#' \item{ \code{subsample} subsample ratio of the training instance.
+#' Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees
+#' and this will prevent overfitting. It makes computation shorter (because less data to analyse).
+#' It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1}
 #' \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
 #' \item \code{lambda} L2 regularization term on weights. Default: 1
 #' \item \code{alpha} L1 regularization term on weights. (there is no L1 reg on bias because it is not important). Default: 0
-#' \item \code{num_parallel_tree} Experimental parameter. number of trees to grow per round. Useful to test Random Forest through XGBoost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
-#' \item \code{monotone_constraints} A numerical vector consists of \code{1}, \code{0} and \code{-1} with its length equals to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.
-#' \item \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions. Each item of the list represents one permitted interaction where specified features are allowed to interact with each other. Feature index values should start from \code{0} (\code{0} references the first column). Leave argument unspecified for no interaction constraints.
+#' \item{ \code{num_parallel_tree} Experimental parameter. number of trees to grow per round.
+#' Useful to test Random Forest through XGBoost
+#' (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly.
+#' Default: 1}
+#' \item{ \code{monotone_constraints} A numerical vector consists of \code{1}, \code{0} and \code{-1} with its length
+#' equals to the number of features in the training data.
+#' \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.}
+#' \item{ \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions.
+#' Each item of the list represents one permitted interaction where specified features are allowed to interact with each other.
+#' Feature index values should start from \code{0} (\code{0} references the first column).
+#' Leave argument unspecified for no interaction constraints.}
 #' }
 #'
 #' 2.2. Parameters for Linear Booster
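As a hedged illustration of the two constraint parameters documented in the hunk above (synthetic data; the parameter values are arbitrary):

    library(xgboost)
    # Feature 1 is forced to act monotonically increasing; interaction
    # groups use 0-based feature indices, as the docs above state.
    x <- matrix(rnorm(300), ncol = 3)
    y <- x[, 1] + rnorm(100)
    dtrain <- xgb.DMatrix(x, label = y, nthread = 2)
    bst <- xgb.train(
      params = list(
        monotone_constraints = c(1, 0, 0),
        interaction_constraints = list(c(0, 1), c(2)),
        nthread = 2
      ),
      data = dtrain, nrounds = 10
    )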
@@ -42,29 +62,53 @@
 #' 3. Task Parameters
 #'
 #' \itemize{
-#' \item \code{objective} specify the learning task and the corresponding learning objective, users can pass a self-defined function to it. The default objective options are below:
+#' \item{ \code{objective} specify the learning task and the corresponding learning objective, users can pass a self-defined function to it.
+#' The default objective options are below:
 #' \itemize{
 #' \item \code{reg:squarederror} Regression with squared loss (Default).
-#' \item \code{reg:squaredlogerror}: regression with squared log loss \eqn{1/2 * (log(pred + 1) - log(label + 1))^2}. All inputs are required to be greater than -1. Also, see metric rmsle for possible issue with this objective.
+#' \item{ \code{reg:squaredlogerror}: regression with squared log loss \eqn{1/2 * (log(pred + 1) - log(label + 1))^2}.
+#' All inputs are required to be greater than -1.
+#' Also, see metric rmsle for possible issue with this objective.}
 #' \item \code{reg:logistic} logistic regression.
 #' \item \code{reg:pseudohubererror}: regression with Pseudo Huber loss, a twice differentiable alternative to absolute loss.
 #' \item \code{binary:logistic} logistic regression for binary classification. Output probability.
 #' \item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation.
 #' \item \code{binary:hinge}: hinge loss for binary classification. This makes predictions of 0 or 1, rather than producing probabilities.
-#' \item \code{count:poisson}: Poisson regression for count data, output mean of Poisson distribution. \code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization).
-#' \item \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored). Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional hazard function \code{h(t) = h0(t) * HR)}.
-#' \item \code{survival:aft}: Accelerated failure time model for censored survival time data. See \href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time} for details.
+#' \item{ \code{count:poisson}: Poisson regression for count data, output mean of Poisson distribution.
+#' \code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization).}
+#' \item{ \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored).
+#' Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional
+#' hazard function \code{h(t) = h0(t) * HR)}.}
+#' \item{ \code{survival:aft}: Accelerated failure time model for censored survival time data. See
+#' \href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time}
+#' for details.}
 #' \item \code{aft_loss_distribution}: Probability Density Function used by \code{survival:aft} and \code{aft-nloglik} metric.
-#' \item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class - 1}.
-#' \item \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class.
+#' \item{ \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective.
+#' Class is represented by a number and should be from 0 to \code{num_class - 1}.}
+#' \item{ \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be
+#' further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging
+#' to each class.}
 #' \item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss.
-#' \item \code{rank:ndcg}: Use LambdaMART to perform list-wise ranking where \href{https://en.wikipedia.org/wiki/Discounted_cumulative_gain}{Normalized Discounted Cumulative Gain (NDCG)} is maximized.
-#' \item \code{rank:map}: Use LambdaMART to perform list-wise ranking where \href{https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision}{Mean Average Precision (MAP)} is maximized.
-#' \item \code{reg:gamma}: gamma regression with log-link. Output is a mean of gamma distribution. It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be \href{https://en.wikipedia.org/wiki/Gamma_distribution#Applications}{gamma-distributed}.
-#' \item \code{reg:tweedie}: Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be \href{https://en.wikipedia.org/wiki/Tweedie_distribution#Applications}{Tweedie-distributed}.
+#' \item{ \code{rank:ndcg}: Use LambdaMART to perform list-wise ranking where
+#' \href{https://en.wikipedia.org/wiki/Discounted_cumulative_gain}{Normalized Discounted Cumulative Gain (NDCG)} is maximized.}
+#' \item{ \code{rank:map}: Use LambdaMART to perform list-wise ranking where
+#' \href{https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision}{Mean Average Precision (MAP)}
+#' is maximized.}
+#' \item{ \code{reg:gamma}: gamma regression with log-link.
+#' Output is a mean of gamma distribution.
+#' It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be
+#' \href{https://en.wikipedia.org/wiki/Gamma_distribution#Applications}{gamma-distributed}.}
+#' \item{ \code{reg:tweedie}: Tweedie regression with log-link.
+#' It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be
+#' \href{https://en.wikipedia.org/wiki/Tweedie_distribution#Applications}{Tweedie-distributed}.}
 #' }
+#' }
 #' \item \code{base_score} the initial prediction score of all instances, global bias. Default: 0.5
-#' \item \code{eval_metric} evaluation metrics for validation data. Users can pass a self-defined function to it. Default: metric will be assigned according to objective(rmse for regression, and error for classification, mean average precision for ranking). List is provided in detail section.
+#' \item{ \code{eval_metric} evaluation metrics for validation data.
+#' Users can pass a self-defined function to it.
+#' Default: metric will be assigned according to objective
+#' (rmse for regression, and error for classification, mean average precision for ranking).
+#' List is provided in detail section.}
 #' }
 #'
 #' @param data training dataset. \code{xgb.train} accepts only an \code{xgb.DMatrix} as the input.
@@ -141,7 +185,8 @@
 #' \item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
 #' \item \code{mae} Mean absolute error
 #' \item \code{mape} Mean absolute percentage error
-#' \item \code{auc} Area under the curve. \url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#'Area_under_curve} for ranking evaluation.
+#' \item{ \code{auc} Area under the curve.
+#' \url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#'Area_under_curve} for ranking evaluation.}
 #' \item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
 #' \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{https://en.wikipedia.org/wiki/NDCG}
 #' }
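The self-defined \code{eval_metric} mentioned in the parameter docs refers to the feval-style interface; a hedged sketch (the metric itself is arbitrary):

    # An feval-style metric receives raw predictions plus the DMatrix and
    # returns list(metric = <name>, value = <numeric>).
    evalerror <- function(preds, dtrain) {
      labels <- getinfo(dtrain, "label")
      err <- mean(as.numeric(preds > 0.5) != labels)
      list(metric = "custom_error", value = err)
    }
    # then: xgb.train(..., feval = evalerror, maximize = FALSE)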
@@ -192,8 +237,8 @@
 #' data(agaricus.train, package='xgboost')
 #' data(agaricus.test, package='xgboost')
 #'
-#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
-#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label))
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
+#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
 #' watchlist <- list(train = dtrain, eval = dtest)
 #'
 #' ## A simple xgb.train example:
@@ -276,6 +321,10 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
     if (is.null(evnames) || any(evnames == ""))
       stop("each element of the watchlist must have a name tag")
   }
+  # Handle multiple evaluation metrics given as a list
+  for (m in params$eval_metric) {
+    params <- c(params, list(eval_metric = m))
+  }
   # evaluation printing callback
   params <- c(params)
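The loop added above flattens a list-valued \code{eval_metric} into repeated scalar entries, the form the underlying booster expects. A hedged sketch of the expansion in isolation:

    # Illustrative: a list of metrics becomes one eval_metric entry each.
    params <- list(objective = "binary:logistic")
    for (m in list("auc", "error")) params <- c(params, list(eval_metric = m))
    str(params)  # two separate eval_metric entries, one per metric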
@@ -314,8 +363,13 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
   is_update <- NVL(params[['process_type']], '.') == 'update'
   # Construct a booster (either a new one or load from xgb_model)
-  handle <- xgb.Booster.handle(params, append(watchlist, dtrain), xgb_model)
-  bst <- xgb.handleToBooster(handle)
+  handle <- xgb.Booster.handle(
+    params = params,
+    cachelist = append(watchlist, dtrain),
+    modelfile = xgb_model,
+    handle = NULL
+  )
+  bst <- xgb.handleToBooster(handle = handle, raw = NULL)
   # extract parameters that can affect the relationship b/w #trees and #iterations
   num_class <- max(as.numeric(NVL(params[['num_class']], 1)), 1)
@@ -341,10 +395,21 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
     for (f in cb$pre_iter) f()
-    xgb.iter.update(bst$handle, dtrain, iteration - 1, obj)
-    if (length(watchlist) > 0)
-      bst_evaluation <- xgb.iter.eval(bst$handle, watchlist, iteration - 1, feval)
+    xgb.iter.update(
+      booster_handle = bst$handle,
+      dtrain = dtrain,
+      iter = iteration - 1,
+      obj = obj
+    )
+    if (length(watchlist) > 0) {
+      bst_evaluation <- xgb.iter.eval( # nolint: object_usage_linter
+        booster_handle = bst$handle,
+        watchlist = watchlist,
+        iter = iteration - 1,
+        feval = feval
+      )
+    }
     xgb.attr(bst$handle, 'niter') <- iteration - 1


@@ -10,7 +10,13 @@ xgboost <- function(data = NULL, label = NULL, missing = NA, weight = NULL,
                     save_period = NULL, save_name = "xgboost.model",
                     xgb_model = NULL, callbacks = list(), ...) {
   merged <- check.booster.params(params, ...)
-  dtrain <- xgb.get.DMatrix(data, label, missing, weight, nthread = merged$nthread)
+  dtrain <- xgb.get.DMatrix(
+    data = data,
+    label = label,
+    missing = missing,
+    weight = weight,
+    nthread = merged$nthread
+  )
   watchlist <- list(train = dtrain)

R-package/configure (vendored): 1842 lines changed; diff suppressed because it is too large.


@@ -2,10 +2,25 @@
 AC_PREREQ(2.69)
-AC_INIT([xgboost],[1.7.0],[],[xgboost],[])
-# Use this line to set CC variable to a C compiler
-AC_PROG_CC
+AC_INIT([xgboost],[2.0.0],[],[xgboost],[])
+: ${R_HOME=`R RHOME`}
+if test -z "${R_HOME}"; then
+  echo "could not determine R_HOME"
+  exit 1
+fi
+CXX17=`"${R_HOME}/bin/R" CMD config CXX17`
+CXX17STD=`"${R_HOME}/bin/R" CMD config CXX17STD`
+CXX="${CXX17} ${CXX17STD}"
+CXXFLAGS=`"${R_HOME}/bin/R" CMD config CXXFLAGS`
+CC=`"${R_HOME}/bin/R" CMD config CC`
+CFLAGS=`"${R_HOME}/bin/R" CMD config CFLAGS`
+CPPFLAGS=`"${R_HOME}/bin/R" CMD config CPPFLAGS`
+LDFLAGS=`"${R_HOME}/bin/R" CMD config LDFLAGS`
+AC_LANG(C++)
 ### Check whether backtrace() is part of libc or the external lib libexecinfo
 AC_MSG_CHECKING([Backtrace lib])
@@ -28,12 +43,19 @@ fi
 if test `uname -s` = "Darwin"
 then
-  OPENMP_CXXFLAGS='-Xclang -fopenmp'
-  OPENMP_LIB='-lomp'
+  if command -v brew &> /dev/null
+  then
+    HOMEBREW_LIBOMP_PREFIX=`brew --prefix libomp`
+  else
+    # Homebrew not found
+    HOMEBREW_LIBOMP_PREFIX=''
+  fi
+  OPENMP_CXXFLAGS="-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include"
+  OPENMP_LIB="-lomp -L${HOMEBREW_LIBOMP_PREFIX}/lib"
   ac_pkg_openmp=no
   AC_MSG_CHECKING([whether OpenMP will work in a package])
   AC_LANG_CONFTEST([AC_LANG_PROGRAM([[#include <omp.h>]], [[ return (omp_get_max_threads() <= 1); ]])])
-  ${CC} -o conftest conftest.c ${CPPFLAGS} ${LDFLAGS} ${OPENMP_LIB} ${OPENMP_CXXFLAGS} 2>/dev/null && ./conftest && ac_pkg_openmp=yes
+  ${CXX} -o conftest conftest.cpp ${CPPFLAGS} ${LDFLAGS} ${OPENMP_LIB} ${OPENMP_CXXFLAGS} 2>/dev/null && ./conftest && ac_pkg_openmp=yes
   AC_MSG_RESULT([${ac_pkg_openmp}])
   if test "${ac_pkg_openmp}" = no; then
     OPENMP_CXXFLAGS=''
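The rewritten configure script now asks R itself for the C++17 toolchain instead of probing for a C compiler. For illustration only, the same values the script reads can be queried from an R session (a hedged sketch):

    # R.home("bin") locates the running R installation's bin directory.
    r_bin <- file.path(R.home("bin"), "R")
    system2(r_bin, c("CMD", "config", "CXX17"), stdout = TRUE)
    system2(r_bin, c("CMD", "config", "CXXFLAGS"), stdout = TRUE)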


@@ -1,5 +1,4 @@
 # install development version of caret library that contains xgboost models
-devtools::install_github("topepo/caret/pkg/caret")
 require(caret)
 require(xgboost)
 require(data.table)
@@ -8,14 +7,23 @@ require(e1071)
 # Load Arthritis dataset in memory.
 data(Arthritis)
-# Create a copy of the dataset with data.table package (data.table is 100% compliant with R dataframe but its syntax is a lot more consistent and its performance are really good).
+# Create a copy of the dataset with data.table package
+# (data.table is 100% compliant with R dataframe but its syntax is a lot more consistent
+# and its performance are really good).
 df <- data.table(Arthritis, keep.rownames = FALSE)
-# Let's add some new categorical features to see if it helps. Of course these feature are highly correlated to the Age feature. Usually it's not a good thing in ML, but Tree algorithms (including boosted trees) are able to select the best features, even in case of highly correlated features.
-# For the first feature we create groups of age by rounding the real age. Note that we transform it to factor (categorical data) so the algorithm treat them as independant values.
+# Let's add some new categorical features to see if it helps.
+# Of course these feature are highly correlated to the Age feature.
+# Usually it's not a good thing in ML, but Tree algorithms (including boosted trees) are able to select the best features,
+# even in case of highly correlated features.
+# For the first feature we create groups of age by rounding the real age.
+# Note that we transform it to factor (categorical data) so the algorithm treat them as independant values.
 df[, AgeDiscret := as.factor(round(Age / 10, 0))]
-# Here is an even stronger simplification of the real age with an arbitrary split at 30 years old. I choose this value based on nothing. We will see later if simplifying the information based on arbitrary values is a good strategy (I am sure you already have an idea of how well it will work!).
+# Here is an even stronger simplification of the real age with an arbitrary split at 30 years old.
+# I choose this value based on nothing.
+# We will see later if simplifying the information based on arbitrary values is a good strategy
+# (I am sure you already have an idea of how well it will work!).
 df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]
 # We remove ID as there is nothing to learn from this feature (it will just add some noise as the dataset is small).
@@ -26,9 +34,10 @@ df[, ID := NULL]
 # Here we use 10-fold cross-validation, repeating twice, and using random search for tuning hyper-parameters.
 fitControl <- trainControl(method = "repeatedcv", number = 10, repeats = 2, search = "random")
 # train a xgbTree model using caret::train
-model <- train(factor(Improved)~., data = df, method = "xgbTree", trControl = fitControl)
+model <- train(factor(Improved) ~ ., data = df, method = "xgbTree", trControl = fitControl)
-# Instead of tree for our boosters, you can also fit a linear regression or logistic regression model using xgbLinear
+# Instead of tree for our boosters, you can also fit a linear regression or logistic regression model
+# using xgbLinear
 # model <- train(factor(Improved)~., data = df, method = "xgbLinear", trControl = fitControl)
 # See model results


@@ -7,34 +7,47 @@ if (!require(vcd)) {
 }
 # According to its documentation, XGBoost works only on numbers.
 # Sometimes the dataset we have to work on have categorical data.
-# A categorical variable is one which have a fixed number of values. By example, if for each observation a variable called "Colour" can have only "red", "blue" or "green" as value, it is a categorical variable.
+# A categorical variable is one which have a fixed number of values.
+# By example, if for each observation a variable called "Colour" can have only
+# "red", "blue" or "green" as value, it is a categorical variable.
 #
 # In R, categorical variable is called Factor.
 # Type ?factor in console for more information.
 #
-# In this demo we will see how to transform a dense dataframe with categorical variables to a sparse matrix before analyzing it in XGBoost.
+# In this demo we will see how to transform a dense dataframe with categorical variables to a sparse matrix
+# before analyzing it in XGBoost.
 # The method we are going to see is usually called "one hot encoding".
 #load Arthritis dataset in memory.
 data(Arthritis)
-# create a copy of the dataset with data.table package (data.table is 100% compliant with R dataframe but its syntax is a lot more consistent and its performance are really good).
+# create a copy of the dataset with data.table package
+# (data.table is 100% compliant with R dataframe but its syntax is a lot more consistent
+# and its performance are really good).
 df <- data.table(Arthritis, keep.rownames = FALSE)
 # Let's have a look to the data.table
 cat("Print the dataset\n")
 print(df)
-# 2 columns have factor type, one has ordinal type (ordinal variable is a categorical variable with values which can be ordered, here: None > Some > Marked).
+# 2 columns have factor type, one has ordinal type
+# (ordinal variable is a categorical variable with values which can be ordered, here: None > Some > Marked).
 cat("Structure of the dataset\n")
 str(df)
-# Let's add some new categorical features to see if it helps. Of course these feature are highly correlated to the Age feature. Usually it's not a good thing in ML, but Tree algorithms (including boosted trees) are able to select the best features, even in case of highly correlated features.
-# For the first feature we create groups of age by rounding the real age. Note that we transform it to factor (categorical data) so the algorithm treat them as independent values.
+# Let's add some new categorical features to see if it helps.
+# Of course these feature are highly correlated to the Age feature.
+# Usually it's not a good thing in ML, but Tree algorithms (including boosted trees) are able to select the best features,
+# even in case of highly correlated features.
+# For the first feature we create groups of age by rounding the real age.
+# Note that we transform it to factor (categorical data) so the algorithm treat them as independent values.
 df[, AgeDiscret := as.factor(round(Age / 10, 0))]
-# Here is an even stronger simplification of the real age with an arbitrary split at 30 years old. I choose this value based on nothing. We will see later if simplifying the information based on arbitrary values is a good strategy (I am sure you already have an idea of how well it will work!).
+# Here is an even stronger simplification of the real age with an arbitrary split at 30 years old.
+# I choose this value based on nothing.
+# We will see later if simplifying the information based on arbitrary values is a good strategy
+# (I am sure you already have an idea of how well it will work!).
 df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]
 # We remove ID as there is nothing to learn from this feature (it will just add some noise as the dataset is small).
@@ -48,7 +61,10 @@ print(levels(df[, Treatment]))
 # This method is also called one hot encoding.
 # The purpose is to transform each value of each categorical feature in one binary feature.
 #
-# Let's take, the column Treatment will be replaced by two columns, Placebo, and Treated. Each of them will be binary. For example an observation which had the value Placebo in column Treatment before the transformation will have, after the transformation, the value 1 in the new column Placebo and the value 0 in the new column Treated.
+# Let's take, the column Treatment will be replaced by two columns, Placebo, and Treated.
+# Each of them will be binary.
+# For example an observation which had the value Placebo in column Treatment before the transformation will have, after the transformation,
+# the value 1 in the new column Placebo and the value 0 in the new column Treated.
 #
 # Formulae Improved~.-1 used below means transform all categorical features but column Improved to binary values.
 # Column Improved is excluded because it will be our output column, the one we want to predict.
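A hedged, self-contained sketch of the one hot encoding step described above, on toy data instead of Arthritis:

    library(Matrix)
    toy <- data.frame(Treatment = factor(c("Placebo", "Treated", "Placebo")),
                      Improved  = c(0, 1, 0))
    # Improved ~ . - 1 turns every categorical column except the outcome
    # into binary indicator columns, stored as a sparse matrix.
    sparse_matrix <- sparse.model.matrix(Improved ~ . - 1, data = toy)
    print(sparse_matrix)  # columns TreatmentPlacebo and TreatmentTreated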
@@ -70,7 +86,10 @@ bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 9,
 importance <- xgb.importance(feature_names = colnames(sparse_matrix), model = bst)
 print(importance)
-# According to the matrix below, the most important feature in this dataset to predict if the treatment will work is the Age. The second most important feature is having received a placebo or not. The sex is third. Then we see our generated features (AgeDiscret). We can see that their contribution is very low (Gain column).
+# According to the matrix below, the most important feature in this dataset to predict if the treatment will work is the Age.
+# The second most important feature is having received a placebo or not.
+# The sex is third.
+# Then we see our generated features (AgeDiscret). We can see that their contribution is very low (Gain column).
 # Does these result make sense?
 # Let's check some Chi2 between each of these features and the outcome.
@@ -82,8 +101,17 @@ print(chisq.test(df$AgeDiscret, df$Y))
 # Our first simplification of Age gives a Pearson correlation of 8.
 print(chisq.test(df$AgeCat, df$Y))
-# The perfectly random split I did between young and old at 30 years old have a low correlation of 2. It's a result we may expect as may be in my mind > 30 years is being old (I am 32 and starting feeling old, this may explain that), but for the illness we are studying, the age to be vulnerable is not the same. Don't let your "gut" lower the quality of your model. In "data science", there is science :-)
+# The perfectly random split I did between young and old at 30 years old have a low correlation of 2.
+# It's a result we may expect as may be in my mind > 30 years is being old (I am 32 and starting feeling old, this may explain that),
+# but for the illness we are studying, the age to be vulnerable is not the same.
+# Don't let your "gut" lower the quality of your model. In "data science", there is science :-)
-# As you can see, in general destroying information by simplifying it won't improve your model. Chi2 just demonstrates that. But in more complex cases, creating a new feature based on existing one which makes link with the outcome more obvious may help the algorithm and improve the model. The case studied here is not enough complex to show that. Check Kaggle forum for some challenging datasets.
+# As you can see, in general destroying information by simplifying it won't improve your model.
+# Chi2 just demonstrates that.
+# But in more complex cases, creating a new feature based on existing one which makes link with the outcome
+# more obvious may help the algorithm and improve the model.
+# The case studied here is not enough complex to show that. Check Kaggle forum for some challenging datasets.
 # However it's almost always worse when you add some arbitrary rules.
-# Moreover, you can notice that even if we have added some not useful new features highly correlated with other features, the boosting tree algorithm have been able to choose the best one, which in this case is the Age. Linear model may not be that strong in these scenario.
+# Moreover, you can notice that even if we have added some not useful new features highly correlated with
+# other features, the boosting tree algorithm have been able to choose the best one, which in this case is the Age.
+# Linear model may not be that strong in these scenario.


@@ -12,7 +12,7 @@ cat('running cross validation\n')
 # do cross validation, this will print result out as
 # [iteration] metric_name:mean_value+std_value
 # std_value is standard deviation of the metric
-xgb.cv(param, dtrain, nrounds, nfold = 5, metrics = {'error'})
+xgb.cv(param, dtrain, nrounds, nfold = 5, metrics = 'error')
 cat('running cross validation, disable standard deviation display\n')
 # do cross validation, this will print result out as


@@ -33,7 +33,7 @@ treeInteractions <- function(input_tree, input_max_depth) {
   }
   # Extract nodes with interactions
-  interaction_trees <- trees[!is.na(Split) & !is.na(parent_1),
+  interaction_trees <- trees[!is.na(Split) & !is.na(parent_1), # nolint: object_usage_linter
                              c('Feature', paste0('parent_feat_', 1:(input_max_depth - 1))),
                              with = FALSE]
   interaction_trees_split <- split(interaction_trees, seq_len(nrow(interaction_trees)))
@@ -44,7 +44,7 @@ treeInteractions <- function(input_tree, input_max_depth) {
   # Remove non-interactions (same variable)
   interaction_list <- lapply(interaction_list, unique) # remove same variables
-  interaction_length <- sapply(interaction_list, length)
+  interaction_length <- lengths(interaction_list)
   interaction_list <- interaction_list[interaction_length > 1]
   interaction_list <- unique(lapply(interaction_list, sort))
   return(interaction_list)
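lengths() is the base-R replacement the linter prefers here: it returns the element lengths of a list directly, and is clearer and faster than looping with sapply(x, length). A quick illustration:

    x <- list(a = 1:3, b = "z", c = NULL)
    lengths(x)         # a b c -> 3 1 0, always an integer vector
    sapply(x, length)  # same values, via a slower generic loop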


@@ -24,7 +24,7 @@ accuracy.before <- (sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.te
 pred_with_leaf <- predict(bst, dtest, predleaf = TRUE)
 head(pred_with_leaf)
-create.new.tree.features <- function(model, original.features){
+create.new.tree.features <- function(model, original.features) {
   pred_with_leaf <- predict(model, original.features, predleaf = TRUE)
   cols <- list()
   for (i in 1:model$niter) {


@@ -1,4 +1,4 @@
-# running all scripts in demo folder
+# running all scripts in demo folder, removed during packaging.
 demo(basic_walkthrough, package = 'xgboost')
 demo(custom_objective, package = 'xgboost')
 demo(boost_from_prediction, package = 'xgboost')


@@ -79,9 +79,9 @@ end_of_table <- empty_lines[empty_lines > start_index][1L]
 # Read the contents of the table
 exported_symbols <- objdump_results[(start_index + 1L):end_of_table]
-exported_symbols <- gsub("\t", "", exported_symbols)
+exported_symbols <- gsub("\t", "", exported_symbols, fixed = TRUE)
 exported_symbols <- gsub(".*\\] ", "", exported_symbols)
-exported_symbols <- gsub(" ", "", exported_symbols)
+exported_symbols <- gsub(" ", "", exported_symbols, fixed = TRUE)
 # Write R.def file
 writeLines(
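fixed = TRUE makes gsub() treat the pattern as a literal string rather than a regular expression, which is both safer and faster for plain-text replacements; the middle call keeps regex mode because its pattern genuinely uses one. A quick illustration:

    gsub(".", "_", "a.b.c")                # regex: "." matches everything -> "_____"
    gsub(".", "_", "a.b.c", fixed = TRUE)  # literal dot only -> "a_b_c"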


@@ -15,9 +15,11 @@ selected per iteration.}
 }
 \value{
 Results are stored in the \code{coefs} element of the closure.
-The \code{\link{xgb.gblinear.history}} convenience function provides an easy way to access it.
+The \code{\link{xgb.gblinear.history}} convenience function provides an easy
+way to access it.
 With \code{xgb.train}, it is either a dense of a sparse matrix.
-While with \code{xgb.cv}, it is a list (an element per each fold) of such matrices.
+While with \code{xgb.cv}, it is a list (an element per each fold) of such
+matrices.
 }
 \description{
 Callback closure for collecting the model coefficients history of a gblinear booster
@@ -38,7 +40,7 @@ Callback function expects the following values to be set in its calling frame:
 # without considering the 2nd order interactions:
 x <- model.matrix(Species ~ .^2, iris)[,-1]
 colnames(x)
-dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"))
+dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"), nthread = 2)
 param <- list(booster = "gblinear", objective = "reg:logistic", eval_metric = "auc",
               lambda = 0.0003, alpha = 0.0003, nthread = 2)
 # For 'shotgun', which is a default linear updater, using high eta values may result in
@@ -63,19 +65,19 @@ matplot(xgb.gblinear.history(bst), type = 'l')
 # For xgb.cv:
 bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 100, eta = 0.8,
               callbacks = list(cb.gblinear.history()))
 # coefficients in the CV fold #3
 matplot(xgb.gblinear.history(bst)[[3]], type = 'l')
 #### Multiclass classification:
 #
-dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1)
+dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1, nthread = 1)
 param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
-              lambda = 0.0003, alpha = 0.0003, nthread = 2)
+              lambda = 0.0003, alpha = 0.0003, nthread = 1)
 # For the default linear updater 'shotgun' it sometimes is helpful
 # to use smaller eta to reduce instability
-bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 70, eta = 0.5,
+bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 50, eta = 0.5,
                  callbacks = list(cb.gblinear.history()))
 # Will plot the coefficient paths separately for each class:
 matplot(xgb.gblinear.history(bst, class_index = 0), type = 'l')


@@ -19,7 +19,7 @@ be directly used with an \code{xgb.DMatrix} object.
 \examples{
 data(agaricus.train, package='xgboost')
 train <- agaricus.train
-dtrain <- xgb.DMatrix(train$data, label=train$label)
+dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
 stopifnot(nrow(dtrain) == nrow(train$data))
 stopifnot(ncol(dtrain) == ncol(train$data))


@@ -26,7 +26,7 @@ Since row names are irrelevant, it is recommended to use \code{colnames} directl
 \examples{
 data(agaricus.train, package='xgboost')
 train <- agaricus.train
-dtrain <- xgb.DMatrix(train$data, label=train$label)
+dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
 dimnames(dtrain)
 colnames(dtrain)
 colnames(dtrain) <- make.names(1:ncol(train$data))


@@ -34,7 +34,7 @@ The \code{name} field can be one of the following:
 }
 \examples{
 data(agaricus.train, package='xgboost')
-dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 labels <- getinfo(dtrain, 'label')
 setinfo(dtrain, 'label', 1-labels)


@@ -1,18 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/xgb.ggplot.R
-\name{normalize}
-\alias{normalize}
-\title{Scale feature value to have mean 0, standard deviation 1}
-\usage{
-normalize(x)
-}
-\arguments{
-\item{x}{Numeric vector}
-}
-\value{
-Numeric vector with mean 0 and sd 1.
-}
-\description{
-This is used to compare multiple features on the same plot.
-Internal utility function
-}


@@ -122,6 +122,10 @@ With \code{predinteraction = TRUE}, SHAP values of contributions of interaction
 are computed. Note that this operation might be rather expensive in terms of compute and memory.
 Since it quadratically depends on the number of features, it is recommended to perform selection
 of the most important features first. See below about the format of the returned results.
+
+The \code{predict()} method uses as many threads as defined in \code{xgb.Booster} object (all by default).
+If you want to change their number, then assign a new number to \code{nthread} using \code{\link{xgb.parameters<-}}.
+Note also that converting a matrix to \code{\link{xgb.DMatrix}} uses multiple threads too.
 }
 \examples{
 ## binary classification:
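A hedged sketch of the nthread override described in the added paragraph (bst and dtest are assumed to already exist):

    # Assumes bst is a trained xgb.Booster and dtest an xgb.DMatrix.
    xgb.parameters(bst) <- list(nthread = 1)
    pred <- predict(bst, dtest)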


@@ -1,27 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/xgb.ggplot.R
-\name{prepare.ggplot.shap.data}
-\alias{prepare.ggplot.shap.data}
-\title{Combine and melt feature values and SHAP contributions for sample
-observations.}
-\usage{
-prepare.ggplot.shap.data(data_list, normalize = FALSE)
-}
-\arguments{
-\item{data_list}{List containing 'data' and 'shap_contrib' returned by
-\code{xgb.shap.data()}.}
-
-\item{normalize}{Whether to standardize feature values to have mean 0 and
-standard deviation 1 (useful for comparing multiple features on the same
-plot). Default \code{FALSE}.}
-}
-\value{
-A data.table containing the observation ID, the feature name, the
-feature value (normalized if specified), and the SHAP contribution value.
-}
-\description{
-Conforms to data format required for ggplot functions.
-}
-\details{
-Internal utility function.
-}


@@ -19,7 +19,7 @@ Currently it displays dimensions and presence of info-fields and colnames.
 }
 \examples{
 data(agaricus.train, package='xgboost')
-dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 dtrain
 print(dtrain, verbose=TRUE)


@@ -33,7 +33,7 @@ The \code{name} field can be one of the following:
 }
 \examples{
 data(agaricus.train, package='xgboost')
-dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 labels <- getinfo(dtrain, 'label')
 setinfo(dtrain, 'label', 1-labels)


@@ -28,7 +28,7 @@ original xgb.DMatrix object
 }
 \examples{
 data(agaricus.train, package='xgboost')
-dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 dsub <- slice(dtrain, 1:42)
 labels1 <- getinfo(dsub, 'label')


@@ -38,7 +38,7 @@ Supported input file formats are either a LIBSVM text file or a binary file that
 }
 \examples{
 data(agaricus.train, package='xgboost')
-dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
 dtrain <- xgb.DMatrix('xgb.DMatrix.data')
 if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')


@@ -16,7 +16,7 @@ Save xgb.DMatrix object to binary file
 }
 \examples{
 data(agaricus.train, package='xgboost')
-dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
 dtrain <- xgb.DMatrix('xgb.DMatrix.data')
 if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')


@@ -59,8 +59,8 @@ a rule on certain features."
 \examples{
 data(agaricus.train, package='xgboost')
 data(agaricus.test, package='xgboost')
-dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
-dtest <- with(agaricus.test, xgb.DMatrix(data, label = label))
+dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
+dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
 param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
 nrounds = 4
@@ -76,8 +76,12 @@ new.features.train <- xgb.create.features(model = bst, agaricus.train$data)
 new.features.test <- xgb.create.features(model = bst, agaricus.test$data)
 # learning with new features
-new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
-new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
+new.dtrain <- xgb.DMatrix(
+  data = new.features.train, label = agaricus.train$label, nthread = 2
+)
+new.dtest <- xgb.DMatrix(
+  data = new.features.test, label = agaricus.test$label, nthread = 2
+)
 watchlist <- list(train = new.dtrain)
 bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)


@@ -148,9 +148,11 @@ The cross validation function of xgboost
 \details{
 The original sample is randomly partitioned into \code{nfold} equal size subsamples.
-Of the \code{nfold} subsamples, a single subsample is retained as the validation data for testing the model, and the remaining \code{nfold - 1} subsamples are used as training data.
+Of the \code{nfold} subsamples, a single subsample is retained as the validation data for testing the model,
+and the remaining \code{nfold - 1} subsamples are used as training data.
-The cross-validation process is then repeated \code{nrounds} times, with each of the \code{nfold} subsamples used exactly once as the validation data.
+The cross-validation process is then repeated \code{nrounds} times, with each of the
+\code{nfold} subsamples used exactly once as the validation data.
 All observations are used for both training and validation.
@@ -158,9 +160,9 @@ Adapted from \url{https://en.wikipedia.org/wiki/Cross-validation_\%28statistics\
 }
 \examples{
 data(agaricus.train, package='xgboost')
-dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
+dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 cv <- xgb.cv(data = dtrain, nrounds = 3, nthread = 2, nfold = 5, metrics = list("rmse","auc"),
              max_depth = 3, eta = 1, objective = "binary:logistic")
 print(cv)
 print(cv, verbose=TRUE)


@@ -10,7 +10,7 @@ xgb.ggplot.importance(
   top_n = NULL,
   measure = NULL,
   rel_to_first = FALSE,
-  n_clusters = c(1:10),
+  n_clusters = seq_len(10),
   ...
 )
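seq_len(n) is the linter-friendly spelling here: unlike 1:n, it cannot silently produce a descending sequence when n is 0. A quick illustration:

    n <- 0
    1:n         # c(1, 0) -- almost never what a default or loop wants
    seq_len(n)  # integer(0), the safe empty sequence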


@@ -67,12 +67,12 @@ Each point (observation) is coloured based on its feature value. The plot
 hence allows us to see which features have a negative / positive contribution
 on the model prediction, and whether the contribution is different for larger
 or smaller values of the feature. We effectively try to replicate the
-\code{summary_plot} function from https://github.com/slundberg/shap.
+\code{summary_plot} function from https://github.com/shap/shap.
 }
 \examples{
 # See \code{\link{xgb.plot.shap}}.
 }
 \seealso{
 \code{\link{xgb.plot.shap}}, \code{\link{xgb.ggplot.shap.summary}},
-\url{https://github.com/slundberg/shap}
+\url{https://github.com/shap/shap}
 }


@@ -67,7 +67,7 @@ The "Yes" branches are marked by the "< split_value" label.
 The branches that also used for missing values are marked as bold
 (as in "carrying extra capacity").
-This function uses \href{http://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR.
+This function uses \href{https://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR.
 }
 \examples{
 data(agaricus.train, package='xgboost')


@@ -57,17 +57,37 @@ xgboost(
 2.1. Parameters for Tree Booster
 \itemize{
-\item \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1} when it is added to the current approximation. Used to prevent overfitting by making the boosting process more conservative. Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model more robust to overfitting but slower to compute. Default: 0.3
-\item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. the larger, the more conservative the algorithm will be.
+\item{ \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1}
+when it is added to the current approximation.
+Used to prevent overfitting by making the boosting process more conservative.
+Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model
+more robust to overfitting but slower to compute. Default: 0.3}
+\item{ \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree.
+the larger, the more conservative the algorithm will be.}
 \item \code{max_depth} maximum depth of a tree. Default: 6
-\item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
-\item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
+\item{\code{min_child_weight} minimum sum of instance weight (hessian) needed in a child.
+If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight,
+then the building process will give up further partitioning.
+In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node.
+The larger, the more conservative the algorithm will be. Default: 1}
+\item{ \code{subsample} subsample ratio of the training instance.
+Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees
+and this will prevent overfitting. It makes computation shorter (because less data to analyse).
+It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1}
 \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
 \item \code{lambda} L2 regularization term on weights. Default: 1
 \item \code{alpha} L1 regularization term on weights. (there is no L1 reg on bias because it is not important). Default: 0
-\item \code{num_parallel_tree} Experimental parameter. number of trees to grow per round. Useful to test Random Forest through XGBoost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
-\item \code{monotone_constraints} A numerical vector consists of \code{1}, \code{0} and \code{-1} with its length equals to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.
-\item \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions. Each item of the list represents one permitted interaction where specified features are allowed to interact with each other. Feature index values should start from \code{0} (\code{0} references the first column). Leave argument unspecified for no interaction constraints.
+\item{ \code{num_parallel_tree} Experimental parameter. number of trees to grow per round.
+Useful to test Random Forest through XGBoost
+(set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly.
+Default: 1}
+\item{ \code{monotone_constraints} A numerical vector consists of \code{1}, \code{0} and \code{-1} with its length
+equals to the number of features in the training data.
+\code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.}
+\item{ \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions.
+Each item of the list represents one permitted interaction where specified features are allowed to interact with each other.
+Feature index values should start from \code{0} (\code{0} references the first column).
+Leave argument unspecified for no interaction constraints.}
 }
 2.2. Parameters for Linear Booster
@@ -81,29 +101,53 @@ xgboost(
 3. Task Parameters
 \itemize{
-\item \code{objective} specify the learning task and the corresponding learning objective, users can pass a self-defined function to it. The default objective options are below:
+\item{ \code{objective} specify the learning task and the corresponding learning objective, users can pass a self-defined function to it.
+The default objective options are below:
 \itemize{
 \item \code{reg:squarederror} Regression with squared loss (Default).
-\item \code{reg:squaredlogerror}: regression with squared log loss \eqn{1/2 * (log(pred + 1) - log(label + 1))^2}. All inputs are required to be greater than -1. Also, see metric rmsle for possible issue with this objective.
+\item{ \code{reg:squaredlogerror}: regression with squared log loss \eqn{1/2 * (log(pred + 1) - log(label + 1))^2}.
+All inputs are required to be greater than -1.
+Also, see metric rmsle for possible issue with this objective.}
 \item \code{reg:logistic} logistic regression.
 \item \code{reg:pseudohubererror}: regression with Pseudo Huber loss, a twice differentiable alternative to absolute loss.
 \item \code{binary:logistic} logistic regression for binary classification. Output probability.
 \item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation.
 \item \code{binary:hinge}: hinge loss for binary classification. This makes predictions of 0 or 1, rather than producing probabilities.
-\item \code{count:poisson}: Poisson regression for count data, output mean of Poisson distribution. \code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization).
-\item \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored). Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional hazard function \code{h(t) = h0(t) * HR)}.
-\item \code{survival:aft}: Accelerated failure time model for censored survival time data. See \href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time} for details.
+\item{ \code{count:poisson}: Poisson regression for count data, output mean of Poisson distribution.
+\code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization).}
+\item{ \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored).
+Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional
+hazard function \code{h(t) = h0(t) * HR)}.}
+\item{ \code{survival:aft}: Accelerated failure time model for censored survival time data. See
+\href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time}
+for details.}
 \item \code{aft_loss_distribution}: Probability Density Function used by \code{survival:aft} and \code{aft-nloglik} metric.
-\item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class - 1}.
-\item \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class.
+\item{ \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective.
+Class is represented by a number and should be from 0 to \code{num_class - 1}.}
+\item{ \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be
+further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging
+to each class.}
 \item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss.
-\item \code{rank:ndcg}: Use LambdaMART to perform list-wise ranking where \href{https://en.wikipedia.org/wiki/Discounted_cumulative_gain}{Normalized Discounted Cumulative Gain (NDCG)} is maximized.
+\item{ \code{rank:ndcg}: Use LambdaMART to perform list-wise ranking where
\item \code{rank:map}: Use LambdaMART to perform list-wise ranking where \href{https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision}{Mean Average Precision (MAP)} is maximized. \href{https://en.wikipedia.org/wiki/Discounted_cumulative_gain}{Normalized Discounted Cumulative Gain (NDCG)} is maximized.}
\item \code{reg:gamma}: gamma regression with log-link. Output is a mean of gamma distribution. It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be \href{https://en.wikipedia.org/wiki/Gamma_distribution#Applications}{gamma-distributed}. \item{ \code{rank:map}: Use LambdaMART to perform list-wise ranking where
\item \code{reg:tweedie}: Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be \href{https://en.wikipedia.org/wiki/Tweedie_distribution#Applications}{Tweedie-distributed}. \href{https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision}{Mean Average Precision (MAP)}
is maximized.}
\item{ \code{reg:gamma}: gamma regression with log-link.
Output is a mean of gamma distribution.
It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be
\href{https://en.wikipedia.org/wiki/Gamma_distribution#Applications}{gamma-distributed}.}
\item{ \code{reg:tweedie}: Tweedie regression with log-link.
It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be
\href{https://en.wikipedia.org/wiki/Tweedie_distribution#Applications}{Tweedie-distributed}.}
} }
}
\item \code{base_score} the initial prediction score of all instances, global bias. Default: 0.5 \item \code{base_score} the initial prediction score of all instances, global bias. Default: 0.5
\item \code{eval_metric} evaluation metrics for validation data. Users can pass a self-defined function to it. Default: metric will be assigned according to objective(rmse for regression, and error for classification, mean average precision for ranking). List is provided in detail section. \item{ \code{eval_metric} evaluation metrics for validation data.
Users can pass a self-defined function to it.
Default: metric will be assigned according to objective
(rmse for regression, and error for classification, mean average precision for ranking).
List is provided in detail section.}
}} }}
\item{data}{training dataset. \code{xgb.train} accepts only an \code{xgb.DMatrix} as the input. \item{data}{training dataset. \code{xgb.train} accepts only an \code{xgb.DMatrix} as the input.
@@ -223,7 +267,8 @@ The following is the list of built-in metrics for which XGBoost provides optimiz
\item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}. \item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
\item \code{mae} Mean absolute error \item \code{mae} Mean absolute error
\item \code{mape} Mean absolute percentage error \item \code{mape} Mean absolute percentage error
\item \code{auc} Area under the curve. \url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#'Area_under_curve} for ranking evaluation. \item{ \code{auc} Area under the curve.
\url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#'Area_under_curve} for ranking evaluation.}
\item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation. \item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
\item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{https://en.wikipedia.org/wiki/NDCG} \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{https://en.wikipedia.org/wiki/NDCG}
} }
@@ -241,8 +286,8 @@ The following callbacks are automatically created when certain parameters are se
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost') data(agaricus.test, package='xgboost')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label)) dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtest <- with(agaricus.test, xgb.DMatrix(data, label = label)) dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
watchlist <- list(train = dtrain, eval = dtest) watchlist <- list(train = dtrain, eval = dtest)
## A simple xgb.train example: ## A simple xgb.train example:
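
As a quick illustration of the constraint and metric parameters documented above, here is a minimal sketch in R. It is not part of the diff: only the parameter names (monotone_constraints, interaction_constraints, eval_metric, nthread) come from the documentation; the data and all other values are assumed for demonstration.

    library(xgboost)

    set.seed(1)
    x <- matrix(rnorm(200), ncol = 2)
    y <- x[, 1] - x[, 2] + rnorm(100, sd = 0.1)
    dtrain <- xgb.DMatrix(x, label = y, nthread = 2)

    # increasing in the first feature, decreasing in the second,
    # and the two features are not allowed to interact
    bst <- xgb.train(
      params = list(
        objective = "reg:squarederror",
        monotone_constraints = c(1, -1),
        interaction_constraints = list(0, 1),  # feature indices start from 0
        eval_metric = "rmse",
        nthread = 2
      ),
      data = dtrain,
      nrounds = 4,
      watchlist = list(train = dtrain)
    )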


@@ -3,12 +3,11 @@ PKGROOT=../../
 ENABLE_STD_THREAD=1
 # _*_ mode: Makefile; _*_
-CXX_STD = CXX14
+CXX_STD = CXX17
 XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
   -DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\
-  -DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-  -DRABIT_CUSTOMIZE_MSG_
+  -DDMLC_LOG_CUSTOMIZE=1
 # disable the use of thread_local for 32 bit windows:
 ifeq ($(R_OSTYPE)$(WIN),windows)
@@ -23,7 +22,6 @@ PKG_LIBS = @OPENMP_CXXFLAGS@ @OPENMP_LIB@ @ENDIAN_FLAG@ @BACKTRACE_LIB@ -pthread
 OBJECTS= \
   ./xgboost_R.o \
   ./xgboost_custom.o \
-  ./xgboost_assert.o \
   ./init.o \
   $(PKGROOT)/src/metric/metric.o \
   $(PKGROOT)/src/metric/elementwise_metric.o \
@@ -34,10 +32,12 @@ OBJECTS= \
   $(PKGROOT)/src/objective/objective.o \
   $(PKGROOT)/src/objective/regression_obj.o \
   $(PKGROOT)/src/objective/multiclass_obj.o \
-  $(PKGROOT)/src/objective/rank_obj.o \
+  $(PKGROOT)/src/objective/lambdarank_obj.o \
   $(PKGROOT)/src/objective/hinge.o \
   $(PKGROOT)/src/objective/aft_obj.o \
   $(PKGROOT)/src/objective/adaptive.o \
+  $(PKGROOT)/src/objective/init_estimation.o \
+  $(PKGROOT)/src/objective/quantile_obj.o \
   $(PKGROOT)/src/gbm/gbm.o \
   $(PKGROOT)/src/gbm/gbtree.o \
   $(PKGROOT)/src/gbm/gbtree_model.o \
@@ -47,6 +47,7 @@ OBJECTS= \
   $(PKGROOT)/src/data/data.o \
   $(PKGROOT)/src/data/sparse_page_raw_format.o \
   $(PKGROOT)/src/data/ellpack_page.o \
+  $(PKGROOT)/src/data/file_iterator.o \
   $(PKGROOT)/src/data/gradient_index.o \
   $(PKGROOT)/src/data/gradient_index_page_source.o \
   $(PKGROOT)/src/data/gradient_index_format.o \
@@ -55,27 +56,36 @@ OBJECTS= \
   $(PKGROOT)/src/data/iterative_dmatrix.o \
   $(PKGROOT)/src/predictor/predictor.o \
   $(PKGROOT)/src/predictor/cpu_predictor.o \
+  $(PKGROOT)/src/predictor/cpu_treeshap.o \
   $(PKGROOT)/src/tree/constraints.o \
   $(PKGROOT)/src/tree/param.o \
+  $(PKGROOT)/src/tree/fit_stump.o \
   $(PKGROOT)/src/tree/tree_model.o \
   $(PKGROOT)/src/tree/tree_updater.o \
+  $(PKGROOT)/src/tree/multi_target_tree_model.o \
   $(PKGROOT)/src/tree/updater_approx.o \
   $(PKGROOT)/src/tree/updater_colmaker.o \
   $(PKGROOT)/src/tree/updater_prune.o \
   $(PKGROOT)/src/tree/updater_quantile_hist.o \
   $(PKGROOT)/src/tree/updater_refresh.o \
   $(PKGROOT)/src/tree/updater_sync.o \
+  $(PKGROOT)/src/tree/hist/param.o \
+  $(PKGROOT)/src/tree/hist/histogram.o \
   $(PKGROOT)/src/linear/linear_updater.o \
   $(PKGROOT)/src/linear/updater_coordinate.o \
   $(PKGROOT)/src/linear/updater_shotgun.o \
   $(PKGROOT)/src/learner.o \
+  $(PKGROOT)/src/context.o \
   $(PKGROOT)/src/logging.o \
   $(PKGROOT)/src/global_config.o \
   $(PKGROOT)/src/collective/communicator.o \
+  $(PKGROOT)/src/collective/in_memory_communicator.o \
+  $(PKGROOT)/src/collective/in_memory_handler.o \
   $(PKGROOT)/src/collective/socket.o \
   $(PKGROOT)/src/common/charconv.o \
   $(PKGROOT)/src/common/column_matrix.o \
   $(PKGROOT)/src/common/common.o \
+  $(PKGROOT)/src/common/error_msg.o \
   $(PKGROOT)/src/common/hist_util.o \
   $(PKGROOT)/src/common/host_device_vector.o \
   $(PKGROOT)/src/common/io.o \
@@ -84,8 +94,11 @@ OBJECTS= \
   $(PKGROOT)/src/common/pseudo_huber.o \
   $(PKGROOT)/src/common/quantile.o \
   $(PKGROOT)/src/common/random.o \
+  $(PKGROOT)/src/common/stats.o \
   $(PKGROOT)/src/common/survival_util.o \
   $(PKGROOT)/src/common/threading_utils.o \
+  $(PKGROOT)/src/common/ranking_utils.o \
+  $(PKGROOT)/src/common/quantile_loss_utils.o \
   $(PKGROOT)/src/common/timer.o \
   $(PKGROOT)/src/common/version.o \
   $(PKGROOT)/src/c_api/c_api.o \


@@ -3,12 +3,11 @@ PKGROOT=../../
 ENABLE_STD_THREAD=0
 # _*_ mode: Makefile; _*_
-CXX_STD = CXX14
+CXX_STD = CXX17
 XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
   -DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\
-  -DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-  -DRABIT_CUSTOMIZE_MSG_
+  -DDMLC_LOG_CUSTOMIZE=1
 # disable the use of thread_local for 32 bit windows:
 ifeq ($(R_OSTYPE)$(WIN),windows)
@@ -23,7 +22,6 @@ PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) -DDMLC_CMAKE_LITTLE_ENDIAN=1 $(SHLIB_PTHRE
 OBJECTS= \
   ./xgboost_R.o \
   ./xgboost_custom.o \
-  ./xgboost_assert.o \
   ./init.o \
   $(PKGROOT)/src/metric/metric.o \
   $(PKGROOT)/src/metric/elementwise_metric.o \
@@ -34,10 +32,12 @@ OBJECTS= \
   $(PKGROOT)/src/objective/objective.o \
   $(PKGROOT)/src/objective/regression_obj.o \
   $(PKGROOT)/src/objective/multiclass_obj.o \
-  $(PKGROOT)/src/objective/rank_obj.o \
+  $(PKGROOT)/src/objective/lambdarank_obj.o \
   $(PKGROOT)/src/objective/hinge.o \
   $(PKGROOT)/src/objective/aft_obj.o \
   $(PKGROOT)/src/objective/adaptive.o \
+  $(PKGROOT)/src/objective/init_estimation.o \
+  $(PKGROOT)/src/objective/quantile_obj.o \
   $(PKGROOT)/src/gbm/gbm.o \
   $(PKGROOT)/src/gbm/gbtree.o \
   $(PKGROOT)/src/gbm/gbtree_model.o \
@@ -47,6 +47,7 @@ OBJECTS= \
   $(PKGROOT)/src/data/data.o \
   $(PKGROOT)/src/data/sparse_page_raw_format.o \
   $(PKGROOT)/src/data/ellpack_page.o \
+  $(PKGROOT)/src/data/file_iterator.o \
   $(PKGROOT)/src/data/gradient_index.o \
   $(PKGROOT)/src/data/gradient_index_page_source.o \
   $(PKGROOT)/src/data/gradient_index_format.o \
@@ -55,9 +56,12 @@ OBJECTS= \
   $(PKGROOT)/src/data/iterative_dmatrix.o \
   $(PKGROOT)/src/predictor/predictor.o \
   $(PKGROOT)/src/predictor/cpu_predictor.o \
+  $(PKGROOT)/src/predictor/cpu_treeshap.o \
   $(PKGROOT)/src/tree/constraints.o \
   $(PKGROOT)/src/tree/param.o \
+  $(PKGROOT)/src/tree/fit_stump.o \
   $(PKGROOT)/src/tree/tree_model.o \
+  $(PKGROOT)/src/tree/multi_target_tree_model.o \
   $(PKGROOT)/src/tree/tree_updater.o \
   $(PKGROOT)/src/tree/updater_approx.o \
   $(PKGROOT)/src/tree/updater_colmaker.o \
@@ -65,17 +69,23 @@ OBJECTS= \
   $(PKGROOT)/src/tree/updater_quantile_hist.o \
   $(PKGROOT)/src/tree/updater_refresh.o \
   $(PKGROOT)/src/tree/updater_sync.o \
+  $(PKGROOT)/src/tree/hist/param.o \
+  $(PKGROOT)/src/tree/hist/histogram.o \
   $(PKGROOT)/src/linear/linear_updater.o \
   $(PKGROOT)/src/linear/updater_coordinate.o \
   $(PKGROOT)/src/linear/updater_shotgun.o \
   $(PKGROOT)/src/learner.o \
+  $(PKGROOT)/src/context.o \
   $(PKGROOT)/src/logging.o \
   $(PKGROOT)/src/global_config.o \
   $(PKGROOT)/src/collective/communicator.o \
+  $(PKGROOT)/src/collective/in_memory_communicator.o \
+  $(PKGROOT)/src/collective/in_memory_handler.o \
   $(PKGROOT)/src/collective/socket.o \
   $(PKGROOT)/src/common/charconv.o \
   $(PKGROOT)/src/common/column_matrix.o \
   $(PKGROOT)/src/common/common.o \
+  $(PKGROOT)/src/common/error_msg.o \
   $(PKGROOT)/src/common/hist_util.o \
   $(PKGROOT)/src/common/host_device_vector.o \
   $(PKGROOT)/src/common/io.o \
@@ -84,8 +94,11 @@ OBJECTS= \
   $(PKGROOT)/src/common/pseudo_huber.o \
   $(PKGROOT)/src/common/quantile.o \
   $(PKGROOT)/src/common/random.o \
+  $(PKGROOT)/src/common/stats.o \
   $(PKGROOT)/src/common/survival_util.o \
   $(PKGROOT)/src/common/threading_utils.o \
+  $(PKGROOT)/src/common/ranking_utils.o \
+  $(PKGROOT)/src/common/quantile_loss_utils.o \
   $(PKGROOT)/src/common/timer.o \
   $(PKGROOT)/src/common/version.o \
   $(PKGROOT)/src/c_api/c_api.o \


@@ -30,15 +30,14 @@ extern SEXP XGBoosterSaveJsonConfig_R(SEXP handle);
 extern SEXP XGBoosterLoadJsonConfig_R(SEXP handle, SEXP value);
 extern SEXP XGBoosterSerializeToBuffer_R(SEXP handle);
 extern SEXP XGBoosterUnserializeFromBuffer_R(SEXP handle, SEXP raw);
-extern SEXP XGBoosterPredict_R(SEXP, SEXP, SEXP, SEXP, SEXP);
 extern SEXP XGBoosterPredictFromDMatrix_R(SEXP, SEXP, SEXP);
 extern SEXP XGBoosterSaveModel_R(SEXP, SEXP);
 extern SEXP XGBoosterSetAttr_R(SEXP, SEXP, SEXP);
 extern SEXP XGBoosterSetParam_R(SEXP, SEXP, SEXP);
 extern SEXP XGBoosterUpdateOneIter_R(SEXP, SEXP, SEXP);
 extern SEXP XGCheckNullPtr_R(SEXP);
-extern SEXP XGDMatrixCreateFromCSC_R(SEXP, SEXP, SEXP, SEXP, SEXP);
-extern SEXP XGDMatrixCreateFromCSR_R(SEXP, SEXP, SEXP, SEXP, SEXP);
+extern SEXP XGDMatrixCreateFromCSC_R(SEXP, SEXP, SEXP, SEXP, SEXP, SEXP);
+extern SEXP XGDMatrixCreateFromCSR_R(SEXP, SEXP, SEXP, SEXP, SEXP, SEXP);
 extern SEXP XGDMatrixCreateFromFile_R(SEXP, SEXP);
 extern SEXP XGDMatrixCreateFromMat_R(SEXP, SEXP, SEXP);
 extern SEXP XGDMatrixGetInfo_R(SEXP, SEXP);
@@ -68,15 +67,14 @@ static const R_CallMethodDef CallEntries[] = {
   {"XGBoosterLoadJsonConfig_R", (DL_FUNC) &XGBoosterLoadJsonConfig_R, 2},
   {"XGBoosterSerializeToBuffer_R", (DL_FUNC) &XGBoosterSerializeToBuffer_R, 1},
   {"XGBoosterUnserializeFromBuffer_R", (DL_FUNC) &XGBoosterUnserializeFromBuffer_R, 2},
-  {"XGBoosterPredict_R", (DL_FUNC) &XGBoosterPredict_R, 5},
   {"XGBoosterPredictFromDMatrix_R", (DL_FUNC) &XGBoosterPredictFromDMatrix_R, 3},
   {"XGBoosterSaveModel_R", (DL_FUNC) &XGBoosterSaveModel_R, 2},
   {"XGBoosterSetAttr_R", (DL_FUNC) &XGBoosterSetAttr_R, 3},
   {"XGBoosterSetParam_R", (DL_FUNC) &XGBoosterSetParam_R, 3},
   {"XGBoosterUpdateOneIter_R", (DL_FUNC) &XGBoosterUpdateOneIter_R, 3},
   {"XGCheckNullPtr_R", (DL_FUNC) &XGCheckNullPtr_R, 1},
-  {"XGDMatrixCreateFromCSC_R", (DL_FUNC) &XGDMatrixCreateFromCSC_R, 5},
-  {"XGDMatrixCreateFromCSR_R", (DL_FUNC) &XGDMatrixCreateFromCSR_R, 5},
+  {"XGDMatrixCreateFromCSC_R", (DL_FUNC) &XGDMatrixCreateFromCSC_R, 6},
+  {"XGDMatrixCreateFromCSR_R", (DL_FUNC) &XGDMatrixCreateFromCSR_R, 6},
   {"XGDMatrixCreateFromFile_R", (DL_FUNC) &XGDMatrixCreateFromFile_R, 2},
   {"XGDMatrixCreateFromMat_R", (DL_FUNC) &XGDMatrixCreateFromMat_R, 3},
   {"XGDMatrixGetInfo_R", (DL_FUNC) &XGDMatrixGetInfo_R, 2},


@@ -1,11 +1,11 @@
 /**
- * Copyright 2014-2022 by XGBoost Contributors
+ * Copyright 2014-2023 by XGBoost Contributors
  */
 #include <dmlc/common.h>
 #include <dmlc/omp.h>
 #include <xgboost/c_api.h>
+#include <xgboost/context.h>
 #include <xgboost/data.h>
-#include <xgboost/generic_parameters.h>
 #include <xgboost/logging.h>
 #include <cstdio>
@@ -16,9 +16,11 @@
 #include <vector>
 #include "../../src/c_api/c_api_error.h"
+#include "../../src/c_api/c_api_utils.h"  // MakeSparseFromPtr
 #include "../../src/common/threading_utils.h"
-#include "./xgboost_R.h"
+#include "./xgboost_R.h"  // Must follow other includes.
+#include "Rinternals.h"
 /*!
  * \brief macro to annotate begin of api
@@ -46,14 +48,14 @@
 using dmlc::BeginPtr;
-xgboost::GenericParameter const *BoosterCtx(BoosterHandle handle) {
+xgboost::Context const *BoosterCtx(BoosterHandle handle) {
   CHECK_HANDLE();
   auto *learner = static_cast<xgboost::Learner *>(handle);
   CHECK(learner);
   return learner->Ctx();
 }
-xgboost::GenericParameter const *DMatrixCtx(DMatrixHandle handle) {
+xgboost::Context const *DMatrixCtx(DMatrixHandle handle) {
   CHECK_HANDLE();
   auto p_m = static_cast<std::shared_ptr<xgboost::DMatrix> *>(handle);
   CHECK(p_m);
@@ -114,13 +116,29 @@ XGB_DLL SEXP XGDMatrixCreateFromMat_R(SEXP mat, SEXP missing, SEXP n_threads) {
     din = REAL(mat);
   }
   std::vector<float> data(nrow * ncol);
-  int32_t threads = xgboost::common::OmpGetNumThreads(asInteger(n_threads));
-  xgboost::common::ParallelFor(nrow, threads, [&](xgboost::omp_ulong i) {
-    for (size_t j = 0; j < ncol; ++j) {
-      data[i * ncol + j] = is_int ? static_cast<float>(iin[i + nrow * j]) : din[i + nrow * j];
-    }
-  });
+  xgboost::Context ctx;
+  ctx.nthread = asInteger(n_threads);
+  std::int32_t threads = ctx.Threads();
+  if (is_int) {
+    xgboost::common::ParallelFor(nrow, threads, [&](xgboost::omp_ulong i) {
+      for (size_t j = 0; j < ncol; ++j) {
+        auto v = iin[i + nrow * j];
+        if (v == NA_INTEGER) {
+          data[i * ncol + j] = std::numeric_limits<float>::quiet_NaN();
+        } else {
+          data[i * ncol + j] = static_cast<float>(v);
+        }
+      }
+    });
+  } else {
+    xgboost::common::ParallelFor(nrow, threads, [&](xgboost::omp_ulong i) {
+      for (size_t j = 0; j < ncol; ++j) {
+        data[i * ncol + j] = din[i + nrow * j];
+      }
+    });
+  }
   DMatrixHandle handle;
   CHECK_CALL(XGDMatrixCreateFromMat_omp(BeginPtr(data), nrow, ncol,
                                         asReal(missing), &handle, threads));
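
The rewritten loop above is the substance of the fix for integer inputs: R's NA_INTEGER sentinel is now mapped to a quiet NaN so XGBoost sees it as a missing value rather than an arbitrary integer. A minimal R-side sketch of the effect (synthetic data; the behaviour is the one exercised by the "xgb.DMatrix: NA" test added later in this patch):

    library(xgboost)

    x <- matrix(1:6, nrow = 3, ncol = 2)
    storage.mode(x) <- "integer"
    x[1, 1] <- NA  # stored as NA_INTEGER in the dense buffer

    # after this change the NA is carried through as a missing value
    m <- xgb.DMatrix(x, nthread = 1)
    dim(m)  # 3 x 2, with one missing entry in the first row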
@@ -131,66 +149,78 @@ XGB_DLL SEXP XGDMatrixCreateFromMat_R(SEXP mat, SEXP missing, SEXP n_threads) {
   return ret;
 }
-XGB_DLL SEXP XGDMatrixCreateFromCSC_R(SEXP indptr, SEXP indices, SEXP data,
-                                      SEXP num_row, SEXP n_threads) {
-  SEXP ret;
-  R_API_BEGIN();
-  const int *p_indptr = INTEGER(indptr);
-  const int *p_indices = INTEGER(indices);
-  const double *p_data = REAL(data);
-  size_t nindptr = static_cast<size_t>(length(indptr));
-  size_t ndata = static_cast<size_t>(length(data));
-  size_t nrow = static_cast<size_t>(INTEGER(num_row)[0]);
-  std::vector<size_t> col_ptr_(nindptr);
-  std::vector<unsigned> indices_(ndata);
-  std::vector<float> data_(ndata);
-  for (size_t i = 0; i < nindptr; ++i) {
-    col_ptr_[i] = static_cast<size_t>(p_indptr[i]);
-  }
-  int32_t threads = xgboost::common::OmpGetNumThreads(asInteger(n_threads));
-  xgboost::common::ParallelFor(ndata, threads, [&](xgboost::omp_ulong i) {
-    indices_[i] = static_cast<unsigned>(p_indices[i]);
-    data_[i] = static_cast<float>(p_data[i]);
-  });
+namespace {
+void CreateFromSparse(SEXP indptr, SEXP indices, SEXP data, std::string *indptr_str,
+                      std::string *indices_str, std::string *data_str) {
+  const int *p_indptr = INTEGER(indptr);
+  const int *p_indices = INTEGER(indices);
+  const double *p_data = REAL(data);
+
+  auto nindptr = static_cast<std::size_t>(length(indptr));
+  auto ndata = static_cast<std::size_t>(length(data));
+  CHECK_EQ(ndata, p_indptr[nindptr - 1]);
+  xgboost::detail::MakeSparseFromPtr(p_indptr, p_indices, p_data, nindptr, indptr_str, indices_str,
+                                     data_str);
+}
+}  // namespace
+
+XGB_DLL SEXP XGDMatrixCreateFromCSC_R(SEXP indptr, SEXP indices, SEXP data, SEXP num_row,
+                                      SEXP missing, SEXP n_threads) {
+  SEXP ret;
+  R_API_BEGIN();
+  std::int32_t threads = asInteger(n_threads);
+
+  using xgboost::Integer;
+  using xgboost::Json;
+  using xgboost::Object;
+
+  std::string sindptr, sindices, sdata;
+  CreateFromSparse(indptr, indices, data, &sindptr, &sindices, &sdata);
+  auto nrow = static_cast<std::size_t>(INTEGER(num_row)[0]);
+
   DMatrixHandle handle;
-  CHECK_CALL(XGDMatrixCreateFromCSCEx(BeginPtr(col_ptr_), BeginPtr(indices_),
-                                      BeginPtr(data_), nindptr, ndata,
-                                      nrow, &handle));
+  Json jconfig{Object{}};
+  // Construct configuration
+  jconfig["nthread"] = Integer{threads};
+  jconfig["missing"] = xgboost::Number{asReal(missing)};
+  std::string config;
+  Json::Dump(jconfig, &config);
+  CHECK_CALL(XGDMatrixCreateFromCSC(sindptr.c_str(), sindices.c_str(), sdata.c_str(), nrow,
+                                    config.c_str(), &handle));
   ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
   R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
   R_API_END();
   UNPROTECT(1);
   return ret;
 }
-XGB_DLL SEXP XGDMatrixCreateFromCSR_R(SEXP indptr, SEXP indices, SEXP data,
-                                      SEXP num_col, SEXP n_threads) {
+XGB_DLL SEXP XGDMatrixCreateFromCSR_R(SEXP indptr, SEXP indices, SEXP data, SEXP num_col,
+                                      SEXP missing, SEXP n_threads) {
   SEXP ret;
   R_API_BEGIN();
-  const int *p_indptr = INTEGER(indptr);
-  const int *p_indices = INTEGER(indices);
-  const double *p_data = REAL(data);
-  size_t nindptr = static_cast<size_t>(length(indptr));
-  size_t ndata = static_cast<size_t>(length(data));
-  size_t ncol = static_cast<size_t>(INTEGER(num_col)[0]);
-  std::vector<size_t> row_ptr_(nindptr);
-  std::vector<unsigned> indices_(ndata);
-  std::vector<float> data_(ndata);
-  for (size_t i = 0; i < nindptr; ++i) {
-    row_ptr_[i] = static_cast<size_t>(p_indptr[i]);
-  }
-  int32_t threads = xgboost::common::OmpGetNumThreads(asInteger(n_threads));
-  xgboost::common::ParallelFor(ndata, threads, [&](xgboost::omp_ulong i) {
-    indices_[i] = static_cast<unsigned>(p_indices[i]);
-    data_[i] = static_cast<float>(p_data[i]);
-  });
+  std::int32_t threads = asInteger(n_threads);
+
+  using xgboost::Integer;
+  using xgboost::Json;
+  using xgboost::Object;
+
+  std::string sindptr, sindices, sdata;
+  CreateFromSparse(indptr, indices, data, &sindptr, &sindices, &sdata);
+  auto ncol = static_cast<std::size_t>(INTEGER(num_col)[0]);
+
   DMatrixHandle handle;
-  CHECK_CALL(XGDMatrixCreateFromCSREx(BeginPtr(row_ptr_), BeginPtr(indices_),
-                                      BeginPtr(data_), nindptr, ndata,
-                                      ncol, &handle));
+  Json jconfig{Object{}};
+  // Construct configuration
+  jconfig["nthread"] = Integer{threads};
+  jconfig["missing"] = xgboost::Number{asReal(missing)};
+  std::string config;
+  Json::Dump(jconfig, &config);
+  CHECK_CALL(XGDMatrixCreateFromCSR(sindptr.c_str(), sindices.c_str(), sdata.c_str(), ncol,
+                                    config.c_str(), &handle));
   ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
   R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
   R_API_END();
   UNPROTECT(1);
@@ -422,27 +452,6 @@ XGB_DLL SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evn
   return mkString(ret);
 }
-XGB_DLL SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask,
-                                SEXP ntree_limit, SEXP training) {
-  SEXP ret;
-  R_API_BEGIN();
-  bst_ulong olen;
-  const float *res;
-  CHECK_CALL(XGBoosterPredict(R_ExternalPtrAddr(handle),
-                              R_ExternalPtrAddr(dmat),
-                              asInteger(option_mask),
-                              asInteger(ntree_limit),
-                              asInteger(training),
-                              &olen, &res));
-  ret = PROTECT(allocVector(REALSXP, olen));
-  for (size_t i = 0; i < olen; ++i) {
-    REAL(ret)[i] = res[i];
-  }
-  R_API_END();
-  UNPROTECT(1);
-  return ret;
-}
-
 XGB_DLL SEXP XGBoosterPredictFromDMatrix_R(SEXP handle, SEXP dmat, SEXP json_config) {
   SEXP r_out_shape;
   SEXP r_out_result;


@@ -59,11 +59,12 @@ XGB_DLL SEXP XGDMatrixCreateFromMat_R(SEXP mat,
  * \param indices row indices
  * \param data content of the data
  * \param num_row number of rows (when it's set to 0, then guess from data)
+ * \param missing which value to treat as missing
  * \param n_threads Number of threads used to construct DMatrix from csc matrix.
  * \return created dmatrix
  */
 XGB_DLL SEXP XGDMatrixCreateFromCSC_R(SEXP indptr, SEXP indices, SEXP data, SEXP num_row,
-                                      SEXP n_threads);
+                                      SEXP missing, SEXP n_threads);
 /*!
  * \brief create a matrix content from CSR format
@@ -71,11 +72,12 @@ XGB_DLL SEXP XGDMatrixCreateFromCSC_R(SEXP indptr, SEXP indices, SEXP data, SEXP
  * \param indices column indices
  * \param data content of the data
  * \param num_col number of columns (when it's set to 0, then guess from data)
+ * \param missing which value to treat as missing
  * \param n_threads Number of threads used to construct DMatrix from csr matrix.
  * \return created dmatrix
  */
 XGB_DLL SEXP XGDMatrixCreateFromCSR_R(SEXP indptr, SEXP indices, SEXP data, SEXP num_col,
-                                      SEXP n_threads);
+                                      SEXP missing, SEXP n_threads);
 /*!
  * \brief create a new dmatrix from sliced content of existing matrix
@@ -176,17 +178,6 @@ XGB_DLL SEXP XGBoosterBoostOneIter_R(SEXP handle, SEXP dtrain, SEXP grad, SEXP h
  */
 XGB_DLL SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames);
-/*!
- * \brief (Deprecated) make prediction based on dmat
- * \param handle handle
- * \param dmat data matrix
- * \param option_mask output_margin:1 predict_leaf:2
- * \param ntree_limit limit number of trees used in prediction
- * \param training Whether the prediction value is used for training.
- */
-XGB_DLL SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask,
-                                SEXP ntree_limit, SEXP training);
-
 /*!
  * \brief Run prediction on DMatrix, replacing `XGBoosterPredict_R`
  * \param handle handle


@@ -1,26 +0,0 @@
-// Copyright (c) 2014 by Contributors
-#include <stdio.h>
-#include <stdarg.h>
-#include <Rinternals.h>
-
-// implements error handling
-void XGBoostAssert_R(int exp, const char *fmt, ...) {
-  char buf[1024];
-  if (exp == 0) {
-    va_list args;
-    va_start(args, fmt);
-    vsprintf(buf, fmt, args);
-    va_end(args);
-    error("AssertError:%s\n", buf);
-  }
-}
-void XGBoostCheck_R(int exp, const char *fmt, ...) {
-  char buf[1024];
-  if (exp == 0) {
-    va_list args;
-    va_start(args, fmt);
-    vsprintf(buf, fmt, args);
-    va_end(args);
-    error("%s\n", buf);
-  }
-}


@@ -0,0 +1,51 @@
+## Install dependencies of the R package for testing. The list might not be
+## up to date; check DESCRIPTION for the latest list and update this one if
+## an inconsistency is found.
+pkgs <- c(
+  ## CI
+  "caret",
+  "pkgbuild",
+  "roxygen2",
+  "XML",
+  "cplm",
+  "e1071",
+  ## suggests
+  "knitr",
+  "rmarkdown",
+  "ggplot2",
+  "DiagrammeR",
+  "Ckmeans.1d.dp",
+  "vcd",
+  "lintr",
+  "testthat",
+  "igraph",
+  "float",
+  "titanic",
+  ## imports
+  "Matrix",
+  "methods",
+  "data.table",
+  "jsonlite"
+)
+
+ncpus <- parallel::detectCores()
+print(paste0("Using ", ncpus, " cores to install dependencies."))
+
+if (.Platform$OS.type == "unix") {
+  print("Installing source packages on unix.")
+  install.packages(
+    pkgs,
+    repo = "https://cloud.r-project.org",
+    dependencies = c("Depends", "Imports", "LinkingTo"),
+    Ncpus = ncpus
+  )
+} else {
+  print("Installing binary packages on Windows.")
+  install.packages(
+    pkgs,
+    repo = "https://cloud.r-project.org",
+    dependencies = c("Depends", "Imports", "LinkingTo"),
+    Ncpus = ncpus,
+    type = "binary"
+  )
+}
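
Assuming the new script above is saved as, say, install_deps.R (the actual file path is not shown in this view, so the name is an assumption), it is meant to be run non-interactively before the test suite, e.g. `Rscript install_deps.R`.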


@@ -1,71 +0,0 @@
-library(lintr)
-library(crayon)
-
-my_linters <- list(
-  absolute_path_linter = lintr::absolute_path_linter,
-  assignment_linter = lintr::assignment_linter,
-  closed_curly_linter = lintr::closed_curly_linter,
-  commas_linter = lintr::commas_linter,
-  equals_na = lintr::equals_na_linter,
-  infix_spaces_linter = lintr::infix_spaces_linter,
-  line_length_linter = lintr::line_length_linter,
-  no_tab_linter = lintr::no_tab_linter,
-  object_usage_linter = lintr::object_usage_linter,
-  object_length_linter = lintr::object_length_linter,
-  open_curly_linter = lintr::open_curly_linter,
-  semicolon = lintr::semicolon_terminator_linter(semicolon = c("compound", "trailing")),
-  seq = lintr::seq_linter,
-  spaces_inside_linter = lintr::spaces_inside_linter,
-  spaces_left_parentheses_linter = lintr::spaces_left_parentheses_linter,
-  trailing_blank_lines_linter = lintr::trailing_blank_lines_linter,
-  trailing_whitespace_linter = lintr::trailing_whitespace_linter,
-  true_false = lintr::T_and_F_symbol_linter,
-  unneeded_concatenation = lintr::unneeded_concatenation_linter
-)
-
-results <- lapply(
-  list.files(path = '.', pattern = '\\.[Rr]$', recursive = TRUE),
-  function(r_file) {
-    cat(sprintf("Processing %s ...\n", r_file))
-    list(r_file = r_file,
-         output = lintr::lint(filename = r_file, linters = my_linters))
-  })
-
-num_issue <- Reduce(sum, lapply(results, function(e) length(e$output)))
-
-lint2str <- function(lint_entry) {
-  color <- function(type) {
-    switch(type,
-      "warning" = crayon::magenta,
-      "error" = crayon::red,
-      "style" = crayon::blue,
-      crayon::bold
-    )
-  }
-  paste0(
-    lapply(lint_entry$output,
-           function(lint_line) {
-             paste0(
-               crayon::bold(lint_entry$r_file, ":",
-                            as.character(lint_line$line_number), ":",
-                            as.character(lint_line$column_number), ": ", sep = ""),
-               color(lint_line$type)(lint_line$type, ": ", sep = ""),
-               crayon::bold(lint_line$message), "\n",
-               lint_line$line, "\n",
-               lintr:::highlight_string(lint_line$message, lint_line$column_number, lint_line$ranges),
-               "\n",
-               collapse = "")
-           }),
-    collapse = "")
-}
-
-if (num_issue > 0) {
-  cat(sprintf('R linters found %d issues:\n', num_issue))
-  for (entry in results) {
-    if (length(entry$output)) {
-      cat(paste0('**** ', crayon::bold(entry$r_file), '\n'))
-      cat(paste0(lint2str(entry), collapse = ''))
-    }
-  }
-  quit(save = 'no', status = 1)  # Signal error to parent shell
-}


@@ -1,6 +1,3 @@
-require(xgboost)
-library(Matrix)
-
 context("basic functions")
 data(agaricus.train, package = 'xgboost')
@@ -88,9 +85,18 @@ test_that("dart prediction works", {
   rnorm(100)
   set.seed(1994)
-  booster_by_xgboost <- xgboost(data = d, label = y, max_depth = 2, booster = "dart",
-                                rate_drop = 0.5, one_drop = TRUE,
-                                eta = 1, nthread = 2, nrounds = nrounds, objective = "reg:squarederror")
+  booster_by_xgboost <- xgboost(
+    data = d,
+    label = y,
+    max_depth = 2,
+    booster = "dart",
+    rate_drop = 0.5,
+    one_drop = TRUE,
+    eta = 1,
+    nthread = 2,
+    nrounds = nrounds,
+    objective = "reg:squarederror"
+  )
   pred_by_xgboost_0 <- predict(booster_by_xgboost, newdata = d, ntreelimit = 0)
   pred_by_xgboost_1 <- predict(booster_by_xgboost, newdata = d, ntreelimit = nrounds)
   expect_true(all(matrix(pred_by_xgboost_0, byrow = TRUE) == matrix(pred_by_xgboost_1, byrow = TRUE)))
@@ -100,19 +106,19 @@ test_that("dart prediction works", {
   set.seed(1994)
   dtrain <- xgb.DMatrix(data = d, info = list(label = y))
-  booster_by_train <- xgb.train(params = list(
-    booster = "dart",
-    max_depth = 2,
-    eta = 1,
-    rate_drop = 0.5,
-    one_drop = TRUE,
-    nthread = 1,
-    tree_method = "exact",
-    objective = "reg:squarederror"
-  ),
-  data = dtrain,
-  nrounds = nrounds
-  )
+  booster_by_train <- xgb.train(
+    params = list(
+      booster = "dart",
+      max_depth = 2,
+      eta = 1,
+      rate_drop = 0.5,
+      one_drop = TRUE,
+      nthread = 1,
+      objective = "reg:squarederror"
+    ),
+    data = dtrain,
+    nrounds = nrounds
+  )
   pred_by_train_0 <- predict(booster_by_train, newdata = dtrain, ntreelimit = 0)
   pred_by_train_1 <- predict(booster_by_train, newdata = dtrain, ntreelimit = nrounds)
   pred_by_train_2 <- predict(booster_by_train, newdata = dtrain, training = TRUE)
@@ -235,12 +241,20 @@ test_that("train and predict RF with softprob", {
 test_that("use of multiple eval metrics works", {
   expect_output(
     bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
                    eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic",
                    eval_metric = 'error', eval_metric = 'auc', eval_metric = "logloss")
   , "train-error.*train-auc.*train-logloss")
   expect_false(is.null(bst$evaluation_log))
   expect_equal(dim(bst$evaluation_log), c(2, 4))
   expect_equal(colnames(bst$evaluation_log), c("iter", "train_error", "train_auc", "train_logloss"))
+  expect_output(
+    bst2 <- xgboost(data = train$data, label = train$label, max_depth = 2,
+                    eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic",
+                    eval_metric = list("error", "auc", "logloss"))
+  , "train-error.*train-auc.*train-logloss")
+  expect_false(is.null(bst2$evaluation_log))
+  expect_equal(dim(bst2$evaluation_log), c(2, 4))
+  expect_equal(colnames(bst2$evaluation_log), c("iter", "train_error", "train_auc", "train_logloss"))
 })
@@ -394,7 +408,7 @@ test_that("colsample_bytree works", {
   xgb.importance(model = bst)
   # If colsample_bytree works properly, a variety of features should be used
   # in the 100 trees
-  expect_gte(nrow(xgb.importance(model = bst)), 30)
+  expect_gte(nrow(xgb.importance(model = bst)), 28)
 })
 test_that("Configuration works", {
@@ -404,7 +418,7 @@ test_that("Configuration works", {
   config <- xgb.config(bst)
   xgb.config(bst) <- config
   reloaded_config <- xgb.config(bst)
-  expect_equal(config, reloaded_config);
+  expect_equal(config, reloaded_config)
 })
 test_that("strict_shape works", {


@@ -1,9 +1,4 @@
 # More specific testing of callbacks
-require(xgboost)
-require(data.table)
-require(titanic)
-
 context("callbacks")
 data(agaricus.train, package = 'xgboost')
@@ -84,7 +79,7 @@ test_that("cb.evaluation.log works as expected", {
                list(c(iter = 1, bst_evaluation), c(iter = 2, bst_evaluation)))
   expect_silent(f(finalize = TRUE))
   expect_equal(evaluation_log,
-               data.table(iter = 1:2, train_auc = c(0.9, 0.9), test_auc = c(0.8, 0.8)))
+               data.table::data.table(iter = 1:2, train_auc = c(0.9, 0.9), test_auc = c(0.8, 0.8)))
   bst_evaluation_err <- c('train-auc' = 0.1, 'test-auc' = 0.2)
   evaluation_log <- list()
@@ -101,7 +96,7 @@ test_that("cb.evaluation.log works as expected", {
                     c(iter = 2, c(bst_evaluation, bst_evaluation_err))))
   expect_silent(f(finalize = TRUE))
   expect_equal(evaluation_log,
-               data.table(iter = 1:2,
+               data.table::data.table(iter = 1:2,
                           train_auc_mean = c(0.9, 0.9), train_auc_std = c(0.1, 0.1),
                           test_auc_mean = c(0.8, 0.8), test_auc_std = c(0.2, 0.2)))
 })
@@ -256,6 +251,9 @@ test_that("early stopping using a specific metric works", {
 })
 test_that("early stopping works with titanic", {
+  if (!requireNamespace("titanic")) {
+    testthat::skip("Optional testing dependency 'titanic' not found.")
+  }
   # This test was inspired by https://github.com/dmlc/xgboost/issues/5935
   # It catches possible issues on noLD R
   titanic <- titanic::titanic_train
@@ -322,7 +320,7 @@ test_that("prediction in early-stopping xgb.cv works", {
   expect_output(
     cv <- xgb.cv(param, dtrain, nfold = 5, eta = 0.1, nrounds = 20,
                  early_stopping_rounds = 5, maximize = FALSE, stratified = FALSE,
-                 prediction = TRUE)
+                 prediction = TRUE, base_score = 0.5)
   , "Stopping. Best iteration")
   expect_false(is.null(cv$best_iteration))


@@ -1,7 +1,5 @@
 context('Test models with custom objective')
-require(xgboost)
-
 set.seed(1994)
 data(agaricus.train, package = 'xgboost')


@@ -1,9 +1,7 @@
-require(xgboost)
-require(Matrix)
+library(Matrix)
 context("testing xgb.DMatrix functionality")
-data(agaricus.test, package = 'xgboost')
+data(agaricus.test, package = "xgboost")
 test_data <- agaricus.test$data[1:100, ]
 test_label <- agaricus.test$label[1:100]
@@ -13,14 +11,85 @@ test_that("xgb.DMatrix: basic construction", {
   # from dense matrix
   dtest2 <- xgb.DMatrix(as.matrix(test_data), label = test_label)
-  expect_equal(getinfo(dtest1, 'label'), getinfo(dtest2, 'label'))
+  expect_equal(getinfo(dtest1, "label"), getinfo(dtest2, "label"))
   expect_equal(dim(dtest1), dim(dtest2))
-  #from dense integer matrix
+  # from dense integer matrix
   int_data <- as.matrix(test_data)
   storage.mode(int_data) <- "integer"
   dtest3 <- xgb.DMatrix(int_data, label = test_label)
   expect_equal(dim(dtest1), dim(dtest3))
+
+  n_samples <- 100
+  X <- cbind(
+    x1 = sample(x = 4, size = n_samples, replace = TRUE),
+    x2 = sample(x = 4, size = n_samples, replace = TRUE),
+    x3 = sample(x = 4, size = n_samples, replace = TRUE)
+  )
+  X <- matrix(X, nrow = n_samples)
+  y <- rbinom(n = n_samples, size = 1, prob = 1 / 2)
+
+  fd <- xgb.DMatrix(X, label = y, missing = 1)
+
+  dgc <- as(X, "dgCMatrix")
+  fdgc <- xgb.DMatrix(dgc, label = y, missing = 1.0)
+
+  dgr <- as(X, "dgRMatrix")
+  fdgr <- xgb.DMatrix(dgr, label = y, missing = 1)
+
+  params <- list(tree_method = "hist")
+  bst_fd <- xgb.train(
+    params, nrounds = 8, fd, watchlist = list(train = fd)
+  )
+  bst_dgr <- xgb.train(
+    params, nrounds = 8, fdgr, watchlist = list(train = fdgr)
+  )
+  bst_dgc <- xgb.train(
+    params, nrounds = 8, fdgc, watchlist = list(train = fdgc)
+  )
+
+  raw_fd <- xgb.save.raw(bst_fd, raw_format = "ubj")
+  raw_dgr <- xgb.save.raw(bst_dgr, raw_format = "ubj")
+  raw_dgc <- xgb.save.raw(bst_dgc, raw_format = "ubj")
+
+  expect_equal(raw_fd, raw_dgr)
+  expect_equal(raw_fd, raw_dgc)
+})
+
+test_that("xgb.DMatrix: NA", {
+  n_samples <- 3
+  x <- cbind(
+    x1 = sample(x = 4, size = n_samples, replace = TRUE),
+    x2 = sample(x = 4, size = n_samples, replace = TRUE)
+  )
+  x[1, "x1"] <- NA
+
+  m <- xgb.DMatrix(x)
+  xgb.DMatrix.save(m, "int.dmatrix")
+
+  x <- matrix(as.numeric(x), nrow = n_samples, ncol = 2)
+  colnames(x) <- c("x1", "x2")
+  m <- xgb.DMatrix(x)
+  xgb.DMatrix.save(m, "float.dmatrix")
+
+  iconn <- file("int.dmatrix", "rb")
+  fconn <- file("float.dmatrix", "rb")
+
+  expect_equal(file.size("int.dmatrix"), file.size("float.dmatrix"))
+  bytes <- file.size("int.dmatrix")
+
+  idmatrix <- readBin(iconn, "raw", n = bytes)
+  fdmatrix <- readBin(fconn, "raw", n = bytes)
+  expect_equal(length(idmatrix), length(fdmatrix))
+  expect_equal(idmatrix, fdmatrix)
+
+  close(iconn)
+  close(fconn)
+
+  file.remove("int.dmatrix")
+  file.remove("float.dmatrix")
 })
test_that("xgb.DMatrix: saving, loading", { test_that("xgb.DMatrix: saving, loading", {
@@ -37,9 +106,10 @@ test_that("xgb.DMatrix: saving, loading", {
# from a libsvm text file # from a libsvm text file
tmp <- c("0 1:1 2:1", "1 3:1", "0 1:1") tmp <- c("0 1:1 2:1", "1 3:1", "0 1:1")
tmp_file <- 'tmp.libsvm' tmp_file <- tempfile(fileext = ".libsvm")
writeLines(tmp, tmp_file) writeLines(tmp, tmp_file)
dtest4 <- xgb.DMatrix(tmp_file, silent = TRUE) expect_true(file.exists(tmp_file))
dtest4 <- xgb.DMatrix(paste(tmp_file, "?format=libsvm", sep = ""), silent = TRUE)
expect_equal(dim(dtest4), c(3, 4)) expect_equal(dim(dtest4), c(3, 4))
expect_equal(getinfo(dtest4, 'label'), c(0, 1, 0)) expect_equal(getinfo(dtest4, 'label'), c(0, 1, 0))
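
The user-visible change in this hunk is the URI: text inputs now carry an explicit format hint instead of the format being guessed from the file name. A minimal standalone sketch of the same idea (file contents taken from the test above):

    library(xgboost)

    tmp_file <- tempfile(fileext = ".libsvm")
    writeLines(c("0 1:1 2:1", "1 3:1", "0 1:1"), tmp_file)

    # the "?format=libsvm" suffix tells XGBoost how to parse the file
    dtrain <- xgb.DMatrix(paste0(tmp_file, "?format=libsvm"))
    dim(dtrain)  # 3 x 4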
@@ -53,7 +123,7 @@ test_that("xgb.DMatrix: saving, loading", {
   dtrain <- xgb.DMatrix(tmp_file)
   expect_equal(colnames(dtrain), cnames)
-  ft <- rep(c("c", "q"), each=length(cnames)/2)
+  ft <- rep(c("c", "q"), each = length(cnames) / 2)
   setinfo(dtrain, "feature_type", ft)
   expect_equal(ft, getinfo(dtrain, "feature_type"))
 })
@@ -123,9 +193,62 @@ test_that("xgb.DMatrix: colnames", {
 test_that("xgb.DMatrix: nrow is correct for a very sparse matrix", {
   set.seed(123)
   nr <- 1000
-  x <- rsparsematrix(nr, 100, density = 0.0005)
+  x <- Matrix::rsparsematrix(nr, 100, density = 0.0005)
   # we want it very sparse, so that last rows are empty
   expect_lt(max(x@i), nr)
   dtest <- xgb.DMatrix(x)
   expect_equal(dim(dtest), dim(x))
 })
+
+test_that("xgb.DMatrix: print", {
+  data(agaricus.train, package = 'xgboost')
+
+  # core DMatrix with just data and labels
+  dtrain <- xgb.DMatrix(
+    data = agaricus.train$data
+    , label = agaricus.train$label
+  )
+  txt <- capture.output({
+    print(dtrain)
+  })
+  expect_equal(txt, "xgb.DMatrix dim: 6513 x 126 info: label colnames: yes")
+
+  # verbose=TRUE prints feature names
+  txt <- capture.output({
+    print(dtrain, verbose = TRUE)
+  })
+  expect_equal(txt[[1L]], "xgb.DMatrix dim: 6513 x 126 info: label colnames:")
+  expect_equal(txt[[2L]], sprintf("'%s'", paste(colnames(dtrain), collapse = "','")))
+
+  # DMatrix with weights and base_margin
+  dtrain <- xgb.DMatrix(
+    data = agaricus.train$data
+    , label = agaricus.train$label
+    , weight = seq_along(agaricus.train$label)
+    , base_margin = agaricus.train$label
+  )
+  txt <- capture.output({
+    print(dtrain)
+  })
+  expect_equal(txt, "xgb.DMatrix dim: 6513 x 126 info: label weight base_margin colnames: yes")
+
+  # DMatrix with just features
+  dtrain <- xgb.DMatrix(
+    data = agaricus.train$data
+  )
+  txt <- capture.output({
+    print(dtrain)
+  })
+  expect_equal(txt, "xgb.DMatrix dim: 6513 x 126 info: NA colnames: yes")
+
+  # DMatrix with no column names
+  data_no_colnames <- agaricus.train$data
+  colnames(data_no_colnames) <- NULL
+  dtrain <- xgb.DMatrix(
+    data = data_no_colnames
+  )
+  txt <- capture.output({
+    print(dtrain)
+  })
+  expect_equal(txt, "xgb.DMatrix dim: 6513 x 126 info: NA colnames: no")
+})


@@ -1,5 +1,3 @@
-library(xgboost)
-
 context("feature weights")
 test_that("training with feature weights works", {


@@ -1,5 +1,3 @@
-require(xgboost)
-
 context("Garbage Collection Safety Check")
 test_that("train and prediction when gctorture is on", {


@@ -1,7 +1,5 @@
 context('Test generalized linear models')
-require(xgboost)
-
 test_that("gblinear works", {
   data(agaricus.train, package = 'xgboost')
   data(agaricus.test, package = 'xgboost')


@@ -1,10 +1,11 @@
-library(testthat)
 context('Test helper functions')
-require(xgboost)
-require(data.table)
-require(Matrix)
-require(vcd, quietly = TRUE)
+VCD_AVAILABLE <- requireNamespace("vcd", quietly = TRUE)
+.skip_if_vcd_not_available <- function() {
+  if (!VCD_AVAILABLE) {
+    testthat::skip("Optional testing dependency 'vcd' not found.")
+  }
+}
 float_tolerance <- 5e-6
@@ -12,25 +13,30 @@ float_tolerance <- 5e-6
 flag_32bit <- .Machine$sizeof.pointer != 8
 set.seed(1982)
-data(Arthritis)
-df <- data.table(Arthritis, keep.rownames = FALSE)
-df[, AgeDiscret := as.factor(round(Age / 10, 0))]
-df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]
-df[, ID := NULL]
-sparse_matrix <- sparse.model.matrix(Improved~.-1, data = df) # nolint
-label <- df[, ifelse(Improved == "Marked", 1, 0)]
-# binary
 nrounds <- 12
-bst.Tree <- xgboost(data = sparse_matrix, label = label, max_depth = 9,
-                    eta = 1, nthread = 2, nrounds = nrounds, verbose = 0,
-                    objective = "binary:logistic", booster = "gbtree")
-bst.GLM <- xgboost(data = sparse_matrix, label = label,
-                   eta = 1, nthread = 1, nrounds = nrounds, verbose = 0,
-                   objective = "binary:logistic", booster = "gblinear")
-feature.names <- colnames(sparse_matrix)
+if (isTRUE(VCD_AVAILABLE)) {
+  data(Arthritis, package = "vcd")
+  df <- data.table::data.table(Arthritis, keep.rownames = FALSE)
+  df[, AgeDiscret := as.factor(round(Age / 10, 0))]
+  df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]
+  df[, ID := NULL]
+  sparse_matrix <- Matrix::sparse.model.matrix(Improved~.-1, data = df) # nolint
+  label <- df[, ifelse(Improved == "Marked", 1, 0)]
+  # binary
+  bst.Tree <- xgboost(data = sparse_matrix, label = label, max_depth = 9,
+                      eta = 1, nthread = 2, nrounds = nrounds, verbose = 0,
+                      objective = "binary:logistic", booster = "gbtree",
+                      base_score = 0.5)
+  bst.GLM <- xgboost(data = sparse_matrix, label = label,
+                     eta = 1, nthread = 1, nrounds = nrounds, verbose = 0,
+                     objective = "binary:logistic", booster = "gblinear",
+                     base_score = 0.5)
+  feature.names <- colnames(sparse_matrix)
+}
 # multiclass
 mlabel <- as.numeric(iris$Species) - 1
@@ -45,6 +51,7 @@ mbst.GLM <- xgboost(data = as.matrix(iris[, -5]), label = mlabel, verbose = 0,
 test_that("xgb.dump works", {
+  .skip_if_vcd_not_available()
   if (!flag_32bit)
     expect_length(xgb.dump(bst.Tree), 200)
   dump_file <- file.path(tempdir(), 'xgb.model.dump')
@@ -56,10 +63,11 @@ test_that("xgb.dump works", {
   dmp <- xgb.dump(bst.Tree, dump_format = "json")
   expect_length(dmp, 1)
   if (!flag_32bit)
-    expect_length(grep('nodeid', strsplit(dmp, '\n')[[1]]), 188)
+    expect_length(grep('nodeid', strsplit(dmp, '\n', fixed = TRUE)[[1]], fixed = TRUE), 188)
 })
 test_that("xgb.dump works for gblinear", {
+  .skip_if_vcd_not_available()
   expect_length(xgb.dump(bst.GLM), 14)
   # also make sure that it works properly for a sparse model where some coefficients
   # are 0 from setting large L1 regularization:
@@ -72,10 +80,11 @@ test_that("xgb.dump works for gblinear", {
   # JSON format
   dmp <- xgb.dump(bst.GLM.sp, dump_format = "json")
   expect_length(dmp, 1)
-  expect_length(grep('\\d', strsplit(dmp, '\n')[[1]]), 11)
+  expect_length(grep('\\d', strsplit(dmp, '\n', fixed = TRUE)[[1]]), 11)
 })
 test_that("predict leafs works", {
+  .skip_if_vcd_not_available()
   # no error for gbtree
   expect_error(pred_leaf <- predict(bst.Tree, sparse_matrix, predleaf = TRUE), regexp = NA)
   expect_equal(dim(pred_leaf), c(nrow(sparse_matrix), nrounds))
@@ -84,6 +93,7 @@ test_that("predict leafs works", {
 })
 test_that("predict feature contributions works", {
+  .skip_if_vcd_not_available()
   # gbtree binary classifier
   expect_error(pred_contr <- predict(bst.Tree, sparse_matrix, predcontrib = TRUE), regexp = NA)
   expect_equal(dim(pred_contr), c(nrow(sparse_matrix), ncol(sparse_matrix) + 1))
@@ -170,15 +180,16 @@ test_that("SHAPs sum to predictions, with or without DART", {
                  label = y,
                  nrounds = nrounds)
-  pr <- function(...)
+  pr <- function(...) {
     predict(fit, newdata = d, ...)
+  }
   pred <- pr()
   shap <- pr(predcontrib = TRUE)
   shapi <- pr(predinteraction = TRUE)
   tol <- 1e-5
   expect_equal(rowSums(shap), pred, tol = tol)
-  expect_equal(apply(shapi, 1, sum), pred, tol = tol)
+  expect_equal(rowSums(shapi), pred, tol = tol)
   for (i in seq_len(nrow(d)))
     for (f in list(rowSums, colSums))
       expect_equal(f(shapi[i, , ]), shap[i, ], tol = tol)
@@ -186,6 +197,7 @@
 })
 test_that("xgb-attribute functionality", {
+  .skip_if_vcd_not_available()
   val <- "my attribute value"
   list.val <- list(my_attr = val, a = 123, b = 'ok')
   list.ch <- list.val[order(names(list.val))]
@@ -219,10 +231,11 @@ test_that("xgb-attribute functionality", {
   expect_null(xgb.attributes(bst))
 })
-if (grepl('Windows', Sys.info()[['sysname']]) ||
-    grepl('Linux', Sys.info()[['sysname']]) ||
-    grepl('Darwin', Sys.info()[['sysname']])) {
+if (grepl('Windows', Sys.info()[['sysname']], fixed = TRUE) ||
+    grepl('Linux', Sys.info()[['sysname']], fixed = TRUE) ||
+    grepl('Darwin', Sys.info()[['sysname']], fixed = TRUE)) {
   test_that("xgb-attribute numeric precision", {
+    .skip_if_vcd_not_available()
     # check that lossless conversion works with 17 digits
     # numeric -> character -> numeric
     X <- 10^runif(100, -20, 20)
@@ -241,6 +254,7 @@ if (grepl('Windows', Sys.info()[['sysname']]) ||
 }
 test_that("xgb.Booster serializing as R object works", {
+  .skip_if_vcd_not_available()
   saveRDS(bst.Tree, 'xgb.model.rds')
   bst <- readRDS('xgb.model.rds')
   dtrain <- xgb.DMatrix(sparse_matrix, label = label)
@@ -259,6 +273,7 @@ test_that("xgb.Booster serializing as R object works", {
 })
 test_that("xgb.model.dt.tree works with and without feature names", {
+  .skip_if_vcd_not_available()
   names.dt.trees <- c("Tree", "Node", "ID", "Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover")
   dt.tree <- xgb.model.dt.tree(feature_names = feature.names, model = bst.Tree)
   expect_equal(names.dt.trees, names(dt.tree))
@@ -278,16 +293,18 @@ test_that("xgb.model.dt.tree works with and without feature names", {
   # using integer node ID instead of character
   dt.tree.int <- xgb.model.dt.tree(model = bst.Tree, use_int_id = TRUE)
-  expect_equal(as.integer(tstrsplit(dt.tree$Yes, '-')[[2]]), dt.tree.int$Yes)
-  expect_equal(as.integer(tstrsplit(dt.tree$No, '-')[[2]]), dt.tree.int$No)
-  expect_equal(as.integer(tstrsplit(dt.tree$Missing, '-')[[2]]), dt.tree.int$Missing)
+  expect_equal(as.integer(data.table::tstrsplit(dt.tree$Yes, '-', fixed = TRUE)[[2]]), dt.tree.int$Yes)
+  expect_equal(as.integer(data.table::tstrsplit(dt.tree$No, '-', fixed = TRUE)[[2]]), dt.tree.int$No)
+  expect_equal(as.integer(data.table::tstrsplit(dt.tree$Missing, '-', fixed = TRUE)[[2]]), dt.tree.int$Missing)
 })
 test_that("xgb.model.dt.tree throws error for gblinear", {
+  .skip_if_vcd_not_available()
   expect_error(xgb.model.dt.tree(model = bst.GLM))
 })
 test_that("xgb.importance works with and without feature names", {
+  .skip_if_vcd_not_available()
   importance.Tree <- xgb.importance(feature_names = feature.names, model = bst.Tree)
   if (!flag_32bit)
     expect_equal(dim(importance.Tree), c(7, 4))
@@ -345,7 +362,8 @@ test_that("xgb.importance works with and without feature names", {
   m <- xgboost::xgboost(
     data = as.matrix(data.frame(x = c(0, 1))),
     label = c(1, 2),
-    nrounds = 1
+    nrounds = 1,
+    base_score = 0.5
   )
   df <- xgb.model.dt.tree(model = m)
   expect_equal(df$Feature, "Leaf")
@@ -353,6 +371,7 @@
 })
 test_that("xgb.importance works with GLM model", {
+  .skip_if_vcd_not_available()
   importance.GLM <- xgb.importance(feature_names = feature.names, model = bst.GLM)
   expect_equal(dim(importance.GLM), c(10, 2))
   expect_equal(colnames(importance.GLM), c("Feature", "Weight"))
@@ -368,6 +387,7 @@
 })
 test_that("xgb.model.dt.tree and xgb.importance work with a single split model", {
+  .skip_if_vcd_not_available()
   bst1 <- xgboost(data = sparse_matrix, label = label, max_depth = 1,
                   eta = 1, nthread = 2, nrounds = 1, verbose = 0,
                   objective = "binary:logistic")
@@ -379,16 +399,19 @@ test_that("xgb.model.dt.tree and xgb.importance work with a single split model",
 })
 test_that("xgb.plot.tree works with and without feature names", {
+  .skip_if_vcd_not_available()
   expect_silent(xgb.plot.tree(feature_names = feature.names, model = bst.Tree))
   expect_silent(xgb.plot.tree(model = bst.Tree))
 })
 test_that("xgb.plot.multi.trees works with and without feature names", {
+  .skip_if_vcd_not_available()
   xgb.plot.multi.trees(model = bst.Tree, feature_names = feature.names, features_keep = 3)
   xgb.plot.multi.trees(model = bst.Tree, features_keep = 3)
 })
 test_that("xgb.plot.deepness works", {
+  .skip_if_vcd_not_available()
   d2p <- xgb.plot.deepness(model = bst.Tree)
   expect_equal(colnames(d2p), c("ID", "Tree", "Depth", "Cover", "Weight"))
   xgb.plot.deepness(model = bst.Tree, which = "med.depth")
@@ -396,6 +419,7 @@
 })
 test_that("xgb.shap.data works when top_n is provided", {
+  .skip_if_vcd_not_available()
   data_list <- xgb.shap.data(data = sparse_matrix, model = bst.Tree, top_n = 2)
   expect_equal(names(data_list), c("data", "shap_contrib"))
   expect_equal(NCOL(data_list$data), 2)
@@ -413,12 +437,14 @@
 })
 test_that("xgb.shap.data works with subsampling", {
+  .skip_if_vcd_not_available()
   data_list <- xgb.shap.data(data = sparse_matrix, model = bst.Tree, top_n = 2, subsample = 0.8)
   expect_equal(NROW(data_list$data), as.integer(0.8 * nrow(sparse_matrix)))
   expect_equal(NROW(data_list$data), NROW(data_list$shap_contrib))
 })
 test_that("prepare.ggplot.shap.data works", {
+  .skip_if_vcd_not_available()
   data_list <- xgb.shap.data(data = sparse_matrix, model = bst.Tree, top_n = 2)
   plot_data <- prepare.ggplot.shap.data(data_list, normalize = TRUE)
   expect_s3_class(plot_data, "data.frame")
@@ -429,17 +455,19 @@
 })
 test_that("xgb.plot.shap works", {
+  .skip_if_vcd_not_available()
   sh <- xgb.plot.shap(data = sparse_matrix, model = bst.Tree, top_n = 2, col = 4)
   expect_equal(names(sh), c("data", "shap_contrib"))
 })
 test_that("xgb.plot.shap.summary works", {
+  .skip_if_vcd_not_available()
   expect_silent(xgb.plot.shap.summary(data = sparse_matrix, model = bst.Tree, top_n = 2))
   expect_silent(xgb.ggplot.shap.summary(data = sparse_matrix, model = bst.Tree, top_n = 2))
 })
 test_that("check.deprecation works", {
-  ttt <- function(a = NNULL, DUMMY=NULL, ...) {
+  ttt <- function(a = NNULL, DUMMY = NULL, ...) {
     check.deprecation(...)
     as.list((environment()))
   }
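
The `.skip_if_vcd_not_available()` helper above is a reusable pattern for optional test dependencies: probe once at load time with `requireNamespace()`, then skip cleanly inside each test that needs the package. A minimal standalone sketch of the same pattern (the package name `ggplot2` here is purely illustrative and not part of this diff):

```r
# Sketch of the optional-dependency pattern from the diff above.
# 'ggplot2' is an illustrative package name, not part of the actual change.
GGPLOT2_AVAILABLE <- requireNamespace("ggplot2", quietly = TRUE)
.skip_if_ggplot2_not_available <- function() {
  if (!GGPLOT2_AVAILABLE) {
    testthat::skip("Optional testing dependency 'ggplot2' not found.")
  }
}

testthat::test_that("a test that needs ggplot2", {
  .skip_if_ggplot2_not_available()
  # ... assertions that may use ggplot2 go here ...
  testthat::expect_true(TRUE)
})
```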

View File

@@ -17,7 +17,7 @@ test_that("interaction constraints for regression", {
   # Set all observations to have the same x3 values then increment
   # by the same amount
-  preds <- lapply(c(1, 2, 3), function(x){
+  preds <- lapply(c(1, 2, 3), function(x) {
     tmat <- matrix(c(x1, x2, rep(x, 1000)), ncol = 3)
     return(predict(bst, tmat))
   })
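
For context, this test file exercises XGBoost's `interaction_constraints` parameter, which takes groups of 0-based feature indices that are allowed to interact with one another. A hedged sketch under synthetic data (the variables below are made-up stand-ins, not the test's actual fixtures):

```r
# Hypothetical sketch of the parameter this test file exercises.
# x1, x2, x3 and y are synthetic stand-ins for illustration only.
library(xgboost)
set.seed(1)
x1 <- rnorm(1000)
x2 <- rnorm(1000)
x3 <- sample(c(1, 2, 3), size = 1000, replace = TRUE)
y <- x1 + x2 + x3 + x1 * x2 + rnorm(1000)
train <- matrix(c(x1, x2, x3), ncol = 3)

bst <- xgboost(
  data = train
  , label = y
  , nrounds = 10
  , verbose = 0
  , objective = "reg:squarederror"
  # columns 0 and 1 may interact with each other; column 2 is kept isolated
  , interaction_constraints = list(c(0, 1), c(2))
)
```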

View File

@@ -1,7 +1,5 @@
 context('Test prediction of feature interactions')
-require(xgboost)
 set.seed(123)
 test_that("predict feature interactions works", {

View File

@@ -1,7 +1,4 @@
 context("Test model IO.")
-## some other tests are in test_basic.R
-require(xgboost)
-require(testthat)
 data(agaricus.train, package = "xgboost")
 data(agaricus.test, package = "xgboost")

View File

@@ -1,6 +1,3 @@
-require(xgboost)
-require(jsonlite)
 context("Models from previous versions of XGBoost can be loaded")
 metadata <- list(
@@ -62,11 +59,12 @@ test_that("Models from previous versions of XGBoost can be loaded", {
   bucket <- 'xgboost-ci-jenkins-artifacts'
   region <- 'us-west-2'
   file_name <- 'xgboost_r_model_compatibility_test.zip'
-  zipfile <- file.path(getwd(), file_name)
-  model_dir <- file.path(getwd(), 'models')
+  zipfile <- tempfile(fileext = ".zip")
+  extract_dir <- tempdir()
   download.file(paste('https://', bucket, '.s3-', region, '.amazonaws.com/', file_name, sep = ''),
                 destfile = zipfile, mode = 'wb', quiet = TRUE)
-  unzip(zipfile, overwrite = TRUE)
+  unzip(zipfile, exdir = extract_dir, overwrite = TRUE)
+  model_dir <- file.path(extract_dir, 'models')
   pred_data <- xgb.DMatrix(matrix(c(0, 0, 0, 0), nrow = 1, ncol = 4))
@@ -78,32 +76,20 @@ test_that("Models from previous versions of XGBoost can be loaded", {
     name <- m[3]
     is_rds <- endsWith(model_file, '.rds')
     is_json <- endsWith(model_file, '.json')
-    cpp_warning <- capture.output({
-      # Expect an R warning when a model is loaded from RDS and it was generated by version < 1.1.x
-      if (is_rds && compareVersion(model_xgb_ver, '1.1.1.1') < 0) {
-        booster <- readRDS(model_file)
-        expect_warning(predict(booster, newdata = pred_data))
-        booster <- readRDS(model_file)
-        expect_warning(run_booster_check(booster, name))
-      } else {
-        if (is_rds) {
-          booster <- readRDS(model_file)
-        } else {
-          booster <- xgb.load(model_file)
-        }
-        predict(booster, newdata = pred_data)
-        run_booster_check(booster, name)
-      }
-    })
-    cpp_warning <- paste0(cpp_warning, collapse = ' ')
-    if (is_rds && compareVersion(model_xgb_ver, '1.1.1.1') >= 0) {
-      # Expect a C++ warning when a model is loaded from RDS and it was generated by old XGBoost`
-      m <- grepl(paste0('.*If you are loading a serialized model ',
-                        '\\(like pickle in Python, RDS in R\\).*',
-                        'for more details about differences between ',
-                        'saving model and serializing.*'), cpp_warning, perl = TRUE)
-      expect_true(length(m) > 0 && all(m))
-    }
+    # Expect an R warning when a model is loaded from RDS and it was generated by version < 1.1.x
+    if (is_rds && compareVersion(model_xgb_ver, '1.1.1.1') < 0) {
+      booster <- readRDS(model_file)
+      expect_warning(predict(booster, newdata = pred_data))
+      booster <- readRDS(model_file)
+      expect_warning(run_booster_check(booster, name))
+    } else {
+      if (is_rds) {
+        booster <- readRDS(model_file)
+      } else {
+        booster <- xgb.load(model_file)
+      }
+      predict(booster, newdata = pred_data)
+      run_booster_check(booster, name)
+    }
   })
 })
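
As an aside, the version gating above hinges on base R's `compareVersion()`, which returns -1, 0, or 1; a quick illustration of how the `< 0` branch reads:

```r
# compareVersion() returns -1, 0 or 1, so `< 0` above means
# "the model was produced by something older than 1.1.1.1".
compareVersion("1.0.0.2", "1.1.1.1")  # -1: old model, expect the R warning
compareVersion("2.0.0", "1.1.1.1")    #  1: recent model, load normally
```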

View File

@@ -1,5 +1,3 @@
-require(xgboost)
 context("monotone constraints")
 set.seed(1024)

View File

@@ -1,7 +1,5 @@
 context('Test model params and call are exposed to R')
-require(xgboost)
 data(agaricus.train, package = 'xgboost')
 data(agaricus.test, package = 'xgboost')

View File

@@ -1,6 +1,5 @@
 context('Test Poisson regression model')
-require(xgboost)
 set.seed(1994)
 test_that("Poisson regression works", {

View File

@@ -1,12 +1,12 @@
-require(xgboost)
-require(Matrix)
 context('Learning to rank')
 test_that('Test ranking with unweighted data', {
-  X <- sparseMatrix(i = c(2, 3, 7, 9, 12, 15, 17, 18),
-                    j = c(1, 1, 2, 2, 3, 3, 4, 4),
-                    x = rep(1.0, 8), dims = c(20, 4))
+  X <- Matrix::sparseMatrix(
+    i = c(2, 3, 7, 9, 12, 15, 17, 18)
+    , j = c(1, 1, 2, 2, 3, 3, 4, 4)
+    , x = rep(1.0, 8)
+    , dims = c(20, 4)
+  )
   y <- c(0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0)
   group <- c(5, 5, 5, 5)
   dtrain <- xgb.DMatrix(X, label = y, group = group)
@@ -20,9 +20,12 @@ test_that('Test ranking with unweighted data', {
 })
 test_that('Test ranking with weighted data', {
-  X <- sparseMatrix(i = c(2, 3, 7, 9, 12, 15, 17, 18),
-                    j = c(1, 1, 2, 2, 3, 3, 4, 4),
-                    x = rep(1.0, 8), dims = c(20, 4))
+  X <- Matrix::sparseMatrix(
+    i = c(2, 3, 7, 9, 12, 15, 17, 18)
+    , j = c(1, 1, 2, 2, 3, 3, 4, 4)
+    , x = rep(1.0, 8)
+    , dims = c(20, 4)
+  )
   y <- c(0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0)
   group <- c(5, 5, 5, 5)
   weight <- c(1.0, 2.0, 3.0, 4.0)
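
In these ranking tests, `group = c(5, 5, 5, 5)` partitions the 20 rows into four queries of five documents each; ranking objectives compare labels only within a query, never across queries. A hedged sketch of how such a grouped `xgb.DMatrix` is typically trained (the parameter choices below are illustrative, not the test's own):

```r
# Illustrative only: train a pairwise ranker on a grouped DMatrix like the
# one built above; labels are compared within each group of 5 rows.
params <- list(objective = "rank:pairwise", eta = 1, nthread = 2)
bst_rank <- xgb.train(params, dtrain, nrounds = 10)
scores <- predict(bst_rank, dtrain)  # higher score = ranked higher in its query
```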

View File

@@ -0,0 +1,21 @@
+context("Test Unicode handling")
+
+data(agaricus.train, package = 'xgboost')
+data(agaricus.test, package = 'xgboost')
+train <- agaricus.train
+test <- agaricus.test
+set.seed(1994)
+
+test_that("Can save and load models with Unicode paths", {
+  nrounds <- 2
+  bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
+                 eta = 1, nthread = 2, nrounds = nrounds, objective = "binary:logistic",
+                 eval_metric = "error")
+  tmpdir <- tempdir()
+  lapply(c("모델.json", "がうる・ぐら.json", "类继承.ubj"), function(x) {
+    path <- file.path(tmpdir, x)
+    xgb.save(bst, path)
+    bst2 <- xgb.load(path)
+    expect_equal(predict(bst, test$data), predict(bst2, test$data))
+  })
+})

View File

@@ -1,5 +1,3 @@
-require(xgboost)
 context("update trees in an existing model")
 data(agaricus.train, package = 'xgboost')
@@ -15,7 +13,10 @@ test_that("updating the model works", {
   watchlist <- list(train = dtrain, test = dtest)
   # no-subsampling
-  p1 <- list(objective = "binary:logistic", max_depth = 2, eta = 0.05, nthread = 2)
+  p1 <- list(
+    objective = "binary:logistic", max_depth = 2, eta = 0.05, nthread = 2,
+    updater = "grow_colmaker,prune"
+  )
   set.seed(11)
   bst1 <- xgb.train(p1, dtrain, nrounds = 10, watchlist, verbose = 0)
   tr1 <- xgb.model.dt.tree(model = bst1)
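
Pinning `updater = "grow_colmaker,prune"` keeps the baseline model deterministic; the complementary mechanism this test suite exercises is re-running an existing model's trees over data with `process_type = "update"`. A hedged sketch, assuming the `dtrain` and `bst1` objects from the test above:

```r
# Hedged sketch: refresh the leaf statistics of an already-grown model on
# (possibly new) data instead of growing new trees. Assumes dtrain and bst1.
p_refresh <- list(
  objective = "binary:logistic", nthread = 2,
  process_type = "update", updater = "refresh", refresh_leaf = TRUE
)
bst_refreshed <- xgb.train(p_refresh, dtrain, nrounds = 10, xgb_model = bst1)
```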

View File

@@ -28,7 +28,9 @@ Package loading:
 require(xgboost)
 require(Matrix)
 require(data.table)
-if (!require('vcd')) install.packages('vcd')
+if (!require('vcd')) {
+  install.packages('vcd')
+}
 ```
 > **VCD** package is used for one of its embedded dataset only.
@@ -49,24 +51,24 @@ A *categorical* variable has a fixed number of different values. For instance, i
 >
 > Type `?factor` in the console for more information.
-To answer the question above we will convert *categorical* variables to `numeric` one.
+To answer the question above we will convert *categorical* variables to `numeric` ones.
 ### Conversion from categorical to numeric variables
 #### Looking at the raw data
-In this Vignette we will see how to transform a *dense* `data.frame` (*dense* = few zeroes in the matrix) with *categorical* variables to a very *sparse* matrix (*sparse* = lots of zero in the matrix) of `numeric` features.
+In this Vignette we will see how to transform a *dense* `data.frame` (*dense* = the majority of the matrix is non-zero) with *categorical* variables to a very *sparse* matrix (*sparse* = lots of zero entries in the matrix) of `numeric` features.
 The method we are going to see is usually called [one-hot encoding](https://en.wikipedia.org/wiki/One-hot).
-The first step is to load `Arthritis` dataset in memory and wrap it with `data.table` package.
+The first step is to load the `Arthritis` dataset in memory and wrap it with the `data.table` package.
 ```{r, results='hide'}
 data(Arthritis)
 df <- data.table(Arthritis, keep.rownames = FALSE)
 ```
-> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance for large dataset is [best in class](https://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `Pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of **XGBoost** **R** package use `data.table`.
+> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance for large dataset is [best in class](https://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `Pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of **XGBoost's** **R** package use `data.table`.
 The first thing we want to do is to have a look to the first few lines of the `data.table`:
@@ -93,22 +95,22 @@ We will add some new *categorical* features to see if it helps.
 ##### Grouping per 10 years
-For the first feature we create groups of age by rounding the real age.
+For the first features we create groups of age by rounding the real age.
-Note that we transform it to `factor` so the algorithm treat these age groups as independent values.
+Note that we transform it to `factor` so the algorithm treats these age groups as independent values.
-Therefore, 20 is not closer to 30 than 60. To make it short, the distance between ages is lost in this transformation.
+Therefore, 20 is not closer to 30 than 60. In other words, the distance between ages is lost in this transformation.
 ```{r}
-head(df[,AgeDiscret := as.factor(round(Age/10,0))])
+head(df[, AgeDiscret := as.factor(round(Age / 10, 0))])
 ```
-##### Random split into two groups
+##### Randomly split into two groups
-Following is an even stronger simplification of the real age with an arbitrary split at 30 years old. We choose this value **based on nothing**. We will see later if simplifying the information based on arbitrary values is a good strategy (you may already have an idea of how well it will work...).
+The following is an even stronger simplification of the real age with an arbitrary split at 30 years old. I choose this value **based on nothing**. We will see later if simplifying the information based on arbitrary values is a good strategy (you may already have an idea of how well it will work...).
 ```{r}
-head(df[,AgeCat:= as.factor(ifelse(Age > 30, "Old", "Young"))])
+head(df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))])
 ```
 ##### Risks in adding correlated features
@@ -117,20 +119,20 @@ These new features are highly correlated to the `Age` feature because they are s
 For many machine learning algorithms, using correlated features is not a good idea. It may sometimes make prediction less accurate, and most of the time make interpretation of the model almost impossible. GLM, for instance, assumes that the features are uncorrelated.
-Fortunately, decision tree algorithms (including boosted trees) are very robust to these features. Therefore we have nothing to do to manage this situation.
+Fortunately, decision tree algorithms (including boosted trees) are very robust to these features. Therefore we don't have to do anything to manage this situation.
 ##### Cleaning data
 We remove ID as there is nothing to learn from this feature (it would just add some noise).
 ```{r, results='hide'}
-df[,ID:=NULL]
+df[, ID := NULL]
 ```
 We will list the different values for the column `Treatment`:
 ```{r}
-levels(df[,Treatment])
+levels(df[, Treatment])
 ```
@@ -142,12 +144,12 @@ We will use the [dummy contrast coding](https://stats.oarc.ucla.edu/r/library/r-
 The purpose is to transform each value of each *categorical* feature into a *binary* feature `{0, 1}`.
-For example, the column `Treatment` will be replaced by two columns, `TreatmentPlacebo`, and `TreatmentTreated`. Each of them will be *binary*. Therefore, an observation which has the value `Placebo` in column `Treatment` before the transformation will have after the transformation the value `1` in the new column `TreatmentPlacebo` and the value `0` in the new column `TreatmentTreated`. The column `TreatmentPlacebo` will disappear during the contrast encoding, as it would be absorbed into a common constant intercept column.
+For example, the column `Treatment` will be replaced by two columns, `TreatmentPlacebo`, and `TreatmentTreated`. Each of them will be *binary*. Therefore, an observation which has the value `Placebo` in column `Treatment` before the transformation will have the value `1` in the new column `TreatmentPlacebo` and the value `0` in the new column `TreatmentTreated` after the transformation. The column `TreatmentPlacebo` will disappear during the contrast encoding, as it would be absorbed into a common constant intercept column.
 Column `Improved` is excluded because it will be our `label` column, the one we want to predict.
 ```{r, warning=FALSE,message=FALSE}
-sparse_matrix <- sparse.model.matrix(Improved ~ ., data = df)[,-1]
+sparse_matrix <- sparse.model.matrix(Improved ~ ., data = df)[, -1]
 head(sparse_matrix)
 ```
@@ -156,7 +158,7 @@ head(sparse_matrix)
 Create the output `numeric` vector (not as a sparse `Matrix`):
 ```{r}
-output_vector = df[,Improved] == "Marked"
+output_vector <- df[, Improved] == "Marked"
 ```
 1. set `Y` vector to `0`;
@@ -170,17 +172,13 @@ The code below is very usual. For more information, you can look at the document
 ```{r}
 bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 4,
-               eta = 1, nthread = 2, nrounds = 10,objective = "binary:logistic")
+               eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic")
 ```
-You can see some `train-error: 0.XXXXX` lines followed by a number. It decreases. Each line shows how well the model explains your data. Lower is better.
+You can see some `train-logloss: 0.XXXXX` lines followed by a number. It decreases. Each line shows how well the model explains the data. Lower is better.
-A small value for training error may be a symptom of [overfitting](https://en.wikipedia.org/wiki/Overfitting), meaning the model will not accurately predict the future values.
+A small value for training error may be a symptom of [overfitting](https://en.wikipedia.org/wiki/Overfitting), meaning the model will not accurately predict unseen values.
-> Here you can see the numbers decrease until line 7 and then increase.
->
-> It probably means we are overfitting. To fix that I should reduce the number of rounds to `nrounds = 4`. I will let things like that because I don't really care for the purpose of this example :-)
 Feature importance
 ------------------
@@ -197,64 +195,35 @@ importance <- xgb.importance(feature_names = colnames(sparse_matrix), model = bs
 head(importance)
 ```
-> The column `Gain` provide the information we are looking for.
+> The column `Gain` provides the information we are looking for.
 >
 > As you can see, features are classified by `Gain`.
-`Gain` is the improvement in accuracy brought by a feature to the branches it is on. The idea is that before adding a new split on a feature X to the branch there was some wrongly classified elements, after adding the split on this feature, there are two new branches, and each of these branch is more accurate (one branch saying if your observation is on this branch then it should be classified as `1`, and the other branch saying the exact opposite).
+`Gain` is the improvement in accuracy brought by a feature to the branches it is on. The idea is that before adding a new split on a feature X to the branch there were some wrongly classified elements; after adding the split on this feature, there are two new branches, and each of these branches is more accurate (one branch saying if your observation is on this branch then it should be classified as `1`, and the other branch saying the exact opposite).
-`Cover` measures the relative quantity of observations concerned by a feature.
+`Cover` is related to the second order derivative (or Hessian) of the loss function with respect to a particular variable; thus, a large value indicates a variable has a large potential impact on the loss function and so is important.
 `Frequency` is a simpler way to measure the `Gain`. It just counts the number of times a feature is used in all generated trees. You should not use it (unless you know why you want to use it).
-#### Improvement in the interpretability of feature importance data.table
-We can go deeper in the analysis of the model. In the `data.table` above, we have discovered which features counts to predict if the illness will go or not. But we don't yet know the role of these features. For instance, one of the question we may want to answer would be: does receiving a placebo treatment helps to recover from the illness?
-One simple solution is to count the co-occurrences of a feature and a class of the classification.
-For that purpose we will execute the same function as above but using two more parameters, `data` and `label`.
-```{r}
-importanceRaw <- xgb.importance(feature_names = colnames(sparse_matrix), model = bst, data = sparse_matrix, label = output_vector)
-# Cleaning for better display
-importanceClean <- importanceRaw[,`:=`(Cover=NULL, Frequency=NULL)]
-head(importanceClean)
-```
-> In the table above we have removed two not needed columns and select only the first lines.
-First thing you notice is the new column `Split`. It is the split applied to the feature on a branch of one of the tree. Each split is present, therefore a feature can appear several times in this table. Here we can see the feature `Age` is used several times with different splits.
-How the split is applied to count the co-occurrences? It is always `<`. For instance, in the second line, we measure the number of persons under 61.5 years with the illness gone after the treatment.
-The two other new columns are `RealCover` and `RealCover %`. In the first column it measures the number of observations in the dataset where the split is respected and the label marked as `1`. The second column is the percentage of the whole population that `RealCover` represents.
-Therefore, according to our findings, getting a placebo doesn't seem to help but being younger than 61 years may help (seems logic).
-> You may wonder how to interpret the `< 1.00001` on the first line. Basically, in a sparse `Matrix`, there is no `0`, therefore, looking for one hot-encoded categorical observations validating the rule `< 1.00001` is like just looking for `1` for this feature.
 ### Plotting the feature importance
 All these things are nice, but it would be even better to plot the results.
 ```{r, fig.width=8, fig.height=5, fig.align='center'}
 xgb.plot.importance(importance_matrix = importance)
 ```
-Feature have automatically been divided in 2 clusters: the interesting features... and the others.
+Running this line of code, you should get a bar chart showing the importance of the 6 features (containing the same data as the output we saw earlier, but displaying it visually for easier consumption). Note that `xgb.ggplot.importance` is also available for all the ggplot2 fans!
 > Depending of the dataset and the learning parameters you may have more than two clusters. Default value is to limit them to `10`, but you can increase this limit. Look at the function documentation for more information.
 According to the plot above, the most important features in this dataset to predict if the treatment will work are :
-* the Age ;
-* having received a placebo or not ;
-* the sex is third but already included in the not interesting features group ;
-* then we see our generated features (AgeDiscret). We can see that their contribution is very low.
+* An individual's age;
+* Having received a placebo or not;
+* Gender;
+* Our generated feature AgeDiscret. We can see that its contribution is very low.
 ### Do these results make sense?
@@ -268,69 +237,84 @@ c2 <- chisq.test(df$Age, output_vector)
 print(c2)
 ```
-Pearson correlation between Age and illness disappearing is **`r round(c2$statistic, 2 )`**.
+The Pearson correlation between Age and illness disappearing is **`r round(c2$statistic, 2 )`**.
 ```{r, warning=FALSE, message=FALSE}
 c2 <- chisq.test(df$AgeDiscret, output_vector)
 print(c2)
 ```
-Our first simplification of Age gives a Pearson correlation is **`r round(c2$statistic, 2)`**.
+Our first simplification of Age gives a Pearson correlation of **`r round(c2$statistic, 2)`**.
 ```{r, warning=FALSE, message=FALSE}
 c2 <- chisq.test(df$AgeCat, output_vector)
 print(c2)
 ```
-The perfectly random split I did between young and old at 30 years old have a low correlation of **`r round(c2$statistic, 2)`**. It's a result we may expect as may be in my mind > 30 years is being old (I am 32 and starting feeling old, this may explain that), but for the illness we are studying, the age to be vulnerable is not the same.
+The perfectly random split we did between young and old at 30 years old has a low correlation of **2.36**. This suggests that, for the particular illness we are studying, the age at which someone is vulnerable to this disease is likely very different from 30.
-Morality: don't let your *gut* lower the quality of your model.
+Moral of the story: don't let your *gut* lower the quality of your model.
-In *data science* expression, there is the word *science* :-)
+In *data science*, there is the word *science* :-)
 Conclusion
 ----------
 As you can see, in general *destroying information by simplifying it won't improve your model*. **Chi2** just demonstrates that.
-But in more complex cases, creating a new feature based on existing one which makes link with the outcome more obvious may help the algorithm and improve the model.
+But in more complex cases, creating a new feature from an existing one may help the algorithm and improve the model.
-The case studied here is not enough complex to show that. Check [Kaggle website](http://www.kaggle.com/) for some challenging datasets. However it's almost always worse when you add some arbitrary rules.
+The case studied here is not complex enough to show that. Check [Kaggle website](https://www.kaggle.com/) for some challenging datasets.
-Moreover, you can notice that even if we have added some not useful new features highly correlated with other features, the boosting tree algorithm have been able to choose the best one, which in this case is the Age.
+Moreover, you can see that even if we have added some new features which are not very useful/highly correlated with other features, the boosting tree algorithm was still able to choose the best one (which in this case is the Age).
-Linear model may not be that smart in this scenario.
+Linear models may not perform as well.
 Special Note: What about Random Forests™?
 -----------------------------------------
-As you may know, [Random Forests](https://en.wikipedia.org/wiki/Random_forest) algorithm is cousin with boosting and both are part of the [ensemble learning](https://en.wikipedia.org/wiki/Ensemble_learning) family.
+As you may know, the [Random Forests](https://en.wikipedia.org/wiki/Random_forest) algorithm is cousin with boosting and both are part of the [ensemble learning](https://en.wikipedia.org/wiki/Ensemble_learning) family.
-Both trains several decision trees for one dataset. The *main* difference is that in Random Forests, trees are independent and in boosting, the tree `N+1` focus its learning on the loss (<=> what has not been well modeled by the tree `N`).
+Both train several decision trees for one dataset. The *main* difference is that in Random Forests, trees are independent and in boosting, the `N+1`-st tree focuses its learning on the loss (<=> what has not been well modeled by the tree `N`).
-This difference have an impact on a corner case in feature importance analysis: the *correlated features*.
+This difference can have an impact on an edge case in feature importance analysis: *correlated features*.
 Imagine two features perfectly correlated, feature `A` and feature `B`. For one specific tree, if the algorithm needs one of them, it will choose randomly (true in both boosting and Random Forests).
-However, in Random Forests this random choice will be done for each tree, because each tree is independent from the others. Therefore, approximatively, depending of your parameters, 50% of the trees will choose feature `A` and the other 50% will choose feature `B`. So the *importance* of the information contained in `A` and `B` (which is the same, because they are perfectly correlated) is diluted in `A` and `B`. So you won't easily know this information is important to predict what you want to predict! It is even worse when you have 10 correlated features...
+However, in Random Forests this random choice will be done for each tree, because each tree is independent from the others. Therefore, approximately (and depending on your parameters) 50% of the trees will choose feature `A` and the other 50% will choose feature `B`. So the *importance* of the information contained in `A` and `B` (which is the same, because they are perfectly correlated) is diluted in `A` and `B`. So you won't easily know this information is important to predict what you want to predict! It is even worse when you have 10 correlated features...
-In boosting, when a specific link between feature and outcome have been learned by the algorithm, it will try to not refocus on it (in theory it is what happens, reality is not always that simple). Therefore, all the importance will be on feature `A` or on feature `B` (but not both). You will know that one feature have an important role in the link between the observations and the label. It is still up to you to search for the correlated features to the one detected as important if you need to know all of them.
+In boosting, when a specific link between feature and outcome have been learned by the algorithm, it will try to not refocus on it (in theory it is what happens, reality is not always that simple). Therefore, all the importance will be on feature `A` or on feature `B` (but not both). You will know that one feature has an important role in the link between the observations and the label. It is still up to you to search for the correlated features to the one detected as important if you need to know all of them.
 If you want to try Random Forests algorithm, you can tweak XGBoost parameters!
 For instance, to compute a model with 1000 trees, with a 0.5 factor on sampling rows and columns:
 ```{r, warning=FALSE, message=FALSE}
-data(agaricus.train, package='xgboost')
-data(agaricus.test, package='xgboost')
+data(agaricus.train, package = 'xgboost')
+data(agaricus.test, package = 'xgboost')
 train <- agaricus.train
 test <- agaricus.test
 #Random Forest - 1000 trees
-bst <- xgboost(data = train$data, label = train$label, max_depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree =0.5, nrounds = 1, objective = "binary:logistic")
+bst <- xgboost(
+  data = train$data
+  , label = train$label
+  , max_depth = 4
+  , num_parallel_tree = 1000
+  , subsample = 0.5
+  , colsample_bytree = 0.5
+  , nrounds = 1
+  , objective = "binary:logistic"
+)
 #Boosting - 3 rounds
-bst <- xgboost(data = train$data, label = train$label, max_depth = 4, nrounds = 3, objective = "binary:logistic")
+bst <- xgboost(
+  data = train$data
+  , label = train$label
+  , max_depth = 4
+  , nrounds = 3
+  , objective = "binary:logistic"
+)
 ```
 > Note that the parameter `round` is set to `1`.
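
The importance-dilution effect described in this vignette section is easy to reproduce; here is a small hedged sketch on synthetic data (column names and sizes are invented for illustration):

```r
# Hedged illustration of the correlated-feature discussion above: feature B
# is an exact copy of feature A, so a booster can only credit one of them.
library(xgboost)
set.seed(1)
A <- rnorm(500)
X <- cbind(A, A)                 # two perfectly correlated columns
colnames(X) <- c("A", "B")
y <- as.numeric(A + rnorm(500, sd = 0.1) > 0)
bst_corr <- xgboost(data = X, label = y, max_depth = 2, nrounds = 10,
                    verbose = 0, objective = "binary:logistic")
# With boosting, (almost) all of the Gain lands on a single column:
xgb.importance(model = bst_corr)
```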

View File

@@ -18,13 +18,11 @@
   publisher={Institute of Mathematical Statistics}
 }
 @misc{
   Bache+Lichman:2013 ,
   author = "K. Bache and M. Lichman",
   year = "2013",
   title = "{UCI} Machine Learning Repository",
-  url = "http://archive.ics.uci.edu/ml/",
+  url = "https://archive.ics.uci.edu/",
   institution = "University of California, Irvine, School of Information and Computer Sciences"
 }

View File

@@ -52,9 +52,9 @@ It has several features:
For weekly updated version (highly recommended), install from *GitHub*: For weekly updated version (highly recommended), install from *GitHub*:
```{r installGithub, eval=FALSE} ```{r installGithub, eval=FALSE}
install.packages("drat", repos="https://cran.rstudio.com") install.packages("drat", repos = "https://cran.rstudio.com")
drat:::addRepo("dmlc") drat:::addRepo("dmlc")
install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source") install.packages("xgboost", repos = "http://dmlc.ml/drat/", type = "source")
``` ```
> *Windows* user will need to install [Rtools](https://cran.r-project.org/bin/windows/Rtools/) first. > *Windows* user will need to install [Rtools](https://cran.r-project.org/bin/windows/Rtools/) first.
@@ -101,8 +101,8 @@ Why *split* the dataset in two parts?
In the first part we will build our model. In the second part we will want to test it and assess its quality. Without dividing the dataset we would test the model on the data which the algorithm have already seen. In the first part we will build our model. In the second part we will want to test it and assess its quality. Without dividing the dataset we would test the model on the data which the algorithm have already seen.
```{r datasetLoading, results='hold', message=F, warning=F} ```{r datasetLoading, results='hold', message=F, warning=F}
data(agaricus.train, package='xgboost') data(agaricus.train, package = 'xgboost')
data(agaricus.test, package='xgboost') data(agaricus.test, package = 'xgboost')
train <- agaricus.train train <- agaricus.train
test <- agaricus.test test <- agaricus.test
``` ```
@@ -152,7 +152,15 @@ We will train decision tree model using the following parameters:
* `nrounds = 2`: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction. * `nrounds = 2`: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction.
```{r trainingSparse, message=F, warning=F} ```{r trainingSparse, message=F, warning=F}
bstSparse <- xgboost(data = train$data, label = train$label, max_depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic") bstSparse <- xgboost(
data = train$data
, label = train$label
, max_depth = 2
, eta = 1
, nthread = 2
, nrounds = 2
, objective = "binary:logistic"
)
``` ```
> More complex the relationship between your features and your `label` is, more passes you need. > More complex the relationship between your features and your `label` is, more passes you need.
@@ -164,7 +172,15 @@ bstSparse <- xgboost(data = train$data, label = train$label, max_depth = 2, eta
Alternatively, you can put your dataset in a *dense* matrix, i.e. a basic **R** matrix. Alternatively, you can put your dataset in a *dense* matrix, i.e. a basic **R** matrix.
```{r trainingDense, message=F, warning=F} ```{r trainingDense, message=F, warning=F}
bstDense <- xgboost(data = as.matrix(train$data), label = train$label, max_depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic") bstDense <- xgboost(
data = as.matrix(train$data)
, label = train$label
, max_depth = 2
, eta = 1
, nthread = 2
, nrounds = 2
, objective = "binary:logistic"
)
``` ```
##### xgb.DMatrix ##### xgb.DMatrix
@@ -173,7 +189,14 @@ bstDense <- xgboost(data = as.matrix(train$data), label = train$label, max_depth
```{r trainingDmatrix, message=F, warning=F} ```{r trainingDmatrix, message=F, warning=F}
dtrain <- xgb.DMatrix(data = train$data, label = train$label) dtrain <- xgb.DMatrix(data = train$data, label = train$label)
bstDMatrix <- xgboost(data = dtrain, max_depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic") bstDMatrix <- xgboost(
data = dtrain
, max_depth = 2
, eta = 1
, nthread = 2
, nrounds = 2
, objective = "binary:logistic"
)
``` ```
##### Verbose option ##### Verbose option
@@ -184,17 +207,41 @@ One of the simplest way to see the training progress is to set the `verbose` opt
```{r trainingVerbose0, message=T, warning=F} ```{r trainingVerbose0, message=T, warning=F}
# verbose = 0, no message # verbose = 0, no message
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 0) bst <- xgboost(
data = dtrain
, max_depth = 2
, eta = 1
, nthread = 2
, nrounds = 2
, objective = "binary:logistic"
, verbose = 0
)
``` ```
```{r trainingVerbose1, message=T, warning=F} ```{r trainingVerbose1, message=T, warning=F}
# verbose = 1, print evaluation metric # verbose = 1, print evaluation metric
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 1) bst <- xgboost(
data = dtrain
, max_depth = 2
, eta = 1
, nthread = 2
, nrounds = 2
, objective = "binary:logistic"
, verbose = 1
)
``` ```
```{r trainingVerbose2, message=T, warning=F} ```{r trainingVerbose2, message=T, warning=F}
# verbose = 2, also print information about tree # verbose = 2, also print information about tree
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 2) bst <- xgboost(
data = dtrain
, max_depth = 2
, eta = 1
, nthread = 2
, nrounds = 2
, objective = "binary:logistic"
, verbose = 2
)
``` ```
## Basic prediction using XGBoost ## Basic prediction using XGBoost
@@ -267,8 +314,8 @@ Most of the features below have been implemented to help you to improve your mod
For the following advanced features, we need to put data in `xgb.DMatrix` as explained above. For the following advanced features, we need to put data in `xgb.DMatrix` as explained above.
```{r DMatrix, message=F, warning=F} ```{r DMatrix, message=F, warning=F}
dtrain <- xgb.DMatrix(data = train$data, label=train$label) dtrain <- xgb.DMatrix(data = train$data, label = train$label)
dtest <- xgb.DMatrix(data = test$data, label=test$label) dtest <- xgb.DMatrix(data = test$data, label = test$label)
``` ```
### Measure learning progress with xgb.train ### Measure learning progress with xgb.train
@@ -285,9 +332,17 @@ One way to measure progress in learning of a model is to provide to **XGBoost**
For the purpose of this example, we use `watchlist` parameter. It is a list of `xgb.DMatrix`, each of them tagged with a name. For the purpose of this example, we use `watchlist` parameter. It is a list of `xgb.DMatrix`, each of them tagged with a name.
```{r watchlist, message=F, warning=F} ```{r watchlist, message=F, warning=F}
watchlist <- list(train=dtrain, test=dtest) watchlist <- list(train = dtrain, test = dtest)
bst <- xgb.train(data=dtrain, max_depth=2, eta=1, nthread = 2, nrounds=2, watchlist=watchlist, objective = "binary:logistic") bst <- xgb.train(
data = dtrain
, max_depth = 2
, eta = 1
, nthread = 2
, nrounds = 2
, watchlist = watchlist
, objective = "binary:logistic"
)
``` ```
**XGBoost** has computed at each round the same average error metric than seen above (we set `nrounds` to 2, that is why we have two lines). Obviously, the `train-error` number is related to the training dataset (the one the algorithm learns from) and the `test-error` number to the test dataset. **XGBoost** has computed at each round the same average error metric than seen above (we set `nrounds` to 2, that is why we have two lines). Obviously, the `train-error` number is related to the training dataset (the one the algorithm learns from) and the `test-error` number to the test dataset.
@@ -299,7 +354,17 @@ If with your own dataset you have not such results, you should think about how y
For a better understanding of the learning progression, you may want to have some specific metric or even use multiple evaluation metrics. For a better understanding of the learning progression, you may want to have some specific metric or even use multiple evaluation metrics.
```{r watchlist2, message=F, warning=F} ```{r watchlist2, message=F, warning=F}
bst <- xgb.train(data=dtrain, max_depth=2, eta=1, nthread = 2, nrounds=2, watchlist=watchlist, eval_metric = "error", eval_metric = "logloss", objective = "binary:logistic") bst <- xgb.train(
data = dtrain
, max_depth = 2
, eta = 1
, nthread = 2
, nrounds = 2
, watchlist = watchlist
, eval_metric = "error"
, eval_metric = "logloss"
, objective = "binary:logistic"
)
``` ```
> `eval_metric` allows us to monitor two new metrics for each round, `logloss` and `error`. > `eval_metric` allows us to monitor two new metrics for each round, `logloss` and `error`.
@@ -310,7 +375,17 @@ bst <- xgb.train(data=dtrain, max_depth=2, eta=1, nthread = 2, nrounds=2, watchl
Until now, all the learnings we have performed were based on boosting trees. **XGBoost** implements a second algorithm, based on linear boosting. The only difference with previous command is `booster = "gblinear"` parameter (and removing `eta` parameter). Until now, all the learnings we have performed were based on boosting trees. **XGBoost** implements a second algorithm, based on linear boosting. The only difference with previous command is `booster = "gblinear"` parameter (and removing `eta` parameter).
```{r linearBoosting, message=F, warning=F} ```{r linearBoosting, message=F, warning=F}
bst <- xgb.train(data=dtrain, booster = "gblinear", max_depth=2, nthread = 2, nrounds=2, watchlist=watchlist, eval_metric = "error", eval_metric = "logloss", objective = "binary:logistic") bst <- xgb.train(
data = dtrain
, booster = "gblinear"
, max_depth = 2
, nthread = 2
, nrounds = 2
, watchlist = watchlist
, eval_metric = "error"
, eval_metric = "logloss"
, objective = "binary:logistic"
)
``` ```
In this specific case, *linear boosting* gets slightly better performance metrics than decision trees based algorithm. In this specific case, *linear boosting* gets slightly better performance metrics than decision trees based algorithm.
@@ -328,7 +403,15 @@ Like saving models, `xgb.DMatrix` object (which groups both dataset and outcome)
xgb.DMatrix.save(dtrain, "dtrain.buffer") xgb.DMatrix.save(dtrain, "dtrain.buffer")
# to load it in, simply call xgb.DMatrix # to load it in, simply call xgb.DMatrix
dtrain2 <- xgb.DMatrix("dtrain.buffer") dtrain2 <- xgb.DMatrix("dtrain.buffer")
bst <- xgb.train(data=dtrain2, max_depth=2, eta=1, nthread = 2, nrounds=2, watchlist=watchlist, objective = "binary:logistic") bst <- xgb.train(
data = dtrain2
, max_depth = 2
, eta = 1
, nthread = 2
, nrounds = 2
, watchlist = watchlist
, objective = "binary:logistic"
)
``` ```
```{r DMatrixDel, include=FALSE} ```{r DMatrixDel, include=FALSE}
@@ -340,9 +423,9 @@ file.remove("dtrain.buffer")
Information can be extracted from `xgb.DMatrix` using `getinfo` function. Hereafter we will extract `label` data. Information can be extracted from `xgb.DMatrix` using `getinfo` function. Hereafter we will extract `label` data.
```{r getinfo, message=F, warning=F} ```{r getinfo, message=F, warning=F}
label = getinfo(dtest, "label") label <- getinfo(dtest, "label")
pred <- predict(bst, dtest) pred <- predict(bst, dtest)
err <- as.numeric(sum(as.integer(pred > 0.5) != label))/length(label) err <- as.numeric(sum(as.integer(pred > 0.5) != label)) / length(label)
print(paste("test-error=", err)) print(paste("test-error=", err))
``` ```
@@ -396,7 +479,7 @@ bst2 <- xgb.load("xgboost.model")
pred2 <- predict(bst2, test$data) pred2 <- predict(bst2, test$data)
# And now the test # And now the test
print(paste("sum(abs(pred2-pred))=", sum(abs(pred2-pred)))) print(paste("sum(abs(pred2-pred))=", sum(abs(pred2 - pred))))
``` ```
```{r clean, include=FALSE} ```{r clean, include=FALSE}
@@ -420,7 +503,7 @@ bst3 <- xgb.load(rawVec)
pred3 <- predict(bst3, test$data)
# pred3 should be identical to pred
print(paste("sum(abs(pred3-pred))=", sum(abs(pred3 - pred))))
```
> Again `0`? It seems that `XGBoost` works pretty well!
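Likewise, the raw vector `rawVec` loaded above can be produced in memory rather than from disk; a sketch (assuming the same `bst` object):

```{r}
# serialize the booster to an R raw vector instead of a file
rawVec <- xgb.save.raw(bst)
class(rawVec)
```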


@@ -30,7 +30,7 @@ For the purpose of this tutorial we will load the xgboost, jsonlite, and float p
require(xgboost)
require(jsonlite)
require(float)
options(digits = 22)
```
We will create a toy binary logistic model based on the example first provided [here](https://github.com/dmlc/xgboost/issues/3960), so that we can easily understand the structure of the dumped JSON model object. This will allow us to understand where discrepancies can occur and how they should be handled.
@@ -50,10 +50,10 @@ labels <- c(1, 1, 1,
            0, 0, 0,
            0, 0, 0)
data <- data.frame(dates = dates, labels = labels)
bst <- xgboost(
  data = as.matrix(data$dates),
  label = labels,
  nthread = 2,
  nrounds = 1,
@@ -69,7 +69,7 @@ We will now dump the model to JSON and attempt to illustrate a variety of issues
First let's dump the model to JSON:
```{r}
bst_json <- xgb.dump(bst, with_stats = FALSE, dump_format = 'json')
bst_from_json <- fromJSON(bst_json, simplifyDataFrame = FALSE)
node <- bst_from_json[[1]]
cat(bst_json)
@@ -78,10 +78,10 @@ cat(bst_json)
The tree JSON shown by the above code-chunk tells us that if the data is less than 20180132, the tree will output the value in the first leaf. Otherwise it will output the value in the second leaf. Let's try to reproduce this manually with the data we have and confirm that it matches the model predictions we've already calculated.
```{r}
bst_preds_logodds <- predict(bst, as.matrix(data$dates), outputmargin = TRUE)
# calculate the logodds values using the JSON representation
bst_from_json_logodds <- ifelse(data$dates < node$split_condition,
                                node$children[[1]]$leaf,
                                node$children[[2]]$leaf)
@@ -106,19 +106,19 @@ At this stage two things happened:
To explain this, let's repeat the comparison and round to two decimals:
```{r}
round(bst_preds_logodds, 2) == round(bst_from_json_logodds, 2)
```
If we round to two decimals, we see that only the elements related to data values of `20180131` don't agree. If we convert the data to floats, they agree:
```{r}
# now convert the dates to floats first
bst_from_json_logodds <- ifelse(fl(data$dates) < node$split_condition,
                                node$children[[1]]$leaf,
                                node$children[[2]]$leaf)
# test that values are equal
round(bst_preds_logodds, 2) == round(bst_from_json_logodds, 2)
```
What's the lesson? If we are going to work with an imported JSON model, any data must be converted to floats first. In this case, since '20180131' cannot be represented as a 32-bit float, it is rounded up to 20180132, as shown here:
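A small illustration of that rounding (a reconstructed sketch, assuming the `float` package loaded above):

```{r}
# 20180131 exceeds 2^24, where consecutive 32-bit floats are 2 apart,
# so the cast rounds to the nearest representable value, 20180132
as.numeric(fl(20180131))
```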
@@ -143,7 +143,7 @@ None are exactly equal. What happened? Although we've converted the data to 32
```{r}
# now convert the dates to floats first
bst_from_json_logodds <- ifelse(fl(data$dates) < fl(node$split_condition),
                                as.numeric(fl(node$children[[1]]$leaf)),
                                as.numeric(fl(node$children[[2]]$leaf)))
@@ -160,12 +160,13 @@ We were able to get the log-odds to agree, so now let's manually calculate the s
```{r}
bst_preds <- predict(bst, as.matrix(data$dates))
# calculate the predictions casting doubles to floats
bst_from_json_preds <- ifelse(
  fl(data$dates) < fl(node$split_condition)
  , as.numeric(1 / (1 + exp(-1 * fl(node$children[[1]]$leaf))))
  , as.numeric(1 / (1 + exp(-1 * fl(node$children[[2]]$leaf))))
)
# test that values are equal
@@ -177,9 +178,10 @@ None are exactly equal again. What is going on here? Well, since we are using
How do we fix this? We have to ensure we use the correct data types everywhere and the correct operators. If we use only floats, the float library that we have loaded will ensure the 32-bit float exponentiation operator is applied.
```{r}
# calculate the predictions casting doubles to floats
bst_from_json_preds <- ifelse(
  fl(data$dates) < fl(node$split_condition)
  , as.numeric(fl(1) / (fl(1) + exp(fl(-1) * fl(node$children[[1]]$leaf))))
  , as.numeric(fl(1) / (fl(1) + exp(fl(-1) * fl(node$children[[2]]$leaf))))
)
# test that values are equal


@@ -1,7 +1,6 @@
<img src="https://xgboost.ai/images/logo/xgboost-logo.svg" width=135/> eXtreme Gradient Boosting
===========
[![Build Status](https://badge.buildkite.com/aca47f40a32735c00a8550540c5eeff6a4c1d246a580cae9b0.svg?branch=master)](https://buildkite.com/xgboost/xgboost-ci)
[![XGBoost-CI](https://github.com/dmlc/xgboost/workflows/XGBoost-CI/badge.svg?branch=master)](https://github.com/dmlc/xgboost/actions)
[![Documentation Status](https://readthedocs.org/projects/xgboost/badge/?version=latest)](https://xgboost.readthedocs.org)
[![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE)
@@ -21,7 +20,7 @@
XGBoost is an optimized distributed gradient boosting library designed to be highly ***efficient***, ***flexible*** and ***portable***.
It implements machine learning algorithms under the [Gradient Boosting](https://en.wikipedia.org/wiki/Gradient_boosting) framework.
XGBoost provides a parallel tree boosting (also known as GBDT, GBM) that solves many data science problems in a fast and accurate way.
The same code runs on major distributed environments (Kubernetes, Hadoop, SGE, Dask, Spark, PySpark) and can solve problems beyond billions of examples.
License
-------
@@ -49,7 +48,6 @@ Become a sponsor and get a logo here. See details at [Sponsoring the XGBoost Pro
<a href="https://www.nvidia.com/en-us/" target="_blank"><img src="https://raw.githubusercontent.com/xgboost-ai/xgboost-ai.github.io/master/images/sponsors/nvidia.jpg" alt="NVIDIA" width="72" height="72"></a> <a href="https://www.nvidia.com/en-us/" target="_blank"><img src="https://raw.githubusercontent.com/xgboost-ai/xgboost-ai.github.io/master/images/sponsors/nvidia.jpg" alt="NVIDIA" width="72" height="72"></a>
<a href="https://www.intel.com/" target="_blank"><img src="https://images.opencollective.com/intel-corporation/2fa85c1/logo/256.png" width="72" height="72"></a> <a href="https://www.intel.com/" target="_blank"><img src="https://images.opencollective.com/intel-corporation/2fa85c1/logo/256.png" width="72" height="72"></a>
<a href="https://getkoffie.com/?utm_source=opencollective&utm_medium=github&utm_campaign=xgboost" target="_blank"><img src="https://images.opencollective.com/koffielabs/f391ab8/logo/256.png" width="72" height="72"></a>
### Backers
[[Become a backer](https://opencollective.com/xgboost#backer)]

Some files were not shown because too many files have changed in this diff.