Compare commits

709 Commits

Author SHA1 Message Date
Jiaming Yuan
9ecb7583e9 [EM] Add basic distributed GPU tests. (#10861)
- Split Hist and Approx tests in unittests.
- Basic GPU tests for distributed.
2024-10-01 01:28:43 +08:00
Jiaming Yuan
92f1c48a22 [EM] Get quantile cuts from the extmem qdm. (#10860) 2024-10-01 00:59:28 +08:00
Jiaming Yuan
8cf2f7aed8 [CI] Set timeout for freebsd vm launch. (#10858) 2024-09-30 18:26:27 +08:00
david-cortes
429f956111 [R] Fix warning from DT about mismatched names (#10743) 2024-09-29 05:52:15 +08:00
Jiaming Yuan
c9f89c4241 [R] Rename ExternalDMatrix -> ExtMemDMatrix. (#10849) 2024-09-29 05:45:53 +08:00
dependabot[bot]
9ee4008654 Bump org.codehaus.mojo:exec-maven-plugin in /jvm-packages/xgboost4j (#10786)
Bumps [org.codehaus.mojo:exec-maven-plugin](https://github.com/mojohaus/exec-maven-plugin) from 3.3.0 to 3.4.1.
- [Release notes](https://github.com/mojohaus/exec-maven-plugin/releases)
- [Commits](https://github.com/mojohaus/exec-maven-plugin/compare/3.3.0...3.4.1)

---
updated-dependencies:
- dependency-name: org.codehaus.mojo:exec-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-28 21:12:30 +08:00
dependabot[bot]
497f1bdd38 Bump spark.version from 3.5.1 to 3.5.3 in /jvm-packages/xgboost4j-spark (#10859)
Bumps `spark.version` from 3.5.1 to 3.5.3.

Updates `org.apache.spark:spark-core_2.12` from 3.5.1 to 3.5.3

Updates `org.apache.spark:spark-sql_2.12` from 3.5.1 to 3.5.3

Updates `org.apache.spark:spark-mllib_2.12` from 3.5.1 to 3.5.3

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-core_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-sql_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-28 21:11:48 +08:00
dependabot[bot]
521324ba9c Bump org.apache.maven.plugins:maven-checkstyle-plugin (#10785)
Bumps [org.apache.maven.plugins:maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.4.0 to 3.5.0.
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.4.0...maven-checkstyle-plugin-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-28 05:49:09 +08:00
Jiaming Yuan
271f4a80e7 Use CUDA virtual memory for pinned memory allocation. (#10850)
- Add a grow-only virtual memory allocator.
- Define a driver API wrapper. Split up the runtime API wrapper.
2024-09-28 04:26:44 +08:00
Jiaming Yuan
13b9874fd6 [jvm-packages] Bump rapids version. (#10857) 2024-09-28 04:02:24 +08:00
dependabot[bot]
dac6e4daa1 Bump org.apache.maven.plugins:maven-site-plugin (#10779)
Bumps [org.apache.maven.plugins:maven-site-plugin](https://github.com/apache/maven-site-plugin) from 3.12.1 to 3.20.0.
- [Release notes](https://github.com/apache/maven-site-plugin/releases)
- [Commits](https://github.com/apache/maven-site-plugin/compare/maven-site-plugin-3.12.1...maven-site-plugin-3.20.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-site-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-28 03:31:12 +08:00
dependabot[bot]
1d6f9d91fc Bump commons-logging:commons-logging in /jvm-packages/xgboost4j-spark (#10790)
Bumps commons-logging:commons-logging from 1.3.3 to 1.3.4.

---
updated-dependencies:
- dependency-name: commons-logging:commons-logging
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-28 02:57:04 +08:00
dependabot[bot]
43ca23fdf2 Bump org.apache.maven.plugins:maven-surefire-plugin (#10777)
Bumps [org.apache.maven.plugins:maven-surefire-plugin](https://github.com/apache/maven-surefire) from 3.3.1 to 3.5.0.
- [Release notes](https://github.com/apache/maven-surefire/releases)
- [Commits](https://github.com/apache/maven-surefire/compare/surefire-3.3.1...surefire-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-surefire-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-28 01:58:26 +08:00
Jiaming Yuan
cc2daadec3 Fix git ignore. [skip ci] (#10854) 2024-09-28 01:43:29 +08:00
dependabot[bot]
86157b9480 Bump org.apache.maven.plugins:maven-gpg-plugin (#10855)
Bumps [org.apache.maven.plugins:maven-gpg-plugin](https://github.com/apache/maven-gpg-plugin) from 3.2.6 to 3.2.7.
- [Release notes](https://github.com/apache/maven-gpg-plugin/releases)
- [Commits](https://github.com/apache/maven-gpg-plugin/compare/maven-gpg-plugin-3.2.6...maven-gpg-plugin-3.2.7)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-gpg-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-28 01:40:59 +08:00
dependabot[bot]
83b5eabd70 Bump org.apache.maven.plugins:maven-gpg-plugin (#10848)
Bumps [org.apache.maven.plugins:maven-gpg-plugin](https://github.com/apache/maven-gpg-plugin) from 3.2.4 to 3.2.6.
- [Release notes](https://github.com/apache/maven-gpg-plugin/releases)
- [Commits](https://github.com/apache/maven-gpg-plugin/compare/maven-gpg-plugin-3.2.4...maven-gpg-plugin-3.2.6)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-gpg-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-28 00:14:14 +08:00
dependabot[bot]
df6b3e1481 Bump org.apache.maven.plugins:maven-deploy-plugin (#10778)
Bumps [org.apache.maven.plugins:maven-deploy-plugin](https://github.com/apache/maven-deploy-plugin) from 3.1.2 to 3.1.3.
- [Release notes](https://github.com/apache/maven-deploy-plugin/releases)
- [Commits](https://github.com/apache/maven-deploy-plugin/compare/maven-deploy-plugin-3.1.2...maven-deploy-plugin-3.1.3)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-deploy-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-25 23:41:28 +08:00
dependabot[bot]
72546e71a8 Bump org.apache.maven.plugins:maven-project-info-reports-plugin (#10772)
Bumps [org.apache.maven.plugins:maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.6.2 to 3.7.0.
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.6.2...maven-project-info-reports-plugin-3.7.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-25 18:31:45 +08:00
dependabot[bot]
c648442a46 Bump org.apache.flink:flink-clients in /jvm-packages (#10771)
Bumps [org.apache.flink:flink-clients](https://github.com/apache/flink) from 1.19.1 to 1.20.0.
- [Commits](https://github.com/apache/flink/compare/release-1.19.1...release-1.20.0)

---
updated-dependencies:
- dependency-name: org.apache.flink:flink-clients
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-25 18:30:18 +08:00
Bobby Wang
a049490cdb [jvm-packages] bring back camel case variants of parameters (#10845) 2024-09-25 14:14:42 +08:00
Dmitry Razdoburdin
2179baa50c [SYCL] Implementation of HostDeviceVector (#10842) 2024-09-25 04:45:17 +08:00
Jiaming Yuan
bc69a3e877 [EM] Improve memory estimation for quantile sketching. (#10843)
- Add basic estimation for RMM.
- Re-estimate after every sub-batch.
- Some debug logs for memory usage.
- Fix the locking mechanism in the memory allocator logger.
2024-09-25 03:20:09 +08:00
Bobby Wang
f3df0d0eb4 [jvm-packages] update scala style configuration (#10836) 2024-09-24 17:39:44 +08:00
Bobby Wang
2a03685bff [jvm-packages] shade xgboost spark packages (#10833) 2024-09-24 15:46:06 +08:00
Jiaming Yuan
68a8865bc5 [CI] Fix PyLint errors. (#10837) 2024-09-24 14:09:32 +08:00
Bobby Wang
982ee34658 [jvm-packages] fix surefire (#10835) 2024-09-24 14:08:30 +08:00
Jiaming Yuan
e228c1a121 [EM] Make page concatenation optional. (#10826)
This PR introduces a new parameter `extmem_concat_pages` to make the page concatenation optional for GPU hist. In addition, the document is updated for the new GPU-based external memory.
2024-09-24 06:19:28 +08:00
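
For illustration, a minimal sketch of how the new parameter might be passed, assuming a build that includes this PR; the parameter name is taken from the commit and may differ in later releases, and `ChunkIter` is a hypothetical stand-in that feeds in-memory batches through the `xgb.DataIter` protocol to emulate an external-memory source. A CUDA device is required for GPU hist.

```python
import numpy as np
import xgboost as xgb


class ChunkIter(xgb.DataIter):
    """Hypothetical iterator feeding in-memory chunks as external-memory batches."""

    def __init__(self, chunks):
        self._chunks = chunks
        self._pos = 0
        super().__init__(cache_prefix="./cache")  # on-disk cache location

    def next(self, input_data):
        if self._pos == len(self._chunks):
            return 0  # no more batches
        X, y = self._chunks[self._pos]
        input_data(data=X, label=y)
        self._pos += 1
        return 1

    def reset(self):
        self._pos = 0


rng = np.random.default_rng(0)
chunks = [(rng.normal(size=(256, 8)), rng.normal(size=256)) for _ in range(4)]
Xy = xgb.DMatrix(ChunkIter(chunks))  # external-memory DMatrix
params = {
    "tree_method": "hist",
    "device": "cuda",
    # Introduced by this commit; assumed semantics: False keeps pages
    # separate instead of concatenating them into one.
    "extmem_concat_pages": False,
}
booster = xgb.train(params, Xy, num_boost_round=8)
```
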
jonibr22
215da76263 [R] Fix xgb.model.dt.tree in case where all leaves are negative (#10798) 2024-09-24 05:02:33 +08:00
Bobby Wang
19b55b300b [jvm-packages] Support Ranker (#10823) 2024-09-22 02:02:15 +08:00
Dmitry Razdoburdin
d7599e095b [SYCL] Add dask support for distributed (#10812) 2024-09-22 02:01:57 +08:00
Jiaming Yuan
2a37a8880c Check correct dump format for gblinear. (#10831) 2024-09-21 00:32:52 +08:00
Jiaming Yuan
24241ed6e3 [EM] Compress dense ellpack. (#10821)
This helps reduce the memory copying needed for dense data. In addition, it helps reduce memory usage even if external memory is not used.

- Decouple the number of symbols needed in the compressor from the number of features when the data is dense.
- Remove the fetch call in the `at_end_` iteration.
- Reduce synchronization and kernel launches by using the `uvector` and ctx.
2024-09-20 18:20:56 +08:00
Jiaming Yuan
d5e1c41b69 [coll] Use loky for rabit op tests. (#10828) 2024-09-20 16:46:05 +08:00
Valentin Waeselynck
15c6172e09 [doc] Improve the model introduction. (#10822) 2024-09-19 02:33:49 +08:00
Jiaming Yuan
96bbf80457 [EM] Support quantile objectives for GPU-based external memory. (#10820)
- Improved error message for memory usage.
- Support quantile-based objectives for GPU external memory.
2024-09-17 13:27:02 +08:00
shlomota
de00e07087 Fix misleading error when feature names are missing during inference (#10814) 2024-09-13 23:30:50 +08:00
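
A small sketch of the situation this fix targets, assuming the clarified error comes from the default `validate_features=True` path: training on a DataFrame records feature names, so predicting on a name-less matrix fails validation.

```python
import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(64, 3)), columns=["a", "b", "c"])
booster = xgb.train({}, xgb.DMatrix(df, label=rng.normal(size=64)),
                    num_boost_round=2)

# A DMatrix built from a bare ndarray carries no feature names, so the
# default feature validation raises -- the message is what this commit fixes.
try:
    booster.predict(xgb.DMatrix(df.to_numpy()))
except ValueError as err:
    print(err)
```
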
Bobby Wang
67c8c96784 [jvm-packages] [breaking] rework xgboost4j-spark and xgboost4j-spark-gpu (#10639)
- Introduce an abstract XGBoost Estimator
- Update to the latest XGBoost parameters
  - Add all XGBoost parameters supported in XGBoost4j-spark.
  - Add setter and getter for these parameters.
  - Remove the deprecated parameters
- Address the missing value handling
- Remove any ETL operations in XGBoost
- Rework the GPU plugin
- Expand sanity tests for CPU and GPU consistency
2024-09-11 15:54:19 +08:00
Jiaming Yuan
d94f6679fc [EM] Avoid synchronous calls and unnecessary ATS access. (#10811)
- Pass context into various functions.
- Factor out some CUDA algorithms.
- Use ATS only for update position.
2024-09-10 14:33:14 +08:00
Jiaming Yuan
ed5f33df16 [EM] Multi-level quantile sketching for GPU. (#10813) 2024-09-10 13:08:34 +08:00
Jiaming Yuan
3ef8383d93 [doc] Fix custom_metric_obj.rst [skip ci] (#10796) (#10815)
Added the missing square to the derivative term in the hessian.

Co-authored-by: Corentin Santos <corentin.santos@iphc.cnrs.fr>
2024-09-10 05:11:43 +08:00
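
The corrected formula is the Squared Log Error example from that tutorial; a condensed sketch of it (adapted, not verbatim), with the squared denominator this fix restores:

```python
import numpy as np
import xgboost as xgb


def squared_log(predt, dtrain):
    """Squared Log Error: gradient and hessian w.r.t. the raw prediction."""
    y = dtrain.get_label()
    predt[predt < -1] = -1 + 1e-6  # keep log1p well defined
    grad = (np.log1p(predt) - np.log1p(y)) / (predt + 1)
    # The denominator is (predt + 1) squared -- the square this fix added.
    hess = (-np.log1p(predt) + np.log1p(y) + 1) / np.power(predt + 1, 2)
    return grad, hess


rng = np.random.default_rng(0)
X, y = rng.uniform(size=(128, 4)), np.abs(rng.normal(size=128))
booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, label=y),
                    num_boost_round=5, obj=squared_log)
```
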
Dmitry Razdoburdin
bba6aa74fb [SYCL] Fix for sycl support with sklearn estimators (#10806)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-09-09 14:14:07 +08:00
Jiaming Yuan
5f7f31d464 [EM] Refactor ellpack construction. (#10810)
- Remove the calculation of n_symbols in the accessor.
- Pack initialization steps into the parameter list.
- Pass the context into various ctors.
- Specialization for dense data to prepare for further compression.
2024-09-09 14:10:10 +08:00
dependabot[bot]
c69c4adb58 Bump actions/setup-python from 5.1.1 to 5.2.0 (#10768)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.1.1 to 5.2.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](39cd14951b...f677139bbe)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-08 00:09:22 +08:00
david-cortes
f52f11e1d7 [R] Allow passing data.frame to SHAP (#10744) 2024-09-02 19:44:12 +08:00
dependabot[bot]
ec8cfb3267 Bump actions/upload-artifact from 4.3.4 to 4.4.0 (#10770)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.3.4 to 4.4.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](0b2256b8c0...50769540e7)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-02 17:52:32 +08:00
david-cortes
15b72571f3 [R] update serialization advice for new xgboost class (#10794) 2024-09-02 02:46:11 +08:00
dependabot[bot]
4f88ada219 Bump actions/setup-java from 4.2.1 to 4.2.2 (#10769)
Bumps [actions/setup-java](https://github.com/actions/setup-java) from 4.2.1 to 4.2.2.
- [Release notes](https://github.com/actions/setup-java/releases)
- [Commits](99b8673ff6...6a0805fcef)

---
updated-dependencies:
- dependency-name: actions/setup-java
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-02 02:35:48 +08:00
Samuel Marks
4503555274 POSIX compliant poll.h and mmap over sys/poll.h and mmap64 (#10767) 2024-09-01 15:47:30 +08:00
Jiaming Yuan
e1a2c1bbb3 [EM] Merge GPU partitioning with histogram building. (#10766)
- Stop concatenating pages if there's no subsampling.
- Use a single iteration for histogram build and partitioning.
2024-08-31 03:25:37 +08:00
Jiaming Yuan
98ac153265 Avoid warning from NVCC. (#10757) 2024-08-30 16:11:31 +08:00
Jiaming Yuan
5cc7c735e5 Don't link gputreeshap. (#10758)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-08-30 14:40:58 +08:00
Jiaming Yuan
34d4ab455e [EM] Avoid stream sync in quantile sketching. (#10765)
2024-08-30 12:33:24 +08:00
Jiaming Yuan
61dd854a52 [EM] Refactor GPU histogram builder. (#10764)
- Expose the maximum number of cached nodes to be consistent with the CPU implementation. Also easier for testing.
- Extract the subtraction trick for easier testing.
- Split up the `GradientQuantiser` to avoid circular dependency.
2024-08-30 02:39:14 +08:00
Jiaming Yuan
34937fea41 [EM] Python wrapper for the ExtMemQuantileDMatrix. (#10762)
Not exposed to the document yet.

- Add C API.
- Add Python API.
- Basic CPU tests.
2024-08-29 04:08:25 +08:00
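
A minimal sketch of the Python API added here, assuming a build containing this PR (the class was not yet documented at this point). `ChunkIter` is the same hypothetical in-memory iterator pattern used in the sketch near the #10826 entry above:

```python
import numpy as np
import xgboost as xgb


class ChunkIter(xgb.DataIter):
    """Hypothetical iterator yielding in-memory chunks batch by batch."""

    def __init__(self, chunks):
        self._chunks = chunks
        self._pos = 0
        super().__init__(cache_prefix="./cache")

    def next(self, input_data):
        if self._pos == len(self._chunks):
            return 0
        X, y = self._chunks[self._pos]
        input_data(data=X, label=y)
        self._pos += 1
        return 1

    def reset(self):
        self._pos = 0


rng = np.random.default_rng(0)
chunks = [(rng.normal(size=(512, 16)), rng.normal(size=512)) for _ in range(4)]
# Quantile cuts are computed across batches; only CPU paths were tested here.
Xy = xgb.ExtMemQuantileDMatrix(ChunkIter(chunks), max_bin=256)
booster = xgb.train({"tree_method": "hist"}, Xy, num_boost_round=8)
```
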
Jiaming Yuan
7510a87466 [EM] Reuse the quantile container. (#10761)
Use the push method to merge the quantiles instead of creating multiple containers. This
reduces the memory usage by consistent pruning.
2024-08-29 01:39:55 +08:00
Jiaming Yuan
4fe67f10b4 [EM] Have one partitioner for each batch. (#10760)
- Initialize one partitioner for each batch.
- Collect partition size during initialization.
- Support base ridx in the finalization.
2024-08-29 01:35:17 +08:00
david-cortes
3043827efc [R] Update vignette "XGBoost presentation" (#10749)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-08-28 16:22:54 +08:00
Philip Hyunsu Cho
7794d3da8a Ensure that pip check does not fail due to bad platform tag (#10755)
* Remove custom tag generation

* Revert "Remove custom tag generation"

This reverts commit fe3cf0e8786c7dc05e1deced3a1c92cd79094735.

* Fetch an accurate platform tag from Pip 22+

* Fix formatting

* TOML allows trailing commas

* Update patch

* Add trailing comma

* Fix up patch

* Use `packaging`

Co-authored-by: jakirkham <jakirkham@gmail.com>

---------

Co-authored-by: jakirkham <jakirkham@gmail.com>
2024-08-27 18:11:08 -07:00
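
For reference, the `packaging` library can report the interpreter's supported wheel tags directly; a tiny sketch of the kind of query the fix relies on (illustrative, not the release script itself):

```python
from packaging import tags

# Most specific tag first, e.g. "cp310-cp310-manylinux_2_28_x86_64"; a build
# script can use this instead of hand-rolling a platform tag.
print(next(iter(tags.sys_tags())))
```
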
Jiaming Yuan
64afe9873b Increase timeout in C++ tests from 1 to 5 seconds. (#10756)
To avoid CI failures on FreeBSD.
2024-08-28 02:27:14 +08:00
Jiaming Yuan
bde1265caf [EM] Return a full DMatrix instead of a Ellpack from the GPU sampler. (#10753) 2024-08-28 01:05:11 +08:00
Jiaming Yuan
d6ebcfb032 [EM] Support CPU quantile objective for external memory. (#10751) 2024-08-27 04:16:57 +08:00
david-cortes
12c6b7ceea [R] Remove demos (#10750) 2024-08-27 04:16:36 +08:00
Jiaming Yuan
06c4246ff1 [CI] Workaround mypy errors. (#10754) 2024-08-27 02:54:11 +08:00
Jiaming Yuan
25966e4ba8 [EM] Pass batch parameter into extmem format. (#10736)
- Allow customization for format reading.
- Customize the number of pre-fetch batches.
2024-08-27 02:37:50 +08:00
Michael Mayer
074cad2343 [R] Finalizes switch to markdown doc (#10733)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-08-27 01:25:06 +08:00
david-cortes
479ae8081b [R] Add class names to coefficients (#10745) 2024-08-25 04:41:58 +08:00
Jiaming Yuan
fd0138c91c [coll] Improve column split tests with named threads. (#10735) 2024-08-24 12:43:47 +08:00
Jiaming Yuan
55aef8f546 [EM] Avoid resizing host cache. (#10734)
* [EM] Avoid resizing host cache.

- Add SAM allocator and resource.
- Use page-based cache instead of stream-based cache.
2024-08-23 06:34:01 +08:00
James Lamb
dbfafd8557 [doc] Install the conda GPU variant in environments without CUDA (#10731) 2024-08-22 19:48:15 +08:00
Philip Hyunsu Cho
cd83fe6033 [breaking][CI] Use CTK 12.4 (#10697) 2024-08-21 19:59:34 -07:00
Jiaming Yuan
142bdc73ec [EM] Support SHAP contribution with QDM. (#10724)
- Add GPU support.
- Add external memory support.
- Update the GPU tree shap.
2024-08-22 05:25:10 +08:00
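
A small sketch of the user-facing behavior this enables, assuming a build with this PR: SHAP contributions via `pred_contribs=True` now also work when the booster is fed a `QuantileDMatrix`.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(256, 8)), rng.normal(size=256)

Xy = xgb.QuantileDMatrix(X, label=y, max_bin=128)
booster = xgb.train({"tree_method": "hist"}, Xy, num_boost_round=10)

# One SHAP value per feature plus a bias column: shape (256, 9).
contribs = booster.predict(Xy, pred_contribs=True)
print(contribs.shape)
```
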
Jiaming Yuan
cb54374550 Update clang-tidy. (#10730)
- Install cmake using pip.
- Fix compile command generation.
- Clean up the tidy script and remove the need to load the yaml file.
- Fix modernized type traits.
- Fix span class. Polymorphism support is dropped.
2024-08-22 04:12:18 +08:00
James Lamb
03bd1183bc [doc] prefer 'cmake -B' and 'cmake --build' everywhere (#10717) 2024-08-22 02:16:55 +08:00
Dmitry Razdoburdin
24d225c1ab [SYCL] Implement UpdatePredictionCache and connect updater with learner. (#10701)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-08-22 02:07:44 +08:00
Jiaming Yuan
9b88495840 [multi] Implement weight feature importance. (#10700) 2024-08-22 02:06:47 +08:00
Jiaming Yuan
402e7837fb Fix potential race in feature constraint. (#10719) 2024-08-21 16:50:31 +08:00
david-cortes
e9f1abc1f0 [R] keep row names in predictions (#10727) 2024-08-21 05:49:02 +08:00
david-cortes
adf87b27c5 [doc] Fix tutorial for advanced objectives (#10725) 2024-08-21 02:52:50 +08:00
Jiaming Yuan
508ac13243 Check cub errors. (#10721)
- Make sure cuda error returned by cub scan is caught.
- Avoid temporary buffer allocation in thrust device vector.
2024-08-21 02:50:26 +08:00
Michael Mayer
b949a4bf7b [R] Work on Roxygen documentation (#10674) 2024-08-20 13:33:13 +08:00
James Lamb
5db0803eb2 ignore UBJSON files in gitignore (#10718) 2024-08-19 16:50:37 +08:00
david-cortes
caabee2135 [R] remove 'reshape' argument, let shapes be handled by core cpp library (#10330) 2024-08-18 23:31:38 +08:00
Jiaming Yuan
fd365c147e [doc] Brief note about RMM SAM allocator. [skip ci] (#10712) 2024-08-17 04:21:39 +08:00
Jiaming Yuan
ec3f327c20 Add managed memory allocator. (#10711) 2024-08-17 03:02:34 +08:00
Jiaming Yuan
8d7fe262d9 [EM] Enable access to the number of batches. (#10691)
- Expose `NumBatches` in `DMatrix`.
- Small cleanup for removing legacy CUDA stream and ~force CUDA context initialization~.
- Purge old external memory data generation code.
2024-08-17 02:59:45 +08:00
Jiaming Yuan
033a666900 [EM] Log the page size of ellpack. (#10713) 2024-08-17 01:35:47 +08:00
Jiaming Yuan
abe65e3769 Reduce thread contention in column split histogram test. (#10708) 2024-08-17 01:00:32 +08:00
Jiaming Yuan
2258bc870d Add more tests and doc for QDM. (#10692) 2024-08-16 23:30:04 +08:00
Jiaming Yuan
582ea104b5 [EM] Enable prediction cache for GPU. (#10707)
- Use `UpdatePosition` for all nodes and skip `FinalizePosition` when external memory is used.
- Create `encode/decode` for node position, this is just as a refactor.
- Reuse code between update position and finalization.
2024-08-15 21:41:59 +08:00
Dmitry Razdoburdin
0def8e0bae [sycl] fix fitting for fp32 devices (#10702)
Co-authored-by: Dmitry Razdoburdin <>
2024-08-15 03:50:17 +08:00
Dmitry Razdoburdin
773ded684b [sycl] Add depth-wise policy (#10690)
Co-authored-by: Dmitry Razdoburdin <>
2024-08-13 18:12:35 +08:00
James Lamb
b457d0d792 [doc] [R] clarify lintr docs (#10698) 2024-08-13 14:37:31 +08:00
Jiaming Yuan
2ecc85ffad [EM] Support ExtMemQdm in the GPU predictor. (#10694) 2024-08-13 12:21:11 +08:00
Jiaming Yuan
43704549a2 [coll] Reduce the amount of open files (socket). (#10693)
Reduce the chance of hitting "Failed to call `socket`: Too many open files".
2024-08-13 05:23:49 +08:00
Jiaming Yuan
d414fdf2e7 [EM] Add GPU version of the external memory QDM. (#10689) 2024-08-10 10:49:43 +08:00
James Lamb
18b28d9315 [R] prefer startsWith to substr() or regular expressions (#10687) 2024-08-09 21:18:46 +08:00
James Lamb
fb9201abae [CI] use key=value form for Dockerfile ENV statements (#10685) 2024-08-09 21:12:50 +08:00
James Lamb
e02b376bf7 [R] Ignore auto-generated config.h, ensure tests run without 'vcd' (#10688) 2024-08-09 17:23:27 +08:00
Jiaming Yuan
7bccc1ea2c [EM] CPU implementation for external memory QDM. (#10682)
- A new DMatrix type.
- Extract common code into a new QDM base class.

Not yet working:
- Not exposed to the interface yet, will wait for the GPU implementation.
- ~No meta info yet, still working on the source.~
- Exporting data to CSR is not supported yet.
2024-08-09 09:38:02 +08:00
jakirkham
ac8366654b Tweak R-package endian message for clarity (#10654)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-08-09 09:33:54 +08:00
Dmitry Razdoburdin
e555a238bc [SYCL] Add implementation for loss-guided policy (#10681)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-08-09 09:04:46 +08:00
Jiaming Yuan
cc3b56fc37 Cleanup GPU Hist tests. (#10677)
* Cleanup GPU Hist tests.

- Remove GPU Hist gradient sampling test. The same properties are tested in the gradient
  sampler test suite.
- Move basic histogram tests into the histogram test suite.
- Remove the header inclusion of the `updater_gpu_hist.cu` in tests.
2024-08-06 11:50:44 +08:00
Jiaming Yuan
6ccf116601 [dask] Reduce the flakiness of tests. (#10678) 2024-08-06 06:04:10 +08:00
Philip Hyunsu Cho
35b1cdb365 Update release script for the JVM packages (#10660) 2024-08-05 14:46:14 -07:00
Jiaming Yuan
3d8107adb8 Support doc link for the sklearn module. (#10287) 2024-08-06 02:35:32 +08:00
Jiaming Yuan
a269055b2b [coll] Use loky for tests. (#10676)
This makes the tests easier to run and debug. In addition, they can now work on Windows as
well.
2024-08-03 07:33:42 +08:00
Jiaming Yuan
a185b693dc Reduce warnings and flakiness in tests. (#10659)
- Fix warnings in tests.
- Try to reduce the flakiness of dask test.
2024-08-03 07:32:47 +08:00
Jiaming Yuan
2e7ba900ef [CI] Add timeout limit to JVM tests. (#10673) 2024-08-03 01:51:13 +08:00
dependabot[bot]
ad32b4e021 Bump ossf/scorecard-action from 2.3.3 to 2.4.0 (#10664)
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.3.3 to 2.4.0.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](dc50aa9510...62b2cac7ed)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-03 00:11:11 +08:00
dependabot[bot]
9e0a9a066b Bump docker/setup-buildx-action from 3.4.0 to 3.6.1 (#10663)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.4.0 to 3.6.1.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v3.4.0...v3.6.1)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-03 00:10:35 +08:00
Jiaming Yuan
574c20dc1d Enable CI build for the federated-secure branch. (#10671) 2024-08-02 22:13:17 +08:00
Jiaming Yuan
77c844cef7 Reduce thread contention in column split tests. (#10658)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-08-01 18:36:46 +08:00
Philip Hyunsu Cho
778751a1bb Update the release script to download xgboost-cpu (#10657)
* Update the release script to download xgboost-cpu

* Exclude mypy 1.11.1; un-cap pylint

* Exclude mypy 1.11.0 too
2024-07-31 14:43:10 -07:00
Jiaming Yuan
fb77ed7603 [CI] Fix Python wheel workflow. (#10649)
* [CI] Fix Python wheel workflow.

* Use Python 3.10 for building wheels

---------

Co-authored-by: Hyunsu Cho <phcho@nvidia.com>
2024-07-30 10:13:47 -07:00
Jiaming Yuan
827d0e8edb [breaking] Bump Python requirement to 3.10. (#10434)
- Bump the Python requirement.
- Fix type hints.
- Use loky to avoid deadlock.
- Workaround cupy-numpy compatibility issue on Windows caused by the `safe` casting rule.
- Simplify the repartitioning logic to avoid dask errors.
2024-07-30 17:31:06 +08:00
jakirkham
757aafc131 Allow external configuration of endianness in R package build (#10642)
* Allow users to set endianness in R build

* Run `autoreconf -vi`

* Don't use :BOOL suffix

* Use AC_CONFIG_HEADERS

---------

Co-authored-by: Hyunsu Cho <phcho@nvidia.com>
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-07-29 17:24:29 -07:00
jakirkham
d4b82f50ab Add Library\mingw-w64 to Windows search path (#10643) 2024-07-29 14:17:59 -07:00
Jiaming Yuan
449be7a402 Quick fix for clang-tidy error. (#10641) 2024-07-26 18:21:16 +08:00
Dmitry Razdoburdin
7720272870 [sycl] add split applications and tests (#10636)
Co-authored-by: Dmitry Razdoburdin <>
2024-07-26 15:25:49 +08:00
Bobby Wang
384983ed27 [jvm-packages] remove code link and make xgboost4j-spark-gpu depend on xgboost4j-spark (#10635) 2024-07-25 23:27:51 +08:00
Philip Hyunsu Cho
ec82c75ee7 Allow building with CCCL that's newer than CTK (#10633) 2024-07-25 00:41:56 -07:00
Bobby Wang
d5834b68c3 [jvm-packages] remove xgboost4j-gpu and rework cudf column (#10630) 2024-07-25 15:31:16 +08:00
Jiaming Yuan
fcae6301ec [dask] Disable broadcast in the scatter call. (#10632) 2024-07-25 04:16:34 +08:00
Philip Hyunsu Cho
411c8466bd [CMake] Explicitly link with CCCL (standalone or CTK) (#10624) 2024-07-23 18:42:54 -07:00
Bobby Wang
7949a8d5f4 [jvm-packages] support missing value when constructing dmatrix with iterator (#10628) 2024-07-23 23:25:07 +08:00
Bobby Wang
b3ed81877a [jvm-packages] Cleanup xgboost4j (#10627) 2024-07-23 13:57:10 +08:00
Bobby Wang
003b418312 [jvm-packages] clean up example (#10618) 2024-07-23 12:15:51 +08:00
Jiaming Yuan
485d90218c Catch exceptions during file read. (#10623) 2024-07-23 03:48:19 +08:00
Jiaming Yuan
a19bbc9be5 Avoid caching allocator for large allocations. (#10582) 2024-07-23 03:48:03 +08:00
Jiaming Yuan
b2cae34a8e Fix integer overflow. (#10615) 2024-07-23 02:13:15 +08:00
Dmitry Razdoburdin
f6cae4da85 [SYCL] Add splits evaluation (#10605)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-07-22 18:14:06 +08:00
Jiaming Yuan
6d9fcb771e Move device histogram storage into histogram.cuh. (#10608) 2024-07-21 14:10:13 +08:00
Jiaming Yuan
cb62f9e73b [EM] Prevent init with CUDA malloc resource. (#10606) 2024-07-21 05:08:29 +08:00
Jiaming Yuan
0846ad860c Optionally skip cupy on windows. (#10611) 2024-07-20 22:12:12 +08:00
Jiaming Yuan
344ddeb9ca Drop support for CUDA legacy stream. (#10607) 2024-07-20 06:14:56 +08:00
Philip Hyunsu Cho
326921dbe4 [CI] Build a CPU-only wheel under name xgboost-cpu (#10603) 2024-07-19 10:51:08 -07:00
Jiaming Yuan
7ab93f3ce3 [CI] Fix test environment. (#10609)
* [CI] Fix test environment.

* Remove shell.

* Remove.

* Update Dockerfile.i386
2024-07-18 10:04:17 -07:00
Jiaming Yuan
292bb677e5 [EM] Support mmap backed ellpack. (#10602)
- Support resource view in ellpack.
- Define the CUDA version of MMAP resource.
- Define the CUDA version of malloc resource.
- Refactor cuda runtime API wrappers, and add memory access related wrappers.
- Gather Windows macros into a single header.
2024-07-18 08:20:21 +08:00
Jiaming Yuan
e9fbce9791 Refactor DeviceUVector. (#10595)
Create a wrapper instead of using inheritance to avoid an inconsistent class interface.
2024-07-18 03:33:01 +08:00
dependabot[bot]
07732e02e5 Bump com.fasterxml.jackson.core:jackson-databind (#10590)
Bumps [com.fasterxml.jackson.core:jackson-databind](https://github.com/FasterXML/jackson) from 2.15.2 to 2.17.2.
- [Commits](https://github.com/FasterXML/jackson/commits)

---
updated-dependencies:
- dependency-name: com.fasterxml.jackson.core:jackson-databind
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-18 01:36:44 +08:00
dependabot[bot]
919cfd9c8d Bump actions/upload-artifact from 4.3.3 to 4.3.4 (#10600)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.3.3 to 4.3.4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](65462800fd...0b2256b8c0)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-18 01:36:12 +08:00
dependabot[bot]
c41a657c4e Bump actions/setup-python from 5.1.0 to 5.1.1 (#10599)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.1.0 to 5.1.1.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](82c7e631bb...39cd14951b)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-18 01:35:21 +08:00
Philip Hyunsu Cho
ee8bb60bf1 [CI] Reduce the frequency of dependabot PRs (#10593) 2024-07-17 06:21:17 -07:00
Jiaming Yuan
a6a8a55ffa Merge approx tests. (#10583) 2024-07-16 19:03:48 +08:00
Jiaming Yuan
5a92ffe3ca Partial fix for CTK 12.5 (#10574) 2024-07-16 17:41:50 +08:00
RektPunk
370dce9d57 [Doc] Fix CRAN badge in README [skip ci] (#10587)
* Change http to https in Badges

* Change all http to https
2024-07-15 23:35:42 +08:00
dependabot[bot]
fa8fea145a Bump scalatest.version from 3.2.18 to 3.2.19 in /jvm-packages/xgboost4j (#10535)
Bumps `scalatest.version` from 3.2.18 to 3.2.19.

Updates `org.scalatest:scalatest_2.12` from 3.2.18 to 3.2.19
- [Release notes](https://github.com/scalatest/scalatest/releases)
- [Commits](https://github.com/scalatest/scalatest/compare/release-3.2.18...release-3.2.19)

Updates `org.scalactic:scalactic_2.12` from 3.2.18 to 3.2.19
- [Release notes](https://github.com/scalatest/scalatest/releases)
- [Commits](https://github.com/scalatest/scalatest/compare/release-3.2.18...release-3.2.19)

---
updated-dependencies:
- dependency-name: org.scalatest:scalatest_2.12
  dependency-type: direct:development
  update-type: version-update:semver-patch
- dependency-name: org.scalactic:scalactic_2.12
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-15 20:22:04 +08:00
Jiaming Yuan
bbd308595a [jvm-packages] Bump rapids version. (#10588) 2024-07-15 20:21:25 +08:00
david-cortes
ab982e7873 [R] Redesigned xgboost() interface skeleton (#10456)
---------

Co-authored-by: Michael Mayer <mayermichael79@gmail.com>
2024-07-15 18:44:58 +08:00
dependabot[bot]
17c64300e3 Bump org.apache.maven.plugins:maven-checkstyle-plugin in /jvm-packages (#10518)
Bumps [org.apache.maven.plugins:maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.3.1 to 3.4.0.
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.3.1...maven-checkstyle-plugin-3.4.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-15 16:44:50 +08:00
dependabot[bot]
b7511cbd6f Bump net.alchim31.maven:scala-maven-plugin in /jvm-packages/xgboost4j (#10536)
Bumps net.alchim31.maven:scala-maven-plugin from 4.9.1 to 4.9.2.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-15 16:44:36 +08:00
dependabot[bot]
a81ccab7e5 Bump org.apache.maven.plugins:maven-release-plugin (#10586)
Bumps [org.apache.maven.plugins:maven-release-plugin](https://github.com/apache/maven-release) from 3.0.1 to 3.1.1.
- [Release notes](https://github.com/apache/maven-release/releases)
- [Commits](https://github.com/apache/maven-release/compare/maven-release-3.0.1...maven-release-3.1.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-release-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-15 16:44:20 +08:00
dependabot[bot]
5b68b68379 Bump org.apache.maven.plugins:maven-project-info-reports-plugin (#10585)
Bumps [org.apache.maven.plugins:maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.6.1 to 3.6.2.
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.6.1...maven-project-info-reports-plugin-3.6.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-15 16:44:05 +08:00
dependabot[bot]
0f789e2b22 Bump org.apache.maven.plugins:maven-jar-plugin (#10458)
Bumps [org.apache.maven.plugins:maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.4.1 to 3.4.2.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.4.1...maven-jar-plugin-3.4.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-15 15:15:20 +08:00
dependabot[bot]
8b77964d03 Bump commons-logging:commons-logging in /jvm-packages/xgboost4j-spark (#10547)
Bumps commons-logging:commons-logging from 1.3.2 to 1.3.3.

---
updated-dependencies:
- dependency-name: commons-logging:commons-logging
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-07-15 15:04:30 +08:00
dependabot[bot]
7996914a2d Bump org.apache.maven.plugins:maven-surefire-plugin (#10429)
Bumps [org.apache.maven.plugins:maven-surefire-plugin](https://github.com/apache/maven-surefire) from 3.2.5 to 3.3.0.
- [Release notes](https://github.com/apache/maven-surefire/releases)
- [Commits](https://github.com/apache/maven-surefire/compare/surefire-3.2.5...surefire-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-surefire-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-14 17:27:17 +08:00
dependabot[bot]
5b7c68946d Bump org.apache.flink:flink-clients in /jvm-packages (#10517)
Bumps [org.apache.flink:flink-clients](https://github.com/apache/flink) from 1.19.0 to 1.19.1.
- [Commits](https://github.com/apache/flink/compare/release-1.19.0...release-1.19.1)

---
updated-dependencies:
- dependency-name: org.apache.flink:flink-clients
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-14 15:26:17 +08:00
dependabot[bot]
6fc1088592 Bump org.apache.maven.plugins:maven-project-info-reports-plugin (#10497)
Bumps [org.apache.maven.plugins:maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.5.0 to 3.6.1.
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.5.0...maven-project-info-reports-plugin-3.6.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-14 15:25:52 +08:00
Dmitry Razdoburdin
ce97de2a7c replace channel for sycl dependencies (#10576)
Co-authored-by: Dmitry Razdoburdin <>
2024-07-12 18:28:54 +08:00
Jiaming Yuan
5fea9d24f2 Small cleanup for CMake scripts. (#10573)
- Remove rabit.
2024-07-12 05:18:23 +08:00
Jiaming Yuan
6c403187ec Fix column split race condition. (#10572) 2024-07-12 01:07:12 +08:00
Jiaming Yuan
1ca4bfd20e Avoid thrust vector initialization. (#10544)
* Avoid thrust vector initialization.

- Add a wrapper for rmm device uvector.
- Split up the `Resize` method for HDV.
2024-07-11 17:29:27 +08:00
Jiaming Yuan
89da9f9741 [fed] Split up federated test CMake file. (#10566)
- Collect all federated test files into the same directory.
- Independently list the files.
2024-07-11 13:09:18 +08:00
Jiaming Yuan
5f910cd4ff [EM] Handle base idx in GPU histogram. (#10549) 2024-07-11 03:26:30 +08:00
Jiaming Yuan
34b154c284 Avoid the use of size_t in the partitioner. (#10541)
- Avoid the use of size_t in the partitioner.
- Use `Span` instead of `Elem` where `node_id` is not needed.
- Remove the `const_cast`.
- Make sure the constness is not removed in the `Elem` by making it reference only.

size_t is implementation-defined, which causes issues when we want to pass a pointer or span.
2024-07-11 00:43:08 +08:00
Jiaming Yuan
baba3e9eb0 Fix empty partition. (#10559) 2024-07-10 13:01:47 +08:00
Jiaming Yuan
8e2b874b4c [doc] Add notes about RMM and device ordinal. [skip ci] (#10562)
- Remove the experimental tag; we have been running it for a long time now.
- Add notes about avoiding setting the CUDA device.
- Add link in parameter.
2024-07-10 13:00:57 +08:00
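
A sketch of the setup pattern the note describes, assuming RMM and CuPy are installed; the key points are to install the RMM allocator before any GPU work starts and to pick the GPU through the `device` parameter rather than setting the CUDA device by hand.

```python
import cupy as cp
import rmm
import xgboost as xgb
from rmm.allocators.cupy import rmm_cupy_allocator

# Install an RMM pool allocator before XGBoost touches the GPU, and route
# CuPy allocations through it as well.
rmm.reinitialize(pool_allocator=True)
cp.cuda.set_allocator(rmm_cupy_allocator)

with xgb.config_context(use_rmm=True):
    X, y = cp.random.normal(size=(256, 8)), cp.random.normal(size=256)
    # Per the doc note: select the ordinal via `device`, e.g. "cuda:0",
    # instead of calling cudaSetDevice / cp.cuda.Device(...).use().
    booster = xgb.train({"device": "cuda", "tree_method": "hist"},
                        xgb.QuantileDMatrix(X, label=y), num_boost_round=5)
```
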
Jiaming Yuan
3ec74a1ba9 [doc] Add build_info to autodoc. [skip ci] (#10551) 2024-07-10 04:05:20 +08:00
david-cortes
8d0f2bfbaa [doc] Add more detailed explanations for advanced objectives (#10283)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-07-08 19:17:31 +08:00
Jiaming Yuan
2266db17d1 [R] Update roxygen. (#10556) 2024-07-08 17:02:46 +08:00
Dmitry Razdoburdin
0a3941be6d [sycl] Improve build configuration. (#10548)
Co-authored-by: Dmitry Razdoburdin <>
2024-07-07 02:10:54 +08:00
Jiaming Yuan
00264eb72b [EM] Basic distributed test for external memory. (#10492) 2024-07-06 01:15:20 +08:00
Dmitry Razdoburdin
513d7a7d84 [sycl] Reorder if-else statements to allow using CPU branches for sycl-devices (#10543)
* reorder if-else statements for sycl compatibility

* trigger check

---------

Co-authored-by: Dmitry Razdoburdin <>
2024-07-05 16:31:48 +08:00
Jiaming Yuan
620b2b155a Cache GPU histogram kernel configuration. (#10538) 2024-07-04 15:38:59 +08:00
Jiaming Yuan
cd1d108c7d [doc] Fix learning to rank tutorial. [skip ci] (#10539) 2024-07-03 22:52:26 +08:00
Jiaming Yuan
6243e7c43d [doc] Update link to release notes. [skip ci] (#10533) 2024-07-03 12:16:53 +08:00
Jiaming Yuan
628411a654 Enhance the threadpool implementation. (#10531)
- Accept an initialization function.
- Support void return tasks.
2024-07-03 12:13:27 +08:00
Jiaming Yuan
9cb4c938da [EM] Move prefetch in reset into the end of the iteration. (#10529) 2024-07-03 03:48:18 +08:00
Jiaming Yuan
e537b0969f Fix boolean array for arrow-backed DF. (#10527) 2024-07-02 17:02:54 +08:00
Jiaming Yuan
d33043a348 [coll] Allow using local host for testing. (#10526)
- Don't try to retrieve the IP address if a host is specified.
- Fix compiler deprecation warning.
2024-07-02 15:34:38 +08:00
Jiaming Yuan
a39fef2c67 [fed] Fixes for the encrypted GRPC backend. (#10503) 2024-07-02 15:15:12 +08:00
Jiaming Yuan
5f0c1e902b Small cleanup for error message. (#10502)
- The `Fail` function can handle file location automatically.
- Report concatenated error for connection poll.
- Typos.
2024-07-02 13:36:41 +08:00
dependabot[bot]
804cf85fe4 Bump docker/build-push-action from 5 to 6 (#10516)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-01 21:38:13 -07:00
Philip Hyunsu Cho
09d32f1f2b Fix build and C++ tests for FreeBSD (#10480) 2024-06-28 01:47:55 -07:00
Jiaming Yuan
e8a962575a [EM] Allow staging ellpack on host for GPU external memory. (#10488)
- New parameter `on_host`.
- Abstract format creation and stream creation into policy classes.
2024-06-28 04:42:18 +08:00
Jiaming Yuan
824fba783e Remove support for deprecated format in Python. (#10490) 2024-06-27 11:31:53 +08:00
Jiaming Yuan
2d88d17008 Remove deprecated DeviceQuantileDMatrix. (#10491) 2024-06-27 11:30:51 +08:00
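
For anyone still on the removed class, the migration is mechanical; a sketch (`QuantileDMatrix` has been the documented replacement since 1.7):

```python
import cupy as cp
import xgboost as xgb

X, y = cp.random.normal(size=(256, 8)), cp.random.normal(size=256)

# Removed by this commit:
#   dm = xgb.DeviceQuantileDMatrix(X, label=y)
# Replacement -- the device is inferred from the input data:
dm = xgb.QuantileDMatrix(X, label=y)
booster = xgb.train({"device": "cuda", "tree_method": "hist"}, dm,
                    num_boost_round=5)
```
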
Philip Hyunsu Cho
bed3695beb [jvm-packages] Honor skip.native.build option in xgboost4j-gpu (#10496) 2024-06-26 16:35:23 -07:00
Philip Hyunsu Cho
4b88dfff24 [CI] Temporarily pin pylint to 3.2.3 (#10494)
* [CI] Temporarily pin pylint to 3.2.3

* Add quotes

* Correct env
2024-06-26 14:08:49 -07:00
Hyunsu Cho
5efc979551 [CI] [Hotfix] Make S3 upload conditional 2024-06-26 06:21:46 -07:00
Philip Hyunsu Cho
08658b124d [CI] Add CI pipeline to build libxgboost4j.so targeting Linux ARM64 (#10487) 2024-06-26 01:43:15 -07:00
Philip Hyunsu Cho
4abf24aa4f Download manylinux2014 wheels in the release script (#10485) 2024-06-25 01:22:32 -07:00
Philip Hyunsu Cho
4c1920a6a5 [CI] Fix S3 upload for manylinux2014 wheels (#10483) 2024-06-24 14:40:28 -07:00
Philip Hyunsu Cho
d4dee25eb3 [CI] Set up pipeline to build manylinux2014 wheels (#10478) 2024-06-24 12:25:26 -07:00
Philip Hyunsu Cho
9a8bb7d186 Require Pandas 1.2+ (#10476) 2024-06-22 14:15:22 -07:00
Philip Hyunsu Cho
c519f5690e Fix read the doc on 2.1. (#10460) (#10474)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-06-21 16:10:07 -07:00
jpizagno
124bc57a6e [ISSUE-10463] Add missing import in learning-to-rank tutorial (#10464)
* added 'sorted()' to qid, and added pandas import

* Update learning_to_rank.rst

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-06-21 15:52:34 -07:00
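
The substance of the tutorial fix: with the `qid` interface, rows must be grouped by query id before fitting. A small sketch with illustrative data (not the tutorial's):

```python
import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(0)
feats = [f"f{i}" for i in range(4)]
df = pd.DataFrame(rng.normal(size=(120, 4)), columns=feats)
df["qid"] = rng.integers(0, 10, size=len(df))
df["rel"] = rng.integers(0, 3, size=len(df))

# XGBRanker requires rows grouped by query id -- hence the sort the fix adds.
df = df.sort_values("qid")

ranker = xgb.XGBRanker(tree_method="hist", objective="rank:ndcg")
ranker.fit(df[feats], df["rel"], qid=df["qid"])
```
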
david-cortes
61ac8eec8a [R] use Rf_ prefix for R C functions. (#10465) 2024-06-21 14:37:18 +08:00
Jiaming Yuan
26eb68859f Consistently report error in tests. (#10453) 2024-06-21 14:35:22 +08:00
Jiaming Yuan
b38c7fe2ce 2.1.0 release news. (#10378) 2024-06-21 14:34:53 +08:00
Jiaming Yuan
2b400b18d5 Small cleanup for rowset collection. (#10401) 2024-06-19 18:06:23 +08:00
Jiaming Yuan
e5f1720656 [EM] Avoid writing cut matrix to cache. (#10444) 2024-06-19 18:03:38 +08:00
Jiaming Yuan
63418d2f35 Link CMAKE_DL_LIBS when dlopen is used. (#10447) 2024-06-19 15:06:58 +08:00
Philip Hyunsu Cho
45150a844e [CI] [jvm-packages] Build libxgboost4j.dylib on M1 MacOS with OpenMP support (#10449) 2024-06-18 20:20:29 -07:00
Philip Hyunsu Cho
8689f0b562 [CI] Stop vendoring libomp.dylib in MacOS Python wheels (#10440) 2024-06-18 19:17:02 -07:00
Jiaming Yuan
b9e5229ff2 Update rapids (#10435)
* [CI] Update RAPIDS to latest stable

* RMM.

---------

Co-authored-by: hcho3 <2532981+hcho3@users.noreply.github.com>
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-06-18 05:01:57 +08:00
Jiaming Yuan
b4cc350ec5 Fix categorical data with external memory. (#10433) 2024-06-18 04:34:54 +08:00
Jiaming Yuan
a8ddbac163 [doc] Fixes for external memory document. (#10426) 2024-06-18 03:10:49 +08:00
Philip Hyunsu Cho
bc3747bdce [CI] Migrate to rockylinux8 / manylinux_2_28_x86_64 (#10399)
* [CI] Migrate to rockylinux8 / manylinux_2_28_x86_64

* Scrub all references to CentOS 7

* Fix

* Remove use of yum

* Use gcc-10 in cpu

* Temporarily disable -Werror

* Use GCC 9 for now

* Roll back gRPC

* Scrub all references to manylinux2014_x86_64

* Revise rename_whl.py to handle no-op rename

* Change JDK_VERSION back to 8

* Reviewer's comment

* Use GCC 10

* Use Spark 3.5.1, same as in pom.xml

* Fix JAR install
2024-06-17 12:07:49 -07:00
Jiaming Yuan
320e7c2041 [CI] Enable CI binary build for the vertical federated learning branch. (#10417)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-06-17 17:01:04 +08:00
Jiaming Yuan
6c83c8c2ef Allow blocking launch of federated tracker. (#10414)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-06-16 01:43:53 +08:00
Jiaming Yuan
49e25cfb36 Allow unaligned pointer if the array is empty. (#10418)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-06-15 19:10:21 +08:00
Jiaming Yuan
bbff74d2ff [dask] Workaround the tokenizer by changing the scatter function. (#10419)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-06-15 19:10:00 +08:00
Hyunsu Cho
601f2067c7 [CI] Hot fix for JVM tests on MacOS 2024-06-15 01:56:22 -07:00
Hyunsu Cho
b36d023f9e [CI] Hot fix for JVM tests on MacOS 2024-06-15 00:40:48 -07:00
Philip Hyunsu Cho
1ace9c66ec [CI] Fix JVM tests on Windows (#10404) 2024-06-15 00:21:40 -07:00
Richard (Rick) Zamora
dc14f98f40 Avoid default tokenization in Dask (#10398)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-06-14 19:44:54 +08:00
nocluebutalotofit
01ff2b2c29 [doc] Fix learning to rank (#10412) 2024-06-14 18:09:27 +08:00
Bobby Wang
cf0c1d0888 [pyspark] Avoid repartition. (#10408) 2024-06-12 02:26:10 +08:00
Christopher Tee
e0ebbc0746 [doc] Fix small typos (#10405) 2024-06-11 16:13:02 +08:00
Dmitry Razdoburdin
0c44067736 [SYCL] Optimize gradients calculations. (#10325)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-06-08 11:53:23 +08:00
Jiaming Yuan
c9f5fcaf21 [col] Small cleanup to federated comm. (#10397) 2024-06-07 21:19:04 +08:00
Philip Hyunsu Cho
f5815b6982 [Doc] Fix deployment for JVM docs (#10385)
* [Doc] Fix deployment for JVM docs

* Use READTHEDOCS_VERSION_NAME

* Fix html

* Default to master
2024-06-04 19:35:37 -07:00
Jiaming Yuan
9f6608d6aa Add python 3.12 classifier. (#10381) 2024-06-04 18:02:59 +08:00
Bobby Wang
bc7643d35e [jvm-packages] Don't cast to float if it's already float (#10386) 2024-06-04 18:01:51 +08:00
Philip Hyunsu Cho
9b7633c01d [CI] Use Python 3.10 to build docs (#10383) 2024-06-03 22:54:02 -07:00
Jiaming Yuan
43a57c4a85 Bump development version to 2.2. (#10376) 2024-06-04 12:59:16 +08:00
Jiaming Yuan
979e392deb Fix warnings in GPU dask tests. (#10358) 2024-06-04 12:58:58 +08:00
Jiaming Yuan
0808e50ae8 Sync stream in ellpack format. (#10374) 2024-06-04 12:58:26 +08:00
Philip Hyunsu Cho
c4ec64d409 Fix logo URL [skip ci] (#10382) 2024-06-03 20:48:19 -07:00
Philip Hyunsu Cho
4057f861c1 [CI] Add nightly CI job to test against dev version of deps (#10351)
* [CI] Add nightly CI job to test against dev version of deps

* Update build-containers.sh

* Add build step

* Wait for build artifact

* Try pinning dask

* Address reviewers' comments

* Fix unbound variable error

* Specify dev version exactly

* Pin dask=2024.1.1
2024-06-03 19:28:55 -07:00
Sid Mehta
eb6622ff7a Add Comet Logo to the Readme. (#10380) 2024-06-04 08:44:52 +08:00
dependabot[bot]
4847f24840 Bump com.nvidia:rapids-4-spark_2.12 in /jvm-packages (#10362)
Bumps com.nvidia:rapids-4-spark_2.12 from 24.04.0 to 24.04.1.

---
updated-dependencies:
- dependency-name: com.nvidia:rapids-4-spark_2.12
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-03 21:21:44 +08:00
dependabot[bot]
492bb76f64 Bump org.codehaus.mojo:exec-maven-plugin in /jvm-packages (#10363)
Bumps [org.codehaus.mojo:exec-maven-plugin](https://github.com/mojohaus/exec-maven-plugin) from 3.2.0 to 3.3.0.
- [Release notes](https://github.com/mojohaus/exec-maven-plugin/releases)
- [Commits](https://github.com/mojohaus/exec-maven-plugin/compare/3.2.0...3.3.0)

---
updated-dependencies:
- dependency-name: org.codehaus.mojo:exec-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-03 21:03:58 +08:00
dependabot[bot]
7157b9586b Bump org.apache.maven.plugins:maven-javadoc-plugin in /jvm-packages (#10360)
Bumps [org.apache.maven.plugins:maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 3.6.3 to 3.7.0.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-3.6.3...maven-javadoc-plugin-3.7.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-03 21:03:02 +08:00
dependabot[bot]
99a7f5b3ab Bump org.apache.maven.plugins:maven-javadoc-plugin (#10373)
Bumps [org.apache.maven.plugins:maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 3.6.3 to 3.7.0.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-3.6.3...maven-javadoc-plugin-3.7.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-03 21:02:19 +08:00
Jiaming Yuan
7f3e92d71a Bump the cache github action to 4.0.2. (#10377) 2024-06-03 16:52:21 +08:00
dependabot[bot]
1164dc07cd Bump actions/setup-python from 5.0.0 to 5.1.0 (#10368)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.0.0 to 5.1.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](0a5c615913...82c7e631bb)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-03 13:53:21 +08:00
dependabot[bot]
6cfc3e16fc Bump actions/checkout from 4.1.1 to 4.1.6 (#10369)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.1 to 4.1.6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](b4ffde65f4...a5ac7e51b4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-02 13:13:54 +08:00
dependabot[bot]
8286a190b7 Bump actions/upload-artifact from 4.3.1 to 4.3.3 (#10366)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.3.1 to 4.3.3.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](5d5d22a312...65462800fd)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-02 02:11:49 +08:00
Jiaming Yuan
4f48647932 Fix typo. (#10353) 2024-06-02 02:07:55 +08:00
Jiaming Yuan
92cba25fe2 Remove reference to R win64 MSVC build. (#10355) 2024-06-02 02:06:35 +08:00
Jiaming Yuan
d2d01d977a Remove unnecessary fetch operations in external memory. (#10342) 2024-05-31 13:16:40 +08:00
Jiaming Yuan
c2e3d4f3cd [dask] Update dask demo for using the new dask backend. (#10347) 2024-05-31 08:03:20 +08:00
Jiaming Yuan
e6eefea5e2 [coll] Move the rabit poll helper. (#10349) 2024-05-31 08:02:21 +08:00
Astariul
0717e886e5 [doc] Fix typo & format in C API documentation (#10350) 2024-05-30 23:14:42 +08:00
Philip Hyunsu Cho
324f2d4e4a Handle float128 generically (#10322) 2024-05-30 20:14:39 +08:00
david-cortes
8998733ef4 [R] Rename BIAS -> (Intercept) (#10337) 2024-05-30 19:43:32 +08:00
Astariul
bc6c993aaa [doc] Fix typo (#10340) 2024-05-29 17:13:30 +08:00
Jiaming Yuan
2de67f0050 [coll] Prevent race during error check. (#10319) 2024-05-28 15:43:16 -07:00
Jiaming Yuan
7354955cbb Test federated plugin using GitHub action. (#10336)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-05-29 02:28:14 +08:00
Philip Hyunsu Cho
7ae5c972f9 [CI] Upgrade github workflows to use latest Conda setup action (#10320)
Co-authored-by: Christian Clauss <cclauss@me.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-28 10:23:07 -07:00
dependabot[bot]
e20ed8ab9c Bump org.sonatype.plugins:nexus-staging-maven-plugin (#10335)
Bumps org.sonatype.plugins:nexus-staging-maven-plugin from 1.6.13 to 1.7.0.

---
updated-dependencies:
- dependency-name: org.sonatype.plugins:nexus-staging-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-28 19:08:40 +08:00
Jiaming Yuan
5627af6b21 [coll] Increase timeout limit. (#10332) 2024-05-28 10:20:49 +08:00
david-cortes
949f062229 [R] Fix incorrect division of classification/ranking objectives (#10327) 2024-05-27 10:27:32 +08:00
david-cortes
b2008773bb [R] Update docs for custom user functions (#10328) 2024-05-27 10:26:34 +08:00
Dmitry Razdoburdin
0058301e6f [sycl] optimise hist building (#10311)
Co-authored-by: Dmitry Razdoburdin <>
2024-05-27 10:21:22 +08:00
Bobby Wang
9def441e9a [CI] add script to generate meta info and upload to s3 (#10295)
* [CI] add script to generate meta info and upload to s3

* Write Python script to generate meta.json

* Update other pipelines

* Add wheel_name field

* Add description

---------

Co-authored-by: Hyunsu Cho <phcho@nvidia.com>
2024-05-24 10:03:28 -07:00
david-cortes
5086decb0c [R] Reshape predictions for custom eval metric when they are 2D (#10323) 2024-05-24 17:28:30 +08:00
dependabot[bot]
95ba0998b3 Bump org.codehaus.mojo:exec-maven-plugin from 3.2.0 to 3.3.0 in /jvm-packages/xgboost4j (#10309)
updated-dependencies:
- dependency-name: org.codehaus.mojo:exec-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-23 18:11:25 +08:00
dependabot[bot]
089bee0a00 Bump org.apache.maven.plugins:maven-deploy-plugin (#10240)
Bumps [org.apache.maven.plugins:maven-deploy-plugin](https://github.com/apache/maven-deploy-plugin) from 3.1.1 to 3.1.2.
- [Release notes](https://github.com/apache/maven-deploy-plugin/releases)
- [Commits](https://github.com/apache/maven-deploy-plugin/compare/maven-deploy-plugin-3.1.1...maven-deploy-plugin-3.1.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-deploy-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-23 17:55:49 +08:00
dependabot[bot]
5a084fb9b3 Bump org.apache.maven.plugins:maven-jar-plugin in /jvm-packages (#10244)
Bumps [org.apache.maven.plugins:maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.4.0 to 3.4.1.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.4.0...maven-jar-plugin-3.4.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-23 17:06:44 +08:00
dependabot[bot]
291d417f57 Bump net.alchim31.maven:scala-maven-plugin in /jvm-packages/xgboost4j (#10260)
Bumps net.alchim31.maven:scala-maven-plugin from 4.9.0 to 4.9.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-23 14:40:37 +08:00
Jiaming Yuan
15eb553c1f [doc] Add a coarse map for XGBoost features to assist development. [skip ci] (#10310) 2024-05-23 14:25:15 +08:00
Bobby Wang
932d7201f9 [jvm-packages] refine tracker (#10313)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-23 12:46:21 +08:00
Jiaming Yuan
966dc81788 [coll] Keep the tracker alive during initialization error. (#10306) 2024-05-23 11:13:59 +08:00
Jiaming Yuan
d5fcbee44b Add timeout for distributed tests. (#10315) 2024-05-23 11:11:49 +08:00
dependabot[bot]
b8a7773736 Bump org.apache.maven.plugins:maven-deploy-plugin (#10235)
Bumps [org.apache.maven.plugins:maven-deploy-plugin](https://github.com/apache/maven-deploy-plugin) from 3.1.1 to 3.1.2.
- [Release notes](https://github.com/apache/maven-deploy-plugin/releases)
- [Commits](https://github.com/apache/maven-deploy-plugin/compare/maven-deploy-plugin-3.1.1...maven-deploy-plugin-3.1.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-deploy-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-22 18:53:30 +08:00
dependabot[bot]
e56ca69c31 Bump dorny/paths-filter from 2 to 3 (#10276)
Bumps [dorny/paths-filter](https://github.com/dorny/paths-filter) from 2 to 3.
- [Release notes](https://github.com/dorny/paths-filter/releases)
- [Changelog](https://github.com/dorny/paths-filter/blob/master/CHANGELOG.md)
- [Commits](https://github.com/dorny/paths-filter/compare/v2...v3)

---
updated-dependencies:
- dependency-name: dorny/paths-filter
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-05-22 17:14:21 +08:00
Jiaming Yuan
1b25d23583 [JVM-packages] Prevent memory leak. (#10307) 2024-05-22 13:47:59 +08:00
dependabot[bot]
6a43a4b9d3 Bump mamba-org/provision-with-micromamba from 14 to 16 (#10275)
Bumps [mamba-org/provision-with-micromamba](https://github.com/mamba-org/provision-with-micromamba) from 14 to 16.
- [Release notes](https://github.com/mamba-org/provision-with-micromamba/releases)
- [Commits](f347426e57...3c96c0c276)

---
updated-dependencies:
- dependency-name: mamba-org/provision-with-micromamba
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-21 18:39:35 -07:00
Dmitry Razdoburdin
c7e7ce7569 [SYCL] Add nodes initialisation (#10269)
---------

Co-authored-by: Dmitry Razdoburdin <>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-21 23:38:52 +08:00
Jiaming Yuan
7a54ca41c9 [CI] Bump checkout action version. (#10305) 2024-05-21 16:38:20 +08:00
dependabot[bot]
d66b5570f4 Bump commons-logging:commons-logging in /jvm-packages/xgboost4j (#10294)
Bumps commons-logging:commons-logging from 1.3.1 to 1.3.2.

---
updated-dependencies:
- dependency-name: commons-logging:commons-logging
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-21 15:49:26 +08:00
dependabot[bot]
841867e05a Bump actions/checkout from 2 to 4 (#10274)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Commits](https://github.com/actions/checkout/compare/v2...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-21 13:18:03 +08:00
dependabot[bot]
e7f8f40240 Bump ossf/scorecard-action from 2.3.1 to 2.3.3 (#10280)
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.3.1 to 2.3.3.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](0864cf1902...dc50aa9510)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-21 11:56:56 +08:00
dependabot[bot]
d5c9ef64a5 Bump conda-incubator/setup-miniconda from 2.1.1 to 3.0.4 (#10278)
Bumps [conda-incubator/setup-miniconda](https://github.com/conda-incubator/setup-miniconda) from 2.1.1 to 3.0.4.
- [Release notes](https://github.com/conda-incubator/setup-miniconda/releases)
- [Commits](https://github.com/conda-incubator/setup-miniconda/compare/v2.1.1...v3.0.4)

---
updated-dependencies:
- dependency-name: conda-incubator/setup-miniconda
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-21 10:15:00 +08:00
Jiaming Yuan
a5a58102e5 Revamp the rabit implementation. (#10112)
This PR replaces the original RABIT implementation with a new one, which has already been partially merged into XGBoost. The new one features:
- Federated learning for both CPU and GPU.
- NCCL.
- More data types.
- A unified interface for all the underlying implementations.
- Improved timeout handling for both tracker and workers.
- Exhaustive tests with metrics (fixed a couple of bugs along the way).
- A reusable tracker for Python and JVM packages.
2024-05-20 11:56:23 +08:00
Jiaming Yuan
ba9b4cb1ee Fix pylint. (#10296) 2024-05-17 13:28:39 +08:00
Jiaming Yuan
835e59e538 Use a thread pool for external memory. (#10288) 2024-05-16 19:32:12 +08:00
Philip Hyunsu Cho
ee2afb3256 Adopt new logo (#10270) 2024-05-14 12:58:50 -07:00
Jiaming Yuan
ca1d04bcb7 Release data in cache. (#10286) 2024-05-14 14:20:19 +08:00
Jiaming Yuan
f1f69ff10e [CI] Fixes for using the latest modin. (#10285) 2024-05-14 12:13:35 +08:00
Jiaming Yuan
871fabeee3 [doc][dask] Update notes about k8s. (#10271) 2024-05-14 04:21:02 +08:00
Christian Clauss
75fe2ff0c3 Keep GitHub Actions up to date with Dependabot (#10268)
# Fixes software supply chain safety warnings like the ones shown at the bottom right of
https://github.com/dmlc/xgboost/actions/runs/9048469681

* [Keeping your actions up to date with Dependabot](https://docs.github.com/en/code-security/dependabot/working-with-dependabot/keeping-your-actions-up-to-date-with-dependabot)
* [Configuration options for the dependabot.yml file - package-ecosystem](https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#package-ecosystem)
2024-05-13 01:57:13 -07:00
Jiaming Yuan
d81e319e78 Fixes for the latest pandas. (#10266)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-05-12 11:15:46 +08:00
Jiaming Yuan
5e816e616a [CI] Upgrade setup-r. (#10267) 2024-05-12 05:25:29 +08:00
Jiaming Yuan
5de57435c7 Be more lenient on floating point error for AUC. (#10264) 2024-05-11 08:48:11 +08:00
Dmitry Razdoburdin
f588252481 [sycl] add loss guided hist building (#10251)
Co-authored-by: Dmitry Razdoburdin <>
2024-05-10 22:35:13 +08:00
Bobby Wang
9b465052ce [jvm-packages] fix group col for gpu packages (#10254) 2024-05-09 07:44:07 +08:00
Jiaming Yuan
8237920c48 [jvm-packages] Freeze Spark to 3.4.1 for now. (#10253)
The newer Spark version for CPU conflicts with the more conservative version used by RAPIDS.
2024-05-07 09:00:59 -07:00
Jiaming Yuan
73afef1a6e Fixes for numpy 2.0. (#10252) 2024-05-07 03:54:32 +08:00
Dmitry Razdoburdin
dcc9639b91 [sycl] add data initialisation for training (#10222)
Co-authored-by: Dmitry Razdoburdin <>
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-05-05 12:07:10 +08:00
Jiaming Yuan
5e64276a9b Update nvtx. (#10227) 2024-04-29 06:33:46 +08:00
Jiaming Yuan
837d44a345 Support more sklearn tags for testing. (#10230) 2024-04-29 06:33:23 +08:00
dependabot[bot]
f8c3d22587 Bump org.apache.spark:spark-mllib_2.12 (#10070)
Bumps org.apache.spark:spark-mllib_2.12 from 3.4.1 to 3.5.1.

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-28 15:02:14 -07:00
Jiaming Yuan
54754f29dd [pyspark] Sort workers by task ID. (#10220) 2024-04-28 18:05:15 +08:00
dependabot[bot]
f355418186 Bump org.apache.maven.plugins:maven-gpg-plugin (#10211)
Bumps [org.apache.maven.plugins:maven-gpg-plugin](https://github.com/apache/maven-gpg-plugin) from 3.2.3 to 3.2.4.
- [Release notes](https://github.com/apache/maven-gpg-plugin/releases)
- [Commits](https://github.com/apache/maven-gpg-plugin/compare/maven-gpg-plugin-3.2.3...maven-gpg-plugin-3.2.4)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-gpg-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-26 12:35:48 -07:00
dependabot[bot]
4d69ce96b3 Bump org.apache.maven.plugins:maven-jar-plugin (#10210)
Bumps [org.apache.maven.plugins:maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.4.0 to 3.4.1.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.4.0...maven-jar-plugin-3.4.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-26 12:35:34 -07:00
dependabot[bot]
a5003fc8ce Bump net.alchim31.maven:scala-maven-plugin in /jvm-packages/xgboost4j (#10217)
Bumps net.alchim31.maven:scala-maven-plugin from 4.8.1 to 4.9.0.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-26 12:35:20 -07:00
dependabot[bot]
8ed85b8ce7 Bump hadoop.version from 3.3.6 to 3.4.0 in /jvm-packages/xgboost4j (#10156)
Bumps `hadoop.version` from 3.3.6 to 3.4.0.

Updates `org.apache.hadoop:hadoop-hdfs` from 3.3.6 to 3.4.0

Updates `org.apache.hadoop:hadoop-common` from 3.3.6 to 3.4.0

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-hdfs
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-26 11:15:03 -07:00
Philip Hyunsu Cho
edb945d59b [CI] Use native arm64 worker in GHAction to build M1 wheel (#10225)
* [CI] Use native arm64 worker in GHAction to build M1 wheel

* Set up Conda

* Use mamba

* debug

* fix

* fix

* fix

* fix

* fix

* Temporarily disable other tests

* Fix prefix

* Use micromamba

* Use conda-incubator/setup-miniconda

* Use mambaforge

* Fix

* Fix prefix

* Don't use deprecated set-output

* Add verbose output from build

* verbose

* Specify arch

* Bump setup-miniconda to v3

* Use Python 3.9

* Restore deleted files

* WAR.

---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-04-26 10:16:55 -07:00
Jiaming Yuan
a81b78e56b [CI] Test new setup-r. (#10228) 2024-04-26 08:23:31 -07:00
Dmitry Razdoburdin
58513dc288 [SYCL] Add sampling initialization (#10216)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-04-25 04:35:52 +08:00
Jiaming Yuan
59d7b8dc72 [doc] Add typing to dask demos. (#10207) 2024-04-23 00:57:05 +08:00
Jiaming Yuan
3fbb221fec [coll] Implement shutdown for tracker and comm. (#10208)
- Force shutdown the tracker.
- Implement a shutdown notice for the error-handling thread in comm.
2024-04-20 04:08:17 +08:00
Bobby Wang
8fb05c8c95 [pyspark] support stage-level for yarn/k8s (#10209) 2024-04-20 00:24:40 +08:00
dependabot[bot]
bb212bf33c Bump org.apache.flink:flink-clients in /jvm-packages (#10197)
Bumps [org.apache.flink:flink-clients](https://github.com/apache/flink) from 1.18.0 to 1.19.0.
- [Commits](https://github.com/apache/flink/compare/release-1.18.0...release-1.19.0)

---
updated-dependencies:
- dependency-name: org.apache.flink:flink-clients
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-18 15:20:31 -07:00
Jiaming Yuan
3f64b4fde3 [coll] Add global functions. (#10203) 2024-04-19 03:17:23 +08:00
dependabot[bot]
551fa6e25e Bump scalatest.version from 3.2.17 to 3.2.18 in /jvm-packages/xgboost4j (#10196)
Bumps `scalatest.version` from 3.2.17 to 3.2.18.

Updates `org.scalatest:scalatest_2.12` from 3.2.17 to 3.2.18
- [Release notes](https://github.com/scalatest/scalatest/releases)
- [Commits](https://github.com/scalatest/scalatest/compare/release-3.2.17...release-3.2.18)

Updates `org.scalactic:scalactic_2.12` from 3.2.17 to 3.2.18
- [Release notes](https://github.com/scalatest/scalatest/releases)
- [Commits](https://github.com/scalatest/scalatest/compare/release-3.2.17...release-3.2.18)

---
updated-dependencies:
- dependency-name: org.scalatest:scalatest_2.12
  dependency-type: direct:development
  update-type: version-update:semver-patch
- dependency-name: org.scalactic:scalactic_2.12
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-18 11:46:28 -07:00
dependabot[bot]
531ff21b20 Bump org.scala-lang.modules:scala-collection-compat_2.12 (#10193)
Bumps [org.scala-lang.modules:scala-collection-compat_2.12](https://github.com/scala/scala-collection-compat) from 2.11.0 to 2.12.0.
- [Release notes](https://github.com/scala/scala-collection-compat/releases)
- [Commits](https://github.com/scala/scala-collection-compat/compare/v2.11.0...v2.12.0)

---
updated-dependencies:
- dependency-name: org.scala-lang.modules:scala-collection-compat_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-18 07:18:18 -07:00
Jiaming Yuan
303c603c7d [pyspark] Reuse the collective communicator. (#10198) 2024-04-18 19:09:30 +08:00
dependabot[bot]
0aa2600399 Bump org.apache.maven.plugins:maven-jar-plugin (#10202)
Bumps [org.apache.maven.plugins:maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.3.0...maven-jar-plugin-3.4.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-17 23:04:41 -07:00
Philip Hyunsu Cho
f53f5ca359 [CI] Update machine images (#10201) 2024-04-17 19:15:06 -07:00
Jiaming Yuan
4b10200456 [coll] Improve event loop. (#10199)
- Add a test for blocking calls.
- Do not require the queue to be empty after waking up; this frees up the thread to answer blocking calls.
- Handle EOF in read.
- Improve the error message in the result. Allow concatenation of multiple results.
2024-04-18 03:29:52 +08:00
dependabot[bot]
7c0c9677a9 Bump org.apache.maven.plugins:maven-jar-plugin (#10191)
Bumps [org.apache.maven.plugins:maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.3.0...maven-jar-plugin-3.4.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-16 19:52:33 -07:00
Philip Hyunsu Cho
32be4669fb [jvm-packages] Omnibus patch to update all minor dependencies (#10188)
* Fold in #10184

* Fold in #10176

* Fold in #10168

* Fold in #10165

* Fold in #10164

* Fold in #10155

* Fold in #10062

* Fold in #9984

* Fold in #9843

* Upgrade to Maven 3.6.3
2024-04-16 19:43:04 -07:00
Philip Hyunsu Cho
3d1d97c8cc [CI] Reduce clutter from dependabot (#10187) 2024-04-16 19:42:08 -07:00
Eric Leung
9e354fb120 docs: update Ruby package link (#10182) 2024-04-16 15:09:59 -07:00
github-actions[bot]
2925cebdca [CI] Use latest RAPIDS; Pandas 2.0 compatibility fix (#10175)
* [CI] Update RAPIDS to latest stable

* [CI] Use rapidsai stable channel; fix syntax errors in Dockerfile.gpu

* Don't combine astype() with loc()

* Work around https://github.com/dmlc/xgboost/issues/10181

* Fix formatting

* Fix test

---------

Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2024-04-15 13:38:53 -07:00
Dmitry Razdoburdin
6e5c335cea [SYCL] Add basic features for QuantileHistMaker (#10174)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-04-15 21:24:46 +08:00
Hyunsu Cho
882f4136e0 [CI] Update create-pull-request action 2024-04-13 13:52:12 -07:00
Trinh Quoc Anh
732e27cebc [doc] Update python3statement URL (#10179) 2024-04-13 01:10:50 +08:00
Jiaming Yuan
1022909bbe Fix global config for external memory. (#10173)
Pass the thread-local configuration between threads.
2024-04-11 01:29:28 +08:00
Jiaming Yuan
f0a138f33a Fix pyspark with verbosity=3. (#10172) 2024-04-09 23:18:56 +08:00
dependabot[bot]
a99bb38bd2 Bump org.apache.maven.plugins:maven-gpg-plugin from 3.1.0 to 3.2.2 in /jvm-packages/xgboost4j-spark (#10151) 2024-04-03 16:45:54 -07:00
Fabi
e15d61b916 docs: fix bug in tutorial (#10143) 2024-04-01 10:14:40 +08:00
david-cortes
bc9ea62ec0 [R] Make xgb.cv work with xgb.DMatrix only, adding support for survival and ranking fields (#10031)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-03-31 21:53:00 +08:00
Jiaming Yuan
8bad677c2f Update collective implementation. (#10152)
* Update collective implementation.

- Clean up resources during `Finalize` to avoid handling threads in the destructor.
- Calculate the size for allgather automatically.
- Use simple allgather for small (smaller than the number of workers) allreduce.
2024-03-30 18:57:31 +08:00
Jiaming Yuan
230010d9a0 Cleanup set info. (#10139)
- Use the array interface internally.
- Deprecate `XGDMatrixSetDenseInfo`.
- Deprecate `XGDMatrixSetUIntInfo`.
- Move the handling of `DataType` into the deprecated C function.

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-03-26 23:26:24 +08:00
Dmitry Razdoburdin
6a7c6a8ae6 add sycl realisation of ghist builder (#10138)
Co-authored-by: Dmitry Razdoburdin <>
2024-03-23 12:55:25 +08:00
Jiaming Yuan
e1695775e9 [CI] Fix yml in github action. (#10134) 2024-03-20 16:38:03 +08:00
Jiaming Yuan
2b2aac85f4 [CI] Update scorecard actions. (#10133) 2024-03-20 15:51:38 +08:00
Jiaming Yuan
ca4801f81d Work with IPv6 in the new tracker. (#10125) 2024-03-20 05:19:23 +08:00
Jiaming Yuan
53fc17578f Use std::uint64_t for row index. (#10120)
- Use std::uint64_t instead of size_t to avoid an implementation-defined type.
- Rename it to bst_idx_t to account for other types of indexing.
- Small cleanup to the base header.
2024-03-15 18:43:49 +08:00
Jiaming Yuan
56b1868278 Fix compilation with the latest ctk. (#10123) 2024-03-15 08:04:41 +08:00
Dmitry Razdoburdin
617970a0c2 [SYCL] Add split evaluation (#10119)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-03-15 01:46:46 +08:00
Michael Mayer
e0f890ba28 [R] deprecate watchlist (#10110) 2024-03-13 17:02:34 +08:00
Jiaming Yuan
1450aebb74 Fix pairwise objective with NDCG metric along with custom gain. (#10100)
* Fix pairwise objective with NDCG metric.

- Allow setting `ndcg_exp_gain` for `rank:pairwise`.

This is useful when training with the pairwise objective while evaluating with the NDCG metric; a sketch follows below.
2024-03-11 14:54:10 +08:00
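A minimal sketch of what #10100 enables, pairing the pairwise objective with the NDCG metric and a non-exponential gain (synthetic data for illustration only):

```
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 4, size=100)              # relevance labels
qid = np.sort(rng.integers(0, 10, size=100))  # query ids, sorted

ranker = xgb.XGBRanker(
    objective="rank:pairwise",
    eval_metric="ndcg",
    ndcg_exp_gain=False,  # custom (identity) gain, now honoured by rank:pairwise
    n_estimators=10,
)
ranker.fit(X, y, qid=qid)
```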
Jiaming Yuan
06c9702028 [doc] Fix the default value for lambdarank_pair_method. (#10098) 2024-03-11 14:53:17 +08:00
david-cortes
b023a253b4 [R] Rename watchlist -> evals (#10032) 2024-03-10 06:48:06 +08:00
Jiaming Yuan
2c13f90384 Support graphviz plot for multi-target tree. (#10093) 2024-03-09 05:35:25 +08:00
Jiaming Yuan
e14c3b9325 Optional normalization for learning to rank. (#10094) 2024-03-08 12:41:21 +08:00
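If the switch from #10094 surfaces as a booster parameter named `lambdarank_normalization` (an assumption based on the commit title; verify against the parameter docs), disabling it might look like:

```
# Hypothetical parameter name inferred from the commit title.
params = {"objective": "rank:ndcg", "lambdarank_normalization": False}
```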
Philip Hyunsu Cho
bc516198dc [CI] Cancel GH Action job if a newer commit is published (#10088) 2024-03-04 21:36:08 -08:00
Philip Hyunsu Cho
23a37dcaf9 [CI] Test R package with CMake (#10087)
* [CI] Test R package with CMake

* Fix

* Fix

* Update test_r_package.py

* Fix CMake flag for R package

* Install system deps

* Fix

* Use sudo
2024-03-04 12:32:44 -08:00
Jiaming Yuan
d07b7fe8c8 Small cleanup for mock tests. (#10085) 2024-03-04 23:32:11 +08:00
Dmitry Razdoburdin
7a61216690 [sycl] add partitioning and related tests (#10080)
Co-authored-by: Dmitry Razdoburdin <>
2024-03-02 01:49:27 +08:00
david-cortes
2c12b956da [R] Refactor callback structure and attributes (#9957) 2024-03-01 15:57:47 +08:00
Jiaming Yuan
3941b31ade Disable column sample by node for the exact tree method. (#10083) 2024-03-01 14:16:10 +08:00
Jiaming Yuan
8189126d51 Add CUDA iterator to tensor view. (#10074) 2024-03-01 14:15:31 +08:00
Bobby Wang
d24df52bb9 [pyspark] rework the log (#10077) 2024-02-29 16:47:31 +08:00
Jiaming Yuan
5ac233280e Require context in aggregators. (#10075) 2024-02-28 03:12:42 +08:00
Dmitry Razdoburdin
761845f594 [SYCL] Implement row set collection. (#10057)
Co-authored-by: Dmitry Razdoburdin <>
2024-02-26 21:07:36 +08:00
Jiaming Yuan
0ce4372bd4 Use UBJSON for serializing splits for vertical data split. (#10059) 2024-02-25 00:18:23 +08:00
david-cortes
f7005d32c1 [R] Use inplace predict (#9829)
---------

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2024-02-24 02:03:54 +08:00
José Morales
729fd97196 [doc] Fix spark_estimator doc (#10066) 2024-02-23 12:01:24 +08:00
Philip Hyunsu Cho
9f7b94cf70 [CI] Patch GitHub Action pipeline (#10067) 2024-02-22 17:16:48 -08:00
Hyunsu Cho
3ab8ccaa0c [CI] Hotfix for GH Action 2024-02-22 14:55:41 -08:00
Hyunsu Cho
aaa950951b [CI] Hotfix for GH Action 2024-02-22 14:53:55 -08:00
Hyunsu Cho
5b1d7a760b [CI] Hotfix for GH Action 2024-02-22 14:40:14 -08:00
Jiaming Yuan
eb281ff9b4 [CI] Fix JVM tests on GH Action (#10064)
---------

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2024-02-22 14:21:32 -08:00
UncleLLD
b9171d8f0b [doc] Fix python docs (#10058) 2024-02-22 17:34:12 +08:00
Jiaming Yuan
2e4ea5ecc0 Support f64 for ubjson. (#10055) 2024-02-21 02:18:42 +08:00
Jiaming Yuan
8ea705e4d5 Support sample weight in sklearn custom objective. (#10050) 2024-02-21 00:43:14 +08:00
Jiaming Yuan
69a17d5114 Fix with None input. (#10052) 2024-02-20 22:34:22 +08:00
Jiaming Yuan
d37b83e8d9 Fix UBJSON with boolean value. (#10054) 2024-02-20 22:13:51 +08:00
david-cortes
6e3c899ba7 [R] Don't cap global number of threads for serialization (#10028) 2024-02-20 11:13:00 +08:00
Louis Desreumaux
edf501d227 Implement contribution prediction with QuantileDMatrix (#10043)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-02-19 21:03:29 +08:00
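A short sketch of what #10043 enables, assuming the usual `pred_contribs` flag now also works when predicting from a `QuantileDMatrix`:

```
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 6))
y = X @ rng.normal(size=6)

qdm = xgb.QuantileDMatrix(X, label=y)
booster = xgb.train({"tree_method": "hist"}, qdm, num_boost_round=8)
contribs = booster.predict(qdm, pred_contribs=True)  # (n_samples, n_features + 1)
```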
Dmitry Razdoburdin
057f03cacc [SYCL] Initial implementation of GHistIndexMatrix (#10045)
Co-authored-by: Dmitry Razdoburdin <>
2024-02-19 04:27:15 +08:00
UncleLLD
7cc256e246 update python intro doc (#10033) 2024-02-14 10:01:38 -08:00
github-actions[bot]
c182c584ca [CI] Update RAPIDS to latest stable (#10042)
Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
2024-02-13 15:29:09 -08:00
david-cortes
4de866211d [R] switch to URI reader (#10024) 2024-02-05 05:03:38 +08:00
Philip Hyunsu Cho
f2095f1d5b [Doc] Fix formatting for R package doc (#10030) 2024-02-04 16:23:35 +08:00
david-cortes
a730c7e67e [R] allow using seed with regular RNG (#10029) 2024-02-04 16:22:22 +08:00
david-cortes
662854c7d7 [R] Document handling of indexes (#10019)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-02-02 05:39:09 +08:00
Philip Hyunsu Cho
4dfbe2a893 [CI] Test building for 32-bit arch (#10021)
* [CI] Test building for 32-bit arch

* Update CMakeLists.txt

* Fix yaml

* Use Debian container

* Remove -Werror for 32-bit

* Revert "Remove -Werror for 32-bit"

This reverts commit c652bc6a037361bcceaf56fb01863210b462793d.

* Don't error for overloaded-virtual warning

* Ignore some warnings from dmlc-core

* Fix compiler warnings

* Fix formatting

* Apply suggestions from code review

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>

* Add more cast

---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-01-31 13:20:51 -08:00
Dmitry Razdoburdin
234674a0a6 [sycl] Add partition builder. (#10011)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-01-31 17:39:48 +08:00
david-cortes
0955213220 [R] rename proxy dmatrix -> data batch (#10016) 2024-01-31 15:43:58 +08:00
david-cortes
1e72dc1276 [R] Add optional check on column names matching in predict (#10020) 2024-01-31 15:43:22 +08:00
Ammar Azman
c53d59f8db [doc] Fix typos in code (#10023)
change `resutls` to `results`
2024-01-31 15:18:16 +08:00
david-cortes
5e00a71671 [R] rename slice to avoid dplyr conflict (#10017) 2024-01-31 05:33:37 +08:00
david-cortes
df7cf744b4 [R] Remove enable_categorical parameter (#10018) 2024-01-31 05:17:36 +08:00
david-cortes
3abbbe41ac [R] Add data iterator, quantile dmatrix, external memory, and missing feature_types (#9913) 2024-01-30 19:26:44 +08:00
UncleLLD
d9f4ab557a [doc] Fix data format (#10013) 2024-01-30 17:24:43 +08:00
Jiaming Yuan
54b71c8fba Fix with black 24.1.1. (#10014) 2024-01-30 17:24:11 +08:00
Jiaming Yuan
65d7bf2dfe Handle np integer in model slice and prediction. (#10007) 2024-01-26 04:58:48 +08:00
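A sketch of what #10007 covers, passing numpy integers where plain ints were previously required (synthetic data; behavior as described by the commit title):

```
import numpy as np
import xgboost as xgb

X = np.random.default_rng(0).normal(size=(32, 4))
dtrain = xgb.DMatrix(X, label=X[:, 0])
booster = xgb.train({}, dtrain, num_boost_round=8)

sub = booster[np.int64(0) : np.int64(4)]  # model slice with np integers
preds = booster.predict(dtrain, iteration_range=(np.int64(0), np.int64(4)))
```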
Jiaming Yuan
a76d6c6131 Fix cpp deprecation. (#10010) 2024-01-26 02:13:40 +08:00
Jiaming Yuan
d3f2dbe64f [dask] Add seed to demos. (#10009) 2024-01-26 02:09:38 +08:00
Philip Hyunsu Cho
c8f5d190c6 [CI] Stop Windows pipeline upon a failing pytest (#10003) 2024-01-24 22:54:21 -08:00
Philip Hyunsu Cho
60ec7b8424 Throw error for 32-bit archs (#10005) 2024-01-24 13:02:39 -08:00
Jiaming Yuan
d12cc1090a Refactor tests for training continuation. (#9997) 2024-01-24 16:07:19 +08:00
david-cortes
5062a3ab46 [R] Support booster slicing. (#9948) 2024-01-21 05:11:26 +08:00
david-cortes
c5d0608057 [R] Remove parameters and attributes related to ntree and rebase iterationrange (#9935) 2024-01-21 00:56:57 +08:00
david-cortes
60b9d2eeb9 [R] Avoid memory copies in predict (#9902) 2024-01-21 00:53:18 +08:00
Philip Hyunsu Cho
2c8fa8b8b9 [CI] Skip MSVC when building R package (#9995)
* [CI] Skip MSVC when building R package

* [CI] Stop building binary tarball for Windows

* Remove unused script
2024-01-18 08:09:53 -08:00
Jiaming Yuan
bde20dd897 Remove benchmark scripts. (#9992) 2024-01-17 13:19:34 +08:00
Jiaming Yuan
d07e8b503e Fix quantile regression demo. (#9991) 2024-01-17 13:19:08 +08:00
Jiaming Yuan
cacb4b1fdd Fix gain calculation in multi-target tree. (#9978) 2024-01-17 13:18:44 +08:00
Jiaming Yuan
85d09245f6 Fix error handling in the event loop. (#9990) 2024-01-17 05:35:35 +08:00
Jiaming Yuan
0798e36d73 [breaking] Remove deprecated parameters in the skl interface. (#9986) 2024-01-15 20:40:05 +08:00
greydoubt
2de85d3241 [doc] slight cleanup (#9988) 2024-01-15 19:09:20 +08:00
david-cortes
547abb8c12 [R] Remove unusable 'feature_names' argument and make 'model' first argument in inspection functions (#9939) 2024-01-15 17:16:30 +08:00
Philip Hyunsu Cho
1168a68872 [jvm-packages] Update release scripts (#9983)
* [jvm-packages] Add Scala version suffix to xgboost-jvm package (#9776)

* Update JVM script (#9714)

* Revamp pom.xml

* Update instructions in prepare_jvm_release.py

* Fix formatting

* [jvm-packages] Fix POM for xgboost-jvm metapackage (#9893)

* [jvm-packages] Fix POM for xgboost-jvm metapackage

* Add script for updating the Scala version

* Update change_scala_version.py to also change scala.version property (#9897)

* Remove 'release-cpu-only' profile

* Remove scala-2.13 profile; enable gpu package for Scala 2.13
2024-01-12 10:37:55 -08:00
Bobby Wang
f88c43801f [jvm-packages] update rapids dep to 23.12.1 (#9951)
With this PR, XGBoost GPU can support Scala 2.13.
2024-01-11 14:04:32 -08:00
rpopescu
73b3955dd4 Fix FieldEntry ctor specialisation syntax error (#9980)
Co-authored-by: Radu Popescu <radu.popescu@aptportfolio.com>
2024-01-11 12:10:21 +08:00
david-cortes
d3a8d284ab [R] On-demand serialization + standardization of attributes (#9924)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-01-11 05:08:42 +08:00
Jiaming Yuan
01c4711556 Check __cuda_array_interface__ instead of cupy class. (#9971)
* Now XGBoost can directly consume CUDA data from torch.
2024-01-09 19:59:01 +08:00
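A sketch of the consequence of #9971: any object exposing `__cuda_array_interface__`, such as a torch CUDA tensor, should be consumable directly (assumes a CUDA-enabled build and an available GPU):

```
import torch
import xgboost as xgb

X = torch.randn(256, 8, device="cuda")
y = torch.randn(256, device="cuda")

# No detour through cupy/numpy needed; the array interface is enough.
booster = xgb.train({"device": "cuda"}, xgb.QuantileDMatrix(X, label=y),
                    num_boost_round=4)
```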
Jiaming Yuan
2f57bbde3c Additional tests for attributes and model boosted rounds. (#9962) 2024-01-09 09:54:39 +08:00
david-cortes
bed0349954 [R] Don't write files to user's directory (#9966) 2024-01-09 03:43:48 +08:00
david-cortes
7ff6d44efa [R] Use R's error stream for printing warnings (#9965) 2024-01-09 03:43:21 +08:00
Jiaming Yuan
b3eb5d0945 Use UBJ in Python checkpoint. (#9958) 2024-01-09 03:22:15 +08:00
Jiaming Yuan
fa5e2f6c45 Synthesize the AMES housing dataset for tests. (#9963) 2024-01-09 00:54:23 +08:00
Jiaming Yuan
9a30bdd313 Test loading models with invalid file extensions. (#9955) 2024-01-08 19:26:24 +08:00
Bobby Wang
3ff3a5f1ed [jvm-packages] support jdk 17 for test (#9959) 2024-01-08 17:30:49 +08:00
Jiaming Yuan
3976455af9 [jvm-packages] Use UBJ for checkpoints. (#9954) 2024-01-08 13:26:12 +08:00
Jiaming Yuan
38dd91f491 Save model in ubj as the default. (#9947) 2024-01-05 17:53:36 +08:00
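For #9947, the file extension selects the serialization format; a small sketch:

```
import numpy as np
import xgboost as xgb

X = np.random.default_rng(0).normal(size=(16, 3))
booster = xgb.train({}, xgb.DMatrix(X, label=X[:, 0]), num_boost_round=2)

booster.save_model("model.ubj")   # UBJSON, now the default choice
booster.save_model("model.json")  # plain JSON remains available
```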
Jiaming Yuan
c03a4d5088 Check support status for categorical features. (#9946) 2024-01-04 16:51:33 +08:00
david-cortes
db396ee340 [R] make sure output fits into int32 (#9949) 2024-01-04 16:51:22 +08:00
Jiaming Yuan
621348abb3 Fix multi-output with alternating strategies. (#9933)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-01-04 16:41:13 +08:00
Jiaming Yuan
5f7b5a6921 Add tests for pickling with custom obj and metric. (#9943) 2024-01-04 14:52:48 +08:00
Jiaming Yuan
26a5436a65 [doc] Describe feature info behavior. [skip ci] (#9866) 2024-01-04 14:52:19 +08:00
Jiaming Yuan
9f73127a23 Cleanup Python GPU tests. (#9934)
* Cleanup Python GPU tests.

- Remove the use of `gpu_hist` and `gpu_id` in cudf/cupy tests.
- Move base margin test into the testing directory.
2024-01-04 13:15:18 +08:00
david-cortes
3c004a4145 [R] Add missing DMatrix functions (#9929)
* `XGDMatrixGetQuantileCut`
* `XGDMatrixNumNonMissing`
* `XGDMatrixGetDataAsCSR`

---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-01-03 17:29:21 +08:00
david-cortes
49247458f9 [R] Minor improvements for evaluation printing (#9940) 2024-01-03 15:26:55 +08:00
david-cortes
9e33a10202 [R] Replace xgboost() with xgb.train() in most tests and examples (#9941) 2024-01-02 21:20:01 +08:00
david-cortes
32cbab1cc0 [R] put 'verbose' in correct argument (#9942) 2024-01-02 15:20:51 +08:00
david-cortes
73713de601 [R] rename Quality -> Gain (#9938) 2023-12-31 13:01:00 +08:00
david-cortes
8b9c98b65b [R] Clearer function signatures for S3 methods (#9937) 2023-12-31 10:45:04 +08:00
david-cortes
e40c4260ed [R] Enable 'dot' dump format (#9930) 2023-12-30 13:28:27 +08:00
Philip Hyunsu Cho
ef8bdaa047 [CI] Update machine images (#9932) 2023-12-29 11:15:38 -08:00
Jiaming Yuan
a7226c0222 Fix feature names with special characters. (#9923) 2023-12-28 22:45:13 +08:00
david-cortes
a197899161 [R] avoid leaking exception objects (#9916) 2023-12-26 20:29:55 +08:00
Michael Mayer
52620fdb34 [R] Improve more docstrings (#9919) 2023-12-26 17:30:13 +08:00
Jiaming Yuan
6a5f6ba694 [CI] Add timeout for distributed GPU tests. (#9917) 2023-12-24 00:09:05 +08:00
Michael Mayer
b807f3e30c [R] improve docstrings for "xgb.Booster.R" (#9906) 2023-12-21 10:01:30 +08:00
david-cortes
252e018275 correct name of function in header (#9905) 2023-12-20 10:52:00 +08:00
Jiaming Yuan
9d122293bc [doc] Fix typo. [skip ci] (#9904) 2023-12-20 09:17:00 +08:00
david-cortes
ae32936ba2 [R] Catch C++ exceptions (#9903) 2023-12-19 10:45:03 +08:00
david-cortes
ff3d82c006 [R] Refactor field logic for dmatrix (#9901) 2023-12-18 20:31:01 +08:00
Jiaming Yuan
0edd600f3d [doc] Brief introduction to base_score. (#9882) 2023-12-17 13:34:34 +08:00
david-cortes
db7f952ed6 update docs for parameters (#9900) 2023-12-16 12:19:22 +08:00
Dmitry Razdoburdin
2a6ab2547d SYCL inference optimization (#9876)
---------

Co-authored-by: Dmitry Razdoburdin <>
2023-12-15 11:04:39 +08:00
Jiaming Yuan
1c6e031c75 [R] Fix clang warning. (#9874) 2023-12-15 01:30:43 +08:00
Jiaming Yuan
125bc812f8 [doc] Reference enable_categorical doc in sklearn. (#9884) 2023-12-14 23:29:19 +08:00
Jiaming Yuan
1aa8c8d9be Support more scipy types. (#9881) 2023-12-14 18:28:37 +08:00
david-cortes
cd473c9da3 [R] enable multi-dimensional base_margin (#9885) 2023-12-14 09:16:53 +08:00
Philip Hyunsu Cho
936b22fdf3 [CI] Upload libxgboost4j.dylib (M1) to S3 bucket (#9886)
* [CI] Upload libxgboost4j.dylib (M1) to S3 bucket

* Fix typo
2023-12-13 15:25:51 -08:00
david-cortes
42173d7bc3 [doc] Clarify the effect of enable_categorical (#9877) 2023-12-13 08:39:41 +08:00
github-actions[bot]
d530d37707 [CI] Update RAPIDS to latest stable (#9857) 2023-12-12 22:58:03 +09:00
Dmitry Razdoburdin
43897b8296 Sycl implementation for objective functions (#9846)
---------

Co-authored-by: Dmitry Razdoburdin <>
2023-12-12 14:41:50 +08:00
david-cortes
ddab49a8be [doc][R] Update arguments for ellipsis in predict (#9868) 2023-12-12 12:13:47 +08:00
Jiaming Yuan
faf0f2df10 Support dataframe data format in native XGBoost. (#9828)
- Implement a columnar adapter.
- Refactor Python pandas handling code to avoid converting into a single numpy array.
- Add support in R for transforming columns.
- Support R data.frame and factor type.
2023-12-12 09:56:31 +08:00
Jiaming Yuan
b3700bbb3f Flexible find protobuf. (#9867) 2023-12-12 07:34:01 +08:00
david-cortes
562352101d [R] Move all DMatrix fields to function arguments (#9862) 2023-12-10 02:45:28 +08:00
Jiaming Yuan
1094d6015d [py] Use the first found native library. (#9860) 2023-12-08 17:23:16 +08:00
Jiaming Yuan
42de9206fc Support multi-target, fit intercept for hinge. (#9850) 2023-12-08 05:50:41 +08:00
Jiaming Yuan
39c637ee19 Use array interface in Python prediction return. (#9855) 2023-12-08 03:42:14 +08:00
david-cortes
2c0fc97306 Remove note about multi-quantile being python-only (#9854) 2023-12-07 05:17:15 +08:00
david-cortes
9e9d41b95c [R] Add note about serialization of DMatrix objects (#9853) 2023-12-07 03:11:15 +08:00
Jiaming Yuan
4bc1f3a388 [R] Bump requirement to 4.3.0. (#9847) 2023-12-07 00:12:45 +08:00
david-cortes
1de3f4135c [R] Enable vector-valued parameters (#9849) 2023-12-06 20:32:20 +08:00
david-cortes
0716c64ef7 [R] Error out on multidimensional arrays (#9852) 2023-12-06 17:43:51 +08:00
david-cortes
62571b79eb [R] Enable multi-output objectives (#9839) 2023-12-06 03:13:14 +08:00
david-cortes
9c56916fd7 [R] Very small performance tweaks (#9837) 2023-12-04 18:40:45 +08:00
Dmitry Razdoburdin
381f1d3dc9 Add support for inference on SYCL devices (#9800)
---------

Co-authored-by: Dmitry Razdoburdin <>
Co-authored-by: Nikolay Petrov <nikolay.a.petrov@intel.com>
Co-authored-by: Alexandra <alexandra.epanchinzeva@intel.com>
2023-12-04 16:15:57 +08:00
david-cortes
7196c9d95e [R] Fix memory safety issues (#9823) 2023-12-02 13:43:50 +08:00
Jiaming Yuan
e78b46046e [CI] Update R version on Linux. (#9835) 2023-12-02 11:03:17 +08:00
Jiaming Yuan
2d8c67d6dc [jvm-packages] Bump dependencies. (#9827)
- #9811
- #9814
- #9826
- #9830
- #9833
- #9832
- #9831
- #9834
2023-12-02 07:34:56 +08:00
david-cortes
95af5c074b more usage of array interface, fix potential memory leaks of std::string (#9824) 2023-12-01 00:06:59 +08:00
david-cortes
37da66f865 [R] Use array interface for dense DMatrix creation (#9816)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-11-30 04:35:05 +08:00
Yuan (Terry) Tang
da3d55db5b Update affiliation (#9822) 2023-11-30 03:27:05 +08:00
Jiaming Yuan
e2e089ce12 [jvm-packages] Bump rapids version. (#9820) 2023-11-29 15:51:07 +08:00
david-cortes
c0ef2f8dce [R] Fix potential memory leaks in case of R allocation failures (#9817) 2023-11-29 13:14:17 +08:00
Jiaming Yuan
59684b2db6 [doc] Draft for language binding consistency. [skip ci] (#9755) 2023-11-29 13:13:40 +08:00
david-cortes
bfa1252fca [R][doc] Update docs about fitting from CSR (#9818) 2023-11-29 05:42:41 +08:00
Jiaming Yuan
34a2616696 [jvm-packages] Update dependencies. (#9809)
- scalatest: 3.2.17
- maven-checkstyle-plugin: 3.3.1
- maven-surefire-plugin: 3.2.2
- maven-project-info-reports-plugin: 3.5.0
- maven-javadoc-plugin: 3.6.2
2023-11-27 20:09:25 +08:00
Jiaming Yuan
e9f149481e [sklearn] Fix loading model attributes. (#9808) 2023-11-27 17:19:01 +08:00
Jiaming Yuan
3f4e22015a Mark NCCL python test optional. (#9804)
Skip the tests if XGBoost is not compiled with dlopen.
2023-11-25 11:25:47 +08:00
Jiaming Yuan
8fe1a2213c Cleanup code for distributed training. (#9805)
* Cleanup code for distributed training.

- Merge `GetNcclResult` into nccl stub.
- Split up utilities from the main dask module.
- Let Channel return `Result` to accommodate nccl channel.
- Remove old `use_label_encoder` parameter.
2023-11-25 09:10:56 +08:00
Jiaming Yuan
e9260de3f3 [breaking] Remove dense libsvm parser plugin. (#9799) 2023-11-23 00:12:39 +08:00
Jiaming Yuan
1877cb8e83 Change default metric for gamma regression to deviance. (#9757)
* Change default metric for gamma regression to deviance.

- Clean up the gamma implementation.
- Use deviance instead since the objective is derived from deviance.
2023-11-22 21:17:48 +08:00
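Given #9757, workflows that relied on the old default can pin it explicitly; a minimal sketch of the parameter choice:

```
# reg:gamma now defaults to the deviance metric; request the old
# negative log-likelihood explicitly if needed.
params = {"objective": "reg:gamma", "eval_metric": "gamma-nloglik"}
```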
Jiaming Yuan
0715ab3c10 Use dlopen to load NCCL. (#9796)
This PR adds optional support for loading NCCL with `dlopen` as an alternative to compile-time linking. This is to address the size-bloat issue with the PyPI binary release.
- Add CMake option to load `nccl` at runtime.
- Add an NCCL stub.

After this, NCCL will be fetched from PyPI when using pip to install XGBoost, either by a user or by `pyproject.toml`. Others who want to link NCCL at compile time can continue to do so without any change.

At the moment, this is Linux only since we only support MNMG on Linux.
2023-11-22 19:27:31 +08:00
Jiaming Yuan
fedd9674c8 Implement column sampler in CUDA. (#9785)
- CUDA implementation.
- Extract the broadcasting logic; we will need the context parameter after revamping the collective implementation.
- Some changes to the event loop to fix a deadlock in CI.
- Move argsort into algorithms.cuh, add support for CUDA streams.
2023-11-17 04:29:08 +08:00
Bobby Wang
178cfe70a8 [pyspark][doc] Test and doc for stage-level scheduling. (#9786) 2023-11-16 18:15:59 +08:00
Jiaming Yuan
ada377c57e [coll] Reduce the scope of lock in the event loop. (#9784) 2023-11-15 14:16:19 +08:00
Bobby Wang
36a552ac98 [jvm-packages] support stage-level scheduling (#9775) 2023-11-14 08:59:45 +08:00
Ken Geis
162da7b52b fix typo in Parameters doc (#9781) 2023-11-13 03:09:06 +08:00
Jiaming Yuan
6fd4a30667 [coll] Increase timeout for allgather test. (#9777) 2023-11-09 05:26:40 +08:00
Jiaming Yuan
44099f585d [coll] Add C API for the tracker. (#9773) 2023-11-08 18:17:14 +08:00
Jiaming Yuan
06bdc15e9b [coll] Pass context to various functions. (#9772)
* [coll] Pass context to various functions.

In the future, the `Context` object will be required for collective operations. This PR
passes the context object to some required functions to prepare for swapping out the
implementation.
2023-11-08 09:54:05 +08:00
Jiaming Yuan
6c0a190f6d [coll] Add comm group. (#9759)
- Implement `CommGroup` for double dispatching.
- Small cleanup to tracker for handling abort.
2023-11-07 11:12:31 +08:00
Jiaming Yuan
c3a0622b49 Fix using categorical data with the score function of ranker. (#9753) 2023-11-07 07:29:11 +08:00
Jiaming Yuan
82828621d0 [doc] Add doc for linters and simplify c++ lint script. (#9750) 2023-11-07 05:03:30 +08:00
Jiaming Yuan
98238d63fa [dask] Change document to avoid using default import. (#9742)
This aligns dask with pyspark; users need to explicitly call:

```
from xgboost.dask import DaskXGBClassifier
from xgboost import dask as dxgb
```

In future releases, we might stop using the default import and remove the lazy loader.
2023-11-07 02:44:39 +08:00
Bobby Wang
093b675838 [Doc] update the tutorial of xgboost4j-spark-gpu (#9752)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-11-03 18:19:28 +08:00
david-cortes
be20df8c23 [Python] Accept numpy generators as random_state (#9743)
* accept numpy generators for random_state

* make linter happy

* fix tests
2023-11-01 16:20:44 -07:00
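A minimal sketch of #9743 in use; `random_state` accepts a `numpy.random.Generator` in addition to ints and `RandomState`:

```
import numpy as np
from xgboost import XGBClassifier

clf = XGBClassifier(
    n_estimators=10,
    subsample=0.8,  # sampling makes the seed observable
    random_state=np.random.default_rng(7),
)
```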
Jiaming Yuan
4da4e092b5 [coll] Improvements and fixes for tracker and allreduce. (#9745)
- Allow the tracker to wait.
- Fix allreduce type cast
- Return args from the federated tracker.
2023-11-02 04:06:46 +08:00
Philip Hyunsu Cho
0ff8572737 [CI] Build libxgboost4j.dylib with CMAKE_OSX_DEPLOYMENT_TARGET (#9749) 2023-11-01 11:20:28 -07:00
Philip Hyunsu Cho
1b9ed4a4a1 [CI] Improve CI for Mac M1 (#9748)
* [CI] Improve CI for Mac M1

* Add -v flag

* Disable OpenMP in libxgboost4j.dylib

* Target MacOS 10.15+ to use C++17
2023-11-01 10:03:56 -07:00
david-cortes
d3f0646779 [R] Avoid modifying importance dt in-place, fix aggregation (#9740) 2023-11-01 05:10:59 +08:00
Jiaming Yuan
bc995a4865 [coll] Add federated coll. (#9738)
- Define a new data type; the proto file is copied for now.
- Merge client and communicator into `FederatedColl`.
- Define CUDA variant.
- Migrate tests for CPU, add tests for CUDA.
2023-11-01 04:06:46 +08:00
Philip Hyunsu Cho
6b98305db4 [CI] Enable gmock in gtest (#9737) 2023-10-31 20:09:35 +08:00
Jiaming Yuan
80390e6cb6 [coll] Federated comm. (#9732) 2023-10-31 02:39:55 +08:00
Bobby Wang
fa65cf6646 [doc] How to configure stage-level scheduling (#9727)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-10-31 01:28:34 +08:00
omahs
2cfc90e8db Fix typos (#9731) 2023-10-30 16:52:12 +08:00
Jiaming Yuan
6755179e77 [coll] Add nccl. (#9726) 2023-10-28 16:33:58 +08:00
James Lamb
0c621094b3 [CI] enforce cmakelint checks (#9728) 2023-10-28 05:38:04 +08:00
Dmitry Razdoburdin
9c22df9342 Fix mingw hanging on regex in context (#9729)
---------

Co-authored-by: Dmitry Razdoburdin <>
2023-10-27 20:01:35 +08:00
Bobby Wang
1323531323 [pyspark] unify the way of determining whether it runs on the GPU. (#9724) 2023-10-27 11:21:30 +08:00
Dmitry Razdoburdin
f41a08fda8 Add 'sycl' devices to the context (#9691)
Co-authored-by: Dmitry Razdoburdin <>
2023-10-26 22:17:56 +08:00
Philip Hyunsu Cho
d4d7097acc Update JVM script (#9714) 2023-10-24 17:48:38 -07:00
Philip Hyunsu Cho
01d59ded00 Fix libpath logic for Windows (#9712)
* Fix libpath logic for Windows (#9687)

* Use sys.base_prefix instead of sys.prefix (#9711)

* Use sys.base_prefix instead of sys.prefix

* Update libpath.py too
2023-10-24 17:25:28 -07:00
Jiaming Yuan
7a02facc9d Serialize expand entry for allgather. (#9702) 2023-10-24 14:33:28 +08:00
Hyunsu Cho
ee8b29c843 [CI] Hotfix for JVM test on GH Action 2023-10-23 20:02:33 -07:00
Philip Hyunsu Cho
87621322ed [CI] Build libxgboost4j.dylib for Intel Mac (#9704)
* [CI] Build libxgboost4j.dylib for Intel Mac

* Use correct runner name

* Fix shell command

* Add back branch condition
2023-10-23 19:15:24 -07:00
Jiaming Yuan
3ca06ac51e [doc] Mention data consistency for categorical features. (#9678) 2023-10-24 10:11:33 +08:00
Philip Hyunsu Cho
5e6cb63a56 [CI] Set up CI for Mac M1 (#9699) 2023-10-22 23:33:19 -07:00
Philip Hyunsu Cho
791de7789b [jvm-packages] Remove hard dependency on libjvm (#9698) 2023-10-21 23:14:38 -07:00
Jiaming Yuan
b771f58453 [coll] Define interface for bridging. (#9695)
* Define the basic interface that will be shared by nccl, federated and native.
2023-10-20 16:20:48 +08:00
Rong Ou
6fbe6248f4 More in-memory input support for column split (#9685) 2023-10-20 16:02:36 +08:00
Chuck Atkins
83cdf14b2c CMake LTO and CUDA arch (#9677) 2023-10-20 13:01:37 +08:00
Philip Hyunsu Cho
3b86260b50 Fix build for AppleClang 11 (#9684) (#9693) 2023-10-18 12:27:21 -07:00
Jiaming Yuan
5d1bcde719 [coll] allgatherv. (#9688) 2023-10-19 03:13:50 +08:00
Dmitry Razdoburdin
ea9f09716b Reorder if-else statements to allow using CPU branches for SYCL devices (#9682) 2023-10-18 10:55:33 +08:00
Jiaming Yuan
4c0e4422d0 [coll] allgather. (#9681) 2023-10-18 10:22:18 +08:00
Jiaming Yuan
48ac9b6cbe [coll] Allreduce. (#9679) 2023-10-17 13:57:14 +08:00
Rong Ou
da6803b75b Support column-wise data split with in-memory inputs (#9628)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-10-17 12:16:39 +08:00
Bobby Wang
4d1607eefd [pyspark] Support stage-level scheduling for training (#9519) 2023-10-17 10:35:39 +08:00
Thomas Lynn
83191f0839 Update learning_to_rank.rst; Correct qid sort in snippet (#9673) 2023-10-14 16:38:58 +08:00
Philip Hyunsu Cho
eee7cdf07e Fix build for GCC 8.x (#9670) (#9675) 2023-10-13 22:07:49 -07:00
James Lamb
eb562d3829 [CI] address cmakelint warnings about whitespace (#9674) 2023-10-14 12:46:07 +08:00
Jiaming Yuan
53049b16b8 [coll] Broadcast. (#9659) 2023-10-14 09:34:37 +08:00
Jiaming Yuan
81a059864a Skip check for pollhup. (#9661) 2023-10-13 14:35:14 +08:00
Philip Hyunsu Cho
a5e07a01f8 [CI] Pull CentOS 7 images from NGC (#9666) 2023-10-13 12:11:54 +08:00
Jiaming Yuan
cd8760cba3 [doc] Update document about running tests. [skip ci] (#9658) 2023-10-13 09:07:01 +08:00
Rong Ou
e164d51c43 Improve allgather functions (#9649) 2023-10-12 23:31:43 +08:00
github-actions[bot]
d1dee4ad99 [CI] Update RAPIDS to latest stable (#9654)
* [CI] Update RAPIDS to latest stable

* Remove slashes from Docker tag

---------

Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2023-10-11 23:26:09 -07:00
Jiaming Yuan
946ae1c440 [coll] Implement a new tracker and a communicator. (#9650)
* [coll] Implement a new tracker and a communicator.

The new tracker and communicators exchange JSON documents. In addition,
communicators are aware of each other.
2023-10-12 12:49:16 +08:00
James Lamb
2e42f33fc1 [CI] standardize else() and endfunction() calls in CMake scripts (#9653) 2023-10-12 11:14:19 +08:00
Jiaming Yuan
084d89216c Add support for cgroupv2. (#9651) 2023-10-12 09:36:36 +08:00
James Lamb
51e32e4905 [CI] add cmakelint to C++ linting task (#9641)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-10-11 16:04:10 +08:00
Rong Ou
0ecb4de963 [breaking] Change DMatrix construction to be distributed (#9623)
* Change column-split DMatrix construction to be distributed

* remove splitting code for row split
2023-10-10 23:35:57 +08:00
Jiaming Yuan
b14e535e78 [Coll] Implement get host address in libxgboost. (#9644)
- Port `xgboost.tracker.get_host_ip` to C++.
2023-10-10 10:01:14 +08:00
Jiaming Yuan
680d53db43 Extract JSON utils. (#9645) 2023-10-10 07:15:14 +08:00
Jiaming Yuan
4e5a7729c3 Fix lint errors. (#9634) 2023-10-09 19:04:31 +08:00
James Lamb
db8d117f7e [CI] standardize endif() calls in CMake scripts (#9637) 2023-10-08 11:45:20 +08:00
James Lamb
799f8485e2 [R] [CI] enforce lintr::function_left_parentheses_linter check (#9631) 2023-10-08 09:42:09 +08:00
Jiaming Yuan
4d7a187cb0 Remove XGBoosterGetModelRaw. (#9617)
Deprecated in 1.6.
2023-09-29 02:29:33 +08:00
Jiaming Yuan
d95be1c38d Small cleanup to jvm iter adapter. (#9616)
- Remove header dependency on c_api
- Remove remaining code for arrow.
2023-09-29 00:39:07 +08:00
Jiaming Yuan
417c3ba47e Workaround Apple clang issue. (#9615) 2023-09-28 22:51:47 +08:00
Jordan Fréry
295f13ef09 Add privacy preserving tutorial to index.rst (#9614) 2023-09-28 18:53:29 +08:00
Jiaming Yuan
60526100e3 Support arrow through pandas ext types. (#9612)
- Use pandas extension type for pyarrow support.
- Additional support for QDM.
- Additional support for inplace_predict.
2023-09-28 17:00:16 +08:00
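A sketch of what #9612 enables, assuming pandas >= 2.0 with pyarrow installed:

```
import numpy as np
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({"a": [1, 2, 3, 4], "b": [0.1, 0.2, 0.3, 0.4]})
df = df.convert_dtypes(dtype_backend="pyarrow")  # pyarrow-backed extension dtypes

dtrain = xgb.QuantileDMatrix(df, label=np.array([0.0, 1.0, 0.0, 1.0]))
```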
Rong Ou
3f2093fb81 Test monotone constraints with column split (#9613) 2023-09-28 04:54:53 +08:00
Jordan Fréry
7cafd41a58 [doc] Add privacy preserving tutorial (#9610) 2023-09-28 02:50:01 +08:00
Rong Ou
d6d14d0fb9 Integration tests for interaction constraints with column-wise data split (#9611) 2023-09-27 08:27:43 +08:00
Jiaming Yuan
c75a3bc0a9 [breaking] [jvm-packages] Remove rabit checkpoint. (#9599)
- Add `numBoostedRound` to jvm packages
- Remove rabit checkpoint version.
- Change the starting version of training continuation in JVM [breaking].
- Redefine the checkpoint version policy in jvm package. [breaking]
- Rename the Python checkpoint callback parameter. [breaking]
- Unifies the checkpoint policy between Python and JVM.
2023-09-26 18:06:34 +08:00
Benoit Chevallier-Mames
7901a299b2 [doc] Add privacy-preserving Concrete ML links (#9598) (#9604) 2023-09-26 15:33:11 +08:00
Rong Ou
290b17ffda Test column sampler with column-wise data split (#9609) 2023-09-26 13:31:23 +08:00
Jiaming Yuan
1167e6c554 Limit the number of threads for external memory. (#9605) 2023-09-24 00:30:28 +08:00
Jiaming Yuan
cac2cd2e94 [R] Set number of threads in demos and tests. (#9591)
- Restrict the number of threads in IO.
- Specify the number of threads in demos and tests.
- Add helper scripts for checks.
2023-09-23 21:44:03 +08:00
Rong Ou
def77870f3 Test categorical features with column-split gpu quantile (#9595) 2023-09-23 09:55:09 +08:00
Jiaming Yuan
a90d204942 Use array interface for testing numpy arrays. (#9602) 2023-09-23 03:13:48 +08:00
Jiaming Yuan
bbf5b9ee57 [dask] Move dask module into directory. (#9597) 2023-09-23 01:28:18 +08:00
Jiaming Yuan
0080c97075 Workaround poll on macos. (#9596) 2023-09-21 01:09:36 +08:00
Jiaming Yuan
8c676c889d Remove internal use of gpu_id. (#9568) 2023-09-20 23:29:51 +08:00
Jiaming Yuan
38ac52dd87 Build a simple event loop for collective. (#9593) 2023-09-20 02:09:07 +08:00
Jiaming Yuan
259d80c0cf News for 2.0. [skip ci] (#9484)
---------

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2023-09-19 05:46:02 +08:00
Jiaming Yuan
0df1da2db4 fix rel script with relative path and end note version. [skip ci] (#9572) 2023-09-18 17:58:48 +08:00
James Lamb
730bc1f688 [R] remove unused headers (#9546) 2023-09-14 17:11:26 +08:00
Rong Ou
d8c3cc92ae More support for column split in gpu predictor (#9562) 2023-09-14 08:13:13 +08:00
Rong Ou
a343ae3b34 fix duplicate gpu check (#9578) 2023-09-14 05:53:46 +08:00
Jiaming Yuan
300f9ace06 Fix default metric configuration. (#9575) 2023-09-13 13:05:47 -07:00
Jiaming Yuan
b438d684d2 Utilities and cleanups for socket. (#9576)
- Use c++-17 nodiscard and nested ns.
- Add bind method to socket.
- Remove rabit parameters.
2023-09-14 01:41:42 +08:00
Jiaming Yuan
5abe50ff8c [R] Fix method name. (#9577) 2023-09-13 23:19:29 +08:00
Ikko Eltociear Ashimine
f90d034a86 [doc] Fix typo in python_packaging.rst (#9573) 2023-09-12 20:53:07 +08:00
Jon Yoquinto
d05ea589fb Allow JVM-Package to access inplace predict method (#9167)
---------

Co-authored-by: Stephan T. Lavavej <stl@nuwen.net>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
Co-authored-by: Joe <25804777+ByteSizedJoe@users.noreply.github.com>
2023-09-12 07:29:51 +08:00
Jiaming Yuan
9027686cac Support pandas 2.1.0. (#9557) 2023-09-11 17:44:51 +08:00
Rong Ou
66a0832778 Add tests for gpu_approx (#9553) 2023-09-07 17:21:58 +08:00
Bobby Wang
6c791b5b47 [pyspark] support gpu transform (#9542)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-09-07 12:15:50 +08:00
Rong Ou
0f35493b65 Add GPU support to NVFlare demo (#9552) 2023-09-06 17:03:59 +08:00
Jiaming Yuan
3b9e5909fb [CI] bump setup-r action version. (#9544) 2023-09-05 16:14:45 +08:00
Jiaming Yuan
adea842c83 Fix inplace predict with fallback when base margin is used. (#9536)
- Copy meta info from proxy DMatrix.
- Use `std::call_once` to emit fewer warnings.
2023-09-05 01:04:24 +08:00
James Lamb
d159ee8547 [R] reformat build scripts (#9540) 2023-09-04 17:40:46 +08:00
Bobby Wang
419e052314 [pyspark] rework transform to reuse same code (#9292) 2023-09-04 15:57:16 +08:00
James Lamb
98e45f7b54 add HTML files to gitignore (#9541) 2023-09-04 14:44:58 +08:00
Rong Ou
c928dd4ff5 Support vertical federated learning with gpu_hist (#9539) 2023-09-03 11:37:11 +08:00
Rong Ou
9bab06cbca Support column split in gpu hist updater (#9384) 2023-08-31 18:09:35 +08:00
Jiaming Yuan
ccfc90e4c6 [rabit] Improved connection handling. (#9531)
- Enable timeout.
- Report connection error from the system.
- Handle retry for both tracker connection and peer connection.
2023-08-30 13:00:04 +08:00
dependabot[bot]
2462e22cd4 Bump com.nvidia:rapids-4-spark_2.12 in /jvm-packages (#9517)
Bumps com.nvidia:rapids-4-spark_2.12 from 23.08.0 to 23.08.1.

---
updated-dependencies:
- dependency-name: com.nvidia:rapids-4-spark_2.12
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-29 15:10:16 +08:00
Jiaming Yuan
ddf2e68821 Use the new DeviceOrd in the linalg module. (#9527) 2023-08-29 13:37:29 +08:00
Jiaming Yuan
942b957eef Fix GPU categorical split memory allocation. (#9529) 2023-08-29 10:06:03 +08:00
Jiaming Yuan
be6a552956 [R] Support multi-class custom objective. (#9526) 2023-08-29 08:27:13 +08:00
Jiaming Yuan
90ef250ea1 [rabit] Drop support for MPI backend. (#9525)
- Add checks in cmake.
- Remove mpi related code.
2023-08-28 21:01:22 +08:00
Jiaming Yuan
c3574d932f [R] Fix integer inputs with NA. (#9522) 2023-08-28 18:36:11 +08:00
Jiaming Yuan
1b87a1d8f8 [rabit] Small cleanup to tracker initialization. (#9524)
- Remove recover related code.
- Clean startup, no need to consider previously connected nodes.
2023-08-27 05:10:59 +08:00
Jiaming Yuan
209335b18c Remove the deprecated Python rabit module. (#9523) 2023-08-27 03:37:05 +08:00
Jiaming Yuan
aa86bd5207 [dask] Filter models on worker. (#9518) 2023-08-25 20:23:47 +08:00
Jiaming Yuan
972730cde0 Use matrix for gradient. (#9508)
- Use the `linalg::Matrix` for storing gradients.
- New API for the custom objective.
- Custom objective for multi-class/multi-target is now required to return the correct shape.
- Custom objective for Python can accept arrays with any strides. (row-major, column-major)
2023-08-24 05:29:52 +08:00
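A hedged sketch of a multi-class custom objective under the matrix-gradient API: the gradient and Hessian are returned with shape (n_samples, n_classes) rather than flattened. The softprob formulas below are the standard ones, not code from the PR:

```python
import numpy as np
import xgboost as xgb

def softprob_obj(predt: np.ndarray, dtrain: xgb.DMatrix):
    """Return grad/hess with the same (n_samples, n_classes) shape as predt."""
    labels = dtrain.get_label().astype(np.int64)
    # Row-wise softmax over the raw scores.
    e = np.exp(predt - predt.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    onehot = np.eye(p.shape[1])[labels]
    grad = p - onehot
    hess = np.maximum(2.0 * p * (1.0 - p), 1e-6)
    return grad, hess

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 8))
y = rng.integers(0, 3, size=128)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train(
    {"num_class": 3, "tree_method": "hist", "disable_default_eval_metric": True},
    dtrain,
    num_boost_round=10,
    obj=softprob_obj,
)
```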
Rong Ou
6103dca0bb Support column split in GPU evaluate splits (#9511) 2023-08-23 16:33:43 +08:00
Jiaming Yuan
8c10af45a0 Delay the check for vector leaf. (#9509) 2023-08-23 01:53:40 +08:00
Jiaming Yuan
3c09399f29 Fix device dispatch for linear updater. (#9507) 2023-08-23 00:17:35 +08:00
Jiaming Yuan
302bbdc958 Mitigate flaky test with distributed L1 error. (#9499) 2023-08-22 13:46:35 +08:00
Jiaming Yuan
044fea1281 Drop support for loading remote files. (#9504) 2023-08-21 23:34:05 +08:00
dependabot[bot]
d779a11af9 Bump scala-collection-compat_2.12 from 2.10.0 to 2.11.0 in /jvm-packages (#9311)
Bumps [scala-collection-compat_2.12](https://github.com/scala/scala-collection-compat) from 2.10.0 to 2.11.0.
- [Release notes](https://github.com/scala/scala-collection-compat/releases)
- [Commits](https://github.com/scala/scala-collection-compat/compare/v2.10.0...v2.11.0)

---
updated-dependencies:
- dependency-name: org.scala-lang.modules:scala-collection-compat_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-21 10:27:35 +08:00
Jiaming Yuan
e6cf7a1278 Deprecate the command line interface. (#9485)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-08-21 06:47:48 +08:00
Jiaming Yuan
38a3e1b858 Fix release script for RC [skip ci] (#9505) 2023-08-21 05:24:35 +08:00
dependabot[bot]
74d5056c61 Bump spark.version.gpu in /jvm-packages/xgboost4j-spark-gpu (#9328)
Bumps `spark.version.gpu` from 3.3.2 to 3.4.1.

Updates `spark-core_2.12` from 3.3.2 to 3.4.1

Updates `spark-sql_2.12` from 3.3.2 to 3.4.1

Updates `spark-mllib_2.12` from 3.3.2 to 3.4.1

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-core_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.spark:spark-sql_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-20 04:20:07 +08:00
Jiaming Yuan
db87d481bc [R] Differentiate dev version with release version. (#9503)
Use 2.1.0.0 as the development version; we will change it to 2.1.0.1 during release.
2023-08-20 02:58:58 +08:00
dependabot[bot]
5358e1ebf0 Bump org.apache.commons:commons-lang3 in /jvm-packages (#9489)
Bumps org.apache.commons:commons-lang3 from 3.12.0 to 3.13.0.

---
updated-dependencies:
- dependency-name: org.apache.commons:commons-lang3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-20 00:37:15 +08:00
dependabot[bot]
d016309a15 Bump spark.version from 3.4.0 to 3.4.1 in /jvm-packages/xgboost4j-spark (#9326)
Bumps `spark.version` from 3.4.0 to 3.4.1.

Updates `spark-core_2.12` from 3.4.0 to 3.4.1

Updates `spark-sql_2.12` from 3.4.0 to 3.4.1

Updates `spark-mllib_2.12` from 3.4.0 to 3.4.1

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-core_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-sql_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-19 18:14:35 +08:00
Jiaming Yuan
7f29a238e6 Return base score as intercept. (#9486) 2023-08-19 12:28:02 +08:00
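A short sketch of the scikit-learn-facing side of this change (synthetic data; the attribute name mirrors scikit-learn's convention):

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 3.0

reg = XGBRegressor(n_estimators=10).fit(X, y)
# The learned base score is surfaced as the model intercept.
print(reg.intercept_)
```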
dependabot[bot]
0bb87b5b35 Bump hadoop.version from 3.3.5 to 3.3.6 in /jvm-packages (#9331)
Bumps `hadoop.version` from 3.3.5 to 3.3.6.

Updates `hadoop-hdfs` from 3.3.5 to 3.3.6

Updates `hadoop-common` from 3.3.5 to 3.3.6

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-hdfs
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-18 20:59:04 +08:00
Thomas Zeger
b74802dea9 Fix safe_xgboost macro on c++ (#9501) 2023-08-18 04:36:06 +08:00
Jiaming Yuan
58530b1bc4 Bump version to 2.1. (#9498) 2023-08-18 01:04:04 +08:00
Bobby Wang
68be454cfa [pyspark] hotfix for GPU setup validation (#9495)
* [pyspark] Fix a bug in GPU configuration validation

---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-08-17 16:01:39 +08:00
Jiaming Yuan
5188e27513 Fix version parsing with rc release. (#9493) 2023-08-16 22:44:58 +08:00
Jiaming Yuan
f380c10a93 Use hint for find nccl. (#9490) 2023-08-16 16:08:41 +08:00
Sean Yang
12fe2fc06c Fix federated learning demos and tests (#9488) 2023-08-16 15:25:05 +08:00
Jiaming Yuan
b2e93d2742 [doc] Quick note for the device parameter. [skip ci] (#9483) 2023-08-16 13:35:55 +08:00
Jiaming Yuan
c061e3ae50 [jvm-packages] Bump rapids version. (#9482) 2023-08-15 16:26:42 -07:00
James Lamb
b82e78c169 [R] remove commented-out code (#9481) 2023-08-15 13:44:08 +08:00
Boris
8463107013 Updated versions. Reorganised dependencies. (#9479) 2023-08-14 14:28:28 -07:00
Jiaming Yuan
19b59938b7 Convert input to str for hypothesis note. (#9480) 2023-08-15 02:27:58 +08:00
James Lamb
e3f624d8e7 [R] remove more uses of default values in internal functions (#9476) 2023-08-14 22:18:33 +08:00
James Lamb
2c84daeca7 [R] [doc] remove documentation index entries for internal functions (#9477) 2023-08-14 22:18:02 +08:00
Bobby Wang
344f90b67b [jvm-packages] throw exception when tree_method=approx and device=cuda (#9478)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-08-14 17:52:14 +08:00
Jiaming Yuan
05d7000096 Handle special characters in JSON model dump. (#9474) 2023-08-14 15:49:00 +08:00
github-actions[bot]
f03463c45b [CI] Update RAPIDS to latest stable (#9464)
* [CI] Update RAPIDS to latest stable

* [CI] Use CMake 3.26.4

---------

Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2023-08-13 18:54:37 -07:00
Jiaming Yuan
fd4335d0bf [doc] Document the current status of some features. (#9469) 2023-08-13 23:42:27 +08:00
Jiaming Yuan
801116c307 Test scikit-learn model IO with gblinear. (#9459) 2023-08-13 23:41:49 +08:00
Jiaming Yuan
bb56183396 Normalize file system path. (#9463) 2023-08-11 21:26:46 +08:00
Jiaming Yuan
bdc1a3c178 Fix pyspark parameter. (#9460)
- Don't pass the `use_gpu` parameter to the learner.
- Fix GPU approx with PySpark.
2023-08-11 19:07:50 +08:00
James Lamb
428f6cbbe2 [R] remove default values in internal booster manipulation functions (#9461) 2023-08-11 15:07:18 +08:00
ShaneConneely
d638535581 Update README.md (#9462) 2023-08-11 04:02:04 +08:00
James Lamb
44bd2981b2 [R] remove default values in internal utility functions (#9457) 2023-08-10 21:40:59 +08:00
James Lamb
9dbb71490c [Doc] fix typos in documentation (#9458) 2023-08-10 19:26:36 +08:00
James Lamb
4359356d46 [R] [CI] use lintr 3.1.0 (#9456) 2023-08-10 17:49:16 +08:00
Jiaming Yuan
1caa93221a Use realloc for histogram cache and expose the cache limit. (#9455) 2023-08-10 14:05:27 +08:00
Jiaming Yuan
a57371ef7c Fix links in R doc. (#9450) 2023-08-10 02:38:14 +08:00
Jiaming Yuan
f05a23b41c Use weakref instead of id for DataIter cache. (#9445)
- Fix case where Python reuses id from freed objects.
- Small optimization to column matrix with QDM by using `realloc` instead of copying data.
2023-08-10 00:40:06 +08:00
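A pure-Python illustration (not xgboost code) of why caching by `id()` is fragile and a weak reference is not:

```python
import weakref

class Payload:
    pass

a = Payload()
key, ref = id(a), weakref.ref(a)
del a            # a cache entry keyed by `key` now dangles
b = Payload()    # CPython may place the new object at the same address
print(id(b) == key)    # can be True: id() values are reused after free
print(ref() is None)   # True: the weakref correctly reports the object is gone
```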
Bobby Wang
d495a180d8 [pyspark] add logs for training (#9449) 2023-08-09 18:32:23 +08:00
joshbrowning2358
7f854848d3 Update R docs based on deprecated parameters/behaviour (#9437) 2023-08-09 17:04:28 +08:00
Jiaming Yuan
f05294a6f2 Fix clang warnings. (#9447)
- Static function in a header (marked as unused due to translation-unit visibility).
- Implicit copy operator is deprecated.
- Unused lambda capture.
- Moving a temporary variable prevents copy elision.
2023-08-09 15:34:45 +08:00
Philip Hyunsu Cho
819098a48f [R] Handle UTF-8 paths on Windows (#9448) 2023-08-08 21:29:19 -07:00
Jiaming Yuan
c1b2cff874 [CI] Check compiler warnings. (#9444) 2023-08-08 12:02:45 -07:00
Philip Hyunsu Cho
7ce090e775 Handle UTF-8 paths correctly on Windows platform (#9443)
* Fix round-trip serialization with UTF-8 paths

* Add compiler version check

* Add comment to C API functions

* Add Python tests

* [CI] Update macOS deployment target

* Use std::filesystem instead of dmlc::TemporaryDirectory
2023-08-07 23:27:25 -07:00
Jiaming Yuan
97fd5207dd Use lambda function in ParallelFor2D. (#9441) 2023-08-08 14:04:46 +08:00
Jiaming Yuan
54029a59af Bound the size of the histogram cache. (#9440)
- A new histogram collection with a limit in size.
- Unify histogram building logic between hist, multi-hist, and approx.
2023-08-08 03:21:26 +08:00
Philip Hyunsu Cho
5bd163aa25 Explicitly specify libcudart_static in CMake config (#9436) 2023-08-05 14:15:44 -07:00
Philip Hyunsu Cho
7fc57f3974 Remove Koffie Labs from Sponsors list (#9434) 2023-08-04 06:52:27 -07:00
Rong Ou
bde1ebc209 Switch back to the GPUIDX macro (#9438) 2023-08-04 15:14:31 +08:00
Philip Hyunsu Cho
1aabc690ec [Doc] Clarify the output behavior of reg:logistic (#9435) 2023-08-03 20:42:07 -07:00
jinmfeng001
04c99683c3 Change training stage from ResultStage to ShuffleMapStage (#9423) 2023-08-03 23:40:04 +08:00
Jiaming Yuan
1332ff787f Unify the code path between local and distributed training. (#9433)
This removes the need for a local histogram space during distributed training, which cuts the cache size by half.
2023-08-03 21:46:36 +08:00
Hendrik Makait
f958e32683 Raise if expected workers are not alive in xgboost.dask.train (#9421) 2023-08-03 20:14:07 +08:00
Jiaming Yuan
7129988847 Accept only keyword arguments in data iterator. (#9431) 2023-08-03 12:44:16 +08:00
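A sketch of the iterator protocol after this change, with an in-memory iterator standing in for a real external-memory source; batches must be handed to `input_data` via keyword arguments:

```python
import numpy as np
import xgboost as xgb

class InMemoryIter(xgb.DataIter):
    def __init__(self, batches):
        self._batches = batches
        self._it = 0
        super().__init__()

    def next(self, input_data) -> bool:
        if self._it == len(self._batches):
            return False             # no more batches
        X, y = self._batches[self._it]
        input_data(data=X, label=y)  # keyword arguments only
        self._it += 1
        return True

    def reset(self) -> None:
        self._it = 0

rng = np.random.default_rng(0)
batches = [(rng.normal(size=(64, 4)), rng.normal(size=64)) for _ in range(3)]
Xy = xgb.QuantileDMatrix(InMemoryIter(batches))
```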
Jiaming Yuan
e93a274823 Small cleanup for histogram routines. (#9427)
* Small cleanup for histogram routines.

- Extract hist train param from GPU hist.
- Make histogram const after construction.
- Unify parameter names.
2023-08-02 18:28:26 +08:00
Rong Ou
c2b85ab68a Clean up MGPU C++ tests (#9430) 2023-08-02 14:31:18 +08:00
Jiaming Yuan
a9da2e244a [CI] Update github actions. (#9428) 2023-08-01 23:03:53 +08:00
Jiaming Yuan
912e341d57 Initial GPU support for the approx tree method. (#9414) 2023-07-31 15:50:28 +08:00
Bobby Wang
8f0efb4ab3 [jvm-packages] automatically set the max/min direction for best score (#9404) 2023-07-27 11:09:55 +08:00
Rong Ou
7579905e18 Retry switching to per-thread default stream (#9416) 2023-07-26 07:09:12 +08:00
Nicholas Hilton
54579da4d7 [doc] Fix typo in prediction.rst (#9415)
Typo for `pred_contribs` and `pred_interactions`
2023-07-26 07:03:04 +08:00
Jiaming Yuan
3a9996173e Revert "Switch to per-thread default stream (#9396)" (#9413)
This reverts commit f7f673b00c.
2023-07-24 12:03:28 -07:00
Bobby Wang
1b657a5513 [jvm-packages] set device to cuda when tree method is "gpu_hist" (#9412) 2023-07-24 18:32:25 +08:00
Jiaming Yuan
a196443a07 Implement sketching with Hessian on GPU. (#9399)
- Prepare for implementing approx on GPU.
- Unify the code path between weighted and uniform sketching on DMatrix.
2023-07-24 15:43:03 +08:00
Jiaming Yuan
851cba931e Define best_iteration only if early stopping is used. (#9403)
* Define `best_iteration` only if early stopping is used.

This is the behavior specified by the documentation but not honored by the actual code.

- Don't set the attributes if there's no early stopping.
- Clean up the code for callbacks, and replace assertions with proper exceptions.
- Assign the attributes when early stopping `save_best` is used.
- Turn the attributes into Python properties.

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-07-24 12:43:35 +08:00
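A sketch of the resulting behavior with the scikit-learn wrapper (synthetic data):

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 10)), rng.integers(0, 2, size=500)
X_tr, X_va, y_tr, y_va = X[:400], X[400:], y[:400], y[400:]

clf = XGBClassifier(n_estimators=200, early_stopping_rounds=5)
clf.fit(X_tr, y_tr, eval_set=[(X_va, y_va)])
print(clf.best_iteration)  # defined: early stopping was active

clf2 = XGBClassifier(n_estimators=10).fit(X_tr, y_tr)
# Accessing clf2.best_iteration now raises instead of returning a stale value.
```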
Jiaming Yuan
01e00efc53 [breaking] Remove support for single string feature info. (#9401)
- Input must be a sequence of strings.
- Improve validation error message.
2023-07-24 11:06:30 +08:00
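A short sketch of the new contract:

```python
import numpy as np
import xgboost as xgb

Xy = xgb.DMatrix(np.random.default_rng(0).normal(size=(8, 2)))
Xy.feature_names = ["f_a", "f_b"]   # OK: one string per column
# Xy.feature_names = "f_a"          # breaking: a bare string is now rejected
```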
Jiaming Yuan
275da176ba Document for device ordinal. (#9398)
- Rewrite GPU demos. The notebook is converted to a script to avoid committing additional PNG plots.
- Add GPU demos into the sphinx gallery.
- Add RMM demos into the sphinx gallery.
- Test for firing threads with different device ordinals.
2023-07-22 15:26:29 +08:00
Jiaming Yuan
22b0a55a04 Remove hist builder class. (#9400)
* Remove hist builder class.

* Clean up this stateless class.

* Add comment to thread block.
2023-07-22 10:43:12 +08:00
Jiaming Yuan
0de7c47495 Fix metric serialization. (#9405) 2023-07-22 08:39:21 +08:00
Jiaming Yuan
dbd5309b55 Fix warning message for device. (#9402) 2023-07-20 23:30:04 +08:00
Rong Ou
f7f673b00c Switch to per-thread default stream (#9396) 2023-07-20 08:21:00 +08:00
Jiaming Yuan
7a0ccfbb49 Add compute 90. (#9397) 2023-07-19 13:42:38 +08:00
Jiaming Yuan
0897477af0 Remove unmaintained jvm readme and dev scripts. (#9395) 2023-07-18 18:23:43 +08:00
Philip Hyunsu Cho
e082718c66 [CI] Build pip wheel with RMM support (#9383) 2023-07-18 01:52:26 -07:00
Jiaming Yuan
6e18d3a290 [pyspark] Handle the device parameter in pyspark. (#9390)
- Handle the new `device` parameter in PySpark.
- Deprecate the old `use_gpu` parameter.
2023-07-18 08:47:03 +08:00
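A sketch of the PySpark-facing change, assuming an active SparkSession with GPU resources configured on the executors; `train_df` is a placeholder Spark DataFrame:

```python
from xgboost.spark import SparkXGBClassifier

clf = SparkXGBClassifier(
    device="cuda",       # new: replaces the deprecated use_gpu flag
    num_workers=2,
    label_col="label",
    features_col="features",
)
# model = clf.fit(train_df)
```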
Philip Hyunsu Cho
2a0ff209ff [CI] Block CI from running for dependabot PRs (#9394) 2023-07-17 10:53:57 -07:00
Jiaming Yuan
f4fb2be101 [jvm-packages] Add the new device parameter. (#9385) 2023-07-17 18:40:39 +08:00
Jiaming Yuan
2caceb157d [jvm-packages] Reduce log verbosity for GPU tests. (#9389) 2023-07-17 13:25:46 +08:00
Jiaming Yuan
b342ef951b Make feature validation immutable. (#9388) 2023-07-16 06:52:55 +08:00
Jiaming Yuan
0a07900b9f Fix integer overflow. (#9380) 2023-07-15 21:11:02 +08:00
Jiaming Yuan
16eb41936d Handle the new device parameter in dask and demos. (#9386)
* Handle the new `device` parameter in dask and demos.

- Check no ordinal is specified in the dask interface.
- Update demos.
- Update dask doc.
- Update the condition for QDM.
2023-07-15 19:11:20 +08:00
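A sketch of the Dask-side rule, assuming a GPU-enabled cluster (e.g. one started with dask_cuda); the device string carries no ordinal, since each worker selects its own GPU, and the dask interface checks for this:

```python
import xgboost as xgb
from dask import array as da
from distributed import Client

def train(client: Client):
    X = da.random.normal(size=(10_000, 10), chunks=(1_000, 10))
    y = da.random.normal(size=(10_000,), chunks=(1_000,))
    dtrain = xgb.dask.DaskDMatrix(client, X, y)
    return xgb.dask.train(
        client,
        {"tree_method": "hist", "device": "cuda"},  # no "cuda:0" here
        dtrain,
        num_boost_round=10,
    )
```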
Jiaming Yuan
9da5050643 Turn warning messages into Python warnings. (#9387) 2023-07-15 07:46:43 +08:00
Jiaming Yuan
04aff3af8e Define the new device parameter. (#9362) 2023-07-13 19:30:25 +08:00
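In short, `device` selects the hardware while `tree_method` keeps selecting the algorithm; a minimal sketch:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.normal(size=(256, 8)), label=rng.normal(size=256))

params = {"tree_method": "hist", "device": "cpu"}  # or "cuda", "cuda:1", ...
booster = xgb.train(params, dtrain, num_boost_round=10)
```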
Cássia Sampaio
2d0cd2817e [doc] Fix learning_to_rank.rst (#9381)
Just adding one missing bracket.
2023-07-13 11:00:24 +08:00
jinmfeng001
a1367ea1f8 Set feature_names and feature_types in jvm-packages (#9364)
* 1. Add parameters to set feature names and feature types.
  2. Save feature names and feature types to the native JSON model.

* Change serialization and deserialization format to UBJ.
2023-07-12 15:18:46 +08:00
Rong Ou
3632242e0b Support column split with GPU quantile (#9370) 2023-07-11 12:15:56 +08:00
Jiaming Yuan
97ed944209 Unify the hist tree method for different devices. (#9363) 2023-07-11 10:04:39 +08:00
Jiaming Yuan
20c52f07d2 Support exporting cut values (#9356) 2023-07-08 15:32:41 +08:00
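A hedged sketch of reading the exported cuts; the method name `get_quantile_cut` and the CSR-style return layout are assumptions based on this feature's description, not verified against the PR:

```python
import numpy as np
import xgboost as xgb

X = np.random.default_rng(0).normal(size=(512, 3))
Xy = xgb.QuantileDMatrix(X, max_bin=16)
indptr, values = Xy.get_quantile_cut()  # assumed API (see note above)
print(values[indptr[1]:indptr[2]])      # cut values for the second feature
```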
edumugi
c3124813e8 Support numpy vertical split (#9365) 2023-07-08 13:18:12 +08:00
Jiaming Yuan
59787b23af Allow empty page in external memory. (#9361) 2023-07-08 09:24:35 +08:00
Rong Ou
15ca12a77e Fix NCCL test hang (#9367) 2023-07-07 11:21:35 +08:00
Jiaming Yuan
41c6813496 Preserve order of saved updaters config. (#9355)
- Save the updater sequence as an array instead of object.
- Warn only once.

Compatibility is kept, but we should be able to break it, as the config is not loaded from pickled models and is declared to be unstable.
2023-07-05 20:20:07 +08:00
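A hedged sketch of inspecting the saved sequence; the exact key path in the config document is an assumption, the point being that "updater" is now an ordered array rather than an object keyed by name:

```python
import json
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
Xy = xgb.DMatrix(rng.normal(size=(64, 4)), label=rng.normal(size=64))
booster = xgb.train({"tree_method": "hist"}, Xy, num_boost_round=2)

config = json.loads(booster.save_config())
# Assumed key path; a JSON array preserves the updater order.
print(config["learner"]["gradient_booster"]["updater"])
```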
Jiaming Yuan
b572a39919 [doc] Fix removed reference. (#9358) 2023-07-05 16:49:25 +08:00
Jiaming Yuan
645037e376 Improve test coverage with predictor configuration. (#9354)
* Improve test coverage with predictor configuration.

- Test with ext memory.
- Test with QDM.
- Test with dart.
2023-07-05 15:17:22 +08:00
Oliver Holworthy
6c9c8a9001 Enable Installation of Python Package with System lib in a Virtual Environment (#9349) 2023-07-05 05:46:17 +08:00
Boris
bb2de1fd5d xgboost4j-gpu_2.12-2.0.0: added libxgboost4j.so back. (#9351) 2023-07-04 03:31:33 +08:00
Jiaming Yuan
d0916849a6 Remove unused weight from buffer for cat features. (#9341) 2023-07-04 01:07:09 +08:00
Jiaming Yuan
6155394a06 Update news for 1.7.6 [skip ci] (#9350) 2023-07-04 01:04:34 +08:00
Jiaming Yuan
e964654b8f [skl] Enable cat feature without specifying tree method. (#9353) 2023-07-03 22:06:17 +08:00
Jiaming Yuan
39390cc2ee [breaking] Remove the predictor param, allow fallback to prediction using DMatrix. (#9129)
- A `DeviceOrd` struct is implemented to indicate the device. It will eventually replace the `gpu_id` parameter.
- The `predictor` parameter is removed.
- Fallback to `DMatrix` when `inplace_predict` is not available.
- The heuristic for choosing a predictor is only used during training.
2023-07-03 19:23:54 +08:00
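A sketch of the post-removal behavior: there is no `predictor` knob, and in-place prediction agrees with the DMatrix path it can fall back to:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(128, 6)), rng.normal(size=128)
booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, label=y), 10)

pred_inplace = booster.inplace_predict(X)       # preferred fast path
pred_dmatrix = booster.predict(xgb.DMatrix(X))  # fallback path
assert np.allclose(pred_inplace, pred_dmatrix)
```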
Rong Ou
3a0f787703 Support column split in GPU predictor (#9343) 2023-07-03 04:05:34 +08:00
Rong Ou
f90771eec6 Fix device communicator dependency (#9346) 2023-06-29 10:34:30 +08:00
Jiaming Yuan
f4798718c7 Use hist as the default tree method. (#9320) 2023-06-27 23:04:24 +08:00
Jiaming Yuan
bc267dd729 Use ptr from mmap for GHistIndexMatrix and ColumnMatrix. (#9315)
* Use ptr from mmap for `GHistIndexMatrix` and `ColumnMatrix`.

- Define a resource for holding various types of memory pointers.
- Define ref vector for holding resources.
- Swap the underlying resources for GHist and ColumnM.
- Add documentation for current status.
- s390x support is removed. XGBoost should still work if you can compile it; all the old workaround code did was get GCC to compile.
2023-06-27 19:05:46 +08:00
jasjung
96c3071a8a [doc] Update learning_to_rank.rst (#9336) 2023-06-27 13:56:18 +08:00
Jiaming Yuan
cfa9c42eb4 Fix callback in AFT viz demo. (#9333)
* Fix callback in AFT viz demo.

- Update the callback function.
- Add lint check.
2023-06-26 22:35:02 +08:00
Jiaming Yuan
6efe7c129f [doc] Update reference in R vignettes. (#9323) 2023-06-26 18:32:11 +08:00
1178 changed files with 79707 additions and 47503 deletions


@@ -17,7 +17,7 @@ AllowShortEnumsOnASingleLine: true
AllowShortBlocksOnASingleLine: Never
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: All
AllowShortLambdasOnASingleLine: All
AllowShortLambdasOnASingleLine: Inline
AllowShortIfStatementsOnASingleLine: WithoutElse
AllowShortLoopsOnASingleLine: true
AlwaysBreakAfterDefinitionReturnType: None


@@ -8,24 +8,28 @@ updates:
- package-ecosystem: "maven"
directory: "/jvm-packages"
schedule:
interval: "daily"
interval: "monthly"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j"
schedule:
interval: "daily"
interval: "monthly"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j-gpu"
schedule:
interval: "daily"
interval: "monthly"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j-example"
schedule:
interval: "daily"
interval: "monthly"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j-spark"
schedule:
interval: "daily"
interval: "monthly"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j-spark-gpu"
schedule:
interval: "daily"
interval: "monthly"
- package-ecosystem: "github-actions"
directory: /
schedule:
interval: "monthly"

.github/workflows/freebsd.yml (new file)

@@ -0,0 +1,34 @@
name: FreeBSD
on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 20
name: A job to run test in FreeBSD
steps:
- uses: actions/checkout@v4
with:
submodules: 'true'
- name: Test in FreeBSD
id: test
uses: vmactions/freebsd-vm@v1
with:
usesh: true
prepare: |
pkg install -y cmake git ninja googletest
run: |
mkdir build
cd build
cmake .. -GNinja -DGOOGLE_TEST=ON
ninja -v
./testxgboost

.github/workflows/i386.yml (new file)

@@ -0,0 +1,43 @@
name: XGBoost-i386-test
on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build-32bit:
name: Build 32-bit
runs-on: ubuntu-latest
services:
registry:
image: registry:2
ports:
- 5000:5000
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3.6.1
with:
driver-opts: network=host
- name: Build and push container
uses: docker/build-push-action@v6
with:
context: .
file: tests/ci_build/Dockerfile.i386
push: true
tags: localhost:5000/xgboost/build-32bit:latest
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build XGBoost
run: |
docker run --rm -v $PWD:/workspace -w /workspace \
-e CXXFLAGS='-Wno-error=overloaded-virtual -Wno-error=maybe-uninitialized -Wno-error=redundant-move' \
localhost:5000/xgboost/build-32bit:latest \
tests/ci_build/build_via_cmake.sh


@@ -5,36 +5,40 @@ on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
test-with-jvm:
name: Test JVM on OS ${{ matrix.os }}
timeout-minutes: 30
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [windows-latest, ubuntu-latest, macos-11]
os: [windows-latest, ubuntu-latest, macos-13]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: actions/setup-python@7f80679172b057fc5e90d70d197929d454754a5a # v4.3.0
- uses: actions/setup-java@6a0805fcefea3d4657a47ac4c165951e33482018 # v4.2.2
with:
python-version: '3.8'
architecture: 'x64'
distribution: 'temurin'
java-version: '8'
- uses: actions/setup-java@d202f5dbf7256730fb690ec59f6381650114feb2 # v3.6.0
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
java-version: 1.8
- name: Install Python packages
run: |
python -m pip install wheel setuptools
python -m pip install awscli
miniforge-variant: Mambaforge
miniforge-version: latest
activate-environment: jvm_tests
environment-file: tests/ci_build/conda_env/jvm_tests.yml
use-mamba: true
- name: Cache Maven packages
uses: actions/cache@6998d139ddd3e68c71e9e398d8e40b71a2f39812 # v3.2.5
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
with:
path: ~/.m2
key: ${{ runner.os }}-m2-${{ hashFiles('./jvm-packages/pom.xml') }}
@@ -45,20 +49,28 @@ jobs:
cd jvm-packages
mvn test -B -pl :xgboost4j_2.12
- name: Test XGBoost4J (Core, Spark, Examples)
run: |
rm -rfv build/
cd jvm-packages
mvn -B test
if: matrix.os == 'ubuntu-latest' # Distributed training doesn't work on Windows
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
run: |
echo "branch=${GITHUB_REF#refs/heads/}" >> "$GITHUB_OUTPUT"
id: extract_branch
if: |
(github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
matrix.os == 'windows-latest'
(matrix.os == 'windows-latest' || matrix.os == 'macos-13')
- name: Publish artifact xgboost4j.dll to S3
run: |
cd lib/
Rename-Item -Path xgboost4j.dll -NewName xgboost4j_${{ github.sha }}.dll
dir
python -m awscli s3 cp xgboost4j_${{ github.sha }}.dll s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/ --acl public-read
python -m awscli s3 cp xgboost4j_${{ github.sha }}.dll s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/libxgboost4j/ --acl public-read --region us-west-2
if: |
(github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
matrix.os == 'windows-latest'
@@ -66,16 +78,19 @@ jobs:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
- name: Test XGBoost4J (Core, Spark, Examples)
- name: Publish artifact libxgboost4j.dylib to S3
shell: bash -l {0}
run: |
rm -rfv build/
cd jvm-packages
mvn -B test
if: matrix.os == 'ubuntu-latest' # Distributed training doesn't work on Windows
cd lib/
mv -v libxgboost4j.dylib libxgboost4j_${{ github.sha }}.dylib
ls
python -m awscli s3 cp libxgboost4j_${{ github.sha }}.dylib s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/libxgboost4j/ --acl public-read --region us-west-2
if: |
(github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
matrix.os == 'macos-13'
env:
RABIT_MOCK: ON
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
- name: Build and Test XGBoost4J with scala 2.13
run: |
@@ -83,5 +98,3 @@ jobs:
cd jvm-packages
mvn -B clean install test -Pdefault,scala-2.13
if: matrix.os == 'ubuntu-latest' # Distributed training doesn't work on Windows
env:
RABIT_MOCK: ON


@@ -9,6 +9,10 @@ on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
gtest-cpu:
@@ -17,9 +21,9 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [macos-11]
os: [macos-12]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- name: Install system packages
@@ -29,7 +33,7 @@ jobs:
run: |
mkdir build
cd build
cmake .. -DGOOGLE_TEST=ON -DUSE_OPENMP=ON -DUSE_DMLC_GTEST=ON -DPLUGIN_DENSE_PARSER=ON -GNinja
cmake .. -DGOOGLE_TEST=ON -DUSE_OPENMP=ON -DUSE_DMLC_GTEST=ON -GNinja -DBUILD_DEPRECATED_CLI=ON -DUSE_SANITIZER=ON -DENABLED_SANITIZERS=address -DCMAKE_BUILD_TYPE=RelWithDebInfo
ninja -v
- name: Run gtest binary
run: |
@@ -45,7 +49,7 @@ jobs:
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- name: Install system packages
@@ -56,13 +60,52 @@ jobs:
run: |
mkdir build
cd build
cmake .. -GNinja -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON -DUSE_OPENMP=OFF
cmake .. -GNinja -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON -DUSE_OPENMP=OFF -DBUILD_DEPRECATED_CLI=ON
ninja -v
- name: Run gtest binary
run: |
cd build
ctest --extra-verbose
gtest-cpu-sycl:
name: Test Google C++ unittest (CPU SYCL)
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
python-version: ["3.10"]
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
miniforge-variant: Mambaforge
miniforge-version: latest
activate-environment: linux_sycl_test
environment-file: tests/ci_build/conda_env/linux_sycl_test.yml
use-mamba: true
- name: Display Conda env
run: |
conda info
conda list
- name: Build and install XGBoost
shell: bash -l {0}
run: |
mkdir build
cd build
cmake .. -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON -DPLUGIN_SYCL=ON -DCMAKE_CXX_COMPILER=g++ -DCMAKE_C_COMPILER=gcc -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX
make -j$(nproc)
- name: Run gtest binary for SYCL
run: |
cd build
./testxgboost --gtest_filter=Sycl*
- name: Run gtest binary for non SYCL
run: |
cd build
./testxgboost --gtest_filter=-Sycl*
c-api-demo:
name: Test installing XGBoost lib + building the C API demo
runs-on: ${{ matrix.os }}
@@ -73,17 +116,18 @@ jobs:
fail-fast: false
matrix:
os: ["ubuntu-latest"]
python-version: ["3.8"]
python-version: ["3.10"]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
cache-downloads: true
cache-env: true
environment-name: cpp_test
miniforge-variant: Mambaforge
miniforge-version: latest
activate-environment: cpp_test
environment-file: tests/ci_build/conda_env/cpp_test.yml
use-mamba: true
- name: Display Conda env
run: |
conda info
@@ -112,8 +156,9 @@ jobs:
- name: Build and install XGBoost shared library
run: |
cd build
cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja
cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja -DPLUGIN_FEDERATED=ON -DGOOGLE_TEST=ON
ninja -v install
./testxgboost
cd -
- name: Build and run C API demo with shared
run: |
@@ -132,27 +177,17 @@ jobs:
runs-on: ubuntu-latest
name: Code linting for C++
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: actions/setup-python@7f80679172b057fc5e90d70d197929d454754a5a # v4.3.0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3 # v5.2.0
with:
python-version: "3.8"
python-version: "3.10"
architecture: 'x64'
- name: Install Python packages
run: |
python -m pip install wheel setuptools cpplint pylint
python -m pip install wheel setuptools cmakelint cpplint pylint
- name: Run lint
run: |
python3 dmlc-core/scripts/lint.py xgboost cpp R-package/src
python3 dmlc-core/scripts/lint.py --exclude_path \
python-package/xgboost/dmlc-core \
python-package/xgboost/include \
python-package/xgboost/lib \
python-package/xgboost/rabit \
python-package/xgboost/src \
--pylint-rc python-package/.pylintrc \
xgboost \
cpp \
include src python-package
python3 tests/ci_build/lint_cpp.py
sh ./tests/ci_build/lint_cmake.sh


@@ -9,6 +9,10 @@ defaults:
run:
shell: bash -l {0}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
python-mypy-lint:
runs-on: ubuntu-latest
@@ -17,15 +21,16 @@ jobs:
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
cache-downloads: true
cache-env: true
environment-name: python_lint
miniforge-variant: Mambaforge
miniforge-version: latest
activate-environment: python_lint
environment-file: tests/ci_build/conda_env/python_lint.yml
use-mamba: true
- name: Display Conda env
run: |
conda info
@@ -48,15 +53,16 @@ jobs:
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
cache-downloads: true
cache-env: true
environment-name: sdist_test
miniforge-variant: Mambaforge
miniforge-version: latest
activate-environment: sdist_test
environment-file: tests/ci_build/conda_env/sdist_test.yml
use-mamba: true
- name: Display Conda env
run: |
conda info
@@ -77,17 +83,17 @@ jobs:
name: Test installing XGBoost Python source package on ${{ matrix.os }}
strategy:
matrix:
os: [macos-11, windows-latest]
python-version: ["3.8"]
os: [macos-13, windows-latest]
python-version: ["3.10"]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- name: Install osx system dependencies
if: matrix.os == 'macos-11'
if: matrix.os == 'macos-13'
run: |
brew install ninja libomp
- uses: conda-incubator/setup-miniconda@35d1405e78aa3f784fe3ce9a2eb378d5eeb62169 # v2.1.1
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
@@ -115,19 +121,20 @@ jobs:
strategy:
matrix:
config:
- {os: macos-11}
- {os: macos-13}
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
cache-downloads: true
cache-env: true
environment-name: macos_test
miniforge-variant: Mambaforge
miniforge-version: latest
activate-environment: macos_cpu_test
environment-file: tests/ci_build/conda_env/macos_cpu_test.yml
use-mamba: true
- name: Display Conda env
run: |
@@ -143,7 +150,7 @@ jobs:
# Set prefix, to use OpenMP library from Conda env
# See https://github.com/dmlc/xgboost/issues/7039#issuecomment-1025038228
# to learn why we don't use libomp from Homebrew.
cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX -DBUILD_DEPRECATED_CLI=ON
ninja
- name: Install Python package
@@ -167,14 +174,14 @@ jobs:
strategy:
matrix:
config:
- {os: windows-latest, python-version: '3.8'}
- {os: windows-latest, python-version: '3.10'}
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: conda-incubator/setup-miniconda@35d1405e78aa3f784fe3ce9a2eb378d5eeb62169 # v2.1.1
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
auto-update-conda: true
python-version: ${{ matrix.config.python-version }}
@@ -190,7 +197,7 @@ jobs:
run: |
mkdir build_msvc
cd build_msvc
cmake .. -G"Visual Studio 17 2022" -DCMAKE_CONFIGURATION_TYPES="Release" -A x64 -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON
cmake .. -G"Visual Studio 17 2022" -DCMAKE_CONFIGURATION_TYPES="Release" -A x64 -DBUILD_DEPRECATED_CLI=ON
cmake --build . --config Release --parallel $(nproc)
- name: Install Python package
@@ -211,19 +218,20 @@ jobs:
strategy:
matrix:
config:
- {os: ubuntu-latest, python-version: "3.8"}
- {os: ubuntu-latest, python-version: "3.10"}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
cache-downloads: true
cache-env: true
environment-name: linux_cpu_test
miniforge-variant: Mambaforge
miniforge-version: latest
activate-environment: linux_cpu_test
environment-file: tests/ci_build/conda_env/linux_cpu_test.yml
use-mamba: true
- name: Display Conda env
run: |
@@ -234,7 +242,7 @@ jobs:
run: |
mkdir build
cd build
cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX -DBUILD_DEPRECATED_CLI=ON
ninja
- name: Install Python package
@@ -255,3 +263,86 @@ jobs:
shell: bash -l {0}
run: |
pytest -s -v -rxXs --durations=0 ./tests/test_distributed/test_with_spark
python-sycl-tests-on-ubuntu:
name: Test XGBoost Python package with SYCL on ${{ matrix.config.os }}
runs-on: ${{ matrix.config.os }}
timeout-minutes: 90
strategy:
matrix:
config:
- {os: ubuntu-latest, python-version: "3.10"}
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
miniforge-variant: Mambaforge
miniforge-version: latest
activate-environment: linux_sycl_test
environment-file: tests/ci_build/conda_env/linux_sycl_test.yml
use-mamba: true
- name: Display Conda env
run: |
conda info
conda list
- name: Build XGBoost on Ubuntu
run: |
mkdir build
cd build
cmake .. -DPLUGIN_SYCL=ON -DCMAKE_CXX_COMPILER=g++ -DCMAKE_C_COMPILER=gcc -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
make -j$(nproc)
- name: Install Python package
run: |
cd python-package
python --version
pip install -v .
- name: Test Python package
run: |
pytest -s -v -rxXs --durations=0 ./tests/python-sycl/
python-system-installation-on-ubuntu:
name: Test XGBoost Python package System Installation on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- name: Set up Python 3.10
uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3 # v5.2.0
with:
python-version: "3.10"
- name: Install ninja
run: |
sudo apt-get update && sudo apt-get install -y ninja-build
- name: Build XGBoost on Ubuntu
run: |
mkdir build
cd build
cmake .. -GNinja
ninja
- name: Copy lib to system lib
run: |
cp lib/* "$(python -c 'import sys; print(sys.base_prefix)')/lib"
- name: Install XGBoost in Virtual Environment
run: |
cd python-package
pip install virtualenv
virtualenv venv
source venv/bin/activate && \
pip install -v . --config-settings use_system_libxgboost=True && \
python -c 'import xgboost'


@@ -5,6 +5,14 @@ on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
defaults:
run:
shell: bash -l {0}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
python-wheels:
name: Build wheel for ${{ matrix.platform_id }}
@@ -12,30 +20,36 @@ jobs:
strategy:
matrix:
include:
- os: macos-latest
- os: macos-13
platform_id: macosx_x86_64
- os: macos-latest
- os: macos-14
platform_id: macosx_arm64
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- name: Setup Python
uses: actions/setup-python@7f80679172b057fc5e90d70d197929d454754a5a # v4.3.0
- name: Set up homebrew
uses: Homebrew/actions/setup-homebrew@68fa6aeb1ccb0596d311f2b34ec74ec21ee68e54
- name: Install libomp
run: brew install libomp
- uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3.0.4
with:
python-version: "3.8"
miniforge-variant: Mambaforge
miniforge-version: latest
python-version: "3.10"
use-mamba: true
- name: Build wheels
run: bash tests/ci_build/build_python_wheels.sh ${{ matrix.platform_id }} ${{ github.sha }}
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
run: |
echo "branch=${GITHUB_REF#refs/heads/}" >> "$GITHUB_OUTPUT"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Upload Python wheel
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
run: |
python -m pip install awscli
python -m awscli s3 cp wheelhouse/*.whl s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/ --acl public-read
python -m awscli s3 cp wheelhouse/*.whl s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/ --acl public-read --region us-west-2
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}


@@ -10,6 +10,10 @@ on:
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
test-R-noLD:
if: github.event.comment.body == '/gha run r-nold-test' && contains('OWNER,MEMBER,COLLABORATOR', github.event.comment.author_association)
@@ -23,7 +27,7 @@ jobs:
run: |
apt update && apt install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libglpk-dev libxml2-dev libharfbuzz-dev libfribidi-dev git -y
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'


@@ -8,6 +8,10 @@ env:
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
lintr:
runs-on: ${{ matrix.config.os }}
@@ -21,20 +25,20 @@ jobs:
RSPM: ${{ matrix.config.rspm }}
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@50d1eae9b8da0bb3f8582c59a5b82225fa2fe7f2 # v2.3.1
- uses: r-lib/actions/setup-r@929c772977a3a13c8733b363bf5a2f685c25dd91 # v2.9.0
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
uses: actions/cache@937d24475381cd9c75ae6db12cb4e79714b926ed # v3.0.11
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-7-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-7-${{ hashFiles('R-package/DESCRIPTION') }}
- name: Install dependencies
shell: Rscript {0}
@@ -46,7 +50,7 @@ jobs:
MAKEFLAGS="-j$(nproc)" R CMD INSTALL R-package/
Rscript tests/ci_build/lint_r.R $(pwd)
test-R-on-Windows:
test-Rpkg:
runs-on: ${{ matrix.config.os }}
name: Test R on OS ${{ matrix.config.os }}, R ${{ matrix.config.r }}, Compiler ${{ matrix.config.compiler }}, Build ${{ matrix.config.build }}
strategy:
@@ -54,30 +58,35 @@ jobs:
matrix:
config:
- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'autotools'}
- {os: windows-latest, r: '4.2.0', compiler: 'msvc', build: 'cmake'}
- {os: ubuntu-latest, r: 'release', compiler: 'none', build: 'cmake'}
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
RSPM: ${{ matrix.config.rspm }}
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- name: Install system dependencies
run: |
sudo apt update
sudo apt install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libglpk-dev libxml2-dev libharfbuzz-dev libfribidi-dev
if: matrix.config.os == 'ubuntu-latest'
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@50d1eae9b8da0bb3f8582c59a5b82225fa2fe7f2 # v2.3.1
- uses: r-lib/actions/setup-r@929c772977a3a13c8733b363bf5a2f685c25dd91 # v2.9.0
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
uses: actions/cache@937d24475381cd9c75ae6db12cb4e79714b926ed # v3.0.11
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-7-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-7-${{ hashFiles('R-package/DESCRIPTION') }}
- uses: actions/setup-python@7f80679172b057fc5e90d70d197929d454754a5a # v4.3.0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3 # v5.2.0
with:
python-version: "3.8"
python-version: "3.10"
architecture: 'x64'
- uses: r-lib/actions/setup-tinytex@v2
@@ -90,12 +99,18 @@ jobs:
- name: Test R
run: |
python tests/ci_build/test_r_package.py --compiler='${{ matrix.config.compiler }}' --build-tool="${{ matrix.config.build }}" --task=check
if: matrix.config.compiler != 'none'
- name: Test R
run: |
python tests/ci_build/test_r_package.py --build-tool="${{ matrix.config.build }}" --task=check
if: matrix.config.compiler == 'none'
test-R-on-Debian:
name: Test R package on Debian
runs-on: ubuntu-latest
container:
image: rhub/debian-gcc-devel
image: rhub/debian-gcc-release
steps:
- name: Install system dependencies
@@ -108,21 +123,21 @@ jobs:
run: |
git config --global --add safe.directory "${GITHUB_WORKSPACE}"
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- name: Install dependencies
shell: bash -l {0}
run: |
/tmp/R-devel/bin/Rscript -e "source('./R-package/tests/helper_scripts/install_deps.R')"
Rscript -e "source('./R-package/tests/helper_scripts/install_deps.R')"
- name: Test R
shell: bash -l {0}
run: |
python3 tests/ci_build/test_r_package.py --r=/tmp/R-devel/bin/R --build-tool=autotools --task=check
python3 tests/ci_build/test_r_package.py --r=/usr/bin/R --build-tool=autotools --task=check
- uses: dorny/paths-filter@v2
- uses: dorny/paths-filter@v3
id: changes
with:
filters: |
@@ -132,4 +147,4 @@ jobs:
- name: Run document check
if: steps.changes.outputs.r_package == 'true'
run: |
python3 tests/ci_build/test_r_package.py --r=/tmp/R-devel/bin/R --task=doc
python3 tests/ci_build/test_r_package.py --r=/usr/bin/R --task=doc


@@ -22,26 +22,26 @@ jobs:
steps:
- name: "Checkout code"
uses: actions/checkout@a12a3943b4bdde767164f792f33f40b04645d846 # tag=v3.0.0
uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@99c53751e09b9529366343771cc321ec74e9bd3d # tag=v2.0.6
uses: ossf/scorecard-action@62b2cac7ed8198b15735ed49ab1e5cf35480ba46 # v2.4.0
with:
results_file: results.sarif
results_format: sarif
# Publish the results for public repositories to enable scorecard badges. For more details, see
# https://github.com/ossf/scorecard-action#publishing-results.
# For private repositories, `publish_results` will automatically be set to `false`, regardless
# https://github.com/ossf/scorecard-action#publishing-results.
# For private repositories, `publish_results` will automatically be set to `false`, regardless
# of the value entered here.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@6673cd052c4cd6fcf4b4e6e60ea986c889389535 # tag=v3.0.0
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
with:
name: SARIF file
path: results.sarif
@@ -49,6 +49,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@5f532563584d71fdef14ee64d17bafb34f751ce5 # tag=v1.0.26
uses: github/codeql-action/upload-sarif@83a02f7883b12e0e4e1a146174f5e2292a01e601 # v2.16.4
with:
sarif_file: results.sarif


@@ -3,7 +3,7 @@ name: update-rapids
on:
workflow_dispatch:
schedule:
- cron: "0 20 * * *" # Run once daily
- cron: "0 20 * * 1" # Run once weekly
permissions:
pull-requests: write
@@ -25,14 +25,14 @@ jobs:
name: Check latest RAPIDS
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
with:
submodules: 'true'
- name: Check latest RAPIDS and update conftest.sh
run: |
bash tests/buildkite/update-rapids.sh
- name: Create Pull Request
uses: peter-evans/create-pull-request@v5
uses: peter-evans/create-pull-request@v6
if: github.ref == 'refs/heads/master'
with:
add-paths: |

.gitignore

@@ -27,12 +27,13 @@
*vali
*sdf
Release
*exe*
*exe
*exp
ipch
*.filters
*.user
*log
rmm_log.txt
Debug
*suo
.Rhistory
@@ -48,6 +49,7 @@ Debug
*.Rproj
./xgboost.mpi
./xgboost.mock
*.bak
#.Rbuildignore
R-package.Rproj
*.cache*
@@ -62,6 +64,7 @@ java/xgboost4j-demo/data/
java/xgboost4j-demo/tmp/
java/xgboost4j-demo/model/
nb-configuration*
# Eclipse
.project
.cproject
@@ -83,6 +86,7 @@ target
*.gcov
*.gcda
*.gcno
*.ubj
build_tests
/tests/cpp/xgboost_test
@@ -96,6 +100,7 @@ metastore_db
# files from R-package source install
**/config.status
R-package/config.h
R-package/src/Makevars
*.lib
@@ -145,7 +150,12 @@ __MACOSX/
model*.json
# R tests
*.htm
*.html
*.libsvm
*.rds
Rplots.pdf
*.zip
# nsys
*.nsys-rep


@@ -12,7 +12,7 @@ submodules:
build:
os: ubuntu-22.04
tools:
python: "3.8"
python: "3.10"
apt_packages:
- graphviz
- cmake
@@ -32,4 +32,3 @@ formats:
python:
install:
- requirements: doc/requirements.txt
system_packages: true


@@ -15,4 +15,3 @@
address = {New York, NY, USA},
keywords = {large-scale machine learning},
}


@@ -1,36 +1,58 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(xgboost LANGUAGES CXX C VERSION 2.0.0)
if(PLUGIN_SYCL)
string(REPLACE " -isystem ${CONDA_PREFIX}/include" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
endif()
project(xgboost LANGUAGES CXX C VERSION 2.2.0)
include(cmake/Utils.cmake)
list(APPEND CMAKE_MODULE_PATH "${xgboost_SOURCE_DIR}/cmake/modules")
cmake_policy(SET CMP0022 NEW)
cmake_policy(SET CMP0079 NEW)
cmake_policy(SET CMP0076 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0063 NEW)
cmake_policy(SET CMP0063 NEW)
if ((${CMAKE_VERSION} VERSION_GREATER 3.13) OR (${CMAKE_VERSION} VERSION_EQUAL 3.13))
cmake_policy(SET CMP0077 NEW)
endif ((${CMAKE_VERSION} VERSION_GREATER 3.13) OR (${CMAKE_VERSION} VERSION_EQUAL 3.13))
# These policies are already set from 3.18 but we still need to set the policy
# default variables here for lower minimum versions in the submodules
set(CMAKE_POLICY_DEFAULT_CMP0063 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0069 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0076 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0077 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0079 NEW)
message(STATUS "CMake version ${CMAKE_VERSION}")
if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.0)
message(FATAL_ERROR "GCC version must be at least 5.0!")
# Check compiler versions
# Use recent compilers to ensure that std::filesystem is available
if(MSVC)
if(MSVC_VERSION LESS 1920)
message(FATAL_ERROR "Need Visual Studio 2019 or newer to build XGBoost")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "8.1")
message(FATAL_ERROR "Need GCC 8.1 or newer to build XGBoost")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "11.0")
message(FATAL_ERROR "Need Xcode 11.0 (AppleClang 11.0) or newer to build XGBoost")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "9.0")
message(FATAL_ERROR "Need Clang 9.0 or newer to build XGBoost")
endif()
endif()
include(${xgboost_SOURCE_DIR}/cmake/FindPrefetchIntrinsics.cmake)
include(${xgboost_SOURCE_DIR}/cmake/PrefetchIntrinsics.cmake)
find_prefetch_intrinsics()
include(${xgboost_SOURCE_DIR}/cmake/Version.cmake)
write_version()
set_default_configuration_release()
#-- Options
include(CMakeDependentOption)
## User options
option(BUILD_C_DOC "Build documentation for C APIs using Doxygen." OFF)
option(USE_OPENMP "Build with OpenMP support." ON)
option(BUILD_STATIC_LIB "Build static library" OFF)
option(BUILD_DEPRECATED_CLI "Build the deprecated command line interface" OFF)
option(FORCE_SHARED_CRT "Build with dynamic CRT on Windows (/MD)" OFF)
option(RABIT_BUILD_MPI "Build MPI" OFF)
## Bindings
option(JVM_BINDINGS "Build JVM bindings" OFF)
option(R_LIB "Build shared library for R package" OFF)
@@ -45,19 +67,34 @@ option(USE_DMLC_GTEST "Use google tests bundled with dmlc-core submodule" OFF)
option(USE_DEVICE_DEBUG "Generate CUDA device debug info." OFF)
option(USE_NVTX "Build with cuda profiling annotations. Developers only." OFF)
set(NVTX_HEADER_DIR "" CACHE PATH "Path to the stand-alone nvtx header")
option(RABIT_MOCK "Build rabit with mock" OFF)
option(HIDE_CXX_SYMBOLS "Build shared library and hide all C++ symbols" OFF)
option(KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR "Output build artifacts in CMake binary dir" OFF)
## CUDA
option(USE_CUDA "Build with GPU acceleration" OFF)
option(USE_NCCL "Build with NCCL to enable distributed GPU support." OFF)
# This is specifically designed for PyPI binary release and should be disabled for most of the cases.
option(USE_DLOPEN_NCCL "Whether to load nccl dynamically." OFF)
option(BUILD_WITH_SHARED_NCCL "Build with shared NCCL library." OFF)
set(GPU_COMPUTE_VER "" CACHE STRING
"Semicolon separated list of compute versions to be built against, e.g. '35;61'")
## Copied From dmlc
option(USE_HDFS "Build with HDFS support" OFF)
option(USE_AZURE "Build with AZURE support" OFF)
option(USE_S3 "Build with S3 support" OFF)
if(USE_CUDA)
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES AND NOT DEFINED ENV{CUDAARCHS})
set(GPU_COMPUTE_VER "" CACHE STRING
"Semicolon separated list of compute versions to be built against, e.g. '35;61'")
else()
# Clear any cached values from previous runs
unset(GPU_COMPUTE_VER)
unset(GPU_COMPUTE_VER CACHE)
endif()
endif()
# CUDA device LTO was introduced in CMake v3.25 and requires host LTO to also be enabled, but it can
# still be explicitly disabled, allowing for LTO on the host only, on both host and device, or on neither;
# device-only LTO is not a supported configuration
cmake_dependent_option(USE_CUDA_LTO
"Enable link-time optimization for CUDA device code"
"${CMAKE_INTERPROCEDURAL_OPTIMIZATION}"
"CMAKE_VERSION VERSION_GREATER_EQUAL 3.25;USE_CUDA;CMAKE_INTERPROCEDURAL_OPTIMIZATION"
OFF)
## Sanitizers
option(USE_SANITIZER "Use sanitizer flags" OFF)
option(SANITIZER_PATH "Path to sanitizers.")
@@ -65,95 +102,145 @@ set(ENABLED_SANITIZERS "address" "leak" CACHE STRING
"Semicolon separated list of sanitizer names. E.g 'address;leak'. Supported sanitizers are
address, leak, undefined and thread.")
## Plugins
option(PLUGIN_DENSE_PARSER "Build dense parser plugin" OFF)
option(PLUGIN_RMM "Build with RAPIDS Memory Manager (RMM)" OFF)
option(PLUGIN_FEDERATED "Build with Federated Learning" OFF)
## TODO: 1. Add check if DPC++ compiler is used for building
option(PLUGIN_UPDATER_ONEAPI "DPC++ updater" OFF)
option(PLUGIN_SYCL "SYCL plugin" OFF)
option(ADD_PKGCONFIG "Add xgboost.pc into system." ON)
#-- Checks for building XGBoost
if (USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
if(USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
message(SEND_ERROR "Do not enable `USE_DEBUG_OUTPUT' with release build.")
endif (USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
if (USE_NCCL AND NOT (USE_CUDA))
endif()
if(USE_NCCL AND NOT (USE_CUDA))
message(SEND_ERROR "`USE_NCCL` must be enabled with `USE_CUDA` flag.")
endif (USE_NCCL AND NOT (USE_CUDA))
if (USE_DEVICE_DEBUG AND NOT (USE_CUDA))
endif()
if(USE_DEVICE_DEBUG AND NOT (USE_CUDA))
message(SEND_ERROR "`USE_DEVICE_DEBUG` must be enabled with `USE_CUDA` flag.")
endif (USE_DEVICE_DEBUG AND NOT (USE_CUDA))
if (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
endif()
if(BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_NCCL=ON to enable BUILD_WITH_SHARED_NCCL.")
endif (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
if (JVM_BINDINGS AND R_LIB)
endif()
if(USE_DLOPEN_NCCL AND (NOT USE_NCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_NCCL=ON to enable USE_DLOPEN_NCCL.")
endif()
if(USE_DLOPEN_NCCL AND (NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux")))
message(SEND_ERROR "`USE_DLOPEN_NCCL` supports only Linux at the moment.")
endif()
if(JVM_BINDINGS AND R_LIB)
message(SEND_ERROR "`R_LIB' is not compatible with `JVM_BINDINGS' as they both have customized configurations.")
endif (JVM_BINDINGS AND R_LIB)
if (R_LIB AND GOOGLE_TEST)
message(WARNING "Some C++ unittests will fail with `R_LIB` enabled,
as R package redirects some functions to R runtime implementation.")
endif (R_LIB AND GOOGLE_TEST)
if (USE_AVX)
message(SEND_ERROR "The option 'USE_AVX' is deprecated as experimental AVX features have been removed from XGBoost.")
endif (USE_AVX)
if (PLUGIN_LZ4)
message(SEND_ERROR "The option 'PLUGIN_LZ4' is removed from XGBoost.")
endif (PLUGIN_LZ4)
if (PLUGIN_RMM AND NOT (USE_CUDA))
endif()
if(R_LIB AND GOOGLE_TEST)
message(
WARNING
"Some C++ tests will fail with `R_LIB` enabled, as R package redirects some functions to R runtime implementation."
)
endif()
if(PLUGIN_RMM AND NOT (USE_CUDA))
message(SEND_ERROR "`PLUGIN_RMM` must be enabled with `USE_CUDA` flag.")
endif (PLUGIN_RMM AND NOT (USE_CUDA))
if (PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
endif()
if(PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
message(SEND_ERROR "`PLUGIN_RMM` must be used with GCC or Clang compiler.")
endif (PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
if (PLUGIN_RMM AND NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux"))
endif()
if(PLUGIN_RMM AND NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux"))
message(SEND_ERROR "`PLUGIN_RMM` must be used with Linux.")
endif (PLUGIN_RMM AND NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux"))
if (ENABLE_ALL_WARNINGS)
if ((NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang") AND (NOT CMAKE_CXX_COMPILER_ID STREQUAL "GNU"))
endif()
if(ENABLE_ALL_WARNINGS)
if((NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang") AND (NOT CMAKE_CXX_COMPILER_ID STREQUAL "GNU"))
message(SEND_ERROR "ENABLE_ALL_WARNINGS is only available for Clang and GCC.")
endif ((NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang") AND (NOT CMAKE_CXX_COMPILER_ID STREQUAL "GNU"))
endif (ENABLE_ALL_WARNINGS)
if (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
endif()
endif()
if(BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
message(SEND_ERROR "Cannot build a static library libxgboost.a when R or JVM packages are enabled.")
endif (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
if (PLUGIN_FEDERATED)
if (CMAKE_CROSSCOMPILING)
endif()
if(PLUGIN_FEDERATED)
if(CMAKE_CROSSCOMPILING)
message(SEND_ERROR "Cannot cross compile with federated learning support")
endif ()
if (BUILD_STATIC_LIB)
endif()
if(BUILD_STATIC_LIB)
message(SEND_ERROR "Cannot build static lib with federated learning support")
endif ()
if (R_LIB OR JVM_BINDINGS)
endif()
if(R_LIB OR JVM_BINDINGS)
message(SEND_ERROR "Cannot enable federated learning support when R or JVM packages are enabled.")
endif ()
if (WIN32)
endif()
if(WIN32)
message(SEND_ERROR "Federated learning not supported for Windows platform")
endif ()
endif ()
endif()
endif()
#-- Removed options
if(USE_AVX)
message(SEND_ERROR "The option `USE_AVX` is deprecated as experimental AVX features have been removed from XGBoost.")
endif()
if(PLUGIN_LZ4)
message(SEND_ERROR "The option `PLUGIN_LZ4` is removed from XGBoost.")
endif()
if(RABIT_BUILD_MPI)
message(SEND_ERROR "The option `RABIT_BUILD_MPI` has been removed from XGBoost.")
endif()
if(USE_S3)
message(SEND_ERROR "The option `USE_S3` has been removed from XGBoost")
endif()
if(USE_AZURE)
message(SEND_ERROR "The option `USE_AZURE` has been removed from XGBoost")
endif()
if(USE_HDFS)
message(SEND_ERROR "The option `USE_HDFS` has been removed from XGBoost")
endif()
if(PLUGIN_DENSE_PARSER)
message(SEND_ERROR "The option `PLUGIN_DENSE_PARSER` has been removed from XGBoost.")
endif()
#-- Sanitizer
if (USE_SANITIZER)
if(USE_SANITIZER)
include(cmake/Sanitizer.cmake)
enable_sanitizers("${ENABLED_SANITIZERS}")
endif (USE_SANITIZER)
endif()
if (USE_CUDA)
if(USE_CUDA)
set(USE_OPENMP ON CACHE BOOL "CUDA requires OpenMP" FORCE)
# `export CXX=' is ignored by CMake CUDA.
set(CMAKE_CUDA_HOST_COMPILER ${CMAKE_CXX_COMPILER})
message(STATUS "Configured CUDA host compiler: ${CMAKE_CUDA_HOST_COMPILER}")
if(NOT DEFINED CMAKE_CUDA_HOST_COMPILER AND NOT DEFINED ENV{CUDAHOSTCXX})
set(CMAKE_CUDA_HOST_COMPILER ${CMAKE_CXX_COMPILER} CACHE FILEPATH
"The compiler executable to use when compiling host code for CUDA or HIP language files.")
mark_as_advanced(CMAKE_CUDA_HOST_COMPILER)
message(STATUS "Configured CUDA host compiler: ${CMAKE_CUDA_HOST_COMPILER}")
endif()
if(NOT DEFINED CMAKE_CUDA_RUNTIME_LIBRARY)
set(CMAKE_CUDA_RUNTIME_LIBRARY Static)
endif()
enable_language(CUDA)
if (${CMAKE_CUDA_COMPILER_VERSION} VERSION_LESS 11.0)
if(${CMAKE_CUDA_COMPILER_VERSION} VERSION_LESS 11.0)
message(FATAL_ERROR "CUDA version must be at least 11.0!")
endif()
set(GEN_CODE "")
format_gencode_flags("${GPU_COMPUTE_VER}" GEN_CODE)
add_subdirectory(${PROJECT_SOURCE_DIR}/gputreeshap)
if(DEFINED GPU_COMPUTE_VER)
compute_cmake_cuda_archs("${GPU_COMPUTE_VER}")
endif()
find_package(CUDAToolkit REQUIRED)
endif (USE_CUDA)
find_package(CCCL CONFIG)
if(NOT CCCL_FOUND)
message(STATUS "Standalone CCCL not found. Attempting to use CCCL from CUDA Toolkit...")
find_package(CCCL CONFIG
HINTS ${CUDAToolkit_LIBRARY_DIR}/cmake)
if(NOT CCCL_FOUND)
message(STATUS "Could not locate CCCL from CUDA Toolkit. Using Thrust and CUB from CUDA Toolkit...")
find_package(libcudacxx CONFIG REQUIRED
HINTS ${CUDAToolkit_LIBRARY_DIR}/cmake)
find_package(CUB CONFIG REQUIRED
HINTS ${CUDAToolkit_LIBRARY_DIR}/cmake)
find_package(Thrust CONFIG REQUIRED
HINTS ${CUDAToolkit_LIBRARY_DIR}/cmake)
thrust_create_target(Thrust HOST CPP DEVICE CUDA)
add_library(CCCL::CCCL INTERFACE IMPORTED GLOBAL)
target_link_libraries(CCCL::CCCL INTERFACE libcudacxx::libcudacxx CUB::CUB Thrust)
endif()
endif()
endif()
if (FORCE_COLORED_OUTPUT AND (CMAKE_GENERATOR STREQUAL "Ninja") AND
if(FORCE_COLORED_OUTPUT AND (CMAKE_GENERATOR STREQUAL "Ninja") AND
((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") OR
(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")))
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fdiagnostics-color=always")
@@ -161,84 +248,99 @@ endif()
find_package(Threads REQUIRED)
if (USE_OPENMP)
if (APPLE)
find_package(OpenMP)
if (NOT OpenMP_FOUND)
# Try again with extra path info; required for libomp 15+ from Homebrew
execute_process(COMMAND brew --prefix libomp
OUTPUT_VARIABLE HOMEBREW_LIBOMP_PREFIX
OUTPUT_STRIP_TRAILING_WHITESPACE)
set(OpenMP_C_FLAGS
"-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include")
set(OpenMP_CXX_FLAGS
"-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include")
set(OpenMP_C_LIB_NAMES omp)
set(OpenMP_CXX_LIB_NAMES omp)
set(OpenMP_omp_LIBRARY ${HOMEBREW_LIBOMP_PREFIX}/lib/libomp.dylib)
find_package(OpenMP REQUIRED)
endif ()
else ()
# -- OpenMP
include(cmake/FindOpenMPMacOS.cmake)
if(USE_OPENMP)
if(APPLE)
find_openmp_macos()
else()
find_package(OpenMP REQUIRED)
endif ()
endif (USE_OPENMP)
#Add for IBM i
if (${CMAKE_SYSTEM_NAME} MATCHES "OS400")
endif()
endif()
# Add for IBM i
if(${CMAKE_SYSTEM_NAME} MATCHES "OS400")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -pthread")
set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> -X64 qc <TARGET> <OBJECTS>")
endif()
if (USE_NCCL)
if(USE_NCCL)
find_package(Nccl REQUIRED)
endif (USE_NCCL)
endif()
if(MSVC)
if(FORCE_SHARED_CRT)
message(STATUS "XGBoost: Using dynamically linked MSVC runtime...")
set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>DLL")
else()
message(STATUS "XGBoost: Using statically linked MSVC runtime...")
set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>")
endif()
endif()
# dmlc-core
msvc_use_static_runtime()
if (FORCE_SHARED_CRT)
set(DMLC_FORCE_SHARED_CRT ON)
endif ()
set(DMLC_FORCE_SHARED_CRT ${FORCE_SHARED_CRT})
add_subdirectory(${xgboost_SOURCE_DIR}/dmlc-core)
if (MSVC)
if (TARGET dmlc_unit_tests)
target_compile_options(dmlc_unit_tests PRIVATE
-D_CRT_SECURE_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE)
endif (TARGET dmlc_unit_tests)
endif (MSVC)
# rabit
add_subdirectory(rabit)
if (RABIT_BUILD_MPI)
find_package(MPI REQUIRED)
endif (RABIT_BUILD_MPI)
if(MSVC)
if(TARGET dmlc_unit_tests)
target_compile_options(
dmlc_unit_tests PRIVATE
-D_CRT_SECURE_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE
)
endif()
endif()
# core xgboost
add_subdirectory(${xgboost_SOURCE_DIR}/src)
target_link_libraries(objxgboost PUBLIC dmlc)
# Link -lstdc++fs for GCC 8.x
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS "9.0")
target_link_libraries(objxgboost PUBLIC stdc++fs)
endif()
# Exports some R specific definitions and objects
if (R_LIB)
if(R_LIB)
add_subdirectory(${xgboost_SOURCE_DIR}/R-package)
endif (R_LIB)
endif()
# This creates its own shared library `xgboost4j'.
if (JVM_BINDINGS)
if(JVM_BINDINGS)
add_subdirectory(${xgboost_SOURCE_DIR}/jvm-packages)
endif (JVM_BINDINGS)
endif()
# Plugin
add_subdirectory(${xgboost_SOURCE_DIR}/plugin)
if (PLUGIN_RMM)
if(PLUGIN_RMM)
find_package(rmm REQUIRED)
endif (PLUGIN_RMM)
# Patch the rmm targets so they reference the static cudart
# Remove this patch once RMM stops specifying cudart requirement
# (since RMM is a header-only library, it should not specify cudart in its CMake config)
get_target_property(rmm_link_libs rmm::rmm INTERFACE_LINK_LIBRARIES)
list(REMOVE_ITEM rmm_link_libs CUDA::cudart)
list(APPEND rmm_link_libs CUDA::cudart_static)
set_target_properties(rmm::rmm PROPERTIES INTERFACE_LINK_LIBRARIES "${rmm_link_libs}")
get_target_property(rmm_link_libs rmm::rmm INTERFACE_LINK_LIBRARIES)
endif()
if(PLUGIN_SYCL)
set(CMAKE_CXX_LINK_EXECUTABLE
"icpx <FLAGS> <CMAKE_CXX_LINK_FLAGS> -qopenmp <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>")
set(CMAKE_CXX_CREATE_SHARED_LIBRARY
"icpx <CMAKE_SHARED_LIBRARY_CXX_FLAGS> -qopenmp <LANGUAGE_COMPILE_FLAGS> \
<CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS> <SONAME_FLAG>,<TARGET_SONAME> \
-o <TARGET> <OBJECTS> <LINK_LIBRARIES>")
endif()
#-- library
if (BUILD_STATIC_LIB)
if(BUILD_STATIC_LIB)
add_library(xgboost STATIC)
else (BUILD_STATIC_LIB)
else()
add_library(xgboost SHARED)
endif (BUILD_STATIC_LIB)
endif()
target_link_libraries(xgboost PRIVATE objxgboost)
target_include_directories(xgboost
INTERFACE
@@ -247,58 +349,73 @@ target_include_directories(xgboost
#-- End shared library
#-- CLI for xgboost
add_executable(runxgboost ${xgboost_SOURCE_DIR}/src/cli_main.cc)
target_link_libraries(runxgboost PRIVATE objxgboost)
target_include_directories(runxgboost
PRIVATE
${xgboost_SOURCE_DIR}/include
${xgboost_SOURCE_DIR}/dmlc-core/include
${xgboost_SOURCE_DIR}/rabit/include
)
set_target_properties(runxgboost PROPERTIES OUTPUT_NAME xgboost)
if(BUILD_DEPRECATED_CLI)
add_executable(runxgboost ${xgboost_SOURCE_DIR}/src/cli_main.cc)
target_link_libraries(runxgboost PRIVATE objxgboost)
target_include_directories(runxgboost
PRIVATE
${xgboost_SOURCE_DIR}/include
${xgboost_SOURCE_DIR}/dmlc-core/include
)
set_target_properties(runxgboost PROPERTIES OUTPUT_NAME xgboost)
xgboost_target_properties(runxgboost)
xgboost_target_link_libraries(runxgboost)
xgboost_target_defs(runxgboost)
if(KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR)
set_output_directory(runxgboost ${xgboost_BINARY_DIR})
else()
set_output_directory(runxgboost ${xgboost_SOURCE_DIR})
endif()
endif()
#-- End CLI for xgboost
# Common setup for all targets
foreach(target xgboost objxgboost dmlc runxgboost)
foreach(target xgboost objxgboost dmlc)
xgboost_target_properties(${target})
xgboost_target_link_libraries(${target})
xgboost_target_defs(${target})
endforeach()
if (JVM_BINDINGS)
if(JVM_BINDINGS)
xgboost_target_properties(xgboost4j)
xgboost_target_link_libraries(xgboost4j)
xgboost_target_defs(xgboost4j)
endif (JVM_BINDINGS)
endif()
if (KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR)
set_output_directory(runxgboost ${xgboost_BINARY_DIR})
if(USE_OPENMP AND APPLE)
patch_openmp_path_macos(xgboost libxgboost)
endif()
if(KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR)
set_output_directory(xgboost ${xgboost_BINARY_DIR}/lib)
else ()
set_output_directory(runxgboost ${xgboost_SOURCE_DIR})
else()
set_output_directory(xgboost ${xgboost_SOURCE_DIR}/lib)
endif ()
endif()
# Ensure these two targets do not build simultaneously, as they produce outputs with conflicting names
add_dependencies(xgboost runxgboost)
if(BUILD_DEPRECATED_CLI)
add_dependencies(xgboost runxgboost)
endif()
#-- Installing XGBoost
if (R_LIB)
if(R_LIB)
include(cmake/RPackageInstallTargetSetup.cmake)
set_target_properties(xgboost PROPERTIES PREFIX "")
if (APPLE)
if(APPLE)
set_target_properties(xgboost PROPERTIES SUFFIX ".so")
endif (APPLE)
endif()
setup_rpackage_install_target(xgboost "${CMAKE_CURRENT_BINARY_DIR}/R-package-install")
set(CMAKE_INSTALL_PREFIX "${CMAKE_CURRENT_BINARY_DIR}/dummy_inst")
endif (R_LIB)
if (MINGW)
endif()
if(MINGW)
set_target_properties(xgboost PROPERTIES PREFIX "")
endif (MINGW)
endif()
if (BUILD_C_DOC)
if(BUILD_C_DOC)
include(cmake/Doc.cmake)
run_doxygen()
endif (BUILD_C_DOC)
endif()
include(CPack)
@@ -314,11 +431,19 @@ install(DIRECTORY ${xgboost_SOURCE_DIR}/include/xgboost
# > in any export set.
#
# https://github.com/dmlc/xgboost/issues/6085
if (BUILD_STATIC_LIB)
set(INSTALL_TARGETS xgboost runxgboost objxgboost dmlc)
else (BUILD_STATIC_LIB)
set(INSTALL_TARGETS xgboost runxgboost)
endif (BUILD_STATIC_LIB)
if(BUILD_STATIC_LIB)
if(BUILD_DEPRECATED_CLI)
set(INSTALL_TARGETS xgboost runxgboost objxgboost dmlc)
else()
set(INSTALL_TARGETS xgboost objxgboost dmlc)
endif()
else()
if(BUILD_DEPRECATED_CLI)
set(INSTALL_TARGETS xgboost runxgboost)
else()
set(INSTALL_TARGETS xgboost)
endif()
endif()
install(TARGETS ${INSTALL_TARGETS}
EXPORT XGBoostTargets
@@ -347,7 +472,7 @@ install(
DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/xgboost)
#-- Test
if (GOOGLE_TEST)
if(GOOGLE_TEST)
enable_testing()
# Unittests.
add_executable(testxgboost)
@@ -367,25 +492,22 @@ if (GOOGLE_TEST)
${xgboost_SOURCE_DIR}/tests/cli/machine.conf.in
${xgboost_BINARY_DIR}/tests/cli/machine.conf
@ONLY)
add_test(
NAME TestXGBoostCLI
COMMAND runxgboost ${xgboost_BINARY_DIR}/tests/cli/machine.conf
WORKING_DIRECTORY ${xgboost_BINARY_DIR})
set_tests_properties(TestXGBoostCLI
PROPERTIES
PASS_REGULAR_EXPRESSION ".*test-rmse:0.087.*")
endif (GOOGLE_TEST)
# For MSVC: Call msvc_use_static_runtime() once again to completely
# replace /MD with /MT. See https://github.com/dmlc/xgboost/issues/4462
# for issues caused by mixing of /MD and /MT flags
msvc_use_static_runtime()
if(BUILD_DEPRECATED_CLI)
add_test(
NAME TestXGBoostCLI
COMMAND runxgboost ${xgboost_BINARY_DIR}/tests/cli/machine.conf
WORKING_DIRECTORY ${xgboost_BINARY_DIR})
set_tests_properties(TestXGBoostCLI
PROPERTIES
PASS_REGULAR_EXPRESSION ".*test-rmse:0.087.*")
endif()
endif()
# Add xgboost.pc
if (ADD_PKGCONFIG)
if(ADD_PKGCONFIG)
configure_file(${xgboost_SOURCE_DIR}/cmake/xgboost.pc.in ${xgboost_BINARY_DIR}/xgboost.pc @ONLY)
install(
FILES ${xgboost_BINARY_DIR}/xgboost.pc
DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig)
endif (ADD_PKGCONFIG)
endif()


@@ -10,8 +10,8 @@ The Project Management Committee(PMC) consists group of active committers that m
- Tianqi is a Ph.D. student working on large-scale machine learning. He is the creator of the project.
* [Michael Benesty](https://github.com/pommedeterresautee)
- Michael is a lawyer and data scientist in France. He is the creator of XGBoost interactive analysis module in R.
* [Yuan Tang](https://github.com/terrytangyuan), Akuity
- Yuan is a founding engineer at Akuity. He contributed mostly in R and Python packages.
* [Yuan Tang](https://github.com/terrytangyuan), Red Hat
- Yuan is a principal software engineer at Red Hat. He contributed mostly in R and Python packages.
* [Nan Zhu](https://github.com/CodingCat), Uber
- Nan is a software engineer at Uber. He contributed mostly in JVM packages.
* [Jiaming Yuan](https://github.com/trivialfis)

NEWS.md

@@ -1,8 +1,228 @@
XGBoost Change Log
==================
**Starting from 2.1.0, release notes are recorded in the documentation.**
This file records the changes in the xgboost library in reverse chronological order.
## 2.0.0 (2023 Aug 16)
We are excited to announce the release of XGBoost 2.0. This note will begin by covering some overall changes and then highlight specific updates to the package.
### Initial work on multi-target trees with vector-leaf outputs
We have been working on vector-leaf tree models for multi-target regression, multi-label classification, and multi-class classification in version 2.0. Previously, XGBoost would build a separate model for each target. However, with this new feature that's still being developed, XGBoost can build one tree for all targets. The feature has multiple benefits and trade-offs compared to the existing approach. It can help prevent overfitting, produce smaller models, and build trees that consider the correlation between targets. In addition, users can combine vector-leaf and scalar-leaf trees during a training session using a callback. Please note that the feature is still a work in progress, and many parts are not yet available. See #9043 for the current status. Related PRs: (#8538, #8697, #8902, #8884, #8895, #8898, #8612, #8652, #8698, #8908, #8928, #8968, #8616, #8922, #8890, #8872, #8889, #9509) Please note that only the `hist` (default) tree method on CPU can be used for building vector-leaf trees at the moment; a minimal sketch follows.
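As a quick illustration, here is a minimal sketch of training a vector-leaf model from the Python package; the synthetic data and round count are placeholders, and `multi_strategy="multi_output_tree"` selects the one-tree-for-all-targets behavior:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((256, 10))
Y = rng.random((256, 3))                     # three regression targets
dtrain = xgb.DMatrix(X, label=Y)

params = {
    "tree_method": "hist",                   # vector leaves need the CPU hist method
    "multi_strategy": "multi_output_tree",   # one tree for all targets
    "objective": "reg:squarederror",
}
booster = xgb.train(params, dtrain, num_boost_round=8)
preds = booster.predict(dtrain)              # shape: (256, 3)
```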
### New `device` parameter
A new `device` parameter replaces the existing `gpu_id`, `gpu_hist`, `gpu_predictor`, `cpu_predictor`, `gpu_coord_descent`, and the PySpark-specific parameter `use_gpu`. From now on, users need only the `device` parameter to select the device to run on, along with the ordinal of that device. For more information, please see our documentation page (https://xgboost.readthedocs.io/en/stable/parameter.html#general-parameters). For example, with `device="cuda", tree_method="hist"`, XGBoost will run the `hist` tree method on GPU. (#9363, #8528, #8604, #9354, #9274, #9243, #8896, #9129, #9362, #9402, #9385, #9398, #9390, #9386, #9412, #9507, #9536). The old behavior of ``gpu_hist`` is preserved but deprecated. In addition, the `predictor` parameter is removed.
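For instance, a minimal sketch from the Python package, assuming a CUDA-enabled build (the data here is synthetic):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((256, 8)), label=rng.random(256))

# The single `device` parameter replaces gpu_id/gpu_hist/gpu_predictor/use_gpu;
# "cuda:0" selects the first GPU, "cpu" keeps everything on the host.
params = {"device": "cuda:0", "tree_method": "hist", "objective": "reg:squarederror"}
booster = xgb.train(params, dtrain, num_boost_round=10)
```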
### `hist` is now the default tree method
Starting from 2.0, the `hist` tree method is the default. In previous versions, XGBoost chose `approx` or `exact` depending on the input data and training environment. The new default can help XGBoost train models more efficiently and consistently. (#9320, #9353)
### GPU-based approx tree method
There's initial support for using the `approx` tree method on GPU. It is not yet well optimized for performance but is feature complete except for the JVM packages. It can be accessed through the parameter combination `device="cuda", tree_method="approx"`. (#9414, #9399, #9478). Please note that the Scala-based Spark interface is not yet supported.
### Optimize and bound the size of the histogram on CPU, to control memory footprint
XGBoost has a new parameter `max_cached_hist_node` for users to limit the CPU cache size for histograms. It can help prevent XGBoost from caching histograms too aggressively. Without the cache, performance is likely to decrease. However, the size of the cache grows exponentially with the depth of the tree. The limit can be crucial when growing deep trees. In most cases, users need not configure this parameter as it does not affect the model's accuracy. (#9455, #9441, #9440, #9427, #9400).
Along with the cache limit, XGBoost also reduces the memory usage of the `hist` and `approx` tree method on distributed systems by cutting the size of the cache by half. (#9433)
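A hedged sketch of capping the cache from the Python package; the cap value below is an arbitrary placeholder, not tuning advice:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((1024, 16)), label=rng.random(1024))

params = {
    "tree_method": "hist",
    "max_depth": 12,               # deep trees are where the cache grows fastest
    "max_cached_hist_node": 1024,  # upper bound on cached histogram nodes
}
booster = xgb.train(params, dtrain, num_boost_round=10)
```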
### Improved external memory support
There is some exciting development around external memory support in XGBoost. It's still an experimental feature, but the performance has been significantly improved with the default `hist` tree method. We replaced the old file IO logic with memory-mapped IO. In addition to performance, we have reduced CPU memory usage and added extensive documentation. Beginning with 2.0.0, we encourage users to try it with the `hist` tree method when the memory savings from `QuantileDMatrix` are not sufficient. (#9361, #9317, #9282, #9315, #8457)
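The feature is driven by the iterator-based data interface; the following is a minimal sketch of that pattern, with synthetic chunks standing in for reads from disk:

```python
import os
import tempfile
import numpy as np
import xgboost as xgb

class ChunkIter(xgb.DataIter):
    """Yield data chunk by chunk so XGBoost can build its on-disk cache."""

    def __init__(self, n_chunks: int):
        self._n_chunks = n_chunks
        self._it = 0
        self._rng = np.random.default_rng(0)
        # cache_prefix controls where the memory-mapped cache files are placed
        super().__init__(cache_prefix=os.path.join(tempfile.gettempdir(), "xgb-cache"))

    def next(self, input_data) -> int:
        if self._it == self._n_chunks:
            return 0                    # 0 signals the end of iteration
        X = self._rng.random((512, 8))  # stand-in for reading a real chunk
        y = self._rng.random(512)
        input_data(data=X, label=y)
        self._it += 1
        return 1                        # 1 means more chunks are coming

    def reset(self) -> None:
        self._it = 0

dtrain = xgb.DMatrix(ChunkIter(n_chunks=4))  # external-memory DMatrix
booster = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=5)
```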
### Learning to rank
We created a brand-new implementation for the learning-to-rank task. With the latest version, XGBoost gained a set of new features for the ranking task, including:
- A new parameter `lambdarank_pair_method` for choosing the pair construction strategy.
- A new parameter `lambdarank_num_pair_per_sample` for controlling the number of pairs constructed for each sample.
- An experimental implementation of unbiased learning-to-rank, which can be accessed using the `lambdarank_unbiased` parameter.
- Support for custom gain function with `NDCG` using the `ndcg_exp_gain` parameter.
- Deterministic GPU computation for all objectives and metrics.
- `NDCG` is now the default objective function.
- Improved performance of metrics using caches.
- Support scikit-learn utilities for `XGBRanker`.
- Extensive documentation on how learning-to-rank works with XGBoost.
For more information, please see the [tutorial](https://xgboost.readthedocs.io/en/latest/tutorials/learning_to_rank.html). Related PRs: (#8771, #8692, #8783, #8789, #8790, #8859, #8887, #8893, #8906, #8931, #9075, #9015, #9381, #9336, #8822, #9222, #8984, #8785, #8786, #8768)
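To make the new knobs concrete, here is a hedged sketch using the scikit-learn `XGBRanker` interface, with synthetic relevance labels and query ids:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = rng.integers(0, 5, size=200)              # graded relevance labels
qid = np.sort(rng.integers(0, 20, size=200))  # query ids, kept sorted

ranker = xgb.XGBRanker(
    objective="rank:ndcg",             # NDCG is now the default objective
    lambdarank_pair_method="topk",     # pair construction strategy
    lambdarank_num_pair_per_sample=8,  # pairs constructed per sample
    ndcg_exp_gain=True,                # exponential gain in NDCG
)
ranker.fit(X, y, qid=qid)
scores = ranker.predict(X)
```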
### Automatically estimated intercept
In the previous version, `base_score` was a constant that could be set as a training parameter. In the new version, XGBoost can automatically estimate this parameter based on input labels for optimal accuracy. (#8539, #8498, #8272, #8793, #8607)
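A sketch of inspecting the estimated intercept after training; note that the JSON path into the saved configuration is an internal detail that may change between releases:

```python
import json
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((256, 4)), label=rng.random(256) + 3.0)

booster = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=1)

# base_score is now estimated from the labels instead of defaulting to a constant
config = json.loads(booster.save_config())
print(config["learner"]["learner_model_param"]["base_score"])
```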
### Quantile regression
The XGBoost algorithm now supports quantile regression, which involves minimizing the quantile loss (also called "pinball loss"). Furthermore, XGBoost allows for training with multiple target quantiles simultaneously with one tree per quantile. (#8775, #8761, #8760, #8758, #8750)
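A minimal sketch of the new objective; the three quantiles below are placeholders:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((512, 4))
y = rng.random(512)
dtrain = xgb.QuantileDMatrix(X, label=y)

params = {
    "objective": "reg:quantileerror",
    "quantile_alpha": np.array([0.1, 0.5, 0.9]),  # one tree per quantile per round
    "tree_method": "hist",
}
booster = xgb.train(params, dtrain, num_boost_round=10)
preds = booster.inplace_predict(X)  # shape: (512, 3), one column per quantile
```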
### L1 and quantile regression now support learning rate
Both objectives use adaptive trees due to the lack of proper Hessian values. In the new version, XGBoost can scale the leaf value with the learning rate accordingly. (#8866)
### Export cut value
Using the Python package or the C interface, users can export the quantile values (not to be confused with quantile regression) used for the `hist` tree method; see the sketch below. (#9356)
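A sketch of doing this from Python; treat the exact accessor name (`get_quantile_cut`) as an assumption to verify against the release you use:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((512, 4))
y = rng.random(512)

# QuantileDMatrix computes the cuts at construction time
dtrain = xgb.QuantileDMatrix(X, label=y)

# indptr partitions `values` by feature: the cuts for feature f are
# values[indptr[f]:indptr[f + 1]]
indptr, values = dtrain.get_quantile_cut()
print(indptr.shape, values.shape)
```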
### Column-based split and federated learning
We made progress on column-based split for federated learning. In 2.0, the `approx`, `hist`, and vector-leaf `hist` tree methods can all work with a column-based data split, along with support for vertical federated learning. Work on GPU support is still ongoing, stay tuned. (#8576, #8468, #8442, #8847, #8811, #8985, #8623, #8568, #8828, #8932, #9081, #9102, #9103, #9124, #9120, #9367, #9370, #9343, #9171, #9346, #9270, #9244, #8494, #8434, #8742, #8804, #8710, #8676, #9020, #9002, #9058, #9037, #9018, #9295, #9006, #9300, #8765, #9365, #9060)
### PySpark
After the initial introduction of the PySpark interface, it has gained some new features and optimizations in 2.0.
- GPU-based prediction. (#9292, #9542)
- Optimization for data initialization by avoiding the stack operation. (#9088)
- Support for predicting feature contributions. (#8633)
- Python typing support. (#9156, #9172, #9079, #8375)
- `use_gpu` is deprecated. The `device` parameter is preferred (see the sketch after this list).
- Update eval_metric validation to support list of strings (#8826)
- Improved logs for training (#9449)
- Maintenance, including refactoring and document updates (#8324, #8465, #8605, #9202, #9460, #9302, #8385, #8630, #8525, #8496)
- Fix for GPU setup. (#9495)
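A hedged sketch of the preferred spelling in PySpark, assuming a GPU-enabled cluster (the toy DataFrame below is a placeholder):

```python
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from xgboost.spark import SparkXGBClassifier

spark = SparkSession.builder.getOrCreate()
train_df = spark.createDataFrame(
    [(Vectors.dense(1.0, 2.0), 0), (Vectors.dense(2.0, 1.0), 1)] * 16,
    ["features", "label"],
)

clf = SparkXGBClassifier(
    features_col="features",
    label_col="label",
    device="cuda",  # preferred over the deprecated use_gpu flag
)
model = clf.fit(train_df)
preds = model.transform(train_df)
```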
### Other General New Features
Here's a list of new features that don't have their own section and yet are general to all language bindings.
- Use array interface for CSC matrix. This helps XGBoost to use a consistent number of threads and align the interface of the CSC matrix with other interfaces. In addition, memory usage is likely to decrease with CSC input thanks to on-the-fly type conversion. (#8672)
- CUDA compute 90 is now part of the default build. (#9397)
### Other General Optimization
These optimizations are general to all language bindings. For language-specific optimization, please visit the corresponding sections.
- Performance for input with `array_interface` on CPU (like `numpy`) is significantly improved. (#9090)
- Some optimization with CUDA for data initialization. (#9199, #9209, #9144)
- Use the latest thrust policy to prevent synchronizing GPU devices. (#9212)
- XGBoost now uses a per-thread CUDA stream, which prevents synchronization with other streams. (#9416, #9396, #9413)
### Notable breaking change
Other than the aforementioned change with the `device` parameter, here's a list of breaking changes affecting all packages.
- Users must specify the format for text input (#9077). However, we suggest using third-party data structures such as `numpy.ndarray` instead of relying on text inputs. See https://github.com/dmlc/xgboost/issues/9472 for more info.
### Notable bug fixes
Some noteworthy bug fixes that are not related to specific language bindings are listed in this section.
- Some language environments use a different thread to perform garbage collection, which breaks the thread-local cache used in XGBoost. XGBoost 2.0 implements a new thread-safe cache using a lightweight lock to replace the thread-local cache. (#8851)
- Fix model IO by clearing the prediction cache. (#8904)
- `inf` is checked during data construction. (#8911)
- Preserve order of saved updaters configuration. Usually, this is not an issue unless the `updater` parameter is used instead of the `tree_method` parameter (#9355)
- Fix GPU memory allocation issue with categorical splits. (#9529)
- Handle escape sequence like `\t\n` in feature names for JSON model dump. (#9474)
- Normalize file path for model IO and text input. This handles short paths on Windows and paths that contain `~` on Unix (#9463). In addition, all path inputs are required to be encoded in UTF-8 (#9448, #9443)
- Fix integer overflow on H100. (#9380)
- Fix weighted sketching on GPU with categorical features. (#9341)
- Fix metric serialization. The bug might cause some of the metrics to be dropped during evaluation. (#9405)
- Fixes compilation errors on MSVC x86 targets (#8823)
- Pick up the dmlc-core fix for the CSV parser. (#8897)
### Documentation
Aside from documents for new features, we have many smaller updates to improve user experience, from troubleshooting guides to typo fixes.
- Explain CPU/GPU interop. (#8450)
- Guide to troubleshoot NCCL errors. (#8943, #9206)
- Add a note for rabit port selection. (#8879)
- How to build the docs using conda (#9276)
- Explain how to obtain reproducible results on distributed systems. (#8903)
* Fixes and small updates to document and demonstration scripts. (#8626, #8436, #8995, #8907, #8923, #8926, #9358, #9232, #9201, #9469, #9462, #9458, #8543, #8597, #8401, #8784, #9213, #9098, #9008, #9223, #9333, #9434, #9435, #9415, #8773, #8752, #9291, #9549)
### Python package
* New Features and Improvements
- Support primitive types of pyarrow-backed pandas dataframe. (#8653)
- Warning messages emitted by XGBoost are now emitted using Python warnings. (#9387)
- User can now format the value printed near the bars on the `plot_importance` plot (#8540)
- XGBoost has improved half-type support (float16) with pandas, cupy, and cuDF. With GPU input, the handling is through CUDA `__half` type, and no data copy is made. (#8487, #9207, #8481)
- Support `Series` and Python primitive types in `inplace_predict` and `QuantileDMatrix` (#8547, #8542); a sketch follows at the end of this section.
- Support all pandas' nullable integer types. (#8480)
- Custom metric with the scikit-learn interface now supports `sample_weight`. (#8706)
- Enable Installation of Python Package with System lib in a Virtual Environment (#9349)
- Raise if expected workers are not alive in `xgboost.dask.train` (#9421)
* Optimization
- Cache transformed data in `QuantileDMatrix` for efficiency. (#8666, #9445)
- Take datatable as row-major input. (#8472)
- Remove unnecessary conversions between data structures (#8546)
* Adopt modern Python packaging conventions (PEP 517, PEP 518, PEP 621)
- XGBoost adopted the modern Python packaging conventions. The old setup script `setup.py` is now replaced with the new configuration file `pyproject.toml`. Along with this, XGBoost now supports Python 3.11. (#9021, #9112, #9114, #9115) Consult the latest documentation for the updated instructions to build and install XGBoost.
* Fixes
- `DataIter` now accepts only keyword arguments. (#9431)
- Fix empty DMatrix with categorical features. (#8739)
- Convert ``DaskXGBClassifier.classes_`` to an array (#8452)
- Define `best_iteration` only if early stopping is used to be consistent with documented behavior. (#9403)
- Make feature validation immutable. (#9388)
* Breaking changes
- Discussed in the new `device` parameter section, the `predictor` parameter is now removed. (#9129)
- Remove support for single-string feature info. Feature type and names should be a sequence of strings (#9401)
- Remove parameters in the `save_model` call for the scikit-learn interface. (#8963)
- Remove the `ntree_limit` in the python package. This has been deprecated in previous versions. (#8345)
* Maintenance including formatting and refactoring along with type hints.
- More consistent use of `black` and `isort` for code formatting (#8420, #8748, #8867)
- Improved type support. Most of the type changes happen in the PySpark module; here, we list the remaining changes. (#8444, #8617, #9197, #9005)
- Set `enable_categorical` to True in predict. (#8592)
- Some refactoring and updates for tests (#8395, #8372, #8557, #8379, #8702, #9459, #9316, #8446, #8695, #8409, #8993, #9480)
* Documentation
- Add introduction and notes for the sklearn interface. (#8948)
- Demo for using dask for hyper-parameter optimization. (#8891)
- Document all supported Python input types. (#8643)
- Other documentation updates (#8944, #9304)
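Illustrating two of the items above (pandas input consumed directly by `QuantileDMatrix`, and `inplace_predict` on a DataFrame) with a minimal sketch:

```python
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0], "b": [4.0, 3.0, 2.0, 1.0]})
y = pd.Series([0.1, 0.4, 0.6, 0.9])

# QuantileDMatrix consumes the frame directly and caches the transformed data
dtrain = xgb.QuantileDMatrix(df, label=y)
booster = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=4)

# inplace_predict skips the DMatrix construction entirely
print(booster.inplace_predict(df))
```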
### R package
- Use the new data consumption interface for CSR and CSC. This provides better control for the number of threads and improves performance. (#8455, #8673)
- Accept multiple evaluation metrics during training. (#8657)
- Fix integer inputs with `NA`. (#9522)
- Some refactoring for the R package (#8545, #8430, #8614, #8624, #8613, #9457, #8689, #8563, #9461, #8647, #8564, #8565, #8736, #8610, #8609, #8599, #8704, #9456, #9450, #9476, #9477, #9481). Special thanks to @jameslamb.
- Document updates (#8886, #9323, #9437, #8998)
### JVM packages
Following are changes specific to various JVM-based packages.
- Stop using Rabit in prediction (#9054)
- Set feature_names and feature_types in jvm-packages. This is to prepare support for categorical features (#9364)
- Scala 2.13 support. (#9099)
- Change training stage from `ResultStage` to `ShuffleMapStage` (#9423)
- Automatically set the max/min direction for the best score during early stopping. (#9404)
* Revised support for `flink` (#9046)
* Breaking changes
- Scala-based tracker is removed. (#9078, #9045)
- Change `DeviceQuantileDmatrix` into `QuantileDMatrix` (#8461)
* Maintenance (#9253, #9166, #9395, #9389, #9224, #9233, #9351, #9479)
* CI bot PRs
We employed GitHub's Dependabot to help us keep the dependencies up-to-date for JVM packages. With the help from the bot, we have cleared up all the dependencies that were lagging behind (#8501, #8507).
Here's a list of dependency update PRs including those made by Dependabot (#8456, #8560, #8571, #8561, #8562, #8600, #8594, #8524, #8509, #8548, #8549, #8533, #8521, #8534, #8532, #8516, #8503, #8531, #8530, #8518, #8512, #8515, #8517, #8506, #8504, #8502, #8629, #8815, #8813, #8814, #8877, #8876, #8875, #8874, #8873, #9049, #9070, #9073, #9039, #9083, #8917, #8952, #8980, #8973, #8962, #9252, #9208, #9131, #9136, #9219, #9160, #9158, #9163, #9184, #9192, #9265, #9268, #8882, #8837, #8662, #8661, #8390, #9056, #8508, #8925, #8920, #9149, #9230, #9097, #8648, #9203, #8593).
### Maintenance
Maintenance work includes refactoring and fixing small issues that don't affect end users. (#9256, #8627, #8756, #8735, #8966, #8864, #8747, #8892, #9057, #8921, #8949, #8941, #8942, #9108, #9125, #9155, #9153, #9176, #9447, #9444, #9436, #9438, #9430, #9200, #9210, #9055, #9014, #9004, #8999, #9154, #9148, #9283, #9246, #8888, #8900, #8871, #8861, #8858, #8791, #8807, #8751, #8703, #8696, #8693, #8677, #8686, #8665, #8660, #8386, #8371, #8410, #8578, #8574, #8483, #8443, #8454, #8733)
### CI
- Build pip wheel with RMM support (#9383)
- Other CI updates including updating dependencies and work on the CI infrastructure. (#9464, #9428, #8767, #9394, #9278, #9214, #9234, #9205, #9034, #9104, #8878, #9294, #8625, #8806, #8741, #8707, #8381, #8382, #8388, #8402, #8397, #8445, #8602, #8628, #8583, #8460, #9544)
## 1.7.6 (2023 Jun 16)
This is a patch release for bug fixes. The CRAN package for the R binding is kept at 1.7.5.
### Bug Fixes
* Fix distributed training with mixed dense and sparse partitions. (#9272)
* Fix monotone constraints on CPU with large trees. (#9122)
* [spark] Make the spark model have the same UID as its estimator (#9022)
* Optimize prediction with `QuantileDMatrix`. (#9096)
### Document
* Improve doxygen (#8959)
* Update the cuDF pip index URL. (#9106)
### Maintenance
* Fix tests with pandas 2.0. (#9014)
## 1.7.5 (2023 Mar 30)
This is a patch release for bug fixes.
@@ -1883,7 +2103,7 @@ This release marks a major milestone for the XGBoost project.
## v0.90 (2019.05.18)
### XGBoost Python package drops Python 2.x (#4379, #4381)
Python 2.x is reaching its end-of-life at the end of this year. [Many scientific Python packages are now moving to drop Python 2.x](https://python3statement.org/).
Python 2.x is reaching its end-of-life at the end of this year. [Many scientific Python packages are now moving to drop Python 2.x](https://python3statement.github.io/).
### XGBoost4J-Spark now requires Spark 2.4.x (#4377)
* Spark 2.3 is reaching its end-of-life soon. See discussion at #4389.


@@ -4,3 +4,5 @@
^.*\.Rproj$
^\.Rproj\.user$
README.md
^doc$
^Meta$


@@ -1,41 +1,60 @@
find_package(LibR REQUIRED)
message(STATUS "LIBR_CORE_LIBRARY " ${LIBR_CORE_LIBRARY})
file(GLOB_RECURSE R_SOURCES
file(
GLOB_RECURSE R_SOURCES
${CMAKE_CURRENT_LIST_DIR}/src/*.cc
${CMAKE_CURRENT_LIST_DIR}/src/*.c)
${CMAKE_CURRENT_LIST_DIR}/src/*.c
)
# Use object library to expose symbols
add_library(xgboost-r OBJECT ${R_SOURCES})
if (ENABLE_ALL_WARNINGS)
if(ENABLE_ALL_WARNINGS)
target_compile_options(xgboost-r PRIVATE -Wall -Wextra)
endif (ENABLE_ALL_WARNINGS)
target_compile_definitions(xgboost-r
PUBLIC
endif()
if(MSVC)
# https://github.com/microsoft/LightGBM/pull/6061
# MSVC doesn't work with anonymous types in structs. (R complex)
#
# syntax error: missing ';' before identifier 'private_data_c'
#
target_compile_definitions(xgboost-r PRIVATE -DR_LEGACY_RCOMPLEX)
endif()
target_compile_definitions(
xgboost-r PUBLIC
-DXGBOOST_STRICT_R_MODE=1
-DXGBOOST_CUSTOMIZE_GLOBAL_PRNG=1
-DDMLC_LOG_BEFORE_THROW=0
-DDMLC_DISABLE_STDIN=1
-DDMLC_LOG_CUSTOMIZE=1
-DRABIT_STRICT_CXX98_)
target_include_directories(xgboost-r
PRIVATE
)
target_include_directories(
xgboost-r PRIVATE
${LIBR_INCLUDE_DIRS}
${PROJECT_SOURCE_DIR}/include
${PROJECT_SOURCE_DIR}/dmlc-core/include
${PROJECT_SOURCE_DIR}/rabit/include)
)
target_link_libraries(xgboost-r PUBLIC ${LIBR_CORE_LIBRARY})
if (USE_OPENMP)
if(USE_OPENMP)
find_package(OpenMP REQUIRED)
target_link_libraries(xgboost-r PUBLIC OpenMP::OpenMP_CXX OpenMP::OpenMP_C)
endif (USE_OPENMP)
endif()
set_target_properties(
xgboost-r PROPERTIES
CXX_STANDARD 17
CXX_STANDARD_REQUIRED ON
POSITION_INDEPENDENT_CODE ON)
POSITION_INDEPENDENT_CODE ON
)
# Get compilation and link flags of xgboost-r and propagate to objxgboost
target_link_libraries(objxgboost PUBLIC xgboost-r)
# Add all objects of xgboost-r to objxgboost
target_sources(objxgboost INTERFACE $<TARGET_OBJECTS:xgboost-r>)


@@ -1,8 +1,8 @@
Package: xgboost
Type: Package
Title: Extreme Gradient Boosting
Version: 2.0.0.1
Date: 2022-10-18
Version: 2.2.0.0
Date: 2024-06-03
Authors@R: c(
person("Tianqi", "Chen", role = c("aut"),
email = "tianqi.tchen@gmail.com"),
@@ -56,14 +56,17 @@ Suggests:
testthat,
igraph (>= 1.0.1),
float,
titanic
titanic,
RhpcBLASctl,
survival
Depends:
R (>= 3.3.0)
R (>= 4.3.0)
Imports:
Matrix (>= 1.1-0),
methods,
data.table (>= 1.9.6),
jsonlite (>= 1.0),
RoxygenNote: 7.2.3
jsonlite (>= 1.0)
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.3.2
Encoding: UTF-8
SystemRequirements: GNU make, C++17


@@ -1,46 +1,62 @@
# Generated by roxygen2: do not edit by hand
S3method("[",xgb.Booster)
S3method("[",xgb.DMatrix)
S3method("dimnames<-",xgb.DMatrix)
S3method(coef,xgb.Booster)
S3method(dim,xgb.DMatrix)
S3method(dimnames,xgb.DMatrix)
S3method(getinfo,xgb.Booster)
S3method(getinfo,xgb.DMatrix)
S3method(length,xgb.Booster)
S3method(predict,xgb.Booster)
S3method(predict,xgb.Booster.handle)
S3method(print,xgb.Booster)
S3method(print,xgb.DMatrix)
S3method(print,xgb.cv.synchronous)
S3method(print,xgboost)
S3method(setinfo,xgb.Booster)
S3method(setinfo,xgb.DMatrix)
S3method(slice,xgb.DMatrix)
S3method(variable.names,xgb.Booster)
export("xgb.attr<-")
export("xgb.attributes<-")
export("xgb.config<-")
export("xgb.parameters<-")
export(cb.cv.predict)
export(cb.early.stop)
export(cb.evaluation.log)
export(cb.gblinear.history)
export(cb.print.evaluation)
export(cb.reset.parameters)
export(cb.save.model)
export(getinfo)
export(setinfo)
export(slice)
export(xgb.Booster.complete)
export(xgb.Callback)
export(xgb.DMatrix)
export(xgb.DMatrix.hasinfo)
export(xgb.DMatrix.save)
export(xgb.DataBatch)
export(xgb.DataIter)
export(xgb.ExtMemDMatrix)
export(xgb.QuantileDMatrix)
export(xgb.QuantileDMatrix.from_iterator)
export(xgb.attr)
export(xgb.attributes)
export(xgb.cb.cv.predict)
export(xgb.cb.early.stop)
export(xgb.cb.evaluation.log)
export(xgb.cb.gblinear.history)
export(xgb.cb.print.evaluation)
export(xgb.cb.reset.parameters)
export(xgb.cb.save.model)
export(xgb.config)
export(xgb.copy.Booster)
export(xgb.create.features)
export(xgb.cv)
export(xgb.dump)
export(xgb.gblinear.history)
export(xgb.get.DMatrix.data)
export(xgb.get.DMatrix.num.non.missing)
export(xgb.get.DMatrix.qcut)
export(xgb.get.config)
export(xgb.get.num.boosted.rounds)
export(xgb.ggplot.deepness)
export(xgb.ggplot.importance)
export(xgb.ggplot.shap.summary)
export(xgb.importance)
export(xgb.is.same.Booster)
export(xgb.load)
export(xgb.load.raw)
export(xgb.model.dt.tree)
@@ -52,19 +68,16 @@ export(xgb.plot.shap.summary)
export(xgb.plot.tree)
export(xgb.save)
export(xgb.save.raw)
export(xgb.serialize)
export(xgb.set.config)
export(xgb.slice.Booster)
export(xgb.slice.DMatrix)
export(xgb.train)
export(xgb.unserialize)
export(xgboost)
import(methods)
importClassesFrom(Matrix,CsparseMatrix)
importClassesFrom(Matrix,dgCMatrix)
importClassesFrom(Matrix,dgeMatrix)
importFrom(Matrix,colSums)
importClassesFrom(Matrix,dgRMatrix)
importFrom(Matrix,sparse.model.matrix)
importFrom(Matrix,sparseMatrix)
importFrom(Matrix,sparseVector)
importFrom(Matrix,t)
importFrom(data.table,":=")
importFrom(data.table,as.data.table)
importFrom(data.table,data.table)
@@ -82,8 +95,12 @@ importFrom(graphics,points)
importFrom(graphics,title)
importFrom(jsonlite,fromJSON)
importFrom(jsonlite,toJSON)
importFrom(methods,new)
importFrom(stats,coef)
importFrom(stats,median)
importFrom(stats,predict)
importFrom(stats,sd)
importFrom(stats,variable.names)
importFrom(utils,head)
importFrom(utils,object.size)
importFrom(utils,str)

File diff suppressed because it is too large

@@ -26,6 +26,44 @@ NVL <- function(x, val) {
'multi:softprob', 'rank:pairwise', 'rank:ndcg', 'rank:map'))
}
.RANKING_OBJECTIVES <- function() {
return(c('rank:pairwise', 'rank:ndcg', 'rank:map'))
}
.OBJECTIVES_NON_DEFAULT_MODE <- function() {
return(c("reg:logistic", "binary:logitraw", "multi:softmax"))
}
.BINARY_CLASSIF_OBJECTIVES <- function() {
return(c("binary:logistic", "binary:hinge"))
}
.MULTICLASS_CLASSIF_OBJECTIVES <- function() {
return("multi:softprob")
}
.SURVIVAL_RIGHT_CENSORING_OBJECTIVES <- function() { # nolint
return(c("survival:cox", "survival:aft"))
}
.SURVIVAL_ALL_CENSORING_OBJECTIVES <- function() { # nolint
return("survival:aft")
}
.REGRESSION_OBJECTIVES <- function() {
return(c(
"reg:squarederror", "reg:squaredlogerror", "reg:logistic", "reg:pseudohubererror",
"reg:absoluteerror", "reg:quantileerror", "count:poisson", "reg:gamma", "reg:tweedie"
))
}
.MULTI_TARGET_OBJECTIVES <- function() {
return(c(
"reg:squarederror", "reg:squaredlogerror", "reg:logistic", "reg:pseudohubererror",
"reg:quantileerror", "reg:gamma"
))
}
#
# Low-level functions for boosting --------------------------------------------
@@ -66,7 +104,7 @@ check.booster.params <- function(params, ...) {
# for multiclass, expect num_class to be set
if (typeof(params[['objective']]) == "character" &&
substr(NVL(params[['objective']], 'x'), 1, 6) == 'multi:' &&
startsWith(NVL(params[['objective']], 'x'), 'multi:') &&
as.numeric(NVL(params[['num_class']], 0)) < 2) {
stop("'num_class' > 1 parameter must be set for multiclass classification")
}
@@ -93,6 +131,14 @@ check.booster.params <- function(params, ...) {
interaction_constraints <- sapply(params[['interaction_constraints']], function(x) paste0('[', paste(x, collapse = ','), ']'))
params[['interaction_constraints']] <- paste0('[', paste(interaction_constraints, collapse = ','), ']')
}
# when multiple evaluation metrics are given, expand them into one `eval_metric` entry per metric
if (NROW(params[['eval_metric']]) > 1) {
eval_metrics <- as.list(params[["eval_metric"]])
names(eval_metrics) <- rep("eval_metric", length(eval_metrics))
params_without_ev_metrics <- within(params, rm("eval_metric"))
params <- c(params_without_ev_metrics, eval_metrics)
}
return(params)
}
@@ -134,27 +180,48 @@ check.custom.eval <- function(env = parent.frame()) {
if (!is.null(env$feval) &&
is.null(env$maximize) && (
!is.null(env$early_stopping_rounds) ||
has.callbacks(env$callbacks, 'cb.early.stop')))
has.callbacks(env$callbacks, "early_stop")))
stop("Please set 'maximize' to indicate whether the evaluation metric needs to be maximized or not")
}
# Update a booster handle for an iteration with dtrain data
xgb.iter.update <- function(booster_handle, dtrain, iter, obj = NULL) {
if (!identical(class(booster_handle), "xgb.Booster.handle")) {
stop("booster_handle must be of xgb.Booster.handle class")
}
xgb.iter.update <- function(bst, dtrain, iter, obj) {
if (!inherits(dtrain, "xgb.DMatrix")) {
stop("dtrain must be of xgb.DMatrix class")
}
handle <- xgb.get.handle(bst)
if (is.null(obj)) {
.Call(XGBoosterUpdateOneIter_R, booster_handle, as.integer(iter), dtrain)
.Call(XGBoosterUpdateOneIter_R, handle, as.integer(iter), dtrain)
} else {
pred <- predict(booster_handle, dtrain, outputmargin = TRUE, training = TRUE,
ntreelimit = 0)
pred <- predict(
bst,
dtrain,
outputmargin = TRUE,
training = TRUE
)
gpair <- obj(pred, dtrain)
.Call(XGBoosterBoostOneIter_R, booster_handle, dtrain, gpair$grad, gpair$hess)
n_samples <- dim(dtrain)[1]
grad <- gpair$grad
hess <- gpair$hess
if ((is.matrix(grad) && dim(grad)[1] != n_samples) ||
(is.vector(grad) && length(grad) != n_samples) ||
(is.vector(grad) != is.vector(hess))) {
warning(paste(
"Since 2.1.0, the shape of the gradient and hessian is required to be ",
"(n_samples, n_targets) or (n_samples, n_classes). Will reshape assuming ",
"column-major order.",
sep = ""
))
grad <- matrix(grad, nrow = n_samples)
hess <- matrix(hess, nrow = n_samples)
}
.Call(
XGBoosterTrainOneIter_R, handle, dtrain, iter, grad, hess
)
}
return(TRUE)
}
@@ -163,23 +230,22 @@ xgb.iter.update <- function(booster_handle, dtrain, iter, obj = NULL) {
# Evaluate one iteration.
# Returns a named vector of evaluation metrics
# with the names in a 'datasetname-metricname' format.
xgb.iter.eval <- function(booster_handle, watchlist, iter, feval = NULL) {
if (!identical(class(booster_handle), "xgb.Booster.handle"))
stop("class of booster_handle must be xgb.Booster.handle")
xgb.iter.eval <- function(bst, evals, iter, feval) {
handle <- xgb.get.handle(bst)
if (length(watchlist) == 0)
if (length(evals) == 0)
return(NULL)
evnames <- names(watchlist)
evnames <- names(evals)
if (is.null(feval)) {
msg <- .Call(XGBoosterEvalOneIter_R, booster_handle, as.integer(iter), watchlist, as.list(evnames))
msg <- .Call(XGBoosterEvalOneIter_R, handle, as.integer(iter), evals, as.list(evnames))
mat <- matrix(strsplit(msg, '\\s+|:')[[1]][-1], nrow = 2)
res <- structure(as.numeric(mat[2, ]), names = mat[1, ])
} else {
res <- sapply(seq_along(watchlist), function(j) {
w <- watchlist[[j]]
res <- sapply(seq_along(evals), function(j) {
w <- evals[[j]]
## predict using all trees
preds <- predict(booster_handle, w, outputmargin = TRUE, iterationrange = c(1, 1))
preds <- predict(bst, w, outputmargin = TRUE, iterationrange = "all")
eval_res <- feval(preds, w)
out <- eval_res$value
names(out) <- paste0(evnames[j], "-", eval_res$metric)
@@ -206,35 +272,45 @@ convert.labels <- function(labels, objective_name) {
}
# Generates random (stratified if needed) CV folds
generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
generate.cv.folds <- function(nfold, nrows, stratified, label, group, params) {
if (NROW(group)) {
if (stratified) {
warning(
paste0(
"Stratified splitting is not supported when using 'group' attribute.",
" Will use unstratified splitting."
)
)
}
return(generate.group.folds(nfold, group))
}
objective <- params$objective
if (!is.character(objective)) {
warning("Will use unstratified splitting (custom objective used)")
stratified <- FALSE
}
# cannot stratify if label is NULL
if (stratified && is.null(label)) {
warning("Will use unstratified splitting (no 'labels' available)")
stratified <- FALSE
}
# cannot do it for rank
objective <- params$objective
if (is.character(objective) && strtrim(objective, 5) == 'rank:') {
stop("\n\tAutomatic generation of CV-folds is not implemented for ranking!\n",
stop("\n\tAutomatic generation of CV-folds is not implemented for ranking without 'group' field!\n",
"\tConsider providing pre-computed CV-folds through the 'folds=' parameter.\n")
}
# shuffle
rnd_idx <- sample.int(nrows)
if (stratified &&
length(label) == length(rnd_idx)) {
if (stratified && length(label) == length(rnd_idx)) {
y <- label[rnd_idx]
# WARNING: some heuristic logic is employed to identify classification setting!
# - For classification, need to convert y labels to factor before making the folds,
# and then do stratification by factor levels.
# - For regression, leave y numeric and do stratification by quantiles.
if (is.character(objective)) {
y <- convert.labels(y, params$objective)
} else {
# If no 'objective' given in params, it means that user either wants to
# use the default 'reg:squarederror' objective or has provided a custom
# obj function. Here, assume classification setting when y has 5 or less
# unique values:
if (length(unique(y)) <= 5) {
y <- factor(y)
}
y <- convert.labels(y, objective)
}
folds <- xgb.createFolds(y, nfold)
folds <- xgb.createFolds(y = y, k = nfold)
} else {
# make simple non-stratified folds
kstep <- length(rnd_idx) %/% nfold
@@ -248,10 +324,33 @@ generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
return(folds)
}
generate.group.folds <- function(nfold, group) {
ngroups <- length(group) - 1
if (ngroups < nfold) {
stop("DMatrix has fewer groups than folds.")
}
seq_groups <- seq_len(ngroups)
indices <- lapply(seq_groups, function(gr) seq(group[gr] + 1, group[gr + 1]))
assignments <- base::split(seq_groups, as.integer(seq_groups %% nfold))
assignments <- unname(assignments)
out <- vector("list", nfold)
randomized_groups <- sample(ngroups)
for (idx in seq_len(nfold)) {
groups_idx_test <- randomized_groups[assignments[[idx]]]
groups_test <- indices[groups_idx_test]
idx_test <- unlist(groups_test)
attributes(idx_test)$group_test <- lengths(groups_test)
attributes(idx_test)$group_train <- lengths(indices[-groups_idx_test])
out[[idx]] <- idx_test
}
return(out)
}
# Creates CV folds stratified by the values of y.
# It was borrowed from caret::createFolds and simplified
# by always returning an unnamed list of fold indices.
xgb.createFolds <- function(y, k = 10) {
xgb.createFolds <- function(y, k) {
if (is.numeric(y)) {
## Group the numeric data based on their magnitudes
## and sample within those groups.
@@ -311,7 +410,7 @@ xgb.createFolds <- function(y, k = 10) {
#' At this time, some of the parameter names were changed in order to make the code style more uniform.
#' The deprecated parameters would be removed in the next release.
#'
#' To see all the current deprecated and new parameters, check the \code{xgboost:::depr_par_lut} table.
#' To see all the current deprecated and new parameters, check the `xgboost:::depr_par_lut` table.
#'
#' A deprecation warning is shown when any of the deprecated parameters is used in a call.
#' An additional warning is shown when there was a partial match to a deprecated parameter
@@ -320,48 +419,100 @@ xgb.createFolds <- function(y, k = 10) {
#' @name xgboost-deprecated
NULL
#' Do not use \code{\link[base]{saveRDS}} or \code{\link[base]{save}} for long-term archival of
#' models. Instead, use \code{\link{xgb.save}} or \code{\link{xgb.save.raw}}.
#' Model Serialization and Compatibility
#'
#' It is a common practice to use the built-in \code{\link[base]{saveRDS}} function (or
#' \code{\link[base]{save}}) to persist R objects to the disk. While it is possible to persist
#' \code{xgb.Booster} objects using \code{\link[base]{saveRDS}}, it is not advisable to do so if
#' the model is to be accessed in the future. If you train a model with the current version of
#' XGBoost and persist it with \code{\link[base]{saveRDS}}, the model is not guaranteed to be
#' accessible in later releases of XGBoost. To ensure that your model can be accessed in future
#' releases of XGBoost, use \code{\link{xgb.save}} or \code{\link{xgb.save.raw}} instead.
#' @description
#' When it comes to serializing XGBoost models, it's possible to use R serializers such as
#' [save()] or [saveRDS()] to serialize an XGBoost R model, but XGBoost also provides
#' its own serializers with better compatibility guarantees, which allow loading
#' said models in other language bindings of XGBoost.
#'
#' Note that an `xgb.Booster` object (**as produced by [xgb.train()]**, see rest of the doc
#' for objects produced by [xgboost()]), outside of its core components, might also keep:
#' - Additional model configuration (accessible through [xgb.config()]), which includes
#' model fitting parameters like `max_depth` and runtime parameters like `nthread`.
#' These are not necessarily useful for prediction/importance/plotting.
#' - Additional R specific attributes - e.g. results of callbacks, such as evaluation logs,
#' which are kept as a `data.table` object, accessible through
#' `attributes(model)$evaluation_log` if present.
#'
#' The first one (configurations) does not have the same compatibility guarantees as
#' the model itself, including attributes that are set and accessed through
#' [xgb.attributes()] - that is, such configuration might be lost after loading the
#' booster in a different XGBoost version, regardless of the serializer that was used.
#' These are saved when using [saveRDS()], but will be discarded if loaded into an
#' incompatible XGBoost version. They are not saved when using XGBoost's
#' serializers from its public interface including [xgb.save()] and [xgb.save.raw()].
#'
#' The second ones (R attributes) are not part of the standard XGBoost model structure,
#' and thus are not saved when using XGBoost's own serializers. These attributes are
#' only used for informational purposes, such as keeping track of evaluation metrics as
#' the model was fit, or saving the R call that produced the model, but are otherwise
#' not used for prediction / importance / plotting / etc.
#' These R attributes are only preserved when using R's serializers.
#'
#' In addition to the regular `xgb.Booster` objects produced by [xgb.train()], the
#' function [xgboost()] produces a different subclass `xgboost`, which keeps other
#' additional metadata as R attributes such as class names in classification problems,
#' and which has a dedicated `predict` method that uses different defaults. XGBoost's
#' own serializers can work with this `xgboost` class, but as they do not keep R
#' attributes, the resulting object, when deserialized, is downcasted to the regular
#' `xgb.Booster` class (i.e. it loses the metadata, and the resulting object will use
#' `predict.xgb.Booster` instead of `predict.xgboost`) - for these `xgboost` objects,
#' `saveRDS` might thus be a better option if the extra functionalities are needed.
#'
#' Note that XGBoost models in R from version `2.1.0` onwards and XGBoost models
#' before version `2.1.0` have very different R object structures and
#' are incompatible with each other. Hence, models that were saved with R serializers
#' like [saveRDS()] or [save()] before version `2.1.0` will not work with later
#' `xgboost` versions and vice versa. Be aware that the structure of R model objects
#' could in theory change again in the future, so XGBoost's serializers
#' should be preferred for long-term storage.
#'
#' Furthermore, note that using the package `qs` for serialization will require
#' version 0.26 or higher of said package, and will have the same compatibility
#' restrictions as R serializers.
#'
#' @details
#' Use [xgb.save()] to save the XGBoost model as a stand-alone file. You may opt into
#' the JSON format by specifying the JSON extension. To read the model back, use
#' [xgb.load()].
#'
#' Use [xgb.save.raw()] to save the XGBoost model as a sequence (vector) of raw bytes
#' in a future-proof manner. Future releases of XGBoost will be able to read the raw bytes and
#' re-construct the corresponding model. To read the model back, use [xgb.load.raw()].
#' The [xgb.save.raw()] function is useful if you would like to persist the XGBoost model
#' as part of another R object.
#'
#' Use [saveRDS()] if you require the R-specific attributes that a booster might have, such
#' as evaluation logs or the model class `xgboost` instead of `xgb.Booster`, but note that
#' future compatibility of such objects is outside XGBoost's control as it relies on R's
#' serialization format (see e.g. the details section in [serialize] and [save()] from base R).
#'
#' For more details and explanation about model persistence and archival, consult the page
#' \url{https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html}.
#'
#' @examples
#' data(agaricus.train, package = "xgboost")
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
#' max_depth = 2,
#' eta = 1,
#' nthread = 2,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' # Save as a stand-alone file; load it with xgb.load()
#' fname <- file.path(tempdir(), "xgb_model.ubj")
#' xgb.save(bst, fname)
#' bst2 <- xgb.load(fname)
#'
#' # Save as a stand-alone file (JSON); load it with xgb.load()
#' fname <- file.path(tempdir(), "xgb_model.json")
#' xgb.save(bst, fname)
#' bst2 <- xgb.load(fname)
#'
#' # Save as a raw byte vector; load it with xgb.load.raw()
#' xgb_bytes <- xgb.save.raw(bst)
@@ -372,12 +523,12 @@ NULL
#' # Persist the R object. Here, saveRDS() is okay, since it doesn't persist
#' # xgb.Booster directly. What's being persisted is the future-proof byte representation
#' # as given by xgb.save.raw().
#' fname <- file.path(tempdir(), "my_object.Rds")
#' saveRDS(obj, fname)
#' # Read back the R object
#' obj2 <- readRDS(fname)
#' # Re-construct xgb.Booster object from the bytes
#' bst2 <- xgb.load.raw(obj2$xgb_model_bytes)
#'
#' @name a-compatibility-note-for-saveRDS-save
NULL
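To make the trade-off concrete, here is a minimal sketch (reusing the `bst` fitted in the example above; the file names are illustrative): R attributes survive saveRDS() but not XGBoost's own serializers.

# R attributes (class, metadata) are kept by saveRDS()...
rds_path <- file.path(tempdir(), "bst.rds")
saveRDS(bst, rds_path)
names(attributes(readRDS(rds_path)))

# ...but xgb.save() keeps only the standard XGBoost model structure
ubj_path <- file.path(tempdir(), "bst.ubj")
xgb.save(bst, ubj_path)
names(attributes(xgb.load(ubj_path)))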
@@ -393,7 +544,8 @@ depr_par_lut <- matrix(c(
'plot.height', 'plot_height',
'plot.width', 'plot_width',
'n_first_tree', 'trees',
'dummy', 'DUMMY',
'watchlist', 'evals'
), ncol = 2, byrow = TRUE)
colnames(depr_par_lut) <- c('old', 'new')
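For reference, this two-column table maps deprecated parameter names to their replacements; a small sketch of the lookup it encodes (the actual renaming happens inside the package's internal deprecation checks):

new_name <- depr_par_lut[depr_par_lut[, "old"] == "watchlist", "new"]
# new_name is "evals"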

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -2,15 +2,17 @@
#'
#' Save xgb.DMatrix object to binary file
#'
#' @param dmatrix the `xgb.DMatrix` object
#' @param fname the name of the file to write.
#'
#' @examples
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package = "xgboost")
#'
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
#'
#' fname <- file.path(tempdir(), "xgb.DMatrix.data")
#' xgb.DMatrix.save(dtrain, fname)
#' dtrain <- xgb.DMatrix(fname)
#' @export
xgb.DMatrix.save <- function(dmatrix, fname) {
if (typeof(fname) != "character")


@@ -1,17 +1,26 @@
#' Set and get global configuration
#'
#' Global configuration consists of a collection of parameters that can be applied in the global
#' scope. See \url{https://xgboost.readthedocs.io/en/stable/parameter.html} for the full list of
#' parameters supported in the global configuration. Use `xgb.set.config()` to update the
#' values of one or more global-scope parameters. Use `xgb.get.config()` to fetch the current
#' values of all global-scope parameters (listed in
#' \url{https://xgboost.readthedocs.io/en/stable/parameter.html}).
#'
#' @details
#' Note that serialization-related functions might use a globally-configured number of threads,
#' which is managed by the system's OpenMP (OMP) configuration instead. Typically, XGBoost methods
#' accept an `nthreads` parameter, but some methods like [readRDS()] might get executed before such
#' parameter can be supplied.
#'
#' The number of OMP threads can in turn be configured for example through an environment variable
#' `OMP_NUM_THREADS` (needs to be set before R is started), or through `RhpcBLASctl::omp_set_num_threads`.
#' @rdname xgbConfig
#' @title Set and get global configuration
#' @name xgb.set.config, xgb.get.config
#' @export xgb.set.config xgb.get.config
#' @param ... List of parameters to be set, as keyword arguments
#' @return
#' `xgb.set.config()` returns `TRUE` to signal success. `xgb.get.config()` returns
#' a list containing all global-scope parameters and their values.
#'
#' @examples

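A minimal usage sketch, assuming only the documented `verbosity` global-scope parameter:

old_config <- xgb.get.config()     # fetch all global-scope parameters
xgb.set.config(verbosity = 0)      # returns TRUE on success
xgb.set.config(verbosity = old_config$verbosity)  # restore the previous value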

@@ -1,20 +1,15 @@
#' Create new features from a previously learned model
#'
#' May improve the learning by adding new features to the training data based on the
#' decision trees from a previously learned model.
#'
#' @details
#' This function is inspired by paragraph 3.1 of the paper:
#'
#' **Practical Lessons from Predicting Clicks on Ads at Facebook**
#'
#' *(Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, Stuart Bowers,
#' Joaquin Quinonero Candela)*
#'
#' International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
#'
@@ -33,11 +28,11 @@
#' where the first subtree has 3 leafs and the second 2 leafs. If an
#' instance ends up in leaf 2 in the first subtree and leaf 1 in
#' second subtree, the overall input to the linear classifier will
#' be the binary vector `[0, 1, 0, 1, 0]`, where the first 3 entries
#' correspond to the leaves of the first subtree and last 2 to
#' those of the second subtree.
#'
#' ...
#'
#' We can understand boosted decision tree
#' based transformation as a supervised feature encoding that
@@ -45,16 +40,23 @@
#' vector. A traversal from root node to a leaf node represents
#' a rule on certain features."
#'
#' @param model Decision tree boosting model learned on the original data.
#' @param data Original data (usually provided as a `dgCMatrix` matrix).
#' @param ... Currently not used.
#'
#' @return A `dgCMatrix` matrix including both the original data and the new features.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' data(agaricus.train, package = "xgboost")
#' data(agaricus.test, package = "xgboost")
#'
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
#'
#' param <- list(max_depth = 2, eta = 1, objective = 'binary:logistic')
#' nrounds <- 4
#'
#' bst <- xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)
#'
#' # Model accuracy without new features
#' accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) /
@@ -71,7 +73,6 @@
#' new.dtest <- xgb.DMatrix(
#' data = new.features.test, label = agaricus.test$label, nthread = 2
#' )
#' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
#'
#' # Model accuracy with new features

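The leaf-index encoding described above can be sketched by hand; the following is roughly what xgb.create.features() automates (a sketch, assuming the `bst` and `dtrain` from this example):

# leaf index of every observation in every tree: one column per tree
leaf_idx <- predict(bst, dtrain, predleaf = TRUE)
# one-hot encode each tree's leaf indices into binary indicator columns
new_feats <- do.call(cbind, lapply(seq_len(ncol(leaf_idx)), function(j) {
  stats::model.matrix(~ 0 + factor(leaf_idx[, j]))
}))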

@@ -1,131 +1,155 @@
#' Cross Validation
#'
#' The cross validation function of xgboost.
#'
#' @param params The list of parameters. The complete list of parameters is available in the
#' [online documentation](http://xgboost.readthedocs.io/en/latest/parameter.html).
#' Below is a shorter summary:
#' - `objective`: Objective function, common ones are
#'     - `reg:squarederror`: Regression with squared loss.
#'     - `binary:logistic`: Logistic regression for classification.
#'
#'   See [xgb.train()] for complete list of objectives.
#' - `eta`: Step size of each boosting step
#' - `max_depth`: Maximum depth of the tree
#' - `nthread`: Number of threads used in training. If not set, all threads are used
#'
#' See [xgb.train()] for further details.
#' See also demo for walkthrough example in R.
#'
#' Note that, while `params` accepts a `seed` entry and will use such parameter for model training if
#' supplied, this seed is not used for creation of train-test splits, which instead rely on R's own RNG
#' system - thus, for reproducible results, one needs to call the [set.seed()] function beforehand.
#' @param data An `xgb.DMatrix` object, with corresponding fields like `label` or bounds as required
#' for model training by the objective.
#'
#' Note that only the basic `xgb.DMatrix` class is supported - variants such as `xgb.QuantileDMatrix`
#' or `xgb.ExtMemDMatrix` are not supported here.
#' @param nrounds The max number of iterations.
#' @param nfold The original dataset is randomly partitioned into `nfold` equal size subsamples.
#' @param prediction A logical value indicating whether to return the test fold predictions
#' from each CV model. This parameter engages the [xgb.cb.cv.predict()] callback.
#' @param showsd Logical value whether to show standard deviation of cross validation.
#' @param metrics List of evaluation metrics to be used in cross validation,
#' when it is not specified, the evaluation metric is chosen according to objective function.
#' Possible options are:
#' - `error`: Binary classification error rate
#' - `rmse`: Root mean square error
#' - `logloss`: Negative log-likelihood function
#' - `mae`: Mean absolute error
#' - `mape`: Mean absolute percentage error
#' - `auc`: Area under curve
#' - `aucpr`: Area under PR curve
#' - `merror`: Exact matching error used to evaluate multi-class classification
#' @param obj Customized objective function. Returns gradient and second order
#' gradient with given prediction and dtrain.
#' @param feval Customized evaluation function. Returns
#' `list(metric='metric-name', value='metric-value')` with given prediction and dtrain.
#' @param stratified Logical flag indicating whether sampling of folds should be stratified
#' by the values of outcome labels. For real-valued labels in regression objectives,
#' stratification will be done by discretizing the labels into up to 5 buckets beforehand.
#'
#' If passing "auto", will be set to `TRUE` if the objective in `params` is a classification
#' objective (from XGBoost's built-in objectives, doesn't apply to custom ones), and to
#' `FALSE` otherwise.
#'
#' This parameter is ignored when `data` has a `group` field - in such case, the splitting
#' will be based on whole groups (note that this might make the folds have different sizes).
#'
#' Value `TRUE` here is **not** supported for custom objectives.
#' @param folds List with pre-defined CV folds (each element must be a vector of test fold's indices).
#' When folds are supplied, the `nfold` and `stratified` parameters are ignored.
#'
#' If `data` has a `group` field and the objective requires this field, each fold (list element)
#' must additionally have two attributes (retrievable through `attributes`) named `group_test`
#' and `group_train`, which should hold the `group` to assign through [setinfo.xgb.DMatrix()] to
#' the resulting DMatrices.
#' @param train_folds List specifying which indices to use for training. If `NULL`
#' (the default) all indices not specified in `folds` will be used for training.
#'
#' This is not supported when `data` has `group` field.
#' @param verbose Logical flag. Should statistics be printed during the process?
#' @param print_every_n Print each nth iteration evaluation messages when `verbose > 0`.
#' Default is 1 which means all messages are printed. This parameter is passed to the
#' [xgb.cb.print.evaluation()] callback.
#' @param early_stopping_rounds If `NULL`, the early stopping function is not triggered.
#' If set to an integer `k`, training with a validation set will stop if the performance
#' doesn't improve for `k` rounds.
#' Setting this parameter engages the [xgb.cb.early.stop()] callback.
#' @param maximize If `feval` and `early_stopping_rounds` are set,
#' then this parameter must be set as well.
#' When it is `TRUE`, it means the larger the evaluation score the better.
#' This parameter is passed to the [xgb.cb.early.stop()] callback.
#' @param callbacks A list of callback functions to perform various task during boosting.
#' See [xgb.Callback()]. Some of the callbacks are automatically created depending on the
#' parameters' values. User can provide either existing or their own callback methods in order
#' to customize the training process.
#' @param ... Other parameters to pass to `params`.
#'
#' @details
#' The original sample is randomly partitioned into `nfold` equal size subsamples.
#'
#' Of the `nfold` subsamples, a single subsample is retained as the validation data for testing the model,
#' and the remaining `nfold - 1` subsamples are used as training data.
#'
#' The cross-validation process is then repeated `nrounds` times, with each of the
#' `nfold` subsamples used exactly once as the validation data.
#'
#' All observations are used for both training and validation.
#'
#' Adapted from \url{https://en.wikipedia.org/wiki/Cross-validation_\%28statistics\%29}
#'
#' @return
#' An object of class 'xgb.cv.synchronous' with the following elements:
#' - `call`: Function call.
#' - `params`: Parameters that were passed to the xgboost library. Note that it does not
#'   capture parameters changed by the [xgb.cb.reset.parameters()] callback.
#' - `evaluation_log`: Evaluation history stored as a `data.table` with the
#'   first column corresponding to iteration number and the rest corresponding to the
#'   CV-based evaluation means and standard deviations for the training and test CV-sets.
#'   It is created by the [xgb.cb.evaluation.log()] callback.
#' - `niter`: Number of boosting iterations.
#' - `nfeatures`: Number of features in training data.
#' - `folds`: The list of CV folds' indices - either those passed through the `folds`
#'   parameter or randomly generated.
#' - `best_iteration`: Iteration number with the best evaluation metric value
#'   (only available with early stopping).
#'
#' Plus other potential elements that are the result of callbacks, such as a list `cv_predict` with
#' a sub-element `pred` when passing `prediction = TRUE`, which is added by the [xgb.cb.cv.predict()]
#' callback (note that one can also pass it manually under `callbacks` with different settings,
#' such as saving also the models created during cross validation); or a list `early_stop` which
#' will contain elements such as `best_iteration` when using the early stopping callback ([xgb.cb.early.stop()]).
#'
#' @examples
#' data(agaricus.train, package = "xgboost")
#'
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
#'
#' cv <- xgb.cv(
#' data = dtrain,
#' nrounds = 3,
#' nthread = 2,
#' nfold = 5,
#' metrics = list("rmse", "auc"),
#' max_depth = 3,
#' eta = 1,
#' objective = "binary:logistic"
#' )
#' print(cv)
#' print(cv, verbose = TRUE)
#'
#' @export
xgb.cv <- function(params = list(), data, nrounds, nfold,
prediction = FALSE, showsd = TRUE, metrics = list(),
obj = NULL, feval = NULL, stratified = "auto", folds = NULL, train_folds = NULL,
verbose = TRUE, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL, callbacks = list(), ...) {
check.deprecation(...)
stopifnot(inherits(data, "xgb.DMatrix"))
if (inherits(data, "xgb.DMatrix") && .Call(XGCheckNullPtr_R, data)) {
stop("'data' is an invalid 'xgb.DMatrix' object. Must be constructed again.")
}
params <- check.booster.params(params, ...)
# TODO: should we deprecate the redundant 'metrics' parameter?
@@ -135,19 +159,22 @@ xgb.cv <- function(params = list(), data, nrounds, nfold, label = NULL, missing
check.custom.obj()
check.custom.eval()
#if (is.null(params[['eval_metric']]) && is.null(feval))
# stop("Either 'eval_metric' or 'feval' must be provided for CV")
if (stratified == "auto") {
if (is.character(params$objective)) {
stratified <- (
(params$objective %in% .CLASSIFICATION_OBJECTIVES())
&& !(params$objective %in% .RANKING_OBJECTIVES())
)
} else {
stratified <- FALSE
}
}
# Check the labels and groups
cv_label <- getinfo(data, "label")
cv_group <- getinfo(data, "group")
if (!is.null(train_folds) && NROW(cv_group)) {
stop("'train_folds' is not supported for DMatrix object with 'group' field.")
}
# CV folds
@@ -158,121 +185,171 @@ xgb.cv <- function(params = list(), data, nrounds, nfold, label = NULL, missing
} else {
if (nfold <= 1)
stop("'nfold' must be > 1")
folds <- generate.cv.folds(nfold, nrow(data), stratified, cv_label, cv_group, params)
}
# Potential TODO: sequential CV
#if (strategy == 'sequential')
# stop('Sequential CV strategy is not yet implemented')
# Callbacks
tmp <- .process.callbacks(callbacks, is_cv = TRUE)
callbacks <- tmp$callbacks
cb_names <- tmp$cb_names
rm(tmp)
# Early stopping callback
if (!is.null(early_stopping_rounds) && !("early_stop" %in% cb_names)) {
callbacks <- add.callback(
callbacks,
xgb.cb.early.stop(
early_stopping_rounds,
maximize = maximize,
verbose = verbose
),
as_first_elt = TRUE
)
}
# verbosity & evaluation printing callback:
params <- c(params, list(silent = 1))
print_every_n <- max(as.integer(print_every_n), 1L)
if (!has.callbacks(callbacks, 'cb.print.evaluation') && verbose) {
callbacks <- add.cb(callbacks, cb.print.evaluation(print_every_n, showsd = showsd))
if (verbose && !("print_evaluation" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.print.evaluation(print_every_n, showsd = showsd))
}
# evaluation log callback: always is on in CV
evaluation_log <- list()
if (!has.callbacks(callbacks, 'cb.evaluation.log')) {
callbacks <- add.cb(callbacks, cb.evaluation.log())
}
# Early stopping callback
stop_condition <- FALSE
if (!is.null(early_stopping_rounds) &&
!has.callbacks(callbacks, 'cb.early.stop')) {
callbacks <- add.cb(callbacks, cb.early.stop(early_stopping_rounds,
maximize = maximize, verbose = verbose))
if (!("evaluation_log" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.evaluation.log())
}
# CV-predictions callback
if (prediction &&
!has.callbacks(callbacks, 'cb.cv.predict')) {
callbacks <- add.cb(callbacks, cb.cv.predict(save_models = FALSE))
if (prediction && !("cv_predict" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.cv.predict(save_models = FALSE))
}
# Sort the callbacks into categories
cb <- categorize.callbacks(callbacks)
# create the booster-folds
# train_folds
dall <- data
bst_folds <- lapply(seq_along(folds), function(k) {
dtest <- xgb.slice.DMatrix(dall, folds[[k]], allow_groups = TRUE)
# code originally contributed by @RolandASc on stackoverflow
if (is.null(train_folds))
dtrain <- xgb.slice.DMatrix(dall, unlist(folds[-k]), allow_groups = TRUE)
else
dtrain <- xgb.slice.DMatrix(dall, train_folds[[k]], allow_groups = TRUE)
if (!is.null(attributes(folds[[k]])$group_test)) {
setinfo(dtest, "group", attributes(folds[[k]])$group_test)
setinfo(dtrain, "group", attributes(folds[[k]])$group_train)
}
bst <- xgb.Booster(
params = params,
cachelist = list(dtrain, dtest),
modelfile = NULL
)
bst <- bst$bst
list(dtrain = dtrain, bst = bst, evals = list(train = dtrain, test = dtest), index = folds[[k]])
})
# extract parameters that can affect the relationship b/w #trees and #iterations
num_class <- max(as.numeric(NVL(params[['num_class']], 1)), 1) # nolint
num_parallel_tree <- max(as.numeric(NVL(params[['num_parallel_tree']], 1)), 1) # nolint
# those are fixed for CV (no training continuation)
begin_iteration <- 1
end_iteration <- nrounds
.execute.cb.before.training(
callbacks,
bst_folds,
dall,
NULL,
begin_iteration,
end_iteration
)
# synchronous CV boosting: run CV folds' models within each iteration
for (iteration in begin_iteration:end_iteration) {
.execute.cb.before.iter(
callbacks,
bst_folds,
dall,
NULL,
iteration
)
msg <- lapply(bst_folds, function(fd) {
xgb.iter.update(
bst = fd$bst,
dtrain = fd$dtrain,
iter = iteration - 1,
obj = obj
)
xgb.iter.eval(
bst = fd$bst,
evals = fd$evals,
iter = iteration - 1,
feval = feval
)
})
msg <- simplify2array(msg)
bst_evaluation <- rowMeans(msg)
bst_evaluation_err <- sqrt(rowMeans(msg^2) - bst_evaluation^2) # nolint
should_stop <- .execute.cb.after.iter(
callbacks,
bst_folds,
dall,
NULL,
iteration,
msg
)
if (should_stop) break
}
cb_outputs <- .execute.cb.after.training(
callbacks,
bst_folds,
dall,
NULL,
iteration,
msg
)
# the CV result
ret <- list(
call = match.call(),
params = params,
niter = iteration,
nfeatures = ncol(dall),
folds = folds
)
ret <- c(ret, cb_outputs)
class(ret) <- 'xgb.cv.synchronous'
return(invisible(ret))
}
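As noted in the `params` documentation above, fold creation uses R's own RNG rather than the `seed` parameter, so reproducibility requires seeding before the call; a sketch reusing `dtrain` from the examples:

set.seed(42)
cv1 <- xgb.cv(params = list(objective = "binary:logistic", eta = 1, nthread = 2),
              data = dtrain, nrounds = 3, nfold = 5)
set.seed(42)
cv2 <- xgb.cv(params = list(objective = "binary:logistic", eta = 1, nthread = 2),
              data = dtrain, nrounds = 3, nfold = 5)
# identical seeds produce identical folds, so the logs should match
all.equal(cv1$evaluation_log, cv2$evaluation_log)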
#' Print xgb.cv result
#'
#' Prints formatted results of [xgb.cv()].
#'
#' @param x An `xgb.cv.synchronous` object.
#' @param verbose Whether to print detailed data.
#' @param ... Passed to `data.table.print()`.
#'
#' @details
#' When not verbose, it will only print the evaluation results,
#' including the best iteration (when available).
#'
#' @examples
#' data(agaricus.train, package = "xgboost")
#'
#' train <- agaricus.train
#'
#' cv <- xgb.cv(
#' data = xgb.DMatrix(train$data, label = train$label),
#' nfold = 5,
#' max_depth = 2,
#' eta = 1,
#' nthread = 2,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#' print(cv)
#' print(cv, verbose = TRUE)
#'
#' @rdname print.xgb.cv
#' @method print xgb.cv.synchronous
@@ -292,23 +369,16 @@ print.xgb.cv.synchronous <- function(x, verbose = FALSE, ...) {
paste0('"', unlist(x$params), '"'),
sep = ' = ', collapse = ', '), '\n', sep = '')
}
for (n in c('niter', 'best_iteration')) {
if (is.null(x$early_stop[[n]]))
next
cat(n, ': ', x$early_stop[[n]], '\n', sep = '')
}
if (!is.null(x$cv_predict$pred)) {
cat('pred:\n')
str(x$cv_predict$pred)
}
}
@@ -316,9 +386,9 @@ print.xgb.cv.synchronous <- function(x, verbose = FALSE, ...) {
cat('evaluation_log:\n')
print(x$evaluation_log, row.names = FALSE, ...)
if (!is.null(x$early_stop$best_iteration)) {
cat('Best iteration:\n')
print(x$evaluation_log[x$early_stop$best_iteration], row.names = FALSE, ...)
}
invisible(x)
}
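With the refactored callbacks, results that used to live at the top level of the CV result are nested under the corresponding callback's element; a sketch of where to find them (assuming a `cv` result as in the examples):

cv$early_stop$best_iteration  # early-stopping results (when that callback ran)
cv$cv_predict$pred            # CV predictions (when prediction = TRUE was passed)
cv$evaluation_log             # still a top-level data.table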


@@ -1,32 +1,44 @@
#' Dump an XGBoost model in text format.
#'
#' Dump an XGBoost model in text format.
#'
#' @param model The model object.
#' @param fname The name of the text file where to save the model text dump.
#' If not provided or set to `NULL`, the model is returned as a character vector.
#' @param fmap Feature map file representing feature types. See demo/ for a walkthrough
#' example in R, and \url{https://github.com/dmlc/xgboost/blob/master/demo/data/featmap.txt}
#' to see an example of the value.
#' @param with_stats Whether to dump some additional statistics about the splits.
#' When this option is on, the model dump contains two additional values:
#' gain is the approximate loss function gain we get in each split;
#' cover is the sum of second order gradient in each node.
#' @param dump_format Either 'text', 'json', or 'dot' (graphviz) format could be specified.
#'
#' Format 'dot' for a single tree can be passed directly to packages that consume this format
#' for graph visualization, such as function `DiagrammeR::grViz()`
#' @param ... Currently not used
#'
#' @return
#' If fname is not provided or set to `NULL` the function will return the model
#' as a character vector. Otherwise it will return `TRUE`.
#'
#' @examples
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package = "xgboost")
#' data(agaricus.test, package = "xgboost")
#'
#' train <- agaricus.train
#' test <- agaricus.test
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(train$data, label = train$label),
#' max_depth = 2,
#' eta = 1,
#' nthread = 2,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' # save the model to a text file 'model.dump'
#' dump_path <- file.path(tempdir(), 'model.dump')
#' xgb.dump(bst, dump_path, with_stats = TRUE)
@@ -35,11 +47,15 @@
#' print(xgb.dump(bst, with_stats = TRUE))
#'
#' # print in JSON format:
#' cat(xgb.dump(bst, with_stats = TRUE, dump_format = "json"))
#'
#' # plot first tree leveraging the 'dot' format
#' if (requireNamespace('DiagrammeR', quietly = TRUE)) {
#' DiagrammeR::grViz(xgb.dump(bst, dump_format = "dot")[[1L]])
#' }
#' @export
xgb.dump <- function(model, fname = NULL, fmap = "", with_stats = FALSE,
dump_format = c("text", "json", "dot"), ...) {
check.deprecation(...)
dump_format <- match.arg(dump_format)
if (!inherits(model, "xgb.Booster"))
@@ -49,9 +65,16 @@ xgb.dump <- function(model, fname = NULL, fmap = "", with_stats = FALSE,
if (!(is.null(fmap) || is.character(fmap)))
stop("fmap: argument must be a character string (when provided)")
model_dump <- .Call(
XGBoosterDumpModel_R,
xgb.get.handle(model),
NVL(fmap, "")[1],
as.integer(with_stats),
as.character(dump_format)
)
if (dump_format == "dot") {
return(sapply(model_dump, function(x) gsub("^booster\\[\\d+\\]\\n", "", x)))
}
if (is.null(fname))
model_dump <- gsub('\t', '', model_dump, fixed = TRUE)
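Because the 'dot' output is plain Graphviz source, it can also be rendered outside R; a sketch assuming a fitted `bst` and the Graphviz `dot` binary on the PATH:

dot_src <- xgb.dump(bst, dump_format = "dot")[[1L]]  # first tree only
dot_file <- file.path(tempdir(), "tree0.dot")
writeLines(dot_src, dot_file)
# system2("dot", c("-Tpng", dot_file, "-o", file.path(tempdir(), "tree0.png")))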


@@ -1,6 +1,5 @@
# ggplot backend for the xgboost plotting facilities
#' @rdname xgb.plot.importance
#' @export
xgb.ggplot.importance <- function(importance_matrix = NULL, top_n = NULL, measure = NULL,
@@ -103,6 +102,27 @@ xgb.ggplot.deepness <- function(model = NULL, which = c("2x1", "max.depth", "med
#' @export
xgb.ggplot.shap.summary <- function(data, shap_contrib = NULL, features = NULL, top_n = 10, model = NULL,
trees = NULL, target_class = NULL, approxcontrib = FALSE, subsample = NULL) {
if (inherits(data, "xgb.DMatrix")) {
stop(
"'xgb.ggplot.shap.summary' is not compatible with 'xgb.DMatrix' objects. Try passing a matrix or data.frame."
)
}
cols_categ <- NULL
if (!is.null(model)) {
ftypes <- getinfo(model, "feature_type")
if (NROW(ftypes)) {
if (length(ftypes) != ncol(data)) {
stop(sprintf("'data' has incorrect number of columns (expected: %d, got: %d).", length(ftypes), ncol(data)))
}
cols_categ <- colnames(data)[ftypes == "c"]
}
} else if (inherits(data, "data.frame")) {
cols_categ <- names(data)[sapply(data, function(x) is.factor(x) || is.character(x))]
}
if (NROW(cols_categ)) {
warning("Categorical features are ignored in 'xgb.ggplot.shap.summary'.")
}
data_list <- xgb.shap.data(
data = data,
shap_contrib = shap_contrib,
@@ -115,6 +135,10 @@ xgb.ggplot.shap.summary <- function(data, shap_contrib = NULL, features = NULL,
subsample = subsample,
max_observations = 10000 # 10,000 samples per feature.
)
if (NROW(cols_categ)) {
data_list <- lapply(data_list, function(x) x[, !(colnames(x) %in% cols_categ), drop = FALSE])
}
p_data <- prepare.ggplot.shap.data(data_list, normalize = TRUE)
# Reverse factor levels so that the first level is at the top of the plot
p_data[, "feature" := factor(feature, rev(levels(feature)))]
@@ -127,21 +151,20 @@ xgb.ggplot.shap.summary <- function(data, shap_contrib = NULL, features = NULL,
p
}
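Per the checks added above, `data` must be a matrix or data.frame rather than an `xgb.DMatrix`; a usage sketch assuming the agaricus `bst` from earlier examples:

# SHAP summary plot of the 5 most important features
xgb.ggplot.shap.summary(data = agaricus.train$data, model = bst, top_n = 5)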
#' Combine feature values and SHAP values
#'
#' Internal function used to combine and melt feature values and SHAP contributions
#' as required for ggplot functions related to SHAP.
#'
#' @param data_list The result of `xgb.shap.data()`.
#' @param normalize Whether to standardize feature values to mean 0 and
#' standard deviation 1. This is useful for comparing multiple features on the same
#' plot. Default is `FALSE`. Note that it cannot be used when the data contains
#' categorical features.
#' @return A `data.table` containing the observation ID, the feature name, the
#' feature value (normalized if specified), and the SHAP contribution value.
#' @noRd
#' @keywords internal
prepare.ggplot.shap.data <- function(data_list, normalize = FALSE) {
data <- data_list[["data"]]
shap_contrib <- data_list[["shap_contrib"]]
@@ -162,14 +185,15 @@ prepare.ggplot.shap.data <- function(data_list, normalize = FALSE) {
p_data
}
#' Scale feature values
#'
#' Internal function that scales feature values to mean 0 and standard deviation 1.
#' Useful to compare multiple features on the same plot.
#'
#' @param x Numeric vector.
#' @return Numeric vector with mean 0 and standard deviation 1.
#' @noRd
#' @keywords internal
normalize <- function(x) {
loc <- mean(x, na.rm = TRUE)
scale <- stats::sd(x, na.rm = TRUE)
@@ -181,7 +205,7 @@ normalize <- function(x) {
# ... the plots
# cols number of columns
# internal utility function
multiplot <- function(..., cols) {
plots <- list(...)
num_plots <- length(plots)


@@ -1,107 +1,132 @@
#' Feature importance
#'
#' Creates a `data.table` of feature importances.
#'
#' @details
#'
#' This function works for both linear and tree models.
#'
#' For linear models, the importance is the absolute magnitude of linear coefficients.
#' To obtain a meaningful ranking by importance for linear models, the features need to
#' be on the same scale (which is also recommended when using L1 or L2 regularization).
#'
#' @param feature_names Character vector used to overwrite the feature names
#' of the model. The default is `NULL` (use original feature names).
#' @param model Object of class `xgb.Booster`.
#' @param trees An integer vector of tree indices that should be included
#' into the importance calculation (only for the "gbtree" booster).
#' The default (`NULL`) parses all trees.
#' It could be useful, e.g., in multiclass classification to get feature importances
#' for each class separately. *Important*: the tree index in XGBoost models
#' is zero-based (e.g., use `trees = 0:4` for the first five trees).
#' @param data Deprecated.
#' @param label Deprecated.
#' @param target Deprecated.
#' @return A `data.table` with the following columns:
#'
#' For a tree model:
#' - `Features`: Names of the features used in the model.
#' - `Gain`: Fractional contribution of each feature to the model based on
#' the total gain of this feature's splits. Higher percentage means higher importance.
#' - `Cover`: Metric of the number of observations related to this feature.
#' - `Frequency`: Percentage of times a feature has been used in trees.
#'
#' For a linear model:
#' - `Features`: Names of the features used in the model.
#' - `Weight`: Linear coefficient of this feature.
#' - `Class`: Class label (only for multiclass models).
#'
#' If `feature_names` is not provided and `model` doesn't have `feature_names`,
#' the index of the features will be used instead. Because the index is extracted from the model dump
#' (based on C++ code), it starts at 0 (as in C/C++ or Python) instead of 1 (usual in R).
#'
#' @examples
#'
#' # binomial classification using "gbtree":
#' data(agaricus.train, package = "xgboost")
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
#' max_depth = 2,
#' eta = 1,
#' nthread = 2,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' xgb.importance(model = bst)
#'
#' # binomial classification using "gblinear":
#' bst <- xgb.train(
#' data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
#' booster = "gblinear",
#' eta = 0.3,
#' nthread = 1,
#' nrounds = 20,
#' objective = "binary:logistic"
#' )
#'
#' xgb.importance(model = bst)
#'
#' # multiclass classification using "gbtree":
#' nclass <- 3
#' nrounds <- 10
#' mbst <- xgb.train(
#' data = xgb.DMatrix(
#' as.matrix(iris[, -5]),
#' label = as.numeric(iris$Species) - 1
#' ),
#' max_depth = 3,
#' eta = 0.2,
#' nthread = 2,
#' nrounds = nrounds,
#' objective = "multi:softprob",
#' num_class = nclass
#' )
#'
#' # all classes clumped together:
#' xgb.importance(model = mbst)
#'
#' # inspect importances separately for each class:
#' xgb.importance(
#' model = mbst, trees = seq(from = 0, by = nclass, length.out = nrounds)
#' )
#' xgb.importance(
#' model = mbst, trees = seq(from = 1, by = nclass, length.out = nrounds)
#' )
#' xgb.importance(
#' model = mbst, trees = seq(from = 2, by = nclass, length.out = nrounds)
#' )
#'
#' # multiclass classification using "gblinear":
#' mbst <- xgb.train(
#' data = xgb.DMatrix(
#' scale(as.matrix(iris[, -5])),
#' label = as.numeric(iris$Species) - 1
#' ),
#' booster = "gblinear",
#' eta = 0.2,
#' nthread = 1,
#' nrounds = 15,
#' objective = "multi:softprob",
#' num_class = nclass
#' )
#'
#' xgb.importance(model = mbst)
#'
#' @export
xgb.importance <- function(model = NULL, feature_names = getinfo(model, "feature_name"), trees = NULL,
data = NULL, label = NULL, target = NULL) {
if (!(is.null(data) && is.null(label) && is.null(target)))
warning("xgb.importance: parameters 'data', 'label' and 'target' are deprecated")
if (!inherits(model, "xgb.Booster"))
stop("model: must be an object of class xgb.Booster")
if (!(is.null(feature_names) || is.character(feature_names)))
stop("feature_names: Has to be a character vector")
handle <- xgb.get.handle(model)
if (xgb.booster_type(model) == "gblinear") {
args <- list(importance_type = "weight", feature_names = feature_names)
results <- .Call(
XGBoosterFeatureScore_R, handle, jsonlite::toJSON(args, auto_unbox = TRUE, null = "null")
)
names(results) <- c("features", "shape", "weight")
if (length(results$shape) == 2) {
@@ -122,7 +147,7 @@ xgb.importance <- function(feature_names = NULL, model = NULL, trees = NULL,
for (importance_type in c("weight", "total_gain", "total_cover")) {
args <- list(importance_type = importance_type, feature_names = feature_names, tree_idx = trees)
results <- .Call(
XGBoosterFeatureScore_R, handle, jsonlite::toJSON(args, auto_unbox = TRUE, null = "null")
)
names(results) <- c("features", "shape", importance_type)
concatenated[


@@ -1,54 +1,66 @@
#' Load XGBoost model from binary file
#'
#' Load XGBoost model from binary model file.
#'
#' @param modelfile The name of the binary input file.
#'
#' @details
#' The input file is expected to contain a model saved in an XGBoost model format
#' using either [xgb.save()] in R, or using some
#' appropriate methods from other XGBoost interfaces. E.g., a model trained in Python and
#' saved from there in XGBoost format, could be loaded from R.
#'
#' Note: a model saved as an R object has to be loaded using corresponding R-methods,
#' not by [xgb.load()].
#'
#' @return
#' An object of `xgb.Booster` class.
#'
#' @seealso [xgb.save()]
#'
#' @examples
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package = "xgboost")
#' data(agaricus.test, package = "xgboost")
#'
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#'
#' train <- agaricus.train
#' test <- agaricus.test
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(train$data, label = train$label),
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' fname <- file.path(tempdir(), "xgb.ubj")
#' xgb.save(bst, fname)
#' bst <- xgb.load(fname)
#' @export
xgb.load <- function(modelfile) {
if (is.null(modelfile))
stop("xgb.load: modelfile cannot be NULL")
bst <- xgb.Booster(
params = list(),
cachelist = list(),
modelfile = modelfile
)
bst <- bst$bst
# re-use modelfile if it is raw so we do not need to serialize
if (typeof(modelfile) == "raw") {
warning(
paste(
"The support for loading raw booster with `xgb.load` will be ",
"discontinued in upcoming release. Use `xgb.load.raw` or",
" `xgb.unserialize` instead. "
"discontinued in upcoming release. Use `xgb.load.raw` instead. "
)
)
}
return(bst)
}
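To restate the note above as code, the two persistence paths do not mix; a sketch (the file names, reused from the sketches earlier in this document, are illustrative):

bst1 <- xgb.load(fname)    # file written by xgb.save() - not readRDS()
bst2 <- readRDS(rds_path)  # object written by saveRDS() - not xgb.load()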


@@ -1,23 +1,12 @@
#' Load serialised XGBoost model from R's raw vector
#'
#' User can generate raw memory buffer by calling [xgb.save.raw()].
#'
#' @param buffer The buffer returned by [xgb.save.raw()].
#' @export
xgb.load.raw <- function(buffer) {
cachelist <- list()
bst <- .Call(XGBoosterCreate_R, cachelist)
.Call(XGBoosterLoadModelFromRaw_R, xgb.get.handle(bst), buffer)
return(bst)
}
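A round trip through the raw-vector form, as a sketch (assuming a fitted `bst`):

raw_bytes <- xgb.save.raw(bst)   # serialize the booster to an R raw vector
bst2 <- xgb.load.raw(raw_bytes)  # re-construct an xgb.Booster from the bytes
stopifnot(typeof(raw_bytes) == "raw")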


@@ -1,67 +1,70 @@
#' Parse model text dump
#'
#' Parse a boosted tree model text dump into a `data.table` structure.
#'
#' @param model Object of class `xgb.Booster`. If it contains feature names (they can
#' be set through [setinfo()]), they will be used in the output from this function.
#' @param text Character vector previously generated by the function [xgb.dump()]
#' (called with parameter `with_stats = TRUE`). `text` takes precedence over `model`.
#' @param trees An integer vector of tree indices that should be used. The default
#' (`NULL`) uses all trees. Useful, e.g., in multiclass classification to get only
#' the trees of one class. *Important*: the tree index in XGBoost models
#' is zero-based (e.g., use `trees = 0:4` for the first five trees).
#' @param use_int_id A logical flag indicating whether nodes in columns "Yes", "No", and
#' "Missing" should be represented as integers (when `TRUE`) or as "Tree-Node"
#' character strings (when `FALSE`, default).
#' @param ... Currently not used.
#'
#' @return
#' A `data.table` with detailed information about tree nodes. It has the following columns:
#' - `Tree`: integer ID of a tree in a model (zero-based index).
#' - `Node`: integer ID of a node in a tree (zero-based index).
#' - `ID`: character identifier of a node in a model (only when `use_int_id = FALSE`).
#' - `Feature`: for a branch node, a feature ID or name (when available);
#' for a leaf node, it simply labels it as `"Leaf"`.
#' - `Split`: location of the split for a branch node (split condition is always "less than").
#' - `Yes`: ID of the next node when the split condition is met.
#' - `No`: ID of the next node when the split condition is not met.
#' - `Missing`: ID of the next node when the branch value is missing.
#' - `Gain`: either the split gain (change in loss) or the leaf value.
#' - `Cover`: metric related to the number of observations either seen by a split
#' or collected by a leaf during training.
#'
#' The columns of the \code{data.table} are:
#'
#' \itemize{
#' \item \code{Tree}: integer ID of a tree in a model (zero-based index)
#' \item \code{Node}: integer ID of a node in a tree (zero-based index)
#' \item \code{ID}: character identifier of a node in a model (only when \code{use_int_id=FALSE})
#' \item \code{Feature}: for a branch node, it's a feature id or name (when available);
#' for a leaf note, it simply labels it as \code{'Leaf'}
#' \item \code{Split}: location of the split for a branch node (split condition is always "less than")
#' \item \code{Yes}: ID of the next node when the split condition is met
#' \item \code{No}: ID of the next node when the split condition is not met
#' \item \code{Missing}: ID of the next node when branch value is missing
#' \item \code{Quality}: either the split gain (change in loss) or the leaf value
#' \item \code{Cover}: metric related to the number of observation either seen by a split
#' or collected by a leaf during training.
#' }
#'
#' When \code{use_int_id=FALSE}, columns "Yes", "No", and "Missing" point to model-wide node identifiers
#' in the "ID" column. When \code{use_int_id=TRUE}, those columns point to node identifiers from
#' When `use_int_id = FALSE`, columns "Yes", "No", and "Missing" point to model-wide node identifiers
#' in the "ID" column. When `use_int_id = TRUE`, those columns point to node identifiers from
#' the corresponding trees in the "Node" column.
#'
#' @examples
#' # Basic use:
#'
#' data(agaricus.train, package='xgboost')
#' data(agaricus.train, package = "xgboost")
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#'
#' (dt <- xgb.model.dt.tree(colnames(agaricus.train$data), bst))
#' bst <- xgb.train(
#' data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' # This bst model already has feature_names stored with it, so those would be used when
#' # feature_names is not set:
#' (dt <- xgb.model.dt.tree(model = bst))
#' dt <- xgb.model.dt.tree(bst)
#'
#' # How to match feature names of splits that are following a current 'Yes' branch:
#'
#' merge(dt, dt[, .(ID, Y.Feature=Feature)], by.x='Yes', by.y='ID', all.x=TRUE)[order(Tree,Node)]
#' merge(
#' dt,
#' dt[, .(ID, Y.Feature = Feature)], by.x = "Yes", by.y = "ID", all.x = TRUE
#' )[
#' order(Tree, Node)
#' ]
#'
#' @export
xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
xgb.model.dt.tree <- function(model = NULL, text = NULL,
trees = NULL, use_int_id = FALSE, ...) {
check.deprecation(...)
@@ -71,23 +74,22 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
" (or NULL if 'model' was provided).")
}
if (is.null(feature_names) && !is.null(model) && !is.null(model$feature_names))
feature_names <- model$feature_names
if (!(is.null(feature_names) || is.character(feature_names))) {
stop("feature_names: must be a character vector")
}
if (!(is.null(trees) || is.numeric(trees))) {
stop("trees: must be a vector of integers.")
}
if (is.null(text)) {
text <- xgb.dump(model = model, with_stats = TRUE)
feature_names <- NULL
if (inherits(model, "xgb.Booster")) {
feature_names <- xgb.feature_names(model)
}
if (length(text) < 2 ||
sum(grepl('leaf=(\\d+)', text)) < 1) {
from_text <- TRUE
if (is.null(text)) {
text <- xgb.dump(model = model, with_stats = TRUE)
from_text <- FALSE
}
if (length(text) < 2 || !any(grepl('leaf=(-?\\d+)', text))) {
stop("Non-tree model detected! This function can only be used with tree models.")
}
@@ -106,16 +108,33 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
} else {
trees <- trees[trees >= 0 & trees <= max(td$Tree)]
}
td <- td[Tree %in% trees & !grepl('^booster', t)]
td <- td[Tree %in% trees & !is.na(t) & !startsWith(t, 'booster')]
td[, Node := as.integer(sub("^([0-9]+):.*", "\\1", t))]
if (!use_int_id) td[, ID := add.tree.id(Node, Tree)]
td[, isLeaf := grepl("leaf", t, fixed = TRUE)]
# parse branch lines
branch_rx <- paste0("f(\\d+)<(", anynumber_regex, ")\\] yes=(\\d+),no=(\\d+),missing=(\\d+),",
"gain=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
branch_cols <- c("Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover")
branch_rx_nonames <- paste0("f(\\d+)<(", anynumber_regex, ")\\] yes=(\\d+),no=(\\d+),missing=(\\d+),",
"gain=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
branch_rx_w_names <- paste0("\\d+:\\[(.+)<(", anynumber_regex, ")\\] yes=(\\d+),no=(\\d+),missing=(\\d+),",
"gain=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
text_has_feature_names <- FALSE
if (NROW(feature_names)) {
branch_rx <- branch_rx_w_names
text_has_feature_names <- TRUE
} else {
# Note: when passing a text dump, it might or might not have feature names,
# but that aspect is unknown from just the text attributes
branch_rx <- branch_rx_nonames
if (from_text) {
if (sum(grepl(branch_rx_w_names, text)) > sum(grepl(branch_rx_nonames, text))) {
branch_rx <- branch_rx_w_names
text_has_feature_names <- TRUE
}
}
}
branch_cols <- c("Feature", "Split", "Yes", "No", "Missing", "Gain", "Cover")
td[
isLeaf == FALSE,
(branch_cols) := {
@@ -125,7 +144,7 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
xtr[, 3:5] <- add.tree.id(xtr[, 3:5], Tree)
if (length(xtr) == 0) {
as.data.table(
list(Feature = "NA", Split = "NA", Yes = "NA", No = "NA", Missing = "NA", Quality = "NA", Cover = "NA")
list(Feature = "NA", Split = "NA", Yes = "NA", No = "NA", Missing = "NA", Gain = "NA", Cover = "NA")
)
} else {
as.data.table(xtr)
@@ -137,15 +156,17 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
is_stump <- function() {
return(length(td$Feature) == 1 && is.na(td$Feature))
}
if (!is.null(feature_names) && !is_stump()) {
if (length(feature_names) <= max(as.numeric(td$Feature), na.rm = TRUE))
stop("feature_names has less elements than there are features used in the model")
td[isLeaf == FALSE, Feature := feature_names[as.numeric(Feature) + 1]]
if (!text_has_feature_names) {
if (!is.null(feature_names) && !is_stump()) {
if (length(feature_names) <= max(as.numeric(td$Feature), na.rm = TRUE))
stop("feature_names has less elements than there are features used in the model")
td[isLeaf == FALSE, Feature := feature_names[as.numeric(Feature) + 1]]
}
}
# parse leaf lines
leaf_rx <- paste0("leaf=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
leaf_cols <- c("Feature", "Quality", "Cover")
leaf_cols <- c("Feature", "Gain", "Cover")
td[
isLeaf == TRUE,
(leaf_cols) := {
@@ -160,7 +181,7 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
]
# convert some columns to numeric
numeric_cols <- c("Split", "Quality", "Cover")
numeric_cols <- c("Split", "Gain", "Cover")
td[, (numeric_cols) := lapply(.SD, as.numeric), .SDcols = numeric_cols]
if (use_int_id) {
int_cols <- c("Yes", "No", "Missing")
@@ -173,7 +194,7 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
td[order(Tree, Node)]
}
# Avoid error messages during CRAN check.
# Avoid notes during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by Data.table...
globalVariables(c("Tree", "Node", "ID", "Feature", "t", "isLeaf", ".SD", ".SDcols"))
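A short sketch of navigating the integer-ID output (assuming the 'bst' model from the example above; with use_int_id = TRUE, the "Yes"/"No"/"Missing" columns index the "Node" column within the same tree):

dt <- xgb.model.dt.tree(model = bst, use_int_id = TRUE)
root <- dt[Tree == 0 & Node == 0]  # root of the first tree
dt[Tree == 0 & Node == root$Yes]   # child reached when the split condition is met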


@@ -1,62 +1,74 @@
#' Plot model trees deepness
#' Plot model tree depth
#'
#' Visualizes distributions related to depth of tree leafs.
#' \code{xgb.plot.deepness} uses base R graphics, while \code{xgb.ggplot.deepness} uses the ggplot backend.
#' Visualizes distributions related to the depth of tree leaves.
#' - `xgb.plot.deepness()` uses base R graphics, while
#' - `xgb.ggplot.deepness()` uses "ggplot2".
#'
#' @param model either an \code{xgb.Booster} model generated by the \code{xgb.train} function
#' or a data.table result of the \code{xgb.model.dt.tree} function.
#' @param plot (base R barplot) whether a barplot should be produced.
#' If FALSE, only a data.table is returned.
#' @param which which distribution to plot (see details).
#' @param ... other parameters passed to \code{barplot} or \code{plot}.
#' @param model Either an `xgb.Booster` model, or the "data.table" returned
#' by [xgb.model.dt.tree()].
#' @param which Which distribution to plot (see details).
#' @param plot Should the plot be shown? Default is `TRUE`.
#' @param ... Other parameters passed to [graphics::barplot()] or [graphics::plot()].
#'
#' @details
#'
#' When \code{which="2x1"}, two distributions with respect to the leaf depth
#' When `which = "2x1"`, two distributions with respect to the leaf depth
#' are plotted on top of each other:
#' \itemize{
#' \item the distribution of the number of leafs in a tree model at a certain depth;
#' \item the distribution of average weighted number of observations ("cover")
#' ending up in leafs at certain depth.
#' }
#' Those could be helpful in determining sensible ranges of the \code{max_depth}
#' and \code{min_child_weight} parameters.
#' 1. The distribution of the number of leaves in a tree model at a certain depth.
#' 2. The distribution of the average weighted number of observations ("cover")
#' ending up in leaves at a certain depth.
#'
#' When \code{which="max.depth"} or \code{which="med.depth"}, plots of either maximum or median depth
#' per tree with respect to tree number are created. And \code{which="med.weight"} allows to see how
#' Those could be helpful in determining sensible ranges of the `max_depth`
#' and `min_child_weight` parameters.
#'
#' When `which = "max.depth"` or `which = "med.depth"`, plots of either maximum or
#' median depth per tree with respect to the tree number are created.
#'
#' Finally, `which = "med.weight"` lets you see how
#' a tree's median absolute leaf weight changes through the iterations.
#'
#' This function was inspired by the blog post
#' \url{https://github.com/aysent/random-forest-leaf-visualization}.
#' These functions have been inspired by the blog post
#' <https://github.com/aysent/random-forest-leaf-visualization>.
#'
#' @return
#' The return value of the two functions is as follows:
#' - `xgb.plot.deepness()`: A "data.table" (invisibly).
#' Each row corresponds to a terminal leaf in the model and contains information
#' about the leaf's depth, cover, and weight (which is used in calculating predictions).
#' If `plot = TRUE`, a plot is shown as well.
#' - `xgb.ggplot.deepness()`: When `which = "2x1"`, a list of two "ggplot" objects,
#' and a single "ggplot" object otherwise.
#'
#' Other than producing plots (when \code{plot=TRUE}), the \code{xgb.plot.deepness} function
#' silently returns a processed data.table where each row corresponds to a terminal leaf in a tree model,
#' and contains information about leaf's depth, cover, and weight (which is used in calculating predictions).
#'
#' The \code{xgb.ggplot.deepness} silently returns either a list of two ggplot graphs when \code{which="2x1"}
#' or a single ggplot graph for the other \code{which} options.
#'
#' @seealso
#'
#' \code{\link{xgb.train}}, \code{\link{xgb.model.dt.tree}}.
#' @seealso [xgb.train()] and [xgb.model.dt.tree()].
#'
#' @examples
#'
#' data(agaricus.train, package='xgboost')
#' data(agaricus.train, package = "xgboost")
#' ## Keep the number of threads to 2 for examples
#' nthread <- 2
#' data.table::setDTthreads(nthread)
#'
#' # Change max_depth to a higher number to get a more significant result
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 6,
#' eta = 0.1, nthread = 2, nrounds = 50, objective = "binary:logistic",
#' subsample = 0.5, min_child_weight = 2)
#' ## Change max_depth to a higher number to get a more significant result
#' bst <- xgb.train(
#' data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
#' max_depth = 6,
#' nthread = nthread,
#' nrounds = 50,
#' objective = "binary:logistic",
#' subsample = 0.5,
#' min_child_weight = 2
#' )
#'
#' xgb.plot.deepness(bst)
#' xgb.ggplot.deepness(bst)
#'
#' xgb.plot.deepness(bst, which='max.depth', pch=16, col=rgb(0,0,1,0.3), cex=2)
#' xgb.plot.deepness(
#' bst, which = "max.depth", pch = 16, col = rgb(0, 0, 1, 0.3), cex = 2
#' )
#'
#' xgb.plot.deepness(bst, which='med.weight', pch=16, col=rgb(0,0,1,0.3), cex=2)
#' xgb.plot.deepness(
#' bst, which = "med.weight", pch = 16, col = rgb(0, 0, 1, 0.3), cex = 2
#' )
#'
#' @rdname xgb.plot.deepness
#' @export
@@ -80,7 +92,7 @@ xgb.plot.deepness <- function(model = NULL, which = c("2x1", "max.depth", "med.d
stop("Model tree columns are not as expected!\n",
" Note that this function works only for tree models.")
dt_depths <- merge(get.leaf.depth(dt_tree), dt_tree[, .(ID, Cover, Weight = Quality)], by = "ID")
dt_depths <- merge(get.leaf.depth(dt_tree), dt_tree[, .(ID, Cover, Weight = Gain)], by = "ID")
setkeyv(dt_depths, c("Tree", "ID"))
# count by depth levels, and also calculate average cover at a depth
dt_summaries <- dt_depths[, .(.N, Cover = mean(Cover)), Depth]
@@ -136,7 +148,7 @@ get.leaf.depth <- function(dt_tree) {
# list of paths to each leaf in a tree
paths <- lapply(paths_tmp$vpath, names)
# combine into a resulting path lengths table for a tree
data.table(Depth = sapply(paths, length), ID = To[Leaf == TRUE])
data.table(Depth = lengths(paths), ID = To[Leaf == TRUE])
}, by = Tree]
}
@@ -145,6 +157,6 @@ get.leaf.depth <- function(dt_tree) {
# They are mainly column names inferred by Data.table...
globalVariables(
c(
".N", "N", "Depth", "Quality", "Cover", "Tree", "ID", "Yes", "No", "Feature", "Leaf", "Weight"
".N", "N", "Depth", "Gain", "Cover", "Tree", "ID", "Yes", "No", "Feature", "Leaf", "Weight"
)
)
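A sketch of working with the silently returned table instead of the plot (assuming the 'bst' model from the example above):

dd <- xgb.plot.deepness(bst, plot = FALSE)
# one row per terminal leaf, with Depth, Cover, and Weight columns
dd[, .(leaves = .N, mean_cover = mean(Cover)), by = Depth]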


@@ -1,59 +1,73 @@
#' Plot feature importance as a bar graph
#' Plot feature importance
#'
#' Represents previously calculated feature importance as a bar graph.
#' \code{xgb.plot.importance} uses base R graphics, while \code{xgb.ggplot.importance} uses the ggplot backend.
#'
#' @param importance_matrix a \code{data.table} returned by \code{\link{xgb.importance}}.
#' @param top_n maximal number of top features to include into the plot.
#' @param measure the name of importance measure to plot.
#' When \code{NULL}, 'Gain' would be used for trees and 'Weight' would be used for gblinear.
#' @param rel_to_first whether importance values should be represented as relative to the highest ranked feature.
#' See Details.
#' @param left_margin (base R barplot) allows to adjust the left margin size to fit feature names.
#' When it is NULL, the existing \code{par('mar')} is used.
#' @param cex (base R barplot) passed as \code{cex.names} parameter to \code{barplot}.
#' @param plot (base R barplot) whether a barplot should be produced.
#' If FALSE, only a data.table is returned.
#' @param n_clusters (ggplot only) a \code{numeric} vector containing the min and the max range
#' of the possible number of clusters of bars.
#' @param ... other parameters passed to \code{barplot} (except horiz, border, cex.names, names.arg, and las).
#' - `xgb.plot.importance()` uses base R graphics, while
#' - `xgb.ggplot.importance()` uses "ggplot".
#'
#' @details
#' The graph represents each feature as a horizontal bar of length proportional to the importance of a feature.
#' Features are shown ranked in a decreasing importance order.
#' It works for importances from both \code{gblinear} and \code{gbtree} models.
#' The graph represents each feature as a horizontal bar with length proportional to its
#' importance. Features are sorted by decreasing importance.
#' It works for both "gblinear" and "gbtree" models.
#'
#' When \code{rel_to_first = FALSE}, the values would be plotted as they were in \code{importance_matrix}.
#' For gbtree model, that would mean being normalized to the total of 1
#' When `rel_to_first = FALSE`, the values would be plotted as in `importance_matrix`.
#' For a "gbtree" model, that would mean being normalized to the total of 1
#' ("what is feature's importance contribution relative to the whole model?").
#' For linear models, \code{rel_to_first = FALSE} would show actual values of the coefficients.
#' Setting \code{rel_to_first = TRUE} allows to see the picture from the perspective of
#' For linear models, `rel_to_first = FALSE` would show actual values of the coefficients.
#' Setting `rel_to_first = TRUE` lets you see the picture from the perspective of
#' "what is feature's importance contribution relative to the most important feature?"
#'
#' The ggplot-backend method also performs 1-D clustering of the importance values,
#' with bar colors corresponding to different clusters that have somewhat similar importance values.
#' The "ggplot" backend performs 1-D clustering of the importance values,
#' with bar colors corresponding to different clusters having similar importance values.
#'
#' @param importance_matrix A `data.table` as returned by [xgb.importance()].
#' @param top_n Maximal number of top features to include in the plot.
#' @param measure The name of the importance measure to plot.
#' When `NULL`, 'Gain' is used for trees and 'Weight' for gblinear.
#' @param rel_to_first Whether importance values should be represented as relative to
#' the highest ranked feature, see Details.
#' @param left_margin Adjust the left margin size to fit feature names.
#' When `NULL`, the existing `par("mar")` is used.
#' @param cex Passed as `cex.names` parameter to [graphics::barplot()].
#' @param plot Should the barplot be shown? Default is `TRUE`.
#' @param n_clusters A numeric vector containing the min and the max range
#' of the possible number of clusters of bars.
#' @param ... Other parameters passed to [graphics::barplot()]
#' (except `horiz`, `border`, `cex.names`, `names.arg`, and `las`).
#' Only used in `xgb.plot.importance()`.
#' @return
#' The \code{xgb.plot.importance} function creates a \code{barplot} (when \code{plot=TRUE})
#' and silently returns a processed data.table with \code{n_top} features sorted by importance.
#' The return value depends on the function:
#' - `xgb.plot.importance()`: Invisibly, a "data.table" with `top_n` features sorted
#' by importance. If `plot = TRUE`, the values are also plotted as a barplot.
#' - `xgb.ggplot.importance()`: A customizable "ggplot" object.
#' E.g., to change the title, set `+ ggtitle("A GRAPH NAME")`.
#'
#' The \code{xgb.ggplot.importance} function returns a ggplot graph which could be customized afterwards.
#' E.g., to change the title of the graph, add \code{+ ggtitle("A GRAPH NAME")} to the result.
#'
#' @seealso
#' \code{\link[graphics]{barplot}}.
#' @seealso [graphics::barplot()]
#'
#' @examples
#' data(agaricus.train)
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 3,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' ## Keep the number of threads to 2 for examples
#' nthread <- 2
#' data.table::setDTthreads(nthread)
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
#' max_depth = 3,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' importance_matrix <- xgb.importance(colnames(agaricus.train$data), model = bst)
#' xgb.plot.importance(
#' importance_matrix, rel_to_first = TRUE, xlab = "Relative importance"
#' )
#'
#' xgb.plot.importance(importance_matrix, rel_to_first = TRUE, xlab = "Relative importance")
#'
#' (gg <- xgb.ggplot.importance(importance_matrix, measure = "Frequency", rel_to_first = TRUE))
#' gg <- xgb.ggplot.importance(
#' importance_matrix, measure = "Frequency", rel_to_first = TRUE
#' )
#' gg
#' gg + ggplot2::ylab("Frequency")
#'
#' @rdname xgb.plot.importance
@@ -82,7 +96,13 @@ xgb.plot.importance <- function(importance_matrix = NULL, top_n = NULL, measure
}
# also aggregate, in case the values were not yet summed up by feature
importance_matrix <- importance_matrix[, Importance := sum(get(measure)), by = Feature]
importance_matrix <- importance_matrix[
, lapply(.SD, sum)
, .SDcols = setdiff(names(importance_matrix), "Feature")
, by = Feature
][
, Importance := get(measure)
]
# make sure it's ordered
importance_matrix <- importance_matrix[order(-abs(Importance))]
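A sketch of the documented workflow (assuming the 'bst' model from the example above); the aggregation step shown in the diff means duplicated feature rows in the importance matrix are summed before plotting:

imp <- xgb.importance(model = bst)
xgb.plot.importance(imp, top_n = 10, rel_to_first = TRUE,
                    xlab = "Relative importance")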


@@ -1,17 +1,8 @@
#' Project all trees on one tree and plot it
#' Project all trees on one tree
#'
#' Visualization of the ensemble of trees as a single collective unit.
#'
#' @param model produced by the \code{xgb.train} function.
#' @param feature_names names of each feature as a \code{character} vector.
#' @param features_keep number of features to keep in each position of the multi trees.
#' @param plot_width width in pixels of the graph to produce
#' @param plot_height height in pixels of the graph to produce
#' @param render a logical flag for whether the graph should be rendered (see Value).
#' @param ... currently not used
#'
#' @details
#'
#' This function tries to capture the complexity of a gradient boosted tree model
#' in a cohesive way by compressing an ensemble of trees into a single tree-graph representation.
#' The goal is to improve the interpretability of a model generally seen as a black box.
@@ -24,49 +15,57 @@
#' Moreover, the trees tend to reuse the same features.
#'
#' The function projects each tree onto one, and keeps for each position the
#' \code{features_keep} first features (based on the Gain per feature measure).
#' `features_keep` first features (based on the Gain per feature measure).
#'
#' This function is inspired by this blog post:
#' \url{https://wellecks.wordpress.com/2015/02/21/peering-into-the-black-box-visualizing-lambdamart/}
#' <https://wellecks.wordpress.com/2015/02/21/peering-into-the-black-box-visualizing-lambdamart/>
#'
#' @return
#'
#' When \code{render = TRUE}:
#' returns a rendered graph object which is an \code{htmlwidget} of class \code{grViz}.
#' Similar to ggplot objects, it needs to be printed to see it when not running from command line.
#'
#' When \code{render = FALSE}:
#' silently returns a graph object which is of DiagrammeR's class \code{dgr_graph}.
#' This could be useful if one wants to modify some of the graph attributes
#' before rendering the graph with \code{\link[DiagrammeR]{render_graph}}.
#' @inheritParams xgb.plot.tree
#' @param features_keep Number of features to keep in each position of the multi trees,
#' by default 5.
#' @inherit xgb.plot.tree return
#'
#' @examples
#'
#' data(agaricus.train, package='xgboost')
#' data(agaricus.train, package = "xgboost")
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 15,
#' eta = 1, nthread = 2, nrounds = 30, objective = "binary:logistic",
#' min_child_weight = 50, verbose = 0)
#' ## Keep the number of threads to 2 for examples
#' nthread <- 2
#' data.table::setDTthreads(nthread)
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
#' max_depth = 15,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 30,
#' objective = "binary:logistic",
#' min_child_weight = 50,
#' verbose = 0
#' )
#'
#' p <- xgb.plot.multi.trees(model = bst, features_keep = 3)
#' print(p)
#'
#' \dontrun{
#' # Below is an example of how to save this plot to a file.
#' # Note that for `export_graph` to work, the DiagrammeRsvg and rsvg packages must also be installed.
#' # Note that for export_graph() to work, the {DiagrammeRsvg} and {rsvg} packages
#' # must also be installed.
#'
#' library(DiagrammeR)
#' gr <- xgb.plot.multi.trees(model=bst, features_keep = 3, render=FALSE)
#' export_graph(gr, 'tree.pdf', width=1500, height=600)
#'
#' gr <- xgb.plot.multi.trees(model = bst, features_keep = 3, render = FALSE)
#' export_graph(gr, "tree.pdf", width = 1500, height = 600)
#' }
#'
#' @export
xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5, plot_width = NULL, plot_height = NULL,
xgb.plot.multi.trees <- function(model, features_keep = 5, plot_width = NULL, plot_height = NULL,
render = TRUE, ...) {
if (!requireNamespace("DiagrammeR", quietly = TRUE)) {
stop("DiagrammeR is required for xgb.plot.multi.trees")
}
check.deprecation(...)
tree.matrix <- xgb.model.dt.tree(feature_names = feature_names, model = model)
tree.matrix <- xgb.model.dt.tree(model = model)
# the first number of the path identifies the tree; the following numbers describe the path to follow within it
# root init
@@ -93,13 +92,13 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5,
data.table::set(tree.matrix, j = nm, value = sub("^\\d+-", "", tree.matrix[[nm]]))
nodes.dt <- tree.matrix[
, .(Quality = sum(Quality))
, .(Gain = sum(Gain))
, by = .(abs.node.position, Feature)
][, .(Text = paste0(
paste0(
Feature[seq_len(min(length(Feature), features_keep))],
" (",
format(Quality[seq_len(min(length(Quality), features_keep))], digits = 5),
format(Gain[seq_len(min(length(Gain), features_keep))], digits = 5),
")"
),
collapse = "\n"
@@ -110,11 +109,10 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5,
edges.dt <- data.table::rbindlist(
l = list(
tree.matrix[Feature != "Leaf", .(abs.node.position, Yes)],
tree.matrix[Feature != "Leaf", .(abs.node.position, No)]
tree.matrix[Feature != "Leaf", .(From = abs.node.position, To = Yes)],
tree.matrix[Feature != "Leaf", .(From = abs.node.position, To = No)]
)
)
data.table::setnames(edges.dt, c("From", "To"))
edges.dt <- edges.dt[, .N, .(From, To)]
edges.dt[, N := NULL]
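A sketch of the render = FALSE path, which returns a DiagrammeR graph whose From/To edges were built as above (assuming 'bst' from the earlier example):

gr <- xgb.plot.multi.trees(model = bst, features_keep = 3, render = FALSE)
DiagrammeR::render_graph(gr, width = 800, height = 600)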


@@ -1,106 +1,163 @@
#' SHAP contribution dependency plots
#' SHAP dependence plots
#'
#' Visualizing the SHAP feature contribution to prediction dependencies on feature value.
#' Visualizes SHAP values against feature values to gain an impression of feature effects.
#'
#' @param data data as a \code{matrix} or \code{dgCMatrix}.
#' @param shap_contrib a matrix of SHAP contributions that was computed earlier for the above
#' \code{data}. When it is NULL, it is computed internally using \code{model} and \code{data}.
#' @param features a vector of either column indices or of feature names to plot. When it is NULL,
#' feature importance is calculated, and \code{top_n} high ranked features are taken.
#' @param top_n when \code{features} is NULL, top_n [1, 100] most important features in a model are taken.
#' @param model an \code{xgb.Booster} model. It has to be provided when either \code{shap_contrib}
#' or \code{features} is missing.
#' @param trees passed to \code{\link{xgb.importance}} when \code{features = NULL}.
#' @param target_class is only relevant for multiclass models. When it is set to a 0-based class index,
#' only SHAP contributions for that specific class are used.
#' If it is not set, SHAP importances are averaged over all classes.
#' @param approxcontrib passed to \code{\link{predict.xgb.Booster}} when \code{shap_contrib = NULL}.
#' @param subsample a random fraction of data points to use for plotting. When it is NULL,
#' it is set so that up to 100K data points are used.
#' @param n_col a number of columns in a grid of plots.
#' @param col color of the scatterplot markers.
#' @param pch scatterplot marker.
#' @param discrete_n_uniq a maximal number of unique values in a feature to consider it as discrete.
#' @param discrete_jitter an \code{amount} parameter of jitter added to discrete features' positions.
#' @param ylab a y-axis label in 1D plots.
#' @param plot_NA whether the contributions of cases with missing values should also be plotted.
#' @param col_NA a color of marker for missing value contributions.
#' @param pch_NA a marker type for NA values.
#' @param pos_NA a relative position of the x-location where NA values are shown:
#' \code{min(x) + (max(x) - min(x)) * pos_NA}.
#' @param plot_loess whether to plot loess-smoothed curves. The smoothing is only done for features with
#' more than 5 distinct values.
#' @param col_loess a color to use for the loess curves.
#' @param span_loess the \code{span} parameter in \code{\link[stats]{loess}}'s call.
#' @param which whether to do univariate or bivariate plotting. NOTE: only 1D is implemented so far.
#' @param plot whether a plot should be drawn. If FALSE, only a list of matrices is returned.
#' @param ... other parameters passed to \code{plot}.
#' @param data The data to explain as a `matrix`, `dgCMatrix`, or `data.frame`.
#' @param shap_contrib Matrix of SHAP contributions of `data`.
#' The default (`NULL`) computes it from `model` and `data`.
#' @param features Vector of column indices or feature names to plot. When `NULL`
#' (default), the `top_n` most important features are selected by [xgb.importance()].
#' @param top_n How many of the most important features (<= 100) should be selected?
#' By default 1 for SHAP dependence and 10 for SHAP summary.
#' Only used when `features = NULL`.
#' @param model An `xgb.Booster` model. Only required when `shap_contrib = NULL` or
#' `features = NULL`.
#' @param trees Passed to [xgb.importance()] when `features = NULL`.
#' @param target_class Only relevant for multiclass models. The default (`NULL`)
#' averages the SHAP values over all classes. Pass a (0-based) class index
#' to show only SHAP values of that class.
#' @param approxcontrib Passed to `predict()` when `shap_contrib = NULL`.
#' @param subsample Fraction of data points randomly picked for plotting.
#' The default (`NULL`) will use up to 100k data points.
#' @param n_col Number of columns in a grid of plots.
#' @param col Color of the scatterplot markers.
#' @param pch Scatterplot marker.
#' @param discrete_n_uniq Maximal number of unique feature values to consider the
#' feature as discrete.
#' @param discrete_jitter Jitter amount added to the values of discrete features.
#' @param ylab The y-axis label in 1D plots.
#' @param plot_NA Should contributions of cases with missing values be plotted?
#' Default is `TRUE`.
#' @param col_NA Color of marker for missing value contributions.
#' @param pch_NA Marker type for `NA` values.
#' @param pos_NA Relative position of the x-location where `NA` values are shown:
#' `min(x) + (max(x) - min(x)) * pos_NA`.
#' @param plot_loess Should loess-smoothed curves be plotted? (Default is `TRUE`).
#' The smoothing is only done for features with more than 5 distinct values.
#' @param col_loess Color of loess curves.
#' @param span_loess The `span` parameter of [stats::loess()].
#' @param which Whether to do univariate or bivariate plotting. Currently, only "1d" is implemented.
#' @param plot Should the plot be drawn? (Default is `TRUE`).
#' If `FALSE`, only a list of matrices is returned.
#' @param ... Other parameters passed to [graphics::plot()].
#'
#' @details
#'
#' These scatterplots represent how SHAP feature contributions depend on feature values.
#' The similarity to partial dependency plots is that they also give an idea for how feature values
#' affect predictions. However, in partial dependency plots, we usually see marginal dependencies
#' of model prediction on feature value, while SHAP contribution dependency plots display the estimated
#' contributions of a feature to model prediction for each individual case.
#' The similarity to partial dependence plots is that they also give an idea for how feature values
#' affect predictions. However, in partial dependence plots, we see marginal dependencies
#' of model prediction on feature value, while SHAP dependence plots display the estimated
#' contributions of a feature to the prediction for each individual case.
#'
#' When \code{plot_loess = TRUE} is set, feature values are rounded to 3 significant digits and
#' weighted LOESS is computed and plotted, where weights are the numbers of data points
#' When `plot_loess = TRUE`, feature values are rounded to three significant digits and
#' weighted LOESS is computed and plotted, where the weights are the numbers of data points
#' at each rounded value.
#'
#' Note: SHAP contributions are shown on the scale of model margin. E.g., for a logistic binomial objective,
#' the margin is prediction before a sigmoidal transform into probability-like values.
#' Note: SHAP contributions are on the scale of the model margin.
#' E.g., for a logistic binomial objective, the margin is on log-odds scale.
#' Also, since SHAP stands for "SHapley Additive exPlanation" (model prediction = sum of SHAP
#' contributions for all features + bias), depending on the objective used, transforming SHAP
#' contributions for a feature from the marginal to the prediction space is not necessarily
#' a meaningful thing to do.
#'
#' @return
#'
#' In addition to producing plots (when \code{plot=TRUE}), it silently returns a list of two matrices:
#' \itemize{
#' \item \code{data} the values of selected features;
#' \item \code{shap_contrib} the contributions of selected features.
#' }
#' In addition to producing plots (when `plot = TRUE`), it silently returns a list of two matrices:
#' - `data`: Feature value matrix.
#' - `shap_contrib`: Corresponding SHAP value matrix.
#'
#' @references
#'
#' Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions", NIPS Proceedings 2017, \url{https://arxiv.org/abs/1705.07874}
#'
#' Scott M. Lundberg, Su-In Lee, "Consistent feature attribution for tree ensembles", \url{https://arxiv.org/abs/1706.06060}
#' 1. Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions",
#' NIPS Proceedings 2017, <https://arxiv.org/abs/1705.07874>
#' 2. Scott M. Lundberg, Su-In Lee, "Consistent feature attribution for tree ensembles",
#' <https://arxiv.org/abs/1706.06060>
#'
#' @examples
#'
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' data(agaricus.train, package = "xgboost")
#' data(agaricus.test, package = "xgboost")
#'
#' bst <- xgboost(agaricus.train$data, agaricus.train$label, nrounds = 50,
#' eta = 0.1, max_depth = 3, subsample = .5,
#' method = "hist", objective = "binary:logistic", nthread = 2, verbose = 0)
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#' nrounds <- 20
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(agaricus.train$data, agaricus.train$label),
#' nrounds = nrounds,
#' eta = 0.1,
#' max_depth = 3,
#' subsample = 0.5,
#' objective = "binary:logistic",
#' nthread = nthread,
#' verbose = 0
#' )
#'
#' xgb.plot.shap(agaricus.test$data, model = bst, features = "odor=none")
#'
#' contr <- predict(bst, agaricus.test$data, predcontrib = TRUE)
#' xgb.plot.shap(agaricus.test$data, contr, model = bst, top_n = 12, n_col = 3)
#' xgb.ggplot.shap.summary(agaricus.test$data, contr, model = bst, top_n = 12) # Summary plot
#'
#' # multiclass example - plots for each class separately:
#' # Summary plot
#' xgb.ggplot.shap.summary(agaricus.test$data, contr, model = bst, top_n = 12)
#'
#' # Multiclass example - plots for each class separately:
#' nclass <- 3
#' nrounds <- 20
#' x <- as.matrix(iris[, -5])
#' set.seed(123)
#' is.na(x[sample(nrow(x) * 4, 30)]) <- TRUE # introduce some missing values
#' mbst <- xgboost(data = x, label = as.numeric(iris$Species) - 1, nrounds = nrounds,
#' max_depth = 2, eta = 0.3, subsample = .5, nthread = 2,
#' objective = "multi:softprob", num_class = nclass, verbose = 0)
#' trees0 <- seq(from=0, by=nclass, length.out=nrounds)
#'
#' mbst <- xgb.train(
#' data = xgb.DMatrix(x, label = as.numeric(iris$Species) - 1),
#' nrounds = nrounds,
#' max_depth = 2,
#' eta = 0.3,
#' subsample = 0.5,
#' nthread = nthread,
#' objective = "multi:softprob",
#' num_class = nclass,
#' verbose = 0
#' )
#' trees0 <- seq(from = 0, by = nclass, length.out = nrounds)
#' col <- rgb(0, 0, 1, 0.5)
#' xgb.plot.shap(x, model = mbst, trees = trees0, target_class = 0, top_n = 4,
#' n_col = 2, col = col, pch = 16, pch_NA = 17)
#' xgb.plot.shap(x, model = mbst, trees = trees0 + 1, target_class = 1, top_n = 4,
#' n_col = 2, col = col, pch = 16, pch_NA = 17)
#' xgb.plot.shap(x, model = mbst, trees = trees0 + 2, target_class = 2, top_n = 4,
#' n_col = 2, col = col, pch = 16, pch_NA = 17)
#' xgb.ggplot.shap.summary(x, model = mbst, target_class = 0, top_n = 4) # Summary plot
#'
#' xgb.plot.shap(
#' x,
#' model = mbst,
#' trees = trees0,
#' target_class = 0,
#' top_n = 4,
#' n_col = 2,
#' col = col,
#' pch = 16,
#' pch_NA = 17
#' )
#'
#' xgb.plot.shap(
#' x,
#' model = mbst,
#' trees = trees0 + 1,
#' target_class = 1,
#' top_n = 4,
#' n_col = 2,
#' col = col,
#' pch = 16,
#' pch_NA = 17
#' )
#'
#' xgb.plot.shap(
#' x,
#' model = mbst,
#' trees = trees0 + 2,
#' target_class = 2,
#' top_n = 4,
#' n_col = 2,
#' col = col,
#' pch = 16,
#' pch_NA = 17
#' )
#'
#' # Summary plot
#' xgb.ggplot.shap.summary(x, model = mbst, target_class = 0, top_n = 4)
#'
#' @rdname xgb.plot.shap
#' @export
@@ -183,46 +240,56 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
invisible(list(data = data, shap_contrib = shap_contrib))
}
#' SHAP contribution dependency summary plot
#' SHAP summary plot
#'
#' Compare SHAP contributions of different features.
#' Visualizes SHAP contributions of different features.
#'
#' A point plot (each point representing one sample from \code{data}) is
#' A point plot (each point representing one observation from `data`) is
#' produced for each feature, with the points plotted on the SHAP value axis.
#' Each point (observation) is coloured based on its feature value. The plot
#' hence allows us to see which features have a negative / positive contribution
#' Each point (observation) is coloured based on its feature value.
#'
#' The plot lets you see which features have a negative / positive contribution
#' on the model prediction, and whether the contribution is different for larger
#' or smaller values of the feature. We effectively try to replicate the
#' \code{summary_plot} function from https://github.com/slundberg/shap.
#' or smaller values of the feature. Inspired by the summary plot of
#' <https://github.com/shap/shap>.
#'
#' @inheritParams xgb.plot.shap
#'
#' @return A \code{ggplot2} object.
#' @return A `ggplot2` object.
#' @export
#'
#' @examples # See \code{\link{xgb.plot.shap}}.
#' @seealso \code{\link{xgb.plot.shap}}, \code{\link{xgb.ggplot.shap.summary}},
#' \url{https://github.com/slundberg/shap}
#' @examples
#' # See examples in xgb.plot.shap()
#'
#' @seealso [xgb.plot.shap()], [xgb.ggplot.shap.summary()],
#' and the Python library <https://github.com/shap/shap>.
xgb.plot.shap.summary <- function(data, shap_contrib = NULL, features = NULL, top_n = 10, model = NULL,
trees = NULL, target_class = NULL, approxcontrib = FALSE, subsample = NULL) {
# Only ggplot implementation is available.
xgb.ggplot.shap.summary(data, shap_contrib, features, top_n, model, trees, target_class, approxcontrib, subsample)
}
#' Prepare data for SHAP plots. To be used in xgb.plot.shap, xgb.plot.shap.summary, etc.
#' Internal utility function.
#' Prepare data for SHAP plots
#'
#' Internal function used in [xgb.plot.shap()], [xgb.plot.shap.summary()], etc.
#'
#' @inheritParams xgb.plot.shap
#' @param max_observations Maximum number of observations to consider.
#' @keywords internal
#' @noRd
#'
#' @return A list containing: 'data', a matrix containing sample observations
#' and their feature values; 'shap_contrib', a matrix containing the SHAP contribution
#' values for these observations.
#' @return
#' A list containing:
#' - `data`: The matrix of feature values.
#' - `shap_contrib`: The matrix with corresponding SHAP values.
xgb.shap.data <- function(data, shap_contrib = NULL, features = NULL, top_n = 1, model = NULL,
trees = NULL, target_class = NULL, approxcontrib = FALSE,
subsample = NULL, max_observations = 100000) {
if (!is.matrix(data) && !inherits(data, "dgCMatrix"))
stop("data: must be either matrix or dgCMatrix")
if (!inherits(data, c("matrix", "dsparseMatrix", "data.frame")))
stop("data: must be matrix, sparse matrix, or data.frame.")
if (inherits(data, "data.frame") && length(class(data)) > 1L) {
data <- as.data.frame(data)
}
if (is.null(shap_contrib) && (is.null(model) || !inherits(model, "xgb.Booster")))
stop("when shap_contrib is not provided, one must provide an xgb.Booster model")
@@ -230,18 +297,31 @@ xgb.shap.data <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
if (is.null(features) && (is.null(model) || !inherits(model, "xgb.Booster")))
stop("when features are not provided, one must provide an xgb.Booster model to rank the features")
last_dim <- function(v) dim(v)[length(dim(v))]
if (!is.null(shap_contrib) &&
(!is.matrix(shap_contrib) || nrow(shap_contrib) != nrow(data) || ncol(shap_contrib) != ncol(data) + 1))
(!is.array(shap_contrib) || nrow(shap_contrib) != nrow(data) || last_dim(shap_contrib) != ncol(data) + 1))
stop("shap_contrib is not compatible with the provided data")
if (is.character(features) && is.null(colnames(data)))
stop("either provide `data` with column names or provide `features` as column indices")
if (is.null(model$feature_names) && model$nfeatures != ncol(data))
model_feature_names <- NULL
if (is.null(features) && !is.null(model)) {
model_feature_names <- xgb.feature_names(model)
}
if (is.null(model_feature_names) && xgb.num_feature(model) != ncol(data))
stop("if model has no feature_names, columns in `data` must match features in model")
if (!is.null(subsample)) {
idx <- sample(x = seq_len(nrow(data)), size = as.integer(subsample * nrow(data)), replace = FALSE)
if (subsample <= 0 || subsample >= 1) {
stop("'subsample' must be a number between zero and one (non-inclusive).")
}
sample_size <- as.integer(subsample * nrow(data))
if (sample_size < 2) {
stop("Sampling fraction involves less than 2 rows.")
}
idx <- sample(x = seq_len(nrow(data)), size = sample_size, replace = FALSE)
} else {
idx <- seq_len(min(nrow(data), max_observations))
}
@@ -250,23 +330,43 @@ xgb.shap.data <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
colnames(data) <- paste0("X", seq_len(ncol(data)))
}
if (!is.null(shap_contrib)) {
if (is.list(shap_contrib)) { # multiclass: either choose a class or merge
shap_contrib <- if (!is.null(target_class)) shap_contrib[[target_class + 1]] else Reduce("+", lapply(shap_contrib, abs))
}
shap_contrib <- shap_contrib[idx, ]
if (is.null(colnames(shap_contrib))) {
colnames(shap_contrib) <- paste0("X", seq_len(ncol(data)))
}
} else {
shap_contrib <- predict(model, newdata = data, predcontrib = TRUE, approxcontrib = approxcontrib)
if (is.list(shap_contrib)) { # multiclass: either choose a class or merge
shap_contrib <- if (!is.null(target_class)) shap_contrib[[target_class + 1]] else Reduce("+", lapply(shap_contrib, abs))
reshape_3d_shap_contrib <- function(shap_contrib, target_class) {
# multiclass: either choose a class or merge
if (is.list(shap_contrib)) {
if (!is.null(target_class)) {
shap_contrib <- shap_contrib[[target_class + 1]]
} else {
shap_contrib <- Reduce("+", lapply(shap_contrib, abs))
}
} else if (length(dim(shap_contrib)) > 2) {
if (!is.null(target_class)) {
orig_shape <- dim(shap_contrib)
shap_contrib <- shap_contrib[, target_class + 1, , drop = TRUE]
if (!is.matrix(shap_contrib)) {
shap_contrib <- matrix(shap_contrib, orig_shape[c(1L, 3L)])
}
} else {
shap_contrib <- apply(abs(shap_contrib), c(1L, 3L), sum)
}
}
return(shap_contrib)
}
if (is.null(shap_contrib)) {
shap_contrib <- predict(
model,
newdata = data,
predcontrib = TRUE,
approxcontrib = approxcontrib
)
}
shap_contrib <- reshape_3d_shap_contrib(shap_contrib, target_class)
if (is.null(colnames(shap_contrib))) {
colnames(shap_contrib) <- paste0("X", seq_len(ncol(data)))
}
if (is.null(features)) {
if (!is.null(model$feature_names)) {
if (!is.null(model_feature_names)) {
imp <- xgb.importance(model = model, trees = trees)
} else {
imp <- xgb.importance(model = model, trees = trees, feature_names = colnames(data))


@@ -1,74 +1,104 @@
#' Plot a boosted tree model
#' Plot boosted trees
#'
#' Read a tree model text dump and plot the model.
#'
#' @param feature_names names of each feature as a \code{character} vector.
#' @param model produced by the \code{xgb.train} function.
#' @param trees an integer vector of tree indices that should be visualized.
#' If set to \code{NULL}, all trees of the model are included.
#' IMPORTANT: the tree index in xgboost model is zero-based
#' (e.g., use \code{trees = 0:2} for the first 3 trees in a model).
#' @param plot_width the width of the diagram in pixels.
#' @param plot_height the height of the diagram in pixels.
#' @param render a logical flag for whether the graph should be rendered (see Value).
#' @param show_node_id a logical flag for whether to show node id's in the graph.
#' @param ... currently not used.
#'
#' @details
#' When using `style="xgboost"`, the content of each node is visualized as follows:
#' - For non-terminal nodes, it will display the split condition (the feature number or name if
#' available, plus the threshold that decides which child node to follow next).
#' - Those nodes will be connected to their children by arrows that indicate whether the
#' branch corresponds to the condition being met or not being met.
#' - Terminal (leaf) nodes contain the margin to add when ending there.
#'
#' The content of each node is organised that way:
#'
#' \itemize{
#' \item Feature name.
#' \item \code{Cover}: The sum of second order gradient of training data classified to the leaf.
#' If it is square loss, this simply corresponds to the number of instances seen by a split
#' or collected by a leaf during training.
#' The deeper in the tree a node is, the lower this metric will be.
#' \item \code{Gain} (for split nodes): the information gain metric of a split
#' When using `style="R"`, the content of each node is visualized like this:
#' - *Feature name*.
#' - *Cover:* The sum of second order gradients of training data.
#' For the squared loss, this simply corresponds to the number of instances in the node.
#' The deeper in the tree, the lower the value.
#' - *Gain* (for split nodes): Information gain metric of a split
#' (corresponds to the importance of the node in the model).
#' \item \code{Value} (for leafs): the margin value that the leaf may contribute to prediction.
#' }
#' The tree root nodes also indicate the Tree index (0-based).
#' - *Value* (for leaves): Margin value that the leaf may contribute to the prediction.
#'
#' The tree root nodes also indicate the tree index (0-based).
#'
#' The "Yes" branches are marked by the "< split_value" label.
#' The branches that also used for missing values are marked as bold
#' The branches also used for missing values are marked as bold
#' (as in "carrying extra capacity").
#'
#' This function uses \href{https://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR.
#' This function uses [GraphViz](https://www.graphviz.org/) as DiagrammeR backend.
#'
#' @param model Object of class `xgb.Booster`. If it contains feature names (they can be set through
#' [setinfo()]), they will be used in the output from this function.
#' @param trees An integer vector of tree indices that should be used.
#' The default (`NULL`) uses all trees.
#' Useful, e.g., in multiclass classification to get only
#' the trees of one class. *Important*: the tree index in XGBoost models
#' is zero-based (e.g., use `trees = 0:2` for the first three trees).
#' @param plot_width,plot_height Width and height of the graph in pixels.
#' The values are passed to `DiagrammeR::render_graph()`.
#' @param render Should the graph be rendered or not? The default is `TRUE`.
#' @param show_node_id A logical flag for whether to show node IDs in the graph.
#' @param style Style to use for the plot:
#' - `"xgboost"`: will use the plot style defined in the core XGBoost library,
#' which is shared between different interfaces through the 'dot' format. This
#' style was not available before version 2.1.0 in R. It always plots the trees
#' vertically (from top to bottom).
#' - `"R"`: will use the style defined from XGBoost's R interface, which predates
#' the introduction of the standardized style from the core library. It might plot
#' the trees horizontally (from left to right).
#'
#' Note that `style="xgboost"` is only supported when all of the following conditions are met:
#' - Only a single tree is being plotted.
#' - Node IDs are not added to the graph.
#' - The graph is being returned as an `htmlwidget` (`render = TRUE`).
#' @param ... Currently not used.
#' @return
#'
#' When \code{render = TRUE}:
#' returns a rendered graph object which is an \code{htmlwidget} of class \code{grViz}.
#' Similar to ggplot objects, it needs to be printed to see it when not running from command line.
#'
#' When \code{render = FALSE}:
#' silently returns a graph object which is of DiagrammeR's class \code{dgr_graph}.
#' This could be useful if one wants to modify some of the graph attributes
#' before rendering the graph with \code{\link[DiagrammeR]{render_graph}}.
#' The value depends on the `render` parameter:
#' - If `render = TRUE` (default): Rendered graph object which is an htmlwidget of
#' class `grViz`. Similar to "ggplot" objects, it needs to be printed when not
#' running from the command line.
#' - If `render = FALSE`: Graph object which is of DiagrammeR's class `dgr_graph`.
#' This could be useful if one wants to modify some of the graph attributes
#' before rendering the graph with `DiagrammeR::render_graph()`.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.train, package = "xgboost")
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(agaricus.train$data, agaricus.train$label),
#' max_depth = 3,
#' eta = 1,
#' nthread = 2,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' # plot the first tree, using the style from xgboost's core library
#' # (this plot should look identical to the ones generated from other
#' # interfaces like the python package for xgboost)
#' xgb.plot.tree(model = bst, trees = 0, style = "xgboost")
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 3,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' # plot all the trees
#' xgb.plot.tree(model = bst)
#' xgb.plot.tree(model = bst, trees = NULL)
#'
#' # plot only the first tree and display the node ID:
#' xgb.plot.tree(model = bst, trees = 0, show_node_id = TRUE)
#'
#' \dontrun{
#' # Below is an example of how to save this plot to a file.
#' # Note that for `export_graph` to work, the DiagrammeRsvg and rsvg packages must also be installed.
#' # Note that for export_graph() to work, the {DiagrammeRsvg}
#' # and {rsvg} packages must also be installed.
#'
#' library(DiagrammeR)
#' gr <- xgb.plot.tree(model=bst, trees=0:1, render=FALSE)
#' export_graph(gr, 'tree.pdf', width=1500, height=1900)
#' export_graph(gr, 'tree.png', width=1500, height=1900)
#'
#' gr <- xgb.plot.tree(model = bst, trees = 0:1, render = FALSE)
#' export_graph(gr, "tree.pdf", width = 1500, height = 1900)
#' export_graph(gr, "tree.png", width = 1500, height = 1900)
#' }
#'
#' @export
xgb.plot.tree <- function(feature_names = NULL, model = NULL, trees = NULL, plot_width = NULL, plot_height = NULL,
render = TRUE, show_node_id = FALSE, ...) {
xgb.plot.tree <- function(model = NULL, trees = NULL, plot_width = NULL, plot_height = NULL,
render = TRUE, show_node_id = FALSE, style = c("R", "xgboost"), ...) {
check.deprecation(...)
if (!inherits(model, "xgb.Booster")) {
stop("model: Has to be an object of class xgb.Booster")
@@ -78,9 +108,20 @@ xgb.plot.tree <- function(feature_names = NULL, model = NULL, trees = NULL, plot
stop("DiagrammeR package is required for xgb.plot.tree", call. = FALSE)
}
dt <- xgb.model.dt.tree(feature_names = feature_names, model = model, trees = trees)
style <- as.character(head(style, 1L))
stopifnot(style %in% c("R", "xgboost"))
if (style == "xgboost") {
if (NROW(trees) != 1L || !render || show_node_id) {
stop("style='xgboost' is only supported for single, rendered tree, without node IDs.")
}
dt[, label := paste0(Feature, "\nCover: ", Cover, ifelse(Feature == "Leaf", "\nValue: ", "\nGain: "), Quality)]
txt <- xgb.dump(model, dump_format = "dot")
return(DiagrammeR::grViz(txt[[trees + 1]], width = plot_width, height = plot_height))
}
dt <- xgb.model.dt.tree(model = model, trees = trees)
dt[, label := paste0(Feature, "\nCover: ", Cover, ifelse(Feature == "Leaf", "\nValue: ", "\nGain: "), Gain)]
if (show_node_id)
dt[, label := paste0(ID, ": ", label)]
dt[Node == 0, label := paste0("Tree ", Tree, "\n", label)]
@@ -147,4 +188,4 @@ xgb.plot.tree <- function(feature_names = NULL, model = NULL, trees = NULL, plot
# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by Data.table...
globalVariables(c("Feature", "ID", "Cover", "Quality", "Split", "Yes", "No", "Missing", ".", "shape", "filledcolor", "label"))
globalVariables(c("Feature", "ID", "Cover", "Gain", "Split", "Yes", "No", "Missing", ".", "shape", "filledcolor", "label"))


@@ -1,38 +1,59 @@
#' Save xgboost model to binary file
#' Save XGBoost model to binary file
#'
#' Save xgboost model to a file in binary format.
#' Save XGBoost model to a file in binary or JSON format.
#'
#' @param model model object of \code{xgb.Booster} class.
#' @param fname name of the file to write.
#' @param model Model object of `xgb.Booster` class.
#' @param fname Name of the file to write. Its extension determines the serialization format:
#' - ".ubj": Use the universal binary JSON format (recommended).
#' This format uses binary types for, e.g., floating point numbers, thereby avoiding the loss
#' of precision that a conversion to human-readable JSON text could incur.
#' - ".json": Use plain JSON, which is a human-readable format.
#' - ".deprecated": Use **deprecated** binary format. This format will
#' not be able to save attributes introduced after v1 of XGBoost, such as the "best_iteration"
#' attribute that boosters might keep, nor feature names or user-specifiec attributes.
#' - If the format is not specified through one of the file extensions above, it will
#' default to UBJ.
#'
#' @details
#' This methods allows to save a model in an xgboost-internal binary format which is universal
#' among the various xgboost interfaces. In R, the saved model file could be read-in later
#' using either the \code{\link{xgb.load}} function or the \code{xgb_model} parameter
#' of \code{\link{xgb.train}}.
#'
#' Note: a model can also be saved as an R-object (e.g., by using \code{\link[base]{readRDS}}
#' or \code{\link[base]{save}}). However, it would then only be compatible with R, and
#' corresponding R-methods would need to be used to load it. Moreover, persisting the model with
#' \code{\link[base]{readRDS}} or \code{\link[base]{save}}) will cause compatibility problems in
#' future versions of XGBoost. Consult \code{\link{a-compatibility-note-for-saveRDS-save}} to learn
#' how to persist models in a future-proof way, i.e. to make the model accessible in future
#' This method allows saving a model in an XGBoost-internal binary or text format that is universal
#' among the various XGBoost interfaces. In R, the saved model file can be read back later
#' using either the [xgb.load()] function or the `xgb_model` parameter of [xgb.train()].
#'
#' Note: a model can also be saved as an R object (e.g., by using [readRDS()]
#' or [save()]). However, it would then only be compatible with R, and
#' corresponding R methods would need to be used to load it. Moreover, persisting the model with
#' [readRDS()] or [save()] might cause compatibility problems in
#' future versions of XGBoost. Consult [a-compatibility-note-for-saveRDS-save] to learn
#' how to persist models in a future-proof way, i.e., to make the model accessible in future
#' releases of XGBoost.
#'
#' @seealso
#' \code{\link{xgb.load}}, \code{\link{xgb.Booster.complete}}.
#' @seealso [xgb.load()]
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package = "xgboost")
#' data(agaricus.test, package = "xgboost")
#'
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#'
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#'   eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' xgb.save(bst, 'xgb.model')
#' bst <- xgb.load('xgb.model')
#' if (file.exists('xgb.model')) file.remove('xgb.model')
#' pred <- predict(bst, test$data)
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(train$data, label = train$label),
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' fname <- file.path(tempdir(), "xgb.ubj")
#' xgb.save(bst, fname)
#' bst <- xgb.load(fname)
#' @export
xgb.save <- function(model, fname) {
if (typeof(fname) != "character")
@@ -41,8 +62,7 @@ xgb.save <- function(model, fname) {
stop("model must be xgb.Booster.",
if (inherits(model, "xgb.DMatrix")) " Use xgb.DMatrix.save to save an xgb.DMatrix object." else "")
}
model <- xgb.Booster.complete(model, saveraw = FALSE)
fname <- path.expand(fname)
.Call(XGBoosterSaveModel_R, model$handle, fname[1])
.Call(XGBoosterSaveModel_R, xgb.get.handle(model), enc2utf8(fname[1]))
return(TRUE)
}
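
As a hedged illustration of the extension-based format dispatch documented above (an editorial sketch, not part of the diff; assumes a booster `bst` trained as in the examples):

fname_ubj  <- file.path(tempdir(), "model.ubj")   # binary JSON: no precision loss
fname_json <- file.path(tempdir(), "model.json")  # plain JSON: human-readable
xgb.save(bst, fname_ubj)
xgb.save(bst, fname_json)
# Either file can be read back with xgb.load(), or passed as `xgb_model` to xgb.train()
bst2 <- xgb.load(fname_json)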


@@ -1,31 +1,40 @@
#' Save xgboost model to R's raw vector;
#' users can call xgb.load.raw to load the model back from the raw vector
#' Save XGBoost model to R's raw vector
#'
#' Save xgboost model from xgboost or xgb.train
#' Save XGBoost model from [xgboost()] or [xgb.train()].
#' Call [xgb.load.raw()] to load the model back from raw vector.
#'
#' @param model the model object.
#' @param raw_format The format for encoding the booster. Available options are
#' \itemize{
#' \item \code{json}: Encode the booster into JSON text document.
#' \item \code{ubj}: Encode the booster into Universal Binary JSON.
#' \item \code{deprecated}: Encode the booster into old customized binary format.
#' }
#'
#' Right now the default is \code{deprecated}, but it will be changed to \code{ubj} in an upcoming release.
#' @param model The model object.
#' @param raw_format The format for encoding the booster:
#' - "json": Encode the booster into JSON text document.
#' - "ubj": Encode the booster into Universal Binary JSON.
#' - "deprecated": Encode the booster into old customized binary format.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package = "xgboost")
#' data(agaricus.test, package = "xgboost")
#'
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#'
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#'   eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#'
#' bst <- xgb.train(
#' data = xgb.DMatrix(train$data, label = train$label),
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' raw <- xgb.save.raw(bst)
#' bst <- xgb.load.raw(raw)
#' pred <- predict(bst, test$data)
#'
#' @export
xgb.save.raw <- function(model, raw_format = "deprecated") {
xgb.save.raw <- function(model, raw_format = "ubj") {
handle <- xgb.get.handle(model)
args <- list(format = raw_format)
.Call(XGBoosterSaveModelToRaw_R, handle, jsonlite::toJSON(args, auto_unbox = TRUE))
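
A sketch of the raw-vector round trip described above (assumes a trained booster `bst`; note that the default `raw_format` is now "ubj"):

raw_ubj  <- xgb.save.raw(bst)                       # uses the "ubj" default
raw_json <- xgb.save.raw(bst, raw_format = "json")  # human-readable variant
bst2 <- xgb.load.raw(raw_ubj)                       # restore the booster from the bytes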


@@ -1,21 +0,0 @@
#' Serialize the booster instance into R's raw vector. The serialization method differs
#' from \code{\link{xgb.save.raw}} in that the latter saves only the model but not the
#' parameters. This serialization format is not stable across different xgboost versions.
#'
#' @param booster the booster instance
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#'   eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' raw <- xgb.serialize(bst)
#' bst <- xgb.unserialize(raw)
#'
#' @export
xgb.serialize <- function(booster) {
handle <- xgb.get.handle(booster)
.Call(XGBoosterSerializeToBuffer_R, handle)
}


@@ -1,253 +1,268 @@
#' eXtreme Gradient Boosting Training
#'
#' \code{xgb.train} is an advanced interface for training an xgboost model.
#' The \code{xgboost} function is a simpler wrapper for \code{xgb.train}.
#' `xgb.train()` is an advanced interface for training an xgboost model.
#' The [xgboost()] function is a simpler wrapper for `xgb.train()`.
#'
#' @param params the list of parameters. The complete list of parameters is
#' available in the \href{http://xgboost.readthedocs.io/en/latest/parameter.html}{online documentation}. Below
#' is a shorter summary:
#' available in the [online documentation](http://xgboost.readthedocs.io/en/latest/parameter.html).
#' Below is a shorter summary:
#'
#' 1. General Parameters
#' **1. General Parameters**
#'
#' \itemize{
#' \item \code{booster} which booster to use, can be \code{gbtree} or \code{gblinear}. Default: \code{gbtree}.
#' }
#' - `booster`: Which booster to use, can be `gbtree` or `gblinear`. Default: `gbtree`.
#'
#' 2. Booster Parameters
#' **2. Booster Parameters**
#'
#' 2.1. Parameters for Tree Booster
#' **2.1. Parameters for Tree Booster**
#' - `eta`: The learning rate: scale the contribution of each tree by a factor of `0 < eta < 1`
#' when it is added to the current approximation.
#' Used to prevent overfitting by making the boosting process more conservative.
#'   A lower value for `eta` implies a larger value for `nrounds`: a low `eta` makes the model
#'   more robust to overfitting but slower to compute. Default: 0.3.
#' - `gamma`: Minimum loss reduction required to make a further partition on a leaf node of the tree.
#'   The larger, the more conservative the algorithm will be.
#' - `max_depth`: Maximum depth of a tree. Default: 6.
#' - `min_child_weight`: Minimum sum of instance weight (hessian) needed in a child.
#' If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight,
#' then the building process will give up further partitioning.
#'   In linear regression mode, this simply corresponds to the minimum number of instances needed in each node.
#' The larger, the more conservative the algorithm will be. Default: 1.
#' - `subsample`: Subsample ratio of the training instances.
#'   Setting it to 0.5 means that xgboost randomly samples half of the data instances to grow trees,
#'   which helps prevent overfitting. It also makes computation faster (less data to analyse).
#'   It is advised to use this parameter together with `eta` while increasing `nrounds`. Default: 1.
#' - `colsample_bytree`: Subsample ratio of columns when constructing each tree. Default: 1.
#' - `lambda`: L2 regularization term on weights. Default: 1.
#' - `alpha`: L1 regularization term on weights. (there is no L1 reg on bias because it is not important). Default: 0.
#' - `num_parallel_tree`: Experimental parameter. Number of trees to grow per round.
#'   Useful for testing Random Forest through XGBoost
#'   (set `colsample_bytree < 1`, `subsample < 1`, and `nrounds = 1` accordingly).
#'   Default: 1.
#' - `monotone_constraints`: A numeric vector consisting of `1`, `0`, and `-1`, with length
#'   equal to the number of features in the training data.
#'   `1` is increasing, `-1` is decreasing, and `0` is no constraint.
#' - `interaction_constraints`: A list of vectors specifying feature indices of permitted interactions.
#'   Each item of the list represents one permitted interaction where specified features are allowed to interact with each other.
#'   Feature index values should start from `0` (`0` references the first column).
#'   Leave the argument unspecified for no interaction constraints. See the sketch after this list
#'   for an example of both constraint parameters.
#'
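
A hedged sketch of the two constraint parameters above, for hypothetical data with three features (editorial addition, not part of the diff):

params <- list(
  monotone_constraints = c(1, 0, -1),             # increasing, unconstrained, decreasing (one entry per feature)
  interaction_constraints = list(c(0, 1), c(2))   # columns 0 and 1 may interact; column 2 only by itself
)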
#' \itemize{
#' \item{ \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1}
#' when it is added to the current approximation.
#' Used to prevent overfitting by making the boosting process more conservative.
#' Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model
#' more robust to overfitting but slower to compute. Default: 0.3}
#' \item{ \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree.
#' the larger, the more conservative the algorithm will be.}
#' \item \code{max_depth} maximum depth of a tree. Default: 6
#' \item{\code{min_child_weight} minimum sum of instance weight (hessian) needed in a child.
#' If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight,
#' then the building process will give up further partitioning.
#' In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node.
#' The larger, the more conservative the algorithm will be. Default: 1}
#' \item{ \code{subsample} subsample ratio of the training instance.
#' Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees
#' and this will prevent overfitting. It makes computation shorter (because less data to analyse).
#' It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1}
#' \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
#' \item \code{lambda} L2 regularization term on weights. Default: 1
#' \item \code{alpha} L1 regularization term on weights. (there is no L1 reg on bias because it is not important). Default: 0
#' \item{ \code{num_parallel_tree} Experimental parameter. number of trees to grow per round.
#' Useful to test Random Forest through XGBoost
#' (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly.
#' Default: 1}
#' \item{ \code{monotone_constraints} A numerical vector consists of \code{1}, \code{0} and \code{-1} with its length
#' equals to the number of features in the training data.
#' \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.}
#' \item{ \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions.
#' Each item of the list represents one permitted interaction where specified features are allowed to interact with each other.
#' Feature index values should start from \code{0} (\code{0} references the first column).
#' Leave argument unspecified for no interaction constraints.}
#' }
#' **2.2. Parameters for Linear Booster**
#'
#' 2.2. Parameters for Linear Booster
#' - `lambda`: L2 regularization term on weights. Default: 0.
#' - `lambda_bias`: L2 regularization term on bias. Default: 0.
#' - `alpha`: L1 regularization term on weights. (there is no L1 reg on bias because it is not important). Default: 0.
#'
#' \itemize{
#' \item \code{lambda} L2 regularization term on weights. Default: 0
#' \item \code{lambda_bias} L2 regularization term on bias. Default: 0
#' \item \code{alpha} L1 regularization term on weights. (there is no L1 reg on bias because it is not important). Default: 0
#' }
#' **3. Task Parameters**
#'
#' 3. Task Parameters
#' - `objective`: Specifies the learning task and the corresponding learning objective.
#'   Users can pass a self-defined function to it. The default objective options are listed below:
#' - `reg:squarederror`: Regression with squared loss (default).
#' - `reg:squaredlogerror`: Regression with squared log loss \eqn{1/2 \cdot (\log(pred + 1) - \log(label + 1))^2}.
#' All inputs are required to be greater than -1.
#'     Also, see the metric rmsle for a possible issue with this objective.
#' - `reg:logistic`: Logistic regression.
#' - `reg:pseudohubererror`: Regression with Pseudo Huber loss, a twice differentiable alternative to absolute loss.
#' - `binary:logistic`: Logistic regression for binary classification. Output probability.
#' - `binary:logitraw`: Logistic regression for binary classification, output score before logistic transformation.
#' - `binary:hinge`: Hinge loss for binary classification. This makes predictions of 0 or 1, rather than producing probabilities.
#'   - `count:poisson`: Poisson regression for count data, output mean of Poisson distribution.
#'     The parameter `max_delta_step` is set to 0.7 by default in Poisson regression
#'     (used to safeguard optimization).
#' - `survival:cox`: Cox regression for right censored survival time data (negative values are considered right censored).
#' Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional
#'     hazard function \eqn{h(t) = h_0(t) \cdot HR}).
#' - `survival:aft`: Accelerated failure time model for censored survival time data. See
#' [Survival Analysis with Accelerated Failure Time](https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html)
#' for details.
#' The parameter `aft_loss_distribution` specifies the Probability Density Function
#' used by `survival:aft` and the `aft-nloglik` metric.
#'   - `multi:softmax`: Set xgboost to do multiclass classification using the softmax objective.
#'     Class is represented by a number and should be from 0 to `num_class - 1`.
#'   - `multi:softprob`: Same as softmax, but prediction outputs a vector of `ndata * nclass` elements, which can be
#'     further reshaped into an `ndata, nclass` matrix. The result contains the predicted probability of each data point
#'     belonging to each class (see the sketch after this list).
#' - `rank:pairwise`: Set XGBoost to do ranking task by minimizing the pairwise loss.
#' - `rank:ndcg`: Use LambdaMART to perform list-wise ranking where
#' [Normalized Discounted Cumulative Gain (NDCG)](https://en.wikipedia.org/wiki/Discounted_cumulative_gain) is maximized.
#' - `rank:map`: Use LambdaMART to perform list-wise ranking where
#' [Mean Average Precision (MAP)](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision)
#' is maximized.
#' - `reg:gamma`: Gamma regression with log-link. Output is a mean of gamma distribution.
#' It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be
#' [gamma-distributed](https://en.wikipedia.org/wiki/Gamma_distribution#Applications).
#' - `reg:tweedie`: Tweedie regression with log-link.
#' It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be
#' [Tweedie-distributed](https://en.wikipedia.org/wiki/Tweedie_distribution#Applications).
#'
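
A short sketch contrasting the two multiclass objectives above (hypothetical parameter lists with `num_class = 3`; editorial addition):

params_softmax  <- list(objective = "multi:softmax",  num_class = 3)  # predict() returns class ids 0..2
params_softprob <- list(objective = "multi:softprob", num_class = 3)  # predict() returns 3 probabilities per row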
#' \itemize{
#' \item{ \code{objective} specify the learning task and the corresponding learning objective, users can pass a self-defined function to it.
#' The default objective options are below:
#' \itemize{
#' \item \code{reg:squarederror} Regression with squared loss (Default).
#' \item{ \code{reg:squaredlogerror}: regression with squared log loss \eqn{1/2 * (log(pred + 1) - log(label + 1))^2}.
#' All inputs are required to be greater than -1.
#' Also, see metric rmsle for possible issue with this objective.}
#' \item \code{reg:logistic} logistic regression.
#' \item \code{reg:pseudohubererror}: regression with Pseudo Huber loss, a twice differentiable alternative to absolute loss.
#' \item \code{binary:logistic} logistic regression for binary classification. Output probability.
#' \item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation.
#' \item \code{binary:hinge}: hinge loss for binary classification. This makes predictions of 0 or 1, rather than producing probabilities.
#' \item{ \code{count:poisson}: Poisson regression for count data, output mean of Poisson distribution.
#' \code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization).}
#' \item{ \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored).
#' Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional
#' hazard function \code{h(t) = h0(t) * HR)}.}
#' \item{ \code{survival:aft}: Accelerated failure time model for censored survival time data. See
#' \href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time}
#' for details.}
#' \item \code{aft_loss_distribution}: Probability Density Function used by \code{survival:aft} and \code{aft-nloglik} metric.
#' \item{ \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective.
#' Class is represented by a number and should be from 0 to \code{num_class - 1}.}
#' \item{ \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be
#' further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging
#' to each class.}
#' \item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss.
#' \item{ \code{rank:ndcg}: Use LambdaMART to perform list-wise ranking where
#' \href{https://en.wikipedia.org/wiki/Discounted_cumulative_gain}{Normalized Discounted Cumulative Gain (NDCG)} is maximized.}
#' \item{ \code{rank:map}: Use LambdaMART to perform list-wise ranking where
#' \href{https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision}{Mean Average Precision (MAP)}
#' is maximized.}
#' \item{ \code{reg:gamma}: gamma regression with log-link.
#' Output is a mean of gamma distribution.
#' It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be
#' \href{https://en.wikipedia.org/wiki/Gamma_distribution#Applications}{gamma-distributed}.}
#' \item{ \code{reg:tweedie}: Tweedie regression with log-link.
#' It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be
#' \href{https://en.wikipedia.org/wiki/Tweedie_distribution#Applications}{Tweedie-distributed}.}
#' }
#' }
#' \item \code{base_score} the initial prediction score of all instances, global bias. Default: 0.5
#' \item{ \code{eval_metric} evaluation metrics for validation data.
#' Users can pass a self-defined function to it.
#' Default: metric will be assigned according to objective
#' (rmse for regression, and error for classification, mean average precision for ranking).
#' List is provided in detail section.}
#' }
#' For custom objectives, one should pass a function taking as input the current predictions (as a numeric
#' vector or matrix) and the training data (as an `xgb.DMatrix` object), and returning a list with elements
#' `grad` and `hess`, which should be numeric vectors or matrices whose number of rows matches the number
#' of rows in the training data (same shape as the predictions that are passed as input to the function).
#' For multi-valued custom objectives, these should have shape `[nrows, ntargets]`. Note that negative values of
#' the Hessian will be clipped, so one might consider using the expected Hessian (Fisher information) if the
#' objective is non-convex. See the `logregobj` function in the examples below for a concrete illustration.
#'
#' @param data training dataset. \code{xgb.train} accepts only an \code{xgb.DMatrix} as the input.
#' \code{xgboost}, in addition, also accepts \code{matrix}, \code{dgCMatrix}, or name of a local data file.
#' @param nrounds max number of boosting iterations.
#' @param watchlist named list of xgb.DMatrix datasets to use for evaluating model performance.
#' Metrics specified in either \code{eval_metric} or \code{feval} will be computed for each
#' of these datasets during each boosting iteration, and stored in the end as a field named
#' \code{evaluation_log} in the resulting object. When either \code{verbose>=1} or
#' \code{\link{cb.print.evaluation}} callback is engaged, the performance results are continuously
#' printed out during the training.
#' E.g., specifying \code{watchlist=list(validation1=mat1, validation2=mat2)} allows to track
#' the performance of each round's model on mat1 and mat2.
#' @param obj customized objective function. Returns gradient and second order
#' gradient with given prediction and dtrain.
#' @param feval customized evaluation function. Returns
#' \code{list(metric='metric-name', value='metric-value')} with given
#' prediction and dtrain.
#' See the tutorials [Custom Objective and Evaluation Metric](https://xgboost.readthedocs.io/en/stable/tutorials/custom_metric_obj.html)
#' and [Advanced Usage of Custom Objectives](https://xgboost.readthedocs.io/en/stable/tutorials/advanced_custom_obj)
#' for more information about custom objectives.
#'
#' - `base_score`: The initial prediction score of all instances, global bias. Default: 0.5.
#' - `eval_metric`: Evaluation metrics for validation data.
#' Users can pass a self-defined function to it.
#'   Default: a metric will be assigned according to the objective
#'   (rmse for regression, error for classification, mean average precision for ranking).
#'   The full list is provided in the Details section.
#' @param data Training dataset. `xgb.train()` accepts only an `xgb.DMatrix` as the input.
#'   [xgboost()], in addition, also accepts `matrix`, `dgCMatrix`, or the name of a local data file.
#' @param nrounds Max number of boosting iterations.
#' @param evals Named list of `xgb.DMatrix` datasets to use for evaluating model performance.
#' Metrics specified in either `eval_metric` or `feval` will be computed for each
#' of these datasets during each boosting iteration, and stored in the end as a field named
#' `evaluation_log` in the resulting object. When either `verbose>=1` or
#' [xgb.cb.print.evaluation()] callback is engaged, the performance results are continuously
#' printed out during the training.
#'   E.g., specifying `evals = list(validation1 = mat1, validation2 = mat2)` allows tracking
#'   the performance of each round's model on `mat1` and `mat2`.
#' @param obj Customized objective function. Should take two arguments: the first one will be the
#' current predictions (either a numeric vector or matrix depending on the number of targets / classes),
#' and the second one will be the `data` DMatrix object that is used for training.
#'
#' It should return a list with two elements `grad` and `hess` (in that order), as either
#' numeric vectors or numeric matrices depending on the number of targets / classes (same
#' dimension as the predictions that are passed as first argument).
#' @param feval Customized evaluation function. Just like `obj`, it should take two arguments, with
#'   the first one being the predictions and the second one the `data` DMatrix.
#'
#'   It should return a list with two elements: `metric` (the name that will be displayed for this
#'   metric; should be a string / character) and `value` (the number that the function calculates;
#'   should be a numeric scalar).
#'
#' Note that even if passing `feval`, objectives also have an associated default metric that
#' will be evaluated in addition to it. In order to disable the built-in metric, one can pass
#' parameter `disable_default_eval_metric = TRUE`.
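
For instance, to report only a custom metric during training, one might pass the parameter as follows (a sketch assuming `param`, `dtrain`, `evals`, and the `evalerror` function from the examples below):

bst <- xgb.train(
  c(param, list(disable_default_eval_metric = TRUE)),  # drop the objective's built-in metric
  dtrain, nrounds = 2, evals = evals, feval = evalerror
)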
#' @param verbose If 0, xgboost will stay silent. If 1, it will print information about performance.
#' If 2, some additional information will be printed out.
#' Note that setting \code{verbose > 0} automatically engages the
#' \code{cb.print.evaluation(period=1)} callback function.
#' @param print_every_n Print each n-th iteration evaluation messages when \code{verbose>0}.
#' Default is 1 which means all messages are printed. This parameter is passed to the
#' \code{\link{cb.print.evaluation}} callback.
#' @param early_stopping_rounds If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' doesn't improve for \code{k} rounds.
#' Setting this parameter engages the \code{\link{cb.early.stop}} callback.
#' @param maximize If \code{feval} and \code{early_stopping_rounds} are set,
#' then this parameter must be set as well.
#' When it is \code{TRUE}, it means the larger the evaluation score the better.
#' This parameter is passed to the \code{\link{cb.early.stop}} callback.
#' @param save_period when it is non-NULL, model is saved to disk after every \code{save_period} rounds,
#' 0 means save at the end. The saving is handled by the \code{\link{cb.save.model}} callback.
#' If 2, some additional information will be printed out.
#' Note that setting `verbose > 0` automatically engages the
#' `xgb.cb.print.evaluation(period=1)` callback function.
#' @param print_every_n Print each nth iteration evaluation messages when `verbose>0`.
#' Default is 1 which means all messages are printed. This parameter is passed to the
#' [xgb.cb.print.evaluation()] callback.
#' @param early_stopping_rounds If `NULL`, the early stopping function is not triggered.
#' If set to an integer `k`, training with a validation set will stop if the performance
#' doesn't improve for `k` rounds. Setting this parameter engages the [xgb.cb.early.stop()] callback.
#' @param maximize If `feval` and `early_stopping_rounds` are set, then this parameter must be set as well.
#' When it is `TRUE`, it means the larger the evaluation score the better.
#' This parameter is passed to the [xgb.cb.early.stop()] callback.
#' @param save_period When not `NULL`, the model is saved to disk after every `save_period` rounds.
#'   0 means save at the end. The saving is handled by the [xgb.cb.save.model()] callback.
#' @param save_name The name or path for the periodically saved model file.
#' @param xgb_model a previously built model to continue the training from.
#' Could be either an object of class \code{xgb.Booster}, or its raw data, or the name of a
#' file with a previously saved model.
#' @param callbacks a list of callback functions to perform various task during boosting.
#' See \code{\link{callbacks}}. Some of the callbacks are automatically created depending on the
#' parameters' values. User can provide either existing or their own callback methods in order
#' to customize the training process.
#' @param ... other parameters to pass to \code{params}.
#' @param label vector of response values. Should not be provided when data is
#' a local data file name or an \code{xgb.DMatrix}.
#' @param missing by default is set to NA, which means that NA values should be considered as 'missing'
#' by the algorithm. Sometimes, 0 or other extreme value might be used to represent missing values.
#' This parameter is only used when input is a dense matrix.
#' @param weight a vector indicating the weight for each row of the input.
#' @param xgb_model A previously built model to continue the training from.
#' Could be either an object of class `xgb.Booster`, or its raw data, or the name of a
#' file with a previously saved model.
#' @param callbacks A list of callback functions to perform various tasks during boosting.
#'   See [xgb.Callback()]. Some of the callbacks are automatically created depending on the
#'   parameters' values. Users can provide either existing or their own callback methods in order
#'   to customize the training process.
#'
#'   Note that some callbacks might try to leave attributes in the resulting model object,
#'   such as an evaluation log (a `data.table` object) - be aware that these objects are kept
#'   as R attributes, and thus do not get saved when using XGBoost's own serializers like
#'   [xgb.save()] (but are kept when using R serializers like [saveRDS()]).
#' @param ... Other parameters to pass to `params`.
#'
#' @return An object of class `xgb.Booster`.
#'
#' @details
#' These are the training functions for \code{xgboost}.
#' These are the training functions for [xgboost()].
#'
#' The \code{xgb.train} interface supports advanced features such as \code{watchlist},
#' The `xgb.train()` interface supports advanced features such as `evals`,
#' customized objective and evaluation metric functions, therefore it is more flexible
#' than the \code{xgboost} interface.
#' than the [xgboost()] interface.
#'
#' Parallelization is automatically enabled if \code{OpenMP} is present.
#' Number of threads can also be manually specified via \code{nthread} parameter.
#' Parallelization is automatically enabled if OpenMP is present.
#' Number of threads can also be manually specified via the `nthread` parameter.
#'
#' While in other interfaces the random seed defaults to zero, in R, if the parameter `seed`
#' is not manually supplied, a random seed will be generated through R's own random number generator,
#' whose seed in turn is controllable through `set.seed()`. If `seed` is passed, it will override the
#' RNG from R.
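
A sketch of the seeding behaviour described above (hypothetical seed value; assumes `param` and `dtrain` from the examples):

set.seed(42)                                   # controls the seed that xgb.train() draws internally
bst1 <- xgb.train(param, dtrain, nrounds = 2)
set.seed(42)                                   # same RNG state, hence the same drawn seed
bst2 <- xgb.train(param, dtrain, nrounds = 2)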
#'
#' The evaluation metric is chosen automatically by XGBoost (according to the objective)
#' when the \code{eval_metric} parameter is not provided.
#' User may set one or several \code{eval_metric} parameters.
#' when the `eval_metric` parameter is not provided.
#' User may set one or several `eval_metric` parameters.
#' Note that when using a customized metric, only this single metric can be used.
#' The following is the list of built-in metrics for which XGBoost provides an optimized implementation:
#' \itemize{
#' \item \code{rmse} root mean square error. \url{https://en.wikipedia.org/wiki/Root_mean_square_error}
#' \item \code{logloss} negative log-likelihood. \url{https://en.wikipedia.org/wiki/Log-likelihood}
#' \item \code{mlogloss} multiclass logloss. \url{https://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html}
#' \item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
#' By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
#' Different threshold (e.g., 0.) could be specified as "error@0."
#' \item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
#' \item \code{mae} Mean absolute error
#' \item \code{mape} Mean absolute percentage error
#' \item{ \code{auc} Area under the curve.
#' \url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#'Area_under_curve} for ranking evaluation.}
#' \item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
#' \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{https://en.wikipedia.org/wiki/NDCG}
#' }
#' - `rmse`: Root mean square error. \url{https://en.wikipedia.org/wiki/Root_mean_square_error}
#' - `logloss`: Negative log-likelihood. \url{https://en.wikipedia.org/wiki/Log-likelihood}
#' - `mlogloss`: Multiclass logloss. \url{https://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html}
#' - `error`: Binary classification error rate. It is calculated as `(# wrong cases) / (# all cases)`.
#'   By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
#'   A different threshold (e.g., 0.7) can be specified as `error@0.7`.
#' - `merror`: Multiclass classification error rate. It is calculated as `(# wrong cases) / (# all cases)`.
#' - `mae`: Mean absolute error.
#' - `mape`: Mean absolute percentage error.
#' - `auc`: Area under the curve.
#'   \url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve} for ranking evaluation.
#' - `aucpr`: Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
#' - `ndcg`: Normalized Discounted Cumulative Gain (for ranking task). \url{https://en.wikipedia.org/wiki/NDCG}
#'
#' The following callbacks are automatically created when certain parameters are set:
#' \itemize{
#' \item \code{cb.print.evaluation} is turned on when \code{verbose > 0};
#' and the \code{print_every_n} parameter is passed to it.
#' \item \code{cb.evaluation.log} is on when \code{watchlist} is present.
#' \item \code{cb.early.stop}: when \code{early_stopping_rounds} is set.
#' \item \code{cb.save.model}: when \code{save_period > 0} is set.
#' }
#' - [xgb.cb.print.evaluation()] is turned on when `verbose > 0` and the `print_every_n`
#' parameter is passed to it.
#' - [xgb.cb.evaluation.log()] is on when `evals` is present.
#' - [xgb.cb.early.stop()]: When `early_stopping_rounds` is set.
#' - [xgb.cb.save.model()]: When `save_period > 0` is set.
#'
#' @return
#' An object of class \code{xgb.Booster} with the following elements:
#' \itemize{
#' \item \code{handle} a handle (pointer) to the xgboost model in memory.
#' \item \code{raw} a cached memory dump of the xgboost model saved as R's \code{raw} type.
#' \item \code{niter} number of boosting iterations.
#' \item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
#' first column corresponding to iteration number and the rest corresponding to evaluation
#' metrics' values. It is created by the \code{\link{cb.evaluation.log}} callback.
#' \item \code{call} a function call.
#' \item \code{params} parameters that were passed to the xgboost library. Note that it does not
#' capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
#' \item \code{callbacks} callback functions that were either automatically assigned or
#' explicitly passed.
#' \item \code{best_iteration} iteration number with the best evaluation metric value
#' (only available with early stopping).
#' \item \code{best_score} the best evaluation metric value during early stopping.
#' (only available with early stopping).
#' \item \code{feature_names} names of the training dataset features
#' (only when column names were defined in training data).
#' \item \code{nfeatures} number of features in training data.
#' }
#' Note that objects of type `xgb.Booster` as returned by this function behave a bit differently
#' from typical R objects (it's an 'altrep' list class), and they make a separation between
#' internal booster attributes (restricted to jsonifyable data), which are accessed through [xgb.attr()]
#' and shared between interfaces through serialization functions like [xgb.save()]; and
#' R-specific attributes (typically the result from a callback), which are accessed through [attributes()]
#' and [attr()]. The latter are only used in the R interface, are only kept when using R's serializers
#' like [saveRDS()], and are not used in any way by functions like `predict.xgb.Booster()`.
#'
#' @seealso
#' \code{\link{callbacks}},
#' \code{\link{predict.xgb.Booster}},
#' \code{\link{xgb.cv}}
#' Be aware that one such R attribute that is automatically added is `params` - this attribute
#' is assigned from the `params` argument to this function, and is only meant to serve as a
#' reference for what went into the booster, but is not used in other methods that take a booster
#' object - so for example, changing the booster's configuration requires calling `xgb.config<-`
#' or `xgb.parameters<-`, while simply modifying `attributes(model)$params$<...>` will have no
#' effect elsewhere.
#'
#' @seealso [xgb.Callback()], [predict.xgb.Booster()], [xgb.cv()]
#'
#' @references
#'
#' Tianqi Chen and Carlos Guestrin, "XGBoost: A Scalable Tree Boosting System",
#' 22nd SIGKDD Conference on Knowledge Discovery and Data Mining, 2016, \url{https://arxiv.org/abs/1603.02754}
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' data(agaricus.train, package = "xgboost")
#' data(agaricus.test, package = "xgboost")
#'
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
#' watchlist <- list(train = dtrain, eval = dtest)
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#'
#' dtrain <- with(
#' agaricus.train, xgb.DMatrix(data, label = label, nthread = nthread)
#' )
#' dtest <- with(
#' agaricus.test, xgb.DMatrix(data, label = label, nthread = nthread)
#' )
#' evals <- list(train = dtrain, eval = dtest)
#'
#' ## A simple xgb.train example:
#' param <- list(max_depth = 2, eta = 1, verbose = 0, nthread = 2,
#' objective = "binary:logistic", eval_metric = "auc")
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist)
#' param <- list(
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' objective = "binary:logistic",
#' eval_metric = "auc"
#' )
#' bst <- xgb.train(param, dtrain, nrounds = 2, evals = evals, verbose = 0)
#'
#'
#' ## An xgb.train example where custom objective and evaluation metric are used:
#' ## An xgb.train example where custom objective and evaluation metric are
#' ## used:
#' logregobj <- function(preds, dtrain) {
#' labels <- getinfo(dtrain, "label")
#' preds <- 1/(1 + exp(-preds))
@@ -263,40 +278,69 @@
#'
#' # These functions could be used by passing them either:
#' # as 'objective' and 'eval_metric' parameters in the params list:
#' param <- list(max_depth = 2, eta = 1, verbose = 0, nthread = 2,
#' objective = logregobj, eval_metric = evalerror)
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist)
#' param <- list(
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' objective = logregobj,
#' eval_metric = evalerror
#' )
#' bst <- xgb.train(param, dtrain, nrounds = 2, evals = evals, verbose = 0)
#'
#' # or through the ... arguments:
#' param <- list(max_depth = 2, eta = 1, verbose = 0, nthread = 2)
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
#' objective = logregobj, eval_metric = evalerror)
#' param <- list(max_depth = 2, eta = 1, nthread = nthread)
#' bst <- xgb.train(
#' param,
#' dtrain,
#' nrounds = 2,
#' evals = evals,
#' verbose = 0,
#' objective = logregobj,
#' eval_metric = evalerror
#' )
#'
#' # or as dedicated 'obj' and 'feval' parameters of xgb.train:
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
#' obj = logregobj, feval = evalerror)
#' bst <- xgb.train(
#' param, dtrain, nrounds = 2, evals = evals, obj = logregobj, feval = evalerror
#' )
#'
#'
#' ## An xgb.train example of using variable learning rates at each iteration:
#' param <- list(max_depth = 2, eta = 1, verbose = 0, nthread = 2,
#' objective = "binary:logistic", eval_metric = "auc")
#' param <- list(
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' objective = "binary:logistic",
#' eval_metric = "auc"
#' )
#' my_etas <- list(eta = c(0.5, 0.1))
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
#' callbacks = list(cb.reset.parameters(my_etas)))
#'
#' bst <- xgb.train(
#' param,
#' dtrain,
#' nrounds = 2,
#' evals = evals,
#' verbose = 0,
#' callbacks = list(xgb.cb.reset.parameters(my_etas))
#' )
#'
#' ## Early stopping:
#' bst <- xgb.train(param, dtrain, nrounds = 25, watchlist,
#' early_stopping_rounds = 3)
#' bst <- xgb.train(
#' param, dtrain, nrounds = 25, evals = evals, early_stopping_rounds = 3
#' )
#'
#' ## An 'xgboost' interface example:
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
#' max_depth = 2, eta = 1, nthread = 2, nrounds = 2,
#' objective = "binary:logistic")
#' bst <- xgboost(
#' x = agaricus.train$data,
#' y = factor(agaricus.train$label),
#' params = list(max_depth = 2, eta = 1),
#' nthread = nthread,
#' nrounds = 2
#' )
#' pred <- predict(bst, agaricus.test$data)
#'
#' @rdname xgb.train
#' @export
xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
xgb.train <- function(params = list(), data, nrounds, evals = list(),
obj = NULL, feval = NULL, verbose = 1, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL,
save_period = NULL, save_name = "xgboost.model",
@@ -309,75 +353,78 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
check.custom.obj()
check.custom.eval()
# data & watchlist checks
# data & evals checks
dtrain <- data
if (!inherits(dtrain, "xgb.DMatrix"))
stop("second argument dtrain must be xgb.DMatrix")
if (length(watchlist) > 0) {
if (typeof(watchlist) != "list" ||
!all(vapply(watchlist, inherits, logical(1), what = 'xgb.DMatrix')))
stop("watchlist must be a list of xgb.DMatrix elements")
evnames <- names(watchlist)
if (length(evals) > 0) {
if (typeof(evals) != "list" ||
!all(vapply(evals, inherits, logical(1), what = 'xgb.DMatrix')))
stop("'evals' must be a list of xgb.DMatrix elements")
evnames <- names(evals)
if (is.null(evnames) || any(evnames == ""))
stop("each element of the watchlist must have a name tag")
stop("each element of 'evals' must have a name tag")
}
# Handle multiple evaluation metrics given as a list
for (m in params$eval_metric) {
params <- c(params, list(eval_metric = m))
}
# evaluation printing callback
params <- c(params)
print_every_n <- max(as.integer(print_every_n), 1L)
if (!has.callbacks(callbacks, 'cb.print.evaluation') &&
verbose) {
callbacks <- add.cb(callbacks, cb.print.evaluation(print_every_n))
}
# evaluation log callback: it is automatically enabled when watchlist is provided
evaluation_log <- list()
if (!has.callbacks(callbacks, 'cb.evaluation.log') &&
length(watchlist) > 0) {
callbacks <- add.cb(callbacks, cb.evaluation.log())
}
# Model saving callback
if (!is.null(save_period) &&
!has.callbacks(callbacks, 'cb.save.model')) {
callbacks <- add.cb(callbacks, cb.save.model(save_period, save_name))
}
# Early stopping callback
stop_condition <- FALSE
if (!is.null(early_stopping_rounds) &&
!has.callbacks(callbacks, 'cb.early.stop')) {
callbacks <- add.cb(callbacks, cb.early.stop(early_stopping_rounds,
maximize = maximize, verbose = verbose))
params['validate_parameters'] <- TRUE
if (!("seed" %in% names(params))) {
params[["seed"]] <- sample(.Machine$integer.max, size = 1)
}
# Sort the callbacks into categories
cb <- categorize.callbacks(callbacks)
params['validate_parameters'] <- TRUE
if (!is.null(params[['seed']])) {
warning("xgb.train: `seed` is ignored in R package. Use `set.seed()` instead.")
# callbacks
tmp <- .process.callbacks(callbacks, is_cv = FALSE)
callbacks <- tmp$callbacks
cb_names <- tmp$cb_names
rm(tmp)
# Early stopping callback (should always come first)
if (!is.null(early_stopping_rounds) && !("early_stop" %in% cb_names)) {
callbacks <- add.callback(
callbacks,
xgb.cb.early.stop(
early_stopping_rounds,
maximize = maximize,
verbose = verbose
),
as_first_elt = TRUE
)
}
# evaluation printing callback
print_every_n <- max(as.integer(print_every_n), 1L)
if (verbose && !("print_evaluation" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.print.evaluation(print_every_n))
}
# evaluation log callback: it is automatically enabled when 'evals' is provided
if (length(evals) && !("evaluation_log" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.evaluation.log())
}
# Model saving callback
if (!is.null(save_period) && !("save_model" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.save.model(save_period, save_name))
}
# The tree updating process would need slightly different handling
is_update <- NVL(params[['process_type']], '.') == 'update'
# Construct a booster (either a new one or load from xgb_model)
handle <- xgb.Booster.handle(params, append(watchlist, dtrain), xgb_model)
bst <- xgb.handleToBooster(handle)
bst <- xgb.Booster(
params = params,
cachelist = append(evals, dtrain),
modelfile = xgb_model
)
niter_init <- bst$niter
bst <- bst$bst
.Call(
XGBoosterCopyInfoFromDMatrix_R,
xgb.get.handle(bst),
dtrain
)
# extract parameters that can affect the relationship b/w #trees and #iterations
num_class <- max(as.numeric(NVL(params[['num_class']], 1)), 1)
num_parallel_tree <- max(as.numeric(NVL(params[['num_parallel_tree']], 1)), 1)
# When the 'xgb_model' was set, find out how many boosting iterations it has
niter_init <- 0
if (!is.null(xgb_model)) {
niter_init <- as.numeric(xgb.attr(bst, 'niter')) + 1
if (length(niter_init) == 0) {
niter_init <- xgb.ntree(bst) %/% (num_parallel_tree * num_class)
}
}
if (is_update && nrounds > niter_init)
stop("nrounds cannot be larger than ", niter_init, " (nrounds of xgb_model)")
@@ -385,49 +432,83 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
begin_iteration <- niter_skip + 1
end_iteration <- niter_skip + nrounds
.execute.cb.before.training(
callbacks,
bst,
dtrain,
evals,
begin_iteration,
end_iteration
)
# the main loop for boosting iterations
for (iteration in begin_iteration:end_iteration) {
for (f in cb$pre_iter) f()
.execute.cb.before.iter(
callbacks,
bst,
dtrain,
evals,
iteration
)
xgb.iter.update(bst$handle, dtrain, iteration - 1, obj)
xgb.iter.update(
bst = bst,
dtrain = dtrain,
iter = iteration - 1,
obj = obj
)
if (length(watchlist) > 0)
bst_evaluation <- xgb.iter.eval(bst$handle, watchlist, iteration - 1, feval) # nolint: object_usage_linter
xgb.attr(bst$handle, 'niter') <- iteration - 1
for (f in cb$post_iter) f()
if (stop_condition) break
}
for (f in cb$finalize) f(finalize = TRUE)
bst <- xgb.Booster.complete(bst, saveraw = TRUE)
# store the total number of boosting iterations
bst$niter <- end_iteration
# store the evaluation results
if (length(evaluation_log) > 0 &&
nrow(evaluation_log) > 0) {
# include the previous compatible history when available
if (inherits(xgb_model, 'xgb.Booster') &&
!is_update &&
!is.null(xgb_model$evaluation_log) &&
isTRUE(all.equal(colnames(evaluation_log),
colnames(xgb_model$evaluation_log)))) {
evaluation_log <- rbindlist(list(xgb_model$evaluation_log, evaluation_log))
bst_evaluation <- NULL
if (length(evals) > 0) {
bst_evaluation <- xgb.iter.eval(
bst = bst,
evals = evals,
iter = iteration - 1,
feval = feval
)
}
bst$evaluation_log <- evaluation_log
should_stop <- .execute.cb.after.iter(
callbacks,
bst,
dtrain,
evals,
iteration,
bst_evaluation
)
if (should_stop) break
}
bst$call <- match.call()
bst$params <- params
bst$callbacks <- callbacks
if (!is.null(colnames(dtrain)))
bst$feature_names <- colnames(dtrain)
bst$nfeatures <- ncol(dtrain)
cb_outputs <- .execute.cb.after.training(
callbacks,
bst,
dtrain,
evals,
iteration,
bst_evaluation
)
extra_attrs <- list(
call = match.call(),
params = params
)
curr_attrs <- attributes(bst)
if (NROW(curr_attrs)) {
curr_attrs <- curr_attrs[
setdiff(
names(curr_attrs),
c(names(extra_attrs), names(cb_outputs))
)
]
}
curr_attrs <- c(extra_attrs, curr_attrs)
if (NROW(cb_outputs)) {
curr_attrs <- c(curr_attrs, cb_outputs)
}
attributes(bst) <- curr_attrs
return(bst)
}
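
Tying back to the attributes discussion in the documentation above, a short sketch of the two attribute systems on the returned booster (editorial sketch; assumes a trained `bst` and a hypothetical "my_tag" attribute):

# Internal booster attributes: serialized together with the model by xgb.save()
xgb.attr(bst, "my_tag") <- "fitted-on-agaricus"
xgb.attr(bst, "my_tag")

# R-level attributes: kept only by R serializers such as saveRDS()
attributes(bst)$params$max_depth        # reference only; editing it changes nothing
xgb.parameters(bst) <- list(eta = 0.5)  # the supported way to change configuration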


@@ -1,41 +0,0 @@
#' Load the instance back from \code{\link{xgb.serialize}}
#'
#' @param buffer the buffer containing booster instance saved by \code{\link{xgb.serialize}}
#' @param handle An \code{xgb.Booster.handle} object which will be overwritten with
#' the new deserialized object. Must be a null handle (e.g. when loading the model through
#' `readRDS`). If not provided, a new handle will be created.
#' @return An \code{xgb.Booster.handle} object.
#'
#' @export
xgb.unserialize <- function(buffer, handle = NULL) {
cachelist <- list()
if (is.null(handle)) {
handle <- .Call(XGBoosterCreate_R, cachelist)
} else {
if (!is.null.handle(handle))
stop("'handle' is not null/empty. Cannot overwrite existing handle.")
.Call(XGBoosterCreateInEmptyObj_R, cachelist, handle)
}
tryCatch(
.Call(XGBoosterUnserializeFromBuffer_R, handle, buffer),
error = function(e) {
error_msg <- conditionMessage(e)
m <- regexec("(src[\\\\/]learner.cc:[0-9]+): Check failed: (header == serialisation_header_)",
error_msg, perl = TRUE)
groups <- regmatches(error_msg, m)[[1]]
if (length(groups) == 3) {
warning(paste("The model had been generated by XGBoost version 1.0.0 or earlier and was ",
"loaded from a RDS file. We strongly ADVISE AGAINST using saveRDS() ",
"function, to ensure that your model can be read in current and upcoming ",
"XGBoost releases. Please use xgb.save() instead to preserve models for the ",
"long term. For more details and explanation, see ",
"https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html",
sep = ""))
.Call(XGBoosterLoadModelFromRaw_R, handle, buffer)
} else {
stop(e)
}
})
class(handle) <- "xgb.Booster.handle"
return (handle)
}

File diff suppressed because it is too large.

R-package/config.h.in (new file, 66 lines)

@@ -0,0 +1,66 @@
/* config.h.in. Generated from configure.ac by autoheader. */
/* Define if building universal (internal helper macro) */
#undef AC_APPLE_UNIVERSAL_BUILD
/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H
/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H
/* Define to 1 if you have the <stdio.h> header file. */
#undef HAVE_STDIO_H
/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H
/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H
/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H
/* Define to 1 if you have the <sys/stat.h> header file. */
#undef HAVE_SYS_STAT_H
/* Define to 1 if you have the <sys/types.h> header file. */
#undef HAVE_SYS_TYPES_H
/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H
/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT
/* Define to the full name of this package. */
#undef PACKAGE_NAME
/* Define to the full name and version of this package. */
#undef PACKAGE_STRING
/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME
/* Define to the home page for this package. */
#undef PACKAGE_URL
/* Define to the version of this package. */
#undef PACKAGE_VERSION
/* Define to 1 if all of the C90 standard headers exist (not just the ones
required in a freestanding environment). This macro is provided for
backward compatibility; new code need not use it. */
#undef STDC_HEADERS
/* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most
significant byte first (like Motorola and SPARC, unlike Intel). */
#if defined AC_APPLE_UNIVERSAL_BUILD
# if defined __BIG_ENDIAN__
# define WORDS_BIGENDIAN 1
# endif
#else
# ifndef WORDS_BIGENDIAN
# undef WORDS_BIGENDIAN
# endif
#endif
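
The vendored configure script below probes the byte order at build time when USE_LITTLE_ENDIAN is unset. As a quick cross-check from R itself (an editorial aside; `.Platform$endian` is base R):

.Platform$endian                        # "little" on x86-64/ARM, "big" on e.g. s390x
identical(.Platform$endian, "little")   # what USE_LITTLE_ENDIAN should reflect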

R-package/configure (vendored; 578 lines changed)

@@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.71 for xgboost 2.0.0.
# Generated by GNU Autoconf 2.71 for xgboost 2.2.0.
#
#
# Copyright (C) 1992-1996, 1998-2017, 2020-2021 Free Software Foundation,
@@ -607,17 +607,50 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='xgboost'
PACKAGE_TARNAME='xgboost'
PACKAGE_VERSION='2.0.0'
PACKAGE_STRING='xgboost 2.0.0'
PACKAGE_VERSION='2.2.0'
PACKAGE_STRING='xgboost 2.2.0'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
# Factoring default headers for most tests.
ac_includes_default="\
#include <stddef.h>
#ifdef HAVE_STDIO_H
# include <stdio.h>
#endif
#ifdef HAVE_STDLIB_H
# include <stdlib.h>
#endif
#ifdef HAVE_STRING_H
# include <string.h>
#endif
#ifdef HAVE_INTTYPES_H
# include <inttypes.h>
#endif
#ifdef HAVE_STDINT_H
# include <stdint.h>
#endif
#ifdef HAVE_STRINGS_H
# include <strings.h>
#endif
#ifdef HAVE_SYS_TYPES_H
# include <sys/types.h>
#endif
#ifdef HAVE_SYS_STAT_H
# include <sys/stat.h>
#endif
#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif"
ac_header_cxx_list=
ac_subst_vars='LTLIBOBJS
LIBOBJS
BACKTRACE_LIB
ENDIAN_FLAG
OPENMP_LIB
OPENMP_CXXFLAGS
USE_LITTLE_ENDIAN
OBJEXT
EXEEXT
ac_ct_CXX
@@ -676,7 +709,8 @@ CXXFLAGS
LDFLAGS
LIBS
CPPFLAGS
CCC'
CCC
USE_LITTLE_ENDIAN'
# Initialize some variables set by options.
@@ -1225,7 +1259,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures xgboost 2.0.0 to adapt to many kinds of systems.
\`configure' configures xgboost 2.2.0 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@@ -1287,7 +1321,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of xgboost 2.0.0:";;
short | recursive ) echo "Configuration of xgboost 2.2.0:";;
esac
cat <<\_ACEOF
@@ -1299,6 +1333,9 @@ Some influential environment variables:
LIBS libraries to pass to the linker, e.g. -l<library>
CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. -I<include dir> if
you have headers in a nonstandard directory <include dir>
USE_LITTLE_ENDIAN
"Whether to build with little endian (checks at compile time if
unset)"
Use these variables to override the choices made by `configure' or to help
it to find libraries and programs with nonstandard names/locations.
@@ -1367,7 +1404,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
xgboost configure 2.0.0
xgboost configure 2.2.0
generated by GNU Autoconf 2.71
Copyright (C) 2021 Free Software Foundation, Inc.
@@ -1509,6 +1546,39 @@ fi
as_fn_set_status $ac_retval
} # ac_fn_cxx_try_run
# ac_fn_cxx_check_header_compile LINENO HEADER VAR INCLUDES
# ---------------------------------------------------------
# Tests whether HEADER exists and can be compiled using the include files in
# INCLUDES, setting the cache variable VAR accordingly.
ac_fn_cxx_check_header_compile ()
{
as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $2" >&5
printf %s "checking for $2... " >&6; }
if eval test \${$3+y}
then :
printf %s "(cached) " >&6
else $as_nop
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
$4
#include <$2>
_ACEOF
if ac_fn_cxx_try_compile "$LINENO"
then :
eval "$3=yes"
else $as_nop
eval "$3=no"
fi
rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
fi
eval ac_res=\$$3
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5
printf "%s\n" "$ac_res" >&6; }
eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno
} # ac_fn_cxx_check_header_compile
ac_configure_args_raw=
for ac_arg
do
@@ -1533,7 +1603,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by xgboost $as_me 2.0.0, which was
It was created by xgboost $as_me 2.2.0, which was
generated by GNU Autoconf 2.71. Invocation command line was
$ $0$ac_configure_args_raw
@@ -2020,6 +2090,15 @@ main (int argc, char **argv)
}
"
as_fn_append ac_header_cxx_list " stdio.h stdio_h HAVE_STDIO_H"
as_fn_append ac_header_cxx_list " stdlib.h stdlib_h HAVE_STDLIB_H"
as_fn_append ac_header_cxx_list " string.h string_h HAVE_STRING_H"
as_fn_append ac_header_cxx_list " inttypes.h inttypes_h HAVE_INTTYPES_H"
as_fn_append ac_header_cxx_list " stdint.h stdint_h HAVE_STDINT_H"
as_fn_append ac_header_cxx_list " strings.h strings_h HAVE_STRINGS_H"
as_fn_append ac_header_cxx_list " sys/stat.h sys_stat_h HAVE_SYS_STAT_H"
as_fn_append ac_header_cxx_list " sys/types.h sys_types_h HAVE_SYS_TYPES_H"
as_fn_append ac_header_cxx_list " unistd.h unistd_h HAVE_UNISTD_H"
# Check that the precious variables saved in the cache have kept the same
# value.
ac_cache_corrupted=false
@@ -2792,38 +2871,289 @@ fi
### Endian detection
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking endian" >&5
printf %s "checking endian... " >&6; }
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: " >&5
printf "%s\n" "" >&6; }
if test "$cross_compiling" = yes
ac_header= ac_cache=
for ac_item in $ac_header_cxx_list
do
if test $ac_cache; then
ac_fn_cxx_check_header_compile "$LINENO" $ac_header ac_cv_header_$ac_cache "$ac_includes_default"
if eval test \"x\$ac_cv_header_$ac_cache\" = xyes; then
printf "%s\n" "#define $ac_item 1" >> confdefs.h
fi
ac_header= ac_cache=
elif test $ac_header; then
ac_cache=$ac_item
else
ac_header=$ac_item
fi
done
if test $ac_cv_header_stdlib_h = yes && test $ac_cv_header_string_h = yes
then :
{ { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;}
as_fn_error $? "cannot run test program while cross compiling
See \`config.log' for more details" "$LINENO" 5; }
printf "%s\n" "#define STDC_HEADERS 1" >>confdefs.h
fi
if test -z "${USE_LITTLE_ENDIAN+x}"
then :
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: Checking system endianness as USE_LITTLE_ENDIAN is unset" >&5
printf "%s\n" "$as_me: Checking system endianness as USE_LITTLE_ENDIAN is unset" >&6;}
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking system endianness" >&5
printf %s "checking system endianness... " >&6; }
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether byte ordering is bigendian" >&5
printf %s "checking whether byte ordering is bigendian... " >&6; }
if test ${ac_cv_c_bigendian+y}
then :
printf %s "(cached) " >&6
else $as_nop
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
ac_cv_c_bigendian=unknown
# See if we're dealing with a universal compiler.
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdint.h>
#ifndef __APPLE_CC__
not a universal capable compiler
#endif
typedef int dummy;
_ACEOF
if ac_fn_cxx_try_compile "$LINENO"
then :
# Check for potential -arch flags. It is not universal unless
# there are at least two -arch flags with different values.
ac_arch=
ac_prev=
for ac_word in $CC $CFLAGS $CPPFLAGS $LDFLAGS; do
if test -n "$ac_prev"; then
case $ac_word in
i?86 | x86_64 | ppc | ppc64)
if test -z "$ac_arch" || test "$ac_arch" = "$ac_word"; then
ac_arch=$ac_word
else
ac_cv_c_bigendian=universal
break
fi
;;
esac
ac_prev=
elif test "x$ac_word" = "x-arch"; then
ac_prev=arch
fi
done
fi
rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
if test $ac_cv_c_bigendian = unknown; then
# See if sys/param.h defines the BYTE_ORDER macro.
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <sys/types.h>
#include <sys/param.h>
int
main (void)
{
const uint16_t endianness = 256; return !!(*(const uint8_t *)&endianness);
#if ! (defined BYTE_ORDER && defined BIG_ENDIAN \
&& defined LITTLE_ENDIAN && BYTE_ORDER && BIG_ENDIAN \
&& LITTLE_ENDIAN)
bogus endian macros
#endif
;
return 0;
}
_ACEOF
if ac_fn_cxx_try_compile "$LINENO"
then :
# It does; now see whether it defined to BIG_ENDIAN or not.
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <sys/types.h>
#include <sys/param.h>
int
main (void)
{
#if BYTE_ORDER != BIG_ENDIAN
not big endian
#endif
;
return 0;
}
_ACEOF
if ac_fn_cxx_try_compile "$LINENO"
then :
ac_cv_c_bigendian=yes
else $as_nop
ac_cv_c_bigendian=no
fi
rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
fi
rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
fi
if test $ac_cv_c_bigendian = unknown; then
# See if <limits.h> defines _LITTLE_ENDIAN or _BIG_ENDIAN (e.g., Solaris).
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <limits.h>
int
main (void)
{
#if ! (defined _LITTLE_ENDIAN || defined _BIG_ENDIAN)
bogus endian macros
#endif
;
return 0;
}
_ACEOF
if ac_fn_cxx_try_compile "$LINENO"
then :
# It does; now see whether it defined to _BIG_ENDIAN or not.
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <limits.h>
int
main (void)
{
#ifndef _BIG_ENDIAN
not big endian
#endif
;
return 0;
}
_ACEOF
if ac_fn_cxx_try_compile "$LINENO"
then :
ac_cv_c_bigendian=yes
else $as_nop
ac_cv_c_bigendian=no
fi
rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
fi
rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
fi
if test $ac_cv_c_bigendian = unknown; then
# Compile a test program.
if test "$cross_compiling" = yes
then :
# Try to guess by grepping values from an object file.
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
unsigned short int ascii_mm[] =
{ 0x4249, 0x4765, 0x6E44, 0x6961, 0x6E53, 0x7953, 0 };
unsigned short int ascii_ii[] =
{ 0x694C, 0x5454, 0x656C, 0x6E45, 0x6944, 0x6E61, 0 };
int use_ascii (int i) {
return ascii_mm[i] + ascii_ii[i];
}
unsigned short int ebcdic_ii[] =
{ 0x89D3, 0xE3E3, 0x8593, 0x95C5, 0x89C4, 0x9581, 0 };
unsigned short int ebcdic_mm[] =
{ 0xC2C9, 0xC785, 0x95C4, 0x8981, 0x95E2, 0xA8E2, 0 };
int use_ebcdic (int i) {
return ebcdic_mm[i] + ebcdic_ii[i];
}
extern int foo;
int
main (void)
{
return use_ascii (foo) == use_ebcdic (foo);
;
return 0;
}
_ACEOF
if ac_fn_cxx_try_compile "$LINENO"
then :
if grep BIGenDianSyS conftest.$ac_objext >/dev/null; then
ac_cv_c_bigendian=yes
fi
if grep LiTTleEnDian conftest.$ac_objext >/dev/null ; then
if test "$ac_cv_c_bigendian" = unknown; then
ac_cv_c_bigendian=no
else
# finding both strings is unlikely to happen, but who knows?
ac_cv_c_bigendian=unknown
fi
fi
fi
rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext
else $as_nop
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
$ac_includes_default
int
main (void)
{
/* Are we little or big endian? From Harbison&Steele. */
union
{
long int l;
char c[sizeof (long int)];
} u;
u.l = 1;
return u.c[sizeof (long int) - 1] == 1;
;
return 0;
}
_ACEOF
if ac_fn_cxx_try_run "$LINENO"
then :
ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=1"
ac_cv_c_bigendian=no
else $as_nop
ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=0"
ac_cv_c_bigendian=yes
fi
rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
conftest.$ac_objext conftest.beam conftest.$ac_ext
fi
fi
fi
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_bigendian" >&5
printf "%s\n" "$ac_cv_c_bigendian" >&6; }
case $ac_cv_c_bigendian in #(
yes)
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: using big endian" >&5
printf "%s\n" "using big endian" >&6; }
ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=0";; #(
no)
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: using little endian" >&5
printf "%s\n" "using little endian" >&6; }
ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=1" ;; #(
universal)
printf "%s\n" "#define AC_APPLE_UNIVERSAL_BUILD 1" >>confdefs.h
;; #(
*)
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unknown" >&5
printf "%s\n" "unknown" >&6; }
as_fn_error $? "Could not determine endianness. Please set USE_LITTLE_ENDIAN" "$LINENO" 5
;;
esac
else $as_nop
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: Forcing endianness to: ${USE_LITTLE_ENDIAN}" >&5
printf "%s\n" "$as_me: Forcing endianness to: ${USE_LITTLE_ENDIAN}" >&6;}
ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=${USE_LITTLE_ENDIAN}"
fi
OPENMP_CXXFLAGS=""
@@ -2877,6 +3207,8 @@ fi
ac_config_files="$ac_config_files src/Makevars"
ac_config_headers="$ac_config_headers config.h"
cat >confcache <<\_ACEOF
# This file is a shell script that caches the results of configure
# tests run on this system so they can be shared between configure
@@ -2967,43 +3299,7 @@ test "x$prefix" = xNONE && prefix=$ac_default_prefix
# Let make expand exec_prefix.
test "x$exec_prefix" = xNONE && exec_prefix='${prefix}'
# Transform confdefs.h into DEFS.
# Protect against shell expansion while executing Makefile rules.
# Protect against Makefile macro expansion.
#
# If the first sed substitution is executed (which looks for macros that
# take arguments), then branch to the quote section. Otherwise,
# look for a macro that doesn't take arguments.
ac_script='
:mline
/\\$/{
N
s,\\\n,,
b mline
}
t clear
:clear
s/^[ ]*#[ ]*define[ ][ ]*\([^ (][^ (]*([^)]*)\)[ ]*\(.*\)/-D\1=\2/g
t quote
s/^[ ]*#[ ]*define[ ][ ]*\([^ ][^ ]*\)[ ]*\(.*\)/-D\1=\2/g
t quote
b any
:quote
s/[ `~#$^&*(){}\\|;'\''"<>?]/\\&/g
s/\[/\\&/g
s/\]/\\&/g
s/\$/$$/g
H
:any
${
g
s/^\n//
s/\n/ /g
p
}
'
DEFS=`sed -n "$ac_script" confdefs.h`
DEFS=-DHAVE_CONFIG_H
ac_libobjs=
ac_ltlibobjs=
@@ -3023,6 +3319,7 @@ LTLIBOBJS=$ac_ltlibobjs
: "${CONFIG_STATUS=./config.status}"
ac_write_fail=0
ac_clean_files_save=$ac_clean_files
@@ -3412,7 +3709,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
-This file was extended by xgboost $as_me 2.0.0, which was
+This file was extended by xgboost $as_me 2.2.0, which was
generated by GNU Autoconf 2.71. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@@ -3430,11 +3727,15 @@ case $ac_config_files in *"
"*) set x $ac_config_files; shift; ac_config_files=$*;;
esac
case $ac_config_headers in *"
"*) set x $ac_config_headers; shift; ac_config_headers=$*;;
esac
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
# Files that config.status was made for.
config_files="$ac_config_files"
config_headers="$ac_config_headers"
_ACEOF
@@ -3455,10 +3756,15 @@ Usage: $0 [OPTION]... [TAG]...
--recheck update $as_me by reconfiguring in the same conditions
--file=FILE[:TEMPLATE]
instantiate the configuration file FILE
--header=FILE[:TEMPLATE]
instantiate the configuration header FILE
Configuration files:
$config_files
Configuration headers:
$config_headers
Report bugs to the package provider."
_ACEOF
@@ -3467,7 +3773,7 @@ ac_cs_config_escaped=`printf "%s\n" "$ac_cs_config" | sed "s/^ //; s/'/'\\\\\\\\
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config='$ac_cs_config_escaped'
ac_cs_version="\\
-xgboost config.status 2.0.0
+xgboost config.status 2.2.0
configured by $0, generated by GNU Autoconf 2.71,
with options \\"\$ac_cs_config\\"
@@ -3521,7 +3827,18 @@ do
esac
as_fn_append CONFIG_FILES " '$ac_optarg'"
ac_need_defaults=false;;
--he | --h | --help | --hel | -h )
--header | --heade | --head | --hea )
$ac_shift
case $ac_optarg in
*\'*) ac_optarg=`printf "%s\n" "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;;
esac
as_fn_append CONFIG_HEADERS " '$ac_optarg'"
ac_need_defaults=false;;
--he | --h)
# Conflict between --help and --header
as_fn_error $? "ambiguous option: \`$1'
Try \`$0 --help' for more information.";;
--help | --hel | -h )
printf "%s\n" "$ac_cs_usage"; exit ;;
-q | -quiet | --quiet | --quie | --qui | --qu | --q \
| -silent | --silent | --silen | --sile | --sil | --si | --s)
@@ -3578,6 +3895,7 @@ for ac_config_target in $ac_config_targets
do
case $ac_config_target in
"src/Makevars") CONFIG_FILES="$CONFIG_FILES src/Makevars" ;;
"config.h") CONFIG_HEADERS="$CONFIG_HEADERS config.h" ;;
*) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;;
esac
@@ -3590,6 +3908,7 @@ done
# bizarre bug on SunOS 4.1.3.
if $ac_need_defaults; then
test ${CONFIG_FILES+y} || CONFIG_FILES=$config_files
test ${CONFIG_HEADERS+y} || CONFIG_HEADERS=$config_headers
fi
# Have a temporary directory for convenience. Make it in the build tree
@@ -3777,8 +4096,116 @@ fi
cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
fi # test -n "$CONFIG_FILES"
# Set up the scripts for CONFIG_HEADERS section.
# No need to generate them if there are no CONFIG_HEADERS.
# This happens for instance with `./config.status Makefile'.
if test -n "$CONFIG_HEADERS"; then
cat >"$ac_tmp/defines.awk" <<\_ACAWK ||
BEGIN {
_ACEOF
eval set X " :F $CONFIG_FILES "
# Transform confdefs.h into an awk script `defines.awk', embedded as
# here-document in config.status, that substitutes the proper values into
# config.h.in to produce config.h.
# Create a delimiter string that does not exist in confdefs.h, to ease
# handling of long lines.
ac_delim='%!_!# '
for ac_last_try in false false :; do
ac_tt=`sed -n "/$ac_delim/p" confdefs.h`
if test -z "$ac_tt"; then
break
elif $ac_last_try; then
as_fn_error $? "could not make $CONFIG_HEADERS" "$LINENO" 5
else
ac_delim="$ac_delim!$ac_delim _$ac_delim!! "
fi
done
# For the awk script, D is an array of macro values keyed by name,
# likewise P contains macro parameters if any. Preserve backslash
# newline sequences.
ac_word_re=[_$as_cr_Letters][_$as_cr_alnum]*
sed -n '
s/.\{148\}/&'"$ac_delim"'/g
t rset
:rset
s/^[ ]*#[ ]*define[ ][ ]*/ /
t def
d
:def
s/\\$//
t bsnl
s/["\\]/\\&/g
s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\
D["\1"]=" \3"/p
s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2"/p
d
:bsnl
s/["\\]/\\&/g
s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\
D["\1"]=" \3\\\\\\n"\\/p
t cont
s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2\\\\\\n"\\/p
t cont
d
:cont
n
s/.\{148\}/&'"$ac_delim"'/g
t clear
:clear
s/\\$//
t bsnlc
s/["\\]/\\&/g; s/^/"/; s/$/"/p
d
:bsnlc
s/["\\]/\\&/g; s/^/"/; s/$/\\\\\\n"\\/p
b cont
' <confdefs.h | sed '
s/'"$ac_delim"'/"\\\
"/g' >>$CONFIG_STATUS || ac_write_fail=1
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
for (key in D) D_is_set[key] = 1
FS = ""
}
/^[\t ]*#[\t ]*(define|undef)[\t ]+$ac_word_re([\t (]|\$)/ {
line = \$ 0
split(line, arg, " ")
if (arg[1] == "#") {
defundef = arg[2]
mac1 = arg[3]
} else {
defundef = substr(arg[1], 2)
mac1 = arg[2]
}
split(mac1, mac2, "(") #)
macro = mac2[1]
prefix = substr(line, 1, index(line, defundef) - 1)
if (D_is_set[macro]) {
# Preserve the white space surrounding the "#".
print prefix "define", macro P[macro] D[macro]
next
} else {
# Replace #undef with comments. This is necessary, for example,
# in the case of _POSIX_SOURCE, which is predefined and required
# on some systems where configure will not decide to define it.
if (defundef == "undef") {
print "/*", prefix defundef, macro, "*/"
next
}
}
}
{ print }
_ACAWK
_ACEOF
cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
as_fn_error $? "could not setup config headers machinery" "$LINENO" 5
fi # test -n "$CONFIG_HEADERS"
eval set X " :F $CONFIG_FILES :H $CONFIG_HEADERS "
shift
for ac_tag
do
@@ -3986,7 +4413,30 @@ which seems to be undefined. Please make sure it is defined" >&2;}
esac \
|| as_fn_error $? "could not create $ac_file" "$LINENO" 5
;;
:H)
#
# CONFIG_HEADER
#
if test x"$ac_file" != x-; then
{
printf "%s\n" "/* $configure_input */" >&1 \
&& eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs"
} >"$ac_tmp/config.h" \
|| as_fn_error $? "could not create $ac_file" "$LINENO" 5
if diff "$ac_file" "$ac_tmp/config.h" >/dev/null 2>&1; then
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: $ac_file is unchanged" >&5
printf "%s\n" "$as_me: $ac_file is unchanged" >&6;}
else
rm -f "$ac_file"
mv "$ac_tmp/config.h" "$ac_file" \
|| as_fn_error $? "could not create $ac_file" "$LINENO" 5
fi
else
printf "%s\n" "/* $configure_input */" >&1 \
&& eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" \
|| as_fn_error $? "could not create -" "$LINENO" 5
fi
;;
esac


@@ -2,7 +2,7 @@
AC_PREREQ(2.69)
-AC_INIT([xgboost],[2.0.0],[],[xgboost],[])
+AC_INIT([xgboost],[2.2.0],[],[xgboost],[])
: ${R_HOME=`R RHOME`}
if test -z "${R_HOME}"; then
@@ -28,11 +28,22 @@ AC_MSG_RESULT([])
AC_CHECK_LIB([execinfo], [backtrace], [BACKTRACE_LIB=-lexecinfo], [BACKTRACE_LIB=''])
### Endian detection
AC_MSG_CHECKING([endian])
AC_MSG_RESULT([])
AC_RUN_IFELSE([AC_LANG_PROGRAM([[#include <stdint.h>]], [[const uint16_t endianness = 256; return !!(*(const uint8_t *)&endianness);]])],
[ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=1"],
[ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=0"])
AC_ARG_VAR(USE_LITTLE_ENDIAN, "Whether to build with little endian (checks at compile time if unset)")
AS_IF([test -z "${USE_LITTLE_ENDIAN+x}"], [
AC_MSG_NOTICE([Checking system endianness as USE_LITTLE_ENDIAN is unset])
AC_MSG_CHECKING([system endianness])
AC_C_BIGENDIAN(
[AC_MSG_RESULT([using big endian])
ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=0"],
[AC_MSG_RESULT([using little endian])
ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=1"],
[AC_MSG_RESULT([unknown])
AC_MSG_ERROR([Could not determine endianness. Please set USE_LITTLE_ENDIAN])]
)
], [
AC_MSG_NOTICE([Forcing endianness to: ${USE_LITTLE_ENDIAN}])
ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=${USE_LITTLE_ENDIAN}"
])
OPENMP_CXXFLAGS=""
@@ -73,4 +84,5 @@ AC_SUBST(OPENMP_LIB)
AC_SUBST(ENDIAN_FLAG)
AC_SUBST(BACKTRACE_LIB)
AC_CONFIG_FILES([src/Makevars])
AC_CONFIG_HEADERS([config.h])
AC_OUTPUT
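
As a quick illustration of the endian handling above (a hedged sketch: the source tarball filename is only a placeholder, while .Platform$endian and the configure.vars argument of install.packages() are standard base R):

# Report this machine's byte order ("little" or "big") from base R
.Platform$endian
# Skip the configure-time probe and force the flag instead; the tarball
# path below is a placeholder for a real xgboost source package
install.packages(
  "xgboost_2.2.0.tar.gz",
  repos = NULL, type = "source",
  configure.vars = c(xgboost = "USE_LITTLE_ENDIAN=1")
)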


@@ -1,15 +0,0 @@
basic_walkthrough Basic feature walkthrough
caret_wrapper Use xgboost to train in caret library
custom_objective Customize loss function, and evaluation metric
boost_from_prediction Boosting from existing prediction
predict_first_ntree Predicting using first n trees
generalized_linear_model Generalized Linear Model
cross_validation Cross validation
create_sparse_matrix Create Sparse Matrix
predict_leaf_indices Predicting the corresponding leaves
early_stopping Early Stop in training
poisson_regression Poisson regression on count data
tweedie_regression Tweedie regression
gpu_accelerated GPU-accelerated tree building algorithms
interaction_constraints Interaction constraints among features


@@ -1,20 +0,0 @@
XGBoost R Feature Walkthrough
====
* [Basic walkthrough of wrappers](basic_walkthrough.R)
* [Train an xgboost model from the caret library](caret_wrapper.R)
* [Customize loss function, and evaluation metric](custom_objective.R)
* [Boosting from existing prediction](boost_from_prediction.R)
* [Predicting using first n trees](predict_first_ntree.R)
* [Generalized Linear Model](generalized_linear_model.R)
* [Cross validation](cross_validation.R)
* [Create a sparse matrix from a dense one](create_sparse_matrix.R)
* [Use GPU-accelerated tree building algorithms](gpu_accelerated.R)
Benchmarks
====
* [Starter script for Kaggle Higgs Boson](../../demo/kaggle-higgs)
Notes
====
* Contribution of examples and benchmarks is more than welcome!
* If you would like to share how you use xgboost to solve your problem, send a pull request :)


@@ -1,112 +0,0 @@
require(xgboost)
require(methods)
# we load in the agaricus dataset
# In this example, we are aiming to predict whether a mushroom is edible
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
train <- agaricus.train
test <- agaricus.test
# the loaded data is stored as a sparse matrix, and the label is a numeric vector in {0,1}
class(train$label)
class(train$data)
#-------------Basic Training using XGBoost-----------------
# this is the basic usage of xgboost: you can put a matrix in the data field
# note: we are putting in a sparse matrix here, xgboost naturally handles sparse input
# use a sparse matrix when your features are sparse (e.g. when you are using one-hot encoding vectors)
print("Training xgboost with sparseMatrix")
bst <- xgboost(data = train$data, label = train$label, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic")
# alternatively, you can put in a dense matrix, i.e. a basic R matrix
print("Training xgboost with Matrix")
bst <- xgboost(data = as.matrix(train$data), label = train$label, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic")
# you can also put in an xgb.DMatrix object, which stores label, data and other metadata needed for advanced features
print("Training xgboost with xgb.DMatrix")
dtrain <- xgb.DMatrix(data = train$data, label = train$label)
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nrounds = 2, nthread = 2,
objective = "binary:logistic")
# Verbose = 0,1,2
print("Train xgboost with verbose 0, no message")
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic", verbose = 0)
print("Train xgboost with verbose 1, print evaluation metric")
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic", verbose = 1)
print("Train xgboost with verbose 2, also print information about tree")
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic", verbose = 2)
# you can also specify data as file path to a LIBSVM format input
# since we do not have this file with us, the following line is just for illustration
# bst <- xgboost(data = 'agaricus.train.svm', max_depth = 2, eta = 1, nrounds = 2,objective = "binary:logistic")
#--------------------basic prediction using xgboost--------------
# you can do prediction using the following line
# you can put in Matrix, sparseMatrix, or xgb.DMatrix
pred <- predict(bst, test$data)
err <- mean(as.numeric(pred > 0.5) != test$label)
print(paste("test-error=", err))
#-------------------save and load models-------------------------
# save model to binary local file
xgb.save(bst, "xgboost.model")
# load binary model to R
bst2 <- xgb.load("xgboost.model")
pred2 <- predict(bst2, test$data)
# pred2 should be identical to pred
print(paste("sum(abs(pred2-pred))=", sum(abs(pred2 - pred))))
# save model to R's raw vector
raw <- xgb.save.raw(bst)
# load binary model to R
bst3 <- xgb.load.raw(raw)
pred3 <- predict(bst3, test$data)
# pred3 should be identical to pred
print(paste("sum(abs(pred3-pred))=", sum(abs(pred3 - pred))))
#----------------Advanced features --------------
# to use advanced features, we need to put data in xgb.DMatrix
dtrain <- xgb.DMatrix(data = train$data, label = train$label)
dtest <- xgb.DMatrix(data = test$data, label = test$label)
#---------------Using watchlist----------------
# a watchlist is a list of xgb.DMatrix objects, each of them tagged with a name
watchlist <- list(train = dtrain, test = dtest)
# to train with a watchlist, use xgb.train, which supports more advanced features
# the watchlist allows us to monitor the evaluation results on all data in the list
print("Train xgboost using xgb.train with watchlist")
bst <- xgb.train(data = dtrain, max_depth = 2, eta = 1, nrounds = 2, watchlist = watchlist,
nthread = 2, objective = "binary:logistic")
# we can change evaluation metrics, or use multiple evaluation metrics
print("train xgboost using xgb.train with watchlist, watch logloss and error")
bst <- xgb.train(data = dtrain, max_depth = 2, eta = 1, nrounds = 2, watchlist = watchlist,
eval_metric = "error", eval_metric = "logloss",
nthread = 2, objective = "binary:logistic")
# xgb.DMatrix can also be saved using xgb.DMatrix.save
xgb.DMatrix.save(dtrain, "dtrain.buffer")
# to load it in, simply call xgb.DMatrix
dtrain2 <- xgb.DMatrix("dtrain.buffer")
bst <- xgb.train(data = dtrain2, max_depth = 2, eta = 1, nrounds = 2, watchlist = watchlist,
nthread = 2, objective = "binary:logistic")
# information can be extracted from xgb.DMatrix using getinfo
label <- getinfo(dtest, "label")
pred <- predict(bst, dtest)
err <- as.numeric(sum(as.integer(pred > 0.5) != label)) / length(label)
print(paste("test-error=", err))
# You can dump the tree you learned using xgb.dump into a text file
dump_path <- file.path(tempdir(), 'dump.raw.txt')
xgb.dump(bst, dump_path, with_stats = TRUE)
# Finally, you can check which features are the most important.
print("Most important features (look at column Gain):")
imp_matrix <- xgb.importance(feature_names = colnames(train$data), model = bst)
print(imp_matrix)
# Feature importance bar plot by gain
print("Feature importance Plot : ")
print(xgb.plot.importance(importance_matrix = imp_matrix))


@@ -1,26 +0,0 @@
require(xgboost)
# load in the agaricus dataset
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
watchlist <- list(eval = dtest, train = dtrain)
###
# advanced: start from an initial base prediction
#
print('start running example to start from a initial prediction')
# train xgboost for 1 round
param <- list(max_depth = 2, eta = 1, nthread = 2, objective = 'binary:logistic')
bst <- xgb.train(param, dtrain, 1, watchlist)
# Note: we need the margin value instead of the transformed prediction when setting base_margin
# predicting with outputmargin = TRUE always gives the margin values before the logistic transformation
ptrain <- predict(bst, dtrain, outputmargin = TRUE)
ptest <- predict(bst, dtest, outputmargin = TRUE)
# set the base_margin property of dtrain and dtest
# base margin is the base prediction we will boost from
setinfo(dtrain, "base_margin", ptrain)
setinfo(dtest, "base_margin", ptest)
print('this is result of boost from initial prediction')
bst <- xgb.train(params = param, data = dtrain, nrounds = 1, watchlist = watchlist)


@@ -1,44 +0,0 @@
# install development version of caret library that contains xgboost models
require(caret)
require(xgboost)
require(data.table)
require(vcd)
require(e1071)
# Load Arthritis dataset in memory.
data(Arthritis)
# Create a copy of the dataset with data.table package
# (data.table is 100% compliant with R data frames, but its syntax is a lot more consistent
# and its performance is really good).
df <- data.table(Arthritis, keep.rownames = FALSE)
# Let's add some new categorical features to see if it helps.
# Of course these features are highly correlated with the Age feature.
# Usually that's not a good thing in ML, but tree algorithms (including boosted trees) are able to select the best features,
# even in the presence of highly correlated features.
# For the first feature we create groups of age by rounding the real age.
# Note that we transform it to a factor (categorical data) so the algorithm treats the groups as independent values.
df[, AgeDiscret := as.factor(round(Age / 10, 0))]
# Here is an even stronger simplification of the real age with an arbitrary split at 30 years old.
# I chose this value based on nothing.
# We will see later if simplifying the information based on arbitrary values is a good strategy
# (I am sure you already have an idea of how well it will work!).
df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]
# We remove ID as there is nothing to learn from this feature (it will just add some noise as the dataset is small).
df[, ID := NULL]
#-------------Basic Training using XGBoost in caret Library-----------------
# Set up control parameters for caret::train
# Here we use 10-fold cross-validation, repeated twice, with random search for tuning hyper-parameters.
fitControl <- trainControl(method = "repeatedcv", number = 10, repeats = 2, search = "random")
# train a xgbTree model using caret::train
model <- train(factor(Improved) ~ ., data = df, method = "xgbTree", trControl = fitControl)
# Instead of tree for our boosters, you can also fit a linear regression or logistic regression model
# using xgbLinear
# model <- train(factor(Improved)~., data = df, method = "xgbLinear", trControl = fitControl)
# See model results
print(model)


@@ -1,117 +0,0 @@
require(xgboost)
require(Matrix)
require(data.table)
if (!require(vcd)) {
install.packages('vcd') #Available in CRAN. Used for its dataset with categorical values.
require(vcd)
}
# According to its documentation, XGBoost works only on numbers.
# Sometimes the dataset we have to work on contains categorical data.
# A categorical variable is one which has a fixed number of possible values.
# For example, if for each observation a variable called "Colour" can have only
# "red", "blue" or "green" as its value, it is a categorical variable.
#
# In R, a categorical variable is called a factor.
# Type ?factor in console for more information.
#
# In this demo we will see how to transform a dense dataframe with categorical variables to a sparse matrix
# before analyzing it in XGBoost.
# The method we are going to see is usually called "one hot encoding".
#load Arthritis dataset in memory.
data(Arthritis)
# create a copy of the dataset with data.table package
# (data.table is 100% compliant with R data frames, but its syntax is a lot more consistent
# and its performance is really good).
df <- data.table(Arthritis, keep.rownames = FALSE)
# Let's have a look to the data.table
cat("Print the dataset\n")
print(df)
# 2 columns have factor type, one has ordinal type
# (ordinal variable is a categorical variable with values which can be ordered, here: None > Some > Marked).
cat("Structure of the dataset\n")
str(df)
# Let's add some new categorical features to see if it helps.
# Of course these features are highly correlated with the Age feature.
# Usually that's not a good thing in ML, but tree algorithms (including boosted trees) are able to select the best features,
# even in the presence of highly correlated features.
# For the first feature we create groups of age by rounding the real age.
# Note that we transform it to a factor (categorical data) so the algorithm treats the groups as independent values.
df[, AgeDiscret := as.factor(round(Age / 10, 0))]
# Here is an even stronger simplification of the real age with an arbitrary split at 30 years old.
# I chose this value based on nothing.
# We will see later if simplifying the information based on arbitrary values is a good strategy
# (I am sure you already have an idea of how well it will work!).
df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]
# We remove ID as there is nothing to learn from this feature (it will just add some noise as the dataset is small).
df[, ID := NULL]
# List the different values for the column Treatment: Placebo, Treated.
cat("Values of the categorical feature Treatment\n")
print(levels(df[, Treatment]))
# Next step, we will transform the categorical data to dummy variables.
# This method is also called one hot encoding.
# The purpose is to transform each value of each categorical feature in one binary feature.
#
# For instance, the column Treatment will be replaced by two columns, Placebo and Treated.
# Each of them will be binary.
# For example, an observation which had the value Placebo in the column Treatment before the transformation will have,
# after the transformation, the value 1 in the new column Placebo and the value 0 in the new column Treated.
#
# The formula Improved ~ . - 1 used below means: transform all categorical features but the column Improved to binary values.
# Column Improved is excluded because it will be our output column, the one we want to predict.
sparse_matrix <- sparse.model.matrix(Improved ~ . - 1, data = df)
cat("Encoding of the sparse Matrix\n")
print(sparse_matrix)
# Create the output vector (not sparse)
# 1. Set, for all rows, field in Y column to 0;
# 2. set Y to 1 when Improved == Marked;
# 3. Return Y column
output_vector <- df[, Y := 0][Improved == "Marked", Y := 1][, Y]
# Following is the same process as other demo
cat("Learning...\n")
bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 9,
eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic")
importance <- xgb.importance(feature_names = colnames(sparse_matrix), model = bst)
print(importance)
# According to the matrix below, the most important feature in this dataset for predicting whether the treatment will work is Age.
# The second most important feature is having received a placebo or not.
# The sex is third.
# Then we see our generated features (AgeDiscret). We can see that their contribution is very low (Gain column).
# Do these results make sense?
# Let's check some Chi2 between each of these features and the outcome.
print(chisq.test(df$Age, df$Y))
# Pearson's chi-squared statistic between Age and the illness disappearing is about 35.
print(chisq.test(df$AgeDiscret, df$Y))
# Our first simplification of Age gives a chi-squared statistic of about 8.
print(chisq.test(df$AgeCat, df$Y))
# The purely arbitrary split between young and old at 30 years old has a low chi-squared statistic of about 2.
# It's a result we might expect, as maybe in my mind being over 30 is being old (I am 32 and starting to feel old, which may explain that),
# but for the illness we are studying, the age of vulnerability is not the same.
# Don't let your "gut" lower the quality of your model. In "data science", there is science :-)
# As you can see, in general destroying information by simplifying it won't improve your model.
# Chi2 just demonstrates that.
# But in more complex cases, creating a new feature based on an existing one which makes the link with the outcome
# more obvious may help the algorithm and improve the model.
# The case studied here is not complex enough to show that. Check the Kaggle forums for some challenging datasets.
# However, it's almost always worse when you add some arbitrary rules.
# Moreover, you can notice that even though we added some not-so-useful new features highly correlated with
# other features, the boosted tree algorithm was still able to choose the best one, which in this case is Age.
# Linear models may not be that strong in this scenario.


@@ -1,51 +0,0 @@
require(xgboost)
# load in the agaricus dataset
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
nrounds <- 2
param <- list(max_depth = 2, eta = 1, nthread = 2, objective = 'binary:logistic')
cat('running cross validation\n')
# do cross validation, this will print result out as
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
xgb.cv(param, dtrain, nrounds, nfold = 5, metrics = 'error')
cat('running cross validation, disable standard deviation display\n')
# do cross validation, this will print result out as
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
xgb.cv(param, dtrain, nrounds, nfold = 5,
metrics = 'error', showsd = FALSE)
###
# you can also do cross validation with customized loss function
# See custom_objective.R
##
print('running cross validation with a customized loss function')
logregobj <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")
preds <- 1 / (1 + exp(-preds))
grad <- preds - labels
hess <- preds * (1 - preds)
return(list(grad = grad, hess = hess))
}
evalerror <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")
err <- as.numeric(sum(labels != (preds > 0))) / length(labels)
return(list(metric = "error", value = err))
}
param <- list(max_depth = 2, eta = 1,
objective = logregobj, eval_metric = evalerror)
# train with customized objective
xgb.cv(params = param, data = dtrain, nrounds = nrounds, nfold = 5)
# do cross validation with prediction values for each fold
res <- xgb.cv(params = param, data = dtrain, nrounds = nrounds, nfold = 5, prediction = TRUE)
res$evaluation_log
length(res$pred)


@@ -1,65 +0,0 @@
require(xgboost)
# load in the agaricus dataset
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
# note: for a customized objective function, we leave objective as the default
# note: what we get in prediction is the margin value
# you must know what you are doing
watchlist <- list(eval = dtest, train = dtrain)
num_round <- 2
# user-defined objective function: given the predictions, return the gradient and second-order gradient
# this is the log-likelihood loss
logregobj <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")
preds <- 1 / (1 + exp(-preds))
grad <- preds - labels
hess <- preds * (1 - preds)
return(list(grad = grad, hess = hess))
}
# user-defined evaluation function: returns a pair (metric_name, result)
# NOTE: when you use a customized loss function, the default prediction value is the margin
# this may make built-in evaluation metrics not function properly
# for example, with logistic loss the prediction is the score before the logistic transformation
# while the built-in evaluation error assumes the input is after the logistic transformation
# Keep this in mind when you use the customization, and maybe you need to write a customized evaluation function
evalerror <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")
err <- as.numeric(sum(labels != (preds > 0))) / length(labels)
return(list(metric = "error", value = err))
}
param <- list(max_depth = 2, eta = 1, nthread = 2, verbosity = 0,
objective = logregobj, eval_metric = evalerror)
print('start training with user customized objective')
# training with customized objective, we can also do step by step training
# simply look at xgboost.py's implementation of train
bst <- xgb.train(param, dtrain, num_round, watchlist)
#
# there can be cases where you want additional information
# to be considered besides the properties of the DMatrix that you can get via getinfo
# you can set additional information as attributes of the DMatrix
# set the label attribute of dtrain to its label; we use label as an example, it can be anything
attr(dtrain, 'label') <- getinfo(dtrain, 'label')
# this is the new customized objective, where you can access the things you set
# the same applies to a customized evaluation function
logregobjattr <- function(preds, dtrain) {
# now you can access the attribute in customized function
labels <- attr(dtrain, 'label')
preds <- 1 / (1 + exp(-preds))
grad <- preds - labels
hess <- preds * (1 - preds)
return(list(grad = grad, hess = hess))
}
param <- list(max_depth = 2, eta = 1, nthread = 2, verbosity = 0,
objective = logregobjattr, eval_metric = evalerror)
print('start training with user customized objective, with additional attributes in DMatrix')
# training with customized objective, we can also do step by step training
# simply look at xgboost.py's implementation of train
bst <- xgb.train(param, dtrain, num_round, watchlist)
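
For reference, the gradient and hessian returned by logregobj in this demo follow from differentiating the logistic loss with respect to the raw margin m; a short derivation:

p = \sigma(m) = \frac{1}{1 + e^{-m}}, \qquad
L(y, m) = -\bigl[\, y \log p + (1 - y) \log (1 - p) \,\bigr]

\frac{\partial L}{\partial m} = p - y, \qquad
\frac{\partial^2 L}{\partial m^2} = p \, (1 - p)

which matches grad = preds - labels and hess = preds * (1 - preds) once the sigmoid has been applied to the raw margins.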


@@ -1,40 +0,0 @@
require(xgboost)
# load in the agaricus dataset
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
# note: for customized objective function, we leave objective as default
# note: what we are getting is margin value in prediction
# you must know what you are doing
param <- list(max_depth = 2, eta = 1, nthread = 2, verbosity = 0)
watchlist <- list(eval = dtest)
num_round <- 20
# user-defined objective function: given the predictions, return the gradient and second-order gradient
# this is the log-likelihood loss
logregobj <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")
preds <- 1 / (1 + exp(-preds))
grad <- preds - labels
hess <- preds * (1 - preds)
return(list(grad = grad, hess = hess))
}
# user-defined evaluation function: returns a pair (metric_name, result)
# NOTE: when you use a customized loss function, the default prediction value is the margin
# this may make built-in evaluation metrics not function properly
# for example, with logistic loss the prediction is the score before the logistic transformation
# while the built-in evaluation error assumes the input is after the logistic transformation
# Keep this in mind when you use the customization, and maybe you need to write a customized evaluation function
evalerror <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")
err <- as.numeric(sum(labels != (preds > 0))) / length(labels)
return(list(metric = "error", value = err))
}
print('start training with early stopping setting')
bst <- xgb.train(param, dtrain, num_round, watchlist,
objective = logregobj, eval_metric = evalerror, maximize = FALSE,
early_stopping_rounds = 3)
bst <- xgb.cv(param, dtrain, num_round, nfold = 5,
objective = logregobj, eval_metric = evalerror,
maximize = FALSE, early_stopping_rounds = 3)


@@ -1,33 +0,0 @@
require(xgboost)
# load in the agaricus dataset
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
##
# this script demonstrate how to fit generalized linear model in xgboost
# basically, we are using linear model, instead of tree for our boosters
# you can fit a linear regression, or logistic regression model
##
# change booster to gblinear, so that we are fitting a linear model
# alpha is the L1 regularizer
# lambda is the L2 regularizer
# you can also set lambda_bias which is L2 regularizer on the bias term
param <- list(objective = "binary:logistic", booster = "gblinear",
nthread = 2, alpha = 0.0001, lambda = 1)
# normally, you do not need to set eta (step_size)
# XGBoost uses a parallel coordinate descent algorithm (shotgun),
# parallelization can affect convergence in certain cases
# setting eta to a smaller value, e.g. 0.5, can make the optimization more stable
##
# the rest of settings are the same
##
watchlist <- list(eval = dtest, train = dtrain)
num_round <- 2
bst <- xgb.train(param, dtrain, num_round, watchlist)
ypred <- predict(bst, dtest)
labels <- getinfo(dtest, 'label')
cat('error of preds=', mean(as.numeric(ypred > 0.5) != labels), '\n')


@@ -1,45 +0,0 @@
# An example of using GPU-accelerated tree building algorithms
#
# NOTE: it can only run if you have a CUDA-enabled GPU and the package was
# specially compiled with GPU support.
#
# For the current functionality, see
# https://xgboost.readthedocs.io/en/latest/gpu/index.html
#
library('xgboost')
# Simulate N x p random matrix with some binomial response dependent on pp columns
set.seed(111)
N <- 1000000
p <- 50
pp <- 25
X <- matrix(runif(N * p), ncol = p)
betas <- 2 * runif(pp) - 1
sel <- sort(sample(p, pp))
m <- X[, sel] %*% betas - 1 + rnorm(N)
y <- rbinom(N, 1, plogis(m))
tr <- sample.int(N, N * 0.75)
dtrain <- xgb.DMatrix(X[tr, ], label = y[tr])
dtest <- xgb.DMatrix(X[-tr, ], label = y[-tr])
wl <- list(train = dtrain, test = dtest)
# An example of running 'gpu_hist' algorithm
# which is
# - similar to the 'hist'
# - the fastest option for moderately large datasets
# - current limitations: max_depth < 16, does not implement guided loss
# You can use tree_method = 'gpu_exact' for another GPU-accelerated algorithm,
# which is slower and more memory-hungry, but does not use binning.
param <- list(objective = 'reg:logistic', eval_metric = 'auc', subsample = 0.5, nthread = 4,
max_bin = 64, tree_method = 'gpu_hist')
pt <- proc.time()
bst_gpu <- xgb.train(param, dtrain, watchlist = wl, nrounds = 50)
proc.time() - pt
# Compare to the 'hist' algorithm:
param$tree_method <- 'hist'
pt <- proc.time()
bst_hist <- xgb.train(param, dtrain, watchlist = wl, nrounds = 50)
proc.time() - pt


@@ -1,113 +0,0 @@
library(xgboost)
library(data.table)
set.seed(1024)
# Function to obtain a list of interactions fitted in trees, requires input of maximum depth
treeInteractions <- function(input_tree, input_max_depth) {
ID_merge <- i.id <- i.feature <- NULL # Suppress warning "no visible binding for global variable"
trees <- data.table::copy(input_tree) # copy tree input to prevent overwriting
if (input_max_depth < 2) return(list()) # no interactions if max depth < 2
if (nrow(input_tree) == 1) return(list())
# Attach parent nodes
for (i in 2:input_max_depth) {
if (i == 2) trees[, ID_merge := ID] else trees[, ID_merge := get(paste0('parent_', i - 2))]
parents_left <- trees[!is.na(Split), list(i.id = ID, i.feature = Feature, ID_merge = Yes)]
parents_right <- trees[!is.na(Split), list(i.id = ID, i.feature = Feature, ID_merge = No)]
data.table::setorderv(trees, 'ID_merge')
data.table::setorderv(parents_left, 'ID_merge')
data.table::setorderv(parents_right, 'ID_merge')
trees <- merge(trees, parents_left, by = 'ID_merge', all.x = TRUE)
trees[!is.na(i.id), c(paste0('parent_', i - 1), paste0('parent_feat_', i - 1))
:= list(i.id, i.feature)]
trees[, c('i.id', 'i.feature') := NULL]
trees <- merge(trees, parents_right, by = 'ID_merge', all.x = TRUE)
trees[!is.na(i.id), c(paste0('parent_', i - 1), paste0('parent_feat_', i - 1))
:= list(i.id, i.feature)]
trees[, c('i.id', 'i.feature') := NULL]
}
# Extract nodes with interactions
interaction_trees <- trees[!is.na(Split) & !is.na(parent_1), # nolint: object_usage_linter
c('Feature', paste0('parent_feat_', 1:(input_max_depth - 1))),
with = FALSE]
interaction_trees_split <- split(interaction_trees, seq_len(nrow(interaction_trees)))
interaction_list <- lapply(interaction_trees_split, as.character)
# Remove NAs (no parent interaction)
interaction_list <- lapply(interaction_list, function(x) x[!is.na(x)])
# Remove non-interactions (same variable)
interaction_list <- lapply(interaction_list, unique) # remove same variables
interaction_length <- sapply(interaction_list, length)
interaction_list <- interaction_list[interaction_length > 1]
interaction_list <- unique(lapply(interaction_list, sort))
return(interaction_list)
}
# Generate sample data
x <- list()
for (i in 1:10) {
x[[i]] <- i * rnorm(1000, 10)
}
x <- as.data.table(x)
y <- -1 * x[, rowSums(.SD)] + x[['V1']] * x[['V2']] + x[['V3']] * x[['V4']] * x[['V5']] +
  rnorm(1000, 0.001) + 3 * sin(x[['V7']])
train <- as.matrix(x)
# Interaction constraint list (column names form)
interaction_list <- list(c('V1', 'V2'), c('V3', 'V4', 'V5'))
# Convert interaction constraint list into feature index form
cols2ids <- function(object, col_names) {
LUT <- seq_along(col_names) - 1
names(LUT) <- col_names
rapply(object, function(x) LUT[x], classes = "character", how = "replace")
}
interaction_list_fid <- cols2ids(interaction_list, colnames(train))
# Fit model with interaction constraints
bst <- xgboost(data = train, label = y, max_depth = 4,
eta = 0.1, nthread = 2, nrounds = 1000,
interaction_constraints = interaction_list_fid)
bst_tree <- xgb.model.dt.tree(colnames(train), bst)
bst_interactions <- treeInteractions(bst_tree, 4)
# interactions constrained to combinations of V1*V2 and V3*V4*V5
# Fit model without interaction constraints
bst2 <- xgboost(data = train, label = y, max_depth = 4,
eta = 0.1, nthread = 2, nrounds = 1000)
bst2_tree <- xgb.model.dt.tree(colnames(train), bst2)
bst2_interactions <- treeInteractions(bst2_tree, 4) # many more interactions
# Fit model with both interaction and monotonicity constraints
bst3 <- xgboost(data = train, label = y, max_depth = 4,
eta = 0.1, nthread = 2, nrounds = 1000,
interaction_constraints = interaction_list_fid,
monotone_constraints = c(-1, 0, 0, 0, 0, 0, 0, 0, 0, 0))
bst3_tree <- xgb.model.dt.tree(colnames(train), bst3)
bst3_interactions <- treeInteractions(bst3_tree, 4)
# interactions still constrained to combinations of V1*V2 and V3*V4*V5
# Show monotonic constraints still apply by checking scores after incrementing V1
x1 <- sort(unique(x[['V1']]))
for (i in seq_along(x1)){
testdata <- copy(x[, - ('V1')])
testdata[['V1']] <- x1[i]
testdata <- testdata[, paste0('V', 1:10), with = FALSE]
pred <- predict(bst3, as.matrix(testdata))
# Should not print out anything due to monotonic constraints
if (i > 1) if (any(pred > prev_pred)) print(i)
prev_pred <- pred
}


@@ -1,6 +0,0 @@
data(mtcars)
head(mtcars)
bst <- xgboost(data = as.matrix(mtcars[, -11]), label = mtcars[, 11],
objective = 'count:poisson', nrounds = 5)
pred <- predict(bst, as.matrix(mtcars[, -11]))
sqrt(mean((pred - mtcars[, 11]) ^ 2))


@@ -1,23 +0,0 @@
require(xgboost)
# load in the agaricus dataset
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
param <- list(max_depth = 2, eta = 1, objective = 'binary:logistic')
watchlist <- list(eval = dtest, train = dtrain)
nrounds <- 2
# training the model for two rounds
bst <- xgb.train(param, dtrain, nrounds, nthread = 2, watchlist)
cat('start testing prediction from first n trees\n')
labels <- getinfo(dtest, 'label')
### predict using first 1 tree
ypred1 <- predict(bst, dtest, ntreelimit = 1)
# by default, we predict using all the trees
ypred2 <- predict(bst, dtest)
cat('error of ypred1=', mean(as.numeric(ypred1 > 0.5) != labels), '\n')
cat('error of ypred2=', mean(as.numeric(ypred2 > 0.5) != labels), '\n')


@@ -1,55 +0,0 @@
require(xgboost)
require(data.table)
require(Matrix)
set.seed(1982)
# load in the agaricus dataset
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
param <- list(max_depth = 2, eta = 1, objective = 'binary:logistic')
nrounds <- 4
# training the model for four rounds
bst <- xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)
# Model accuracy without new features
accuracy.before <- (sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label)
/ length(agaricus.test$label))
# by default, we predict using all the trees
pred_with_leaf <- predict(bst, dtest, predleaf = TRUE)
head(pred_with_leaf)
create.new.tree.features <- function(model, original.features) {
pred_with_leaf <- predict(model, original.features, predleaf = TRUE)
cols <- list()
for (i in 1:model$niter) {
# max is not the real max, but it's not important for the purpose of adding features
leaf.id <- sort(unique(pred_with_leaf[, i]))
cols[[i]] <- factor(x = pred_with_leaf[, i], levels = leaf.id)
}
cbind(original.features, sparse.model.matrix(~ . - 1, as.data.frame(cols)))
}
# Convert previous features to one hot encoding
new.features.train <- create.new.tree.features(bst, agaricus.train$data)
new.features.test <- create.new.tree.features(bst, agaricus.test$data)
colnames(new.features.test) <- colnames(new.features.train)
# learning with new features
new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
watchlist <- list(train = new.dtrain)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
# Model accuracy with new features
accuracy.after <- (sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label)
/ length(agaricus.test$label))
# Here the accuracy was already good and is now perfect.
cat(paste("The accuracy was", accuracy.before, "before adding leaf features and it is now",
accuracy.after, "!\n"))


@@ -1,14 +0,0 @@
# running all scripts in demo folder, removed during packaging.
demo(basic_walkthrough, package = 'xgboost')
demo(custom_objective, package = 'xgboost')
demo(boost_from_prediction, package = 'xgboost')
demo(predict_first_ntree, package = 'xgboost')
demo(generalized_linear_model, package = 'xgboost')
demo(cross_validation, package = 'xgboost')
demo(create_sparse_matrix, package = 'xgboost')
demo(predict_leaf_indices, package = 'xgboost')
demo(early_stopping, package = 'xgboost')
demo(poisson_regression, package = 'xgboost')
demo(caret_wrapper, package = 'xgboost')
demo(tweedie_regression, package = 'xgboost')
#demo(gpu_accelerated, package = 'xgboost') # can only run when built with GPU support


@@ -1,49 +0,0 @@
library(xgboost)
library(data.table)
library(cplm)
data(AutoClaim)
# auto insurance dataset analyzed by Yip and Yau (2005)
dt <- data.table(AutoClaim)
# exclude these columns from the model matrix
exclude <- c('POLICYNO', 'PLCYDATE', 'CLM_FREQ5', 'CLM_AMT5', 'CLM_FLAG', 'IN_YY')
# retains the missing values
# NOTE: this dataset comes ready out of the box
options(na.action = 'na.pass')
x <- sparse.model.matrix(~ . - 1, data = dt[, -exclude, with = FALSE])
options(na.action = 'na.omit')
# response
y <- dt[, CLM_AMT5]
d_train <- xgb.DMatrix(data = x, label = y, missing = NA)
# the tweedie_variance_power parameter determines the shape of
# distribution
# - closer to 1 is more poisson like and the mass
# is more concentrated near zero
# - closer to 2 is more gamma like and the mass spreads
#   to the right with less concentration near zero
params <- list(
objective = 'reg:tweedie',
eval_metric = 'rmse',
tweedie_variance_power = 1.4,
max_depth = 6,
eta = 1)
bst <- xgb.train(
data = d_train,
params = params,
maximize = FALSE,
watchlist = list(train = d_train),
nrounds = 20)
var_imp <- xgb.importance(attr(x, 'Dimnames')[[2]], model = bst)
preds <- predict(bst, d_train)
rmse <- sqrt(mean((y - preds) ^ 2))
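
As background for the tweedie_variance_power comment above: a Tweedie distribution with index p has the variance function

\operatorname{Var}(Y) = \phi \, \mu^{p}, \qquad 1 < p < 2,

so p = 1 recovers the Poisson variance, p = 2 the gamma variance, and the value 1.4 used here sits between the two, closer to the Poisson end.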


@@ -55,7 +55,7 @@ message(sprintf("Creating '%s' from '%s'", OUT_DEF_FILE, IN_DLL_FILE))
}
# use objdump to dump all the symbols
OBJDUMP_FILE <- "objdump-out.txt"
OBJDUMP_FILE <- file.path(tempdir(), "objdump-out.txt")
.pipe_shell_command_to_stdout(
command = "objdump"
, args = c("-p", IN_DLL_FILE)


@@ -2,48 +2,101 @@
% Please edit documentation in R/utils.R
\name{a-compatibility-note-for-saveRDS-save}
\alias{a-compatibility-note-for-saveRDS-save}
-\title{Do not use \code{\link[base]{saveRDS}} or \code{\link[base]{save}} for long-term archival of
-models. Instead, use \code{\link{xgb.save}} or \code{\link{xgb.save.raw}}.}
+\title{Model Serialization and Compatibility}
\description{
It is a common practice to use the built-in \code{\link[base]{saveRDS}} function (or
\code{\link[base]{save}}) to persist R objects to the disk. While it is possible to persist
\code{xgb.Booster} objects using \code{\link[base]{saveRDS}}, it is not advisable to do so if
the model is to be accessed in the future. If you train a model with the current version of
XGBoost and persist it with \code{\link[base]{saveRDS}}, the model is not guaranteed to be
accessible in later releases of XGBoost. To ensure that your model can be accessed in future
releases of XGBoost, use \code{\link{xgb.save}} or \code{\link{xgb.save.raw}} instead.
When it comes to serializing XGBoost models, it's possible to use R serializers such as
\code{\link[=save]{save()}} or \code{\link[=saveRDS]{saveRDS()}} to serialize an XGBoost R model, but XGBoost also provides
its own serializers with better compatibility guarantees, which allow loading
said models in other language bindings of XGBoost.
Note that an \code{xgb.Booster} object (\strong{as produced by \code{\link[=xgb.train]{xgb.train()}}}, see rest of the doc
for objects produced by \code{\link[=xgboost]{xgboost()}}), outside of its core components, might also keep:
\itemize{
\item Additional model configuration (accessible through \code{\link[=xgb.config]{xgb.config()}}), which includes
model fitting parameters like \code{max_depth} and runtime parameters like \code{nthread}.
These are not necessarily useful for prediction/importance/plotting.
\item Additional R specific attributes - e.g. results of callbacks, such as evaluation logs,
which are kept as a \code{data.table} object, accessible through
\code{attributes(model)$evaluation_log} if present.
}
The first one (configurations) does not have the same compatibility guarantees as
the model itself, including attributes that are set and accessed through
\code{\link[=xgb.attributes]{xgb.attributes()}} - that is, such configuration might be lost after loading the
booster in a different XGBoost version, regardless of the serializer that was used.
These are saved when using \code{\link[=saveRDS]{saveRDS()}}, but will be discarded if loaded into an
incompatible XGBoost version. They are not saved when using XGBoost's
serializers from its public interface including \code{\link[=xgb.save]{xgb.save()}} and \code{\link[=xgb.save.raw]{xgb.save.raw()}}.
The second ones (R attributes) are not part of the standard XGBoost model structure,
and thus are not saved when using XGBoost's own serializers. These attributes are
only used for informational purposes, such as keeping track of evaluation metrics as
the model was fit, or saving the R call that produced the model, but are otherwise
not used for prediction / importance / plotting / etc.
These R attributes are only preserved when using R's serializers.
In addition to the regular \code{xgb.Booster} objects produced by \code{\link[=xgb.train]{xgb.train()}}, the
function \code{\link[=xgboost]{xgboost()}} produces a different subclass \code{xgboost}, which keeps other
additional metadata as R attributes such as class names in classification problems,
and which has a dedicated \code{predict} method that uses different defaults. XGBoost's
own serializers can work with this \code{xgboost} class, but as they do not keep R
attributes, the resulting object, when deserialized, is downcasted to the regular
\code{xgb.Booster} class (i.e. it loses the metadata, and the resulting object will use
\code{predict.xgb.Booster} instead of \code{predict.xgboost}) - for these \code{xgboost} objects,
\code{saveRDS} might thus be a better option if the extra functionalities are needed.
Note that XGBoost models in R from version \verb{2.1.0} onwards, and
XGBoost models from before version \verb{2.1.0}, have a very different R object structure and
are incompatible with each other. Hence, models that were saved with R serializers
like \code{\link[=saveRDS]{saveRDS()}} or \code{\link[=save]{save()}} before version \verb{2.1.0} will not work with later
\code{xgboost} versions and vice versa. Be aware that the structure of R model objects
could in theory change again in the future, so XGBoost's serializers
should be preferred for long-term storage.
Furthermore, note that using the package \code{qs} for serialization will require
version 0.26 or higher of said package, and will have the same compatibility
restrictions as R serializers.
}
\details{
Use \code{\link{xgb.save}} to save the XGBoost model as a stand-alone file. You may opt into
Use \code{\link[=xgb.save]{xgb.save()}} to save the XGBoost model as a stand-alone file. You may opt into
the JSON format by specifying the JSON extension. To read the model back, use
\code{\link{xgb.load}}.
\code{\link[=xgb.load]{xgb.load()}}.
Use \code{\link{xgb.save.raw}} to save the XGBoost model as a sequence (vector) of raw bytes
Use \code{\link[=xgb.save.raw]{xgb.save.raw()}} to save the XGBoost model as a sequence (vector) of raw bytes
in a future-proof manner. Future releases of XGBoost will be able to read the raw bytes and
re-construct the corresponding model. To read the model back, use \code{\link{xgb.load.raw}}.
The \code{\link{xgb.save.raw}} function is useful if you'd like to persist the XGBoost model
re-construct the corresponding model. To read the model back, use \code{\link[=xgb.load.raw]{xgb.load.raw()}}.
The \code{\link[=xgb.save.raw]{xgb.save.raw()}} function is useful if you would like to persist the XGBoost model
as part of another R object.
Note: Do not use \code{\link{xgb.serialize}} to store models long-term. It persists not only the
model but also internal configurations and parameters, and its format is not stable across
multiple XGBoost versions. Use \code{\link{xgb.serialize}} only for checkpointing.
Use \code{\link[=saveRDS]{saveRDS()}} if you require the R-specific attributes that a booster might have, such
as evaluation logs or the model class \code{xgboost} instead of \code{xgb.Booster}, but note that
future compatibility of such objects is outside XGBoost's control as it relies on R's
serialization format (see e.g. the details section in \link{serialize} and \code{\link[=save]{save()}} from base R).
For more details and explanation about model persistence and archival, consult the page
\url{https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html}.
}
\examples{
data(agaricus.train, package = "xgboost")
bst <- xgb.train(
data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
max_depth = 2,
eta = 1,
nthread = 2,
nrounds = 2,
objective = "binary:logistic"
)
# Save as a stand-alone file; load it with xgb.load()
fname <- file.path(tempdir(), "xgb_model.ubj")
xgb.save(bst, fname)
bst2 <- xgb.load(fname)
# Save as a stand-alone file (JSON); load it with xgb.load()
fname <- file.path(tempdir(), "xgb_model.json")
xgb.save(bst, fname)
bst2 <- xgb.load(fname)
# Save as a raw byte vector; load it with xgb.load.raw()
xgb_bytes <- xgb.save.raw(bst)
@@ -54,11 +107,11 @@ obj <- list(xgb_model_bytes = xgb.save.raw(bst), description = "My first XGBoost
# Persist the R object. Here, saveRDS() is okay, since it doesn't persist
# xgb.Booster directly. What's being persisted is the future-proof byte representation
# as given by xgb.save.raw().
fname <- file.path(tempdir(), "my_object.Rds")
saveRDS(obj, fname)
# Read back the R object
obj2 <- readRDS(fname)
# Re-construct xgb.Booster object from the bytes
bst2 <- xgb.load.raw(obj2$xgb_model_bytes)
}

View File

@@ -16,18 +16,17 @@ This data set is originally from the Mushroom data set,
UCI Machine Learning Repository.
}
\details{
It includes the following fields:
\itemize{
\item \code{label}: The label for each record.
\item \code{data}: A sparse Matrix of 'dgCMatrix' class with 126 columns.
}
}
\references{
\url{https://archive.ics.uci.edu/ml/datasets/Mushroom}
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
\url{http://archive.ics.uci.edu/ml}. Irvine, CA: University of California,
School of Information and Computer Science.
}
\keyword{datasets}
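A quick look at the structure described above:

data(agaricus.train, package = "xgboost")
class(agaricus.train$data)   # "dgCMatrix"
dim(agaricus.train$data)     # 6513 rows, 126 columns
table(agaricus.train$label)  # binary 0/1 labels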

View File

@@ -16,18 +16,17 @@ This data set is originally from the Mushroom data set,
UCI Machine Learning Repository.
}
\details{
It includes the following fields:
\itemize{
\item \code{label}: The label for each record.
\item \code{data}: A sparse Matrix of 'dgCMatrix' class with 126 columns.
}
}
\references{
\url{https://archive.ics.uci.edu/ml/datasets/Mushroom}
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
\url{http://archive.ics.uci.edu/ml}. Irvine, CA: University of California,
School of Information and Computer Science.
}
\keyword{datasets}

View File

@@ -1,37 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{callbacks}
\alias{callbacks}
\title{Callback closures for booster training.}
\description{
These are used to perform various service tasks either during boosting iterations or at the end.
This approach helps to modularize many of such tasks without bloating the main training methods,
and it offers a flexible way to customize the training process.
}
\details{
By default, a callback function is run after each boosting iteration.
An R-attribute \code{is_pre_iteration} could be set for a callback to define a pre-iteration function.
When a callback function has \code{finalize} parameter, its finalizer part will also be run after
the boosting is completed.
WARNING: side-effects!!! Be aware that these callback functions access and modify things in
the environment from which they are called, which is a fairly uncommon thing to do in R.
To write a custom callback closure, make sure you first understand the main concepts about R environments.
Check either R documentation on \code{\link[base]{environment}} or the
\href{http://adv-r.had.co.nz/Environments.html}{Environments chapter} from the "Advanced R"
book by Hadley Wickham. Further, the best option is to read the code of some of the existing callbacks -
choose ones that do something similar to what you want to achieve. Also, you would need to get familiar
with the objects available inside of the \code{xgb.train} and \code{xgb.cv} internal environments.
}
\seealso{
\code{\link{cb.print.evaluation}},
\code{\link{cb.evaluation.log}},
\code{\link{cb.reset.parameters}},
\code{\link{cb.early.stop}},
\code{\link{cb.save.model}},
\code{\link{cb.cv.predict}},
\code{\link{xgb.train}},
\code{\link{xgb.cv}}
}
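A minimal sketch of a custom callback closure under this interface (the name \code{cb.track.best} is illustrative); it reads \code{bst_evaluation} from the calling frame of \code{xgb.train} and uses the \code{finalize} mechanism described above:

cb.track.best <- function() {
  best <- Inf
  callback <- function(env = parent.frame(), finalize = FALSE) {
    if (finalize) {  # the finalizer part runs once after boosting is completed
      cat("best observed metric value:", best, "\n")
      return(invisible(NULL))
    }
    # bst_evaluation is a named numeric vector set in xgb.train's frame
    best <<- min(best, env$bst_evaluation[1])
  }
  attr(callback, "name") <- "cb.track.best"
  callback
}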

View File

@@ -1,43 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.cv.predict}
\alias{cb.cv.predict}
\title{Callback closure for returning cross-validation based predictions.}
\usage{
cb.cv.predict(save_models = FALSE)
}
\arguments{
\item{save_models}{a flag for whether to save the folds' models.}
}
\value{
Predictions are returned inside of the \code{pred} element, which is either a vector or a matrix,
depending on the number of prediction outputs per data row. The order of predictions corresponds
to the order of rows in the original dataset. Note that when a custom \code{folds} list is
provided in \code{xgb.cv}, the predictions would only be returned properly when this list is a
non-overlapping list of k sets of indices, as in a standard k-fold CV. The predictions would not be
meaningful when user-provided folds have overlapping indices as in, e.g., random sampling splits.
When some of the indices in the training dataset are not included in the user-provided \code{folds},
their prediction value would be \code{NA}.
}
\description{
Callback closure for returning cross-validation based predictions.
}
\details{
This callback function saves predictions for all of the test folds,
and also allows saving the folds' models.
It is a "finalizer" callback and it uses early stopping information whenever it is available,
thus it must be run after the early stopping callback if early stopping is used.
Callback function expects the following values to be set in its calling frame:
\code{bst_folds},
\code{basket},
\code{data},
\code{end_iteration},
\code{params},
\code{num_parallel_tree},
\code{num_class}.
}
\seealso{
\code{\link{callbacks}}
}
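A hedged usage sketch with the agaricus data used elsewhere on this page:

data(agaricus.train, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
cv <- xgb.cv(params = list(objective = "binary:logistic", nthread = 2),
             data = dtrain, nrounds = 3, nfold = 3,
             callbacks = list(cb.cv.predict(save_models = FALSE)))
str(cv$pred)  # out-of-fold predictions, in the row order of dtrain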

View File

@@ -1,63 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.early.stop}
\alias{cb.early.stop}
\title{Callback closure to activate the early stopping.}
\usage{
cb.early.stop(
stopping_rounds,
maximize = FALSE,
metric_name = NULL,
verbose = TRUE
)
}
\arguments{
\item{stopping_rounds}{The number of rounds with no improvement in
the evaluation metric in order to stop the training.}
\item{maximize}{whether to maximize the evaluation metric}
\item{metric_name}{the name of an evaluation column to use as a criterion for early
stopping. If not set, the last column would be used.
Let's say the test data in \code{watchlist} was labelled as \code{dtest},
and one wants to use the AUC in test data for early stopping regardless of where
it is in the \code{watchlist}, then one of the following would need to be set:
\code{metric_name='dtest-auc'} or \code{metric_name='dtest_auc'}.
All dash '-' characters in metric names are considered equivalent to '_'.}
\item{verbose}{whether to print the early stopping information.}
}
\description{
Callback closure to activate the early stopping.
}
\details{
This callback function determines the condition for early stopping
by setting the \code{stop_condition = TRUE} flag in its calling frame.
The following additional fields are assigned to the model's R object:
\itemize{
\item \code{best_score} the evaluation score at the best iteration
\item \code{best_iteration} at which boosting iteration the best score has occurred (1-based index)
}
The same values are also stored as xgb-attributes:
\itemize{
\item \code{best_iteration} is stored as a 0-based iteration index (for interoperability of binary models)
\item \code{best_msg} message string is also stored.
}
At least one data element is required in the evaluation watchlist for early stopping to work.
Callback function expects the following values to be set in its calling frame:
\code{stop_condition},
\code{bst_evaluation},
\code{rank},
\code{bst} (or \code{bst_folds} and \code{basket}),
\code{iteration},
\code{begin_iteration},
\code{end_iteration},
\code{num_parallel_tree}.
}
\seealso{
\code{\link{callbacks}},
\code{\link{xgb.attr}}
}
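A hedged usage sketch; the same behavior is normally obtained by passing \code{early_stopping_rounds} to \code{xgb.train}:

data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
bst <- xgb.train(params = list(objective = "binary:logistic",
                               eval_metric = "auc", nthread = 2),
                 data = dtrain, nrounds = 50,
                 watchlist = list(train = dtrain, test = dtest),
                 callbacks = list(cb.early.stop(stopping_rounds = 3,
                                                metric_name = "test-auc")))
bst$best_iteration  # 1-based index of the best round, as described above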

View File

@@ -1,31 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.evaluation.log}
\alias{cb.evaluation.log}
\title{Callback closure for logging the evaluation history}
\usage{
cb.evaluation.log()
}
\description{
Callback closure for logging the evaluation history
}
\details{
This callback function appends the current iteration evaluation results \code{bst_evaluation}
available in the calling parent frame to the \code{evaluation_log} list in a calling frame.
The finalizer callback (called with \code{finalize = TRUE} in the end) converts
the \code{evaluation_log} list into a final data.table.
The iteration evaluation result \code{bst_evaluation} must be a named numeric vector.
Note: in the column names of the final data.table, the dash '-' character is replaced with
the underscore '_' in order to make the column names more like regular R identifiers.
Callback function expects the following values to be set in its calling frame:
\code{evaluation_log},
\code{bst_evaluation},
\code{iteration}.
}
\seealso{
\code{\link{callbacks}}
}
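A hedged sketch of where the log ends up; this callback is added automatically by \code{xgb.train} when a watchlist is supplied:

data(agaricus.train, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
bst <- xgb.train(params = list(objective = "binary:logistic", nthread = 2),
                 data = dtrain, nrounds = 3,
                 watchlist = list(train = dtrain))
bst$evaluation_log  # data.table with columns 'iter' and 'train_logloss'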

View File

@@ -1,96 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.gblinear.history}
\alias{cb.gblinear.history}
\title{Callback closure for collecting the model coefficients history of a gblinear booster
during its training.}
\usage{
cb.gblinear.history(sparse = FALSE)
}
\arguments{
\item{sparse}{whether to use a sparse matrix (\code{TRUE}) or a dense matrix (\code{FALSE}) to store the result.
The sparse format is useful when one expects only a subset of coefficients to be non-zero,
e.g. when using the "thrifty" feature selector with a fairly small number of top features
selected per iteration.}
}
\value{
Results are stored in the \code{coefs} element of the closure.
The \code{\link{xgb.gblinear.history}} convenience function provides an easy
way to access it.
With \code{xgb.train}, it is either a dense or a sparse matrix.
With \code{xgb.cv}, it is a list of such matrices (one element per fold).
}
\description{
Callback closure for collecting the model coefficients history of a gblinear booster
during its training.
}
\details{
To keep things fast and simple, gblinear booster does not internally store the history of linear
model coefficients at each boosting iteration. This callback provides a workaround for storing
the coefficients' path, by extracting them after each training iteration.
Callback function expects the following values to be set in its calling frame:
\code{bst} (or \code{bst_folds}).
}
\examples{
#### Binary classification:
#
# In the iris dataset, it is hard to linearly separate Versicolor class from the rest
# without considering the 2nd order interactions:
x <- model.matrix(Species ~ .^2, iris)[,-1]
colnames(x)
dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"), nthread = 2)
param <- list(booster = "gblinear", objective = "reg:logistic", eval_metric = "auc",
lambda = 0.0003, alpha = 0.0003, nthread = 2)
# For 'shotgun', which is a default linear updater, using high eta values may result in
# unstable behaviour in some datasets. With this simple dataset, however, the high learning
# rate does not break the convergence, but allows us to illustrate the typical pattern of
# "stochastic explosion" behaviour of this lock-free algorithm at early boosting iterations.
bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 1.,
callbacks = list(cb.gblinear.history()))
# Extract the coefficients' path and plot them vs boosting iteration number:
coef_path <- xgb.gblinear.history(bst)
matplot(coef_path, type = 'l')
# With the deterministic coordinate descent updater, it is safer to use higher learning rates.
# Will try the classical componentwise boosting which selects a single best feature per round:
bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 0.8,
updater = 'coord_descent', feature_selector = 'thrifty', top_k = 1,
callbacks = list(cb.gblinear.history()))
matplot(xgb.gblinear.history(bst), type = 'l')
# Componentwise boosting is known to have similar effect to Lasso regularization.
# Try experimenting with various values of top_k, eta, nrounds,
# as well as different feature_selectors.
# For xgb.cv:
bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 100, eta = 0.8,
callbacks = list(cb.gblinear.history()))
# coefficients in the CV fold #3
matplot(xgb.gblinear.history(bst)[[3]], type = 'l')
#### Multiclass classification:
#
dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1, nthread = 1)
param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
lambda = 0.0003, alpha = 0.0003, nthread = 1)
# For the default linear updater 'shotgun' it sometimes is helpful
# to use smaller eta to reduce instability
bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 50, eta = 0.5,
callbacks = list(cb.gblinear.history()))
# Will plot the coefficient paths separately for each class:
matplot(xgb.gblinear.history(bst, class_index = 0), type = 'l')
matplot(xgb.gblinear.history(bst, class_index = 1), type = 'l')
matplot(xgb.gblinear.history(bst, class_index = 2), type = 'l')
# CV:
bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 70, eta = 0.5,
callbacks = list(cb.gblinear.history(FALSE)))
# 1st fold of 1st class
matplot(xgb.gblinear.history(bst, class_index = 0)[[1]], type = 'l')
}
\seealso{
\code{\link{callbacks}}, \code{\link{xgb.gblinear.history}}.
}

View File

@@ -1,29 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.print.evaluation}
\alias{cb.print.evaluation}
\title{Callback closure for printing the result of evaluation}
\usage{
cb.print.evaluation(period = 1, showsd = TRUE)
}
\arguments{
\item{period}{results are printed every \code{period} iterations}
\item{showsd}{whether standard deviations should be printed (when available)}
}
\description{
Callback closure for printing the result of evaluation
}
\details{
The callback function prints the result of evaluation at every \code{period} iterations.
The initial and the last iteration's evaluations are always printed.
Callback function expects the following values to be set in its calling frame:
\code{bst_evaluation} (also \code{bst_evaluation_err} when available),
\code{iteration},
\code{begin_iteration},
\code{end_iteration}.
}
\seealso{
\code{\link{callbacks}}
}
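A minimal usage sketch, printing every 10th iteration:

data(agaricus.train, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
bst <- xgb.train(params = list(objective = "binary:logistic", nthread = 2),
                 data = dtrain, nrounds = 30,
                 watchlist = list(train = dtrain),
                 callbacks = list(cb.print.evaluation(period = 10)))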

View File

@@ -1,33 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.save.model}
\alias{cb.save.model}
\title{Callback closure for saving a model file.}
\usage{
cb.save.model(save_period = 0, save_name = "xgboost.model")
}
\arguments{
\item{save_period}{save the model to disk after every
\code{save_period} iterations; 0 means save the model at the end.}
\item{save_name}{the name or path for the saved model file.
It can contain a \code{\link[base]{sprintf}} formatting specifier
to include the integer iteration number in the file name.
E.g., with \code{save_name} = 'xgboost_%04d.model',
the file saved at iteration 50 would be named "xgboost_0050.model".}
}
\description{
Callback closure for saving a model file.
}
\details{
This callback function allows saving an xgb model to a file, either periodically (every \code{save_period} iterations) or at the end.
Callback function expects the following values to be set in its calling frame:
\code{bst},
\code{iteration},
\code{begin_iteration},
\code{end_iteration}.
}
\seealso{
\code{\link{callbacks}}
}
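A hedged usage sketch; with the \code{sprintf} pattern described above, iteration 2 writes \code{xgboost_0002.model}:

data(agaricus.train, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
bst <- xgb.train(params = list(objective = "binary:logistic", nthread = 2),
                 data = dtrain, nrounds = 4,
                 callbacks = list(cb.save.model(
                   save_period = 2,
                   save_name = file.path(tempdir(), "xgboost_%04d.model"))))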

View File

@@ -0,0 +1,54 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{coef.xgb.Booster}
\alias{coef.xgb.Booster}
\title{Extract coefficients from linear booster}
\usage{
\method{coef}{xgb.Booster}(object, ...)
}
\arguments{
\item{object}{A fitted booster of 'gblinear' type.}
\item{...}{Not used.}
}
\value{
The extracted coefficients:
\itemize{
\item If there is only one coefficient per column in the data, will be returned as a
vector, potentially containing the feature names if available, with the intercept
as the first element.
\item If there is more than one coefficient per column in the data (e.g. when using
\code{objective="multi:softmax"}), will be returned as a matrix with dimensions equal
to \verb{[num_features, num_cols]}, with the intercepts as first row. Note that the column
(classes in multi-class classification) dimension will not be named.
}
The intercept returned here will include the 'base_score' parameter (unlike the 'bias'
or the last coefficient in the model dump, which doesn't have 'base_score' added to it),
hence one should get the same values from calling \code{predict(..., outputmargin = TRUE)} and
from performing a matrix multiplication with \code{model.matrix(~., ...)}.
Be aware that the coefficients are obtained by first converting them to strings and
back, so there will always be some very small loss of precision compared to the actual
coefficients as used by \link{predict.xgb.Booster}.
}
\description{
Extracts the coefficients from a 'gblinear' booster object,
as produced by \code{\link[=xgb.train]{xgb.train()}} when using parameter \code{booster="gblinear"}.
Note: this function will error out if passing a booster model
which is not of "gblinear" type.
}
\examples{
library(xgboost)
data(mtcars)
y <- mtcars[, 1]
x <- as.matrix(mtcars[, -1])
dm <- xgb.DMatrix(data = x, label = y, nthread = 1)
params <- list(booster = "gblinear", nthread = 1)
model <- xgb.train(data = dm, params = params, nrounds = 2)
coef(model)
}
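A hedged check of the intercept/margin identity described in the value section, continuing the example above:

margins <- predict(model, x, outputmargin = TRUE)
manual <- model.matrix(~., data.frame(x)) %*% coef(model)
# small tolerance due to the string round-trip mentioned above
all.equal(as.numeric(manual), as.numeric(margins), tolerance = 1e-5)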

View File

@@ -13,13 +13,14 @@
Returns a vector of numbers of rows and of columns in an \code{xgb.DMatrix}.
}
\details{
Note: since \code{\link[=nrow]{nrow()}} and \code{\link[=ncol]{ncol()}} internally use \code{\link[=dim]{dim()}}, they can also
be directly used with an \code{xgb.DMatrix} object.
}
\examples{
data(agaricus.train, package = "xgboost")
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label = train$label, nthread = 2)
stopifnot(nrow(dtrain) == nrow(train$data))
stopifnot(ncol(dtrain) == ncol(train$data))

View File

@@ -10,26 +10,27 @@
\method{dimnames}{xgb.DMatrix}(x) <- value
}
\arguments{
\item{x}{Object of class \code{xgb.DMatrix}.}
\item{value}{A list of two elements: the first one is ignored
and the second one is column names}
}
\description{
Only column names are supported for \code{xgb.DMatrix}, thus setting of
row names would have no effect and returned row names would be \code{NULL}.
}
\details{
Generic \code{\link[=dimnames]{dimnames()}} methods are used by \code{\link[=colnames]{colnames()}}.
Since row names are irrelevant, it is recommended to use \code{\link[=colnames]{colnames()}} directly.
}
\examples{
data(agaricus.train, package = "xgboost")
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label = train$label, nthread = 2)
dimnames(dtrain)
colnames(dtrain)
colnames(dtrain) <- make.names(1:ncol(train$data))
print(dtrain, verbose = TRUE)
}

View File

@@ -1,44 +1,97 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R, R/xgb.DMatrix.R
\name{getinfo.xgb.Booster}
\alias{getinfo.xgb.Booster}
\alias{setinfo.xgb.Booster}
\alias{getinfo}
\alias{getinfo.xgb.DMatrix}
\alias{setinfo}
\alias{setinfo.xgb.DMatrix}
\title{Get or set information of xgb.DMatrix and xgb.Booster objects}
\usage{
\method{getinfo}{xgb.Booster}(object, name)
\method{setinfo}{xgb.Booster}(object, name, info)
getinfo(object, name)
\method{getinfo}{xgb.DMatrix}(object, name)
setinfo(object, name, info)
\method{setinfo}{xgb.DMatrix}(object, name, info)
}
\arguments{
\item{object}{Object of class \code{xgb.DMatrix} or \code{xgb.Booster}.}
\item{name}{The name of the information field to get (see details).}
\item{info}{The specific field of information to set.}
}
\value{
For \code{getinfo()}, will return the requested field. For \code{setinfo()},
will always return value \code{TRUE} if it succeeds.
}
\description{
Get information of an xgb.DMatrix object
Get or set information of xgb.DMatrix and xgb.Booster objects
}
\details{
The \code{name} field can be one of the following for \code{xgb.DMatrix}:
\itemize{
\item label
\item weight
\item base_margin
\item label_lower_bound
\item label_upper_bound
\item group
\item feature_type
\item feature_name
\item nrow
}
See the documentation for \code{\link[=xgb.DMatrix]{xgb.DMatrix()}} for more information about these fields.
For \code{xgb.Booster}, can be one of the following:
\itemize{
\item \code{feature_type}
\item \code{feature_name}
}
Note that, while 'qid' cannot be retrieved, it is possible to get the equivalent 'group'
for a DMatrix that had 'qid' assigned.
\strong{Important}: when calling \code{\link[=setinfo]{setinfo()}}, the objects are modified in-place. See
\code{\link[=xgb.copy.Booster]{xgb.copy.Booster()}} for an idea of how this in-place assignment works.
See the documentation for \code{\link[=xgb.DMatrix]{xgb.DMatrix()}} for possible fields that can be set
(which correspond to arguments in that function).
Note that the following fields are allowed in the construction of an \code{xgb.DMatrix}
but \strong{are not} allowed here:
\itemize{
\item data
\item missing
\item silent
\item nthread
}
}
\examples{
data(agaricus.train, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
labels <- getinfo(dtrain, "label")
setinfo(dtrain, "label", 1 - labels)
labels2 <- getinfo(dtrain, "label")
stopifnot(all(labels2 == 1 - labels))
data(agaricus.train, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
labels <- getinfo(dtrain, "label")
setinfo(dtrain, "label", 1 - labels)
labels2 <- getinfo(dtrain, "label")
stopifnot(all.equal(labels2, 1 - labels))
}
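A hedged sketch for the booster fields listed above, continuing from the DMatrix created in the example:

bst <- xgb.train(params = list(objective = "binary:logistic", nthread = 2),
                 data = dtrain, nrounds = 1)
head(getinfo(bst, "feature_name"))  # column names inherited from the DMatrix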

View File

@@ -1,18 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.ggplot.R
\name{normalize}
\alias{normalize}
\title{Scale feature value to have mean 0, standard deviation 1}
\usage{
normalize(x)
}
\arguments{
\item{x}{Numeric vector}
}
\value{
Numeric vector with mean 0 and sd 1.
}
\description{
This is used to compare multiple features on the same plot.
Internal utility function
}

View File

@@ -2,113 +2,183 @@
% Please edit documentation in R/xgb.Booster.R
\name{predict.xgb.Booster}
\alias{predict.xgb.Booster}
\title{Predict method for XGBoost model}
\usage{
\method{predict}{xgb.Booster}(
object,
newdata,
missing = NA,
outputmargin = FALSE,
predleaf = FALSE,
predcontrib = FALSE,
approxcontrib = FALSE,
predinteraction = FALSE,
training = FALSE,
iterationrange = NULL,
strict_shape = FALSE,
avoid_transpose = FALSE,
validate_features = FALSE,
base_margin = NULL,
...
)
}
\arguments{
\item{object}{Object of class \code{xgb.Booster}.}
\item{newdata}{Takes \code{data.frame}, \code{matrix}, \code{dgCMatrix}, \code{dgRMatrix}, \code{dsparseVector},
local data file, or \code{xgb.DMatrix}.
For single-row predictions on sparse data, it is recommended to use CSR format. If passing
a sparse vector, it will take it as a row vector.
Note that, for repeated predictions on the same data, one might want to create a DMatrix to
pass here instead of passing R types like matrices or data frames, as predictions will be
faster on DMatrix.
If \code{newdata} is a \code{data.frame}, be aware that:
\itemize{
\item Columns will be converted to numeric if they aren't already, which could potentially make
the operation slower than in an equivalent \code{matrix} object.
\item The order of the columns must match with that of the data from which the model was fitted
(i.e. columns will not be referenced by their names, just by their order in the data).
\item If the model was fitted to data with categorical columns, these columns must be of
\code{factor} type here, and must use the same encoding (i.e. have the same levels).
\item If \code{newdata} contains any \code{factor} columns, they will be converted to base-0
encoding (same as during DMatrix creation) - hence, one should not pass a \code{factor}
under a column which during training had a different type.
}}
\item{missing}{Float value that represents missing values in data
(e.g., 0 or some other extreme value).
This parameter is not used when \code{newdata} is an \code{xgb.DMatrix} - in such cases,
should pass this as an argument to the DMatrix constructor instead.}
\item{outputmargin}{Whether the prediction should be returned in the form of
original untransformed sum of predictions from boosting iterations' results.
E.g., setting \code{outputmargin = TRUE} for logistic regression would return log-odds
instead of probabilities.}
\item{predleaf}{Whether to predict per-tree leaf indices.}
\item{predcontrib}{Whether to return feature contributions to individual predictions (see Details).}
\item{approxcontrib}{Whether to use a fast approximation for feature contributions (see Details).}
\item{predinteraction}{Whether to return contributions of feature interactions to individual predictions (see Details).}
\item{training}{Whether the prediction result is used for training. For dart booster,
predictions made in training mode will perform dropout.}
\item{iterationrange}{Sequence of rounds/iterations from the model to use for prediction, specified by passing
a two-dimensional vector with the start and end numbers in the sequence (same format as R's \code{seq} - i.e.
base-1 indexing, and inclusive of both ends).
For example, passing \code{c(1,20)} will predict using the first twenty iterations, while passing \code{c(1,1)} will
predict using only the first one.
If passing \code{NULL}, will either stop at the best iteration if the model used early stopping, or use all
of the iterations (rounds) otherwise.
If passing "all", will use all of the rounds regardless of whether the model had early stopping or not.}
\item{strict_shape}{Whether to always return an array with the same dimensions for the given prediction mode
regardless of the model type - meaning that, for example, both a multi-class and a binary classification
model would generate output arrays with the same number of dimensions, with the 'class' dimension having
size equal to '1' for the binary model.
If passing \code{FALSE} (the default), dimensions will be simplified according to the model type, so that a
binary classification model for example would not have a redundant dimension for 'class'.
See documentation for the return type for the exact shape of the output arrays for each prediction mode.}
\item{avoid_transpose}{Whether to output the resulting predictions in the same memory layout in which they
are generated by the core XGBoost library, without transposing them to match the expected output shape.
Internally, XGBoost uses row-major order for the predictions it generates, while R arrays use column-major
order, hence the result needs to be transposed in order to have the expected shape when represented as
an R array or matrix, which might be a slow operation.
If passing \code{TRUE}, then the result will have dimensions in reverse order - for example, rows
will be the last dimension instead of the first.}
\item{validate_features}{When \code{TRUE}, validate that the Booster's and newdata's
feature_names match (only applicable when both \code{object} and \code{newdata} have feature names).
If the column names differ and \code{newdata} is not an \code{xgb.DMatrix}, will try to reorder
the columns in \code{newdata} to match with the booster's.
If the booster has feature types and \code{newdata} is either an \code{xgb.DMatrix} or
\code{data.frame}, will additionally verify that categorical columns are of the
correct type in \code{newdata}, throwing an error if they do not match.
If passing \code{FALSE}, it is assumed that the feature names and types are the same,
and come in the same order as in the training data.
Note that this check might add some sizable latency to the predictions, so it's
recommended to disable it for performance-sensitive applications.}
\item{base_margin}{Base margin used for boosting from existing model.
Note that, if \code{newdata} is an \code{xgb.DMatrix} object, this argument will
be ignored as it needs to be added to the DMatrix instead (e.g. by passing it as
an argument in its constructor, or by calling \code{\link[=setinfo.xgb.DMatrix]{setinfo.xgb.DMatrix()}}.}
\item{...}{Not used.}
}
\value{
A numeric vector or array, with corresponding dimensions depending on the prediction mode and on
parameter \code{strict_shape} as follows:
If passing \code{strict_shape=FALSE}:\itemize{
\item For regression or binary classification: a vector of length \code{nrows}.
\item For multi-class and multi-target objectives: a matrix of dimensions \verb{[nrows, ngroups]}.
Note that objective variant \code{multi:softmax} defaults towards predicting most likely class (a vector
\code{nrows}) instead of per-class probabilities.
\item For \code{predleaf}: a matrix with one column per tree.
For multi-class / multi-target, they will be arranged so that columns in the output will have
the leafs from one group followed by leafs of the other group (e.g. order will be \code{group1:feat1},
\code{group1:feat2}, ..., \code{group2:feat1}, \code{group2:feat2}, ...).
\item For \code{predcontrib}: when not multi-class / multi-target, a matrix with dimensions
\verb{[nrows, nfeats+1]}. The last "+ 1" column corresponds to the baseline value.
For multi-class and multi-target objectives, will be an array with dimensions \verb{[nrows, ngroups, nfeats+1]}.
The contribution values are on the scale of untransformed margin (e.g., for binary classification,
the values are log-odds deviations from the baseline).
\item For \code{predinteraction}: when not multi-class / multi-target, the output is a 3D array of
dimensions \verb{[nrows, nfeats+1, nfeats+1]}. The off-diagonal (in the last two dimensions)
elements represent different feature interaction contributions. The array is symmetric w.r.t. the last
two dimensions. The "+ 1" columns corresponds to the baselines. Summing this array along the last
dimension should produce practically the same result as \code{predcontrib = TRUE}.
For multi-class and multi-target, will be a 4D array with dimensions \verb{[nrows, ngroups, nfeats+1, nfeats+1]}
}
If passing \code{strict_shape=TRUE}, the result is always an array:
\itemize{
\item For normal predictions, the dimension is \verb{[nrows, ngroups]}.
\item For \code{predcontrib=TRUE}, the dimension is \verb{[nrows, ngroups, nfeats+1]}.
\item For \code{predinteraction=TRUE}, the dimension is \verb{[nrows, ngroups, nfeats+1, nfeats+1]}.
\item For \code{predleaf=TRUE}, the dimension is \verb{[nrows, niter, ngroups, num_parallel_tree]}.
}
If passing \code{avoid_transpose=TRUE}, then the dimensions in all cases will be in reverse order - for
example, for \code{predinteraction}, they will be \verb{[nfeats+1, nfeats+1, ngroups, nrows]}
instead of \verb{[nrows, ngroups, nfeats+1, nfeats+1]}.
}
\description{
Predict values on data based on XGBoost model.
}
\details{
Note that \code{iterationrange} would currently do nothing for predictions from "gblinear",
since "gblinear" doesn't keep its boosting history.
One possible practical application of the \code{predleaf} option is to use the model
as a generator of new features which capture non-linearity and interactions,
e.g., as implemented in \code{\link[=xgb.create.features]{xgb.create.features()}}.
Setting \code{predcontrib = TRUE} allows calculating the contributions of each feature to
individual predictions. For "gblinear" booster, feature contributions are simply linear terms
@@ -124,23 +194,35 @@ Since it quadratically depends on the number of features, it is recommended to p
of the most important features first. See below about the format of the returned results.
The \code{predict()} method uses as many threads as defined in \code{xgb.Booster} object (all by default).
If you want to change their number, assign a new number to \code{nthread} using \code{\link[=xgb.parameters<-]{xgb.parameters<-()}}.
Note that converting a matrix to \code{\link[=xgb.DMatrix]{xgb.DMatrix()}} uses multiple threads too.
}
\examples{
## binary classification:
data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
## Keep the number of threads to 2 for examples
nthread <- 2
data.table::setDTthreads(nthread)
train <- agaricus.train
test <- agaricus.test
bst <- xgb.train(
data = xgb.DMatrix(train$data, label = train$label),
max_depth = 2,
eta = 0.5,
nthread = nthread,
nrounds = 5,
objective = "binary:logistic"
)
# use all trees by default
pred <- predict(bst, test$data)
# use only the 1st tree
pred1 <- predict(bst, test$data, iterationrange = c(1, 1))
# Predicting tree leafs:
# the result is an nsamples X ntrees matrix
@@ -155,7 +237,7 @@ str(pred_contr)
summary(rowSums(pred_contr) - qlogis(pred))
# for the 1st record, let's inspect its features that had non-zero contribution to prediction:
contr1 <- pred_contr[1,]
contr1 <- contr1[-length(contr1)] # drop intercept
contr1 <- contr1[contr1 != 0] # drop non-contributing features
contr1 <- contr1[order(abs(contr1))] # order by contribution magnitude
old_mar <- par("mar")
@@ -168,39 +250,59 @@ par(mar = old_mar)
lb <- as.numeric(iris$Species) - 1
num_class <- 3
set.seed(11)
bst <- xgb.train(
data = xgb.DMatrix(as.matrix(iris[, -5]), label = lb),
max_depth = 4,
eta = 0.5,
nthread = 2,
nrounds = 10,
subsample = 0.5,
objective = "multi:softprob",
num_class = num_class
)
# predict for softmax returns num_class probability numbers per case:
pred <- predict(bst, as.matrix(iris[, -5]))
str(pred)
# reshape it to a num_class-columns matrix
pred <- matrix(pred, ncol=num_class, byrow=TRUE)
# convert the probabilities to softmax labels
pred_labels <- max.col(pred) - 1
# the following should result in the same error as seen in the last iteration
sum(pred_labels != lb) / length(lb)
# compare with predictions from softmax:
set.seed(11)
bst <- xgb.train(
data = xgb.DMatrix(as.matrix(iris[, -5]), label = lb),
max_depth = 4,
eta = 0.5,
nthread = 2,
nrounds = 10,
subsample = 0.5,
objective = "multi:softmax",
num_class = num_class
)
pred <- predict(bst, as.matrix(iris[, -5]))
str(pred)
all.equal(pred, pred_labels)
# prediction from using only 5 iterations should result
# in the same error as seen in iteration 5:
pred5 <- predict(bst, as.matrix(iris[, -5]), iterationrange = c(1, 5))
sum(pred5 != lb) / length(lb)
}
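A hedged sketch of \code{strict_shape} and \code{avoid_transpose}, following the shapes listed in the value section (reusing the binary-classification data from above):

bst_bin <- xgb.train(
  data = xgb.DMatrix(train$data, label = train$label),
  nthread = 2, nrounds = 2, objective = "binary:logistic"
)
p <- predict(bst_bin, test$data, strict_shape = TRUE)
dim(p)   # [nrows, 1]: the 'class' dimension is kept even for a binary model
p_t <- predict(bst_bin, test$data, strict_shape = TRUE, avoid_transpose = TRUE)
dim(p_t) # [1, nrows]: same values, dimensions in reverse order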
\references{
Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions", NIPS Proceedings 2017, \url{https://arxiv.org/abs/1705.07874}
Scott M. Lundberg, Su-In Lee, "Consistent feature attribution for tree ensembles", \url{https://arxiv.org/abs/1706.06060}
\enumerate{
\item Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions",
NIPS Proceedings 2017, \url{https://arxiv.org/abs/1705.07874}
\item Scott M. Lundberg, Su-In Lee, "Consistent feature attribution for tree ensembles",
\url{https://arxiv.org/abs/1706.06060}
}
}
\seealso{
\code{\link[=xgb.train]{xgb.train()}}
}

View File

@@ -1,27 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.ggplot.R
\name{prepare.ggplot.shap.data}
\alias{prepare.ggplot.shap.data}
\title{Combine and melt feature values and SHAP contributions for sample
observations.}
\usage{
prepare.ggplot.shap.data(data_list, normalize = FALSE)
}
\arguments{
\item{data_list}{List containing 'data' and 'shap_contrib' returned by
\code{xgb.shap.data()}.}
\item{normalize}{Whether to standardize feature values to have mean 0 and
standard deviation 1 (useful for comparing multiple features on the same
plot). Default \code{FALSE}.}
}
\value{
A data.table containing the observation ID, the feature name, the
feature value (normalized if specified), and the SHAP contribution value.
}
\description{
Conforms to data format required for ggplot functions.
}
\details{
Internal utility function.
}

View File

@@ -4,26 +4,33 @@
\alias{print.xgb.Booster}
\title{Print xgb.Booster}
\usage{
\method{print}{xgb.Booster}(x, ...)
}
\arguments{
\item{x}{An \code{xgb.Booster} object.}
\item{...}{Not used.}
}
\value{
The same \code{x} object, returned invisibly
}
\description{
Print information about xgb.Booster.
Print information about \code{xgb.Booster}.
}
\examples{
data(agaricus.train, package = "xgboost")
train <- agaricus.train
bst <- xgb.train(
data = xgb.DMatrix(train$data, label = train$label),
max_depth = 2,
eta = 1,
nthread = 2,
nrounds = 2,
objective = "binary:logistic"
)
attr(bst, "myattr") <- "memo"
print(bst)
}

View File

@@ -7,21 +7,22 @@
\method{print}{xgb.DMatrix}(x, verbose = FALSE, ...)
}
\arguments{
\item{x}{An xgb.DMatrix object.}
\item{verbose}{Whether to print colnames (when present).}
\item{...}{Not currently used.}
}
\description{
Print information about xgb.DMatrix.
Currently it displays dimensions and presence of info-fields and colnames.
}
\examples{
data(agaricus.train, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtrain
print(dtrain, verbose = TRUE)
}

View File

@@ -7,25 +7,33 @@
\method{print}{xgb.cv.synchronous}(x, verbose = FALSE, ...)
}
\arguments{
\item{x}{An \code{xgb.cv.synchronous} object.}
\item{verbose}{Whether to print detailed data.}
\item{...}{Passed to \code{data.table.print()}.}
}
\description{
Prints formatted results of \code{xgb.cv}.
Prints formatted results of \code{\link[=xgb.cv]{xgb.cv()}}.
}
\details{
When not verbose, it would only print the evaluation results,
including the best iteration (when available).
}
\examples{
data(agaricus.train, package = "xgboost")
train <- agaricus.train
cv <- xgb.cv(
data = xgb.DMatrix(train$data, label = train$label),
nfold = 5,
max_depth = 2,
eta = 1,
nthread = 2,
nrounds = 2,
objective = "binary:logistic"
)
print(cv)
print(cv, verbose = TRUE)
}

View File

@@ -1,42 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{setinfo}
\alias{setinfo}
\alias{setinfo.xgb.DMatrix}
\title{Set information of an xgb.DMatrix object}
\usage{
setinfo(object, ...)
\method{setinfo}{xgb.DMatrix}(object, name, info, ...)
}
\arguments{
\item{object}{Object of class "xgb.DMatrix"}
\item{...}{other parameters}
\item{name}{the name of the field to set}
\item{info}{the specific field of information to set}
}
\description{
Set information of an xgb.DMatrix object
}
\details{
The \code{name} field can be one of the following:
\itemize{
\item \code{label}: the label XGBoost learns from;
\item \code{weight}: observation weights used for rescaling;
\item \code{base_margin}: the base prediction XGBoost will boost from;
\item \code{group}: number of rows in each group (to use with the \code{rank:pairwise} objective).
}
}
\examples{
data(agaricus.train, package='xgboost')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
labels <- getinfo(dtrain, 'label')
setinfo(dtrain, 'label', 1-labels)
labels2 <- getinfo(dtrain, 'label')
stopifnot(all.equal(labels2, 1-labels))
}

View File

@@ -1,39 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{slice}
\alias{slice}
\alias{slice.xgb.DMatrix}
\alias{[.xgb.DMatrix}
\title{Get a new DMatrix containing the specified rows of
original xgb.DMatrix object}
\usage{
slice(object, ...)
\method{slice}{xgb.DMatrix}(object, idxset, ...)
\method{[}{xgb.DMatrix}(object, idxset, colset = NULL)
}
\arguments{
\item{object}{Object of class "xgb.DMatrix"}
\item{...}{other parameters (currently not used)}
\item{idxset}{an integer vector of indices of rows needed}
\item{colset}{currently not used (columns subsetting is not available)}
}
\description{
Get a new DMatrix containing the specified rows of
original xgb.DMatrix object
}
\examples{
data(agaricus.train, package='xgboost')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dsub <- slice(dtrain, 1:42)
labels1 <- getinfo(dsub, 'label')
dsub <- dtrain[1:42, ]
labels2 <- getinfo(dsub, 'label')
all.equal(labels1, labels2)
}

View File

@@ -0,0 +1,22 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{variable.names.xgb.Booster}
\alias{variable.names.xgb.Booster}
\title{Get Features Names from Booster}
\usage{
\method{variable.names}{xgb.Booster}(object, ...)
}
\arguments{
\item{object}{An \code{xgb.Booster} object.}
\item{...}{Not used.}
}
\description{
Returns the feature / variable / column names from a fitted
booster object, which are set automatically during the call to \code{\link[=xgb.train]{xgb.train()}}
from the DMatrix names, or which can be set manually through \code{\link[=setinfo]{setinfo()}}.
If the object doesn't have feature names, will return \code{NULL}.
It is equivalent to calling \code{getinfo(object, "feature_name")}.
}
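A minimal sketch of the equivalence noted above:

data(agaricus.train, package = "xgboost")
bst <- xgb.train(
  data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
  nthread = 2, nrounds = 1, objective = "binary:logistic"
)
identical(variable.names(bst), getinfo(bst, "feature_name"))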

View File

@@ -1,52 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{xgb.Booster.complete}
\alias{xgb.Booster.complete}
\title{Restore missing parts of an incomplete xgb.Booster object.}
\usage{
xgb.Booster.complete(object, saveraw = TRUE)
}
\arguments{
\item{object}{object of class \code{xgb.Booster}}
\item{saveraw}{a flag indicating whether to append \code{raw} Booster memory dump data
when it doesn't already exist.}
}
\value{
An object of \code{xgb.Booster} class.
}
\description{
It attempts to complete an \code{xgb.Booster} object by restoring either its missing
raw model memory dump (when it has no \code{raw} data but its \code{xgb.Booster.handle} is valid)
or its missing internal handle (when its \code{xgb.Booster.handle} is not valid
but it has a raw Booster memory dump).
}
\details{
While this method is primarily for internal use, it might be useful in some practical situations.
E.g., when an \code{xgb.Booster} model is saved as an R object and then is loaded as an R object,
its handle (pointer) to an internal xgboost model would be invalid. The majority of xgboost methods
should still work for such a model object since those methods would be using
\code{xgb.Booster.complete} internally. However, one might find it to be more efficient to call the
\code{xgb.Booster.complete} function explicitly once after loading a model as an R-object.
That would prevent further repeated implicit reconstruction of an internal booster model.
}
\examples{
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
saveRDS(bst, "xgb.model.rds")
# Warning: The resulting RDS file is only compatible with the current XGBoost version.
# Refer to the section titled "a-compatibility-note-for-saveRDS-save".
bst1 <- readRDS("xgb.model.rds")
if (file.exists("xgb.model.rds")) file.remove("xgb.model.rds")
# the handle is invalid:
print(bst1$handle)
bst1 <- xgb.Booster.complete(bst1)
# now the handle points to a valid internal booster model:
print(bst1$handle)
}

View File

@@ -0,0 +1,243 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{xgb.Callback}
\alias{xgb.Callback}
\title{XGBoost Callback Constructor}
\usage{
xgb.Callback(
cb_name = "custom_callback",
env = new.env(),
f_before_training = function(env, model, data, evals, begin_iteration, end_iteration)
NULL,
f_before_iter = function(env, model, data, evals, iteration) NULL,
f_after_iter = function(env, model, data, evals, iteration, iter_feval) NULL,
f_after_training = function(env, model, data, evals, iteration, final_feval,
prev_cb_res) NULL
)
}
\arguments{
\item{cb_name}{Name for the callback.
If the callback produces some non-NULL result (from executing the function passed under
\code{f_after_training}), that result will be added as an R attribute to the resulting booster
(or as a named element in the result of CV), with the attribute name specified here.
Names of callbacks must be unique - i.e. there cannot be two callbacks with the same name.}
\item{env}{An environment object that will be passed to the different functions in the callback.
Note that this environment will not be shared with other callbacks.}
\item{f_before_training}{A function that will be executed before the training has started.
If passing \code{NULL} for this or for the other function inputs, then no function will be executed.
If passing a function, it will be called with parameters supplied as non-named arguments
matching the function signatures that are shown in the default value for each function argument.}
\item{f_before_iter}{A function that will be executed before each boosting round.
This function can signal whether the training should be finalized or not, by outputting
a value that evaluates to \code{TRUE} - i.e. if the output from the function provided here at
a given round is \code{TRUE}, then training will be stopped before the current iteration happens.
Return values of \code{NULL} will be interpreted as \code{FALSE}.}
\item{f_after_iter}{A function that will be executed after each boosting round.
This function can signal whether the training should be finalized or not, by outputting
a value that evaluates to \code{TRUE} - i.e. if the output from the function provided here at
a given round is \code{TRUE}, then training will be stopped at that round.
Return values of \code{NULL} will be interpreted as \code{FALSE}.}
\item{f_after_training}{A function that will be executed after training is finished.
This function can optionally output something non-NULL, which will become part of the R
attributes of the booster (assuming one passes \code{keep_extra_attributes=TRUE} to \code{\link[=xgb.train]{xgb.train()}})
under the name supplied for parameter \code{cb_name} in the case of \code{\link[=xgb.train]{xgb.train()}}; or a part
of the named elements in the result of \code{\link[=xgb.cv]{xgb.cv()}}.}
}
\value{
An \code{xgb.Callback} object, which can be passed to \code{\link[=xgb.train]{xgb.train()}} or \code{\link[=xgb.cv]{xgb.cv()}}.
}
\description{
Constructor for defining the structure of callback functions that can be executed
at different stages of model training (before / after training, before / after each boosting
iteration).
}
\details{
Arguments that will be passed to the supplied functions are as follows:
\itemize{
\item env The same environment that is passed under argument \code{env}.
It may be modified by the functions in order to e.g. keep track of what happens
across iterations or similar.
This environment is only used by the functions supplied to the callback, and will
not be kept after the model fitting function terminates (see parameter \code{f_after_training}).
\item model The booster object when using \code{\link[=xgb.train]{xgb.train()}}, or the folds when using \code{\link[=xgb.cv]{xgb.cv()}}.
For \code{\link[=xgb.cv]{xgb.cv()}}, folds are a list with a structure as follows:
\itemize{
\item \code{dtrain}: The training data for the fold (as an \code{xgb.DMatrix} object).
\item \code{bst}: The \code{xgb.Booster} object for the fold.
\item \code{evals}: A list containing two DMatrices, with names \code{train} and \code{test}
(\code{test} is the held-out data for the fold).
\item \code{index}: The indices of the hold-out data for that fold (base-1 indexing),
from which the \code{test} entry in \code{evals} was obtained.
}
This object should \strong{not} be in-place modified in ways that conflict with the
training (e.g. resetting the parameters for a training update in a way that resets
the number of rounds to zero in order to overwrite rounds).
Note that any R attributes that are assigned to the booster during the callback functions,
will not be kept thereafter as the booster object variable is not re-assigned during
training. It is however possible to set C-level attributes of the booster through
\code{\link[=xgb.attr]{xgb.attr()}} or \code{\link[=xgb.attributes]{xgb.attributes()}}, which should remain available for the rest
of the iterations and after the training is done.
For keeping variables across iterations, it's recommended to use \code{env} instead.
\item data The data to which the model is being fit, as an \code{xgb.DMatrix} object.
Note that, for \code{\link[=xgb.cv]{xgb.cv()}}, this will be the full data, while data for the specific
folds can be found in the \code{model} object.
\item evals The evaluation data, as passed under argument \code{evals} to \code{\link[=xgb.train]{xgb.train()}}.
For \code{\link[=xgb.cv]{xgb.cv()}}, this will always be \code{NULL}.
\item begin_iteration Index of the first boosting iteration that will be executed (base-1 indexing).
This will typically be '1', but when using training continuation, depending on the
parameters for updates, boosting rounds will be continued from where the previous
model ended, in which case this will be larger than 1.
\item end_iteration Index of the last boosting iteration that will be executed
(base-1 indexing, inclusive of this end).
It should match with argument \code{nrounds} passed to \code{\link[=xgb.train]{xgb.train()}} or \code{\link[=xgb.cv]{xgb.cv()}}.
Note that boosting might be interrupted before reaching this last iteration, for
example by using the early stopping callback \code{\link[=xgb.cb.early.stop]{xgb.cb.early.stop()}}.
\item iteration Index of the iteration that is being executed (the first iteration
will be the same as parameter \code{begin_iteration}, the next one adds +1, and so on).
\item iter_feval Evaluation metrics for \code{evals} that were supplied, either
determined by the objective, or by parameter \code{feval}.
For \code{\link[=xgb.train]{xgb.train()}}, this will be a named vector with one entry per element in
\code{evals}, where the names are determined as 'evals name' + '-' + 'metric name' - for
example, if \code{evals} contains an entry named "tr" and the metric is "rmse",
this will be a one-element vector with name "tr-rmse".
For \code{\link[=xgb.cv]{xgb.cv()}}, this will be a 2d matrix with dimensions \verb{[length(evals), nfolds]},
where the row names will follow the same naming logic as the one-dimensional vector
that is passed in \code{\link[=xgb.train]{xgb.train()}}.
Note that, internally, the built-in callbacks such as \link{xgb.cb.print.evaluation} summarize
this table by calculating the row-wise means and standard deviations.
\item final_feval The evaluation results after the last boosting round is executed
(same format as \code{iter_feval}, and will be the exact same input as passed under
\code{iter_feval} to the last round that is executed during model fitting).
\item prev_cb_res Result from a previous run of a callback sharing the same name
(as given by parameter \code{cb_name}) when conducting training continuation, if there
was any in the booster R attributes.
Sometimes, one might want to append the new results to the previous one, and this will
be done automatically by the built-in callbacks such as \link{xgb.cb.evaluation.log},
which will append the new rows to the previous table.
If no such previous callback result is available (as will always be the case when fitting
a model from scratch instead of updating an existing model), this will be \code{NULL}.
For \code{\link[=xgb.cv]{xgb.cv()}}, which doesn't support training continuation, this will always be \code{NULL}.
}
The following names (\code{cb_name} values) are reserved for internal callbacks:
\itemize{
\item print_evaluation
\item evaluation_log
\item reset_parameters
\item early_stop
\item save_model
\item cv_predict
\item gblinear_history
}
The following names are reserved for other non-callback attributes:
\itemize{
\item names
\item class
\item call
\item params
\item niter
\item nfeatures
\item folds
}
When using the built-in early stopping callback (\link{xgb.cb.early.stop}), said callback
will always be executed before the others, as it sets some booster C-level attributes
that other callbacks might also use. Otherwise, the order of execution will match with
the order in which the callbacks are passed to the model fitting function.
}
\examples{
# Example constructing a custom callback that calculates
# squared error on the training data (no separate test set),
# and outputs the per-iteration results.
ssq_callback <- xgb.Callback(
cb_name = "ssq",
f_before_training = function(env, model, data, evals,
begin_iteration, end_iteration) {
# A vector to keep track of a number at each iteration
env$logs <- rep(NA_real_, end_iteration - begin_iteration + 1)
},
f_after_iter = function(env, model, data, evals, iteration, iter_feval) {
# This calculates the sum of squared errors on the training data.
# Note that this can be better done by passing an 'evals' entry,
# but this demonstrates a way in which callbacks can be structured.
pred <- predict(model, data)
err <- pred - getinfo(data, "label")
sq_err <- sum(err^2)
env$logs[iteration] <- sq_err
cat(
sprintf(
"Squared error at iteration \%d: \%.2f\n",
iteration, sq_err
)
)
# A return value of 'TRUE' here would signal to finalize the training
return(FALSE)
},
f_after_training = function(env, model, data, evals, iteration,
final_feval, prev_cb_res) {
return(env$logs)
}
)
data(mtcars)
y <- mtcars$mpg
x <- as.matrix(mtcars[, -1])
dm <- xgb.DMatrix(x, label = y, nthread = 1)
model <- xgb.train(
data = dm,
params = list(objective = "reg:squarederror", nthread = 1),
nrounds = 5,
callbacks = list(ssq_callback),
keep_extra_attributes = TRUE
)
# Results collected by 'f_after_iter' (and returned by 'f_after_training')
# will be available as an attribute
attributes(model)$ssq
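# A minimal sketch of an early-stopping variant (hypothetical threshold):
# 'f_after_iter' can finalize training by returning a value that
# evaluates to TRUE - here, once the sum of squared errors drops below 1.0.
stop_callback <- xgb.Callback(
cb_name = "stop_when_fit",
f_after_iter = function(env, model, data, evals, iteration, iter_feval) {
pred <- predict(model, data)
sq_err <- sum((pred - getinfo(data, "label"))^2)
return(sq_err < 1.0)  # TRUE stops training at this round
}
)
model_es <- xgb.train(
data = dm,
params = list(objective = "reg:squarederror", nthread = 1),
nrounds = 50,
callbacks = list(stop_callback)
)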
}
\seealso{
Built-in callbacks:
\itemize{
\item \link{xgb.cb.print.evaluation}
\item \link{xgb.cb.evaluation.log}
\item \link{xgb.cb.reset.parameters}
\item \link{xgb.cb.early.stop}
\item \link{xgb.cb.save.model}
\item \link{xgb.cb.cv.predict}
\item \link{xgb.cb.gblinear.history}
}
}


@@ -2,44 +2,197 @@
% Please edit documentation in R/xgb.DMatrix.R
\name{xgb.DMatrix}
\alias{xgb.DMatrix}
\alias{xgb.QuantileDMatrix}
\title{Construct xgb.DMatrix object}
\usage{
xgb.DMatrix(
data,
label = NULL,
weight = NULL,
base_margin = NULL,
missing = NA,
silent = FALSE,
feature_names = colnames(data),
feature_types = NULL,
nthread = NULL,
group = NULL,
qid = NULL,
label_lower_bound = NULL,
label_upper_bound = NULL,
feature_weights = NULL,
data_split_mode = "row"
)
xgb.QuantileDMatrix(
data,
label = NULL,
weight = NULL,
base_margin = NULL,
missing = NA,
feature_names = colnames(data),
feature_types = NULL,
nthread = NULL,
group = NULL,
qid = NULL,
label_lower_bound = NULL,
label_upper_bound = NULL,
feature_weights = NULL,
ref = NULL,
max_bin = NULL
)
}
\arguments{
\item{data}{Data from which to create a DMatrix, which can then be used for fitting models or
for getting predictions out of a fitted model.
Supported input types are as follows:\itemize{
\item \code{matrix} objects, with types \code{numeric}, \code{integer}, or \code{logical}.
\item \code{data.frame} objects, with columns of types \code{numeric}, \code{integer}, \code{logical}, or \code{factor}.
Note that xgboost uses base-0 encoding for categorical types, hence \code{factor} types (which use base-1
encoding) will be converted inside the function call. Be aware that the encoding used for \code{factor}
types is not kept as part of the model, so in subsequent calls to \code{predict}, it is the user's
responsibility to ensure that factor columns have the same levels as the ones from which the DMatrix
was constructed.
Other column types are not supported.
\item CSR matrices, as class \code{dgRMatrix} from package \code{Matrix}.
\item CSC matrices, as class \code{dgCMatrix} from package \code{Matrix}. These are \strong{not} supported for
\code{xgb.QuantileDMatrix}.
\item Single-row CSR matrices, as class \code{dsparseVector} from package \code{Matrix}, which is interpreted
as a single row (only when making predictions from a fitted model).
\item Text files in a supported format, passed as a \code{character} variable containing the URI path to
the file, with an optional format specifier.
These are \strong{not} supported for \code{xgb.QuantileDMatrix}. Supported formats are:\itemize{
\item XGBoost's own binary format for DMatrices, as produced by \code{\link[=xgb.DMatrix.save]{xgb.DMatrix.save()}}.
\item SVMLight (a.k.a. LibSVM) format for CSR matrices. This format can be signaled by suffix
\code{?format=libsvm} at the end of the file path. It will be the default format if not
otherwise specified.
\item CSV files (comma-separated values). This format can be specified by adding suffix
\code{?format=csv} at the end of the file path. It will \strong{not} be auto-deduced from file extensions.
}
Be aware that the format of the file will not be auto-deduced - for example, if a file is named 'file.csv',
it will not look at the extension or file contents to determine that it is a comma-separated values file.
Instead, the format must be specified following the URI format, so the input to \code{data} should be passed
like this: \code{"file.csv?format=csv"} (or \code{"file.csv?format=csv&label_column=0"} if the first column
corresponds to the labels).
For more information about passing text files as input, see the articles
\href{https://xgboost.readthedocs.io/en/stable/tutorials/input_format.html}{Text Input Format of DMatrix} and
\href{https://xgboost.readthedocs.io/en/stable/python/python_intro.html#python-data-interface}{Data Interface}.
}}
\item{label}{Label of the training data. For classification problems, should be passed encoded as
integers with numeration starting at zero.}
\item{weight}{Weight for each instance.
Note that, for ranking task, weights are per-group. In ranking task, one weight
is assigned to each group (not each data point). This is because we
only care about the relative ordering of data points within each group,
so it doesn't make sense to assign weights to individual data points.}
\item{base_margin}{Base margin used for boosting from existing model.
In the case of multi-output models, one can also pass multi-dimensional base_margin.}
\item{missing}{A float value to represent missing values in data (not used when creating DMatrix
from text files). It is useful to change when a zero, infinite, or some other
extreme value represents missing values in data.}
\item{silent}{Whether to suppress printing an informational message after loading from a file.}
\item{feature_names}{Set names for features. Overrides column names in data frame and matrix.
Note: columns are not referenced by name when calling \code{predict}, so the column order there
must be the same as in the DMatrix construction, regardless of the column names.}
\item{feature_types}{Set types for features.
If \code{data} is a \code{data.frame} and \code{feature_types} is not supplied,
feature types will be deduced automatically from the column types.
Otherwise, one can pass a character vector with the same length as number of columns in \code{data},
with the following possible values:
\itemize{
\item "c", which represents categorical columns.
\item "q", which represents numeric columns.
\item "int", which represents integer columns.
\item "i", which represents logical (boolean) columns.
}
Note that, while categorical types are treated differently from the rest for model fitting
purposes, the other types do not influence the generated model, but have effects in other
functionalities such as feature importances.
\strong{Important}: Categorical features, if specified manually through \code{feature_types}, must
be encoded as integers with numeration starting at zero, and the same encoding needs to be
applied when passing data to \code{\link[=predict]{predict()}}. Even if passing \code{factor} types, the encoding will
not be saved, so make sure that \code{factor} columns passed to \code{predict} have the same \code{levels}.}
\item{nthread}{Number of threads used for creating DMatrix.}
\item{group}{Group size for all ranking groups.}
\item{qid}{Query ID for data samples, used for ranking.}
\item{label_lower_bound}{Lower bound for survival training.}
\item{label_upper_bound}{Upper bound for survival training.}
\item{feature_weights}{Set feature weights for column sampling.}
\item{data_split_mode}{When passing a URI (as R \code{character}) as input, this signals
whether to split by row or column. Allowed values are \code{"row"} and \code{"col"}.
In distributed mode, the file is split accordingly; otherwise this is only an indicator on
how the file was split beforehand. Defaults to \code{"row"}.
This is not used when \code{data} is not a URI.}
\item{ref}{The training dataset that provides quantile information, needed when creating
validation/test dataset with \code{\link[=xgb.QuantileDMatrix]{xgb.QuantileDMatrix()}}. Supplying the training DMatrix
as a reference means that the same quantisation applied to the training data is
applied to the validation/test data.
\item{max_bin}{The number of histogram bins, which should be consistent with the training parameter
\code{max_bin}.
This is only supported when constructing a QuantileDMatrix.}
}
\value{
An 'xgb.DMatrix' object. If calling 'xgb.QuantileDMatrix', it will have additional
subclass 'xgb.QuantileDMatrix'.
}
\description{
Construct an 'xgb.DMatrix' object from a given data source, which can then be passed to functions
such as \code{\link[=xgb.train]{xgb.train()}} or \code{\link[=predict]{predict()}}.
}
\details{
Function \code{xgb.QuantileDMatrix()} will construct a DMatrix with quantization for the histogram
method already applied to it, which can be used to reduce memory usage (compared to using a
regular DMatrix first and then creating a quantization out of it) when using the histogram
method (\code{tree_method = "hist"}, which is the default algorithm), but is not usable for the
sorted-indices method (\code{tree_method = "exact"}), nor for the approximate method
(\code{tree_method = "approx"}).
Note that DMatrix objects are not serializable through R functions such as \code{\link[=saveRDS]{saveRDS()}} or \code{\link[=save]{save()}}.
If a DMatrix gets serialized and then de-serialized (for example, when saving data in an R session or caching
chunks in an Rmd file), the resulting object will not be usable anymore and will need to be reconstructed
from the original source of data.
}
\examples{
data(agaricus.train, package = "xgboost")
## Keep the number of threads to 1 for examples
nthread <- 1
data.table::setDTthreads(nthread)
dtrain <- with(
agaricus.train, xgb.DMatrix(data, label = label, nthread = nthread)
)
fname <- file.path(tempdir(), "xgb.DMatrix.data")
xgb.DMatrix.save(dtrain, fname)
dtrain <- xgb.DMatrix(fname)
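# Text files require an explicit format specifier in the URI - a sketch,
# assuming a hypothetical headerless CSV file with the label in the first column:
# dtrain_csv <- xgb.DMatrix("file.csv?format=csv&label_column=0")
# A minimal sketch of 'xgb.QuantileDMatrix', with a validation set that
# reuses the training quantile cuts through 'ref' (simulated 'x' and 'y'):
x <- matrix(rnorm(100 * 10), nrow = 100)
y <- rnorm(100)
qdm_train <- xgb.QuantileDMatrix(x[1:80, ], label = y[1:80], nthread = 1)
qdm_valid <- xgb.QuantileDMatrix(
x[81:100, ], label = y[81:100], ref = qdm_train, nthread = 1
)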
}


@@ -0,0 +1,31 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{xgb.DMatrix.hasinfo}
\alias{xgb.DMatrix.hasinfo}
\title{Check whether DMatrix object has a field}
\usage{
xgb.DMatrix.hasinfo(object, info)
}
\arguments{
\item{object}{The DMatrix object to check for the given \code{info} field.}
\item{info}{The field to check for presence or absence in \code{object}.}
}
\description{
Checks whether an xgb.DMatrix object has a given field assigned to
it, such as weights, labels, etc.
}
\examples{
x <- matrix(1:10, nrow = 5)
dm <- xgb.DMatrix(x, nthread = 1)
# 'dm' so far does not have any fields set
xgb.DMatrix.hasinfo(dm, "label")
# Fields can be added after construction
setinfo(dm, "label", 1:5)
xgb.DMatrix.hasinfo(dm, "label")
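# The field values themselves can be retrieved with getinfo()
getinfo(dm, "label")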
}
\seealso{
\code{\link[=xgb.DMatrix]{xgb.DMatrix()}}, \code{\link[=getinfo.xgb.DMatrix]{getinfo.xgb.DMatrix()}}, \code{\link[=setinfo.xgb.DMatrix]{setinfo.xgb.DMatrix()}}
}


@@ -15,9 +15,11 @@ xgb.DMatrix.save(dmatrix, fname)
Save xgb.DMatrix object to binary file
}
\examples{
\dontshow{RhpcBLASctl::omp_set_num_threads(1)}
data(agaricus.train, package = "xgboost")
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
fname <- file.path(tempdir(), "xgb.DMatrix.data")
xgb.DMatrix.save(dtrain, fname)
dtrain <- xgb.DMatrix(fname)
}


@@ -0,0 +1,111 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{xgb.DataBatch}
\alias{xgb.DataBatch}
\title{Structure for Data Batches}
\usage{
xgb.DataBatch(
data,
label = NULL,
weight = NULL,
base_margin = NULL,
feature_names = colnames(data),
feature_types = NULL,
group = NULL,
qid = NULL,
label_lower_bound = NULL,
label_upper_bound = NULL,
feature_weights = NULL
)
}
\arguments{
\item{data}{The data belonging to this batch.
Note that not all of the input types supported by \code{\link[=xgb.DMatrix]{xgb.DMatrix()}} can be
passed here. Supported types are:
\itemize{
\item \code{matrix}, with types \code{numeric}, \code{integer}, and \code{logical}. Note that for types
\code{integer} and \code{logical}, missing values might not be automatically recognized
as such - see the documentation for parameter \code{missing} in \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}}
for details on this.
\item \code{data.frame}, with the same types as supported by 'xgb.DMatrix' and same
conversions applied to it. See the documentation for parameter \code{data} in
\code{\link[=xgb.DMatrix]{xgb.DMatrix()}} for details on it.
\item CSR matrices, as class \code{dgRMatrix} from package "Matrix".
}}
\item{label}{Label of the training data. For classification problems, should be passed encoded as
integers with numeration starting at zero.}
\item{weight}{Weight for each instance.
Note that, for ranking task, weights are per-group. In ranking task, one weight
is assigned to each group (not each data point). This is because we
only care about the relative ordering of data points within each group,
so it doesn't make sense to assign weights to individual data points.}
\item{base_margin}{Base margin used for boosting from existing model.
In the case of multi-output models, one can also pass multi-dimensional base_margin.}
\item{feature_names}{Set names for features. Overrides column names in data frame and matrix.
Note: columns are not referenced by name when calling \code{predict}, so the column order there
must be the same as in the DMatrix construction, regardless of the column names.}
\item{feature_types}{Set types for features.
If \code{data} is a \code{data.frame} and \code{feature_types} is not supplied,
feature types will be deduced automatically from the column types.
Otherwise, one can pass a character vector with the same length as number of columns in \code{data},
with the following possible values:
\itemize{
\item "c", which represents categorical columns.
\item "q", which represents numeric columns.
\item "int", which represents integer columns.
\item "i", which represents logical (boolean) columns.
}
Note that, while categorical types are treated differently from the rest for model fitting
purposes, the other types do not influence the generated model, but have effects in other
functionalities such as feature importances.
\strong{Important}: Categorical features, if specified manually through \code{feature_types}, must
be encoded as integers with numeration starting at zero, and the same encoding needs to be
applied when passing data to \code{\link[=predict]{predict()}}. Even if passing \code{factor} types, the encoding will
not be saved, so make sure that \code{factor} columns passed to \code{predict} have the same \code{levels}.}
\item{group}{Group size for all ranking groups.}
\item{qid}{Query ID for data samples, used for ranking.}
\item{label_lower_bound}{Lower bound for survival training.}
\item{label_upper_bound}{Upper bound for survival training.}
\item{feature_weights}{Set feature weights for column sampling.}
}
\value{
An object of class \code{xgb.DataBatch}, which is just a list containing the
data and parameters passed here. It does \strong{not} inherit from \code{xgb.DMatrix}.
}
\description{
Helper function to supply data in batches of a data iterator when
constructing a DMatrix from external memory through \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}}
or through \code{\link[=xgb.QuantileDMatrix.from_iterator]{xgb.QuantileDMatrix.from_iterator()}}.
This function is \strong{only} meant to be called inside of a callback function (which
is passed as argument to function \code{\link[=xgb.DataIter]{xgb.DataIter()}} to construct a data iterator)
when constructing a DMatrix through external memory - otherwise, one should call
\code{\link[=xgb.DMatrix]{xgb.DMatrix()}} or \code{\link[=xgb.QuantileDMatrix]{xgb.QuantileDMatrix()}}.
The object that results from calling this function directly is \strong{not} like
an \code{xgb.DMatrix} - i.e. it cannot be used to train a model, nor to get predictions - its only
possible usage is to supply data to an iterator, from which a DMatrix is then constructed.
For more information and for example usage, see the documentation for \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}}.
}
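\examples{
# A minimal sketch of what 'xgb.DataBatch' produces. In real usage it would
# be called inside the 'f_next' function of an 'xgb.DataIter' instead
# (see xgb.ExtMemDMatrix() for a complete, runnable example).
data(mtcars)
batch <- xgb.DataBatch(
data = as.matrix(mtcars[, -1]),
label = mtcars$mpg
)
class(batch)  # an 'xgb.DataBatch', not usable as a DMatrix by itself
}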
\seealso{
\code{\link[=xgb.DataIter]{xgb.DataIter()}}, \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}}.
}


@@ -0,0 +1,52 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{xgb.DataIter}
\alias{xgb.DataIter}
\title{XGBoost Data Iterator}
\usage{
xgb.DataIter(env = new.env(), f_next, f_reset)
}
\arguments{
\item{env}{An R environment to pass to the callback functions supplied here, which can be
used to keep track of variables to determine how to handle the batches.
For example, one might want to keep track of an iteration number in this environment in order
to know which part of the data to pass next.}
\item{f_next}{\verb{function(env)} which is responsible for:
\itemize{
\item Accessing or retrieving the next batch of data in the iterator.
\item Supplying this data by calling function \code{\link[=xgb.DataBatch]{xgb.DataBatch()}} on it and returning the result.
\item Keeping track of where in the sequence of batches it currently is and which batch comes next,
which can for example be done by modifying variables in the \code{env} environment that is passed here.
\item Signaling whether there are more batches to be consumed or not, by returning \code{NULL}
when the stream of data ends (all batches in the iterator have been consumed), or the result from
calling \code{\link[=xgb.DataBatch]{xgb.DataBatch()}} when there are remaining batches to be consumed.
}}
\item{f_reset}{\verb{function(env)} which is responsible for resetting the data iterator
(i.e. taking it back to the first batch). It will be called both before and after the sequence
of batches has been consumed.
Note that, after resetting the iterator, the batches will be accessed again, so the same data
(and in the same order) must be passed in subsequent iterations.}
}
\value{
An \code{xgb.DataIter} object, containing the same inputs supplied here, which can then
be passed to \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}}.
}
\description{
Interface to create a custom data iterator in order to construct a DMatrix
from external memory.
This function is responsible for generating an R object structure containing callback
functions and an environment shared with them.
The output structure from this function is then meant to be passed to \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}},
which will consume the data and create a DMatrix from it by executing the callback functions.
For more information, and for a usage example, see the documentation for \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}}.
}
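\examples{
# A minimal sketch of the iterator contract, assuming a stream with a
# single batch: 'f_next' returns an 'xgb.DataBatch' while batches remain
# and NULL once the stream ends; 'f_reset' rewinds to the first batch.
it <- xgb.DataIter(
env = as.environment(list(iter = 0)),
f_next = function(env) {
if (env$iter >= 1) {
return(NULL)  # only one batch in this stream
}
env$iter <- env$iter + 1
xgb.DataBatch(data = matrix(as.numeric(1:10), nrow = 5), label = c(1, 2, 3, 4, 5))
},
f_reset = function(env) {
env$iter <- 0
}
)
# 'it' can now be passed to xgb.ExtMemDMatrix()
}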
\seealso{
\code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}}, \code{\link[=xgb.DataBatch]{xgb.DataBatch()}}.
}


@@ -0,0 +1,121 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{xgb.ExtMemDMatrix}
\alias{xgb.ExtMemDMatrix}
\title{DMatrix from External Data}
\usage{
xgb.ExtMemDMatrix(
data_iterator,
cache_prefix = tempdir(),
missing = NA,
nthread = NULL
)
}
\arguments{
\item{data_iterator}{A data iterator structure as returned by \code{\link[=xgb.DataIter]{xgb.DataIter()}},
which includes an environment shared between function calls, and functions to access
the data in batches on-demand.}
\item{cache_prefix}{The path of the cache file. The caller must ensure that all directories in this path exist.}
\item{missing}{A float value to represent missing values in data.
Note that, while functions like \code{\link[=xgb.DMatrix]{xgb.DMatrix()}} can take a generic \code{NA} and interpret it
correctly for different types like \code{numeric} and \code{integer}, if an \code{NA} value is passed here,
it will not be adapted for different input types.
For example, in R \code{integer} types, missing values are represented by integer number \code{-2147483648}
(since machine 'integer' types do not have an inherent 'NA' value) - hence, if one passes \code{NA},
which is interpreted as a floating-point NaN by \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}} and by
\code{\link[=xgb.QuantileDMatrix.from_iterator]{xgb.QuantileDMatrix.from_iterator()}}, these integer missing values will not be treated as missing.
This should not pose any problem for \code{numeric} types, since they do have an inherent NaN value.}
\item{nthread}{Number of threads used for creating DMatrix.}
}
\value{
An 'xgb.DMatrix' object, with subclass 'xgb.ExtMemDMatrix', in which the data is not
held internally but accessed through the iterator when needed.
}
\description{
Create a special type of XGBoost 'DMatrix' object from external data
supplied by an \code{\link[=xgb.DataIter]{xgb.DataIter()}} object, potentially passed in batches from a
bigger set that might not fit entirely in memory.
The data supplied by the iterator is accessed on-demand as needed, multiple times,
without being concatenated, but note that fields like 'label' \strong{will} be
concatenated from multiple calls to the data iterator.
For more information, see the guide 'Using XGBoost External Memory Version':
\url{https://xgboost.readthedocs.io/en/stable/tutorials/external_memory.html}
}
\examples{
data(mtcars)
# This custom environment will be passed to the iterator
# functions at each call. It is up to the user to keep
# track of the iteration number in this environment.
iterator_env <- as.environment(
list(
iter = 0,
x = mtcars[, -1],
y = mtcars[, 1]
)
)
# Data is passed in two batches.
# In this example, batches are obtained by subsetting the 'x' variable.
# This is not advantageous to do, since the data is already loaded in memory
# and can be passed in full in one go, but there can be situations in which
# only a subset of the data will fit in the computer's memory, and it can
# be loaded in batches that are accessed one-at-a-time only.
iterator_next <- function(iterator_env) {
curr_iter <- iterator_env[["iter"]]
if (curr_iter >= 2) {
# there are only two batches, so this signals end of the stream
return(NULL)
}
if (curr_iter == 0) {
x_batch <- iterator_env[["x"]][1:16, ]
y_batch <- iterator_env[["y"]][1:16]
} else {
x_batch <- iterator_env[["x"]][17:32, ]
y_batch <- iterator_env[["y"]][17:32]
}
on.exit({
iterator_env[["iter"]] <- curr_iter + 1
})
# Function 'xgb.DataBatch' must be called manually
# at each batch with all the appropriate attributes,
# such as feature names and feature types.
return(xgb.DataBatch(data = x_batch, label = y_batch))
}
# This moves the iterator back to its beginning
iterator_reset <- function(iterator_env) {
iterator_env[["iter"]] <- 0
}
data_iterator <- xgb.DataIter(
env = iterator_env,
f_next = iterator_next,
f_reset = iterator_reset
)
cache_prefix <- tempdir()
# DMatrix will be constructed from the iterator's batches
dm <- xgb.ExtMemDMatrix(data_iterator, cache_prefix, nthread = 1)
# After construction, can be used as a regular DMatrix
params <- list(nthread = 1, objective = "reg:squarederror")
model <- xgb.train(data = dm, nrounds = 2, params = params)
# Predictions can also be called on it, and should be the same
# as if the data were passed differently.
pred_dm <- predict(model, dm)
pred_mat <- predict(model, as.matrix(mtcars[, -1]))
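# A quick check that both prediction vectors match (stripping any
# names that 'predict' might attach):
all.equal(unname(pred_dm), unname(pred_mat))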
}
\seealso{
\code{\link[=xgb.DataIter]{xgb.DataIter()}}, \code{\link[=xgb.DataBatch]{xgb.DataBatch()}}, \code{\link[=xgb.QuantileDMatrix.from_iterator]{xgb.QuantileDMatrix.from_iterator()}}
}


@@ -0,0 +1,65 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{xgb.QuantileDMatrix.from_iterator}
\alias{xgb.QuantileDMatrix.from_iterator}
\title{QuantileDMatrix from External Data}
\usage{
xgb.QuantileDMatrix.from_iterator(
data_iterator,
missing = NA,
nthread = NULL,
ref = NULL,
max_bin = NULL
)
}
\arguments{
\item{data_iterator}{A data iterator structure as returned by \code{\link[=xgb.DataIter]{xgb.DataIter()}},
which includes an environment shared between function calls, and functions to access
the data in batches on-demand.}
\item{missing}{A float value to represent missing values in data.
Note that, while functions like \code{\link[=xgb.DMatrix]{xgb.DMatrix()}} can take a generic \code{NA} and interpret it
correctly for different types like \code{numeric} and \code{integer}, if an \code{NA} value is passed here,
it will not be adapted for different input types.
For example, in R \code{integer} types, missing values are represented by integer number \code{-2147483648}
(since machine 'integer' types do not have an inherent 'NA' value) - hence, if one passes \code{NA},
which is interpreted as a floating-point NaN by \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}} and by
\code{\link[=xgb.QuantileDMatrix.from_iterator]{xgb.QuantileDMatrix.from_iterator()}}, these integer missing values will not be treated as missing.
This should not pose any problem for \code{numeric} types, since they do have an inherent NaN value.}
\item{nthread}{Number of threads used for creating DMatrix.}
\item{ref}{The training dataset that provides quantile information, needed when creating
validation/test dataset with \code{\link[=xgb.QuantileDMatrix]{xgb.QuantileDMatrix()}}. Supplying the training DMatrix
as a reference means that the same quantisation applied to the training data is
applied to the validation/test data.
\item{max_bin}{The number of histogram bins, which should be consistent with the training parameter
\code{max_bin}.
This is only supported when constructing a QuantileDMatrix.}
}
\value{
An 'xgb.DMatrix' object, with subclass 'xgb.QuantileDMatrix'.
}
\description{
Create an \code{xgb.QuantileDMatrix} object (exact same class as would be returned by
calling function \code{\link[=xgb.QuantileDMatrix]{xgb.QuantileDMatrix()}}, with the same advantages and limitations) from
external data supplied by \code{\link[=xgb.DataIter]{xgb.DataIter()}}, potentially passed in batches from
a bigger set that might not fit entirely in memory, in the same way as \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}}.
Note that, while external data will only be loaded through the iterator (thus the full data
might not be held entirely in-memory), the quantized representation of the data will get
created in-memory, being concatenated from multiple calls to the data iterator. The quantized
version is typically lighter than the original data, so there might be cases in which this
representation could potentially fit in memory even if the full data does not.
For more information, see the guide 'Using XGBoost External Memory Version':
\url{https://xgboost.readthedocs.io/en/stable/tutorials/external_memory.html}
}
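\examples{
# A minimal sketch, assuming 'data_iterator' is an iterator constructed
# with xgb.DataIter() as in the example for xgb.ExtMemDMatrix():
\dontrun{
qdm <- xgb.QuantileDMatrix.from_iterator(
data_iterator,
max_bin = 256,
nthread = 1
)
}
}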
\seealso{
\code{\link[=xgb.DataIter]{xgb.DataIter()}}, \code{\link[=xgb.DataBatch]{xgb.DataBatch()}}, \code{\link[=xgb.ExtMemDMatrix]{xgb.ExtMemDMatrix()}},
\code{\link[=xgb.QuantileDMatrix]{xgb.QuantileDMatrix()}}
}
