Compare commits


1630 Commits

Author SHA1 Message Date
Hendrik Groove
9122d959e8 revert 2024-10-22 01:21:22 +02:00
Hendrik Groove
c78b0b752a remove async 2024-10-22 01:18:12 +02:00
Hendrik Groove
54e93c7a0b switch to default stream 2024-10-22 01:14:29 +02:00
Hendrik Groove
2bbb8b3786 revert 2024-10-22 00:58:01 +02:00
Hendrik Groove
55f995fc50 remove async 2024-10-22 00:53:56 +02:00
Hendrik Groove
cbaf5511ac test 2024-10-22 00:51:34 +02:00
Hendrik Groove
59cc283242 use hip functions 2024-10-22 00:46:07 +02:00
Hendrik Groove
038d61a802 try other stream 2024-10-22 00:14:00 +02:00
Hendrik Groove
8fb2258a2f reset 2024-10-22 00:10:40 +02:00
Hendrik Groove
02882a4b33 reset regression_obj.cu 2024-10-22 00:05:53 +02:00
Hendrik Groove
0d92f6ca9c hipStreamDefault 2024-10-21 23:58:27 +02:00
Hendrik Groove
2a554ba4a7 reset device helper 2024-10-21 23:53:19 +02:00
Hendrik Groove
1c666db349 reset learner 2024-10-21 23:49:22 +02:00
Hendrik Groove
1931a70598 stream logic 2024-10-21 23:24:10 +02:00
Hendrik Groove
20a9c223b6 remove logging 2024-10-21 21:52:34 +02:00
Hendrik Groove
9659d0e7bd WARP_SIZE 2024-10-21 21:35:25 +02:00
Hendrik Groove
768c8b298c remove logging 2024-10-21 21:24:47 +02:00
Hendrik Groove
94ffd57641 try direct instantiation of EvaluateSplitsKernel 2024-10-21 21:22:57 +02:00
Hendrik Groove
ee17a5a26c try compile option 2024-10-21 21:19:11 +02:00
Hendrik Groove
8c15f3b665 revert 2024-10-21 11:43:29 +02:00
Hendrik Groove
bb2feab0b2 try 2024-10-21 01:55:41 +02:00
Hendrik Groove
1b6c6baf76 restore stream logic 2024-10-21 01:39:43 +02:00
Hendrik Groove
b3ee7a59c7 try other stream 2024-10-21 01:33:00 +02:00
Hendrik Groove
ea3e7adcdc hipStreamPerThread 2024-10-21 01:16:40 +02:00
Hendrik Groove
807ee5da88 fix 2024-10-21 00:39:06 +02:00
Hendrik Groove
a135895f3a fix 2024-10-21 00:35:28 +02:00
Hendrik Groove
ed1636b9c0 fix 2024-10-21 00:32:33 +02:00
Hendrik Groove
1f4154d756 fix 2024-10-21 00:29:38 +02:00
Hendrik Groove
86fcbaf0e5 fix 2024-10-21 00:26:25 +02:00
Hendrik Groove
0d600b4535 try to change stream 2024-10-21 00:25:18 +02:00
Hendrik Groove
6fcffef7dc fix 2024-10-21 00:18:25 +02:00
Hendrik Groove
ca6fcd361e fix 2024-10-21 00:13:27 +02:00
Hendrik Groove
c39ad981ce fix 2024-10-21 00:11:08 +02:00
Hendrik Groove
1de5734d4c more logging 2024-10-21 00:08:50 +02:00
Hendrik Groove
e2e6b6e71f more logging 2024-10-21 00:06:21 +02:00
Hendrik Groove
db66fad9e9 SumReduction logging 2024-10-20 23:27:50 +02:00
Hendrik Groove
bf2ef6c586 log reduce function 2024-10-20 23:26:21 +02:00
Hendrik Groove
58a27ba968 more logging 2024-10-20 20:59:23 +02:00
Hendrik Groove
c964dd62b4 more logging 2024-10-20 20:53:50 +02:00
Hendrik Groove
4a10135006 validate label debug 2024-10-20 18:11:03 +02:00
Hendrik Groove
f54355f470 fix path 2024-10-20 17:56:27 +02:00
Hendrik Groove
08f3936bc9 fix path 2024-10-20 17:51:05 +02:00
Hendrik Groove
f50d5344f3 get gradient error logging 2024-10-20 17:40:52 +02:00
Hendrik Groove
ab41cd26a6 add gpu error check 2024-10-20 17:34:51 +02:00
Hendrik Groove
fd95be5f20 validate label logging 2024-10-20 17:32:22 +02:00
Hendrik Groove
60a3bea7c6 add logging 2024-10-20 17:30:17 +02:00
Hendrik Groove
7301022fed logging 2024-10-20 17:05:34 +02:00
Hendrik Groove
288193cf82 try 2024-10-20 02:41:57 +02:00
Hendrik Groove
e142b52540 use new func 2024-10-20 02:18:41 +02:00
Hendrik Groove
e8fceb8198 add logging 2024-10-20 02:03:55 +02:00
Hendrik Groove
971d3ca8cd array interface 2024-10-20 01:46:48 +02:00
Hendrik Groove
206f305b65 array interface 2024-10-20 01:28:40 +02:00
Hendrik Groove
8e703f3a5a try hipHostMalloc 2024-10-20 01:13:44 +02:00
Hendrik Groove
0790bf7f8f change back 2024-10-17 17:47:10 +02:00
Hendrik Groove
d8a92fe783 test 2024-10-17 17:42:37 +02:00
Hui Liu
bce48bffc6
Merge pull request #2 from hliuca/master-rocm
Merge latest upstream changes
2024-04-23 09:50:11 -07:00
Hui Liu
45dc134151 merge changes from upstream 2024-04-22 14:22:16 -07:00
Hui Liu
b27f35e270 rm hip from src 2024-04-22 12:31:14 -07:00
Hui Liu
8b75204fed merge latest change from upstream 2024-04-22 09:35:31 -07:00
Jiaming Yuan
3fbb221fec
[coll] Implement shutdown for tracker and comm. (#10208)
- Force shutdown the tracker.
- Implement shutdown notice for error handling thread in comm.
2024-04-20 04:08:17 +08:00
Bobby Wang
8fb05c8c95
[pyspark] support stage-level for yarn/k8s (#10209) 2024-04-20 00:24:40 +08:00
dependabot[bot]
bb212bf33c
Bump org.apache.flink:flink-clients in /jvm-packages (#10197)
Bumps [org.apache.flink:flink-clients](https://github.com/apache/flink) from 1.18.0 to 1.19.0.
- [Commits](https://github.com/apache/flink/compare/release-1.18.0...release-1.19.0)

---
updated-dependencies:
- dependency-name: org.apache.flink:flink-clients
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-18 15:20:31 -07:00
Jiaming Yuan
3f64b4fde3
[coll] Add global functions. (#10203) 2024-04-19 03:17:23 +08:00
dependabot[bot]
551fa6e25e
Bump scalatest.version from 3.2.17 to 3.2.18 in /jvm-packages/xgboost4j (#10196)
Bumps `scalatest.version` from 3.2.17 to 3.2.18.

Updates `org.scalatest:scalatest_2.12` from 3.2.17 to 3.2.18
- [Release notes](https://github.com/scalatest/scalatest/releases)
- [Commits](https://github.com/scalatest/scalatest/compare/release-3.2.17...release-3.2.18)

Updates `org.scalactic:scalactic_2.12` from 3.2.17 to 3.2.18
- [Release notes](https://github.com/scalatest/scalatest/releases)
- [Commits](https://github.com/scalatest/scalatest/compare/release-3.2.17...release-3.2.18)

---
updated-dependencies:
- dependency-name: org.scalatest:scalatest_2.12
  dependency-type: direct:development
  update-type: version-update:semver-patch
- dependency-name: org.scalactic:scalactic_2.12
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-18 11:46:28 -07:00
dependabot[bot]
531ff21b20
Bump org.scala-lang.modules:scala-collection-compat_2.12 (#10193)
Bumps [org.scala-lang.modules:scala-collection-compat_2.12](https://github.com/scala/scala-collection-compat) from 2.11.0 to 2.12.0.
- [Release notes](https://github.com/scala/scala-collection-compat/releases)
- [Commits](https://github.com/scala/scala-collection-compat/compare/v2.11.0...v2.12.0)

---
updated-dependencies:
- dependency-name: org.scala-lang.modules:scala-collection-compat_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-18 07:18:18 -07:00
Jiaming Yuan
303c603c7d
[pyspark] Reuse the collective communicator. (#10198) 2024-04-18 19:09:30 +08:00
dependabot[bot]
0aa2600399
Bump org.apache.maven.plugins:maven-jar-plugin (#10202)
Bumps [org.apache.maven.plugins:maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.3.0...maven-jar-plugin-3.4.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-17 23:04:41 -07:00
Philip Hyunsu Cho
f53f5ca359
[CI] Update machine images (#10201) 2024-04-17 19:15:06 -07:00
Jiaming Yuan
4b10200456
[coll] Improve event loop. (#10199)
- Add a test for blocking calls.
- Do not require the queue to be empty after waking up; this frees up the thread to answer blocking calls.
- Handle EOF in read.
- Improve the error message in the result. Allow concatenation of multiple results.
2024-04-18 03:29:52 +08:00
dependabot[bot]
7c0c9677a9
Bump org.apache.maven.plugins:maven-jar-plugin (#10191)
Bumps [org.apache.maven.plugins:maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.3.0...maven-jar-plugin-3.4.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-16 19:52:33 -07:00
Philip Hyunsu Cho
32be4669fb
[jvm-packages] Omnibus patch to update all minor dependencies (#10188)
* Fold in #10184

* Fold in #10176

* Fold in #10168

* Fold in #10165

* Fold in #10164

* Fold in #10155

* Fold in #10062

* Fold in #9984

* Fold in #9843

* Upgrade to Maven 3.6.3
2024-04-16 19:43:04 -07:00
Philip Hyunsu Cho
3d1d97c8cc
[CI] Reduce clutter from dependabot (#10187) 2024-04-16 19:42:08 -07:00
Eric Leung
9e354fb120
docs: update Ruby package link (#10182) 2024-04-16 15:09:59 -07:00
github-actions[bot]
2925cebdca
[CI] Use latest RAPIDS; Pandas 2.0 compatibility fix (#10175)
* [CI] Update RAPIDS to latest stable

* [CI] Use rapidsai stable channel; fix syntax errors in Dockerfile.gpu

* Don't combine astype() with loc()

* Work around https://github.com/dmlc/xgboost/issues/10181

* Fix formatting

* Fix test

---------

Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2024-04-15 13:38:53 -07:00
Dmitry Razdoburdin
6e5c335cea
[SYCL] Add basic features for QuantileHistMaker (#10174)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-04-15 21:24:46 +08:00
Hyunsu Cho
882f4136e0
[CI] Update create-pull-request action 2024-04-13 13:52:12 -07:00
Trinh Quoc Anh
732e27cebc
[doc] Update python3statement URL (#10179) 2024-04-13 01:10:50 +08:00
Jiaming Yuan
1022909bbe
Fix global config for external memory. (#10173)
Pass the thread-local configuration between threads.
2024-04-11 01:29:28 +08:00
Hui Liu
42edd78f30 update rocgputreeshap 2024-04-09 12:47:57 -07:00
Jiaming Yuan
f0a138f33a
Fix pyspark with verbosity=3. (#10172) 2024-04-09 23:18:56 +08:00
dependabot[bot]
a99bb38bd2
Bump org.apache.maven.plugins:maven-gpg-plugin from 3.1.0 to 3.2.2 in /jvm-packages/xgboost4j-spark (#10151) 2024-04-03 16:45:54 -07:00
Fabi
e15d61b916
docs: fix bug in tutorial (#10143) 2024-04-01 10:14:40 +08:00
david-cortes
bc9ea62ec0
[R] Make xgb.cv work with xgb.DMatrix only, adding support for survival and ranking fields (#10031)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-03-31 21:53:00 +08:00
Jiaming Yuan
8bad677c2f
Update collective implementation. (#10152)
* Update collective implementation.

- Clean up resources during `Finalize` to avoid handling threads in the destructor.
- Calculate the size for allgather automatically.
- Use simple allgather for small (smaller than the number of workers) allreduce.
2024-03-30 18:57:31 +08:00
Jiaming Yuan
230010d9a0
Cleanup set info. (#10139)
- Use the array interface internally.
- Deprecate `XGDMatrixSetDenseInfo`.
- Deprecate `XGDMatrixSetUIntInfo`.
- Move the handling of `DataType` into the deprecated C function.

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-03-26 23:26:24 +08:00
Dmitry Razdoburdin
6a7c6a8ae6
add sycl realisation of ghist builder (#10138)
Co-authored-by: Dmitry Razdoburdin <>
2024-03-23 12:55:25 +08:00
Hui Liu
ff549ae933 sync upstream code 2024-03-20 16:14:38 -07:00
Jiaming Yuan
e1695775e9
[CI] Fix yml in github action. (#10134) 2024-03-20 16:38:03 +08:00
Jiaming Yuan
2b2aac85f4
[CI] Update scorecard actions. (#10133) 2024-03-20 15:51:38 +08:00
Jiaming Yuan
ca4801f81d
Work with IPv6 in the new tracker. (#10125) 2024-03-20 05:19:23 +08:00
Jiaming Yuan
53fc17578f
Use std::uint64_t for row index. (#10120)
- Use std::uint64_t instead of size_t to avoid implementation-defined type.
- Rename to bst_idx_t, to account for other types of indexing.
- Small cleanup to the base header.
2024-03-15 18:43:49 +08:00
Jiaming Yuan
56b1868278
Fix compilation with the latest ctk. (#10123) 2024-03-15 08:04:41 +08:00
Dmitry Razdoburdin
617970a0c2
[SYCL] Add split evaluation (#10119)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-03-15 01:46:46 +08:00
Michael Mayer
e0f890ba28
[R] deprecate watchlist (#10110) 2024-03-13 17:02:34 +08:00
Hui Liu
3ad7461ddc add ROCm installation 2024-03-12 09:53:10 -07:00
Hui Liu
fe36d96247 add ROCm installation 2024-03-12 09:52:53 -07:00
Hui Liu
968dbf25fb merge latest changes 2024-03-12 09:13:09 -07:00
Jiaming Yuan
1450aebb74
Fix pairwise objective with NDCG metric along with custom gain. (#10100)
* Fix pairwise objective with NDCG metric.

- Allow setting `ndcg_exp_gain` for `rank:pairwise`.

This is useful when using pairwise as the objective but NDCG as the metric; see the sketch below.
2024-03-11 14:54:10 +08:00
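
A minimal sketch of the configuration this commit enables, using hypothetical toy data (query-grouped random rows, illustrative only):

```python
import numpy as np
import xgboost as xgb

# Hypothetical toy ranking data: 100 rows, 5 features, 10 queries.
rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = rng.integers(0, 4, 100)              # relevance labels
qid = np.sort(rng.integers(0, 10, 100))  # query ids must be sorted

dtrain = xgb.DMatrix(X, label=y, qid=qid)
params = {
    "objective": "rank:pairwise",  # pairwise objective ...
    "eval_metric": "ndcg",         # ... with NDCG as the metric
    "ndcg_exp_gain": False,        # the setting this fix makes effective here
}
bst = xgb.train(params, dtrain, num_boost_round=10)
```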
Jiaming Yuan
06c9702028
[doc] Fix the default value for lambdarank_pair_method. (#10098) 2024-03-11 14:53:17 +08:00
david-cortes
b023a253b4
[R] Rename watchlist -> evals (#10032) 2024-03-10 06:48:06 +08:00
Jiaming Yuan
2c13f90384
Support graphviz plot for multi-target tree. (#10093) 2024-03-09 05:35:25 +08:00
Jiaming Yuan
e14c3b9325
Optional normalization for learning to rank. (#10094) 2024-03-08 12:41:21 +08:00
Philip Hyunsu Cho
bc516198dc
[CI] Cancel GH Action job if a newer commit is published (#10088) 2024-03-04 21:36:08 -08:00
Philip Hyunsu Cho
23a37dcaf9
[CI] Test R package with CMake (#10087)
* [CI] Test R package with CMake

* Fix

* Fix

* Update test_r_package.py

* Fix CMake flag for R package

* Install system deps

* Fix

* Use sudo
2024-03-04 12:32:44 -08:00
Jiaming Yuan
d07b7fe8c8
Small cleanup for mock tests. (#10085) 2024-03-04 23:32:11 +08:00
Dmitry Razdoburdin
7a61216690
[sycl] add partitioning and related tests (#10080)
Co-authored-by: Dmitry Razdoburdin <>
2024-03-02 01:49:27 +08:00
david-cortes
2c12b956da
[R] Refactor callback structure and attributes (#9957) 2024-03-01 15:57:47 +08:00
Jiaming Yuan
3941b31ade
Disable column sample by node for the exact tree method. (#10083) 2024-03-01 14:16:10 +08:00
Jiaming Yuan
8189126d51
Add CUDA iterator to tensor view. (#10074) 2024-03-01 14:15:31 +08:00
Bobby Wang
d24df52bb9
[pyspark] rework the log (#10077) 2024-02-29 16:47:31 +08:00
Jiaming Yuan
5ac233280e
Require context in aggregators. (#10075) 2024-02-28 03:12:42 +08:00
Dmitry Razdoburdin
761845f594
[SYCL] Implement row set collection. (#10057)
Co-authored-by: Dmitry Razdoburdin <>
2024-02-26 21:07:36 +08:00
Jiaming Yuan
0ce4372bd4
Use UBJSON for serializing splits for vertical data split. (#10059) 2024-02-25 00:18:23 +08:00
david-cortes
f7005d32c1
[R] Use inplace predict (#9829)
---------

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2024-02-24 02:03:54 +08:00
José Morales
729fd97196
[doc] Fix spark_estimator doc (#10066) 2024-02-23 12:01:24 +08:00
Philip Hyunsu Cho
9f7b94cf70
[CI] Patch GitHub Action pipeline (#10067) 2024-02-22 17:16:48 -08:00
Hyunsu Cho
3ab8ccaa0c [CI] Hotfix for GH Action 2024-02-22 14:55:41 -08:00
Hyunsu Cho
aaa950951b [CI] Hotfix for GH Action 2024-02-22 14:53:55 -08:00
Hyunsu Cho
5b1d7a760b [CI] Hotfix for GH Action 2024-02-22 14:40:14 -08:00
Jiaming Yuan
eb281ff9b4
[CI] Fix JVM tests on GH Action (#10064)
---------

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2024-02-22 14:21:32 -08:00
UncleLLD
b9171d8f0b
[doc] Fix python docs (#10058) 2024-02-22 17:34:12 +08:00
Jiaming Yuan
2e4ea5ecc0
Support f64 for ubjson. (#10055) 2024-02-21 02:18:42 +08:00
Jiaming Yuan
8ea705e4d5
Support sample weight in sklearn custom objective. (#10050) 2024-02-21 00:43:14 +08:00
Jiaming Yuan
69a17d5114
Fix with None input. (#10052) 2024-02-20 22:34:22 +08:00
Jiaming Yuan
d37b83e8d9
Fix UBJSON with boolean value. (#10054) 2024-02-20 22:13:51 +08:00
david-cortes
6e3c899ba7
[R] Don't cap global number of threads for serialization (#10028) 2024-02-20 11:13:00 +08:00
Louis Desreumaux
edf501d227
Implement contribution prediction with QuantileDMatrix (#10043)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-02-19 21:03:29 +08:00
Dmitry Razdoburdin
057f03cacc
[SYCL] Initial implementation of GHistIndexMatrix (#10045)
Co-authored-by: Dmitry Razdoburdin <>
2024-02-19 04:27:15 +08:00
UncleLLD
7cc256e246
update python intro doc (#10033) 2024-02-14 10:01:38 -08:00
github-actions[bot]
c182c584ca
[CI] Update RAPIDS to latest stable (#10042)
Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
2024-02-13 15:29:09 -08:00
david-cortes
4de866211d
[R] switch to URI reader (#10024) 2024-02-05 05:03:38 +08:00
Philip Hyunsu Cho
f2095f1d5b
[Doc] Fix formatting for R package doc (#10030) 2024-02-04 16:23:35 +08:00
david-cortes
a730c7e67e
[R] allow using seed with regular RNG (#10029) 2024-02-04 16:22:22 +08:00
Hui Liu
2f6027525d Merge branch 'sync-2024Jan24' into master-rocm 2024-02-01 14:49:27 -08:00
Hui Liu
44db1cef54 Merge branch 'master' into sync-2024Jan24 2024-02-01 14:41:48 -08:00
david-cortes
662854c7d7
[R] Document handling of indexes (#10019)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-02-02 05:39:09 +08:00
Philip Hyunsu Cho
4dfbe2a893
[CI] Test building for 32-bit arch (#10021)
* [CI] Test building for 32-bit arch

* Update CMakeLists.txt

* Fix yaml

* Use Debian container

* Remove -Werror for 32-bit

* Revert "Remove -Werror for 32-bit"

This reverts commit c652bc6a037361bcceaf56fb01863210b462793d.

* Don't error for overloaded-virtual warning

* Ignore some warnings from dmlc-core

* Fix compiler warnings

* Fix formatting

* Apply suggestions from code review

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>

* Add more cast

---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-01-31 13:20:51 -08:00
Dmitry Razdoburdin
234674a0a6
[sync]. Add partition builder. (#10011)
---------

Co-authored-by: Dmitry Razdoburdin <>
2024-01-31 17:39:48 +08:00
david-cortes
0955213220
[R] rename proxy dmatrix -> data batch (#10016) 2024-01-31 15:43:58 +08:00
david-cortes
1e72dc1276
[R] Add optional check on column names matching in predict (#10020) 2024-01-31 15:43:22 +08:00
Ammar Azman
c53d59f8db
[doc] Fix typos in code (#10023)
change `resutls` to `results`
2024-01-31 15:18:16 +08:00
david-cortes
5e00a71671
[R] rename slice to avoid dplyr conflict (#10017) 2024-01-31 05:33:37 +08:00
david-cortes
df7cf744b4
[R] Remove enable_categorical parameter (#10018) 2024-01-31 05:17:36 +08:00
david-cortes
3abbbe41ac
[R] Add data iterator, quantile dmatrix, external memory, and missing feature_types (#9913) 2024-01-30 19:26:44 +08:00
UncleLLD
d9f4ab557a
[doc] Fix data format (#10013) 2024-01-30 17:24:43 +08:00
Jiaming Yuan
54b71c8fba
Fix with black 24.1.1. (#10014) 2024-01-30 17:24:11 +08:00
Hui Liu
cf8c7e63af Merge branch 'sync-2024Jan24' into master-rocm 2024-01-26 15:47:12 -08:00
Hui Liu
2cb579ff3c fix memory type 2024-01-26 15:46:42 -08:00
Jiaming Yuan
65d7bf2dfe
Handle np integer in model slice and prediction. (#10007) 2024-01-26 04:58:48 +08:00
Jiaming Yuan
a76d6c6131
Fix cpp deprecation. (#10010) 2024-01-26 02:13:40 +08:00
Jiaming Yuan
d3f2dbe64f
[dask] Add seed to demos. (#10009) 2024-01-26 02:09:38 +08:00
Philip Hyunsu Cho
c8f5d190c6
[CI] Stop Windows pipeline upon a failing pytest (#10003) 2024-01-24 22:54:21 -08:00
Hui Liu
e3e3e34cd2 merge latest changes 2024-01-24 13:41:05 -08:00
Hui Liu
3fe874078c merge latest changes 2024-01-24 13:30:08 -08:00
Philip Hyunsu Cho
60ec7b8424
Throw error for 32-bit archs (#10005) 2024-01-24 13:02:39 -08:00
Hui Liu
74677e4e9d use __HIPCC__ for device code 2024-01-24 11:57:58 -08:00
Hui Liu
069cf1d019 use __HIPCC__ for device code 2024-01-24 11:30:01 -08:00
Jiaming Yuan
d12cc1090a
Refactor tests for training continuation. (#9997) 2024-01-24 16:07:19 +08:00
Hui Liu
1e0ccf7b87 fix random 2024-01-21 12:48:41 -08:00
david-cortes
5062a3ab46
[R] Support booster slicing. (#9948) 2024-01-21 05:11:26 +08:00
david-cortes
c5d0608057
[R] Remove parameters and attributes related to ntree and rebase iterationrange (#9935) 2024-01-21 00:56:57 +08:00
david-cortes
60b9d2eeb9
[R] Avoid memory copies in predict (#9902) 2024-01-21 00:53:18 +08:00
Philip Hyunsu Cho
2c8fa8b8b9
[CI] Skip MSVC when building R package (#9995)
* [CI] Skip MSVC when building R package

* [CI] Stop building binary tarball for Windows

* Remove unused script
2024-01-18 08:09:53 -08:00
Jiaming Yuan
bde20dd897
Remove benchmark scripts. (#9992) 2024-01-17 13:19:34 +08:00
Jiaming Yuan
d07e8b503e
Fix quantile regression demo. (#9991) 2024-01-17 13:19:08 +08:00
Jiaming Yuan
cacb4b1fdd
Fix gain calculation in multi-target tree. (#9978) 2024-01-17 13:18:44 +08:00
Jiaming Yuan
85d09245f6
Fix error handling in the event loop. (#9990) 2024-01-17 05:35:35 +08:00
Jiaming Yuan
0798e36d73
[breaking] Remove deprecated parameters in the skl interface. (#9986) 2024-01-15 20:40:05 +08:00
greydoubt
2de85d3241
[doc] slight cleanup (#9988) 2024-01-15 19:09:20 +08:00
david-cortes
547abb8c12
[R] Remove unusable 'feature_names' argument and make 'model' first argument in inspection functions (#9939) 2024-01-15 17:16:30 +08:00
Hui Liu
9759e28e6a compiler errors fix 2024-01-12 12:09:01 -08:00
Philip Hyunsu Cho
1168a68872
[jvm-packages] Update release scripts (#9983)
* [jvm-packages] Add Scala version suffix to xgboost-jvm package (#9776)

* Update JVM script (#9714)

* Revamp pom.xml

* Update instructions in prepare_jvm_release.py

* Fix formatting

* [jvm-packages] Fix POM for xgboost-jvm metapackage (#9893)

* [jvm-packages] Fix POM for xgboost-jvm metapackage

* Add script for updating the Scala version

* Update change_scala_version.py to also change scala.version property (#9897)

* Remove 'release-cpu-only' profile

* Remove scala-2.13 profile; enable gpu package for Scala 2.13
2024-01-12 10:37:55 -08:00
Hui Liu
1e1e8be3a5 merge latest, Jan 12 2024 2024-01-12 09:57:11 -08:00
Hui Liu
c42c7d99f1 fix memoryType 2024-01-11 14:10:30 -08:00
Bobby Wang
f88c43801f
[jvm-packages] update rapids dep to 23.12.1 (#9951)
With this PR, XGBoost GPU can support Scala 2.13.
2024-01-11 14:04:32 -08:00
Hui Liu
fd3ad29dc4 workaround memoryType and change rccl config 2024-01-11 14:03:05 -08:00
rpopescu
73b3955dd4
Fix FieldEntry ctor specialisation syntax error (#9980)
Co-authored-by: Radu Popescu <radu.popescu@aptportfolio.com>
2024-01-11 12:10:21 +08:00
david-cortes
d3a8d284ab
[R] On-demand serialization + standardization of attributes (#9924)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-01-11 05:08:42 +08:00
Jiaming Yuan
01c4711556
Check __cuda_array_interface__ instead of cupy class. (#9971)
* Now XGBoost can directly consume CUDA data from torch.
2024-01-09 19:59:01 +08:00
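
A sketch of what this enables, assuming a CUDA build of XGBoost and an available GPU; any object exposing `__cuda_array_interface__` should work the same way:

```python
import torch
import xgboost as xgb

X = torch.rand(256, 16, device="cuda")  # exposes __cuda_array_interface__
y = torch.rand(256, device="cuda")

# The tensors are consumed directly, without a cupy round-trip.
dtrain = xgb.QuantileDMatrix(X, label=y)
bst = xgb.train({"device": "cuda", "tree_method": "hist"}, dtrain, num_boost_round=5)
```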
Jiaming Yuan
2f57bbde3c
Additional tests for attributes and model boosted rounds. (#9962) 2024-01-09 09:54:39 +08:00
david-cortes
bed0349954
[R] Don't write files to user's directory (#9966) 2024-01-09 03:43:48 +08:00
david-cortes
7ff6d44efa
[R] Use R's error stream for printing warnings (#9965) 2024-01-09 03:43:21 +08:00
Jiaming Yuan
b3eb5d0945
Use UBJ in Python checkpoint. (#9958) 2024-01-09 03:22:15 +08:00
Jiaming Yuan
fa5e2f6c45
Synthesize the AMES housing dataset for tests. (#9963) 2024-01-09 00:54:23 +08:00
Jiaming Yuan
9a30bdd313
Test loading models with invalid file extensions. (#9955) 2024-01-08 19:26:24 +08:00
Bobby Wang
3ff3a5f1ed
[jvm-packages] support jdk 17 for test (#9959) 2024-01-08 17:30:49 +08:00
Jiaming Yuan
3976455af9
[jvm-packages] Use UBJ for checkpoints. (#9954) 2024-01-08 13:26:12 +08:00
Jiaming Yuan
38dd91f491
Save model in ubj as the default. (#9947) 2024-01-05 17:53:36 +08:00
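
For reference, the file extension selects the serialization format; a minimal sketch:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(32, 4), np.random.rand(32)
bst = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=2)
bst.save_model("model.ubj")   # UBJSON, the default format as of this change
bst.save_model("model.json")  # plain JSON is still available
loaded = xgb.Booster(model_file="model.ubj")
```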
Jiaming Yuan
c03a4d5088
Check support status for categorical features. (#9946) 2024-01-04 16:51:33 +08:00
david-cortes
db396ee340
[R] make sure output fits into int32 (#9949) 2024-01-04 16:51:22 +08:00
Jiaming Yuan
621348abb3
Fix multi-output with alternating strategies. (#9933)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2024-01-04 16:41:13 +08:00
Jiaming Yuan
5f7b5a6921
Add tests for pickling with custom obj and metric. (#9943) 2024-01-04 14:52:48 +08:00
Jiaming Yuan
26a5436a65
[doc] Describe feature info behavior. [skip ci] (#9866) 2024-01-04 14:52:19 +08:00
Jiaming Yuan
9f73127a23
Cleanup Python GPU tests. (#9934)
* Cleanup Python GPU tests.

- Remove the use of `gpu_hist` and `gpu_id` in cudf/cupy tests.
- Move base margin test into the testing directory.
2024-01-04 13:15:18 +08:00
david-cortes
3c004a4145
[R] Add missing DMatrix functions (#9929)
* `XGDMatrixGetQuantileCut`
* `XGDMatrixNumNonMissing`
* `XGDMatrixGetDataAsCSR`

---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2024-01-03 17:29:21 +08:00
david-cortes
49247458f9
[R] Minor improvements for evaluation printing (#9940) 2024-01-03 15:26:55 +08:00
david-cortes
9e33a10202
[R] Replace xgboost() with xgb.train() in most tests and examples (#9941) 2024-01-02 21:20:01 +08:00
david-cortes
32cbab1cc0
[R] put 'verbose' in correct argument (#9942) 2024-01-02 15:20:51 +08:00
david-cortes
73713de601
[R] rename Quality -> Gain (#9938) 2023-12-31 13:01:00 +08:00
david-cortes
8b9c98b65b
[R] Clearer function signatures for S3 methods (#9937) 2023-12-31 10:45:04 +08:00
david-cortes
e40c4260ed
[R] Enable 'dot' dump format (#9930) 2023-12-30 13:28:27 +08:00
Philip Hyunsu Cho
ef8bdaa047
[CI] Update machine images (#9932) 2023-12-29 11:15:38 -08:00
Jiaming Yuan
a7226c0222
Fix feature names with special characters. (#9923) 2023-12-28 22:45:13 +08:00
david-cortes
a197899161
[R] avoid leaking exception objects (#9916) 2023-12-26 20:29:55 +08:00
Michael Mayer
52620fdb34
[R] Improve more docstrings (#9919) 2023-12-26 17:30:13 +08:00
Jiaming Yuan
6a5f6ba694
[CI] Add timeout for distributed GPU tests. (#9917) 2023-12-24 00:09:05 +08:00
Michael Mayer
b807f3e30c
[R] improve docstrings for "xgb.Booster.R" (#9906) 2023-12-21 10:01:30 +08:00
david-cortes
252e018275
correct name of function in header (#9905) 2023-12-20 10:52:00 +08:00
Jiaming Yuan
9d122293bc
[doc] Fix typo. [skip ci] (#9904) 2023-12-20 09:17:00 +08:00
david-cortes
ae32936ba2
[R] Catch C++ exceptions (#9903) 2023-12-19 10:45:03 +08:00
david-cortes
ff3d82c006
[R] Refactor field logic for dmatrix (#9901) 2023-12-18 20:31:01 +08:00
Jiaming Yuan
0edd600f3d
[doc] Brief introduction to base_score. (#9882) 2023-12-17 13:34:34 +08:00
david-cortes
db7f952ed6
update docs for parameters (#9900) 2023-12-16 12:19:22 +08:00
Dmitry Razdoburdin
2a6ab2547d
SYCL inference optimization (#9876)
---------

Co-authored-by: Dmitry Razdoburdin <>
2023-12-15 11:04:39 +08:00
Jiaming Yuan
1c6e031c75
[R] Fix clang warning. (#9874) 2023-12-15 01:30:43 +08:00
Jiaming Yuan
125bc812f8
[doc] Reference enable_categorical doc in sklearn. (#9884) 2023-12-14 23:29:19 +08:00
Jiaming Yuan
1aa8c8d9be
Support more scipy types. (#9881) 2023-12-14 18:28:37 +08:00
Hui Liu
2d7ffbdf3d merge latest changes 2023-12-13 21:06:28 -08:00
david-cortes
cd473c9da3
[R] enable multi-dimensional base_margin (#9885) 2023-12-14 09:16:53 +08:00
Philip Hyunsu Cho
936b22fdf3
[CI] Upload libxgboost4j.dylib (M1) to S3 bucket (#9886)
* [CI] Upload libxgboost4j.dylib (M1) to S3 bucket

* Fix typo
2023-12-13 15:25:51 -08:00
david-cortes
42173d7bc3
[doc] Clarify the effect of enable_categorical (#9877) 2023-12-13 08:39:41 +08:00
github-actions[bot]
d530d37707
[CI] Update RAPIDS to latest stable (#9857) 2023-12-12 22:58:03 +09:00
Dmitry Razdoburdin
43897b8296
Sycl implementation for objective functions (#9846)
---------

Co-authored-by: Dmitry Razdoburdin <>
2023-12-12 14:41:50 +08:00
david-cortes
ddab49a8be
[doc][R] Update arguments for ellipsis in predict (#9868) 2023-12-12 12:13:47 +08:00
Jiaming Yuan
faf0f2df10
Support dataframe data format in native XGBoost. (#9828)
- Implement a columnar adapter.
- Refactor Python pandas handling code to avoid converting into a single numpy array.
- Add support in R for transforming columns.
- Support R data.frame and factor type.
2023-12-12 09:56:31 +08:00
Jiaming Yuan
b3700bbb3f
Flexible find protobuf. (#9867) 2023-12-12 07:34:01 +08:00
david-cortes
562352101d
[R] Move all DMatrix fields to function arguments (#9862) 2023-12-10 02:45:28 +08:00
Jiaming Yuan
1094d6015d
[py] Use the first found native library. (#9860) 2023-12-08 17:23:16 +08:00
Jiaming Yuan
42de9206fc
Support multi-target, fit intercept for hinge. (#9850) 2023-12-08 05:50:41 +08:00
Jiaming Yuan
39c637ee19
Use array interface in Python prediction return. (#9855) 2023-12-08 03:42:14 +08:00
david-cortes
2c0fc97306
Remove note about multi-quantile being python-only (#9854) 2023-12-07 05:17:15 +08:00
david-cortes
9e9d41b95c
[R] Add note about serialization of DMatrix objects (#9853) 2023-12-07 03:11:15 +08:00
Jiaming Yuan
4bc1f3a388
[R] Bump requirement to 4.3.0. (#9847) 2023-12-07 00:12:45 +08:00
david-cortes
1de3f4135c
[R] Enable vector-valued parameters (#9849) 2023-12-06 20:32:20 +08:00
david-cortes
0716c64ef7
[R] Error out on multidimensional arrays (#9852) 2023-12-06 17:43:51 +08:00
david-cortes
62571b79eb
[R] Enable multi-output objectives (#9839) 2023-12-06 03:13:14 +08:00
david-cortes
9c56916fd7
[R] Very small performance tweaks (#9837) 2023-12-04 18:40:45 +08:00
Dmitry Razdoburdin
381f1d3dc9
Add support inference on SYCL devices (#9800)
---------

Co-authored-by: Dmitry Razdoburdin <>
Co-authored-by: Nikolay Petrov <nikolay.a.petrov@intel.com>
Co-authored-by: Alexandra <alexandra.epanchinzeva@intel.com>
2023-12-04 16:15:57 +08:00
david-cortes
7196c9d95e
[R] Fix memory safety issues (#9823) 2023-12-02 13:43:50 +08:00
Jiaming Yuan
e78b46046e
[CI] Update R version on Linux. (#9835) 2023-12-02 11:03:17 +08:00
Jiaming Yuan
2d8c67d6dc
[jvm-packages] Bump dependencies. (#9827)
- #9811
- #9814
- #9826
- #9830
- #9833
- #9832
- #9831
- #9834
2023-12-02 07:34:56 +08:00
david-cortes
95af5c074b
more usage of array interface, fix potential memory leaks of std::string (#9824) 2023-12-01 00:06:59 +08:00
david-cortes
37da66f865
[R] Use array interface for dense DMatrix creation (#9816)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-11-30 04:35:05 +08:00
Yuan (Terry) Tang
da3d55db5b
Update affiliation (#9822) 2023-11-30 03:27:05 +08:00
Jiaming Yuan
e2e089ce12
[jvm-packages] Bump rapids version. (#9820) 2023-11-29 15:51:07 +08:00
david-cortes
c0ef2f8dce
[R] Fix potential memory leaks in case of R allocation failures (#9817) 2023-11-29 13:14:17 +08:00
Jiaming Yuan
59684b2db6
[doc] Draft for language binding consistency. [skip ci] (#9755) 2023-11-29 13:13:40 +08:00
david-cortes
bfa1252fca
[R][doc] Update docs about fitting from CSR (#9818) 2023-11-29 05:42:41 +08:00
Jiaming Yuan
34a2616696
[jvm-packages] Update dependencies. (#9809)
- scalatest: 3.2.17
- maven-checkstyle-plugin: 3.3.1
- maven-surefire-plugin: 3.2.2
- maven-project-info-reports-plugin: 3.5.0
- maven-javadoc-plugin: 3.6.2
2023-11-27 20:09:25 +08:00
Jiaming Yuan
e9f149481e
[sklearn] Fix loading model attributes. (#9808) 2023-11-27 17:19:01 +08:00
Jiaming Yuan
3f4e22015a
Mark NCCL python test optional. (#9804)
Skip the tests if XGBoost is not compiled with dlopen.
2023-11-25 11:25:47 +08:00
Jiaming Yuan
8fe1a2213c
Cleanup code for distributed training. (#9805)
* Cleanup code for distributed training.

- Merge `GetNcclResult` into nccl stub.
- Split up utilities from the main dask module.
- Let Channel return `Result` to accommodate nccl channel.
- Remove old `use_label_encoder` parameter.
2023-11-25 09:10:56 +08:00
Jiaming Yuan
e9260de3f3
[breaking] Remove dense libsvm parser plugin. (#9799) 2023-11-23 00:12:39 +08:00
Jiaming Yuan
1877cb8e83
Change default metric for gamma regression to deviance. (#9757)
* Change default metric for gamma regression to deviance.

- Cleanup the gamma implementation.
- Use deviance instead since the objective is derived from deviance.
2023-11-22 21:17:48 +08:00
Jiaming Yuan
0715ab3c10
Use dlopen to load NCCL. (#9796)
This PR adds optional support for loading nccl with `dlopen` as an alternative to compile-time linking. This addresses the size-bloat issue with the PyPI binary release.
- Add CMake option to load `nccl` at runtime.
- Add an NCCL stub.

After this, `nccl` will be fetched from PyPI when using pip to install XGBoost, either by a user or by `pyproject.toml`. Others who want to link the nccl at compile time can continue to do so without any change.

At the moment, this is Linux only since we only support MNMG on Linux.
2023-11-22 19:27:31 +08:00
Jiaming Yuan
fedd9674c8
Implement column sampler in CUDA. (#9785)
- CUDA implementation.
- Extract the broadcasting logic, we will need the context parameter after revamping the collective implementation.
- Some changes to the event loop for fixing a deadlock in CI.
- Move argsort into algorithms.cuh, add support for cuda stream.
2023-11-17 04:29:08 +08:00
Bobby Wang
178cfe70a8
[pyspark][doc] Test and doc for stage-level scheduling. (#9786) 2023-11-16 18:15:59 +08:00
Jiaming Yuan
ada377c57e
[coll] Reduce the scope of lock in the event loop. (#9784) 2023-11-15 14:16:19 +08:00
Bobby Wang
36a552ac98
[jvm-packages] support stage-level scheduling (#9775) 2023-11-14 08:59:45 +08:00
Ken Geis
162da7b52b
fix typo in Parameters doc (#9781) 2023-11-13 03:09:06 +08:00
Jiaming Yuan
6fd4a30667
[coll] Increase timeout for allgather test. (#9777) 2023-11-09 05:26:40 +08:00
Jiaming Yuan
44099f585d
[coll] Add C API for the tracker. (#9773) 2023-11-08 18:17:14 +08:00
Jiaming Yuan
06bdc15e9b
[coll] Pass context to various functions. (#9772)
* [coll] Pass context to various functions.

In the future, the `Context` object will be required for collective operations; this PR
passes the context object to some of the required functions to prepare for swapping out the
implementation.
2023-11-08 09:54:05 +08:00
Jiaming Yuan
6c0a190f6d
[coll] Add comm group. (#9759)
- Implement `CommGroup` for double dispatching.
- Small cleanup to tracker for handling abort.
2023-11-07 11:12:31 +08:00
Jiaming Yuan
c3a0622b49
Fix using categorical data with the score function of ranker. (#9753) 2023-11-07 07:29:11 +08:00
Jiaming Yuan
82828621d0
[doc] Add doc for linters and simplify c++ lint script. (#9750) 2023-11-07 05:03:30 +08:00
Jiaming Yuan
98238d63fa
[dask] Change document to avoid using default import. (#9742)
This aligns dask with pyspark; users need to import explicitly:

```python
from xgboost.dask import DaskXGBClassifier
from xgboost import dask as dxgb
```

In future releases, we might stop using the default import and remove the lazy loader.
2023-11-07 02:44:39 +08:00
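
A short usage sketch with the explicit import; the local cluster and random data are purely illustrative:

```python
from dask import array as da
from dask.distributed import Client
from xgboost.dask import DaskXGBClassifier

client = Client()  # local cluster, for illustration only
X = da.random.random((1000, 10), chunks=(100, 10))
y = (da.random.random(1000, chunks=100) > 0.5).astype(int)

clf = DaskXGBClassifier(n_estimators=10)
clf.client = client
clf.fit(X, y)
```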
Bobby Wang
093b675838
[Doc] update the tutorial of xgboost4j-spark-gpu (#9752)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-11-03 18:19:28 +08:00
Hui Liu
c81731308c fix RCCL 2023-11-02 16:39:24 -07:00
Hui Liu
51efb7442e support HIP for half in coll 2023-11-02 10:53:12 -07:00
Hui Liu
3af5dfd546 Merge branch 'master' 2023-11-02 09:05:31 -07:00
david-cortes
be20df8c23
[Python] Accept numpy generators as random_state (#9743)
* accept numpy generators for random_state

* make linter happy

* fix tests
2023-11-01 16:20:44 -07:00
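
A sketch of the accepted input, using scikit-learn toy data for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=200, random_state=0)
# A numpy Generator is now accepted alongside plain ints and RandomState objects.
clf = XGBClassifier(random_state=np.random.default_rng(7), subsample=0.8)
clf.fit(X, y)
```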
Jiaming Yuan
4da4e092b5
[coll] Improvements and fixes for tracker and allreduce. (#9745)
- Allow the tracker to wait.
- Fix allreduce type cast
- Return args from the federated tracker.
2023-11-02 04:06:46 +08:00
Philip Hyunsu Cho
0ff8572737
[CI] Build libxgboost4j.dylib with CMAKE_OSX_DEPLOYMENT_TARGET (#9749) 2023-11-01 11:20:28 -07:00
Philip Hyunsu Cho
1b9ed4a4a1
[CI] Improve CI for Mac M1 (#9748)
* [CI] Improve CI for Mac M1

* Add -v flag

* Disable OpenMP in libxgboost4j.dylib

* Target MacOS 10.15+ to use C++17
2023-11-01 10:03:56 -07:00
Hui Liu
129bb76941 enable federated 2023-10-31 16:31:56 -07:00
Hui Liu
123af45327 Merge branch 'master' 2023-10-31 15:59:31 -07:00
david-cortes
d3f0646779
[R] Avoid modifying importance dt in-place, fix aggregation (#9740) 2023-11-01 05:10:59 +08:00
Jiaming Yuan
bc995a4865
[coll] Add federated coll. (#9738)
- Define a new data type, the proto file is copied for now.
- Merge client and communicator into `FederatedColl`.
- Define CUDA variant.
- Migrate tests for CPU, add tests for CUDA.
2023-11-01 04:06:46 +08:00
Hui Liu
6ac806fefd Merge branch 'master' 2023-10-31 09:05:56 -07:00
Philip Hyunsu Cho
6b98305db4
[CI] Enable gmock in gtest (#9737) 2023-10-31 20:09:35 +08:00
Hui Liu
8fab17ae8f rm hip.h files 2023-10-30 21:20:28 -07:00
Hui Liu
9b7aa1a7cd unify cuda to hip 2023-10-30 17:12:06 -07:00
Hui Liu
4eb371b3f0 unify cuda to hip 2023-10-30 17:10:06 -07:00
Hui Liu
6df27eadc9 rm hip_category from source 2023-10-30 16:34:49 -07:00
Hui Liu
02f5464fa6 enable coll and comm 2023-10-30 15:15:05 -07:00
Hui Liu
b6b5218245 enable RCCL 2023-10-30 14:05:04 -07:00
Hui Liu
d7f1235b7d Merge branch 'master' into sync-condition-2023Oct11 2023-10-30 13:19:33 -07:00
Hui Liu
1bedd76e94 rm unnecessary code 2023-10-30 13:14:45 -07:00
Hui Liu
40dc263602 enable ROCm for jvm and R 2023-10-30 12:52:44 -07:00
Jiaming Yuan
80390e6cb6
[coll] Federated comm. (#9732) 2023-10-31 02:39:55 +08:00
Bobby Wang
fa65cf6646
[doc] How to configure regarding to stage-level (#9727)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-10-31 01:28:34 +08:00
omahs
2cfc90e8db
Fix typos (#9731) 2023-10-30 16:52:12 +08:00
Jiaming Yuan
6755179e77
[coll] Add nccl. (#9726) 2023-10-28 16:33:58 +08:00
James Lamb
0c621094b3
[CI] enforce cmakelint checks (#9728) 2023-10-28 05:38:04 +08:00
Hui Liu
32ae49ab92 temp hack for multi GPUs 2023-10-27 13:00:49 -07:00
Hui Liu
6bbca9a8b7 restore learner 2023-10-27 11:15:06 -07:00
Hui Liu
6762230d9a namespace to reduce code 2023-10-27 10:51:32 -07:00
Hui Liu
4302200a33 Merge branch 'master' into sync-condition-2023Oct11 2023-10-27 10:09:37 -07:00
Hui Liu
4a4b528d54 add namespace aliases to reduce code 2023-10-27 09:11:55 -07:00
Dmitry Razdoburdin
9c22df9342
Fix mingw hanging on regex in context (#9729)
---------

Co-authored-by: Dmitry Razdoburdin <>
2023-10-27 20:01:35 +08:00
Bobby Wang
1323531323
[pyspark] unify the way for determining whether runs on the GPU. (#9724) 2023-10-27 11:21:30 +08:00
Hui Liu
e00131c465 Merge branch 'master' into sync-condition-2023Oct11 2023-10-26 11:35:48 -07:00
Dmitry Razdoburdin
f41a08fda8
Add 'sycl' devices to the context (#9691)
Co-authored-by: Dmitry Razdoburdin <>
2023-10-26 22:17:56 +08:00
Philip Hyunsu Cho
d4d7097acc
Update JVM script (#9714) 2023-10-24 17:48:38 -07:00
Philip Hyunsu Cho
01d59ded00
Fix libpath logic for Windows (#9712)
* Fix libpath logic for Windows (#9687)

* Use sys.base_prefix instead of sys.prefix (#9711)

* Use sys.base_prefix instead of sys.prefix

* Update libpath.py too
2023-10-24 17:25:28 -07:00
Hui Liu
cd28b9f997 add back per-thread 2023-10-24 15:17:19 -07:00
Hui Liu
3752b06550 Merge branch 'master' into sync-condition-2023Oct11 2023-10-24 10:46:38 -07:00
Jiaming Yuan
7a02facc9d
Serialize expand entry for allgather. (#9702) 2023-10-24 14:33:28 +08:00
Hui Liu
79319dfd4d format 2023-10-23 22:29:48 -07:00
Hui Liu
558352afc9 fix stream 2023-10-23 21:51:20 -07:00
Hui Liu
24be98f61f Merge branch 'master' into sync-condition-2023Oct11 2023-10-23 21:29:54 -07:00
Hyunsu Cho
ee8b29c843
[CI] Hotfix for JVM test on GH Action 2023-10-23 20:02:33 -07:00
Philip Hyunsu Cho
87621322ed
[CI] Build libxgboost4j.dylib for Intel Mac (#9704)
* [CI] Build libxgboost4j.dylib for Intel Mac

* Use correct runner name

* Fix shell command

* Add back branch condition
2023-10-23 19:15:24 -07:00
Jiaming Yuan
3ca06ac51e
[doc] Mention data consistency for categorical features. (#9678) 2023-10-24 10:11:33 +08:00
Hui Liu
65012b356c rm some hip 2023-10-23 17:13:02 -07:00
Hui Liu
f9f39b092b add HIP LIB PATH 2023-10-23 16:52:33 -07:00
Hui Liu
643b334919 add nccl_device_communicator.hip 2023-10-23 16:43:03 -07:00
Hui Liu
6ba66463b6 fix uuid and Clear/SetValid 2023-10-23 16:32:26 -07:00
Hui Liu
55994b1ac7 enable ROCm on latest XGBoost 2023-10-23 11:15:04 -07:00
Hui Liu
15421e40d9 enable ROCm on latest XGBoost 2023-10-23 11:07:08 -07:00
Philip Hyunsu Cho
5e6cb63a56
[CI] Set up CI for Mac M1 (#9699) 2023-10-22 23:33:19 -07:00
Philip Hyunsu Cho
791de7789b
[jvm-packages] Remove hard dependency on libjvm (#9698) 2023-10-21 23:14:38 -07:00
Jiaming Yuan
b771f58453
[coll] Define interface for bridging. (#9695)
* Define the basic interface that will shared by nccl, federated and native.
2023-10-20 16:20:48 +08:00
Rong Ou
6fbe6248f4
More in-memory input support for column split (#9685) 2023-10-20 16:02:36 +08:00
Chuck Atkins
83cdf14b2c
CMake LTO and CUDA arch (#9677) 2023-10-20 13:01:37 +08:00
Your Name
fb19e15ce3 rm setup.py 2023-10-19 11:59:19 -07:00
Philip Hyunsu Cho
3b86260b50
Fix build for AppleClang 11 (#9684) (#9693) 2023-10-18 12:27:21 -07:00
Jiaming Yuan
5d1bcde719
[coll] allgatherv. (#9688) 2023-10-19 03:13:50 +08:00
Dmitry Razdoburdin
ea9f09716b
Reorder if-else statements to allow using of cpu branches for sycl-devices (#9682) 2023-10-18 10:55:33 +08:00
Jiaming Yuan
4c0e4422d0
[coll] allgather. (#9681) 2023-10-18 10:22:18 +08:00
Your Name
ffbbc9c968 add cuda to hip wrapper 2023-10-17 12:42:37 -07:00
Jiaming Yuan
48ac9b6cbe
[coll] Allreduce. (#9679) 2023-10-17 13:57:14 +08:00
Rong Ou
da6803b75b
Support column-wise data split with in-memory inputs (#9628)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-10-17 12:16:39 +08:00
Bobby Wang
4d1607eefd
[pyspark] Support stage-level scheduling for training (#9519) 2023-10-17 10:35:39 +08:00
Thomas Lynn
83191f0839
Update learning_to_rank.rst; Correct qid sort in snippet (#9673) 2023-10-14 16:38:58 +08:00
Philip Hyunsu Cho
eee7cdf07e
Fix build for GCC 8.x (#9670) (#9675) 2023-10-13 22:07:49 -07:00
James Lamb
eb562d3829
[CI] address cmakelint warnings about whitespace (#9674) 2023-10-14 12:46:07 +08:00
Jiaming Yuan
53049b16b8
[coll] Broadcast. (#9659) 2023-10-14 09:34:37 +08:00
Jiaming Yuan
81a059864a
Skip check for pollhup. (#9661) 2023-10-13 14:35:14 +08:00
Philip Hyunsu Cho
a5e07a01f8
[CI] Pull CentOS 7 images from NGC (#9666) 2023-10-13 12:11:54 +08:00
Jiaming Yuan
cd8760cba3
[doc] Update document about running tests. [skip ci] (#9658) 2023-10-13 09:07:01 +08:00
Your Name
ea19555474 temp merge, disable 1 line, SetValid 2023-10-12 16:16:44 -07:00
Rong Ou
e164d51c43
Improve allgather functions (#9649) 2023-10-12 23:31:43 +08:00
github-actions[bot]
d1dee4ad99
[CI] Update RAPIDS to latest stable (#9654)
* [CI] Update RAPIDS to latest stable

* Remove slashes from Docker tag

---------

Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2023-10-11 23:26:09 -07:00
Jiaming Yuan
946ae1c440
[coll] Implement a new tracker and a communicator. (#9650)
* [coll] Implement a new tracker and a communicator.

The new tracker and communicators communicate through JSON documents. In addition,
communicators are now aware of each other.
2023-10-12 12:49:16 +08:00
James Lamb
2e42f33fc1
[CI] standardize else() and endfunction() calls in CMake scripts (#9653) 2023-10-12 11:14:19 +08:00
Jiaming Yuan
084d89216c
Add support for cgroupv2. (#9651) 2023-10-12 09:36:36 +08:00
James Lamb
51e32e4905
[CI] add cmakelint to C++ linting task (#9641)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-10-11 16:04:10 +08:00
Rong Ou
0ecb4de963
[breaking] Change DMatrix construction to be distributed (#9623)
* Change column-split DMatrix construction to be distributed

* remove splitting code for row split
2023-10-10 23:35:57 +08:00
Jiaming Yuan
b14e535e78
[Coll] Implement get host address in libxgboost. (#9644)
- Port `xgboost.tracker.get_host_ip` in C++.
2023-10-10 10:01:14 +08:00
Jiaming Yuan
680d53db43
Extract JSON utils. (#9645) 2023-10-10 07:15:14 +08:00
Jiaming Yuan
4e5a7729c3
Fix lint errors. (#9634) 2023-10-09 19:04:31 +08:00
James Lamb
db8d117f7e
[CI] standardize endif() calls in CMake scripts (#9637) 2023-10-08 11:45:20 +08:00
James Lamb
799f8485e2
[R] [CI] enforce lintr::function_left_parentheses_linter check (#9631) 2023-10-08 09:42:09 +08:00
Jiaming Yuan
4d7a187cb0
Remove XGBoosterGetModelRaw. (#9617)
Deprecated in 1.6.
2023-09-29 02:29:33 +08:00
Jiaming Yuan
d95be1c38d
Small cleanup to jvm iter adapter. (#9616)
- Remove header dependency on c_api
- Remove remaining code for arrow.
2023-09-29 00:39:07 +08:00
Jiaming Yuan
417c3ba47e
Workaround Apple clang issue. (#9615) 2023-09-28 22:51:47 +08:00
Jordan Fréry
295f13ef09
Add privacy preserving tutorial to index.rst (#9614) 2023-09-28 18:53:29 +08:00
Jiaming Yuan
60526100e3
Support arrow through pandas ext types. (#9612)
- Use pandas extension type for pyarrow support.
- Additional support for QDM.
- Additional support for inplace_predict.
2023-09-28 17:00:16 +08:00
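
A minimal sketch, assuming pandas >= 2.0 with pyarrow installed; `convert_dtypes` backs the frame with Arrow extension dtypes:

```python
import numpy as np
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({"f0": [1, 2, 3, 4], "f1": [0.1, 0.2, 0.3, 0.4]})
df = df.convert_dtypes(dtype_backend="pyarrow")  # Arrow-backed extension dtypes
dm = xgb.DMatrix(df, label=np.array([0, 1, 0, 1]))
```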
Rong Ou
3f2093fb81
Test monotone constraints with column split (#9613) 2023-09-28 04:54:53 +08:00
Jordan Fréry
7cafd41a58
[doc] Add privacy preserving tutorial (#9610) 2023-09-28 02:50:01 +08:00
Rong Ou
d6d14d0fb9
Integration tests for interaction constraints with column-wise data split (#9611) 2023-09-27 08:27:43 +08:00
Jiaming Yuan
c75a3bc0a9
[breaking] [jvm-packages] Remove rabit check point. (#9599)
- Add `numBoostedRound` to jvm packages
- Remove rabit checkpoint version.
- Change the starting version of training continuation in JVM [breaking].
- Redefine the checkpoint version policy in jvm package. [breaking]
- Rename the Python check point callback parameter. [breaking]
- Unifies the checkpoint policy between Python and JVM.
2023-09-26 18:06:34 +08:00
Benoit Chevallier-Mames
7901a299b2
[doc] Add privacy-preserving Concrete ML links (#9598) (#9604) 2023-09-26 15:33:11 +08:00
Rong Ou
290b17ffda
Test column sampler with column-wise data split (#9609) 2023-09-26 13:31:23 +08:00
Jiaming Yuan
1167e6c554
Limit the number of threads for external memory. (#9605) 2023-09-24 00:30:28 +08:00
Jiaming Yuan
cac2cd2e94
[R] Set number of threads in demos and tests. (#9591)
- Restrict the number of threads in IO.
- Specify the number of threads in demos and tests.
- Add helper scripts for checks.
2023-09-23 21:44:03 +08:00
Rong Ou
def77870f3
Test categorical features with column-split gpu quantile (#9595) 2023-09-23 09:55:09 +08:00
Jiaming Yuan
a90d204942
Use array interface for testing numpy arrays. (#9602) 2023-09-23 03:13:48 +08:00
Jiaming Yuan
bbf5b9ee57
[dask] Move dask module into directory. (#9597) 2023-09-23 01:28:18 +08:00
Jiaming Yuan
0080c97075
Workaround poll on macos. (#9596) 2023-09-21 01:09:36 +08:00
Jiaming Yuan
8c676c889d
Remove internal use of gpu_id. (#9568) 2023-09-20 23:29:51 +08:00
Jiaming Yuan
38ac52dd87
Build a simple event loop for collective. (#9593) 2023-09-20 02:09:07 +08:00
Jiaming Yuan
259d80c0cf
News for 2.0. [skip ci] (#9484)
---------

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2023-09-19 05:46:02 +08:00
Jiaming Yuan
0df1da2db4
fix rel script with relative path and end note version. [skip ci] (#9572) 2023-09-18 17:58:48 +08:00
James Lamb
730bc1f688
[R] remove unused headers (#9546) 2023-09-14 17:11:26 +08:00
Rong Ou
d8c3cc92ae
More support for column split in gpu predictor (#9562) 2023-09-14 08:13:13 +08:00
Rong Ou
a343ae3b34
fix duplicate gpu check (#9578) 2023-09-14 05:53:46 +08:00
Jiaming Yuan
300f9ace06
Fix default metric configuration. (#9575) 2023-09-13 13:05:47 -07:00
Jiaming Yuan
b438d684d2
Utilities and cleanups for socket. (#9576)
- Use c++-17 nodiscard and nested ns.
- Add bind method to socket.
- Remove rabit parameters.
2023-09-14 01:41:42 +08:00
Jiaming Yuan
5abe50ff8c
[R] Fix method name. (#9577) 2023-09-13 23:19:29 +08:00
Ikko Eltociear Ashimine
f90d034a86
[doc] Fix typo in python_packaging.rst (#9573) 2023-09-12 20:53:07 +08:00
Jon Yoquinto
d05ea589fb
Allow JVM-Package to access inplace predict method (#9167)
---------

Co-authored-by: Stephan T. Lavavej <stl@nuwen.net>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
Co-authored-by: Joe <25804777+ByteSizedJoe@users.noreply.github.com>
2023-09-12 07:29:51 +08:00
Jiaming Yuan
9027686cac
Support pandas 2.1.0. (#9557) 2023-09-11 17:44:51 +08:00
Rong Ou
66a0832778
Add tests for gpu_approx (#9553) 2023-09-07 17:21:58 +08:00
Bobby Wang
6c791b5b47
[pyspark] support gpu transform (#9542)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-09-07 12:15:50 +08:00
Rong Ou
0f35493b65
Add GPU support to NVFlare demo (#9552) 2023-09-06 17:03:59 +08:00
Hui Liu
85d3017ca5
Merge pull request #1 from ROCmSoftwarePlatform/create-pull-request/update-rapids
[CI] Update RAPIDS to latest stable
2023-09-05 13:10:11 -07:00
Jiaming Yuan
3b9e5909fb
[CI] bump setup-r action version. (#9544) 2023-09-05 16:14:45 +08:00
Jiaming Yuan
adea842c83
Fix inplace predict with fallback when base margin is used. (#9536)
- Copy meta info from proxy DMatrix.
- Use `std::call_once` to emit fewer warnings.
2023-09-05 01:04:24 +08:00
James Lamb
d159ee8547
[R] reformat build scripts (#9540) 2023-09-04 17:40:46 +08:00
Bobby Wang
419e052314
[pyspark] rework transform to reuse same code (#9292) 2023-09-04 15:57:16 +08:00
James Lamb
98e45f7b54
add HTML files to gitignore (#9541) 2023-09-04 14:44:58 +08:00
Rong Ou
c928dd4ff5
Support vertical federated learning with gpu_hist (#9539) 2023-09-03 11:37:11 +08:00
Rong Ou
9bab06cbca
Support column split in gpu hist updater (#9384) 2023-08-31 18:09:35 +08:00
Jiaming Yuan
ccfc90e4c6
[rabit] Improved connection handling. (#9531)
- Enable timeout.
- Report connection error from the system.
- Handle retry for both tracker connection and peer connection.
2023-08-30 13:00:04 +08:00
dependabot[bot]
2462e22cd4
Bump com.nvidia:rapids-4-spark_2.12 in /jvm-packages (#9517)
Bumps com.nvidia:rapids-4-spark_2.12 from 23.08.0 to 23.08.1.

---
updated-dependencies:
- dependency-name: com.nvidia:rapids-4-spark_2.12
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-29 15:10:16 +08:00
Jiaming Yuan
ddf2e68821
Use the new DeviceOrd in the linalg module. (#9527) 2023-08-29 13:37:29 +08:00
Jiaming Yuan
942b957eef
Fix GPU categorical split memory allocation. (#9529) 2023-08-29 10:06:03 +08:00
Jiaming Yuan
be6a552956
[R] Support multi-class custom objective. (#9526) 2023-08-29 08:27:13 +08:00
Jiaming Yuan
90ef250ea1
[rabit] Drop support for MPI backend. (#9525)
- Add checks in cmake.
- Remove mpi related code.
2023-08-28 21:01:22 +08:00
Jiaming Yuan
c3574d932f
[R] Fix integer inputs with NA. (#9522) 2023-08-28 18:36:11 +08:00
Jiaming Yuan
1b87a1d8f8
[rabit] Small cleanup to tracker initialization. (#9524)
- Remove recover related code.
- Clean startup, no need to consider previously connected nodes.
2023-08-27 05:10:59 +08:00
Jiaming Yuan
209335b18c
Remove the deprecated Python rabit module. (#9523) 2023-08-27 03:37:05 +08:00
Jiaming Yuan
aa86bd5207
[dask] Filter models on worker. (#9518) 2023-08-25 20:23:47 +08:00
Jiaming Yuan
972730cde0
Use matrix for gradient. (#9508)
- Use the `linalg::Matrix` for storing gradients.
- New API for the custom objective.
- Custom objective for multi-class/multi-target is now required to return the correct shape.
- Custom objective for Python can accept arrays with any strides (row-major or column-major); see the sketch below.
2023-08-24 05:29:52 +08:00
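
A sketch of the new shape contract for a multi-target custom objective; data and parameter choices are illustrative:

```python
import numpy as np
import xgboost as xgb

def multi_squared_error(predt: np.ndarray, dtrain: xgb.DMatrix):
    # predt arrives with shape (n_samples, n_targets); the gradient and
    # hessian must be returned with that same shape.
    y = dtrain.get_label().reshape(predt.shape)
    grad = predt - y
    hess = np.ones_like(predt)
    return grad, hess

X = np.random.rand(64, 8)
y = np.random.rand(64, 2)  # two targets
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train(
    {"tree_method": "hist", "multi_strategy": "multi_output_tree"},
    dtrain, num_boost_round=4, obj=multi_squared_error,
)
```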
Rong Ou
6103dca0bb
Support column split in GPU evaluate splits (#9511) 2023-08-23 16:33:43 +08:00
Jiaming Yuan
8c10af45a0
Delay the check for vector leaf. (#9509) 2023-08-23 01:53:40 +08:00
Jiaming Yuan
3c09399f29
Fix device dispatch for linear updater. (#9507) 2023-08-23 00:17:35 +08:00
Jiaming Yuan
302bbdc958
mitigate flaky test with distributed l1 error. (#9499) 2023-08-22 13:46:35 +08:00
Jiaming Yuan
044fea1281
Drop support for loading remote files. (#9504) 2023-08-21 23:34:05 +08:00
dependabot[bot]
d779a11af9
Bump scala-collection-compat_2.12 from 2.10.0 to 2.11.0 in /jvm-packages (#9311)
Bumps [scala-collection-compat_2.12](https://github.com/scala/scala-collection-compat) from 2.10.0 to 2.11.0.
- [Release notes](https://github.com/scala/scala-collection-compat/releases)
- [Commits](https://github.com/scala/scala-collection-compat/compare/v2.10.0...v2.11.0)

---
updated-dependencies:
- dependency-name: org.scala-lang.modules:scala-collection-compat_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-21 10:27:35 +08:00
Jiaming Yuan
e6cf7a1278
Deprecate the command line interface. (#9485)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-08-21 06:47:48 +08:00
Jiaming Yuan
38a3e1b858
Fix release script for RC [skip ci] (#9505) 2023-08-21 05:24:35 +08:00
dependabot[bot]
74d5056c61
Bump spark.version.gpu in /jvm-packages/xgboost4j-spark-gpu (#9328)
Bumps `spark.version.gpu` from 3.3.2 to 3.4.1.

Updates `spark-core_2.12` from 3.3.2 to 3.4.1

Updates `spark-sql_2.12` from 3.3.2 to 3.4.1

Updates `spark-mllib_2.12` from 3.3.2 to 3.4.1

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-core_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.spark:spark-sql_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-20 04:20:07 +08:00
Jiaming Yuan
db87d481bc
[R] Differentiate dev version with release version. (#9503)
Use 2.1.0.0 as the development version; we will change it to 2.1.0.1 during release.
2023-08-20 02:58:58 +08:00
dependabot[bot]
5358e1ebf0
Bump org.apache.commons:commons-lang3 in /jvm-packages (#9489)
Bumps org.apache.commons:commons-lang3 from 3.12.0 to 3.13.0.

---
updated-dependencies:
- dependency-name: org.apache.commons:commons-lang3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-20 00:37:15 +08:00
dependabot[bot]
d016309a15
Bump spark.version from 3.4.0 to 3.4.1 in /jvm-packages/xgboost4j-spark (#9326)
Bumps `spark.version` from 3.4.0 to 3.4.1.

Updates `spark-core_2.12` from 3.4.0 to 3.4.1

Updates `spark-sql_2.12` from 3.4.0 to 3.4.1

Updates `spark-mllib_2.12` from 3.4.0 to 3.4.1

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-core_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-sql_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-19 18:14:35 +08:00
Jiaming Yuan
7f29a238e6
Return base score as intercept. (#9486) 2023-08-19 12:28:02 +08:00
dependabot[bot]
0bb87b5b35
Bump hadoop.version from 3.3.5 to 3.3.6 in /jvm-packages (#9331)
Bumps `hadoop.version` from 3.3.5 to 3.3.6.

Updates `hadoop-hdfs` from 3.3.5 to 3.3.6

Updates `hadoop-common` from 3.3.5 to 3.3.6

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-hdfs
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-18 20:59:04 +08:00
Thomas Zeger
b74802dea9
Fix safe_xgboost macro on c++ (#9501) 2023-08-18 04:36:06 +08:00
Jiaming Yuan
58530b1bc4
Bump version to 2.1. (#9498) 2023-08-18 01:04:04 +08:00
Bobby Wang
68be454cfa
[pyspark] hotfix for GPU setup validation (#9495)
* [pyspark] fix a bug of validating gpu configuration

---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-08-17 16:01:39 +08:00
Jiaming Yuan
5188e27513
Fix version parsing with rc release. (#9493) 2023-08-16 22:44:58 +08:00
Jiaming Yuan
f380c10a93
Use hint for find nccl. (#9490) 2023-08-16 16:08:41 +08:00
Sean Yang
12fe2fc06c
Fix federated learning demos and tests (#9488) 2023-08-16 15:25:05 +08:00
Jiaming Yuan
b2e93d2742
[doc] Quick note for the device parameter. [skip ci] (#9483) 2023-08-16 13:35:55 +08:00
Jiaming Yuan
c061e3ae50
[jvm-packages] Bump rapids version. (#9482) 2023-08-15 16:26:42 -07:00
James Lamb
b82e78c169
[R] remove commented-out code (#9481) 2023-08-15 13:44:08 +08:00
Boris
8463107013
Updated versions. Reorganised dependencies. (#9479) 2023-08-14 14:28:28 -07:00
Jiaming Yuan
19b59938b7
Convert input to str for hypothesis note. (#9480) 2023-08-15 02:27:58 +08:00
James Lamb
e3f624d8e7
[R] remove more uses of default values in internal functions (#9476) 2023-08-14 22:18:33 +08:00
James Lamb
2c84daeca7
[R] [doc] remove documentation index entries for internal functions (#9477) 2023-08-14 22:18:02 +08:00
Bobby Wang
344f90b67b
[jvm-packages] throw exception when tree_method=approx and device=cuda (#9478)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-08-14 17:52:14 +08:00
Jiaming Yuan
05d7000096
Handle special characters in JSON model dump. (#9474) 2023-08-14 15:49:00 +08:00
github-actions[bot]
f03463c45b
[CI] Update RAPIDS to latest stable (#9464)
* [CI] Update RAPIDS to latest stable

* [CI] Use CMake 3.26.4

---------

Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2023-08-13 18:54:37 -07:00
Jiaming Yuan
fd4335d0bf
[doc] Document the current status of some features. (#9469) 2023-08-13 23:42:27 +08:00
Jiaming Yuan
801116c307
Test scikit-learn model IO with gblinear. (#9459) 2023-08-13 23:41:49 +08:00
Jiaming Yuan
bb56183396
Normalize file system path. (#9463) 2023-08-11 21:26:46 +08:00
Jiaming Yuan
bdc1a3c178
Fix pyspark parameter. (#9460)
- Don't pass the `use_gpu` parameter to the learner.
- Fix GPU approx with PySpark.
2023-08-11 19:07:50 +08:00
James Lamb
428f6cbbe2
[R] remove default values in internal booster manipulation functions (#9461) 2023-08-11 15:07:18 +08:00
amdsc21
5929890174 [CI] Update RAPIDS to latest stable 2023-08-10 20:02:16 +00:00
ShaneConneely
d638535581
Update README.md (#9462) 2023-08-11 04:02:04 +08:00
James Lamb
44bd2981b2
[R] remove default values in internal utility functions (#9457) 2023-08-10 21:40:59 +08:00
James Lamb
9dbb71490c
[Doc] fix typos in documentation (#9458) 2023-08-10 19:26:36 +08:00
James Lamb
4359356d46
[R] [CI] use lintr 3.1.0 (#9456) 2023-08-10 17:49:16 +08:00
Jiaming Yuan
1caa93221a
Use realloc for histogram cache and expose the cache limit. (#9455) 2023-08-10 14:05:27 +08:00
Jiaming Yuan
a57371ef7c
Fix links in R doc. (#9450) 2023-08-10 02:38:14 +08:00
Jiaming Yuan
f05a23b41c
Use weakref instead of id for DataIter cache. (#9445)
- Fix a case where Python reuses the id of freed objects.
- Small optimization to the column matrix with QDM by using `realloc` instead of copying data.
2023-08-10 00:40:06 +08:00
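The `realloc` optimization in #9445 relies on the allocator extending a block in place when it can, which skips the allocate-copy-free cycle of taking a fresh buffer. A minimal sketch of the pattern, not XGBoost's actual column-matrix code:

```cpp
#include <cstdlib>

// Grow a raw buffer in place when possible; std::realloc copies only when
// the block cannot be extended. On failure the original block stays valid,
// so it must still be freed.
float* GrowBuffer(float* buf, std::size_t new_count) {
  void* grown = std::realloc(buf, new_count * sizeof(float));
  if (grown == nullptr) {
    std::free(buf);
    return nullptr;
  }
  return static_cast<float*>(grown);
}
```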
Bobby Wang
d495a180d8
[pyspark] add logs for training (#9449) 2023-08-09 18:32:23 +08:00
joshbrowning2358
7f854848d3
Update R docs based on deprecated parameters/behaviour (#9437) 2023-08-09 17:04:28 +08:00
Jiaming Yuan
f05294a6f2
Fix clang warnings. (#9447)
- Static function in a header (marked as unused due to translation-unit visibility).
- Implicit copy operator is deprecated.
- Unused lambda capture.
- Moving a temporary variable prevents copy elision.
2023-08-09 15:34:45 +08:00
Philip Hyunsu Cho
819098a48f
[R] Handle UTF-8 paths on Windows (#9448) 2023-08-08 21:29:19 -07:00
Jiaming Yuan
c1b2cff874
[CI] Check compiler warnings. (#9444) 2023-08-08 12:02:45 -07:00
Philip Hyunsu Cho
7ce090e775
Handle UTF-8 paths correctly on Windows platform (#9443)
* Fix round-trip serialization with UTF-8 paths

* Add compiler version check

* Add comment to C API functions

* Add Python tests

* [CI] Update macOS deployment target

* Use std::filesystem instead of dmlc::TemporaryDirectory
2023-08-07 23:27:25 -07:00
Jiaming Yuan
97fd5207dd
Use lambda function in ParallelFor2D. (#9441) 2023-08-08 14:04:46 +08:00
Jiaming Yuan
54029a59af
Bound the size of the histogram cache. (#9440)
- A new histogram collection with a limit in size.
- Unify histogram building logic between hist, multi-hist, and approx.
2023-08-08 03:21:26 +08:00
Philip Hyunsu Cho
5bd163aa25
Explicitly specify libcudart_static in CMake config (#9436) 2023-08-05 14:15:44 -07:00
Philip Hyunsu Cho
7fc57f3974
Remove Koffie Labs from Sponsors list (#9434) 2023-08-04 06:52:27 -07:00
Rong Ou
bde1ebc209
Switch back to the GPUIDX macro (#9438) 2023-08-04 15:14:31 +08:00
Philip Hyunsu Cho
1aabc690ec
[Doc] Clarify the output behavior of reg:logistic (#9435) 2023-08-03 20:42:07 -07:00
jinmfeng001
04c99683c3
Change training stage from ResultStage to ShuffleMapStage (#9423) 2023-08-03 23:40:04 +08:00
Jiaming Yuan
1332ff787f
Unify the code path between local and distributed training. (#9433)
This removes the need for a local histogram space during distributed training, which cuts the cache size by half.
2023-08-03 21:46:36 +08:00
Hendrik Makait
f958e32683
Raise if expected workers are not alive in xgboost.dask.train (#9421) 2023-08-03 20:14:07 +08:00
Jiaming Yuan
7129988847
Accept only keyword arguments in data iterator. (#9431) 2023-08-03 12:44:16 +08:00
Jiaming Yuan
e93a274823
Small cleanup for histogram routines. (#9427)
* Small cleanup for histogram routines.

- Extract hist train param from GPU hist.
- Make histogram const after construction.
- Unify parameter names.
2023-08-02 18:28:26 +08:00
Rong Ou
c2b85ab68a
Clean up MGPU C++ tests (#9430) 2023-08-02 14:31:18 +08:00
Jiaming Yuan
a9da2e244a
[CI] Update github actions. (#9428) 2023-08-01 23:03:53 +08:00
Jiaming Yuan
912e341d57
Initial GPU support for the approx tree method. (#9414) 2023-07-31 15:50:28 +08:00
Bobby Wang
8f0efb4ab3
[jvm-packages] automatically set the max/min direction for best score (#9404) 2023-07-27 11:09:55 +08:00
Rong Ou
7579905e18
Retry switching to per-thread default stream (#9416) 2023-07-26 07:09:12 +08:00
Nicholas Hilton
54579da4d7
[doc] Fix typo in prediction.rst (#9415)
Fix a typo in `pred_contribs` and `pred_interactions`.
2023-07-26 07:03:04 +08:00
Jiaming Yuan
3a9996173e
Revert "Switch to per-thread default stream (#9396)" (#9413)
This reverts commit f7f673b00c15458fb4dd74a2a0d2ba80369c5faf.
2023-07-24 12:03:28 -07:00
Bobby Wang
1b657a5513
[jvm-packages] set device to cuda when tree method is "gpu_hist" (#9412) 2023-07-24 18:32:25 +08:00
Jiaming Yuan
a196443a07
Implement sketching with Hessian on GPU. (#9399)
- Prepare for implementing approx on GPU.
- Unify the code path between weighted and uniform sketching on DMatrix.
2023-07-24 15:43:03 +08:00
Jiaming Yuan
851cba931e
Define best_iteration only if early stopping is used. (#9403)
* Define `best_iteration` only if early stopping is used.

This is the behavior specified by the documentation but not honored in the actual code.

- Don't set the attributes if there's no early stopping.
- Clean up the code for callbacks, and replace assertions with proper exceptions.
- Assign the attributes when early stopping `save_best` is used.
- Turn the attributes into Python properties.

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-07-24 12:43:35 +08:00
Jiaming Yuan
01e00efc53
[breaking] Remove support for single string feature info. (#9401)
- Input must be a sequence of strings.
- Improve validation error message.
2023-07-24 11:06:30 +08:00
Jiaming Yuan
275da176ba
Document for device ordinal. (#9398)
- Rewrite GPU demos. The notebook is converted to a script to avoid committing additional PNG plots.
- Add GPU demos into the sphinx gallery.
- Add RMM demos into the sphinx gallery.
- Test for firing threads with different device ordinals.
2023-07-22 15:26:29 +08:00
Jiaming Yuan
22b0a55a04
Remove hist builder class. (#9400)
* Remove hist builder class.

* Cleanup this stateless class.

* Add comment to thread block.
2023-07-22 10:43:12 +08:00
Jiaming Yuan
0de7c47495
Fix metric serialization. (#9405) 2023-07-22 08:39:21 +08:00
Jiaming Yuan
dbd5309b55
Fix warning message for device. (#9402) 2023-07-20 23:30:04 +08:00
Rong Ou
f7f673b00c
Switch to per-thread default stream (#9396) 2023-07-20 08:21:00 +08:00
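For reference, per-thread default streams are a standard CUDA build option rather than anything XGBoost-specific; a sketch of what the switch in #9396 means at the source level, assuming compilation with `nvcc --default-stream per-thread`:

```cpp
#include <cuda_runtime.h>

__global__ void Noop() {}

void LaunchFromWorkerThread() {
  // With per-thread default streams, this null-stream launch goes to the
  // calling host thread's own default stream instead of the single legacy
  // stream, so launches from different host threads no longer serialize.
  Noop<<<1, 1>>>();
  cudaStreamSynchronize(cudaStreamPerThread);  // sync only this thread's stream
}
```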
Jiaming Yuan
7a0ccfbb49
Add compute 90. (#9397) 2023-07-19 13:42:38 +08:00
Jiaming Yuan
0897477af0
Remove unmaintained jvm readme and dev scripts. (#9395) 2023-07-18 18:23:43 +08:00
Philip Hyunsu Cho
e082718c66
[CI] Build pip wheel with RMM support (#9383) 2023-07-18 01:52:26 -07:00
Jiaming Yuan
6e18d3a290
[pyspark] Handle the device parameter in pyspark. (#9390)
- Handle the new `device` parameter in PySpark.
- Deprecate the old `use_gpu` parameter.
2023-07-18 08:47:03 +08:00
Philip Hyunsu Cho
2a0ff209ff
[CI] Block CI from running for dependabot PRs (#9394) 2023-07-17 10:53:57 -07:00
Jiaming Yuan
f4fb2be101
[jvm-packages] Add the new device parameter. (#9385) 2023-07-17 18:40:39 +08:00
Jiaming Yuan
2caceb157d
[jvm-packages] Reduce log verbosity for GPU tests. (#9389) 2023-07-17 13:25:46 +08:00
Jiaming Yuan
b342ef951b
Make feature validation immutable. (#9388) 2023-07-16 06:52:55 +08:00
Jiaming Yuan
0a07900b9f
Fix integer overflow. (#9380) 2023-07-15 21:11:02 +08:00
Jiaming Yuan
16eb41936d
Handle the new device parameter in dask and demos. (#9386)
* Handle the new `device` parameter in dask and demos.

- Check no ordinal is specified in the dask interface.
- Update demos.
- Update dask doc.
- Update the condition for QDM.
2023-07-15 19:11:20 +08:00
Jiaming Yuan
9da5050643
Turn warning messages into Python warnings. (#9387) 2023-07-15 07:46:43 +08:00
Jiaming Yuan
04aff3af8e
Define the new device parameter. (#9362) 2023-07-13 19:30:25 +08:00
Cássia Sampaio
2d0cd2817e
[doc] Fix learning_to_rank.rst (#9381)
Just adding one missing bracket.
2023-07-13 11:00:24 +08:00
jinmfeng001
a1367ea1f8
Set feature_names and feature_types in jvm-packages (#9364)
* 1. Add parameters to set feature names and feature types
2. Save feature names and feature types to native json model

* Change serialization and deserialization format to ubj.
2023-07-12 15:18:46 +08:00
Rong Ou
3632242e0b
Support column split with GPU quantile (#9370) 2023-07-11 12:15:56 +08:00
Jiaming Yuan
97ed944209
Unify the hist tree method for different devices. (#9363) 2023-07-11 10:04:39 +08:00
Jiaming Yuan
20c52f07d2
Support exporting cut values (#9356) 2023-07-08 15:32:41 +08:00
edumugi
c3124813e8
Support numpy vertical split (#9365) 2023-07-08 13:18:12 +08:00
Jiaming Yuan
59787b23af
Allow empty page in external memory. (#9361) 2023-07-08 09:24:35 +08:00
Rong Ou
15ca12a77e
Fix NCCL test hang (#9367) 2023-07-07 11:21:35 +08:00
Jiaming Yuan
41c6813496
Preserve order of saved updaters config. (#9355)
- Save the updater sequence as an array instead of an object.
- Warn only once.

Compatibility is kept, but we should be able to break it, as the config is not loaded
in pickled models and is declared to be unstable.
2023-07-05 20:20:07 +08:00
Jiaming Yuan
b572a39919
[doc] Fix removed reference. (#9358) 2023-07-05 16:49:25 +08:00
Jiaming Yuan
645037e376
Improve test coverage with predictor configuration. (#9354)
* Improve test coverage with predictor configuration.

- Test with ext memory.
- Test with QDM.
- Test with dart.
2023-07-05 15:17:22 +08:00
Oliver Holworthy
6c9c8a9001
Enable Installation of Python Package with System lib in a Virtual Environment (#9349) 2023-07-05 05:46:17 +08:00
Boris
bb2de1fd5d
xgboost4j-gpu_2.12-2.0.0: added libxgboost4j.so back. (#9351) 2023-07-04 03:31:33 +08:00
Jiaming Yuan
d0916849a6
Remove unused weight from buffer for cat features. (#9341) 2023-07-04 01:07:09 +08:00
Jiaming Yuan
6155394a06
Update news for 1.7.6 [skip ci] (#9350) 2023-07-04 01:04:34 +08:00
Jiaming Yuan
e964654b8f
[skl] Enable cat feature without specifying tree method. (#9353) 2023-07-03 22:06:17 +08:00
Jiaming Yuan
39390cc2ee
[breaking] Remove the predictor param, allow fallback to prediction using DMatrix. (#9129)
- A `DeviceOrd` struct is implemented to indicate the device. It will eventually replace the `gpu_id` parameter.
- The `predictor` parameter is removed.
- Fallback to `DMatrix` when `inplace_predict` is not available.
- The heuristic for choosing a predictor is only used during training.
2023-07-03 19:23:54 +08:00
Rong Ou
3a0f787703
Support column split in GPU predictor (#9343) 2023-07-03 04:05:34 +08:00
Rong Ou
f90771eec6
Fix device communicator dependency (#9346) 2023-06-29 10:34:30 +08:00
Jiaming Yuan
f4798718c7
Use hist as the default tree method. (#9320) 2023-06-27 23:04:24 +08:00
Jiaming Yuan
bc267dd729
Use ptr from mmap for GHistIndexMatrix and ColumnMatrix. (#9315)
* Use ptr from mmap for `GHistIndexMatrix` and `ColumnMatrix`.

- Define a resource for holding various types of memory pointers.
- Define ref vector for holding resources.
- Swap the underlying resources for GHist and ColumnM.
- Add documentation for current status.
- s390x support is removed. It should still work if you can compile XGBoost; all the old workaround code did was get GCC to compile.
2023-06-27 19:05:46 +08:00
jasjung
96c3071a8a
[doc] Update learning_to_rank.rst (#9336) 2023-06-27 13:56:18 +08:00
Jiaming Yuan
cfa9c42eb4
Fix callback in AFT viz demo. (#9333)
* Fix callback in AFT viz demo.

- Update the callback function.
- Add lint check.
2023-06-26 22:35:02 +08:00
Jiaming Yuan
6efe7c129f
[doc] Update reference in R vignettes. (#9323) 2023-06-26 18:32:11 +08:00
amdsc21
2e7e9d3b2d update rocgputreeshap branch 2023-06-23 19:50:08 +02:00
amdsc21
3e0c7d1dee new url for rocgputreeshap 2023-06-23 19:46:45 +02:00
amdsc21
2f47a1ebe6 rm warp-primitives 2023-06-22 21:43:00 +02:00
Jiaming Yuan
54da4b3185
Cleanup to prepare for using mmap pointer in external memory. (#9317)
- Update SparseDMatrix comment.
- Use a pointer in the bitfield. We will replace the `std::vector<bool>` in `ColumnMatrix` with bitfield.
- Clean up the page source. The timer is removed as it's inaccurate once we swap the mmap pointer into the page.
2023-06-22 06:43:11 +08:00
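A minimal sketch of the pointer-backed bitfield described in #9317: a view over storage owned elsewhere (eventually an mmap-ed page), in contrast to an owning `std::vector<bool>`. Illustrative only; the class and method names here are hypothetical.

```cpp
#include <cstddef>
#include <cstdint>

// Bit view over externally owned storage; the pointer is borrowed, not owned.
class BitFieldView {
  std::uint32_t* bits_;
 public:
  explicit BitFieldView(std::uint32_t* storage) : bits_(storage) {}
  bool Check(std::size_t i) const { return (bits_[i / 32] >> (i % 32)) & 1u; }
  void Set(std::size_t i) { bits_[i / 32] |= (1u << (i % 32)); }
};
```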
Jiaming Yuan
4066d68261
[doc] Clarify early stopping. (#9304) 2023-06-20 17:56:47 +08:00
Jiaming Yuan
6d22ea793c
Test QDM with sparse data on CPU. (#9316) 2023-06-19 21:27:03 +08:00
Jiaming Yuan
ee6809e642
Use mmap for external memory. (#9282)
- Have basic infrastructure for mmap.
- Release file write handle.
2023-06-19 18:52:55 +08:00
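The mechanism underneath #9282 is the POSIX `mmap` call: the cache file is mapped read-only, the OS pages data in on demand, and the write handle can be released once the file is materialized. A minimal sketch, not the actual resource infrastructure:

```cpp
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a cache file read-only; returns nullptr on failure.
void const* MapCacheFile(char const* path, std::size_t* size) {
  int fd = open(path, O_RDONLY);
  if (fd < 0) { return nullptr; }
  struct stat st;
  if (fstat(fd, &st) != 0) { close(fd); return nullptr; }
  *size = static_cast<std::size_t>(st.st_size);
  void* ptr = mmap(nullptr, *size, PROT_READ, MAP_PRIVATE, fd, 0);
  close(fd);  // the mapping remains valid after the descriptor is closed
  return ptr == MAP_FAILED ? nullptr : ptr;
}
```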
Rong Ou
d8beb517ed
Support bitwise allreduce in NCCL communicator (#9300) 2023-06-17 01:56:50 +08:00
George Othon
2718ff530c
[doc] Variable 'label' is not defined in the pyspark application example (#9302) 2023-06-16 05:06:52 +08:00
amdsc21
5ca7daaa13 merge latest changes 2023-06-15 21:39:14 +02:00
Jacek Laskowski
0df1272695
[docs] How to build the docs using conda (#9276)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-06-15 07:39:26 +08:00
Rong Ou
e70810be8a
Refactor device communicator to make allreduce more flexible (#9295) 2023-06-14 03:53:03 +08:00
Philip Hyunsu Cho
c2f0486d37
[CI] Run two pipeline loaders for responsiveness (#9294) 2023-06-12 09:52:40 -07:00
Jake Blitch
aad1313154
Fix community.rst typos. (#9291) 2023-06-11 09:09:27 +08:00
ZHAOKAI WANG
2b76061659
remove redundant method in expand_entry (#9283) 2023-06-10 05:18:21 +08:00
amdsc21
5f78360949 merge changes Jun092023 2023-06-09 22:41:33 +02:00
Jiaming Yuan
152e2fb072
Unify test helpers for creating ctx. (#9274) 2023-06-10 03:35:22 +08:00
Jiaming Yuan
ea0deeca68
Disable dense optimization in hist for distributed training. (#9272) 2023-06-10 02:31:34 +08:00
github-actions[bot]
8c1065f645
[CI] Update RAPIDS to latest stable (#9278)
Co-authored-by: hcho3 <hcho3@users.noreply.github.com>
2023-06-09 09:55:08 -07:00
Jiaming Yuan
1fcc26a6f8
Set ndcg to default for LTR. (#8822)
- Add document.
- Add tests.
- Use `ndcg` with `topk` as default.
2023-06-09 23:31:33 +08:00
Philip Hyunsu Cho
e4dd6051a0
Use good commit message when updating Rapids 2023-06-08 19:30:25 -07:00
Philip Hyunsu Cho
2ec2ecf013
Allow admin to manually trigger update_rapids workflow 2023-06-08 19:21:36 -07:00
Philip Hyunsu Cho
181dee13e9
Update update_rapids.yml 2023-06-08 19:11:49 -07:00
Rong Ou
ff122d61ff
More tests for cpu predictor with column split (#9270) 2023-06-08 22:47:19 +08:00
ZHAOKAI WANG
84d3fcb7ea
Fix cpu_predictor categorical feature dispatch (#9256) 2023-06-08 01:24:04 +08:00
dependabot[bot]
e229692572
Bump maven-surefire-plugin from 3.1.0 to 3.1.2 in /jvm-packages (#9265)
Bumps [maven-surefire-plugin](https://github.com/apache/maven-surefire) from 3.1.0 to 3.1.2.
- [Release notes](https://github.com/apache/maven-surefire/releases)
- [Commits](https://github.com/apache/maven-surefire/compare/surefire-3.1.0...surefire-3.1.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-surefire-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-07 20:53:20 +08:00
dependabot[bot]
4a5802ed2c
Bump maven-project-info-reports-plugin in /jvm-packages (#9268)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.4.4 to 3.4.5.
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.4.4...maven-project-info-reports-plugin-3.4.5)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-07 19:07:36 +08:00
amdsc21
35cde3b1b2 remove some hip.h 2023-06-07 04:48:09 +02:00
amdsc21
ce345c30a8 remove some hip.h 2023-06-07 03:39:01 +02:00
amdsc21
af8845405a sync Jun 5 2023-06-07 02:43:21 +02:00
Jiaming Yuan
0cba2cdbb0
Support linalg data structures in check device. (#9243) 2023-06-06 09:47:24 +08:00
Jiaming Yuan
fc8110ef79
Remove document and demo in RABIT. (#9246) 2023-06-06 08:20:10 +08:00
Boris
7f9cb921f4
Rearranged maven profiles so that scala-2.13 artifacts are published without gpu-related libraries (#9253) 2023-06-05 13:52:10 -07:00
dependabot[bot]
a474a66573
Bump maven-release-plugin from 3.0.0 to 3.0.1 in /jvm-packages (#9252)
Bumps [maven-release-plugin](https://github.com/apache/maven-release) from 3.0.0 to 3.0.1.
- [Release notes](https://github.com/apache/maven-release/releases)
- [Commits](https://github.com/apache/maven-release/compare/maven-release-3.0.0...maven-release-3.0.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-release-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 21:29:59 +08:00
Rong Ou
962a20693f
More support for column split in cpu predictor (#9244)
- Added column split support to `PredictInstance` and `PredictLeaf`.
- Refactoring of tests.
2023-06-05 08:05:38 +08:00
Philip Hyunsu Cho
3bf0f145bb
Update update_rapids.yml 2023-06-03 13:12:12 -07:00
Philip Hyunsu Cho
a1fad72ab3
Update outdated build badges (#9232) 2023-06-02 08:22:25 -07:00
Philip Hyunsu Cho
288539ac78
[CI] Automatically bump Rapids version in containers (#9234)
* [CI] Use RAPIDS 23.04

* [CI] Remove outdated filters in dependabot

* [CI] Automatically bump Rapids version in containers

* Automate pull request
2023-06-02 08:17:41 -07:00
Jiaming Yuan
9fbde21e9d
Rework the precision metric. (#9222)
- Rework the precision metric for both CPU and GPU.
- Mention it in the document.
- Cleanup old support code for GPU ranking metric.
- Deterministic GPU implementation.

* Drop support for classification.

* type.

* use batch shape.

* lint.

* cpu build.

* cpu build.

* lint.

* Tests.

* Fix.

* Cleanup error message.
2023-06-02 20:49:43 +08:00
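For reference, the standard quantity behind #9222, computed per query and then averaged; the exact top-k and tie handling follows the implementation:

```latex
\mathrm{P@}k = \frac{\lvert \{ \text{relevant documents in the top } k \} \rvert}{k}
```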
amdsc21
9ee1852d4e restore device helper 2023-06-02 02:55:13 +02:00
Your Name
6ecd7903f2 Merge branch 'master' into sync-condition-2023Jun01 2023-06-01 15:58:31 -07:00
Your Name
42867a4805 sync Jun 1 2023-06-01 15:55:06 -07:00
Philip Hyunsu Cho
db8288121d
Revert "Publishing scala-2.13 artifacts to the maven S3 repo. (#9224)" (#9233)
This reverts commit bb2a17b90c4f212191932dce3453591bd7f47cfc.
2023-06-01 14:39:39 -07:00
Boris
bb2a17b90c
Publishing scala-2.13 artifacts to the maven S3 repo. (#9224) 2023-06-01 10:45:18 -07:00
dependabot[bot]
e93b805a75
Bump scala.version from 2.12.17 to 2.12.18 in /jvm-packages (#9230)
Bumps `scala.version` from 2.12.17 to 2.12.18.

Updates `scala-compiler` from 2.12.17 to 2.12.18
- [Release notes](https://github.com/scala/scala/releases)
- [Commits](https://github.com/scala/scala/compare/v2.12.17...v2.12.18)

Updates `scala-library` from 2.12.17 to 2.12.18
- [Release notes](https://github.com/scala/scala/releases)
- [Commits](https://github.com/scala/scala/compare/v2.12.17...v2.12.18)

---
updated-dependencies:
- dependency-name: org.scala-lang:scala-compiler
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.scala-lang:scala-library
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-01 10:44:43 -07:00
ZHAOKAI WANG
fa2ab1f021
TreeRefresher note word spelling modification (#9223) 2023-05-31 20:27:27 +08:00
Jiaming Yuan
aba4559c4f
[doc] Update dask demo. (#9201) 2023-05-31 05:01:02 +08:00
Jiaming Yuan
7f20eaed93
[doc] Troubleshoot nccl shared memory. [skip ci] (#9206) 2023-05-31 05:00:02 +08:00
Jiaming Yuan
62e9387cd5
[ci] Update PySpark version. (#9214) 2023-05-31 03:00:44 +08:00
Jiaming Yuan
17fd3f55e9
Optimize adapter element counting on GPU. (#9209)
- Implement a simple `IterSpan` for passing iterators with size.
- Use shared memory for column size counts.
- Use one thread for each sample in row count to reduce atomic operations.
2023-05-30 23:28:43 +08:00
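A sketch of the "one thread for each sample" idea in the last bullet of #9209: each thread scans its own row and issues a single `atomicAdd`, instead of one atomic operation per element. Illustrative, not the actual kernel:

```cpp
#include <cuda_runtime.h>

// Count valid (non-NaN) entries with one atomic per row.
__global__ void CountRowEntries(float const* data, int n_rows, int n_cols,
                                unsigned long long* total) {
  int row = blockIdx.x * blockDim.x + threadIdx.x;
  if (row >= n_rows) { return; }
  unsigned long long local = 0;
  for (int c = 0; c < n_cols; ++c) {
    if (!isnan(data[row * n_cols + c])) { ++local; }  // accumulate locally
  }
  atomicAdd(total, local);  // single atomic per row, not per element
}
```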
Jiaming Yuan
097f11b6e0
Support CUDA f16 without transformation. (#9207)
- Support f16 from cupy.
- Include CUDA header explicitly.
- Cleanup cmake nvtx support.
2023-05-30 20:54:31 +08:00
dependabot[bot]
6f83d9c69a
Bump maven-project-info-reports-plugin in /jvm-packages (#9219)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.4.3 to 3.4.4.
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.4.3...maven-project-info-reports-plugin-3.4.4)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-30 19:10:07 +08:00
Jiaming Yuan
ae7450ce54
Skip optional synchronization in thrust. (#9212) 2023-05-30 17:23:09 +08:00
Jean Lescut-Muller
ddec0f378c
[doc] Show derivative of the custom objective (#9213)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-05-30 04:07:12 +08:00
Bobby Wang
320323f533
[pyspark] add parameters in the ctor of all estimators. (#9202)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-05-29 05:58:16 +08:00
Jiaming Yuan
03bc6e6427
Remove unused variables. (#9210)
- Remove unused variables.
- Remove signed comparison warnings.
2023-05-28 05:24:15 +08:00
dependabot[bot]
d563d6d8f4
Bump scala-collection-compat_2.12 from 2.9.0 to 2.10.0 in /jvm-packages (#9208)
Bumps [scala-collection-compat_2.12](https://github.com/scala/scala-collection-compat) from 2.9.0 to 2.10.0.
- [Release notes](https://github.com/scala/scala-collection-compat/releases)
- [Commits](https://github.com/scala/scala-collection-compat/compare/v2.9.0...v2.10.0)

---
updated-dependencies:
- dependency-name: org.scala-lang.modules:scala-collection-compat_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-28 00:22:28 +08:00
Boris
a01df102c9
Scala 2.13 support. (#9099)
1. Updated the test logic
2. Added smoke tests for Spark examples.
3. Added integration tests for Spark with Scala 2.13
2023-05-27 19:34:02 +08:00
Jiaming Yuan
8c174ef2d3
[CI] Update images that are not related to binary release. (#9205)
* [CI] Update images that are not related to the binary release.

- Update clang-tidy, prefer tools from the Ubuntu repository.
- Update GPU image to 22.04.
- Small cleanup to the tidy script.
- Remove gpu_jvm, which seems to be unused.
2023-05-27 17:40:46 +08:00
michael-gendy-mention-me
c5677a2b2c
Remove type: ignore hints (#9197) 2023-05-27 07:48:28 +08:00
Jiaming Yuan
053aababd4
Avoid thrust logical operation. (#9199)
The Thrust implementation of `thrust::all_of/any_of/none_of` adopts an early-stopping
strategy, bailing out by dividing the input into small batches. This is not ideal for
data validation, as we expect all data to be valid; the strategy leads to excessive
kernel launches and stream synchronization.

* Use reduce from dh instead.
2023-05-27 01:36:58 +08:00
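A minimal sketch of the replacement pattern in #9199: a single `thrust::transform_reduce` folds a validity predicate with logical AND in one pass, one kernel launch, and one synchronization. The predicate and helper name here are assumptions; the actual `dh` utility differs:

```cpp
#include <thrust/device_vector.h>
#include <thrust/functional.h>
#include <thrust/transform_reduce.h>

struct IsValid {
  __device__ bool operator()(float v) const { return isfinite(v); }
};

bool AllValid(thrust::device_vector<float> const& data) {
  // Reduce the whole input at once; no per-batch early exit, no repeated
  // stream synchronization.
  return thrust::transform_reduce(data.begin(), data.end(), IsValid{},
                                  true, thrust::logical_and<bool>());
}
```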
dependabot[bot]
614f47c477
Bump flink-clients from 1.17.0 to 1.17.1 in /jvm-packages (#9203)
Bumps [flink-clients](https://github.com/apache/flink) from 1.17.0 to 1.17.1.
- [Commits](https://github.com/apache/flink/compare/release-1.17.0...release-1.17.1)

---
updated-dependencies:
- dependency-name: org.apache.flink:flink-clients
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-26 18:42:24 +08:00
Rong Ou
5b69534b43
Support column split in multi-target hist (#9171) 2023-05-26 16:56:05 +08:00
Rong Ou
acd363033e
Fix running MGPU gtests (#9200) 2023-05-26 05:26:38 +08:00
amdsc21
c5b575e00e fix host __assert_fail 2023-05-24 19:40:24 +02:00
amdsc21
1354138b7d Merge branch 'master' into sync-condition-2023May15 2023-05-24 17:44:16 +02:00
dependabot[bot]
5d99b441d5
Bump scalatest_2.12 from 3.2.15 to 3.2.16 in /jvm-packages/xgboost4j (#9160)
Bumps [scalatest_2.12](https://github.com/scalatest/scalatest) from 3.2.15 to 3.2.16.
- [Release notes](https://github.com/scalatest/scalatest/releases)
- [Commits](https://github.com/scalatest/scalatest/compare/release-3.2.15...release-3.2.16)

---
updated-dependencies:
- dependency-name: org.scalatest:scalatest_2.12
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-24 09:09:25 +08:00
dependabot[bot]
e38e94ba4d
Bump rapids-4-spark_2.12 from 23.04.0 to 23.04.1 in /jvm-packages (#9158)
Bumps rapids-4-spark_2.12 from 23.04.0 to 23.04.1.

---
updated-dependencies:
- dependency-name: com.nvidia:rapids-4-spark_2.12
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-24 07:15:46 +08:00
dependabot[bot]
d6d83c818f
Bump maven-assembly-plugin from 3.5.0 to 3.6.0 in /jvm-packages (#9163)
Bumps [maven-assembly-plugin](https://github.com/apache/maven-assembly-plugin) from 3.5.0 to 3.6.0.
- [Commits](https://github.com/apache/maven-assembly-plugin/compare/maven-assembly-plugin-3.5.0...maven-assembly-plugin-3.6.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-assembly-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-23 13:56:12 -07:00
dependabot[bot]
22b0fc0992
Bump maven-source-plugin from 3.2.1 to 3.3.0 in /jvm-packages (#9184)
Bumps [maven-source-plugin](https://github.com/apache/maven-source-plugin) from 3.2.1 to 3.3.0.
- [Commits](https://github.com/apache/maven-source-plugin/compare/maven-source-plugin-3.2.1...maven-source-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-source-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-24 03:29:44 +08:00
dependabot[bot]
e67a0b8599
Bump maven-checkstyle-plugin from 3.2.2 to 3.3.0 in /jvm-packages (#9192)
Bumps [maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.2.2 to 3.3.0.
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.2.2...maven-checkstyle-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-24 01:43:47 +08:00
amdsc21
b994a38b28 Merge branch 'master' into sync-condition-2023May15 2023-05-23 01:07:50 +02:00
Jiaming Yuan
3913ff470f
Import data lazily during tests. (#9176) 2023-05-23 03:58:31 +08:00
amdsc21
3a834c4992 change workflow 2023-05-20 07:04:06 +02:00
amdsc21
b22644fc10 add hip.h 2023-05-20 01:25:33 +02:00
amdsc21
7663d47383 Merge branch 'master' into sync-condition-2023May15 2023-05-19 20:30:35 +02:00
Bobby Wang
6274fba0a5
[pyspark] support typing (#9172) 2023-05-19 14:39:26 +08:00
amdsc21
88fc8badfa Merge branch 'master' into sync-condition-2023May15 2023-05-17 19:55:50 +02:00
Bobby Wang
caf326d508
[pyspark] Refactor and typing support for models (#9156) 2023-05-17 16:38:51 +08:00
Bobby Wang
cb370c4f7d
[jvm] separate spark.version for cpu and gpu (#9166) 2023-05-17 07:12:20 +08:00
amdsc21
8cad8c693c sync up May15 2023 2023-05-15 18:59:18 +02:00
Stephan T. Lavavej
7375bd058b
Fix IndexTransformIter. (#9155) 2023-05-12 21:25:54 +08:00
Stephan T. Lavavej
59edfdb315
Fix typo: _defined => defined (#9153) 2023-05-11 16:34:45 -07:00
Stephan T. Lavavej
779b82c098
Avoid redefining macros. (#9154) 2023-05-11 15:59:25 -07:00
Rong Ou
603f8ce2fa
Support hist in the partition builder under column split (#9120) 2023-05-11 05:24:29 +08:00
Rong Ou
52311dcec9
Fix multi-threaded gtests (#9148) 2023-05-10 19:15:32 +08:00
Jiaming Yuan
e4129ed6ee
[jvm-packages] Remove akka in tester. (#9149) 2023-05-10 14:10:58 +08:00
dependabot[bot]
2ab6660943
Bump maven-surefire-plugin in /jvm-packages/xgboost4j-spark (#9131)
Bumps [maven-surefire-plugin](https://github.com/apache/maven-surefire) from 3.0.0 to 3.1.0.
- [Release notes](https://github.com/apache/maven-surefire/releases)
- [Commits](https://github.com/apache/maven-surefire/compare/surefire-3.0.0...surefire-3.1.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-surefire-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-10 12:21:36 +08:00
dependabot[bot]
d21e7e5f82
Bump maven-gpg-plugin from 3.0.1 to 3.1.0 in /jvm-packages (#9136)
Bumps [maven-gpg-plugin](https://github.com/apache/maven-gpg-plugin) from 3.0.1 to 3.1.0.
- [Commits](https://github.com/apache/maven-gpg-plugin/compare/maven-gpg-plugin-3.0.1...maven-gpg-plugin-3.1.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-gpg-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-10 10:21:36 +08:00
Philip Hyunsu Cho
0cd4382d72
Fix config-settings handling in pip install (#9115)
* Fix config_settings handling in pip install

* Fix formatting

* Fix flag use_system_libxgboost

* Add setuptools to doc requirements.txt

* Fix mypy
2023-05-09 17:54:20 -07:00
Jiaming Yuan
09b44915e7
[doc] Replace recommonmark with myst-parser. (#9125) 2023-05-10 08:11:36 +08:00
Jiaming Yuan
85988a3178
Wait for data CUDA stream instead of sync. (#9144)
---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-05-09 09:52:21 +08:00
Uriya Harpeness
a075aa24ba
Move python tool configurations to pyproject.toml, and add the python 3.11 classifier. (#9112) 2023-05-06 02:59:06 +08:00
Jiaming Yuan
55968ed3fa
Fix monotone constraints on CPU. (#9122) 2023-05-06 01:07:54 +08:00
Rong Ou
250b22dd22
Fix nvflare horizontal demo (#9124) 2023-05-05 16:48:22 +08:00
Jiaming Yuan
47b3cb6fb7
Remove unused parameters in RABIT. (#9108) 2023-05-05 05:26:24 +08:00
Philip Hyunsu Cho
07b2d5a26d
Add useful links to pyproject.toml (#9114) 2023-05-02 12:47:15 -07:00
amdsc21
b066accad6 fix lambdarank_obj 2023-05-02 21:06:22 +02:00
amdsc21
b324d51f14 fix array_interface.h half type 2023-05-02 20:50:50 +02:00
amdsc21
65097212b3 fix IterativeDeviceDMatrix, support HIP 2023-05-02 20:20:11 +02:00
amdsc21
4a24ca2f95 fix helpers.h, enable HIP 2023-05-02 20:04:23 +02:00
amdsc21
83e6fceb5c fix lambdarank_obj.cc, support HIP 2023-05-02 19:03:18 +02:00
amdsc21
e4538cb13c fix, to support hip 2023-05-02 17:43:11 +02:00
amdsc21
5446c501af merge 23Mar01 2023-05-02 00:05:58 +02:00
amdsc21
313a74b582 add Shap Magic to check if use cat 2023-05-01 21:55:14 +02:00
Jiaming Yuan
08ce495b5d
Use Booster context in DMatrix. (#8896)
- Pass context from booster to DMatrix.
- Use context instead of integer for `n_threads`.
- Check configuration consistency for `max_bin`.
- Test for all combinations of initialization options.
2023-04-28 21:47:14 +08:00
Jiaming Yuan
1f9a57d17b
[Breaking] Require format to be specified in input URI. (#9077)
Previously, we used `libsvm` as the default when the format was not specified. However,
the dmlc data parser is not particularly robust against errors, and the most common type
of error is an undefined format.

In addition, we now recommend that users use other data loaders instead. We will continue
maintaining the parsers, as they are currently used for many internal tests, including
federated learning.
2023-04-28 19:45:15 +08:00
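In C API terms, after #9077 the URI must spell the format out explicitly instead of falling back to `libsvm`. A hedged sketch (error handling elided; `XGDMatrixCreateFromFile` is the real entry point):

```cpp
#include <xgboost/c_api.h>

int LoadTraining(DMatrixHandle* out) {
  // A bare "train.txt" is no longer assumed to be libsvm; the format is
  // part of the URI.
  return XGDMatrixCreateFromFile("train.txt?format=libsvm", 1, out);
}
```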
Bobby Wang
e922004329
[doc] fix the cudf installation [skip ci] (#9106) 2023-04-28 19:43:58 +08:00
Jiaming Yuan
17ff471616
Optimize array interface input. (#9090) 2023-04-28 18:01:58 +08:00
Rong Ou
fb941262b4
Add demo for vertical federated learning (#9103) 2023-04-28 16:03:21 +08:00
Jiaming Yuan
e206b899ef
Rework MAP and Pairwise for LTR. (#9075) 2023-04-28 02:39:12 +08:00
Jiaming Yuan
0e470ef606
Optimize prediction with QuantileDMatrix. (#9096)
- Reduce overhead in `FVecDrop`.
- Reduce overhead caused by `HostVector()` calls.
2023-04-28 00:51:41 +08:00
Jiaming Yuan
fa267ad093
[CI] Freeze R version to 4.2.0 with MSVC. (#9104) 2023-04-27 22:48:31 +08:00
Jiaming Yuan
96d3f8a6f3
[doc] Update document. (#9098)
- Mention flink is still under construction.
- Update doxygen version.
- Fix warnings from doxygen about defgroup title and mismatched parameter name.
2023-04-27 19:29:03 +08:00
Rong Ou
511d4996b5
Rely on gRPC to generate random port (#9102) 2023-04-27 09:48:26 +08:00
Jiaming Yuan
101a2e643d
[jvm-packages] Bump rapids version. (#9097) 2023-04-27 09:46:46 +08:00
Scott Gustafson
353ed5339d
Convert `DaskXGBClassifier.classes_` to an array (#8452)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-04-27 02:23:35 +08:00
Boris
0e7377ba9c
Updated flink 1.8 -> 1.17. Added smoke tests for Flink (#9046) 2023-04-26 18:41:11 +08:00
Rong Ou
a320b402a5
More refactoring to take advantage of collective aggregators (#9081) 2023-04-26 03:36:09 +08:00
dependabot[bot]
49ccae7fb9
Bump spark.version from 3.1.1 to 3.4.0 in /jvm-packages (#9039)
Bumps `spark.version` from 3.1.1 to 3.4.0.

Updates `spark-mllib_2.12` from 3.1.1 to 3.4.0

Updates `spark-core_2.12` from 3.1.1 to 3.4.0

Updates `spark-sql_2.12` from 3.1.1 to 3.4.0

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.spark:spark-core_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
- dependency-name: org.apache.spark:spark-sql_2.12
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-26 01:32:06 +08:00
Bobby Wang
17add4776f
[pyspark] Don't stack for non feature columns (#9088) 2023-04-25 23:09:12 +08:00
dependabot[bot]
a2cc78c1fb
Bump scala.version from 2.12.8 to 2.12.17 in /jvm-packages (#9083)
Bumps `scala.version` from 2.12.8 to 2.12.17.

Updates `scala-compiler` from 2.12.8 to 2.12.17
- [Release notes](https://github.com/scala/scala/releases)
- [Commits](https://github.com/scala/scala/compare/v2.12.8...v2.12.17)

Updates `scala-library` from 2.12.8 to 2.12.17
- [Release notes](https://github.com/scala/scala/releases)
- [Commits](https://github.com/scala/scala/compare/v2.12.8...v2.12.17)

---
updated-dependencies:
- dependency-name: org.scala-lang:scala-compiler
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.scala-lang:scala-library
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 21:30:38 +08:00
Bobby Wang
339f21e1bf
[pyspark] fix a type hint with old pyspark release (#9079) 2023-04-24 20:04:14 +08:00
Bobby Wang
d237378452
[jvm-packages] Clean up the dependencies after removing scala versioned tracker (#9078) 2023-04-24 17:49:08 +08:00
Jiaming Yuan
c512c3f46b
[jvm-packages] Bump rapids version. (#9056) 2023-04-22 15:46:44 +08:00
Rong Ou
8dbe0510de
More collective aggregators (#9060) 2023-04-22 03:32:05 +08:00
Jiaming Yuan
7032981350
Fix timer annotation. (#9057) 2023-04-21 22:53:58 +08:00
austinzh
3b742dc4f1
Stop using Rabit in prediction (#9054) 2023-04-21 19:38:07 +08:00
dependabot[bot]
39b0fde0e7
Bump kryo from 5.4.0 to 5.5.0 in /jvm-packages (#9070)
Bumps [kryo](https://github.com/EsotericSoftware/kryo) from 5.4.0 to 5.5.0.
- [Release notes](https://github.com/EsotericSoftware/kryo/releases)
- [Commits](https://github.com/EsotericSoftware/kryo/compare/kryo-parent-5.4.0...kryo-parent-5.5.0)

---
updated-dependencies:
- dependency-name: com.esotericsoftware:kryo
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-21 18:16:34 +08:00
dependabot[bot]
ee84e22c8d
Bump maven-checkstyle-plugin from 3.2.1 to 3.2.2 in /jvm-packages (#9073)
Bumps [maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.2.1 to 3.2.2.
- [Release notes](https://github.com/apache/maven-checkstyle-plugin/releases)
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.2.1...maven-checkstyle-plugin-3.2.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-21 18:16:08 +08:00
Jiaming Yuan
b908680bec
Fix race condition in cpp metric tests. (#9058) 2023-04-21 05:24:10 +08:00
Philip Hyunsu Cho
a5cd2412de
Replace setup.py with pyproject.toml (#9021)
* Create pyproject.toml
* Implement a custom build backend (see below) in packager directory. Build logic from setup.py has been refactored and migrated into the new backend.
* Tested: pip wheel . (build wheel), python -m build --sdist . (source distribution)
2023-04-20 13:51:39 -07:00
Jiaming Yuan
a7b3dd3176
Fix compiler warnings. (#9055) 2023-04-21 02:26:47 +08:00
dependabot[bot]
2acd78b44b
Bump maven-project-info-reports-plugin in /jvm-packages/xgboost4j (#9049)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.4.2 to 3.4.3.
- [Release notes](https://github.com/apache/maven-project-info-reports-plugin/releases)
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.4.2...maven-project-info-reports-plugin-3.4.3)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-21 00:10:45 +08:00
Emil Ejbyfeldt
a84a1fde02
[jvm-packages] Update scalatest to 3.2.15 (#8925)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-04-20 22:16:56 +08:00
Jiaming Yuan
564df59204
[breaking] [jvm-packages] Remove scala-implemented tracker. (#9045) 2023-04-20 16:29:35 +08:00
amdsc21
65d83e288f fix device query 2023-04-19 19:53:26 +02:00
Rong Ou
42d100de18
Make sure metrics work with federated learning (#9037) 2023-04-19 15:39:11 +08:00
Jiaming Yuan
ef13dd31b1
Rework the NDCG objective. (#9015) 2023-04-18 21:16:06 +08:00
Rong Ou
ba9d24ff7b
Make sure metrics work with column-wise distributed training (#9020) 2023-04-18 03:48:23 +08:00
amdsc21
f645cf51c1 Merge branch 'master' into sync-condition-2023Apr11 2023-04-17 18:33:00 +02:00
WeichenXu
191d0aa5cf
[spark] Make spark model have the same UID with its estimator (#9022)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2023-04-14 02:53:30 +08:00
Philip Hyunsu Cho
8e0f320db3
[CI] Don't run CI automatically for dependabot (#9034) 2023-04-13 08:19:56 -07:00
amdsc21
db8420225b fix RCCL 2023-04-12 01:09:14 +02:00
amdsc21
843fdde61b sync Apr 11 2023 2023-04-11 20:03:25 +02:00
amdsc21
08bc4b0c0f Merge branch 'master' into sync-condition-2023Apr11 2023-04-11 19:38:38 +02:00
amdsc21
6825d986fd move Dockerfile to ci 2023-04-11 19:34:23 +02:00
Jiaming Yuan
fe9dff339c
Convert federated learner test into test suite. (#9018)
* Convert federated learner test into test suite.

- Add specialization to learning to rank.
2023-04-11 09:52:55 +08:00
Jiaming Yuan
2c8d735cb3
Fix tests with pandas 2.0. (#9014)
* Fix tests with pandas 2.0.

- `is_categorical` is replaced by `is_categorical_dtype`.
- One-hot encoding returns a boolean type instead of an integer type.
2023-04-11 00:17:34 +08:00
Sarah Charlotte Johnson
ebd64f6e22
[doc] Update Dask deployment options (#9008) 2023-04-07 01:09:15 +08:00
Jiaming Yuan
1cf4d93246
Convert federated tests into test suite. (#9006)
- Add specialization for learning to rank.
2023-04-04 01:29:47 +08:00
Rong Ou
15e073ca9d
Make objectives work with vertical distributed and federated learning (#9002) 2023-04-03 17:07:42 +08:00
Jiaming Yuan
720a8c3273
[doc] Remove parameter type in Python doc strings. (#9005) 2023-04-01 04:04:30 +08:00
Jiaming Yuan
4caca2947d
Improve helper script for making release. [skip ci] (#9004)
* Merge source tarball generation script.
* Generate Python source wheel.
* Generate hashes and release note.
2023-03-31 23:14:58 +08:00
Jiaming Yuan
bcb55d3b6a
Portable macro definition. (#8999) 2023-03-31 20:48:59 +08:00
Jiaming Yuan
bac22734fb
Remove ntree limit in python package. (#8345)
- Remove `ntree_limit`. The parameter has been deprecated since 1.4.0.
- The SHAP package compatibility is broken.
2023-03-31 19:01:55 +08:00
paklui
d155ec77f9 building docker for xgboost-amd-condition 2023-03-30 13:36:39 -07:00
Jiaming Yuan
b647403baa
Update release news. [skip ci] (#9000) 2023-03-31 03:52:09 +08:00
Jiaming Yuan
cd05e38533
[doc][R] Update link. (#8998) 2023-03-30 19:09:07 +08:00
Jiaming Yuan
d062a9e009
Define pair generation strategies for LTR. (#8984) 2023-03-30 12:00:35 +08:00
amdsc21
991738690f Merge branch 'sync-condition-2023Mar27' into amd-condition 2023-03-30 05:16:36 +02:00
amdsc21
aeb3fd1c95 Merge branch 'master' into sync-condition-2023Mar27 2023-03-30 05:15:55 +02:00
Rong Ou
d385cc64e2
Fix aft_loss_distribution documentation (#8995) 2023-03-29 19:13:23 -07:00
amdsc21
141a062e00 Merge branch 'sync-condition-2023Mar27' into amd-condition 2023-03-30 00:47:16 +02:00
amdsc21
acad01afc9 sync Mar 29 2023-03-30 00:46:50 +02:00
Jiaming Yuan
a58055075b
[dask] Return the first valid booster instead of all valid ones. (#8993)
* [dask] Return the first valid booster instead of all valid ones.

- Reduce memory footprint of the returned model.

* mypy error.

* lint.

* duplicated.
2023-03-30 03:16:18 +08:00
Philip Hyunsu Cho
6676c28cbc
[CI] Fix Windows wheel to be compatible with Poetry (#8991)
* [CI] Fix Windows wheel to be compatible with Poetry

* Typo

* Eagerly scan globs to avoid patching same file twice
2023-03-28 21:32:54 -07:00
Rong Ou
ff26cd3212
More tests for column split and vertical federated learning (#8985)
Added some more tests for the learner and fit_stump, for both column-wise distributed learning and vertical federated learning.

Also moved the `IsRowSplit` and `IsColumnSplit` methods from the `DMatrix` to the `MetaInfo` since in some places we only have access to the `MetaInfo`. Added a new convenience method `IsVerticalFederatedLearning`.

Some refactoring of the testing fixtures.
2023-03-28 16:40:26 +08:00
amdsc21
f289e5001d Merge branch 'sync-condition-2023Mar27' into amd-condition 2023-03-28 00:24:12 +02:00
amdsc21
06d9b998ce fix CAPI BuildInfo 2023-03-28 00:14:18 +02:00
amdsc21
c50cc424bc sync Mar 27 2023 2023-03-27 18:54:41 +02:00
Jiaming Yuan
401ce5cf5e
Run linters with the multi output demo. (#8966) 2023-03-28 00:47:28 +08:00
Jiaming Yuan
acc110c251
[MT-TREE] Support prediction cache and model slicing. (#8968)
- Fix prediction range.
- Support prediction cache in mt-hist.
- Support model slicing.
- Make the booster a Python iterable by defining `__iter__`.
- Cleanup removed/deprecated parameters.
- A new field in the output model `iteration_indptr` for pointing to the ranges of trees for each iteration.
2023-03-27 23:10:54 +08:00
Jiaming Yuan
c2b3a13e70
[breaking][skl] Remove parameter serialization. (#8963)
- Remove parameter serialization in the scikit-learn interface.

The scikit-learn interface `save_model` will save only the model and discard all
hyper-parameters. This is to align with the native XGBoost interface, which distinguishes
between hyper-parameters and model parameters.

With the scikit-learn interface, model parameters are attributes of the estimator. For
instance, `n_features_in_` and `n_classes_` are always accessible as
`estimator.n_features_in_` and `estimator.n_classes_`, but not through
`estimator.get_params`.

- Define a `load_model` method for classifier to load its own attributes.

- Set n_estimators to None by default.
2023-03-27 21:34:10 +08:00
dependabot[bot]
90645c4957
Bump maven-resources-plugin from 3.3.0 to 3.3.1 in /jvm-packages (#8980)
Bumps [maven-resources-plugin](https://github.com/apache/maven-resources-plugin) from 3.3.0 to 3.3.1.
- [Release notes](https://github.com/apache/maven-resources-plugin/releases)
- [Commits](https://github.com/apache/maven-resources-plugin/compare/maven-resources-plugin-3.3.0...maven-resources-plugin-3.3.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-resources-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 16:03:45 +08:00
dependabot[bot]
43878b10b6
Bump maven-deploy-plugin in /jvm-packages/xgboost4j-spark-gpu (#8973)
Bumps [maven-deploy-plugin](https://github.com/apache/maven-deploy-plugin) from 3.0.0 to 3.1.1.
- [Release notes](https://github.com/apache/maven-deploy-plugin/releases)
- [Commits](https://github.com/apache/maven-deploy-plugin/compare/maven-deploy-plugin-3.0.0...maven-deploy-plugin-3.1.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-deploy-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 12:47:13 +08:00
amdsc21
8c77e936d1 tune grid size 2023-03-26 17:45:19 +02:00
amdsc21
18034a4291 tune histogram 2023-03-26 01:42:51 +01:00
amdsc21
7ee4734d3a rm device_helpers.hip.h from cu 2023-03-26 00:24:11 +01:00
amdsc21
ee582f03c3 rm device_helpers.hip.h from cuh 2023-03-25 23:35:57 +01:00
amdsc21
f3286bac04 rm warp header 2023-03-25 23:01:44 +01:00
amdsc21
3ee3bea683 fix warp header 2023-03-25 22:37:37 +01:00
amdsc21
5098735698 Merge branch 'condition-sync-Mar24-23' into hui-condition 2023-03-25 05:28:40 +01:00
amdsc21
e74b3bbf3c fix macro 2023-03-25 05:17:39 +01:00
amdsc21
22525c002a fix macro 2023-03-25 05:08:30 +01:00
amdsc21
80961039d7 fix macro 2023-03-25 05:00:55 +01:00
amdsc21
1474789787 add new file 2023-03-25 04:54:02 +01:00
amdsc21
1dc138404a initial merge, fix linalg.h 2023-03-25 04:48:47 +01:00
amdsc21
e1d050f64e initial merge, fix linalg.h 2023-03-25 04:37:43 +01:00
amdsc21
7fbc561e17 initial merge 2023-03-25 04:31:55 +01:00
amdsc21
d97be6f396 enable last 3 tests 2023-03-25 04:05:05 +01:00
amdsc21
f1211cffca enable last 3 tests 2023-03-25 00:45:52 +01:00
amdsc21
e0716afabf fix objective/objective.cc, CMakeFile and setup.py 2023-03-23 20:22:34 +01:00
dependabot[bot]
cff50fe3ef
Bump hadoop.version from 3.3.4 to 3.3.5 in /jvm-packages (#8962)
Bumps `hadoop.version` from 3.3.4 to 3.3.5.

Updates `hadoop-hdfs` from 3.3.4 to 3.3.5

Updates `hadoop-common` from 3.3.4 to 3.3.5

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-hdfs
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-23 16:12:04 +08:00
Jiaming Yuan
21a52c7f98
[doc] Add introduction and notes for the sklearn interface. (#8948) 2023-03-23 13:30:42 +08:00
Jiaming Yuan
bf88dadb61
[doc] Fix callback example. (#8944) 2023-03-23 03:27:04 +08:00
Jiaming Yuan
15a2724ff7
Removed outdated configuration serialization logic. (#8942)
- `saved_params` is empty.
- `saved_configs_` contains `num_round`, which is not used anywhere inside xgboost.
2023-03-23 01:31:46 +08:00
Jiaming Yuan
151882dd26
Initial support for multi-target tree. (#8616)
* Implement multi-target for hist.

- Add new hist tree builder.
- Move data fetchers for tests.
- Dispatch function calls in gbm base on the tree type.
2023-03-22 23:49:56 +08:00
Jiaming Yuan
ea04d4c46c
[doc] [dask] Troubleshooting NCCL errors. (#8943) 2023-03-22 22:17:26 +08:00
Jiaming Yuan
a551bed803
Remove duplicated learning rate parameter. (#8941) 2023-03-22 20:51:14 +08:00
Jiaming Yuan
a05799ed39
Specify char type in JSON. (#8949)
`char` is defined as signed on x86 but unsigned on arm64.

- Use `std::int8_t` instead of char.
- Fix include when clang is pretending to be gcc.
2023-03-22 19:13:44 +08:00
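A minimal illustration of the portability trap behind #8949; this is standard C++ behavior, not XGBoost-specific:

```cpp
#include <cstdint>
#include <iostream>

int main() {
  // Plain char is signed on x86 but unsigned on arm64, so the same byte
  // pattern can print as -1 on one platform and 255 on the other.
  char c = static_cast<char>(0xFF);
  std::int8_t i = -1;  // explicitly signed on every platform
  std::cout << static_cast<int>(c) << " vs " << static_cast<int>(i) << "\n";
  return 0;
}
```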
Jiaming Yuan
5891f752c8
Rework the MAP metric. (#8931)
- The new implementation is more strict as only binary labels are accepted. The previous implementation converts values greater than 1 to 1.
- Deterministic GPU. (no atomic add).
- Fix top-k handling.
- Precise definition of MAP. (There are other variants on how to handle top-k).
- Refactor GPU ranking tests.
2023-03-22 17:45:20 +08:00
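For reference, one common definition of the quantity #8931 computes, with binary relevance rel_i in {0, 1}, P(i) the precision at cut-off i, and R the number of relevant documents for the query; as the commit notes, variants differ in how top-k is handled:

```latex
\mathrm{AP@}k = \frac{1}{\min(k, R)} \sum_{i=1}^{k} P(i)\,\mathrm{rel}_i,
\qquad
\mathrm{MAP@}k = \frac{1}{\lvert Q \rvert} \sum_{q \in Q} \mathrm{AP@}k(q)
```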
Rong Ou
b240f055d3
Support vertical federated learning (#8932) 2023-03-22 14:25:26 +08:00
Philip Hyunsu Cho
8dc1e4b3ea
Improve doxygen (#8959)
* Remove Sphinx build from GH Action

* Build Doxygen as part of RTD build

* Add jQuery
2023-03-21 09:22:11 -07:00
dependabot[bot]
34092d7fd0
Bump maven-release-plugin in /jvm-packages/xgboost4j-spark (#8952)
Bumps [maven-release-plugin](https://github.com/apache/maven-release) from 2.5.3 to 3.0.0.
- [Release notes](https://github.com/apache/maven-release/releases)
- [Commits](https://github.com/apache/maven-release/compare/maven-release-2.5.3...maven-release-3.0.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-release-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 15:34:43 +08:00
amdsc21
595cd81251 add max shared mem workaround 2023-03-19 20:08:42 +01:00
amdsc21
0325ce0bed update gputreeshap 2023-03-19 20:07:36 +01:00
Jiaming Yuan
9b6cc0ed07
Refactor hist to prepare for multi-target builder. (#8928)
- Extract the builder from the updater class. We need a new builder for multi-target.
- Extract `UpdateTree`, which can be reused for different builders. Eventually, other tree
  updaters can use it as well.
2023-03-17 17:21:04 +08:00
Philip Hyunsu Cho
36263dd109
[jvm-packages] Use akka 2.6 (#8920) 2023-03-16 20:06:42 -07:00
Quentin Fiard
55ed50c860
Fix a few typos in the C API tutorial (#8926) 2023-03-16 20:24:03 +08:00
Jiaming Yuan
a093770f36
Partitioner for multi-target tree. (#8922) 2023-03-16 18:49:34 +08:00
amdsc21
a79a35c22c add warp size 2023-03-15 22:00:26 +01:00
Jiaming Yuan
26209a42a5
Define git attributes for renormalization. (#8921) 2023-03-16 02:43:11 +08:00
Philip Hyunsu Cho
a2cdba51ce
Use hi-res SVG logo (#8923) 2023-03-15 10:02:38 -07:00
dependabot[bot]
fd016e43c6
Bump maven-surefire-plugin from 2.22.2 to 3.0.0 in /jvm-packages (#8917)
Bumps [maven-surefire-plugin](https://github.com/apache/maven-surefire) from 2.22.2 to 3.0.0.
- [Release notes](https://github.com/apache/maven-surefire/releases)
- [Commits](https://github.com/apache/maven-surefire/compare/surefire-2.22.2...surefire-3.0.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-surefire-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-15 18:51:46 +08:00
Jiaming Yuan
f186c87cf9
Check inf in data for all types of DMatrix. (#8911) 2023-03-15 11:24:35 +08:00
amdsc21
4484c7f073 disable Optin Shared Mem 2023-03-15 02:10:16 +01:00
amdsc21
8207015e48 fix ../tests/cpp/common/test_span.h 2023-03-14 22:19:06 +01:00
Jiaming Yuan
72e8331eab
Reimplement the NDCG metric. (#8906)
- Add support for non-exp gain.
- Cache the DMatrix object to avoid re-calculating the IDCG.
- Make GPU implementation deterministic. (no atomic add)
2023-03-15 03:26:17 +08:00
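For reference, the standard quantities behind #8906. The "non-exp gain" above means using g(r) = r in place of the exponential gain g(r) = 2^r - 1; IDCG@k is the DCG of the ideal ordering, which is what caching against the DMatrix avoids recomputing:

```latex
\mathrm{DCG@}k = \sum_{i=1}^{k} \frac{g(\mathrm{rel}_i)}{\log_2(i + 1)},
\qquad
\mathrm{NDCG@}k = \frac{\mathrm{DCG@}k}{\mathrm{IDCG@}k}
```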
Jiaming Yuan
8685556af2
Implement hist evaluator for multi-target tree. (#8908) 2023-03-15 01:42:51 +08:00
Jiaming Yuan
95e2baf7c2
[doc] Fix typo [skip ci] (#8907) 2023-03-15 00:55:17 +08:00
Jiaming Yuan
910ce580c8
Clear all cache after model load. (#8904) 2023-03-14 22:09:36 +08:00
Jiaming Yuan
c400fa1e8d
Predictor for vector leaf. (#8898) 2023-03-14 19:07:10 +08:00
amdsc21
364df7db0f fix ../tree/gpu_hist/evaluate_splits.hip bugs, size 64 2023-03-14 06:17:21 +01:00
amdsc21
a2bab03205 fix aft_obj.hip 2023-03-13 23:19:59 +01:00
Jiaming Yuan
8be6095ece
Implement NDCG cache. (#8893) 2023-03-13 22:16:31 +08:00
Jiaming Yuan
9bade7203a
Remove public access to tree model param. (#8902)
* Make tree model param a private member.
* Number of features and targets are immutable after construction.

This is to reduce the number of places where we can run configuration.
2023-03-13 20:55:10 +08:00
Jiaming Yuan
5ba3509dd3
Define multi expand entry. (#8895) 2023-03-13 19:31:05 +08:00
Jiaming Yuan
bbee355b45
[doc][dask] Note on reproducible result. [skip ci] (#8903) 2023-03-13 19:30:35 +08:00
amdsc21
b71c1b50de fix macro, no ! 2023-03-12 23:02:28 +01:00
amdsc21
fa2336fcfd sort bug fix 2023-03-12 07:09:10 +01:00
Jiaming Yuan
3689695d16
[CI] Run RMM gtests. (#8900)
* [CI] Run RMM gtests.

* Update test-cpp-gpu.sh

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-03-12 03:14:31 +08:00
amdsc21
7d96758382 macro format 2023-03-11 06:57:24 +01:00
amdsc21
b0dacc5a80 fix bug 2023-03-11 03:47:23 +01:00
amdsc21
f64152bf97 add helpers.hip 2023-03-11 02:56:50 +01:00
amdsc21
b4dbe7a649 fix isnan 2023-03-11 02:39:58 +01:00
amdsc21
e5b6219a84 typo 2023-03-11 02:30:27 +01:00
amdsc21
3a07b1edf8 complete test porting 2023-03-11 02:17:05 +01:00
amdsc21
9bf16a2ca6 testing porting 2023-03-11 01:38:54 +01:00
amdsc21
332f6a89a9 more tests 2023-03-11 01:33:48 +01:00
amdsc21
204d0c9a53 add hip tests 2023-03-11 00:38:16 +01:00
Jiaming Yuan
36a7396658
Replace dmlc any with std any. (#8892) 2023-03-11 06:11:04 +08:00
amdsc21
e961016e71 rm HIPCUB 2023-03-10 22:21:37 +01:00
amdsc21
f0b8c02f15 merge latest changes 2023-03-10 22:10:20 +01:00
Rong Ou
79efcd37f5
Pick up dmlc-core fix for CSV parser (#8897) 2023-03-11 04:51:43 +08:00
Jiaming Yuan
2aa838c75e
Define multi-strategy parameter. (#8890) 2023-03-11 02:58:01 +08:00
amdsc21
5e8b1842b9 fix Pointer Attr 2023-03-10 19:06:02 +01:00
Jiaming Yuan
6deaec8027
Pass obj info by reference instead of by value. (#8889)
- Pass obj info into tree updater as const pointer.

This way we don't have to initialize the learner model param before configuring gbm, hence
breaking up the dependency between configurations.
2023-03-11 01:38:28 +08:00
amdsc21
9f072b50ba fix __popc 2023-03-10 17:14:31 +01:00
amdsc21
e1ddb5ae58 fix macro XGBOOST_USE_HIP 2023-03-10 07:11:05 +01:00
amdsc21
643e2a7b39 fix macro XGBOOST_USE_HIP 2023-03-10 07:09:41 +01:00
amdsc21
bde3107c3e fix macro XGBOOST_USE_HIP 2023-03-10 07:01:25 +01:00
amdsc21
5edfc1e2e9 finish ellpack_page.cc 2023-03-10 06:41:25 +01:00
amdsc21
c073417d0c finish aft_obj.cu 2023-03-10 06:39:03 +01:00
amdsc21
9bbbeb3f03 finish multiclass_obj.cu 2023-03-10 06:35:46 +01:00
amdsc21
4bde2e3412 finish multiclass_obj.cu 2023-03-10 06:35:21 +01:00
amdsc21
58a9fe07b6 finish multiclass_obj.cu 2023-03-10 06:35:06 +01:00
amdsc21
41407850d5 finish rank_obj.cu 2023-03-10 06:29:08 +01:00
amdsc21
968a1db4c0 finish regression_obj.cu 2023-03-10 06:07:53 +01:00
amdsc21
ad710e4888 finish hinge.cu 2023-03-10 06:04:59 +01:00
amdsc21
4e3c699814 finish adaptive.cu 2023-03-10 06:02:48 +01:00
amdsc21
757de84398 finish quantile.cu 2023-03-10 05:55:51 +01:00
amdsc21
d27f9dfdce finish host_device_vector.cu 2023-03-10 05:45:38 +01:00
amdsc21
14cc438a64 finish stats.cu 2023-03-10 05:38:16 +01:00
amdsc21
911a5d8a60 finish hist_util.cu 2023-03-10 05:32:38 +01:00
amdsc21
54b076b40f finish common.cu 2023-03-10 05:20:29 +01:00
amdsc21
91a5ef762e finish common.cu 2023-03-10 05:19:41 +01:00
amdsc21
8fd2af1c8b finish numeric.cu 2023-03-10 05:16:23 +01:00
amdsc21
bb6adda8a3 finish c_api.cu 2023-03-10 05:12:51 +01:00
amdsc21
a76ccff390 finish c_api.cu 2023-03-10 05:11:20 +01:00
amdsc21
61c0b19331 finish ellpack_page_source.cu 2023-03-10 05:06:36 +01:00
amdsc21
fa9f69dd85 finish sparse_page_dmatrix.cu 2023-03-10 05:04:57 +01:00
Jiaming Yuan
54e001bbf4
[doc][dask] Reference examples from coiled. [skip ci] (#8891) 2023-03-09 20:03:24 -08:00
amdsc21
080fc35c4b finish ellpack_page_raw_format.cu 2023-03-10 05:02:35 +01:00
amdsc21
ccce4cf7e1 finish data.cu 2023-03-10 05:00:57 +01:00
Jiaming Yuan
c5c8f643f2
Remove the cub submodule. (#8888)
XGBoost now uses CTK-11.8 for binary packages, so there's no need to maintain a cub
submodule anymore.
2023-03-09 19:43:02 -08:00
amdsc21
713ab9e1a0 finish sparse_page_source.cu 2023-03-10 04:42:56 +01:00
amdsc21
134cbfddbe finish gradient_index.cu 2023-03-10 04:40:33 +01:00
amdsc21
6e2c5be83e finish array_interface.cu 2023-03-10 04:36:04 +01:00
amdsc21
185dbce21f finish ellpack_page.cu 2023-03-10 04:26:09 +01:00
amdsc21
49732359ef finish iterative_dmatrix.cu 2023-03-10 03:47:00 +01:00
amdsc21
ec9f500a49 finish proxy_dmatrix.cu 2023-03-10 03:40:07 +01:00
amdsc21
53244bef6f finish simple_dmatrix.cu 2023-03-10 03:38:09 +01:00
amdsc21
f0febfbcac finish gpu_predictor.cu 2023-03-10 01:29:54 +01:00
amdsc21
1c58ff61d1 finish fit_stump.cu 2023-03-10 00:46:29 +01:00
amdsc21
1530c03f7d finish constraints.cu 2023-03-09 22:43:51 +01:00
amdsc21
309268de02 finish updater_gpu_hist.cu 2023-03-09 22:40:44 +01:00
amdsc21
500428cc0f finish row_partitioner.cu 2023-03-09 22:31:11 +01:00
amdsc21
495816f694 finished gradient_based_sampler.cu 2023-03-09 22:26:08 +01:00
amdsc21
df42dd2c53 finished evaluator.cu 2023-03-09 22:22:05 +01:00
amdsc21
f55243fda0 finish evaluate_splits.cu 2023-03-09 22:15:10 +01:00
amdsc21
1e09c21456 finished feature_groups.cu 2023-03-09 21:31:00 +01:00
amdsc21
0ed5d3c849 finished histogram.cu 2023-03-09 21:28:37 +01:00
amdsc21
f67e7de7ef finished communicator.cu 2023-03-09 21:02:48 +01:00
amdsc21
5044713388 finished updater_gpu_coordinate.cu 2023-03-09 20:53:54 +01:00
amdsc21
c875f0425f finished rank_metric.cu 2023-03-09 20:48:31 +01:00
amdsc21
4fd08b6c32 finished survival_metric.cu 2023-03-09 20:41:52 +01:00
amdsc21
b9d86d44d6 finish multiclass_metric.cu 2023-03-09 20:37:16 +01:00
amdsc21
a56055225a fix auc.cu 2023-03-09 20:29:38 +01:00
amdsc21
6eba0a56ec fix CMakeLists.txt 2023-03-09 18:57:14 +01:00
Jiaming Yuan
5feee8d4a9
Define core multi-target regression tree structure. (#8884)
- Define a new tree struct embedded in the `RegTree`.
- Provide dispatching functions in `RegTree`.
- Fix some C++17 warnings about the use of nodiscard (currently we disable the warning on
  the CI).
- Use uint32_t instead of size_t for `bst_target_t` as it has a defined size and can be used
  as part of a dmlc parameter.
- Hide the `Segment` struct inside the categorical split matrix.
2023-03-09 19:03:06 +08:00
Jiaming Yuan
46dfcc7d22
Define a new ranking parameter. (#8887) 2023-03-09 17:46:24 +08:00
Krzysztof Dyba
e8a69013e6
[R] update predict docs (#8886) 2023-03-09 05:58:39 +08:00
amdsc21
00c24a58b1 finish elementwise_metric.cu 2023-03-08 22:50:07 +01:00
amdsc21
6fa248b75f try elementwise_metric.cu 2023-03-08 22:42:48 +01:00
amdsc21
946f9e9802 fix gbtree.cc 2023-03-08 21:44:20 +01:00
amdsc21
4c4e5af29c port elementwise_metric.cu 2023-03-08 21:39:56 +01:00
amdsc21
7e1b06417b finish gbtree.cu porting 2023-03-08 21:09:56 +01:00
amdsc21
cdd7794641 add unused option 2023-03-08 20:37:53 +01:00
amdsc21
cd743a1ae9 fix DispatchRadixSort 2023-03-08 20:31:23 +01:00
amdsc21
a45005863b fix DispatchScan 2023-03-08 20:15:33 +01:00
Jiaming Yuan
8c16da8863
[doc] Add note for rabit port. [skip ci] (#8879) 2023-03-08 19:00:10 +08:00
amdsc21
bdcb036592 add context.hip 2023-03-08 07:34:19 +01:00
amdsc21
7a3a9b682a add device_helpers.hip.h 2023-03-08 07:18:33 +01:00
amdsc21
0a711662c3 add device_helpers.hip.h 2023-03-08 07:10:32 +01:00
amdsc21
312e58ec99 enable rocm, fix common.h 2023-03-08 06:45:03 +01:00
amdsc21
ca8f4e7993 enable rocm, fix stats.cuh 2023-03-08 06:43:06 +01:00
amdsc21
60795f22de enable rocm, fix linalg_op.cuh 2023-03-08 06:42:20 +01:00
amdsc21
05fdca893f enable rocm, fix cuda_pinned_allocator.h 2023-03-08 06:39:40 +01:00
amdsc21
d8cc93f3f2 enable rocm, fix algorithm.cuh 2023-03-08 06:38:35 +01:00
amdsc21
62c4efac51 enable rocm, fix transform.h 2023-03-08 06:37:34 +01:00
amdsc21
ba9e00d911 enable rocm, fix hist_util.cuh 2023-03-08 06:36:15 +01:00
amdsc21
d3be67ad8e enable rocm, fix quantile.cuh 2023-03-08 06:32:09 +01:00
amdsc21
2eb0b6aae4 enable rocm, fix threading_utils.cuh 2023-03-08 06:30:52 +01:00
amdsc21
327f1494f1 enable rocm, fix cuda_context.cuh 2023-03-08 06:29:45 +01:00
amdsc21
fa92aa56ee enable rocm, fix device_adapter.cuh 2023-03-08 06:26:31 +01:00
amdsc21
427f6c2a1a enable rocm, fix simple_dmatrix.cuh 2023-03-08 06:24:34 +01:00
amdsc21
270c7b4802 enable rocm, fix row_partitioner.cuh 2023-03-08 06:22:25 +01:00
amdsc21
0fc1f640a9 enable rocm, fix nccl_device_communicator.cuh 2023-03-08 06:18:13 +01:00
dependabot[bot]
85c3334c2b
Bump hadoop-common from 3.2.4 to 3.3.4 in /jvm-packages (#8882)
Bumps hadoop-common from 3.2.4 to 3.3.4.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 13:15:39 +08:00
amdsc21
762fd9028d enable rocm, fix device_communicator_adapter.cuh 2023-03-08 06:13:29 +01:00
amdsc21
f2009533e1 rm hip.h 2023-03-08 06:04:01 +01:00
amdsc21
53b5cd73f2 add hip flags 2023-03-08 03:42:51 +01:00
amdsc21
52b05d934e add hip 2023-03-08 03:32:19 +01:00
amdsc21
840f15209c add HIP flags, common 2023-03-08 03:11:49 +01:00
amdsc21
1e1c7fd8d5 add HIP flags, c_api 2023-03-08 01:34:37 +01:00
amdsc21
f5f800c80d add HIP flags 2023-03-08 01:33:38 +01:00
amdsc21
6b7be96373 add HIP flags 2023-03-08 01:22:25 +01:00
amdsc21
75712b9c3c enable HIP flags 2023-03-08 01:10:07 +01:00
amdsc21
ed45aa2816 Merge branch 'master' into dev-hui 2023-03-08 00:39:33 +01:00
Jiaming Yuan
f236640427
Support F order for the tensor type. (#8872)
- Add F order support for tensor and view.
- Use parameter pack for automatic type cast. (avoid excessive static cast for shape).
2023-03-08 03:27:49 +08:00
dependabot[bot]
f53055f75e
Bump maven-assembly-plugin from 3.4.2 to 3.5.0 in /jvm-packages (#8837)
Bumps [maven-assembly-plugin](https://github.com/apache/maven-assembly-plugin) from 3.4.2 to 3.5.0.
- [Release notes](https://github.com/apache/maven-assembly-plugin/releases)
- [Commits](https://github.com/apache/maven-assembly-plugin/compare/maven-assembly-plugin-3.4.2...maven-assembly-plugin-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-assembly-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 02:20:40 +08:00
Jiaming Yuan
f7ce0ec0df
Upgrade gcc toolchain to 9.x. (#8878)
* Use new toolchain.

* Use gcc-9.

* Use cmake from system.

* Don't link leak.
2023-03-07 08:25:23 -08:00
dependabot[bot]
2b2eb0d0f1
Bump scala-maven-plugin in /jvm-packages/xgboost4j-spark-gpu (#8877)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:33:33 +08:00
dependabot[bot]
5eabcae27b
Bump scala-maven-plugin from 4.8.0 to 4.8.1 in /jvm-packages/xgboost4j (#8876)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:33:16 +08:00
dependabot[bot]
d06b1fc26e
Bump scala-maven-plugin in /jvm-packages/xgboost4j-example (#8875)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:32:06 +08:00
dependabot[bot]
ffa5eb2aa4
Bump scala-maven-plugin in /jvm-packages/xgboost4j-gpu (#8874)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:31:50 +08:00
dependabot[bot]
0f6c502d36
Bump scala-maven-plugin in /jvm-packages/xgboost4j-spark (#8873)
Bumps scala-maven-plugin from 4.8.0 to 4.8.1.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 19:31:23 +08:00
amdsc21
f286ae5bfa add hip rocthrust hipcub 2023-03-07 06:35:00 +01:00
amdsc21
f13a7f8d91 add submodules 2023-03-07 05:44:24 +01:00
amdsc21
c51a1c9aae rename hip.cc to hip 2023-03-07 05:39:53 +01:00
amdsc21
30de728631 fix hip.cc 2023-03-07 05:11:42 +01:00
amdsc21
75fa15b36d add hip support 2023-03-07 04:02:49 +01:00
amdsc21
eb30cb6293 add hip support 2023-03-07 03:49:52 +01:00
amdsc21
cafbfce51f add hip.h 2023-03-07 03:46:26 +01:00
amdsc21
6039a71e6c add hip structure 2023-03-07 02:17:19 +01:00
Jiaming Yuan
7eba285a1e
Support sklearn cross validation for ranker. (#8859)
* Support sklearn cross validation for ranker.

- Add a convention for X to include a special `qid` column.

sklearn utilities consider only `X`, `y` and `sample_weight` for supervised learning
algorithms, but we need an additional qid array for ranking.

It's important to be able to support the cross validation function in sklearn since all
other tuning functions like grid search are based on cross validation.
2023-03-07 00:22:08 +08:00
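A minimal sketch of the `qid`-column convention with synthetic data; it assumes the ranker's default `score` method is acceptable to `cross_val_score`, and that plain (unshuffled) folds keep each query's sorted rows together:

```python
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 256
X = pd.DataFrame(rng.normal(size=(n, 4)), columns=[f"f{i}" for i in range(4)])
X["qid"] = np.sort(rng.integers(0, 16, size=n))  # the special qid column rides along in X
y = rng.integers(0, 4, size=n)                   # graded relevance labels

ranker = xgb.XGBRanker(n_estimators=8, tree_method="hist")
# sklearn utilities pass only X and y, so the query grouping travels inside X.
scores = cross_val_score(ranker, X, y, cv=3)
```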
Jiaming Yuan
cad7401783
Disable gcc parallel extension if openmp is not available. (#8871)
`<parallel/algorithm>` internally includes the `<omp.h>` header, which leads to an error
when OpenMP is not available.
2023-03-06 22:51:06 +08:00
Jiaming Yuan
228a46e8ad
Support learning rate for zero-hessian objectives. (#8866) 2023-03-06 20:33:28 +08:00
Jiaming Yuan
173096a6a7
Discover libasan.so.6. (#8864) 2023-03-06 18:56:54 +08:00
Jiaming Yuan
6a892ce281
Specify src path for isort. (#8867) 2023-03-06 17:30:27 +08:00
Jiaming Yuan
4d665b3fb0
Restore clang tidy test. (#8861) 2023-03-03 13:47:04 -08:00
Rong Ou
2dc22e7aad
Take advantage of C++17 features (#8858)
---------

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-03-04 00:24:13 +08:00
Rory Mitchell
69a50248b7
Fix scope of feature set pointers (#8850)
---------

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2023-03-02 12:37:14 +08:00
mzzhang95
6cef9a08e9
[pyspark] Update eval_metric validation to support list of strings (#8826) 2023-03-02 08:24:12 +08:00
Jiaming Yuan
803d5e3c4c
Update c++ requirement to 17 for the R package. (#8860) 2023-03-01 14:49:39 -08:00
Rong Ou
a5852365fd
Update dmlc-core to get C++17 deprecation warning (#8855) 2023-03-01 12:30:59 -08:00
Rong Ou
7cbaee9916
Support column split in approx tree method (#8847) 2023-03-02 03:59:07 +08:00
Philip Hyunsu Cho
6d8afb2218
[CI] Require C++17 + CMake 3.18; Use CUDA 11.8 in CI (#8853)
* Update to C++17

* Turn off unity build

* Update CMake to 3.18

* Use MSVC 2022 + CUDA 11.8

* Re-create stack for worker images

* Allocate more disk space for Windows

* Temporarily disable clang-tidy

* RAPIDS now requires Python 3.10+

* Unpin cuda-python

* Use latest NCCL

* Use Ubuntu 20.04 in RMM image

* Mark failing mgpu test as xfail
2023-03-01 09:22:24 -08:00
Jiaming Yuan
d54ef56f6f
Fix cache with gc (#8851)
- Make DMatrixCache thread-safe.
- Remove the use of thread-local memory.
2023-03-01 00:39:06 +08:00
Rong Ou
d9688f93c7
Support column-split in row partitioner (#8828) 2023-02-26 04:43:35 +08:00
Mauro Leggieri
90c0633a28
Fixes compilation errors on MSVC x86 targets (#8823) 2023-02-26 03:20:28 +08:00
Rong Ou
a65ad0bd9c
Support column split in histogram builder (#8811) 2023-02-17 22:37:01 +08:00
dependabot[bot]
40fd3d6d5f
Bump maven-javadoc-plugin in /jvm-packages/xgboost4j-gpu (#8815)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 3.4.1 to 3.5.0.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-3.4.1...maven-javadoc-plugin-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-17 16:39:16 +08:00
dependabot[bot]
6ce9a35f55
Bump maven-javadoc-plugin from 3.4.1 to 3.5.0 in /jvm-packages/xgboost4j (#8813)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 3.4.1 to 3.5.0.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-3.4.1...maven-javadoc-plugin-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-17 15:04:06 +08:00
dependabot[bot]
d62daa0b32
Bump maven-javadoc-plugin from 3.4.1 to 3.5.0 in /jvm-packages (#8814)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 3.4.1 to 3.5.0.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-3.4.1...maven-javadoc-plugin-3.5.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 23:16:11 +08:00
Jiaming Yuan
c0afdb6786
Fix CPU bin compression with categorical data. (#8809)
* Fix CPU bin compression with categorical data.

* The bug causes the maximum category to be less than 256 or the maximum number of bins when
the input data is dense.
2023-02-16 04:20:34 +08:00
Jiaming Yuan
cce4af4acf
Initial support for quantile loss. (#8750)
- Add support for Python.
- Add objective.
2023-02-16 02:30:18 +08:00
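A hedged usage sketch; the `reg:quantileerror` objective and `quantile_alpha` parameter names follow later releases and are assumptions against this in-progress commit:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train(
    {"objective": "reg:quantileerror", "quantile_alpha": 0.9},  # fit the 90th percentile
    dtrain,
    num_boost_round=16,
)
pred = booster.predict(dtrain)  # roughly 90% of the labels should sit below pred
```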
Jiaming Yuan
282b1729da
Specify the number of threads for parallel sort. (#8735)
* Specify the number of threads for parallel sort.

- Pass context object into argsort.
- Replace macros with inline functions.
2023-02-16 00:20:19 +08:00
Jiaming Yuan
c7c485d052
Extract fit intercept. (#8793) 2023-02-15 22:41:31 +08:00
Jiaming Yuan
594371e35b
Fix CPP lint. (#8807) 2023-02-15 20:16:35 +08:00
Jiaming Yuan
e62167937b
[CI] Update action cache for jvm tests. (#8806) 2023-02-15 18:43:48 +08:00
Rong Ou
74572b5d45
Add convenience method for allgather (#8804) 2023-02-15 11:37:11 +08:00
WeichenXu
f27a7258c6
Fix feature types param (#8772)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2023-02-14 02:16:42 +08:00
Jiaming Yuan
52d0230b58
Fix merge conflict. (#8791) 2023-02-13 23:43:42 +08:00
Jiaming Yuan
81b2ee1153
Pass DMatrix into metric for caching. (#8790) 2023-02-13 22:15:05 +08:00
Jiaming Yuan
31d3ec07af
Extract device algorithms. (#8789) 2023-02-13 20:53:53 +08:00
Jiaming Yuan
457f704e3d
Add quantile metric. (#8761) 2023-02-13 19:07:40 +08:00
Jiaming Yuan
d11a0044cf
Generalize prediction cache. (#8783)
* Extract most of the functionality into `DMatrixCache`.
* Move the API entry to an independent file to reduce dependency on `predictor.h`.
* Add test.

---------

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-02-13 12:36:43 +08:00
Rong Ou
ed91e775ec
Fix quantile tests running on multi-gpus (#8775)
* Fix quantile tests running on multi-gpus

* Run some gtests with multiple GPUs

* fix mgpu test naming

* Instruct NCCL to print extra logs

* Allocate extra space in /dev/shm to enable NCCL

* use gtest_skip to skip mgpu tests

---------

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2023-02-12 17:00:26 -08:00
Jiaming Yuan
225b3158f6
Support custom metric in sklearn ranker. (#8786) 2023-02-12 13:14:07 +08:00
Jiaming Yuan
17b709acb9
Rename ranking utils to threading utils. (#8785) 2023-02-12 05:41:18 +08:00
Jiaming Yuan
70c9b885ef
Extract floating point rounding routines. (#8771) 2023-02-12 04:26:41 +08:00
Jiaming Yuan
e9c178f402
[doc] Document update [skip ci] (#8784)
- Remove version specifics in cat demo.
- Remove aws yarn.
- Update faq.
- Stop mentioning MPI.
- Update sphinx inventory links.
- Fix typo.
2023-02-12 04:25:22 +08:00
Jiaming Yuan
8a16944664
Fix ranking with quantile dmatrix and group weight. (#8762) 2023-02-10 20:32:35 +08:00
Dai-Jie (Jay) Wu
ad0ccc6e4f
[doc] fix inconsistent doc and minor typo for external memory (#8773) 2023-02-10 01:05:34 +08:00
Jiaming Yuan
199c421d60
Send default configuration from metric to objective. (#8760) 2023-02-09 20:18:07 +08:00
Jiaming Yuan
5f76edd296
Extract make metric name from ranking metric. (#8768)
- Extract the metric parsing routine from ranking.
- Add a test.
- Accept null for string view.
2023-02-09 18:30:21 +08:00
Jiaming Yuan
4ead65a28c
Increase timeout limit for linear. (#8767) 2023-02-09 18:20:12 +08:00
Rong Ou
cbf98cb9c6
Add Allgather to collective communicator (#8765)
* Add Allgather to collective communicator
2023-02-09 11:31:22 +08:00
Jiaming Yuan
48cefa012e
Support multiple alphas for segmented quantile. (#8758) 2023-02-07 17:17:59 +08:00
Jiaming Yuan
c4802bfcd0
Cleanup booster param types. (#8756) 2023-02-07 15:52:19 +08:00
Jiaming Yuan
7b3d473593
[doc] Add demo for inference using individual tree. (#8752) 2023-02-07 04:40:18 +08:00
Jiaming Yuan
28bb01aa22
Extract optional weight. (#8747)
- Extract optional weight from common.h to reduce dependency on this header.
- Add test.
2023-02-07 03:11:53 +08:00
Jiaming Yuan
0f37a01dd9
Require black formatter for the python package. (#8748) 2023-02-07 01:53:33 +08:00
Jiaming Yuan
a2e433a089
Fix empty DMatrix with categorical features. (#8739) 2023-02-07 00:40:11 +08:00
Rory Mitchell
7214a45e83
Fix different number of features in gpu_hist evaluator. (#8754) 2023-02-06 23:15:16 +08:00
Rong Ou
66191e9926
Support cpu quantile sketch with column-wise data split (#8742) 2023-02-05 14:26:24 +08:00
Jiaming Yuan
c1786849e3
Use array interface for CSC matrix. (#8672)
* Use array interface for CSC matrix.

Use array interface for CSC matrix and align the interface with CSR and dense.

- Fix nthread issue in the R package DMatrix.
- Unify the behavior of handling `missing` with other inputs.
- Unify the behavior of handling `missing` across R, Python, Java, and Scala DMatrix.
- Expose `num_non_missing` to the JVM interface.
- Deprecate old CSR and CSC constructors.
2023-02-05 01:59:46 +08:00
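For reference, a small sketch (synthetic data) of the user-facing effect: CSC input now takes the same path as CSR and dense, so `nthread` and `missing` behave consistently:

```python
import numpy as np
import scipy.sparse as sp
import xgboost as xgb

rng = np.random.default_rng(0)
csc = sp.random(64, 8, density=0.25, format="csc", random_state=0)
y = rng.integers(0, 2, size=64)

# nthread and missing are now honored for CSC, matching CSR and dense inputs.
dtrain = xgb.DMatrix(csc, label=y, nthread=2, missing=np.nan)
```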
BenEfrati
213b5602d9
Add sample_weight to eval_metric (#8706) 2023-02-05 00:06:38 +08:00
Philip Hyunsu Cho
dd79ab846f
[CI] Fix failing arm build (#8751)
* Always install Conda env into /opt/python; use Mamba

* Change ownership of Conda env to buildkite-agent user

* Use unique name

* Fix
2023-02-03 22:32:48 -08:00
Jiaming Yuan
0e61ba57d6
Fix GPU L1 error. (#8749) 2023-02-04 03:02:00 +08:00
Hamel Husain
16ef016ba7
[CI] Use bash -l {0} as the default in GitHub Actions (#8741) 2023-01-31 15:00:29 +08:00
James Lamb
0d8248ddcd
[R] discourage use of regex for fixed string comparisons (#8736) 2023-01-30 18:47:21 +08:00
Jiaming Yuan
1325ba9251
Support primitive types of pyarrow-backed pandas dataframe. (#8653)
Categorical data (dictionary) is not supported at the moment.
2023-01-30 17:53:29 +08:00
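A minimal sketch of what this enables, assuming pandas >= 1.5 with pyarrow installed; the column names and values are illustrative:

```python
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({"a": [1, 2, 3], "b": [0.5, 1.5, 2.5]}).astype(
    {"a": "int64[pyarrow]", "b": "float64[pyarrow]"}  # pyarrow-backed primitive dtypes
)
dtrain = xgb.DMatrix(df, label=[0.0, 1.0, 0.0])
```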
Jiaming Yuan
3760cede0f
Consistent use of context to specify number of threads. (#8733)
- Use context in all tests.
- Use context in R.
- Use context in C API DMatrix initialization. (0 threads is used as the default).
2023-01-30 15:25:31 +08:00
Jiaming Yuan
21a28f2cc5
Small refactor for hist builder. (#8698)
- Use span instead of vector as parameter. No perf change as the builder works on pointers.
- Use const pointer for reg tree.
2023-01-30 14:06:41 +08:00
Rong Ou
8af98e30fc
Use in-memory communicator to test quantile (#8710) 2023-01-27 23:28:28 +08:00
James Lamb
96e6b6beba
[ci] remove unused imports in tests (#8707) 2023-01-25 14:10:29 +08:00
Philip Hyunsu Cho
d29e45371f
[R-package] Alter xgb.train() to accept multiple eval metrics as a list (#8657) 2023-01-24 17:14:14 -08:00
James Lamb
0f4d52a864
[R] add tests on print.xgb.DMatrix() (#8704) 2023-01-22 06:44:14 +08:00
Jiaming Yuan
9fb12b20a4
Cleanup the callback module. (#8702)
- Cleanup pylint markers.
- Run formatter.
- Update examples of using callback.
2023-01-22 00:13:49 +08:00
Jiaming Yuan
34eee56256
Fix compiler warnings. (#8703)
Fix warnings about signed/unsigned comparisons.
2023-01-21 15:16:23 +08:00
Jiaming Yuan
e49e0998c0
Extract CPU sampling routines. (#8697) 2023-01-19 23:28:18 +08:00
Jiaming Yuan
7a068af1a3
Workaround CUDA warning. (#8696) 2023-01-19 09:16:08 +08:00
James Lamb
6933240837
[python-package] remove unused functions in xgboost.data (#8695) 2023-01-19 08:02:54 +08:00
Jiaming Yuan
4416452f94
Return single thread from context when called inside omp region. (#8693) 2023-01-18 09:23:37 +08:00
Jiaming Yuan
31b9cbab3d
Make sure input numpy array is aligned. (#8690)
- Use `np.require` to specify that alignment is required.
- Apply the same to scipy CSR.
- Validate the input pointer in `ArrayInterface`.
2023-01-18 08:12:13 +08:00
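A small sketch of the `np.require` pattern from the first bullet: build a deliberately misaligned view, then request an aligned, C-contiguous copy (the buffer construction is only illustrative):

```python
import numpy as np

buf = bytearray(8 * 11)
# A float64 view starting 3 bytes into the buffer is misaligned on typical platforms.
misaligned = np.frombuffer(buf, dtype=np.float64, count=10, offset=3)

# np.require copies only when needed to satisfy the requested flags.
aligned = np.require(misaligned, dtype=np.float64, requirements=["C", "A"])
assert aligned.flags["ALIGNED"] and aligned.flags["C_CONTIGUOUS"]
```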
Jiaming Yuan
175986b739
[doc] Add missing document for pyspark ranker. [skip ci] (#8692) 2023-01-18 07:52:18 +08:00
Rong Ou
78396f8a6e
Initial support for column-split cpu predictor (#8676) 2023-01-18 06:33:13 +08:00
James Lamb
980233e648
[R] remove XGBoosterPredict_R (fixes #8687) (#8689) 2023-01-17 14:19:01 +08:00
Jiaming Yuan
247946a875
Cache transformed in QuantileDMatrix for efficiency. (#8666) 2023-01-17 06:02:40 +08:00
James Lamb
06ba285f71
[R] fix OpenMP detection on macOS (#8684) 2023-01-17 05:01:26 +08:00
Jiaming Yuan
43152657d4
Extract JSON type check. (#8677)
- Reuse it in `GetMissing`.
- Add test.
2023-01-17 03:11:07 +08:00
Jiaming Yuan
9f598efc3e
Rename context in Metric. (#8686) 2023-01-17 01:10:13 +08:00
Jiaming Yuan
d6018eb4b9
Remove all use of DeviceQuantileDMatrix. (#8665) 2023-01-17 00:04:10 +08:00
Jiaming Yuan
0ae8df9a65
Define default ctors for gpair. (#8660)
* Define default ctors for gpair.

Fix clang warning:

Definition of implicit copy assignment operator for 'GradientPairInternal<float>' is
deprecated because it has a user-declared copy constructor
2023-01-16 22:52:13 +08:00
dependabot[bot]
a9c6199723
Bump maven-project-info-reports-plugin in /jvm-packages (#8662)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 3.4.1 to 3.4.2.
- [Release notes](https://github.com/apache/maven-project-info-reports-plugin/releases)
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-3.4.1...maven-project-info-reports-plugin-3.4.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-16 04:57:28 +08:00
dependabot[bot]
37d4482e3e
Bump maven-checkstyle-plugin from 3.2.0 to 3.2.1 in /jvm-packages (#8661)
Bumps [maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.2.0 to 3.2.1.
- [Release notes](https://github.com/apache/maven-checkstyle-plugin/releases)
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.2.0...maven-checkstyle-plugin-3.2.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-16 02:02:03 +08:00
James Lamb
e227abc57a
[R] avoid leaving test files behind (#8685) 2023-01-15 23:34:54 +08:00
Jiaming Yuan
e7d612d22c
[R] Fix threads used to create DMatrix in predict. (#8681) 2023-01-15 03:09:08 +08:00
James Lamb
292df67824
[R] remove unused define XGBOOST_CUSTOMIZE_LOGGER (#8647) 2023-01-15 02:29:25 +08:00
Jiaming Yuan
f7a2f52136
[R] Get CXX flags from R CMD config. (#8669) 2023-01-14 16:48:21 +08:00
Jiaming Yuan
07cf3d3e53
Fix threads in DMatrix slice. (#8667) 2023-01-14 07:16:57 +08:00
Jiaming Yuan
e27cda7626
[CI] Skip pyspark sparse tests. (#8675) 2023-01-14 05:37:00 +08:00
Jiaming Yuan
b2b6a8aa39
[R] fix CSR input. (#8673) 2023-01-14 01:32:41 +08:00
Bobby Wang
72ec0c5484
[pyspark] support pred_contribs (#8633) 2023-01-11 16:51:12 +08:00
Jiaming Yuan
cfa994d57f
Multi-target support for L1 error. (#8652)
- Add matrix support to the median function.
- Iterate through each target for quantile computation.
2023-01-11 05:51:14 +08:00
Jiaming Yuan
badeff1d74
Init estimation for regression. (#8272) 2023-01-11 02:04:56 +08:00
Jiaming Yuan
1b58d81315
[doc] Document Python inputs. (#8643) 2023-01-10 15:39:32 +08:00
Bobby Wang
4e12f3e1bc
[Breaking][jvm-packages] Bump rapids version to 22.12.0 (#8648)
* [jvm-packages] Bump rapids version to 22.12.0

This PR bumps spark version to 3.1.1 and the rapids version
to 22.12.0, which means the latest xgboost can't run
with the old rapids packages.
2023-01-07 18:59:17 +08:00
Jiaming Yuan
06a1cb6e03
Release news for patch releases including upcoming 1.7.3. [skip ci] (#8645) 2023-01-06 16:19:16 +08:00
Emre Batuhan Baloğlu
2b88099c74
[doc] Update custom_metric_obj.rst (#8626) 2023-01-06 05:08:25 +08:00
Jiaming Yuan
e68a152d9e
Do not return internal value for get_params. (#8634) 2023-01-05 17:48:26 +08:00
Jiaming Yuan
26c9882e23
Fix loading GPU pickle with a CPU-only xgboost distribution. (#8632)
We can handle loading the pickle on a CPU-only machine if XGBoost is built with CUDA
enabled (Linux and Windows PyPI package), but not if the distribution is CPU-only (macOS
PyPI package).
2023-01-05 02:14:30 +08:00
Bobby Wang
d3ad0524e7
[pyspark] Re-work _fit function (#8630) 2023-01-04 18:21:57 +08:00
Jiaming Yuan
beefd28471
Split up SHAP from RegTree. (#8612)
* Split up SHAP from `RegTree`.

Simplify the tree interface.
2023-01-04 18:17:47 +08:00
Jiaming Yuan
d308124910
Refactor PySpark tests. (#8605)
- Convert classifier tests to pytest tests.
- Replace hardcoded tests.
2023-01-04 17:05:16 +08:00
James Lamb
fa44a33ee6
remove unused variables in JSON-parsing code (#8627) 2023-01-04 15:50:33 +08:00
Jiaming Yuan
6eaddaa9c3
[CI] Fix CI with updated dependencies. (#8631)
* [CI] Fix CI with updated dependencies.

- Fix jvm package get iris.

* Skip SHAP test for now.

* Revert "Skip SHAP test for now."

This reverts commit 9aa28b4d8aee53fa95d92d2a879c6783ff4b2faa.

* Catch all exceptions.
2023-01-03 21:04:04 -08:00
Jiaming Yuan
8d545ab2a2
Implement fit stump. (#8607) 2023-01-04 04:14:51 +08:00
dependabot[bot]
20e6087579
Bump kryo from 5.3.0 to 5.4.0 in /jvm-packages (#8629)
Bumps [kryo](https://github.com/EsotericSoftware/kryo) from 5.3.0 to 5.4.0.
- [Release notes](https://github.com/EsotericSoftware/kryo/releases)
- [Commits](https://github.com/EsotericSoftware/kryo/compare/kryo-parent-5.3.0...kryo-parent-5.4.0)

---
updated-dependencies:
- dependency-name: com.esotericsoftware:kryo
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-03 18:44:39 +08:00
James Lamb
dd72af2620
[CI] fix git errors related to directory ownership (#8628) 2023-01-01 16:05:44 -08:00
James Lamb
9a98c3726c
[R] [CI] add more linting checks (#8624) 2022-12-29 18:20:36 +08:00
James Lamb
b05abfc494
[CI] remove unused cpp test helper function (#8625) 2022-12-28 02:47:52 +08:00
Rong Ou
3ceeb8c61c
Add data split mode to DMatrix MetaInfo (#8568) 2022-12-25 20:37:37 +08:00
Rong Ou
77b069c25d
Support bitwise allreduce operations in the communicator (#8623) 2022-12-25 06:40:05 +08:00
James Lamb
c7e82b5914
[R] enforce lintr checks (fixes #8012) (#8613) 2022-12-25 05:02:56 +08:00
James Lamb
f489d824ca
[R] remove unused imports in tests (#8614) 2022-12-25 03:45:47 +08:00
Jiaming Yuan
c430ae52f3
Fix mypy errors with the latest numpy. (#8617) 2022-12-21 01:42:05 -08:00
Philip Hyunsu Cho
5bf9e79413
[CI] Disable gtest with RMM (#8620) 2022-12-21 01:41:34 -08:00
Jiaming Yuan
c6a8754c62
Define CUDA Context. (#8604)
We will transition to a non-default, non-blocking CUDA stream.
2022-12-20 15:15:07 +08:00
James Lamb
e01639548a
[R] remove unused compiler flag RABIT_CUSTOMIZE_MSG_ (#8610) 2022-12-17 19:36:35 +08:00
James Lamb
17ce1f26c8
[R] address some lintr warnings (#8609) 2022-12-17 18:36:14 +08:00
James Lamb
53e6e32718
[R] resolve assignment_linter warnings (#8599) 2022-12-17 01:22:41 +08:00
Jiaming Yuan
f6effa1734
Support Series and Python primitives in inplace_predict and QDM (#8547) 2022-12-17 00:15:15 +08:00
Jiaming Yuan
a10e4cba4e
Fix linalg iterator. (#8603) 2022-12-16 23:05:03 +08:00
Jiaming Yuan
38887a1876
Fix windows build on buildkite. (#8602) 2022-12-16 21:12:24 +08:00
Jiaming Yuan
43a647a4dd
Fix inference with categorical feature. (#8591) 2022-12-15 17:57:26 +08:00
Esteban Djeordjian
7dc3e95a77
Added ranges for alpha and lambda in docs (#8597) 2022-12-15 16:51:04 +08:00
dependabot[bot]
0c38ca7f6e
Bump nexus-staging-maven-plugin from 1.6.7 to 1.6.13 in /jvm-packages (#8600) 2022-12-15 08:44:05 +00:00
Jiaming Yuan
001e663d42
Set enable_categorical to True in predict. (#8592) 2022-12-15 05:27:06 +08:00
James Lamb
7a07dcf651
[R] resolve line_length_linter warnings (#8565)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2022-12-14 21:04:24 +08:00
dependabot[bot]
eac980fbfc
Bump maven-checkstyle-plugin from 3.1.2 to 3.2.0 in /jvm-packages (#8594)
Bumps [maven-checkstyle-plugin](https://github.com/apache/maven-checkstyle-plugin) from 3.1.2 to 3.2.0.
- [Release notes](https://github.com/apache/maven-checkstyle-plugin/releases)
- [Commits](https://github.com/apache/maven-checkstyle-plugin/compare/maven-checkstyle-plugin-3.1.2...maven-checkstyle-plugin-3.2.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-checkstyle-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-14 19:46:03 +08:00
James Lamb
06ea6c7e79
[python] remove unnecessary conversions between data structures (#8546)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2022-12-14 18:32:02 +08:00
dependabot[bot]
f64871c74a
Bump spark.version from 3.0.1 to 3.0.3 in /jvm-packages (#8593)
Bumps `spark.version` from 3.0.1 to 3.0.3.

Updates `spark-mllib_2.12` from 3.0.1 to 3.0.3

Updates `spark-core_2.12` from 3.0.1 to 3.0.3

Updates `spark-sql_2.12` from 3.0.1 to 3.0.3

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-mllib_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-core_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.apache.spark:spark-sql_2.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-14 17:23:48 +08:00
Jiaming Yuan
40343c8ee1
Test dask demos. (#8557)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-12-13 18:37:31 +08:00
Rong Ou
15a88ceef0
Fix deprecated CUB calls in CUDA 12.0 (#8578) 2022-12-12 17:02:30 +08:00
Philip Hyunsu Cho
35d8447282
[CI] Use conda-forge channel in conda (#8583) 2022-12-11 23:25:29 -08:00
Rong Ou
42e6fbb0db
Fix sklearn test that calls a removed field (#8579) 2022-12-09 13:06:44 -08:00
Jiaming Yuan
deb3edf562
Support list and tuple for QDM. (#8542) 2022-12-10 01:14:44 +08:00
Jiaming Yuan
8824b40961
Update date in release script. [skip ci] (#8574) 2022-12-09 23:16:10 +08:00
Rong Ou
0caf2be684
Update NVFlare demo to work with the latest release (#8576) 2022-12-09 02:48:20 +08:00
James Lamb
ffee35e0f0
[R] [ci] remove dependency on {devtools} (#8563) 2022-12-09 01:21:28 +08:00
James Lamb
fbe40d00d8
[R] resolve brace_linter warnings (#8564) 2022-12-08 23:01:00 +08:00
Bobby Wang
40a1a2ffa8
[pyspark] check use_qdm across all the workers (#8496) 2022-12-08 18:09:17 +08:00
dependabot[bot]
5aeb8f7009
Bump maven-gpg-plugin from 1.5 to 3.0.1 in /jvm-packages (#8571) 2022-12-08 06:59:11 +00:00
dependabot[bot]
f592a5125b
Bump flink.version from 1.7.2 to 1.8.3 in /jvm-packages (#8561) 2022-12-07 20:53:22 +00:00
dependabot[bot]
27aea6c7b5
Bump maven-surefire-plugin from 2.19.1 to 2.22.2 in /jvm-packages (#8562) 2022-12-07 17:56:05 +00:00
Gianfrancesco Angelini
5540019373
feat(py, plot_importance): + values_format as arg (#8540) 2022-12-08 00:47:28 +08:00
François Bobot
8c6630c310
Typo in model schema (#8543)
categorical -> categories
2022-12-07 22:56:59 +08:00
Matthew Rocklin
b7ffdcdbb9
Properly await async method client.wait_for_workers (#8558)
* Properly await async method client.wait_for_workers

* ignore mypy error.

Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-12-07 21:49:30 +08:00
dependabot[bot]
4f1e453ff5
Bump maven-project-info-reports-plugin in /jvm-packages (#8560)
Bumps [maven-project-info-reports-plugin](https://github.com/apache/maven-project-info-reports-plugin) from 2.2 to 3.4.1.
- [Release notes](https://github.com/apache/maven-project-info-reports-plugin/releases)
- [Commits](https://github.com/apache/maven-project-info-reports-plugin/compare/maven-project-info-reports-plugin-2.2...maven-project-info-reports-plugin-3.4.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-project-info-reports-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-07 14:33:29 +08:00
Jiaming Yuan
3e26107a9c
Rename and extract Context. (#8528)
* Rename `GenericParameter` to `Context`.
* Rename header file to reflect the change.
* Rename all references.
2022-12-07 04:58:54 +08:00
James Lamb
05fc6f3ca9
[R] [ci] move linting code out of package (#8545) 2022-12-07 03:18:17 +08:00
Jiaming Yuan
e38fe21e0d
Cleanup regression objectives. (#8539) 2022-12-07 01:05:42 +08:00
dependabot[bot]
7774bf628e
Bump scalatest-maven-plugin from 1.0 to 2.2.0 in /jvm-packages (#8509)
Bumps [scalatest-maven-plugin](https://github.com/scalatest/scalatest-maven-plugin) from 1.0 to 2.2.0.
- [Release notes](https://github.com/scalatest/scalatest-maven-plugin/releases)
- [Commits](https://github.com/scalatest/scalatest-maven-plugin/commits/release-2.2.0)

---
updated-dependencies:
- dependency-name: org.scalatest:scalatest-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 21:34:22 +08:00
dependabot[bot]
4a99c9bdb8
Bump commons-lang3 from 3.9 to 3.12.0 in /jvm-packages (#8548)
Bumps commons-lang3 from 3.9 to 3.12.0.

---
updated-dependencies:
- dependency-name: org.apache.commons:commons-lang3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 20:13:46 +08:00
Jiaming Yuan
d99bdd1b1e
[CI] Fix github action mismatched glibcxx. (#8551)
* [CI] Fix github action mismatched glibcxx.

Split up the Linux test to use the toolchain from conda forge.
2022-12-06 17:42:15 +08:00
dependabot[bot]
ed1a4f3205
Bump maven-source-plugin from 2.2.1 to 3.2.1 in /jvm-packages (#8549)
Bumps [maven-source-plugin](https://github.com/apache/maven-source-plugin) from 2.2.1 to 3.2.1.
- [Release notes](https://github.com/apache/maven-source-plugin/releases)
- [Commits](https://github.com/apache/maven-source-plugin/compare/maven-source-plugin-2.2.1...maven-source-plugin-3.2.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-source-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 14:10:19 +08:00
dependabot[bot]
e85c9b987b
Bump maven-site-plugin from 3.0 to 3.12.1 in /jvm-packages (#8533)
Bumps [maven-site-plugin](https://github.com/apache/maven-site-plugin) from 3.0 to 3.12.1.
- [Release notes](https://github.com/apache/maven-site-plugin/releases)
- [Commits](https://github.com/apache/maven-site-plugin/compare/maven-site-plugin-3.0...maven-site-plugin-3.12.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-site-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 11:35:53 +08:00
Jiaming Yuan
7ac52e674f
[doc] Update model schema. (#8538)
* Update model schema with `num_target`.
2022-12-06 11:35:07 +08:00
dependabot[bot]
2790e3091f
Bump maven-assembly-plugin from 2.6 to 3.4.2 in /jvm-packages (#8521)
Bumps [maven-assembly-plugin](https://github.com/apache/maven-assembly-plugin) from 2.6 to 3.4.2.
- [Release notes](https://github.com/apache/maven-assembly-plugin/releases)
- [Commits](https://github.com/apache/maven-assembly-plugin/compare/maven-assembly-plugin-2.6...maven-assembly-plugin-3.4.2)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-assembly-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 04:07:04 +08:00
dependabot[bot]
0c1769b3a5
Bump maven-javadoc-plugin in /jvm-packages/xgboost4j (#8534)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 2.10.3 to 3.4.1.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-2.10.3...maven-javadoc-plugin-3.4.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 02:30:46 +08:00
dependabot[bot]
67752e3967
Bump scala-maven-plugin from 3.2.2 to 4.8.0 in /jvm-packages (#8532)
Bumps scala-maven-plugin from 3.2.2 to 4.8.0.

---
updated-dependencies:
- dependency-name: net.alchim31.maven:scala-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 00:58:30 +08:00
Jiaming Yuan
8afcecc025
[doc] Fix outdated document [skip ci] (#8527)
* [doc] Fix document around categorical parameters. [skip ci]

* note on validate parameter [skip ci]

* Fix dask doc as well [skip ci]
2022-12-06 00:56:17 +08:00
Jiaming Yuan
e143a4dd7e
[pyspark] Refactor local tests. (#8525)
- Use pytest fixture for spark session.
- Replace hardcoded results.
2022-12-05 23:49:54 +08:00
Philip Hyunsu Cho
42c5ee5588
[jvm-packages] Bump version of akka packages (#8524) 2022-12-05 22:45:00 +08:00
Jiaming Yuan
e3bf5565ab
Extract transform iterator. (#8498) 2022-12-05 21:37:07 +08:00
Jiaming Yuan
d8544e4d9e
[R] Remove unused assert definition. (#8526) 2022-12-05 20:29:03 +08:00
dependabot[bot]
d8d2eefa63
Bump junit from 4.13.1 to 4.13.2 in /jvm-packages/xgboost4j-gpu (#8516)
Bumps [junit](https://github.com/junit-team/junit4) from 4.13.1 to 4.13.2.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.13.1.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.13.1...r4.13.2)

---
updated-dependencies:
- dependency-name: junit:junit
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 19:11:33 +08:00
dependabot[bot]
8e8d3ac708
Bump kryo from 4.0.2 to 5.3.0 in /jvm-packages (#8503)
Bumps [kryo](https://github.com/EsotericSoftware/kryo) from 4.0.2 to 5.3.0.
- [Release notes](https://github.com/EsotericSoftware/kryo/releases)
- [Commits](https://github.com/EsotericSoftware/kryo/compare/kryo-parent-4.0.2...kryo-parent-5.3.0)

---
updated-dependencies:
- dependency-name: com.esotericsoftware:kryo
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 18:01:59 +08:00
dependabot[bot]
3bfe90c183
Bump exec-maven-plugin in /jvm-packages/xgboost4j-gpu (#8531)
Bumps [exec-maven-plugin](https://github.com/mojohaus/exec-maven-plugin) from 1.6.0 to 3.1.0.
- [Release notes](https://github.com/mojohaus/exec-maven-plugin/releases)
- [Commits](https://github.com/mojohaus/exec-maven-plugin/compare/exec-maven-plugin-1.6.0...exec-maven-plugin-3.1.0)

---
updated-dependencies:
- dependency-name: org.codehaus.mojo:exec-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 17:41:41 +08:00
dependabot[bot]
a903241fbf
Bump maven-javadoc-plugin in /jvm-packages/xgboost4j-gpu (#8530)
Bumps [maven-javadoc-plugin](https://github.com/apache/maven-javadoc-plugin) from 2.10.3 to 3.4.1.
- [Release notes](https://github.com/apache/maven-javadoc-plugin/releases)
- [Commits](https://github.com/apache/maven-javadoc-plugin/compare/maven-javadoc-plugin-2.10.3...maven-javadoc-plugin-3.4.1)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-javadoc-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 16:32:08 +08:00
Bobby Wang
f1e9bbcee5
[breaking] [jvm-packages] change DeviceQuantileDMatrix into QuantileDMatrix (#8461) 2022-12-05 12:23:21 +08:00
Rong Ou
78d65a1928
Initial support for column-wise data split (#8468) 2022-12-04 01:37:51 +08:00
dependabot[bot]
c0609b98f1
Bump exec-maven-plugin from 1.6.0 to 3.1.0 in /jvm-packages/xgboost4j (#8518)
Bumps [exec-maven-plugin](https://github.com/mojohaus/exec-maven-plugin) from 1.6.0 to 3.1.0.
- [Release notes](https://github.com/mojohaus/exec-maven-plugin/releases)
- [Commits](https://github.com/mojohaus/exec-maven-plugin/compare/exec-maven-plugin-1.6.0...exec-maven-plugin-3.1.0)

---
updated-dependencies:
- dependency-name: org.codehaus.mojo:exec-maven-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:24:22 -08:00
dependabot[bot]
ba0ed255ef
Bump maven-jar-plugin from 3.0.2 to 3.3.0 in /jvm-packages/xgboost4j-gpu (#8512)
Bumps [maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.0.2 to 3.3.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.0.2...maven-jar-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:23:03 -08:00
dependabot[bot]
1d8bb7332f
Bump maven-resources-plugin in /jvm-packages/xgboost4j (#8515)
Bumps [maven-resources-plugin](https://github.com/apache/maven-resources-plugin) from 3.1.0 to 3.3.0.
- [Release notes](https://github.com/apache/maven-resources-plugin/releases)
- [Commits](https://github.com/apache/maven-resources-plugin/compare/maven-resources-plugin-3.1.0...maven-resources-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-resources-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:22:26 -08:00
dependabot[bot]
dcc92a6703
Bump maven-jar-plugin from 3.0.2 to 3.3.0 in /jvm-packages/xgboost4j (#8517)
Bumps [maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.0.2 to 3.3.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.0.2...maven-jar-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:21:42 -08:00
dependabot[bot]
fcafd3a777
Bump maven-jar-plugin from 3.0.2 to 3.3.0 in /jvm-packages (#8506)
Bumps [maven-jar-plugin](https://github.com/apache/maven-jar-plugin) from 3.0.2 to 3.3.0.
- [Release notes](https://github.com/apache/maven-jar-plugin/releases)
- [Commits](https://github.com/apache/maven-jar-plugin/compare/maven-jar-plugin-3.0.2...maven-jar-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-jar-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:21:01 -08:00
dependabot[bot]
b23e97f8b0
Bump maven-resources-plugin from 3.1.0 to 3.3.0 in /jvm-packages (#8504)
Bumps [maven-resources-plugin](https://github.com/apache/maven-resources-plugin) from 3.1.0 to 3.3.0.
- [Release notes](https://github.com/apache/maven-resources-plugin/releases)
- [Commits](https://github.com/apache/maven-resources-plugin/compare/maven-resources-plugin-3.1.0...maven-resources-plugin-3.3.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-resources-plugin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-02 17:20:37 -08:00
Bobby Wang
8e41ad24f5
[pyspark] sort qid for SparkRanker (#8497)
* [pyspark] sort qid for SparkRanker

* resolve comments
2022-12-01 16:40:35 -08:00
dependabot[bot]
f747e05eac
Bump maven-deploy-plugin from 2.8.2 to 3.0.0 in /jvm-packages (#8502)
Bumps [maven-deploy-plugin](https://github.com/apache/maven-deploy-plugin) from 2.8.2 to 3.0.0.
- [Release notes](https://github.com/apache/maven-deploy-plugin/releases)
- [Commits](https://github.com/apache/maven-deploy-plugin/compare/maven-deploy-plugin-2.8.2...maven-deploy-plugin-3.0.0)

---
updated-dependencies:
- dependency-name: org.apache.maven.plugins:maven-deploy-plugin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-01 16:39:59 -08:00
Philip Hyunsu Cho
2546d139d6
[jvm-packages] Add missing commons-lang3 dependency to xgboost4j-gpu (#8508)
* [jvm-packages] Add missing commons-lang3 dependency to xgboost4j-gpu

* Update commons-lang3
2022-12-01 16:27:11 -08:00
Philip Hyunsu Cho
7c6f2346d3
[jvm-packages] Configure dependabot properly (#8507)
* [jvm-packages] Configure dependabot properly

* Allow automatic updates for Scala and Spark within the same major version
2022-12-01 16:26:47 -08:00
Philip Hyunsu Cho
f550109641
Bump some old dependencies of JVM packages (#8456) 2022-11-30 23:04:08 -08:00
Philip Hyunsu Cho
9a98e79649
[jvm-packages] Set up dependabot (#8501) 2022-11-30 22:46:17 -08:00
Rong Ou
a8255ea678
Add an in-memory collective communicator (#8494) 2022-12-01 00:24:12 +08:00
Jiaming Yuan
157e98edf7
Support half type from cupy. (#8487) 2022-11-30 17:56:42 +08:00
Jiaming Yuan
addaa63732
Support null value in CUDA array interface. (#8486)
* Support null value in CUDA array interface.

- Fix for potential null value in array interface.
- Fix incorrect check on mask stride.

* Simple tests.

* Extract mask.
2022-11-28 17:48:25 -08:00
Jiaming Yuan
3fc1046fd3
Reduce compiler warnings on CPU-only build. (#8483) 2022-11-29 00:04:16 +08:00
Jiaming Yuan
d666ba775e
Support all pandas nullable integer types. (#8480)
- Enumerate all pandas integer types.
- Tests for `None`, `nan`, and `pd.NA`
2022-11-28 22:38:16 +08:00
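A minimal sketch of the nullable-integer handling described above (values illustrative):

```python
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({
    "a": pd.array([1, None, 3], dtype="Int64"),  # nullable integer; the None becomes pd.NA
    "b": [0.5, 1.5, 2.5],
})
dtrain = xgb.DMatrix(df, label=[0.0, 1.0, 0.0])  # the NA entry is treated as missing
```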
Jiaming Yuan
f2209c1fe4
Don't shuffle columns in categorical tests. (#8446) 2022-11-28 20:28:06 +08:00
WeichenXu
67ea1c3435
[pyspark] Make QDM optional based on cuDF check (#8471) 2022-11-27 14:58:54 +08:00
Jiaming Yuan
8f97c92541
Support half type for pandas. (#8481) 2022-11-24 12:47:40 +08:00
Jiaming Yuan
e07245f110
Take datatable as row major input. (#8472)
* Take datatable as row major input.

Try to avoid a transform with a dense table.
2022-11-24 09:20:13 +08:00
Jiaming Yuan
284dcf8d22
Add script for change version. (#8443)
- Replace jvm regex replacement script with mvn command.
- Replace cmake script for python version with python script.
- Automate rest of the manual steps.

The script can handle dev branches, RC releases, and formal release versions.
2022-11-24 00:06:39 +08:00
Jiaming Yuan
5f1a6fca0d
[R] Use new interface for creating DMatrix from CSR. (#8455)
* [R] Use new interface for creating DMatrix from CSR.

- CSC is still using the old API.

The old API is not aware of the `nthread` parameter, which makes the DMatrix use all available
threads during construction and during transformations like `SparsePage` -> `CSCPage`.
2022-11-23 21:36:43 +08:00
Nick Becker
58d211545f
explain cpu/gpu interop and link to model IO tutorial (#8450) 2022-11-23 20:58:28 +08:00
Bobby Wang
2dde65f807
[ci] reduce pyspark test time (#8324) 2022-11-21 16:58:00 +08:00
Joyce
3b8a0e08f7
feat: use commit hash instead of version in actions workflows (#8460)
Signed-off-by: Joyce Brum <joycebrum@google.com>
2022-11-17 22:04:11 +08:00
Rong Ou
30b1a26fc0
Remove unused page size constant (#8457) 2022-11-17 11:41:39 +08:00
Otto von Sperling
812d577597
Fix inline code blocks in 'spark_estimator.rst' (#8465) 2022-11-15 05:47:58 +08:00
Robert Maynard
16f96b6cfb
Work with newer thrust and libcudacxx (#8454)
* Thrust 1.17 removes the experimental/pinned_allocator.

When xgboost is brought into a large project, it can
be compiled against Thrust 1.17+, which doesn't offer
this experimental allocator.

To ensure that xgboost keeps working in all environments going forward, we provide an
xgboost-namespaced version of the pinned_allocator that was previously in Thrust.
2022-11-11 04:22:53 +08:00
Gavin Zhang
0c6266bc4a
SO_DOMAIN is not supported on IBM i; use getsockname instead (#8437)
Co-authored-by: GavinZhang <zhanggan@cn.ibm.com>
2022-11-10 23:54:57 +08:00
Jiaming Yuan
9dd8d70f0e
Fix mypy errors. (#8444) 2022-11-09 13:19:11 +08:00
Jiaming Yuan
0252d504d8
Fix R package build on CI. (#8445)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-11-09 12:18:36 +08:00
Jiaming Yuan
a83748eb45
[CI] Revise R tests. (#8430)
- Use the standard package check (check on the tarball instead of the source tree).
- Run commands in parallel.
- Cleanup dependencies installation.
- Replace makefile.
- Documentation.
- Test using the image from rhub.
2022-11-09 09:12:13 +08:00
Rong Ou
4449e30184
Always link federated proto statically (#8442) 2022-11-09 07:47:38 +08:00
Jiaming Yuan
ca0f7f2714
[doc] Update C tutorial. [skip ci] (#8436)
- Use rst references instead of doxygen links.
- Replace deprecated functions.

- Add SaveModel; put free step last [skip ci]

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-11-09 07:14:12 +08:00
Jiaming Yuan
0b36f8fba1
[R] Fix CRAN test notes. (#8428)
- Limit the number of used CPU cores in examples.
- Add a note for the constraint.
- Bring back the cleanup script.
2022-11-09 02:03:30 +08:00
Rong Ou
8e76f5f595
Use DataSplitMode to configure data loading (#8434)
* Use `DataSplitMode` to configure data loading
2022-11-08 16:21:50 +08:00
Jiaming Yuan
0d3da9869c
Require isort on all Python files. (#8420) 2022-11-08 12:59:06 +08:00
James Lamb
bf8de227a9
[CI] remove unused import in python tests (#8409) 2022-11-03 22:27:25 +08:00
James Lamb
b1b2524dbb
add files from python tests to .gitignore (#8410) 2022-11-03 07:57:45 +08:00
Rong Ou
99fa8dad2d
Add back xgboost.rabit for backwards compatibility (#8408)
* Add back xgboost.rabit for backwards compatibility

* fix my errors

* Fix lint

* Use FutureWarning

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-11-01 21:47:41 -07:00
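A minimal sketch (not the actual xgboost source) of how such a backwards-compatibility shim can forward the old rabit API to the new collective module while warning users:

```python
import warnings
from xgboost import collective

def get_rank() -> int:
    """Old-style rabit entry point forwarding to the collective module."""
    warnings.warn(
        "xgboost.rabit is deprecated; use xgboost.collective instead.",
        FutureWarning,
    )
    return collective.get_rank()
```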
Philip Hyunsu Cho
0db903b471
Fix formatting in NEWS.md [skip ci] 2022-10-31 15:42:31 -07:00
Jiaming Yuan
917cbc0699
1.7 release note. [skip ci] (#8374)
* Draft for 1.7 release note. [skip ci]

* Wording [skip ci]

* Update with backports [skip ci]

* Apply suggestions from code review [skip ci]

* Apply suggestions from code review [skip ci]

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Update NEWS.md [skip ci]

Co-authored-by: Rory Mitchell <r.a.mitchell.nz@gmail.com>

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
Co-authored-by: Rory Mitchell <r.a.mitchell.nz@gmail.com>
2022-10-31 09:32:33 -07:00
Jiaming Yuan
2ed3c29c8a
[CI] Cleanup github action tests. (#8397)
- Merge doxygen build with sphinx.
- Use mamba on non-windows Github Action.
2022-10-29 06:04:27 +08:00
Joyce
7174d60ed2
Fix Scorecard Github Action not working (#8402)
* chore: create security policy

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* chore: only latest release on security policy

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* chore: security policy support on a best-effort basis

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* Use dedicated e-mail address for security reporting

* fix: upgrade scorecard action version

Signed-off-by: Joyce Brum <joycebrum@google.com>

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>
Signed-off-by: Joyce Brum <joycebrum@google.com>
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-10-28 16:25:43 -04:00
Jiaming Yuan
a408c34558
Update JSON parser demo with categorical feature. (#8401)
- Parse categorical features in the Python example.
- Add tests.
- Update document.
2022-10-28 20:57:43 +08:00
Jiaming Yuan
cfd2a9f872
Extract dask and spark test into distributed test. (#8395)
- Move test files.
- Run spark and dask separately to prevent conflicts.
- Gather common code into the testing module.
2022-10-28 16:24:32 +08:00
Jiaming Yuan
f73520bfff
Bump development version to 2.0. (#8390) 2022-10-28 15:21:19 +08:00
Christian Clauss
ae27e228c4
xrange() was removed in Python 3 in favor of range() (#8371) 2022-10-27 16:36:14 +08:00
Yizhi Liu
5699f60a88
Type fix for WebAssembly: use bst_ulong instead of size_t for ncol in CSR conversion. (#8369) 2022-10-26 19:21:45 +08:00
Jiaming Yuan
a2593e60bf
Speedup R test on github. (#8388) 2022-10-26 18:02:27 +08:00
Jiaming Yuan
786aa27134
[doc] Additional notes for release [skip ci] (#8367) 2022-10-26 17:55:15 +08:00
Jiaming Yuan
cf70864fa3
Move Python testing utilities into xgboost module. (#8379)
- Add typehints.
- Fixes for pylint.

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-26 16:56:11 +08:00
Jiaming Yuan
7e53189e7c
[pyspark] Improve tutorial on enabling GPU support. (#8385)
- Quote the databricks doc on how to manage dependencies.
- Some wording changes.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-10-26 15:45:54 +08:00
Thomas Stanley
ba9cc43464
Fix acronym (#8386) 2022-10-26 06:22:30 +08:00
Philip Hyunsu Cho
8bb55949ef
Fix building XGBoost with libomp 15 (#8384) 2022-10-25 12:01:11 -07:00
Jiaming Yuan
d0b99bdd95
[pyspark] Add type hint to basic utilities. (#8375) 2022-10-25 17:26:25 +08:00
Jiaming Yuan
1d2f6de573
remove travis status [skip ci] (#8382) 2022-10-24 14:37:33 +08:00
Jiaming Yuan
a3b8bca46a
Remove travis configuration file. [skip ci] (#8381) 2022-10-23 02:49:29 +08:00
Jiaming Yuan
bb5e18c29c
Fix CUDA async stream. (#8380) 2022-10-22 23:13:28 +08:00
Christian Clauss
5761f27e5e
Use ==/!= to compare constant literals (str, bytes, int, float, tuple) (#8372) 2022-10-22 21:53:03 +08:00
Jiaming Yuan
99467f3999
[doc] Cleanup outdated documents for GPU. [skip ci] (#8378) 2022-10-21 20:13:31 +08:00
Jiaming Yuan
28a466ab51
Fixes for R checks. (#8330)
- Bump configure.ac version.
- Remove amalgamation to reduce the build time for a single object, with the added benefit that we can use parallel builds during development.
- Fix c function prototype warning.
- Remove Windows automake file generation step to make the build script easier to understand.
2022-10-20 02:52:54 +08:00
Dmitry Razdoburdin
5bd849f1b5
Unify the partitioner for hist and approx.
Co-authored-by: dmitry.razdoburdin <drazdobu@jfldaal005.jf.intel.com>
Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-10-20 02:49:20 +08:00
Jiaming Yuan
c69af90319
Fix github action r tests. (#8364) 2022-10-20 01:07:18 +08:00
Jiaming Yuan
c884b9e888
Validate features for inplace predict. (#8359) 2022-10-19 23:05:36 +08:00
Joyce
52977f0cdf
Create Security Policy (#8360)
* chore: create security policy

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* chore: only latest release on security policy

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* chore: security policy support on a best-effort basis

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* Use dedicated e-mail address for security reporting

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-10-18 17:15:30 -07:00
luca-s
c47c71e34f
XGBRanker documentation: few clarifications (#8356) 2022-10-19 01:54:14 +08:00
Bobby Wang
76f95a6667
[pyspark] Filter out the unsupported train parameters (#8355) 2022-10-18 23:26:02 +08:00
Jiaming Yuan
3901f5d9db
[pyspark] Cleanup data processing. (#8344)
* Enable additional combinations of ctor parameters.
* Unify procedures for QuantileDMatrix and DMatrix.
2022-10-18 14:56:23 +08:00
Rong Ou
521086d56b
Make federated client more robust (#8351) 2022-10-18 13:52:44 +08:00
luca-s
5647fc6542
XGBRanker documentation: missing default objective (#8347) 2022-10-18 10:43:29 +08:00
Rong Ou
8f3dee58be
Speed up tests with federated learning enabled (#8350)
* Speed up tests with federated learning enabled

* Re-enable timeouts

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-17 15:17:04 -07:00
Jiaming Yuan
031d66ec27
Configuration for init estimation. (#8343)
* Configuration for init estimation.

* Check whether the model needs configuration based on const attribute `ModelFitted`
instead of a mutable state.
* Add parameter `boost_from_average` to tell whether the user has specified base score.
* Add tests.
2022-10-18 01:52:24 +08:00
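Editorial sketch of the user-visible knob: leaving `base_score` unset lets the booster estimate the intercept from the data, while setting it pins the value (per the message above, `boost_from_average` is the internal flag recording which case applies). Illustrative only:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((256, 4)), label=rng.random(256))

# Unset: the booster estimates the intercept; set: the value is pinned.
estimated = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=1)
pinned = xgb.train(
    {"objective": "reg:squarederror", "base_score": 0.5}, dtrain, num_boost_round=1
)
```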
Jiaming Yuan
2176e511fc
Disable pytest-timeout for now. (#8348) 2022-10-17 23:06:10 +08:00
Jiaming Yuan
fcddbc9264
Fix incorrect function name. (#8346) 2022-10-17 19:28:20 +08:00
Rong Ou
80e10e02ab
Avoid blank lines with federated training (#8342) 2022-10-14 14:55:01 +08:00
Rong Ou
b3208aac4e
Fix NVFLARE demo (#8340) 2022-10-14 12:18:34 +08:00
Jiaming Yuan
748d516c50
[pyspark] Enable running GPU tests on variable number of GPUs. (#8335) 2022-10-13 21:03:45 +08:00
Jiaming Yuan
4633b476e9
[doc] Display survival demos in sphinx doc. [skip ci] (#8328) 2022-10-13 20:51:23 +08:00
Jiaming Yuan
3ef1703553
Allow using string view to find JSON value. (#8332)
- Allow comparison between string and string view.
- Fix compiler warnings.
2022-10-13 17:10:13 +08:00
Philip Hyunsu Cho
29595102b9
[CI] Set up test analytics for CPU Python tests (#8333)
* [CI] Set up test analytics for CPU Python tests

* Install test collector
2022-10-12 23:15:50 -07:00
Philip Hyunsu Cho
2faa744aba
[CI] Test federated learning plugin in the CI (#8325) 2022-10-12 13:57:39 -07:00
Jiaming Yuan
97a5b088a5
[pyspark] Use quantile dmatrix. (#8284) 2022-10-12 20:38:53 +08:00
Rory Mitchell
ce0382dcb0
[CI] Refactor tests to reduce CI time. (#8312) 2022-10-12 11:32:06 +02:00
Rong Ou
39afdac3be
Better error message when world size and rank are set as strings (#8316)
Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-10-12 15:53:25 +08:00
Rory Mitchell
210915c985
Use integer gradients in gpu_hist split evaluation (#8274) 2022-10-11 12:16:27 +02:00
Jiaming Yuan
c68684ff4c
Update parameter for categorical feature. (#8285) 2022-10-10 19:48:29 +08:00
Jiaming Yuan
5545c49cfc
Require keyword args for data iterator. (#8327) 2022-10-10 17:47:13 +08:00
Jiaming Yuan
e1f9f80df2
Use gpu predictor for get csr test. (#8323) 2022-10-10 16:12:37 +08:00
Philip Hyunsu Cho
a71421e825
[CI] Update GitHub Actions to use macos-11 (#8321) 2022-10-08 00:40:43 -07:00
Philip Hyunsu Cho
d70e59fefc
Fix Intel's link [skip ci] 2022-10-06 16:55:42 -07:00
Philip Hyunsu Cho
50ff8a2623
More CI improvements (#8313)
* Reduce clutter in log of Python test

* Set up BuildKite test analytics

* Add separate step for building containers

* Enable incremental update of CI stack; custom agent IAM policy
2022-10-06 06:33:46 -08:00
Philip Hyunsu Cho
bc7a6ec603
Fix clang tidy (#8314)
* Fix clang-tidy

* Exempt clang-tidy from budget check

* Move clang-tidy
2022-10-06 05:16:06 -08:00
Dmitry Razdoburdin
c24e9d712c
Dispatcher for template parameters of BuildHist Kernels (#8259)
* Introducing Column-Wise Hist Building

* linting

* more linting

* bug fixing

* Removing column sampling optimization for now to simplify the review process.

* linting

* Removing unnecessary changes

* Use DispatchBinType in hist_util.cc

* Adding force_read_by_column flag to buildhist. Adding tests for column-wise buildhist.

* Introducing new dispatcher for compile time flags in hist building

* Fixing bug with the use of DispatchBinType

* Fixing building

* Merging with master branch

Co-authored-by: dmitry.razdoburdin <drazdobu@jfldaal005.jf.intel.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-10-06 03:02:29 -08:00
Rong Ou
8d4038da57
Don't split input data in federated mode (#8279)
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-05 18:19:28 -08:00
Philip Hyunsu Cho
66fd9f5207
Update sponsors list [skip ci] (#8309) 2022-10-05 16:40:46 -08:00
Rory Mitchell
909e49e214
Reduce docker image size. (#8306) 2022-10-05 15:55:51 -08:00
Rong Ou
668b8a0ea4
[Breaking] Switch from rabit to the collective communicator (#8257)
* Switch from rabit to the collective communicator

* fix size_t specialization

* really fix size_t

* try again

* add include

* more include

* fix lint errors

* remove rabit includes

* fix pylint error

* return dict from communicator context

* fix communicator shutdown

* fix dask test

* reset communicator mocklist

* fix distributed tests

* do not save device communicator

* fix jvm gpu tests

* add python test for federated communicator

* Update gputreeshap submodule

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-05 14:39:01 -08:00
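Editorial sketch of the replacement API, assuming the `xgboost.collective` module as it shipped around this change; a single-process illustration (real deployments pass tracker arguments to the context):

```python
from xgboost import collective

# Replaces the rabit.init()/rabit.finalize() pair; configuration is passed as
# keyword arguments and the communicator is shut down when the context exits.
with collective.CommunicatorContext():
    print(collective.get_rank(), collective.get_world_size())
```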
Jiaming Yuan
e47b3a3da3
Upgrade mypy. (#8302)
Some breaking changes were made in mypy.
2022-10-05 14:31:59 +08:00
Jiaming Yuan
97c3a80a34
Add C document to sphinx, fix arrow. (#8300)
- Group C API.
- Add C API sphinx doc.
- Consistent use of `OptionalArg` and the parameter name `config`.
- Remove call to deprecated functions in demo.
- Fix some formatting errors.
- Add links to c examples in the document (only visible with doxygen pages)
- Fix arrow.
2022-10-05 09:52:15 +08:00
Philip Hyunsu Cho
b2bbf49015
Additional improvements to CI (#8303)
* Wait until budget check is complete

* Ensure that multi-GPU tests run for the master branch

* Fix
2022-10-04 03:03:38 -08:00
Rory Mitchell
d686bf52a6
Reduce time for some multi-gpu tests (#8288)
* Faster dask tests

* Reuse AllReducer objects in tests.

* Faster boost from prediction tests.

* Use rmm dask fixture.

* Speed up dask demo.

* mypy

* Format with black.

* mypy

* Clang-tidy

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-04 02:49:33 -08:00
Philip Hyunsu Cho
ca0547bb65
[CI] Use RAPIDS 22.10 (#8298)
* [CI] Use RAPIDS 22.10

* Store CUDA and RAPIDS versions in one place

* Fix

* Add missing #include

* Update gputreeshap submodule

* Fix

* Remove outdated distributed tests
2022-10-03 23:18:07 -08:00
Philip Hyunsu Cho
37886a5dff
[CI] Document the use of Docker wrapper script (#8297)
* [CI] Document the use of Docker wrapper script

* Grammar fixes

* Document buildkite pipeline defs

* tests/buildkite/*.sh isn't meant to run locally
2022-10-02 12:45:00 -07:00
Philip Hyunsu Cho
9af99760d4
Various CI savings (#8291) 2022-09-30 05:42:56 -07:00
Jiaming Yuan
299e5000a4
Fix buildkite label. (#8287) 2022-09-29 17:33:19 -07:00
Jiaming Yuan
55cf24cc32
Obtain CSR matrix from DMatrix. (#8269) 2022-09-29 20:41:43 +08:00
Philip Hyunsu Cho
b14c44ee5e
[CI] Put Multi-GPU test suites in separate pipeline (#8286)
* [CI] Put Multi-GPU test suites in separate pipeline

* Avoid unset var error in Bash
2022-09-29 00:41:48 -08:00
Bobby Wang
cbf3a5f918
[pyspark][doc] add more doc for pyspark (#8271)
Co-authored-by: fis <jm.yuan@outlook.com>
2022-09-29 11:58:18 +08:00
Bobby Wang
c91fed083d
[pyspark] disable repartition_random_shuffle by default (#8283) 2022-09-29 10:50:51 +08:00
Jiaming Yuan
6925b222e0
Fix mixed types with cuDF. (#8280) 2022-09-29 00:57:52 +08:00
Jiaming Yuan
f835368bcf
Mark next release as 1.7 instead of 2.0 (#8281) 2022-09-28 14:33:37 +08:00
Jiaming Yuan
6d1452074a
Remove MGPU cpp tests. (#8276)
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-09-27 21:18:23 +08:00
Jiaming Yuan
fcab51aa82
Support more pandas nullable types (#8262)
- Float32/64
- Category.
2022-09-27 01:59:50 +08:00
Alex
1082ccd3cc
GitHub Workflows security hardening (#8267)
Signed-off-by: Alex <aleksandrosansan@gmail.com>
2022-09-27 00:54:27 +08:00
Rory Mitchell
8f77677193
Use quantised gradients in gpu_hist histograms (#8246) 2022-09-26 17:35:35 +02:00
Jiaming Yuan
4056974e37
Fix sparse threshold warning. (#8268) 2022-09-26 22:22:11 +08:00
WeichenXu
ff71c69adf
[pyspark] Add validation for param 'early_stopping_rounds' and 'validation_indicator_col' (#8250)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-09-26 17:43:03 +08:00
Jiaming Yuan
0cd11b893a
[doc] Fix sphinx build. (#8270) 2022-09-26 12:33:31 +08:00
Joyce
be5b95e743
Enable OpenSSF Scorecard Github Action (#8263)
* chore: enable scorecard github action

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* docs: add scorecard badge to the README file

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>
2022-09-25 13:02:36 -07:00
Bobby Wang
8d247f0d64
[jvm-packages] fix spark-rapids compatibility issue (#8240)
* [jvm-packages] fix spark-rapids compatibility issue

spark-rapids (from 22.10) has shimmed GpuColumnVector, which means
we can't call it directly. So this PR calls UnshimmedGpuColumnVector instead.
2022-09-22 23:31:29 +08:00
WeichenXu
ab342af242
[pyspark] Fix xgboost spark estimator dataset repartition issues (#8231) 2022-09-22 21:31:41 +08:00
Jiaming Yuan
3fd331f8f2
Add checks to C pointer arguments. (#8254) 2022-09-22 19:02:22 +08:00
Dmitry Razdoburdin
eb7bbee2c9
Optional by-column histogram build. (#8233)
Co-authored-by: dmitry.razdoburdin <drazdobu@jfldaal005.jf.intel.com>
2022-09-22 05:16:13 +08:00
Jiaming Yuan
b791446623
Initial support for IPv6 (#8225)
- Merge rabit socket into XGBoost.
- Dask interface support.
- Add test to the socket.
2022-09-21 18:06:50 +08:00
Rong Ou
7d43e74e71
JNI wrapper for the collective communicator (#8242) 2022-09-21 04:20:25 +08:00
Jiaming Yuan
fffb1fca52
Calculate base_score based on input labels for mae. (#8107)
Fit an intercept as base score for abs loss.
2022-09-20 20:53:54 +08:00
Bobby Wang
4f42aa5f12
[pyspark] make the model saved by pyspark compatible (#8219)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-09-20 16:43:49 +08:00
Bobby Wang
520586ffa7
[pyspark] fix empty data issue when constructing DMatrix (#8245)
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-09-20 16:43:20 +08:00
Philip Hyunsu Cho
70df36c99c
[CI] Retire Jenkins server (#8243) 2022-09-14 08:46:23 -07:00
Jiaming Yuan
2e63af6117
Mitigate flaky data iter test. (#8244)
- Reduce the number of batches.
- Verify labels.
2022-09-14 17:54:14 +08:00
Jiaming Yuan
bdf265076d
Make QuantileDMatrix the default for sklearn estimators. (#8220) 2022-09-13 13:52:19 +08:00
Rong Ou
a2686543a9
Common interface for collective communication (#8057)
* implement broadcast for federated communicator

* implement allreduce

* add communicator factory

* add device adapter

* add device communicator to factory

* add rabit communicator

* add rabit communicator to the factory

* add nccl device communicator

* add synchronize to device communicator

* add back print and getprocessorname

* add python wrapper and c api

* clean up types

* fix non-gpu build

* try to fix ci

* fix std::size_t

* portable string compare ignore case

* c style size_t

* fix lint errors

* cross platform setenv

* fix memory leak

* fix lint errors

* address review feedback

* add python test for rabit communicator

* fix failing gtest

* use json to configure communicators

* fix lint error

* get rid of factories

* fix cpu build

* fix include

* fix python import

* don't export collective.py yet

* skip collective communicator pytest on windows

* add review feedback

* update documentation

* remove mpi communicator type

* fix tests

* shutdown the communicator separately

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-09-12 15:21:12 -07:00
Jiaming Yuan
bc818316f2
Prepare for improving Windows networking compatibility. (#8234)
* Prepare for improving Windows networking compatibility.

* Include dmlc filesystem indirectly as dmlc/filesystem.h includes windows.h, which
  conflicts with winsock2.h
* Define `NOMINMAX` conditionally.
* Link the winsock library when mysys32 is used.
* Add config file for read the doc.
2022-09-10 15:16:49 +08:00
Jiaming Yuan
dd44ac91b8
[CI] Use binary R dependencies on Windows. (#8241) 2022-09-09 19:51:15 -07:00
Philip Hyunsu Cho
23faf656ad
[CI] Don't require manual approval for master branch (#8235) 2022-09-08 09:26:22 -08:00
Philip Hyunsu Cho
e888eb2fa9
[CI] Migrate CI pipelines from Jenkins to BuildKite (#8142)
* [CI] Migrate CI pipelines from Jenkins to BuildKite

* Require manual approval

* Less verbose output when pulling Docker

* Remove us-east-2 from metadata.py

* Add documentation

* Add missing underscore

* Add missing punctuation

* More specific instruction

* Better paragraph structure
2022-09-07 16:29:25 -08:00
Philip Hyunsu Cho
b397d64c96
Drop use of deleted virtual function to support older MacOS (#8226)
* Support older MacOS

* Update json.h
2022-09-07 11:25:59 -08:00
Rehan Guha
dc07137a2c
Updated dart.rst with correct links (#8229)
Updated the DART paper link, which was invalid and broken.
2022-09-08 00:57:09 +08:00
Jiaming Yuan
b5eb36f1af
Add max_cat_threshold to GPU and handle missing cat values. (#8212) 2022-09-07 00:57:51 +08:00
Jiaming Yuan
441ffc017a
Copy data from Ellpack to GHist. (#8215) 2022-09-06 23:05:49 +08:00
Bobby Wang
7ee10e3dbd
[pyspark] Cleanup the comments (#8217) 2022-09-05 16:20:12 +08:00
Jiaming Yuan
ada4a86d1c
Fix dask interface with latest cupy. (#8210) 2022-09-03 03:10:43 +08:00
Dmitry Razdoburdin
deae99e662
Optimization/buildhist/hist util (#8218)
* BuildHistKernel optimization

Co-authored-by: dmitry.razdoburdin <drazdobu@jfldaal005.jf.intel.com>
2022-09-02 19:39:45 +08:00
Rong Ou
b78bc734d9
Fix dask.py lint error (#8216) 2022-09-02 16:30:01 +08:00
Philip Hyunsu Cho
56395d120b
Work around MSVC behavior wrt constexpr capture (#8211)
* Work around MSVC behavior wrt constexpr capture

* Fix lint
2022-08-31 11:42:08 -08:00
CW
a868498c18
[doc] Update prediction.rst (#8214) 2022-08-31 21:00:12 +08:00
Jiaming Yuan
8dac90a593
Mark parameter validation non-experimental. (#8206) 2022-08-30 15:49:43 +08:00
Rong Ou
d6e2013c5f
Set max message size in insecure gRPC (#8203) 2022-08-26 16:33:51 +08:00
WeichenXu
651f0a8889
[pyspark] Fixing xgboost.spark python doc (#8200)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-08-25 14:41:48 +08:00
WeichenXu
d03794ce7a
[pyspark] Add param validation for "objective" and "eval_metric" param, and remove invalid booster params (#8173)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-08-24 15:29:43 +08:00
Jiaming Yuan
9b32e6e2dc
Fix release script. (#8187) (#8195) 2022-08-23 15:08:30 +08:00
WeichenXu
f4628c22a4
[pyspark] Implement SparkXGBRanker estimator (#8172)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-08-23 02:35:19 +08:00
Philip Hyunsu Cho
35ef8abc27
[CI] Prune unused archs from libnccl (#8179)
* [CI] Prune unused archs from libnccl

* Put pruning logic in CI directory

* Don't use --color in grep
2022-08-21 00:46:16 -08:00
Rong Ou
ad3bc0edee
Allow insecure gRPC connections for federated learning (#8181)
* Allow insecure gRPC connections for federated learning

* format
2022-08-19 12:16:14 +08:00
WeichenXu
53d2a733b0
[pyspark] Make Xgboost estimator support using sparse matrix as optimization (#8145)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-08-19 01:57:28 +08:00
Rory Mitchell
1703dc330f
Optimise histogram kernels (#8118) 2022-08-18 14:07:26 +02:00
Gavin Zhang
40a10c217d
Use make on IBM i systems (#8178)
Co-authored-by: GavinZhang <zhanggan@cn.ibm.com>
2022-08-18 12:55:32 +08:00
dependabot[bot]
93966b0d19
Bump hadoop-common from 3.2.3 to 3.2.4 in /jvm-packages/xgboost4j-flink (#8157)
Bumps hadoop-common from 3.2.3 to 3.2.4.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-15 06:47:27 -08:00
Andy Kattine
a9458fd844
Grammar Fix in Introduction to Boosted Trees (#8166)
Added "of" to "objective functions is that they consist of two parts" in line 32 of ./doc/tutorials/model.rst
2022-08-15 15:19:47 +08:00
Ravi Makhija
fa869eebd9
Edit grammar in custom metric tutorial (#8163) 2022-08-13 01:02:25 +08:00
Rory Mitchell
f421c26d35
Tune cuda architectures (#8152) 2022-08-11 13:36:47 -07:00
Jiaming Yuan
16bca5d4a1
Support CPU input for device QuantileDMatrix. (#8136)
- Copy `GHistIndexMatrix` to `Ellpack` when needed.
2022-08-11 21:21:26 +08:00
Jiaming Yuan
36e7c5364d
[dask] Deterministic rank assignment. (#8018) 2022-08-11 19:17:58 +08:00
Ravi Makhija
20d1bba1bb
Simplify Python getting started example (#8153)
Load data set via `sklearn` rather than a local file path.
2022-08-11 16:42:09 +08:00
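The shape of the simplified example: load a bundled dataset through `sklearn` rather than reading a file from a local path. A minimal sketch:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import xgboost as xgb

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = xgb.XGBClassifier(n_estimators=10)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```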
Jiaming Yuan
d868126c39
[CI] Fix R build on Jenkins. (#8154) 2022-08-11 14:50:03 +08:00
Jiaming Yuan
570f8ae4ba
Use black on more Python files. (#8137) 2022-08-11 01:38:11 +08:00
Jiaming Yuan
bdb291f1c2
[doc] Clarification for feature importance. (#8151) 2022-08-11 00:30:42 +08:00
Jiaming Yuan
446d536c23
Fix loading DMatrix binary in distributed env. (#8149)
- Try to load DMatrix binary before trying to parse text input.
- Remove some unmaintained code.
2022-08-10 22:53:16 +08:00
Jiaming Yuan
8fc60b31bc
Update PyPi wheel size limit. (#8150) 2022-08-10 18:49:57 +08:00
Jiaming Yuan
9ae547f994
Use config_context in sklearn interface. (#8141) 2022-08-09 14:48:54 +08:00
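For reference, `xgboost.config_context` scopes global settings to a block, so the sklearn estimators can honor options such as verbosity without mutating global state. A minimal sketch:

```python
import xgboost as xgb

with xgb.config_context(verbosity=0):
    print(xgb.get_config()["verbosity"])  # 0 inside the context
print(xgb.get_config()["verbosity"])      # restored afterwards
```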
Bobby Wang
03cc3b359c
[pyspark] support a list of feature column names (#8117) 2022-08-08 17:05:27 +08:00
Jiaming Yuan
bcc8679a05
Update CUDA docker image and NCCL. (#8139) 2022-08-07 16:32:41 +08:00
Praateek Mahajan
ff471b3fab
In PySpark Estimator example use the model with validation_indicator (#8131)
* use the validation_indicator model

* use the validation_indicator model for regression
2022-08-03 13:57:41 +08:00
Jiaming Yuan
d87f69215e
Quantile DMatrix for CPU. (#8130)
- Add a new `QuantileDMatrix` that works for both CPU and GPU.
- Deprecate `DeviceQuantileDMatrix`.
2022-08-02 15:51:23 +08:00
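A sketch of the new unified class: with CPU data it now works with `hist`, and with device arrays and `gpu_hist` it replaces the deprecated `DeviceQuantileDMatrix`.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((1000, 10)), rng.random(1000)

# Quantized at construction time, reducing memory versus a full DMatrix.
dtrain = xgb.QuantileDMatrix(X, label=y, max_bin=256)
booster = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=10)
```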
Jiaming Yuan
2cba1d9fcc
Fix compatibility with latest cupy. (#8129)
* Fix compatibility with latest cupy.

* Freeze mypy.
2022-08-01 15:24:42 +08:00
Philip Hyunsu Cho
24c2373080
[Doc] Indicate lack of py-xgboost-gpu on Windows (#8127) 2022-07-28 12:57:16 -07:00
Jiaming Yuan
2c70751d1e
Implement iterative DMatrix for CPU. (#8116) 2022-07-26 22:34:21 +08:00
Jiaming Yuan
546de5efd2
[pyspark] Cleanup data processing. (#8088)
- Use numpy stack for handling list of arrays.
- Reuse concat function from dask.
- Prepare for `QuantileDMatrix`.
- Remove unused code.
- Use iterator for prediction to avoid initializing xgboost model
2022-07-26 15:00:52 +08:00
Jiaming Yuan
3970e4e6bb
Move pylint helper from dmlc-core. (#8101)
* Move pylint helper from dmlc-core.

- Move the helper into the XGBoost ci_build.
- Run it with multiprocessing.

* Fix original test.
2022-07-23 08:12:37 +08:00
Jiaming Yuan
7785d65c8a
Fix feature weights with multiple column sampling. (#8100) 2022-07-22 20:23:05 +08:00
Jiaming Yuan
4a4e5c7c18
Prepare gradient index for Quantile DMatrix. (#8103)
* Prepare gradient index for Quantile DMatrix.

- Implement push batch with adapter batch.
- Implement `GetFvalue` for prediction.
2022-07-22 17:26:33 +08:00
Rory Mitchell
1be09848a7
Refactor split evaluation kernel (#8073) 2022-07-21 15:41:50 +02:00
Tim Gates
cb40bbdadd
docs: fix simple typo, cannonical -> canonical (#8099)
There is a small typo in src/common/partition_builder.h.

Should read `canonical` rather than `cannonical`.

Signed-off-by: Tim Gates <tim.gates@iress.com>
2022-07-20 21:04:50 +08:00
QuellaZhang
703261e78f
[MSVC][std:c++latest] Fix compiler error (#8093)
Co-authored-by: QuellaZhang <zhangyi2090@163.com>
2022-07-20 15:15:39 +08:00
Jiaming Yuan
ef11b024e8
Cleanup data generator. (#8094)
- Avoid duplicated definition of data shape.
- Explicitly define numpy iterator for CPU data.
2022-07-20 13:48:52 +08:00
Jiaming Yuan
5156be0f49
Limit max_depth to 30 for GPU. (#8098) 2022-07-20 12:28:49 +08:00
Jiaming Yuan
8bdea72688
[Python] Require black and isort for new Python files. (#8096)
* [Python] Require black and isort for new Python files.

- Require black and isort for spark and dask module.

These files are relatively new and conform more closely to the black formatter. We will
convert the rest of the library as we move forward.

Other libraries, including dask/distributed and optuna, use the same formatting style and
have a stricter standard. The black formatter is indeed quite nice; automating it can
help us unify the code style.

- Gather Python checks into a single script.
2022-07-20 10:25:24 +08:00
WeichenXu
f23cc92130
[pyspark] User guide doc and tutorials (#8082)
Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2022-07-19 22:25:14 +08:00
Bobby Wang
f801d3cf15
[PySpark] change the returning model type to string from binary (#8085)
* [PySpark] change the returning model type to string from binary

XGBoost pyspark can be accelerated by the RAPIDS Accelerator seamlessly by
changing the returned model type from binary to string.
2022-07-19 18:39:20 +08:00
Jiaming Yuan
2365f82750
[dask] Mitigate non-deterministic test. (#8077) 2022-07-19 16:55:59 +08:00
Rong Ou
7a6b711eb8
Remove unused updater basemaker (#8091) 2022-07-19 15:41:27 +08:00
Philip Hyunsu Cho
4325178822
[CI] Clear workspace after budget check (#8092)
* [CI] Clear workspace after budget check

* Windows too
2022-07-18 19:17:33 -07:00
Jiaming Yuan
4083440690
Small cleanups to various data types. (#8086)
- Use `bst_bin_t` in batch param constructor.
- Use `StringView` to avoid `std::string` when appropriate.
- Avoid using `MetaInfo` in quantile constructor to limit the scope of parameter.
2022-07-18 22:39:36 +08:00
Jiaming Yuan
e28f6f6657
[doc] Integrate pyspark module into sphinx doc [skip ci] (#8066) 2022-07-17 10:46:09 +08:00
Rafail Giavrimis
579ab23b10
Check cudf lazily (#8084) 2022-07-17 09:27:43 +08:00
Bobby Wang
a33f35eecf
[PySpark] add gpu support for spark local mode (#8068) 2022-07-17 07:59:06 +08:00
Bobby Wang
91bb9e2cb3
[PySpark] fix raw_prediction_col parameter and minor cleanup (#8067) 2022-07-16 17:58:57 +08:00
Jiaming Yuan
0ce80b7bcf
Mitigate flaky GPU test. (#8078)
The flakiness is caused by the global random engine, which will take some time to fix.
2022-07-16 13:45:32 +08:00
Jiaming Yuan
7a5586f3db
Fix GPU quantile distributed test. (#8076) 2022-07-16 11:40:53 +08:00
Jiaming Yuan
8fccc3c4ad
[dask] Fix potential error in demo. (#8079)
* Use dask_cudf instead.
2022-07-15 18:42:29 +08:00
Jiaming Yuan
647d3844dd
Make test for categorical data deterministic. (#8080) 2022-07-15 14:48:39 +08:00
Jiaming Yuan
dae7a41baa
Update Python requirement to >=3.8. (#8071)
Additional changes:
- Use mamba for CPU test on Jenkins.
- Cleanup CPU test dependencies.
- Restore some of the modin tests
2022-07-14 18:01:47 +08:00
Jiaming Yuan
8dd96013f1
Split up column matrix initialization. (#8060)
* Split up column matrix initialization.

This PR splits the column matrix initialization into two steps: the first initializes
the storage while the second does the transpose. By doing so, we can reuse the code
for Quantile DMatrix.
2022-07-14 10:34:47 +08:00
Philip Hyunsu Cho
36cf979b82
[CI] Fix S3 uploads (#8069)
* [CI] Fix S3 upload issues

* Don't launch Docker containers when uploading to S3
2022-07-13 16:23:00 -07:00
Jiaming Yuan
abaa593aa0
Fix compiler warnings. (#8059)
- Remove unused parameters.
- Avoid comparison of different signedness.
2022-07-14 05:29:56 +08:00
Jiaming Yuan
937352c78f
Fix R package Windows build. (#8065) 2022-07-14 05:27:38 +08:00
WeichenXu
176fec8789
PySpark XGBoost integration (#8020)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2022-07-13 13:11:18 +08:00
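Editorial sketch of the estimator API introduced here, under the assumption of a local SparkSession; the tiny inline DataFrame stands in for real training data:

```python
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from xgboost.spark import SparkXGBClassifier

spark = SparkSession.builder.master("local[2]").getOrCreate()
train_df = spark.createDataFrame(
    [(Vectors.dense(1.0, 2.0), 0), (Vectors.dense(3.0, 4.0), 1)],
    ["features", "label"],
)
clf = SparkXGBClassifier(features_col="features", label_col="label", num_workers=1)
model = clf.fit(train_df)
model.transform(train_df).show()
```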
Jiaming Yuan
8959622836
[dask] Use an invalid port for test. (#8064) 2022-07-13 11:59:02 +08:00
Rory Mitchell
0bdaca25ca
Use single precision in gain calculation, use pointers instead of span. (#8051) 2022-07-12 21:56:27 +02:00
Jiaming Yuan
a5bc8e2c6a
Fix mypy error with the latest dask. (#8052)
* Fix mypy error with latest dask.

Dask is adding type hints to its codebase and, as a result, checks in XGBoost can be
performed more rigorously.

- Remove compatibility with old dask version where multi lock was missing.
- Restrict input of `X` to be non-series.
- Adopt latest definition of `Delayed`.
- Avoid passing optional `host_ip`.
- Avoid deprecated `worker.nthreads`.
2022-07-09 08:02:42 +08:00
Jiaming Yuan
210eb471e9
[R] Implement feature info for DMatrix. (#8048) 2022-07-09 05:57:39 +08:00
Jiaming Yuan
701f32b227
[py-sckl] Raise import error if skl is not installed. (#8049) 2022-07-09 05:56:46 +08:00
Rory Mitchell
794cbaa60a
Fuse split evaluation kernels (#8026) 2022-07-05 10:24:31 +02:00
Jiaming Yuan
ff1c559084
Remove unused variable. (#8046) 2022-07-05 01:59:22 +08:00
Jiaming Yuan
8746f9cddf
Rename IterativeDMatrix. (#8045) 2022-07-04 18:52:31 +08:00
Jiaming Yuan
f24bfc7684
Bump R cache version. (#8044) 2022-07-03 03:53:05 +08:00
Michael Chirico
3af02584c1
error early if missing DiagrammeR (#8037) 2022-07-02 19:37:53 +08:00
Rory Mitchell
bc4f802b17
Batch UpdatePosition using cudaMemcpy (#7964) 2022-06-30 17:52:40 +02:00
kiwiwarmnfuzzy
2407381c3d
Force auc.cc to be statically linked (#8039) 2022-06-30 19:24:22 +08:00
Jiaming Yuan
e88d6e071d
Fix compiler warning in JSON IO. (#8031)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-06-30 01:13:22 +08:00
Jiaming Yuan
dcaf580476
Fix Python package source install. (#8036)
* Copy gputreeshap.
2022-06-29 21:45:09 +08:00
Rong Ou
6eb23353d7
Update nvflare demo for release 2.1.2 (#8038) 2022-06-29 17:58:06 +08:00
Joris LIMONIER
f470ad3af9
Fix multiple typos (#8028)
Fix 4 "graphiz" instead of "graphviz".
2022-06-27 19:21:58 +08:00
Rong Ou
45dc1f818a
Make federated plugin work with cmake 3.16.3 (#8029) 2022-06-27 17:26:41 +08:00
Rong Ou
0725fd6081
fix federated learning plugin (#8027) 2022-06-24 08:41:07 +08:00
Bobby Wang
a68580e2a7
[jvm-packages] fix executor crashing issue when transforming on xgboost4j-spark-gpu (#8025)
* [jvm-packages] fix executor crashing issue when transforming on xgboost4j-spark-gpu

The API XGBoosterSetParam is not thread-safe. During the transform phase,
XGBoost runs several transform tasks at a time, and each of them sets
the "gpu_id" and "predictor" parameters, so if several tasks (multiple threads)
all call XGBoosterSetParam simultaneously, the memory may be corrupted,
causing SIGSEGV.

This PR first gets the booster from the broadcast, sets the correct gpu_id
and predictor, and then has all transform tasks use the same booster to
do the transform.
2022-06-24 01:18:41 +08:00
Jiaming Yuan
f0c1b842bf
Implement sketching with adapter. (#8019) 2022-06-23 00:03:02 +08:00
Jiaming Yuan
142a208a90
Fix compiler warnings. (#8022)
- Remove/fix unused parameters
- Remove deprecated code in rabit.
- Update dmlc-core.
2022-06-22 21:29:10 +08:00
Bobby Wang
e44a082620
[jvm-packages] update nccl version to 2.12.12-1 (#8015) 2022-06-21 17:34:09 +08:00
Rong Ou
e5ec546da5
[Breaking] Remove rabit support for custom reductions and grow_local_histmaker updater (#7992) 2022-06-21 15:08:23 +08:00
Jiaming Yuan
4a87ea49b8
Reduce regularization for CPU gblinear. (#8013) 2022-06-21 01:05:27 +08:00
Jiaming Yuan
d285d6ba2a
Reduce regularization in GPU gblinear test. (#8010) 2022-06-20 23:55:12 +08:00
Jiaming Yuan
e58e417603
[CI] Fix lintr error. (#8011) 2022-06-20 22:17:14 +08:00
Jiaming Yuan
9b0eb66b78
Fix GPU driver test. (#8008)
* Initialize the training parameter.
2022-06-20 19:37:31 +08:00
Jiaming Yuan
637e42a0c0
Use 22.04 for RMM. (#8001)
22.06 is not released yet.
2022-06-17 04:07:31 +08:00
Jiaming Yuan
bb47fd8c49
[jvm-packages] Change log level for tracker message. (#7968) 2022-06-09 18:15:08 +08:00
Jiaming Yuan
8f8bd8147a
Fix LTR with weighted Quantile DMatrix. (#7975)
* Fix LTR with weighted Quantile DMatrix.

* Better tests.
2022-06-09 01:33:41 +08:00
Jiaming Yuan
1a33b50a0d
Fix compiler warnings. (#7974)
- Remove unused parameters. There are still many warnings that are not yet
addressed. Currently, the warnings in dmlc-core dominate the error log.
- Remove `distributed` parameter from metric.
- Fixes some warnings about signed comparison.
2022-06-06 22:56:25 +08:00
Jiaming Yuan
d48123d23b
Fix rmm build (#7973)
- Optionally switch to c++17
- Use rmm CMake target.
- Workaround compiler errors.
- Fix GPUMetric inheritance.
- Run death tests even if it's built with RMM support.

Co-authored-by: jakirkham <jakirkham@gmail.com>
2022-06-06 20:18:32 +08:00
Philip Hyunsu Cho
1ced638165
Document how to reproduce Docker environment from Jenkins (#7971) 2022-06-04 20:56:53 +09:00
Jiaming Yuan
b90c6d25e8
Implement max_cat_threshold for CPU. (#7957) 2022-06-04 11:02:46 +08:00
Bobby Wang
78694405a6
[jvm-packages] add jni for setting feature name and type (#7966) 2022-06-03 11:09:48 +08:00
Gavin Zhang
6426449c8b
Support IBM i OS (#7920) 2022-06-02 23:38:35 +08:00
Rong Ou
31e6902e43
Support GPU training in the NVFlare demo (#7965) 2022-06-02 21:52:36 +08:00
Jiaming Yuan
6b55150e80
Fix pylint errors. (#7967) 2022-06-02 18:04:46 +08:00
Jiaming Yuan
13b15e07e8
Handle formatted JSON input. (#7953) 2022-06-01 16:20:58 +08:00
Rong Ou
d3429f2ff6
Increase gRPC max receive message size for federated learning (#7958) 2022-06-01 13:21:54 +08:00
Bobby Wang
545fd4548e
[jvm-packages] refactor xgboost read/write (#7956)
1. Removed the duplicated default XGBoost read/write, which was copied from
  Spark 2.3.x.
2. Put some utils into the util package.
2022-06-01 11:38:49 +08:00
Yang Jiandan
27c66f12d1
Set log level to ERROR as trackerProcess has some stderr output (#7952) 2022-05-31 22:54:38 +08:00
Bobby Wang
5a7dc41351
[doc] update doc for dumping model to be json or ubj for jvm packages (#7955) 2022-05-31 14:43:13 +08:00
Rong Ou
80339c3427
Enable distributed GPU training over Rabit (#7930) 2022-05-31 04:09:45 +08:00
Bobby Wang
6275cdc486
[jvm-packages] add format option when saving a model (#7940) 2022-05-30 15:49:59 +08:00
Gyeongjae Choi
cc6d57aa0d
Add minimal emscripten build support (#7954) 2022-05-30 14:11:40 +08:00
Tim Sabsch
7a039e03fe
Fix incomplete type hints for verbose (#7945) 2022-05-30 12:08:24 +08:00
Bobby Wang
fbc3d861bb
[jvm-packages] remove default parameters (#7938) 2022-05-28 10:31:19 +08:00
Philip Hyunsu Cho
47224dd6d3
Use private mirror to host llvm-openmp tarballs (#7950) 2022-05-27 14:56:59 -07:00
Jiaming Yuan
bde4f25794
Handle missing categorical value in CPU evaluator. (#7948) 2022-05-27 14:15:47 +08:00
Philip Hyunsu Cho
2070afea02
[CI] Rotate package repository keys (#7943) 2022-05-26 17:06:46 -07:00
Jiaming Yuan
18cbebaeb9
Unify the cat split storage for CPU. (#7937)
* Unify the cat split storage for CPU.

* Cleanup.

* Workaround.
2022-05-26 04:14:40 -07:00
Daniel Clausen
755d9d4609
[JVM-Packages] Auto-detection of MUSL is replaced by system properties (#7921)
This PR removes auto-detection of MUSL-based Linux systems in favor of system properties the user can set to configure a specific path for a native library.
2022-05-26 10:53:15 +08:00
Jiaming Yuan
606be9e663
Handle missing values in one hot splits. (#7934) 2022-05-24 20:48:41 +08:00
Jiaming Yuan
18a38f7ca0
Refactor for GHistIndex. (#7923)
* Pass sparse page as adapter, which prepares for quantile dmatrix.
* Remove old external memory code like `rbegin` and extra `Init` function.
* Simplify type dispatch.
2022-05-23 23:04:53 +08:00
Jiaming Yuan
d314680a15
Verify shared object version at load. (#7928) 2022-05-23 20:53:30 +08:00
Jiaming Yuan
474366c020
Add convergence test for sparse datasets. (#7922) 2022-05-23 18:07:26 +08:00
Rory Mitchell
f6babc814c
Do not initialise data structures to maximum possible tree size. (#7919) 2022-05-19 19:45:53 +02:00
Philip Hyunsu Cho
6f424d8d6c
[Doc] Warn against loading JSON from external source (#7918) 2022-05-18 17:02:36 -07:00
Jiaming Yuan
f93a727869
Address remaining mypy errors in python package. (#7914) 2022-05-18 22:46:15 +08:00
Jiaming Yuan
edf9a9608e
Fix type conversion warning. (#7916) 2022-05-18 20:14:14 +08:00
Jiaming Yuan
765097d514
Simplify inplace-predict. (#7910)
Pass the `X` as part of Proxy DMatrix instead of an independent `dmlc::any`.
2022-05-18 17:52:00 +08:00
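The user-facing entry point whose internals this simplifies; a minimal sketch:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((100, 4)), rng.random(100)
booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=5)

# No DMatrix construction on the hot path; X is wrapped in a proxy instead.
preds = booster.inplace_predict(X)
```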
Jiaming Yuan
19775ffe15
Use adapter to initialize column matrix. (#7912) 2022-05-18 16:15:12 +08:00
Bobby Wang
5ef33adf68
[jvm-packages] set the correct objective if user doesn't explicitly set it (#7781) 2022-05-18 14:05:18 +08:00
Chengyang
806c92c80b
Add Type Hints for Python Package (#7742)
Co-authored-by: Chengyang Gu <bridgream@gmail.com>
Co-authored-by: Jiamingy <jm.yuan@outlook.com>
2022-05-17 22:14:09 +08:00
Rory Mitchell
71d3b2e036
Fuse gpu_hist all-reduce calls where possible (#7867) 2022-05-17 13:27:50 +02:00
Bobby Wang
b41cf92dc2
[jvm-packages] move dmatrix building into rabit context for cpu pipeline (#7908) 2022-05-17 14:52:25 +08:00
Rong Ou
77d4a53c32
use RabitContext intead of init/finalize (#7911) 2022-05-17 12:15:41 +08:00
Jiaming Yuan
4fcfd9c96e
Fix and cleanup for column matrix. (#7901)
* Fix missed type dispatching for dense columns with missing values.
* Code cleanup to reduce special cases.
* Reduce memory usage.
2022-05-16 21:11:50 +08:00
Bobby Wang
1496789561
[doc] update the doc for jvm model compatibility (#7907) 2022-05-16 14:05:26 +08:00
Sze Yeung
a06d53688c
Correct a mistake in Setting Parameters section (#7905) 2022-05-15 18:56:31 -07:00
Philip Hyunsu Cho
4cd14aee5a
Rename misspelled config parameter for pseudo-Huber (#7904) 2022-05-15 06:38:33 -07:00
Jiaming Yuan
1baad8650c
Small cleanup to Column. (#7898)
* Define forward iterator to hide the internal state.
2022-05-15 12:39:10 +08:00
Jiaming Yuan
ee382c4153
Update news for 1.6.1 (#7877) 2022-05-14 15:38:18 -07:00
Rong Ou
af907e2d0d
Demo of federated learning using NVFlare (#7879)
Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-05-14 22:45:41 +08:00
Bobby Wang
11e46e4bc0
[Breaking][jvm-packages] make classification model be xgboost-compatible (#7896) 2022-05-14 15:43:05 +08:00
Jiaming Yuan
1b6538b4e5
[breaking] Drop single precision histogram (#7892)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-05-13 19:54:55 +08:00
Jiaming Yuan
c8f9d4b6e6
Show libxgboost.so path in build info. (#7893) 2022-05-13 18:08:56 +08:00
Bobby Wang
9fa7ed1743
[Breaking][jvm-packages] remove timeoutRequestWorkers parameter (#7839) 2022-05-13 16:26:25 +08:00
Jiaming Yuan
11d65fcb21
Extract partial sum into an independent function. (#7889) 2022-05-13 14:30:35 +08:00
Jiaming Yuan
db80671d6b
Fix monotone constraint with tuple input. (#7891) 2022-05-13 04:00:03 +08:00
Jiaming Yuan
94ca52b7b7
Fix overflow in prediction size. (#7885) 2022-05-12 02:44:03 +08:00
Jiaming Yuan
8ba4722d04
Remove pyarrow workaround. (#7884) 2022-05-11 20:54:48 +08:00
Philip Hyunsu Cho
65e6d73b95
[CI] Automate artifact fetch step in JVM release process (#7882) 2022-05-11 00:35:22 -07:00
Jiaming Yuan
16ba74d008
Update CUDA version requirement in CMake script. (#7876) 2022-05-09 04:16:22 +08:00
Philip Hyunsu Cho
d2bc0f0f08
Allow loading old models from RDS (#7864) 2022-05-06 22:49:38 -07:00
Amit Bera
1823db53f2
updated winning solution under readme.md (#7862) 2022-05-06 17:38:07 +08:00
Rory Mitchell
7ef54e39ec
Small refactor to categoricals (#7858) 2022-05-05 17:47:02 +02:00
Rong Ou
14ef38b834
Initial support for federated learning (#7831)
Federated learning plugin for xgboost:
* A gRPC server to aggregate MPI-style requests (allgather, allreduce, broadcast) from federated workers.
* A Rabit engine for the federated environment.
* Integration test to simulate federated learning.

Additional follow-ups are needed to address GPU support, better security, privacy, etc.
2022-05-05 21:49:22 +08:00
Jiaming Yuan
46e0bce212
Use maximum category in sketch. (#7853) 2022-05-05 19:56:49 +08:00
Jiaming Yuan
8ab5e13b5d
Fix typo [skip ci] (#7861) 2022-05-04 18:34:45 +08:00
Jiaming Yuan
317d7be6ee
Always use partition based categorical splits. (#7857) 2022-05-03 22:30:32 +08:00
Rory Mitchell
90cce38236
Remove single_precision_histogram for gpu_hist (#7828) 2022-05-03 14:53:19 +02:00
Jiaming Yuan
50d854e02e
[CI] Test with latest RAPIDS. (#7816) 2022-04-30 11:55:10 -07:00
Bobby Wang
1b103e1f5f
[CI] make container be able to re-attached (#7848)
When re-starting the container, it will fail in entrypoint.sh, which
exits when adding an existing group or user
2022-04-29 19:00:35 -07:00
Jiaming Yuan
288c52596c
Define bin type. (#7850) 2022-04-29 19:41:39 +08:00
Michael Allman
f7db16add1
Ignore all Java exceptions when looking for Linux musl support (#7844) 2022-04-28 15:44:30 +08:00
Bobby Wang
a94e1b172e
[jvm-packages] Fix model compatibility (#7845) 2022-04-28 02:05:38 +08:00
Bobby Wang
686caad40c
[jvm-package] remove the coalesce in barrier mode (#7846) 2022-04-27 23:34:22 +08:00
Jiaming Yuan
fdf533f2b9
[POC] Experimental support for l1 error. (#7812)
Support adaptive tree, a feature supported by both sklearn and lightgbm.  The tree leaves are recomputed based on the residuals between labels and predictions after construction.

For l1 error, the optimal value is the median (50th percentile).

This is marked as experimental support for the following reasons:
- The value is not well defined for distributed training, where we might have empty leaves on local workers. Right now I just use the original leaf value for computing the average with other workers, which might cause significant errors.
- Some follow-ups are required for exact, the pruner, and optimization of the quantile function. Also, we need to calculate the initial estimation.
2022-04-26 21:41:55 +08:00
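A sketch of what this experimental support enables; the objective name below is the one that later shipped (`reg:absoluteerror`), so at the time of this PR the spelling or gating may have differed:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((512, 8)), rng.random(512)
dtrain = xgb.DMatrix(X, label=y)

# Leaves are re-fit toward the median of the residuals after construction.
booster = xgb.train({"objective": "reg:absoluteerror"}, dtrain, num_boost_round=10)
```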
Jiaming Yuan
ad06172c6b
Refactor pandas dataframe handling. (#7843) 2022-04-26 18:53:43 +08:00
Bobby Wang
bef1f939ce
[doc] remove the doc about killing SparkContext [skip ci] (#7840) 2022-04-25 19:29:16 +08:00
Bobby Wang
dc2e699656
[Breaking][jvm-packages] Use barrier execution mode (#7836)
With the introduction of the barrier execution mode, we don't need to kill the SparkContext when some xgboost tasks fail. Instead, Spark will handle the errors for us. So in this PR, the `killSparkContextOnWorkerFailure` parameter is deleted.
2022-04-25 17:09:52 +08:00
Bobby Wang
6ece549a90
[doc] update the jvm tutorial to 1.6.1 [skip ci] (#7834) 2022-04-24 14:25:22 +08:00
Jiaming Yuan
332380479b
Avoid warning in np primitive type tests. (#7833) 2022-04-23 02:07:01 +08:00
Bobby Wang
c45665a55a
[jvm-packages] move the dmatrix building into rabit context (#7823)
This fixes the DeviceQuantileDMatrix in a distributed environment.
2022-04-23 00:06:50 +08:00
Jiaming Yuan
f0f76259c9
Remove STRING_TYPES. (#7827) 2022-04-22 19:07:51 +08:00
forestkey
c13a2a3114
[doc] "irrevelant" to "irrelevant" (#7832) 2022-04-22 16:54:30 +08:00
Jiaming Yuan
c70fa502a5
Expose feature_types to sklearn interface. (#7821) 2022-04-21 20:23:35 +08:00
Jiaming Yuan
401d451569
Clear configuration cache. (#7826) 2022-04-21 19:09:54 +08:00
Jiaming Yuan
52d4eda786
Deprecate use_label_encoder in XGBClassifier. (#7822)
* Deprecate `use_label_encoder` in XGBClassifier.

* We have removed the encoder, now prepare to remove the indicator.
2022-04-21 13:14:02 +08:00
Jiaming Yuan
5815df4c46
Remove warning in 1.4. (#7815) 2022-04-20 01:19:09 +08:00
Jiaming Yuan
d0de954af2
v1.6.0 release note. [skip ci] (#7746)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-04-16 16:27:54 +08:00
Jiaming Yuan
5dea21273a
Fix training continuation with categorical model. (#7810)
* Make sure the task is initialized before construction of tree updater.

This is a quick fix meant to be backported to 1.6; for a full fix we should pass the model
param into the tree updater by reference instead.
2022-04-15 18:21:02 +08:00
Bobby Wang
2d83b2ad8f
[jvm-packages] add hostIp and python exec for rabit tracker (#7808) 2022-04-15 16:28:43 +08:00
Bobby Wang
6f032b7152
[doc] fix a typo in jvm/index.rst (#7806) 2022-04-13 17:02:42 -07:00
dependabot[bot]
1bb1913811
Bump hadoop-common from 2.10.1 to 3.2.3 in /jvm-packages/xgboost4j-flink (#7801)
Bumps hadoop-common from 2.10.1 to 3.2.3.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-13 22:24:44 +08:00
Ikko Ashimine
56e4baff7c
[doc] Fix typo in build.rst (#7800)
avaiable -> available
2022-04-13 16:45:26 +08:00
Bobby Wang
3f536b5308
[jvm-packages] fix evaluation when featuresCols is used (#7798) 2022-04-13 12:52:50 +08:00
Bobby Wang
4b00c64d96
[doc] improve xgboost4j-spark-gpu doc [skip ci] (#7793)
Co-authored-by: Sameer Raheja <sameerz@users.noreply.github.com>
2022-04-12 12:02:16 +08:00
Bobby Wang
118192f116
[jvm-packages] xgboost4j-spark should work when featuresCols is specified (#7789) 2022-04-08 13:21:04 +08:00
Bobby Wang
729d227b89
[jvm-packages] remove the dep of com.fasterxml.jackson (#7791) 2022-04-08 13:04:34 +08:00
Bobby Wang
89d6419fd5
[jvm-packages] add doc for xgboost4j-spark-gpu (#7779)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2022-04-07 11:35:01 +08:00
Bobby Wang
2454407f3a
[jvm-packages] unify setFeaturesCol API for XGBoostRegressor (#7784) 2022-04-05 13:35:33 +08:00
Philip Hyunsu Cho
e5ab8f3ebe
[CI] Speed up CPU test pipeline (#7772) 2022-04-01 02:39:04 +08:00
Jiaming Yuan
bcce17e688
Remove text loading in basic walk through demo. (#7753) 2022-04-01 00:59:42 +08:00
giuliohome
c467e90ac1
[doc] Update doc for Kubernetes Operator (#7777) 2022-03-31 23:10:49 +08:00
Jiaming Yuan
fd78af404b
Drop support for deprecated CUDA architectures. (#7774)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-03-31 21:42:23 +08:00
Jiaming Yuan
02dd7b6913
Remove use of distutils. (#7770)
distutils is deprecated and replaced by other stdlib constructs.
2022-03-31 19:03:10 +08:00
Philip Hyunsu Cho
e8eff3581b
[CI] Enable faulthandler to show details when 0xC0000005 error occurs (#7771) (#7775) 2022-03-31 17:40:06 +08:00
Jiaming Yuan
6fa1afdffc
Avoid compiler warning about comparison. (#7768) 2022-03-31 08:52:14 +08:00
Jiaming Yuan
522636cb52
Bump version. (#7769) 2022-03-31 06:33:22 +08:00
Jiaming Yuan
9150fdbd4d
Support pandas nullable types. (#7760) 2022-03-30 08:51:52 +08:00
Jiaming Yuan
d4796482b5
Fix failures on R hub and Win builder. (#7763)
* Update date.
* Workaround amalgamation build with clang. (SimpleDMatrix instantiation)
* Workaround compiler error with driver push.
* Revert autoconf requirement.
* Fix model IO on 32-bit environment. (i386)
* Clarify the function name.
2022-03-30 07:14:33 +08:00
Jiaming Yuan
a50b84244e
Cleanup configuration for constraints. (#7758) 2022-03-29 04:22:46 +08:00
Jiaming Yuan
3c9b04460a
Move num_parallel_tree to model parameter. (#7751)
The size of the forest should be a property of the model itself instead of a training
hyper-parameter.
2022-03-29 02:32:42 +08:00
Jiaming Yuan
8b3ecfca25
Mitigate flaky tests. (#7749)
* Skip non-increasing test with external memory when subsample is used.
* Increase bin numbers for boost from prediction test. This mitigates the effect of
  non-deterministic partitioning.
2022-03-28 21:20:50 +08:00
Christian Marquardt
39c5616af2
Added CPPFLAGS and LDFLAGS to the testing for OpenMP during R installation from source. (#7759) 2022-03-28 19:14:07 +08:00
Haoming Chen
b37ff3d492
Fix cox objective test by using XGBOOST_PARALLEL_STABLE_SORT (#7756) 2022-03-26 17:58:30 +08:00
Jiaming Yuan
b3ba0e8708
Check cupy lazily. (#7752) 2022-03-26 06:09:58 +08:00
Jiaming Yuan
af0cf88921
Workaround compiler error. (#7745) 2022-03-25 17:05:14 +08:00
Jiaming Yuan
64575591d8
Use context in SetInfo. (#7687)
* Use the name `Context`.
* Pass a context object into `SetInfo`.
* Add context to proxy matrix.
* Add context to iterative DMatrix.

This is to remove the use of the default number of threads during `SetInfo`, as a follow-up on
removing the global omp variable, while preparing for CUDA stream semantics. Currently, XGBoost
uses the legacy CUDA stream; we will gradually remove it in favor of non-blocking streams.
2022-03-24 22:16:26 +08:00
Oleksandr Pryimak
f5b20286e2
[jvm-packages] Launch dev jvm image under my user (#4676)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-03-23 10:39:51 -07:00
Chengyang
c92ab2ce49
Add type hints to core.py (#7707)
Co-authored-by: Chengyang Gu <bridgream@gmail.com>
Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-03-23 21:12:14 +08:00
Philip Hyunsu Cho
66cb4afc6c
Update install doc (#7747) 2022-03-23 17:20:01 +08:00
Aging
f20ffa8db3
Update JVM dev build Dockerfile and shell script (#6792)
Co-authored-by: Zhuo Yuzhen <yuzhuo@paypal.com>
2022-03-22 16:39:10 -07:00
Jiaming Yuan
4d81c741e9
External memory support for hist (#7531)
* Generate column matrix from gHistIndex.
* Avoid synchronization with the sparse page once the cache is written.
* Cleanups: Remove member variables/functions, change the update routine to look like approx and gpu_hist.
* Remove pruner.
2022-03-22 00:13:20 +08:00
Jiaming Yuan
cd55823112
Demo for using custom objective with multi-target regression. (#7736) 2022-03-20 17:44:25 +08:00
Jiaming Yuan
996cc705af
Small cleanup to hist tree method. (#7735)
* Remove special optimization using number of bins.
* Remove 1-based index for column sampling.
* Remove data layout.
* Unify update prediction cache.
2022-03-20 03:44:55 +08:00
Jiaming Yuan
718472dbe2
[CI] Upgrade GitHub action Windows workers. (#7739) 2022-03-20 01:44:33 +08:00
Jiaming Yuan
9a400731d9
Replace device sync with stream sync. (#7737) 2022-03-19 23:22:23 +08:00
Jiaming Yuan
da351621a1
[R] Fix parsing decision stump. (#7689) 2022-03-17 01:08:22 +08:00
Jiaming Yuan
e78a38b837
Sort sparse page index when constructing DMatrix. (#7731) 2022-03-16 18:01:05 +08:00
Xiaochang Wu
613ec36c5a
Support building SimpleDMatrix from Arrow data format (#7512)
* Integrate with Arrow C data API.
* Support Arrow dataset.
* Support Arrow table.

Co-authored-by: Xiaochang Wu <xiaochang.wu@intel.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
Co-authored-by: Zhang Zhang <zhang.zhang@intel.com>
2022-03-15 13:25:19 +08:00
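Editorial sketch, under the assumption that DMatrix accepts a pyarrow Table directly after this change (the exact entry point may differ by version):

```python
import pyarrow as pa
import xgboost as xgb

table = pa.table({"f0": [1.0, 2.0, 3.0], "f1": [4.0, 5.0, 6.0]})
dtrain = xgb.DMatrix(table, label=[0.0, 1.0, 0.0])
print(dtrain.num_row(), dtrain.num_col())
```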
William Hicks
6b6849b001
Correct xgboost-config directory for inclusion in other projects (#7730) 2022-03-15 03:18:44 +08:00
Jiaming Yuan
98d6faefd6
Implement slope for Pseudo-Huber. (#7727)
* Add objective and metric.
* Some refactoring for CPU/GPU dispatching using linalg module.
2022-03-14 21:42:38 +08:00
Daniel Clausen
4dafb5fac8
[JVM-Packages] Add support for detecting musl-based Linux (#7624)
Co-authored-by: Marc Philipp <marc@gradle.com>
2022-03-14 00:37:27 +08:00
Haoming Chen
04fc575c0e
Run tests in a temporary directory (#7723)
Fix some tests to run in a temporary directory in case the root
directory is not writable. Note that most of the tests are already
running in a temporary directory, so this PR just makes them
consistent.
2022-03-12 21:24:36 +08:00
Haoming Chen
55463b76c1
Initialize TreeUpdater ctx_ with nullptr (#7722) 2022-03-10 22:33:32 +08:00
Jiaming Yuan
a62a3d991d
[dask] prediction with categorical data. (#7708) 2022-03-10 00:21:48 +08:00
Pradipta Ghosh
68b6d6bbe2
Fix for Feature shape mismatch error (#7715) 2022-03-03 21:36:29 +08:00
Cheng Li
a92e0f6240
multi groups in the constraints (#7711) 2022-03-01 18:10:15 +08:00
Jiaming Yuan
1d468e20a4
Optimize GPU evaluation function for categorical data. (#7705)
* Use transform and cache.
2022-02-28 17:46:29 +08:00
Jiaming Yuan
18a4af63aa
Update documents and tests. (#7659)
* Revise documents after recent refactoring and cat support.
* Add tests for behavior of max_depth and max_leaves.
2022-02-26 03:57:47 +08:00
Jiaming Yuan
5eed2990ad
Fix file descriptor leak. (#7704) 2022-02-25 17:49:33 +08:00
Philip Hyunsu Cho
1b25dd59f9
Use CUDA 11 in clang-tidy (#7701)
* Show command args when clang-tidy fails

* Add option to specify CUDA args

* Use clang-tidy 11

* [CI] Use CUDA 11
2022-02-24 15:15:07 -08:00
Jiaming Yuan
83a66b4994
Support categorical data for hist. (#7695)
* Extract partitioner from hist.
* Implement categorical data support by passing the gradient index directly into the partitioner.
* Organize/update document.
* Remove code for negative hessian.
2022-02-25 03:47:14 +08:00
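A minimal sketch of using the new support (pandas category dtype with the hist tree method):

```python
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({
    "color": pd.Categorical(["red", "green", "red", "blue"]),
    "size": [1.0, 2.0, 3.0, 4.0],
})
dtrain = xgb.DMatrix(df, label=[0, 1, 0, 1], enable_categorical=True)
booster = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=5)
```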
Jiaming Yuan
f60d95b0ba
[R] Construct booster object in load.raw. (#7686) 2022-02-24 10:06:18 +08:00
Bobby Wang
89aa8ddf52
[jvm-packages] fix the prediction issue for multi:softmax (#7694) 2022-02-24 01:09:45 +08:00
Jiaming Yuan
6762c45494
Small cleanup to gradient index and hist. (#7668)
* Code comments.
* Const accessor to index.
* Remove some weird variables in the `Index` class.
* Simplify the `MemStackAllocator`.
2022-02-23 11:37:21 +08:00
Jiaming Yuan
49c74a5369
Update R package description. (#7691)
* Change role.
* Remove cmake file when building the package.
2022-02-23 08:36:37 +08:00
Bobby Wang
e3e6de5ed9
[jvm-packages] unify the set features API (#7692)
xgboost4j-spark provides two sets of APIs for setting features, one for CPU and another for GPU, which may cause confusion.

This PR removes the GPU API and adds an overloaded CPU function setFeaturesCol that accepts Array[String] parameters.
2022-02-23 03:37:25 +08:00
Jiaming Yuan
c859764d29
[doc] Clarify that states in callbacks are mutated. (#7685)
* Fix copy for cv.  This prevents inserting default callbacks into the input list.
* Clarify the behavior of callbacks in training/cv.
* Fix typos in doc.
2022-02-22 11:45:00 +08:00
Jiaming Yuan
584bae1fc6
Fix document build with scikit-learn (#7684)
* Require sphinx >= 4.4 for RTD.

* Install sklearn.
2022-02-22 08:58:54 +08:00
Jiaming Yuan
e56d1779e1
Require Python 3.7. (#7682)
* Update setup.py.
2022-02-21 05:46:48 +08:00
Jiaming Yuan
549f3bd781
Honor CPU counts from CFS. (#7654) 2022-02-21 03:13:26 +08:00
Jiaming Yuan
671b3c8d8e
Fix typo. (#7680) 2022-02-20 03:42:47 +08:00
Jiaming Yuan
b2341eab0c
[R] Fix broken links. (#7670) 2022-02-20 00:55:48 +08:00
Bobby Wang
131858e7cb
[jvm-packages] Do not repartition when nWorker = 1 (#7676) 2022-02-19 21:45:54 +08:00
Jiaming Yuan
f08c5dcb06
Cleanup some pylint errors. (#7667)
* Cleanup some pylint errors.

* Cleanup pylint errors in rabit modules.
* Make data iter an abstract class and cleanup private access.
* Cleanup no-self-use for booster.
2022-02-19 18:53:12 +08:00
Jiaming Yuan
b76c5d54bf
Define export symbols in callback module. (#7665) 2022-02-19 18:52:41 +08:00
Jiaming Yuan
7366d3b20c
Ensure models with categorical splits don't use old binary format. (#7666) 2022-02-19 08:05:28 +08:00
Jiaming Yuan
14d61b0141
[doc] Update document for building from source. (#7664)
- Mention standard install command for R package.
- Remove repeated "get source" step.
- Remove troubleshooting on Windows.  It's outdated considering VS 2022 is already out.
2022-02-19 04:57:03 +08:00
Jiaming Yuan
d625dc2047
Work around nvcc error. (#7673) 2022-02-19 01:41:46 +08:00
Jiaming Yuan
3877043d41
Avoid print for R package. (#7672) 2022-02-18 08:06:24 +08:00
Jiaming Yuan
711f7f3851
Avoid std::terminate for R package. (#7661)
This is part of CRAN policies.
2022-02-17 01:27:20 +08:00
Jiaming Yuan
12949c6b31
[R] Implement feature weights. (#7660) 2022-02-16 22:20:52 +08:00
Philip Hyunsu Cho
0149f81a5a
[CI] Fix S3 upload (#7662) 2022-02-16 01:35:27 -08:00
Jiaming Yuan
93eebe8664
[doc] Fix broken link. [skip ci] (#7655) 2022-02-15 14:07:34 +08:00
Jiaming Yuan
0da7d872ef
[doc] Update for prediction. (#7648) 2022-02-15 05:01:55 +08:00
Jiaming Yuan
0d0abe1845
Support optimal partitioning for GPU hist. (#7652)
* Implement `MaxCategory` in quantile.
* Implement partition-based split for GPU evaluation.  Currently, it's based on the existing evaluation function.
* Extract an evaluator from GPU Hist to store the needed states.
* Added some CUDA stream/event utilities.
* Update document with references.
* Fixed a bug in approx evaluator where the number of data points is less than the number of categories.
2022-02-15 03:03:12 +08:00
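For background on the partition-based split above: categories are sorted by their gradient-to-hessian ratio, which reduces the search over category subsets to a scan over prefixes of that ordering (the approach popularized by LightGBM). A minimal Python sketch of the idea, with illustrative names and regularization term lam; the actual implementation is a CUDA kernel:

    # Sketch of a partition-based categorical split, assuming per-category
    # gradient/hessian sums were already accumulated for one node and feature.
    def best_partition(grad_sum, hess_sum, lam=1.0):
        # Sort categories by gradient-to-hessian ratio; the best partition is
        # then a prefix of this ordering.
        order = sorted(range(len(grad_sum)),
                       key=lambda c: grad_sum[c] / (hess_sum[c] + lam))
        G, H = sum(grad_sum), sum(hess_sum)
        best_gain, best_prefix = float("-inf"), []
        gl = hl = 0.0
        for i, c in enumerate(order[:-1]):  # keep the right child non-empty
            gl += grad_sum[c]
            hl += hess_sum[c]
            gain = (gl * gl / (hl + lam)
                    + (G - gl) ** 2 / (H - hl + lam)
                    - G * G / (H + lam))
            if gain > best_gain:
                best_gain, best_prefix = gain, order[:i + 1]
        return best_prefix, best_gain

Categories in the returned prefix go to one child and the remaining categories to the other.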
Jiaming Yuan
2369d55e9a
Add tests for prediction cache. (#7650)
* Extract the test from approx for other tree methods.
* Add note on how it works.
2022-02-15 00:28:00 +08:00
Jiaming Yuan
5cd1f71b51
[dask] Improve configuration for port. (#7645)
- Try port 0 to let the OS return an available port.
- Add port configuration.
2022-02-14 21:34:34 +08:00
Jiaming Yuan
b52c4e13b0
[dask] Fix empty partition with pandas input. (#7644)
An empty partition is different from an empty dataset.  In the former case, each worker holds
a non-empty dask collection, but the collection might contain empty partitions (see the sketch below).
2022-02-14 19:35:51 +08:00
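To make the distinction concrete, a small sketch using only dask APIs (the fix itself lives in xgboost's dask glue code):

    import pandas as pd
    import dask.dataframe as dd

    df = pd.DataFrame({"x": range(4), "y": range(4)})
    # A non-empty collection that nevertheless contains an empty partition.
    ddf = dd.concat([dd.from_pandas(df, npartitions=2),
                     dd.from_pandas(df.iloc[0:0], npartitions=1)])
    print(ddf.map_partitions(len).compute())  # partition sizes: 2, 2, 0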
Jiaming Yuan
1f020a6097
Add maintainer for R package. (#7649) 2022-02-12 23:45:30 +08:00
Jiaming Yuan
1441a6cd27
[CI] Update R cache. (#7646) 2022-02-11 19:50:11 +08:00
Jiaming Yuan
2775c2a1ab
Prepare external memory support for hist. (#7638)
This PR prepares the GHistIndexMatrix to host the column matrix used by the hist tree method, by accepting a sparse_threshold parameter.

Some cleanups are made to ensure the correct batch param is passed into DMatrix, along with additional tests for the correctness of SimpleDMatrix.
2022-02-10 16:58:02 +08:00
dependabot[bot]
87c01f49d8
Bump hadoop-common from 2.7.3 to 2.10.1 in /jvm-packages/xgboost4j-flink (#7641)
Bumps hadoop-common from 2.7.3 to 2.10.1.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-09 17:07:35 -08:00
Jiaming Yuan
fe4ce920b2
[dask] Cleanup dask module. (#7634)
* Add a new utility for mapping function onto workers.
* Unify the type for feature names.
* Clean up the iterator.
* Fix prediction with DaskDMatrix worker specification.
* Fix base margin with DeviceQuantileDMatrix.
* Support vs 2022 in setup.py.
2022-02-08 20:41:46 +08:00
Jiaming Yuan
926af9951e
Add missing train parameter for sklearn interface. (#7629)
Some other parameters are still missing and rely on **kwargs, for instance the
parameters from dart.
2022-02-08 13:20:19 +08:00
Jiaming Yuan
3e693e4f97
[dask] Fix nthread config with dask sklearn wrapper. (#7633) 2022-02-08 06:38:32 +08:00
Ed Shee
d152c59a9c
fixed broken link to Seldon XGBoost server (#7628) 2022-02-05 01:03:29 +08:00
Philip Hyunsu Cho
34a238ca98
[CI] Clean up Python wheel build pipeline (#7626)
* [CI] Always upload artifacts to [branch_name]/

* [CI] Move detailed setup inside build_python_wheels.sh

* Fix typo
2022-02-03 00:55:44 -08:00
Philip Hyunsu Cho
f6e6d0b2c0
[CI] Build Python wheels for MacOS (x86_64 and arm64) (#7621)
* Build Python wheels for OSX (x86_64 and arm64)

* Use Conda's libomp when running Python tests

* fix

* Add comment to explain CIBW_TARGET_OSX_ARM64

* Update release script

* Add comments in build_python_wheels.sh

* Document wheel pipeline
2022-02-02 17:35:48 -08:00
Philip Hyunsu Cho
271a7c5d43
[Doc] fix typo in install doc (#7623) 2022-01-31 13:35:56 -08:00
Philip Hyunsu Cho
c621775f34
Replace all uses of deprecated function sklearn.datasets.load_boston (#7373)
* Replace all uses of deprecated function sklearn.datasets.load_boston

* More renaming

* Fix bad name

* Update assertion

* Fix n boosted rounds.

* Avoid over regularization.

* Rebase.

* Avoid over regularization.

* Whac-a-mole

Co-authored-by: fis <jm.yuan@outlook.com>
2022-01-30 04:27:57 -08:00
Philip Hyunsu Cho
b4340abf56
Add special handling for multi:softmax in sklearn predict (#7607)
* Add special handling for multi:softmax in sklearn predict

* Add test coverage
2022-01-29 15:54:49 -08:00
david-cortes
7f738e7f6f
[R] Accept CSR data for predictions (#7615) 2022-01-30 00:54:57 +08:00
Michael Chirico
549bd419bb
use exit hook to remove temp file (#7611)
This guarantees the removal will trigger for unexpected early exits.
2022-01-29 16:06:52 +08:00
Philip Hyunsu Cho
f21301c749
[Doc] Add instruction to install XGBoost for Apple Silicon using Conda (#7612) 2022-01-28 01:06:39 -08:00
Jiaming Yuan
81210420c6
Remove omp_get_max_threads (#7608)
This is the last PR for removing the omp global variable.

* Add context object to the `DMatrix`.  This bridges `DMatrix` with https://github.com/dmlc/xgboost/issues/7308.
* Require context to be available at the construction time of booster.
* Add `n_threads` support for R csc DMatrix constructor.
* Remove `omp_get_max_threads` in R glue code.
* Remove threading utilities that rely on omp global variable.
2022-01-28 16:09:22 +08:00
Philip Hyunsu Cho
028bdc1740
[R] Fix typo in docstring (#7606) 2022-01-26 23:33:25 +08:00
Jiaming Yuan
e060519d4f
Avoid regenerating the gradient index for approx. (#7591) 2022-01-26 21:41:30 +08:00
Jiaming Yuan
5d7818e75d
Remove omp_get_max_threads in tree updaters. (#7590) 2022-01-26 19:55:47 +08:00
Jiaming Yuan
24789429fd
Support latest pandas Index type. (#7595) 2022-01-26 18:20:10 +08:00
AJ Schmidt
511805c981
Compress fatbins (#7601)
* compress CUDA device code

Co-authored-by: ptaylor <paul.e.taylor@me.com>
2022-01-25 18:30:59 +08:00
Jiaming Yuan
6967ef7267
Remove omp_get_max_threads in objective. (#7589) 2022-01-24 04:35:49 +08:00
Jiaming Yuan
5817840858
Remove omp_get_max_threads in data. (#7588) 2022-01-24 02:44:07 +08:00
Jiaming Yuan
f84291c1e1
Fix max_cat_to_onehot doc annotation [skip ci] (#7592) 2022-01-23 16:33:23 +08:00
Jiaming Yuan
d262503781
[R] Implement new save raw in R. (#7571) 2022-01-22 20:55:47 +08:00
Jiaming Yuan
ef4dae4c0e
[dask] Add scheduler address to dask config. (#7581)
- Add user configuration.
- Bring back the logic of using the scheduler address from dask.  This was removed when we were trying to support GKE; now we bring it back and let xgboost try it if the direct guess or the host IP from user config fails.
2022-01-22 01:56:32 +08:00
Jiaming Yuan
5ddd4a9d06
Small cleanup to tests. (#7585)
* Use random port in dask tests to avoid warnings for occupied port.
* Increase the difficulty of AUC tests.
2022-01-21 06:26:57 +00:00
Philip Hyunsu Cho
9fd510faa5
[CI] Clarify steps for publishing artifacts to Maven Central (#7582) 2022-01-20 14:23:07 -08:00
Jiaming Yuan
529cf8a54a
Configure cub version automatically. (#7579)
Note that when the cub bundled with CUDA is being used, XGBoost performs checks on input size
instead of using the internal cub function to accept inputs larger than the maximum integer.
2022-01-20 19:49:26 +08:00
Jiaming Yuan
ac7a36367c
[jvm-packages] Implement new save_raw in jvm-packages. (#7570)
* New `toByteArray` that accepts a parameter for format.
2022-01-19 16:00:14 +08:00
Jiaming Yuan
b4ec1682c6
Update document for multi output and categorical. (#7574)
* Group together categorical related parameters.
* Update documents about multioutput and categorical.
2022-01-19 04:35:17 +08:00
Jiaming Yuan
dac9eb13bd
Implement new save_raw in Python. (#7572)
* Expose the new C API function to Python.
* Remove old document and helper script.
* Small optimization to the `save_raw` and Json ctors.
2022-01-19 02:27:51 +08:00
Jiaming Yuan
9f20a3315e
Test with latest numpy. (#7573) 2022-01-19 00:46:23 +08:00
Jiaming Yuan
bb56bb9a13
Fix merge conflict. (#7577) 2022-01-18 23:01:34 +08:00
Jiaming Yuan
cc06fab9a7
Support distributed CPU env for categorical data. (#7575)
* Add support for cat data in sketch allreduce.
* Share tests between CPU and GPU.
2022-01-18 21:56:07 +08:00
Jiaming Yuan
deab0e32ba
Validate out of range categorical value. (#7576)
* Use float in CPU categorical set to preserve the input value.
* Check out of range values.
2022-01-18 20:16:19 +08:00
Jiaming Yuan
d6ea5cc1ed
Cover approx tree method for categorical data tests. (#7569)
* Add tree to df tests.
* Add plotting tests.
* Add histogram tests.
2022-01-16 11:31:40 +08:00
Jiaming Yuan
465dc63833
Fix tree param feature type. (#7565) 2022-01-16 04:46:29 +08:00
Jiaming Yuan
a1bcd33a3b
[breaking] Change internal model serialization to UBJSON. (#7556)
* Use typed array for models.
* Change the memory snapshot format.
* Add new C API for saving to raw format.
2022-01-16 02:11:53 +08:00
Jiaming Yuan
13b0fa4b97
Implement get_group. (#7564) 2022-01-16 02:07:42 +08:00
Jiaming Yuan
52277cc3da
Rename build info function to be consistent with rest of the API. (#7553) 2022-01-14 00:39:28 +08:00
Jiaming Yuan
e94b766310
Fix early stopping with linear model. (#7554) 2022-01-13 21:53:06 +08:00
Jiaming Yuan
e5e47c3c99
Clarify the behavior of invalid categorical value handling. (#7529) 2022-01-13 16:11:52 +08:00
Philip Hyunsu Cho
20c0d60ac7
Restore functionality of max_depth=0 in hist (#7551)
* Restore functionality of max_depth=0 in hist

* Add test case
2022-01-11 01:37:44 +08:00
Jiaming Yuan
2db808021d
Silence some warnings for unused variables. (#7548) 2022-01-11 01:16:26 +08:00
Jiaming Yuan
c635d4c46a
Implement ubjson. (#7549)
* Implement ubjson.

This is a partial implementation of UBJSON with support for typed arrays.  Some missing
features are `f64`, typed object, and the no-op.
2022-01-10 23:24:23 +08:00
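For reference, a strongly typed UBJSON array is the marker `[` followed by `$` (element type), `#` (count), and the raw payload, with no closing `]`. A minimal sketch of that encoding in Python, illustrative only and not the C++ implementation:

    import struct

    def encode_f32_array(values):
        # '[' '$' 'd' declares an array of float32; '#' 'i' <byte> gives the
        # element count as an int8. Numeric payloads are big-endian per spec.
        header = b"[$d#i" + struct.pack(">b", len(values))
        return header + b"".join(struct.pack(">f", v) for v in values)

    print(encode_f32_array([1.0, 2.0, 3.0]).hex())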
Jiaming Yuan
001503186c
Rewrite approx (#7214)
This PR rewrites the approx tree method to use the codebase from hist for better performance and code sharing.

The rewrite has many benefits:
- Support for both `max_leaves` and `max_depth`.
- Support for `grow_policy`.
- Support for mono constraint.
- Support for feature weights.
- Support for easier bin configuration (`max_bin`).
- Support for categorical data.
- Faster performance for most datasets (often many times faster).
- Support for prediction cache.
- Significantly better performance for external memory.
- Unites the code base between approx and hist.
2022-01-10 21:15:05 +08:00
Jiaming Yuan
ed95e77752
[jvm-packages] Update JNI header. (#7550) 2022-01-10 14:59:40 +08:00
Jiaming Yuan
91c1a1c52f
Fix index type for bitfield. (#7541) 2022-01-05 19:23:29 +08:00
Jiaming Yuan
0df2ae63c7
Fix num_boosted_rounds for linear model. (#7538)
* Add note.

* Fix n boosted rounds.
2022-01-05 03:29:33 +08:00
Jiaming Yuan
28af6f9abb
Remove omp_get_max_threads in gbm and linear. (#7537)
* Use ctx in gbm.

* Use ctx threads in gbm and linear.
2022-01-05 03:28:52 +08:00
Jiaming Yuan
eea094e1bc
Remove some warnings from clang. (#7533)
* Unused variable.
* Unnecessary virtual function.
2022-01-05 03:28:21 +08:00
Jiaming Yuan
ec56d5869b
[doc] Include dask examples into doc. (#7530) 2022-01-05 03:27:22 +08:00
Jiaming Yuan
54582f641a
[doc] Use cross references in sphinx doc. (#7522)
* Use cross references instead of URL.
* Fix auto doc for callback.
2022-01-05 03:21:25 +08:00
Jiaming Yuan
eb1efb54b5
Define feature_names_in_. (#7526)
* Define `feature_names_in_`.
* Raise attribute error if it's not defined.
2022-01-05 01:35:34 +08:00
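A small usage sketch of the new attribute, assuming a build that includes this change:

    import pandas as pd
    import xgboost as xgb

    df = pd.DataFrame({"age": [1.0, 2.0, 3.0], "income": [3.0, 1.0, 2.0]})
    reg = xgb.XGBRegressor(n_estimators=2).fit(df, [0.0, 1.0, 0.0])
    print(reg.feature_names_in_)  # ['age' 'income']
    # Fitting on a plain ndarray leaves the attribute undefined, and
    # accessing it then raises AttributeError.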
Jiaming Yuan
8f0a42a266
Initial support for multi-label classification. (#7521)
* Add support in sklearn classifier.
2022-01-04 23:58:21 +08:00
Jiaming Yuan
68cdbc9c16
Remove omp_get_max_threads in CPU predictor. (#7519)
This is part of the ongoing effort to remove the dependency on global omp variables.
2022-01-04 22:12:15 +08:00
Ikko Ashimine
5516281881
Fix typo in tree_model.cc (#7539)
occurance -> occurrence
2021-12-30 20:12:25 +08:00
Randall Britten
a4a0ebb85d
[doc] Lowercase omega for per tree complexity (#7532)
As suggested on issue #7480
2021-12-29 23:05:54 +08:00
Louis Desreumaux
3886c3dd8f
Remove macro definitions of snprintf and vsnprintf (#7536) 2021-12-26 08:05:59 +08:00
Ginko Balboa
29bfa94bb6
Fix external memory with gpu_hist and subsampling combination bug. (#7481)
Instead of accessing data from the `original_page_`, access the data from the first page of the available batch.

fix #7476

Co-authored-by: jiamingy <jm.yuan@outlook.com>
2021-12-24 11:15:35 +08:00
Jiaming Yuan
7f399eac8b
Use double for GPU Hist node sum. (#7507) 2021-12-22 08:41:35 +08:00
Jiaming Yuan
eabec370e4
[R] Fix single sample prediction. (#7524) 2021-12-21 14:11:07 +08:00
Bobby Wang
e8c1eb99e4
[jvm-package] Clean up the legacy gpu support tests (#7523) 2021-12-21 09:15:51 +08:00
Xiaochang Wu
59bd1ab17e
Skip callback demo test if matplotlib is not installed (#7520) 2021-12-19 08:20:38 +08:00
Jiaming Yuan
58a6723eb1
Initial support for multioutput regression. (#7514)
* Add num target model parameter, which is configured from input labels.
* Change elementwise metric and indexing for weights.
* Add demo.
* Add tests.
2021-12-18 09:28:38 +08:00
Jiaming Yuan
9ab73f737e
Extract Sketch Entry from hist maker. (#7503)
* Extract Sketch Entry from hist maker.

* Add a new sketch container for sorted inputs.
* Optimize bin search.
2021-12-18 05:36:56 +08:00
Qingyun Wu
b4a1236cfc
[doc] Update the link to the tuning example in FLAML 2021-12-17 14:31:00 +08:00
Bobby Wang
24e25802a7
[jvm-packages] Add Rapids plugin support (#7491)
* Add GPU pre-processing pipeline.
2021-12-17 13:11:12 +08:00
Jiaming Yuan
5b1161bb64
Convert labels into tensor. (#7456)
* Add a new ctor to tensor for `initializer_list`.
* Change labels from host device vector to tensor.
* Rename the field from `labels_` to `labels` since it's a public member.
2021-12-17 00:58:35 +08:00
Jiaming Yuan
6f8a4633b7
Fix Python typehint with upgraded mypy. (#7513) 2021-12-16 23:08:08 +08:00
Jiaming Yuan
70b12d898a
[dask] Fix ddqdm with empty partition. (#7510)
* Fix empty partition.

* Workaround.
2021-12-16 20:37:29 +08:00
Jiaming Yuan
a512b4b394
[doc] Promote dask from experimental. [skip ci] (#7509) 2021-12-16 14:17:06 +08:00
Jiaming Yuan
05497a9141
[dask] Fix asyncio. (#7508) 2021-12-13 01:48:25 +08:00
Jiaming Yuan
01152f89ee
Remove unused parameters. (#7499) 2021-12-09 14:24:51 +08:00
Harvey
1864fab592
Minor edits to Parameters doc page. (#7500)
* bost -> both

* doc improvement

* use original filename

* syntax highlight false

* missed a few highlights
2021-12-07 15:46:44 +08:00
Jiaming Yuan
021f8bf28b
Fix pylint. (#7498) 2021-12-07 13:23:30 +08:00
Jiaming Yuan
eee527d264
Add approx partitioner. (#7467) 2021-11-27 15:22:06 +08:00
Jiaming Yuan
85cbd32c5a
Add range-based slicing to tensor view. (#7453) 2021-11-27 13:42:36 +08:00
danmarinescu
6f38f5affa
Updated CMake version requirement in build.rst (#7487)
The documentation states that to build from source you need CMake 3.13 or higher. However, according to https://github.com/dmlc/xgboost/blob/master/CMakeLists.txt#L1, CMake 3.14 or higher is required.
2021-11-27 09:58:01 +08:00
Jiaming Yuan
557ffc4bf5
Reduce base margin to 2 dim for now. (#7455) 2021-11-27 00:46:13 +08:00
Jiaming Yuan
bf7bb575b4
Test CPU histogram with cat data. (#7465) 2021-11-27 00:43:28 +08:00
Bobby Wang
24be04e848
[jvm-packages] Add DeviceQuantileDMatrix to Scala binding (#7459) 2021-11-24 20:23:18 +08:00
Philip Hyunsu Cho
619c450a49
[CI] Add missing step extract_branch (#7479) 2021-11-24 17:35:59 +08:00
Jiaming Yuan
820e1c01ef
Fix macos package upload. (#7475)
* Split up the tests.
2021-11-24 03:43:49 +08:00
Jiaming Yuan
488f12a996
Fix github macos package upload. (#7474) 2021-11-24 00:29:11 +08:00
Jiaming Yuan
c024c42dce
Modernize XGBoost Python document. (#7468)
* Use sphinx gallery to integrate examples.
* Remove mock objects.
* Add dask doc inventory.
2021-11-23 23:24:52 +08:00
Philip Hyunsu Cho
96a9848c9e
[CI] Fix continuous delivery pipeline for MacOS (#7472) 2021-11-23 22:22:08 +08:00
Jiaming Yuan
b124a27f57
Support scipy sparse in dask. (#7457) 2021-11-23 16:45:36 +08:00
Jiaming Yuan
5262e933f7
Remove unnecessary constexpr. (#7466) 2021-11-23 16:42:08 +08:00
Philip Hyunsu Cho
0c67685e43
[CI] Add a helper script to aid Maven release (#7470)
* [CI] Add a helper script to aid Maven release

* Move script to dev/ [skip ci]

* Update command [skip ci]
2021-11-23 00:11:07 -08:00
Harvey
0552ca8021
Fix typo (#7469) 2021-11-23 08:58:45 +08:00
Jiaming Yuan
176110a22d
Support external memory in CPU histogram building. (#7372) 2021-11-23 01:13:33 +08:00
Jiaming Yuan
d33854af1b
[Breaking] Accept multi-dim meta info. (#7405)
This PR changes base_margin into a 3-dim array, with one dimension reserved for multi-target classification. Also, a breaking change is made to binary serialization due to the extra dimension, along with a fix for saving the feature weights. Lastly, it unifies the prediction initialization between CPU and GPU. After this PR, the meta info setters in Python will be based on the array interface.
2021-11-18 23:02:54 +08:00
Jiaming Yuan
9fb4338964
Add test for eta and mitigate float error. (#7446)
* Add eta test.
* Don't skip test.
2021-11-18 20:42:48 +08:00
Bobby Wang
7cfb310eb4
Rework transform (#7440)
extract the common part of transform code from XGBoostClassifier
and XGBoostRegressor
2021-11-18 15:48:57 +08:00
Philip Hyunsu Cho
2adf222fb2
[CI] CI cost saving (#7407)
* [CI] Drop CUDA 10.1; Require 11.0

* Change NCCL version

* Use CUDA 10.1 for clang-tidy, for now

* Remove JDK 11 and 12

* Fix NCCL version

* Don't require 11.0 just yet, until clang-tidy is fixed

* Skip MultiClassesSerializationTest.GpuHist
2021-11-17 21:02:20 -08:00
Jiaming Yuan
b0015fda96
Fix R CRAN failures. (#7404)
* Remove hist builder dtor.

* Initialize values.

* Tolerance.

* Remove the use of nthread in col maker.
2021-11-16 10:51:12 +08:00
Jiaming Yuan
55ee272ea8
Extend array interface to handle ndarray. (#7434)
* Extend array interface to handle ndarray.

The `ArrayInterface` class is extended to support multi-dim array inputs. Previously this
class handled only 2-dim inputs (a vector is also a matrix).  This PR specifies the expected
dimension at compile time so the array interface can perform various checks automatically
on input data. Also, adapters like CSR are more rigorous about their input.  Lastly, row
vectors and column vectors are handled without intervention from the caller.
2021-11-16 09:52:15 +08:00
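The protocol being parsed is numpy's `__array_interface__` (and `__cuda_array_interface__` for device memory). For example, the fields the C++ side validates:

    import numpy as np

    x = np.arange(6, dtype=np.float32).reshape(2, 3)
    # 'typestr' encodes dtype and endianness, 'shape'/'strides' describe the
    # layout, and 'data' is a (pointer, read-only) pair.
    print(x.__array_interface__)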
Jiaming Yuan
e27f543deb
Set use_logger in tracker to false. (#7438) 2021-11-16 05:12:42 +08:00
Jiaming Yuan
d4274bc556
Fix typo. (#7433) 2021-11-15 01:28:11 +08:00
Jiaming Yuan
a7057fa64c
Implement typed storage for tensor. (#7429)
* Add `Tensor` class.
* Add elementwise kernel for CPU and GPU.
* Add unravel index.
* Move some computation to compile time.
2021-11-14 18:53:13 +08:00
Kian Meng Ang
d27a11ff87
Fix typos in python package (#7432) 2021-11-14 17:20:19 +08:00
Jiaming Yuan
8cc75f1576
Cleanup Python tests. (#7426) 2021-11-14 15:47:05 +08:00
Jiaming Yuan
38ca96c9fc
[CI] Install igraph as binary. (#7417) 2021-11-12 19:04:28 +08:00
Jiaming Yuan
46726ec176
Expose build info (#7399) 2021-11-12 18:22:46 +08:00
Jiaming Yuan
937fa282b5
Extract string view. (#7416)
* Add equality operators.
* Return a view in substr.
* Add proper iterator types.
2021-11-12 18:22:30 +08:00
Jiaming Yuan
ca6f980932
Check number of trees in inplace predict. (#7409) 2021-11-12 18:20:23 +08:00
Jiaming Yuan
97d7582457
Delay breaking changes to 1.6. (#7420)
The patch is too big to be backported.
2021-11-12 16:46:03 +08:00
Bobby Wang
cb685607b2
[jvm-packages] Rework the train pipeline (#7401)
1. Add PreXGBoost to build RDD[Watches] from Dataset
2. Feed RDD[Watches] built from PreXGBoost to XGBoost to train
2021-11-10 17:51:38 +08:00
Jiaming Yuan
8df0a252b7
[doc] Update document for GPU. [skip ci] (#7403)
* Remove outdated workaround and description.
2021-11-09 02:05:55 +08:00
Jiaming Yuan
d7d1b6e3a6
CPU evaluation for cat data. (#7393)
* Implementation for one hot based.
* Implementation for partition based. (LightGBM)
2021-11-06 14:41:35 +08:00
Jiaming Yuan
6ede12412c
Update dmlc-core and use data iter for GPU sampling tests. (#7398)
* Update dmlc-core.
* New parquet parser in dmlc-core.
* Use data iter for GPU sampling tests.
2021-11-06 05:12:49 +08:00
Jiaming Yuan
c968217ca8
[R] Fix global feature importance and predict with 1 sample. (#7394)
* [R] Fix global feature importance.

* Add implementation for tree index.  The parameter is not documented in the C API since we
should work on porting the model slicing to R instead of supporting more uses of the tree
index.

* Fix the difference between "gain" and "total_gain".

* debug.

* Fix prediction.
2021-11-05 10:07:00 +08:00
Jiaming Yuan
48aff0eabd
[doc][jvm-packages] Update information about Python tracker. [skip ci] (#7396) 2021-11-05 05:55:13 +08:00
Jiaming Yuan
b06040b6d0
Implement a general array view. (#7365)
* Replace existing matrix and vector view.

This is to prepare for handling higher-dimensional data and prediction when we support multi-target models.
2021-11-05 04:16:11 +08:00
Jiaming Yuan
232144ca09
Add note about CRAN release [skip ci] (#7395) 2021-11-05 00:34:14 +08:00
Jiaming Yuan
4100827971
Pass information about the objective to tree methods. (#7385)
* Define the `ObjInfo` and pass it down to every tree updater.
2021-11-04 01:52:44 +08:00
Jiaming Yuan
ccdabe4512
Support building gradient index with cat data. (#7371) 2021-11-03 22:37:37 +08:00
Jiaming Yuan
57a4b4ff64
Handle OMP_THREAD_LIMIT. (#7390) 2021-11-03 15:44:38 +08:00
Jiaming Yuan
e6ab594e14
Change shebang used in CLI demo. (#7389)
Change from the system Python to the environment python3.  On Ubuntu 20.04, only `python3` is
available and there's no `python`, so at least `python3` is consistent across Python
virtual envs, Ubuntu, and anaconda.
2021-11-02 22:11:19 +08:00
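The resulting shebang, for reference:

    #!/usr/bin/env python3
    # Resolves python3 from the active environment (virtualenv, conda, or the
    # system) instead of assuming a /usr/bin/python that Ubuntu 20.04 lacks.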
Jiaming Yuan
a55d43ccfd
Add test for invalid categorical data values. (#7380)
* Add test for invalid categorical data values.

* Add check during sketching.
2021-11-02 18:00:52 +08:00
Jiaming Yuan
c74df31bf9
Cleanup the train function. (#7377)
* Move attribute setter to callback.
* Remove the internal train function.
* Remove unnecessary initialization.
2021-11-02 18:00:26 +08:00
Jiaming Yuan
154b15060e
Move callbacks from fit to __init__. (#7375) 2021-11-02 17:51:42 +08:00
Jiaming Yuan
32e673d8c4
Support building with CTK11.5. (#7379)
* Support building with CTK11.5.

* Require system cub installation for CTK11.4+.
* Check thrust version for segmented sort.
2021-11-02 16:22:26 +08:00
Jiaming Yuan
a13321148a
Support multi-class with base margin. (#7381)
This was already partially supported but never properly tested, so the only way to use it was to call `numpy.ndarray.flatten` on `base_margin` before passing it into XGBoost. This PR adds proper support
for most of the data types, along with tests (see the sketch below).
2021-11-02 13:38:00 +08:00
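A usage sketch with the 2-dim margin passed directly; shapes and parameters are illustrative:

    import numpy as np
    import xgboost as xgb

    X = np.random.rand(64, 4)
    y = np.random.randint(0, 3, size=64)
    margin = np.zeros((64, 3), dtype=np.float32)  # one column per class
    # Previously this array had to be flattened by the caller.
    dtrain = xgb.DMatrix(X, label=y, base_margin=margin)
    booster = xgb.train({"objective": "multi:softprob", "num_class": 3}, dtrain)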
Jiaming Yuan
6295dc3b67
Fix span reverse iterator. (#7387)
* Fix span reverse iterator.

* Disable `rbegin` on device code to avoid calling host function.
* Add `trbegin` and friends.
2021-11-02 13:35:59 +08:00
Jiaming Yuan
8211e5f341
Add clang-format config. (#7383)
Generated using `clang-format -style=google -dump-config > .clang-format`, with column
width changed from 80 to 100 to be consistent with existing cpplint check.
2021-11-02 13:34:38 +08:00
Jiaming Yuan
0f7a9b42f1
Use double precision in metric calculation. (#7364) 2021-11-02 12:00:32 +08:00
Jiaming Yuan
239dbb3c0a
Move macos test to github action. (#7382)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2021-10-30 14:40:32 +08:00
Bobby Wang
b81ebbef62
[jvm-packages] Fix json4s binary compatibility issue (#7376)
Spark 3.2 depends on json4s 3.7.0-M11, which has changed the signatures of some implicit
functions. As a result, xgboost4j built against Spark 3.0/3.1
fails when saving the model.
2021-10-30 03:20:57 +08:00
Jiaming Yuan
c6769488b3
Typehint for subset of core API. (#7348) 2021-10-28 20:47:04 +08:00
Jiaming Yuan
45aef75cca
Move skl eval_metric and early_stopping rounds to model params. (#6751)
A new parameter `custom_metric` is added to `train` and `cv` to distinguish the behaviour from the old `feval`, which is now deprecated.  The new `custom_metric` receives transformed predictions when the built-in objective is used.  This enables XGBoost to use cost functions from other libraries like scikit-learn directly, without going through the definition of the link function.

`eval_metric` and `early_stopping_rounds` in the sklearn interface are moved from `fit` to `__init__` and are now saved as part of the scikit-learn model.  The old ones in the `fit` function are now deprecated. The new `eval_metric` in `__init__` has the same new behaviour as `custom_metric` (see the sketch below).

Added more detailed documents for the behaviour of custom objective and metric.
2021-10-28 17:20:20 +08:00
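A usage sketch of the new interface, assuming xgboost >= 1.6:

    from sklearn.datasets import make_classification
    from sklearn.metrics import log_loss
    from sklearn.model_selection import train_test_split
    import xgboost as xgb

    X, y = make_classification(n_samples=512, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
    # eval_metric and early_stopping_rounds are model parameters now, saved
    # with the model; the metric receives transformed (probability)
    # predictions, so a scikit-learn metric can be used directly.
    clf = xgb.XGBClassifier(eval_metric=log_loss, early_stopping_rounds=5)
    clf.fit(X_tr, y_tr, eval_set=[(X_va, y_va)])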
Jiaming Yuan
6b074add66
Update setup.py. (#7360)
* Add new classifiers.
* Typehint.
2021-10-28 14:58:31 +08:00
Jiaming Yuan
3c4aa9b2ea
[breaking] Remove label encoder deprecated in 1.3. (#7357) 2021-10-28 13:24:29 +08:00
Jiaming Yuan
d05754f558
Avoid OMP reduction in AUC. (#7362) 2021-10-28 05:03:52 +08:00
Jiaming Yuan
ac9bfaa4f2
Handle missing values in dataframe with category dtype. (#7331)
* Replace -1 in pandas initializer.
* Unify `IsValid` functor.
* Mimic pandas data handling in cuDF glue code.
* Check invalid categories.
* Fix DDM sketching.
2021-10-28 03:33:54 +08:00
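The pandas detail being handled: missing values in a categorical column are encoded as code -1, which must be mapped to XGBoost's notion of missing rather than treated as a real category. For example:

    import pandas as pd

    s = pd.Series(["a", None, "b"], dtype="category")
    print(s.cat.codes.to_numpy())  # [ 0 -1  1]; -1 marks the missing value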
Jiaming Yuan
2eee87423c
Remove old custom objective demo. (#7369)
We have two new custom objective demos covering both regression and classification, with
accompanying tutorials in the documents.
2021-10-27 16:31:48 +08:00
Jiaming Yuan
b9414b6477
Update GPU doc for PR-AUC. [skip ci] (#7368) 2021-10-27 16:31:07 +08:00
Jiaming Yuan
d4349426d8
Re-implement PR-AUC. (#7297)
* Support binary/multi-class classification, ranking.
* Add documents.
* Handle missing data.
2021-10-26 13:07:50 +08:00
nicovdijk
a6bcd54b47
[jvm-packages] Fix for space in sys.executable path in create_jni.py (#7358) 2021-10-25 13:45:11 +08:00
Jiaming Yuan
fd61c61071
Avoid omp reduction in rank metric. (#7349) 2021-10-22 14:13:34 +08:00
Jiaming Yuan
e36b066344
[doc] Document the status of RTD hosting. [skip ci] (#7353) 2021-10-22 14:12:55 +08:00
Jiaming Yuan
864d236a82
[doc] Remove num_pbuffer. [skip ci] (#7356) 2021-10-22 14:12:32 +08:00
nicovdijk
31a307cf6b
[XGBoost4J-Spark] Serialization for custom objective and eval (#7274)
* added type hints to custom_obj and custom_eval for Spark persistence


Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2021-10-21 16:22:23 +08:00
Jiaming Yuan
7593fa9982
1.5 release note. [skip ci] (#7271)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2021-10-21 13:43:31 +08:00
Jiaming Yuan
d1f00fb0b7
Stricter validation for group. (#7345) 2021-10-21 12:13:33 +08:00
nicovdijk
74bab6e504
Control logging for early stopping using shouldPrint() (#7326) 2021-10-21 12:12:06 +08:00
Jiaming Yuan
8d7c6366d7
Accept histogram cut instead of gradient index in evaluation. (#7336) 2021-10-20 18:04:46 +08:00
Jiaming Yuan
15685996fc
[doc] Small improvements for categorical data document. (#7330) 2021-10-20 18:04:32 +08:00
Jiaming Yuan
f999897615
[dask] Use nthread in DMatrix construction. (#7337)
This is consistent with the thread overriding behavior.
2021-10-20 15:16:40 +08:00
Philip Hyunsu Cho
b8e8f0fcd9
[doc] Use latest Sphinx RTD theme (#7347) 2021-10-20 00:04:43 -07:00
Jiaming Yuan
3b0b74fa94
[doc] Use RTD theme. (#7346) 2021-10-19 23:49:19 -07:00
Jiaming Yuan
376b448015
[doc] Fix broken links. (#7341)
* Fix most of the link checks from sphinx.
* Remove duplicate explicit target name.
2021-10-20 14:45:30 +08:00
Jiaming Yuan
f53da412aa
Add typehint to tracker. (#7338) 2021-10-20 12:49:36 +08:00
Jiaming Yuan
5ff210ed75
Small fix for the release doc and script. [skip ci] (#7332)
Add Philip as co-maintainer of maven packages.
2021-10-20 12:49:12 +08:00
Jiaming Yuan
c42e3fbcf3
[doc] Fix early stopping document. (#7334) 2021-10-18 11:21:16 -07:00
Bobby Wang
4fd149b3a2
[jvm-packages] update checkstyle (#7335)
* [jvm-packages] update scalastyle

1. bump scalastyle-maven-plugin and maven-checkstyle-plugin to latest
2. remove unused imports

* fix code style check
2021-10-18 18:42:01 +08:00
Jiaming Yuan
fbb0dc4275
Remove auto configuration of seed_per_iteration. (#7009)
* Remove auto configuration of seed_per_iteration.

This should be related to model recovery from rabit, which has been removed.

* Document.
2021-10-17 15:58:57 +08:00
Jiaming Yuan
fb1a9e6bc5
Avoid omp reduction in coordinate descent and aft metrics. (#7316)
Aside from the omp issue, parameter configuration for aft metric is simplified.
2021-10-17 15:55:49 +08:00
Jiaming Yuan
f56e2e9a66
Support categorical data with pandas Dataframe in inplace prediction (#7322) 2021-10-17 14:32:06 +08:00
Jiaming Yuan
8e619010d0
Extract CPUExpandEntry and HistParam. (#7321)
* Remove kRootNid.
* Check for empty hessian.
2021-10-17 14:22:25 +08:00
Jiaming Yuan
6cdcfe8128
Improve external memory demo. (#7320)
* Use npy format.
* Add evaluation.
* Use make_regression.
2021-10-17 11:25:24 +08:00
Jiaming Yuan
e6a142fe70
Fix document about best_iteration (#7324) 2021-10-14 15:30:46 -07:00
Jiaming Yuan
4ddf8d001c
Deterministic result for element-wise/mclass metrics. (#7303)
Remove openmp reduction.
2021-10-13 14:22:40 +08:00
Jiaming Yuan
406c70ba0e
[doc] Fix typo. [skip ci] (#7311) 2021-10-12 19:10:18 +08:00
Jiaming Yuan
0bd8f21e4e
Add document for categorical data. (#7307) 2021-10-12 16:10:59 +08:00
Jiaming Yuan
a7d0c66457
Remove unused code. (#7293) 2021-10-12 15:04:41 +08:00
Jiaming Yuan
130df8cdda
Add tests for tree grow policy. (#7302) 2021-10-12 15:04:06 +08:00
Jiaming Yuan
5b17bb0031
Fix prediction with cat data in sklearn interface. (#7306)
* Specify DMatrix parameter for pre-processing dataframe.
* Add document about the behaviour of prediction.
2021-10-12 14:31:12 +08:00
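A usage sketch, assuming a build from this range where hist supports categorical data:

    import pandas as pd
    import xgboost as xgb

    df = pd.DataFrame({"c": pd.Series(["a", "b", "a", "b"], dtype="category")})
    reg = xgb.XGBRegressor(tree_method="hist", enable_categorical=True)
    reg.fit(df, [0.0, 1.0, 0.0, 1.0])
    # predict routes the dataframe through the same DMatrix pre-processing as
    # fit, keeping category codes aligned between training and prediction.
    print(reg.predict(df))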
Jiaming Yuan
89d87e5331
Update GPU Tree SHAP (#7304) 2021-10-11 21:39:50 +08:00
Jiaming Yuan
298af6f409
Fix weighted samples in multi-class AUC. (#7300) 2021-10-11 15:12:29 +08:00
Jiaming Yuan
69d3b1b8b4
Remove old callback deprecated in 1.3. (#7280) 2021-10-08 17:24:59 +08:00
Jiaming Yuan
578de9f762
Fix cv verbose_eval (#7291) 2021-10-08 12:28:38 +08:00
Jiaming Yuan
f7caac2563
Bump version to 1.6.0 in master. (#7259) 2021-10-07 16:09:26 +08:00
Jiaming Yuan
e2660ab8f3
Extend release script with R packages. [skip ci] (#7278) 2021-10-07 16:08:42 +08:00
Yuan Tang
cc459755be
Update affiliation (#7289) 2021-10-07 16:07:34 +08:00
Jiaming Yuan
d8cb395380
Fix gamma neg log likelihood. (#7275) 2021-10-05 16:57:08 +08:00
Jiaming Yuan
b3b03200e2
Remove old warning in 1.3 (#7279) 2021-10-01 08:05:50 +08:00
Philip Hyunsu Cho
2a0368b7ca
Add CMake option to use /MD runtime (#7277) 2021-09-30 13:13:57 +08:00
Jiaming Yuan
b2d8431aea
[R] Fix document for nthread. (#7263) 2021-09-28 11:46:24 +08:00
1,389 changed files with 126,360 additions and 54,323 deletions

.clang-format (new file, 214 lines)

@ -0,0 +1,214 @@
---
Language: Cpp
# BasedOnStyle: Google
AccessModifierOffset: -1
AlignAfterOpenBracket: Align
AlignArrayOfStructures: None
AlignConsecutiveMacros: None
AlignConsecutiveAssignments: None
AlignConsecutiveBitFields: None
AlignConsecutiveDeclarations: None
AlignEscapedNewlines: Left
AlignOperands: Align
AlignTrailingComments: true
AllowAllArgumentsOnNextLine: true
AllowAllParametersOfDeclarationOnNextLine: true
AllowShortEnumsOnASingleLine: true
AllowShortBlocksOnASingleLine: Never
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: All
AllowShortLambdasOnASingleLine: Inline
AllowShortIfStatementsOnASingleLine: WithoutElse
AllowShortLoopsOnASingleLine: true
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: Yes
AttributeMacros:
- __capability
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
AfterCaseLabel: false
AfterClass: false
AfterControlStatement: Never
AfterEnum: false
AfterFunction: false
AfterNamespace: false
AfterObjCDeclaration: false
AfterStruct: false
AfterUnion: false
AfterExternBlock: false
BeforeCatch: false
BeforeElse: false
BeforeLambdaBody: false
BeforeWhile: false
IndentBraces: false
SplitEmptyFunction: true
SplitEmptyRecord: true
SplitEmptyNamespace: true
BreakBeforeBinaryOperators: None
BreakBeforeConceptDeclarations: true
BreakBeforeBraces: Attach
BreakBeforeInheritanceComma: false
BreakInheritanceList: BeforeColon
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: false
BreakConstructorInitializers: BeforeColon
BreakAfterJavaFieldAnnotations: false
BreakStringLiterals: true
ColumnLimit: 100
CommentPragmas: '^ IWYU pragma:'
QualifierAlignment: Leave
CompactNamespaces: false
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DeriveLineEnding: true
DerivePointerAlignment: true
DisableFormat: false
EmptyLineAfterAccessModifier: Never
EmptyLineBeforeAccessModifier: LogicalBlock
ExperimentalAutoDetectBinPacking: false
PackConstructorInitializers: NextLine
BasedOnStyle: ''
ConstructorInitializerAllOnOneLineOrOnePerLine: false
AllowAllConstructorInitializersOnNextLine: true
FixNamespaceComments: true
ForEachMacros:
- foreach
- Q_FOREACH
- BOOST_FOREACH
IfMacros:
- KJ_IF_MAYBE
IncludeBlocks: Regroup
IncludeCategories:
- Regex: '^<ext/.*\.h>'
Priority: 2
SortPriority: 0
CaseSensitive: false
- Regex: '^<.*\.h>'
Priority: 1
SortPriority: 0
CaseSensitive: false
- Regex: '^<.*'
Priority: 2
SortPriority: 0
CaseSensitive: false
- Regex: '.*'
Priority: 3
SortPriority: 0
CaseSensitive: false
IncludeIsMainRegex: '([-_](test|unittest))?$'
IncludeIsMainSourceRegex: ''
IndentAccessModifiers: false
IndentCaseLabels: true
IndentCaseBlocks: false
IndentGotoLabels: true
IndentPPDirectives: None
IndentExternBlock: AfterExternBlock
IndentRequires: false
IndentWidth: 2
IndentWrappedFunctionNames: false
InsertTrailingCommas: None
JavaScriptQuotes: Leave
JavaScriptWrapImports: true
KeepEmptyLinesAtTheStartOfBlocks: false
LambdaBodyIndentation: Signature
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCBinPackProtocolList: Never
ObjCBlockIndentWidth: 2
ObjCBreakBeforeNestedBlockParam: true
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: true
PenaltyBreakAssignment: 2
PenaltyBreakBeforeFirstCallParameter: 1
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakString: 1000
PenaltyBreakTemplateDeclaration: 10
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PenaltyIndentedWhitespace: 0
PointerAlignment: Left
PPIndentWidth: -1
RawStringFormats:
- Language: Cpp
Delimiters:
- cc
- CC
- cpp
- Cpp
- CPP
- 'c++'
- 'C++'
CanonicalDelimiter: ''
BasedOnStyle: google
- Language: TextProto
Delimiters:
- pb
- PB
- proto
- PROTO
EnclosingFunctions:
- EqualsProto
- EquivToProto
- PARSE_PARTIAL_TEXT_PROTO
- PARSE_TEST_PROTO
- PARSE_TEXT_PROTO
- ParseTextOrDie
- ParseTextProtoOrDie
- ParseTestProto
- ParsePartialTestProto
CanonicalDelimiter: pb
BasedOnStyle: google
ReferenceAlignment: Pointer
ReflowComments: true
ShortNamespaceLines: 1
SortIncludes: CaseSensitive
SortJavaStaticImport: Before
SortUsingDeclarations: true
SpaceAfterCStyleCast: false
SpaceAfterLogicalNot: false
SpaceAfterTemplateKeyword: true
SpaceBeforeAssignmentOperators: true
SpaceBeforeCaseColon: false
SpaceBeforeCpp11BracedList: false
SpaceBeforeCtorInitializerColon: true
SpaceBeforeInheritanceColon: true
SpaceBeforeParens: ControlStatements
SpaceAroundPointerQualifiers: Default
SpaceBeforeRangeBasedForLoopColon: true
SpaceInEmptyBlock: false
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInAngles: Never
SpacesInConditionalStatement: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInLineCommentPrefix:
Minimum: 1
Maximum: -1
SpacesInParentheses: false
SpacesInSquareBrackets: false
SpaceBeforeSquareBrackets: false
BitFieldColonSpacing: Both
Standard: Auto
StatementAttributeLikeMacros:
- Q_EMIT
StatementMacros:
- Q_UNUSED
- QT_REQUIRE_VERSION
TabWidth: 8
UseCRLF: false
UseTab: Never
WhitespaceSensitiveMacros:
- STRINGIZE
- PP_STRINGIZE
- BOOST_PP_STRINGIZE
- NS_SWIFT_NAME
- CF_SWIFT_NAME
...

.clang-tidy

@ -1,4 +1,4 @@
Checks: 'modernize-*,-modernize-make-*,-modernize-use-auto,-modernize-raw-string-literal,-modernize-avoid-c-arrays,-modernize-use-trailing-return-type,google-*,-google-default-arguments,-clang-diagnostic-#pragma-messages,readability-identifier-naming'
Checks: 'modernize-*,-modernize-use-nodiscard,-modernize-concat-nested-namespaces,-modernize-make-*,-modernize-use-auto,-modernize-raw-string-literal,-modernize-avoid-c-arrays,-modernize-use-trailing-return-type,google-*,-google-default-arguments,-clang-diagnostic-#pragma-messages,readability-identifier-naming'
CheckOptions:
- { key: readability-identifier-naming.ClassCase, value: CamelCase }
- { key: readability-identifier-naming.StructCase, value: CamelCase }

.gitattributes (new file, 18 lines)

@ -0,0 +1,18 @@
* text=auto
*.c text eol=lf
*.h text eol=lf
*.cc text eol=lf
*.cuh text eol=lf
*.cu text eol=lf
*.py text eol=lf
*.txt text eol=lf
*.R text eol=lf
*.scala text eol=lf
*.java text eol=lf
*.sh text eol=lf
*.rst text eol=lf
*.md text eol=lf
*.csv text eol=lf

.github/dependabot.yml (new file, 31 lines)

@ -0,0 +1,31 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "maven"
directory: "/jvm-packages"
schedule:
interval: "monthly"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j"
schedule:
interval: "daily"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j-gpu"
schedule:
interval: "monthly"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j-example"
schedule:
interval: "monthly"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j-spark"
schedule:
interval: "daily"
- package-ecosystem: "maven"
directory: "/jvm-packages/xgboost4j-spark-gpu"
schedule:
interval: "monthly"

.github/workflows/i386.yml (new file, 43 lines)

@ -0,0 +1,43 @@
name: XGBoost-i386-test
on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build-32bit:
name: Build 32-bit
runs-on: ubuntu-latest
services:
registry:
image: registry:2
ports:
- 5000:5000
steps:
- uses: actions/checkout@v2.5.0
with:
submodules: 'true'
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: network=host
- name: Build and push container
uses: docker/build-push-action@v5
with:
context: .
file: tests/ci_build/Dockerfile.i386
push: true
tags: localhost:5000/xgboost/build-32bit:latest
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build XGBoost
run: |
docker run --rm -v $PWD:/workspace -w /workspace \
-e CXXFLAGS='-Wno-error=overloaded-virtual -Wno-error=maybe-uninitialized -Wno-error=redundant-move' \
localhost:5000/xgboost/build-32bit:latest \
tests/ci_build/build_via_cmake.sh


@ -2,6 +2,13 @@ name: XGBoost-JVM-Tests
on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
test-with-jvm:
name: Test JVM on OS ${{ matrix.os }}
@ -9,53 +16,59 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [windows-latest, ubuntu-latest, macos-10.15]
os: [windows-latest, ubuntu-latest, macos-11]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
submodules: 'true'
- uses: actions/setup-python@v2
- uses: mamba-org/setup-micromamba@422500192359a097648154e8db4e39bdb6c6eed7 # v1.8.1
with:
python-version: '3.8'
architecture: 'x64'
- uses: actions/setup-java@v1
with:
java-version: 1.8
- name: Install Python packages
run: |
python -m pip install wheel setuptools
python -m pip install awscli
micromamba-version: '1.5.6-0'
environment-name: jvm_tests
create-args: >-
python=3.10
awscli
cache-downloads: true
cache-environment: true
init-shell: bash powershell
- name: Cache Maven packages
uses: actions/cache@v2
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2 # v4.0.0
with:
path: ~/.m2
key: ${{ runner.os }}-m2-${{ hashFiles('./jvm-packages/pom.xml') }}
restore-keys: ${{ runner.os }}-m2
restore-keys: ${{ runner.os }}-m2-${{ hashFiles('./jvm-packages/pom.xml') }}
- name: Test XGBoost4J
- name: Build xgboost4j.dll
run: |
mkdir build
cd build
cmake .. -G"Visual Studio 17 2022" -A x64 -DJVM_BINDINGS=ON
cmake --build . --config Release
if: matrix.os == 'windows-latest'
- name: Test XGBoost4J (Core)
run: |
cd jvm-packages
mvn test -B -pl :xgboost4j_2.12
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
run: |
echo "branch=${GITHUB_REF#refs/heads/}" >> "$GITHUB_OUTPUT"
id: extract_branch
if: |
(github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
matrix.os == 'windows-latest'
(matrix.os == 'windows-latest' || matrix.os == 'macos-11')
- name: Publish artifact xgboost4j.dll to S3
run: |
cd lib/
Rename-Item -Path xgboost4j.dll -NewName xgboost4j_${{ github.sha }}.dll
dir
python -m awscli s3 cp xgboost4j_${{ github.sha }}.dll s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/ --acl public-read
python -m awscli s3 cp xgboost4j_${{ github.sha }}.dll s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/libxgboost4j/ --acl public-read --region us-west-2
if: |
(github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
matrix.os == 'windows-latest'
@ -63,8 +76,22 @@ jobs:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
- name: Publish artifact libxgboost4j.dylib to S3
shell: bash -l {0}
run: |
cd lib/
mv -v libxgboost4j.dylib libxgboost4j_${{ github.sha }}.dylib
ls
python -m awscli s3 cp libxgboost4j_${{ github.sha }}.dylib s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/libxgboost4j/ --acl public-read --region us-west-2
if: |
(github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
matrix.os == 'macos-11'
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
- name: Test XGBoost4J-Spark
- name: Test XGBoost4J (Core, Spark, Examples)
run: |
rm -rfv build/
cd jvm-packages
@ -72,3 +99,13 @@ jobs:
if: matrix.os == 'ubuntu-latest' # Distributed training doesn't work on Windows
env:
RABIT_MOCK: ON
- name: Build and Test XGBoost4J with scala 2.13
run: |
rm -rfv build/
cd jvm-packages
mvn -B clean install test -Pdefault,scala-2.13
if: matrix.os == 'ubuntu-latest' # Distributed training doesn't work on Windows
env:
RABIT_MOCK: ON


@ -6,6 +6,13 @@ name: XGBoost-CI
# events but only for the master branch
on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
gtest-cpu:
@ -14,22 +21,19 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [macos-10.15]
os: [macos-11]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install system packages
run: |
# Use libomp 11.1.0: https://github.com/dmlc/xgboost/issues/7039
wget https://raw.githubusercontent.com/Homebrew/homebrew-core/679923b4eb48a8dc7ecc1f05d06063cd79b3fc00/Formula/libomp.rb -O $(find $(brew --repository) -name libomp.rb)
brew install ninja libomp
brew pin libomp
- name: Build gtest binary
run: |
mkdir build
cd build
cmake .. -DGOOGLE_TEST=ON -DUSE_OPENMP=ON -DUSE_DMLC_GTEST=ON -DPLUGIN_DENSE_PARSER=ON -GNinja
cmake .. -DGOOGLE_TEST=ON -DUSE_OPENMP=ON -DUSE_DMLC_GTEST=ON -GNinja -DBUILD_DEPRECATED_CLI=ON
ninja -v
- name: Run gtest binary
run: |
@ -45,7 +49,7 @@ jobs:
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install system packages
@ -56,40 +60,79 @@ jobs:
run: |
mkdir build
cd build
cmake .. -GNinja -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON -DUSE_OPENMP=OFF
cmake .. -GNinja -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON -DUSE_OPENMP=OFF -DBUILD_DEPRECATED_CLI=ON
ninja -v
- name: Run gtest binary
run: |
cd build
ctest --extra-verbose
gtest-cpu-sycl:
name: Test Google C++ unittest (CPU SYCL)
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
python-version: ["3.8"]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
cache-downloads: true
cache-env: true
environment-name: linux_sycl_test
environment-file: tests/ci_build/conda_env/linux_sycl_test.yml
- name: Display Conda env
run: |
conda info
conda list
- name: Build and install XGBoost
shell: bash -l {0}
run: |
mkdir build
cd build
cmake .. -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON -DPLUGIN_SYCL=ON -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX
make -j$(nproc)
- name: Run gtest binary for SYCL
run: |
cd build
./testxgboost --gtest_filter=Sycl*
- name: Run gtest binary for non SYCL
run: |
cd build
./testxgboost --gtest_filter=-Sycl*
c-api-demo:
name: Test installing XGBoost lib + building the C API demo
runs-on: ${{ matrix.os }}
defaults:
run:
shell: bash -l {0}
strategy:
fail-fast: false
matrix:
os: ["ubuntu-latest"]
python-version: ["3.8"]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install system packages
run: |
sudo apt-get install -y --no-install-recommends ninja-build
- uses: conda-incubator/setup-miniconda@v2
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
activate-environment: test
cache-downloads: true
cache-env: true
environment-name: cpp_test
environment-file: tests/ci_build/conda_env/cpp_test.yml
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Build and install XGBoost static library
shell: bash -l {0}
run: |
mkdir build
cd build
@ -97,7 +140,6 @@ jobs:
ninja -v install
cd -
- name: Build and run C API demo with static
shell: bash -l {0}
run: |
pushd .
cd demo/c-api/
@ -109,15 +151,14 @@ jobs:
cd ..
rm -rf ./build
popd
- name: Build and install XGBoost shared library
shell: bash -l {0}
run: |
cd build
cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja
ninja -v install
cd -
- name: Build and run C API demo with shared
shell: bash -l {0}
run: |
pushd .
cd demo/c-api/
@ -130,103 +171,21 @@ jobs:
./tests/ci_build/verify_link.sh ./demo/c-api/build/basic/api-demo
./tests/ci_build/verify_link.sh ./demo/c-api/build/external-memory/external-memory-demo
lint:
cpp-lint:
runs-on: ubuntu-latest
name: Code linting for Python and C++
name: Code linting for C++
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: actions/setup-python@v2
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
with:
python-version: '3.7'
python-version: "3.8"
architecture: 'x64'
- name: Install Python packages
run: |
python -m pip install wheel setuptools
python -m pip install pylint cpplint numpy scipy scikit-learn
python -m pip install wheel setuptools cmakelint cpplint pylint
- name: Run lint
run: |
make lint
mypy:
runs-on: ubuntu-latest
name: Type checking for Python
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: actions/setup-python@v2
with:
python-version: '3.7'
architecture: 'x64'
- name: Install Python packages
run: |
python -m pip install wheel setuptools mypy pandas dask[complete] distributed
- name: Run mypy
run: |
make mypy
doxygen:
runs-on: ubuntu-latest
name: Generate C/C++ API doc using Doxygen
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: actions/setup-python@v2
with:
python-version: '3.7'
architecture: 'x64'
- name: Install system packages
run: |
sudo apt-get install -y --no-install-recommends doxygen graphviz ninja-build
python -m pip install wheel setuptools
python -m pip install awscli
- name: Run Doxygen
run: |
mkdir build
cd build
cmake .. -DBUILD_C_DOC=ON -GNinja
ninja -v doc_doxygen
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Publish
run: |
cd build/
tar cvjf ${{ steps.extract_branch.outputs.branch }}.tar.bz2 doc_doxygen/
python -m awscli s3 cp ./${{ steps.extract_branch.outputs.branch }}.tar.bz2 s3://xgboost-docs/doxygen/ --acl public-read
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
sphinx:
runs-on: ubuntu-latest
name: Build docs using Sphinx
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: actions/setup-python@v2
with:
python-version: '3.8'
architecture: 'x64'
- name: Install system packages
run: |
sudo apt-get install -y --no-install-recommends graphviz
python -m pip install wheel setuptools
python -m pip install -r doc/requirements.txt
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Run Sphinx
run: |
make -C doc html
env:
SPHINX_GIT_BRANCH: ${{ steps.extract_branch.outputs.branch }}
python3 tests/ci_build/lint_cpp.py
sh ./tests/ci_build/lint_cmake.sh


@ -2,63 +2,183 @@ name: XGBoost-Python-Tests
on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
defaults:
run:
shell: bash -l {0}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
python-sdist-test:
python-mypy-lint:
runs-on: ubuntu-latest
name: Type and format checks for the Python package
strategy:
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
cache-downloads: true
cache-env: true
environment-name: python_lint
environment-file: tests/ci_build/conda_env/python_lint.yml
- name: Display Conda env
run: |
conda info
conda list
- name: Run mypy
run: |
python tests/ci_build/lint_python.py --format=0 --type-check=1 --pylint=0
- name: Run formatter
run: |
python tests/ci_build/lint_python.py --format=1 --type-check=0 --pylint=0
- name: Run pylint
run: |
python tests/ci_build/lint_python.py --format=0 --type-check=0 --pylint=1
python-sdist-test-on-Linux:
# Mismatched glibcxx version between system and conda forge.
runs-on: ${{ matrix.os }}
name: Test installing XGBoost Python source package on ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-10.15, windows-latest]
python-version: ["3.8"]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install osx system dependencies
if: matrix.os == 'macos-10.15'
run: |
# Use libomp 11.1.0: https://github.com/dmlc/xgboost/issues/7039
wget https://raw.githubusercontent.com/Homebrew/homebrew-core/679923b4eb48a8dc7ecc1f05d06063cd79b3fc00/Formula/libomp.rb -O $(find $(brew --repository) -name libomp.rb)
brew install ninja libomp
brew pin libomp
- name: Install Ubuntu system dependencies
if: matrix.os == 'ubuntu-latest'
run: |
sudo apt-get install -y --no-install-recommends ninja-build
- uses: conda-incubator/setup-miniconda@v2
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
activate-environment: test
cache-downloads: true
cache-env: true
environment-name: sdist_test
environment-file: tests/ci_build/conda_env/sdist_test.yml
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Build and install XGBoost
shell: bash -l {0}
run: |
cd python-package
python --version
python setup.py sdist
python -m build --sdist
pip install -v ./dist/xgboost-*.tar.gz --config-settings use_openmp=False
cd ..
python -c 'import xgboost'
python-sdist-test:
# Use system toolchain instead of conda toolchain for macos and windows.
# MacOS has linker error if clang++ from conda-forge is used
runs-on: ${{ matrix.os }}
name: Test installing XGBoost Python source package on ${{ matrix.os }}
strategy:
matrix:
os: [macos-11, windows-latest]
python-version: ["3.8"]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install osx system dependencies
if: matrix.os == 'macos-11'
run: |
brew install ninja libomp
- uses: conda-incubator/setup-miniconda@35d1405e78aa3f784fe3ce9a2eb378d5eeb62169 # v2.1.1
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
activate-environment: test
- name: Install build
run: |
conda install -c conda-forge python-build
- name: Display Conda env
run: |
conda info
conda list
- name: Build and install XGBoost
run: |
cd python-package
python --version
python -m build --sdist
pip install -v ./dist/xgboost-*.tar.gz
cd ..
python -c 'import xgboost'
python-tests-on-win:
python-tests-on-macos:
name: Test XGBoost Python package on ${{ matrix.config.os }}
runs-on: ${{ matrix.config.os }}
timeout-minutes: 60
strategy:
matrix:
config:
- {os: windows-2016, python-version: '3.8'}
- {os: macos-11}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: conda-incubator/setup-miniconda@v2
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
cache-downloads: true
cache-env: true
environment-name: macos_test
environment-file: tests/ci_build/conda_env/macos_cpu_test.yml
- name: Display Conda env
run: |
conda info
conda list
- name: Build XGBoost on macos
run: |
brew install ninja
mkdir build
cd build
# Set prefix, to use OpenMP library from Conda env
# See https://github.com/dmlc/xgboost/issues/7039#issuecomment-1025038228
# to learn why we don't use libomp from Homebrew.
cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX -DBUILD_DEPRECATED_CLI=ON
ninja
- name: Install Python package
run: |
cd python-package
python --version
pip install -v .
- name: Test Python package
run: |
pytest -s -v -rxXs --durations=0 ./tests/python
- name: Test Dask Interface
run: |
pytest -s -v -rxXs --durations=0 ./tests/test_distributed/test_with_dask
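A quick reference for the pytest flags repeated across these jobs (standard pytest options, not project-specific):
  pytest -s -v -rxXs --durations=0 ./tests/python
  # -s             disable output capture
  # -v             verbose test names
  # -rxXs          print a summary line for xfailed, xpassed and skipped tests
  # --durations=0  report the duration of every test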
python-tests-on-win:
name: Test XGBoost Python package on ${{ matrix.config.os }}
runs-on: ${{ matrix.config.os }}
timeout-minutes: 60
strategy:
matrix:
config:
- {os: windows-latest, python-version: '3.8'}
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: conda-incubator/setup-miniconda@35d1405e78aa3f784fe3ce9a2eb378d5eeb62169 # v2.1.1
with:
auto-update-conda: true
python-version: ${{ matrix.config.python-version }}
@@ -66,99 +186,158 @@ jobs:
environment-file: tests/ci_build/conda_env/win64_cpu_test.yml
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Build XGBoost on Windows
shell: bash -l {0}
run: |
mkdir build_msvc
cd build_msvc
cmake .. -G"Visual Studio 15 2017" -DCMAKE_CONFIGURATION_TYPES="Release" -A x64 -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON
cmake .. -G"Visual Studio 17 2022" -DCMAKE_CONFIGURATION_TYPES="Release" -A x64 -DBUILD_DEPRECATED_CLI=ON
cmake --build . --config Release --parallel $(nproc)
- name: Install Python package
shell: bash -l {0}
run: |
cd python-package
python --version
python setup.py bdist_wheel --universal
pip wheel -v . --wheel-dir dist/
pip install ./dist/*.whl
- name: Test Python package
shell: bash -l {0}
run: |
pytest -s -v ./tests/python
pytest -s -v -rxXs --durations=0 ./tests/python
python-tests-on-macos:
python-tests-on-ubuntu:
name: Test XGBoost Python package on ${{ matrix.config.os }}
runs-on: ${{ matrix.config.os }}
timeout-minutes: 90
strategy:
matrix:
config:
- {os: macos-10.15, python-version: "3.8"}
- {os: ubuntu-latest, python-version: "3.8"}
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: conda-incubator/setup-miniconda@v2
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
auto-update-conda: true
python-version: ${{ matrix.config.python-version }}
activate-environment: macos_test
environment-file: tests/ci_build/conda_env/macos_cpu_test.yml
cache-downloads: true
cache-env: true
environment-name: linux_cpu_test
environment-file: tests/ci_build/conda_env/linux_cpu_test.yml
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Build XGBoost on macos
- name: Build XGBoost on Ubuntu
run: |
wget https://raw.githubusercontent.com/Homebrew/homebrew-core/679923b4eb48a8dc7ecc1f05d06063cd79b3fc00/Formula/libomp.rb -O $(find $(brew --repository) -name libomp.rb)
brew install ninja libomp
brew pin libomp
mkdir build
cd build
cmake .. -GNinja -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON
cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX -DBUILD_DEPRECATED_CLI=ON
ninja
- name: Install Python package
shell: bash -l {0}
run: |
cd python-package
python --version
python setup.py bdist_wheel --universal
pip install ./dist/*.whl
pip install -v .
- name: Test Python package
run: |
pytest -s -v -rxXs --durations=0 ./tests/python
- name: Test Dask Interface
run: |
pytest -s -v -rxXs --durations=0 ./tests/test_distributed/test_with_dask
- name: Test PySpark Interface
shell: bash -l {0}
run: |
pytest -s -v ./tests/python
pytest -s -v -rxXs --durations=0 ./tests/test_distributed/test_with_spark
- name: Rename Python wheel
shell: bash -l {0}
python-sycl-tests-on-ubuntu:
name: Test XGBoost Python package with SYCL on ${{ matrix.config.os }}
runs-on: ${{ matrix.config.os }}
timeout-minutes: 90
strategy:
matrix:
config:
- {os: ubuntu-latest, python-version: "3.8"}
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
cache-downloads: true
cache-env: true
environment-name: linux_sycl_test
environment-file: tests/ci_build/conda_env/linux_sycl_test.yml
- name: Display Conda env
run: |
TAG=macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64
python tests/ci_build/rename_whl.py python-package/dist/*.whl ${{ github.sha }} ${TAG}
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Upload Python wheel
shell: bash -l {0}
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
conda info
conda list
- name: Build XGBoost on Ubuntu
run: |
python -m awscli s3 cp python-package/dist/*.whl s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/ --acl public-read
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
mkdir build
cd build
cmake .. -DPLUGIN_SYCL=ON -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
make -j$(nproc)
- name: Install Python package
run: |
cd python-package
python --version
pip install -v .
- name: Test Python package
run: |
pytest -s -v -rxXs --durations=0 ./tests/python-sycl/
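A rough local equivalent of the SYCL build and test above, as a sketch (assumes a conda env providing the DPC++ toolchain, mirroring linux_sycl_test.yml):
  cmake -S . -B build -DPLUGIN_SYCL=ON -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
  cmake --build build --parallel
  cd python-package && pip install -v . && cd ..
  pytest -s -v -rxXs --durations=0 ./tests/python-sycl/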
python-system-installation-on-ubuntu:
name: Test XGBoost Python package System Installation on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Set up Python 3.8
uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
with:
python-version: 3.8
- name: Install ninja
run: |
sudo apt-get update && sudo apt-get install -y ninja-build
- name: Build XGBoost on Ubuntu
run: |
mkdir build
cd build
cmake .. -GNinja
ninja
- name: Copy lib to system lib
run: |
cp lib/* "$(python -c 'import sys; print(sys.base_prefix)')/lib"
- name: Install XGBoost in Virtual Environment
run: |
cd python-package
pip install virtualenv
virtualenv venv
source venv/bin/activate && \
pip install -v . --config-settings use_system_libxgboost=True && \
python -c 'import xgboost'
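A quick sanity check that the venv install really reuses the copied system libxgboost instead of rebuilding it, as a sketch (the library filename varies by platform):
  source venv/bin/activate
  python -c 'import xgboost; print(xgboost.__file__)'
  ls "$(python -c 'import sys; print(sys.base_prefix)')/lib" | grep -i xgboost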

.github/workflows/python_wheels.yml vendored Normal file

@@ -0,0 +1,45 @@
name: XGBoost-Python-Wheels
on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
python-wheels:
name: Build wheel for ${{ matrix.platform_id }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
include:
- os: macos-latest
platform_id: macosx_x86_64
- os: macos-latest
platform_id: macosx_arm64
steps:
- uses: actions/checkout@a12a3943b4bdde767164f792f33f40b04645d846 # v3.0.0
with:
submodules: 'true'
- name: Setup Python
uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
with:
python-version: "3.8"
- name: Build wheels
run: bash tests/ci_build/build_python_wheels.sh ${{ matrix.platform_id }} ${{ github.sha }}
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Upload Python wheel
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
run: |
python -m pip install awscli
python -m awscli s3 cp wheelhouse/*.whl s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/ --acl public-read
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
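Note: the ##[set-output name=branch;] syntax used in the "Extract branch name" steps above is the legacy output mechanism; on current runners the equivalent step body would be:
  echo "branch=${GITHUB_REF#refs/heads/}" >> "$GITHUB_OUTPUT"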

.github/workflows/r_nold.yml vendored

@@ -1,4 +1,4 @@
# Run R tests with noLD R. Only triggered by a pull request review
# Run expensive R tests with the help of rhub. Only triggered by a pull request review
# See discussion at https://github.com/dmlc/xgboost/pull/6378
name: XGBoost-R-noLD
@@ -7,34 +7,34 @@ on:
pull_request_review_comment:
types: [created]
env:
R_PACKAGES: c('XML', 'igraph', 'data.table', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic')
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
test-R-noLD:
if: github.event.comment.body == '/gha run r-nold-test' && contains('OWNER,MEMBER,COLLABORATOR', github.event.comment.author_association)
timeout-minutes: 120
runs-on: ubuntu-latest
container: rhub/debian-gcc-devel-nold
container:
image: rhub/debian-gcc-devel-nold
steps:
- name: Install git and system packages
shell: bash
run: |
apt-get update && apt-get install -y git libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libxml2-dev
apt update && apt install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libglpk-dev libxml2-dev libharfbuzz-dev libfribidi-dev git -y
- uses: actions/checkout@v2
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- name: Install dependencies
shell: bash
shell: bash -l {0}
run: |
cat > install_libs.R <<EOT
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
EOT
/tmp/R-devel/bin/Rscript install_libs.R
/tmp/R-devel/bin/Rscript -e "source('./R-package/tests/helper_scripts/install_deps.R')"
- name: Run R tests
shell: bash

.github/workflows/r_tests.yml vendored

@@ -3,9 +3,15 @@ name: XGBoost-R-Tests
on: [push, pull_request]
env:
R_PACKAGES: c('XML', 'data.table', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic')
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
permissions:
contents: read # to fetch code (actions/checkout)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
lintr:
runs-on: ${{ matrix.config.os }}
@@ -13,137 +19,132 @@ jobs:
strategy:
matrix:
config:
- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'autotools'}
- {os: ubuntu-latest, r: 'release'}
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
RSPM: ${{ matrix.config.rspm }}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@master
- uses: r-lib/actions/setup-r@e40ad904310fc92e96951c1b0d64f3de6cbe9e14 # v2.6.5
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
uses: actions/cache@v2
uses: actions/cache@937d24475381cd9c75ae6db12cb4e79714b926ed # v3.0.11
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-2-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-2-${{ hashFiles('R-package/DESCRIPTION') }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
- name: Install dependencies
shell: Rscript {0}
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
- name: Install igraph on Windows
shell: Rscript {0}
if: matrix.config.os == 'windows-latest'
run: |
install.packages('igraph', type='binary')
source("./R-package/tests/helper_scripts/install_deps.R")
- name: Run lintr
run: |
cd R-package
R.exe CMD INSTALL .
Rscript.exe tests/helper_scripts/run_lint.R
MAKEFLAGS="-j$(nproc)" R CMD INSTALL R-package/
Rscript tests/ci_build/lint_r.R $(pwd)
test-with-R:
test-Rpkg:
runs-on: ${{ matrix.config.os }}
name: Test R on OS ${{ matrix.config.os }}, R ${{ matrix.config.r }}, Compiler ${{ matrix.config.compiler }}, Build ${{ matrix.config.build }}
strategy:
fail-fast: false
matrix:
config:
- {os: windows-2016, r: 'release', compiler: 'mingw', build: 'autotools'}
- {os: windows-2016, r: 'release', compiler: 'msvc', build: 'cmake'}
- {os: windows-2016, r: 'release', compiler: 'mingw', build: 'cmake'}
- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'autotools'}
- {os: ubuntu-latest, r: 'release', compiler: 'none', build: 'cmake'}
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
RSPM: ${{ matrix.config.rspm }}
steps:
- uses: actions/checkout@v2
- name: Install system dependencies
run: |
sudo apt update
sudo apt install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libglpk-dev libxml2-dev libharfbuzz-dev libfribidi-dev
if: matrix.config.os == 'ubuntu-latest'
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@master
- uses: r-lib/actions/setup-r@e40ad904310fc92e96951c1b0d64f3de6cbe9e14 # v2.6.5
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
uses: actions/cache@v2
uses: actions/cache@937d24475381cd9c75ae6db12cb4e79714b926ed # v3.0.11
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-2-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-2-${{ hashFiles('R-package/DESCRIPTION') }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-6-${{ hashFiles('R-package/DESCRIPTION') }}
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
with:
python-version: "3.8"
architecture: 'x64'
- uses: r-lib/actions/setup-tinytex@v2
- name: Install dependencies
shell: Rscript {0}
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
- name: Install igraph on Windows
shell: Rscript {0}
if: matrix.config.os == 'windows-2016'
run: |
install.packages('igraph', type='binary', dependencies = c('Depends', 'Imports', 'LinkingTo'))
- uses: actions/setup-python@v2
with:
python-version: '3.7'
architecture: 'x64'
source("./R-package/tests/helper_scripts/install_deps.R")
- name: Test R
run: |
python tests/ci_build/test_r_package.py --compiler='${{ matrix.config.compiler }}' --build-tool='${{ matrix.config.build }}'
python tests/ci_build/test_r_package.py --compiler='${{ matrix.config.compiler }}' --build-tool="${{ matrix.config.build }}" --task=check
if: matrix.config.compiler != 'none'
test-R-CRAN:
- name: Test R
run: |
python tests/ci_build/test_r_package.py --build-tool="${{ matrix.config.build }}" --task=check
if: matrix.config.compiler == 'none'
test-R-on-Debian:
name: Test R package on Debian
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
config:
- {r: 'release'}
container:
image: rhub/debian-gcc-release
steps:
- uses: actions/checkout@v2
- name: Install system dependencies
run: |
# Must run before checkout to have the latest git installed.
# No need to add pandoc, the container has it figured out.
apt update && apt install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libglpk-dev libxml2-dev libharfbuzz-dev libfribidi-dev git -y
- name: Trust git cloning project sources
run: |
git config --global --add safe.directory "${GITHUB_WORKSPACE}"
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@master
with:
r-version: ${{ matrix.config.r }}
- uses: r-lib/actions/setup-tinytex@master
- name: Install system packages
run: |
sudo apt-get update && sudo apt-get install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev pandoc pandoc-citeproc libglpk-dev
- name: Cache R packages
uses: actions/cache@v2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-2-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-2-${{ hashFiles('R-package/DESCRIPTION') }}
- name: Install dependencies
shell: Rscript {0}
shell: bash -l {0}
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
install.packages('igraph', repos = 'http://cloud.r-project.org', dependencies = c('Depends', 'Imports', 'LinkingTo'))
Rscript -e "source('./R-package/tests/helper_scripts/install_deps.R')"
- name: Check R Package
- name: Test R
shell: bash -l {0}
run: |
# Print stacktrace upon success or failure
make Rcheck || tests/ci_build/print_r_stacktrace.sh fail
tests/ci_build/print_r_stacktrace.sh success
python3 tests/ci_build/test_r_package.py --r=/usr/bin/R --build-tool=autotools --task=check
- uses: dorny/paths-filter@v2
id: changes
with:
filters: |
r_package:
- 'R-package/**'
- name: Run document check
if: steps.changes.outputs.r_package == 'true'
run: |
python3 tests/ci_build/test_r_package.py --r=/usr/bin/R --task=doc

.github/workflows/scorecards.yml vendored Normal file

@@ -0,0 +1,54 @@
name: Scorecards supply-chain security
on:
# Only the default branch is supported.
branch_protection_rule:
schedule:
- cron: '17 2 * * 6'
push:
branches: [ "master" ]
# Declare default permissions as read only.
permissions: read-all
jobs:
analysis:
name: Scorecards analysis
runs-on: ubuntu-latest
permissions:
# Needed to upload the results to code-scanning dashboard.
security-events: write
# Used to receive a badge.
id-token: write
steps:
- name: "Checkout code"
uses: actions/checkout@a12a3943b4bdde767164f792f33f40b04645d846 # v3.0.0
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@0864cf19026789058feabb7e87baa5f140aac736 # v2.3.1
with:
results_file: results.sarif
results_format: sarif
# Publish the results for public repositories to enable scorecard badges. For more details, see
# https://github.com/ossf/scorecard-action#publishing-results.
# For private repositories, `publish_results` will automatically be set to `false`, regardless
# of the value entered here.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@5d5d22a31266ced268874388b861e4b58bb5c2f3 # v4.3.1
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@83a02f7883b12e0e4e1a146174f5e2292a01e601 # v2.16.4
with:
sarif_file: results.sarif

.github/workflows/update_rapids.yml vendored Normal file

@@ -0,0 +1,44 @@
name: update-rapids
on:
workflow_dispatch:
schedule:
- cron: "0 20 * * 1" # Run once weekly
permissions:
pull-requests: write
contents: write
defaults:
run:
shell: bash -l {0}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # To use GitHub CLI
jobs:
update-rapids:
name: Check latest RAPIDS
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Check latest RAPIDS and update conftest.sh
run: |
bash tests/buildkite/update-rapids.sh
- name: Create Pull Request
uses: peter-evans/create-pull-request@v6
if: github.ref == 'refs/heads/master'
with:
add-paths: |
tests/buildkite
branch: create-pull-request/update-rapids
base: master
title: "[CI] Update RAPIDS to latest stable"
commit-message: "[CI] Update RAPIDS to latest stable"
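A manual equivalent of this job, as a sketch (assumes the GitHub CLI is authenticated; the branch name and title mirror the workflow inputs):
  bash tests/buildkite/update-rapids.sh
  git checkout -b create-pull-request/update-rapids
  git commit -am "[CI] Update RAPIDS to latest stable"
  gh pr create --base master --fill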

.gitignore vendored

@@ -48,10 +48,13 @@ Debug
*.Rproj
./xgboost.mpi
./xgboost.mock
*.bak
#.Rbuildignore
R-package.Rproj
*.cache*
.mypy_cache/
doxygen
# java
java/xgboost4j/target
java/xgboost4j/tmp
@@ -63,6 +66,7 @@ nb-configuration*
# Eclipse
.project
.cproject
.classpath
.pydevproject
.settings/
build
@@ -96,8 +100,11 @@ metastore_db
R-package/src/Makevars
*.lib
# Visual Studio Code
/.vscode/
# Visual Studio
.vs/
CMakeSettings.json
*.ilk
*.pdb
# IntelliJ/CLion
.idea
@@ -125,3 +132,23 @@ credentials.csv
*.pub
*.rdp
*_rsa
# Visual Studio code + extensions
.vscode
.metals
.bloop
# python tests
demo/**/*.txt
*.dmatrix
.hypothesis
__MACOSX/
model*.json
# R tests
*.htm
*.html
*.libsvm
*.rds
Rplots.pdf
*.zip

.gitmodules vendored

@@ -2,9 +2,9 @@
path = dmlc-core
url = https://github.com/dmlc/dmlc-core
branch = main
[submodule "cub"]
path = cub
url = https://github.com/NVlabs/cub
[submodule "gputreeshap"]
path = gputreeshap
url = https://github.com/rapidsai/gputreeshap.git
[submodule "rocgputreeshap"]
path = rocgputreeshap
url = https://github.com/ROCmSoftwarePlatform/rocgputreeshap
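After a .gitmodules change like this one (cub dropped, rocgputreeshap added), existing clones are brought in line with standard git commands:
  git submodule sync --recursive
  git submodule update --init --recursive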

.readthedocs.yaml Normal file

@@ -0,0 +1,34 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
submodules:
include: all
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.8"
apt_packages:
- graphviz
- cmake
- g++
- doxygen
- ninja-build
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: doc/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
- pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: doc/requirements.txt
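A local approximation of this Read the Docs build, as a sketch (assumes the apt packages listed above, e.g. doxygen and graphviz, are installed):
  pip install -r doc/requirements.txt
  sphinx-build -b html doc doc/_build/html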

.travis.yml

@@ -1,53 +0,0 @@
sudo: required
dist: bionic
env:
global:
- secure: "lqkL5SCM/CBwgVb1GWoOngpojsa0zCSGcvF0O3/45rBT1EpNYtQ4LRJ1+XcHi126vdfGoim/8i7AQhn5eOgmZI8yAPBeoUZ5zSrejD3RUpXr2rXocsvRRP25Z4mIuAGHD9VAHtvTdhBZRVV818W02pYduSzAeaY61q/lU3xmWsE="
- secure: "mzms6X8uvdhRWxkPBMwx+mDl3d+V1kUpZa7UgjT+dr4rvZMzvKtjKp/O0JZZVogdgZjUZf444B98/7AvWdSkGdkfz2QdmhWmXzNPfNuHtmfCYMdijsgFIGLuD3GviFL/rBiM2vgn32T3QqFiEJiC5StparnnXimPTc9TpXQRq5c="
jobs:
include:
- os: linux
arch: s390x
env: TASK=s390x_test
# dependent brew packages
# the dependencies from homebrew are installed manually from the setup script due to the outdated image from travis.
addons:
homebrew:
update: false
apt:
packages:
- unzip
before_install:
- source tests/travis/travis_setup_env.sh
install:
- source tests/travis/setup.sh
script:
- tests/travis/run_test.sh
cache:
directories:
- ${HOME}/.cache/usr
- ${HOME}/.cache/pip
before_cache:
- tests/travis/travis_before_cache.sh
after_failure:
- tests/travis/travis_after_failure.sh
after_success:
- tree build
- bash <(curl -s https://codecov.io/bash) -a '-o src/ src/*.c'
notifications:
email:
on_success: change
on_failure: always

CITATION

@@ -15,4 +15,3 @@
address = {New York, NY, USA},
keywords = {large-scale machine learning},
}

CMakeLists.txt

@@ -1,34 +1,60 @@
cmake_minimum_required(VERSION 3.14 FATAL_ERROR)
project(xgboost LANGUAGES CXX C VERSION 1.5.2)
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
if(PLUGIN_SYCL)
set(CMAKE_CXX_COMPILER "g++")
set(CMAKE_C_COMPILER "gcc")
string(REPLACE " -isystem ${CONDA_PREFIX}/include" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
endif()
project(xgboost LANGUAGES CXX C VERSION 2.1.0)
include(cmake/Utils.cmake)
list(APPEND CMAKE_MODULE_PATH "${xgboost_SOURCE_DIR}/cmake/modules")
cmake_policy(SET CMP0022 NEW)
cmake_policy(SET CMP0079 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0063 NEW)
cmake_policy(SET CMP0063 NEW)
if ((${CMAKE_VERSION} VERSION_GREATER 3.13) OR (${CMAKE_VERSION} VERSION_EQUAL 3.13))
cmake_policy(SET CMP0077 NEW)
endif ((${CMAKE_VERSION} VERSION_GREATER 3.13) OR (${CMAKE_VERSION} VERSION_EQUAL 3.13))
# These policies are already set from 3.18 but we still need to set the policy
# default variables here for lower minimum versions in the submodules
set(CMAKE_POLICY_DEFAULT_CMP0063 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0069 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0076 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0077 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0079 NEW)
message(STATUS "CMake version ${CMAKE_VERSION}")
if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.0)
message(FATAL_ERROR "GCC version must be at least 5.0!")
# Check compiler versions
# Use recent compilers to ensure that std::filesystem is available
if(MSVC)
if(MSVC_VERSION LESS 1920)
message(FATAL_ERROR "Need Visual Studio 2019 or newer to build XGBoost")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "8.1")
message(FATAL_ERROR "Need GCC 8.1 or newer to build XGBoost")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "11.0")
message(FATAL_ERROR "Need Xcode 11.0 (AppleClang 11.0) or newer to build XGBoost")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "9.0")
message(FATAL_ERROR "Need Clang 9.0 or newer to build XGBoost")
endif()
endif()
include(${xgboost_SOURCE_DIR}/cmake/FindPrefetchIntrinsics.cmake)
include(${xgboost_SOURCE_DIR}/cmake/PrefetchIntrinsics.cmake)
find_prefetch_intrinsics()
include(${xgboost_SOURCE_DIR}/cmake/Version.cmake)
write_version()
set_default_configuration_release()
#-- Options
include(CMakeDependentOption)
## User options
option(BUILD_C_DOC "Build documentation for C APIs using Doxygen." OFF)
option(USE_OPENMP "Build with OpenMP support." ON)
option(BUILD_STATIC_LIB "Build static library" OFF)
option(RABIT_BUILD_MPI "Build MPI" OFF)
option(BUILD_DEPRECATED_CLI "Build the deprecated command line interface" OFF)
option(FORCE_SHARED_CRT "Build with dynamic CRT on Windows (/MD)" OFF)
## Bindings
option(JVM_BINDINGS "Build JVM bindings" OFF)
option(R_LIB "Build shared library for R package" OFF)
@@ -40,22 +66,45 @@ option(ENABLE_ALL_WARNINGS "Enable all compiler warnings. Only effective for GCC
option(LOG_CAPI_INVOCATION "Log all C API invocations for debugging" OFF)
option(GOOGLE_TEST "Build google tests" OFF)
option(USE_DMLC_GTEST "Use google tests bundled with dmlc-core submodule" OFF)
option(USE_DEVICE_DEBUG "Generate CUDA device debug info." OFF)
option(USE_DEVICE_DEBUG "Generate CUDA/HIP device debug info." OFF)
option(USE_NVTX "Build with cuda profiling annotations. Developers only." OFF)
set(NVTX_HEADER_DIR "" CACHE PATH "Path to the stand-alone nvtx header")
option(RABIT_MOCK "Build rabit with mock" OFF)
option(HIDE_CXX_SYMBOLS "Build shared library and hide all C++ symbols" OFF)
option(KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR "Output build artifacts in CMake binary dir" OFF)
## CUDA
option(USE_CUDA "Build with GPU acceleration" OFF)
option(USE_PER_THREAD_DEFAULT_STREAM "Build with per-thread default stream" ON)
option(USE_NCCL "Build with NCCL to enable distributed GPU support." OFF)
# This is specifically designed for the PyPI binary release and should be disabled in most cases.
option(USE_DLOPEN_NCCL "Whether to load nccl dynamically." OFF)
option(BUILD_WITH_SHARED_NCCL "Build with shared NCCL library." OFF)
option(BUILD_WITH_CUDA_CUB "Build with cub in CUDA installation" OFF)
set(GPU_COMPUTE_VER "" CACHE STRING
"Semicolon separated list of compute versions to be built against, e.g. '35;61'")
## Copied From dmlc
option(USE_HDFS "Build with HDFS support" OFF)
option(USE_AZURE "Build with AZURE support" OFF)
option(USE_S3 "Build with S3 support" OFF)
if(USE_CUDA)
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES AND NOT DEFINED ENV{CUDAARCHS})
set(GPU_COMPUTE_VER "" CACHE STRING
"Semicolon separated list of compute versions to be built against, e.g. '35;61'")
else()
# Clear any cached values from previous runs
unset(GPU_COMPUTE_VER)
unset(GPU_COMPUTE_VER CACHE)
endif()
endif()
# CUDA device LTO was introduced in CMake v3.25 and requires host LTO to be enabled as well.
# It can still be explicitly disabled, allowing for LTO on the host only, on both host and
# device, or on neither; device-only LTO is not a supported configuration.
cmake_dependent_option(USE_CUDA_LTO
"Enable link-time optimization for CUDA device code"
"${CMAKE_INTERPROCEDURAL_OPTIMIZATION}"
"CMAKE_VERSION VERSION_GREATER_EQUAL 3.25;USE_CUDA;CMAKE_INTERPROCEDURAL_OPTIMIZATION"
OFF)
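In effect, USE_CUDA_LTO is only offered as a real choice when every listed condition holds; a configure line exercising it might look like this (sketch):
  cmake -S . -B build -DUSE_CUDA=ON -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON -DUSE_CUDA_LTO=ON
  # on CMake < 3.25, or without host IPO, USE_CUDA_LTO is forced to the fallback OFF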
## HIP
option(USE_HIP "Build with GPU acceleration" OFF)
option(USE_RCCL "Build with RCCL to enable distributed GPU support." OFF)
# This is specifically designed for the PyPI binary release and should be disabled in most cases.
option(USE_DLOPEN_RCCL "Whether to load rccl dynamically." OFF)
option(BUILD_WITH_SHARED_RCCL "Build with shared RCCL library." OFF)
## Sanitizers
option(USE_SANITIZER "Use sanitizer flags" OFF)
option(SANITIZER_PATH "Path to sanitizers.")
@@ -63,85 +112,159 @@ set(ENABLED_SANITIZERS "address" "leak" CACHE STRING
"Semicolon separated list of sanitizer names. E.g 'address;leak'. Supported sanitizers are
address, leak, undefined and thread.")
## Plugins
option(PLUGIN_DENSE_PARSER "Build dense parser plugin" OFF)
option(PLUGIN_RMM "Build with RAPIDS Memory Manager (RMM)" OFF)
option(PLUGIN_FEDERATED "Build with Federated Learning" OFF)
## TODO: 1. Add check if DPC++ compiler is used for building
option(PLUGIN_UPDATER_ONEAPI "DPC++ updater" OFF)
option(PLUGIN_SYCL "SYCL plugin" OFF)
option(ADD_PKGCONFIG "Add xgboost.pc into system." ON)
#-- Checks for building XGBoost
if (USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
if(USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
message(SEND_ERROR "Do not enable `USE_DEBUG_OUTPUT' with release build.")
endif (USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
if (USE_NCCL AND NOT (USE_CUDA))
endif()
if(USE_NCCL AND NOT (USE_CUDA))
message(SEND_ERROR "`USE_NCCL` must be enabled with `USE_CUDA` flag.")
endif (USE_NCCL AND NOT (USE_CUDA))
if (USE_DEVICE_DEBUG AND NOT (USE_CUDA))
endif()
if(USE_DEVICE_DEBUG AND NOT (USE_CUDA))
message(SEND_ERROR "`USE_DEVICE_DEBUG` must be enabled with `USE_CUDA` flag.")
endif (USE_DEVICE_DEBUG AND NOT (USE_CUDA))
if (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
endif()
if(BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_NCCL=ON to enable BUILD_WITH_SHARED_NCCL.")
endif (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
if (JVM_BINDINGS AND R_LIB)
endif()
if(USE_DLOPEN_NCCL AND (NOT USE_NCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_NCCL=ON to enable USE_DLOPEN_NCCL.")
endif()
if(USE_DLOPEN_NCCL AND (NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux")))
message(SEND_ERROR "`USE_DLOPEN_NCCL` supports only Linux at the moment.")
endif()
if(USE_RCCL AND NOT (USE_HIP))
message(SEND_ERROR "`USE_RCCL` must be enabled with `USE_HIP` flag.")
endif()
if(BUILD_WITH_SHARED_RCCL AND (NOT USE_RCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_RCCL=ON to enable BUILD_WITH_SHARED_RCCL.")
endif()
if(USE_DLOPEN_RCCL AND (NOT USE_RCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_RCCL=ON to enable USE_DLOPEN_RCCL.")
endif()
if(USE_DLOPEN_RCCL AND (NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux")))
message(SEND_ERROR "`USE_DLOPEN_RCCL` supports only Linux at the moment.")
endif()
if(JVM_BINDINGS AND R_LIB)
message(SEND_ERROR "`R_LIB' is not compatible with `JVM_BINDINGS' as they both have customized configurations.")
endif (JVM_BINDINGS AND R_LIB)
if (R_LIB AND GOOGLE_TEST)
message(WARNING "Some C++ unittests will fail with `R_LIB` enabled,
as R package redirects some functions to R runtime implementation.")
endif (R_LIB AND GOOGLE_TEST)
if (USE_AVX)
message(SEND_ERROR "The option 'USE_AVX' is deprecated as experimental AVX features have been removed from XGBoost.")
endif (USE_AVX)
if (PLUGIN_LZ4)
message(SEND_ERROR "The option 'PLUGIN_LZ4' is removed from XGBoost.")
endif (PLUGIN_LZ4)
if (PLUGIN_RMM AND NOT (USE_CUDA))
endif()
if(R_LIB AND GOOGLE_TEST)
message(
WARNING
"Some C++ tests will fail with `R_LIB` enabled, as R package redirects some functions to R runtime implementation."
)
endif()
if(PLUGIN_RMM AND NOT (USE_CUDA))
message(SEND_ERROR "`PLUGIN_RMM` must be enabled with `USE_CUDA` flag.")
endif (PLUGIN_RMM AND NOT (USE_CUDA))
if (PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
endif()
if(PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
message(SEND_ERROR "`PLUGIN_RMM` must be used with GCC or Clang compiler.")
endif (PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
if (PLUGIN_RMM AND NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux"))
endif()
if(PLUGIN_RMM AND NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux"))
message(SEND_ERROR "`PLUGIN_RMM` must be used with Linux.")
endif (PLUGIN_RMM AND NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux"))
if (ENABLE_ALL_WARNINGS)
if ((NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang") AND (NOT CMAKE_CXX_COMPILER_ID STREQUAL "GNU"))
endif()
if(ENABLE_ALL_WARNINGS)
if((NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang") AND (NOT CMAKE_CXX_COMPILER_ID STREQUAL "GNU"))
message(SEND_ERROR "ENABLE_ALL_WARNINGS is only available for Clang and GCC.")
endif ((NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang") AND (NOT CMAKE_CXX_COMPILER_ID STREQUAL "GNU"))
endif (ENABLE_ALL_WARNINGS)
if (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
endif()
endif()
if(BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
message(SEND_ERROR "Cannot build a static library libxgboost.a when R or JVM packages are enabled.")
endif (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
if (PLUGIN_RMM AND (NOT BUILD_WITH_CUDA_CUB))
message(SEND_ERROR "Cannot build with RMM using cub submodule.")
endif (PLUGIN_RMM AND (NOT BUILD_WITH_CUDA_CUB))
endif()
if(PLUGIN_FEDERATED)
if(CMAKE_CROSSCOMPILING)
message(SEND_ERROR "Cannot cross compile with federated learning support")
endif()
if(BUILD_STATIC_LIB)
message(SEND_ERROR "Cannot build static lib with federated learning support")
endif()
if(R_LIB OR JVM_BINDINGS)
message(SEND_ERROR "Cannot enable federated learning support when R or JVM packages are enabled.")
endif()
if(WIN32)
message(SEND_ERROR "Federated learning not supported for Windows platform")
endif()
endif()
#-- Removed options
if(USE_AVX)
message(SEND_ERROR "The option `USE_AVX` is deprecated as experimental AVX features have been removed from XGBoost.")
endif()
if(PLUGIN_LZ4)
message(SEND_ERROR "The option `PLUGIN_LZ4` is removed from XGBoost.")
endif()
if(RABIT_BUILD_MPI)
message(SEND_ERROR "The option `RABIT_BUILD_MPI` has been removed from XGBoost.")
endif()
if(USE_S3)
message(SEND_ERROR "The option `USE_S3` has been removed from XGBoost")
endif()
if(USE_AZURE)
message(SEND_ERROR "The option `USE_AZURE` has been removed from XGBoost")
endif()
if(USE_HDFS)
message(SEND_ERROR "The option `USE_HDFS` has been removed from XGBoost")
endif()
if(PLUGIN_DENSE_PARSER)
message(SEND_ERROR "The option `PLUGIN_DENSE_PARSER` has been removed from XGBoost.")
endif()
#-- Sanitizer
if (USE_SANITIZER)
if(USE_SANITIZER)
include(cmake/Sanitizer.cmake)
enable_sanitizers("${ENABLED_SANITIZERS}")
endif (USE_SANITIZER)
endif()
if (USE_CUDA)
if(USE_CUDA)
set(USE_OPENMP ON CACHE BOOL "CUDA requires OpenMP" FORCE)
# `export CXX=' is ignored by CMake CUDA.
set(CMAKE_CUDA_HOST_COMPILER ${CMAKE_CXX_COMPILER})
message(STATUS "Configured CUDA host compiler: ${CMAKE_CUDA_HOST_COMPILER}")
if(NOT DEFINED CMAKE_CUDA_HOST_COMPILER AND NOT DEFINED ENV{CUDAHOSTCXX})
set(CMAKE_CUDA_HOST_COMPILER ${CMAKE_CXX_COMPILER} CACHE FILEPATH
"The compiler executable to use when compiling host code for CUDA or HIP language files.")
mark_as_advanced(CMAKE_CUDA_HOST_COMPILER)
message(STATUS "Configured CUDA host compiler: ${CMAKE_CUDA_HOST_COMPILER}")
endif()
if(NOT DEFINED CMAKE_CUDA_RUNTIME_LIBRARY)
set(CMAKE_CUDA_RUNTIME_LIBRARY Static)
endif()
enable_language(CUDA)
if (${CMAKE_CUDA_COMPILER_VERSION} VERSION_LESS 10.1)
message(FATAL_ERROR "CUDA version must be at least 10.1!")
if(${CMAKE_CUDA_COMPILER_VERSION} VERSION_LESS 11.0)
message(FATAL_ERROR "CUDA version must be at least 11.0!")
endif()
if(DEFINED GPU_COMPUTE_VER)
compute_cmake_cuda_archs("${GPU_COMPUTE_VER}")
endif()
set(GEN_CODE "")
format_gencode_flags("${GPU_COMPUTE_VER}" GEN_CODE)
add_subdirectory(${PROJECT_SOURCE_DIR}/gputreeshap)
if ((${CMAKE_CUDA_COMPILER_VERSION} VERSION_GREATER_EQUAL 11.4) AND (NOT BUILD_WITH_CUDA_CUB))
message(SEND_ERROR "`BUILD_WITH_CUDA_CUB` should be set to `ON` for CUDA >= 11.4")
endif ()
endif (USE_CUDA)
find_package(CUDAToolkit REQUIRED)
endif()
if (FORCE_COLORED_OUTPUT AND (CMAKE_GENERATOR STREQUAL "Ninja") AND
if (USE_HIP)
set(USE_OPENMP ON CACHE BOOL "HIP requires OpenMP" FORCE)
# `export CXX=' is ignored by CMake HIP.
set(CMAKE_HIP_HOST_COMPILER ${CMAKE_CXX_COMPILER})
message(STATUS "Configured HIP host compiler: ${CMAKE_HIP_HOST_COMPILER}")
enable_language(HIP)
find_package(hip REQUIRED)
find_package(rocthrust REQUIRED)
find_package(hipcub REQUIRED)
set(CMAKE_HIP_FLAGS "${CMAKE_HIP_FLAGS} -I${HIP_INCLUDE_DIRS}")
set(CMAKE_HIP_FLAGS "${CMAKE_HIP_FLAGS} -Wunused-result -w")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D__HIP_PLATFORM_AMD__")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -I${HIP_INCLUDE_DIRS}")
#set(CMAKE_HIP_SEPARABLE_COMPILATION ON)
add_subdirectory(${PROJECT_SOURCE_DIR}/rocgputreeshap)
endif (USE_HIP)
if(FORCE_COLORED_OUTPUT AND (CMAKE_GENERATOR STREQUAL "Ninja") AND
((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") OR
(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")))
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fdiagnostics-color=always")
@@ -149,59 +272,110 @@ endif()
find_package(Threads REQUIRED)
if (USE_OPENMP)
if (APPLE)
# Require CMake 3.16+ on Mac OSX, as previous versions of CMake had trouble locating
# OpenMP on Mac. See https://github.com/dmlc/xgboost/pull/5146#issuecomment-568312706
cmake_minimum_required(VERSION 3.16)
endif (APPLE)
find_package(OpenMP REQUIRED)
endif (USE_OPENMP)
if(USE_OPENMP)
if(APPLE)
find_package(OpenMP)
if(NOT OpenMP_FOUND)
# Try again with extra path info; required for libomp 15+ from Homebrew
execute_process(COMMAND brew --prefix libomp
OUTPUT_VARIABLE HOMEBREW_LIBOMP_PREFIX
OUTPUT_STRIP_TRAILING_WHITESPACE)
set(OpenMP_C_FLAGS
"-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include")
set(OpenMP_CXX_FLAGS
"-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include")
set(OpenMP_C_LIB_NAMES omp)
set(OpenMP_CXX_LIB_NAMES omp)
set(OpenMP_omp_LIBRARY ${HOMEBREW_LIBOMP_PREFIX}/lib/libomp.dylib)
find_package(OpenMP REQUIRED)
endif()
else()
find_package(OpenMP REQUIRED)
endif()
endif()
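The Homebrew fallback above amounts to pointing FindOpenMP at the keg-only libomp by hand; roughly (sketch):
  LIBOMP_PREFIX="$(brew --prefix libomp)"   # e.g. /opt/homebrew/opt/libomp
  # OpenMP_CXX_FLAGS then becomes "-Xpreprocessor -fopenmp -I$LIBOMP_PREFIX/include"
  # and OpenMP_omp_LIBRARY resolves to $LIBOMP_PREFIX/lib/libomp.dylib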
# Add for IBM i
if(${CMAKE_SYSTEM_NAME} MATCHES "OS400")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -pthread")
set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> -X64 qc <TARGET> <OBJECTS>")
endif()
if (USE_NCCL)
if(USE_NCCL)
find_package(Nccl REQUIRED)
endif (USE_NCCL)
endif()
if (USE_RCCL)
find_package(rccl REQUIRED)
endif (USE_RCCL)
# dmlc-core
msvc_use_static_runtime()
if(FORCE_SHARED_CRT)
set(DMLC_FORCE_SHARED_CRT ON)
endif()
add_subdirectory(${xgboost_SOURCE_DIR}/dmlc-core)
if (MSVC)
if (TARGET dmlc_unit_tests)
target_compile_options(dmlc_unit_tests PRIVATE
-D_CRT_SECURE_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE)
endif (TARGET dmlc_unit_tests)
endif (MSVC)
if(MSVC)
if(TARGET dmlc_unit_tests)
target_compile_options(
dmlc_unit_tests PRIVATE
-D_CRT_SECURE_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE
)
endif()
endif()
# rabit
add_subdirectory(rabit)
if (RABIT_BUILD_MPI)
find_package(MPI REQUIRED)
endif (RABIT_BUILD_MPI)
# core xgboost
add_subdirectory(${xgboost_SOURCE_DIR}/src)
target_link_libraries(objxgboost PUBLIC dmlc)
# Link -lstdc++fs for GCC 8.x
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS "9.0")
target_link_libraries(objxgboost PUBLIC stdc++fs)
endif()
# Exports some R specific definitions and objects
if (R_LIB)
if(R_LIB)
add_subdirectory(${xgboost_SOURCE_DIR}/R-package)
endif (R_LIB)
endif()
# This creates its own shared library `xgboost4j'.
if (JVM_BINDINGS)
if(JVM_BINDINGS)
add_subdirectory(${xgboost_SOURCE_DIR}/jvm-packages)
endif (JVM_BINDINGS)
endif()
# Plugin
add_subdirectory(${xgboost_SOURCE_DIR}/plugin)
if(PLUGIN_RMM)
find_package(rmm REQUIRED)
# Patch the rmm targets so they reference the static cudart
# Remove this patch once RMM stops specifying cudart requirement
# (since RMM is a header-only library, it should not specify cudart in its CMake config)
get_target_property(rmm_link_libs rmm::rmm INTERFACE_LINK_LIBRARIES)
list(REMOVE_ITEM rmm_link_libs CUDA::cudart)
list(APPEND rmm_link_libs CUDA::cudart_static)
set_target_properties(rmm::rmm PROPERTIES INTERFACE_LINK_LIBRARIES "${rmm_link_libs}")
get_target_property(rmm_link_libs rmm::rmm INTERFACE_LINK_LIBRARIES)
endif()
if(PLUGIN_SYCL)
set(CMAKE_CXX_LINK_EXECUTABLE
"icpx <FLAGS> <CMAKE_CXX_LINK_FLAGS> -qopenmp <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>")
set(CMAKE_CXX_CREATE_SHARED_LIBRARY
"icpx <CMAKE_SHARED_LIBRARY_CXX_FLAGS> -qopenmp <LANGUAGE_COMPILE_FLAGS> \
<CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS> <SONAME_FLAG>,<TARGET_SONAME> \
-o <TARGET> <OBJECTS> <LINK_LIBRARIES>")
endif()
#-- library
if (BUILD_STATIC_LIB)
if(BUILD_STATIC_LIB)
add_library(xgboost STATIC)
else (BUILD_STATIC_LIB)
else()
add_library(xgboost SHARED)
endif (BUILD_STATIC_LIB)
endif()
target_link_libraries(xgboost PRIVATE objxgboost)
target_include_directories(xgboost
INTERFACE
@@ -210,53 +384,70 @@ target_include_directories(xgboost
#-- End shared library
#-- CLI for xgboost
add_executable(runxgboost ${xgboost_SOURCE_DIR}/src/cli_main.cc)
target_link_libraries(runxgboost PRIVATE objxgboost)
target_include_directories(runxgboost
PRIVATE
${xgboost_SOURCE_DIR}/include
${xgboost_SOURCE_DIR}/dmlc-core/include
${xgboost_SOURCE_DIR}/rabit/include
)
set_target_properties(runxgboost PROPERTIES OUTPUT_NAME xgboost)
if(BUILD_DEPRECATED_CLI)
add_executable(runxgboost ${xgboost_SOURCE_DIR}/src/cli_main.cc)
target_link_libraries(runxgboost PRIVATE objxgboost)
target_include_directories(runxgboost
PRIVATE
${xgboost_SOURCE_DIR}/include
${xgboost_SOURCE_DIR}/dmlc-core/include
${xgboost_SOURCE_DIR}/rabit/include
)
set_target_properties(runxgboost PROPERTIES OUTPUT_NAME xgboost)
xgboost_target_properties(runxgboost)
xgboost_target_link_libraries(runxgboost)
xgboost_target_defs(runxgboost)
if(KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR)
set_output_directory(runxgboost ${xgboost_BINARY_DIR})
else()
set_output_directory(runxgboost ${xgboost_SOURCE_DIR})
endif()
endif()
#-- End CLI for xgboost
# Common setup for all targets
foreach(target xgboost objxgboost dmlc runxgboost)
foreach(target xgboost objxgboost dmlc)
xgboost_target_properties(${target})
xgboost_target_link_libraries(${target})
xgboost_target_defs(${target})
endforeach()
if (JVM_BINDINGS)
if(JVM_BINDINGS)
xgboost_target_properties(xgboost4j)
xgboost_target_link_libraries(xgboost4j)
xgboost_target_defs(xgboost4j)
endif (JVM_BINDINGS)
endif()
if(KEEP_BUILD_ARTIFACTS_IN_BINARY_DIR)
set_output_directory(xgboost ${xgboost_BINARY_DIR}/lib)
else()
set_output_directory(xgboost ${xgboost_SOURCE_DIR}/lib)
endif()
set_output_directory(runxgboost ${xgboost_SOURCE_DIR})
set_output_directory(xgboost ${xgboost_SOURCE_DIR}/lib)
# Ensure these two targets do not build simultaneously, as they produce outputs with conflicting names
add_dependencies(xgboost runxgboost)
if(BUILD_DEPRECATED_CLI)
add_dependencies(xgboost runxgboost)
endif()
#-- Installing XGBoost
if (R_LIB)
if(R_LIB)
include(cmake/RPackageInstallTargetSetup.cmake)
set_target_properties(xgboost PROPERTIES PREFIX "")
if (APPLE)
if(APPLE)
set_target_properties(xgboost PROPERTIES SUFFIX ".so")
endif (APPLE)
endif()
setup_rpackage_install_target(xgboost "${CMAKE_CURRENT_BINARY_DIR}/R-package-install")
set(CMAKE_INSTALL_PREFIX "${CMAKE_CURRENT_BINARY_DIR}/dummy_inst")
endif (R_LIB)
if (MINGW)
endif()
if(MINGW)
set_target_properties(xgboost PROPERTIES PREFIX "")
endif (MINGW)
endif()
if (BUILD_C_DOC)
if(BUILD_C_DOC)
include(cmake/Doc.cmake)
run_doxygen()
endif (BUILD_C_DOC)
endif()
include(CPack)
@@ -272,11 +463,19 @@ install(DIRECTORY ${xgboost_SOURCE_DIR}/include/xgboost
# > in any export set.
#
# https://github.com/dmlc/xgboost/issues/6085
if (BUILD_STATIC_LIB)
set(INSTALL_TARGETS xgboost runxgboost objxgboost dmlc)
else (BUILD_STATIC_LIB)
set(INSTALL_TARGETS xgboost runxgboost)
endif (BUILD_STATIC_LIB)
if(BUILD_STATIC_LIB)
if(BUILD_DEPRECATED_CLI)
set(INSTALL_TARGETS xgboost runxgboost objxgboost dmlc)
else()
set(INSTALL_TARGETS xgboost objxgboost dmlc)
endif()
else()
if(BUILD_DEPRECATED_CLI)
set(INSTALL_TARGETS xgboost runxgboost)
else()
set(INSTALL_TARGETS xgboost)
endif()
endif()
install(TARGETS ${INSTALL_TARGETS}
EXPORT XGBoostTargets
@@ -300,12 +499,12 @@ write_basic_package_version_file(
COMPATIBILITY AnyNewerVersion)
install(
FILES
${CMAKE_BINARY_DIR}/cmake/xgboost-config.cmake
${CMAKE_CURRENT_BINARY_DIR}/cmake/xgboost-config.cmake
${CMAKE_BINARY_DIR}/cmake/xgboost-config-version.cmake
DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/xgboost)
#-- Test
if (GOOGLE_TEST)
if(GOOGLE_TEST)
enable_testing()
# Unittests.
add_executable(testxgboost)
@@ -325,14 +524,16 @@ if (GOOGLE_TEST)
${xgboost_SOURCE_DIR}/tests/cli/machine.conf.in
${xgboost_BINARY_DIR}/tests/cli/machine.conf
@ONLY)
add_test(
NAME TestXGBoostCLI
COMMAND runxgboost ${xgboost_BINARY_DIR}/tests/cli/machine.conf
WORKING_DIRECTORY ${xgboost_BINARY_DIR})
set_tests_properties(TestXGBoostCLI
PROPERTIES
PASS_REGULAR_EXPRESSION ".*test-rmse:0.087.*")
endif (GOOGLE_TEST)
if(BUILD_DEPRECATED_CLI)
add_test(
NAME TestXGBoostCLI
COMMAND runxgboost ${xgboost_BINARY_DIR}/tests/cli/machine.conf
WORKING_DIRECTORY ${xgboost_BINARY_DIR})
set_tests_properties(TestXGBoostCLI
PROPERTIES
PASS_REGULAR_EXPRESSION ".*test-rmse:0.087.*")
endif()
endif()
# For MSVC: Call msvc_use_static_runtime() once again to completely
# replace /MD with /MT. See https://github.com/dmlc/xgboost/issues/4462
@@ -340,10 +541,10 @@ endif (GOOGLE_TEST)
msvc_use_static_runtime()
# Add xgboost.pc
if (ADD_PKGCONFIG)
if(ADD_PKGCONFIG)
configure_file(${xgboost_SOURCE_DIR}/cmake/xgboost.pc.in ${xgboost_BINARY_DIR}/xgboost.pc @ONLY)
install(
FILES ${xgboost_BINARY_DIR}/xgboost.pc
DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig)
endif (ADD_PKGCONFIG)
endif()
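With ADD_PKGCONFIG left at its default ON, downstream C/C++ builds can consume an installed XGBoost via standard pkg-config usage:
  pkg-config --cflags --libs xgboost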

CONTRIBUTORS.md

@@ -10,8 +10,8 @@ The Project Management Committee(PMC) consists group of active committers that m
- Tianqi is a Ph.D. student working on large-scale machine learning. He is the creator of the project.
* [Michael Benesty](https://github.com/pommedeterresautee)
- Michael is a lawyer and data scientist in France. He is the creator of XGBoost interactive analysis module in R.
* [Yuan Tang](https://github.com/terrytangyuan), Ant Group
- Yuan is a software engineer in Ant Group. He contributed mostly in R and Python packages.
* [Yuan Tang](https://github.com/terrytangyuan), Red Hat
- Yuan is a principal software engineer at Red Hat. He contributed mostly in R and Python packages.
* [Nan Zhu](https://github.com/CodingCat), Uber
- Nan is a software engineer in Uber. He contributed mostly in JVM packages.
* [Jiaming Yuan](https://github.com/trivialfis)

Jenkinsfile vendored

@@ -1,453 +0,0 @@
#!/usr/bin/groovy
// -*- mode: groovy -*-
// Jenkins pipeline
// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/
// Command to run command inside a docker container
dockerRun = 'tests/ci_build/ci_build.sh'
// Which CUDA version to use when building reference distribution wheel
ref_cuda_ver = '10.1'
import groovy.transform.Field
@Field
def commit_id // necessary to pass a variable from one stage to another
pipeline {
// Each stage specify its own agent
agent none
environment {
DOCKER_CACHE_ECR_ID = '492475357299'
DOCKER_CACHE_ECR_REGION = 'us-west-2'
}
// Setup common job properties
options {
ansiColor('xterm')
timestamps()
timeout(time: 240, unit: 'MINUTES')
buildDiscarder(logRotator(numToKeepStr: '10'))
preserveStashes()
}
// Build stages
stages {
stage('Jenkins Linux: Initialize') {
agent { label 'job_initializer' }
steps {
script {
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
checkoutSrcs()
commit_id = "${GIT_COMMIT}"
}
sh 'python3 tests/jenkins_get_approval.py'
stash name: 'srcs'
}
}
stage('Jenkins Linux: Build') {
agent none
steps {
script {
parallel ([
'clang-tidy': { ClangTidy() },
'build-cpu': { BuildCPU() },
'build-cpu-arm64': { BuildCPUARM64() },
'build-cpu-rabit-mock': { BuildCPUMock() },
// Build reference, distribution-ready Python wheel with CUDA 10.1
// using CentOS 7 image
'build-gpu-cuda10.1': { BuildCUDA(cuda_version: '10.1') },
// The build-gpu-* builds below use Ubuntu image
'build-gpu-cuda11.0': { BuildCUDA(cuda_version: '11.0', build_rmm: true) },
'build-gpu-rpkg': { BuildRPackageWithCUDA(cuda_version: '10.1') },
'build-jvm-packages-gpu-cuda10.1': { BuildJVMPackagesWithCUDA(spark_version: '3.0.0', cuda_version: '11.0') },
'build-jvm-packages': { BuildJVMPackages(spark_version: '3.0.0') },
'build-jvm-doc': { BuildJVMDoc() }
])
}
}
}
stage('Jenkins Linux: Test') {
agent none
steps {
script {
parallel ([
'test-python-cpu': { TestPythonCPU() },
'test-python-cpu-arm64': { TestPythonCPUARM64() },
// artifact_cuda_version doesn't apply to RMM tests; RMM tests will always match CUDA version between artifact and host env
'test-python-gpu-cuda11.0-cross': { TestPythonGPU(artifact_cuda_version: '10.1', host_cuda_version: '11.0', test_rmm: true) },
'test-python-gpu-cuda11.0': { TestPythonGPU(artifact_cuda_version: '11.0', host_cuda_version: '11.0') },
'test-python-mgpu-cuda11.0': { TestPythonGPU(artifact_cuda_version: '10.1', host_cuda_version: '11.0', multi_gpu: true, test_rmm: true) },
'test-cpp-gpu-cuda11.0': { TestCppGPU(artifact_cuda_version: '11.0', host_cuda_version: '11.0', test_rmm: true) },
'test-jvm-jdk8': { CrossTestJVMwithJDK(jdk_version: '8', spark_version: '3.0.0') },
'test-jvm-jdk11': { CrossTestJVMwithJDK(jdk_version: '11') },
'test-jvm-jdk12': { CrossTestJVMwithJDK(jdk_version: '12') }
])
}
}
}
stage('Jenkins Linux: Deploy') {
agent none
steps {
script {
parallel ([
'deploy-jvm-packages': { DeployJVMPackages(spark_version: '3.0.0') }
])
}
}
}
}
}
// check out source code from git
def checkoutSrcs() {
retry(5) {
try {
timeout(time: 2, unit: 'MINUTES') {
checkout scm
sh 'git submodule update --init'
}
} catch (exc) {
deleteDir()
error "Failed to fetch source codes"
}
}
}
def GetCUDABuildContainerType(cuda_version) {
return (cuda_version == ref_cuda_ver) ? 'gpu_build_centos7' : 'gpu_build'
}
def ClangTidy() {
node('linux && cpu_build') {
unstash name: 'srcs'
echo "Running clang-tidy job..."
def container_type = "clang_tidy"
def docker_binary = "docker"
def dockerArgs = "--build-arg CUDA_VERSION_ARG=10.1"
sh """
${dockerRun} ${container_type} ${docker_binary} ${dockerArgs} python3 tests/ci_build/tidy.py
"""
deleteDir()
}
}
def BuildCPU() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Build CPU"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} rm -fv dmlc-core/include/dmlc/build_config_default.h
# This step is not necessary, but here we include it, to ensure that DMLC_CORE_USE_CMAKE flag is correctly propagated
# We want to make sure that we use the configured header build/dmlc/build_config.h instead of include/dmlc/build_config_default.h.
# See discussion at https://github.com/dmlc/xgboost/issues/5510
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh -DPLUGIN_DENSE_PARSER=ON
${dockerRun} ${container_type} ${docker_binary} bash -c "cd build && ctest --extra-verbose"
"""
// Sanitizer test
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='-e ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer -e ASAN_OPTIONS=symbolize=1 -e UBSAN_OPTIONS=print_stacktrace=1:log_path=ubsan_error.log --cap-add SYS_PTRACE'"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh -DUSE_SANITIZER=ON -DENABLED_SANITIZERS="address;leak;undefined" \
-DCMAKE_BUILD_TYPE=Debug -DSANITIZER_PATH=/usr/lib/x86_64-linux-gnu/
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} bash -c "cd build && ctest --exclude-regex AllTestsInDMLCUnitTests --extra-verbose"
"""
stash name: 'xgboost_cli', includes: 'xgboost'
deleteDir()
}
}
def BuildCPUARM64() {
node('linux && arm64') {
unstash name: 'srcs'
echo "Build CPU ARM64"
def container_type = "aarch64"
def docker_binary = "docker"
def wheel_tag = "manylinux2014_aarch64"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh --conda-env=aarch64_test -DOPEN_MP:BOOL=ON -DHIDE_CXX_SYMBOLS=ON
${dockerRun} ${container_type} ${docker_binary} bash -c "cd build && ctest --extra-verbose"
${dockerRun} ${container_type} ${docker_binary} bash -c "cd python-package && rm -rf dist/* && python setup.py bdist_wheel --universal"
${dockerRun} ${container_type} ${docker_binary} python tests/ci_build/rename_whl.py python-package/dist/*.whl ${commit_id} ${wheel_tag}
${dockerRun} ${container_type} ${docker_binary} bash -c "auditwheel repair --plat ${wheel_tag} python-package/dist/*.whl && python tests/ci_build/rename_whl.py wheelhouse/*.whl ${commit_id} ${wheel_tag}"
mv -v wheelhouse/*.whl python-package/dist/
# Make sure that libgomp.so is vendored in the wheel
${dockerRun} ${container_type} ${docker_binary} bash -c "unzip -l python-package/dist/*.whl | grep libgomp || exit -1"
"""
echo 'Stashing Python wheel...'
stash name: "xgboost_whl_arm64_cpu", includes: 'python-package/dist/*.whl'
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Uploading Python wheel...'
path = ("${BRANCH_NAME}" == 'master') ? '' : "${BRANCH_NAME}/"
s3Upload bucket: 'xgboost-nightly-builds', path: path, acl: 'PublicRead', workingDir: 'python-package/dist', includePathPattern:'**/*.whl'
}
stash name: 'xgboost_cli_arm64', includes: 'xgboost'
deleteDir()
}
}
def BuildCPUMock() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Build CPU with rabit mock"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_mock_cmake.sh
"""
echo 'Stashing rabit C++ test executable (xgboost)...'
stash name: 'xgboost_rabit_tests', includes: 'xgboost'
deleteDir()
}
}
def BuildCUDA(args) {
node('linux && cpu_build') {
unstash name: 'srcs'
echo "Build with CUDA ${args.cuda_version}"
def container_type = GetCUDABuildContainerType(args.cuda_version)
def docker_binary = "docker"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
def arch_flag = ""
if (env.BRANCH_NAME != 'master' && !(env.BRANCH_NAME.startsWith('release'))) {
arch_flag = "-DGPU_COMPUTE_VER=75"
}
def wheel_tag = "manylinux2014_x86_64"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_via_cmake.sh -DUSE_CUDA=ON -DUSE_NCCL=ON -DOPEN_MP:BOOL=ON -DHIDE_CXX_SYMBOLS=ON ${arch_flag}
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "cd python-package && rm -rf dist/* && python setup.py bdist_wheel --universal"
${dockerRun} ${container_type} ${docker_binary} ${docker_args} python tests/ci_build/rename_whl.py python-package/dist/*.whl ${commit_id} ${wheel_tag}
"""
if (args.cuda_version == ref_cuda_ver) {
sh """
${dockerRun} auditwheel_x86_64 ${docker_binary} auditwheel repair --plat ${wheel_tag} python-package/dist/*.whl
${dockerRun} ${container_type} ${docker_binary} ${docker_args} python tests/ci_build/rename_whl.py wheelhouse/*.whl ${commit_id} ${wheel_tag}
mv -v wheelhouse/*.whl python-package/dist/
# Make sure that libgomp.so is vendored in the wheel
${dockerRun} auditwheel_x86_64 ${docker_binary} bash -c "unzip -l python-package/dist/*.whl | grep libgomp || exit -1"
"""
}
echo 'Stashing Python wheel...'
stash name: "xgboost_whl_cuda${args.cuda_version}", includes: 'python-package/dist/*.whl'
if (args.cuda_version == ref_cuda_ver && (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release'))) {
echo 'Uploading Python wheel...'
path = ("${BRANCH_NAME}" == 'master') ? '' : "${BRANCH_NAME}/"
s3Upload bucket: 'xgboost-nightly-builds', path: path, acl: 'PublicRead', workingDir: 'python-package/dist', includePathPattern:'**/*.whl'
}
echo 'Stashing C++ test executable (testxgboost)...'
stash name: "xgboost_cpp_tests_cuda${args.cuda_version}", includes: 'build/testxgboost'
if (args.build_rmm) {
echo "Build with CUDA ${args.cuda_version} and RMM"
container_type = "rmm"
docker_binary = "docker"
docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
sh """
rm -rf build/
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_via_cmake.sh --conda-env=gpu_test -DUSE_CUDA=ON -DUSE_NCCL=ON -DPLUGIN_RMM=ON -DBUILD_WITH_CUDA_CUB=ON ${arch_flag}
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "cd python-package && rm -rf dist/* && python setup.py bdist_wheel --universal"
${dockerRun} ${container_type} ${docker_binary} ${docker_args} python tests/ci_build/rename_whl.py python-package/dist/*.whl ${commit_id} manylinux2014_x86_64
"""
echo 'Stashing Python wheel...'
stash name: "xgboost_whl_rmm_cuda${args.cuda_version}", includes: 'python-package/dist/*.whl'
echo 'Stashing C++ test executable (testxgboost)...'
stash name: "xgboost_cpp_tests_rmm_cuda${args.cuda_version}", includes: 'build/testxgboost'
}
deleteDir()
}
}
def BuildRPackageWithCUDA(args) {
node('linux && cpu_build') {
unstash name: 'srcs'
def container_type = 'gpu_build_r_centos7'
def docker_binary = "docker"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_r_pkg_with_cuda.sh ${commit_id}
"""
echo 'Uploading R tarball...'
path = ("${BRANCH_NAME}" == 'master') ? '' : "${BRANCH_NAME}/"
s3Upload bucket: 'xgboost-nightly-builds', path: path, acl: 'PublicRead', includePathPattern:'xgboost_r_gpu_linux_*.tar.gz'
}
deleteDir()
}
}
def BuildJVMPackagesWithCUDA(args) {
node('linux && mgpu') {
unstash name: 'srcs'
echo "Build XGBoost4J-Spark with Spark ${args.spark_version}, CUDA ${args.cuda_version}"
def container_type = "jvm_gpu_build"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
def arch_flag = ""
if (env.BRANCH_NAME != 'master' && !(env.BRANCH_NAME.startsWith('release'))) {
arch_flag = "-DGPU_COMPUTE_VER=75"
}
// Use only 4 CPU cores
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='--cpuset-cpus 0-3'"
sh """
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_jvm_packages.sh ${args.spark_version} -Duse.cuda=ON $arch_flag
"""
echo "Stashing XGBoost4J JAR with CUDA ${args.cuda_version} ..."
stash name: 'xgboost4j_jar_gpu', includes: "jvm-packages/xgboost4j-gpu/target/*.jar,jvm-packages/xgboost4j-spark-gpu/target/*.jar"
deleteDir()
}
}
def BuildJVMPackages(args) {
node('linux && cpu') {
unstash name: 'srcs'
echo "Build XGBoost4J-Spark with Spark ${args.spark_version}"
def container_type = "jvm"
def docker_binary = "docker"
// Use only 4 CPU cores
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='--cpuset-cpus 0-3'"
sh """
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_jvm_packages.sh ${args.spark_version}
"""
echo 'Stashing XGBoost4J JAR...'
stash name: 'xgboost4j_jar', includes: "jvm-packages/xgboost4j/target/*.jar,jvm-packages/xgboost4j-spark/target/*.jar,jvm-packages/xgboost4j-example/target/*.jar"
deleteDir()
}
}
def BuildJVMDoc() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Building JVM doc..."
def container_type = "jvm"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_jvm_doc.sh ${BRANCH_NAME}
"""
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Uploading doc...'
s3Upload file: "jvm-packages/${BRANCH_NAME}.tar.bz2", bucket: 'xgboost-docs', acl: 'PublicRead', path: "${BRANCH_NAME}.tar.bz2"
}
deleteDir()
}
}
def TestPythonCPU() {
node('linux && cpu') {
unstash name: "xgboost_whl_cuda${ref_cuda_ver}"
unstash name: 'srcs'
unstash name: 'xgboost_cli'
echo "Test Python CPU"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/test_python.sh cpu
"""
deleteDir()
}
}
def TestPythonCPUARM64() {
node('linux && arm64') {
unstash name: "xgboost_whl_arm64_cpu"
unstash name: 'srcs'
unstash name: 'xgboost_cli_arm64'
echo "Test Python CPU ARM64"
def container_type = "aarch64"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/test_python.sh cpu-arm64
"""
deleteDir()
}
}
def TestPythonGPU(args) {
def nodeReq = (args.multi_gpu) ? 'linux && mgpu' : 'linux && gpu'
def artifact_cuda_version = (args.artifact_cuda_version) ?: ref_cuda_ver
node(nodeReq) {
unstash name: "xgboost_whl_cuda${artifact_cuda_version}"
unstash name: "xgboost_cpp_tests_cuda${artifact_cuda_version}"
unstash name: 'srcs'
echo "Test Python GPU: CUDA ${args.host_cuda_version}"
def container_type = "gpu"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.host_cuda_version}"
def mgpu_indicator = (args.multi_gpu) ? 'mgpu' : 'gpu'
// Allocate extra space in /dev/shm to enable NCCL
def docker_extra_params = (args.multi_gpu) ? "CI_DOCKER_EXTRA_PARAMS_INIT='--shm-size=4g'" : ''
sh "${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh ${mgpu_indicator}"
if (args.test_rmm) {
sh "rm -rfv build/ python-package/dist/"
unstash name: "xgboost_whl_rmm_cuda${args.host_cuda_version}"
unstash name: "xgboost_cpp_tests_rmm_cuda${args.host_cuda_version}"
sh "${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh ${mgpu_indicator} --use-rmm-pool"
}
deleteDir()
}
}
def TestCppGPU(args) {
def nodeReq = 'linux && mgpu'
def artifact_cuda_version = (args.artifact_cuda_version) ?: ref_cuda_ver
node(nodeReq) {
unstash name: "xgboost_cpp_tests_cuda${artifact_cuda_version}"
unstash name: 'srcs'
echo "Test C++, CUDA ${args.host_cuda_version}"
def container_type = "gpu"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.host_cuda_version}"
sh "${dockerRun} ${container_type} ${docker_binary} ${docker_args} build/testxgboost"
if (args.test_rmm) {
sh "rm -rfv build/"
unstash name: "xgboost_cpp_tests_rmm_cuda${args.host_cuda_version}"
echo "Test C++, CUDA ${args.host_cuda_version} with RMM"
container_type = "rmm"
docker_binary = "nvidia-docker"
docker_args = "--build-arg CUDA_VERSION_ARG=${args.host_cuda_version}"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "source activate gpu_test && build/testxgboost --use-rmm-pool --gtest_filter=-*DeathTest.*"
"""
}
deleteDir()
}
}
def CrossTestJVMwithJDK(args) {
node('linux && cpu') {
unstash name: 'xgboost4j_jar'
unstash name: 'srcs'
if (args.spark_version != null) {
echo "Test XGBoost4J on a machine with JDK ${args.jdk_version}, Spark ${args.spark_version}"
} else {
echo "Test XGBoost4J on a machine with JDK ${args.jdk_version}"
}
def container_type = "jvm_cross"
def docker_binary = "docker"
def spark_arg = (args.spark_version != null) ? "--build-arg SPARK_VERSION=${args.spark_version}" : ""
def docker_args = "--build-arg JDK_VERSION=${args.jdk_version} ${spark_arg}"
// Run integration tests only when spark_version is given
def docker_extra_params = (args.spark_version != null) ? "CI_DOCKER_EXTRA_PARAMS_INIT='-e RUN_INTEGRATION_TEST=1'" : ""
sh """
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_jvm_cross.sh
"""
deleteDir()
}
}
def DeployJVMPackages(args) {
node('linux && cpu') {
unstash name: 'srcs'
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Deploying to xgboost-maven-repo S3 repo...'
sh """
${dockerRun} jvm_gpu_build docker --build-arg CUDA_VERSION_ARG=10.1 tests/ci_build/deploy_jvm_packages.sh ${args.spark_version}
"""
}
deleteDir()
}
}


@@ -1,163 +0,0 @@
#!/usr/bin/groovy
// -*- mode: groovy -*-
/* Jenkins pipeline for Windows AMD64 target */
import groovy.transform.Field
@Field
def commit_id // necessary to pass a variable from one stage to another
pipeline {
agent none
// Setup common job properties
options {
timestamps()
timeout(time: 240, unit: 'MINUTES')
buildDiscarder(logRotator(numToKeepStr: '10'))
preserveStashes()
}
// Build stages
stages {
stage('Jenkins Win64: Initialize') {
agent { label 'job_initializer' }
steps {
script {
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
checkoutSrcs()
commit_id = "${GIT_COMMIT}"
}
sh 'python3 tests/jenkins_get_approval.py'
stash name: 'srcs'
}
}
stage('Jenkins Win64: Build') {
agent none
steps {
script {
parallel ([
'build-win64-cuda10.1': { BuildWin64() },
'build-rpkg-win64-cuda10.1': { BuildRPackageWithCUDAWin64() }
])
}
}
}
stage('Jenkins Win64: Test') {
agent none
steps {
script {
parallel ([
'test-win64-cuda10.1': { TestWin64() },
])
}
}
}
}
}
// check out source code from git
def checkoutSrcs() {
retry(5) {
try {
timeout(time: 2, unit: 'MINUTES') {
checkout scm
sh 'git submodule update --init'
}
} catch (exc) {
deleteDir()
error "Failed to fetch source codes"
}
}
}
def BuildWin64() {
node('win64 && cuda10_unified') {
deleteDir()
unstash name: 'srcs'
echo "Building XGBoost for Windows AMD64 target..."
bat "nvcc --version"
def arch_flag = ""
if (env.BRANCH_NAME != 'master' && !(env.BRANCH_NAME.startsWith('release'))) {
arch_flag = "-DGPU_COMPUTE_VER=75"
}
bat """
mkdir build
cd build
cmake .. -G"Visual Studio 15 2017 Win64" -DUSE_CUDA=ON -DCMAKE_VERBOSE_MAKEFILE=ON -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON ${arch_flag} -DCMAKE_UNITY_BUILD=ON
"""
bat """
cd build
"C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\MSBuild\\15.0\\Bin\\MSBuild.exe" xgboost.sln /m /p:Configuration=Release /nodeReuse:false
"""
bat """
cd python-package
conda activate && python setup.py bdist_wheel --universal && for /R %%i in (dist\\*.whl) DO python ../tests/ci_build/rename_whl.py "%%i" ${commit_id} win_amd64
"""
echo "Insert vcomp140.dll (OpenMP runtime) into the wheel..."
bat """
cd python-package\\dist
COPY /B ..\\..\\tests\\ci_build\\insert_vcomp140.py
conda activate && python insert_vcomp140.py *.whl
"""
echo 'Stashing Python wheel...'
stash name: 'xgboost_whl', includes: 'python-package/dist/*.whl'
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Uploading Python wheel...'
path = ("${BRANCH_NAME}" == 'master') ? '' : "${BRANCH_NAME}/"
s3Upload bucket: 'xgboost-nightly-builds', path: path, acl: 'PublicRead', workingDir: 'python-package/dist', includePathPattern:'**/*.whl'
}
echo 'Stashing C++ test executable (testxgboost)...'
stash name: 'xgboost_cpp_tests', includes: 'build/testxgboost.exe'
stash name: 'xgboost_cli', includes: 'xgboost.exe'
deleteDir()
}
}
def BuildRPackageWithCUDAWin64() {
node('win64 && cuda10_unified') {
deleteDir()
unstash name: 'srcs'
bat "nvcc --version"
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
bat """
bash tests/ci_build/build_r_pkg_with_cuda_win64.sh ${commit_id}
"""
echo 'Uploading R tarball...'
path = ("${BRANCH_NAME}" == 'master') ? '' : "${BRANCH_NAME}/"
s3Upload bucket: 'xgboost-nightly-builds', path: path, acl: 'PublicRead', includePathPattern:'xgboost_r_gpu_win64_*.tar.gz'
}
deleteDir()
}
}
def TestWin64() {
node('win64 && cuda10_unified') {
deleteDir()
unstash name: 'srcs'
unstash name: 'xgboost_whl'
unstash name: 'xgboost_cli'
unstash name: 'xgboost_cpp_tests'
echo "Test Win64"
bat "nvcc --version"
echo "Running C++ tests..."
bat "build\\testxgboost.exe"
echo "Installing Python dependencies..."
def env_name = 'win64_' + UUID.randomUUID().toString().replaceAll('-', '')
bat "conda activate && mamba env create -n ${env_name} --file=tests/ci_build/conda_env/win64_test.yml"
echo "Installing Python wheel..."
bat """
conda activate ${env_name} && for /R %%i in (python-package\\dist\\*.whl) DO python -m pip install "%%i"
"""
echo "Running Python tests..."
bat "conda activate ${env_name} && python -m pytest -v -s -rxXs --fulltrace tests\\python"
bat """
conda activate ${env_name} && python -m pytest -v -s -rxXs --fulltrace -m "(not slow) and (not mgpu)" tests\\python-gpu
"""
bat "conda env remove --name ${env_name}"
deleteDir()
}
}

Makefile

@@ -1,165 +0,0 @@
ifndef DMLC_CORE
DMLC_CORE = dmlc-core
endif
ifndef RABIT
RABIT = rabit
endif
ROOTDIR = $(CURDIR)
# workarounds for some buggy old make & msys2 versions seen in windows
ifeq (NA, $(shell test ! -d "$(ROOTDIR)" && echo NA ))
$(warning Attempting to fix non-existing ROOTDIR [$(ROOTDIR)])
ROOTDIR := $(shell pwd)
$(warning New ROOTDIR [$(ROOTDIR)] $(shell test -d "$(ROOTDIR)" && echo " is OK" ))
endif
MAKE_OK := $(shell "$(MAKE)" -v 2> /dev/null)
ifndef MAKE_OK
$(warning Attempting to recover non-functional MAKE [$(MAKE)])
MAKE := $(shell which make 2> /dev/null)
MAKE_OK := $(shell "$(MAKE)" -v 2> /dev/null)
endif
$(warning MAKE [$(MAKE)] - $(if $(MAKE_OK),checked OK,PROBLEM))
include $(DMLC_CORE)/make/dmlc.mk
# set compiler defaults for OSX versus *nix
# let people override either
OS := $(shell uname)
ifeq ($(OS), Darwin)
ifndef CC
export CC = $(if $(shell which clang), clang, gcc)
endif
ifndef CXX
export CXX = $(if $(shell which clang++), clang++, g++)
endif
else
# linux defaults
ifndef CC
export CC = gcc
endif
ifndef CXX
export CXX = g++
endif
endif
export CFLAGS= -DDMLC_LOG_CUSTOMIZE=1 -std=c++14 -Wall -Wno-unknown-pragmas -Iinclude $(ADD_CFLAGS)
CFLAGS += -I$(DMLC_CORE)/include -I$(RABIT)/include -I$(GTEST_PATH)/include
ifeq ($(TEST_COVER), 1)
CFLAGS += -g -O0 -fprofile-arcs -ftest-coverage
else
CFLAGS += -O3 -funroll-loops
endif
ifndef LINT_LANG
LINT_LANG= "all"
endif
# specify tensor path
.PHONY: clean all lint clean_all doxygen rcpplint pypack Rpack Rbuild Rcheck
build/%.o: src/%.cc
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -MM -MT build/$*.o $< >build/$*.d
$(CXX) -c $(CFLAGS) $< -o $@
# This should be equivalent to $(ALL_OBJ) except for build/cli_main.o
amalgamation/xgboost-all0.o: amalgamation/xgboost-all0.cc
$(CXX) -c $(CFLAGS) $< -o $@
rcpplint:
python3 dmlc-core/scripts/lint.py xgboost ${LINT_LANG} R-package/src
lint: rcpplint
python3 dmlc-core/scripts/lint.py --exclude_path python-package/xgboost/dmlc-core \
python-package/xgboost/include python-package/xgboost/lib \
python-package/xgboost/make python-package/xgboost/rabit \
python-package/xgboost/src --pylint-rc ${PWD}/python-package/.pylintrc xgboost \
${LINT_LANG} include src python-package
ifeq ($(TEST_COVER), 1)
cover: check
@- $(foreach COV_OBJ, $(COVER_OBJ), \
gcov -pbcul -o $(shell dirname $(COV_OBJ)) $(COV_OBJ) > gcov.log || cat gcov.log; \
)
endif
# dask tests are required to pass; others are not.
# If any of the dask tests fail, the contributor won't see the other errors.
mypy:
cd python-package; \
mypy ./xgboost/dask.py && \
mypy ./xgboost/rabit.py && \
mypy ../demo/guide-python/external_memory.py && \
mypy ../tests/python-gpu/test_gpu_with_dask.py && \
mypy ../tests/python/test_data_iterator.py && \
mypy ../tests/python-gpu/test_gpu_data_iterator.py && \
mypy ./xgboost/sklearn.py || exit 1; \
mypy . || true ;
clean:
$(RM) -rf build lib bin *~ */*~ */*/*~ */*/*/*~ */*.o */*/*.o */*/*/*.o #xgboost
$(RM) -rf build_tests *.gcov tests/cpp/xgboost_test
if [ -d "R-package/src" ]; then \
cd R-package/src; \
$(RM) -rf rabit src include dmlc-core amalgamation *.so *.dll; \
cd $(ROOTDIR); \
fi
clean_all: clean
cd $(DMLC_CORE); "$(MAKE)" clean; cd $(ROOTDIR)
cd $(RABIT); "$(MAKE)" clean; cd $(ROOTDIR)
# create pip source dist (sdist) pack for PyPI
pippack: clean_all
cd python-package; python setup.py sdist; mv dist/*.tar.gz ..; cd ..
# Script to make a clean installable R package.
Rpack: clean_all
rm -rf xgboost xgboost*.tar.gz
cp -r R-package xgboost
rm -rf xgboost/src/*.o xgboost/src/*.so xgboost/src/*.dll
rm -rf xgboost/src/*/*.o
rm -rf xgboost/demo/*.model xgboost/demo/*.buffer xgboost/demo/*.txt
rm -rf xgboost/demo/runall.R
cp -r src xgboost/src/src
cp -r include xgboost/src/include
cp -r amalgamation xgboost/src/amalgamation
mkdir -p xgboost/src/rabit
cp -r rabit/include xgboost/src/rabit/include
cp -r rabit/src xgboost/src/rabit/src
rm -rf xgboost/src/rabit/src/*.o
mkdir -p xgboost/src/dmlc-core
cp -r dmlc-core/include xgboost/src/dmlc-core/include
cp -r dmlc-core/src xgboost/src/dmlc-core/src
cp ./LICENSE xgboost
# Modify PKGROOT in Makevars.in
cat R-package/src/Makevars.in|sed '2s/.*/PKGROOT=./' > xgboost/src/Makevars.in
# Configure Makevars.win (Windows-specific Makevars, likely using MinGW)
cp xgboost/src/Makevars.in xgboost/src/Makevars.win
cat xgboost/src/Makevars.in| sed '3s/.*/ENABLE_STD_THREAD=0/' > xgboost/src/Makevars.win
sed -i -e 's/@OPENMP_CXXFLAGS@/$$\(SHLIB_OPENMP_CXXFLAGS\)/g' xgboost/src/Makevars.win
sed -i -e 's/-pthread/$$\(SHLIB_PTHREAD_FLAGS\)/g' xgboost/src/Makevars.win
sed -i -e 's/@ENDIAN_FLAG@/-DDMLC_CMAKE_LITTLE_ENDIAN=1/g' xgboost/src/Makevars.win
sed -i -e 's/@BACKTRACE_LIB@//g' xgboost/src/Makevars.win
sed -i -e 's/@OPENMP_LIB@//g' xgboost/src/Makevars.win
rm -f xgboost/src/Makevars.win-e # OSX sed creates this extra file; remove it
bash R-package/remove_warning_suppression_pragma.sh
bash xgboost/remove_warning_suppression_pragma.sh
rm xgboost/remove_warning_suppression_pragma.sh
rm -rfv xgboost/tests/helper_scripts/
R ?= R
Rbuild: Rpack
$(R) CMD build xgboost
rm -rf xgboost
Rcheck: Rbuild
$(R) CMD check --as-cran xgboost*.tar.gz
-include build/*.d
-include build/*/*.d

NEWS.md

@@ -3,6 +3,919 @@ XGBoost Change Log
This file records the changes in xgboost library in reverse chronological order.
## 2.0.0 (2023 Aug 16)
We are excited to announce the release of XGBoost 2.0. This note will begin by covering some overall changes and then highlight specific updates to the package.
### Initial work on multi-target trees with vector-leaf outputs
We have been working on vector-leaf tree models for multi-target regression, multi-label classification, and multi-class classification in version 2.0. Previously, XGBoost would build a separate model for each target. However, with this new feature that's still being developed, XGBoost can build one tree for all targets. The feature has multiple benefits and trade-offs compared to the existing approach. It can help prevent overfitting, produce smaller models, and build trees that consider the correlation between targets. In addition, users can combine vector leaf and scalar leaf trees during a training session using a callback. Please note that the feature is still a work in progress, and many parts are not yet available. See #9043 for the current status. Related PRs: (#8538, #8697, #8902, #8884, #8895, #8898, #8612, #8652, #8698, #8908, #8928, #8968, #8616, #8922, #8890, #8872, #8889, #9509) Please note that only the `hist` (default) tree method on CPU can be used to build vector-leaf trees at the moment.
### New `device` parameter
A new `device` parameter replaces the existing `gpu_id`, `gpu_hist`, `gpu_predictor`, `cpu_predictor`, `gpu_coord_descent`, and the PySpark-specific parameter `use_gpu`. Going forward, users need only the `device` parameter to select which device to run on, along with the ordinal of the device. For more information, please see our document page (https://xgboost.readthedocs.io/en/stable/parameter.html#general-parameters). For example, with `device="cuda", tree_method="hist"`, XGBoost will run the `hist` tree method on GPU. (#9363, #8528, #8604, #9354, #9274, #9243, #8896, #9129, #9362, #9402, #9385, #9398, #9390, #9386, #9412, #9507, #9536). The old behavior of ``gpu_hist`` is preserved but deprecated. In addition, the `predictor` parameter is removed.
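A minimal sketch of the new parameter on synthetic data (assuming an XGBoost >= 2.0 build with CUDA support):
```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(1000, 10), np.random.rand(1000)
dtrain = xgb.DMatrix(X, label=y)

# A single parameter now selects both the device type and its ordinal.
params = {"device": "cuda:0", "tree_method": "hist"}  # "cpu" works the same way
booster = xgb.train(params, dtrain, num_boost_round=10)
```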
### `hist` is now the default tree method
Starting from 2.0, the `hist` tree method will be the default. In previous versions, XGBoost chose `approx` or `exact` depending on the input data and training environment. The new default can help XGBoost train models more efficiently and consistently. (#9320, #9353)
### GPU-based approx tree method
There's initial support for using the `approx` tree method on GPU. The performance of `approx` is not yet well optimized, but it is feature-complete except for the JVM packages. It can be enabled with the parameter combination `device="cuda", tree_method="approx"`. (#9414, #9399, #9478). Please note that the Scala-based Spark interface is not yet supported.
### Optimize and bound the size of the histogram on CPU to control the memory footprint
XGBoost has a new parameter `max_cached_hist_node` for users to limit the CPU cache size for histograms. It can help prevent XGBoost from caching histograms too aggressively. Without the cache, performance is likely to decrease. However, the size of the cache grows exponentially with the depth of the tree. The limit can be crucial when growing deep trees. In most cases, users need not configure this parameter as it does not affect the model's accuracy. (#9455, #9441, #9440, #9427, #9400).
Along with the cache limit, XGBoost also reduces the memory usage of the `hist` and `approx` tree method on distributed systems by cutting the size of the cache by half. (#9433)
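As an illustration, a minimal sketch with synthetic data; the cache value of 1024 is an arbitrary example, not a recommendation:
```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(4096, 32), np.random.rand(4096)
params = {
    "tree_method": "hist",
    "max_depth": 12,               # deep trees are where the histogram cache grows
    "max_cached_hist_node": 1024,  # cap the number of cached histogram nodes
}
booster = xgb.train(params, xgb.DMatrix(X, label=y), num_boost_round=10)
```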
### Improved external memory support
There is some exciting development around external memory support in XGBoost. It's still an experimental feature, but the performance has been significantly improved with the default `hist` tree method. We replaced the old file IO logic with memory-mapped files. In addition to performance, we have reduced CPU memory usage and added extensive documentation. Beginning with 2.0.0, we encourage users to try it with the `hist` tree method when the memory savings from `QuantileDMatrix` are not sufficient. (#9361, #9317, #9282, #9315, #8457)
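External memory is driven by the iterator interface. Below is a minimal sketch that feeds pre-split in-memory batches through `xgboost.DataIter`; a real workload would load each batch from disk instead, and the `cache` prefix is an arbitrary example path:
```python
import numpy as np
import xgboost as xgb

class NumpyBatches(xgb.DataIter):
    """Yield pre-split batches; real uses would read each batch from disk."""
    def __init__(self, batches):
        self._batches = batches
        self._it = 0
        super().__init__(cache_prefix="cache")  # where on-disk caches are written

    def next(self, input_data):
        if self._it == len(self._batches):
            return 0                             # signal the end of iteration
        X, y = self._batches[self._it]
        input_data(data=X, label=y)              # hand the current batch to XGBoost
        self._it += 1
        return 1

    def reset(self):
        self._it = 0                             # rewind for the next pass

batches = [(np.random.rand(100, 4), np.random.rand(100)) for _ in range(3)]
dtrain = xgb.DMatrix(NumpyBatches(batches))      # batches are cached to disk
booster = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=5)
```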
### Learning to rank
We created a brand-new implementation for the learning-to-rank task. With the latest version, XGBoost gained a set of new features for the ranking task, including:
- A new parameter `lambdarank_pair_method` for choosing the pair construction strategy.
- A new parameter `lambdarank_num_pair_per_sample` for controlling the number of samples for each group.
- An experimental implementation of unbiased learning-to-rank, which can be accessed using the `lambdarank_unbiased` parameter.
- Support for custom gain function with `NDCG` using the `ndcg_exp_gain` parameter.
- Deterministic GPU computation for all objectives and metrics.
- `NDCG` is now the default objective function.
- Improved performance of metrics using caches.
- Support scikit-learn utilities for `XGBRanker`.
- Extensive documentation on how learning-to-rank works with XGBoost.
For more information, please see the [tutorial](https://xgboost.readthedocs.io/en/latest/tutorials/learning_to_rank.html). Related PRs: (#8771, #8692, #8783, #8789, #8790, #8859, #8887, #8893, #8906, #8931, #9075, #9015, #9381, #9336, #8822, #9222, #8984, #8785, #8786, #8768)
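A minimal sketch of the new ranking parameters on synthetic data (two query groups; the parameter values are illustrative only):
```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y = np.random.randint(0, 4, size=100)  # graded relevance labels
qid = np.repeat([0, 1], 50)            # rows must be grouped by query id

ranker = xgb.XGBRanker(
    objective="rank:ndcg",             # NDCG is now the default objective
    lambdarank_pair_method="topk",     # pair construction strategy
    lambdarank_num_pair_per_sample=8,  # pairs constructed per sample
    n_estimators=10,
)
ranker.fit(X, y, qid=qid)
```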
### Automatically estimated intercept
In the previous version, `base_score` was a constant that could be set as a training parameter. In the new version, XGBoost can automatically estimate this parameter based on input labels for optimal accuracy. (#8539, #8498, #8272, #8793, #8607)
### Quantile regression
The XGBoost algorithm now supports quantile regression, which involves minimizing the quantile loss (also called "pinball loss"). Furthermore, XGBoost allows for training with multiple target quantiles simultaneously with one tree per quantile. (#8775, #8761, #8760, #8758, #8750)
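A minimal sketch with synthetic data, training three quantiles at once:
```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(512, 8), np.random.rand(512)
Xy = xgb.QuantileDMatrix(X, label=y)
params = {
    "objective": "reg:quantileerror",
    "quantile_alpha": [0.1, 0.5, 0.9],  # one tree per target quantile
}
booster = xgb.train(params, Xy, num_boost_round=10)
preds = booster.inplace_predict(X)      # expected shape: (512, 3)
```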
### L1 and Quantile regression now supports learning rate
Both objectives use adaptive trees due to the lack of proper Hessian values. In the new version, XGBoost can scale the leaf value with the learning rate accordingly. (#8866)
### Export cut value
Using the Python or the C package, users can export the quantile values (not to be confused with quantile regression) used for the `hist` tree method. (#9356)
### Column-based split and federated learning
We made progress on column-based split for federated learning. In 2.0, `approx`, `hist`, and `hist` with vector leaf can all work with column-based data split, along with support for vertical federated learning. Work on GPU support is still ongoing; stay tuned. (#8576, #8468, #8442, #8847, #8811, #8985, #8623, #8568, #8828, #8932, #9081, #9102, #9103, #9124, #9120, #9367, #9370, #9343, #9171, #9346, #9270, #9244, #8494, #8434, #8742, #8804, #8710, #8676, #9020, #9002, #9058, #9037, #9018, #9295, #9006, #9300, #8765, #9365, #9060)
### PySpark
After the initial introduction of the PySpark interface, it has gained some new features and optimizations in 2.0.
- GPU-based prediction. (#9292, #9542)
- Optimization for data initialization by avoiding the stack operation. (#9088)
- Support predict feature contribution. (#8633)
- Python typing support. (#9156, #9172, #9079, #8375)
- `use_gpu` is deprecated. The `device` parameter is preferred.
- Update eval_metric validation to support list of strings (#8826)
- Improved logs for training (#9449)
- Maintenance, including refactoring and document updates (#8324, #8465, #8605, #9202, #9460, #9302, #8385, #8630, #8525, #8496)
- Fix for GPU setup. (#9495)
### Other General New Features
Here's a list of new features that don't have their own section and yet are general to all language bindings.
- Use array interface for CSC matrix. This helps XGBoost to use a consistent number of threads and align the interface of the CSC matrix with other interfaces. In addition, memory usage is likely to decrease with CSC input thanks to on-the-fly type conversion. (#8672)
- CUDA compute 90 is now part of the default build. (#9397)
### Other General Optimization
These optimizations are general to all language bindings. For language-specific optimization, please visit the corresponding sections.
- Performance for input with `array_interface` on CPU (like `numpy`) is significantly improved. (#9090)
- Some optimization with CUDA for data initialization. (#9199, #9209, #9144)
- Use the latest thrust policy to prevent synchronizing GPU devices. (#9212)
- XGBoost now uses a per-thread CUDA stream, which prevents synchronization with other streams. (#9416, #9396, #9413)
### Notable breaking change
Other than the aforementioned change with the `device` parameter, here's a list of breaking changes affecting all packages.
- Users must specify the format for text input (#9077). However, we suggest using third-party data structures such as `numpy.ndarray` instead of relying on text inputs. See https://github.com/dmlc/xgboost/issues/9472 for more info.
### Notable bug fixes
Some noteworthy bug fixes that are not related to specific language bindings are listed in this section.
- Some language environments use a different thread to perform garbage collection, which breaks the thread-local cache used in XGBoost. XGBoost 2.0 implements a new thread-safe cache using a lightweight lock to replace the thread-local cache. (#8851)
- Fix model IO by clearing the prediction cache. (#8904)
- `inf` is checked during data construction. (#8911)
- Preserve order of saved updaters configuration. Usually, this is not an issue unless the `updater` parameter is used instead of the `tree_method` parameter (#9355)
- Fix GPU memory allocation issue with categorical splits. (#9529)
- Handle escape sequence like `\t\n` in feature names for JSON model dump. (#9474)
- Normalize file path for model IO and text input. This handles short paths on Windows and paths that contain `~` on Unix (#9463). In addition, all path inputs are required to be encoded in UTF-8 (#9448, #9443)
- Fix integer overflow on H100. (#9380)
- Fix weighted sketching on GPU with categorical features. (#9341)
- Fix metric serialization. The bug might cause some of the metrics to be dropped during evaluation. (#9405)
- Fix compilation errors on MSVC x86 targets (#8823)
- Pick up the dmlc-core fix for the CSV parser. (#8897)
### Documentation
Aside from documents for new features, we have many smaller updates to improve user experience, from troubleshooting guides to typo fixes.
- Explain CPU/GPU interop. (#8450)
- Guide to troubleshoot NCCL errors. (#8943, #9206)
- Add a note for rabit port selection. (#8879)
- How to build the docs using conda (#9276)
- Explain how to obtain reproducible results on distributed systems. (#8903)
- Fixes and small updates to documents and demonstration scripts. (#8626, #8436, #8995, #8907, #8923, #8926, #9358, #9232, #9201, #9469, #9462, #9458, #8543, #8597, #8401, #8784, #9213, #9098, #9008, #9223, #9333, #9434, #9435, #9415, #8773, #8752, #9291, #9549)
### Python package
* New Features and Improvements
- Support primitive types of pyarrow-backed pandas dataframe. (#8653)
- Warning messages emitted by XGBoost are now emitted using Python warnings. (#9387)
- Users can now format the value printed near the bars on the `plot_importance` plot (#8540)
- XGBoost has improved half-type support (float16) with pandas, cupy, and cuDF. With GPU input, the handling is through CUDA `__half` type, and no data copy is made. (#8487, #9207, #8481)
- Support `Series` and Python primitive types in `inplace_predict` and `QuantileDMatrix` (#8547, #8542)
- Support all pandas' nullable integer types. (#8480)
- Custom metric with the scikit-learn interface now supports `sample_weight`; see the sketch at the end of this section. (#8706)
- Enable installation of the Python package with system lib in a virtual environment (#9349)
- Raise if expected workers are not alive in `xgboost.dask.train` (#9421)
* Optimization
- Cache transformed data in `QuantileDMatrix` for efficiency. (#8666, #9445)
- Take datatable as row-major input. (#8472)
- Remove unnecessary conversions between data structures (#8546)
* Adopt modern Python packaging conventions (PEP 517, PEP 518, PEP 621)
- XGBoost adopted the modern Python packaging conventions. The old setup script `setup.py` is now replaced with the new configuration file `pyproject.toml`. Along with this, XGBoost now supports Python 3.11. (#9021, #9112, #9114, #9115) Consult the latest documentation for the updated instructions to build and install XGBoost.
* Fixes
- `DataIter` now accepts only keyword arguments. (#9431)
- Fix empty DMatrix with categorical features. (#8739)
- Convert ``DaskXGBClassifier.classes_`` to an array (#8452)
- Define `best_iteration` only if early stopping is used to be consistent with documented behavior. (#9403)
- Make feature validation immutable. (#9388)
* Breaking changes
- Discussed in the new `device` parameter section, the `predictor` parameter is now removed. (#9129)
- Remove support for single-string feature info. Feature type and names should be a sequence of strings (#9401)
- Remove parameters in the `save_model` call for the scikit-learn interface. (#8963)
- Remove the `ntree_limit` in the python package. This has been deprecated in previous versions. (#8345)
* Maintenance including formatting and refactoring along with type hints.
- More consistent use of `black` and `isort` for code formatting (#8420, #8748, #8867)
- Improved type support. Most of the type changes happen in the PySpark module; here, we list the remaining changes. (#8444, #8617, #9197, #9005)
- Set `enable_categorical` to True in predict. (#8592)
- Some refactoring and updates for tests (#8395, #8372, #8557, #8379, #8702, #9459, #9316, #8446, #8695, #8409, #8993, #9480)
* Documentation
- Add introduction and notes for the sklearn interface. (#8948)
- Demo for using dask for hyper-parameter optimization. (#8891)
- Document all supported Python input types. (#8643)
- Other documentation updates (#8944, #9304)
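As referenced above, here is a minimal sketch of a weighted custom metric with the scikit-learn interface, assuming that weights supplied via `sample_weight_eval_set` are forwarded to the metric's `sample_weight` argument:
```python
import numpy as np
from sklearn.metrics import mean_absolute_error
import xgboost as xgb

def weighted_mae(y_true, y_pred, sample_weight=None):
    # A scikit-learn style metric that accepts per-sample weights.
    return mean_absolute_error(y_true, y_pred, sample_weight=sample_weight)

X, y = np.random.rand(256, 4), np.random.rand(256)
w = np.random.rand(256)
reg = xgb.XGBRegressor(eval_metric=weighted_mae, n_estimators=8)
reg.fit(X, y, sample_weight=w, eval_set=[(X, y)], sample_weight_eval_set=[w])
```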
### R package
- Use the new data consumption interface for CSR and CSC. This provides better control for the number of threads and improves performance. (#8455, #8673)
- Accept multiple evaluation metrics during training. (#8657)
- Fix integer inputs with `NA`. (#9522)
- Some refactoring for the R package (#8545, #8430, #8614, #8624, #8613, #9457, #8689, #8563, #9461, #8647, #8564, #8565, #8736, #8610, #8609, #8599, #8704, #9456, #9450, #9476, #9477, #9481). Special thanks to @jameslamb.
- Document updates (#8886, #9323, #9437, #8998)
### JVM packages
Following are changes specific to various JVM-based packages.
- Stop using Rabit in prediction (#9054)
- Set feature_names and feature_types in jvm-packages. This is to prepare support for categorical features (#9364)
- Scala 2.13 support. (#9099)
- Change training stage from `ResultStage` to `ShuffleMapStage` (#9423)
- Automatically set the max/min direction for the best score during early stopping. (#9404)
* Revised support for `flink` (#9046)
* Breaking changes
- Scala-based tracker is removed. (#9078, #9045)
- Change `DeviceQuantileDmatrix` into `QuantileDMatrix` (#8461)
* Maintenance (#9253, #9166, #9395, #9389, #9224, #9233, #9351, #9479)
* CI bot PRs
We employed GitHub's Dependabot to help us keep the dependencies up-to-date for JVM packages. With its help, we have cleared up all the dependencies that were lagging behind (#8501, #8507).
Here's a list of dependency update PRs, including those made by the bot (#8456, #8560, #8571, #8561, #8562, #8600, #8594, #8524, #8509, #8548, #8549, #8533, #8521, #8534, #8532, #8516, #8503, #8531, #8530, #8518, #8512, #8515, #8517, #8506, #8504, #8502, #8629, #8815, #8813, #8814, #8877, #8876, #8875, #8874, #8873, #9049, #9070, #9073, #9039, #9083, #8917, #8952, #8980, #8973, #8962, #9252, #9208, #9131, #9136, #9219, #9160, #9158, #9163, #9184, #9192, #9265, #9268, #8882, #8837, #8662, #8661, #8390, #9056, #8508, #8925, #8920, #9149, #9230, #9097, #8648, #9203, #8593).
### Maintenance
Maintenance work includes refactoring and fixing small issues that don't affect end users. (#9256, #8627, #8756, #8735, #8966, #8864, #8747, #8892, #9057, #8921, #8949, #8941, #8942, #9108, #9125, #9155, #9153, #9176, #9447, #9444, #9436, #9438, #9430, #9200, #9210, #9055, #9014, #9004, #8999, #9154, #9148, #9283, #9246, #8888, #8900, #8871, #8861, #8858, #8791, #8807, #8751, #8703, #8696, #8693, #8677, #8686, #8665, #8660, #8386, #8371, #8410, #8578, #8574, #8483, #8443, #8454, #8733)
### CI
- Build pip wheel with RMM support (#9383)
- Other CI updates including updating dependencies and work on the CI infrastructure. (#9464, #9428, #8767, #9394, #9278, #9214, #9234, #9205, #9034, #9104, #8878, #9294, #8625, #8806, #8741, #8707, #8381, #8382, #8388, #8402, #8397, #8445, #8602, #8628, #8583, #8460, #9544)
## 1.7.6 (2023 Jun 16)
This is a patch release for bug fixes. The CRAN package for the R binding is kept at 1.7.5.
### Bug Fixes
* Fix distributed training with mixed dense and sparse partitions. (#9272)
* Fix monotone constraints on CPU with large trees. (#9122)
* [spark] Make the spark model have the same UID as its estimator (#9022)
* Optimize prediction with `QuantileDMatrix`. (#9096)
### Document
* Improve doxygen (#8959)
* Update the cuDF pip index URL. (#9106)
### Maintenance
* Fix tests with pandas 2.0. (#9014)
## 1.7.5 (2023 Mar 30)
This is a patch release for bug fixes.
* The C++ requirement is updated to C++17, along with which CUDA 11.8 is used as the default CTK. (#8860, #8855, #8853)
* Fix import for pyspark ranker. (#8692)
* Fix Windows binary wheel to be compatible with Poetry (#8991)
* Fix GPU hist with column sampling. (#8850)
* Make sure iterative DMatrix is properly initialized. (#8997)
* [R] Update link in document. (#8998)
## 1.7.4 (2023 Feb 16)
This is a patch release for bug fixes.
* [R] Fix OpenMP detection on macOS. (#8684)
* [Python] Make sure input numpy array is aligned. (#8690)
* Fix feature interaction with column sampling in gpu_hist evaluator. (#8754)
* Fix GPU L1 error. (#8749)
* [PySpark] Fix feature types param (#8772)
* Fix ranking with quantile dmatrix and group weight. (#8762)
## 1.7.3 (2023 Jan 6)
This is a patch release for bug fixes.
* [Breaking] XGBoost Sklearn estimator method `get_params` no longer returns internally configured values. (#8634)
* Fix linalg iterator, which may crash the L1 error. (#8603)
* Fix loading pickled GPU model with a CPU-only XGBoost build. (#8632)
* Fix inference with unseen categories with categorical features. (#8591, #8602)
* CI fixes. (#8620, #8631, #8579)
## v1.7.2 (2022 Dec 8)
This is a patch release for bug fixes.
* Work with newer thrust and libcudacxx (#8432)
* Support null value in CUDA array interface namespace. (#8486)
* Use `getsockname` instead of `SO_DOMAIN` on AIX. (#8437)
* [pyspark] Make QDM optional based on a cuDF check (#8471)
* [pyspark] sort qid for SparkRanker. (#8497)
* [dask] Properly await async method client.wait_for_workers. (#8558)
* [R] Fix CRAN test notes. (#8428)
* [doc] Fix outdated document [skip ci]. (#8527)
* [CI] Fix github action mismatched glibcxx. (#8551)
## v1.7.1 (2022 Nov 3)
This is a patch release to incorporate the following hotfix:
* Add back xgboost.rabit for backwards compatibility (#8411)
## v1.7.0 (2022 Oct 20)
We are excited to announce the feature-packed XGBoost 1.7 release. The release note will walk through some of the major new features first, then summarize other improvements and language-binding-specific changes.
### PySpark
XGBoost 1.7 features initial support for PySpark integration. The new interface is adapted from the existing PySpark XGBoost interface developed by Databricks with additional features like `QuantileDMatrix` and support for the rapidsai (GPU pipeline) plugin. The new Spark XGBoost Python estimators not only benefit from PySpark ML facilities for powerful distributed computing but also enjoy the rest of the Python ecosystem. Users can define a custom objective, callbacks, and metrics in Python and use them with this interface on distributed clusters. The support is labeled as experimental with more features to come in future releases. For a brief introduction, please visit the tutorial on XGBoost's [document page](https://xgboost.readthedocs.io/en/latest/tutorials/spark_estimator.html). (#8355, #8344, #8335, #8284, #8271, #8283, #8250, #8231, #8219, #8245, #8217, #8200, #8173, #8172, #8145, #8117, #8131, #8088, #8082, #8085, #8066, #8068, #8067, #8020, #8385)
Due to its initial support status, the new interface has some limitations; categorical features and multi-output models are not yet supported.
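A minimal sketch of the new estimators on a toy dataframe (assuming a Spark session is available; `num_workers` controls how many tasks share the training):
```python
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession
from xgboost.spark import SparkXGBClassifier

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(Vectors.dense(1.0, 2.0), 0.0), (Vectors.dense(2.0, 1.0), 1.0)] * 50,
    ["features", "label"],  # default column names expected by the estimator
)
clf = SparkXGBClassifier(num_workers=2)
model = clf.fit(df)
preds = model.transform(df)
```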
### Development of categorical data support
More progress on the experimental support for categorical features. In 1.7, XGBoost can handle missing values in categorical features and features a new parameter `max_cat_threshold`, which limits the number of categories that can be used in the split evaluation. The parameter is enabled when the partitioning algorithm is used and helps prevent overfitting. Also, the sklearn interface can now accept the `feature_types` parameter to use data types other than dataframe for categorical features. (#8280, #7821, #8285, #8080, #7948, #7858, #7853, #8212, #7957, #7937, #7934)
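A minimal sketch with synthetic data, marking plain numpy columns as categorical via `feature_types` (the threshold value is illustrative only):
```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
# Categorical codes are non-negative integers stored as floats.
X = rng.integers(0, 8, size=(256, 2)).astype(np.float64)
y = rng.integers(0, 2, size=256)

clf = xgb.XGBClassifier(
    tree_method="hist",
    enable_categorical=True,
    feature_types=["c", "c"],  # categorical columns without a dataframe
    max_cat_threshold=16,      # cap categories considered per split
    n_estimators=8,
)
clf.fit(X, y)
```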
### Experimental support for federated learning and new communication collective
An exciting addition to XGBoost is the experimental federated learning support. The federated learning is implemented with a gRPC federated server that aggregates allreduce calls, and federated clients that train on local data and use existing tree methods (approx, hist, gpu_hist). Currently, this only supports horizontal federated learning (samples are split across participants, and each participant has all the features and labels). Future plans include vertical federated learning (features split across participants), and stronger privacy guarantees with homomorphic encryption and differential privacy. See [Demo with NVFlare integration](demo/nvflare/README.md) for example usage with nvflare.
As part of the work, XGBoost 1.7 has replaced the old rabit module with the new collective module as the network communication interface, with added support for runtime backend selection. In previous versions, the backend was defined at compile time and could not be changed once built. In this new release, users can choose between `rabit` and `federated`. (#8029, #8351, #8350, #8342, #8340, #8325, #8279, #8181, #8027, #7958, #7831, #7879, #8257, #8316, #8242, #8057, #8203, #8038, #7965, #7930, #7911)
The feature is available in the public PyPI binary package for testing.
### Quantile DMatrix
Before 1.7, XGBoost had an internal data structure called `DeviceQuantileDMatrix` (and its distributed version). We have now extended its support to CPU and renamed it to `QuantileDMatrix`. This data structure is used for optimizing memory usage for the `hist` and `gpu_hist` tree methods. The new feature helps reduce CPU memory usage significantly, especially for dense data. The new `QuantileDMatrix` can be initialized from both CPU and GPU data, and regardless of where the data comes from, the constructed instance can be used by both the CPU and GPU algorithms, including training and prediction (with some conversion overhead if the devices of the data and the training algorithm don't match). Also, a new parameter `ref` is added to `QuantileDMatrix`, which can be used to construct validation/test datasets. Lastly, it's set as the default in the scikit-learn interface when a supported tree method is specified by users. (#7889, #7923, #8136, #8215, #8284, #8268, #8220, #8346, #8327, #8130, #8116, #8103, #8094, #8086, #7898, #8060, #8019, #8045, #7901, #7912, #7922)
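A minimal sketch with synthetic data, using `ref` to build a validation set from the training quantile cuts:
```python
import numpy as np
import xgboost as xgb

X_train, y_train = np.random.rand(1000, 16), np.random.rand(1000)
X_valid, y_valid = np.random.rand(200, 16), np.random.rand(200)

dtrain = xgb.QuantileDMatrix(X_train, label=y_train)
# `ref` reuses the training cuts so validation data is binned consistently.
dvalid = xgb.QuantileDMatrix(X_valid, label=y_valid, ref=dtrain)
booster = xgb.train({"tree_method": "hist"}, dtrain,
                    num_boost_round=10, evals=[(dvalid, "valid")])
```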
### Mean absolute error
The mean absolute error is a new member of the collection of objectives in XGBoost. It's noteworthy since MAE has a zero Hessian, which is unusual for XGBoost as it relies on Newton optimization. Without valid Hessian values, the convergence speed can be slow. As part of the support for MAE, we added line searches into the XGBoost training algorithm to overcome the difficulty of training without valid Hessian values. In the future, we will extend the line search to other objectives where it's appropriate for faster convergence. (#8343, #8107, #7812, #8380)
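A minimal sketch with synthetic data:
```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(256, 8), np.random.rand(256)
# MAE has a zero Hessian; XGBoost falls back on line search internally.
booster = xgb.train({"objective": "reg:absoluteerror"},
                    xgb.DMatrix(X, label=y), num_boost_round=10)
```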
### XGBoost on Browser
With the help of the [pyodide](https://github.com/pyodide/pyodide) project, you can now run XGBoost on browsers. (#7954, #8369)
### Experimental IPv6 Support for Dask
With the growing adoption of the new internet protocol, XGBoost joined the club. In the latest release, the Dask interface can be used on IPv6 clusters; see XGBoost's Dask tutorial for details. (#8225, #8234)
### Optimizations
We have new optimizations for both the `hist` and `gpu_hist` tree methods to make XGBoost's training even more efficient.
* Hist
Hist now supports optional by-column histogram build, which is automatically configured based on various conditions of input data. This helps the XGBoost CPU hist algorithm to scale better with different shapes of training datasets. (#8233, #8259) Also, the histogram build kernel can now better utilize CPU registers. (#8218)
* GPU Hist
GPU hist performance is significantly improved for wide datasets. GPU hist now supports batched node build, which reduces kernel latency and increases throughput. The improvement is particularly significant when growing deep trees with the default ``depthwise`` policy. (#7919, #8073, #8051, #8118, #7867, #7964, #8026)
### Breaking Changes
Breaking changes made in the 1.7 release are summarized below.
- The `grow_local_histmaker` updater is removed. This updater is rarely used in practice and has no tests. We decided to remove it and have XGBoost focus on other, more efficient algorithms. (#7992, #8091)
- Single precision histogram is removed due to its lack of accuracy caused by significant floating point error. In some cases the error can be difficult to detect due to log-scale operations, which makes the parameter dangerous to use. (#7892, #7828)
- Deprecated CUDA architectures are no longer supported in the release binaries. (#7774)
- As part of the federated learning development, the `rabit` module is replaced with the new `collective` module. It's a drop-in replacement with added runtime backend selection, see the federated learning section for more details (#8257)
### General new features and improvements
Before diving into package-specific changes, some general new features other than those listed at the beginning are summarized here.
* Users of `DMatrix` and `QuantileDMatrix` can get the data from XGBoost. In previous versions, only getters for meta info like labels were available. The new method is available in Python (`DMatrix.get_data`) and C; see the sketch after this list. (#8269, #8323)
* In previous versions, the GPU histogram tree method could generate phantom gradients for missing values due to floating point error. We fixed such an error in this release and XGBoost is much better equipped to handle floating point errors when training on GPU. (#8274, #8246)
* Parameter validation is no longer experimental. (#8206)
* C pointer parameters and JSON parameters are rigorously checked. (#8254)
* Improved handling of JSON model input. (#7953, #7918)
* Support IBM i OS (#7920, #8178)
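As referenced above, a minimal sketch of the new getter in Python:
```python
import numpy as np
import xgboost as xgb

X = np.random.rand(32, 4)
dtrain = xgb.DMatrix(X)
csr = dtrain.get_data()  # feature matrix back as a scipy.sparse CSR matrix
print(csr.shape)         # (32, 4)
```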
### Fixes
Some noteworthy bug fixes that are not related to specific language binding are listed in this section.
* Rename misspelled config parameter for pseudo-Huber (#7904)
* Fix feature weights with nested column sampling. (#8100)
* Fix loading DMatrix binary in distributed env. (#8149)
* Force auc.cc to be statically linked for unusual compiler platforms. (#8039)
* New logic for detecting libomp on macos (#8384).
### Python Package
* Python 3.8 is now the minimum required Python version. (#8071)
* More progress on type hint support. Except for the new PySpark interface, the XGBoost module is fully typed. (#7742, #7945, #8302, #7914, #8052)
* XGBoost now validates the feature names in `inplace_predict`, which also affects the predict function in scikit-learn estimators as it uses `inplace_predict` internally. (#8359)
* Users can now get the data from `DMatrix` using `DMatrix::get_data` or `QuantileDMatrix::get_data`.
* Show `libxgboost.so` path in build info. (#7893)
* Raise import error when using the sklearn module while scikit-learn is missing. (#8049)
* Use `config_context` in the sklearn interface. (#8141)
* Validate features for inplace prediction. (#8359)
* Pandas dataframe handling is refactored to reduce data fragmentation. (#7843)
* Support more pandas nullable types (#8262)
* Remove pyarrow workaround. (#7884)
* Binary wheel size
We aim to enable as many features as possible in XGBoost's default binary distribution on PyPI (package installed with pip), but there's an upper limit on the size of the binary wheel. In 1.7, XGBoost reduces the size of the wheel by pruning unused CUDA architectures. (#8179, #8152, #8150)
* Fixes
Some noteworthy fixes are listed here:
- Fix the Dask interface with the latest cupy. (#8210)
- Check cuDF lazily to avoid potential errors with cuda-python. (#8084)
* Fix potential error in DMatrix constructor on 32-bit platform. (#8369)
* Maintenance work
- Linter script is moved from dmlc-core to XGBoost with added support for formatting, mypy, and parallel run, along with some fixes (#7967, #8101, #8216)
- We now require the use of `isort` and `black` for selected files. (#8137, #8096)
- Code cleanups. (#7827)
- Deprecate `use_label_encoder` in XGBClassifier. The label encoder has already been deprecated and removed in the previous version. These changes only affect the indicator parameter (#7822)
- Remove the use of distutils. (#7770)
- Refactor and fixes for tests (#8077, #8064, #8078, #8076, #8013, #8010, #8244, #7833)
* Documents
- [dask] Fix potential error in demo. (#8079)
- Improved documentation for the ranker. (#8356, #8347)
- Indicate lack of py-xgboost-gpu on Windows (#8127)
- Clarification for feature importance. (#8151)
- Simplify Python getting started example (#8153)
### R Package
We summarize improvements for the R package briefly here:
* Feature info including names and types are now passed to DMatrix in preparation for categorical feature support. (#804)
* XGBoost 1.7 can now gracefully load old R models from RDS for better compatibility with third-party tuning libraries (#7864)
* The R package now can be built with parallel compilation, along with fixes for warnings in CRAN tests. (#8330)
* Emit error early if DiagrammeR is missing (#8037)
* Fix R package Windows build. (#8065)
### JVM Packages
The consistency between JVM packages and other language bindings is greatly improved in 1.7, improvements range from model serialization format to the default value of hyper-parameters.
* Java package now supports feature names and feature types for DMatrix in preparation for categorical feature support. (#7966)
* Models trained by the JVM packages can now be safely used with other language bindings. (#7896, #7907)
* Users can specify the model format when saving models with a stream. (#7940, #7955)
* The default value for training parameters is now sourced from XGBoost directly, which helps JVM packages be consistent with other packages. (#7938)
* Set the correct objective if the user doesn't explicitly set it (#7781)
* Auto-detection of MUSL is replaced by system properties (#7921)
* Improved error message for launching tracker. (#7952, #7968)
* Fix a race condition in parameter configuration. (#8025)
* [Breaking] `timeoutRequestWorkers` is now removed. With the support for barrier mode, this parameter is no longer needed. (#7839)
* Dependencies updates. (#7791, #8157, #7801, #8240)
### Documents
- Document for the C interface is greatly improved and is now displayed at the [sphinx document page](https://xgboost.readthedocs.io/en/latest/c.html). Thanks to the breathe project, you can view the C API just like the Python API. (#8300)
- We now avoid having XGBoost internal text parser in demos and recommend users use dedicated libraries for loading data whenever it's feasible. (#7753)
- Python survival training demos are now displayed at [sphinx gallery](https://xgboost.readthedocs.io/en/latest/python/survival-examples/index.html). (#8328)
- Some typos, links, format, and grammar fixes. (#7800, #7832, #7861, #8099, #8163, #8166, #8229, #8028, #8214, #7777, #7905, #8270, #8309, d70e59fef, #7806)
- Updated winning solution under readme.md (#7862)
- New security policy. (#8360)
- GPU document is overhauled as we consider CUDA support to be feature-complete. (#8378)
### Maintenance
* Code refactoring and cleanups. (#7850, #7826, #7910, #8332, #8204)
* Reduce compiler warnings. (#7768, #7916, #8046, #8059, #7974, #8031, #8022)
* Compiler workarounds. (#8211, #8314, #8226, #8093)
* Dependencies update. (#8001, #7876, #7973, #8298, #7816)
* Remove warnings emitted in previous versions. (#7815)
* Small fixes occurred during development. (#8008)
### CI and Tests
* We overhauled the CI infrastructure to reduce the CI cost and lift the maintenance burden. Jenkins is replaced with Buildkite for better automation, with which finer control of test runs is implemented to reduce the overall cost. Also, we refactored some of the existing tests to reduce their runtime, reduced the size of docker images, and removed multi-GPU C++ tests. Lastly, `pytest-timeout` is added as an optional dependency for running Python tests to keep the test time in check. (#7772, #8291, #8286, #8276, #8306, #8287, #8243, #8313, #8235, #8288, #8303, #8142, #8092, #8333, #8312, #8348)
* New documents for how to reproduce the CI environment (#7971, #8297)
* Improved automation for JVM release. (#7882)
* GitHub Action security-related updates. (#8263, #8267, #8360)
* Other fixes and maintenance work. (#8154, #7848, #8069, #7943)
* Small updates and fixes to GitHub action pipelines. (#8364, #8321, #8241, #7950, #8011)
## v1.6.1 (2022 May 9)
This is a patch release for bug fixes and Spark barrier mode support. The R package is unchanged.
### Experimental support for categorical data
- Fix segfault when the number of samples is smaller than the number of categories. (https://github.com/dmlc/xgboost/pull/7853)
- Enable partition-based split for all model types. (https://github.com/dmlc/xgboost/pull/7857)
### JVM packages
We replaced the old parallelism tracker with spark barrier mode to improve the robustness of the JVM package and fix the GPU training pipeline.
- Fix GPU training pipeline quantile synchronization. (#7823, #7834)
- Use barrier mode in the spark package. (https://github.com/dmlc/xgboost/pull/7836, https://github.com/dmlc/xgboost/pull/7840, https://github.com/dmlc/xgboost/pull/7845, https://github.com/dmlc/xgboost/pull/7846)
- Fix shared object loading on some platforms. (https://github.com/dmlc/xgboost/pull/7844)
## v1.6.0 (2022 Apr 16)
After a long period of development, XGBoost v1.6.0 is packed with many new features and
improvements. We summarize them in the following sections starting with an introduction to
some major new features, then moving on to language binding specific changes including new
features and notable bug fixes for that binding.
### Development of categorical data support
This version of XGBoost features new improvements and full coverage of experimental
categorical data support in Python and C package with tree model. Both `hist`, `approx`
and `gpu_hist` now support training with categorical data. Also, partition-based
categorical split is introduced in this release. This split type is first available in
LightGBM in the context of gradient boosting. The previous XGBoost release supported one-hot splits, where the splitting criterion is of the form `x \in {c}`, i.e. the categorical feature `x` is tested against a single candidate. The new release allows for more expressive conditions: `x \in S`, where the categorical feature `x` is tested against multiple candidates. Moreover, it is now possible to use any tree algorithm (`hist`, `approx`, `gpu_hist`) when creating categorical splits. For more
information, please see our tutorial on [categorical
data](https://xgboost.readthedocs.io/en/latest/tutorials/categorical.html), along with
examples linked on that page. (#7380, #7708, #7695, #7330, #7307, #7322, #7705,
#7652, #7592, #7666, #7576, #7569, #7529, #7575, #7393, #7465, #7385, #7371, #7745, #7810)
In the future, we will continue to improve categorical data support with new features and
optimizations. Also, we are looking forward to bringing the feature beyond the Python binding;
contributions and feedback are welcome! Lastly, as a result of its experimental status, the
behavior might be subject to change, especially the default values of related
hyper-parameters.
### Experimental support for multi-output model
XGBoost 1.6 features initial support for the multi-output model, which includes
multi-output regression and multi-label classification. Along with this, the XGBoost
classifier has proper support for base margin without the need for the user to flatten the
input. In this initial support, XGBoost builds one model for each target similar to the
sklearn meta estimator, for more details, please see our [quick
introduction](https://xgboost.readthedocs.io/en/latest/tutorials/multioutput.html).
(#7365, #7736, #7607, #7574, #7521, #7514, #7456, #7453, #7455, #7434, #7429, #7405, #7381)
### External memory support
External memory support for both the approx and hist tree methods is considered feature-complete
in XGBoost 1.6. Building upon the iterator-based interface introduced in the
previous version, both `hist` and `approx` now iterate over each batch of data during
training and prediction. In previous versions, `hist` concatenated all the batches into
an internal representation, which is removed in this version. As a result, users can
expect higher scalability in terms of data size but might experience lower performance due
to disk IO. (#7531, #7320, #7638, #7372)
### Rewritten approx
The `approx` tree method is rewritten based on the existing `hist` tree method. The
rewrite closes the feature gap between `approx` and `hist` and improves the performance.
Now the behavior of `approx` should be more aligned with `hist` and `gpu_hist`. Here is a
list of user-visible changes:
- Supports both `max_leaves` and `max_depth`.
- Supports `grow_policy`.
- Supports monotonic constraint.
- Supports feature weights.
- Use `max_bin` to replace `sketch_eps`.
- Supports categorical data.
- Faster performance for many of the datasets.
- Improved performance and robustness for distributed training.
- Supports prediction cache.
- Significantly better performance for external memory when `depthwise` policy is used.
### New serialization format
Based on the existing JSON serialization format, we introduce UBJSON support as a more
efficient alternative. Both formats will be available in the future and we plan to
gradually [phase out](https://github.com/dmlc/xgboost/issues/7547) support for the old
binary model format. Users can opt to use the different formats in the serialization
function by providing the file extension `json` or `ubj`. Also, the `save_raw` function in
all supported languages bindings gains a new parameter for exporting the model in different
formats, available options are `json`, `ubj`, and `deprecated`, see document for the
language binding you are using for details. Lastly, the default internal serialization
format is set to UBJSON, which affects Python pickle and R RDS. (#7572, #7570, #7358,
#7571, #7556, #7549, #7416)
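A minimal sketch of selecting the format in Python; the file names are arbitrary examples:
```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(64, 4), np.random.rand(64)
booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=2)

booster.save_model("model.ubj")           # UBJSON, chosen by file extension
booster.save_model("model.json")          # plain JSON
raw = booster.save_raw(raw_format="ubj")  # in-memory; also "json", "deprecated"
```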
### General new features and improvements
Aside from the major new features mentioned above, some others are summarized here:
* Users can now access the build information of XGBoost binary in Python and C
interface. (#7399, #7553)
* Auto-configuration of `seed_per_iteration` is removed; distributed training should now
generate results closer to single-node training when sampling is used. (#7009)
* A new parameter `huber_slope` is introduced for the `Pseudo-Huber` objective; see the sketch after this list.
* When building from source, XGBoost can automatically choose CUB from the system path. (#7579)
* XGBoost now honors the CPU count from CFS, which is usually set in Docker
environments. (#7654, #7704)
* The metric `aucpr` is rewritten for better performance and GPU support. (#7297, #7368)
* Metric calculation is now performed in double precision. (#7364)
* XGBoost no longer mutates the global OpenMP thread limit. (#7537, #7519, #7608, #7590,
#7589, #7588, #7687)
* The default behavior of `max_leaves` and `max_depth` is now unified (#7302, #7551).
* CUDA fat binary is now compressed. (#7601)
* Deterministic results for evaluation metrics and the linear model. In previous versions of
XGBoost, evaluation results might differ slightly between runs due to parallel reduction
of floating-point values; this is now addressed. (#7362, #7303, #7316, #7349)
* XGBoost now uses double precision for the GPU hist node sum, which improves the accuracy of
`gpu_hist`. (#7507)
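For example, the build information can be queried from Python (a small sketch; the exact fields depend on how the binary was built):

```python
import xgboost

# Reports compiler, CUDA support, and similar build-time details.
print(xgboost.build_info())
```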
### Performance improvements
Most of the performance improvements are integrated into other refactors during feature
development. The `approx` tree method should see significant performance gains on many datasets,
as mentioned in the previous section, while the `hist` tree method also enjoys improved
performance with the removal of the internal `pruner` along with some other
refactoring. Lastly, `gpu_hist` no longer synchronizes the device during training. (#7737)
### General bug fixes
This section lists bug fixes that are not specific to any language binding.
* `num_parallel_tree` is now a model parameter instead of a training hyper-parameter,
which fixes model IO with random forests. (#7751)
* Fixes in the CMake script for exporting configuration. (#7730)
* XGBoost can now handle unsorted sparse input. This includes text file formats like
LIBSVM and SciPy sparse matrices where column indices might not be sorted. (#7731)
* Fix the tree param feature type; this affects inputs where the number of columns is
greater than the maximum value of `int32`. (#7565)
* Fix external memory with `gpu_hist` and subsampling. (#7481)
* Check the number of trees in inplace predict; this avoids a potential segfault when an
incorrect value for `iteration_range` is provided. (#7409)
* Fix unstable results in Cox regression. (#7756)
### Changes in the Python package
Other than the changes in Dask, the XGBoost Python package gained some new features and
improvements along with small bug fixes.
* Python 3.7 is now the minimum required Python version. (#7682)
* Pre-built binary wheels for Apple Silicon. (#7621, #7612, #7747) Apple Silicon users can
now run `pip install xgboost` to install XGBoost.
* macOS users no longer need to install `libomp` from Homebrew, as the XGBoost wheel now
bundles the `libomp.dylib` library.
* There are new parameters for users to specify the custom metric with new behavior:
XGBoost can now output transformed prediction values when a custom objective is not
supplied (see the sketch after this list). See our explanation in the
[tutorial](https://xgboost.readthedocs.io/en/latest/tutorials/custom_metric_obj.html#reverse-link-function)
for details.
* For the sklearn interface, following the estimator guideline from scikit-learn, all
parameters in `fit` that are not related to input data are moved into the constructor
and can be set by `set_params`. (#6751, #7420, #7375, #7369)
* The Apache Arrow format is now supported, which can bring better performance to users'
pipelines. (#7512)
* Pandas nullable types are now supported. (#7760)
* A new function `get_group` is introduced for `DMatrix` to allow users to get the group
information in the custom objective function. (#7564)
* More training parameters are exposed in the sklearn interface instead of relying on
`**kwargs`. (#7629)
* A new attribute `feature_names_in_` is defined for all sklearn estimators like
`XGBRegressor` to follow the convention of sklearn. (#7526)
* More work on Python type hints. (#7432, #7348, #7338, #7513, #7707)
* Support the latest pandas Index type. (#7595)
* Fix the `Feature shape mismatch` error on the s390x platform. (#7715)
* Fix using feature names for constraints with multiple groups. (#7711)
* We clarified the behavior of the callback function when it contains mutable
states. (#7685)
* Lastly, there are some code cleanups and maintenance work. (#7585, #7426, #7634, #7665,
#7667, #7377, #7360, #7498, #7438, #7667, #7752, #7749, #7751)
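A hedged sketch of the new sklearn-style custom metric: with a built-in objective, the metric receives transformed (probability) predictions rather than raw margins. The use of `mean_absolute_error` here is purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import mean_absolute_error
from xgboost import XGBClassifier

X, y = make_classification(n_samples=256, random_state=0)
# The metric follows the sklearn signature func(y_true, y_pred); y_pred
# holds probabilities because the objective is a built-in one.
clf = XGBClassifier(n_estimators=8, eval_metric=mean_absolute_error)
clf.fit(X, y, eval_set=[(X, y)])
```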
### Changes in the Dask interface
* The Dask module now supports a user-supplied host IP and port address for the scheduler node;
see the sketch after this list. Please see the
[introduction](https://xgboost.readthedocs.io/en/latest/tutorials/dask.html#troubleshooting) and
[API document](https://xgboost.readthedocs.io/en/latest/python/python_api.html#optional-dask-configuration)
for reference. (#7645, #7581)
* Internal `DMatrix` construction in Dask now honors the thread configuration. (#7337)
* A fix for `nthread` configuration using the Dask sklearn interface. (#7633)
* The Dask interface can now handle empty partitions. An empty partition is different
from an empty worker: the latter refers to the case where a worker has no partition of an
input dataset, while the former refers to partitions on a worker that have zero
size. (#7644, #7510)
* SciPy sparse matrices are supported as Dask array partitions. (#7457)
* Dask interface is no longer considered experimental. (#7509)
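As a sketch of the scheduler-address feature mentioned in the first bullet (the configuration key follows the linked API document; the address is a placeholder):

```python
import dask

# Workers will report back to this address instead of an automatically
# discovered one; replace the placeholder with your scheduler's address.
dask.config.set({"xgboost.scheduler_address": "192.0.0.100:12345"})
```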
### Changes in the R package
This section summarizes the new features, improvements, and bug fixes to the R package.
* `load.raw` can optionally construct a booster as its return value. (#7686)
* Fix parsing of decision stumps, which affects both transforming the text representation to a
data table and plotting. (#7689)
* Implement feature weights. (#7660)
* Some improvements for complying with the CRAN release policy. (#7672, #7661, #7763)
* Support CSR data for predictions. (#7615)
* Documentation updates. (#7263, #7606)
* New maintainer for the CRAN package. (#7691, #7649)
* Handle non-standard installation of the toolchain on macOS. (#7759)
### Changes in JVM-packages
Some new features are introduced to the JVM packages for a more integrated GPU pipeline and
better compatibility with musl-based Linux. Aside from these, we have a few notable bug
fixes.
* Users can specify the tracker IP address for training, which helps run XGBoost in
restricted network environments. (#7808)
* Add support for detecting musl-based Linux (#7624)
* Add `DeviceQuantileDMatrix` to Scala binding (#7459)
* Add RAPIDS plugin support; now more of the JVM pipeline can be accelerated by RAPIDS (#7491, #7779, #7793, #7806)
* The setters for CPU and GPU are more aligned (#7692, #7798)
* Control logging for early stopping (#7326)
* Do not repartition when nWorker = 1 (#7676)
* Fix the prediction issue for `multi:softmax` (#7694)
* Fix for serialization of custom objective and eval (#7274)
* Update documentation about Python tracker (#7396)
* Remove jackson from the dependencies, which fixes CVE-2020-36518. (#7791)
* Some refactoring to the training pipeline for better compatibility between CPU and
GPU. (#7440, #7401, #7789, #7784)
* Maintenance work. (#7550, #7335, #7641, #7523, #6792, #4676)
### Deprecation
Other than the changes in the Python package and serialization, we removed some features
deprecated in previous releases. Also, as mentioned in the previous section, we plan to
phase out the old binary format in future releases.
* Remove an old warning from 1.3 (#7279)
* Remove the label encoder deprecated in 1.3. (#7357)
* Remove the old callback API deprecated in 1.3. (#7280)
* Pre-built binaries will no longer support the deprecated CUDA architectures sm35 and
sm50. Users can continue to use these platforms with a source build. (#7767)
### Documentation
This section lists some of the general changes to XGBoost's documentation; for
language-binding-specific changes, please visit the related sections.
* The documentation is overhauled to use the new RTD theme, along with integration of Python
examples using Sphinx gallery. Also, we replaced most of the hard-coded URLs with Sphinx
references. (#7347, #7346, #7468, #7522, #7530)
* Small updates along with fixes for broken links, typos, etc. (#7684, #7324, #7334, #7655,
#7628, #7623, #7487, #7532, #7500, #7341, #7648, #7311)
* Update document for GPU. [skip ci] (#7403)
* Document the status of RTD hosting. (#7353)
* Update document for building from source. (#7664)
* Add note about CRAN release [skip ci] (#7395)
### Maintenance
This is a summary of maintenance work that is not specific to any language binding.
* Add CMake option to use /MD runtime (#7277)
* Add clang-format configuration. (#7383)
* Code cleanups (#7539, #7536, #7466, #7499, #7533, #7735, #7722, #7668, #7304, #7293,
#7321, #7356, #7345, #7387, #7577, #7548, #7469, #7680, #7433, #7398)
* Improved tests with better coverage and the latest dependencies (#7573, #7446, #7650, #7520,
#7373, #7723, #7611, #7771)
* Improved automation of the release process. (#7278, #7332, #7470)
* Compiler workarounds (#7673)
* Change shebang used in CLI demo. (#7389)
* Update affiliation (#7289)
### CI
Some fixes and updates to XGBoost's CI infrastructure. (#7739, #7701, #7382, #7662, #7646,
#7582, #7407, #7417, #7475, #7474, #7479, #7472, #7626)
## v1.5.0 (2021 Oct 11)
This release comes with many exciting new features and optimizations, along with some bug
fixes. We will describe the experimental categorical data support and the external memory
interface independently. Package-specific new features will be listed in respective
sections.
### Development on categorical data support
In version 1.3, XGBoost introduced an experimental feature for handling categorical data
natively, without one-hot encoding. XGBoost can fit categorical splits in decision
trees. (Currently, the generated splits are of the form `x \in {v}`, where the input is
compared to a single category value. A future version of XGBoost will generate splits that
compare the input against a list of multiple category values.)
Most of the other features, including prediction, SHAP value computation, feature
importance, and model plotting, were revised to natively handle categorical splits. Also,
all Python interfaces, including the native interface with and without quantized `DMatrix`,
the scikit-learn interface, and the Dask interface, now accept categorical data with
support for a wide range of data structures, including numpy/cupy arrays and
cuDF/pandas/modin dataframes. In practice, the following are required for enabling
categorical data support during training:
- Use the Python package.
- Use `gpu_hist` to train the model.
- Use the JSON model file format for saving the model.
Once the model is trained, it can be used with most of the features that are available on
the Python package. For a quick introduction, see
https://xgboost.readthedocs.io/en/latest/tutorials/categorical.html
Related PRs: (#7011, #7001, #7042, #7041, #7047, #7043, #7036, #7054, #7053, #7065, #7213, #7228, #7220, #7221, #7231, #7306)
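Below is a hedged sketch of this workflow (the tiny dataframe is illustrative, and a CUDA-capable device is required since only `gpu_hist` builds categorical splits in this release):

```python
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({
    "color": pd.Categorical(["red", "green", "red", "blue"]),
    "size": [1.0, 2.0, 0.5, 1.5],
})
y = [1, 0, 1, 0]
Xy = xgb.DMatrix(df, label=y, enable_categorical=True)
booster = xgb.train({"tree_method": "gpu_hist"}, Xy, num_boost_round=4)
booster.save_model("model.json")  # JSON keeps the categorical splits
```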
* Next steps
- Revise the CPU training algorithm to handle categorical data natively and generate categorical splits
- Extend the CPU and GPU algorithms to generate categorical splits of the form `x \in S`,
where the input is compared with multiple category values. (#7081)
### External memory
This release features a brand-new interface and implementation for external memory (also
known as out-of-core training). (#6901, #7064, #7088, #7089, #7087, #7092, #7070,
#7216) The new implementation leverages the data iterator interface, which is currently
used to create `DeviceQuantileDMatrix`. For a quick introduction, see
https://xgboost.readthedocs.io/en/latest/tutorials/external_memory.html#data-iterator
During the development of this new interface, `lz4` compression was removed. (#7076)
Please note that external memory support is still experimental and not ready for
production use yet. All future development will focus on this new interface, and users are
advised to migrate. (You are using the old interface if you are using a URL suffix to use
external memory.)
### New features in Python package
* Support the numpy array interface and all numeric types from numpy in `DMatrix`
construction and `inplace_predict` (#6998, #7003). XGBoost no longer makes a data
copy when the input is a numpy array view.
* The early stopping callback in Python has a new `min_delta` parameter to control the
stopping behavior (#7137); see the sketch after this list.
* The Python package now supports calculating feature scores for the linear model, which is
also available in the R package. (#7048)
* The Python interface now supports configuring constraints using feature names instead of
feature indices.
* Type hint support for more Python code, including the scikit-learn interface and the rabit
module. (#6799, #7240)
* Add a tutorial for XGBoost-Ray (#6884)
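A minimal sketch of the new `min_delta` parameter on the early-stopping callback (the data and thresholds are arbitrary):

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(256, 8), np.random.rand(256)
dtrain = xgb.DMatrix(X[:200], label=y[:200])
dvalid = xgb.DMatrix(X[200:], label=y[200:])

# Stop after 5 rounds without an improvement of at least 1e-3.
early_stop = xgb.callback.EarlyStopping(rounds=5, min_delta=1e-3)
booster = xgb.train(
    {"objective": "reg:squarederror"},
    dtrain,
    num_boost_round=100,
    evals=[(dvalid, "valid")],
    callbacks=[early_stop],
)
```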
### New features in R package
* In 1.4, we added a new prediction function in the C API, which is used by the Python
package. This release revises the R package to use the new prediction function as well.
A new parameter `iteration_range` for the predict function is available, which can be
used for specifying the range of trees for running prediction. (#6819, #7126)
* R package now supports the `nthread` parameter in `DMatrix` construction. (#7127)
### New features in JVM packages
* Support GPU dataframes and `DeviceQuantileDMatrix` (#7195). Constructing `DMatrix`
with GPU data structures and the interface for quantized `DMatrix` were first
introduced in the Python package and are now available in the xgboost4j package.
* JVM packages now support saving and getting early stopping attributes. (#7095) Here is a
quick [example](https://github.com/dmlc/xgboost/jvm-packages/xgboost4j-example/src/main/java/ml/dmlc/xgboost4j/java/example/EarlyStopping.java "example") in Java (#7252).
### General new features
* We now have a pre-built binary package for R on Windows with GPU support. (#7185)
* CUDA compute capability 86 is now part of the default CMake build configuration, with
newly added support for CUDA 11.4. (#7131, #7182, #7254)
* XGBoost can be compiled using the system CUB provided by a CUDA 11.x installation. (#7232)
### Optimizations
The performance for both `hist` and `gpu_hist` has been significantly improved in 1.5
with the following optimizations:
* GPU multi-class model training now supports prediction cache. (#6860)
* GPU histogram building is sped up, and the overall training time is 2-3 times faster on
large datasets (#7180, #7198). In addition, we removed the parameter `deterministic_histogram`;
the GPU algorithm is now always deterministic.
* CPU hist has an optimized procedure for data sampling (#6922)
* More performance optimization in regression and binary classification objectives on
CPU (#7206)
* Tree model dump is now performed in parallel (#7040)
### Breaking changes
* `n_gpus` was deprecated in the 1.0 release and is now removed.
* Feature grouping in the CPU hist tree method, which was disabled long ago, is
removed. (#7018)
* C API for Quantile DMatrix is changed to be consistent with the new external memory
implementation. (#7082)
### Notable general bug fixes
* XGBoost no longer changes the global CUDA device ordinal when `gpu_id` is specified (#6891,
#6987)
* Fix the `gamma` negative log-likelihood evaluation metric. (#7275)
* Fix the integer value of `verbose_eval` for the `xgboost.cv` function in Python. (#7291)
* Remove extra sync in CPU hist for dense data, which can lead to incorrect tree node
statistics. (#7120, #7128)
* Fix a bug in GPU hist when data size is larger than `UINT32_MAX` with missing
values. (#7026)
* Fix a thread safety issue in prediction with the `softmax` objective. (#7104)
* Fix a thread safety issue in CPU SHAP value computation. (#7050) Please note that all
prediction functions in Python are thread-safe.
* Fix model slicing. (#7149, #7078)
* Work around a bug in old GCC versions which can lead to a segfault during construction of
DMatrix. (#7161)
* Fix histogram truncation in GPU hist, which can lead to slightly off results. (#7181)
* Fix loading GPU linear model pickle files on a CPU-only machine. (#7154)
* Check whether the input value is duplicated when the CPU quantile queue is full (#7091)
* Fix parameter loading with training continuation. (#7121)
* Fix CMake interface for exposing C library by specifying dependencies. (#7099)
* Callback and early stopping are explicitly disabled for the scikit-learn interface
random forest estimator. (#7236)
* Fix a compilation error on x86 (32-bit machines) (#6964)
* Fix CPU memory usage with extremely sparse datasets (#7255)
* Fix a bug in GPU multi-class AUC implementation with weighted data (#7300)
### Python package
Other than the items mentioned in the previous sections, there are some Python-specific
improvements.
* Change development release postfix to `dev` (#6988)
* Fix early stopping behavior with MAPE metric (#7061)
* Fixed incorrect feature mismatch error message (#6949)
* Add predictor to skl constructor. (#7000, #7159)
* Re-enable feature validation in predict proba. (#7177)
* The scikit-learn interface regression estimator can now pass the scikit-learn estimator
check and is fully compatible with scikit-learn utilities. `__sklearn_is_fitted__` is
implemented as part of the changes (#7130, #7230)
* Conform to the latest pylint. (#7071, #7241)
* Support the latest pandas range index in DMatrix construction. (#7074)
* Fix DMatrix construction from pandas series. (#7243)
* Fix typo and grammatical mistake in error message (#7134)
* [dask] disable work stealing explicitly for training tasks (#6794)
* [dask] Set dataframe index in predict. (#6944)
* [dask] Fix prediction on df with latest dask. (#6969)
* [dask] Fix dask predict on `DaskDMatrix` with `iteration_range`. (#7005)
* [dask] Disallow importing non-dask estimators from xgboost.dask (#7133)
### R package
Improvements other than new features in the R package:
* Optimization for updating R handles in-place (#6903)
* Removed the magrittr dependency. (#6855, #6906, #6928)
* The R package now hides all C++ symbols to avoid conflicts. (#7245)
* Other maintenance including code cleanups, document updates. (#6863, #6915, #6930, #6966, #6967)
### JVM packages
Improvements other than new features in the JVM packages:
* Constructors with implicit missing value are deprecated due to confusing behaviors. (#7225)
* Reduce scala-compiler, scalatest dependency scopes (#6730)
* Make the Java library loader emit helpful error messages on missing dependencies. (#6926)
* JVM packages now use the Python tracker in XGBoost instead of the one in dmlc-core. The
tracker in XGBoost is shared between the JVM packages and Python Dask and enjoys better
maintenance (#7132)
* Fix "key not found: train" error (#6842)
* Fix model loading from stream (#7067)
### General document improvements
* Overhaul the installation documents. (#6877)
* A few demos are added: AFT with Dask (#6853), callbacks with Dask (#6995), inference
in C (#7151), and `process_type` (#7135).
* Fix PDF format of document. (#7143)
* Clarify the behavior of `use_rmm`. (#6808)
* Clarify prediction function. (#6813)
* Improve tutorial on feature interactions (#7219)
* Add small example for dask sklearn interface. (#6970)
* Update Python intro. (#7235)
* Some fixes/updates (#6810, #6856, #6935, #6948, #6976, #7084, #7097, #7170, #7173, #7174, #7226, #6979, #6809, #6796, #6979)
### Maintenance
* Some refactoring around CPU hist, which leads to better performance but is listed under general maintenance tasks:
- Extract evaluate splits from CPU hist. (#7079)
- Merge lossguide and depthwise strategies for CPU hist (#7007)
- Simplify sparse and dense CPU hist kernels (#7029)
- Extract histogram builder from CPU Hist. (#7152)
* Others
- Fix `gpu_id` with custom objective. (#7015)
- Fix typos in AUC. (#6795)
- Use constexpr in `dh::CopyIf`. (#6828)
- Update dmlc-core. (#6862)
- Bump version to 1.5.0 snapshot in master. (#6875)
- Relax shotgun test. (#6900)
- Guard against index error in prediction. (#6982)
- Hide symbols in CI build + hide symbols for C and CUDA (#6798)
- Persist data in dask test. (#7077)
- Fix typo in arguments of PartitionBuilder::Init (#7113)
- Fix typo in src/common/hist.cc BuildHistKernel (#7116)
- Use upstream URI in distributed quantile tests. (#7129)
- Include cpack (#7160)
- Remove synchronization in monitor. (#7164)
- Remove unused code. (#7175)
- Fix building on CUDA 11.0. (#7187)
- Better error message for `ncclUnhandledCudaError`. (#7190)
- Add noexcept to JSON objects. (#7205)
- Improve wording for warning (#7248)
- Fix typo in release script. [skip ci] (#7238)
- Relax shotgun test. (#6918)
- Relax test for decision stump in distributed environment. (#6919)
- [dask] speed up tests (#7020)
### CI
* [CI] Rotate access keys for uploading MacOS artifacts from Travis CI (#7253)
* Reduce Travis environment setup time. (#6912)
* Restore R cache on github action. (#6985)
* [CI] Remove stray build artifact to avoid error in artifact packaging (#6994)
* [CI] Move appveyor tests to action (#6986)
* Remove appveyor badge. [skip ci] (#7035)
* [CI] Configure RAPIDS, dask, modin (#7033)
* Test on s390x. (#7038)
* [CI] Upgrade to CMake 3.14 (#7060)
* [CI] Update R cache. (#7102)
* [CI] Pin libomp to 11.1.0 (#7107)
* [CI] Upgrade build image to CentOS 7 + GCC 8; require CUDA 10.1 and later (#7141)
* [dask] Work around segfault in prediction. (#7112)
* [dask] Remove the workaround for segfault. (#7146)
* [CI] Fix hanging Python setup in Windows CI (#7186)
* [CI] Clean up in beginning of each task in Win CI (#7189)
* Fix travis. (#7237)
### Acknowledgement
* **Contributors**: Adam Pocock (@Craigacp), Jeff H (@JeffHCross), Johan Hansson (@JohanWork), Jose Manuel Llorens (@JoseLlorensRipolles), Benjamin Szőke (@Livius90), @ReeceGoding, @ShvetsKS, Robert Zabel (@ZabelTech), Ali (@ali5h), Andrew Ziem (@az0), Andy Adinets (@canonizer), @david-cortes, Daniel Saxton (@dsaxton), Emil Sadek (@esadek), @farfarawayzyt, Gil Forsyth (@gforsyth), @giladmaya, @graue70, Philip Hyunsu Cho (@hcho3), James Lamb (@jameslamb), José Morales (@jmoralez), Kai Fricke (@krfricke), Christian Lorentzen (@lorentzenchr), Mads R. B. Kristensen (@madsbk), Anton Kostin (@masguit42), Martin Petříček (@mpetricek-corp), @naveenkb, Taewoo Kim (@oOTWK), Viktor Szathmáry (@phraktle), Robert Maynard (@robertmaynard), TP Boudreau (@tpboudreau), Jiaming Yuan (@trivialfis), Paul Taylor (@trxcllnt), @vslaykovsky, Bobby Wang (@wbo4958),
* **Reviewers**: Nan Zhu (@CodingCat), Adam Pocock (@Craigacp), Jose Manuel Llorens (@JoseLlorensRipolles), Kodi Arfer (@Kodiologist), Benjamin Szőke (@Livius90), Mark Guryanov (@MarkGuryanov), Rory Mitchell (@RAMitchell), @ReeceGoding, @ShvetsKS, Egor Smirnov (@SmirnovEgorRu), Andrew Ziem (@az0), @candalfigomoro, Andy Adinets (@canonizer), Dante Gama Dessavre (@dantegd), @david-cortes, Daniel Saxton (@dsaxton), @farfarawayzyt, Gil Forsyth (@gforsyth), Harutaka Kawamura (@harupy), Philip Hyunsu Cho (@hcho3), @jakirkham, James Lamb (@jameslamb), José Morales (@jmoralez), James Bourbeau (@jrbourbeau), Christian Lorentzen (@lorentzenchr), Martin Petříček (@mpetricek-corp), Nikolay Petrov (@napetrov), @naveenkb, Viktor Szathmáry (@phraktle), Robin Teuwens (@rteuwens), Yuan Tang (@terrytangyuan), TP Boudreau (@tpboudreau), Jiaming Yuan (@trivialfis), @vkuzmin-uber, Bobby Wang (@wbo4958), William Hicks (@wphicks)
## v1.4.2 (2021.05.13)
This is a patch release for the Python package with the following fixes:
@@ -1188,7 +2101,7 @@ This release marks a major milestone for the XGBoost project.
## v0.90 (2019.05.18)
### XGBoost Python package drops Python 2.x (#4379, #4381)
Python 2.x is reaching its end-of-life at the end of this year. [Many scientific Python packages are now moving to drop Python 2.x](https://python3statement.org/).
Python 2.x is reaching its end-of-life at the end of this year. [Many scientific Python packages are now moving to drop Python 2.x](https://python3statement.github.io/).
### XGBoost4J-Spark now requires Spark 2.4.x (#4377)
* Spark 2.3 is reaching its end-of-life soon. See discussion at #4389.


@@ -4,3 +4,5 @@
^.*\.Rproj$
^\.Rproj\.user$
README.md
^doc$
^Meta$


@@ -1,42 +1,62 @@
find_package(LibR REQUIRED)
message(STATUS "LIBR_CORE_LIBRARY " ${LIBR_CORE_LIBRARY})
file(GLOB_RECURSE R_SOURCES
file(
GLOB_RECURSE R_SOURCES
${CMAKE_CURRENT_LIST_DIR}/src/*.cc
${CMAKE_CURRENT_LIST_DIR}/src/*.c)
${CMAKE_CURRENT_LIST_DIR}/src/*.c
)
# Use object library to expose symbols
add_library(xgboost-r OBJECT ${R_SOURCES})
if (ENABLE_ALL_WARNINGS)
if(ENABLE_ALL_WARNINGS)
target_compile_options(xgboost-r PRIVATE -Wall -Wextra)
endif (ENABLE_ALL_WARNINGS)
target_compile_definitions(xgboost-r
PUBLIC
endif()
if(MSVC)
# https://github.com/microsoft/LightGBM/pull/6061
# MSVC doesn't work with anonymous types in structs. (R complex)
#
# syntax error: missing ';' before identifier 'private_data_c'
#
target_compile_definitions(xgboost-r PRIVATE -DR_LEGACY_RCOMPLEX)
endif()
target_compile_definitions(
xgboost-r PUBLIC
-DXGBOOST_STRICT_R_MODE=1
-DXGBOOST_CUSTOMIZE_GLOBAL_PRNG=1
-DDMLC_LOG_BEFORE_THROW=0
-DDMLC_DISABLE_STDIN=1
-DDMLC_LOG_CUSTOMIZE=1
-DRABIT_CUSTOMIZE_MSG_
-DRABIT_STRICT_CXX98_)
target_include_directories(xgboost-r
PRIVATE
-DRABIT_STRICT_CXX98_
)
target_include_directories(
xgboost-r PRIVATE
${LIBR_INCLUDE_DIRS}
${PROJECT_SOURCE_DIR}/include
${PROJECT_SOURCE_DIR}/dmlc-core/include
${PROJECT_SOURCE_DIR}/rabit/include)
${PROJECT_SOURCE_DIR}/rabit/include
)
target_link_libraries(xgboost-r PUBLIC ${LIBR_CORE_LIBRARY})
if (USE_OPENMP)
if(USE_OPENMP)
find_package(OpenMP REQUIRED)
target_link_libraries(xgboost-r PUBLIC OpenMP::OpenMP_CXX OpenMP::OpenMP_C)
endif (USE_OPENMP)
endif()
set_target_properties(
xgboost-r PROPERTIES
CXX_STANDARD 14
CXX_STANDARD 17
CXX_STANDARD_REQUIRED ON
POSITION_INDEPENDENT_CODE ON)
POSITION_INDEPENDENT_CODE ON
)
# Get compilation and link flags of xgboost-r and propagate to objxgboost
target_link_libraries(objxgboost PUBLIC xgboost-r)
# Add all objects of xgboost-r to objxgboost
target_sources(objxgboost INTERFACE $<TARGET_OBJECTS:xgboost-r>)


@@ -1,12 +1,12 @@
Package: xgboost
Type: Package
Title: Extreme Gradient Boosting
Version: 1.5.2.1
Date: 2022-1-17
Version: 2.1.0.0
Date: 2023-08-19
Authors@R: c(
person("Tianqi", "Chen", role = c("aut"),
email = "tianqi.tchen@gmail.com"),
person("Tong", "He", role = c("aut", "cre"),
person("Tong", "He", role = c("aut"),
email = "hetong007@gmail.com"),
person("Michael", "Benesty", role = c("aut"),
email = "michael@benesty.fr"),
@@ -26,7 +26,8 @@ Authors@R: c(
person("Min", "Lin", role = c("aut")),
person("Yifeng", "Geng", role = c("aut")),
person("Yutian", "Li", role = c("aut")),
person("Jiaming", "Yuan", role = c("aut")),
person("Jiaming", "Yuan", role = c("aut", "cre"),
email = "jm.yuan@outlook.com"),
person("XGBoost contributors", role = c("cph"),
comment = "base XGBoost implementation")
)
@@ -53,17 +54,18 @@ Suggests:
Ckmeans.1d.dp (>= 3.3.1),
vcd (>= 1.3),
testthat,
lintr,
igraph (>= 1.0.1),
float,
crayon,
titanic
titanic,
RhpcBLASctl
Depends:
R (>= 3.3.0)
R (>= 4.3.0)
Imports:
Matrix (>= 1.1-0),
methods,
data.table (>= 1.9.6),
jsonlite (>= 1.0),
RoxygenNote: 7.1.1
SystemRequirements: GNU make, C++14
jsonlite (>= 1.0)
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.3.1
Encoding: UTF-8
SystemRequirements: GNU make, C++17


@@ -1,9 +1,9 @@
Copyright (c) 2014 by Tianqi Chen and Contributors
Copyright (c) 2014-2023, Tianqi Chen and XGBoost Contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software


@@ -1,46 +1,61 @@
# Generated by roxygen2: do not edit by hand
S3method("[",xgb.Booster)
S3method("[",xgb.DMatrix)
S3method("dimnames<-",xgb.DMatrix)
S3method(coef,xgb.Booster)
S3method(dim,xgb.DMatrix)
S3method(dimnames,xgb.DMatrix)
S3method(getinfo,xgb.Booster)
S3method(getinfo,xgb.DMatrix)
S3method(length,xgb.Booster)
S3method(predict,xgb.Booster)
S3method(predict,xgb.Booster.handle)
S3method(print,xgb.Booster)
S3method(print,xgb.DMatrix)
S3method(print,xgb.cv.synchronous)
S3method(setinfo,xgb.Booster)
S3method(setinfo,xgb.DMatrix)
S3method(slice,xgb.DMatrix)
S3method(variable.names,xgb.Booster)
export("xgb.attr<-")
export("xgb.attributes<-")
export("xgb.config<-")
export("xgb.parameters<-")
export(cb.cv.predict)
export(cb.early.stop)
export(cb.evaluation.log)
export(cb.gblinear.history)
export(cb.print.evaluation)
export(cb.reset.parameters)
export(cb.save.model)
export(getinfo)
export(setinfo)
export(slice)
export(xgb.Booster.complete)
export(xgb.Callback)
export(xgb.DMatrix)
export(xgb.DMatrix.hasinfo)
export(xgb.DMatrix.save)
export(xgb.DataBatch)
export(xgb.DataIter)
export(xgb.ExternalDMatrix)
export(xgb.QuantileDMatrix)
export(xgb.QuantileDMatrix.from_iterator)
export(xgb.attr)
export(xgb.attributes)
export(xgb.cb.cv.predict)
export(xgb.cb.early.stop)
export(xgb.cb.evaluation.log)
export(xgb.cb.gblinear.history)
export(xgb.cb.print.evaluation)
export(xgb.cb.reset.parameters)
export(xgb.cb.save.model)
export(xgb.config)
export(xgb.copy.Booster)
export(xgb.create.features)
export(xgb.cv)
export(xgb.dump)
export(xgb.gblinear.history)
export(xgb.get.DMatrix.data)
export(xgb.get.DMatrix.num.non.missing)
export(xgb.get.DMatrix.qcut)
export(xgb.get.config)
export(xgb.get.num.boosted.rounds)
export(xgb.ggplot.deepness)
export(xgb.ggplot.importance)
export(xgb.ggplot.shap.summary)
export(xgb.importance)
export(xgb.is.same.Booster)
export(xgb.load)
export(xgb.load.raw)
export(xgb.model.dt.tree)
@@ -52,19 +67,16 @@ export(xgb.plot.shap.summary)
export(xgb.plot.tree)
export(xgb.save)
export(xgb.save.raw)
export(xgb.serialize)
export(xgb.set.config)
export(xgb.slice.Booster)
export(xgb.slice.DMatrix)
export(xgb.train)
export(xgb.unserialize)
export(xgboost)
import(methods)
importClassesFrom(Matrix,CsparseMatrix)
importClassesFrom(Matrix,dgCMatrix)
importClassesFrom(Matrix,dgeMatrix)
importFrom(Matrix,colSums)
importClassesFrom(Matrix,dgRMatrix)
importFrom(Matrix,sparse.model.matrix)
importFrom(Matrix,sparseMatrix)
importFrom(Matrix,sparseVector)
importFrom(Matrix,t)
importFrom(data.table,":=")
importFrom(data.table,as.data.table)
importFrom(data.table,data.table)
@@ -82,8 +94,12 @@ importFrom(graphics,points)
importFrom(graphics,title)
importFrom(jsonlite,fromJSON)
importFrom(jsonlite,toJSON)
importFrom(methods,new)
importFrom(stats,coef)
importFrom(stats,median)
importFrom(stats,predict)
importFrom(stats,sd)
importFrom(stats,variable.names)
importFrom(utils,head)
importFrom(utils,object.size)
importFrom(utils,str)

File diff suppressed because it is too large.


@@ -26,6 +26,11 @@ NVL <- function(x, val) {
'multi:softprob', 'rank:pairwise', 'rank:ndcg', 'rank:map'))
}
.RANKING_OBJECTIVES <- function() {
return(c('rank:pairwise', 'rank:ndcg', 'rank:map'))
}
#
# Low-level functions for boosting --------------------------------------------
@@ -38,11 +43,11 @@ check.booster.params <- function(params, ...) {
stop("params must be a list")
# in R interface, allow for '.' instead of '_' in parameter names
names(params) <- gsub("\\.", "_", names(params))
names(params) <- gsub(".", "_", names(params), fixed = TRUE)
# merge parameters from the params and the dots-expansion
dot_params <- list(...)
names(dot_params) <- gsub("\\.", "_", names(dot_params))
names(dot_params) <- gsub(".", "_", names(dot_params), fixed = TRUE)
if (length(intersect(names(params),
names(dot_params))) > 0)
stop("Same parameters in 'params' and in the call are not allowed. Please check your 'params' list.")
@@ -82,7 +87,7 @@ check.booster.params <- function(params, ...) {
# interaction constraints parser (convert from list of column indices to string)
if (!is.null(params[['interaction_constraints']]) &&
typeof(params[['interaction_constraints']]) != "character"){
typeof(params[['interaction_constraints']]) != "character") {
# check input class
if (!identical(class(params[['interaction_constraints']]), 'list')) stop('interaction_constraints should be class list')
if (!all(unique(sapply(params[['interaction_constraints']], class)) %in% c('numeric', 'integer'))) {
@@ -93,6 +98,14 @@ check.booster.params <- function(params, ...) {
interaction_constraints <- sapply(params[['interaction_constraints']], function(x) paste0('[', paste(x, collapse = ','), ']'))
params[['interaction_constraints']] <- paste0('[', paste(interaction_constraints, collapse = ','), ']')
}
# for evaluation metrics, should generate multiple entries per metric
if (NROW(params[['eval_metric']]) > 1) {
eval_metrics <- as.list(params[["eval_metric"]])
names(eval_metrics) <- rep("eval_metric", length(eval_metrics))
params_without_ev_metrics <- within(params, rm("eval_metric"))
params <- c(params_without_ev_metrics, eval_metrics)
}
return(params)
}
@@ -134,27 +147,49 @@ check.custom.eval <- function(env = parent.frame()) {
if (!is.null(env$feval) &&
is.null(env$maximize) && (
!is.null(env$early_stopping_rounds) ||
has.callbacks(env$callbacks, 'cb.early.stop')))
has.callbacks(env$callbacks, "early_stop")))
stop("Please set 'maximize' to indicate whether the evaluation metric needs to be maximized or not")
}
# Update a booster handle for an iteration with dtrain data
xgb.iter.update <- function(booster_handle, dtrain, iter, obj = NULL) {
if (!identical(class(booster_handle), "xgb.Booster.handle")) {
stop("booster_handle must be of xgb.Booster.handle class")
}
xgb.iter.update <- function(bst, dtrain, iter, obj) {
if (!inherits(dtrain, "xgb.DMatrix")) {
stop("dtrain must be of xgb.DMatrix class")
}
handle <- xgb.get.handle(bst)
if (is.null(obj)) {
.Call(XGBoosterUpdateOneIter_R, booster_handle, as.integer(iter), dtrain)
.Call(XGBoosterUpdateOneIter_R, handle, as.integer(iter), dtrain)
} else {
pred <- predict(booster_handle, dtrain, outputmargin = TRUE, training = TRUE,
ntreelimit = 0)
pred <- predict(
bst,
dtrain,
outputmargin = TRUE,
training = TRUE,
reshape = TRUE
)
gpair <- obj(pred, dtrain)
.Call(XGBoosterBoostOneIter_R, booster_handle, dtrain, gpair$grad, gpair$hess)
n_samples <- dim(dtrain)[1]
grad <- gpair$grad
hess <- gpair$hess
if ((is.matrix(grad) && dim(grad)[1] != n_samples) ||
(is.vector(grad) && length(grad) != n_samples) ||
(is.vector(grad) != is.vector(hess))) {
warning(paste(
"Since 2.1.0, the shape of the gradient and hessian is required to be ",
"(n_samples, n_targets) or (n_samples, n_classes). Will reshape assuming ",
"column-major order.",
sep = ""
))
grad <- matrix(grad, nrow = n_samples)
hess <- matrix(hess, nrow = n_samples)
}
.Call(
XGBoosterTrainOneIter_R, handle, dtrain, iter, grad, hess
)
}
return(TRUE)
}
@@ -163,23 +198,22 @@ xgb.iter.update <- function(booster_handle, dtrain, iter, obj = NULL) {
# Evaluate one iteration.
# Returns a named vector of evaluation metrics
# with the names in a 'datasetname-metricname' format.
xgb.iter.eval <- function(booster_handle, watchlist, iter, feval = NULL) {
if (!identical(class(booster_handle), "xgb.Booster.handle"))
stop("class of booster_handle must be xgb.Booster.handle")
xgb.iter.eval <- function(bst, evals, iter, feval) {
handle <- xgb.get.handle(bst)
if (length(watchlist) == 0)
if (length(evals) == 0)
return(NULL)
evnames <- names(watchlist)
evnames <- names(evals)
if (is.null(feval)) {
msg <- .Call(XGBoosterEvalOneIter_R, booster_handle, as.integer(iter), watchlist, as.list(evnames))
msg <- .Call(XGBoosterEvalOneIter_R, handle, as.integer(iter), evals, as.list(evnames))
mat <- matrix(strsplit(msg, '\\s+|:')[[1]][-1], nrow = 2)
res <- structure(as.numeric(mat[2, ]), names = mat[1, ])
} else {
res <- sapply(seq_along(watchlist), function(j) {
w <- watchlist[[j]]
res <- sapply(seq_along(evals), function(j) {
w <- evals[[j]]
## predict using all trees
preds <- predict(booster_handle, w, outputmargin = TRUE, iterationrange = c(1, 1))
preds <- predict(bst, w, outputmargin = TRUE, iterationrange = "all")
eval_res <- feval(preds, w)
out <- eval_res$value
names(out) <- paste0(evnames[j], "-", eval_res$metric)
@@ -206,35 +240,45 @@ convert.labels <- function(labels, objective_name) {
}
# Generates random (stratified if needed) CV folds
generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
generate.cv.folds <- function(nfold, nrows, stratified, label, group, params) {
if (NROW(group)) {
if (stratified) {
warning(
paste0(
"Stratified splitting is not supported when using 'group' attribute.",
" Will use unstratified splitting."
)
)
}
return(generate.group.folds(nfold, group))
}
objective <- params$objective
if (!is.character(objective)) {
warning("Will use unstratified splitting (custom objective used)")
stratified <- FALSE
}
# cannot stratify if label is NULL
if (stratified && is.null(label)) {
warning("Will use unstratified splitting (no 'labels' available)")
stratified <- FALSE
}
# cannot do it for rank
objective <- params$objective
if (is.character(objective) && strtrim(objective, 5) == 'rank:') {
stop("\n\tAutomatic generation of CV-folds is not implemented for ranking!\n",
stop("\n\tAutomatic generation of CV-folds is not implemented for ranking without 'group' field!\n",
"\tConsider providing pre-computed CV-folds through the 'folds=' parameter.\n")
}
# shuffle
rnd_idx <- sample.int(nrows)
if (stratified &&
length(label) == length(rnd_idx)) {
if (stratified && length(label) == length(rnd_idx)) {
y <- label[rnd_idx]
# WARNING: some heuristic logic is employed to identify classification setting!
# - For classification, need to convert y labels to factor before making the folds,
# and then do stratification by factor levels.
# - For regression, leave y numeric and do stratification by quantiles.
if (is.character(objective)) {
y <- convert.labels(y, params$objective)
} else {
# If no 'objective' given in params, it means that user either wants to
# use the default 'reg:squarederror' objective or has provided a custom
# obj function. Here, assume classification setting when y has 5 or less
# unique values:
if (length(unique(y)) <= 5) {
y <- factor(y)
}
y <- convert.labels(y, objective)
}
folds <- xgb.createFolds(y, nfold)
folds <- xgb.createFolds(y = y, k = nfold)
} else {
# make simple non-stratified folds
kstep <- length(rnd_idx) %/% nfold
@@ -248,11 +292,33 @@ generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
return(folds)
}
generate.group.folds <- function(nfold, group) {
ngroups <- length(group) - 1
if (ngroups < nfold) {
stop("DMatrix has fewer groups than folds.")
}
seq_groups <- seq_len(ngroups)
indices <- lapply(seq_groups, function(gr) seq(group[gr] + 1, group[gr + 1]))
assignments <- base::split(seq_groups, as.integer(seq_groups %% nfold))
assignments <- unname(assignments)
out <- vector("list", nfold)
randomized_groups <- sample(ngroups)
for (idx in seq_len(nfold)) {
groups_idx_test <- randomized_groups[assignments[[idx]]]
groups_test <- indices[groups_idx_test]
idx_test <- unlist(groups_test)
attributes(idx_test)$group_test <- lengths(groups_test)
attributes(idx_test)$group_train <- lengths(indices[-groups_idx_test])
out[[idx]] <- idx_test
}
return(out)
}
# Creates CV folds stratified by the values of y.
# It was borrowed from caret::createFolds and simplified
# by always returning an unnamed list of fold indices.
xgb.createFolds <- function(y, k = 10)
{
xgb.createFolds <- function(y, k) {
if (is.numeric(y)) {
## Group the numeric data based on their magnitudes
## and sample within those groups.
@@ -321,16 +387,45 @@ xgb.createFolds <- function(y, k = 10)
#' @name xgboost-deprecated
NULL
#' Do not use \code{\link[base]{saveRDS}} or \code{\link[base]{save}} for long-term archival of
#' models. Instead, use \code{\link{xgb.save}} or \code{\link{xgb.save.raw}}.
#' @title Model Serialization and Compatibility
#' @description
#'
#' It is a common practice to use the built-in \code{\link[base]{saveRDS}} function (or
#' \code{\link[base]{save}}) to persist R objects to the disk. While it is possible to persist
#' \code{xgb.Booster} objects using \code{\link[base]{saveRDS}}, it is not advisable to do so if
#' the model is to be accessed in the future. If you train a model with the current version of
#' XGBoost and persist it with \code{\link[base]{saveRDS}}, the model is not guaranteed to be
#' accessible in later releases of XGBoost. To ensure that your model can be accessed in future
#' releases of XGBoost, use \code{\link{xgb.save}} or \code{\link{xgb.save.raw}} instead.
#' When it comes to serializing XGBoost models, it's possible to use R serializers such as
#' \link{save} or \link{saveRDS} to serialize an XGBoost R model, but XGBoost also provides
#' its own serializers with better compatibility guarantees, which allow loading
#' said models in other language bindings of XGBoost.
#'
#' Note that an `xgb.Booster` object, outside of its core components, might also keep:\itemize{
#' \item Additional model configuration (accessible through \link{xgb.config}),
#' which includes model fitting parameters like `max_depth` and runtime parameters like `nthread`.
#' These are not necessarily useful for prediction/importance/plotting.
#' \item Additional R-specific attributes - e.g. results of callbacks, such as evaluation logs,
#' which are kept as a `data.table` object, accessible through `attributes(model)$evaluation_log`
#' if present.
#' }
#'
#' The first one (configurations) does not have the same compatibility guarantees as
#' the model itself, including attributes that are set and accessed through \link{xgb.attributes} - that is, such configuration
#' might be lost after loading the booster in a different XGBoost version, regardless of the
#' serializer that was used. These are saved when using \link{saveRDS}, but will be discarded
#' if loaded into an incompatible XGBoost version. They are not saved when using XGBoost's
#' serializers from its public interface including \link{xgb.save} and \link{xgb.save.raw}.
#'
#' The second ones (R attributes) are not part of the standard XGBoost model structure, and thus are
#' not saved when using XGBoost's own serializers. These attributes are only used for informational
#' purposes, such as keeping track of evaluation metrics as the model was fit, or saving the R
#' call that produced the model, but are otherwise not used for prediction / importance / plotting / etc.
#' These R attributes are only preserved when using R's serializers.
#'
#' Note that XGBoost models in R starting from version `2.1.0` and onwards, and XGBoost models
#' before version `2.1.0`, have a very different R object structure and are incompatible with
#' each other. Hence, models that were saved with R serializers like `saveRDS` or `save` before
#' version `2.1.0` will not work with later `xgboost` versions and vice versa. Be aware that
#' the structure of R model objects could in theory change again in the future, so XGBoost's serializers
#' should be preferred for long-term storage.
#'
#' Furthermore, note that using the package `qs` for serialization will require version 0.26 or
#' higher of said package, and will have the same compatibility restrictions as R serializers.
#'
#' @details
#' Use \code{\link{xgb.save}} to save the XGBoost model as a stand-alone file. You may opt into
@@ -343,26 +438,29 @@ NULL
#' The \code{\link{xgb.save.raw}} function is useful if you'd like to persist the XGBoost model
#' as part of another R object.
#'
#' Note: Do not use \code{\link{xgb.serialize}} to store models long-term. It persists not only the
#' model but also internal configurations and parameters, and its format is not stable across
#' multiple XGBoost versions. Use \code{\link{xgb.serialize}} only for checkpointing.
#' Use \link{saveRDS} if you require the R-specific attributes that a booster might have, such
#' as evaluation logs, but note that future compatibility of such objects is outside XGBoost's
#' control as it relies on R's serialization format (see e.g. the details section in
#' \link{serialize} and \link{save} from base R).
#'
#' For more details and explanation about model persistence and archival, consult the page
#' \url{https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html}.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' bst <- xgb.train(data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
#' max_depth = 2, eta = 1, nthread = 2, nrounds = 2,
#' objective = "binary:logistic")
#'
#' # Save as a stand-alone file; load it with xgb.load()
#' xgb.save(bst, 'xgb.model')
#' bst2 <- xgb.load('xgb.model')
#' fname <- file.path(tempdir(), "xgb_model.ubj")
#' xgb.save(bst, fname)
#' bst2 <- xgb.load(fname)
#'
#' # Save as a stand-alone file (JSON); load it with xgb.load()
#' xgb.save(bst, 'xgb.model.json')
#' bst2 <- xgb.load('xgb.model.json')
#' if (file.exists('xgb.model.json')) file.remove('xgb.model.json')
#' fname <- file.path(tempdir(), "xgb_model.json")
#' xgb.save(bst, fname)
#' bst2 <- xgb.load(fname)
#'
#' # Save as a raw byte vector; load it with xgb.load.raw()
#' xgb_bytes <- xgb.save.raw(bst)
@@ -373,12 +471,12 @@ NULL
#' # Persist the R object. Here, saveRDS() is okay, since it doesn't persist
#' # xgb.Booster directly. What's being persisted is the future-proof byte representation
#' # as given by xgb.save.raw().
#' saveRDS(obj, 'my_object.rds')
#' fname <- file.path(tempdir(), "my_object.Rds")
#' saveRDS(obj, fname)
#' # Read back the R object
#' obj2 <- readRDS('my_object.rds')
#' obj2 <- readRDS(fname)
#' # Re-construct xgb.Booster object from the bytes
#' bst2 <- xgb.load.raw(obj2$xgb_model_bytes)
#' if (file.exists('my_object.rds')) file.remove('my_object.rds')
#'
#' @name a-compatibility-note-for-saveRDS-save
NULL
@@ -394,7 +492,8 @@ depr_par_lut <- matrix(c(
'plot.height', 'plot_height',
'plot.width', 'plot_width',
'n_first_tree', 'trees',
'dummy', 'DUMMY'
'dummy', 'DUMMY',
'watchlist', 'evals'
), ncol = 2, byrow = TRUE)
colnames(depr_par_lut) <- c('old', 'new')

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -6,11 +6,12 @@
#' @param fname the name of the file to write.
#'
#' @examples
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package='xgboost')
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
#' xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
#' dtrain <- xgb.DMatrix('xgb.DMatrix.data')
#' if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
#' fname <- file.path(tempdir(), "xgb.DMatrix.data")
#' xgb.DMatrix.save(dtrain, fname)
#' dtrain <- xgb.DMatrix(fname)
#' @export
xgb.DMatrix.save <- function(dmatrix, fname) {
if (typeof(fname) != "character")


@@ -4,7 +4,14 @@
#' values of one or more global-scope parameters. Use \code{xgb.get.config} to fetch the current
#' values of all global-scope parameters (listed in
#' \url{https://xgboost.readthedocs.io/en/stable/parameter.html}).
#' @details
#' Note that serialization-related functions might use a globally-configured number of threads,
#' which is managed by the system's OpenMP (OMP) configuration instead. Typically, XGBoost methods
#' accept an `nthreads` parameter, but some methods like `readRDS` might get executed before such
#' parameter can be supplied.
#'
#' The number of OMP threads can in turn be configured for example through an environment variable
#' `OMP_NUM_THREADS` (needs to be set before R is started), or through `RhpcBLASctl::omp_set_num_threads`.
#' @rdname xgbConfig
#' @title Set and get global configuration
#' @name xgb.set.config, xgb.get.config


@@ -48,10 +48,10 @@
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label))
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
#'
#' param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
#' param <- list(max_depth=2, eta=1, objective='binary:logistic')
#' nrounds = 4
#'
#' bst = xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)
@@ -65,9 +65,12 @@
#' new.features.test <- xgb.create.features(model = bst, agaricus.test$data)
#'
#' # learning with new features
#' new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
#' new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
#' watchlist <- list(train = new.dtrain)
#' new.dtrain <- xgb.DMatrix(
#' data = new.features.train, label = agaricus.train$label, nthread = 2
#' )
#' new.dtest <- xgb.DMatrix(
#' data = new.features.test, label = agaricus.test$label, nthread = 2
#' )
#' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
#'
#' # Model accuracy with new features
@@ -79,7 +82,7 @@
#' accuracy.after, "!\n"))
#'
#' @export
xgb.create.features <- function(model, data, ...){
xgb.create.features <- function(model, data, ...) {
check.deprecation(...)
pred_with_leaf <- predict(model, data, predleaf = TRUE)
cols <- lapply(as.data.frame(pred_with_leaf), factor)


@@ -1,6 +1,6 @@
#' Cross Validation
#'
#' The cross validation function of xgboost
#' The cross validation function of xgboost.
#'
#' @param params the list of parameters. The complete list of parameters is
#' available in the \href{http://xgboost.readthedocs.io/en/latest/parameter.html}{online documentation}. Below
@@ -19,15 +19,19 @@
#'
#' See \code{\link{xgb.train}} for further details.
#' See also demo/ for walkthrough example in R.
#' @param data takes an \code{xgb.DMatrix}, \code{matrix}, or \code{dgCMatrix} as the input.
#'
#' Note that, while `params` accepts a `seed` entry and will use such parameter for model training if
#' supplied, this seed is not used for creation of train-test splits, which instead rely on R's own RNG
#' system - thus, for reproducible results, one needs to call the `set.seed` function beforehand.
#' @param data An `xgb.DMatrix` object, with corresponding fields like `label` or bounds as required
#' for model training by the objective.
#'
#' Note that only the basic `xgb.DMatrix` class is supported - variants such as `xgb.QuantileDMatrix`
#' or `xgb.ExternalDMatrix` are not supported here.
#' @param nrounds the max number of iterations
#' @param nfold the original dataset is randomly partitioned into \code{nfold} equal size subsamples.
#' @param label vector of response values. Should be provided only when data is an R-matrix.
#' @param missing is only used when input is a dense matrix. By default is set to NA, which means
#' that NA values should be considered as 'missing' by the algorithm.
#' Sometimes, 0 or other extreme value might be used to represent missing values.
#' @param prediction A logical value indicating whether to return the test fold predictions
#' from each CV model. This parameter engages the \code{\link{cb.cv.predict}} callback.
#' from each CV model. This parameter engages the \code{\link{xgb.cb.cv.predict}} callback.
#' @param showsd \code{boolean}, whether to show standard deviation of cross validation
#' @param metrics, list of evaluation metrics to be used in cross validation,
#' when it is not specified, the evaluation metric is chosen according to objective function.
@@ -47,27 +51,44 @@
#' @param feval customized evaluation function. Returns
#' \code{list(metric='metric-name', value='metric-value')} with given
#' prediction and dtrain.
#' @param stratified a \code{boolean} indicating whether sampling of folds should be stratified
#' by the values of outcome labels.
#' @param stratified A \code{boolean} indicating whether sampling of folds should be stratified
#' by the values of outcome labels. For real-valued labels in regression objectives,
#' stratification will be done by discretizing the labels into up to 5 buckets beforehand.
#'
#' If passing "auto", will be set to `TRUE` if the objective in `params` is a classification
#' objective (from XGBoost's built-in objectives, doesn't apply to custom ones), and to
#' `FALSE` otherwise.
#'
#' This parameter is ignored when `data` has a `group` field - in such case, the splitting
#' will be based on whole groups (note that this might make the folds have different sizes).
#'
#' Value `TRUE` here is \bold{not} supported for custom objectives.
#' @param folds \code{list} provides a possibility to use a list of pre-defined CV folds
#' (each element must be a vector of test fold's indices). When folds are supplied,
#' the \code{nfold} and \code{stratified} parameters are ignored.
#'
#' If `data` has a `group` field and the objective requires this field, each fold (list element)
#' must additionally have two attributes (retrievable through \link{attributes}) named `group_test`
#' and `group_train`, which should hold the `group` to assign through \link{setinfo.xgb.DMatrix} to
#' the resulting DMatrices.
#' @param train_folds \code{list} list specifying which indicies to use for training. If \code{NULL}
#' (the default) all indices not specified in \code{folds} will be used for training.
#'
#' This is not supported when `data` has `group` field.
#' @param verbose \code{boolean}, print the statistics during the process
#' @param print_every_n Print each n-th iteration evaluation messages when \code{verbose>0}.
#' Default is 1 which means all messages are printed. This parameter is passed to the
#' \code{\link{cb.print.evaluation}} callback.
#' \code{\link{xgb.cb.print.evaluation}} callback.
#' @param early_stopping_rounds If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' doesn't improve for \code{k} rounds.
#' Setting this parameter engages the \code{\link{cb.early.stop}} callback.
#' Setting this parameter engages the \code{\link{xgb.cb.early.stop}} callback.
#' @param maximize If \code{feval} and \code{early_stopping_rounds} are set,
#' then this parameter must be set as well.
#' When it is \code{TRUE}, it means the larger the evaluation score the better.
#' This parameter is passed to the \code{\link{cb.early.stop}} callback.
#' This parameter is passed to the \code{\link{xgb.cb.early.stop}} callback.
#' @param callbacks a list of callback functions to perform various task during boosting.
#' See \code{\link{callbacks}}. Some of the callbacks are automatically created depending on the
#' See \code{\link{xgb.Callback}}. Some of the callbacks are automatically created depending on the
#' parameters' values. User can provide either existing or their own callback methods in order
#' to customize the training process.
#' @param ... other parameters to pass to \code{params}.
@@ -75,9 +96,11 @@
#' @details
#' The original sample is randomly partitioned into \code{nfold} equal size subsamples.
#'
#' Of the \code{nfold} subsamples, a single subsample is retained as the validation data for testing the model, and the remaining \code{nfold - 1} subsamples are used as training data.
#' Of the \code{nfold} subsamples, a single subsample is retained as the validation data for testing the model,
#' and the remaining \code{nfold - 1} subsamples are used as training data.
#'
#' The cross-validation process is then repeated \code{nrounds} times, with each of the \code{nfold} subsamples used exactly once as the validation data.
#' The cross-validation process is then repeated \code{nrounds} times, with each of the
#' \code{nfold} subsamples used exactly once as the validation data.
#'
#' All observations are used for both training and validation.
#'
@@ -88,42 +111,45 @@
#' \itemize{
#' \item \code{call} a function call.
#' \item \code{params} parameters that were passed to the xgboost library. Note that it does not
#' capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
#' \item \code{callbacks} callback functions that were either automatically assigned or
#' explicitly passed.
#' capture parameters changed by the \code{\link{xgb.cb.reset.parameters}} callback.
#' \item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
#' first column corresponding to iteration number and the rest corresponding to the
#' CV-based evaluation means and standard deviations for the training and test CV-sets.
#' It is created by the \code{\link{cb.evaluation.log}} callback.
#' It is created by the \code{\link{xgb.cb.evaluation.log}} callback.
#' \item \code{niter} number of boosting iterations.
#' \item \code{nfeatures} number of features in training data.
#' \item \code{folds} the list of CV folds' indices - either those passed through the \code{folds}
#' parameter or randomly generated.
#' }
#'
#' Plus other potential elements that are the result of callbacks, such as a list `cv_predict` with
#' a sub-element `pred` when passing `prediction = TRUE`, which is added by the \link{xgb.cb.cv.predict}
#' callback (note that one can also pass it manually under `callbacks` with different settings,
#' such as also saving the models created during cross-validation); or a list `early_stop`, which
#' will contain elements such as `best_iteration` when using the early stopping callback (\link{xgb.cb.early.stop}).
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
#' cv <- xgb.cv(data = dtrain, nrounds = 3, nthread = 2, nfold = 5, metrics = list("rmse","auc"),
#' max_depth = 3, eta = 1, objective = "binary:logistic")
#' print(cv)
#' print(cv, verbose=TRUE)
#'
#' @export
xgb.cv <- function(params = list(), data, nrounds, nfold,
prediction = FALSE, showsd = TRUE, metrics = list(),
obj = NULL, feval = NULL, stratified = "auto", folds = NULL, train_folds = NULL,
verbose = TRUE, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL, callbacks = list(), ...) {
check.deprecation(...)
stopifnot(inherits(data, "xgb.DMatrix"))
if (inherits(data, "xgb.DMatrix") && .Call(XGCheckNullPtr_R, data)) {
stop("'data' is an invalid 'xgb.DMatrix' object. Must be constructed again.")
}
params <- check.booster.params(params, ...)
# TODO: should we deprecate the redundant 'metrics' parameter?
@@ -133,19 +159,22 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
check.custom.obj()
check.custom.eval()
#if (is.null(params[['eval_metric']]) && is.null(feval))
# stop("Either 'eval_metric' or 'feval' must be provided for CV")
if (stratified == "auto") {
if (is.character(params$objective)) {
stratified <- (
(params$objective %in% .CLASSIFICATION_OBJECTIVES())
&& !(params$objective %in% .RANKING_OBJECTIVES())
)
} else {
stratified <- FALSE
}
}
# Check the labels and groups
cv_label <- getinfo(data, "label")
cv_group <- getinfo(data, "group")
if (!is.null(train_folds) && NROW(cv_group)) {
stop("'train_folds' is not supported for DMatrix object with 'group' field.")
}
# CV folds
@@ -156,98 +185,140 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
} else {
if (nfold <= 1)
stop("'nfold' must be > 1")
folds <- generate.cv.folds(nfold, nrow(data), stratified, cv_label, cv_group, params)
}
# Potential TODO: sequential CV
#if (strategy == 'sequential')
# stop('Sequential CV strategy is not yet implemented')
# Callbacks
tmp <- .process.callbacks(callbacks, is_cv = TRUE)
callbacks <- tmp$callbacks
cb_names <- tmp$cb_names
rm(tmp)
# Early stopping callback
if (!is.null(early_stopping_rounds) && !("early_stop" %in% cb_names)) {
callbacks <- add.callback(
callbacks,
xgb.cb.early.stop(
early_stopping_rounds,
maximize = maximize,
verbose = verbose
),
as_first_elt = TRUE
)
}
# verbosity & evaluation printing callback:
params <- c(params, list(silent = 1))
print_every_n <- max(as.integer(print_every_n), 1L)
if (verbose && !("print_evaluation" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.print.evaluation(print_every_n, showsd = showsd))
}
# evaluation log callback: always is on in CV
if (!("evaluation_log" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.evaluation.log())
}
# CV-predictions callback
if (prediction && !("cv_predict" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.cv.predict(save_models = FALSE))
}
# create the booster-folds
# train_folds
dall <- data
bst_folds <- lapply(seq_along(folds), function(k) {
dtest <- xgb.slice.DMatrix(dall, folds[[k]], allow_groups = TRUE)
# code originally contributed by @RolandASc on stackoverflow
if (is.null(train_folds))
dtrain <- xgb.slice.DMatrix(dall, unlist(folds[-k]), allow_groups = TRUE)
else
dtrain <- xgb.slice.DMatrix(dall, train_folds[[k]], allow_groups = TRUE)
if (!is.null(attributes(folds[[k]])$group_test)) {
setinfo(dtest, "group", attributes(folds[[k]])$group_test)
setinfo(dtrain, "group", attributes(folds[[k]])$group_train)
}
bst <- xgb.Booster(
params = params,
cachelist = list(dtrain, dtest),
modelfile = NULL
)
bst <- bst$bst
list(dtrain = dtrain, bst = bst, evals = list(train = dtrain, test = dtest), index = folds[[k]])
})
# those are fixed for CV (no training continuation)
begin_iteration <- 1
end_iteration <- nrounds
.execute.cb.before.training(
callbacks,
bst_folds,
dall,
NULL,
begin_iteration,
end_iteration
)
# synchronous CV boosting: run CV folds' models within each iteration
for (iteration in begin_iteration:end_iteration) {
.execute.cb.before.iter(
callbacks,
bst_folds,
dall,
NULL,
iteration
)
msg <- lapply(bst_folds, function(fd) {
xgb.iter.update(
bst = fd$bst,
dtrain = fd$dtrain,
iter = iteration - 1,
obj = obj
)
xgb.iter.eval(
bst = fd$bst,
evals = fd$evals,
iter = iteration - 1,
feval = feval
)
})
msg <- simplify2array(msg)
should_stop <- .execute.cb.after.iter(
callbacks,
bst_folds,
dall,
NULL,
iteration,
msg
)
if (should_stop) break
}
cb_outputs <- .execute.cb.after.training(
callbacks,
bst_folds,
dall,
NULL,
iteration,
msg
)
# the CV result
ret <- list(
call = match.call(),
params = params,
niter = iteration,
nfeatures = ncol(dall),
folds = folds
)
ret <- c(ret, cb_outputs)
class(ret) <- 'xgb.cv.synchronous'
return(invisible(ret))
}
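A minimal usage sketch of the interface above (assuming the agaricus data bundled with the package; parameter values are illustrative only). Results produced by callbacks come back in sub-lists such as `early_stop` and `cv_predict`:

library(xgboost)
data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label, nthread = 2)
cv <- xgb.cv(
  data = dtrain,
  nrounds = 20,
  nfold = 5,
  prediction = TRUE,          # engages the xgb.cb.cv.predict() callback
  early_stopping_rounds = 3,  # engages the xgb.cb.early.stop() callback
  metrics = list("auc"),
  max_depth = 3, eta = 1, nthread = 2,
  objective = "binary:logistic"
)
cv$early_stop$best_iteration  # populated by the early stopping callback
str(cv$cv_predict$pred)       # out-of-fold predictions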
@@ -267,8 +338,8 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' cv <- xgb.cv(data = xgb.DMatrix(train$data, label = train$label), nfold = 5, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' print(cv)
#' print(cv, verbose=TRUE)
#'
@@ -290,23 +361,16 @@ print.xgb.cv.synchronous <- function(x, verbose = FALSE, ...) {
paste0('"', unlist(x$params), '"'),
sep = ' = ', collapse = ', '), '\n', sep = '')
}
for (n in c('niter', 'best_iteration')) {
if (is.null(x$early_stop[[n]]))
next
cat(n, ': ', x$early_stop[[n]], '\n', sep = '')
}
if (!is.null(x$cv_predict$pred)) {
cat('pred:\n')
str(x$cv_predict$pred)
}
}
@@ -314,9 +378,9 @@ print.xgb.cv.synchronous <- function(x, verbose = FALSE, ...) {
cat('evaluation_log:\n')
print(x$evaluation_log, row.names = FALSE, ...)
if (!is.null(x$early_stop$best_iteration)) {
cat('Best iteration:\n')
print(x$evaluation_log[x$early_stop$best_iteration], row.names = FALSE, ...)
}
invisible(x)
}
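A short usage sketch for the print method above (assuming `cv` is the result of the xgb.cv() call from the earlier example):

print(cv)                  # folds, niter/best_iteration, and the evaluation summary
print(cv, verbose = TRUE)  # additionally prints the stored call and parameters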


@@ -13,7 +10,7 @@
#' When this option is on, the model dump contains two additional values:
#' gain is the approximate loss function gain we get in each split;
#' cover is the sum of second order gradient in each node.
#' @param dump_format either 'text', 'json', or 'dot' (graphviz) format can be specified.
#'
#' Format 'dot' for a single tree can be passed directly to packages that consume this format
#' for graph visualization, such as the function [DiagrammeR::grViz()].
#' @param ... currently not used
#'
#' @return
@@ -21,6 +24,7 @@
#' as a \code{character} vector. Otherwise it will return \code{TRUE}.
#'
#' @examples
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
@@ -37,9 +41,13 @@
#' # print in JSON format:
#' cat(xgb.dump(bst, with_stats = TRUE, dump_format='json'))
#'
#' # plot first tree leveraging the 'dot' format
#' if (requireNamespace('DiagrammeR', quietly = TRUE)) {
#' DiagrammeR::grViz(xgb.dump(bst, dump_format = "dot")[[1L]])
#' }
#' @export
xgb.dump <- function(model, fname = NULL, fmap = "", with_stats=FALSE,
dump_format = c("text", "json"), ...) {
xgb.dump <- function(model, fname = NULL, fmap = "", with_stats = FALSE,
dump_format = c("text", "json", "dot"), ...) {
check.deprecation(...)
dump_format <- match.arg(dump_format)
if (!inherits(model, "xgb.Booster"))
@@ -49,9 +57,16 @@ xgb.dump <- function(model, fname = NULL, fmap = "", with_stats=FALSE,
if (!(is.null(fmap) || is.character(fmap)))
stop("fmap: argument must be a character string (when provided)")
model_dump <- .Call(
XGBoosterDumpModel_R,
xgb.get.handle(model),
NVL(fmap, "")[1],
as.integer(with_stats),
as.character(dump_format)
)
if (dump_format == "dot") {
return(sapply(model_dump, function(x) gsub("^booster\\[\\d+\\]\\n", "", x)))
}
if (is.null(fname))
model_dump <- gsub('\t', '', model_dump, fixed = TRUE)
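As a small illustration of the 'dot' format handled above (assuming the trained booster `bst` from the examples), the returned character vector holds one Graphviz graph per tree, which can be written to disk or rendered directly:

dot_dump <- xgb.dump(bst, dump_format = "dot")
length(dot_dump)                         # one element per tree
writeLines(dot_dump[[1L]], "tree0.dot")  # e.g., for the Graphviz CLI
DiagrammeR::grViz(dot_dump[[1L]])        # or render in R, as in the example above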


@@ -4,7 +4,7 @@
#' @rdname xgb.plot.importance
#' @export
xgb.ggplot.importance <- function(importance_matrix = NULL, top_n = NULL, measure = NULL,
rel_to_first = FALSE, n_clusters = seq_len(10), ...) {
importance_matrix <- xgb.plot.importance(importance_matrix, top_n = top_n, measure = measure,
rel_to_first = rel_to_first, plot = FALSE, ...)
@@ -127,21 +127,20 @@ xgb.ggplot.shap.summary <- function(data, shap_contrib = NULL, features = NULL,
p
}
#' Combine feature values and SHAP values
#'
#' Internal function used to combine and melt feature values and SHAP contributions
#' as required for ggplot functions related to SHAP.
#'
#' @param data_list The result of `xgb.shap.data()`.
#' @param normalize Whether to standardize feature values to mean 0 and
#' standard deviation 1. This is useful for comparing multiple features on the same
#' plot. Default is \code{FALSE}.
#'
#' @return A `data.table` containing the observation ID, the feature name, the
#' feature value (normalized if specified), and the SHAP contribution value.
#' @noRd
prepare.ggplot.shap.data <- function(data_list, normalize = FALSE) {
data <- data_list[["data"]]
shap_contrib <- data_list[["shap_contrib"]]
@@ -162,14 +161,16 @@ prepare.ggplot.shap.data <- function(data_list, normalize = FALSE) {
p_data
}
#' Scale feature values
#'
#' Internal function that scales feature values to mean 0 and standard deviation 1.
#' Useful to compare multiple features on the same plot.
#'
#' @param x Numeric vector.
#'
#' @return Numeric vector with mean 0 and standard deviation 1.
#' @noRd
normalize <- function(x) {
loc <- mean(x, na.rm = TRUE)
scale <- stats::sd(x, na.rm = TRUE)
@@ -181,7 +182,7 @@ normalize <- function(x) {
# ... the plots
# cols number of columns
# internal utility function
multiplot <- function(..., cols) {
plots <- list(...)
num_plots <- length(plots)


@@ -1,110 +1,139 @@
#' Feature importance
#'
#' Creates a `data.table` of feature importances.
#'
#' @param feature_names Character vector used to overwrite the feature names
#' of the model. The default is `NULL` (use original feature names).
#' @param model Object of class `xgb.Booster`.
#' @param trees An integer vector of tree indices that should be included
#' into the importance calculation (only for the "gbtree" booster).
#' The default (`NULL`) parses all trees.
#' It could be useful, e.g., in multiclass classification to get feature importances
#' for each class separately. *Important*: the tree index in XGBoost models
#' is zero-based (e.g., use `trees = 0:4` for the first five trees).
#' @param data Deprecated.
#' @param label Deprecated.
#' @param target Deprecated.
#'
#' @details
#'
#' This function works for both linear and tree models.
#'
#' For linear models, the importance is the absolute magnitude of linear coefficients.
#' To obtain a meaningful ranking by importance for linear models, the features need to
#' be on the same scale (which is also recommended when using L1 or L2 regularization).
#'
#' @return A `data.table` with the following columns:
#'
#' For a tree model:
#' - `Features`: Names of the features used in the model.
#' - `Gain`: Fractional contribution of each feature to the model based on
#' the total gain of this feature's splits. Higher percentage means higher importance.
#' - `Cover`: Metric of the number of observation related to this feature.
#' - `Frequency`: Percentage of times a feature has been used in trees.
#'
#' For a linear model:
#' - `Features`: Names of the features used in the model.
#' - `Weight`: Linear coefficient of this feature.
#' - `Class`: Class label (only for multiclass models).
#'
#' If `feature_names` is not provided and `model` doesn't have `feature_names`,
#' the index of the features will be used instead. Because the index is extracted from the model dump
#' (based on C++ code), it starts at 0 (as in C/C++ or Python) instead of 1 (usual in R).
#'
#' @examples
#'
#' # binomial classification using "gbtree":
#' data(agaricus.train, package = "xgboost")
#'
#' bst <- xgboost(
#' data = agaricus.train$data,
#' label = agaricus.train$label,
#' max_depth = 2,
#' eta = 1,
#' nthread = 2,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' xgb.importance(model = bst)
#'
#' # binomial classification using "gblinear":
#' bst <- xgboost(
#' data = agaricus.train$data,
#' label = agaricus.train$label,
#' booster = "gblinear",
#' eta = 0.3,
#' nthread = 1,
#' nrounds = 20, objective = "binary:logistic"
#' )
#'
#' xgb.importance(model = bst)
#'
#' # multiclass classification using "gbtree":
#' nclass <- 3
#' nrounds <- 10
#' mbst <- xgboost(
#' data = as.matrix(iris[, -5]),
#' label = as.numeric(iris$Species) - 1,
#' max_depth = 3,
#' eta = 0.2,
#' nthread = 2,
#' nrounds = nrounds,
#' objective = "multi:softprob",
#' num_class = nclass
#' )
#'
#' # all classes clumped together:
#' xgb.importance(model = mbst)
#'
#' # inspect importances separately for each class:
#' xgb.importance(
#' model = mbst, trees = seq(from = 0, by = nclass, length.out = nrounds)
#' )
#' xgb.importance(
#' model = mbst, trees = seq(from = 1, by = nclass, length.out = nrounds)
#' )
#' xgb.importance(
#' model = mbst, trees = seq(from = 2, by = nclass, length.out = nrounds)
#' )
#'
#' # multiclass classification using "gblinear":
#' mbst <- xgboost(
#' data = scale(as.matrix(iris[, -5])),
#' label = as.numeric(iris$Species) - 1,
#' booster = "gblinear",
#' eta = 0.2,
#' nthread = 1,
#' nrounds = 15,
#' objective = "multi:softprob",
#' num_class = nclass
#' )
#'
#' xgb.importance(model = mbst)
#'
#' @export
xgb.importance <- function(model = NULL, feature_names = getinfo(model, "feature_name"), trees = NULL,
data = NULL, label = NULL, target = NULL) {
if (!(is.null(data) && is.null(label) && is.null(target)))
warning("xgb.importance: parameters 'data', 'label' and 'target' are deprecated")
if (!inherits(model, "xgb.Booster"))
stop("model: must be an object of class xgb.Booster")
if (!(is.null(feature_names) || is.character(feature_names)))
stop("feature_names: Has to be a character vector")
handle <- xgb.get.handle(model)
if (xgb.booster_type(model) == "gblinear") {
args <- list(importance_type = "weight", feature_names = feature_names)
results <- .Call(
XGBoosterFeatureScore_R, handle, jsonlite::toJSON(args, auto_unbox = TRUE, null = "null")
)
names(results) <- c("features", "shape", "weight")
if (length(results$shape) == 2) {
n_classes <- results$shape[2]
} else {
n_classes <- 0
}
importance <- if (n_classes == 0) {
data.table(Feature = results$features, Weight = results$weight)[order(-abs(Weight))]
} else {
@@ -118,7 +147,7 @@ xgb.importance <- function(feature_names = NULL, model = NULL, trees = NULL,
for (importance_type in c("weight", "total_gain", "total_cover")) {
args <- list(importance_type = importance_type, feature_names = feature_names, tree_idx = trees)
results <- .Call(
XGBoosterFeatureScore_R, handle, jsonlite::toJSON(args, auto_unbox = TRUE, null = "null")
)
names(results) <- c("features", "shape", importance_type)
concatenated[


@@ -5,8 +5,8 @@
#' @param modelfile the name of the binary input file.
#'
#' @details
#' The input file is expected to contain a model saved in an xgboost model format
#' using either \code{\link{xgb.save}} or \code{\link{xgb.cb.save.model}} in R, or using some
#' appropriate methods from other xgboost interfaces. E.g., a model trained in Python and
#' saved from there in xgboost format, could be loaded from R.
#'
@@ -17,31 +17,50 @@
#' An object of \code{xgb.Booster} class.
#'
#' @seealso
#' \code{\link{xgb.save}}
#'
#' @examples
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#'
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#'
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgb.train(
#' data = xgb.DMatrix(train$data, label = train$label),
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' fname <- file.path(tempdir(), "xgb.ubj")
#' xgb.save(bst, fname)
#' bst <- xgb.load(fname)
#' @export
xgb.load <- function(modelfile) {
if (is.null(modelfile))
stop("xgb.load: modelfile cannot be NULL")
bst <- xgb.Booster(
params = list(),
cachelist = list(),
modelfile = modelfile
)
bst <- bst$bst
return(bst)
}


@@ -3,12 +3,10 @@
#' User can generate raw memory buffer by calling xgb.save.raw
#'
#' @param buffer the buffer returned by xgb.save.raw
#'
#' @export
xgb.load.raw <- function(buffer) {
cachelist <- list()
bst <- .Call(XGBoosterCreate_R, cachelist)
.Call(XGBoosterLoadModelFromRaw_R, xgb.get.handle(bst), buffer)
return(bst)
}
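A sketch connecting the two loaders (assuming a trained booster `bst`): xgb.load() reads from a file as in its example above, while the raw-vector route keeps everything in memory:

raw_model <- xgb.save.raw(bst)   # serialize the booster to a raw vector
bst2 <- xgb.load.raw(raw_model)  # restore a booster from that buffer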


@@ -1,68 +1,73 @@
#' Parse model text dump
#'
#' Parse a boosted tree model text dump into a `data.table` structure.
#'
#' @param model Object of class `xgb.Booster`. If it contains feature names (they can be set through
#' \link{setinfo}), they will be used in the output from this function.
#' @param text Character vector previously generated by the function [xgb.dump()]
#' (called with parameter `with_stats = TRUE`). `text` takes precedence over `model`.
#' @param trees An integer vector of tree indices that should be used.
#' The default (`NULL`) uses all trees.
#' Useful, e.g., in multiclass classification to get only
#' the trees of one class. *Important*: the tree index in XGBoost models
#' is zero-based (e.g., use `trees = 0:4` for the first five trees).
#' @param use_int_id A logical flag indicating whether nodes in columns "Yes", "No", and
#' "Missing" should be represented as integers (when `TRUE`) or as "Tree-Node"
#' character strings (when `FALSE`, default).
#' @param ... Currently not used.
#'
#' @return
#' A `data.table` with detailed information about tree nodes. It has the following columns:
#' - `Tree`: integer ID of a tree in a model (zero-based index).
#' - `Node`: integer ID of a node in a tree (zero-based index).
#' - `ID`: character identifier of a node in a model (only when `use_int_id = FALSE`).
#' - `Feature`: for a branch node, a feature ID or name (when available);
#' for a leaf node, it simply labels it as `"Leaf"`.
#' - `Split`: location of the split for a branch node (split condition is always "less than").
#' - `Yes`: ID of the next node when the split condition is met.
#' - `No`: ID of the next node when the split condition is not met.
#' - `Missing`: ID of the next node when the branch value is missing.
#' - `Gain`: either the split gain (change in loss) or the leaf value.
#' - `Cover`: metric related to the number of observations either seen by a split
#' or collected by a leaf during training.
#'
#'
#' When `use_int_id = FALSE`, columns "Yes", "No", and "Missing" point to model-wide node identifiers
#' in the "ID" column. When `use_int_id = TRUE`, those columns point to node identifiers from
#' the corresponding trees in the "Node" column.
#'
#' @examples
#' # Basic use:
#'
#' data(agaricus.train, package = "xgboost")
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#'
#' bst <- xgboost(
#' data = agaricus.train$data,
#' label = agaricus.train$label,
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' # This bst model already has feature_names stored with it, so those would be used when
#' # feature_names is not set:
#' dt <- xgb.model.dt.tree(bst)
#'
#' # How to match feature names of splits that are following a current 'Yes' branch:
#'
#' merge(
#' dt,
#' dt[, .(ID, Y.Feature = Feature)], by.x = "Yes", by.y = "ID", all.x = TRUE
#' )[
#' order(Tree, Node)
#' ]
#'
#' @export
xgb.model.dt.tree <- function(model = NULL, text = NULL,
trees = NULL, use_int_id = FALSE, ...) {
check.deprecation(...)
if (!inherits(model, "xgb.Booster") && !is.character(text)) {
@@ -71,23 +76,22 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
" (or NULL if 'model' was provided).")
}
if (!(is.null(trees) || is.numeric(trees))) {
stop("trees: must be a vector of integers.")
}
feature_names <- NULL
if (inherits(model, "xgb.Booster")) {
feature_names <- xgb.feature_names(model)
}
from_text <- TRUE
if (is.null(text)) {
text <- xgb.dump(model = model, with_stats = TRUE)
from_text <- FALSE
}
if (length(text) < 2 || !any(grepl('leaf=(\\d+)', text))) {
stop("Non-tree model detected! This function can only be used with tree models.")
}
@@ -113,36 +117,73 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
td[, isLeaf := grepl("leaf", t, fixed = TRUE)]
# parse branch lines
branch_rx_nonames <- paste0("f(\\d+)<(", anynumber_regex, ")\\] yes=(\\d+),no=(\\d+),missing=(\\d+),",
"gain=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
branch_rx_w_names <- paste0("\\d+:\\[(.+)<(", anynumber_regex, ")\\] yes=(\\d+),no=(\\d+),missing=(\\d+),",
"gain=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
text_has_feature_names <- FALSE
if (NROW(feature_names)) {
branch_rx <- branch_rx_w_names
text_has_feature_names <- TRUE
} else {
# Note: when passing a text dump, it might or might not have feature names,
# but that aspect is unknown from just the text attributes
branch_rx <- branch_rx_nonames
if (from_text) {
if (sum(grepl(branch_rx_w_names, text)) > sum(grepl(branch_rx_nonames, text))) {
branch_rx <- branch_rx_w_names
text_has_feature_names <- TRUE
}
}
}
branch_cols <- c("Feature", "Split", "Yes", "No", "Missing", "Gain", "Cover")
td[
isLeaf == FALSE,
(branch_cols) := {
matches <- regmatches(t, regexec(branch_rx, t))
# skip some indices with spurious capture groups from anynumber_regex
xtr <- do.call(rbind, matches)[, c(2, 3, 5, 6, 7, 8, 10), drop = FALSE]
xtr[, 3:5] <- add.tree.id(xtr[, 3:5], Tree)
if (length(xtr) == 0) {
as.data.table(
list(Feature = "NA", Split = "NA", Yes = "NA", No = "NA", Missing = "NA", Gain = "NA", Cover = "NA")
)
} else {
as.data.table(xtr)
}
}
]
# assign feature_names when available
is_stump <- function() {
return(length(td$Feature) == 1 && is.na(td$Feature))
}
if (!text_has_feature_names) {
if (!is.null(feature_names) && !is_stump()) {
if (length(feature_names) <= max(as.numeric(td$Feature), na.rm = TRUE))
stop("feature_names has less elements than there are features used in the model")
td[isLeaf == FALSE, Feature := feature_names[as.numeric(Feature) + 1]]
}
}
# parse leaf lines
leaf_rx <- paste0("leaf=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
leaf_cols <- c("Feature", "Quality", "Cover")
td[isLeaf == TRUE,
(leaf_cols) := {
matches <- regmatches(t, regexec(leaf_rx, t))
xtr <- do.call(rbind, matches)[, c(2, 4)]
c("Leaf", as.data.table(xtr))
}]
leaf_cols <- c("Feature", "Gain", "Cover")
td[
isLeaf == TRUE,
(leaf_cols) := {
matches <- regmatches(t, regexec(leaf_rx, t))
xtr <- do.call(rbind, matches)[, c(2, 4)]
if (length(xtr) == 2) {
c("Leaf", as.data.table(xtr[1]), as.data.table(xtr[2]))
} else {
c("Leaf", as.data.table(xtr))
}
}
]
# convert some columns to numeric
numeric_cols <- c("Split", "Quality", "Cover")
numeric_cols <- c("Split", "Gain", "Cover")
td[, (numeric_cols) := lapply(.SD, as.numeric), .SDcols = numeric_cols]
if (use_int_id) {
int_cols <- c("Yes", "No", "Missing")


@@ -1,62 +1,74 @@
#' Plot model tree depth
#'
#' Visualizes distributions related to the depth of tree leaves.
#' - `xgb.plot.deepness()` uses base R graphics, while
#' - `xgb.ggplot.deepness()` uses "ggplot2".
#'
#' @param model Either an `xgb.Booster` model, or the "data.table" returned by [xgb.model.dt.tree()].
#' @param which Which distribution to plot (see details).
#' @param plot Should the plot be shown? Default is `TRUE`.
#' @param ... Other parameters passed to [graphics::barplot()] or [graphics::plot()].
#'
#' @details
#'
#' When \code{which="2x1"}, two distributions with respect to the leaf depth
#' When `which = "2x1"`, two distributions with respect to the leaf depth
#' are plotted on top of each other:
#' 1. The distribution of the number of leaves in a tree model at a certain depth.
#' 2. The distribution of the average weighted number of observations ("cover")
#' ending up in leaves at a certain depth.
#'
#' When \code{which="max.depth"} or \code{which="med.depth"}, plots of either maximum or median depth
#' per tree with respect to tree number are created. And \code{which="med.weight"} allows to see how
#' Those could be helpful in determining sensible ranges of the `max_depth`
#' and `min_child_weight` parameters.
#'
#' When `which = "max.depth"` or `which = "med.depth"`, plots of either maximum or
#' median depth per tree with respect to the tree number are created.
#'
#' Finally, `which = "med.weight"` allows to see how
#' a tree's median absolute leaf weight changes through the iterations.
#'
#' These functions have been inspired by the blog post
#' <https://github.com/aysent/random-forest-leaf-visualization>.
#'
#' @return
#' The return value of the two functions is as follows:
#' - `xgb.plot.deepness()`: A "data.table" (invisibly).
#' Each row corresponds to a terminal leaf in the model. It contains its information
#' about depth, cover, and weight (used in calculating predictions).
#' If `plot = TRUE`, also a plot is shown.
#' - `xgb.ggplot.deepness()`: When `which = "2x1"`, a list of two "ggplot" objects,
#' and a single "ggplot" object otherwise.
#'
#' @seealso [xgb.train()] and [xgb.model.dt.tree()].
#'
#' @examples
#'
#' data(agaricus.train, package = "xgboost")
#' ## Keep the number of threads to 2 for examples
#' nthread <- 2
#' data.table::setDTthreads(nthread)
#'
#' ## Change max_depth to a higher number to get a more significant result
#' bst <- xgboost(
#' data = agaricus.train$data,
#' label = agaricus.train$label,
#' max_depth = 6,
#' nthread = nthread,
#' nrounds = 50,
#' objective = "binary:logistic",
#' subsample = 0.5,
#' min_child_weight = 2
#' )
#'
#' xgb.plot.deepness(bst)
#' xgb.ggplot.deepness(bst)
#'
#' xgb.plot.deepness(
#' bst, which = "max.depth", pch = 16, col = rgb(0, 0, 1, 0.3), cex = 2
#' )
#'
#' xgb.plot.deepness(
#' bst, which = "med.weight", pch = 16, col = rgb(0, 0, 1, 0.3), cex = 2
#' )
#'
#' @rdname xgb.plot.deepness
#' @export
@@ -80,7 +92,7 @@ xgb.plot.deepness <- function(model = NULL, which = c("2x1", "max.depth", "med.d
stop("Model tree columns are not as expected!\n",
" Note that this function works only for tree models.")
dt_depths <- merge(get.leaf.depth(dt_tree), dt_tree[, .(ID, Cover, Weight = Gain)], by = "ID")
setkeyv(dt_depths, c("Tree", "ID"))
# count by depth levels, and also calculate average cover at a depth
dt_summaries <- dt_depths[, .(.N, Cover = mean(Cover)), Depth]
@@ -136,7 +148,7 @@ get.leaf.depth <- function(dt_tree) {
# list of paths to each leaf in a tree
paths <- lapply(paths_tmp$vpath, names)
# combine into a resulting path lengths table for a tree
data.table(Depth = lengths(paths), ID = To[Leaf == TRUE])
}, by = Tree]
}
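For reference, the lengths() call above returns the length of each list element in a single vectorized step, e.g.:

lengths(list(a = 1:3, b = 1:5))  # c(a = 3L, b = 5L)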
@@ -145,6 +157,6 @@ get.leaf.depth <- function(dt_tree) {
# They are mainly column names inferred by Data.table...
globalVariables(
c(
".N", "N", "Depth", "Quality", "Cover", "Tree", "ID", "Yes", "No", "Feature", "Leaf", "Weight"
".N", "N", "Depth", "Gain", "Cover", "Tree", "ID", "Yes", "No", "Feature", "Leaf", "Weight"
)
)


@@ -1,59 +1,75 @@
#' Plot feature importance
#'
#' Represents previously calculated feature importance as a bar graph.
#' - `xgb.plot.importance()` uses base R graphics, while
#' - `xgb.ggplot.importance()` uses "ggplot".
#'
#' @param importance_matrix A `data.table` as returned by [xgb.importance()].
#' @param top_n Maximal number of top features to include into the plot.
#' @param measure The name of importance measure to plot.
#' When `NULL`, 'Gain' would be used for trees and 'Weight' would be used for gblinear.
#' @param rel_to_first Whether importance values should be represented as relative to
#' the highest ranked feature, see Details.
#' @param left_margin Adjust the left margin size to fit feature names.
#' When `NULL`, the existing `par("mar")` is used.
#' @param cex Passed as `cex.names` parameter to [graphics::barplot()].
#' @param plot Should the barplot be shown? Default is `TRUE`.
#' @param n_clusters A numeric vector containing the min and the max range
#' of the possible number of clusters of bars.
#' @param ... Other parameters passed to [graphics::barplot()]
#' (except `horiz`, `border`, `cex.names`, `names.arg`, and `las`).
#' Only used in `xgb.plot.importance()`.
#'
#' @details
#' The graph represents each feature as a horizontal bar of length proportional to the importance of a feature.
#' Features are sorted by decreasing importance.
#' It works for both "gblinear" and "gbtree" models.
#'
#' When `rel_to_first = FALSE`, the values would be plotted as in `importance_matrix`.
#' For a "gbtree" model, that would mean being normalized to the total of 1
#' ("what is feature's importance contribution relative to the whole model?").
#' For linear models, `rel_to_first = FALSE` would show actual values of the coefficients.
#' Setting `rel_to_first = TRUE` allows to see the picture from the perspective of
#' "what is feature's importance contribution relative to the most important feature?"
#'
#' The "ggplot" backend performs 1-D clustering of the importance values,
#' with bar colors corresponding to different clusters having similar importance values.
#'
#' @return
#' The return value depends on the function:
#' - `xgb.plot.importance()`: Invisibly, a "data.table" with `n_top` features sorted
#' by importance. If `plot = TRUE`, the values are also plotted as barplot.
#' - `xgb.ggplot.importance()`: A customizable "ggplot" object.
#' E.g., to change the title, set `+ ggtitle("A GRAPH NAME")`.
#'
#' @seealso [graphics::barplot()]
#'
#' @examples
#' data(agaricus.train)
#'
#' ## Keep the number of threads to 2 for examples
#' nthread <- 2
#' data.table::setDTthreads(nthread)
#'
#' bst <- xgboost(
#' data = agaricus.train$data,
#' label = agaricus.train$label,
#' max_depth = 3,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' importance_matrix <- xgb.importance(colnames(agaricus.train$data), model = bst)
#' xgb.plot.importance(
#' importance_matrix, rel_to_first = TRUE, xlab = "Relative importance"
#' )
#'
#' gg <- xgb.ggplot.importance(
#' importance_matrix, measure = "Frequency", rel_to_first = TRUE
#' )
#' gg
#' gg + ggplot2::ylab("Frequency")
#'
#' @rdname xgb.plot.importance
@@ -82,7 +98,13 @@ xgb.plot.importance <- function(importance_matrix = NULL, top_n = NULL, measure
}
# also aggregate, just in case when the values were not yet summed up by feature
importance_matrix <- importance_matrix[
, lapply(.SD, sum)
, .SDcols = setdiff(names(importance_matrix), "Feature")
, by = Feature
][
, Importance := get(measure)
]
# make sure it's ordered
importance_matrix <- importance_matrix[order(-abs(Importance))]
@@ -102,7 +124,9 @@ xgb.plot.importance <- function(importance_matrix = NULL, top_n = NULL, measure
original_mar <- par()$mar
# reset margins so this function doesn't have side effects
on.exit({
par(mar = original_mar)
})
mar <- original_mar
if (!is.null(left_margin))


@@ -1,14 +1,10 @@
#' Project all trees on one tree
#'
#' Visualization of the ensemble of trees as a single collective unit.
#'
#' @inheritParams xgb.plot.tree
#' @param features_keep Number of features to keep in each position of the multi trees,
#' by default 5.
#'
#' @details
#'
@@ -24,46 +20,55 @@
#' Moreover, the trees tend to reuse the same features.
#'
#' The function projects each tree onto one, and keeps for each position the
#' `features_keep` first features (based on the Gain per feature measure).
#'
#' This function is inspired by this blog post:
#' <https://wellecks.wordpress.com/2015/02/21/peering-into-the-black-box-visualizing-lambdamart/>
#'
#' @inherit xgb.plot.tree return
#'
#' @examples
#'
#' data(agaricus.train, package = "xgboost")
#'
#' ## Keep the number of threads to 2 for examples
#' nthread <- 2
#' data.table::setDTthreads(nthread)
#'
#' bst <- xgboost(
#' data = agaricus.train$data,
#' label = agaricus.train$label,
#' max_depth = 15,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 30,
#' objective = "binary:logistic",
#' min_child_weight = 50,
#' verbose = 0
#' )
#'
#' p <- xgb.plot.multi.trees(model = bst, features_keep = 3)
#' print(p)
#'
#' \dontrun{
#' # Below is an example of how to save this plot to a file.
#' # Note that for export_graph() to work, the {DiagrammeRsvg} and {rsvg} packages
#' # must also be installed.
#'
#' library(DiagrammeR)
#'
#' gr <- xgb.plot.multi.trees(model = bst, features_keep = 3, render = FALSE)
#' export_graph(gr, "tree.pdf", width = 1500, height = 600)
#' }
#'
#' @export
xgb.plot.multi.trees <- function(model, features_keep = 5, plot_width = NULL, plot_height = NULL,
render = TRUE, ...) {
if (!requireNamespace("DiagrammeR", quietly = TRUE)) {
stop("DiagrammeR is required for xgb.plot.multi.trees")
}
check.deprecation(...)
tree.matrix <- xgb.model.dt.tree(model = model)
# first number of the path represents the tree, then the following numbers are related to the path to follow
# root init
@@ -90,13 +95,13 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5,
data.table::set(tree.matrix, j = nm, value = sub("^\\d+-", "", tree.matrix[[nm]]))
nodes.dt <- tree.matrix[
, .(Gain = sum(Gain))
, by = .(abs.node.position, Feature)
][, .(Text = paste0(
paste0(
Feature[seq_len(min(length(Feature), features_keep))],
" (",
format(Gain[seq_len(min(length(Gain), features_keep))], digits = 5),
")"
),
collapse = "\n"


@@ -1,106 +1,165 @@
#' SHAP dependence plots
#'
#' Visualizes SHAP values against feature values to gain an impression of feature effects.
#'
#' @param data The data to explain as a `matrix` or `dgCMatrix`.
#' @param shap_contrib Matrix of SHAP contributions of `data`.
#' The default (`NULL`) computes it from `model` and `data`.
#' @param features Vector of column indices or feature names to plot.
#' When `NULL` (default), the `top_n` most important features are selected
#' by [xgb.importance()].
#' @param top_n How many of the most important features (<= 100) should be selected?
#'   The default is 1 for SHAP dependence and 10 for SHAP summary.
#' Only used when `features = NULL`.
#' @param model An `xgb.Booster` model. Only required when `shap_contrib = NULL` or
#' `features = NULL`.
#' @param trees Passed to [xgb.importance()] when `features = NULL`.
#' @param target_class Only relevant for multiclass models. The default (`NULL`)
#' averages the SHAP values over all classes. Pass a (0-based) class index
#' to show only SHAP values of that class.
#' @param approxcontrib Passed to `predict()` when `shap_contrib = NULL`.
#' @param subsample Fraction of data points randomly picked for plotting.
#' The default (`NULL`) will use up to 100k data points.
#' @param n_col Number of columns in a grid of plots.
#' @param col Color of the scatterplot markers.
#' @param pch Scatterplot marker.
#' @param discrete_n_uniq Maximal number of unique feature values to consider the
#' feature as discrete.
#' @param discrete_jitter Jitter amount added to the values of discrete features.
#' @param ylab The y-axis label in 1D plots.
#' @param plot_NA Should contributions of cases with missing values be plotted?
#' Default is `TRUE`.
#' @param col_NA Color of marker for missing value contributions.
#' @param pch_NA Marker type for `NA` values.
#' @param pos_NA Relative position of the x-location where `NA` values are shown:
#' `min(x) + (max(x) - min(x)) * pos_NA`.
#' @param plot_loess Should loess-smoothed curves be plotted? (Default is `TRUE`).
#' The smoothing is only done for features with more than 5 distinct values.
#' @param col_loess Color of loess curves.
#' @param span_loess The `span` parameter of [stats::loess()].
#' @param which Whether to do univariate or bivariate plotting. Currently, only "1d" is implemented.
#' @param plot Should the plot be drawn? (Default is `TRUE`).
#' If `FALSE`, only a list of matrices is returned.
#' @param ... Other parameters passed to [graphics::plot()].
#'
#' @details
#'
#' These scatterplots represent how SHAP feature contributions depend of feature values.
#' The similarity to partial dependency plots is that they also give an idea for how feature values
#' affect predictions. However, in partial dependency plots, we usually see marginal dependencies
#' of model prediction on feature value, while SHAP contribution dependency plots display the estimated
#' contributions of a feature to model prediction for each individual case.
#' The similarity to partial dependence plots is that they also give an idea for how feature values
#' affect predictions. However, in partial dependence plots, we see marginal dependencies
#' of model prediction on feature value, while SHAP dependence plots display the estimated
#' contributions of a feature to the prediction for each individual case.
#'
#' When \code{plot_loess = TRUE} is set, feature values are rounded to 3 significant digits and
#' weighted LOESS is computed and plotted, where weights are the numbers of data points
#' When `plot_loess = TRUE`, feature values are rounded to three significant digits and
#' weighted LOESS is computed and plotted, where the weights are the numbers of data points
#' at each rounded value.
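A rough sketch of the weighted LOESS described above, assuming numeric vectors `x` (feature values) and `y` (SHAP values) and an already drawn scatterplot; this is an illustration, not the package's exact internals:

xr <- signif(x, 3)                            # round to 3 significant digits
xu <- sort(unique(xr))                        # unique rounded values
w  <- as.vector(table(factor(xr)))            # weight = points per rounded value
yu <- as.vector(tapply(y, factor(xr), mean))  # mean SHAP value per rounded value
fit <- stats::loess(yu ~ xu, weights = w, span = 0.5)
lines(xu, fitted(fit), col = "red")           # overlay the smoothed curve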
#'
#' Note: SHAP contributions are shown on the scale of model margin. E.g., for a logistic binomial objective,
#' the margin is prediction before a sigmoidal transform into probability-like values.
#' Note: SHAP contributions are on the scale of the model margin.
#' E.g., for a logistic binomial objective, the margin is on log-odds scale.
#' Also, since SHAP stands for "SHapley Additive exPlanation" (model prediction = sum of SHAP
#' contributions for all features + bias), depending on the objective used, transforming SHAP
#' contributions for a feature from the marginal to the prediction space is not necessarily
#' a meaningful thing to do.
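A short sketch of the additivity property, assuming the fitted binary booster `bst` from the examples below:

contr  <- predict(bst, agaricus.test$data, predcontrib = TRUE)   # SHAP values plus a BIAS column
margin <- predict(bst, agaricus.test$data, outputmargin = TRUE)  # log-odds scale
# each row of contributions sums to the margin prediction, not the probability:
all.equal(unname(rowSums(contr)), unname(margin), tolerance = 1e-5)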
#'
#' @return
#'
#' In addition to producing plots (when \code{plot=TRUE}), it silently returns a list of two matrices:
#' \itemize{
#' \item \code{data} the values of selected features;
#' \item \code{shap_contrib} the contributions of selected features.
#' }
#' In addition to producing plots (when `plot = TRUE`), it silently returns a list of two matrices:
#' - `data`: Feature value matrix.
#' - `shap_contrib`: Corresponding SHAP value matrix.
#'
#' @references
#'
#' Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions", NIPS Proceedings 2017, \url{https://arxiv.org/abs/1705.07874}
#'
#' Scott M. Lundberg, Su-In Lee, "Consistent feature attribution for tree ensembles", \url{https://arxiv.org/abs/1706.06060}
#' 1. Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions",
#' NIPS Proceedings 2017, <https://arxiv.org/abs/1705.07874>
#' 2. Scott M. Lundberg, Su-In Lee, "Consistent feature attribution for tree ensembles",
#' <https://arxiv.org/abs/1706.06060>
#'
#' @examples
#'
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' data(agaricus.train, package = "xgboost")
#' data(agaricus.test, package = "xgboost")
#'
#' bst <- xgboost(agaricus.train$data, agaricus.train$label, nrounds = 50,
#' eta = 0.1, max_depth = 3, subsample = .5,
#' method = "hist", objective = "binary:logistic", nthread = 2, verbose = 0)
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#' nrounds <- 20
#'
#' bst <- xgboost(
#' agaricus.train$data,
#' agaricus.train$label,
#' nrounds = nrounds,
#' eta = 0.1,
#' max_depth = 3,
#' subsample = 0.5,
#' objective = "binary:logistic",
#' nthread = nthread,
#' verbose = 0
#' )
#'
#' xgb.plot.shap(agaricus.test$data, model = bst, features = "odor=none")
#'
#' contr <- predict(bst, agaricus.test$data, predcontrib = TRUE)
#' xgb.plot.shap(agaricus.test$data, contr, model = bst, top_n = 12, n_col = 3)
#' xgb.ggplot.shap.summary(agaricus.test$data, contr, model = bst, top_n = 12) # Summary plot
#'
#' # multiclass example - plots for each class separately:
#' # Summary plot
#' xgb.ggplot.shap.summary(agaricus.test$data, contr, model = bst, top_n = 12)
#'
#' # Multiclass example - plots for each class separately:
#' nclass <- 3
#' nrounds <- 20
#' x <- as.matrix(iris[, -5])
#' set.seed(123)
#' is.na(x[sample(nrow(x) * 4, 30)]) <- TRUE # introduce some missing values
#' mbst <- xgboost(data = x, label = as.numeric(iris$Species) - 1, nrounds = nrounds,
#' max_depth = 2, eta = 0.3, subsample = .5, nthread = 2,
#' objective = "multi:softprob", num_class = nclass, verbose = 0)
#' trees0 <- seq(from=0, by=nclass, length.out=nrounds)
#'
#' mbst <- xgboost(
#' data = x,
#' label = as.numeric(iris$Species) - 1,
#' nrounds = nrounds,
#' max_depth = 2,
#' eta = 0.3,
#' subsample = 0.5,
#' nthread = nthread,
#' objective = "multi:softprob",
#' num_class = nclass,
#' verbose = 0
#' )
#' trees0 <- seq(from = 0, by = nclass, length.out = nrounds)
#' col <- rgb(0, 0, 1, 0.5)
#' xgb.plot.shap(x, model = mbst, trees = trees0, target_class = 0, top_n = 4,
#' n_col = 2, col = col, pch = 16, pch_NA = 17)
#' xgb.plot.shap(x, model = mbst, trees = trees0 + 1, target_class = 1, top_n = 4,
#' n_col = 2, col = col, pch = 16, pch_NA = 17)
#' xgb.plot.shap(x, model = mbst, trees = trees0 + 2, target_class = 2, top_n = 4,
#' n_col = 2, col = col, pch = 16, pch_NA = 17)
#' xgb.ggplot.shap.summary(x, model = mbst, target_class = 0, top_n = 4) # Summary plot
#' xgb.plot.shap(
#' x,
#' model = mbst,
#' trees = trees0,
#' target_class = 0,
#' top_n = 4,
#' n_col = 2,
#' col = col,
#' pch = 16,
#' pch_NA = 17
#' )
#'
#' xgb.plot.shap(
#' x,
#' model = mbst,
#' trees = trees0 + 1,
#' target_class = 1,
#' top_n = 4,
#' n_col = 2,
#' col = col,
#' pch = 16,
#' pch_NA = 17
#' )
#'
#' xgb.plot.shap(
#' x,
#' model = mbst,
#' trees = trees0 + 2,
#' target_class = 2,
#' top_n = 4,
#' n_col = 2,
#' col = col,
#' pch = 16,
#' pch_NA = 17
#' )
#'
#' # Summary plot
#' xgb.ggplot.shap.summary(x, model = mbst, target_class = 0, top_n = 4)
#'
#' @rdname xgb.plot.shap
#' @export
@ -143,7 +202,7 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
y <- shap_contrib[, f][ord]
x_lim <- range(x, na.rm = TRUE)
y_lim <- range(y, na.rm = TRUE)
do_na <- plot_NA && any(is.na(x))
do_na <- plot_NA && anyNA(x)
if (do_na) {
x_range <- diff(x_lim)
loc_na <- min(x, na.rm = TRUE) + x_range * pos_NA
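As an aside on the change above: `anyNA()` is the idiomatic and faster equivalent of `any(is.na())`, e.g.:

v <- c(1, NA, 3)
anyNA(v)        # TRUE, can stop at the first NA
any(is.na(v))   # TRUE, but materializes the full logical vector first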
@ -183,41 +242,48 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
invisible(list(data = data, shap_contrib = shap_contrib))
}
#' SHAP contribution dependency summary plot
#' SHAP summary plot
#'
#' Compare SHAP contributions of different features.
#' Visualizes SHAP contributions of different features.
#'
#' A point plot (each point representing one sample from \code{data}) is
#' A point plot (each point representing one observation from `data`) is
#' produced for each feature, with the points plotted on the SHAP value axis.
#' Each point (observation) is coloured based on its feature value. The plot
#' hence allows us to see which features have a negative / positive contribution
#' Each point (observation) is coloured based on its feature value.
#'
#' The plot allows one to see which features have a negative / positive contribution
#' on the model prediction, and whether the contribution is different for larger
#' or smaller values of the feature. We effectively try to replicate the
#' \code{summary_plot} function from https://github.com/slundberg/shap.
#' or smaller values of the feature. Inspired by the summary plot of
#' <https://github.com/shap/shap>.
#'
#' @inheritParams xgb.plot.shap
#'
#' @return A \code{ggplot2} object.
#' @return A `ggplot2` object.
#' @export
#'
#' @examples # See \code{\link{xgb.plot.shap}}.
#' @seealso \code{\link{xgb.plot.shap}}, \code{\link{xgb.ggplot.shap.summary}},
#' \url{https://github.com/slundberg/shap}
#' @examples
#' # See examples in xgb.plot.shap()
#'
#' @seealso [xgb.plot.shap()], [xgb.ggplot.shap.summary()],
#' and the Python library <https://github.com/shap/shap>.
xgb.plot.shap.summary <- function(data, shap_contrib = NULL, features = NULL, top_n = 10, model = NULL,
trees = NULL, target_class = NULL, approxcontrib = FALSE, subsample = NULL) {
# Only ggplot implementation is available.
xgb.ggplot.shap.summary(data, shap_contrib, features, top_n, model, trees, target_class, approxcontrib, subsample)
}
#' Prepare data for SHAP plots. To be used in xgb.plot.shap, xgb.plot.shap.summary, etc.
#' Internal utility function.
#' Prepare data for SHAP plots
#'
#' Internal function used in [xgb.plot.shap()], [xgb.plot.shap.summary()], etc.
#'
#' @inheritParams xgb.plot.shap
#' @param max_observations Maximum number of observations to consider.
#' @keywords internal
#' @noRd
#'
#' @return A list containing: 'data', a matrix containing sample observations
#' and their feature values; 'shap_contrib', a matrix containing the SHAP contribution
#' values for these observations.
#' @return
#' A list containing:
#' - `data`: The matrix of feature values.
#' - `shap_contrib`: The matrix with corresponding SHAP values.
xgb.shap.data <- function(data, shap_contrib = NULL, features = NULL, top_n = 1, model = NULL,
trees = NULL, target_class = NULL, approxcontrib = FALSE,
subsample = NULL, max_observations = 100000) {
@ -237,7 +303,11 @@ xgb.shap.data <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
if (is.character(features) && is.null(colnames(data)))
stop("either provide `data` with column names or provide `features` as column indices")
if (is.null(model$feature_names) && model$nfeatures != ncol(data))
model_feature_names <- NULL
if (is.null(features) && !is.null(model)) {
model_feature_names <- xgb.feature_names(model)
}
if (is.null(model_feature_names) && xgb.num_feature(model) != ncol(data))
stop("if model has no feature_names, columns in `data` must match features in model")
if (!is.null(subsample)) {
@ -266,14 +336,14 @@ xgb.shap.data <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
}
if (is.null(features)) {
if (!is.null(model$feature_names)) {
if (!is.null(model_feature_names)) {
imp <- xgb.importance(model = model, trees = trees)
} else {
imp <- xgb.importance(model = model, trees = trees, feature_names = colnames(data))
}
top_n <- top_n[1]
if (top_n < 1 | top_n > 100) stop("top_n: must be an integer within [1, 100]")
features <- imp$Feature[1:min(top_n, NROW(imp))]
if (top_n < 1 || top_n > 100) stop("top_n: must be an integer within [1, 100]")
features <- imp$Feature[seq_len(min(top_n, NROW(imp)))]
}
if (is.character(features)) {
features <- match(features, colnames(data))
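The switch from `|` to `||` above reflects R's distinction between vectorized and scalar logical operators; a quick sketch:

c(TRUE, FALSE) | c(FALSE, TRUE)  # vectorized: returns c(TRUE, TRUE)
TRUE || stop("never evaluated")  # scalar and short-circuiting: returns TRUE
# if () requires a length-one condition, so || and && are preferred there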

View File

@ -1,74 +1,109 @@
#' Plot a boosted tree model
#' Plot boosted trees
#'
#' Read a tree model text dump and plot the model.
#'
#' @param feature_names names of each feature as a \code{character} vector.
#' @param model produced by the \code{xgb.train} function.
#' @param trees an integer vector of tree indices that should be visualized.
#' If set to \code{NULL}, all trees of the model are included.
#' IMPORTANT: the tree index in xgboost model is zero-based
#' (e.g., use \code{trees = 0:2} for the first 3 trees in a model).
#' @param plot_width the width of the diagram in pixels.
#' @param plot_height the height of the diagram in pixels.
#' @param render a logical flag for whether the graph should be rendered (see Value).
#' @param model Object of class `xgb.Booster`. If it contains feature names (they can be set through
#' \link{setinfo}), they will be used in the output from this function.
#' @param trees An integer vector of tree indices that should be used.
#' The default (`NULL`) uses all trees.
#' Useful, e.g., in multiclass classification to get only
#' the trees of one class. *Important*: the tree index in XGBoost models
#' is zero-based (e.g., use `trees = 0:2` for the first three trees).
#' @param plot_width,plot_height Width and height of the graph in pixels.
#' The values are passed to [DiagrammeR::render_graph()].
#' @param render Should the graph be rendered or not? The default is `TRUE`.
#' @param show_node_id a logical flag for whether to show node IDs in the graph.
#' @param style Style to use for the plot. Options are:\itemize{
#' \item `"xgboost"`: will use the plot style defined in the core XGBoost library,
#' which is shared between different interfaces through the 'dot' format. This
#' style was not available before version 2.1.0 in R. It always plots the trees
#' vertically (from top to bottom).
#' \item `"R"`: will use the style defined from XGBoost's R interface, which predates
#' the introducition of the standardized style from the core library. It might plot
#' the trees horizontally (from left to right).
#' }
#'
#' Note that `style="xgboost"` is only supported when all of the following conditions are met:\itemize{
#' \item Only a single tree is being plotted.
#' \item Node IDs are not added to the graph.
#' \item The graph is being returned as `htmlwidget` (`render=TRUE`).
#' }
#' @param ... currently not used.
#'
#' @details
#'
#' The content of each node is organised that way:
#' When using `style="xgboost"`, the content of each node is visualized as follows:
#' - For non-terminal nodes, it will display the split condition (number or name if
#' available, and the condition that would decide to which node to go next).
#' - Those nodes will be connected to their children by arrows that indicate whether the
#' branch corresponds to the condition being met or not being met.
#' - Terminal (leaf) nodes contain the margin to add when ending there.
#'
#' \itemize{
#' \item Feature name.
#' \item \code{Cover}: The sum of second order gradient of training data classified to the leaf.
#' If it is square loss, this simply corresponds to the number of instances seen by a split
#' or collected by a leaf during training.
#' The deeper in the tree a node is, the lower this metric will be.
#' \item \code{Gain} (for split nodes): the information gain metric of a split
#' When using `style="R"`, the content of each node is visualized like this:
#' - *Feature name*.
#' - *Cover:* The sum of second order gradients of training data.
#' For the squared loss, this simply corresponds to the number of instances in the node.
#' The deeper in the tree, the lower the value.
#' - *Gain* (for split nodes): Information gain metric of a split
#' (corresponds to the importance of the node in the model).
#' \item \code{Value} (for leafs): the margin value that the leaf may contribute to prediction.
#' }
#' The tree root nodes also indicate the Tree index (0-based).
#' - *Value* (for leaves): Margin value that the leaf may contribute to the prediction.
#'
#' The tree root nodes also indicate the tree index (0-based).
#'
#' The "Yes" branches are marked by the "< split_value" label.
#' The branches that also used for missing values are marked as bold
#' The branches also used for missing values are marked as bold
#' (as in "carrying extra capacity").
#'
#' This function uses \href{http://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR.
#' This function uses [GraphViz](https://www.graphviz.org/) as DiagrammeR backend.
#'
#' @return
#'
#' When \code{render = TRUE}:
#' returns a rendered graph object which is an \code{htmlwidget} of class \code{grViz}.
#' Similar to ggplot objects, it needs to be printed to see it when not running from command line.
#'
#' When \code{render = FALSE}:
#' silently returns a graph object which is of DiagrammeR's class \code{dgr_graph}.
#' This could be useful if one wants to modify some of the graph attributes
#' before rendering the graph with \code{\link[DiagrammeR]{render_graph}}.
#' The value depends on the `render` parameter:
#' - If `render = TRUE` (default): Rendered graph object which is an htmlwidget of
#' class `grViz`. Similar to "ggplot" objects, it needs to be printed when not
#' running from the command line.
#' - If `render = FALSE`: Graph object which is of DiagrammeR's class `dgr_graph`.
#' This could be useful if one wants to modify some of the graph attributes
#' before rendering the graph with [DiagrammeR::render_graph()].
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.train, package = "xgboost")
#'
#' bst <- xgboost(
#' data = agaricus.train$data,
#' label = agaricus.train$label,
#' max_depth = 3,
#' eta = 1,
#' nthread = 2,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#'
#' # plot the first tree, using the style from xgboost's core library
#' # (this plot should look identical to the ones generated from other
#' # interfaces like the python package for xgboost)
#' xgb.plot.tree(model = bst, trees = 1, style = "xgboost")
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 3,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' # plot all the trees
#' xgb.plot.tree(model = bst)
#' xgb.plot.tree(model = bst, trees = NULL)
#'
#' # plot only the first tree and display the node ID:
#' xgb.plot.tree(model = bst, trees = 0, show_node_id = TRUE)
#'
#' \dontrun{
#' # Below is an example of how to save this plot to a file.
#' # Note that for `export_graph` to work, the DiagrammeRsvg and rsvg packages must also be installed.
#' # Note that for export_graph() to work, the {DiagrammeRsvg}
#' # and {rsvg} packages must also be installed.
#'
#' library(DiagrammeR)
#' gr <- xgb.plot.tree(model=bst, trees=0:1, render=FALSE)
#' export_graph(gr, 'tree.pdf', width=1500, height=1900)
#' export_graph(gr, 'tree.png', width=1500, height=1900)
#'
#' gr <- xgb.plot.tree(model = bst, trees = 0:1, render = FALSE)
#' export_graph(gr, "tree.pdf", width = 1500, height = 1900)
#' export_graph(gr, "tree.png", width = 1500, height = 1900)
#' }
#'
#' @export
xgb.plot.tree <- function(feature_names = NULL, model = NULL, trees = NULL, plot_width = NULL, plot_height = NULL,
render = TRUE, show_node_id = FALSE, ...){
xgb.plot.tree <- function(model = NULL, trees = NULL, plot_width = NULL, plot_height = NULL,
render = TRUE, show_node_id = FALSE, style = c("R", "xgboost"), ...) {
check.deprecation(...)
if (!inherits(model, "xgb.Booster")) {
stop("model: Has to be an object of class xgb.Booster")
@ -78,9 +113,20 @@ xgb.plot.tree <- function(feature_names = NULL, model = NULL, trees = NULL, plot
stop("DiagrammeR package is required for xgb.plot.tree", call. = FALSE)
}
dt <- xgb.model.dt.tree(feature_names = feature_names, model = model, trees = trees)
style <- as.character(head(style, 1L))
stopifnot(style %in% c("R", "xgboost"))
if (style == "xgboost") {
if (NROW(trees) != 1L || !render || show_node_id) {
stop("style='xgboost' is only supported for single, rendered tree, without node IDs.")
}
dt[, label := paste0(Feature, "\nCover: ", Cover, ifelse(Feature == "Leaf", "\nValue: ", "\nGain: "), Quality)]
txt <- xgb.dump(model, dump_format = "dot")
return(DiagrammeR::grViz(txt[[trees + 1]], width = plot_width, height = plot_height))
}
dt <- xgb.model.dt.tree(model = model, trees = trees)
dt[, label := paste0(Feature, "\nCover: ", Cover, ifelse(Feature == "Leaf", "\nValue: ", "\nGain: "), Gain)]
if (show_node_id)
dt[, label := paste0(ID, ": ", label)]
dt[Node == 0, label := paste0("Tree ", Tree, "\n", label)]
@ -98,18 +144,22 @@ xgb.plot.tree <- function(feature_names = NULL, model = NULL, trees = NULL, plot
data = dt$Feature,
fontcolor = "black")
edges <- DiagrammeR::create_edge_df(
from = match(rep(dt[Feature != "Leaf", c(ID)], 2), dt$ID),
to = match(dt[Feature != "Leaf", c(Yes, No)], dt$ID),
label = c(
dt[Feature != "Leaf", paste("<", Split)],
rep("", nrow(dt[Feature != "Leaf"]))
),
style = c(
dt[Feature != "Leaf", ifelse(Missing == Yes, "bold", "solid")],
dt[Feature != "Leaf", ifelse(Missing == No, "bold", "solid")]
),
rel = "leading_to")
if (nrow(dt[Feature != "Leaf"]) != 0) {
edges <- DiagrammeR::create_edge_df(
from = match(rep(dt[Feature != "Leaf", c(ID)], 2), dt$ID),
to = match(dt[Feature != "Leaf", c(Yes, No)], dt$ID),
label = c(
dt[Feature != "Leaf", paste("<", Split)],
rep("", nrow(dt[Feature != "Leaf"]))
),
style = c(
dt[Feature != "Leaf", ifelse(Missing == Yes, "bold", "solid")],
dt[Feature != "Leaf", ifelse(Missing == No, "bold", "solid")]
),
rel = "leading_to")
} else {
edges <- NULL
}
graph <- DiagrammeR::create_graph(
nodes_df = nodes,
@ -143,4 +193,4 @@ xgb.plot.tree <- function(feature_names = NULL, model = NULL, trees = NULL, plot
# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by data.table...
globalVariables(c("Feature", "ID", "Cover", "Quality", "Split", "Yes", "No", "Missing", ".", "shape", "filledcolor", "label"))
globalVariables(c("Feature", "ID", "Cover", "Gain", "Split", "Yes", "No", "Missing", ".", "shape", "filledcolor", "label"))

View File

@ -1,12 +1,24 @@
#' Save xgboost model to binary file
#'
#' Save xgboost model to a file in binary format.
#' Save xgboost model to a file in binary or JSON format.
#'
#' @param model model object of \code{xgb.Booster} class.
#' @param fname name of the file to write.
#' @param model Model object of \code{xgb.Booster} class.
#' @param fname Name of the file to write.
#'
#' Note that the extension of this file name determines the serialization format to use, as sketched after this list:\itemize{
#' \item Extension ".ubj" will use the universal binary JSON format (recommended).
#' This format uses binary types for e.g. floating point numbers, thereby preventing any loss
#' of precision when converting to a human-readable JSON text or similar.
#' \item Extension ".json" will use plain JSON, which is a human-readable format.
#' \item Extension ".deprecated" will use a \bold{deprecated} binary format. This format will
#' not be able to save attributes introduced after v1 of XGBoost, such as the "best_iteration"
#' attribute that boosters might keep, nor feature names or user-specified attributes.
#' \item If the format is not specified through one of the file extensions above, it will
#' default to UBJ.
#' }
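A minimal sketch of the extension-based format selection described above, assuming a fitted `xgb.Booster` named `bst`:

xgb.save(bst, file.path(tempdir(), "model.ubj"))         # universal binary JSON (recommended)
xgb.save(bst, file.path(tempdir(), "model.json"))        # human-readable JSON
xgb.save(bst, file.path(tempdir(), "model.deprecated"))  # old binary format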
#'
#' @details
#' This methods allows to save a model in an xgboost-internal binary format which is universal
#' This method allows saving a model in an xgboost-internal binary or text format which is universal
#' among the various xgboost interfaces. In R, the saved model file can be read in later
#' using either the \code{\link{xgb.load}} function or the \code{xgb_model} parameter
#' of \code{\link{xgb.train}}.
@ -14,25 +26,36 @@
#' Note: a model can also be saved as an R-object (e.g., by using \code{\link[base]{saveRDS}}
#' or \code{\link[base]{save}}). However, it would then only be compatible with R, and
#' corresponding R-methods would need to be used to load it. Moreover, persisting the model with
#' \code{\link[base]{readRDS}} or \code{\link[base]{save}}) will cause compatibility problems in
#' \code{\link[base]{saveRDS}} or \code{\link[base]{save}} might cause compatibility problems in
#' future versions of XGBoost. Consult \code{\link{a-compatibility-note-for-saveRDS-save}} to learn
#' how to persist models in a future-proof way, i.e. to make the model accessible in future
#' releases of XGBoost.
#'
#' @seealso
#' \code{\link{xgb.load}}, \code{\link{xgb.Booster.complete}}.
#' \code{\link{xgb.load}}
#'
#' @examples
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#'
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#'
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' xgb.save(bst, 'xgb.model')
#' bst <- xgb.load('xgb.model')
#' if (file.exists('xgb.model')) file.remove('xgb.model')
#' pred <- predict(bst, test$data)
#' bst <- xgb.train(
#' data = xgb.DMatrix(train$data, label = train$label),
#' max_depth = 2,
#' eta = 1,
#' nthread = nthread,
#' nrounds = 2,
#' objective = "binary:logistic"
#' )
#' fname <- file.path(tempdir(), "xgb.ubj")
#' xgb.save(bst, fname)
#' bst <- xgb.load(fname)
#' @export
xgb.save <- function(model, fname) {
if (typeof(fname) != "character")
@ -41,8 +64,7 @@ xgb.save <- function(model, fname) {
stop("model must be xgb.Booster.",
if (inherits(model, "xgb.DMatrix")) " Use xgb.DMatrix.save to save an xgb.DMatrix object." else "")
}
model <- xgb.Booster.complete(model, saveraw = FALSE)
fname <- path.expand(fname)
.Call(XGBoosterSaveModel_R, model$handle, fname[1])
.Call(XGBoosterSaveModel_R, xgb.get.handle(model), enc2utf8(fname[1]))
return(TRUE)
}

View File

@ -4,20 +4,33 @@
#' Save xgboost model from xgboost or xgb.train
#'
#' @param model the model object.
#' @param raw_format The format for encoding the booster. Available options are
#' \itemize{
#' \item \code{json}: Encode the booster into JSON text document.
#' \item \code{ubj}: Encode the booster into Universal Binary JSON.
#' \item \code{deprecated}: Encode the booster into old customized binary format.
#' }
#'
#' @examples
#' \dontshow{RhpcBLASctl::omp_set_num_threads(1)}
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#'
#' ## Keep the number of threads to 2 for examples
#' nthread <- 2
#' data.table::setDTthreads(nthread)
#'
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' bst <- xgb.train(data = xgb.DMatrix(train$data, label = train$label), max_depth = 2,
#'                  eta = 1, nthread = nthread, nrounds = 2, objective = "binary:logistic")
#'
#' raw <- xgb.save.raw(bst)
#' bst <- xgb.load.raw(raw)
#' pred <- predict(bst, test$data)
#'
#' @export
xgb.save.raw <- function(model) {
xgb.save.raw <- function(model, raw_format = "ubj") {
handle <- xgb.get.handle(model)
.Call(XGBoosterModelToRaw_R, handle)
args <- list(format = raw_format)
.Call(XGBoosterSaveModelToRaw_R, handle, jsonlite::toJSON(args, auto_unbox = TRUE))
}
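A short sketch contrasting the encodings, assuming the `bst` from the example above:

raw_ubj  <- xgb.save.raw(bst)                       # default: "ubj"
raw_json <- xgb.save.raw(bst, raw_format = "json")  # JSON text document
bst2 <- xgb.load.raw(raw_json)                      # either encoding loads back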

View File

@ -1,21 +0,0 @@
#' Serialize the booster instance into R's raw vector. The serialization method differs
#' from \code{\link{xgb.save.raw}} as the latter one saves only the model but not
#' parameters. This serialization format is not stable across different xgboost versions.
#'
#' @param booster the booster instance
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' raw <- xgb.serialize(bst)
#' bst <- xgb.unserialize(raw)
#'
#' @export
xgb.serialize <- function(booster) {
handle <- xgb.get.handle(booster)
.Call(XGBoosterSerializeToBuffer_R, handle)
}

View File

@ -18,17 +18,37 @@
#' 2.1. Parameters for Tree Booster
#'
#' \itemize{
#' \item \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1} when it is added to the current approximation. Used to prevent overfitting by making the boosting process more conservative. Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model more robust to overfitting but slower to compute. Default: 0.3
#' \item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. the larger, the more conservative the algorithm will be.
#' \item{ \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1}
#' when it is added to the current approximation.
#' Used to prevent overfitting by making the boosting process more conservative.
#' Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model
#' more robust to overfitting but slower to compute. Default: 0.3}
#' \item{ \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree.
#' the larger, the more conservative the algorithm will be.}
#' \item \code{max_depth} maximum depth of a tree. Default: 6
#' \item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
#' \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
#' \item{\code{min_child_weight} minimum sum of instance weight (hessian) needed in a child.
#' If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight,
#' then the building process will give up further partitioning.
#' In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node.
#' The larger, the more conservative the algorithm will be. Default: 1}
#' \item{ \code{subsample} subsample ratio of the training instance.
#' Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees
#' and this will prevent overfitting. It makes computation shorter (because less data to analyse).
#' It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1}
#' \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
#' \item \code{lambda} L2 regularization term on weights. Default: 1
#' \item \code{alpha} L1 regularization term on weights. (there is no L1 reg on bias because it is not important). Default: 0
#' \item \code{num_parallel_tree} Experimental parameter. number of trees to grow per round. Useful to test Random Forest through XGBoost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
#' \item \code{monotone_constraints} A numerical vector consists of \code{1}, \code{0} and \code{-1} with its length equals to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.
#' \item \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions. Each item of the list represents one permitted interaction where specified features are allowed to interact with each other. Feature index values should start from \code{0} (\code{0} references the first column). Leave argument unspecified for no interaction constraints.
#' \item{ \code{num_parallel_tree} Experimental parameter. number of trees to grow per round.
#' Useful to test Random Forest through XGBoost
#' (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly.
#' Default: 1}
#' \item{ \code{monotone_constraints} A numerical vector consists of \code{1}, \code{0} and \code{-1} with its length
#' equals to the number of features in the training data.
#' \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.}
#' \item{ \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions.
#' Each item of the list represents one permitted interaction where specified features are allowed to interact with each other.
#' Feature index values should start from \code{0} (\code{0} references the first column).
#' Leave argument unspecified for no interaction constraints.}
#' }
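A hedged sketch of a parameter list combining the tree-booster settings above (the values are illustrative, not recommendations):

params <- list(
  eta = 0.1,                          # lower learning rate, pair with larger nrounds
  gamma = 0,                          # minimum loss reduction required to split
  max_depth = 6,
  min_child_weight = 1,
  subsample = 0.8,                    # row subsampling per tree
  colsample_bytree = 0.8,             # column subsampling per tree
  lambda = 1,                         # L2 regularization
  alpha = 0,                          # L1 regularization
  monotone_constraints = c(1, 0, -1)  # for a hypothetical three-feature dataset
)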
#'
#' 2.2. Parameters for Linear Booster
@ -42,41 +62,65 @@
#' 3. Task Parameters
#'
#' \itemize{
#' \item \code{objective} specify the learning task and the corresponding learning objective, users can pass a self-defined function to it. The default objective options are below:
#' \item{ \code{objective} specify the learning task and the corresponding learning objective, users can pass a self-defined function to it.
#' The default objective options are below:
#' \itemize{
#' \item \code{reg:squarederror} Regression with squared loss (Default).
#' \item \code{reg:squaredlogerror}: regression with squared log loss \eqn{1/2 * (log(pred + 1) - log(label + 1))^2}. All inputs are required to be greater than -1. Also, see metric rmsle for possible issue with this objective.
#' \item{ \code{reg:squaredlogerror}: regression with squared log loss \eqn{1/2 * (log(pred + 1) - log(label + 1))^2}.
#' All inputs are required to be greater than -1.
#' Also, see metric rmsle for possible issue with this objective.}
#' \item \code{reg:logistic} logistic regression.
#' \item \code{reg:pseudohubererror}: regression with Pseudo Huber loss, a twice differentiable alternative to absolute loss.
#' \item \code{binary:logistic} logistic regression for binary classification. Output probability.
#' \item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation.
#' \item \code{binary:hinge}: hinge loss for binary classification. This makes predictions of 0 or 1, rather than producing probabilities.
#' \item \code{count:poisson}: Poisson regression for count data, output mean of Poisson distribution. \code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization).
#' \item \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored). Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional hazard function \code{h(t) = h0(t) * HR)}.
#' \item \code{survival:aft}: Accelerated failure time model for censored survival time data. See \href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time} for details.
#' \item{ \code{count:poisson}: Poisson regression for count data, output mean of Poisson distribution.
#' \code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization).}
#' \item{ \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored).
#' Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional
#' hazard function \code{h(t) = h0(t) * HR}).}
#' \item{ \code{survival:aft}: Accelerated failure time model for censored survival time data. See
#' \href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time}
#' for details.}
#' \item \code{aft_loss_distribution}: Probability Density Function used by \code{survival:aft} and \code{aft-nloglik} metric.
#' \item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class - 1}.
#' \item \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class.
#' \item{ \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective.
#' Class is represented by a number and should be from 0 to \code{num_class - 1}.}
#' \item{ \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be
#' further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging
#' to each class.}
#' \item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss.
#' \item \code{rank:ndcg}: Use LambdaMART to perform list-wise ranking where \href{https://en.wikipedia.org/wiki/Discounted_cumulative_gain}{Normalized Discounted Cumulative Gain (NDCG)} is maximized.
#' \item \code{rank:map}: Use LambdaMART to perform list-wise ranking where \href{https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision}{Mean Average Precision (MAP)} is maximized.
#' \item \code{reg:gamma}: gamma regression with log-link. Output is a mean of gamma distribution. It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be \href{https://en.wikipedia.org/wiki/Gamma_distribution#Applications}{gamma-distributed}.
#' \item \code{reg:tweedie}: Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be \href{https://en.wikipedia.org/wiki/Tweedie_distribution#Applications}{Tweedie-distributed}.
#' \item{ \code{rank:ndcg}: Use LambdaMART to perform list-wise ranking where
#' \href{https://en.wikipedia.org/wiki/Discounted_cumulative_gain}{Normalized Discounted Cumulative Gain (NDCG)} is maximized.}
#' \item{ \code{rank:map}: Use LambdaMART to perform list-wise ranking where
#' \href{https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision}{Mean Average Precision (MAP)}
#' is maximized.}
#' \item{ \code{reg:gamma}: gamma regression with log-link.
#' Output is a mean of gamma distribution.
#' It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be
#' \href{https://en.wikipedia.org/wiki/Gamma_distribution#Applications}{gamma-distributed}.}
#' \item{ \code{reg:tweedie}: Tweedie regression with log-link.
#' It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be
#' \href{https://en.wikipedia.org/wiki/Tweedie_distribution#Applications}{Tweedie-distributed}.}
#' }
#' }
#' \item \code{base_score} the initial prediction score of all instances, global bias. Default: 0.5
#' \item \code{eval_metric} evaluation metrics for validation data. Users can pass a self-defined function to it. Default: metric will be assigned according to objective(rmse for regression, and error for classification, mean average precision for ranking). List is provided in detail section.
#' \item{ \code{eval_metric} evaluation metrics for validation data.
#' Users can pass a self-defined function to it.
#' Default: metric will be assigned according to objective
#' (rmse for regression, and error for classification, mean average precision for ranking).
#' List is provided in detail section.}
#' }
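As an illustration of the `multi:softprob` output layout described in the list above (a sketch assuming a fitted multiclass booster `mbst`, data matrix `x`, and `nclass` classes):

pred <- predict(mbst, x)                           # flat vector of ndata * nclass probabilities
prob <- matrix(pred, ncol = nclass, byrow = TRUE)  # reshape: one row per data point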
#'
#' @param data training dataset. \code{xgb.train} accepts only an \code{xgb.DMatrix} as the input.
#' \code{xgboost}, in addition, also accepts \code{matrix}, \code{dgCMatrix}, or name of a local data file.
#' @param nrounds max number of boosting iterations.
#' @param watchlist named list of xgb.DMatrix datasets to use for evaluating model performance.
#' @param evals Named list of `xgb.DMatrix` datasets to use for evaluating model performance.
#' Metrics specified in either \code{eval_metric} or \code{feval} will be computed for each
#' of these datasets during each boosting iteration, and stored in the end as a field named
#' \code{evaluation_log} in the resulting object. When either \code{verbose>=1} or
#' \code{\link{cb.print.evaluation}} callback is engaged, the performance results are continuously
#' \code{\link{xgb.cb.print.evaluation}} callback is engaged, the performance results are continuously
#' printed out during the training.
#' E.g., specifying \code{watchlist=list(validation1=mat1, validation2=mat2)} allows to track
#' E.g., specifying \code{evals=list(validation1=mat1, validation2=mat2)} allows to track
#' the performance of each round's model on mat1 and mat2.
#' @param obj customized objective function. Returns gradient and second order
#' gradient with given prediction and dtrain.
@ -86,28 +130,33 @@
#' @param verbose If 0, xgboost will stay silent. If 1, it will print information about performance.
#' If 2, some additional information will be printed out.
#' Note that setting \code{verbose > 0} automatically engages the
#' \code{cb.print.evaluation(period=1)} callback function.
#' \code{xgb.cb.print.evaluation(period=1)} callback function.
#' @param print_every_n Print each n-th iteration evaluation messages when \code{verbose>0}.
#' Default is 1 which means all messages are printed. This parameter is passed to the
#' \code{\link{cb.print.evaluation}} callback.
#' \code{\link{xgb.cb.print.evaluation}} callback.
#' @param early_stopping_rounds If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' doesn't improve for \code{k} rounds.
#' Setting this parameter engages the \code{\link{cb.early.stop}} callback.
#' Setting this parameter engages the \code{\link{xgb.cb.early.stop}} callback.
#' @param maximize If \code{feval} and \code{early_stopping_rounds} are set,
#' then this parameter must be set as well.
#' When it is \code{TRUE}, it means the larger the evaluation score the better.
#' This parameter is passed to the \code{\link{cb.early.stop}} callback.
#' This parameter is passed to the \code{\link{xgb.cb.early.stop}} callback.
#' @param save_period when it is non-NULL, model is saved to disk after every \code{save_period} rounds,
#' 0 means save at the end. The saving is handled by the \code{\link{cb.save.model}} callback.
#' 0 means save at the end. The saving is handled by the \code{\link{xgb.cb.save.model}} callback.
#' @param save_name the name or path for periodically saved model file.
#' @param xgb_model a previously built model to continue the training from.
#' Could be either an object of class \code{xgb.Booster}, or its raw data, or the name of a
#' file with a previously saved model.
#' @param callbacks a list of callback functions to perform various task during boosting.
#' See \code{\link{callbacks}}. Some of the callbacks are automatically created depending on the
#' See \code{\link{xgb.Callback}}. Some of the callbacks are automatically created depending on the
#' parameters' values. User can provide either existing or their own callback methods in order
#' to customize the training process.
#'
#' Note that some callbacks might try to leave attributes in the resulting model object,
#' such as an evaluation log (a `data.table` object) - be aware that these objects are kept
#' as R attributes, and thus do not get saved when using XGBoost's own serializers like
#' \link{xgb.save} (but are kept when using R serializers like \link{saveRDS}).
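A small sketch of this behavior, assuming a booster `bst` trained with an `evals` list:

ev_log <- attributes(bst)$evaluation_log        # data.table left by the log callback
saveRDS(bst, file.path(tempdir(), "bst.rds"))   # R serializer: attribute survives
xgb.save(bst, file.path(tempdir(), "bst.ubj"))  # XGBoost serializer: attribute is not stored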
#' @param ... other parameters to pass to \code{params}.
#' @param label vector of response values. Should not be provided when data is
#' a local data file name or an \code{xgb.DMatrix}.
@ -116,15 +165,24 @@
#' This parameter is only used when input is a dense matrix.
#' @param weight a vector indicating the weight for each row of the input.
#'
#' @return
#' An object of class \code{xgb.Booster}.
#'
#' @details
#' These are the training functions for \code{xgboost}.
#'
#' The \code{xgb.train} interface supports advanced features such as \code{watchlist},
#' The \code{xgb.train} interface supports advanced features such as \code{evals},
#' customized objective and evaluation metric functions, therefore it is more flexible
#' than the \code{xgboost} interface.
#'
#' Parallelization is automatically enabled if \code{OpenMP} is present.
#' Number of threads can also be manually specified via \code{nthread} parameter.
#' Number of threads can also be manually specified via the \code{nthread}
#' parameter.
#'
#' While in other interfaces the random seed defaults to zero, in R, if a parameter `seed`
#' is not manually supplied, a random seed is generated through R's own random number
#' generator, whose seed is in turn controllable through `set.seed`. If `seed` is passed,
#' it will override the RNG from R.
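A sketch of this seed behavior, assuming `params` and `dtrain` as in the examples below:

set.seed(123)  # controls the seed drawn by xgb.train() when params$seed is absent
bst1 <- xgb.train(params, dtrain, nrounds = 2)
params$seed <- 42  # an explicit seed overrides R's RNG
bst2 <- xgb.train(params, dtrain, nrounds = 2)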
#'
#' The evaluation metric is chosen automatically by XGBoost (according to the objective)
#' when the \code{eval_metric} parameter is not provided.
@ -141,45 +199,39 @@
#' \item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
#' \item \code{mae} Mean absolute error
#' \item \code{mape} Mean absolute percentage error
#' \item \code{auc} Area under the curve. \url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve} for ranking evaluation.
#' \item{ \code{auc} Area under the curve.
#' \url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve} for ranking evaluation.}
#' \item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
#' \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{https://en.wikipedia.org/wiki/NDCG}
#' }
#'
#' The following callbacks are automatically created when certain parameters are set:
#' \itemize{
#' \item \code{cb.print.evaluation} is turned on when \code{verbose > 0};
#' \item \code{xgb.cb.print.evaluation} is turned on when \code{verbose > 0};
#' and the \code{print_every_n} parameter is passed to it.
#' \item \code{cb.evaluation.log} is on when \code{watchlist} is present.
#' \item \code{cb.early.stop}: when \code{early_stopping_rounds} is set.
#' \item \code{cb.save.model}: when \code{save_period > 0} is set.
#' \item \code{xgb.cb.evaluation.log} is on when \code{evals} is present.
#' \item \code{xgb.cb.early.stop}: when \code{early_stopping_rounds} is set.
#' \item \code{xgb.cb.save.model}: when \code{save_period > 0} is set.
#' }
#'
#' @return
#' An object of class \code{xgb.Booster} with the following elements:
#' \itemize{
#' \item \code{handle} a handle (pointer) to the xgboost model in memory.
#' \item \code{raw} a cached memory dump of the xgboost model saved as R's \code{raw} type.
#' \item \code{niter} number of boosting iterations.
#' \item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
#' first column corresponding to iteration number and the rest corresponding to evaluation
#' metrics' values. It is created by the \code{\link{cb.evaluation.log}} callback.
#' \item \code{call} a function call.
#' \item \code{params} parameters that were passed to the xgboost library. Note that it does not
#' capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
#' \item \code{callbacks} callback functions that were either automatically assigned or
#' explicitly passed.
#' \item \code{best_iteration} iteration number with the best evaluation metric value
#' (only available with early stopping).
#' \item \code{best_score} the best evaluation metric value during early stopping.
#' (only available with early stopping).
#' \item \code{feature_names} names of the training dataset features
#' (only when column names were defined in training data).
#' \item \code{nfeatures} number of features in training data.
#' }
#' Note that objects of type `xgb.Booster` as returned by this function behave a bit differently
#' from typical R objects (it's an 'altrep' list class), and it makes a separation between
#' internal booster attributes (restricted to jsonifyable data), accessed through \link{xgb.attr}
#' and shared between interfaces through serialization functions like \link{xgb.save}; and
#' R-specific attributes (typically the result from a callback), accessed through \link{attributes}
#' and \link{attr}, which are otherwise
#' only used in the R interface, only kept when using R's serializers like \link{saveRDS}, and
#' not anyhow used by functions like \link{predict.xgb.Booster}.
#'
#' Be aware that one such R attribute that is automatically added is `params` - this attribute
#' is assigned from the `params` argument to this function, and is only meant to serve as a
#' reference for what went into the booster, but is not used in other methods that take a booster
#' object - so for example, changing the booster's configuration requires calling `xgb.config<-`
#' or `xgb.parameters<-`, while simply modifying `attributes(model)$params$<...>` will have no
#' effect elsewhere.
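To make the distinction concrete, a sketch with hypothetical attribute names:

xgb.attr(bst, "note") <- "stored in the booster, shared via xgb.save()"  # internal attribute
attr(bst, "r_note") <- list(anything = TRUE)  # R-only attribute, kept by saveRDS() alone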
#'
#' @seealso
#' \code{\link{callbacks}},
#' \code{\link{xgb.Callback}},
#' \code{\link{predict.xgb.Booster}},
#' \code{\link{xgb.cv}}
#'
@ -192,17 +244,25 @@
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#'
#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label))
#' watchlist <- list(train = dtrain, eval = dtest)
#' ## Keep the number of threads to 1 for examples
#' nthread <- 1
#' data.table::setDTthreads(nthread)
#'
#' dtrain <- with(
#' agaricus.train, xgb.DMatrix(data, label = label, nthread = nthread)
#' )
#' dtest <- with(
#' agaricus.test, xgb.DMatrix(data, label = label, nthread = nthread)
#' )
#' evals <- list(train = dtrain, eval = dtest)
#'
#' ## A simple xgb.train example:
#' param <- list(max_depth = 2, eta = 1, verbose = 0, nthread = 2,
#' param <- list(max_depth = 2, eta = 1, nthread = nthread,
#' objective = "binary:logistic", eval_metric = "auc")
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist)
#' bst <- xgb.train(param, dtrain, nrounds = 2, evals = evals, verbose = 0)
#'
#'
#' ## An xgb.train example where custom objective and evaluation metric are used:
#' ## An xgb.train example where custom objective and evaluation metric are
#' ## used:
#' logregobj <- function(preds, dtrain) {
#' labels <- getinfo(dtrain, "label")
#' preds <- 1/(1 + exp(-preds))
@ -218,40 +278,40 @@
#'
#' # These functions could be used by passing them either:
#' # as 'objective' and 'eval_metric' parameters in the params list:
#' param <- list(max_depth = 2, eta = 1, verbose = 0, nthread = 2,
#' param <- list(max_depth = 2, eta = 1, nthread = nthread,
#' objective = logregobj, eval_metric = evalerror)
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist)
#' bst <- xgb.train(param, dtrain, nrounds = 2, evals = evals, verbose = 0)
#'
#' # or through the ... arguments:
#' param <- list(max_depth = 2, eta = 1, verbose = 0, nthread = 2)
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
#' param <- list(max_depth = 2, eta = 1, nthread = nthread)
#' bst <- xgb.train(param, dtrain, nrounds = 2, evals = evals, verbose = 0,
#' objective = logregobj, eval_metric = evalerror)
#'
#' # or as dedicated 'obj' and 'feval' parameters of xgb.train:
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
#' bst <- xgb.train(param, dtrain, nrounds = 2, evals = evals,
#' obj = logregobj, feval = evalerror)
#'
#'
#' ## An xgb.train example of using variable learning rates at each iteration:
#' param <- list(max_depth = 2, eta = 1, verbose = 0, nthread = 2,
#' param <- list(max_depth = 2, eta = 1, nthread = nthread,
#' objective = "binary:logistic", eval_metric = "auc")
#' my_etas <- list(eta = c(0.5, 0.1))
#' bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
#' callbacks = list(cb.reset.parameters(my_etas)))
#' bst <- xgb.train(param, dtrain, nrounds = 2, evals = evals, verbose = 0,
#' callbacks = list(xgb.cb.reset.parameters(my_etas)))
#'
#' ## Early stopping:
#' bst <- xgb.train(param, dtrain, nrounds = 25, watchlist,
#' bst <- xgb.train(param, dtrain, nrounds = 25, evals = evals,
#' early_stopping_rounds = 3)
#'
#' ## An 'xgboost' interface example:
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
#' max_depth = 2, eta = 1, nthread = 2, nrounds = 2,
#' max_depth = 2, eta = 1, nthread = nthread, nrounds = 2,
#' objective = "binary:logistic")
#' pred <- predict(bst, agaricus.test$data)
#'
#' @rdname xgb.train
#' @export
xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
xgb.train <- function(params = list(), data, nrounds, evals = list(),
obj = NULL, feval = NULL, verbose = 1, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL,
save_period = NULL, save_name = "xgboost.model",
@ -264,71 +324,78 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
check.custom.obj()
check.custom.eval()
# data & watchlist checks
# data & evals checks
dtrain <- data
if (!inherits(dtrain, "xgb.DMatrix"))
stop("second argument dtrain must be xgb.DMatrix")
if (length(watchlist) > 0) {
if (typeof(watchlist) != "list" ||
!all(vapply(watchlist, inherits, logical(1), what = 'xgb.DMatrix')))
stop("watchlist must be a list of xgb.DMatrix elements")
evnames <- names(watchlist)
if (length(evals) > 0) {
if (typeof(evals) != "list" ||
!all(vapply(evals, inherits, logical(1), what = 'xgb.DMatrix')))
stop("'evals' must be a list of xgb.DMatrix elements")
evnames <- names(evals)
if (is.null(evnames) || any(evnames == ""))
stop("each element of the watchlist must have a name tag")
stop("each element of 'evals' must have a name tag")
}
# Handle multiple evaluation metrics given as a list
for (m in params$eval_metric) {
params <- c(params, list(eval_metric = m))
}
# evaluation printing callback
params <- c(params)
print_every_n <- max(as.integer(print_every_n), 1L)
if (!has.callbacks(callbacks, 'cb.print.evaluation') &&
verbose) {
callbacks <- add.cb(callbacks, cb.print.evaluation(print_every_n))
params['validate_parameters'] <- TRUE
if (!("seed" %in% names(params))) {
params[["seed"]] <- sample(.Machine$integer.max, size = 1)
}
# evaluation log callback: it is automatically enabled when watchlist is provided
evaluation_log <- list()
if (!has.callbacks(callbacks, 'cb.evaluation.log') &&
length(watchlist) > 0) {
callbacks <- add.cb(callbacks, cb.evaluation.log())
# callbacks
tmp <- .process.callbacks(callbacks, is_cv = FALSE)
callbacks <- tmp$callbacks
cb_names <- tmp$cb_names
rm(tmp)
# Early stopping callback (should always come first)
if (!is.null(early_stopping_rounds) && !("early_stop" %in% cb_names)) {
callbacks <- add.callback(
callbacks,
xgb.cb.early.stop(
early_stopping_rounds,
maximize = maximize,
verbose = verbose
),
as_first_elt = TRUE
)
}
# evaluation printing callback
print_every_n <- max(as.integer(print_every_n), 1L)
if (verbose && !("print_evaluation" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.print.evaluation(print_every_n))
}
# evaluation log callback: it is automatically enabled when 'evals' is provided
if (length(evals) && !("evaluation_log" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.evaluation.log())
}
# Model saving callback
if (!is.null(save_period) &&
!has.callbacks(callbacks, 'cb.save.model')) {
callbacks <- add.cb(callbacks, cb.save.model(save_period, save_name))
}
# Early stopping callback
stop_condition <- FALSE
if (!is.null(early_stopping_rounds) &&
!has.callbacks(callbacks, 'cb.early.stop')) {
callbacks <- add.cb(callbacks, cb.early.stop(early_stopping_rounds,
maximize = maximize, verbose = verbose))
}
# Sort the callbacks into categories
cb <- categorize.callbacks(callbacks)
params['validate_parameters'] <- TRUE
if (!is.null(params[['seed']])) {
warning("xgb.train: `seed` is ignored in R package. Use `set.seed()` instead.")
if (!is.null(save_period) && !("save_model" %in% cb_names)) {
callbacks <- add.callback(callbacks, xgb.cb.save.model(save_period, save_name))
}
# The tree updating process would need slightly different handling
is_update <- NVL(params[['process_type']], '.') == 'update'
# Construct a booster (either a new one or load from xgb_model)
handle <- xgb.Booster.handle(params, append(watchlist, dtrain), xgb_model)
bst <- xgb.handleToBooster(handle)
bst <- xgb.Booster(
params = params,
cachelist = append(evals, dtrain),
modelfile = xgb_model
)
niter_init <- bst$niter
bst <- bst$bst
.Call(
XGBoosterCopyInfoFromDMatrix_R,
xgb.get.handle(bst),
dtrain
)
# extract parameters that can affect the relationship b/w #trees and #iterations
num_class <- max(as.numeric(NVL(params[['num_class']], 1)), 1)
num_parallel_tree <- max(as.numeric(NVL(params[['num_parallel_tree']], 1)), 1)
# When the 'xgb_model' was set, find out how many boosting iterations it has
niter_init <- 0
if (!is.null(xgb_model)) {
niter_init <- as.numeric(xgb.attr(bst, 'niter')) + 1
if (length(niter_init) == 0) {
niter_init <- xgb.ntree(bst) %/% (num_parallel_tree * num_class)
}
}
if (is_update && nrounds > niter_init)
stop("nrounds cannot be larger than ", niter_init, " (nrounds of xgb_model)")
@ -336,49 +403,83 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
begin_iteration <- niter_skip + 1
end_iteration <- niter_skip + nrounds
.execute.cb.before.training(
callbacks,
bst,
dtrain,
evals,
begin_iteration,
end_iteration
)
# the main loop for boosting iterations
for (iteration in begin_iteration:end_iteration) {
for (f in cb$pre_iter) f()
.execute.cb.before.iter(
callbacks,
bst,
dtrain,
evals,
iteration
)
xgb.iter.update(bst$handle, dtrain, iteration - 1, obj)
xgb.iter.update(
bst = bst,
dtrain = dtrain,
iter = iteration - 1,
obj = obj
)
if (length(watchlist) > 0)
bst_evaluation <- xgb.iter.eval(bst$handle, watchlist, iteration - 1, feval)
xgb.attr(bst$handle, 'niter') <- iteration - 1
for (f in cb$post_iter) f()
if (stop_condition) break
}
for (f in cb$finalize) f(finalize = TRUE)
bst <- xgb.Booster.complete(bst, saveraw = TRUE)
# store the total number of boosting iterations
bst$niter <- end_iteration
# store the evaluation results
if (length(evaluation_log) > 0 &&
nrow(evaluation_log) > 0) {
# include the previous compatible history when available
if (inherits(xgb_model, 'xgb.Booster') &&
!is_update &&
!is.null(xgb_model$evaluation_log) &&
isTRUE(all.equal(colnames(evaluation_log),
colnames(xgb_model$evaluation_log)))) {
evaluation_log <- rbindlist(list(xgb_model$evaluation_log, evaluation_log))
bst_evaluation <- NULL
if (length(evals) > 0) {
bst_evaluation <- xgb.iter.eval(
bst = bst,
evals = evals,
iter = iteration - 1,
feval = feval
)
}
bst$evaluation_log <- evaluation_log
should_stop <- .execute.cb.after.iter(
callbacks,
bst,
dtrain,
evals,
iteration,
bst_evaluation
)
if (should_stop) break
}
bst$call <- match.call()
bst$params <- params
bst$callbacks <- callbacks
if (!is.null(colnames(dtrain)))
bst$feature_names <- colnames(dtrain)
bst$nfeatures <- ncol(dtrain)
cb_outputs <- .execute.cb.after.training(
callbacks,
bst,
dtrain,
evals,
iteration,
bst_evaluation
)
extra_attrs <- list(
call = match.call(),
params = params
)
curr_attrs <- attributes(bst)
if (NROW(curr_attrs)) {
curr_attrs <- curr_attrs[
setdiff(
names(curr_attrs),
c(names(extra_attrs), names(cb_outputs))
)
]
}
curr_attrs <- c(extra_attrs, curr_attrs)
if (NROW(cb_outputs)) {
curr_attrs <- c(curr_attrs, cb_outputs)
}
attributes(bst) <- curr_attrs
return(bst)
}
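After training, the rewritten xgb.train above stores its outputs as R attributes on the booster rather than as list fields. A minimal sketch of retrieving them (an illustration only, assuming a dtrain DMatrix and a named evals list as in the examples above):

params <- list(objective = "binary:logistic", nthread = 2)
bst <- xgb.train(params = params, data = dtrain, nrounds = 5,
                 evals = evals, verbose = 0)
attributes(bst)$call            # the matched call stored via 'extra_attrs' above
attributes(bst)$evaluation_log  # data.table written by xgb.cb.evaluation.log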

View File

@ -1,41 +0,0 @@
#' Load the instance back from \code{\link{xgb.serialize}}
#'
#' @param buffer the buffer containing booster instance saved by \code{\link{xgb.serialize}}
#' @param handle An \code{xgb.Booster.handle} object which will be overwritten with
#' the new deserialized object. Must be a null handle (e.g. when loading the model through
#' `readRDS`). If not provided, a new handle will be created.
#' @return An \code{xgb.Booster.handle} object.
#'
#' @export
xgb.unserialize <- function(buffer, handle = NULL) {
cachelist <- list()
if (is.null(handle)) {
handle <- .Call(XGBoosterCreate_R, cachelist)
} else {
if (!is.null.handle(handle))
stop("'handle' is not null/empty. Cannot overwrite existing handle.")
.Call(XGBoosterCreateInEmptyObj_R, cachelist, handle)
}
tryCatch(
.Call(XGBoosterUnserializeFromBuffer_R, handle, buffer),
error = function(e) {
error_msg <- conditionMessage(e)
m <- regexec("(src[\\\\/]learner.cc:[0-9]+): Check failed: (header == serialisation_header_)",
error_msg, perl = TRUE)
groups <- regmatches(error_msg, m)[[1]]
if (length(groups) == 3) {
warning(paste("The model had been generated by XGBoost version 1.0.0 or earlier and was ",
"loaded from a RDS file. We strongly ADVISE AGAINST using saveRDS() ",
"function, to ensure that your model can be read in current and upcoming ",
"XGBoost releases. Please use xgb.save() instead to preserve models for the ",
"long term. For more details and explanation, see ",
"https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html",
sep = ""))
.Call(XGBoosterLoadModelFromRaw_R, handle, buffer)
} else {
stop(e)
}
})
class(handle) <- "xgb.Booster.handle"
return (handle)
}

View File

@ -10,15 +10,21 @@ xgboost <- function(data = NULL, label = NULL, missing = NA, weight = NULL,
save_period = NULL, save_name = "xgboost.model",
xgb_model = NULL, callbacks = list(), ...) {
merged <- check.booster.params(params, ...)
dtrain <- xgb.get.DMatrix(data, label, missing, weight, nthread = merged$nthread)
dtrain <- xgb.get.DMatrix(
data = data,
label = label,
missing = missing,
weight = weight,
nthread = merged$nthread
)
watchlist <- list(train = dtrain)
evals <- list(train = dtrain)
bst <- xgb.train(params, dtrain, nrounds, watchlist, verbose = verbose, print_every_n = print_every_n,
bst <- xgb.train(params, dtrain, nrounds, evals, verbose = verbose, print_every_n = print_every_n,
early_stopping_rounds = early_stopping_rounds, maximize = maximize,
save_period = save_period, save_name = save_name,
xgb_model = xgb_model, callbacks = callbacks, ...)
return (bst)
return(bst)
}
#' Training part from Mushroom Data Set
@ -34,10 +40,10 @@ xgboost <- function(data = NULL, label = NULL, missing = NA, weight = NULL,
#' }
#'
#' @references
#' https://archive.ics.uci.edu/ml/datasets/Mushroom
#' <https://archive.ics.uci.edu/ml/datasets/Mushroom>
#'
#' Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
#' [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
#' <http://archive.ics.uci.edu/ml>. Irvine, CA: University of California,
#' School of Information and Computer Science.
#'
#' @docType data
@ -61,10 +67,10 @@ NULL
#' }
#'
#' @references
#' https://archive.ics.uci.edu/ml/datasets/Mushroom
#' <https://archive.ics.uci.edu/ml/datasets/Mushroom>
#'
#' Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
#' [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
#' <http://archive.ics.uci.edu/ml>. Irvine, CA: University of California,
#' School of Information and Computer Science.
#'
#' @docType data
@ -76,12 +82,8 @@ NULL
NULL
# Various imports
#' @importClassesFrom Matrix dgCMatrix dgeMatrix
#' @importFrom Matrix colSums
#' @importClassesFrom Matrix dgCMatrix dgRMatrix CsparseMatrix
#' @importFrom Matrix sparse.model.matrix
#' @importFrom Matrix sparseVector
#' @importFrom Matrix sparseMatrix
#' @importFrom Matrix t
#' @importFrom data.table data.table
#' @importFrom data.table is.data.table
#' @importFrom data.table as.data.table
@ -92,9 +94,13 @@ NULL
#' @importFrom data.table setnames
#' @importFrom jsonlite fromJSON
#' @importFrom jsonlite toJSON
#' @importFrom methods new
#' @importFrom utils object.size str tail
#' @importFrom stats coef
#' @importFrom stats predict
#' @importFrom stats median
#' @importFrom stats sd
#' @importFrom stats variable.names
#' @importFrom utils head
#' @importFrom graphics barplot
#' @importFrom graphics lines

R-package/configure (vendored): 1841 lines changed

File diff suppressed because it is too large

View File

@ -2,10 +2,25 @@
AC_PREREQ(2.69)
AC_INIT([xgboost],[0.6-3],[],[xgboost],[])
AC_INIT([xgboost],[2.1.0],[],[xgboost],[])
# Use this line to set CC variable to a C compiler
AC_PROG_CC
: ${R_HOME=`R RHOME`}
if test -z "${R_HOME}"; then
echo "could not determine R_HOME"
exit 1
fi
CXX17=`"${R_HOME}/bin/R" CMD config CXX17`
CXX17STD=`"${R_HOME}/bin/R" CMD config CXX17STD`
CXX="${CXX17} ${CXX17STD}"
CXXFLAGS=`"${R_HOME}/bin/R" CMD config CXXFLAGS`
CC=`"${R_HOME}/bin/R" CMD config CC`
CFLAGS=`"${R_HOME}/bin/R" CMD config CFLAGS`
CPPFLAGS=`"${R_HOME}/bin/R" CMD config CPPFLAGS`
LDFLAGS=`"${R_HOME}/bin/R" CMD config LDFLAGS`
AC_LANG(C++)
### Check whether backtrace() is part of libc or the external lib libexecinfo
AC_MSG_CHECKING([Backtrace lib])
@ -28,12 +43,19 @@ fi
if test `uname -s` = "Darwin"
then
OPENMP_CXXFLAGS='-Xclang -fopenmp'
OPENMP_LIB='-lomp'
if command -v brew &> /dev/null
then
HOMEBREW_LIBOMP_PREFIX=`brew --prefix libomp`
else
# Homebrew not found
HOMEBREW_LIBOMP_PREFIX=''
fi
OPENMP_CXXFLAGS="-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include"
OPENMP_LIB="-lomp -L${HOMEBREW_LIBOMP_PREFIX}/lib"
ac_pkg_openmp=no
AC_MSG_CHECKING([whether OpenMP will work in a package])
AC_LANG_CONFTEST([AC_LANG_PROGRAM([[#include <omp.h>]], [[ return (omp_get_max_threads() <= 1); ]])])
${CC} -o conftest conftest.c ${OPENMP_LIB} ${OPENMP_CXXFLAGS} 2>/dev/null && ./conftest && ac_pkg_openmp=yes
${CXX} -o conftest conftest.cpp ${CPPFLAGS} ${LDFLAGS} ${OPENMP_LIB} ${OPENMP_CXXFLAGS} 2>/dev/null && ./conftest && ac_pkg_openmp=yes
AC_MSG_RESULT([${ac_pkg_openmp}])
if test "${ac_pkg_openmp}" = no; then
OPENMP_CXXFLAGS=''

View File

@ -1,5 +1,4 @@
basic_walkthrough Basic feature walkthrough
caret_wrapper Use xgboost to train in caret library
custom_objective Customize loss function, and evaluation metric
boost_from_prediction Boosting from existing prediction
predict_first_ntree Predicting using first n trees

View File

@ -1,7 +1,6 @@
XGBoost R Feature Walkthrough
====
* [Basic walkthrough of wrappers](basic_walkthrough.R)
* [Train a xgboost model from caret library](caret_wrapper.R)
* [Customize loss function, and evaluation metric](custom_objective.R)
* [Boosting from existing prediction](boost_from_prediction.R)
* [Predicting using first n trees](predict_first_ntree.R)

View File

@ -55,6 +55,8 @@ print(paste("test-error=", err))
# save model to binary local file
xgb.save(bst, "xgboost.model")
# load binary model to R
# The function doesn't take an 'nthread' argument, but the thread count can be set like this:
RhpcBLASctl::omp_set_num_threads(1)
bst2 <- xgb.load("xgboost.model")
pred2 <- predict(bst2, test$data)
# pred2 should be identical to pred
@ -63,7 +65,7 @@ print(paste("sum(abs(pred2-pred))=", sum(abs(pred2 - pred))))
# save model to R's raw vector
raw <- xgb.save.raw(bst)
# load binary model to R
bst3 <- xgb.load(raw)
bst3 <- xgb.load.raw(raw)
pred3 <- predict(bst3, test$data)
# pred3 should be identical to pred
print(paste("sum(abs(pred3-pred))=", sum(abs(pred3 - pred))))
@ -72,17 +74,17 @@ print(paste("sum(abs(pred3-pred))=", sum(abs(pred3 - pred))))
# to use advanced features, we need to put data in xgb.DMatrix
dtrain <- xgb.DMatrix(data = train$data, label = train$label)
dtest <- xgb.DMatrix(data = test$data, label = test$label)
#---------------Using watchlist----------------
# watchlist is a list of xgb.DMatrix, each of them is tagged with name
watchlist <- list(train = dtrain, test = dtest)
# to train with watchlist, use xgb.train, which contains more advanced features
# watchlist allows us to monitor the evaluation result on all data in the list
print("Train xgboost using xgb.train with watchlist")
bst <- xgb.train(data = dtrain, max_depth = 2, eta = 1, nrounds = 2, watchlist = watchlist,
#---------------Using an evaluation set----------------
# 'evals' is a list of xgb.DMatrix, each of them is tagged with name
evals <- list(train = dtrain, test = dtest)
# to train with an evaluation set, use xgb.train, which contains more advanced features
# 'evals' argument allows us to monitor the evaluation result on all data in the list
print("Train xgboost using xgb.train with evaluation data")
bst <- xgb.train(data = dtrain, max_depth = 2, eta = 1, nrounds = 2, evals = evals,
nthread = 2, objective = "binary:logistic")
# we can change evaluation metrics, or use multiple evaluation metrics
print("train xgboost using xgb.train with watchlist, watch logloss and error")
bst <- xgb.train(data = dtrain, max_depth = 2, eta = 1, nrounds = 2, watchlist = watchlist,
print("train xgboost using xgb.train with evaluation data, watch logloss and error")
bst <- xgb.train(data = dtrain, max_depth = 2, eta = 1, nrounds = 2, evals = evals,
eval_metric = "error", eval_metric = "logloss",
nthread = 2, objective = "binary:logistic")
@ -90,7 +92,7 @@ bst <- xgb.train(data = dtrain, max_depth = 2, eta = 1, nrounds = 2, watchlist =
xgb.DMatrix.save(dtrain, "dtrain.buffer")
# to load it in, simply call xgb.DMatrix
dtrain2 <- xgb.DMatrix("dtrain.buffer")
bst <- xgb.train(data = dtrain2, max_depth = 2, eta = 1, nrounds = 2, watchlist = watchlist,
bst <- xgb.train(data = dtrain2, max_depth = 2, eta = 1, nrounds = 2, evals = evals,
nthread = 2, objective = "binary:logistic")
# information can be extracted from xgb.DMatrix using getinfo
label <- getinfo(dtest, "label")

View File

@ -5,14 +5,14 @@ data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
watchlist <- list(eval = dtest, train = dtrain)
evals <- list(eval = dtest, train = dtrain)
###
# advanced: start from an initial base prediction
#
print('start running example to start from an initial prediction')
# train xgboost for 1 round
param <- list(max_depth = 2, eta = 1, nthread = 2, objective = 'binary:logistic')
bst <- xgb.train(param, dtrain, 1, watchlist)
bst <- xgb.train(param, dtrain, 1, evals)
# Note: we need the margin value instead of transformed prediction in set_base_margin
# predicting with outputmargin = TRUE will always give you margin values before the logistic transformation
ptrain <- predict(bst, dtrain, outputmargin = TRUE)
@ -23,4 +23,4 @@ setinfo(dtrain, "base_margin", ptrain)
setinfo(dtest, "base_margin", ptest)
print('this is result of boost from initial prediction')
bst <- xgb.train(params = param, data = dtrain, nrounds = 1, watchlist = watchlist)
bst <- xgb.train(params = param, data = dtrain, nrounds = 1, evals = evals)

View File

@ -1,35 +0,0 @@
# install development version of caret library that contains xgboost models
devtools::install_github("topepo/caret/pkg/caret")
require(caret)
require(xgboost)
require(data.table)
require(vcd)
require(e1071)
# Load Arthritis dataset in memory.
data(Arthritis)
# Create a copy of the dataset with the data.table package (data.table is 100% compliant with R data frames but its syntax is a lot more consistent and its performance is really good).
df <- data.table(Arthritis, keep.rownames = FALSE)
# Let's add some new categorical features to see if it helps. Of course these features are highly correlated to the Age feature. Usually that's not a good thing in ML, but tree algorithms (including boosted trees) are able to select the best features, even in the case of highly correlated features.
# For the first feature we create groups of age by rounding the real age. Note that we transform it to a factor (categorical data) so the algorithm treats them as independent values.
df[, AgeDiscret := as.factor(round(Age / 10, 0))]
# Here is an even stronger simplification of the real age with an arbitrary split at 30 years old. I chose this value based on nothing. We will see later if simplifying the information based on arbitrary values is a good strategy (I am sure you already have an idea of how well it will work!).
df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]
# We remove ID as there is nothing to learn from this feature (it will just add some noise as the dataset is small).
df[, ID := NULL]
#-------------Basic Training using XGBoost in caret Library-----------------
# Set up control parameters for caret::train
# Here we use 10-fold cross-validation, repeating twice, and using random search for tuning hyper-parameters.
fitControl <- trainControl(method = "repeatedcv", number = 10, repeats = 2, search = "random")
# train a xgbTree model using caret::train
model <- train(factor(Improved)~., data = df, method = "xgbTree", trControl = fitControl)
# Instead of tree for our boosters, you can also fit a linear regression or logistic regression model using xgbLinear
# model <- train(factor(Improved)~., data = df, method = "xgbLinear", trControl = fitControl)
# See model results
print(model)

View File

@ -7,34 +7,47 @@ if (!require(vcd)) {
}
# According to its documentation, XGBoost works only on numbers.
# Sometimes the dataset we have to work on has categorical data.
# A categorical variable is one which has a fixed number of values. For example, if for each observation a variable called "Colour" can have only "red", "blue" or "green" as its value, it is a categorical variable.
# A categorical variable is one which has a fixed number of values.
# For example, if for each observation a variable called "Colour" can have only
# "red", "blue" or "green" as its value, it is a categorical variable.
#
# In R, a categorical variable is called a factor.
# Type ?factor in the console for more information.
#
# In this demo we will see how to transform a dense dataframe with categorical variables to a sparse matrix before analyzing it in XGBoost.
# In this demo we will see how to transform a dense dataframe with categorical variables to a sparse matrix
# before analyzing it in XGBoost.
# The method we are going to see is usually called "one hot encoding".
#load Arthritis dataset in memory.
data(Arthritis)
# create a copy of the dataset with the data.table package (data.table is 100% compliant with R data frames but its syntax is a lot more consistent and its performance is really good).
# create a copy of the dataset with the data.table package
# (data.table is 100% compliant with R data frames but its syntax is a lot more consistent
# and its performance is really good).
df <- data.table(Arthritis, keep.rownames = FALSE)
# Let's have a look to the data.table
cat("Print the dataset\n")
print(df)
# 2 columns have factor type, one has ordinal type (an ordinal variable is a categorical variable whose values can be ordered, here: None > Some > Marked).
# 2 columns have factor type, one has ordinal type
# (an ordinal variable is a categorical variable whose values can be ordered, here: None > Some > Marked).
cat("Structure of the dataset\n")
str(df)
# Let's add some new categorical features to see if it helps. Of course these features are highly correlated to the Age feature. Usually that's not a good thing in ML, but tree algorithms (including boosted trees) are able to select the best features, even in the case of highly correlated features.
# Let's add some new categorical features to see if it helps.
# Of course these features are highly correlated to the Age feature.
# Usually that's not a good thing in ML, but tree algorithms (including boosted trees) are able to select the best features,
# even in the case of highly correlated features.
# For the first feature we create groups of age by rounding the real age. Note that we transform it to a factor (categorical data) so the algorithm treats them as independent values.
# For the first feature we create groups of age by rounding the real age.
# Note that we transform it to a factor (categorical data) so the algorithm treats them as independent values.
df[, AgeDiscret := as.factor(round(Age / 10, 0))]
# Here is an even stronger simplification of the real age with an arbitrary split at 30 years old. I chose this value based on nothing. We will see later if simplifying the information based on arbitrary values is a good strategy (I am sure you already have an idea of how well it will work!).
# Here is an even stronger simplification of the real age with an arbitrary split at 30 years old.
# I chose this value based on nothing.
# We will see later if simplifying the information based on arbitrary values is a good strategy
# (I am sure you already have an idea of how well it will work!).
df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]
# We remove ID as there is nothing to learn from this feature (it will just add some noise as the dataset is small).
@ -48,7 +61,10 @@ print(levels(df[, Treatment]))
# This method is also called one hot encoding.
# The purpose is to transform each value of each categorical feature in one binary feature.
#
# For instance, the column Treatment will be replaced by two columns, Placebo and Treated. Each of them will be binary. For example, an observation which had the value Placebo in column Treatment before the transformation will have, after the transformation, the value 1 in the new column Placebo and the value 0 in the new column Treated.
# For instance, the column Treatment will be replaced by two columns, Placebo and Treated.
# Each of them will be binary.
# For example, an observation which had the value Placebo in column Treatment before the transformation will have, after the transformation,
# the value 1 in the new column Placebo and the value 0 in the new column Treated.
#
# The formula Improved ~ . - 1 used below means: transform all categorical features except column Improved to binary values (a short sketch follows after this comment block).
# Column Improved is excluded because it will be our output column, the one we want to predict.
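As a hedged aside (a sketch, not part of the demo diff; it assumes the df data.table built above and requires the Matrix package), the one-hot encoding step that this formula describes looks like:

library(Matrix)
# encode every categorical column except Improved as binary indicator columns
sparse_matrix <- sparse.model.matrix(Improved ~ . - 1, data = df)
head(sparse_matrix)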
@ -65,12 +81,15 @@ output_vector <- df[, Y := 0][Improved == "Marked", Y := 1][, Y]
# Following is the same process as other demo
cat("Learning...\n")
bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 9,
eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic")
bst <- xgb.train(data = xgb.DMatrix(sparse_matrix, label = output_vector), max_depth = 9,
eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic")
importance <- xgb.importance(feature_names = colnames(sparse_matrix), model = bst)
print(importance)
# According to the matrix below, the most important feature in this dataset to predict if the treatment will work is Age. The second most important feature is having received a placebo or not. The sex is third. Then we see our generated features (AgeDiscret). We can see that their contribution is very low (Gain column).
# According to the matrix below, the most important feature in this dataset to predict if the treatment will work is Age.
# The second most important feature is having received a placebo or not.
# The sex is third.
# Then we see our generated features (AgeDiscret). We can see that their contribution is very low (Gain column).
# Do these results make sense?
# Let's check some Chi2 between each of these features and the outcome.
@ -82,8 +101,17 @@ print(chisq.test(df$AgeDiscret, df$Y))
# Our first simplification of Age gives a Pearson correlation of 8.
print(chisq.test(df$AgeCat, df$Y))
# The perfectly random split I did between young and old at 30 years old has a low correlation of 2. It's a result we may expect, as maybe in my mind > 30 years is old (I am 32 and starting to feel old, which may explain that), but for the illness we are studying, the age to be vulnerable is not the same. Don't let your "gut" lower the quality of your model. In "data science", there is science :-)
# The perfectly random split I did between young and old at 30 years old has a low correlation of 2.
# It's a result we may expect, as maybe in my mind > 30 years is old (I am 32 and starting to feel old, which may explain that),
# but for the illness we are studying, the age to be vulnerable is not the same.
# Don't let your "gut" lower the quality of your model. In "data science", there is science :-)
# As you can see, in general destroying information by simplifying it won't improve your model. Chi2 just demonstrates that. But in more complex cases, creating a new feature based on an existing one which makes the link with the outcome more obvious may help the algorithm and improve the model. The case studied here is not complex enough to show that. Check the Kaggle forum for some challenging datasets.
# As you can see, in general destroying information by simplifying it won't improve your model.
# Chi2 just demonstrates that.
# But in more complex cases, creating a new feature based on an existing one which makes the link with the outcome
# more obvious may help the algorithm and improve the model.
# The case studied here is not complex enough to show that. Check the Kaggle forum for some challenging datasets.
# However, it's almost always worse when you add some arbitrary rules.
# Moreover, you can notice that even if we have added some new features which are not useful but highly correlated with other features, the boosting tree algorithm has been able to choose the best one, which in this case is Age. A linear model may not be that strong in such scenarios.
# Moreover, you can notice that even if we have added some new features which are not useful but highly correlated with
# other features, the boosting tree algorithm has been able to choose the best one, which in this case is Age.
# A linear model may not be that strong in such scenarios.

View File

@ -12,7 +12,7 @@ cat('running cross validation\n')
# do cross validation, this will print result out as
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
xgb.cv(param, dtrain, nrounds, nfold = 5, metrics = {'error'})
xgb.cv(param, dtrain, nrounds, nfold = 5, metrics = 'error')
cat('running cross validation, disable standard deviation display\n')
# do cross validation, this will print result out as
@ -25,7 +25,7 @@ xgb.cv(param, dtrain, nrounds, nfold = 5,
# you can also do cross validation with customized loss function
# See custom_objective.R
##
print ('running cross validation, with customized loss function')
print('running cross validation, with customized loss function')
logregobj <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")

View File

@ -8,7 +8,7 @@ dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
# note: for customized objective function, we leave objective as default
# note: what we are getting is margin value in prediction
# you must know what you are doing
watchlist <- list(eval = dtest, train = dtrain)
evals <- list(eval = dtest, train = dtrain)
num_round <- 2
# user-defined objective function: given a prediction, return the gradient and second-order gradient
@ -35,10 +35,10 @@ evalerror <- function(preds, dtrain) {
param <- list(max_depth = 2, eta = 1, nthread = 2, verbosity = 0,
objective = logregobj, eval_metric = evalerror)
print ('start training with user customized objective')
print('start training with user customized objective')
# training with customized objective, we can also do step by step training
# simply look at xgboost.py's implementation of train
bst <- xgb.train(param, dtrain, num_round, watchlist)
bst <- xgb.train(param, dtrain, num_round, evals)
#
# there can be cases where you want additional information
@ -59,7 +59,7 @@ logregobjattr <- function(preds, dtrain) {
}
param <- list(max_depth = 2, eta = 1, nthread = 2, verbosity = 0,
objective = logregobjattr, eval_metric = evalerror)
print ('start training with user customized objective, with additional attributes in DMatrix')
print('start training with user customized objective, with additional attributes in DMatrix')
# training with customized objective, we can also do step by step training
# simply look at xgboost.py's implementation of train
bst <- xgb.train(param, dtrain, num_round, watchlist)
bst <- xgb.train(param, dtrain, num_round, evals)

View File

@ -8,7 +8,7 @@ dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
# note: what we are getting is margin value in prediction
# you must know what you are doing
param <- list(max_depth = 2, eta = 1, nthread = 2, verbosity = 0)
watchlist <- list(eval = dtest)
evals <- list(eval = dtest)
num_round <- 20
# user-defined objective function: given a prediction, return the gradient and second-order gradient
# this is log likelihood loss
@ -30,9 +30,9 @@ evalerror <- function(preds, dtrain) {
err <- as.numeric(sum(labels != (preds > 0))) / length(labels)
return(list(metric = "error", value = err))
}
print ('start training with early stopping setting')
print('start training with early stopping setting')
bst <- xgb.train(param, dtrain, num_round, watchlist,
bst <- xgb.train(param, dtrain, num_round, evals,
objective = logregobj, eval_metric = evalerror, maximize = FALSE,
early_stopping_rounds = 3)
bst <- xgb.cv(param, dtrain, num_round, nfold = 5,

View File

@ -25,9 +25,9 @@ param <- list(objective = "binary:logistic", booster = "gblinear",
##
# the rest of settings are the same
##
watchlist <- list(eval = dtest, train = dtrain)
evals <- list(eval = dtest, train = dtrain)
num_round <- 2
bst <- xgb.train(param, dtrain, num_round, watchlist)
bst <- xgb.train(param, dtrain, num_round, evals)
ypred <- predict(bst, dtest)
labels <- getinfo(dtest, 'label')
cat('error of preds=', mean(as.numeric(ypred > 0.5) != labels), '\n')

View File

@ -23,7 +23,7 @@ y <- rbinom(N, 1, plogis(m))
tr <- sample.int(N, N * 0.75)
dtrain <- xgb.DMatrix(X[tr, ], label = y[tr])
dtest <- xgb.DMatrix(X[-tr, ], label = y[-tr])
wl <- list(train = dtrain, test = dtest)
evals <- list(train = dtrain, test = dtest)
# An example of running 'gpu_hist' algorithm
# which is
@ -35,11 +35,11 @@ wl <- list(train = dtrain, test = dtest)
param <- list(objective = 'reg:logistic', eval_metric = 'auc', subsample = 0.5, nthread = 4,
max_bin = 64, tree_method = 'gpu_hist')
pt <- proc.time()
bst_gpu <- xgb.train(param, dtrain, watchlist = wl, nrounds = 50)
bst_gpu <- xgb.train(param, dtrain, evals = evals, nrounds = 50)
proc.time() - pt
# Compare to the 'hist' algorithm:
param$tree_method <- 'hist'
pt <- proc.time()
bst_hist <- xgb.train(param, dtrain, watchlist = wl, nrounds = 50)
bst_hist <- xgb.train(param, dtrain, evals = evals, nrounds = 50)
proc.time() - pt

View File

@ -33,7 +33,7 @@ treeInteractions <- function(input_tree, input_max_depth) {
}
# Extract nodes with interactions
interaction_trees <- trees[!is.na(Split) & !is.na(parent_1),
interaction_trees <- trees[!is.na(Split) & !is.na(parent_1), # nolint: object_usage_linter
c('Feature', paste0('parent_feat_', 1:(input_max_depth - 1))),
with = FALSE]
interaction_trees_split <- split(interaction_trees, seq_len(nrow(interaction_trees)))
@ -44,7 +44,7 @@ treeInteractions <- function(input_tree, input_max_depth) {
# Remove non-interactions (same variable)
interaction_list <- lapply(interaction_list, unique) # remove same variables
interaction_length <- sapply(interaction_list, length)
interaction_length <- lengths(interaction_list)
interaction_list <- interaction_list[interaction_length > 1]
interaction_list <- unique(lapply(interaction_list, sort))
return(interaction_list)
@ -74,26 +74,26 @@ cols2ids <- function(object, col_names) {
interaction_list_fid <- cols2ids(interaction_list, colnames(train))
# Fit model with interaction constraints
bst <- xgboost(data = train, label = y, max_depth = 4,
eta = 0.1, nthread = 2, nrounds = 1000,
interaction_constraints = interaction_list_fid)
bst <- xgb.train(data = xgb.DMatrix(train, label = y), max_depth = 4,
eta = 0.1, nthread = 2, nrounds = 1000,
interaction_constraints = interaction_list_fid)
bst_tree <- xgb.model.dt.tree(colnames(train), bst)
bst_interactions <- treeInteractions(bst_tree, 4)
# interactions constrained to combinations of V1*V2 and V3*V4*V5
# Fit model without interaction constraints
bst2 <- xgboost(data = train, label = y, max_depth = 4,
eta = 0.1, nthread = 2, nrounds = 1000)
bst2 <- xgb.train(data = xgb.DMatrix(train, label = y), max_depth = 4,
eta = 0.1, nthread = 2, nrounds = 1000)
bst2_tree <- xgb.model.dt.tree(colnames(train), bst2)
bst2_interactions <- treeInteractions(bst2_tree, 4) # much more interactions
# Fit model with both interaction and monotonicity constraints
bst3 <- xgboost(data = train, label = y, max_depth = 4,
eta = 0.1, nthread = 2, nrounds = 1000,
interaction_constraints = interaction_list_fid,
monotone_constraints = c(-1, 0, 0, 0, 0, 0, 0, 0, 0, 0))
bst3 <- xgb.train(data = xgb.DMatrix(train, label = y), max_depth = 4,
eta = 0.1, nthread = 2, nrounds = 1000,
interaction_constraints = interaction_list_fid,
monotone_constraints = c(-1, 0, 0, 0, 0, 0, 0, 0, 0, 0))
bst3_tree <- xgb.model.dt.tree(colnames(train), bst3)
bst3_interactions <- treeInteractions(bst3_tree, 4)

View File

@ -1,6 +1,6 @@
data(mtcars)
head(mtcars)
bst <- xgboost(data = as.matrix(mtcars[, -11]), label = mtcars[, 11],
objective = 'count:poisson', nrounds = 5)
bst <- xgb.train(data = xgb.DMatrix(as.matrix(mtcars[, -11]), label = mtcars[, 11]),
objective = 'count:poisson', nrounds = 5)
pred <- predict(bst, as.matrix(mtcars[, -11]))
sqrt(mean((pred - mtcars[, 11]) ^ 2))

View File

@ -6,16 +6,16 @@ dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
param <- list(max_depth = 2, eta = 1, objective = 'binary:logistic')
watchlist <- list(eval = dtest, train = dtrain)
evals <- list(eval = dtest, train = dtrain)
nrounds <- 2
# training the model for two rounds
bst <- xgb.train(param, dtrain, nrounds, nthread = 2, watchlist)
bst <- xgb.train(param, dtrain, nrounds, nthread = 2, evals = evals)
cat('start testing prediction from first n trees\n')
labels <- getinfo(dtest, 'label')
### predict using first 1 tree
ypred1 <- predict(bst, dtest, ntreelimit = 1)
ypred1 <- predict(bst, dtest, iterationrange = c(1, 1))
# by default, we predict using all the trees
ypred2 <- predict(bst, dtest)

View File

@ -24,10 +24,10 @@ accuracy.before <- (sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.te
pred_with_leaf <- predict(bst, dtest, predleaf = TRUE)
head(pred_with_leaf)
create.new.tree.features <- function(model, original.features){
create.new.tree.features <- function(model, original.features) {
pred_with_leaf <- predict(model, original.features, predleaf = TRUE)
cols <- list()
for (i in 1:model$niter) {
for (i in 1:xgb.get.num.boosted.rounds(model)) {
# max is not the real max but it's not important for the purpose of adding features
leaf.id <- sort(unique(pred_with_leaf[, i]))
cols[[i]] <- factor(x = pred_with_leaf[, i], level = leaf.id)
@ -43,7 +43,6 @@ colnames(new.features.test) <- colnames(new.features.train)
# learning with new features
new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
watchlist <- list(train = new.dtrain)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
# Model accuracy with new features

View File

@ -1,4 +1,4 @@
# running all scripts in demo folder
# running all scripts in demo folder, removed during packaging.
demo(basic_walkthrough, package = 'xgboost')
demo(custom_objective, package = 'xgboost')
demo(boost_from_prediction, package = 'xgboost')
@ -9,6 +9,5 @@ demo(create_sparse_matrix, package = 'xgboost')
demo(predict_leaf_indices, package = 'xgboost')
demo(early_stopping, package = 'xgboost')
demo(poisson_regression, package = 'xgboost')
demo(caret_wrapper, package = 'xgboost')
demo(tweedie_regression, package = 'xgboost')
#demo(gpu_accelerated, package = 'xgboost') # can only run when built with GPU support

View File

@ -39,7 +39,7 @@ bst <- xgb.train(
data = d_train,
params = params,
maximize = FALSE,
watchlist = list(train = d_train),
evals = list(train = d_train),
nrounds = 20)
var_imp <- xgb.importance(attr(x, 'Dimnames')[[2]], model = bst)

View File

@ -55,7 +55,7 @@ message(sprintf("Creating '%s' from '%s'", OUT_DEF_FILE, IN_DLL_FILE))
}
# use objdump to dump all the symbols
OBJDUMP_FILE <- "objdump-out.txt"
OBJDUMP_FILE <- file.path(tempdir(), "objdump-out.txt")
.pipe_shell_command_to_stdout(
command = "objdump"
, args = c("-p", IN_DLL_FILE)
@ -79,9 +79,9 @@ end_of_table <- empty_lines[empty_lines > start_index][1L]
# Read the contents of the table
exported_symbols <- objdump_results[(start_index + 1L):end_of_table]
exported_symbols <- gsub("\t", "", exported_symbols)
exported_symbols <- gsub("\t", "", exported_symbols, fixed = TRUE)
exported_symbols <- gsub(".*\\] ", "", exported_symbols)
exported_symbols <- gsub(" ", "", exported_symbols)
exported_symbols <- gsub(" ", "", exported_symbols, fixed = TRUE)
# Write R.def file
writeLines(

View File

@ -2,16 +2,44 @@
% Please edit documentation in R/utils.R
\name{a-compatibility-note-for-saveRDS-save}
\alias{a-compatibility-note-for-saveRDS-save}
\title{Do not use \code{\link[base]{saveRDS}} or \code{\link[base]{save}} for long-term archival of
models. Instead, use \code{\link{xgb.save}} or \code{\link{xgb.save.raw}}.}
\title{Model Serialization and Compatibility}
\description{
It is a common practice to use the built-in \code{\link[base]{saveRDS}} function (or
\code{\link[base]{save}}) to persist R objects to the disk. While it is possible to persist
\code{xgb.Booster} objects using \code{\link[base]{saveRDS}}, it is not advisable to do so if
the model is to be accessed in the future. If you train a model with the current version of
XGBoost and persist it with \code{\link[base]{saveRDS}}, the model is not guaranteed to be
accessible in later releases of XGBoost. To ensure that your model can be accessed in future
releases of XGBoost, use \code{\link{xgb.save}} or \code{\link{xgb.save.raw}} instead.
When it comes to serializing XGBoost models, it's possible to use R serializers such as
\link{save} or \link{saveRDS} to serialize an XGBoost R model, but XGBoost also provides
its own serializers with better compatibility guarantees, which allow loading
said models in other language bindings of XGBoost.
Note that an \code{xgb.Booster} object, outside of its core components, might also keep:\itemize{
\item Additional model configuration (accessible through \link{xgb.config}),
which includes model fitting parameters like \code{max_depth} and runtime parameters like \code{nthread}.
These are not necessarily useful for prediction/importance/plotting.
\item Additional R-specific attributes - e.g. results of callbacks, such as evaluation logs,
which are kept as a \code{data.table} object, accessible through \code{attributes(model)$evaluation_log}
if present.
}
The first one (configurations) does not have the same compatibility guarantees as
the model itself, including attributes that are set and accessed through \link{xgb.attributes} - that is, such configuration
might be lost after loading the booster in a different XGBoost version, regardless of the
serializer that was used. These are saved when using \link{saveRDS}, but will be discarded
if loaded into an incompatible XGBoost version. They are not saved when using XGBoost's
serializers from its public interface including \link{xgb.save} and \link{xgb.save.raw}.
The second ones (R attributes) are not part of the standard XGBoost model structure, and thus are
not saved when using XGBoost's own serializers. These attributes are only used for informational
purposes, such as keeping track of evaluation metrics as the model was fit, or saving the R
call that produced the model, but are otherwise not used for prediction / importance / plotting / etc.
These R attributes are only preserved when using R's serializers.
Note that XGBoost models in R from version \verb{2.1.0} onwards, and XGBoost models
from before version \verb{2.1.0}, have very different R object structures and are incompatible with
each other. Hence, models that were saved with R serializers like \code{saveRDS} or \code{save} before
version \verb{2.1.0} will not work with later \code{xgboost} versions and vice versa. Be aware that
the structure of R model objects could in theory change again in the future, so XGBoost's serializers
should be preferred for long-term storage.
Furthermore, note that using the package \code{qs} for serialization will require version 0.26 or
higher of said package, and will have the same compatibility restrictions as R serializers.
}
\details{
Use \code{\link{xgb.save}} to save the XGBoost model as a stand-alone file. You may opt into
@ -24,26 +52,29 @@ re-construct the corresponding model. To read the model back, use \code{\link{xg
The \code{\link{xgb.save.raw}} function is useful if you'd like to persist the XGBoost model
as part of another R object.
Note: Do not use \code{\link{xgb.serialize}} to store models long-term. It persists not only the
model but also internal configurations and parameters, and its format is not stable across
multiple XGBoost versions. Use \code{\link{xgb.serialize}} only for checkpointing.
Use \link{saveRDS} if you require the R-specific attributes that a booster might have, such
as evaluation logs, but note that future compatibility of such objects is outside XGBoost's
control as it relies on R's serialization format (see e.g. the details section in
\link{serialize} and \link{save} from base R).
For more details and explanation about model persistence and archival, consult the page
\url{https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html}.
}
\examples{
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
bst <- xgb.train(data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
max_depth = 2, eta = 1, nthread = 2, nrounds = 2,
objective = "binary:logistic")
# Save as a stand-alone file; load it with xgb.load()
xgb.save(bst, 'xgb.model')
bst2 <- xgb.load('xgb.model')
fname <- file.path(tempdir(), "xgb_model.ubj")
xgb.save(bst, fname)
bst2 <- xgb.load(fname)
# Save as a stand-alone file (JSON); load it with xgb.load()
xgb.save(bst, 'xgb.model.json')
bst2 <- xgb.load('xgb.model.json')
if (file.exists('xgb.model.json')) file.remove('xgb.model.json')
fname <- file.path(tempdir(), "xgb_model.json")
xgb.save(bst, fname)
bst2 <- xgb.load(fname)
# Save as a raw byte vector; load it with xgb.load.raw()
xgb_bytes <- xgb.save.raw(bst)
@ -54,11 +85,11 @@ obj <- list(xgb_model_bytes = xgb.save.raw(bst), description = "My first XGBoost
# Persist the R object. Here, saveRDS() is okay, since it doesn't persist
# xgb.Booster directly. What's being persisted is the future-proof byte representation
# as given by xgb.save.raw().
saveRDS(obj, 'my_object.rds')
fname <- file.path(tempdir(), "my_object.Rds")
saveRDS(obj, fname)
# Read back the R object
obj2 <- readRDS('my_object.rds')
obj2 <- readRDS(fname)
# Re-construct xgb.Booster object from the bytes
bst2 <- xgb.load.raw(obj2$xgb_model_bytes)
if (file.exists('my_object.rds')) file.remove('my_object.rds')
}

View File

@ -19,15 +19,15 @@ UCI Machine Learning Repository.
This data set includes the following fields:
\itemize{
\item \code{label} the label for each record
\item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
\item \code{label} the label for each record
\item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
}
}
\references{
https://archive.ics.uci.edu/ml/datasets/Mushroom
\url{https://archive.ics.uci.edu/ml/datasets/Mushroom}
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
[http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
\url{http://archive.ics.uci.edu/ml}. Irvine, CA: University of California,
School of Information and Computer Science.
}
\keyword{datasets}

View File

@ -19,15 +19,15 @@ UCI Machine Learning Repository.
This data set includes the following fields:
\itemize{
\item \code{label} the label for each record
\item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
\item \code{label} the label for each record
\item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
}
}
\references{
https://archive.ics.uci.edu/ml/datasets/Mushroom
\url{https://archive.ics.uci.edu/ml/datasets/Mushroom}
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
[http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
\url{http://archive.ics.uci.edu/ml}. Irvine, CA: University of California,
School of Information and Computer Science.
}
\keyword{datasets}

View File

@ -1,37 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{callbacks}
\alias{callbacks}
\title{Callback closures for booster training.}
\description{
These are used to perform various service tasks either during boosting iterations or at the end.
This approach helps to modularize many such tasks without bloating the main training methods.
}
\details{
By default, a callback function is run after each boosting iteration.
An R-attribute \code{is_pre_iteration} could be set for a callback to define a pre-iteration function.
When a callback function has \code{finalize} parameter, its finalizer part will also be run after
the boosting is completed.
WARNING: side effects! Be aware that these callback functions access and modify things in
the environment from which they are called, which is a fairly uncommon thing to do in R.
To write a custom callback closure, make sure you first understand the main concepts about R environments.
Check either R documentation on \code{\link[base]{environment}} or the
\href{http://adv-r.had.co.nz/Environments.html}{Environments chapter} from the "Advanced R"
book by Hadley Wickham. Further, the best option is to read the code of some of the existing callbacks -
choose ones that do something similar to what you want to achieve. Also, you would need to get familiar
with the objects available inside of the \code{xgb.train} and \code{xgb.cv} internal environments.
}
\seealso{
\code{\link{cb.print.evaluation}},
\code{\link{cb.evaluation.log}},
\code{\link{cb.reset.parameters}},
\code{\link{cb.early.stop}},
\code{\link{cb.save.model}},
\code{\link{cb.cv.predict}},
\code{\link{xgb.train}},
\code{\link{xgb.cv}}
}
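To make the closure mechanism described above concrete, here is a minimal sketch of a custom pre-iteration callback (an illustration only, not a package function; it assumes the calling frame exposes the iteration variable, as documented above):

cb.print.iteration <- function() {
  callback <- function(env = parent.frame()) {
    cat("starting iteration", env$iteration, "\n")
  }
  # mark it as a pre-iteration callback via the R attribute mentioned above
  attr(callback, 'is_pre_iteration') <- TRUE
  attr(callback, 'name') <- 'cb.print.iteration'
  callback
}
# usage sketch: xgb.train(params, dtrain, nrounds, callbacks = list(cb.print.iteration()))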

View File

@ -1,63 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.early.stop}
\alias{cb.early.stop}
\title{Callback closure to activate the early stopping.}
\usage{
cb.early.stop(
stopping_rounds,
maximize = FALSE,
metric_name = NULL,
verbose = TRUE
)
}
\arguments{
\item{stopping_rounds}{The number of rounds with no improvement in
the evaluation metric in order to stop the training.}
\item{maximize}{whether to maximize the evaluation metric}
\item{metric_name}{the name of an evaluation column to use as a criterion for early
stopping. If not set, the last column would be used.
Let's say the test data in \code{watchlist} was labelled as \code{dtest},
and one wants to use the AUC in test data for early stopping regardless of where
it is in the \code{watchlist}, then one of the following would need to be set:
\code{metric_name='dtest-auc'} or \code{metric_name='dtest_auc'}.
All dash '-' characters in metric names are considered equivalent to '_'.}
\item{verbose}{whether to print the early stopping information.}
}
\description{
Callback closure to activate the early stopping.
}
\details{
This callback function determines the condition for early stopping
by setting the \code{stop_condition = TRUE} flag in its calling frame.
The following additional fields are assigned to the model's R object:
\itemize{
\item \code{best_score} the evaluation score at the best iteration
\item \code{best_iteration} at which boosting iteration the best score has occurred (1-based index)
}
The same values are also stored as xgb-attributes:
\itemize{
\item \code{best_iteration} is stored as a 0-based iteration index (for interoperability of binary models)
\item \code{best_msg} message string is also stored.
}
At least one data element is required in the evaluation watchlist for early stopping to work.
Callback function expects the following values to be set in its calling frame:
\code{stop_condition},
\code{bst_evaluation},
\code{rank},
\code{bst} (or \code{bst_folds} and \code{basket}),
\code{iteration},
\code{begin_iteration},
\code{end_iteration},
\code{num_parallel_tree}.
}
\seealso{
\code{\link{callbacks}},
\code{\link{xgb.attr}}
}
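A short usage sketch of this callback (hedged: it assumes dtrain/dtest DMatrix objects and follows the old-style API this page documents):

watchlist <- list(train = dtrain, test = dtest)
bst <- xgb.train(params = list(objective = "binary:logistic", eval_metric = "auc"),
                 data = dtrain, nrounds = 100, watchlist = watchlist,
                 callbacks = list(cb.early.stop(stopping_rounds = 3,
                                                maximize = TRUE,
                                                metric_name = "test-auc")))
xgb.attr(bst, "best_iteration")  # stored 0-based, as noted above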

View File

@ -1,31 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.evaluation.log}
\alias{cb.evaluation.log}
\title{Callback closure for logging the evaluation history}
\usage{
cb.evaluation.log()
}
\description{
Callback closure for logging the evaluation history
}
\details{
This callback function appends the current iteration evaluation results \code{bst_evaluation}
available in the calling parent frame to the \code{evaluation_log} list in the calling frame.
The finalizer callback (called with \code{finalize = TRUE} at the end) converts
the \code{evaluation_log} list into a final data.table.
The iteration evaluation result \code{bst_evaluation} must be a named numeric vector.
Note: in the column names of the final data.table, the dash '-' character is replaced with
the underscore '_' in order to make the column names more like regular R identifiers.
Callback function expects the following values to be set in its calling frame:
\code{evaluation_log},
\code{bst_evaluation},
\code{iteration}.
}
\seealso{
\code{\link{callbacks}}
}

View File

@ -1,29 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.print.evaluation}
\alias{cb.print.evaluation}
\title{Callback closure for printing the result of evaluation}
\usage{
cb.print.evaluation(period = 1, showsd = TRUE)
}
\arguments{
\item{period}{results are printed every \code{period} iterations}
\item{showsd}{whether standard deviations should be printed (when available)}
}
\description{
Callback closure for printing the result of evaluation
}
\details{
The callback function prints the result of evaluation at every \code{period} iterations.
The initial and the last iteration's evaluations are always printed.
Callback function expects the following values to be set in its calling frame:
\code{bst_evaluation} (also \code{bst_evaluation_err} when available),
\code{iteration},
\code{begin_iteration},
\code{end_iteration}.
}
\seealso{
\code{\link{callbacks}}
}

View File

@ -1,33 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.save.model}
\alias{cb.save.model}
\title{Callback closure for saving a model file.}
\usage{
cb.save.model(save_period = 0, save_name = "xgboost.model")
}
\arguments{
\item{save_period}{save the model to disk after every
\code{save_period} iterations; 0 means save the model at the end.}
\item{save_name}{the name or path for the saved model file.
It can contain a \code{\link[base]{sprintf}} formatting specifier
to include the integer iteration number in the file name.
E.g., with \code{save_name} = 'xgboost_%04d.model',
the file saved at iteration 50 would be named "xgboost_0050.model".}
}
\description{
Callback closure for saving a model file.
}
\details{
This callback function allows saving an xgb-model file, either periodically every \code{save_period} iterations or at the end.
Callback function expects the following values to be set in its calling frame:
\code{bst},
\code{iteration},
\code{begin_iteration},
\code{end_iteration}.
}
\seealso{
\code{\link{callbacks}}
}
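A short usage sketch of the sprintf-based naming described above (hedged; 'params' and 'dtrain' are assumed to exist):

# save a snapshot every 10 iterations with zero-padded file names
bst <- xgb.train(params, dtrain, nrounds = 50,
                 callbacks = list(cb.save.model(save_period = 10,
                                                save_name = 'xgboost_%04d.model')))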

View File

@ -0,0 +1,50 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{coef.xgb.Booster}
\alias{coef.xgb.Booster}
\title{Extract coefficients from linear booster}
\usage{
\method{coef}{xgb.Booster}(object, ...)
}
\arguments{
\item{object}{A fitted booster of 'gblinear' type.}
\item{...}{Not used.}
}
\value{
The extracted coefficients:\itemize{
\item If there's only one coefficient per column in the data, will be returned as a
vector, potentially containing the feature names if available, with the intercept
as the first element.
\item If there's more than one coefficient per column in the data (e.g. when using
\code{objective="multi:softmax"}), will be returned as a matrix with dimensions equal
to \verb{[num_features, num_cols]}, with the intercepts as first row. Note that the column
(classes in multi-class classification) dimension will not be named.
}
The intercept returned here will include the 'base_score' parameter (unlike the 'bias'
or the last coefficient in the model dump, which doesn't have 'base_score' added to it),
hence one should get the same values from calling \code{predict(..., outputmargin = TRUE)} and
from performing a matrix multiplication with \code{model.matrix(~., ...)}.
Be aware that the coefficients are obtained by first converting them to strings and
back, so there will always be some very small loss of precision compared to the actual
coefficients as used by \link{predict.xgb.Booster}.
}
\description{
Extracts the coefficients from a 'gblinear' booster object,
as produced by \code{xgb.train} when using parameter \code{booster="gblinear"}.
Note: this function will error out if passing a booster model
which is not of "gblinear" type.
}
\examples{
library(xgboost)
data(mtcars)
y <- mtcars[, 1]
x <- as.matrix(mtcars[, -1])
dm <- xgb.DMatrix(data = x, label = y, nthread = 1)
params <- list(booster = "gblinear", nthread = 1)
model <- xgb.train(data = dm, params = params, nrounds = 2)
coef(model)
}
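As a hedged follow-up to the note above about margins (a sketch reusing the objects from the example; a small numeric tolerance is expected because of the string round-trip):

margins <- model.matrix(~., data = mtcars[, -1]) %*% coef(model)
all.equal(as.numeric(margins), predict(model, x, outputmargin = TRUE),
          tolerance = 1e-5)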

View File

@ -19,7 +19,7 @@ be directly used with an \code{xgb.DMatrix} object.
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
stopifnot(nrow(dtrain) == nrow(train$data))
stopifnot(ncol(dtrain) == ncol(train$data))

View File

@ -26,7 +26,7 @@ Since row names are irrelevant, it is recommended to use \code{colnames} directl
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
dimnames(dtrain)
colnames(dtrain)
colnames(dtrain) <- make.names(1:ncol(train$data))

View File

@ -1,44 +1,93 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{getinfo}
% Please edit documentation in R/xgb.Booster.R, R/xgb.DMatrix.R
\name{getinfo.xgb.Booster}
\alias{getinfo.xgb.Booster}
\alias{setinfo.xgb.Booster}
\alias{getinfo}
\alias{getinfo.xgb.DMatrix}
\title{Get information of an xgb.DMatrix object}
\alias{setinfo}
\alias{setinfo.xgb.DMatrix}
\title{Get or set information of xgb.DMatrix and xgb.Booster objects}
\usage{
getinfo(object, ...)
\method{getinfo}{xgb.Booster}(object, name)
\method{getinfo}{xgb.DMatrix}(object, name, ...)
\method{setinfo}{xgb.Booster}(object, name, info)
getinfo(object, name)
\method{getinfo}{xgb.DMatrix}(object, name)
setinfo(object, name, info)
\method{setinfo}{xgb.DMatrix}(object, name, info)
}
\arguments{
\item{object}{Object of class \code{xgb.DMatrix}}
\item{...}{other parameters}
\item{object}{Object of class \code{xgb.DMatrix} or \code{xgb.Booster}.}
\item{name}{the name of the information field to get (see details)}
\item{info}{the specific field of information to set}
}
\value{
For \code{getinfo}, will return the requested field. For \code{setinfo}, will always return \code{TRUE}
if it succeeds.
}
\description{
Get information of an xgb.DMatrix object
Get or set information of xgb.DMatrix and xgb.Booster objects
}
\details{
The \code{name} field can be one of the following:
The \code{name} field can be one of the following for \code{xgb.DMatrix}:
\itemize{
\item \code{label}: label XGBoost learn from ;
\item \code{weight}: to do a weight rescale ;
\item \code{base_margin}: base margin is the base prediction XGBoost will boost from ;
\item \code{nrow}: number of rows of the \code{xgb.DMatrix}.
\item \code{label}
\item \code{weight}
\item \code{base_margin}
\item \code{label_lower_bound}
\item \code{label_upper_bound}
\item \code{group}
\item \code{feature_type}
\item \code{feature_name}
\item \code{nrow}
}
See the documentation for \link{xgb.DMatrix} for more information about these fields.
For \code{xgb.Booster}, can be one of the following:
\itemize{
\item \code{feature_type}
\item \code{feature_name}
}
\code{group} can be set by \code{setinfo} but can't be retrieved by \code{getinfo}.
Note that, while 'qid' cannot be retrieved, it's possible to get the equivalent 'group'
for a DMatrix that had 'qid' assigned.
\bold{Important}: when calling \code{setinfo}, the objects are modified in-place. See
\link{xgb.copy.Booster} for an idea of how this in-place assignment works.
See the documentation for \link{xgb.DMatrix} for possible fields that can be set
(which correspond to arguments in that function).
Note that the following fields are allowed in the construction of an \code{xgb.DMatrix}
but \bold{aren't} allowed here:\itemize{
\item data
\item missing
\item silent
\item nthread
}
}
\examples{
data(agaricus.train, package='xgboost')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
labels <- getinfo(dtrain, 'label')
setinfo(dtrain, 'label', 1-labels)
labels2 <- getinfo(dtrain, 'label')
stopifnot(all(labels2 == 1-labels))
data(agaricus.train, package='xgboost')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
labels <- getinfo(dtrain, 'label')
setinfo(dtrain, 'label', 1-labels)
labels2 <- getinfo(dtrain, 'label')
stopifnot(all.equal(labels2, 1-labels))
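# A hedged sketch (not part of the original example) using another settable
# field from the list above: 'feature_name' can be both set and retrieved.
setinfo(dtrain, "feature_name", paste0("f", seq_len(ncol(dtrain))))
head(getinfo(dtrain, "feature_name"))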
}

View File

@ -1,18 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.ggplot.R
\name{normalize}
\alias{normalize}
\title{Scale feature value to have mean 0, standard deviation 1}
\usage{
normalize(x)
}
\arguments{
\item{x}{Numeric vector}
}
\value{
Numeric vector with mean 0 and sd 1.
}
\description{
This is used to compare multiple features on the same plot.
Internal utility function
}

View File

@ -2,15 +2,13 @@
% Please edit documentation in R/xgb.Booster.R
\name{predict.xgb.Booster}
\alias{predict.xgb.Booster}
\alias{predict.xgb.Booster.handle}
\title{Predict method for eXtreme Gradient Boosting model}
\title{Predict method for XGBoost model}
\usage{
\method{predict}{xgb.Booster}(
object,
newdata,
missing = NA,
outputmargin = FALSE,
ntreelimit = NULL,
predleaf = FALSE,
predcontrib = FALSE,
approxcontrib = FALSE,
@ -19,92 +17,142 @@
training = FALSE,
iterationrange = NULL,
strict_shape = FALSE,
validate_features = FALSE,
base_margin = NULL,
...
)
\method{predict}{xgb.Booster.handle}(object, ...)
}
\arguments{
\item{object}{Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}}
\item{object}{Object of class \code{xgb.Booster}.}
\item{newdata}{takes \code{matrix}, \code{dgCMatrix}, local data file or \code{xgb.DMatrix}.}
\item{newdata}{Takes \code{data.frame}, \code{matrix}, \code{dgCMatrix}, \code{dgRMatrix}, \code{dsparseVector},
local data file, or \code{xgb.DMatrix}.
\item{missing}{Missing is only used when input is dense matrix. Pick a float value that represents
missing values in data (e.g., sometimes 0 or some other extreme value is used).}
\if{html}{\out{<div class="sourceCode">}}\preformatted{ For single-row predictions on sparse data, it's recommended to use CSR format. If passing
a sparse vector, it will take it as a row vector.
\item{outputmargin}{whether the prediction should be returned in the for of original untransformed
Note that, for repeated predictions on the same data, one might want to create a DMatrix to
pass here instead of passing R types like matrices or data frames, as predictions will be
faster on DMatrix.
If `newdata` is a `data.frame`, be aware that:\\itemize\{
\\item Columns will be converted to numeric if they aren't already, which could potentially make
the operation slower than in an equivalent `matrix` object.
\\item The order of the columns must match with that of the data from which the model was fitted
(i.e. columns will not be referenced by their names, just by their order in the data).
\\item If the model was fitted to data with categorical columns, these columns must be of
`factor` type here, and must use the same encoding (i.e. have the same levels).
\\item If `newdata` contains any `factor` columns, they will be converted to base-0
encoding (same as during DMatrix creation) - hence, one should not pass a `factor`
under a column which during training had a different type.
\}
}\if{html}{\out{</div>}}}
\item{missing}{Float value that represents missing values in data (e.g., 0 or some other extreme value).
\if{html}{\out{<div class="sourceCode">}}\preformatted{ This parameter is not used when `newdata` is an `xgb.DMatrix` - in such cases, it should be
passed as an argument to the DMatrix constructor instead.
}\if{html}{\out{</div>}}}
\item{outputmargin}{Whether the prediction should be returned in the form of original untransformed
sum of predictions from boosting iterations' results. E.g., setting \code{outputmargin=TRUE} for
logistic regression would result in predictions for log-odds instead of probabilities.}
logistic regression would return log-odds instead of probabilities.}
\item{ntreelimit}{Deprecated, use \code{iterationrange} instead.}
\item{predleaf}{Whether to predict per-tree leaf indices.}
\item{predleaf}{whether predict leaf index.}
\item{predcontrib}{Whether to return feature contributions to individual predictions (see Details).}
\item{predcontrib}{whether to return feature contributions to individual predictions (see Details).}
\item{approxcontrib}{Whether to use a fast approximation for feature contributions (see Details).}
\item{approxcontrib}{whether to use a fast approximation for feature contributions (see Details).}
\item{predinteraction}{Whether to return contributions of feature interactions to individual predictions (see Details).}
\item{predinteraction}{whether to return contributions of feature interactions to individual predictions (see Details).}
\item{reshape}{Whether to reshape the vector of predictions to matrix form when there are several
prediction outputs per case. No effect if \code{predleaf}, \code{predcontrib},
or \code{predinteraction} is \code{TRUE}.}
\item{reshape}{whether to reshape the vector of predictions to a matrix form when there are several
prediction outputs per case. This option has no effect when either of predleaf, predcontrib,
or predinteraction flags is TRUE.}
\item{training}{whether is the prediction result used for training. For dart booster,
\item{training}{Whether the prediction result is used for training. For dart booster,
training predicting will perform dropout.}
\item{iterationrange}{Specifies which layer of trees are used in prediction. For
example, if a random forest is trained with 100 rounds. Specifying
`iteration_range=(1, 21)`, then only the forests built during [1, 21) (half open set)
rounds are used in this prediction. It's 1-based index just like R vector. When set
to \code{c(1, 1)} XGBoost will use all trees.}
\item{iterationrange}{Sequence of rounds/iterations from the model to use for prediction, specified by passing
a two-dimensional vector with the start and end numbers in the sequence (same format as R's \code{seq} - i.e.
base-1 indexing, and inclusive of both ends).
\item{strict_shape}{Default is \code{FALSE}. When it's set to \code{TRUE}, output
type and shape of prediction are invariant to model type.}
\if{html}{\out{<div class="sourceCode">}}\preformatted{ For example, passing `c(1,20)` will predict using the first twenty iterations, while passing `c(1,1)` will
predict using only the first one.
\item{...}{Parameters passed to \code{predict.xgb.Booster}}
If passing `NULL`, will either stop at the best iteration if the model used early stopping, or use all
of the iterations (rounds) otherwise.
If passing "all", will use all of the rounds regardless of whether the model had early stopping or not.
}\if{html}{\out{</div>}}}
\item{strict_shape}{Default is \code{FALSE}. When set to \code{TRUE}, the output
type and shape of predictions are invariant to the model type.}
\item{validate_features}{When \code{TRUE}, validate that the Booster's and newdata's feature_names
match (only applicable when both \code{object} and \code{newdata} have feature names).
\if{html}{\out{<div class="sourceCode">}}\preformatted{ If the column names differ and `newdata` is not an `xgb.DMatrix`, will try to reorder
the columns in `newdata` to match with the booster's.
If the booster has feature types and `newdata` is either an `xgb.DMatrix` or `data.frame`,
will additionally verify that categorical columns are of the correct type in `newdata`,
throwing an error if they do not match.
If passing `FALSE`, it is assumed that the feature names and types are the same,
and come in the same order as in the training data.
Note that this check might add some sizable latency to the predictions, so it's
recommended to disable it for performance-sensitive applications.
}\if{html}{\out{</div>}}}
\item{base_margin}{Base margin used for boosting from existing model.
\if{html}{\out{<div class="sourceCode">}}\preformatted{ Note that, if `newdata` is an `xgb.DMatrix` object, this argument will
be ignored as it needs to be added to the DMatrix instead (e.g. by passing it as
an argument in its constructor, or by calling \link{setinfo.xgb.DMatrix}).
}\if{html}{\out{</div>}}}
\item{...}{Not used.}
}
\value{
The return type is different depending whether \code{strict_shape} is set to \code{TRUE}. By default,
for regression or binary classification, it returns a vector of length \code{nrows(newdata)}.
For multiclass classification, either a \code{num_class * nrows(newdata)} vector or
a \code{(nrows(newdata), num_class)} dimension matrix is returned, depending on
the \code{reshape} value.
The return type depends on \code{strict_shape}. If \code{FALSE} (default):
\itemize{
\item For regression or binary classification: A vector of length \code{nrows(newdata)}.
\item For multiclass classification: A vector of length \code{num_class * nrows(newdata)} or
a \verb{(nrows(newdata), num_class)} matrix, depending on the \code{reshape} value.
\item When \code{predleaf = TRUE}: A matrix with one column per tree.
\item When \code{predcontrib = TRUE}: When not multiclass, a matrix with
\code{num_features + 1} columns. The last "+ 1" column corresponds to the baseline value.
In the multiclass case, a list of \code{num_class} such matrices.
The contribution values are on the scale of untransformed margin
(e.g., for binary classification, the values are log-odds deviations from the baseline).
\item When \code{predinteraction = TRUE}: When not multiclass, the output is a 3d array of
dimension \code{c(nrow, num_features + 1, num_features + 1)}. The off-diagonal (in the last two dimensions)
elements represent different feature interaction contributions. The array is symmetric WRT the last
two dimensions. The "+ 1" column corresponds to the baseline. Summing this array along the last dimension should
produce practically the same result as \code{predcontrib = TRUE}.
In the multiclass case, a list of \code{num_class} such arrays.
}
When \code{predleaf = TRUE}, the output is a matrix object with the
number of columns corresponding to the number of trees.
When \code{predcontrib = TRUE} and it is not a multiclass setting, the output is a matrix object with
\code{num_features + 1} columns. The last "+ 1" column in a matrix corresponds to bias.
For a multiclass case, a list of \code{num_class} elements is returned, where each element is
such a matrix. The contribution values are on the scale of untransformed margin
(e.g., for binary classification would mean that the contributions are log-odds deviations from bias).
When \code{predinteraction = TRUE} and it is not a multiclass setting, the output is a 3d array with
dimensions \code{c(nrow, num_features + 1, num_features + 1)}. The off-diagonal (in the last two dimensions)
elements represent different features interaction contributions. The array is symmetric WRT the last
two dimensions. The "+ 1" columns corresponds to bias. Summing this array along the last dimension should
produce practically the same result as predict with \code{predcontrib = TRUE}.
For a multiclass case, a list of \code{num_class} elements is returned, where each element is
such an array.
When \code{strict_shape} is set to \code{TRUE}, the output is always an array. For
normal prediction, the output is a 2-dimension array \code{(num_class, nrow(newdata))}.
For \code{predcontrib = TRUE}, output is \code{(ncol(newdata) + 1, num_class, nrow(newdata))}
For \code{predinteraction = TRUE}, output is \code{(ncol(newdata) + 1, ncol(newdata) + 1, num_class, nrow(newdata))}
For \code{predleaf = TRUE}, output is \code{(n_trees_in_forest, num_class, n_iterations, nrow(newdata))}
When \code{strict_shape = TRUE}, the output is always an array:
\itemize{
\item For normal predictions, the output has dimension \verb{(num_class, nrow(newdata))}.
\item For \code{predcontrib = TRUE}, the dimension is \verb{(ncol(newdata) + 1, num_class, nrow(newdata))}.
\item For \code{predinteraction = TRUE}, the dimension is \verb{(ncol(newdata) + 1, ncol(newdata) + 1, num_class, nrow(newdata))}.
\item For \code{predleaf = TRUE}, the dimension is \verb{(n_trees_in_forest, num_class, n_iterations, nrow(newdata))}.
}
}
\description{
Predicted values based on either xgboost model or model handle object.
Predict values on data based on an XGBoost model.
}
\details{
Note that \code{iterationrange} would currently do nothing for predictions from gblinear,
since gblinear doesn't keep its boosting history.
Note that \code{iterationrange} would currently do nothing for predictions from "gblinear",
since "gblinear" doesn't keep its boosting history.
One possible practical application of the \code{predleaf} option is to use the model
as a generator of new features which capture non-linearity and interactions,
e.g., as implemented in \code{\link{xgb.create.features}}.
e.g., as implemented in \code{\link[=xgb.create.features]{xgb.create.features()}}.
Setting \code{predcontrib = TRUE} allows calculating contributions of each feature to
individual predictions. For "gblinear" booster, feature contributions are simply linear terms
@ -118,21 +166,37 @@ With \code{predinteraction = TRUE}, SHAP values of contributions of interaction
are computed. Note that this operation might be rather expensive in terms of compute and memory.
Since it quadratically depends on the number of features, it is recommended to perform selection
of the most important features first. See below about the format of the returned results.
The \code{predict()} method uses as many threads as defined in the \code{xgb.Booster} object (all by default).
If you want to change their number, assign a new number to \code{nthread} using \code{\link[=xgb.parameters<-]{xgb.parameters<-()}}.
Note that converting a matrix to \code{\link[=xgb.DMatrix]{xgb.DMatrix()}} uses multiple threads too.
}
\examples{
## binary classification:
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
## Keep the number of threads to 2 for examples
nthread <- 2
data.table::setDTthreads(nthread)
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 0.5, nthread = 2, nrounds = 5, objective = "binary:logistic")
bst <- xgb.train(
data = xgb.DMatrix(train$data, label = train$label),
max_depth = 2,
eta = 0.5,
nthread = nthread,
nrounds = 5,
objective = "binary:logistic"
)
# use all trees by default
pred <- predict(bst, test$data)
# use only the 1st tree
pred1 <- predict(bst, test$data, iterationrange = c(1, 2))
pred1 <- predict(bst, test$data, iterationrange = c(1, 1))
# Predicting tree leaves:
# the result is an nsamples X ntrees matrix
@ -160,39 +224,61 @@ par(mar = old_mar)
lb <- as.numeric(iris$Species) - 1
num_class <- 3
set.seed(11)
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 4, eta = 0.5, nthread = 2, nrounds = 10, subsample = 0.5,
objective = "multi:softprob", num_class = num_class)
bst <- xgb.train(
data = xgb.DMatrix(as.matrix(iris[, -5]), label = lb),
max_depth = 4,
eta = 0.5,
nthread = 2,
nrounds = 10,
subsample = 0.5,
objective = "multi:softprob",
num_class = num_class
)
# predict with multi:softprob returns num_class probabilities per case:
pred <- predict(bst, as.matrix(iris[, -5]))
str(pred)
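# A hedged illustration of strict_shape from the 'value' section above
# (not part of the original example): the result is always an array,
# here of dimension (num_class, nrow(newdata)).
pred_arr <- predict(bst, as.matrix(iris[, -5]), strict_shape = TRUE)
dim(pred_arr)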
# reshape it to a num_class-columns matrix
pred <- matrix(pred, ncol=num_class, byrow=TRUE)
pred <- matrix(pred, ncol = num_class, byrow = TRUE)
# convert the probabilities to softmax labels
pred_labels <- max.col(pred) - 1
# the following should result in the same error as seen in the last iteration
sum(pred_labels != lb)/length(lb)
sum(pred_labels != lb) / length(lb)
# compare that to the predictions from softmax:
# compare with predictions from softmax:
set.seed(11)
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 4, eta = 0.5, nthread = 2, nrounds = 10, subsample = 0.5,
objective = "multi:softmax", num_class = num_class)
bst <- xgb.train(
data = xgb.DMatrix(as.matrix(iris[, -5]), label = lb),
max_depth = 4,
eta = 0.5,
nthread = 2,
nrounds = 10,
subsample = 0.5,
objective = "multi:softmax",
num_class = num_class
)
pred <- predict(bst, as.matrix(iris[, -5]))
str(pred)
all.equal(pred, pred_labels)
# prediction from using only 5 iterations should result
# in the same error as seen in iteration 5:
pred5 <- predict(bst, as.matrix(iris[, -5]), iterationrange=c(1, 6))
sum(pred5 != lb)/length(lb)
pred5 <- predict(bst, as.matrix(iris[, -5]), iterationrange = c(1, 5))
sum(pred5 != lb) / length(lb)
}
\references{
Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions", NIPS Proceedings 2017, \url{https://arxiv.org/abs/1705.07874}
Scott M. Lundberg, Su-In Lee, "Consistent feature attribution for tree ensembles", \url{https://arxiv.org/abs/1706.06060}
\enumerate{
\item Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions",
NIPS Proceedings 2017, \url{https://arxiv.org/abs/1705.07874}
\item Scott M. Lundberg, Su-In Lee, "Consistent feature attribution for tree ensembles",
\url{https://arxiv.org/abs/1706.06060}
}
}
\seealso{
\code{\link{xgb.train}}.
\code{\link[=xgb.train]{xgb.train()}}
}

View File

@ -1,27 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.ggplot.R
\name{prepare.ggplot.shap.data}
\alias{prepare.ggplot.shap.data}
\title{Combine and melt feature values and SHAP contributions for sample
observations.}
\usage{
prepare.ggplot.shap.data(data_list, normalize = FALSE)
}
\arguments{
\item{data_list}{List containing 'data' and 'shap_contrib' returned by
\code{xgb.shap.data()}.}
\item{normalize}{Whether to standardize feature values to have mean 0 and
standard deviation 1 (useful for comparing multiple features on the same
plot). Default \code{FALSE}.}
}
\value{
A data.table containing the observation ID, the feature name, the
feature value (normalized if specified), and the SHAP contribution value.
}
\description{
Conforms to data format required for ggplot functions.
}
\details{
Internal utility function.
}

View File

@ -4,26 +4,35 @@
\alias{print.xgb.Booster}
\title{Print xgb.Booster}
\usage{
\method{print}{xgb.Booster}(x, verbose = FALSE, ...)
\method{print}{xgb.Booster}(x, ...)
}
\arguments{
\item{x}{an xgb.Booster object}
\item{x}{An \code{xgb.Booster} object.}
\item{verbose}{whether to print detailed data (e.g., attribute values)}
\item{...}{not currently used}
\item{...}{Not used.}
}
\value{
The same \code{x} object, returned invisibly
}
\description{
Print information about xgb.Booster.
Print information about \code{xgb.Booster}.
}
\examples{
data(agaricus.train, package='xgboost')
data(agaricus.train, package = "xgboost")
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
attr(bst, 'myattr') <- 'memo'
bst <- xgboost(
data = train$data,
label = train$label,
max_depth = 2,
eta = 1,
nthread = 2,
nrounds = 2,
objective = "binary:logistic"
)
attr(bst, "myattr") <- "memo"
print(bst)
print(bst, verbose=TRUE)
}

View File

@ -19,7 +19,7 @@ Currently it displays dimensions and presence of info-fields and colnames.
}
\examples{
data(agaricus.train, package='xgboost')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtrain
print(dtrain, verbose=TRUE)

View File

@ -23,8 +23,8 @@ including the best iteration (when available).
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
cv <- xgb.cv(data = train$data, label = train$label, nfold = 5, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
cv <- xgb.cv(data = xgb.DMatrix(train$data, label = train$label), nfold = 5, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
print(cv)
print(cv, verbose=TRUE)

View File

@ -1,42 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{setinfo}
\alias{setinfo}
\alias{setinfo.xgb.DMatrix}
\title{Set information of an xgb.DMatrix object}
\usage{
setinfo(object, ...)
\method{setinfo}{xgb.DMatrix}(object, name, info, ...)
}
\arguments{
\item{object}{Object of class "xgb.DMatrix"}
\item{...}{other parameters}
\item{name}{the name of the field to get}
\item{info}{the specific field of information to set}
}
\description{
Set information of an xgb.DMatrix object
}
\details{
The \code{name} field can be one of the following:
\itemize{
\item \code{label}: label XGBoost learn from ;
\item \code{weight}: to do a weight rescale ;
\item \code{base_margin}: base margin is the base prediction XGBoost will boost from ;
\item \code{group}: number of rows in each group (to use with \code{rank:pairwise} objective).
}
}
\examples{
data(agaricus.train, package='xgboost')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
labels <- getinfo(dtrain, 'label')
setinfo(dtrain, 'label', 1-labels)
labels2 <- getinfo(dtrain, 'label')
stopifnot(all.equal(labels2, 1-labels))
}

View File

@ -0,0 +1,22 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{variable.names.xgb.Booster}
\alias{variable.names.xgb.Booster}
\title{Get Features Names from Booster}
\usage{
\method{variable.names}{xgb.Booster}(object, ...)
}
\arguments{
\item{object}{An \code{xgb.Booster} object.}
\item{...}{Not used.}
}
\description{
Returns the feature / variable / column names from a fitted
booster object, which are set automatically during the call to \link{xgb.train}
from the DMatrix names, or which can be set manually through \link{setinfo}.
If the object doesn't have feature names, will return \code{NULL}.
It is equivalent to calling \code{getinfo(object, "feature_name")}.
}
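\examples{
# A minimal hedged sketch (not part of the original docs): feature names
# are picked up from the column names of the data used to build the DMatrix.
data(mtcars)
y <- mtcars$mpg
x <- as.matrix(mtcars[, -1])
dm <- xgb.DMatrix(x, label = y, nthread = 1)
model <- xgb.train(
  data = dm,
  params = list(nthread = 1),
  nrounds = 2
)
variable.names(model)
}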

View File

@ -1,52 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{xgb.Booster.complete}
\alias{xgb.Booster.complete}
\title{Restore missing parts of an incomplete xgb.Booster object.}
\usage{
xgb.Booster.complete(object, saveraw = TRUE)
}
\arguments{
\item{object}{object of class \code{xgb.Booster}}
\item{saveraw}{a flag indicating whether to append \code{raw} Booster memory dump data
when it doesn't already exist.}
}
\value{
An object of \code{xgb.Booster} class.
}
\description{
It attempts to complete an \code{xgb.Booster} object by restoring either its missing
raw model memory dump (when it has no \code{raw} data but its \code{xgb.Booster.handle} is valid)
or its missing internal handle (when its \code{xgb.Booster.handle} is not valid
but it has a raw Booster memory dump).
}
\details{
While this method is primarily for internal use, it might be useful in some practical situations.
E.g., when an \code{xgb.Booster} model is saved as an R object and then is loaded as an R object,
its handle (pointer) to an internal xgboost model would be invalid. The majority of xgboost methods
should still work for such a model object since those methods would be using
\code{xgb.Booster.complete} internally. However, one might find it to be more efficient to call the
\code{xgb.Booster.complete} function explicitly once after loading a model as an R-object.
That would prevent further repeated implicit reconstruction of an internal booster model.
}
\examples{
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
saveRDS(bst, "xgb.model.rds")
# Warning: The resulting RDS file is only compatible with the current XGBoost version.
# Refer to the section titled "a-compatibility-note-for-saveRDS-save".
bst1 <- readRDS("xgb.model.rds")
if (file.exists("xgb.model.rds")) file.remove("xgb.model.rds")
# the handle is invalid:
print(bst1$handle)
bst1 <- xgb.Booster.complete(bst1)
# now the handle points to a valid internal booster model:
print(bst1$handle)
}

View File

@ -0,0 +1,248 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{xgb.Callback}
\alias{xgb.Callback}
\title{XGBoost Callback Constructor}
\usage{
xgb.Callback(
cb_name = "custom_callback",
env = new.env(),
f_before_training = function(env, model, data, evals, begin_iteration, end_iteration)
NULL,
f_before_iter = function(env, model, data, evals, iteration) NULL,
f_after_iter = function(env, model, data, evals, iteration, iter_feval) NULL,
f_after_training = function(env, model, data, evals, iteration, final_feval,
prev_cb_res) NULL
)
}
\arguments{
\item{cb_name}{Name for the callback.
If the callback produces some non-NULL result (from executing the function passed under
\code{f_after_training}), that result will be added as an R attribute to the resulting booster
(or as a named element in the result of CV), with the attribute name specified here.
Names of callbacks must be unique - i.e. there cannot be two callbacks with the same name.}
\item{env}{An environment object that will be passed to the different functions in the callback.
Note that this environment will not be shared with other callbacks.}
\item{f_before_training}{A function that will be executed before the training has started.
If passing \code{NULL} for this or for the other function inputs, then no function will be executed.
If passing a function, it will be called with parameters supplied as non-named arguments
matching the function signatures that are shown in the default value for each function argument.}
\item{f_before_iter}{A function that will be executed before each boosting round.
This function can signal whether the training should be finalized or not, by outputting
a value that evaluates to \code{TRUE} - i.e. if the output from the function provided here at
a given round is \code{TRUE}, then training will be stopped before the current iteration happens.
Return values of \code{NULL} will be interpreted as \code{FALSE}.}
\item{f_after_iter}{A function that will be executed after each boosting round.
This function can signal whether the training should be finalized or not, by outputting
a value that evaluates to \code{TRUE} - i.e. if the output from the function provided here at
a given round is \code{TRUE}, then training will be stopped at that round.
Return values of \code{NULL} will be interpreted as \code{FALSE}.}
\item{f_after_training}{A function that will be executed after training is finished.
This function can optionally output something non-NULL, which will become part of the R
attributes of the booster (assuming one passes \code{keep_extra_attributes=TRUE} to \link{xgb.train})
under the name supplied for parameter \code{cb_name} in the case of \link{xgb.train}; or as part
of the named elements in the result of \link{xgb.cv}.}
}
\value{
An \code{xgb.Callback} object, which can be passed to \link{xgb.train} or \link{xgb.cv}.
}
\description{
Constructor for defining the structure of callback functions that can be executed
at different stages of model training (before / after training, before / after each boosting
iteration).
}
\details{
Arguments that will be passed to the supplied functions are as follows:\itemize{
\item env The same environment that is passed under argument \code{env}.
It may be modified by the functions in order to e.g. keep track of what happens
across iterations or similar.
This environment is only used by the functions supplied to the callback, and will
not be kept after the model fitting function terminates (see parameter \code{f_after_training}).
\item model The booster object when using \link{xgb.train}, or the folds when using
\link{xgb.cv}.
For \link{xgb.cv}, folds are a list with a structure as follows:\itemize{
\item \code{dtrain}: The training data for the fold (as an \code{xgb.DMatrix} object).
\item \code{bst}: The \code{xgb.Booster} object for the fold.
\item \code{evals}: A list containing two DMatrices, with names \code{train} and \code{test}
(\code{test} is the held-out data for the fold).
\item \code{index}: The indices of the hold-out data for that fold (base-1 indexing),
from which the \code{test} entry in \code{evals} was obtained.
}
This object should \bold{not} be modified in-place in ways that conflict with the
training (e.g. resetting the parameters for a training update in a way that resets
the number of rounds to zero in order to overwrite rounds).
Note that any R attributes that are assigned to the booster during the callback functions,
will not be kept thereafter as the booster object variable is not re-assigned during
training. It is however possible to set C-level attributes of the booster through
\link{xgb.attr} or \link{xgb.attributes}, which should remain available for the rest
of the iterations and after the training is done.
For keeping variables across iterations, it's recommended to use \code{env} instead.
\item data The data to which the model is being fit, as an \code{xgb.DMatrix} object.
Note that, for \link{xgb.cv}, this will be the full data, while data for the specific
folds can be found in the \code{model} object.
\item evals The evaluation data, as passed under argument \code{evals} to
\link{xgb.train}.
For \link{xgb.cv}, this will always be \code{NULL}.
\item begin_iteration Index of the first boosting iteration that will be executed
(base-1 indexing).
This will typically be '1', but when using training continuation, depending on the
parameters for updates, boosting rounds will be continued from where the previous
model ended, in which case this will be larger than 1.
\item end_iteration Index of the last boosting iteration that will be executed
(base-1 indexing, inclusive of this end).
It should match with argument \code{nrounds} passed to \link{xgb.train} or \link{xgb.cv}.
Note that boosting might be interrupted before reaching this last iteration, for
example by using the early stopping callback \link{xgb.cb.early.stop}.
\item iteration Index of the iteration number that is being executed (first iteration
will be the same as parameter \code{begin_iteration}, then next one will add +1, and so on).
\item iter_feval Evaluation metrics for \code{evals} that were supplied, either
determined by the objective, or by parameter \code{feval}.
For \link{xgb.train}, this will be a named vector with one entry per element in
\code{evals}, where the names are determined as 'evals name' + '-' + 'metric name' - for
example, if \code{evals} contains an entry named "tr" and the metric is "rmse",
this will be a one-element vector with name "tr-rmse".
For \link{xgb.cv}, this will be a 2d matrix with dimensions \verb{[length(evals), nfolds]},
where the row names will follow the same naming logic as the one-dimensional vector
that is passed in \link{xgb.train}.
Note that, internally, the built-in callbacks such as \link{xgb.cb.print.evaluation} summarize
this table by calculating the row-wise means and standard deviations.
\item final_feval The evaluation results after the last boosting round is executed
(same format as \code{iter_feval}, and will be the exact same input as passed under
\code{iter_feval} to the last round that is executed during model fitting).
\item prev_cb_res Result from a previous run of a callback sharing the same name
(as given by parameter \code{cb_name}) when conducting training continuation, if there
was any in the booster R attributes.
Sometimes, one might want to append the new results to the previous ones, and this will
be done automatically by the built-in callbacks such as \link{xgb.cb.evaluation.log},
which will append the new rows to the previous table.
If no such previous callback result is available (which will never be the case when fitting
a model from scratch instead of updating an existing model), this will be \code{NULL}.
For \link{xgb.cv}, which doesn't support training continuation, this will always be \code{NULL}.
}
The following names (\code{cb_name} values) are reserved for internal callbacks:\itemize{
\item print_evaluation
\item evaluation_log
\item reset_parameters
\item early_stop
\item save_model
\item cv_predict
\item gblinear_history
}
The following names are reserved for other non-callback attributes:\itemize{
\item names
\item class
\item call
\item params
\item niter
\item nfeatures
\item folds
}
When using the built-in early stopping callback (\link{xgb.cb.early.stop}), said callback
will always be executed before the others, as it sets some booster C-level attributes
that other callbacks might also use. Otherwise, the order of execution will match with
the order in which the callbacks are passed to the model fitting function.
}
\examples{
# Example constructing a custom callback that calculates
# squared error on the training data (no separate test set),
# and outputs the per-iteration results.
ssq_callback <- xgb.Callback(
cb_name = "ssq",
f_before_training = function(env, model, data, evals,
begin_iteration, end_iteration) {
# A vector to keep track of a number at each iteration
env$logs <- rep(NA_real_, end_iteration - begin_iteration + 1)
},
f_after_iter = function(env, model, data, evals, iteration, iter_feval) {
# This calculates the sum of squared errors on the training data.
# Note that this can be better done by passing an 'evals' entry,
# but this demonstrates a way in which callbacks can be structured.
pred <- predict(model, data)
err <- pred - getinfo(data, "label")
sq_err <- sum(err^2)
env$logs[iteration] <- sq_err
cat(
sprintf(
"Squared error at iteration \%d: \%.2f\n",
iteration, sq_err
)
)
# A return value of 'TRUE' here would signal to finalize the training
return(FALSE)
},
f_after_training = function(env, model, data, evals, iteration,
final_feval, prev_cb_res) {
return(env$logs)
}
)
data(mtcars)
y <- mtcars$mpg
x <- as.matrix(mtcars[, -1])
dm <- xgb.DMatrix(x, label = y, nthread = 1)
model <- xgb.train(
data = dm,
params = list(objective = "reg:squarederror", nthread = 1),
nrounds = 5,
callbacks = list(ssq_callback),
keep_extra_attributes = TRUE
)
# Result from 'f_after_iter' will be available as an attribute
attributes(model)$ssq
}
\seealso{
Built-in callbacks:\itemize{
\item \link{xgb.cb.print.evaluation}
\item \link{xgb.cb.evaluation.log}
\item \link{xgb.cb.reset.parameters}
\item \link{xgb.cb.early.stop}
\item \link{xgb.cb.save.model}
\item \link{xgb.cb.cv.predict}
\item \link{xgb.cb.gblinear.history}
}
}

View File

@ -2,42 +2,199 @@
% Please edit documentation in R/xgb.DMatrix.R
\name{xgb.DMatrix}
\alias{xgb.DMatrix}
\alias{xgb.QuantileDMatrix}
\title{Construct xgb.DMatrix object}
\usage{
xgb.DMatrix(
data,
info = list(),
label = NULL,
weight = NULL,
base_margin = NULL,
missing = NA,
silent = FALSE,
feature_names = colnames(data),
feature_types = NULL,
nthread = NULL,
...
group = NULL,
qid = NULL,
label_lower_bound = NULL,
label_upper_bound = NULL,
feature_weights = NULL,
data_split_mode = "row"
)
xgb.QuantileDMatrix(
data,
label = NULL,
weight = NULL,
base_margin = NULL,
missing = NA,
feature_names = colnames(data),
feature_types = NULL,
nthread = NULL,
group = NULL,
qid = NULL,
label_lower_bound = NULL,
label_upper_bound = NULL,
feature_weights = NULL,
ref = NULL,
max_bin = NULL
)
}
\arguments{
\item{data}{a \code{matrix} object (either numeric or integer), a \code{dgCMatrix} object, or a character
string representing a filename.}
\item{data}{Data from which to create a DMatrix, which can then be used for fitting models or
for getting predictions out of a fitted model.
\item{info}{a named list of additional information to store in the \code{xgb.DMatrix} object.
See \code{\link{setinfo}} for the specific allowed kinds of}
Supported input types are as follows:\itemize{
\item \code{matrix} objects, with types \code{numeric}, \code{integer}, or \code{logical}.
\item \code{data.frame} objects, with columns of types \code{numeric}, \code{integer}, \code{logical}, or \code{factor}.
\item{missing}{a float value to represents missing values in data (used only when input is a dense matrix).
It is useful when a 0 or some other extreme value represents missing values in data.}
Note that xgboost uses base-0 encoding for categorical types, hence \code{factor} types (which use base-1
encoding) will be converted inside the function call. Be aware that the encoding used for \code{factor}
types is not kept as part of the model, so in subsequent calls to \code{predict}, it is the user's
responsibility to ensure that factor columns have the same levels as the ones from which the DMatrix
was constructed.
Other column types are not supported.
\item CSR matrices, as class \code{dgRMatrix} from package \code{Matrix}.
\item CSC matrices, as class \code{dgCMatrix} from package \code{Matrix}. These are \bold{not} supported for
'xgb.QuantileDMatrix'.
\item Single-row CSR matrices, as class \code{dsparseVector} from package \code{Matrix}, which is interpreted
as a single row (only when making predictions from a fitted model).
\item Text files in a supported format, passed as a \code{character} variable containing the URI path to
the file, with an optional format specifier.
These are \bold{not} supported for \code{xgb.QuantileDMatrix}. Supported formats are:\itemize{
\item XGBoost's own binary format for DMatrices, as produced by \link{xgb.DMatrix.save}.
\item SVMLight (a.k.a. LibSVM) format for CSR matrices. This format can be signaled by suffix
\code{?format=libsvm} at the end of the file path. It will be the default format if not
otherwise specified.
\item CSV files (comma-separated values). This format can be specified by adding suffix
\code{?format=csv} at the end of the file path. It will \bold{not} be auto-deduced from file extensions.
}
Be aware that the format of the file will not be auto-deduced - for example, if a file is named 'file.csv',
it will not look at the extension or file contents to determine that it contains comma-separated values.
Instead, the format must be specified following the URI format, so the input to \code{data} should be passed
like this: \code{"file.csv?format=csv"} (or \code{"file.csv?format=csv&label_column=0"} if the first column
corresponds to the labels).
For more information about passing text files as input, see the articles
\href{https://xgboost.readthedocs.io/en/stable/tutorials/input_format.html}{Text Input Format of DMatrix} and
\href{https://xgboost.readthedocs.io/en/stable/python/python_intro.html#python-data-interface}{Data Interface}.
}}
\item{label}{Label of the training data. For classification problems, should be passed encoded as
integers with numeration starting at zero.}
\item{weight}{Weight for each instance.
Note that, for ranking tasks, weights are per-group: one weight
is assigned to each group (not to each data point). This is because we
only care about the relative ordering of data points within each group,
so it doesn't make sense to assign weights to individual data points.}
\item{base_margin}{Base margin used for boosting from existing model.
\if{html}{\out{<div class="sourceCode">}}\preformatted{ In the case of multi-output models, one can also pass multi-dimensional base_margin.
}\if{html}{\out{</div>}}}
\item{missing}{A float value that represents missing values in data (not used when creating DMatrix
from text files).
It is useful to change when a zero, infinite, or some other extreme value represents missing
values in data.}
\item{silent}{whether to suppress printing an informational message after loading from a file.}
\item{feature_names}{Set names for features. Overrides column names in data
frame and matrix.
\if{html}{\out{<div class="sourceCode">}}\preformatted{ Note: columns are not referenced by name when calling `predict`, so the column order there
must be the same as in the DMatrix construction, regardless of the column names.
}\if{html}{\out{</div>}}}
\item{feature_types}{Set types for features.
If \code{data} is a \code{data.frame} and \code{feature_types} is not supplied, feature types will be deduced
automatically from the column types.
Otherwise, one can pass a character vector with the same length as number of columns in \code{data},
with the following possible values:\itemize{
\item "c", which represents categorical columns.
\item "q", which represents numeric columns.
\item "int", which represents integer columns.
\item "i", which represents logical (boolean) columns.
}
Note that, while categorical types are treated differently from the rest for model fitting
purposes, the other types do not influence the generated model, but have effects in other
functionalities such as feature importances.
\bold{Important}: categorical features, if specified manually through \code{feature_types}, must
be encoded as integers with numeration starting at zero, and the same encoding needs to be
applied when passing data to \code{predict}. Even if passing \code{factor} types, the encoding will
not be saved, so make sure that \code{factor} columns passed to \code{predict} have the same \code{levels}.}
\item{nthread}{Number of threads used for creating DMatrix.}
\item{...}{the \code{info} data could be passed directly as parameters, without creating an \code{info} list.}
\item{group}{Group size for all ranking groups.}
\item{qid}{Query ID for data samples, used for ranking.}
\item{label_lower_bound}{Lower bound for survival training.}
\item{label_upper_bound}{Upper bound for survival training.}
\item{feature_weights}{Set feature weights for column sampling.}
\item{data_split_mode}{When passing a URI (as R \code{character}) as input, this signals
whether to split by row or column. Allowed values are \code{"row"} and \code{"col"}.
In distributed mode, the file is split accordingly; otherwise this is only an indicator of
how the file was split beforehand. Defaults to \code{"row"}.
This is not used when \code{data} is not a URI.}
\item{ref}{The training dataset that provides quantile information, needed when creating
a validation/test dataset with \code{xgb.QuantileDMatrix}. Supplying the training DMatrix
as a reference means that the same quantisation applied to the training data is
applied to the validation/test data.}
\item{max_bin}{The number of histogram bins; should be consistent with the training parameter
\code{max_bin}.
This is only supported when constructing a QuantileDMatrix.}
}
\value{
An 'xgb.DMatrix' object. If calling 'xgb.QuantileDMatrix', it will have an additional
subclass 'xgb.QuantileDMatrix'.
}
\description{
Construct xgb.DMatrix object from either a dense matrix, a sparse matrix, or a local file.
Supported input file formats are either a LIBSVM text file or a binary file that was created previously by
\code{\link{xgb.DMatrix.save}}).
Construct an 'xgb.DMatrix' object from a given data source, which can then be passed to functions
such as \link{xgb.train} or \link{predict.xgb.Booster}.
}
\details{
Function 'xgb.QuantileDMatrix' will construct a DMatrix with quantization for the histogram
method already applied to it, which can be used to reduce memory usage (compared to using a
regular DMatrix first and then creating a quantization out of it) when using the histogram
method (\code{tree_method = "hist"}, which is the default algorithm), but is not usable for the
sorted-indices method (\code{tree_method = "exact"}), nor for the approximate method
(\code{tree_method = "approx"}).
Note that DMatrix objects are not serializable through R functions such as \code{saveRDS} or \code{save}.
If a DMatrix gets serialized and then de-serialized (for example, when saving data in an R session or caching
chunks in an Rmd file), the resulting object will not be usable anymore and will need to be reconstructed
from the original source of data.
}
\examples{
data(agaricus.train, package='xgboost')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
dtrain <- xgb.DMatrix('xgb.DMatrix.data')
if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')
## Keep the number of threads to 1 for examples
nthread <- 1
data.table::setDTthreads(nthread)
dtrain <- with(
agaricus.train, xgb.DMatrix(data, label = label, nthread = nthread)
)
fname <- file.path(tempdir(), "xgb.DMatrix.data")
xgb.DMatrix.save(dtrain, fname)
dtrain <- xgb.DMatrix(fname)
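# Hedged sketches of the other constructors described above (not part of
# the original examples):
# - text files require an explicit format specifier in the URI
#   (hypothetical file path shown for illustration only):
# dtrain_csv <- xgb.DMatrix("file.csv?format=csv&label_column=0")
# - xgb.QuantileDMatrix pre-quantises the data for the histogram method;
#   a dense matrix is used here since CSC matrices are not supported for it:
data(mtcars)
dq <- xgb.QuantileDMatrix(
  as.matrix(mtcars[, -1]), label = mtcars$mpg, nthread = nthread
)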
}

View File

@ -0,0 +1,32 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{xgb.DMatrix.hasinfo}
\alias{xgb.DMatrix.hasinfo}
\title{Check whether DMatrix object has a field}
\usage{
xgb.DMatrix.hasinfo(object, info)
}
\arguments{
\item{object}{The DMatrix object to check for the given \code{info} field.}
\item{info}{The field to check for presence or absence in \code{object}.}
}
\description{
Checks whether an xgb.DMatrix object has a given field assigned to
it, such as weights, labels, etc.
}
\examples{
library(xgboost)
x <- matrix(1:10, nrow = 5)
dm <- xgb.DMatrix(x, nthread = 1)
# 'dm' so far doesn't have any fields set
xgb.DMatrix.hasinfo(dm, "label")
# Fields can be added after construction
setinfo(dm, "label", 1:5)
xgb.DMatrix.hasinfo(dm, "label")
}
\seealso{
\link{xgb.DMatrix}, \link{getinfo.xgb.DMatrix}, \link{setinfo.xgb.DMatrix}
}

View File

@ -15,9 +15,10 @@ xgb.DMatrix.save(dmatrix, fname)
Save xgb.DMatrix object to binary file
}
\examples{
\dontshow{RhpcBLASctl::omp_set_num_threads(1)}
data(agaricus.train, package='xgboost')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label))
xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
dtrain <- xgb.DMatrix('xgb.DMatrix.data')
if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')
dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
fname <- file.path(tempdir(), "xgb.DMatrix.data")
xgb.DMatrix.save(dtrain, fname)
dtrain <- xgb.DMatrix(fname)
}

View File

@ -0,0 +1,112 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{xgb.DataBatch}
\alias{xgb.DataBatch}
\title{Structure for Data Batches}
\usage{
xgb.DataBatch(
data,
label = NULL,
weight = NULL,
base_margin = NULL,
feature_names = colnames(data),
feature_types = NULL,
group = NULL,
qid = NULL,
label_lower_bound = NULL,
label_upper_bound = NULL,
feature_weights = NULL
)
}
\arguments{
\item{data}{Data belonging to this batch.
Note that not all of the input types supported by \link{xgb.DMatrix} are possible
to pass here. Supported types are:\itemize{
\item \code{matrix}, with types \code{numeric}, \code{integer}, and \code{logical}. Note that for types
\code{integer} and \code{logical}, missing values might not be automatically recognized as
such - see the documentation for parameter \code{missing} in \link{xgb.ExternalDMatrix}
for details on this.
\item \code{data.frame}, with the same types as supported by 'xgb.DMatrix' and the same
conversions applied to it. See the documentation for parameter \code{data} in
\link{xgb.DMatrix} for details on it.
\item CSR matrices, as class \code{dgRMatrix} from package \code{Matrix}.
}}
\item{label}{Label of the training data. For classification problems, should be passed encoded as
integers with numeration starting at zero.}
\item{weight}{Weight for each instance.
Note that, for ranking tasks, weights are per-group: one weight
is assigned to each group (not to each data point). This is because we
only care about the relative ordering of data points within each group,
so it doesn't make sense to assign weights to individual data points.}
\item{base_margin}{Base margin used for boosting from existing model.
\if{html}{\out{<div class="sourceCode">}}\preformatted{ In the case of multi-output models, one can also pass multi-dimensional base_margin.
}\if{html}{\out{</div>}}}
\item{feature_names}{Set names for features. Overrides column names in data
frame and matrix.
\if{html}{\out{<div class="sourceCode">}}\preformatted{ Note: columns are not referenced by name when calling `predict`, so the column order there
must be the same as in the DMatrix construction, regardless of the column names.
}\if{html}{\out{</div>}}}
\item{feature_types}{Set types for features.
If \code{data} is a \code{data.frame} and \code{feature_types} is not supplied, feature types will be deduced
automatically from the column types.
Otherwise, one can pass a character vector with the same length as number of columns in \code{data},
with the following possible values:\itemize{
\item "c", which represents categorical columns.
\item "q", which represents numeric columns.
\item "int", which represents integer columns.
\item "i", which represents logical (boolean) columns.
}
Note that, while categorical types are treated differently from the rest for model fitting
purposes, the other types do not influence the generated model, but have effects in other
functionalities such as feature importances.
\bold{Important}: categorical features, if specified manually through \code{feature_types}, must
be encoded as integers with numeration starting at zero, and the same encoding needs to be
applied when passing data to \code{predict}. Even if passing \code{factor} types, the encoding will
not be saved, so make sure that \code{factor} columns passed to \code{predict} have the same \code{levels}.}
\item{group}{Group size for all ranking groups.}
\item{qid}{Query ID for data samples, used for ranking.}
\item{label_lower_bound}{Lower bound for survival training.}
\item{label_upper_bound}{Upper bound for survival training.}
\item{feature_weights}{Set feature weights for column sampling.}
}
\value{
An object of class \code{xgb.DataBatch}, which is just a list containing the
data and parameters passed here. It does \bold{not} inherit from \code{xgb.DMatrix}.
}
\description{
Helper function to supply data in batches to a data iterator when
constructing a DMatrix from external memory through \link{xgb.ExternalDMatrix}
or through \link{xgb.QuantileDMatrix.from_iterator}.
This function is \bold{only} meant to be called inside of a callback function (which
is passed as argument to function \link{xgb.DataIter} to construct a data iterator)
when constructing a DMatrix through external memory - otherwise, one should call
\link{xgb.DMatrix} or \link{xgb.QuantileDMatrix}.
The object that results from calling this function directly is \bold{not} like
an \code{xgb.DMatrix} - i.e. it cannot be used to train a model, nor to get predictions - the only
possible usage is to supply data to an iterator, from which a DMatrix is then constructed.
For more information and for example usage, see the documentation for \link{xgb.ExternalDMatrix}.
}
\seealso{
\link{xgb.DataIter}, \link{xgb.ExternalDMatrix}.
}

Some files were not shown because too many files have changed in this diff.