Compare commits

939 Commits

Author SHA1 Message Date
Jiaming Yuan
36eb41c960 Bump version to 1.7.6 (#9305) 2023-06-16 03:33:16 +08:00
Jiaming Yuan
39ddf40a8d [backport] Optimize prediction with QuantileDMatrix. (#9096) (#9303) 2023-06-15 23:32:03 +08:00
Jiaming Yuan
573f1c7db4 [backport] Fix monotone constraints on CPU. (#9122) (#9287)
* [backport] Fix monotone constraints on CPU. (#9122)
2023-06-11 17:51:25 +08:00
Jiaming Yuan
abc80d2a6d [backport] Improve doxygen (#8959) (#9284)
* Remove Sphinx build from GH Action

* Build Doxygen as part of RTD build

* Add jQuery

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-06-11 13:22:23 +08:00
Jiaming Yuan
e882fb3262 [backport] [spark] Make spark model have the same UID with its estimator (#9022) (#9285)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
Co-authored-by: WeichenXu <weichen.xu@databricks.com>
2023-06-11 13:18:23 +08:00
Jiaming Yuan
3218f6cd3c [backport] Disable dense opt for distributed training. (#9272) (#9288) 2023-06-11 11:08:45 +08:00
Jiaming Yuan
a962611de7 Disable SHAP test on 1.7 (#9290) 2023-06-11 02:13:36 +08:00
Jiaming Yuan
14476e8868 [backport] Fix tests with pandas 2.0. (#9014) (#9289)
* Fix tests with pandas 2.0.

- `is_categorical` is replaced by `is_categorical_dtype`.
- one hot encoding returns boolean type instead of integer type.
2023-06-11 00:52:44 +08:00
Jiaming Yuan
03f3879b71 [backport] [doc] fix the cudf installation [skip ci] (#9106) (#9286)
Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2023-06-10 04:09:27 +08:00
Jiaming Yuan
21d95f3d8f [backport] [doc][R] Update link. (#8998) (#9001) 2023-03-30 20:02:31 +08:00
Jiaming Yuan
5cd4015d70 [backport] Fill column size. (#8997) 2023-03-30 15:21:42 +08:00
Jiaming Yuan
b8c6b86792 Bump version to 1.7.5. (#8994) 2023-03-29 21:41:10 +08:00
Jiaming Yuan
1baebe231b [backport] [CI] Fix Windows wheel to be compatible with Poetry (#8991) (#8992)
* [CI] Fix Windows wheel to be compatible with Poetry

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-03-29 14:26:20 +08:00
Jiaming Yuan
365da0b8f4 [backport] [doc] Add missing document for pyspark ranker. (#8692) (#8990) 2023-03-29 12:02:51 +08:00
Jiaming Yuan
f5f03dfb61 [backport] Update dmlc-core to get C++17 deprecation warning (#8855) (#8982)
Co-authored-by: Rong Ou <rong.ou@gmail.com>
2023-03-27 21:31:30 +08:00
Jiaming Yuan
a1c209182d [backport] Update c++ requirement to 17 for the R package. (#8860) (#8983) 2023-03-27 18:24:25 +08:00
Jiaming Yuan
4be75d852c [backport] Fix scope of feature set pointers (#8850) (#8972)
---------

Co-authored-by: Rory Mitchell <r.a.mitchell.nz@gmail.com>
2023-03-27 00:33:08 +08:00
Jiaming Yuan
ba50e6eb62 [backport] [CI] Require C++17 + CMake 3.18; Use CUDA 11.8 in CI (#8853) (#8971)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-03-26 00:10:03 +08:00
Jiaming Yuan
36ad160501 Bump version to 1.7.4. (#8805) 2023-02-16 06:40:01 +08:00
Jiaming Yuan
c22f6db4bf [backport] Fix CPU bin compression with categorical data. (#8809) (#8810)
* [backport] Fix CPU bin compression with categorical data. (#8809)

* Fix CPU bin compression with categorical data.

* The bug causes the maximum category to be less than 256 or the maximum number of bins when
the input data is dense.

* Avoid test symbol.
2023-02-16 06:39:25 +08:00
Jiaming Yuan
f15a6d2b19 [backport] Fix ranking with quantile dmatrix and group weight. (#8762) (#8800)
* [backport] Fix ranking with quantile dmatrix and group weight. (#8762)

* backport test utilities.
2023-02-15 02:45:09 +08:00
Jiaming Yuan
08a547f5c2 [backport] Fix feature types param (#8772) (#8801)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
Co-authored-by: WeichenXu <weichen.xu@databricks.com>
2023-02-15 01:39:20 +08:00
Jiaming Yuan
60303db2ee [backport] Fix GPU L1 error. (#8749) (#8770)
* [backport] Fix GPU L1 error. (#8749)

* Fix backport.
2023-02-09 20:16:39 +08:00
Jiaming Yuan
df984f9c43 [backport] Fix different number of features in gpu_hist evaluator. (#8754) (#8769)
Co-authored-by: Rory Mitchell <r.a.mitchell.nz@gmail.com>
2023-02-09 18:31:49 +08:00
Jiaming Yuan
2f22f8d49b [backport] Make sure input numpy array is aligned. (#8690) (#8696) (#8734)
* [backport] Make sure input numpy array is aligned. (#8690)

- use `np.require` to specify that the alignment is required.
- scipy csr as well.
- validate input pointer in `ArrayInterface`.

* Workaround CUDA warning. (#8696)

* backport from half type support for alignment.

* fix import.
2023-02-06 16:58:15 +08:00
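
For context, the alignment requirement above can be expressed with `np.require`; a minimal sketch of the documented NumPy behavior, not the actual patch:

    import numpy as np

    # np.require returns an array satisfying the requested flags, copying only
    # when necessary: "A" asks for ALIGNED, "C" for C_CONTIGUOUS (sketch only).
    X = np.asarray([[1.0, 2.0], [3.0, 4.0]])
    X_aligned = np.require(X, dtype=np.float32, requirements=["A", "C"])
    assert X_aligned.flags["ALIGNED"] and X_aligned.flags["C_CONTIGUOUS"]
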
Jiaming Yuan
68d86336d7 [backport] [R] fix OpenMP detection on macOS (#8684) (#8732)
Co-authored-by: James Lamb <jaylamb20@gmail.com>
2023-01-29 12:43:10 +08:00
Jiaming Yuan
76bdca072a [R] Fix threads used to create DMatrix in predict. (#8681) (#8682) 2023-01-15 04:00:31 +08:00
Jiaming Yuan
021e6a842a [backport] [R] Get CXX flags from R CMD config. (#8669) (#8680) 2023-01-14 18:46:59 +08:00
Jiaming Yuan
e5bef4ffce [backport] Fix threads in DMatrix slice. (#8667) (#8679) 2023-01-14 18:46:04 +08:00
Jiaming Yuan
10bb0a74ef [backport] [CI] Skip pyspark sparse tests. (#8675) (#8678) 2023-01-14 06:40:17 +08:00
Jiaming Yuan
e803d06d8c [backport] [R] Remove unused assert definition. (#8526) (#8668) 2023-01-13 04:55:29 +08:00
Jiaming Yuan
ccf43d4ba0 Bump R package version to 1.7.3. (#8649) 2023-01-06 20:34:05 +08:00
Jiaming Yuan
dd58c2ac47 Bump version to 1.7.3. (#8646) 2023-01-06 17:55:51 +08:00
Jiaming Yuan
899e4c8988 [backport] Do not return internal value for get_params. (#8634) (#8642) 2023-01-06 02:28:39 +08:00
Jiaming Yuan
a2085bf223 [backport] Fix loading GPU pickle with a CPU-only xgboost distribution. (#8632) (#8641)
We can handle loading the pickle on a CPU-only machine if XGBoost is built with CUDA
enabled (the Linux and Windows PyPI packages), but not if the distribution is CPU-only (the macOS
PyPI package).
2023-01-06 02:28:21 +08:00
Jiaming Yuan
067b704e58 [backport] Fix inference with categorical feature. (#8591) (#8602) (#8638)
* Fix inference with categorical feature. (#8591)

* Fix windows build on buildkite. (#8602)

* workaround.
2023-01-06 01:17:49 +08:00
Jiaming Yuan
1a834b2b85 Fix linalg iterator. (#8603) (#8639) 2023-01-05 23:16:10 +08:00
Jiaming Yuan
162b48a1a4 [backport] [CI] Disable gtest with RMM (#8620) (#8640)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-01-05 23:13:45 +08:00
Jiaming Yuan
83a078b7e5 [backport] Fix sklearn test that calls a removed field (#8579) (#8636)
Co-authored-by: Rong Ou <rong.ou@gmail.com>
2023-01-05 21:17:05 +08:00
Jiaming Yuan
575fba651b [backport] [CI] Fix CI with updated dependencies. (#8631) (#8635) 2023-01-05 19:10:58 +08:00
Jiaming Yuan
62ed8b5fef Bump release version to 1.7.2. (#8569) 2022-12-08 21:46:26 +08:00
Jiaming Yuan
a980e10744 Properly await async method client.wait_for_workers (#8558) (#8567)
* Properly await async method client.wait_for_workers

* ignore mypy error.

Co-authored-by: jiamingy <jm.yuan@outlook.com>

Co-authored-by: Matthew Rocklin <mrocklin@gmail.com>
2022-12-07 23:25:05 +08:00
Jiaming Yuan
59c54e361b [pyspark] Make QDM optional based on cuDF check (#8471) (#8556)
Co-authored-by: WeichenXu <weichen.xu@databricks.com>
2022-12-07 03:19:35 +08:00
Jiaming Yuan
60a8c8ebba [pyspark] sort qid for SparkRanker (#8497) (#8555)
* [pyspark] sort qid for SparkRanker

* resolve comments

Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2022-12-07 02:07:37 +08:00
Jiaming Yuan
58bc225657 [backport] [CI] Fix github action mismatched glibcxx. (#8551) (#8552)
Split up the Linux test to use the toolchain from conda forge.
2022-12-06 21:35:26 +08:00
Jiaming Yuan
850b53100f [backport] [doc] Fix outdated document [skip ci] (#8527) (#8553)
* [doc] Fix document around categorical parameters. [skip ci]

* note on validate parameter [skip ci]

* Fix dask doc as well [skip ci]
2022-12-06 18:21:14 +08:00
Philip Hyunsu Cho
67b657dad0 SO_DOMAIN is not supported on IBM i; use getsockname instead (#8437) (#8500) 2022-11-30 11:47:59 -08:00
Philip Hyunsu Cho
db14e3feb7 Support null value in CUDA array interface. (#8486) (#8499) 2022-11-30 11:44:54 -08:00
Robert Maynard
9372370dda Work with newer thrust and libcudacxx (#8432)
* Thrust 1.17 removes the experimental/pinned_allocator.

When xgboost is brought into a large project, it can
be compiled against Thrust 1.17+, which no longer offers
this experimental allocator.

To ensure that xgboost keeps working in all environments going forward, we provide an
xgboost-namespaced version of the pinned_allocator that previously lived in Thrust.

* Update gputreeshap to work with libcudacxx 1.9
2022-11-11 01:15:25 +08:00
Jiaming Yuan
1136a7e0c3 Fix CRAN note on cleanup. (#8447) 2022-11-09 14:22:54 +08:00
Jiaming Yuan
a347cd512b [backport] [R] Fix CRAN test notes. (#8428) (#8440)
- Limit the number of used CPU cores in examples.
- Add a note for the constraint.
- Bring back the cleanup script.
2022-11-09 07:12:46 +08:00
Jiaming Yuan
9ff0c0832a Fix 1.7.1 version file. (#8427) 2022-11-06 03:19:54 +08:00
Philip Hyunsu Cho
534c940a7e Release 1.7.1 (#8413)
* Release 1.7.1

* Review comment
2022-11-03 15:37:54 -07:00
Philip Hyunsu Cho
5b76acccff Add back xgboost.rabit for backwards compatibility (#8408) (#8411) 2022-11-02 07:56:55 -07:00
Hyunsu Cho
4bc59ef7c3 Release 1.7 2022-10-31 10:53:07 -07:00
Jiaming Yuan
e43cd60c0e [backport] Type fix for WebAssembly. (#8369) (#8394)
Co-authored-by: Yizhi Liu <liuyizhi@apache.org>
2022-10-26 20:47:16 +08:00
Jiaming Yuan
3f92970a39 [backport] Fix CUDA async stream. (#8380) (#8392) 2022-10-26 20:46:38 +08:00
Jiaming Yuan
e17f7010bf [backport][doc] Cleanup outdated documents for GPU. [skip ci] (#8378) (#8393) 2022-10-26 19:49:00 +08:00
Jiaming Yuan
aa30ce10da [backport][pyspark] Improve tutorial on enabling GPU support. (#8385) [skip ci] (#8391)
- Quote the databricks doc on how to manage dependencies.
- Some wording changes.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-10-26 19:31:34 +08:00
Philip Hyunsu Cho
153d995b58 Fix building XGBoost with libomp 15 (#8384) (#8387) 2022-10-26 00:43:10 -07:00
Jiaming Yuan
463313d9be Remove cleanup script in R package. (#8370) 2022-10-20 14:22:13 +08:00
Jiaming Yuan
7cf58a2c65 Make 1.7.0rc1. (#8365) 2022-10-20 12:01:18 +08:00
Jiaming Yuan
28a466ab51 Fixes for R checks. (#8330)
- Bump configure.ac version.
- Remove amalgamation to reduce the build time of a single object, with the added benefit that we can use parallel builds during development.
- Fix c function prototype warning.
- Remove Windows automake file generation step to make the build script easier to understand.
2022-10-20 02:52:54 +08:00
Dmitry Razdoburdin
5bd849f1b5 Unify the partitioner for hist and approx.
Co-authored-by: dmitry.razdoburdin <drazdobu@jfldaal005.jf.intel.com>
Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-10-20 02:49:20 +08:00
Jiaming Yuan
c69af90319 Fix github action r tests. (#8364) 2022-10-20 01:07:18 +08:00
Jiaming Yuan
c884b9e888 Validate features for inplace predict. (#8359) 2022-10-19 23:05:36 +08:00
Joyce
52977f0cdf Create Security Policy (#8360)
* chore: create security policy

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* chore: only latest release on security policy

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* chore: security policy support on effort base

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* Use dedicated e-mail address for security reporting

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-10-18 17:15:30 -07:00
luca-s
c47c71e34f XGBRanker documentation: few clarifications (#8356) 2022-10-19 01:54:14 +08:00
Bobby Wang
76f95a6667 [pyspark] Filter out the unsupported train parameters (#8355) 2022-10-18 23:26:02 +08:00
Jiaming Yuan
3901f5d9db [pyspark] Cleanup data processing. (#8344)
* Enable additional combinations of ctor parameters.
* Unify procedures for QuantileDMatrix and DMatrix.
2022-10-18 14:56:23 +08:00
Rong Ou
521086d56b Make federated client more robust (#8351) 2022-10-18 13:52:44 +08:00
luca-s
5647fc6542 XGBRanker documentation: missing default objective (#8347) 2022-10-18 10:43:29 +08:00
Rong Ou
8f3dee58be Speed up tests with federated learning enabled (#8350)
* Speed up tests with federated learning enabled

* Re-enable timeouts

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-17 15:17:04 -07:00
Jiaming Yuan
031d66ec27 Configuration for init estimation. (#8343)
* Configuration for init estimation.

* Check whether the model needs configuration based on const attribute `ModelFitted`
instead of a mutable state.
* Add parameter `boost_from_average` to tell whether the user has specified base score.
* Add tests.
2022-10-18 01:52:24 +08:00
Jiaming Yuan
2176e511fc Disable pytest-timeout for now. (#8348) 2022-10-17 23:06:10 +08:00
Jiaming Yuan
fcddbc9264 Fix incorrect function name. (#8346) 2022-10-17 19:28:20 +08:00
Rong Ou
80e10e02ab Avoid blank lines with federated training (#8342) 2022-10-14 14:55:01 +08:00
Rong Ou
b3208aac4e Fix NVFLARE demo (#8340) 2022-10-14 12:18:34 +08:00
Jiaming Yuan
748d516c50 [pyspark] Enable running GPU tests on variable number of GPUs. (#8335) 2022-10-13 21:03:45 +08:00
Jiaming Yuan
4633b476e9 [doc] Display survival demos in sphinx doc. [skip ci] (#8328) 2022-10-13 20:51:23 +08:00
Jiaming Yuan
3ef1703553 Allow using string view to find JSON value. (#8332)
- Allow comparison between string and string view.
- Fix compiler warnings.
2022-10-13 17:10:13 +08:00
Philip Hyunsu Cho
29595102b9 [CI] Set up test analytics for CPU Python tests (#8333)
* [CI] Set up test analytics for CPU Python tests

* Install test collector
2022-10-12 23:15:50 -07:00
Philip Hyunsu Cho
2faa744aba [CI] Test federated learning plugin in the CI (#8325) 2022-10-12 13:57:39 -07:00
Jiaming Yuan
97a5b088a5 [pyspark] Use quantile dmatrix. (#8284) 2022-10-12 20:38:53 +08:00
Rory Mitchell
ce0382dcb0 [CI] Refactor tests to reduce CI time. (#8312) 2022-10-12 11:32:06 +02:00
Rong Ou
39afdac3be Better error message when world size and rank are set as strings (#8316)
Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-10-12 15:53:25 +08:00
Rory Mitchell
210915c985 Use integer gradients in gpu_hist split evaluation (#8274) 2022-10-11 12:16:27 +02:00
Jiaming Yuan
c68684ff4c Update parameter for categorical feature. (#8285) 2022-10-10 19:48:29 +08:00
Jiaming Yuan
5545c49cfc Require keyword args for data iterator. (#8327) 2022-10-10 17:47:13 +08:00
Jiaming Yuan
e1f9f80df2 Use gpu predictor for get csr test. (#8323) 2022-10-10 16:12:37 +08:00
Philip Hyunsu Cho
a71421e825 [CI] Update GitHub Actions to use macos-11 (#8321) 2022-10-08 00:40:43 -07:00
Philip Hyunsu Cho
d70e59fefc Fix Intel's link [skip ci] 2022-10-06 16:55:42 -07:00
Philip Hyunsu Cho
50ff8a2623 More CI improvements (#8313)
* Reduce clutter in log of Python test

* Set up BuildKite test analytics

* Add separate step for building containers

* Enable incremental update of CI stack; custom agent IAM policy
2022-10-06 06:33:46 -08:00
Philip Hyunsu Cho
bc7a6ec603 Fix clang tidy (#8314)
* Fix clang-tidy

* Exempt clang-tidy from budget check

* Move clang-tidy
2022-10-06 05:16:06 -08:00
Dmitry Razdoburdin
c24e9d712c Dispatcher for template parameters of BuildHist Kernels (#8259)
* Introducing Column Wise Hist Building

* linting

* more linting

* bug fixing

* Removing column sampling optimization for a while to simplify the review process.

* linting

* Removing unnecessary changes

* Use DispatchBinType in hist_util.cc

* Adding force_read_by column flag to buildhist. Adding tests for column-wise buildhist.

* Introducing new dispatcher for compile time flags in hist building

* Fixing bug with use of DispatchBinType

* Fixing build

* Merging with master branch

Co-authored-by: dmitry.razdoburdin <drazdobu@jfldaal005.jf.intel.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-10-06 03:02:29 -08:00
Rong Ou
8d4038da57 Don't split input data in federated mode (#8279)
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-05 18:19:28 -08:00
Philip Hyunsu Cho
66fd9f5207 Update sponsors list [skip ci] (#8309) 2022-10-05 16:40:46 -08:00
Rory Mitchell
909e49e214 Reduce docker image size. (#8306) 2022-10-05 15:55:51 -08:00
Rong Ou
668b8a0ea4 [Breaking] Switch from rabit to the collective communicator (#8257)
* Switch from rabit to the collective communicator

* fix size_t specialization

* really fix size_t

* try again

* add include

* more include

* fix lint errors

* remove rabit includes

* fix pylint error

* return dict from communicator context

* fix communicator shutdown

* fix dask test

* reset communicator mocklist

* fix distributed tests

* do not save device communicator

* fix jvm gpu tests

* add python test for federated communicator

* Update gputreeshap submodule

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-05 14:39:01 -08:00
Jiaming Yuan
e47b3a3da3 Upgrade mypy. (#8302)
Some breaking changes were made in mypy.
2022-10-05 14:31:59 +08:00
Jiaming Yuan
97c3a80a34 Add C document to sphinx, fix arrow. (#8300)
- Group C API.
- Add C API sphinx doc.
- Consistent use of `OptionalArg` and the parameter name `config`.
- Remove call to deprecated functions in demo.
- Fix some formatting errors.
- Add links to C examples in the document (only visible with doxygen pages)
- Fix arrow.
2022-10-05 09:52:15 +08:00
Philip Hyunsu Cho
b2bbf49015 Additional improvements to CI (#8303)
* Wait until budget check is complete

* Ensure that multi-GPU tests run for the master branch

* Fix
2022-10-04 03:03:38 -08:00
Rory Mitchell
d686bf52a6 Reduce time for some multi-gpu tests (#8288)
* Faster dask tests

* Reuse AllReducer objects in tests.

* Faster boost from prediction tests.

* Use rmm dask fixture.

* Speed up dask demo.

* mypy

* Format with black.

* mypy

* Clang-tidy

Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-10-04 02:49:33 -08:00
Philip Hyunsu Cho
ca0547bb65 [CI] Use RAPIDS 22.10 (#8298)
* [CI] Use RAPIDS 22.10

* Store CUDA and RAPIDS versions in one place

* Fix

* Add missing #include

* Update gputreeshap submodule

* Fix

* Remove outdated distributed tests
2022-10-03 23:18:07 -08:00
Philip Hyunsu Cho
37886a5dff [CI] Document the use of Docker wrapper script (#8297)
* [CI] Document the use of Docker wrapper script

* Grammer fixes

* Document buildkite pipeline defs

* tests/buildkite/*.sh isn't meant to run locally
2022-10-02 12:45:00 -07:00
Philip Hyunsu Cho
9af99760d4 Various CI savings (#8291) 2022-09-30 05:42:56 -07:00
Jiaming Yuan
299e5000a4 Fix buildkite label. (#8287) 2022-09-29 17:33:19 -07:00
Jiaming Yuan
55cf24cc32 Obtain CSR matrix from DMatrix. (#8269) 2022-09-29 20:41:43 +08:00
Philip Hyunsu Cho
b14c44ee5e [CI] Put Multi-GPU test suites in separate pipeline (#8286)
* [CI] Put Multi-GPU test suites in separate pipeline

* Avoid unset var error in Bash
2022-09-29 00:41:48 -08:00
Bobby Wang
cbf3a5f918 [pyspark][doc] add more doc for pyspark (#8271)
Co-authored-by: fis <jm.yuan@outlook.com>
2022-09-29 11:58:18 +08:00
Bobby Wang
c91fed083d [pyspark] disable repartition_random_shuffle by default (#8283) 2022-09-29 10:50:51 +08:00
Jiaming Yuan
6925b222e0 Fix mixed types with cuDF. (#8280) 2022-09-29 00:57:52 +08:00
Jiaming Yuan
f835368bcf Mark next release as 1.7 instead of 2.0 (#8281) 2022-09-28 14:33:37 +08:00
Jiaming Yuan
6d1452074a Remove MGPU cpp tests. (#8276)
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-09-27 21:18:23 +08:00
Jiaming Yuan
fcab51aa82 Support more pandas nullable types (#8262)
- Float32/64
- Category.
2022-09-27 01:59:50 +08:00
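
A minimal sketch of the inputs this enables, assuming the usual DMatrix construction path (`enable_categorical` is needed for categorical columns):

    import pandas as pd
    import xgboost as xgb

    # Nullable Float64 and Categorical columns, per the commit above (sketch).
    df = pd.DataFrame({
        "a": pd.array([1.0, None, 3.0], dtype="Float64"),
        "b": pd.Categorical(["x", "y", "x"]),
    })
    Xy = xgb.DMatrix(df, label=[0.0, 1.0, 0.0], enable_categorical=True)
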
Alex
1082ccd3cc GitHub Workflows security hardening (#8267)
Signed-off-by: Alex <aleksandrosansan@gmail.com>
2022-09-27 00:54:27 +08:00
Rory Mitchell
8f77677193 Use quantised gradients in gpu_hist histograms (#8246) 2022-09-26 17:35:35 +02:00
Jiaming Yuan
4056974e37 Fix sparse threshold warning. (#8268) 2022-09-26 22:22:11 +08:00
WeichenXu
ff71c69adf [pyspark] Add validation for param 'early_stopping_rounds' and 'validation_indicator_col' (#8250)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-09-26 17:43:03 +08:00
Jiaming Yuan
0cd11b893a [doc] Fix sphinx build. (#8270) 2022-09-26 12:33:31 +08:00
Joyce
be5b95e743 Enable OpenSSF Scorecard Github Action (#8263)
* chore: enable scorecard github action

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

* docs: add scorecard badge to the README file

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>

Signed-off-by: Joyce Brum <joycebrumu.u@gmail.com>
2022-09-25 13:02:36 -07:00
Bobby Wang
8d247f0d64 [jvm-packages] fix spark-rapids compatibility issue (#8240)
* [jvm-packages] fix spark-rapids compatibility issue

spark-rapids (from 22.10) has shimmed GpuColumnVector, which means
we can't call it directly. So this PR calls the UnshimmedGpuColumnVector instead.
2022-09-22 23:31:29 +08:00
WeichenXu
ab342af242 [pyspark] Fix xgboost spark estimator dataset repartition issues (#8231) 2022-09-22 21:31:41 +08:00
Jiaming Yuan
3fd331f8f2 Add checks to C pointer arguments. (#8254) 2022-09-22 19:02:22 +08:00
Dmitry Razdoburdin
eb7bbee2c9 Optional by-column histogram build. (#8233)
Co-authored-by: dmitry.razdoburdin <drazdobu@jfldaal005.jf.intel.com>
2022-09-22 05:16:13 +08:00
Jiaming Yuan
b791446623 Initial support for IPv6 (#8225)
- Merge rabit socket into XGBoost.
- Dask interface support.
- Add test to the socket.
2022-09-21 18:06:50 +08:00
Rong Ou
7d43e74e71 JNI wrapper for the collective communicator (#8242) 2022-09-21 04:20:25 +08:00
Jiaming Yuan
fffb1fca52 Calculate base_score based on input labels for mae. (#8107)
Fit an intercept as base score for abs loss.
2022-09-20 20:53:54 +08:00
Bobby Wang
4f42aa5f12 [pyspark] make the model saved by pyspark compatible (#8219)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-09-20 16:43:49 +08:00
Bobby Wang
520586ffa7 [pyspark] fix empty data issue when constructing DMatrix (#8245)
Co-authored-by: Hyunsu Philip Cho <chohyu01@cs.washington.edu>
2022-09-20 16:43:20 +08:00
Philip Hyunsu Cho
70df36c99c [CI] Retire Jenkins server (#8243) 2022-09-14 08:46:23 -07:00
Jiaming Yuan
2e63af6117 Mitigate flaky data iter test. (#8244)
- Reduce the number of batches.
- Verify labels.
2022-09-14 17:54:14 +08:00
Jiaming Yuan
bdf265076d Make QuantileDMatrix the default for sklearn estimators. (#8220) 2022-09-13 13:52:19 +08:00
Rong Ou
a2686543a9 Common interface for collective communication (#8057)
* implement broadcast for federated communicator

* implement allreduce

* add communicator factory

* add device adapter

* add device communicator to factory

* add rabit communicator

* add rabit communicator to the factory

* add nccl device communicator

* add synchronize to device communicator

* add back print and getprocessorname

* add python wrapper and c api

* clean up types

* fix non-gpu build

* try to fix ci

* fix std::size_t

* portable string compare ignore case

* c style size_t

* fix lint errors

* cross platform setenv

* fix memory leak

* fix lint errors

* address review feedback

* add python test for rabit communicator

* fix failing gtest

* use json to configure communicators

* fix lint error

* get rid of factories

* fix cpu build

* fix include

* fix python import

* don't export collective.py yet

* skip collective communicator pytest on windows

* add review feedback

* update documentation

* remove mpi communicator type

* fix tests

* shutdown the communicator separately

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-09-12 15:21:12 -07:00
Jiaming Yuan
bc818316f2 Prepare for improving Windows networking compatibility. (#8234)
* Prepare for improving Windows networking compatibility.

* Include dmlc filesystem indirectly as dmlc/filesystem.h includes windows.h, which
  conflicts with winsock2.h
* Define `NOMINMAX` conditionally.
* Link the winsock library when mysys32 is used.
* Add config file for read the doc.
2022-09-10 15:16:49 +08:00
Jiaming Yuan
dd44ac91b8 [CI] Use binary R dependencies on Windows. (#8241) 2022-09-09 19:51:15 -07:00
Philip Hyunsu Cho
23faf656ad [CI] Don't require manual approval for master branch (#8235) 2022-09-08 09:26:22 -08:00
Philip Hyunsu Cho
e888eb2fa9 [CI] Migrate CI pipelines from Jenkins to BuildKite (#8142)
* [CI] Migrate CI pipelines from Jenkins to BuildKite

* Require manual approval

* Less verbose output when pulling Docker

* Remove us-east-2 from metadata.py

* Add documentation

* Add missing underscore

* Add missing punctuation

* More specific instruction

* Better paragraph structure
2022-09-07 16:29:25 -08:00
Philip Hyunsu Cho
b397d64c96 Drop use of deleted virtual function to support older MacOS (#8226)
* Support older MacOS

* Update json.h
2022-09-07 11:25:59 -08:00
Rehan Guha
dc07137a2c Updated dart.rst with correct links (#8229)
Updated the DART paper link, as the old one was broken.
2022-09-08 00:57:09 +08:00
Jiaming Yuan
b5eb36f1af Add max_cat_threshold to GPU and handle missing cat values. (#8212) 2022-09-07 00:57:51 +08:00
Jiaming Yuan
441ffc017a Copy data from Ellpack to GHist. (#8215) 2022-09-06 23:05:49 +08:00
Bobby Wang
7ee10e3dbd [pyspark] Cleanup the comments (#8217) 2022-09-05 16:20:12 +08:00
Jiaming Yuan
ada4a86d1c Fix dask interface with latest cupy. (#8210) 2022-09-03 03:10:43 +08:00
Dmitry Razdoburdin
deae99e662 Optimization/buildhist/hist util (#8218)
* BuildHistKernel optimization

Co-authored-by: dmitry.razdoburdin <drazdobu@jfldaal005.jf.intel.com>
2022-09-02 19:39:45 +08:00
Rong Ou
b78bc734d9 Fix dask.py lint error (#8216) 2022-09-02 16:30:01 +08:00
Philip Hyunsu Cho
56395d120b Work around MSVC behavior wrt constexpr capture (#8211)
* Work around MSVC behavior wrt constexpr capture

* Fix lint
2022-08-31 11:42:08 -08:00
CW
a868498c18 [doc] Update prediction.rst (#8214) 2022-08-31 21:00:12 +08:00
Jiaming Yuan
8dac90a593 Mark parameter validation non-experimental. (#8206) 2022-08-30 15:49:43 +08:00
Rong Ou
d6e2013c5f Set max message size in insecure gRPC (#8203) 2022-08-26 16:33:51 +08:00
WeichenXu
651f0a8889 [pyspark] Fixing xgboost.spark python doc (#8200)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-08-25 14:41:48 +08:00
WeichenXu
d03794ce7a [pyspark] Add param validation for "objective" and "eval_metric" param, and remove invalid booster params (#8173)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-08-24 15:29:43 +08:00
Jiaming Yuan
9b32e6e2dc Fix release script. (#8187) (#8195) 2022-08-23 15:08:30 +08:00
WeichenXu
f4628c22a4 [pyspark] Implement SparkXGBRanker estimator (#8172)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-08-23 02:35:19 +08:00
Philip Hyunsu Cho
35ef8abc27 [CI] Prune unused archs from libnccl (#8179)
* [CI] Prune unused archs from libnccl

* Put pruning logic in CI directory

* Don't use --color in grep
2022-08-21 00:46:16 -08:00
Rong Ou
ad3bc0edee Allow insecure gRPC connections for federated learning (#8181)
* Allow insecure gRPC connections for federated learning

* format
2022-08-19 12:16:14 +08:00
WeichenXu
53d2a733b0 [pyspark] Make Xgboost estimator support using sparse matrix as optimization (#8145)
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2022-08-19 01:57:28 +08:00
Rory Mitchell
1703dc330f Optimise histogram kernels (#8118) 2022-08-18 14:07:26 +02:00
Gavin Zhang
40a10c217d Use make on IBM i system (#8178)
Co-authored-by: GavinZhang <zhanggan@cn.ibm.com>
2022-08-18 12:55:32 +08:00
dependabot[bot]
93966b0d19 Bump hadoop-common from 3.2.3 to 3.2.4 in /jvm-packages/xgboost4j-flink (#8157)
Bumps hadoop-common from 3.2.3 to 3.2.4.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-15 06:47:27 -08:00
Andy Kattine
a9458fd844 Grammar Fix in Introduction to Boosted Trees (#8166)
Added "of" to "objective functions is that they consist of two parts" in line 32 of ./doc/tutorials/model.rst
2022-08-15 15:19:47 +08:00
Ravi Makhija
fa869eebd9 Edit grammar in custom metric tutorial (#8163) 2022-08-13 01:02:25 +08:00
Rory Mitchell
f421c26d35 Tune cuda architectures (#8152) 2022-08-11 13:36:47 -07:00
Jiaming Yuan
16bca5d4a1 Support CPU input for device QuantileDMatrix. (#8136)
- Copy `GHistIndexMatrix` to `Ellpack` when needed.
2022-08-11 21:21:26 +08:00
Jiaming Yuan
36e7c5364d [dask] Deterministic rank assignment. (#8018) 2022-08-11 19:17:58 +08:00
Ravi Makhija
20d1bba1bb Simplify Python getting started example (#8153)
Load data set via `sklearn` rather than a local file path.
2022-08-11 16:42:09 +08:00
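
The simplified pattern looks roughly like this (a sketch, not the demo's exact code):

    from sklearn.datasets import load_iris
    import xgboost as xgb

    # Load the dataset via sklearn instead of a local file path (sketch).
    X, y = load_iris(return_X_y=True)
    clf = xgb.XGBClassifier(n_estimators=10)
    clf.fit(X, y)
    print(clf.predict(X[:5]))
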
Jiaming Yuan
d868126c39 [CI] Fix R build on Jenkins. (#8154) 2022-08-11 14:50:03 +08:00
Jiaming Yuan
570f8ae4ba Use black on more Python files. (#8137) 2022-08-11 01:38:11 +08:00
Jiaming Yuan
bdb291f1c2 [doc] Clarification for feature importance. (#8151) 2022-08-11 00:30:42 +08:00
Jiaming Yuan
446d536c23 Fix loading DMatrix binary in distributed env. (#8149)
- Try to load DMatrix binary before trying to parse text input.
- Remove some unmaintained code.
2022-08-10 22:53:16 +08:00
Jiaming Yuan
8fc60b31bc Update PyPi wheel size limit. (#8150) 2022-08-10 18:49:57 +08:00
Jiaming Yuan
9ae547f994 Use config_context in sklearn interface. (#8141) 2022-08-09 14:48:54 +08:00
Bobby Wang
03cc3b359c [pyspark] support a list of feature column names (#8117) 2022-08-08 17:05:27 +08:00
Jiaming Yuan
bcc8679a05 Update CUDA docker image and NCCL. (#8139) 2022-08-07 16:32:41 +08:00
Praateek Mahajan
ff471b3fab In PySpark Estimator example use the model with validation_indicator (#8131)
* use the validation_indicator model

* use the validation_indicator model for regression
2022-08-03 13:57:41 +08:00
Jiaming Yuan
d87f69215e Quantile DMatrix for CPU. (#8130)
- Add a new `QuantileDMatrix` that works for both CPU and GPU.
- Deprecate `DeviceQuantileDMatrix`.
2022-08-02 15:51:23 +08:00
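
A minimal sketch of the new class, assuming the 1.7 Python API; `QuantileDMatrix` pre-computes the quantized histogram index, which saves memory with the hist tree method:

    import numpy as np
    import xgboost as xgb

    # Works on CPU as well as GPU; replaces the deprecated
    # DeviceQuantileDMatrix (sketch only).
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(256, 4)), rng.normal(size=256)
    Xy = xgb.QuantileDMatrix(X, label=y, max_bin=64)
    booster = xgb.train({"tree_method": "hist"}, Xy, num_boost_round=10)
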
Jiaming Yuan
2cba1d9fcc Fix compatibility with latest cupy. (#8129)
* Fix compatibility with latest cupy.

* Freeze mypy.
2022-08-01 15:24:42 +08:00
Philip Hyunsu Cho
24c2373080 [Doc] Indicate lack of py-xgboost-gpu on Windows (#8127) 2022-07-28 12:57:16 -07:00
Jiaming Yuan
2c70751d1e Implement iterative DMatrix for CPU. (#8116) 2022-07-26 22:34:21 +08:00
Jiaming Yuan
546de5efd2 [pyspark] Cleanup data processing. (#8088)
- Use numpy stack for handling list of arrays.
- Reuse concat function from dask.
- Prepare for `QuantileDMatrix`.
- Remove unused code.
- Use iterator for prediction to avoid initializing xgboost model
2022-07-26 15:00:52 +08:00
Jiaming Yuan
3970e4e6bb Move pylint helper from dmlc-core. (#8101)
* Move pylint helper from dmlc-core.

- Move the helper into the XGBoost ci_build.
- Run it with multiprocessing.

* Fix original test.
2022-07-23 08:12:37 +08:00
Jiaming Yuan
7785d65c8a Fix feature weights with multiple column sampling. (#8100) 2022-07-22 20:23:05 +08:00
Jiaming Yuan
4a4e5c7c18 Prepare gradient index for Quantile DMatrix. (#8103)
* Prepare gradient index for Quantile DMatrix.

- Implement push batch with adapter batch.
- Implement `GetFvalue` for prediction.
2022-07-22 17:26:33 +08:00
Rory Mitchell
1be09848a7 Refactor split evaluation kernel (#8073) 2022-07-21 15:41:50 +02:00
Tim Gates
cb40bbdadd docs: fix simple typo, cannonical -> canonical (#8099)
There is a small typo in src/common/partition_builder.h.

Should read `canonical` rather than `cannonical`.

Signed-off-by: Tim Gates <tim.gates@iress.com>
2022-07-20 21:04:50 +08:00
QuellaZhang
703261e78f [MSVC][std:c++latest] Fix compiler error (#8093)
Co-authored-by: QuellaZhang <zhangyi2090@163.com>
2022-07-20 15:15:39 +08:00
Jiaming Yuan
ef11b024e8 Cleanup data generator. (#8094)
- Avoid duplicated definition of data shape.
- Explicitly define numpy iterator for CPU data.
2022-07-20 13:48:52 +08:00
Jiaming Yuan
5156be0f49 Limit max_depth to 30 for GPU. (#8098) 2022-07-20 12:28:49 +08:00
Jiaming Yuan
8bdea72688 [Python] Require black and isort for new Python files. (#8096)
* [Python] Require black and isort for new Python files.

- Require black and isort for spark and dask module.

These files are relatively new and conform more closely to the black formatter. We will
convert the rest of the library as we move forward.

Other libraries, including dask/distributed and optuna, use the same formatting style and
have a stricter standard. The black formatter is indeed quite nice; automating it can
help us unify the code style.

- Gather Python checks into a single script.
2022-07-20 10:25:24 +08:00
WeichenXu
f23cc92130 [pyspark] User guide doc and tutorials (#8082)
Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2022-07-19 22:25:14 +08:00
Bobby Wang
f801d3cf15 [PySpark] change the returning model type to string from binary (#8085)
* [PySpark] change the returning model type to string from binary

XGBoost pyspark can be accelerated seamlessly by the RAPIDS Accelerator by
changing the returned model type from binary to string.
2022-07-19 18:39:20 +08:00
Jiaming Yuan
2365f82750 [dask] Mitigate non-deterministic test. (#8077) 2022-07-19 16:55:59 +08:00
Rong Ou
7a6b711eb8 Remove unused updater basemaker (#8091) 2022-07-19 15:41:27 +08:00
Philip Hyunsu Cho
4325178822 [CI] Clear workspace after budget check (#8092)
* [CI] Clear workspace after budget check

* Windows too
2022-07-18 19:17:33 -07:00
Jiaming Yuan
4083440690 Small cleanups to various data types. (#8086)
- Use `bst_bin_t` in batch param constructor.
- Use `StringView` to avoid `std::string` when appropriate.
- Avoid using `MetaInfo` in quantile constructor to limit the scope of parameter.
2022-07-18 22:39:36 +08:00
Jiaming Yuan
e28f6f6657 [doc] Integrate pyspark module into sphinx doc [skip ci] (#8066) 2022-07-17 10:46:09 +08:00
Rafail Giavrimis
579ab23b10 Check cudf lazily (#8084) 2022-07-17 09:27:43 +08:00
Bobby Wang
a33f35eecf [PySpark] add gpu support for spark local mode (#8068) 2022-07-17 07:59:06 +08:00
Bobby Wang
91bb9e2cb3 [PySpark] fix raw_prediction_col parameter and minor cleanup (#8067) 2022-07-16 17:58:57 +08:00
Jiaming Yuan
0ce80b7bcf Mitigate flaky GPU test. (#8078)
The flakiness is caused by the global random engine, which will take some time to fix.
2022-07-16 13:45:32 +08:00
Jiaming Yuan
7a5586f3db Fix GPU quantile distributed test. (#8076) 2022-07-16 11:40:53 +08:00
Jiaming Yuan
8fccc3c4ad [dask] Fix potential error in demo. (#8079)
* Use dask_cudf instead.
2022-07-15 18:42:29 +08:00
Jiaming Yuan
647d3844dd Make test for categorical data deterministic. (#8080) 2022-07-15 14:48:39 +08:00
Jiaming Yuan
dae7a41baa Update Python requirement to >=3.8. (#8071)
Additional changes:
- Use mamba for CPU test on Jenkins.
- Cleanup CPU test dependencies.
- Restore some of the modin tests
2022-07-14 18:01:47 +08:00
Jiaming Yuan
8dd96013f1 Split up column matrix initialization. (#8060)
* Split up column matrix initialization.

This PR splits the column matrix initialization into 2 steps, the first one initializes
the storage while the second one does the transpose. By doing so, we can reuse the code
for Quantile DMatrix.
2022-07-14 10:34:47 +08:00
Philip Hyunsu Cho
36cf979b82 [CI] Fix S3 uploads (#8069)
* [CI] Fix S3 upload issues

* Don't launch Docker containers when uploading to S3
2022-07-13 16:23:00 -07:00
Jiaming Yuan
abaa593aa0 Fix compiler warnings. (#8059)
- Remove unused parameters.
- Avoid comparison of different signedness.
2022-07-14 05:29:56 +08:00
Jiaming Yuan
937352c78f Fix R package Windows build. (#8065) 2022-07-14 05:27:38 +08:00
WeichenXu
176fec8789 PySpark XGBoost integration (#8020)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2022-07-13 13:11:18 +08:00
Jiaming Yuan
8959622836 [dask] Use an invalid port for test. (#8064) 2022-07-13 11:59:02 +08:00
Rory Mitchell
0bdaca25ca Use single precision in gain calculation, use pointers instead of span. (#8051) 2022-07-12 21:56:27 +02:00
Jiaming Yuan
a5bc8e2c6a Fix mypy error with the latest dask. (#8052)
* Fix mypy error with latest dask.

Dask is adding type hints to its codebase and, as a result, checks in XGBoost can be
performed more rigorously.

- Remove compatibility with old dask version where multi lock was missing.
- Restrict input of `X` to be non-series.
- Adopt latest definition of `Delayed`.
- Avoid passing optional `host_ip`.
- Avoid deprecated `worker.nthreads`.
2022-07-09 08:02:42 +08:00
Jiaming Yuan
210eb471e9 [R] Implement feature info for DMatrix. (#8048) 2022-07-09 05:57:39 +08:00
Jiaming Yuan
701f32b227 [py-sckl] Raise import error if skl is not installed. (#8049) 2022-07-09 05:56:46 +08:00
Rory Mitchell
794cbaa60a Fuse split evaluation kernels (#8026) 2022-07-05 10:24:31 +02:00
Jiaming Yuan
ff1c559084 Remove unused variable. (#8046) 2022-07-05 01:59:22 +08:00
Jiaming Yuan
8746f9cddf Rename IterativeDMatrix. (#8045) 2022-07-04 18:52:31 +08:00
Jiaming Yuan
f24bfc7684 Bump R cache version. (#8044) 2022-07-03 03:53:05 +08:00
Michael Chirico
3af02584c1 error early if missing DiagrammeR (#8037) 2022-07-02 19:37:53 +08:00
Rory Mitchell
bc4f802b17 Batch UpdatePosition using cudaMemcpy (#7964) 2022-06-30 17:52:40 +02:00
kiwiwarmnfuzzy
2407381c3d Force auc.cc to be statically linked (#8039) 2022-06-30 19:24:22 +08:00
Jiaming Yuan
e88d6e071d Fix compiler warning in JSON IO. (#8031)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2022-06-30 01:13:22 +08:00
Jiaming Yuan
dcaf580476 Fix Python package source install. (#8036)
* Copy gputreeshap.
2022-06-29 21:45:09 +08:00
Rong Ou
6eb23353d7 Update nvflare demo for release 2.1.2 (#8038) 2022-06-29 17:58:06 +08:00
Joris LIMONIER
f470ad3af9 Fix multiple typos (#8028)
Fix 4 "graphiz" instead of "graphviz".
2022-06-27 19:21:58 +08:00
Rong Ou
45dc1f818a Make federated plugin work with cmake 3.16.3 (#8029) 2022-06-27 17:26:41 +08:00
Rong Ou
0725fd6081 fix federated learning plugin (#8027) 2022-06-24 08:41:07 +08:00
Bobby Wang
a68580e2a7 [jvm-packages] fix executor crashing issue when transforming on xgboost4j-spark-gpu (#8025)
* [jvm-packages] fix executor crashing issue when transforming on xgboost4j-spark-gpu

The API XGBoosterSetParam is not thread-safe. During the transform phase,
XGBoost runs several transform tasks at a time, and each of them sets
the "gpu_id" and "predictor" parameters, so if several tasks (multiple threads)
call XGBoosterSetParam simultaneously, the memory may be corrupted,
causing a SIGSEGV.

This PR first gets the booster from the broadcast, sets the correct gpu_id
and predictor, and then has all transform tasks use the same booster to
do the transform.
2022-06-24 01:18:41 +08:00
Jiaming Yuan
f0c1b842bf Implement sketching with adapter. (#8019) 2022-06-23 00:03:02 +08:00
Jiaming Yuan
142a208a90 Fix compiler warnings. (#8022)
- Remove/fix unused parameters
- Remove deprecated code in rabit.
- Update dmlc-core.
2022-06-22 21:29:10 +08:00
Bobby Wang
e44a082620 [jvm-packages] update nccl version to 2.12.12-1 (#8015) 2022-06-21 17:34:09 +08:00
Rong Ou
e5ec546da5 [Breaking] Remove rabit support for custom reductions and grow_local_histmaker updater (#7992) 2022-06-21 15:08:23 +08:00
Jiaming Yuan
4a87ea49b8 Reduce regularization for CPU gblinear. (#8013) 2022-06-21 01:05:27 +08:00
Jiaming Yuan
d285d6ba2a Reduce regularization in GPU gblinear test. (#8010) 2022-06-20 23:55:12 +08:00
Jiaming Yuan
e58e417603 [CI] Fix lintr error. (#8011) 2022-06-20 22:17:14 +08:00
Jiaming Yuan
9b0eb66b78 Fix GPU driver test. (#8008)
* Initialize the training parameter.
2022-06-20 19:37:31 +08:00
Jiaming Yuan
637e42a0c0 Use 22.04 for RMM. (#8001)
22.06 is not released yet.
2022-06-17 04:07:31 +08:00
Jiaming Yuan
bb47fd8c49 [jvm-packages] Change log level for tracker message. (#7968) 2022-06-09 18:15:08 +08:00
Jiaming Yuan
8f8bd8147a Fix LTR with weighted Quantile DMatrix. (#7975)
* Fix LTR with weighted Quantile DMatrix.

* Better tests.
2022-06-09 01:33:41 +08:00
Jiaming Yuan
1a33b50a0d Fix compiler warnings. (#7974)
- Remove unused parameters. There are still many warnings that are not yet
addressed. Currently, the warnings in dmlc-core dominate the error log.
- Remove `distributed` parameter from metric.
- Fixes some warnings about signed comparison.
2022-06-06 22:56:25 +08:00
Jiaming Yuan
d48123d23b Fix rmm build (#7973)
- Optionally switch to C++17
- Use rmm CMake target.
- Workaround compiler errors.
- Fix GPUMetric inheritance.
- Run death tests even if it's built with RMM support.

Co-authored-by: jakirkham <jakirkham@gmail.com>
2022-06-06 20:18:32 +08:00
Philip Hyunsu Cho
1ced638165 Document how to reproduce Docker environment from Jenkins (#7971) 2022-06-04 20:56:53 +09:00
Jiaming Yuan
b90c6d25e8 Implement max_cat_threshold for CPU. (#7957) 2022-06-04 11:02:46 +08:00
Bobby Wang
78694405a6 [jvm-packages] add jni for setting feature name and type (#7966) 2022-06-03 11:09:48 +08:00
Gavin Zhang
6426449c8b Support IBM i OS (#7920) 2022-06-02 23:38:35 +08:00
Rong Ou
31e6902e43 Support GPU training in the NVFlare demo (#7965) 2022-06-02 21:52:36 +08:00
Jiaming Yuan
6b55150e80 Fix pylint errors. (#7967) 2022-06-02 18:04:46 +08:00
Jiaming Yuan
13b15e07e8 Handle formatted JSON input. (#7953) 2022-06-01 16:20:58 +08:00
Rong Ou
d3429f2ff6 Increase gRPC max receive message size for federated learning (#7958) 2022-06-01 13:21:54 +08:00
Bobby Wang
545fd4548e [jvm-packages] refactor xgboost read/write (#7956)
1. Removed the duplicated default XGBoost read/write, which was copied from
  Spark 2.3.x
2. Put some utils into the util package
2022-06-01 11:38:49 +08:00
Yang Jiandan
27c66f12d1 Set log level to ERROR, as trackerProcess has some stderr output (#7952) 2022-05-31 22:54:38 +08:00
Bobby Wang
5a7dc41351 [doc] update doc for dumping model to be json or ubj for jvm packages (#7955) 2022-05-31 14:43:13 +08:00
Rong Ou
80339c3427 Enable distributed GPU training over Rabit (#7930) 2022-05-31 04:09:45 +08:00
Bobby Wang
6275cdc486 [jvm-packages] add format option when saving a model (#7940) 2022-05-30 15:49:59 +08:00
Gyeongjae Choi
cc6d57aa0d Add minimal emscripten build support (#7954) 2022-05-30 14:11:40 +08:00
Tim Sabsch
7a039e03fe Fix incomplete type hints for verbose (#7945) 2022-05-30 12:08:24 +08:00
Bobby Wang
fbc3d861bb [jvm-packages] remove default parameters (#7938) 2022-05-28 10:31:19 +08:00
Philip Hyunsu Cho
47224dd6d3 Use private mirror to host llvm-openmp tarballs (#7950) 2022-05-27 14:56:59 -07:00
Jiaming Yuan
bde4f25794 Handle missing categorical value in CPU evaluator. (#7948) 2022-05-27 14:15:47 +08:00
Philip Hyunsu Cho
2070afea02 [CI] Rotate package repository keys (#7943) 2022-05-26 17:06:46 -07:00
Jiaming Yuan
18cbebaeb9 Unify the cat split storage for CPU. (#7937)
* Unify the cat split storage for CPU.

* Cleanup.

* Workaround.
2022-05-26 04:14:40 -07:00
Daniel Clausen
755d9d4609 [JVM-Packages] Auto-detection of MUSL is replaced by system properties (#7921)
This PR removes auto-detection of MUSL-based Linux systems in favor of system properties the user can set to configure a specific path for a native library.
2022-05-26 10:53:15 +08:00
Jiaming Yuan
606be9e663 Handle missing values in one hot splits. (#7934) 2022-05-24 20:48:41 +08:00
Jiaming Yuan
18a38f7ca0 Refactor for GHistIndex. (#7923)
* Pass sparse page as adapter, which prepares for quantile dmatrix.
* Remove old external memory code like `rbegin` and extra `Init` function.
* Simplify type dispatch.
2022-05-23 23:04:53 +08:00
Jiaming Yuan
d314680a15 Verify shared object version at load. (#7928) 2022-05-23 20:53:30 +08:00
Jiaming Yuan
474366c020 Add convergence test for sparse datasets. (#7922) 2022-05-23 18:07:26 +08:00
Rory Mitchell
f6babc814c Do not initialise data structures to maximum possible tree size. (#7919) 2022-05-19 19:45:53 +02:00
Philip Hyunsu Cho
6f424d8d6c [Doc] Warn against loading JSON from external source (#7918) 2022-05-18 17:02:36 -07:00
Jiaming Yuan
f93a727869 Address remaining mypy errors in python package. (#7914) 2022-05-18 22:46:15 +08:00
Jiaming Yuan
edf9a9608e Fix type conversion warning. (#7916) 2022-05-18 20:14:14 +08:00
Jiaming Yuan
765097d514 Simplify inplace-predict. (#7910)
Pass the `X` as part of Proxy DMatrix instead of an independent `dmlc::any`.
2022-05-18 17:52:00 +08:00
Jiaming Yuan
19775ffe15 Use adapter to initialize column matrix. (#7912) 2022-05-18 16:15:12 +08:00
Bobby Wang
5ef33adf68 [jvm-packages] set the correct objective if user doesn't explicitly set it (#7781) 2022-05-18 14:05:18 +08:00
Chengyang
806c92c80b Add Type Hints for Python Package (#7742)
Co-authored-by: Chengyang Gu <bridgream@gmail.com>
Co-authored-by: Jiamingy <jm.yuan@outlook.com>
2022-05-17 22:14:09 +08:00
Rory Mitchell
71d3b2e036 Fuse gpu_hist all-reduce calls where possible (#7867) 2022-05-17 13:27:50 +02:00
Bobby Wang
b41cf92dc2 [jvm-packages] move dmatrix building into rabit context for cpu pipeline (#7908) 2022-05-17 14:52:25 +08:00
Rong Ou
77d4a53c32 Use RabitContext instead of init/finalize (#7911) 2022-05-17 12:15:41 +08:00
Jiaming Yuan
4fcfd9c96e Fix and cleanup for column matrix. (#7901)
* Fix missed type dispatching for dense columns with missing values.
* Code cleanup to reduce special cases.
* Reduce memory usage.
2022-05-16 21:11:50 +08:00
Bobby Wang
1496789561 [doc] update the doc for jvm model compatibility (#7907) 2022-05-16 14:05:26 +08:00
Sze Yeung
a06d53688c Correct a mistake in Setting Parameters section (#7905) 2022-05-15 18:56:31 -07:00
Philip Hyunsu Cho
4cd14aee5a Rename misspelled config parameter for pseudo-Huber (#7904) 2022-05-15 06:38:33 -07:00
Jiaming Yuan
1baad8650c Small cleanup to Column. (#7898)
* Define forward iterator to hide the internal state.
2022-05-15 12:39:10 +08:00
Jiaming Yuan
ee382c4153 Update news for 1.6.1 (#7877) 2022-05-14 15:38:18 -07:00
Rong Ou
af907e2d0d Demo of federated learning using NVFlare (#7879)
Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-05-14 22:45:41 +08:00
Bobby Wang
11e46e4bc0 [Breaking][jvm-packages] make classification model be xgboost-compatible (#7896) 2022-05-14 15:43:05 +08:00
Jiaming Yuan
1b6538b4e5 [breaking] Drop single precision histogram (#7892)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-05-13 19:54:55 +08:00
Jiaming Yuan
c8f9d4b6e6 Show libxgboost.so path in build info. (#7893) 2022-05-13 18:08:56 +08:00
Bobby Wang
9fa7ed1743 [Breaking][jvm-packages] remove timeoutRequestWorkers parameter (#7839) 2022-05-13 16:26:25 +08:00
Jiaming Yuan
11d65fcb21 Extract partial sum into an independent function. (#7889) 2022-05-13 14:30:35 +08:00
Jiaming Yuan
db80671d6b Fix monotone constraint with tuple input. (#7891) 2022-05-13 04:00:03 +08:00
Jiaming Yuan
94ca52b7b7 Fix overflow in prediction size. (#7885) 2022-05-12 02:44:03 +08:00
Jiaming Yuan
8ba4722d04 Remove pyarrow workaround. (#7884) 2022-05-11 20:54:48 +08:00
Philip Hyunsu Cho
65e6d73b95 [CI] Automate artifact fetch step in JVM release process (#7882) 2022-05-11 00:35:22 -07:00
Jiaming Yuan
16ba74d008 Update CUDA version requirement in CMake script. (#7876) 2022-05-09 04:16:22 +08:00
Philip Hyunsu Cho
d2bc0f0f08 Allow loading old models from RDS (#7864) 2022-05-06 22:49:38 -07:00
Amit Bera
1823db53f2 updated winning solution under readme.md (#7862) 2022-05-06 17:38:07 +08:00
Rory Mitchell
7ef54e39ec Small refactor to categoricals (#7858) 2022-05-05 17:47:02 +02:00
Rong Ou
14ef38b834 Initial support for federated learning (#7831)
Federated learning plugin for xgboost:
* A gRPC server to aggregate MPI-style requests (allgather, allreduce, broadcast) from federated workers.
* A Rabit engine for the federated environment.
* Integration test to simulate federated learning.

Additional follow-ups are needed to address GPU support, better security and privacy, etc.
2022-05-05 21:49:22 +08:00
Jiaming Yuan
46e0bce212 Use maximum category in sketch. (#7853) 2022-05-05 19:56:49 +08:00
Jiaming Yuan
8ab5e13b5d Fix typo [skip ci] (#7861) 2022-05-04 18:34:45 +08:00
Jiaming Yuan
317d7be6ee Always use partition based categorical splits. (#7857) 2022-05-03 22:30:32 +08:00
Rory Mitchell
90cce38236 Remove single_precision_histogram for gpu_hist (#7828) 2022-05-03 14:53:19 +02:00
Jiaming Yuan
50d854e02e [CI] Test with latest RAPIDS. (#7816) 2022-04-30 11:55:10 -07:00
Bobby Wang
1b103e1f5f [CI] Make the container re-attachable (#7848)
When re-starting the container, it would fail in entrypoint.sh, which
exits when adding an existing group or user.
2022-04-29 19:00:35 -07:00
Jiaming Yuan
288c52596c Define bin type. (#7850) 2022-04-29 19:41:39 +08:00
Michael Allman
f7db16add1 Ignore all Java exceptions when looking for Linux musl support (#7844) 2022-04-28 15:44:30 +08:00
Bobby Wang
a94e1b172e [jvm-packages] Fix model compatibility (#7845) 2022-04-28 02:05:38 +08:00
Bobby Wang
686caad40c [jvm-package] remove the coalesce in barrier mode (#7846) 2022-04-27 23:34:22 +08:00
Jiaming Yuan
fdf533f2b9 [POC] Experimental support for l1 error. (#7812)
Support adaptive trees, a feature supported by both sklearn and lightgbm. The tree leaves are recomputed based on the residuals between labels and predictions after construction.

For l1 error, the optimal value is the median (50th percentile).

This is marked as experimental support for the following reasons:
- The value is not well defined for distributed training, where we might have empty leaves on local workers. Right now I just use the original leaf value for computing the average with other workers, which might cause significant errors.
- Some follow-ups are required for exact, the pruner, and optimization of the quantile function. Also, we need to calculate the initial estimation.
2022-04-26 21:41:55 +08:00
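
For reference, the median claim is a standard subgradient argument, not part of the patch:

    \operatorname*{arg\,min}_{c}\ \sum_{i=1}^{n} \lvert y_i - c \rvert
        = \operatorname{median}(y_1, \dots, y_n),
    \qquad\text{since}\qquad
    \partial_c \sum_{i} \lvert y_i - c \rvert
        = \#\{i : y_i < c\} - \#\{i : y_i > c\},

which vanishes exactly when as many labels lie above c as below it.
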
Jiaming Yuan
ad06172c6b Refactor pandas dataframe handling. (#7843) 2022-04-26 18:53:43 +08:00
Bobby Wang
bef1f939ce [doc] remove the doc about killing SparkContext [skip ci] (#7840) 2022-04-25 19:29:16 +08:00
Bobby Wang
dc2e699656 [Breaking][jvm-packages] Use barrier execution mode (#7836)
With the introduction of the barrier execution mode, we don't need to kill the SparkContext when some xgboost tasks fail. Instead, Spark will handle the errors for us. So in this PR, the `killSparkContextOnWorkerFailure` parameter is deleted.
2022-04-25 17:09:52 +08:00
Bobby Wang
6ece549a90 [doc] update the jvm tutorial to 1.6.1 [skip ci] (#7834) 2022-04-24 14:25:22 +08:00
Jiaming Yuan
332380479b Avoid warning in np primitive type tests. (#7833) 2022-04-23 02:07:01 +08:00
Bobby Wang
c45665a55a [jvm-packages] move the dmatrix building into rabit context (#7823)
This fixes the QuantileDeviceDMatrix in distributed environment.
2022-04-23 00:06:50 +08:00
Jiaming Yuan
f0f76259c9 Remove STRING_TYPES. (#7827) 2022-04-22 19:07:51 +08:00
forestkey
c13a2a3114 [doc] "irrevelant" to "irrelevant" (#7832) 2022-04-22 16:54:30 +08:00
Jiaming Yuan
c70fa502a5 Expose feature_types to sklearn interface. (#7821) 2022-04-21 20:23:35 +08:00
Jiaming Yuan
401d451569 Clear configuration cache. (#7826) 2022-04-21 19:09:54 +08:00
Jiaming Yuan
52d4eda786 Deprecate use_label_encoder in XGBClassifier. (#7822)
* Deprecate `use_label_encoder` in XGBClassifier.

* We have removed the encoder; now we prepare to remove the indicator.
2022-04-21 13:14:02 +08:00
Jiaming Yuan
5815df4c46 Remove warning in 1.4. (#7815) 2022-04-20 01:19:09 +08:00
Jiaming Yuan
d0de954af2 v1.6.0 release note. [skip ci] (#7746)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-04-16 16:27:54 +08:00
Jiaming Yuan
5dea21273a Fix training continuation with categorical model. (#7810)
* Make sure the task is initialized before construction of tree updater.

This is a quick fix meant to be backported to 1.6, for a full fix we should pass the model
param into tree updater by reference instead.
2022-04-15 18:21:02 +08:00
Bobby Wang
2d83b2ad8f [jvm-packages] add hostIp and python exec for rabit tracker (#7808) 2022-04-15 16:28:43 +08:00
Bobby Wang
6f032b7152 [doc] fix a typo in jvm/index.rst (#7806) 2022-04-13 17:02:42 -07:00
dependabot[bot]
1bb1913811 Bump hadoop-common from 2.10.1 to 3.2.3 in /jvm-packages/xgboost4j-flink (#7801)
Bumps hadoop-common from 2.10.1 to 3.2.3.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-13 22:24:44 +08:00
Ikko Ashimine
56e4baff7c [doc] Fix typo in build.rst (#7800)
avaiable -> available
2022-04-13 16:45:26 +08:00
Bobby Wang
3f536b5308 [jvm-packages] fix evaluation when featuresCols is used (#7798) 2022-04-13 12:52:50 +08:00
Bobby Wang
4b00c64d96 [doc] improve xgboost4j-spark-gpu doc [skip ci] (#7793)
Co-authored-by: Sameer Raheja <sameerz@users.noreply.github.com>
2022-04-12 12:02:16 +08:00
Bobby Wang
118192f116 [jvm-packages] xgboost4j-spark should work when featuresCols is specified (#7789) 2022-04-08 13:21:04 +08:00
Bobby Wang
729d227b89 [jvm-packages] remove the dep of com.fasterxml.jackson (#7791) 2022-04-08 13:04:34 +08:00
Bobby Wang
89d6419fd5 [jvm-packages] add doc for xgboost4j-spark-gpu (#7779)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2022-04-07 11:35:01 +08:00
Bobby Wang
2454407f3a [jvm-packages] unify setFeaturesCol API for XGBoostRegressor (#7784) 2022-04-05 13:35:33 +08:00
Philip Hyunsu Cho
e5ab8f3ebe [CI] Speed up CPU test pipeline (#7772) 2022-04-01 02:39:04 +08:00
Jiaming Yuan
bcce17e688 Remove text loading in basic walk through demo. (#7753) 2022-04-01 00:59:42 +08:00
giuliohome
c467e90ac1 [doc] Update doc for Kubernetes Operator (#7777) 2022-03-31 23:10:49 +08:00
Jiaming Yuan
fd78af404b Drop support for deprecated CUDA architectures. (#7774)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-03-31 21:42:23 +08:00
Jiaming Yuan
02dd7b6913 Remove use of distutils. (#7770)
distutils is deprecated and replaced by other stdlib constructs.
2022-03-31 19:03:10 +08:00
Philip Hyunsu Cho
e8eff3581b [CI] Enable faulthandler to show details when 0xC0000005 error occurs (#7771) (#7775) 2022-03-31 17:40:06 +08:00
Jiaming Yuan
6fa1afdffc Avoid compiler warning about comparison. (#7768) 2022-03-31 08:52:14 +08:00
Jiaming Yuan
522636cb52 Bump version. (#7769) 2022-03-31 06:33:22 +08:00
Jiaming Yuan
9150fdbd4d Support pandas nullable types. (#7760) 2022-03-30 08:51:52 +08:00
Jiaming Yuan
d4796482b5 Fix failures on R hub and Win builder. (#7763)
* Update date.
* Workaround amalgamation build with clang. (SimpleDMatrix instantiation)
* Workaround compiler error with driver push.
* Revert autoconf requirement.
* Fix model IO on 32-bit environment. (i386)
* Clarify the function name.
2022-03-30 07:14:33 +08:00
Jiaming Yuan
a50b84244e Cleanup configuration for constraints. (#7758) 2022-03-29 04:22:46 +08:00
Jiaming Yuan
3c9b04460a Move num_parallel_tree to model parameter. (#7751)
The size of the forest should be a property of the model itself instead of a training
hyper-parameter.
2022-03-29 02:32:42 +08:00
Jiaming Yuan
8b3ecfca25 Mitigate flaky tests. (#7749)
* Skip non-increasing test with external memory when subsample is used.
* Increase bin numbers for boost from prediction test. This mitigates the effect of
  non-deterministic partitioning.
2022-03-28 21:20:50 +08:00
Christian Marquardt
39c5616af2 Added CPPFLAGS and LDFLAGS to the testing for OpenMP during R installation from source. (#7759) 2022-03-28 19:14:07 +08:00
Haoming Chen
b37ff3d492 Fix cox objective test by using XGBOOST_PARALLEL_STABLE_SORT (#7756) 2022-03-26 17:58:30 +08:00
Jiaming Yuan
b3ba0e8708 Check cupy lazily. (#7752) 2022-03-26 06:09:58 +08:00
Jiaming Yuan
af0cf88921 Workaround compiler error. (#7745) 2022-03-25 17:05:14 +08:00
Jiaming Yuan
64575591d8 Use context in SetInfo. (#7687)
* Use the name `Context`.
* Pass a context object into `SetInfo`.
* Add context to proxy matrix.
* Add context to iterative DMatrix.

This is to remove the use of the default number of threads during `SetInfo`, as a follow-up on
removing the global omp variable while preparing for CUDA stream semantics. Currently, XGBoost
uses the legacy CUDA stream; we will gradually remove it in the future in favor of non-blocking streams.
2022-03-24 22:16:26 +08:00
Oleksandr Pryimak
f5b20286e2 [jvm-packages] Launch dev jvm image under my user (#4676)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2022-03-23 10:39:51 -07:00
Chengyang
c92ab2ce49 Add type hints to core.py (#7707)
Co-authored-by: Chengyang Gu <bridgream@gmail.com>
Co-authored-by: jiamingy <jm.yuan@outlook.com>
2022-03-23 21:12:14 +08:00
Philip Hyunsu Cho
66cb4afc6c Update install doc (#7747) 2022-03-23 17:20:01 +08:00
Aging
f20ffa8db3 Update JVM dev build Dockerfile and shell script (#6792)
Co-authored-by: Zhuo Yuzhen <yuzhuo@paypal.com>
2022-03-22 16:39:10 -07:00
Jiaming Yuan
4d81c741e9 External memory support for hist (#7531)
* Generate column matrix from gHistIndex.
* Avoid synchronization with the sparse page once the cache is written.
* Cleanups: Remove member variables/functions, change the update routine to look like approx and gpu_hist.
* Remove pruner.
2022-03-22 00:13:20 +08:00
Jiaming Yuan
cd55823112 Demo for using custom objective with multi-target regression. (#7736) 2022-03-20 17:44:25 +08:00
Jiaming Yuan
996cc705af Small cleanup to hist tree method. (#7735)
* Remove special optimization using number of bins.
* Remove 1-based index for column sampling.
* Remove data layout.
* Unify update prediction cache.
2022-03-20 03:44:55 +08:00
Jiaming Yuan
718472dbe2 [CI] Upgrade GitHub action Windows workers. (#7739) 2022-03-20 01:44:33 +08:00
Jiaming Yuan
9a400731d9 Replace device sync with stream sync. (#7737) 2022-03-19 23:22:23 +08:00
Jiaming Yuan
da351621a1 [R] Fix parsing decision stump. (#7689) 2022-03-17 01:08:22 +08:00
Jiaming Yuan
e78a38b837 Sort sparse page index when constructing DMatrix. (#7731) 2022-03-16 18:01:05 +08:00
Xiaochang Wu
613ec36c5a Support building SimpleDMatrix from Arrow data format (#7512)
* Integrate with Arrow C data API.
* Support Arrow dataset.
* Support Arrow table.

Co-authored-by: Xiaochang Wu <xiaochang.wu@intel.com>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
Co-authored-by: Zhang Zhang <zhang.zhang@intel.com>
2022-03-15 13:25:19 +08:00
William Hicks
6b6849b001 Correct xgboost-config directory for inclusion in other projects (#7730) 2022-03-15 03:18:44 +08:00
Jiaming Yuan
98d6faefd6 Implement slope for Pseudo-Huber. (#7727)
* Add objective and metric.
* Some refactoring for CPU/GPU dispatching using linalg module.
2022-03-14 21:42:38 +08:00
Daniel Clausen
4dafb5fac8 [JVM-Packages] Add support for detecting musl-based Linux (#7624)
Co-authored-by: Marc Philipp <marc@gradle.com>
2022-03-14 00:37:27 +08:00
Haoming Chen
04fc575c0e Run tests in a temporary directory (#7723)
Fix some tests to run in a temporary directory in case the root
directory is not writable. Note that most of the tests already run
in a temporary directory, so this PR just makes them
consistent.
2022-03-12 21:24:36 +08:00
Haoming Chen
55463b76c1 Initialize TreeUpdater ctx_ with nullptr (#7722) 2022-03-10 22:33:32 +08:00
Jiaming Yuan
a62a3d991d [dask] prediction with categorical data. (#7708) 2022-03-10 00:21:48 +08:00
Pradipta Ghosh
68b6d6bbe2 Fix for Feature shape mismatch error (#7715) 2022-03-03 21:36:29 +08:00
Cheng Li
a92e0f6240 Support multiple groups in the constraints (#7711) 2022-03-01 18:10:15 +08:00
Jiaming Yuan
1d468e20a4 Optimize GPU evaluation function for categorical data. (#7705)
* Use transform and cache.
2022-02-28 17:46:29 +08:00
Jiaming Yuan
18a4af63aa Update documents and tests. (#7659)
* Revise documents after recent refactoring and cat support.
* Add tests for behavior of max_depth and max_leaves.
2022-02-26 03:57:47 +08:00
Jiaming Yuan
5eed2990ad Fix file descriptor leak. (#7704) 2022-02-25 17:49:33 +08:00
Philip Hyunsu Cho
1b25dd59f9 Use CUDA 11 in clang-tidy (#7701)
* Show command args when clang-tidy fails

* Add option to specify CUDA args

* Use clang-tidy 11

* [CI] Use CUDA 11
2022-02-24 15:15:07 -08:00
Jiaming Yuan
83a66b4994 Support categorical data for hist. (#7695)
* Extract partitioner from hist.
* Implement categorical data support by passing the gradient index directly into the partitioner.
* Organize/update document.
* Remove code for negative hessian.
2022-02-25 03:47:14 +08:00
Jiaming Yuan
f60d95b0ba [R] Construct booster object in load.raw. (#7686) 2022-02-24 10:06:18 +08:00
Bobby Wang
89aa8ddf52 [jvm-packages] fix the prediction issue for multi:softmax (#7694) 2022-02-24 01:09:45 +08:00
Jiaming Yuan
6762c45494 Small cleanup to gradient index and hist. (#7668)
* Code comments.
* Const accessor to index.
* Remove some weird variables in the `Index` class.
* Simplify the `MemStackAllocator`.
2022-02-23 11:37:21 +08:00
Jiaming Yuan
49c74a5369 Update R package description. (#7691)
* Change role.
* Remove cmake file when building the package.
2022-02-23 08:36:37 +08:00
Bobby Wang
e3e6de5ed9 [jvm-packages] unify the set features API (#7692)
xgboost4j-spark provides 2 sets of APIs for setting features, one for CPU and another for GPU, which may cause confusion.

This PR removes the GPU API and adds an overloaded CPU function setFeaturesCol that accepts Array[String] parameters.
2022-02-23 03:37:25 +08:00
Jiaming Yuan
c859764d29 [doc] Clarify that states in callbacks are mutated. (#7685)
* Fix copy for cv.  This prevents inserting default callbacks into the input list.
* Clarify the behavior of callbacks in training/cv.
* Fix typos in doc.
2022-02-22 11:45:00 +08:00
Jiaming Yuan
584bae1fc6 Fix document build with scikit-learn (#7684)
* Require sphinx >= 4.4 for RTD.

* Install sklearn.
2022-02-22 08:58:54 +08:00
Jiaming Yuan
e56d1779e1 Require Python 3.7. (#7682)
* Update setup.py.
2022-02-21 05:46:48 +08:00
Jiaming Yuan
549f3bd781 Honor CPU counts from CFS. (#7654) 2022-02-21 03:13:26 +08:00
Jiaming Yuan
671b3c8d8e Fix typo. (#7680) 2022-02-20 03:42:47 +08:00
Jiaming Yuan
b2341eab0c [R] Fix broken links. (#7670) 2022-02-20 00:55:48 +08:00
Bobby Wang
131858e7cb [jvm-packages] Do not repartition when nWorker = 1 (#7676) 2022-02-19 21:45:54 +08:00
Jiaming Yuan
f08c5dcb06 Cleanup some pylint errors. (#7667)
* Cleanup some pylint errors.

* Cleanup pylint errors in rabit modules.
* Make data iter an abstract class and cleanup private access.
* Cleanup no-self-use for booster.
2022-02-19 18:53:12 +08:00
Jiaming Yuan
b76c5d54bf Define export symbols in callback module. (#7665) 2022-02-19 18:52:41 +08:00
Jiaming Yuan
7366d3b20c Ensure models with categorical splits don't use old binary format. (#7666) 2022-02-19 08:05:28 +08:00
Jiaming Yuan
14d61b0141 [doc] Update document for building from source. (#7664)
- Mention standard install command for R package.
- Remove repeated "get source" step.
- Remove troubleshooting on Windows.  It's outdated considering VS 2022 is already out.
2022-02-19 04:57:03 +08:00
Jiaming Yuan
d625dc2047 Work around nvcc error. (#7673) 2022-02-19 01:41:46 +08:00
Jiaming Yuan
3877043d41 Avoid print for R package. (#7672) 2022-02-18 08:06:24 +08:00
Jiaming Yuan
711f7f3851 Avoid std::terminate for R package. (#7661)
This is part of CRAN policies.
2022-02-17 01:27:20 +08:00
Jiaming Yuan
12949c6b31 [R] Implement feature weights. (#7660) 2022-02-16 22:20:52 +08:00
Philip Hyunsu Cho
0149f81a5a [CI] Fix S3 upload (#7662) 2022-02-16 01:35:27 -08:00
Jiaming Yuan
93eebe8664 [doc] Fix broken link. [skip ci] (#7655) 2022-02-15 14:07:34 +08:00
Jiaming Yuan
0da7d872ef [doc] Update for prediction. (#7648) 2022-02-15 05:01:55 +08:00
Jiaming Yuan
0d0abe1845 Support optimal partitioning for GPU hist. (#7652)
* Implement `MaxCategory` in quantile.
* Implement partition-based split for GPU evaluation.  Currently, it's based on the existing evaluation function.
* Extract an evaluator from GPU Hist to store the needed states.
* Added some CUDA stream/event utilities.
* Update document with references.
* Fixed a bug in approx evaluator where the number of data points is less than the number of categories.
2022-02-15 03:03:12 +08:00
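A minimal sketch of how the partition-based categorical splits land in user code; the parameter names below are public ones, but the data and the threshold value are illustrative only, and a CUDA-enabled build is assumed:

```python
import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cat": pd.Categorical(rng.integers(0, 16, size=512)),  # 16 categories
    "num": rng.normal(size=512),
})
y = rng.normal(size=512)

Xy = xgb.DMatrix(df, y, enable_categorical=True)
booster = xgb.train(
    {
        "tree_method": "gpu_hist",
        # Features with more categories than this threshold use the
        # partition-based splits added here; smaller ones stay one-hot.
        "max_cat_to_onehot": 4,
    },
    Xy,
    num_boost_round=8,
)
```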
Jiaming Yuan
2369d55e9a Add tests for prediction cache. (#7650)
* Extract the test from approx for other tree methods.
* Add note on how it works.
2022-02-15 00:28:00 +08:00
Jiaming Yuan
5cd1f71b51 [dask] Improve configuration for port. (#7645)
- Try port 0 to let the OS return the available port.
- Add port configuration.
2022-02-14 21:34:34 +08:00
Jiaming Yuan
b52c4e13b0 [dask] Fix empty partition with pandas input. (#7644)
An empty partition is different from an empty dataset.  In the former case, each worker has a
non-empty dask collection, but a collection might contain empty partitions.
2022-02-14 19:35:51 +08:00
Jiaming Yuan
1f020a6097 Add maintainer for R package. (#7649) 2022-02-12 23:45:30 +08:00
Jiaming Yuan
1441a6cd27 [CI] Update R cache. (#7646) 2022-02-11 19:50:11 +08:00
Jiaming Yuan
2775c2a1ab Prepare external memory support for hist. (#7638)
This PR prepares the GHistIndexMatrix to host the column matrix used by the hist tree method, by accepting the sparse_threshold parameter.

Some cleanups are made to ensure the correct batch parameter is passed into DMatrix, along with some additional tests for the correctness of SimpleDMatrix.
2022-02-10 16:58:02 +08:00
dependabot[bot]
87c01f49d8 Bump hadoop-common from 2.7.3 to 2.10.1 in /jvm-packages/xgboost4j-flink (#7641)
Bumps hadoop-common from 2.7.3 to 2.10.1.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-09 17:07:35 -08:00
Jiaming Yuan
fe4ce920b2 [dask] Cleanup dask module. (#7634)
* Add a new utility for mapping function onto workers.
* Unify the type for feature names.
* Clean up the iterator.
* Fix prediction with DaskDMatrix worker specification.
* Fix base margin with DeviceQuantileDMatrix.
* Support vs 2022 in setup.py.
2022-02-08 20:41:46 +08:00
Jiaming Yuan
926af9951e Add missing train parameter for sklearn interface. (#7629)
Some other parameters are still missing and rely on **kwargs, for instance, the parameters from
dart.
2022-02-08 13:20:19 +08:00
Jiaming Yuan
3e693e4f97 [dask] Fix nthread config with dask sklearn wrapper. (#7633) 2022-02-08 06:38:32 +08:00
Ed Shee
d152c59a9c fixed broken link to Seldon XGBoost server (#7628) 2022-02-05 01:03:29 +08:00
Philip Hyunsu Cho
34a238ca98 [CI] Clean up Python wheel build pipeline (#7626)
* [CI] Always upload artifacts to [branch_name]/

* [CI] Move detailed setup inside build_python_wheels.sh

* Fix typo
2022-02-03 00:55:44 -08:00
Philip Hyunsu Cho
f6e6d0b2c0 [CI] Build Python wheels for MacOS (x86_64 and arm64) (#7621)
* Build Python wheels for OSX (x86_64 and arm64)

* Use Conda's libomp when running Python tests

* fix

* Add comment to explain CIBW_TARGET_OSX_ARM64

* Update release script

* Add comments in build_python_wheels.sh

* Document wheel pipeline
2022-02-02 17:35:48 -08:00
Philip Hyunsu Cho
271a7c5d43 [Doc] fix typo in install doc (#7623) 2022-01-31 13:35:56 -08:00
Philip Hyunsu Cho
c621775f34 Replace all uses of deprecated function sklearn.datasets.load_boston (#7373)
* Replace all uses of deprecated function sklearn.datasets.load_boston

* More renaming

* Fix bad name

* Update assertion

* Fix n boosted rounds.

* Avoid over regularization.

* Rebase.

* Avoid over regularization.

* Whac-a-mole

Co-authored-by: fis <jm.yuan@outlook.com>
2022-01-30 04:27:57 -08:00
Philip Hyunsu Cho
b4340abf56 Add special handling for multi:softmax in sklearn predict (#7607)
* Add special handling for multi:softmax in sklearn predict

* Add test coverage
2022-01-29 15:54:49 -08:00
david-cortes
7f738e7f6f [R] Accept CSR data for predictions (#7615) 2022-01-30 00:54:57 +08:00
Michael Chirico
549bd419bb use exit hook to remove temp file (#7611)
This guarantees the removal will trigger for unexpected early exits
2022-01-29 16:06:52 +08:00
Philip Hyunsu Cho
f21301c749 [Doc] Add instruction to install XGBoost for Apple Silicon using Conda (#7612) 2022-01-28 01:06:39 -08:00
Jiaming Yuan
81210420c6 Remove omp_get_max_threads (#7608)
This is the last PR for removing the omp global variable.

* Add context object to the `DMatrix`.  This bridges `DMatrix` with https://github.com/dmlc/xgboost/issues/7308 .
* Require context to be available at the construction time of booster.
* Add `n_threads` support for R csc DMatrix constructor.
* Remove `omp_get_max_threads` in R glue code.
* Remove threading utilities that rely on omp global variable.
2022-01-28 16:09:22 +08:00
Philip Hyunsu Cho
028bdc1740 [R] Fix typo in docstring (#7606) 2022-01-26 23:33:25 +08:00
Jiaming Yuan
e060519d4f Avoid regenerating the gradient index for approx. (#7591) 2022-01-26 21:41:30 +08:00
Jiaming Yuan
5d7818e75d Remove omp_get_max_threads in tree updaters. (#7590) 2022-01-26 19:55:47 +08:00
Jiaming Yuan
24789429fd Support latest pandas Index type. (#7595) 2022-01-26 18:20:10 +08:00
AJ Schmidt
511805c981 Compress fatbins (#7601)
* compress CUDA device code

Co-authored-by: ptaylor <paul.e.taylor@me.com>
2022-01-25 18:30:59 +08:00
Jiaming Yuan
6967ef7267 Remove omp_get_max_threads in objective. (#7589) 2022-01-24 04:35:49 +08:00
Jiaming Yuan
5817840858 Remove omp_get_max_threads in data. (#7588) 2022-01-24 02:44:07 +08:00
Jiaming Yuan
f84291c1e1 Fix max_cat_to_onehot doc annotation [skip ci] (#7592) 2022-01-23 16:33:23 +08:00
Jiaming Yuan
d262503781 [R] Implement new save raw in R. (#7571) 2022-01-22 20:55:47 +08:00
Jiaming Yuan
ef4dae4c0e [dask] Add scheduler address to dask config. (#7581)
- Add user configuration.
- Bring back the logic of using the scheduler address from dask.  This was removed when we were trying to support GKE; now we bring it back and let xgboost try it if the direct guess or the host IP from user config fails.
2022-01-22 01:56:32 +08:00
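The user configuration added here goes through dask's own config system; a short sketch with a placeholder address (the `xgboost.scheduler_address` key is from the dask tutorial in the docs, the host and port are made up):

```python
import dask

# Tell XGBoost's tracker which address the dask scheduler is reachable at,
# instead of letting it guess from the host IP.  The address is a placeholder.
dask.config.set({"xgboost.scheduler_address": "192.0.0.100:8786"})
```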
Jiaming Yuan
5ddd4a9d06 Small cleanup to tests. (#7585)
* Use random port in dask tests to avoid warnings for occupied port.
* Increase the difficulty of AUC tests.
2022-01-21 06:26:57 +00:00
Philip Hyunsu Cho
9fd510faa5 [CI] Clarify steps for publishing artifacts to Maven Central (#7582) 2022-01-20 14:23:07 -08:00
Jiaming Yuan
529cf8a54a Configure cub version automatically. (#7579)
Note that when the cub shipped with CUDA is being used, XGBoost performs checks on the input size
instead of using the internal cub functions to accept inputs larger than the maximum integer.
2022-01-20 19:49:26 +08:00
Jiaming Yuan
ac7a36367c [jvm-packages] Implement new save_raw in jvm-packages. (#7570)
* New `toByteArray` that accepts a parameter for format.
2022-01-19 16:00:14 +08:00
Jiaming Yuan
b4ec1682c6 Update document for multi output and categorical. (#7574)
* Group together categorical related parameters.
* Update documents about multioutput and categorical.
2022-01-19 04:35:17 +08:00
Jiaming Yuan
dac9eb13bd Implement new save_raw in Python. (#7572)
* Expose the new C API function to Python.
* Remove old document and helper script.
* Small optimization to the `save_raw` and Json ctors.
2022-01-19 02:27:51 +08:00
Jiaming Yuan
9f20a3315e Test with latest numpy. (#7573) 2022-01-19 00:46:23 +08:00
Jiaming Yuan
bb56bb9a13 Fix merge conflict. (#7577) 2022-01-18 23:01:34 +08:00
Jiaming Yuan
cc06fab9a7 Support distributed CPU env for categorical data. (#7575)
* Add support for cat data in sketch allreduce.
* Share tests between CPU and GPU.
2022-01-18 21:56:07 +08:00
Jiaming Yuan
deab0e32ba Validate out of range categorical value. (#7576)
* Use float in CPU categorical set to preserve the input value.
* Check out of range values.
2022-01-18 20:16:19 +08:00
Jiaming Yuan
d6ea5cc1ed Cover approx tree method for categorical data tests. (#7569)
* Add tree to df tests.
* Add plotting tests.
* Add histogram tests.
2022-01-16 11:31:40 +08:00
Jiaming Yuan
465dc63833 Fix tree param feature type. (#7565) 2022-01-16 04:46:29 +08:00
Jiaming Yuan
a1bcd33a3b [breaking] Change internal model serialization to UBJSON. (#7556)
* Use typed array for models.
* Change the memory snapshot format.
* Add new C API for saving to raw format.
2022-01-16 02:11:53 +08:00
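From Python, the on-disk format is selected by file extension; a rough sketch of the round trip under this change (the data and the file name are arbitrary):

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(128, 4), np.random.rand(128)
booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, y), num_boost_round=4)

booster.save_model("model.ubj")   # ".ubj" selects UBJSON, ".json" plain JSON

loaded = xgb.Booster()
loaded.load_model("model.ubj")
```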
Jiaming Yuan
13b0fa4b97 Implement get_group. (#7564) 2022-01-16 02:07:42 +08:00
Jiaming Yuan
52277cc3da Rename build info function to be consistent with rest of the API. (#7553) 2022-01-14 00:39:28 +08:00
Jiaming Yuan
e94b766310 Fix early stopping with linear model. (#7554) 2022-01-13 21:53:06 +08:00
Jiaming Yuan
e5e47c3c99 Clarify the behavior of invalid categorical value handling. (#7529) 2022-01-13 16:11:52 +08:00
Philip Hyunsu Cho
20c0d60ac7 Restore functionality of max_depth=0 in hist (#7551)
* Restore functionality of max_depth=0 in hist

* Add test case
2022-01-11 01:37:44 +08:00
Jiaming Yuan
2db808021d Silent some warnings for unused variable. (#7548) 2022-01-11 01:16:26 +08:00
Jiaming Yuan
c635d4c46a Implement ubjson. (#7549)
* Implement ubjson.

This is a partial implementation of UBJSON with support for typed arrays.  Some missing
features are `f64`, typed object, and the no-op.
2022-01-10 23:24:23 +08:00
Jiaming Yuan
001503186c Rewrite approx (#7214)
This PR rewrites the approx tree method to use the codebase from hist for better performance and code sharing.

The rewrite has many benefits:
- Support for both `max_leaves` and `max_depth`.
- Support for `grow_policy`.
- Support for monotone constraints.
- Support for feature weights.
- Support for easier bin configuration (`max_bin`).
- Support for categorical data.
- Faster performance for most of the datasets. (many times faster)
- Support for prediction cache.
- Significantly better performance for external memory.
- Unites the code base between approx and hist.
2022-01-10 21:15:05 +08:00
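As a result, parameters that previously only worked with hist now apply to approx as well; a small sketch exercising a few items from the list above (data is synthetic):

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(512, 8), np.random.rand(512)

booster = xgb.train(
    {
        "tree_method": "approx",
        "grow_policy": "lossguide",  # now honored by approx
        "max_leaves": 31,
        "max_bin": 128,              # the simpler bin configuration
    },
    xgb.DMatrix(X, y),
    num_boost_round=10,
)
```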
Jiaming Yuan
ed95e77752 [jvm-packages] Update JNI header. (#7550) 2022-01-10 14:59:40 +08:00
Jiaming Yuan
91c1a1c52f Fix index type for bitfield. (#7541) 2022-01-05 19:23:29 +08:00
Jiaming Yuan
0df2ae63c7 Fix num_boosted_rounds for linear model. (#7538)
* Add note.

* Fix n boosted rounds.
2022-01-05 03:29:33 +08:00
Jiaming Yuan
28af6f9abb Remove omp_get_max_threads in gbm and linear. (#7537)
* Use ctx in gbm.

* Use ctx threads in gbm and linear.
2022-01-05 03:28:52 +08:00
Jiaming Yuan
eea094e1bc Remove some warnings from clang. (#7533)
* Unused variable.
* Unnecessary virtual function.
2022-01-05 03:28:21 +08:00
Jiaming Yuan
ec56d5869b [doc] Include dask examples into doc. (#7530) 2022-01-05 03:27:22 +08:00
Jiaming Yuan
54582f641a [doc] Use cross references in sphinx doc. (#7522)
* Use cross references instead of URL.
* Fix auto doc for callback.
2022-01-05 03:21:25 +08:00
Jiaming Yuan
eb1efb54b5 Define feature_names_in_. (#7526)
* Define `feature_names_in_`.
* Raise attribute error if it's not defined.
2022-01-05 01:35:34 +08:00
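A sketch of the attribute in use (data is synthetic); per the second bullet, reading it on an estimator fitted without feature names raises AttributeError:

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

df = pd.DataFrame(np.random.rand(64, 3), columns=["f0", "f1", "f2"])
y = np.random.rand(64)

reg = XGBRegressor(n_estimators=4).fit(df, y)
print(reg.feature_names_in_)  # the column names seen during fit

# Fitting on a bare ndarray leaves no names to record:
reg2 = XGBRegressor(n_estimators=4).fit(df.to_numpy(), y)
# reg2.feature_names_in_  -> AttributeError
```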
Jiaming Yuan
8f0a42a266 Initial support for multi-label classification. (#7521)
* Add support in sklearn classifier.
2022-01-04 23:58:21 +08:00
Jiaming Yuan
68cdbc9c16 Remove omp_get_max_threads in CPU predictor. (#7519)
This is part of the ongoing effort to remove the dependency on global omp variables.
2022-01-04 22:12:15 +08:00
Ikko Ashimine
5516281881 Fix typo in tree_model.cc (#7539)
occurance -> occurrence
2021-12-30 20:12:25 +08:00
Randall Britten
a4a0ebb85d [doc] Lowercase omega for per tree complexity (#7532)
As suggested on issue #7480
2021-12-29 23:05:54 +08:00
Louis Desreumaux
3886c3dd8f Remove macro definitions of snprintf and vsnprintf (#7536) 2021-12-26 08:05:59 +08:00
Ginko Balboa
29bfa94bb6 Fix external memory bug with the gpu_hist and subsampling combination. (#7481)
Instead of accessing data from the `original_page_`, access the data from the first page of the available batch.

fix #7476

Co-authored-by: jiamingy <jm.yuan@outlook.com>
2021-12-24 11:15:35 +08:00
Jiaming Yuan
7f399eac8b Use double for GPU Hist node sum. (#7507) 2021-12-22 08:41:35 +08:00
Jiaming Yuan
eabec370e4 [R] Fix single sample prediction. (#7524) 2021-12-21 14:11:07 +08:00
Bobby Wang
e8c1eb99e4 [jvm-package] Clean up the legacy gpu support tests (#7523) 2021-12-21 09:15:51 +08:00
Xiaochang Wu
59bd1ab17e Skip callback demo test if matplotlib is not installed (#7520) 2021-12-19 08:20:38 +08:00
Jiaming Yuan
58a6723eb1 Initial support for multioutput regression. (#7514)
* Add num target model parameter, which is configured from input labels.
* Change elementwise metric and indexing for weights.
* Add demo.
* Add tests.
2021-12-18 09:28:38 +08:00
Jiaming Yuan
9ab73f737e Extract Sketch Entry from hist maker. (#7503)
* Extract Sketch Entry from hist maker.

* Add a new sketch container for sorted inputs.
* Optimize bin search.
2021-12-18 05:36:56 +08:00
Qingyun Wu
b4a1236cfc [doc] Update the link to the tuning example in FLAML 2021-12-17 14:31:00 +08:00
Bobby Wang
24e25802a7 [jvm-packages] Add Rapids plugin support (#7491)
* Add GPU pre-processing pipeline.
2021-12-17 13:11:12 +08:00
Jiaming Yuan
5b1161bb64 Convert labels into tensor. (#7456)
* Add a new ctor to tensor for `initializer_list`.
* Change labels from host device vector to tensor.
* Rename the field from `labels_` to `labels` since it's a public member.
2021-12-17 00:58:35 +08:00
Jiaming Yuan
6f8a4633b7 Fix Python typehint with upgraded mypy. (#7513) 2021-12-16 23:08:08 +08:00
Jiaming Yuan
70b12d898a [dask] Fix ddqdm with empty partition. (#7510)
* Fix empty partition.

* war.
2021-12-16 20:37:29 +08:00
Jiaming Yuan
a512b4b394 [doc] Promote dask from experimental. [skip ci] (#7509) 2021-12-16 14:17:06 +08:00
Jiaming Yuan
05497a9141 [dask] Fix asyncio. (#7508) 2021-12-13 01:48:25 +08:00
Jiaming Yuan
01152f89ee Remove unused parameters. (#7499) 2021-12-09 14:24:51 +08:00
Harvey
1864fab592 Minor edits to Parameters doc page. (#7500)
* bost -> both

* doc improvement

* use original filename

* syntax highlight false

* missed a few highlights
2021-12-07 15:46:44 +08:00
Jiaming Yuan
021f8bf28b Fix pylint. (#7498) 2021-12-07 13:23:30 +08:00
Jiaming Yuan
eee527d264 Add approx partitioner. (#7467) 2021-11-27 15:22:06 +08:00
Jiaming Yuan
85cbd32c5a Add range-based slicing to tensor view. (#7453) 2021-11-27 13:42:36 +08:00
danmarinescu
6f38f5affa Updated CMake version requirement in build.rst (#7487)
The documentation states that to build from source you need CMake 3.13 or higher. However, according to https://github.com/dmlc/xgboost/blob/master/CMakeLists.txt#L1 CMake 3.14 or higher is required.
2021-11-27 09:58:01 +08:00
Jiaming Yuan
557ffc4bf5 Reduce base margin to 2 dim for now. (#7455) 2021-11-27 00:46:13 +08:00
Jiaming Yuan
bf7bb575b4 Test CPU histogram with cat data. (#7465) 2021-11-27 00:43:28 +08:00
Bobby Wang
24be04e848 [jvm-packages] Add DeviceQuantileDMatrix to Scala binding (#7459) 2021-11-24 20:23:18 +08:00
Philip Hyunsu Cho
619c450a49 [CI] Add missing step extract_branch (#7479) 2021-11-24 17:35:59 +08:00
Jiaming Yuan
820e1c01ef Fix macos package upload. (#7475)
* Split up the tests.
2021-11-24 03:43:49 +08:00
Jiaming Yuan
488f12a996 Fix github macos package upload. (#7474) 2021-11-24 00:29:11 +08:00
Jiaming Yuan
c024c42dce Modernize XGBoost Python document. (#7468)
* Use sphinx gallery to integrate examples.
* Remove mock objects.
* Add dask doc inventory.
2021-11-23 23:24:52 +08:00
Philip Hyunsu Cho
96a9848c9e [CI] Fix continuous delivery pipeline for MacOS (#7472) 2021-11-23 22:22:08 +08:00
Jiaming Yuan
b124a27f57 Support scipy sparse in dask. (#7457) 2021-11-23 16:45:36 +08:00
Jiaming Yuan
5262e933f7 Remove unnecessary constexpr. (#7466) 2021-11-23 16:42:08 +08:00
Philip Hyunsu Cho
0c67685e43 [CI] Add a helper script to aid Maven release (#7470)
* [CI] Add a helper script to aid Maven release

* Move script to dev/ [skip ci]

* Update command [skip ci]
2021-11-23 00:11:07 -08:00
Harvey
0552ca8021 Fix typo (#7469) 2021-11-23 08:58:45 +08:00
Jiaming Yuan
176110a22d Support external memory in CPU histogram building. (#7372) 2021-11-23 01:13:33 +08:00
Jiaming Yuan
d33854af1b [Breaking] Accept multi-dim meta info. (#7405)
This PR changes base_margin into a 3-dim array, with one dimension reserved for multi-target classification. Also, a breaking change is made to binary serialization due to the extra dimension, along with a fix for saving the feature weights. Lastly, it unifies the prediction initialization between CPU and GPU. After this PR, the meta info setters in Python are based on the array interface.
2021-11-18 23:02:54 +08:00
Jiaming Yuan
9fb4338964 Add test for eta and mitigate float error. (#7446)
* Add eta test.
* Don't skip test.
2021-11-18 20:42:48 +08:00
Bobby Wang
7cfb310eb4 Rework transform (#7440)
Extract the common part of the transform code from XGBoostClassifier
and XGBoostRegressor.
2021-11-18 15:48:57 +08:00
Philip Hyunsu Cho
2adf222fb2 [CI] CI cost saving (#7407)
* [CI] Drop CUDA 10.1; Require 11.0

* Change NCCL version

* Use CUDA 10.1 for clang-tidy, for now

* Remove JDK 11 and 12

* Fix NCCL version

* Don't require 11.0 just yet, until clang-tidy is fixed

* Skip MultiClassesSerializationTest.GpuHist
2021-11-17 21:02:20 -08:00
Jiaming Yuan
b0015fda96 Fix R CRAN failures. (#7404)
* Remove hist builder dtor.

* Initialize values.

* Tolerance.

* Remove the use of nthread in col maker.
2021-11-16 10:51:12 +08:00
Jiaming Yuan
55ee272ea8 Extend array interface to handle ndarray. (#7434)
* Extend array interface to handle ndarray.

The `ArrayInterface` class is extended to support multi-dim array inputs. Previously this
class handled only 2-dim inputs (a vector is also a matrix).  This PR specifies the expected
dimension at compile time so that the array interface can perform various checks automatically
on input data. Also, adapters like CSR are more rigorous about their input.  Lastly, row
vectors and column vectors are handled without intervention from the caller.
2021-11-16 09:52:15 +08:00
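For context, the protocol being parsed here is NumPy's `__array_interface__` (passed to XGBoost JSON-encoded); a quick way to inspect what the booster receives for an ndarray:

```python
import numpy as np

arr = np.arange(6, dtype=np.float32).reshape(2, 3)
# Keys include 'shape', 'strides', 'typestr', 'data', and 'version';
# 'shape' == (2, 3) is what makes multi-dim handling possible.
print(arr.__array_interface__)
```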
Jiaming Yuan
e27f543deb Set use_logger in tracker to false. (#7438) 2021-11-16 05:12:42 +08:00
Jiaming Yuan
d4274bc556 Fix typo. (#7433) 2021-11-15 01:28:11 +08:00
Jiaming Yuan
a7057fa64c Implement typed storage for tensor. (#7429)
* Add `Tensor` class.
* Add elementwise kernel for CPU and GPU.
* Add unravel index.
* Move some computation to compile time.
2021-11-14 18:53:13 +08:00
Kian Meng Ang
d27a11ff87 Fix typos in python package (#7432) 2021-11-14 17:20:19 +08:00
Jiaming Yuan
8cc75f1576 Cleanup Python tests. (#7426) 2021-11-14 15:47:05 +08:00
Jiaming Yuan
38ca96c9fc [CI] Install igraph as binary. (#7417) 2021-11-12 19:04:28 +08:00
Jiaming Yuan
46726ec176 Expose build info (#7399) 2021-11-12 18:22:46 +08:00
Jiaming Yuan
937fa282b5 Extract string view. (#7416)
* Add equality operators.
* Return a view in substr.
* Add proper iterator types.
2021-11-12 18:22:30 +08:00
Jiaming Yuan
ca6f980932 Check number of trees in inplace predict. (#7409) 2021-11-12 18:20:23 +08:00
Jiaming Yuan
97d7582457 Delay breaking changes to 1.6. (#7420)
The patch is too big to be backported.
2021-11-12 16:46:03 +08:00
Bobby Wang
cb685607b2 [jvm-packages] Rework the train pipeline (#7401)
1. Add PreXGBoost to build RDD[Watches] from Dataset
2. Feed RDD[Watches] built from PreXGBoost to XGBoost to train
2021-11-10 17:51:38 +08:00
Jiaming Yuan
8df0a252b7 [doc] Update document for GPU. [skip ci] (#7403)
* Remove outdated workaround and description.
2021-11-09 02:05:55 +08:00
Jiaming Yuan
d7d1b6e3a6 CPU evaluation for cat data. (#7393)
* Implementation for one hot based.
* Implementation for partition based. (LightGBM)
2021-11-06 14:41:35 +08:00
Jiaming Yuan
6ede12412c Update dmlc-core and use data iter for GPU sampling tests. (#7398)
* Update dmlc-core.
* New parquet parser in dmlc-core.
* Use data iter for GPU sampling tests.
2021-11-06 05:12:49 +08:00
Jiaming Yuan
c968217ca8 [R] Fix global feature importance and predict with 1 sample. (#7394)
* [R] Fix global feature importance.

* Add implementation for tree index.  The parameter is not documented in the C API, since we
should work on porting model slicing to R instead of supporting more uses of the tree
index.

* Fix the difference between "gain" and "total_gain".

* debug.

* Fix prediction.
2021-11-05 10:07:00 +08:00
Jiaming Yuan
48aff0eabd [doc][jvm-packages] Update information about Python tracker. [skip ci] (#7396) 2021-11-05 05:55:13 +08:00
Jiaming Yuan
b06040b6d0 Implement a general array view. (#7365)
* Replace existing matrix and vector view.

This is to prepare for handling higher dimension data and prediction when we support multi-target models.
2021-11-05 04:16:11 +08:00
Jiaming Yuan
232144ca09 Add note about CRAN release [skip ci] (#7395) 2021-11-05 00:34:14 +08:00
Jiaming Yuan
4100827971 Pass information about objective to tree methods. (#7385)
* Define the `ObjInfo` and pass it down to every tree updater.
2021-11-04 01:52:44 +08:00
Jiaming Yuan
ccdabe4512 Support building gradient index with cat data. (#7371) 2021-11-03 22:37:37 +08:00
Jiaming Yuan
57a4b4ff64 Handle OMP_THREAD_LIMIT. (#7390) 2021-11-03 15:44:38 +08:00
Jiaming Yuan
e6ab594e14 Change shebang used in CLI demo. (#7389)
Change from the system Python to the environment's python3.  On Ubuntu 20.04, only `python3` is
available and there's no `python`, so at least `python3` is consistent across Python
virtual envs, Ubuntu, and Anaconda.
2021-11-02 22:11:19 +08:00
Jiaming Yuan
a55d43ccfd Add test for invalid categorical data values. (#7380)
* Add test for invalid categorical data values.

* Add check during sketching.
2021-11-02 18:00:52 +08:00
Jiaming Yuan
c74df31bf9 Cleanup the train function. (#7377)
* Move attribute setter to callback.
* Remove the internal train function.
* Remove unnecessary initialization.
2021-11-02 18:00:26 +08:00
Jiaming Yuan
154b15060e Move callbacks from fit to __init__. (#7375) 2021-11-02 17:51:42 +08:00
Jiaming Yuan
32e673d8c4 Support building with CTK11.5. (#7379)
* Support building with CTK11.5.

* Require system cub installation for CTK11.4+.
* Check thrust version for segmented sort.
2021-11-02 16:22:26 +08:00
Jiaming Yuan
a13321148a Support multi-class with base margin. (#7381)
This was already partially supported but never properly tested, so the only way to use it was to call `numpy.ndarray.flatten` on `base_margin` before passing it into XGBoost. This PR adds proper support
for most of the data types, along with tests.
2021-11-02 13:38:00 +08:00
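A sketch of what this enables (synthetic data): the margin keeps its natural (n_samples, n_classes) shape instead of being flattened by hand first.

```python
import numpy as np
import xgboost as xgb

n_samples, n_classes = 100, 3
X = np.random.rand(n_samples, 5)
y = np.random.randint(0, n_classes, size=n_samples)

# One margin column per class; previously this had to be .flatten()-ed.
margin = np.zeros((n_samples, n_classes), dtype=np.float32)

Xy = xgb.DMatrix(X, label=y, base_margin=margin)
booster = xgb.train(
    {"objective": "multi:softprob", "num_class": n_classes},
    Xy,
    num_boost_round=4,
)
```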
Jiaming Yuan
6295dc3b67 Fix span reverse iterator. (#7387)
* Fix span reverse iterator.

* Disable `rbegin` on device code to avoid calling host function.
* Add `trbegin` and friends.
2021-11-02 13:35:59 +08:00
Jiaming Yuan
8211e5f341 Add clang-format config. (#7383)
Generated using `clang-format -style=google -dump-config > .clang-format`, with column
width changed from 80 to 100 to be consistent with existing cpplint check.
2021-11-02 13:34:38 +08:00
Jiaming Yuan
0f7a9b42f1 Use double precision in metric calculation. (#7364) 2021-11-02 12:00:32 +08:00
Jiaming Yuan
239dbb3c0a Move macos test to github action. (#7382)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2021-10-30 14:40:32 +08:00
Bobby Wang
b81ebbef62 [jvm-packages] Fix json4s binary compatibility issue (#7376)
Spark 3.2 depends on json4s 3.7.0-M11, which has changed the signatures of some implicit
functions. As a result, xgboost4j built against Spark 3.0/3.1
fails when saving the model.
2021-10-30 03:20:57 +08:00
Jiaming Yuan
c6769488b3 Typehint for subset of core API. (#7348) 2021-10-28 20:47:04 +08:00
Jiaming Yuan
45aef75cca Move skl eval_metric and early_stopping rounds to model params. (#6751)
A new parameter `custom_metric` is added to `train` and `cv` to distinguish the behaviour from the old `feval`, and `feval` is deprecated.  The new `custom_metric` receives transformed predictions when a built-in objective is used.  This enables XGBoost to use cost functions from other libraries, like scikit-learn, directly without going through the definition of the link function.

`eval_metric` and `early_stopping_rounds` in the sklearn interface are moved from `fit` to `__init__` and are now saved as part of the scikit-learn model.  The old ones in the `fit` function are now deprecated. The new `eval_metric` in `__init__` has the same new behaviour as `custom_metric`.

Added more detailed documents for the behaviour of custom objectives and metrics.
2021-10-28 17:20:20 +08:00
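A sketch of the new sklearn-interface placement described above (dataset is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

# Both settings now live on the estimator and are saved with it;
# passing them to fit() still works but is deprecated.
clf = XGBClassifier(
    n_estimators=100,
    eval_metric="logloss",
    early_stopping_rounds=10,
)
clf.fit(X_tr, y_tr, eval_set=[(X_va, y_va)])
print(clf.best_iteration)
```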
Jiaming Yuan
6b074add66 Update setup.py. (#7360)
* Add new classifiers.
* Typehint.
2021-10-28 14:58:31 +08:00
Jiaming Yuan
3c4aa9b2ea [breaking] Remove label encoder deprecated in 1.3. (#7357) 2021-10-28 13:24:29 +08:00
Jiaming Yuan
d05754f558 Avoid OMP reduction in AUC. (#7362) 2021-10-28 05:03:52 +08:00
Jiaming Yuan
ac9bfaa4f2 Handle missing values in dataframe with category dtype. (#7331)
* Replace -1 in pandas initializer.
* Unify `IsValid` functor.
* Mimic pandas data handling in cuDF glue code.
* Check invalid categories.
* Fix DDM sketching.
2021-10-28 03:33:54 +08:00
Jiaming Yuan
2eee87423c Remove old custom objective demo. (#7369)
We have 2 new custom objective demos covering both regression and classification with
accompanying tutorials in documents.
2021-10-27 16:31:48 +08:00
Jiaming Yuan
b9414b6477 Update GPU doc for PR-AUC. [skip ci] (#7368) 2021-10-27 16:31:07 +08:00
Jiaming Yuan
d4349426d8 Re-implement PR-AUC. (#7297)
* Support binary/multi-class classification, ranking.
* Add documents.
* Handle missing data.
2021-10-26 13:07:50 +08:00
nicovdijk
a6bcd54b47 [jvm-packages] Fix for space in sys.executable path in create_jni.py (#7358) 2021-10-25 13:45:11 +08:00
Jiaming Yuan
fd61c61071 Avoid omp reduction in rank metric. (#7349) 2021-10-22 14:13:34 +08:00
Jiaming Yuan
e36b066344 [doc] Document the status of RTD hosting. [skip ci] (#7353) 2021-10-22 14:12:55 +08:00
Jiaming Yuan
864d236a82 [doc] Remove num_pbuffer. [skip ci] (#7356) 2021-10-22 14:12:32 +08:00
nicovdijk
31a307cf6b [XGBoost4J-Spark] Serialization for custom objective and eval (#7274)
* added type hints to custom_obj and custom_eval for Spark persistence


Co-authored-by: Bobby Wang <wbo4958@gmail.com>
2021-10-21 16:22:23 +08:00
Jiaming Yuan
7593fa9982 1.5 release note. [skip ci] (#7271)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2021-10-21 13:43:31 +08:00
Jiaming Yuan
d1f00fb0b7 Stricter validation for group. (#7345) 2021-10-21 12:13:33 +08:00
nicovdijk
74bab6e504 Control logging for early stopping using shouldPrint() (#7326) 2021-10-21 12:12:06 +08:00
Jiaming Yuan
8d7c6366d7 Accept histogram cut instead of gradient index in evaluation. (#7336) 2021-10-20 18:04:46 +08:00
Jiaming Yuan
15685996fc [doc] Small improvements for categorical data document. (#7330) 2021-10-20 18:04:32 +08:00
Jiaming Yuan
f999897615 [dask] Use nthread in DMatrix construction. (#7337)
This is consistent with the thread overriding behavior.
2021-10-20 15:16:40 +08:00
Philip Hyunsu Cho
b8e8f0fcd9 [doc] Use latest Sphinx RTD theme (#7347) 2021-10-20 00:04:43 -07:00
Jiaming Yuan
3b0b74fa94 [doc] Use RTD theme. (#7346) 2021-10-19 23:49:19 -07:00
Jiaming Yuan
376b448015 [doc] Fix broken links. (#7341)
* Fix most of the link checks from sphinx.
* Remove duplicate explicit target name.
2021-10-20 14:45:30 +08:00
Jiaming Yuan
f53da412aa Add typehint to tracker. (#7338) 2021-10-20 12:49:36 +08:00
Jiaming Yuan
5ff210ed75 Small fix for the release doc and script. [skip ci] (#7332)
Add Philip as co-maintainer of maven packages.
2021-10-20 12:49:12 +08:00
Jiaming Yuan
c42e3fbcf3 [doc] Fix early stopping document. (#7334) 2021-10-18 11:21:16 -07:00
Bobby Wang
4fd149b3a2 [jvm-packages] update checkstyle (#7335)
* [jvm-packages] update scalastyle

1. bump scalastyle-maven-plugin and maven-checkstyle-plugin to latest
2. remove unused imports

* fix code style check
2021-10-18 18:42:01 +08:00
Jiaming Yuan
fbb0dc4275 Remove auto configuration of seed_per_iteration. (#7009)
* Remove auto configuration of seed_per_iteration.

This should be related to model recovery from rabit, which is removed.

* Document.
2021-10-17 15:58:57 +08:00
Jiaming Yuan
fb1a9e6bc5 Avoid omp reduction in coordinate descent and aft metrics. (#7316)
Aside from the omp issue, parameter configuration for aft metric is simplified.
2021-10-17 15:55:49 +08:00
Jiaming Yuan
f56e2e9a66 Support categorical data with pandas Dataframe in inplace prediction (#7322) 2021-10-17 14:32:06 +08:00
Jiaming Yuan
8e619010d0 Extract CPUExpandEntry and HistParam. (#7321)
* Remove kRootNid.
* Check for empty hessian.
2021-10-17 14:22:25 +08:00
Jiaming Yuan
6cdcfe8128 Improve external memory demo. (#7320)
* Use npy format.
* Add evaluation.
* Use make_regression.
2021-10-17 11:25:24 +08:00
Jiaming Yuan
e6a142fe70 Fix document about best_iteration (#7324) 2021-10-14 15:30:46 -07:00
Jiaming Yuan
4ddf8d001c Deterministic result for element-wise/mclass metrics. (#7303)
Remove openmp reduction.
2021-10-13 14:22:40 +08:00
Jiaming Yuan
406c70ba0e [doc] Fix typo. [skip ci] (#7311) 2021-10-12 19:10:18 +08:00
Jiaming Yuan
0bd8f21e4e Add document for categorical data. (#7307) 2021-10-12 16:10:59 +08:00
Jiaming Yuan
a7d0c66457 Remove unused code. (#7293) 2021-10-12 15:04:41 +08:00
Jiaming Yuan
130df8cdda Add tests for tree grow policy. (#7302) 2021-10-12 15:04:06 +08:00
Jiaming Yuan
5b17bb0031 Fix prediction with cat data in sklearn interface. (#7306)
* Specify DMatrix parameter for pre-processing dataframe.
* Add document about the behaviour of prediction.
2021-10-12 14:31:12 +08:00
Jiaming Yuan
89d87e5331 Update GPU Tree SHAP (#7304) 2021-10-11 21:39:50 +08:00
Jiaming Yuan
298af6f409 Fix weighted samples in multi-class AUC. (#7300) 2021-10-11 15:12:29 +08:00
Jiaming Yuan
69d3b1b8b4 Remove old callback deprecated in 1.3. (#7280) 2021-10-08 17:24:59 +08:00
Jiaming Yuan
578de9f762 Fix cv verbose_eval (#7291) 2021-10-08 12:28:38 +08:00
Jiaming Yuan
f7caac2563 Bump version to 1.6.0 in master. (#7259) 2021-10-07 16:09:26 +08:00
Jiaming Yuan
e2660ab8f3 Extend release script with R packages. [skip ci] (#7278) 2021-10-07 16:08:42 +08:00
Yuan Tang
cc459755be Update affiliation (#7289) 2021-10-07 16:07:34 +08:00
Jiaming Yuan
d8cb395380 Fix gamma neg log likelihood. (#7275) 2021-10-05 16:57:08 +08:00
Jiaming Yuan
b3b03200e2 Remove old warning in 1.3 (#7279) 2021-10-01 08:05:50 +08:00
Philip Hyunsu Cho
2a0368b7ca Add CMake option to use /MD runtime (#7277) 2021-09-30 13:13:57 +08:00
Jiaming Yuan
b2d8431aea [R] Fix document for nthread. (#7263) 2021-09-28 11:46:24 +08:00
Jiaming Yuan
d8a549e6ac Avoid thread block with sparse data. (#7255) 2021-09-25 13:11:34 +08:00
Jiaming Yuan
ca17f8a5fc Dispatch thrust versions and upgrade rmm. (#7254)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2021-09-25 03:43:23 +08:00
Jiaming Yuan
fbd58bf190 [jvm-packages] Create demo and test for xgboost4j early stopping. (#7252) 2021-09-25 03:29:27 +08:00
Bobby Wang
0ee11dac77 [jvm-packages][xgboost4j-gpu] Support GPU dataframe and DeviceQuantileDMatrix (#7195)
The following classes are added to support dataframes in the Java binding:

- `Column` is an abstract type for a single column in tabular data.
- `ColumnBatch` is an abstract type for a dataframe.

- `CuDFColumn` is an implementation of `Column` that consumes a cuDF column.
- `CudfColumnBatch` is an implementation of `ColumnBatch` that consumes a cuDF dataframe.

- `DeviceQuantileDMatrix` is the interface for quantized data.

The Java implementation mimics the Python interface and uses the `__cuda_array_interface__` protocol for memory indexing.  One difference is that in the JVM package, the data batch is staged on the host, as Java iterators cannot be reset.

Co-authored-by: jiamingy <jm.yuan@outlook.com>
2021-09-24 14:25:00 +08:00
Philip Hyunsu Cho
d27a427dc5 [CI] Rotate access keys for uploading MacOS artifacts from Travis CI (#7253) 2021-09-24 10:44:00 +08:00
ShvetsKS
475fd1abec Reduced span overheads in objective function calculation (#7206)
Co-authored-by: fis <jm.yuan@outlook.com>
2021-09-23 04:43:59 +08:00
Jiaming Yuan
9472be7d77 Fix initialization from pandas series. (#7243) 2021-09-23 04:43:25 +08:00
david-cortes
4f93e5586a Improve wording for warning (#7248)
This warning sounds a bit ungrammatical, and its second part is not clear. This PR changes the wording to make it clearer.
2021-09-21 10:48:11 +08:00
Jiaming Yuan
18bd16341a Update Python intro. [skip ci] (#7235)
* Fix the link to demo.
* Stop recommending text file inputs.
* Brief mention to scikit-learn interface.
* Fix indent warning in tree method doc.
2021-09-21 02:47:09 +00:00
david-cortes
61a619b5c3 [R] Avoid symbol naming conflicts with other packages (#7245)
* don't register all R symbols

* typo
2021-09-19 11:17:08 -07:00
Jiaming Yuan
e48e05e6e2 Add typehint to rabit module. (#7240) 2021-09-17 18:31:02 +08:00
Jiaming Yuan
c735c17f33 Disable callback and ES on random forest. (#7236) 2021-09-17 18:21:17 +08:00
Jiaming Yuan
c311a8c1d8 Enable compiling with system cub. (#7232)
- Tested with all CUDA 11.x.
- Workaround cub scan by using discard iterator in AUC.
- Limit the size of Argsort when compiled with CUDA cub.
2021-09-17 14:28:18 +08:00
Jiaming Yuan
b18f5f61b0 Fix pylint (#7241) 2021-09-17 11:50:36 +08:00
Jiaming Yuan
38a23f66a8 Fix typo in release script. [skip ci] (#7238) 2021-09-17 11:14:05 +08:00
Jiaming Yuan
8ad7e8eeb0 [doc] Fix typo. [skip ci] (#7226) 2021-09-17 11:13:49 +08:00
Jiaming Yuan
22d56cebf1 Encode pandas categorical data automatically. (#7231) 2021-09-17 11:09:55 +08:00
Jiaming Yuan
32e0858501 Fix travis. (#7237) 2021-09-17 10:06:23 +08:00
Jiaming Yuan
31c1e13f90 Categorical data support in CPU sketching. (#7221) 2021-09-17 04:37:09 +08:00
Jiaming Yuan
9f63d6fead [jvm-packages] Deprecate constructors with implicit missing value. (#7225) 2021-09-17 04:35:04 +08:00
Jiaming Yuan
0ed979b096 Support more input types for categorical data. (#7220)
* Support more input types for categorical data.

* Shorten the type name from "categorical" to "c".
* Tests for np/cp array and scipy csr/csc/coo.
* Specify the type for feature info.
2021-09-16 20:39:30 +08:00
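A sketch of the shortened type tag with a plain NumPy input (values are synthetic; while the feature was experimental the exact keyword requirements shifted between releases):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
# The second column holds small non-negative integers acting as category codes.
X = np.stack([rng.normal(size=200), rng.integers(0, 5, size=200)], axis=1)
y = rng.normal(size=200)

# "q" = quantitative, "c" = categorical (shortened from "categorical").
Xy = xgb.DMatrix(X, y, feature_types=["q", "c"], enable_categorical=True)
```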
Jiaming Yuan
2942dc68e4 Fix mixed types in GPU sketching. (#7228) 2021-09-16 00:10:25 +08:00
Jiaming Yuan
037dd0820d Implement __sklearn_is_fitted__. (#7230) 2021-09-15 19:09:04 +08:00
Jiaming Yuan
d997c967d5 Demo for experimental categorical data support. (#7213) 2021-09-15 08:20:12 +08:00
Jiaming Yuan
3515931305 Initial support for external memory in gradient index. (#7183)
* Add hessian to batch param in preparation of new approx impl.
* Extract a push method for gradient index matrix.
* Use span instead of vector ref for hessian in sketching.
* Create a binary format for gradient index.
2021-09-13 12:40:56 +08:00
Christian Lorentzen
a0dcf6f5c1 [DOC] Improve tutorial on feature interactions (#7219) 2021-09-12 21:40:02 +08:00
Jiaming Yuan
804b2ac60f Expose DMatrix API for CUDA columnar and array. (#7217)
* Use JSON encoded configurations.
* Expose them into header file.
2021-09-09 17:55:25 +08:00
Jiaming Yuan
68a2c7b8d6 Fix memory leak in demo. (#7216) 2021-09-09 13:51:03 +08:00
Jiaming Yuan
b12e7f7edd Add noexcept to JSON objects. (#7205) 2021-09-07 13:56:48 +08:00
Jiaming Yuan
3a4f51f39f Avoid calling CUDA code on CPU for linear model. (#7154) 2021-09-01 10:45:31 +08:00
Jiaming Yuan
ba69244a94 Restore the custom double atomic add. (#7198) 2021-08-28 18:30:42 +08:00
Jiaming Yuan
7a1d67f9cb [breaking] Use integer atomic for GPU histogram. (#7180)
On GPU we use a rounding factor to truncate the gradient for deterministic results. This PR changes the gradient representation to a fixed-point number with the exponent aligned with the rounding factor.

    [breaking] Drop non-deterministic histogram.
    Use fixed point for shared memory.

This PR is to improve the performance of GPU Hist. 

Co-authored-by: Andy Adinets <aadinets@nvidia.com>
2021-08-28 05:17:05 +08:00
Jiaming Yuan
e7d7ab6bc3 Better error message for ncclUnhandledCudaError. (#7190) 2021-08-27 10:29:22 +08:00
Philip Hyunsu Cho
b70e07da1f [CI] Clean up in beginning of each task in Win CI (#7189) 2021-08-25 04:15:22 -07:00
Jiaming Yuan
cdfaa705f3 Fix building on CUDA 11.0. (#7187) 2021-08-25 02:57:53 -07:00
Philip Hyunsu Cho
3060f0b562 [CI] Automatically build GPU-enabled R package for Windows (#7185)
* [CI] Automatically build GPU-enabled R package for Windows

* Update Jenkinsfile-win64

* Build R package for the release branch only

* Update install doc
2021-08-25 02:11:01 -07:00
Jiaming Yuan
9c64618cb6 [breaking] Remove CUDA sm_35, add sm_86 (#7182) 2021-08-25 16:04:23 +08:00
Philip Hyunsu Cho
d04312b9c0 [CI] Fix hanging Python setup in Windows CI (#7186) 2021-08-24 22:03:51 -07:00
Jiaming Yuan
ee8d1f5ed8 Fix histogram truncation. (#7181)
* Fix truncation.

* Lint.

* lint.
2021-08-24 18:34:32 -07:00
Jiaming Yuan
3290a4f3ed Re-enable feature validation in predict proba. (#7177) 2021-08-22 15:28:08 +08:00
Jiaming Yuan
bf562bd33c Remove unused code. (#7175) 2021-08-18 14:02:19 +08:00
Anton Kostin
01b7acba30 Update conf.py (#7174) 2021-08-17 03:38:26 +08:00
Anton Kostin
ec849ec335 Update README.md (#7173) 2021-08-17 03:37:53 +08:00
Martin Petříček
46c46829ce Fix model loading from stream (#7067)
Fix bug introduced in 17913713b5 (allow loading from byte array)

When loading a model from a stream, only the last buffer read from the input stream was used to construct the model.

This may work for models smaller than 1 MiB (if you are lucky enough to read the whole model at once), but it will always fail if the model is larger.
2021-08-15 21:04:33 +08:00
Jiaming Yuan
6bcbc77226 [doc] Fix typo. [skip ci] (#7170) 2021-08-13 03:48:16 +08:00
Jiaming Yuan
3f38d983a6 Fix prediction configuration. (#7159)
After the predictor parameter was added to the constructor, this configuration was broken.
2021-08-11 16:34:36 +08:00
Jiaming Yuan
9600ca83f3 Remove synchronization in monitor. (#7164)
* Remove synchronization in monitor.

Calling rabit functions during destruction is flaky.

* Add xgboost prefix to nvtx marker.
2021-08-11 16:33:53 +08:00
Jiaming Yuan
149f209af6 Extract histogram builder from CPU Hist. (#7152)
* Extract the CPU histogram builder.
* Fix tests.
* Reduce number of histograms being built.
2021-08-09 21:15:21 +08:00
Philip Hyunsu Cho
336af4f974 Work around a segfault observed in SparsePage::Push() (#7161)
* Work around a segfault observed in SparsePage::Push()

* Revert "Work around a segfault observed in SparsePage::Push()"

This reverts commit 30934844d00908750a5442082eb4769b1489f6a9.

* Don't call vector::resize() inside OpenMP block

* Set GITHUB_PAT env var to fix R tests

* Use built-in GITHUB_TOKEN
2021-08-08 02:12:30 -07:00
AJ Schmidt
f7003dc819 Include cpack (#7160)
Co-authored-by: ptaylor <paul.e.taylor@me.com>
2021-08-07 00:57:34 +08:00
Jiaming Yuan
8a84be37b8 Pass scikit learn estimator checks for regressor. (#7130)
* Check data shape.
* Check labels.
2021-08-03 18:58:20 +08:00
Jiaming Yuan
8ee127469f [R] Fix nthread in DMatrix constructor. (#7127)
* Break the R C API for nthread.
2021-08-03 17:39:25 +08:00
Jiaming Yuan
ba47eda61b [doc] Use figure directive. (#7143) 2021-08-03 15:56:25 +08:00
Jiaming Yuan
e2c406f5c8 Support min_delta in early stopping. (#7137)
* Support `min_delta` in early stopping.

* Remove abs_tol.
2021-08-03 14:29:17 +08:00
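A sketch of the new knob on the EarlyStopping callback (synthetic data; the threshold value is arbitrary):

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(300, 5), np.random.rand(300)
train = xgb.DMatrix(X[:200], y[:200])
valid = xgb.DMatrix(X[200:], y[200:])

# Stop unless the validation metric improves by at least min_delta
# within `rounds` consecutive iterations.
early_stop = xgb.callback.EarlyStopping(rounds=5, min_delta=1e-3, save_best=True)

booster = xgb.train(
    {"objective": "reg:squarederror"},
    train,
    num_boost_round=100,
    evals=[(valid, "validation")],
    callbacks=[early_stop],
)
```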
Jiaming Yuan
7bdedacb54 Document for process_type. (#7135)
* Update document for prune and refresh.

* Add demo.
2021-08-03 13:11:52 +08:00
Jiaming Yuan
d080b5a953 Fix model slicing. (#7149)
* Use correct pointer.
* Remove best_iteration/best_score.
2021-08-03 11:51:56 +08:00
Jiaming Yuan
36346f8f56 C API demo for inference. (#7151) 2021-08-03 00:46:47 +08:00
Jiaming Yuan
1369133916 [dask] Remove the workaround for segfault. (#7146) 2021-07-30 03:57:53 +08:00
Philip Hyunsu Cho
f1a4a1ac95 [CI] Upgrade build image to CentOS 7 + GCC 8; require CUDA 10.1 and later (#7141) 2021-07-29 10:54:33 -07:00
graue70
dfdf0b08fc Fix typo and grammatical mistake in error message (#7134) 2021-07-28 17:17:05 +08:00
Gil Forsyth
92ae3abc97 [dask] Disallow importing non-dask estimators from xgboost.dask (#7133)
* Disallow importing non-dask estimators from xgboost.dask

This is mostly a style change, but it also avoids a user error (one that I have
committed on a few occasions).  Since `XGBRegressor` and `XGBClassifier`
are imported as parent classes for the `dask` estimators, without
defining an `__all__`, autocomplete (or muscle memory) will produce the
following with little prompting:

```
from xgboost.dask import XGBClassifier
```

There's nothing inherently wrong with that, but given that
`XGBClassifier` is not `dask`-enabled, it can lead to confusing behavior
until you figure out you should've typed

```
from xgboost.dask import DaskXGBClassifier
```

Another option is to alias import the existing non-dask estimators.

* Remove base/iter class, add train predict funcs
2021-07-28 02:07:23 +08:00
Robert Maynard
1a75f43304 Allow compilation with nvcc 11.4 (#7131)
* Use type aliases for discard iterators

* update to include host_vector as thrust 1.12 doesn't bring it in as a side-effect

* cub::DispatchRadixSort requires signed offset types
2021-07-27 20:05:33 +08:00
Jiaming Yuan
7017dd5a26 [JVM-Packages] Use Python tracker in XGBoost for JVM package. (#7132) 2021-07-27 16:20:42 +08:00
Jiaming Yuan
48d5de80a2 [R] Fix softprob reshape. (#7126) 2021-07-27 15:25:17 +08:00
Jiaming Yuan
7ee7a95b84 Use upstream URI in distributed quantile tests. (#7129)
* Use upstream URI in distributed quantile tests.

* Fix test cv `PytestAssertRewriteWarning`.
2021-07-27 14:09:49 +08:00
Jiaming Yuan
e88ac9cc54 [dask] Extend tree stats tests. (#7128)
* Add tests to GPU.
* Assert cover in children sums up to the parent.
2021-07-27 12:22:13 +08:00
Jiaming Yuan
778135f657 Fix parameter loading with training continuation. (#7121)
* Add a demo for training continuation.
2021-07-23 10:51:47 +08:00
Taewoo Kim
41e882f80b Check whether the input value is duplicated when the quantile queue is full (#7091)
Co-authored-by: Taewoo Kim <taewoo@layer6.com>
2021-07-23 03:07:01 +08:00
ShvetsKS
caa9e527dd Remove extra sync for dense data (#7120)
Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>
2021-07-22 19:02:31 +08:00
Jiaming Yuan
e6088366df Export Python Interface for external memory. (#7070)
* Add Python iterator interface.
* Add tests.
* Add demo.
* Add documents.
* Handle empty dataset.
2021-07-22 15:15:53 +08:00
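The exported interface is a `DataIter` subclass that hands batches to XGBoost through a callback; a condensed sketch along the lines of the accompanying demo (in-memory batches stand in for files on disk):

```python
import os
import numpy as np
import xgboost as xgb

class BatchIter(xgb.DataIter):
    def __init__(self, batches):
        self._batches = batches
        self._it = 0
        # cache_prefix tells XGBoost where to put the on-disk cache pages.
        super().__init__(cache_prefix=os.path.join(".", "cache"))

    def next(self, input_data):
        if self._it == len(self._batches):
            return 0                     # 0 signals the end of iteration
        X, y = self._batches[self._it]
        input_data(data=X, label=y)      # hand the current batch to XGBoost
        self._it += 1
        return 1                         # 1 means more batches remain

    def reset(self):
        self._it = 0                     # XGBoost resets between passes

batches = [(np.random.rand(64, 4), np.random.rand(64)) for _ in range(3)]
Xy = xgb.DMatrix(BatchIter(batches))     # builds an external-memory DMatrix
booster = xgb.train({"tree_method": "approx"}, Xy, num_boost_round=4)
```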
farfarawayzyt
e64ee6592f fix typo in src/common/hist.cc BuildHistKernel (#7116) 2021-07-21 19:53:05 +08:00
naveenkb
9f7f8b976d [XGBoost4J-Spark] bestIteration and bestScore for early stopping (#7095) 2021-07-19 18:46:49 +08:00
farfarawayzyt
d7c14496d2 fix typo in arguments of PartitionBuilder::Init (#7113)
Co-authored-by: Yuntian Zhang <zhangyt@lamda.nju.edu.cn>
2021-07-16 15:46:22 +08:00
Jiaming Yuan
bd1f3a38f0 Rewrite sparse dmatrix using callbacks. (#7092)
- Reduce dependency on dmlc parsers and provide an interface for users to load data by themselves.
- Remove use of threaded iterator and IO queue.
- Remove `page_size`.
- Make sure the number of pages in memory is bounded.
- Make sure the cache can not be violated.
- Provide an interface for internal algorithms to process data asynchronously.
2021-07-16 12:33:31 +08:00
Jiaming Yuan
2f524e9f41 [dask] Work around segfault in prediction. (#7112) 2021-07-16 04:27:05 +08:00
Jiaming Yuan
abec3dbf6d Fix thread safety of softmax prediction. (#7104) 2021-07-16 02:06:55 +08:00
Philip Hyunsu Cho
2801d69fb7 [CI] Pin libomp to 11.1.0 (#7107) 2021-07-15 11:16:51 +08:00
Jiaming Yuan
8e8232fb4c [CI] Update R cache. (#7102) 2021-07-14 03:15:35 +08:00
Jiaming Yuan
345796825f Optional find dependency in installed cmake config. (#7099)
* Find dependency only when xgboost is built as static library.
* Resolve msvc warning.
* Add test for linking shared library.
2021-07-11 17:20:55 +08:00
ZabelTech
1d91f71119 fix typo in XGDMatrixSetFloatInfo example (#7097) 2021-07-10 21:40:25 +08:00
Jiaming Yuan
77f6cf2d13 Support hessian in host sketch container. (#7081)
Prepare for migrating approx onto hist's codebase.
2021-07-08 16:33:58 +08:00
Jiaming Yuan
84d359efb8 Support host data in proxy DMatrix. (#7087) 2021-07-08 11:35:48 +08:00
Jiaming Yuan
5d7cdf2e36 [Breaking] Rename Quantile DMatrix C API. (#7082)
The role of ProxyDMatrix has grown beyond what it was designed for.  Now it's used by both
QuantileDeviceDMatrix and inplace prediction.  After the refactoring of the sparse DMatrix, it
will also be used for external memory.  Rename the C API to decouple it from
QuantileDeviceDMatrix.
2021-07-08 11:34:14 +08:00
Jiaming Yuan
c766f143ab Refactor external memory formats. (#7089)
* Save base_rowid.
* Return write size.
* Remove unused function.
2021-07-08 04:04:51 +08:00
Jiaming Yuan
689eb8f620 Check external memory support for exact tree method. (#7088) 2021-07-08 02:12:57 +08:00
Jiaming Yuan
615ab2b03e Extract evaluate splits from CPU hist. (#7079)
Other than modularizing the split evaluation function, this PR also removes some more functions, including `InitNewNodes` and `BuildNodeStats`, among some other unused variables.  Also, scattered code like setting leaf weights is grouped into the split evaluator, and `NodeEntry` is simplified and made private.  Another subtle difference from the original implementation is that the modified code doesn't call `tree[nidx].Parent()` to traverse upward.
2021-07-07 15:16:25 +08:00
Jeff H
d22b293f2f Update reference to treelite website (#7084)
treelite.io is no longer a valid site and redirects users to a parked domain. Redirecting to the documentation is safer at this point.
2021-07-06 22:15:07 -07:00
Jiaming Yuan
f937f514aa Remove lz4 compression with external memory. (#7076) 2021-07-06 14:46:43 +08:00
Jiaming Yuan
116d711815 Make SimpleDMatrix ctor reusable. (#7075) 2021-07-06 13:38:24 +08:00
Jiaming Yuan
d7e1fa7664 Fix feature names and types in output model slice. (#7078) 2021-07-06 11:47:49 +08:00
Jiaming Yuan
ffa66aace0 Persist data in dask test. (#7077) 2021-07-06 11:47:17 +08:00
Jiaming Yuan
b56d3d5d5c Fix with latest pandas range index. (#7074)
Jiaming Yuan
93f3acdef9 Fix with latest pylint. (#7071) 2021-07-02 21:26:00 +08:00
Jiaming Yuan
a5d222fcdb Handle categorical split in model histogram and dataframe. (#7065)
* Error on get_split_value_histogram when feature is categorical
* Add a category column to output dataframe
2021-07-02 13:10:36 +08:00
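A sketch of the behaviour added here (synthetic data; GPU training, since categorical support was GPU-only at this point — and the exact name of the added column is best checked against the docs):

```python
import numpy as np
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({
    "cat": pd.Categorical(np.random.choice(list("abc"), size=100)),
    "num": np.random.rand(100),
})
y = np.random.rand(100)

Xy = xgb.DMatrix(df, y, enable_categorical=True)
booster = xgb.train({"tree_method": "gpu_hist"}, Xy, num_boost_round=2)

frame = booster.trees_to_dataframe()
print(frame.columns)  # categorical splits now carry a category column

# get_split_value_histogram("cat") errors out instead of silently
# returning a numeric histogram for a categorical feature.
```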
Jiaming Yuan
1cd20efe68 Move GHistIndex into DMatrix. (#7064) 2021-07-01 00:44:49 +08:00
Jiaming Yuan
1c8fdf2218 Remove use of device_idx in dh::LaunchN. (#7063)
It's an unused parameter; removing it makes the CI log more readable.
2021-06-29 11:37:26 +08:00
Philip Hyunsu Cho
dd4db347f3 Fix early stopping behavior with MAPE metric (#7061) 2021-06-26 03:02:33 +08:00
Jiaming Yuan
8fa32fdda2 Implement categorical data support for SHAP. (#7053)
* Add CPU implementation.
* Update GPUTreeSHAP.
* Add GPU implementation by defining custom split condition.
2021-06-25 19:02:46 +08:00
Jiaming Yuan
663136aa08 Implement feature score for linear model. (#7048)
* Add feature score support for linear model.
* Port R interface to the new implementation.
* Add linear model support in Python.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2021-06-25 14:34:02 +08:00
Philip Hyunsu Cho
b2d300e727 [CI] Upgrade to CMake 3.14 (#7060)
* [CI] Upgrade to CMake 3.14

* Add FATAL_ERROR directive, for users with CMake 2.x
2021-06-24 18:07:24 -07:00
Jiaming Yuan
1d4d345634 Tests for dask skl categorical data support. (#7054) 2021-06-24 16:33:57 +08:00
Jiaming Yuan
da1ad798ca Convert numpy float to Python float in feat score. (#7047) 2021-06-21 20:58:43 +08:00
Jiaming Yuan
bbfffb444d Fix race condition in CPU shap. (#7050) 2021-06-21 10:03:15 +08:00
Jiaming Yuan
29f8fd6fee Support categorical split in tree model dump. (#7036) 2021-06-18 16:46:20 +08:00
Jiaming Yuan
7968c0d051 Test on s390x. (#7038)
* Fix && remove unused parameter.
2021-06-18 14:55:08 +08:00
Jiaming Yuan
86715e4cd4 Support categorical data for dask functional interface and DQM. (#7043)
* Support categorical data for dask functional interface and DQM.

* Implement categorical data support for GPU GK-merge.
* Add support for dask functional interface.
* Add support for DQM.

* Get newer cupy.
2021-06-18 13:06:52 +08:00
Jiaming Yuan
7dd29ffd47 Implement feature score in GBTree. (#7041)
* Categorical data support.
* Eliminate text parsing during feature score computation.
2021-06-18 11:53:16 +08:00
Jiaming Yuan
dcd84b3979 [CI] Configure RAPIDS, dask, modin (#7033)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2021-06-18 10:27:51 +08:00
Jiaming Yuan
d9799b09d0 Categorical data support for cuDF. (#7042)
* Add support in DMatrix.
* Add support in DQM, except for iterator.
2021-06-17 13:54:33 +08:00
Jiaming Yuan
5c2d7a18c9 Parallel model dump for trees. (#7040) 2021-06-15 14:08:26 +08:00
ShvetsKS
2567404ab6 Simplify sparse and dense CPU hist kernels (#7029)
* Simplify sparse and dense kernels
* Extract row partitioner.

Co-authored-by: Kirill Shvets <kirill.shvets@intel.com>
2021-06-11 18:26:30 +08:00
Jiaming Yuan
1faad825f4 Remove appveyor badge. [skip ci] (#7035) 2021-06-11 14:37:18 +08:00
Jiaming Yuan
b56614e9b8 [R] Use new predict function. (#6819)
* Call new C prediction API.
* Add `strict_shape`.
* Add `iterationrange`.
* Update document.
2021-06-11 13:03:29 +08:00
jmoralez
25514e104a [dask] speed up tests (#7020) 2021-06-11 11:43:01 +08:00
Jiaming Yuan
f79cc4a7a4 Implement categorical prediction for CPU and GPU predict leaf. (#7001)
* Categorical prediction with CPU predictor and GPU predict leaf.

* Implement categorical prediction for CPU prediction.
* Implement categorical prediction for GPU predict leaf.
* Refactor the prediction functions to have a unified get next node function.

Co-authored-by: Shvets Kirill <kirill.shvets@intel.com>
2021-06-11 10:11:45 +08:00
Jiaming Yuan
72f9daf9b6 Fix gpu_id with custom objective. (#7015) 2021-06-09 14:51:17 +08:00
TP Boudreau
bd2ca543c4 Fix BinarySearchBin() argument types (#7026) 2021-06-08 19:05:46 +08:00
Jiaming Yuan
7beb2f7fae Hide symbols in CI build + hide symbols for C and CUDA (#6798)
* Hide symbols in CI build.
* Hide symbols for other languages.
2021-06-04 02:35:46 +08:00
Jiaming Yuan
c4b9f4f622 Add enable_categorical to sklearn. (#7011) 2021-06-04 02:29:14 +08:00
Philip Hyunsu Cho
655e6992f6 [Dask] Add example of using custom callback in Dask (#6995) 2021-06-03 07:05:55 +08:00
ShvetsKS
5cdaac00c1 Remove feature grouping (#7018)
Co-authored-by: Kirill Shvets <kirill.shvets@intel.com>
2021-06-03 04:35:26 +08:00
Philip Hyunsu Cho
05db6a6c29 [CI] Upgrade cuDF and RMM to 21.06 nightly (#7012)
* [CI] Upgrade cuDF and RMM to 21.06 nightly

* Trim outdated test cases

* Pin Dask version to 2021.05.0 for now
2021-06-02 11:59:30 -07:00
ShvetsKS
57c732655e Merge lossguide and depthwise strategies for CPU hist (#7007)
* Fix java/scala test: max depth is also a valid parameter for lossguide

Co-authored-by: Kirill Shvets <kirill.shvets@intel.com>
2021-06-03 01:49:43 +08:00
Jiaming Yuan
ee4f51a631 Support for all primitive types from array. (#7003)
* Change C API name.
* Test for all primitive types from array.
* Add native support for CPU 128 float.
* Convert boolean and float16 in Python.

* Fix dask version for now.
2021-06-01 08:34:48 +08:00
Jiaming Yuan
816b789bf0 Add predictor to skl constructor. (#7000) 2021-05-29 04:52:56 +08:00
ShvetsKS
55b823b27d Reduce 'InitSampling' complexity and set gradients to zero (#6922)
Co-authored-by: Kirill Shvets <kirill.shvets@intel.com>
2021-05-29 04:52:23 +08:00
Jiaming Yuan
89a49cf30e Fix dask predict on DaskDMatrix with iteration_range. (#7005) 2021-05-29 04:43:12 +08:00
Jiaming Yuan
4cf95a6041 Support numpy array interface (#6998) 2021-05-27 16:08:22 +08:00
Jiaming Yuan
ab6fd304c4 [Python] Change development release postfix to dev (#6988) 2021-05-27 16:06:51 +08:00
Jiaming Yuan
29d6a5e2b8 [CI] Move appveyor tests to action (#6986)
* Drop support for VS14, use VS15 instead.
* Drop support for mingw.
* Remove debug build.
* Split up jvm tests.
* Split up Python tests.
2021-05-27 04:49:45 +08:00
Jiaming Yuan
86e60e3ba8 Guard against index error in prediction. (#6982)
* Remove `best_ntree_limit` from documents.
2021-05-25 23:24:59 +08:00
Philip Hyunsu Cho
c6d87e5e18 [CI] Remove stray build artifact to avoid error in artifact packaging (#6994) 2021-05-25 19:48:27 +08:00
Jiaming Yuan
a4bc7ecf27 Restore R cache on github action. (#6985) 2021-05-25 18:53:44 +08:00
Jiaming Yuan
6e52aefb37 Revert OMP guard. (#6987)
The guard protects the global variable from being changed by XGBoost.  But this led to a
bug where the `n_threads` parameter was no longer used after the first iteration, because
`omp_set_num_threads` is only called once, in `Learner::Configure`, at the beginning of
the training process.

The guard is still useful for `gpu_id`, since that is set throughout our codebase
regardless of which iteration is currently running.
2021-05-25 08:56:28 +08:00
Jiaming Yuan
cf06a266a8 [dask][doc] Wrap the example in main guard. (#6979) 2021-05-25 08:24:47 +08:00
Mads R. B. Kristensen
81bdfb835d lazy_isinstance(): use .__class__ for type check (#6974) 2021-05-21 11:33:08 +08:00
Emil Sadek
29c942f2a8 [doc] Capitalize section headers (#6976) 2021-05-21 11:31:05 +08:00
Adam Pocock
2320aa0da2 Making the Java library loader emit helpful error messages on missing dependencies. (#6926) 2021-05-19 14:53:56 +08:00
Jiaming Yuan
5cb51a191e [dask][doc] Add small example for sklearn interface. (#6970) 2021-05-19 13:50:45 +08:00
Jiaming Yuan
7e846bb965 Fix prediction on df with latest dask. (#6969) 2021-05-19 12:23:03 +08:00
Jiaming Yuan
6e104f0570 Add news for 1.4.2. [skip ci] (#6963) 2021-05-17 02:50:55 +08:00
ReeceGoding
42fc7ca6a0 Corrected lapply comment in callbacks.R (#6967)
The comment was made false by the removal of the pipes.
2021-05-17 02:31:50 +08:00
Livius
a4886c404a Fix compilation error on x86 (#6964)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2021-05-14 13:31:49 +08:00
ReeceGoding
f94f479358 Simplify list2mat call from lapply in callbacks.R (#6966) 2021-05-14 03:40:58 +08:00
Jiaming Yuan
d245bc891e Add tolerance to early stopping. (#6942) 2021-05-14 00:19:51 +08:00
James Lamb
894e9bc5d4 [R-package] remove dependency on {magrittr} (#6928)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2021-05-13 04:34:59 +08:00
Jiaming Yuan
44cc9c04ea Fix multiclass auc with empty dataset. (#6947) 2021-05-12 15:01:14 +08:00
Jiaming Yuan
05ac415780 [dask] Set dataframe index in predict. (#6944) 2021-05-12 13:24:21 +08:00
Andrew Ziem
3e7e426b36 Fix spelling in documents (#6948)
* Update roxygen2 doc.

Co-authored-by: fis <jm.yuan@outlook.com>
2021-05-11 20:44:36 +08:00
vslaykovsky
2a9979e256 Fixed incorrect feature mismatch error message (#6949)
data.shape[0] denotes the number of samples, data.shape[1] is the number of features
2021-05-11 13:52:11 +08:00
Philip Hyunsu Cho
90cd724be1 [CI] Fix CI/CD pipeline broken by latest auditwheel (4.0.0) (#6951) 2021-05-10 22:43:15 -07:00
Daniel Saxton
e41619b1fc Link to valid tree_method values in docs (#6935) 2021-05-06 17:33:18 +08:00
Philip Hyunsu Cho
ec6ce08cd0 [jvm-packages] Make it easier to release GPU/CPU code artifacts to Maven Central (#6940) 2021-05-04 14:00:03 -07:00
Jose Manuel Llorens
4ddbaeea32 Improve warning when using np.ndarray subsets (#6934) 2021-05-04 13:24:41 +08:00
Ali
b35dd76dca [R] don't remove CMakeLists in cleanup (#6930)
Currently, installing the R package leaves the repo in a dirty state, since
`CMakeLists.txt` is already checked in. This fixes the `cleanup`
script so it does not delete this file.
2021-05-03 17:46:15 +08:00
Jiaming Yuan
37ad60fe25 Enforce input data is not object. (#6927)
* Check for object data type.
* Allow strided arrays with greater underlying buffer size.
2021-05-02 00:09:01 +08:00
Jiaming Yuan
a1d23f6613 Relax test for decision stump in distributed environment. (#6919) 2021-04-30 09:04:11 +08:00
Jiaming Yuan
45ddc39c1d Relax shotgun test. (#6918) 2021-04-30 09:03:12 +08:00
Jiaming Yuan
34df1f588b Reduce Travis environment setup time. (#6912)
* Remove unused r from travis.
* Don't update homebrew.
* Don't install indirect/unused dependencies like libgit2, wget, openssl.
* Move graphviz installation to conda.
2021-04-30 09:02:40 +08:00
Jiaming Yuan
b31d37eac5 [CI] Fix custom metric test with empty dataset. (#6917) 2021-04-30 09:00:05 +08:00
Jiaming Yuan
db6285fb55 [CI] Skip external memory gtest on osx. (#6901) 2021-04-30 08:59:33 +08:00
david-cortes
4e1a8b1fe5 Update R handles in-place (#6903)
* update R handles in-place #fixes 6896

* update test to expect non-null handle

* remove unused variable

* fix failing tests

* solve linter complaints
2021-04-29 12:50:46 -07:00
Philip Hyunsu Cho
5472ef626c [R] Re-generate Roxygen2 doc (#6915) 2021-04-29 11:55:07 -07:00
James Lamb
20f34d9776 [R-package] Update dependencies from CMake-based installation (#6906)
* remove stringi
* add Matrix and jsonlite
2021-04-29 01:32:01 +08:00
Jiaming Yuan
ef473b1f09 Disable pylint error. (#6911) 2021-04-29 01:01:37 +08:00
Jiaming Yuan
8760ec4827 Ensure predict leaf output 1-dim vector where there's only 1 tree. (#6889) 2021-04-23 15:07:48 +08:00
Jiaming Yuan
54afa3ac7a Relax shotgun test. (#6900)
It's a non-deterministic algorithm, so the test is flaky.
2021-04-23 13:01:44 +08:00
Jiaming Yuan
a2ecbdaa31 Add an API guard to prevent global variables being changed. (#6891) 2021-04-23 10:27:57 +08:00
Jiaming Yuan
896aede340 Reorganize the installation documents. (#6877)
* Split up installation and building from source.
* Use consistent section titles.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2021-04-22 04:48:32 +08:00
Jiaming Yuan
74b41637de Revert "[jvm-packages] Add XGBOOST_RABIT_TRACKER_IP_FOR_TEST to set rabit tracker IP. (#6869)" (#6886)
This reverts commit 2828da3c4c.
2021-04-21 11:20:10 -07:00
Kai Fricke
c8cc3eacc9 [docs] Add tutorial for XGBoost-Ray (#6884)
* Add XGBoost-Ray tutorial

* Add link to modin
2021-04-22 02:07:13 +08:00
Bobby Wang
2828da3c4c [jvm-packages] Add XGBOOST_RABIT_TRACKER_IP_FOR_TEST to set rabit tracker IP. (#6869)
* Add `XGBOOST_RABIT_TRACKER_IP_FOR_TEST` to set rabit tracker IP

* Change Spark and rabit tracker IP to 127.0.0.1 on GitHub Actions.

Co-authored-by: fis <jm.yuan@outlook.com>
2021-04-22 02:00:22 +08:00
Jiaming Yuan
233bdf105f Remove setDaemon in tracker. (#6872) 2021-04-22 01:57:13 +08:00
Jiaming Yuan
71b938f608 1.4.1 release news. (#6876) 2021-04-22 01:55:57 +08:00
Jiaming Yuan
146549260a Bump version to 1.5.0 snapshot in master. (#6875) 2021-04-22 01:53:44 +08:00
Jiaming Yuan
bec2b4f094 Revert "Use CPU input for test_boost_from_prediction. (#6818)" (#6858)
This reverts commit 74f3a2f4b5.
2021-04-20 14:54:02 +08:00
Bobby Wang
2c684ffd32 [jvm-packages] fix "key not found: train" issue (#6842)
* [jvm-packages] fix "key not found: train" issue

* fix bug
2021-04-18 23:28:39 -07:00
Jiaming Yuan
556a83022d Implement unified update prediction cache for (gpu_)hist. (#6860)
* Implement utilities for linalg.
* Unify the update prediction cache functions.
* Implement update prediction cache for multi-class gpu hist.
2021-04-17 00:29:34 +08:00
Jiaming Yuan
1b26a2a561 Copy output data for argsort. (#6866)
Fix GPU AUC.
2021-04-16 21:05:01 +08:00
Jiaming Yuan
a5d7094a45 Update documents. (#6856)
* Add early stopping section to prediction doc.
* Remove best_ntree_limit.
* Better doxygen output.
2021-04-16 12:41:03 +08:00
ReeceGoding
d31a57cf5f Removed typo in callbacks.R (#6863)
Changed "TURE" to "TRUE".
2021-04-16 05:43:22 +08:00
Jiaming Yuan
bccb7e87d1 Update dmlc-core. (#6862)
* Install pandoc, pandoc-citeproc on CI.
2021-04-16 00:14:17 +08:00
ReeceGoding
2e8c101b4a Removed magrittr dependency in callbacks.R (#6855) 2021-04-15 18:45:17 +08:00
Philip Hyunsu Cho
4224c08cac Add demo for using AFT survival with Dask (#6853) 2021-04-13 16:18:33 -07:00
Philip Hyunsu Cho
878b990fcd [CI] Upload Doxygen to correct destination (#6854) 2021-04-13 16:18:13 -07:00
Jiaming Yuan
dee5ef2dfd Typehint for Sklearn. (#6799) 2021-04-14 06:55:21 +08:00
Jiaming Yuan
3d919db0c0 Fix pip release script. [skip ci] (#6845) 2021-04-14 06:46:02 +08:00
Jiaming Yuan
b9a4f3336a 1.4 release notes. (#6843) 2021-04-13 08:38:27 +08:00
Philip Hyunsu Cho
ea7a6a0321 [CI] Pack R package tarball with pre-built xgboost.so (with GPU support) (#6827)
* Add scripts for packaging R package with GPU-enabled libxgboost.so

* [CI] Automatically build R package tarball

* Add comments

* Don't build tarball for pull requests

* Update the installation doc
2021-04-07 21:15:34 -07:00
Jiaming Yuan
f294c4e023 Use constexpr in dh::CopyIf. (#6828) 2021-04-08 07:37:47 +08:00
Viktor Szathmáry
b65e3c4444 [jvm] reduce scala-compiler, scalatest dependency scopes (#6730)
* [jvm] reduce scala-compiler, scalatest dependency scopes

* [jvm] workaround for GpuTestSuite scalatest dependency

* scalatest scope tweak
2021-04-07 15:22:08 -07:00
Jiaming Yuan
7bcc8b3e5c Use batched copy if. (#6826) 2021-04-06 10:34:04 +08:00
giladmaya
aa0d8f20c1 Support configuring constraints by feature names (#6783)
Co-authored-by: fis <jm.yuan@outlook.com>
2021-04-04 06:53:33 +08:00
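
A sketch of the idea behind #6783, assuming the dict form of monotone_constraints keyed by feature name; the names and data here are made up:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((100, 2)), rng.random(100)
dtrain = xgb.DMatrix(X, label=y, feature_names=["f_up", "f_down"])

# With named features, constraints can reference names rather than
# positional column indices.
params = {"monotone_constraints": {"f_up": 1, "f_down": -1}}
booster = xgb.train(params, dtrain, num_boost_round=10)
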
Jiaming Yuan
7e06c81894 Fix approximated predict contribution. (#6811) 2021-04-03 02:15:03 +08:00
Jiaming Yuan
0cced530ea [doc] Clarify prediction function. (#6813) 2021-04-03 02:12:04 +08:00
Jiaming Yuan
b1fdb220f4 Remove deprecated n_gpus parameter. (#6821) 2021-04-02 03:02:32 +08:00
Jiaming Yuan
74f3a2f4b5 Use CPU input for test_boost_from_prediction. (#6818) 2021-04-02 00:11:35 +08:00
Jiaming Yuan
47b62480af More general predict proba. (#6817)
* Use `output_margin` for `softmax`.
* Add test for dask binary cls.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2021-04-01 19:52:12 +08:00
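
A sketch of the resulting behavior, assuming XGBoost >= 1.4 and scipy; the closeness check illustrates the intent (probabilities derived from raw margins) rather than a guaranteed contract:

import numpy as np
from scipy.special import softmax
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((100, 4)), rng.integers(0, 3, 100)

clf = xgb.XGBClassifier(objective="multi:softmax", n_estimators=10)
clf.fit(X, y)

# predict_proba is now computed from the raw margin scores:
margin = clf.get_booster().predict(xgb.DMatrix(X), output_margin=True)
assert np.allclose(softmax(margin, axis=1), clf.predict_proba(X), atol=1e-6)
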
Jiaming Yuan
a5c852660b Update document for sklearn model IO. (#6809)
* Update the use of JSON.
* Remove unnecessary type cast.
2021-04-01 15:52:36 +08:00
Jiaming Yuan
905fdd3e08 Fix typos in AUC. (#6795) 2021-03-31 16:35:42 +08:00
Jiaming Yuan
ca998df912 Clarify the behavior of use_rmm. (#6808)
* Clarify the `use_rmm` flag in document and demo.
2021-03-31 15:43:11 +08:00
Jiaming Yuan
3039dd194b Don't estimate sketch batch size when rmm is used. (#6807) 2021-03-31 15:29:56 +08:00
Jiaming Yuan
10ae0f9511 Fix doc for apply method. (#6796) 2021-03-31 15:28:31 +08:00
Jiaming Yuan
138fe8516a Remove unnecessary calls to iota. (#6797) 2021-03-31 15:27:23 +08:00
Jiaming Yuan
79b8b560d2 Optimize dart inplace predict perf. (#6804) 2021-03-31 15:20:54 +08:00
JohanWork
4aa12e10c0 Update URL (#6810) 2021-03-30 22:27:30 +08:00
James Lamb
f01af43eb0 [dask] disable work stealing explicitly for training tasks (#6794) 2021-03-29 16:47:56 +08:00
Jiaming Yuan
a59c7323b4 Fix inplace predict missing value. (#6787) 2021-03-27 05:36:10 +08:00
Jiaming Yuan
5c87c2bba8 Update demo for prediction. (#6789)
* Remove use of deprecated ntree_limit.
* Add sklearn demo.
2021-03-27 03:09:25 +08:00
ShvetsKS
8825670c9c Memory consumption fix for row-major adapters (#6779)
Co-authored-by: Kirill Shvets <kirill.shvets@intel.com>
Co-authored-by: fis <jm.yuan@outlook.com>
2021-03-26 08:44:30 +08:00
Philip Hyunsu Cho
744c46995c [CI] Upload xgboost4j.dll to S3 (#6781) 2021-03-25 11:34:34 -07:00
Jiaming Yuan
a7083d3c13 Fix dart inplace prediction with GPU input. (#6777)
* Fix dart inplace predict with data on GPU, which might trigger a fatal check
for device access rights.
* Avoid copying data whenever possible.
2021-03-25 12:00:32 +08:00
Jiaming Yuan
1d90577800 Verify strictly positive labels for gamma regression. (#6778)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2021-03-25 11:46:52 +08:00
Jiaming Yuan
794fd6a46b Support v3 cuda array interface. (#6776) 2021-03-25 09:58:09 +08:00
Jiaming Yuan
bcc0277338 Re-implement ROC-AUC. (#6747)
* Re-implement ROC-AUC.

* Binary
* MultiClass
* LTR
* Add documents.

This PR resolves a few issues:
  - Define a value when the dataset is invalid, which can happen if there's an
  empty dataset, or when the dataset contains only positive or negative values.
  - Define ROC-AUC for multi-class classification.
  - Define weighted average value for distributed setting.
  - A correct implementation for the learning-to-rank task.  The previous
  implementation was just binary classification with averaging across groups,
  which doesn't measure ordered learning to rank.
2021-03-20 16:52:40 +08:00
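
For example, a multi-class AUC request that was undefined before this change; a sketch with synthetic data, assuming XGBoost >= 1.4:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((200, 5)), label=rng.integers(0, 3, 200))

params = {"objective": "multi:softprob", "num_class": 3, "eval_metric": "auc"}
history = {}
xgb.train(params, dtrain, num_boost_round=5,
          evals=[(dtrain, "train")], evals_result=history)
print(history["train"]["auc"])
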
Jiaming Yuan
4ee8340e79 Support column major array. (#6765) 2021-03-20 05:19:46 +08:00
Jiaming Yuan
f6fe15d11f Improve parameter validation (#6769)
* Add quotes to unused parameters.
* Check for whitespace.
2021-03-20 01:56:55 +08:00
Jiaming Yuan
23b4165a6b Fix gamma deviance (#6761) 2021-03-20 01:56:17 +08:00
ReeceGoding
c2b6b80600 R documentation: Make construction of DMatrix consistent.
* Fix inconsistency of construction of DMatrix.
* Fix missing parameters.
2021-03-20 01:55:13 +08:00
Qingyun Wu
642336add7 [doc] Add FLAML as a fast tuning tool for XGBoost (#6770)
Co-authored-by: Qingyun Wu <qiw@microsoft.com>
2021-03-20 01:47:39 +08:00
Philip Hyunsu Cho
4230dcb614 Re-introduce double buffer in UpdatePosition, to fix perf regression in gpu_hist (#6757)
* Revert "gpu_hist performance tweaks (#5707)"

This reverts commit f779980f7e.

* Address reviewer's comment

* Fix build error
2021-03-18 13:56:10 -07:00
Jiaming Yuan
e2d8a99413 Add document for tests directory. [skip ci] (#6760) 2021-03-18 15:15:50 +08:00
ReeceGoding
4e00737c60 Fix R documentation for xgb.train. (#6764)
The [general documentation](https://xgboost.readthedocs.io/en/latest/parameter.html#parameters-for-tree-booster) clearly has alpha and lambda under its "Parameters for Tree Booster" heading. Furthermore, the R package clearly uses alpha and lambda when told to use the tree booster. This update adds those two parameters to the documentation for the R package.


Closes issue #6763.
2021-03-18 15:04:00 +08:00
Jiaming Yuan
4f75f514ce Fix GPU RF (#6755)
* Fix sampling.
2021-03-17 06:23:35 +08:00
Jiaming Yuan
1a73a28511 Add device argsort. (#6749)
This is part of https://github.com/dmlc/xgboost/pull/6747 .
2021-03-16 16:05:22 +08:00
Jiaming Yuan
325bc93e16 [dask] Use distributed.MultiLock (#6743)
* [dask] Use `distributed.MultiLock`

This enables training multiple models in parallel.

* Conditionally import `MultiLock`.
* Use async train directly in scikit learn interface.
* Use `worker_client` when available.
2021-03-16 14:19:41 +08:00
Igor Rukhovich
19a2c54265 Prediction by indices (subsample < 1) (#6683)
* Another implementation of predicting by indices

* Fixed omp parallel_for variable type

* Removed SparsePageView from Updater
2021-03-16 15:08:20 +13:00
Philip Hyunsu Cho
366f3cb9d8 Add use_rmm flag to global configuration (#6656)
* Ensure RMM is 0.18 or later

* Add use_rmm flag to global configuration

* Modify XGBCachingDeviceAllocatorImpl to skip CUB when use_rmm=True

* Update the demo

* [CI] Pin NumPy to 1.19.4, since NumPy 1.19.5 doesn't work with latest Shap
2021-03-09 14:53:05 -08:00
Philip Hyunsu Cho
e4894111ba Update dmlc-core submodule (#6745) 2021-03-07 00:30:26 -08:00
Bobby Wang
49c22c23b4 [jvm-packages] fix early stopping doesn't work even without custom_eval setting (#6738)
* [jvm-packages] fix early stopping doesn't work even without custom_eval setting

* remove debug info

* resolve comment
2021-03-06 20:19:40 -08:00
Philip Hyunsu Cho
5ae7f9944b [CI] Clear R package cache (#6746) 2021-03-06 08:37:16 -08:00
Jiaming Yuan
f20074e826 Check for invalid data. (#6742) 2021-03-04 14:37:20 +08:00
Jiaming Yuan
a9b4a95225 Fix learning rate scheduler with cv. (#6720)
* Expose more methods in cvpack and packed booster.
* Fix cv context in deprecated callbacks.
* Fix document.
2021-02-28 13:57:42 +08:00
kangsheng89
9c8523432a fix relocatable include in CMakeList (#6734) (#6737) 2021-02-27 19:17:29 +08:00
Roffild
1fa6793a4e Tests for regression metrics with weights. (#6729) 2021-02-25 22:08:14 +08:00
Jiaming Yuan
9da2287ab8 [breaking] Save booster feature info in JSON, remove feature name generation. (#6605)
* Save feature info in booster in JSON model.
* [breaking] Remove automatic feature name generation in `DMatrix`.

This PR is to enable reliable feature validation in Python package.
2021-02-25 18:54:16 +08:00
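
A sketch of the round trip this enables, assuming XGBoost >= 1.4; the feature names are illustrative:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((50, 3)), label=rng.random(50),
                     feature_names=["age", "height", "weight"])
booster = xgb.train({}, dtrain, num_boost_round=5)

booster.save_model("model.json")        # feature info travels with the model
loaded = xgb.Booster(model_file="model.json")
print(loaded.feature_names)             # ['age', 'height', 'weight']
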
capybara
b6167cd2ff [dask] Use client to persist collections (#6722)
Co-authored-by: fis <jm.yuan@outlook.com>
2021-02-25 16:40:38 +08:00
Louis Desreumaux
9b530e5697 Improve OpenMP exception handling (#6680) 2021-02-25 13:56:16 +08:00
Jiaming Yuan
c375173dca Support pylint 2.7.0 (#6726) 2021-02-25 12:49:58 +08:00
Honza Sterba
17913713b5 [jvm] Add ability to load booster direct from byte array (#6655)
* Add ability to load booster direct from byte array

* fix compiler error

* move InputStream to byte-buffer conversion

- move it from Booster to XGBoost facade class
2021-02-23 11:28:27 -08:00
Jiaming Yuan
872e559b91 Use inplace predict for sklearn. (#6718)
* Use inplace predict for sklearn when possible.
2021-02-22 12:27:04 +08:00
Benjamin Lehmann
25077564ab Fixes small typo in sklearn documentation (#6717)
Replaces "dowm" with "down" on parameter n_jobs
2021-02-20 07:36:06 +08:00
Jiaming Yuan
bdedaab8d1 Fix pylint. (#6714) 2021-02-19 11:53:27 +08:00
ShvetsKS
9f15b9e322 Optimize CPU prediction (#6696)
Co-authored-by: Shvets Kirill <kirill.shvets@intel.com>
2021-02-16 14:41:22 +08:00
James Lamb
dc97b5f19f [dask] remove outdated comment (#6699) 2021-02-15 18:49:11 +08:00
Roffild
4c5d2608e0 [python-package] Fix class Booster: feature_types = None (#6705) 2021-02-13 17:50:23 +08:00
ShvetsKS
9a0399e898 Removed unnecessary PredictBatch calls (#6700)
Co-authored-by: Shvets Kirill <kirill.shvets@intel.com>
2021-02-10 20:15:14 +08:00
Ali
9b267a435e Bail out early if libxgboost exists in python setup (#6694)
Skip `copy_tree` when existing build is found.
2021-02-10 10:50:10 +08:00
Jiaming Yuan
e8c5c53e2f Use Predictor for dart. (#6693)
* Use normal predictor for dart booster.
* Implement `inplace_predict` for dart.
* Enable `dart` for dask interface now that it's thread-safe.
* Categorical data should work out of the box for dart now.

The implementation is not very efficient, as it has to pull back the data and
apply the weight for each tree, but it is still a significant improvement over the
previous implementation: we no longer binary search for each sample.

* Fix output prediction shape on dataframe.
2021-02-09 23:30:19 +08:00
Jiaming Yuan
dbf7e9d3cb Remove R cache in github action. (#6695)
The cache stores outdated packages with wrong linkage.  Right now there's no way to clear the cache.
2021-02-09 18:53:20 +08:00
Jiaming Yuan
1335db6113 [dask] Improve documents. (#6687)
* Add tag for versions.
* use autoclass in sphinx build.
Made some class methods to be private to avoid exporting documents.
2021-02-09 09:20:58 +08:00
Jiaming Yuan
5d48d40d9a Fix DMatrix slice with feature types. (#6689) 2021-02-09 08:13:51 +08:00
Jiaming Yuan
218a5fb6dd Simplify Span checks. (#6685)
* Stop printing out message.
* Remove R specialization.

The printed message is not really useful anyway: without a reproducible example
there's no way to fix the issue, and with one we can always obtain this information
with a debugger.  Removing the `printf` call avoids creating the context in the kernel.
2021-02-09 08:12:58 +08:00
Jiaming Yuan
4656b09d5d [breaking] Add prediction function for DMatrix and use inplace predict for dask. (#6668)
* Add a new API function for predicting on `DMatrix`.  This function aligns
with the rest of the `XGBoosterPredictFrom*` functions in the semantics of its
arguments.
* Purge `ntree_limit` from libxgboost, use iteration instead.
* [dask] Use `inplace_predict` by default for dask sklearn models.
* [dask] Run prediction shape inference on worker instead of client.

The breaking change is in the Python sklearn `apply` function, I made it to be
consistent with other prediction functions where `best_iteration` is used by
default.
2021-02-08 18:26:32 +08:00
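
A minimal sketch of the replacement API, assuming XGBoost >= 1.4:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((100, 4)), rng.random(100)
booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=100)

# ntree_limit is replaced by iteration_range: use only the first
# 50 boosting rounds (the half-open interval [0, 50)).
preds = booster.predict(xgb.DMatrix(X), iteration_range=(0, 50))
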
Jiaming Yuan
dbb5208a0a Use __array_interface__ for creating DMatrix from CSR. (#6675)
* Use __array_interface__ for creating DMatrix from CSR.
* Add configuration.
2021-02-05 21:09:47 +08:00
Jiaming Yuan
1e949110da Use generic dispatching routine for array interface. (#6672) 2021-02-05 09:23:38 +08:00
Jiaming Yuan
a4101de678 Fix divide by 0 in feature importance when no split is found. (#6676) 2021-02-05 03:39:30 +08:00
Jiaming Yuan
72892cc80d [dask] Disable gblinear and dart. (#6665) 2021-02-04 09:13:09 +08:00
Jiaming Yuan
9d62b14591 Fix document. [skip ci] (#6669) 2021-02-02 20:43:31 +08:00
Jiaming Yuan
411592a347 Enhance inplace prediction. (#6653)
* Accept array interface for csr and array.
* Accept an optional proxy dmatrix for metainfo.

This constructs an explicit `_ProxyDMatrix` type in Python.

* Remove unused doc.
* Add strict output.
2021-02-02 11:41:46 +08:00
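
For reference, a small sketch of inplace prediction on a NumPy array (the proxy-DMatrix path for metainfo is not shown):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((100, 4)), rng.random(100)
booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=10)

# inplace_predict skips DMatrix construction entirely and accepts
# numpy arrays (and scipy CSR) directly.
preds = booster.inplace_predict(X)
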
Jiaming Yuan
87ab1ad607 [dask] Accept Future of model for prediction. (#6650)
This PR changes predict and inplace_predict to accept a Future of a model, to avoid sending models to workers repeatedly.

* The document is updated to reflect functionality added in recent changes.
2021-02-02 08:45:52 +08:00
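
A hedged sketch of the pattern, assuming a local dask cluster:

import dask.array as da
from distributed import Client, LocalCluster
import xgboost as xgb

with Client(LocalCluster(n_workers=2)) as client:
    X = da.random.random((1000, 4), chunks=(250, 4))
    y = da.random.random(1000, chunks=250)
    dtrain = xgb.dask.DaskDMatrix(client, X, y)
    output = xgb.dask.train(client, {"tree_method": "hist"}, dtrain)

    # Scatter the booster once; the resulting Future can then stand in
    # for the model, so it isn't re-sent to workers on every call.
    future_model = client.scatter(output["booster"], broadcast=True)
    preds = xgb.dask.predict(client, future_model, X).compute()
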
Jiaming Yuan
a9ec0ea6da Align device id in predict transform with predictor. (#6662) 2021-02-02 08:33:29 +08:00
Jiaming Yuan
d8ec7aad5a [dask] Add a 1 line sample to infer output shape. (#6645)
* [dask] Use a 1 line sample to infer output shape.

This is for inferring shape with direct prediction (without DaskDMatrix).
There are a few things that requires known output shape before carrying out
actual prediction, including dask meta data, output dataframe columns.

* Infer output shape based on local prediction.
* Remove set param in the predict function, as it's neither thread safe nor
necessary now that we let dask decide the parallelism.
* Simplify prediction on `DaskDMatrix`.
2021-01-30 18:55:50 +08:00
Jiaming Yuan
c3c8e66fc9 Make prediction functions thread safe. (#6648) 2021-01-28 23:29:43 +08:00
Philip Hyunsu Cho
0f2ed21a9d [Breaking] Change default evaluation metric for binary:logitraw objective to logloss (#6647) 2021-01-29 00:12:12 +09:00
Jiaming Yuan
d167892c7e [dask] Ensure model can be pickled. (#6651) 2021-01-28 21:47:57 +08:00
Philip Hyunsu Cho
0ad6e18a2a [CI] Do not mix up stashed executable built for ARM and x86_64 platforms (#6646) 2021-01-27 23:57:26 +09:00
Philip Hyunsu Cho
55ee2bd77f [CI] Add ARM64 test to Jenkins pipeline (#6643)
* Add ARM64 test to Jenkins pipeline

* Check for bundled libgomp

* Use a separate test suite for ARM64

* Ensure that x86 jobs don't run on ARM workers
2021-01-27 21:51:17 +09:00
Jiaming Yuan
1b70a323a7 Improve string view to reduce string allocation. (#6644) 2021-01-27 19:08:52 +08:00
Jiaming Yuan
bc08e0c9d1 Remove experimental_json_serialization from tests. (#6640) 2021-01-27 17:44:49 +08:00
Jiaming Yuan
8968ca7c0a Disable s390x and arm64 tests on travis for now. (#6641) 2021-01-27 16:21:40 +08:00
Jiaming Yuan
d19a0ddacf Move sdist test to action. (#6635)
* Move x86 linux and osx sdist test to action.

* Add Windows.
2021-01-26 08:25:59 +08:00
Jiaming Yuan
740d042255 Add base_margin for evaluation dataset. (#6591)
* Add base margin to evaluation datasets.
* Unify the code base for evaluation matrices.
2021-01-26 02:11:02 +08:00
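
A sketch using the sklearn wrapper, assuming the base_margin_eval_set parameter introduced around this change; the margins here are dummy zeros:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((100, 4)), rng.integers(0, 2, 100)
Xv, yv = rng.random((50, 4)), rng.integers(0, 2, 50)
m_train, m_valid = np.zeros(100), np.zeros(50)  # hypothetical prior margins

clf = xgb.XGBClassifier(n_estimators=10)
clf.fit(X, y, base_margin=m_train,
        eval_set=[(Xv, yv)], base_margin_eval_set=[m_valid])
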
Jiaming Yuan
4bf23c2391 Specify shape in prediction contrib and interaction. (#6614) 2021-01-26 02:08:22 +08:00
Jiaming Yuan
8942c98054 Define metainfo and other parameters for all DMatrix interfaces. (#6601)
This PR ensures all DMatrix types have a common interface.

* Fix logic in avoiding duplicated DMatrix in sklearn.
* Check for consistency between DMatrix types.
* Add doc for bounds.
2021-01-25 16:06:06 +08:00
Jiaming Yuan
561809200a Fix document for tree methods. (#6633) 2021-01-25 15:52:08 +08:00
Adam Pocock
fec66d033a [jvm-packages] JVM library loader extensions (#6630)
* [java] extending the library loader to use both OS and CPU architecture.

* Simplifying create_jni.py's architecture detection.

* Tidying up the architecture detection in create_jni.py
2021-01-25 15:51:39 +08:00
Jiaming Yuan
a275f40267 [dask] Rework base margin test. (#6627) 2021-01-22 17:49:13 +08:00
Jiaming Yuan
7bc56fa0ed Use simple print in tracker print function. (#6609) 2021-01-21 21:15:43 +08:00
Jiaming Yuan
26982f9fce Skip unused CMake argument in setup.py (#6611) 2021-01-21 17:25:33 +08:00
Jiaming Yuan
f0fd7629ae Add helper script and doc for releasing pip package. (#6613)
* Fix `long_description_content_type`.
2021-01-21 14:46:52 +08:00
Bobby Wang
9d2832a3a3 fix potential TaskFailedListener's callback won't be called (#6612)
There is a possibility that onJobStart of TaskFailedListener won't be called if
the job is submitted before the other thread calls addSparkListener.

Details can be found at https://github.com/dmlc/xgboost/pull/6019#issuecomment-760937628
2021-01-21 14:20:32 +08:00
Jiaming Yuan
f8bb678c67 Exclude dmlc test on github action. (#6625) 2021-01-20 18:50:20 +08:00
Jiaming Yuan
d6d72de339 Revert ntree limit fix (#6616)
The old (pre-fix) best_ntree_limit ignored the num_class parameter, which is incorrect. Previously we worked around it in the C++ layer to avoid possible breaking changes in other language bindings, but the Python interpretation stayed incorrect. The PR fixed that in Python to account for num_class but didn't remove the old workaround, so the tree calculation in the predictor became incorrect; see PredictBatch in CPUPredictor.
2021-01-19 23:51:16 +08:00
Jiaming Yuan
d132933550 Remove type check for solaris. (#6610) 2021-01-16 02:58:19 +08:00
Jiaming Yuan
d356b7a071 Restore unknown data support. (#6595) 2021-01-14 04:51:16 +08:00
Jiaming Yuan
89a00a5866 [dask] Random forest estimators (#6602) 2021-01-13 20:59:20 +08:00
Jiaming Yuan
0027220aa0 [breaking] Remove duplicated predict functions, Fix attributes IO. (#6593)
* Fix attributes not being restored.
* Rename all `data` to `X`. [breaking]
2021-01-13 16:56:49 +08:00
ShvetsKS
7f4d3a91b9 Multiclass prediction caching for CPU Hist (#6550)
Co-authored-by: Kirill Shvets <kirill.shvets@intel.com>
2021-01-13 04:42:07 +08:00
Jiaming Yuan
03cd087da1 Remove duplicated DMatrix. (#6592) 2021-01-12 09:36:56 +08:00
Jiaming Yuan
c709f2aaaf Fix evaluation result for XGBRanker. (#6594)
* Remove duplicated code, which fixes typo `evals_result` -> `evals_result_`.
2021-01-12 09:36:41 +08:00
Jiaming Yuan
f2f7dd87b8 Use view for SparsePage exclusively. (#6590) 2021-01-11 18:04:55 +08:00
Jiaming Yuan
78f2cd83d7 Suppress hypothesis health check for dask client. (#6589) 2021-01-11 14:11:57 +08:00
Jiaming Yuan
80065d571e [dask] Add DaskXGBRanker (#6576)
* Initial support for distributed LTR using dask.

* Support `qid` in libxgboost.
* Refactor `predict` and `n_features_in_`, `best_[score/iteration/ntree_limit]`
  to avoid duplicated code.
* Define `DaskXGBRanker`.

The dask ranker doesn't support the group structure; instead it uses query IDs and
converts them to the group pointer internally.
2021-01-08 18:35:09 +08:00
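
A minimal sketch, assuming a local cluster and that each query's rows stay within one partition:

import dask.array as da
from distributed import Client, LocalCluster
import xgboost as xgb

with Client(LocalCluster(n_workers=2)) as client:
    X = da.random.random((1000, 10), chunks=(100, 10))
    y = da.random.randint(0, 5, size=1000, chunks=100)
    qid = da.arange(1000, chunks=100) // 20   # 50 queries of 20 rows each

    ranker = xgb.dask.DaskXGBRanker()
    ranker.fit(X, y, qid=qid)
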
Jiaming Yuan
96d3d32265 [dask] Add shap tests. (#6575) 2021-01-08 14:59:27 +08:00
Jiaming Yuan
7c9dcbedbc Fix best_ntree_limit for dart and gblinear. (#6579) 2021-01-08 10:05:39 +08:00
Jiaming Yuan
f5ff90cd87 Support _estimator_type. (#6582)
* Use `_estimator_type`.

For more info, see: https://scikit-learn.org/stable/developers/develop.html#estimator-types

* Model trained from dask can be loaded by single node skl interface.
2021-01-08 10:01:16 +08:00
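
For example, scikit-learn's own utilities now recognise the wrapper types:

from sklearn.base import is_classifier, is_regressor
import xgboost as xgb

assert is_classifier(xgb.XGBClassifier())
assert is_regressor(xgb.XGBRegressor())
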
Jiaming Yuan
8747885a8b Support Solaris. (#6578)
* Add system header.

* Remove use of TR1 on Solaris

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2021-01-07 09:05:05 +08:00
TP Boudreau
b2246ae7ef Update dmlc-core submodule and conform to new API (#6431)
* Update dmlc-core submodule and conform to new API

* Remove unsupported parameter from method signature

* Update dmlc-core submodule and conform to new API

* Update dmlc-core

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2021-01-05 16:12:22 -08:00
Jiaming Yuan
60cfd14349 [dask, sklearn] Fix predict proba. (#6566)
* For sklearn:
  - Handles user defined objective function.
  - Handles `softmax`.

* For dask:
  - Use the implementation from sklearn; the previous implementation didn't perform any extra handling.
2021-01-05 08:29:06 +08:00
Jiaming Yuan
516a93d25c Fix best_ntree_limit. (#6569) 2021-01-03 05:58:54 +08:00
James Lamb
195a41cef1 [python-package] remove unnecessary files to reduce sdist size (fixes #6560) (#6565) 2021-01-02 15:56:39 +08:00
Jiaming Yuan
2b049b32e9 Document various tree methods. (#6564) 2021-01-02 15:40:46 +08:00
Philip Hyunsu Cho
fa13992264 Calling XGBModel.fit() should clear the Booster by default (#6562)
* Calling XGBModel.fit() should clear the Booster by default

* Document the behavior of fit()

* Allow sklearn object to be passed in directly via xgb_model argument

* Fix lint
2020-12-31 11:02:08 -08:00
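
A sketch of the new default and the opt-in continuation path:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((100, 4)), rng.integers(0, 2, 100)

clf = xgb.XGBClassifier(n_estimators=10)
clf.fit(X, y)                      # fit() now always starts from scratch

# To continue training instead, pass the previous model explicitly;
# a fitted sklearn estimator may also be passed directly.
clf2 = xgb.XGBClassifier(n_estimators=10)
clf2.fit(X, y, xgb_model=clf.get_booster())
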
Jiaming Yuan
5e9e525223 Remove warnings in tests. (#6554) 2020-12-31 13:41:18 +08:00
James Lamb
8ad22bf4e7 Add credentials to .gitignore (#6559) 2020-12-30 15:58:14 -08:00
Jiaming Yuan
de8fd852a5 [dask] Add type hints. (#6519)
* Add validate_features.
* Show type hints in doc.

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-12-29 19:41:02 +08:00
Jiaming Yuan
610ee632cc [Breaking] Rename data to X in predict_proba. (#6555)
The new scikit-learn version uses keyword arguments, and `X` is the predefined
keyword.

* Use pip to install latest Python graphviz on Windows CI.
2020-12-28 21:36:03 +08:00
Jiaming Yuan
cb207a355d Add script for generating release tarball. (#6544) 2020-12-23 16:08:10 +08:00
Gorkem Ozkaya
2231940d1d Clip small positive values in gamma-nloglik (#6537)
For the `gamma-nloglik` eval metric, small positive values in the labels were causing `NaN`s in the outputs, as reported here: https://github.com/dmlc/xgboost/issues/5349. This adds clipping on them, similar to what is done in other metrics like `poisson-nloglik` and `logloss`.
2020-12-22 03:11:40 +08:00
MBSMachineLearning
95cbfad990 "featue_map" typo changed to "feature_map" (#6540) 2020-12-21 22:11:11 +08:00
Philip Hyunsu Cho
fbb980d9d3 Expand ~ into the home directory on Linux and MacOS (#6531) 2020-12-19 23:35:13 -08:00
Philip Hyunsu Cho
cd0821500c Add Saturn Cloud Dask XGBoost tutorial to Awesome XGBoost [skip ci] (#6532) 2020-12-19 15:57:05 -08:00
Philip Hyunsu Cho
380f6f4ab8 Remove cupy.array_equal, since it's not compatible with cuPy 7.8 (#6528) 2020-12-18 09:16:52 -08:00
Jiaming Yuan
ca3da55de4 Support early stopping with training continuation, correct num boosted rounds. (#6506)
* Implement early stopping with training continuation.

* Add new C API for obtaining boosted rounds.

* Fix off by 1 in `save_best`.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-12-17 19:59:19 +08:00
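
A sketch combining the pieces, assuming XGBoost >= 1.4 (the EarlyStopping callback plus the new num_boosted_rounds accessor):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((200, 4)), rng.integers(0, 2, 200)
dtrain = xgb.DMatrix(X[:150], label=y[:150])
dvalid = xgb.DMatrix(X[150:], label=y[150:])

booster = xgb.train(
    {"objective": "binary:logistic"}, dtrain, num_boost_round=100,
    evals=[(dvalid, "valid")],
    callbacks=[xgb.callback.EarlyStopping(rounds=5, save_best=True)],
)
# New C API exposed through Python: the actual number of trained rounds.
print(booster.num_boosted_rounds())
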
Philip Hyunsu Cho
125b3c0f2d Lazy import cuDF and Dask (#6522)
* Lazy import cuDF

* Lazy import Dask

Co-authored-by: PSEUDOTENSOR / Jonathan McKinney <pseudotensor@gmail.com>

* Fix lint

Co-authored-by: PSEUDOTENSOR / Jonathan McKinney <pseudotensor@gmail.com>
2020-12-17 01:51:35 -08:00
Philip Hyunsu Cho
ad1a527709 Enable loading model from <1.0.0 trained with objective='binary:logitraw' (#6517)
* Enable loading model from <1.0.0 trained with objective='binary:logitraw'

* Add binary:logitraw in model compatibility testing suite

* Feedback from @trivialfis: Override ProbToMargin() for LogisticRaw

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2020-12-16 16:53:46 -08:00
Philip Hyunsu Cho
bf6cfe3b99 [Breaking] Upgrade cuDF and RMM to 0.18 nightlies; require RMM 0.18+ for RMM plugin (#6510)
* [CI] Upgrade cuDF and RMM to 0.18 nightlies

* Modify RMM plugin to be compatible with RMM 0.18

* Update src/common/device_helpers.cuh

Co-authored-by: Mark Harris <mharris@nvidia.com>

Co-authored-by: Mark Harris <mharris@nvidia.com>
2020-12-16 10:07:52 -08:00
Jiaming Yuan
d8d684538c [CI] Split up main.yml, add mypy. (#6515) 2020-12-17 00:15:44 +08:00
Jiaming Yuan
c5876277a8 Drop saving binary format for memory snapshot. (#6513) 2020-12-17 00:14:57 +08:00
Jiaming Yuan
0e97d97d50 Fix merge conflict. (#6512) 2020-12-16 18:02:25 +08:00
hzy001
749364f25d Update the C API comments (#6457)
Signed-off-by: Hao Ziyu <haoziyu@qiyi.com>

Co-authored-by: Hao Ziyu <haoziyu@qiyi.com>
2020-12-16 14:56:13 +08:00
Jiaming Yuan
347f593169 Accept numpy array for DMatrix slice index. (#6368) 2020-12-16 14:42:52 +08:00
Jiaming Yuan
ef4a0e0aac Fix DMatrix feature names/types IO. (#6507)
* Fix feature names/types IO

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-12-16 14:24:27 +08:00
Jiaming Yuan
886486a519 Support categorical data in GPU weighted sketching. (#6508) 2020-12-16 14:23:28 +08:00
Igor Rukhovich
5c8ccf4455 Improved InitSampling function speed by 2.12 times (#6410)
* Improved InitSampling function speed by 2.12 times

* Added explicit conversion
2020-12-15 20:59:24 -08:00
Jiaming Yuan
3c3f026ec1 Move metric configuration into booster. (#6504) 2020-12-16 05:35:04 +08:00
Jiaming Yuan
d45c0d843b Show partition status in dask error. (#6366) 2020-12-16 02:58:21 +08:00
James Lamb
1e2c3ade9e [doc] [dask] Add example on early stopping with Dask (#6501)
Co-authored-by: fis <jm.yuan@outlook.com>
2020-12-15 22:23:23 +08:00
ShvetsKS
8139849ab6 Fix handling of print period in EvaluationMonitor (#6499)
Co-authored-by: Kirill Shvets <kirill.shvets@intel.com>
2020-12-15 19:20:19 +08:00
Philip Hyunsu Cho
9a194273cd Add conda-forge badge (#6502) 2020-12-14 18:58:03 -08:00
Philip Hyunsu Cho
aac4eba2ef Add release note for 1.3.0 in NEWS.md (#6495)
* Add release note for 1.3.0

* Address reviewer's comment

* Fix silly mistake

* Apply suggestions from code review

Co-authored-by: John Zedlewski <904524+JohnZed@users.noreply.github.com>

Co-authored-by: John Zedlewski <904524+JohnZed@users.noreply.github.com>
2020-12-14 14:42:30 -08:00
James Lamb
afc4567268 [doc] [dask] fix partitioning in Dask example (#6389) 2020-12-14 18:37:49 +08:00
Jiaming Yuan
a30461cf87 [dask] Support all parameters in regressor and classifier. (#6471)
* Add eval_metric.
* Add callback.
* Add feature weights.
* Add custom objective.
2020-12-14 07:35:56 +08:00
Philip Hyunsu Cho
c31e3efa7c Pass correct split_type to GPU predictor (#6491)
* Pass correct split_type to GPU predictor

* Add a test
2020-12-11 19:30:00 -08:00
Philip Hyunsu Cho
0d483cb7c1 Bump version to 1.4.0 snapshot in master (#6486) 2020-12-10 07:38:08 -08:00
Philip Hyunsu Cho
b8044e6136 [CI] Use manylinux2010_x86_64 container to vendor libgomp (#6485) 2020-12-10 07:37:15 -08:00
Jiaming Yuan
0ffaf0f5be Fix dask ip resolution. (#6475)
This adopts the solution used in dask/dask-xgboost#40, which employs `get_host_ip` from the dmlc-core tracker.
2020-12-07 16:36:23 -08:00
Jiaming Yuan
47b86180f6 Don't validate feature when number of rows is 0. (#6472) 2020-12-07 18:08:51 +08:00
Philip Hyunsu Cho
55bdf084cb [Doc] Document that AUC and AUCPR are for binary classification/ranking [skip ci] (#5899) 2020-12-06 22:17:20 -08:00
Jiaming Yuan
703c2d06aa Fix global config default value. (#6470) 2020-12-06 06:15:33 +08:00
Jiaming Yuan
d6386e45e8 Fix filtering callable objects in skl xgb param. (#6466)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-12-05 17:20:36 +08:00
Philip Hyunsu Cho
05e5563c2c [CI] Fix CentOS 6 Docker images (#6467) 2020-12-04 21:33:11 -08:00
Philip Hyunsu Cho
84b726ef53 Vendor libgomp in the manylinux Python wheel (#6461)
* Vendor libgomp in the manylinux2014_aarch64 wheel

* Use vault repo, since CentOS 6 has reached End-of-Life on Nov 30

* Vendor libgomp in the manylinux2010_x86_64 wheel

* Run verification step inside the container
2020-12-03 19:55:32 -08:00
Philip Hyunsu Cho
c103ec51d8 Enforce row-major order in cuPy array (#6459) 2020-12-03 18:29:10 -08:00
Philip Hyunsu Cho
4f70e14031 Fix docstring of config.py to use correct versionadded (#6458) 2020-12-03 10:41:53 -08:00
Philip Hyunsu Cho
fb56da5e8b Add global configuration (#6414)
* Add management functions for global configuration: XGBSetGlobalConfig(), XGBGetGlobalConfig().
* Add Python interface: set_config(), get_config(), and config_context().
* Add unit tests for Python
* Add R interface: xgb.set.config(), xgb.get.config()
* Add unit tests for R

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2020-12-03 00:05:18 -08:00
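
A minimal sketch of the Python interface added here:

import xgboost as xgb

xgb.set_config(verbosity=0)   # set a global option
print(xgb.get_config())       # inspect all global options

# Scoped override, restored on exit:
with xgb.config_context(verbosity=2):
    print(xgb.get_config()["verbosity"])  # 2
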
hzy001
c2ba4fb957 Fix broken links. (#6455)
Co-authored-by: Hao Ziyu <haoziyu@qiyi.com>
Co-authored-by: fis <jm.yuan@outlook.com>
2020-12-02 17:39:12 +08:00
Jiaming Yuan
927c316aeb Fix period in evaluation monitor. (#6441) 2020-11-29 03:18:33 +08:00
Jiaming Yuan
f4ff1c53fd Fix CLI ranking demo. (#6439)
Save model at final round.
2020-11-29 03:12:06 +08:00
Honza Sterba
b0036b339b Optionaly fail when gpu_id is set to invalid value (#6342) 2020-11-28 15:14:12 +08:00
ShvetsKS
956beead70 Thread local memory allocation for BuildHist (#6358)
* thread mem locality

* fix apply

* cleanup

* fix lint

* fix tests

* simple try

* fix

* fix

* apply comments

* fix comments

* fix

* apply simple comment

Co-authored-by: ShvetsKS <kirill.shvets@intel.com>
2020-11-25 17:50:12 +03:00
Philip Hyunsu Cho
4dbbeb635d [CI] Upgrade cuDF and RMM to 0.17 nightlies (#6434) 2020-11-24 13:21:41 -08:00
Philip Hyunsu Cho
0c85b90671 [R] Fix R package installation via CMake (#6423) 2020-11-22 05:49:09 -08:00
883 changed files with 74155 additions and 30623 deletions

.clang-format (new file, 214 lines)

@@ -0,0 +1,214 @@
---
Language: Cpp
# BasedOnStyle: Google
AccessModifierOffset: -1
AlignAfterOpenBracket: Align
AlignArrayOfStructures: None
AlignConsecutiveMacros: None
AlignConsecutiveAssignments: None
AlignConsecutiveBitFields: None
AlignConsecutiveDeclarations: None
AlignEscapedNewlines: Left
AlignOperands: Align
AlignTrailingComments: true
AllowAllArgumentsOnNextLine: true
AllowAllParametersOfDeclarationOnNextLine: true
AllowShortEnumsOnASingleLine: true
AllowShortBlocksOnASingleLine: Never
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: All
AllowShortLambdasOnASingleLine: All
AllowShortIfStatementsOnASingleLine: WithoutElse
AllowShortLoopsOnASingleLine: true
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: Yes
AttributeMacros:
  - __capability
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
  AfterCaseLabel: false
  AfterClass: false
  AfterControlStatement: Never
  AfterEnum: false
  AfterFunction: false
  AfterNamespace: false
  AfterObjCDeclaration: false
  AfterStruct: false
  AfterUnion: false
  AfterExternBlock: false
  BeforeCatch: false
  BeforeElse: false
  BeforeLambdaBody: false
  BeforeWhile: false
  IndentBraces: false
  SplitEmptyFunction: true
  SplitEmptyRecord: true
  SplitEmptyNamespace: true
BreakBeforeBinaryOperators: None
BreakBeforeConceptDeclarations: true
BreakBeforeBraces: Attach
BreakBeforeInheritanceComma: false
BreakInheritanceList: BeforeColon
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: false
BreakConstructorInitializers: BeforeColon
BreakAfterJavaFieldAnnotations: false
BreakStringLiterals: true
ColumnLimit: 100
CommentPragmas: '^ IWYU pragma:'
QualifierAlignment: Leave
CompactNamespaces: false
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DeriveLineEnding: true
DerivePointerAlignment: true
DisableFormat: false
EmptyLineAfterAccessModifier: Never
EmptyLineBeforeAccessModifier: LogicalBlock
ExperimentalAutoDetectBinPacking: false
PackConstructorInitializers: NextLine
BasedOnStyle: ''
ConstructorInitializerAllOnOneLineOrOnePerLine: false
AllowAllConstructorInitializersOnNextLine: true
FixNamespaceComments: true
ForEachMacros:
  - foreach
  - Q_FOREACH
  - BOOST_FOREACH
IfMacros:
  - KJ_IF_MAYBE
IncludeBlocks: Regroup
IncludeCategories:
  - Regex: '^<ext/.*\.h>'
    Priority: 2
    SortPriority: 0
    CaseSensitive: false
  - Regex: '^<.*\.h>'
    Priority: 1
    SortPriority: 0
    CaseSensitive: false
  - Regex: '^<.*'
    Priority: 2
    SortPriority: 0
    CaseSensitive: false
  - Regex: '.*'
    Priority: 3
    SortPriority: 0
    CaseSensitive: false
IncludeIsMainRegex: '([-_](test|unittest))?$'
IncludeIsMainSourceRegex: ''
IndentAccessModifiers: false
IndentCaseLabels: true
IndentCaseBlocks: false
IndentGotoLabels: true
IndentPPDirectives: None
IndentExternBlock: AfterExternBlock
IndentRequires: false
IndentWidth: 2
IndentWrappedFunctionNames: false
InsertTrailingCommas: None
JavaScriptQuotes: Leave
JavaScriptWrapImports: true
KeepEmptyLinesAtTheStartOfBlocks: false
LambdaBodyIndentation: Signature
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCBinPackProtocolList: Never
ObjCBlockIndentWidth: 2
ObjCBreakBeforeNestedBlockParam: true
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: true
PenaltyBreakAssignment: 2
PenaltyBreakBeforeFirstCallParameter: 1
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakString: 1000
PenaltyBreakTemplateDeclaration: 10
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PenaltyIndentedWhitespace: 0
PointerAlignment: Left
PPIndentWidth: -1
RawStringFormats:
  - Language: Cpp
    Delimiters:
      - cc
      - CC
      - cpp
      - Cpp
      - CPP
      - 'c++'
      - 'C++'
    CanonicalDelimiter: ''
    BasedOnStyle: google
  - Language: TextProto
    Delimiters:
      - pb
      - PB
      - proto
      - PROTO
    EnclosingFunctions:
      - EqualsProto
      - EquivToProto
      - PARSE_PARTIAL_TEXT_PROTO
      - PARSE_TEST_PROTO
      - PARSE_TEXT_PROTO
      - ParseTextOrDie
      - ParseTextProtoOrDie
      - ParseTestProto
      - ParsePartialTestProto
    CanonicalDelimiter: pb
    BasedOnStyle: google
ReferenceAlignment: Pointer
ReflowComments: true
ShortNamespaceLines: 1
SortIncludes: CaseSensitive
SortJavaStaticImport: Before
SortUsingDeclarations: true
SpaceAfterCStyleCast: false
SpaceAfterLogicalNot: false
SpaceAfterTemplateKeyword: true
SpaceBeforeAssignmentOperators: true
SpaceBeforeCaseColon: false
SpaceBeforeCpp11BracedList: false
SpaceBeforeCtorInitializerColon: true
SpaceBeforeInheritanceColon: true
SpaceBeforeParens: ControlStatements
SpaceAroundPointerQualifiers: Default
SpaceBeforeRangeBasedForLoopColon: true
SpaceInEmptyBlock: false
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInAngles: Never
SpacesInConditionalStatement: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInLineCommentPrefix:
  Minimum: 1
  Maximum: -1
SpacesInParentheses: false
SpacesInSquareBrackets: false
SpaceBeforeSquareBrackets: false
BitFieldColonSpacing: Both
Standard: Auto
StatementAttributeLikeMacros:
  - Q_EMIT
StatementMacros:
  - Q_UNUSED
  - QT_REQUIRE_VERSION
TabWidth: 8
UseCRLF: false
UseTab: Never
WhitespaceSensitiveMacros:
  - STRINGIZE
  - PP_STRINGIZE
  - BOOST_PP_STRINGIZE
  - NS_SWIFT_NAME
  - CF_SWIFT_NAME
...

.github/workflows/jvm_tests.yml (new file, 77 lines)

@@ -0,0 +1,77 @@
name: XGBoost-JVM-Tests

on: [push, pull_request]

permissions:
  contents: read  # to fetch code (actions/checkout)

jobs:
  test-with-jvm:
    name: Test JVM on OS ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [windows-latest, ubuntu-latest, macos-11]
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: 'true'
      - uses: actions/setup-python@v2
        with:
          python-version: '3.8'
          architecture: 'x64'
      - uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Install Python packages
        run: |
          python -m pip install wheel setuptools
          python -m pip install awscli
      - name: Cache Maven packages
        uses: actions/cache@v2
        with:
          path: ~/.m2
          key: ${{ runner.os }}-m2-${{ hashFiles('./jvm-packages/pom.xml') }}
          restore-keys: ${{ runner.os }}-m2
      - name: Test XGBoost4J
        run: |
          cd jvm-packages
          mvn test -B -pl :xgboost4j_2.12
      - name: Extract branch name
        shell: bash
        run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
        id: extract_branch
        if: |
          (github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
          matrix.os == 'windows-latest'
      - name: Publish artifact xgboost4j.dll to S3
        run: |
          cd lib/
          Rename-Item -Path xgboost4j.dll -NewName xgboost4j_${{ github.sha }}.dll
          dir
          python -m awscli s3 cp xgboost4j_${{ github.sha }}.dll s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/ --acl public-read
        if: |
          (github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')) &&
          matrix.os == 'windows-latest'
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
      - name: Test XGBoost4J-Spark
        run: |
          rm -rfv build/
          cd jvm-packages
          mvn -B test
        if: matrix.os == 'ubuntu-latest'  # Distributed training doesn't work on Windows
        env:
          RABIT_MOCK: ON

View File

@@ -6,8 +6,8 @@ name: XGBoost-CI
# events but only for the master branch # events but only for the master branch
on: [push, pull_request] on: [push, pull_request]
env: permissions:
R_PACKAGES: c('XML', 'igraph', 'data.table', 'magrittr', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic') contents: read # to fetch code (actions/checkout)
# A workflow run is made up of one or more jobs that can run sequentially or in parallel # A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs: jobs:
@@ -17,24 +17,25 @@ jobs:
strategy: strategy:
fail-fast: false fail-fast: false
matrix: matrix:
os: [macos-10.15] os: [macos-11]
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v2
with: with:
submodules: 'true' submodules: 'true'
- name: Install system packages - name: Install system packages
run: | run: |
brew install lz4 ninja libomp brew install ninja libomp
- name: Build gtest binary - name: Build gtest binary
run: | run: |
mkdir build mkdir build
cd build cd build
cmake .. -DGOOGLE_TEST=ON -DUSE_OPENMP=ON -DUSE_DMLC_GTEST=ON -DPLUGIN_LZ4=ON -DPLUGIN_DENSE_PARSER=ON -GNinja cmake .. -DGOOGLE_TEST=ON -DUSE_OPENMP=ON -DUSE_DMLC_GTEST=ON -DPLUGIN_DENSE_PARSER=ON -GNinja
ninja -v ninja -v
- name: Run gtest binary - name: Run gtest binary
run: | run: |
cd build cd build
ctest --extra-verbose ./testxgboost
ctest -R TestXGBoostCLI --extra-verbose
gtest-cpu-nonomp: gtest-cpu-nonomp:
name: Test Google C++ unittest (CPU Non-OMP) name: Test Google C++ unittest (CPU Non-OMP)
@@ -74,283 +75,86 @@ jobs:
- uses: actions/checkout@v2 - uses: actions/checkout@v2
with: with:
submodules: 'true' submodules: 'true'
- name: Install system packages - uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
run: |
sudo apt-get install -y --no-install-recommends ninja-build
- uses: conda-incubator/setup-miniconda@v2
with: with:
auto-update-conda: true cache-downloads: true
python-version: ${{ matrix.python-version }} cache-env: true
environment-name: cpp_test
environment-file: tests/ci_build/conda_env/cpp_test.yml
- name: Display Conda env - name: Display Conda env
shell: bash -l {0} shell: bash -l {0}
run: | run: |
conda info conda info
conda list conda list
- name: Build and install XGBoost
- name: Build and install XGBoost static library
shell: bash -l {0} shell: bash -l {0}
run: | run: |
mkdir build mkdir build
cd build cd build
cmake .. -DBUILD_STATIC_LIB=ON -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja cmake .. -DBUILD_STATIC_LIB=ON -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja
ninja -v install ninja -v install
- name: Build and run C API demo cd -
- name: Build and run C API demo with static
shell: bash -l {0} shell: bash -l {0}
run: | run: |
pushd .
cd demo/c-api/ cd demo/c-api/
mkdir build mkdir build
cd build cd build
cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
ninja -v ninja -v
ctest
cd .. cd ..
./build/api-demo rm -rf ./build
popd
test-with-jvm: - name: Build and install XGBoost shared library
name: Test JVM on OS ${{ matrix.os }} shell: bash -l {0}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [windows-latest, ubuntu-latest]
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: actions/setup-java@v1
with:
java-version: 1.8
- name: Cache Maven packages
uses: actions/cache@v2
with:
path: ~/.m2
key: ${{ runner.os }}-m2-${{ hashFiles('./jvm-packages/pom.xml') }}
restore-keys: ${{ runner.os }}-m2
- name: Test XGBoost4J
run: | run: |
cd jvm-packages cd build
mvn test -B -pl :xgboost4j_2.12 cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja
ninja -v install
- name: Test XGBoost4J-Spark cd -
- name: Build and run C API demo with shared
shell: bash -l {0}
run: | run: |
rm -rfv build/ pushd .
cd jvm-packages cd demo/c-api/
mvn -B test mkdir build
if: matrix.os == 'ubuntu-latest' # Distributed training doesn't work on Windows cd build
env: cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
RABIT_MOCK: ON ninja -v
ctest
popd
./tests/ci_build/verify_link.sh ./demo/c-api/build/basic/api-demo
./tests/ci_build/verify_link.sh ./demo/c-api/build/external-memory/external-memory-demo
lint: lint:
runs-on: ubuntu-latest runs-on: ubuntu-latest
name: Code linting for Python and C++ name: Code linting for C++
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v2
with: with:
submodules: 'true' submodules: 'true'
- uses: actions/setup-python@v2 - uses: actions/setup-python@v2
with: with:
python-version: '3.7' python-version: "3.8"
architecture: 'x64' architecture: 'x64'
- name: Install Python packages - name: Install Python packages
run: | run: |
python -m pip install wheel setuptools python -m pip install wheel setuptools cpplint pylint
python -m pip install pylint cpplint numpy scipy scikit-learn
- name: Run lint - name: Run lint
run: | run: |
make lint LINT_LANG=cpp make lint
doxygen: python3 dmlc-core/scripts/lint.py --exclude_path \
runs-on: ubuntu-latest python-package/xgboost/dmlc-core \
name: Generate C/C++ API doc using Doxygen python-package/xgboost/include \
steps: python-package/xgboost/lib \
- uses: actions/checkout@v2 python-package/xgboost/rabit \
with: python-package/xgboost/src \
submodules: 'true' --pylint-rc python-package/.pylintrc \
- uses: actions/setup-python@v2 xgboost \
with: cpp \
python-version: '3.7' include src python-package
architecture: 'x64'
- name: Install system packages
run: |
sudo apt-get install -y --no-install-recommends doxygen graphviz ninja-build
python -m pip install wheel setuptools
python -m pip install awscli
- name: Run Doxygen
run: |
mkdir build
cd build
cmake .. -DBUILD_C_DOC=ON -GNinja
ninja -v doc_doxygen
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Publish
run: |
cd build/
tar cvjf ${{ steps.extract_branch.outputs.branch }}.tar.bz2 doc_doxygen/
python -m awscli s3 cp ./${{ steps.extract_branch.outputs.branch }}.tar.bz2 s3://xgboost-docs/ --acl public-read
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
sphinx:
runs-on: ubuntu-latest
name: Build docs using Sphinx
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: actions/setup-python@v2
with:
python-version: '3.7'
architecture: 'x64'
- name: Install system packages
run: |
sudo apt-get install -y --no-install-recommends graphviz
python -m pip install wheel setuptools
python -m pip install -r doc/requirements.txt
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Run Sphinx
run: |
make -C doc html
env:
SPHINX_GIT_BRANCH: ${{ steps.extract_branch.outputs.branch }}
lintr:
runs-on: ${{ matrix.config.os }}
name: Run R linters on OS ${{ matrix.config.os }}, R ${{ matrix.config.r }}, Compiler ${{ matrix.config.compiler }}, Build ${{ matrix.config.build }}
strategy:
matrix:
config:
- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'autotools'}
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
RSPM: ${{ matrix.config.rspm }}
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@master
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
uses: actions/cache@v2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-1-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-2-
- name: Install dependencies
shell: Rscript {0}
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
- name: Run lintr
run: |
cd R-package
R.exe CMD INSTALL .
Rscript.exe tests/helper_scripts/run_lint.R
test-with-R:
runs-on: ${{ matrix.config.os }}
name: Test R on OS ${{ matrix.config.os }}, R ${{ matrix.config.r }}, Compiler ${{ matrix.config.compiler }}, Build ${{ matrix.config.build }}
strategy:
fail-fast: false
matrix:
config:
- {os: windows-2016, r: 'release', compiler: 'mingw', build: 'autotools'}
- {os: windows-2016, r: 'release', compiler: 'msvc', build: 'cmake'}
- {os: windows-2016, r: 'release', compiler: 'mingw', build: 'cmake'}
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
RSPM: ${{ matrix.config.rspm }}
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@master
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
uses: actions/cache@v2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-1-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-2-
- name: Install dependencies
shell: Rscript {0}
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
- uses: actions/setup-python@v2
with:
python-version: '3.7'
architecture: 'x64'
- name: Test R
run: |
python tests/ci_build/test_r_package.py --compiler="${{ matrix.config.compiler }}" --build-tool="${{ matrix.config.build }}"
test-R-CRAN:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
config:
- {r: 'release'}
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@master
with:
r-version: ${{ matrix.config.r }}
- uses: r-lib/actions/setup-tinytex@master
- name: Cache R packages
uses: actions/cache@v2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-1-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-2-
- name: Install system packages
run: |
sudo apt-get update && sudo apt-get install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev
- name: Install dependencies
shell: Rscript {0}
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
- name: Check R Package
run: |
# Print stacktrace upon success or failure
make Rcheck || tests/ci_build/print_r_stacktrace.sh fail
tests/ci_build/print_r_stacktrace.sh success

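A note on the `make Rcheck || ...` step above: the `||` branch means the step's exit status comes from the stacktrace script rather than from `make Rcheck` itself, so the script must exit non-zero on the `fail` path for the job to be marked failed. A minimal sketch that makes the failure propagation explicit (assuming the same fail/success argument convention as above):

    make Rcheck || { tests/ci_build/print_r_stacktrace.sh fail; exit 1; }
    tests/ci_build/print_r_stacktrace.sh success
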
.github/workflows/python_tests.yml (new file)

@@ -0,0 +1,210 @@
name: XGBoost-Python-Tests
on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
python-mypy-lint:
runs-on: ubuntu-latest
name: Type and format checks for the Python package
strategy:
matrix:
os: [ubuntu-latest]
python-version: ["3.8"]
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: conda-incubator/setup-miniconda@v2
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
activate-environment: python_lint
environment-file: tests/ci_build/conda_env/python_lint.yml
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Run mypy
shell: bash -l {0}
run: |
python tests/ci_build/lint_python.py --format=0 --type-check=1 --pylint=0
- name: Run formatter
shell: bash -l {0}
run: |
python tests/ci_build/lint_python.py --format=1 --type-check=0 --pylint=0
- name: Run pylint
shell: bash -l {0}
run: |
python tests/ci_build/lint_python.py --format=0 --type-check=0 --pylint=1
python-sdist-test-on-Linux:
# Mismatched glibcxx version between system and conda forge.
runs-on: ${{ matrix.os }}
name: Test installing XGBoost Python source package on ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
cache-downloads: true
cache-env: false
environment-name: sdist_test
environment-file: tests/ci_build/conda_env/sdist_test.yml
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Build and install XGBoost
shell: bash -l {0}
run: |
cd python-package
python --version
python setup.py sdist
pip install -v ./dist/xgboost-*.tar.gz
cd ..
python -c 'import xgboost'
python-sdist-test:
# Use system toolchain instead of conda toolchain for macos and windows.
# MacOS has linker error if clang++ from conda-forge is used
runs-on: ${{ matrix.os }}
name: Test installing XGBoost Python source package on ${{ matrix.os }}
strategy:
matrix:
os: [macos-11, windows-latest]
python-version: ["3.8"]
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Install osx system dependencies
if: matrix.os == 'macos-11'
run: |
brew install ninja libomp
- uses: conda-incubator/setup-miniconda@35d1405e78aa3f784fe3ce9a2eb378d5eeb62169 # v2.1.1
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
activate-environment: test
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Build and install XGBoost
shell: bash -l {0}
run: |
cd python-package
python --version
python setup.py sdist
pip install -v ./dist/xgboost-*.tar.gz
cd ..
python -c 'import xgboost'
python-tests-on-macos:
name: Test XGBoost Python package on ${{ matrix.config.os }}
runs-on: ${{ matrix.config.os }}
timeout-minutes: 60
strategy:
matrix:
config:
- {os: macos-11}
steps:
- uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v2.5.0
with:
submodules: 'true'
- uses: mamba-org/provision-with-micromamba@f347426e5745fe3dfc13ec5baf20496990d0281f # v14
with:
cache-downloads: true
cache-env: false
environment-name: macos_test
environment-file: tests/ci_build/conda_env/macos_cpu_test.yml
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Build XGBoost on macos
shell: bash -l {0}
run: |
brew install ninja
mkdir build
cd build
# Set prefix, to use OpenMP library from Conda env
# See https://github.com/dmlc/xgboost/issues/7039#issuecomment-1025038228
# to learn why we don't use libomp from Homebrew.
cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
ninja
- name: Install Python package
shell: bash -l {0}
run: |
cd python-package
python --version
python setup.py install
- name: Test Python package
shell: bash -l {0}
run: |
pytest -s -v -rxXs --durations=0 ./tests/python
python-tests-on-win:
name: Test XGBoost Python package on ${{ matrix.config.os }}
runs-on: ${{ matrix.config.os }}
strategy:
matrix:
config:
- {os: windows-latest, python-version: '3.8'}
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: conda-incubator/setup-miniconda@v2
with:
auto-update-conda: true
python-version: ${{ matrix.config.python-version }}
activate-environment: win64_env
environment-file: tests/ci_build/conda_env/win64_cpu_test.yml
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Build XGBoost on Windows
shell: bash -l {0}
run: |
mkdir build_msvc
cd build_msvc
cmake .. -G"Visual Studio 17 2022" -DCMAKE_CONFIGURATION_TYPES="Release" -A x64 -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON
cmake --build . --config Release --parallel $(nproc)
- name: Install Python package
shell: bash -l {0}
run: |
cd python-package
python --version
python setup.py bdist_wheel --universal
pip install ./dist/*.whl
- name: Test Python package
shell: bash -l {0}
run: |
pytest -s -v -rxXs --durations=0 ./tests/python

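The three lint steps above drive a single helper script with different flags. To reproduce them locally, the same script can be invoked directly; whether the flags may be combined into one call is an assumption (the workflow runs them separately):

    python tests/ci_build/lint_python.py --format=1 --type-check=1 --pylint=1
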
.github/workflows/python_wheels.yml (new file)

@@ -0,0 +1,41 @@
name: XGBoost-Python-Wheels
on: [push, pull_request]
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
python-wheels:
name: Build wheel for ${{ matrix.platform_id }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
include:
- os: macos-latest
platform_id: macosx_x86_64
- os: macos-latest
platform_id: macosx_arm64
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: "3.8"
- name: Build wheels
run: bash tests/ci_build/build_python_wheels.sh ${{ matrix.platform_id }} ${{ github.sha }}
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Upload Python wheel
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
run: |
python -m pip install awscli
python -m awscli s3 cp wheelhouse/*.whl s3://xgboost-nightly-builds/${{ steps.extract_branch.outputs.branch }}/ --acl public-read
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}

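The wheel job above can be exercised locally with the same script and arguments it uses in CI, e.g. for the macosx_arm64 entry of the matrix (a sketch; the platform id and commit hash are the two positional arguments shown in the workflow, and local toolchain requirements are not covered here):

    bash tests/ci_build/build_python_wheels.sh macosx_arm64 $(git rev-parse HEAD)
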
.github/workflows/r_nold.yml

@@ -8,7 +8,10 @@ on:
   types: [created]
 env:
-  R_PACKAGES: c('XML', 'igraph', 'data.table', 'magrittr', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic')
+  R_PACKAGES: c('XML', 'igraph', 'data.table', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic')
+permissions:
+  contents: read # to fetch code (actions/checkout)
 jobs:
   test-R-noLD:

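The `magrittr` entry dropped from `R_PACKAGES` in this hunk matches the R package changes further down: the R-package/NAMESPACE diff below swaps the `magrittr` import for `jsonlite`, so the pipe package is no longer needed as a test dependency.
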
.github/workflows/r_tests.yml (new file)

@@ -0,0 +1,164 @@
name: XGBoost-R-Tests
on: [push, pull_request]
env:
R_PACKAGES: c('XML', 'data.table', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic')
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
_R_CHECK_EXAMPLE_TIMING_CPU_TO_ELAPSED_THRESHOLD_: 2.5
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
lintr:
runs-on: ${{ matrix.config.os }}
name: Run R linters on OS ${{ matrix.config.os }}, R ${{ matrix.config.r }}, Compiler ${{ matrix.config.compiler }}, Build ${{ matrix.config.build }}
strategy:
matrix:
config:
- {os: ubuntu-latest, r: 'release'}
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
RSPM: ${{ matrix.config.rspm }}
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@v2
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
uses: actions/cache@v2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
- name: Install dependencies
shell: Rscript {0}
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
- name: Install igraph on Windows
shell: Rscript {0}
if: matrix.config.os == 'windows-latest'
run: |
install.packages('igraph', type='binary')
- name: Run lintr
run: |
cd R-package
R CMD INSTALL .
# Disable lintr errors for now: https://github.com/dmlc/xgboost/issues/8012
Rscript tests/helper_scripts/run_lint.R || true
test-with-R:
runs-on: ${{ matrix.config.os }}
name: Test R on OS ${{ matrix.config.os }}, R ${{ matrix.config.r }}, Compiler ${{ matrix.config.compiler }}, Build ${{ matrix.config.build }}
strategy:
fail-fast: false
matrix:
config:
- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'autotools'}
- {os: windows-latest, r: 'release', compiler: 'msvc', build: 'cmake'}
- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'cmake'}
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
_R_CHECK_EXAMPLE_TIMING_CPU_TO_ELAPSED_THRESHOLD_: 2.5
RSPM: ${{ matrix.config.rspm }}
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@v2
with:
r-version: ${{ matrix.config.r }}
- name: Cache R packages
uses: actions/cache@v2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
- name: Install dependencies
shell: Rscript {0}
if: matrix.config.os != 'windows-latest'
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
- name: Install binary dependencies
shell: Rscript {0}
if: matrix.config.os == 'windows-latest'
run: |
install.packages(${{ env.R_PACKAGES }},
type = 'binary',
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
- uses: actions/setup-python@v2
with:
python-version: "3.8"
architecture: 'x64'
- name: Test R
run: |
python tests/ci_build/test_r_package.py --compiler='${{ matrix.config.compiler }}' --build-tool='${{ matrix.config.build }}'
test-R-CRAN:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
config:
- {r: 'release'}
env:
_R_CHECK_EXAMPLE_TIMING_CPU_TO_ELAPSED_THRESHOLD_: 2.5
MAKE: "make -j$(nproc)"
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@v2
with:
r-version: ${{ matrix.config.r }}
- uses: r-lib/actions/setup-tinytex@v2
- name: Install system packages
run: |
sudo apt-get update && sudo apt-get install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev pandoc pandoc-citeproc libglpk-dev
- name: Cache R packages
uses: actions/cache@v2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-5-${{ hashFiles('R-package/DESCRIPTION') }}
- name: Install dependencies
shell: Rscript {0}
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
install.packages('igraph', repos = 'http://cloud.r-project.org', dependencies = c('Depends', 'Imports', 'LinkingTo'))
- name: Check R Package
run: |
# Print stacktrace upon success or failure
make Rcheck || tests/ci_build/print_r_stacktrace.sh fail
tests/ci_build/print_r_stacktrace.sh success

.github/workflows/scorecards.yml (new file)

@@ -0,0 +1,54 @@
name: Scorecards supply-chain security
on:
# Only the default branch is supported.
branch_protection_rule:
schedule:
- cron: '17 2 * * 6'
push:
branches: [ "master" ]
# Declare default permissions as read only.
permissions: read-all
jobs:
analysis:
name: Scorecards analysis
runs-on: ubuntu-latest
permissions:
# Needed to upload the results to code-scanning dashboard.
security-events: write
# Used to receive a badge.
id-token: write
steps:
- name: "Checkout code"
uses: actions/checkout@a12a3943b4bdde767164f792f33f40b04645d846 # tag=v3.0.0
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@865b4092859256271290c77adbd10a43f4779972 # tag=v2.0.3
with:
results_file: results.sarif
results_format: sarif
# Publish the results for public repositories to enable scorecard badges. For more details, see
# https://github.com/ossf/scorecard-action#publishing-results.
# For private repositories, `publish_results` will automatically be set to `false`, regardless
# of the value entered here.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@6673cd052c4cd6fcf4b4e6e60ea986c889389535 # tag=v3.0.0
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@5f532563584d71fdef14ee64d17bafb34f751ce5 # tag=v1.0.26
with:
sarif_file: results.sarif

.gitignore

@@ -52,6 +52,8 @@ Debug
 R-package.Rproj
 *.cache*
 .mypy_cache/
+doxygen
 # java
 java/xgboost4j/target
 java/xgboost4j/tmp
@@ -63,6 +65,7 @@ nb-configuration*
 # Eclipse
 .project
 .cproject
+.classpath
 .pydevproject
 .settings/
 build
@@ -96,8 +99,11 @@ metastore_db
 R-package/src/Makevars
 *.lib
-# Visual Studio Code
-/.vscode/
+# Visual Studio
+.vs/
+CMakeSettings.json
+*.ilk
+*.pdb
 # IntelliJ/CLion
 .idea
@@ -115,3 +121,21 @@ dask-worker-space/
 # Jupyter notebook checkpoints
 .ipynb_checkpoints/
+# credentials and key material
+config
+credentials
+credentials.csv
+*.env
+*.pem
+*.pub
+*.rdp
+*_rsa
+# Visual Studio code + extensions
+.vscode
+.metals
+.bloop
+# hypothesis python tests
+.hypothesis

.gitmodules

@@ -1,6 +1,7 @@
 [submodule "dmlc-core"]
 	path = dmlc-core
 	url = https://github.com/dmlc/dmlc-core
+	branch = main
 [submodule "cub"]
 	path = cub
 	url = https://github.com/NVlabs/cub

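The added `branch = main` only affects remote-tracking updates of the submodule; a plain `git submodule update --init` still checks out the commit pinned in the superproject. For example:

    # pulls the tip of dmlc-core's `main` branch rather than the pinned commit
    git submodule update --remote dmlc-core
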
.readthedocs.yaml (new file)

@@ -0,0 +1,35 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
submodules:
include: all
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.8"
apt_packages:
- graphviz
- cmake
- g++
- doxygen
- ninja-build
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: doc/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
- pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: doc/requirements.txt
system_packages: true

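A rough local equivalent of the Read the Docs build configured above, reusing the same requirements file and Sphinx target as the doc workflow earlier in this diff (Doxygen, graphviz and CMake must already be installed, since RTD gets them via apt_packages):

    python -m pip install -r doc/requirements.txt
    make -C doc html
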
.travis.yml

@@ -4,60 +4,27 @@ dist: bionic
 env:
   global:
-    - secure: "PR16i9F8QtNwn99C5NDp8nptAS+97xwDtXEJJfEiEVhxPaaRkOp0MPWhogCaK0Eclxk1TqkgWbdXFknwGycX620AzZWa/A1K3gAs+GrpzqhnPMuoBJ0Z9qxXTbSJvCyvMbYwVrjaxc/zWqdMU8waWz8A7iqKGKs/SqbQ3rO6v7c="
-    - secure: "dAGAjBokqm/0nVoLMofQni/fWIBcYSmdq4XvCBX1ZAMDsWnuOfz/4XCY6h2lEI1rVHZQ+UdZkc9PioOHGPZh5BnvE49/xVVWr9c4/61lrDOlkD01ZjSAeoV0fAZq+93V/wPl4QV+MM+Sem9hNNzFSbN5VsQLAiWCSapWsLdKzqA="
+    - secure: "lqkL5SCM/CBwgVb1GWoOngpojsa0zCSGcvF0O3/45rBT1EpNYtQ4LRJ1+XcHi126vdfGoim/8i7AQhn5eOgmZI8yAPBeoUZ5zSrejD3RUpXr2rXocsvRRP25Z4mIuAGHD9VAHtvTdhBZRVV818W02pYduSzAeaY61q/lU3xmWsE="
+    - secure: "mzms6X8uvdhRWxkPBMwx+mDl3d+V1kUpZa7UgjT+dr4rvZMzvKtjKp/O0JZZVogdgZjUZf444B98/7AvWdSkGdkfz2QdmhWmXzNPfNuHtmfCYMdijsgFIGLuD3GviFL/rBiM2vgn32T3QqFiEJiC5StparnnXimPTc9TpXQRq5c="
 jobs:
   include:
-    - os: linux
-      arch: amd64
-      env: TASK=python_sdist_test
-    - os: linux
-      arch: arm64
-      env: TASK=python_sdist_test
-    - os: linux
-      arch: arm64
-      env: TASK=python_test
-      services:
-        - docker
-    - os: osx
-      arch: amd64
-      osx_image: xcode10.2
-      env: TASK=python_test
-    - os: osx
-      arch: amd64
-      osx_image: xcode10.2
-      env: TASK=python_sdist_test
-    - os: osx
-      arch: amd64
-      osx_image: xcode10.2
-      env: TASK=java_test
     - os: linux
       arch: s390x
       env: TASK=s390x_test
 # dependent brew packages
+# the dependencies from homebrew is installed manually from setup script due to outdated image from travis.
 addons:
   homebrew:
-    packages:
-      - cmake
-      - libomp
-      - graphviz
-      - openssl
-      - libgit2
-      - lz4
-      - wget
-      - r
-    update: true
+    update: false
   apt:
     packages:
-      - snapd
       - unzip
 before_install:
   - source tests/travis/travis_setup_env.sh
-  - if [ "${TASK}" != "python_sdist_test" ]; then export PYTHONPATH=${PYTHONPATH}:${PWD}/python-package; fi
-  - echo "MAVEN_OPTS='-Xmx2g -XX:MaxPermSize=1024m -XX:ReservedCodeCacheSize=512m -Dorg.slf4j.simpleLogger.defaultLogLevel=error'" > ~/.mavenrc
 install:
   - source tests/travis/setup.sh

CMakeLists.txt

@@ -1,9 +1,10 @@
-cmake_minimum_required(VERSION 3.13)
+cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
-project(xgboost LANGUAGES CXX C VERSION 1.3.3)
+project(xgboost LANGUAGES CXX C VERSION 1.7.6)
 include(cmake/Utils.cmake)
 list(APPEND CMAKE_MODULE_PATH "${xgboost_SOURCE_DIR}/cmake/modules")
 cmake_policy(SET CMP0022 NEW)
 cmake_policy(SET CMP0079 NEW)
+cmake_policy(SET CMP0076 NEW)
 set(CMAKE_POLICY_DEFAULT_CMP0063 NEW)
 cmake_policy(SET CMP0063 NEW)
@@ -28,6 +29,7 @@ set_default_configuration_release()
 option(BUILD_C_DOC "Build documentation for C APIs using Doxygen." OFF)
 option(USE_OPENMP "Build with OpenMP support." ON)
 option(BUILD_STATIC_LIB "Build static library" OFF)
+option(FORCE_SHARED_CRT "Build with dynamic CRT on Windows (/MD)" OFF)
 option(RABIT_BUILD_MPI "Build MPI" OFF)
 ## Bindings
 option(JVM_BINDINGS "Build JVM bindings" OFF)
@@ -49,6 +51,7 @@ option(HIDE_CXX_SYMBOLS "Build shared library and hide all C++ symbols" OFF)
 option(USE_CUDA "Build with GPU acceleration" OFF)
 option(USE_NCCL "Build with NCCL to enable distributed GPU support." OFF)
 option(BUILD_WITH_SHARED_NCCL "Build with shared NCCL library." OFF)
+option(BUILD_WITH_CUDA_CUB "Build with cub in CUDA installation" OFF)
 set(GPU_COMPUTE_VER "" CACHE STRING
   "Semicolon separated list of compute versions to be built against, e.g. '35;61'")
 ## Copied From dmlc
@@ -62,9 +65,9 @@ set(ENABLED_SANITIZERS "address" "leak" CACHE STRING
   "Semicolon separated list of sanitizer names. E.g 'address;leak'. Supported sanitizers are
 address, leak, undefined and thread.")
 ## Plugins
-option(PLUGIN_LZ4 "Build lz4 plugin" OFF)
 option(PLUGIN_DENSE_PARSER "Build dense parser plugin" OFF)
 option(PLUGIN_RMM "Build with RAPIDS Memory Manager (RMM)" OFF)
+option(PLUGIN_FEDERATED "Build with Federated Learning" OFF)
 ## TODO: 1. Add check if DPC++ compiler is used for building
 option(PLUGIN_UPDATER_ONEAPI "DPC++ updater" OFF)
 option(ADD_PKGCONFIG "Add xgboost.pc into system." ON)
@@ -92,6 +95,9 @@ endif (R_LIB AND GOOGLE_TEST)
 if (USE_AVX)
   message(SEND_ERROR "The option 'USE_AVX' is deprecated as experimental AVX features have been removed from XGBoost.")
 endif (USE_AVX)
+if (PLUGIN_LZ4)
+  message(SEND_ERROR "The option 'PLUGIN_LZ4' is removed from XGBoost.")
+endif (PLUGIN_LZ4)
 if (PLUGIN_RMM AND NOT (USE_CUDA))
   message(SEND_ERROR "`PLUGIN_RMM` must be enabled with `USE_CUDA` flag.")
 endif (PLUGIN_RMM AND NOT (USE_CUDA))
@@ -109,6 +115,23 @@ endif (ENABLE_ALL_WARNINGS)
 if (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
   message(SEND_ERROR "Cannot build a static library libxgboost.a when R or JVM packages are enabled.")
 endif (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
+if (PLUGIN_RMM AND (NOT BUILD_WITH_CUDA_CUB))
+  message(SEND_ERROR "Cannot build with RMM using cub submodule.")
+endif (PLUGIN_RMM AND (NOT BUILD_WITH_CUDA_CUB))
+if (PLUGIN_FEDERATED)
+  if (CMAKE_CROSSCOMPILING)
+    message(SEND_ERROR "Cannot cross compile with federated learning support")
+  endif ()
+  if (BUILD_STATIC_LIB)
+    message(SEND_ERROR "Cannot build static lib with federated learning support")
+  endif ()
+  if (R_LIB OR JVM_BINDINGS)
+    message(SEND_ERROR "Cannot enable federated learning support when R or JVM packages are enabled.")
+  endif ()
+  if (WIN32)
+    message(SEND_ERROR "Federated learning not supported for Windows platform")
+  endif ()
+endif ()
 #-- Sanitizer
 if (USE_SANITIZER)
@@ -117,18 +140,22 @@
 endif (USE_SANITIZER)
 if (USE_CUDA)
-  SET(USE_OPENMP ON CACHE BOOL "CUDA requires OpenMP" FORCE)
+  set(USE_OPENMP ON CACHE BOOL "CUDA requires OpenMP" FORCE)
   # `export CXX=' is ignored by CMake CUDA.
   set(CMAKE_CUDA_HOST_COMPILER ${CMAKE_CXX_COMPILER})
   message(STATUS "Configured CUDA host compiler: ${CMAKE_CUDA_HOST_COMPILER}")
   enable_language(CUDA)
-  if (${CMAKE_CUDA_COMPILER_VERSION} VERSION_LESS 10.0)
-    message(FATAL_ERROR "CUDA version must be at least 10.0!")
+  if (${CMAKE_CUDA_COMPILER_VERSION} VERSION_LESS 11.0)
+    message(FATAL_ERROR "CUDA version must be at least 11.0!")
   endif()
   set(GEN_CODE "")
   format_gencode_flags("${GPU_COMPUTE_VER}" GEN_CODE)
   add_subdirectory(${PROJECT_SOURCE_DIR}/gputreeshap)
+  if ((${CMAKE_CUDA_COMPILER_VERSION} VERSION_GREATER_EQUAL 11.4) AND (NOT BUILD_WITH_CUDA_CUB))
+    set(BUILD_WITH_CUDA_CUB ON)
+  endif ()
 endif (USE_CUDA)
 if (FORCE_COLORED_OUTPUT AND (CMAKE_GENERATOR STREQUAL "Ninja") AND
@@ -141,34 +168,54 @@ find_package(Threads REQUIRED)
 if (USE_OPENMP)
   if (APPLE)
-    # Require CMake 3.16+ on Mac OSX, as previous versions of CMake had trouble locating
-    # OpenMP on Mac. See https://github.com/dmlc/xgboost/pull/5146#issuecomment-568312706
-    cmake_minimum_required(VERSION 3.16)
-  endif (APPLE)
-  find_package(OpenMP REQUIRED)
+    find_package(OpenMP)
+    if (NOT OpenMP_FOUND)
+      # Try again with extra path info; required for libomp 15+ from Homebrew
+      execute_process(COMMAND brew --prefix libomp
+                      OUTPUT_VARIABLE HOMEBREW_LIBOMP_PREFIX
+                      OUTPUT_STRIP_TRAILING_WHITESPACE)
+      set(OpenMP_C_FLAGS
+        "-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include")
+      set(OpenMP_CXX_FLAGS
+        "-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include")
+      set(OpenMP_C_LIB_NAMES omp)
+      set(OpenMP_CXX_LIB_NAMES omp)
+      set(OpenMP_omp_LIBRARY ${HOMEBREW_LIBOMP_PREFIX}/lib/libomp.dylib)
+      find_package(OpenMP REQUIRED)
+    endif ()
+  else ()
+    find_package(OpenMP REQUIRED)
+  endif ()
 endif (USE_OPENMP)
+#Add for IBM i
+if (${CMAKE_SYSTEM_NAME} MATCHES "OS400")
+  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -pthread")
+  set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> -X64 qc <TARGET> <OBJECTS>")
+endif()
+if (USE_NCCL)
+  find_package(Nccl REQUIRED)
+endif (USE_NCCL)
 # dmlc-core
 msvc_use_static_runtime()
+if (FORCE_SHARED_CRT)
+  set(DMLC_FORCE_SHARED_CRT ON)
+endif ()
 add_subdirectory(${xgboost_SOURCE_DIR}/dmlc-core)
-set_target_properties(dmlc PROPERTIES
-  CXX_STANDARD 14
-  CXX_STANDARD_REQUIRED ON
-  POSITION_INDEPENDENT_CODE ON)
 if (MSVC)
-  target_compile_options(dmlc PRIVATE
-    -D_CRT_SECURE_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE)
   if (TARGET dmlc_unit_tests)
     target_compile_options(dmlc_unit_tests PRIVATE
       -D_CRT_SECURE_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE)
   endif (TARGET dmlc_unit_tests)
 endif (MSVC)
-if (ENABLE_ALL_WARNINGS)
-  target_compile_options(dmlc PRIVATE -Wall -Wextra)
-endif (ENABLE_ALL_WARNINGS)
 # rabit
 add_subdirectory(rabit)
-if (RABIT_BUILD_MPI)
-  find_package(MPI REQUIRED)
-endif (RABIT_BUILD_MPI)
 # core xgboost
 add_subdirectory(${xgboost_SOURCE_DIR}/src)
@@ -179,9 +226,18 @@ if (R_LIB)
   add_subdirectory(${xgboost_SOURCE_DIR}/R-package)
 endif (R_LIB)
+# This creates its own shared library `xgboost4j'.
+if (JVM_BINDINGS)
+  add_subdirectory(${xgboost_SOURCE_DIR}/jvm-packages)
+endif (JVM_BINDINGS)
 # Plugin
 add_subdirectory(${xgboost_SOURCE_DIR}/plugin)
+if (PLUGIN_RMM)
+  find_package(rmm REQUIRED)
+endif (PLUGIN_RMM)
 #-- library
 if (BUILD_STATIC_LIB)
   add_library(xgboost STATIC)
@@ -189,48 +245,37 @@
 else (BUILD_STATIC_LIB)
   add_library(xgboost SHARED)
 endif (BUILD_STATIC_LIB)
 target_link_libraries(xgboost PRIVATE objxgboost)
-if (USE_CUDA)
-  xgboost_set_cuda_flags(xgboost)
-endif (USE_CUDA)
-#-- Hide all C++ symbols
-if (HIDE_CXX_SYMBOLS)
-  foreach(target objxgboost xgboost dmlc)
-    set_target_properties(${target} PROPERTIES CXX_VISIBILITY_PRESET hidden)
-  endforeach()
-endif (HIDE_CXX_SYMBOLS)
 target_include_directories(xgboost
   INTERFACE
-  $<INSTALL_INTERFACE:${CMAKE_INSTALL_PREFIX}/include>
+  $<INSTALL_INTERFACE:$<INSTALL_PREFIX>/include>
   $<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/include>)
-# This creates its own shared library `xgboost4j'.
-if (JVM_BINDINGS)
-  add_subdirectory(${xgboost_SOURCE_DIR}/jvm-packages)
-endif (JVM_BINDINGS)
 #-- End shared library
 #-- CLI for xgboost
 add_executable(runxgboost ${xgboost_SOURCE_DIR}/src/cli_main.cc)
 target_link_libraries(runxgboost PRIVATE objxgboost)
-if (USE_NVTX)
-  enable_nvtx(runxgboost)
-endif (USE_NVTX)
 target_include_directories(runxgboost
   PRIVATE
   ${xgboost_SOURCE_DIR}/include
   ${xgboost_SOURCE_DIR}/dmlc-core/include
-  ${xgboost_SOURCE_DIR}/rabit/include)
-set_target_properties(
-  runxgboost PROPERTIES
-  OUTPUT_NAME xgboost
-  CXX_STANDARD 14
-  CXX_STANDARD_REQUIRED ON)
+  ${xgboost_SOURCE_DIR}/rabit/include
+)
+set_target_properties(runxgboost PROPERTIES OUTPUT_NAME xgboost)
 #-- End CLI for xgboost
+# Common setup for all targets
+foreach(target xgboost objxgboost dmlc runxgboost)
+  xgboost_target_properties(${target})
+  xgboost_target_link_libraries(${target})
+  xgboost_target_defs(${target})
+endforeach()
+if (JVM_BINDINGS)
+  xgboost_target_properties(xgboost4j)
+  xgboost_target_link_libraries(xgboost4j)
+  xgboost_target_defs(xgboost4j)
+endif (JVM_BINDINGS)
 set_output_directory(runxgboost ${xgboost_SOURCE_DIR})
 set_output_directory(xgboost ${xgboost_SOURCE_DIR}/lib)
 # Ensure these two targets do not build simultaneously, as they produce outputs with conflicting names
@@ -255,6 +300,8 @@ if (BUILD_C_DOC)
   run_doxygen()
 endif (BUILD_C_DOC)
+include(CPack)
 include(GNUInstallDirs)
 # Install all headers. Please note that currently the C++ headers does not form an "API".
 install(DIRECTORY ${xgboost_SOURCE_DIR}/include/xgboost
@@ -295,7 +342,7 @@ write_basic_package_version_file(
   COMPATIBILITY AnyNewerVersion)
 install(
   FILES
-  ${CMAKE_BINARY_DIR}/cmake/xgboost-config.cmake
+  ${CMAKE_CURRENT_BINARY_DIR}/cmake/xgboost-config.cmake
   ${CMAKE_BINARY_DIR}/cmake/xgboost-config-version.cmake
   DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/xgboost)
@@ -303,12 +350,18 @@ install(
 if (GOOGLE_TEST)
   enable_testing()
   # Unittests.
+  add_executable(testxgboost)
+  target_link_libraries(testxgboost PRIVATE objxgboost)
+  xgboost_target_properties(testxgboost)
+  xgboost_target_link_libraries(testxgboost)
+  xgboost_target_defs(testxgboost)
   add_subdirectory(${xgboost_SOURCE_DIR}/tests/cpp)
   add_test(
     NAME TestXGBoostLib
     COMMAND testxgboost
     WORKING_DIRECTORY ${xgboost_BINARY_DIR})
   # CLI tests
   configure_file(
     ${xgboost_SOURCE_DIR}/tests/cli/machine.conf.in

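An example configure line exercising the switches introduced above (a sketch: the flag names come from this diff, but the particular combination is illustrative only):

    cmake -B build -GNinja -DUSE_CUDA=ON -DUSE_NCCL=ON -DBUILD_WITH_CUDA_CUB=ON
    cmake --build build
    # note: with CUDA >= 11.4 the diff auto-enables BUILD_WITH_CUDA_CUB anyway
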
CONTRIBUTORS.md

@@ -10,8 +10,8 @@ The Project Management Committee(PMC) consists group of active committers that m
   - Tianqi is a Ph.D. student working on large-scale machine learning. He is the creator of the project.
 * [Michael Benesty](https://github.com/pommedeterresautee)
   - Michael is a lawyer and data scientist in France. He is the creator of XGBoost interactive analysis module in R.
-* [Yuan Tang](https://github.com/terrytangyuan), Ant Group
-  - Yuan is a software engineer in Ant Group. He contributed mostly in R and Python packages.
+* [Yuan Tang](https://github.com/terrytangyuan), Akuity
+  - Yuan is a founding engineer at Akuity. He contributed mostly in R and Python packages.
 * [Nan Zhu](https://github.com/CodingCat), Uber
   - Nan is a software engineer in Uber. He contributed mostly in JVM packages.
 * [Jiaming Yuan](https://github.com/trivialfis)
@@ -43,7 +43,7 @@ Committers are people who have made substantial contribution to the project and
 Become a Committer
 ------------------
-XGBoost is a opensource project and we are actively looking for new committers who are willing to help maintaining and lead the project.
+XGBoost is a open source project and we are actively looking for new committers who are willing to help maintaining and lead the project.
 Committers comes from contributors who:
 * Made substantial contribution to the project.
 * Willing to spent time on maintaining and lead the project.
@@ -59,7 +59,7 @@ List of Contributors
 * [Skipper Seabold](https://github.com/jseabold)
   - Skipper is the major contributor to the scikit-learn module of XGBoost.
 * [Zygmunt Zając](https://github.com/zygmuntz)
-  - Zygmunt is the master behind the early stopping feature frequently used by kagglers.
+  - Zygmunt is the master behind the early stopping feature frequently used by Kagglers.
 * [Ajinkya Kale](https://github.com/ajkl)
 * [Boliang Chen](https://github.com/cblsjtu)
 * [Yangqing Men](https://github.com/yanqingmen)
@@ -91,7 +91,7 @@ List of Contributors
 * [Henry Gouk](https://github.com/henrygouk)
 * [Pierre de Sahb](https://github.com/pdesahb)
 * [liuliang01](https://github.com/liuliang01)
-  - liuliang01 added support for the qid column for LibSVM input format. This makes ranking task easier in distributed setting.
+  - liuliang01 added support for the qid column for LIBSVM input format. This makes ranking task easier in distributed setting.
 * [Andrew Thia](https://github.com/BlueTea88)
   - Andrew Thia implemented feature interaction constraints
 * [Wei Tian](https://github.com/weitian)

Jenkinsfile (deleted)

@@ -1,391 +0,0 @@
#!/usr/bin/groovy
// -*- mode: groovy -*-
// Jenkins pipeline
// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/
// Command to run command inside a docker container
dockerRun = 'tests/ci_build/ci_build.sh'
// Which CUDA version to use when building reference distribution wheel
ref_cuda_ver = '10.0'
import groovy.transform.Field
@Field
def commit_id // necessary to pass a variable from one stage to another
pipeline {
// Each stage specify its own agent
agent none
environment {
DOCKER_CACHE_ECR_ID = '492475357299'
DOCKER_CACHE_ECR_REGION = 'us-west-2'
}
// Setup common job properties
options {
ansiColor('xterm')
timestamps()
timeout(time: 240, unit: 'MINUTES')
buildDiscarder(logRotator(numToKeepStr: '10'))
preserveStashes()
}
// Build stages
stages {
stage('Jenkins Linux: Initialize') {
agent { label 'job_initializer' }
steps {
script {
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
checkoutSrcs()
commit_id = "${GIT_COMMIT}"
}
sh 'python3 tests/jenkins_get_approval.py'
stash name: 'srcs'
}
}
stage('Jenkins Linux: Build') {
agent none
steps {
script {
parallel ([
'clang-tidy': { ClangTidy() },
'build-cpu': { BuildCPU() },
'build-cpu-rabit-mock': { BuildCPUMock() },
// Build reference, distribution-ready Python wheel with CUDA 10.0
// using CentOS 6 image
'build-gpu-cuda10.0': { BuildCUDA(cuda_version: '10.0') },
// The build-gpu-* builds below use Ubuntu image
'build-gpu-cuda10.1': { BuildCUDA(cuda_version: '10.1') },
'build-gpu-cuda10.2': { BuildCUDA(cuda_version: '10.2', build_rmm: true) },
'build-gpu-cuda11.0': { BuildCUDA(cuda_version: '11.0') },
'build-jvm-packages-gpu-cuda10.0': { BuildJVMPackagesWithCUDA(spark_version: '3.0.0', cuda_version: '10.0') },
'build-jvm-packages': { BuildJVMPackages(spark_version: '3.0.0') },
'build-jvm-doc': { BuildJVMDoc() }
])
}
}
}
stage('Jenkins Linux: Test') {
agent none
steps {
script {
parallel ([
'test-python-cpu': { TestPythonCPU() },
// artifact_cuda_version doesn't apply to RMM tests; RMM tests will always match CUDA version between artifact and host env
'test-python-gpu-cuda10.2': { TestPythonGPU(artifact_cuda_version: '10.0', host_cuda_version: '10.2', test_rmm: true) },
'test-python-gpu-cuda11.0-cross': { TestPythonGPU(artifact_cuda_version: '10.0', host_cuda_version: '11.0') },
'test-python-gpu-cuda11.0': { TestPythonGPU(artifact_cuda_version: '11.0', host_cuda_version: '11.0') },
'test-python-mgpu-cuda10.2': { TestPythonGPU(artifact_cuda_version: '10.0', host_cuda_version: '10.2', multi_gpu: true, test_rmm: true) },
'test-cpp-gpu-cuda10.2': { TestCppGPU(artifact_cuda_version: '10.2', host_cuda_version: '10.2', test_rmm: true) },
'test-cpp-gpu-cuda11.0': { TestCppGPU(artifact_cuda_version: '11.0', host_cuda_version: '11.0') },
'test-jvm-jdk8': { CrossTestJVMwithJDK(jdk_version: '8', spark_version: '3.0.0') },
'test-jvm-jdk11': { CrossTestJVMwithJDK(jdk_version: '11') },
'test-jvm-jdk12': { CrossTestJVMwithJDK(jdk_version: '12') }
])
}
}
}
stage('Jenkins Linux: Deploy') {
agent none
steps {
script {
parallel ([
'deploy-jvm-packages': { DeployJVMPackages(spark_version: '3.0.0') }
])
}
}
}
}
}
// check out source code from git
def checkoutSrcs() {
retry(5) {
try {
timeout(time: 2, unit: 'MINUTES') {
checkout scm
sh 'git submodule update --init'
}
} catch (exc) {
deleteDir()
error "Failed to fetch source codes"
}
}
}
def GetCUDABuildContainerType(cuda_version) {
return (cuda_version == ref_cuda_ver) ? 'gpu_build_centos6' : 'gpu_build'
}
def ClangTidy() {
node('linux && cpu_build') {
unstash name: 'srcs'
echo "Running clang-tidy job..."
def container_type = "clang_tidy"
def docker_binary = "docker"
def dockerArgs = "--build-arg CUDA_VERSION_ARG=10.1"
sh """
${dockerRun} ${container_type} ${docker_binary} ${dockerArgs} python3 tests/ci_build/tidy.py
"""
deleteDir()
}
}
def BuildCPU() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Build CPU"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} rm -fv dmlc-core/include/dmlc/build_config_default.h
# This step is not necessary, but here we include it, to ensure that DMLC_CORE_USE_CMAKE flag is correctly propagated
# We want to make sure that we use the configured header build/dmlc/build_config.h instead of include/dmlc/build_config_default.h.
# See discussion at https://github.com/dmlc/xgboost/issues/5510
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh -DPLUGIN_LZ4=ON -DPLUGIN_DENSE_PARSER=ON
${dockerRun} ${container_type} ${docker_binary} bash -c "cd build && ctest --extra-verbose"
"""
// Sanitizer test
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='-e ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer -e ASAN_OPTIONS=symbolize=1 -e UBSAN_OPTIONS=print_stacktrace=1:log_path=ubsan_error.log --cap-add SYS_PTRACE'"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh -DUSE_SANITIZER=ON -DENABLED_SANITIZERS="address;leak;undefined" \
-DCMAKE_BUILD_TYPE=Debug -DSANITIZER_PATH=/usr/lib/x86_64-linux-gnu/
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} bash -c "cd build && ctest --exclude-regex AllTestsInDMLCUnitTests --extra-verbose"
"""
stash name: 'xgboost_cli', includes: 'xgboost'
deleteDir()
}
}
def BuildCPUMock() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Build CPU with rabit mock"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_mock_cmake.sh
"""
echo 'Stashing rabit C++ test executable (xgboost)...'
stash name: 'xgboost_rabit_tests', includes: 'xgboost'
deleteDir()
}
}
def BuildCUDA(args) {
node('linux && cpu_build') {
unstash name: 'srcs'
echo "Build with CUDA ${args.cuda_version}"
def container_type = GetCUDABuildContainerType(args.cuda_version)
def docker_binary = "docker"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
def arch_flag = ""
if (env.BRANCH_NAME != 'master' && !(env.BRANCH_NAME.startsWith('release'))) {
arch_flag = "-DGPU_COMPUTE_VER=75"
}
def wheel_tag = "manylinux2010_x86_64"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_via_cmake.sh -DUSE_CUDA=ON -DUSE_NCCL=ON -DOPEN_MP:BOOL=ON -DHIDE_CXX_SYMBOLS=ON ${arch_flag}
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "cd python-package && rm -rf dist/* && python setup.py bdist_wheel --universal"
${dockerRun} ${container_type} ${docker_binary} ${docker_args} python tests/ci_build/rename_whl.py python-package/dist/*.whl ${commit_id} ${wheel_tag}
"""
if (args.cuda_version == ref_cuda_ver) {
sh """
${dockerRun} auditwheel_x86_64 ${docker_binary} auditwheel repair --plat ${wheel_tag} python-package/dist/*.whl
mv -v wheelhouse/*.whl python-package/dist/
# Make sure that libgomp.so is vendored in the wheel
${dockerRun} auditwheel_x86_64 ${docker_binary} bash -c "unzip -l python-package/dist/*.whl | grep libgomp || exit -1"
"""
}
echo 'Stashing Python wheel...'
stash name: "xgboost_whl_cuda${args.cuda_version}", includes: 'python-package/dist/*.whl'
if (args.cuda_version == ref_cuda_ver && (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release'))) {
echo 'Uploading Python wheel...'
path = ("${BRANCH_NAME}" == 'master') ? '' : "${BRANCH_NAME}/"
s3Upload bucket: 'xgboost-nightly-builds', path: path, acl: 'PublicRead', workingDir: 'python-package/dist', includePathPattern:'**/*.whl'
}
echo 'Stashing C++ test executable (testxgboost)...'
stash name: "xgboost_cpp_tests_cuda${args.cuda_version}", includes: 'build/testxgboost'
if (args.build_rmm) {
echo "Build with CUDA ${args.cuda_version} and RMM"
container_type = "rmm"
docker_binary = "docker"
docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
sh """
rm -rf build/
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_via_cmake.sh --conda-env=gpu_test -DUSE_CUDA=ON -DUSE_NCCL=ON -DPLUGIN_RMM=ON ${arch_flag}
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "cd python-package && rm -rf dist/* && python setup.py bdist_wheel --universal"
${dockerRun} ${container_type} ${docker_binary} ${docker_args} python tests/ci_build/rename_whl.py python-package/dist/*.whl ${commit_id} manylinux2010_x86_64
"""
echo 'Stashing Python wheel...'
stash name: "xgboost_whl_rmm_cuda${args.cuda_version}", includes: 'python-package/dist/*.whl'
echo 'Stashing C++ test executable (testxgboost)...'
stash name: "xgboost_cpp_tests_rmm_cuda${args.cuda_version}", includes: 'build/testxgboost'
}
deleteDir()
}
}
def BuildJVMPackagesWithCUDA(args) {
node('linux && mgpu') {
unstash name: 'srcs'
echo "Build XGBoost4J-Spark with Spark ${args.spark_version}, CUDA ${args.cuda_version}"
def container_type = "jvm_gpu_build"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
def arch_flag = ""
if (env.BRANCH_NAME != 'master' && !(env.BRANCH_NAME.startsWith('release'))) {
arch_flag = "-DGPU_COMPUTE_VER=75"
}
// Use only 4 CPU cores
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='--cpuset-cpus 0-3'"
sh """
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_jvm_packages.sh ${args.spark_version} -Duse.cuda=ON $arch_flag
"""
echo "Stashing XGBoost4J JAR with CUDA ${args.cuda_version} ..."
stash name: 'xgboost4j_jar_gpu', includes: "jvm-packages/xgboost4j-gpu/target/*.jar,jvm-packages/xgboost4j-spark-gpu/target/*.jar"
deleteDir()
}
}
def BuildJVMPackages(args) {
node('linux && cpu') {
unstash name: 'srcs'
echo "Build XGBoost4J-Spark with Spark ${args.spark_version}"
def container_type = "jvm"
def docker_binary = "docker"
// Use only 4 CPU cores
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='--cpuset-cpus 0-3'"
sh """
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_jvm_packages.sh ${args.spark_version}
"""
echo 'Stashing XGBoost4J JAR...'
stash name: 'xgboost4j_jar', includes: "jvm-packages/xgboost4j/target/*.jar,jvm-packages/xgboost4j-spark/target/*.jar,jvm-packages/xgboost4j-example/target/*.jar"
deleteDir()
}
}
def BuildJVMDoc() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Building JVM doc..."
def container_type = "jvm"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_jvm_doc.sh ${BRANCH_NAME}
"""
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Uploading doc...'
s3Upload file: "jvm-packages/${BRANCH_NAME}.tar.bz2", bucket: 'xgboost-docs', acl: 'PublicRead', path: "${BRANCH_NAME}.tar.bz2"
}
deleteDir()
}
}
def TestPythonCPU() {
node('linux && cpu') {
unstash name: "xgboost_whl_cuda${ref_cuda_ver}"
unstash name: 'srcs'
unstash name: 'xgboost_cli'
echo "Test Python CPU"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/test_python.sh cpu
"""
deleteDir()
}
}
def TestPythonGPU(args) {
def nodeReq = (args.multi_gpu) ? 'linux && mgpu' : 'linux && gpu'
def artifact_cuda_version = (args.artifact_cuda_version) ?: ref_cuda_ver
node(nodeReq) {
unstash name: "xgboost_whl_cuda${artifact_cuda_version}"
unstash name: "xgboost_cpp_tests_cuda${artifact_cuda_version}"
unstash name: 'srcs'
echo "Test Python GPU: CUDA ${args.host_cuda_version}"
def container_type = "gpu"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.host_cuda_version}"
def mgpu_indicator = (args.multi_gpu) ? 'mgpu' : 'gpu'
// Allocate extra space in /dev/shm to enable NCCL
def docker_extra_params = (args.multi_gpu) ? "CI_DOCKER_EXTRA_PARAMS_INIT='--shm-size=4g'" : ''
sh "${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh ${mgpu_indicator}"
if (args.test_rmm) {
sh "rm -rfv build/ python-package/dist/"
unstash name: "xgboost_whl_rmm_cuda${args.host_cuda_version}"
unstash name: "xgboost_cpp_tests_rmm_cuda${args.host_cuda_version}"
sh "${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh ${mgpu_indicator} --use-rmm-pool"
}
deleteDir()
}
}
def TestCppGPU(args) {
def nodeReq = 'linux && mgpu'
def artifact_cuda_version = (args.artifact_cuda_version) ?: ref_cuda_ver
node(nodeReq) {
unstash name: "xgboost_cpp_tests_cuda${artifact_cuda_version}"
unstash name: 'srcs'
echo "Test C++, CUDA ${args.host_cuda_version}"
def container_type = "gpu"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.host_cuda_version}"
sh "${dockerRun} ${container_type} ${docker_binary} ${docker_args} build/testxgboost"
if (args.test_rmm) {
sh "rm -rfv build/"
unstash name: "xgboost_cpp_tests_rmm_cuda${args.host_cuda_version}"
echo "Test C++, CUDA ${args.host_cuda_version} with RMM"
container_type = "rmm"
docker_binary = "nvidia-docker"
docker_args = "--build-arg CUDA_VERSION_ARG=${args.host_cuda_version}"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "source activate gpu_test && build/testxgboost --use-rmm-pool --gtest_filter=-*DeathTest.*"
"""
}
deleteDir()
}
}
def CrossTestJVMwithJDK(args) {
node('linux && cpu') {
unstash name: 'xgboost4j_jar'
unstash name: 'srcs'
if (args.spark_version != null) {
echo "Test XGBoost4J on a machine with JDK ${args.jdk_version}, Spark ${args.spark_version}"
} else {
echo "Test XGBoost4J on a machine with JDK ${args.jdk_version}"
}
def container_type = "jvm_cross"
def docker_binary = "docker"
def spark_arg = (args.spark_version != null) ? "--build-arg SPARK_VERSION=${args.spark_version}" : ""
def docker_args = "--build-arg JDK_VERSION=${args.jdk_version} ${spark_arg}"
// Run integration tests only when spark_version is given
def docker_extra_params = (args.spark_version != null) ? "CI_DOCKER_EXTRA_PARAMS_INIT='-e RUN_INTEGRATION_TEST=1'" : ""
sh """
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_jvm_cross.sh
"""
deleteDir()
}
}
def DeployJVMPackages(args) {
node('linux && cpu') {
unstash name: 'srcs'
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Deploying to xgboost-maven-repo S3 repo...'
sh """
${dockerRun} jvm_gpu_build docker --build-arg CUDA_VERSION_ARG=10.0 tests/ci_build/deploy_jvm_packages.sh ${args.spark_version}
"""
}
deleteDir()
}
}

Jenkinsfile-win64 (deleted)

@@ -1,143 +0,0 @@
#!/usr/bin/groovy
// -*- mode: groovy -*-
/* Jenkins pipeline for Windows AMD64 target */
import groovy.transform.Field
@Field
def commit_id // necessary to pass a variable from one stage to another
pipeline {
agent none
// Setup common job properties
options {
timestamps()
timeout(time: 240, unit: 'MINUTES')
buildDiscarder(logRotator(numToKeepStr: '10'))
preserveStashes()
}
// Build stages
stages {
stage('Jenkins Win64: Initialize') {
agent { label 'job_initializer' }
steps {
script {
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
checkoutSrcs()
commit_id = "${GIT_COMMIT}"
}
sh 'python3 tests/jenkins_get_approval.py'
stash name: 'srcs'
}
}
stage('Jenkins Win64: Build') {
agent none
steps {
script {
parallel ([
'build-win64-cuda10.1': { BuildWin64() }
])
}
}
}
stage('Jenkins Win64: Test') {
agent none
steps {
script {
parallel ([
'test-win64-cuda10.1': { TestWin64() },
])
}
}
}
}
}
// check out source code from git
def checkoutSrcs() {
retry(5) {
try {
timeout(time: 2, unit: 'MINUTES') {
checkout scm
sh 'git submodule update --init'
}
} catch (exc) {
deleteDir()
error "Failed to fetch source codes"
}
}
}
def BuildWin64() {
node('win64 && cuda10_unified') {
unstash name: 'srcs'
echo "Building XGBoost for Windows AMD64 target..."
bat "nvcc --version"
def arch_flag = ""
if (env.BRANCH_NAME != 'master' && !(env.BRANCH_NAME.startsWith('release'))) {
arch_flag = "-DGPU_COMPUTE_VER=75"
}
bat """
mkdir build
cd build
cmake .. -G"Visual Studio 15 2017 Win64" -DUSE_CUDA=ON -DCMAKE_VERBOSE_MAKEFILE=ON -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON ${arch_flag} -DCMAKE_UNITY_BUILD=ON
"""
bat """
cd build
"C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\MSBuild\\15.0\\Bin\\MSBuild.exe" xgboost.sln /m /p:Configuration=Release /nodeReuse:false
"""
bat """
cd python-package
conda activate && python setup.py bdist_wheel --universal && for /R %%i in (dist\\*.whl) DO python ../tests/ci_build/rename_whl.py "%%i" ${commit_id} win_amd64
"""
echo "Insert vcomp140.dll (OpenMP runtime) into the wheel..."
bat """
cd python-package\\dist
COPY /B ..\\..\\tests\\ci_build\\insert_vcomp140.py
conda activate && python insert_vcomp140.py *.whl
"""
echo 'Stashing Python wheel...'
stash name: 'xgboost_whl', includes: 'python-package/dist/*.whl'
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Uploading Python wheel...'
path = ("${BRANCH_NAME}" == 'master') ? '' : "${BRANCH_NAME}/"
s3Upload bucket: 'xgboost-nightly-builds', path: path, acl: 'PublicRead', workingDir: 'python-package/dist', includePathPattern:'**/*.whl'
}
echo 'Stashing C++ test executable (testxgboost)...'
stash name: 'xgboost_cpp_tests', includes: 'build/testxgboost.exe'
stash name: 'xgboost_cli', includes: 'xgboost.exe'
deleteDir()
}
}
def TestWin64() {
node('win64 && cuda10_unified') {
unstash name: 'srcs'
unstash name: 'xgboost_whl'
unstash name: 'xgboost_cli'
unstash name: 'xgboost_cpp_tests'
echo "Test Win64"
bat "nvcc --version"
echo "Running C++ tests..."
bat "build\\testxgboost.exe"
echo "Installing Python dependencies..."
def env_name = 'win64_' + UUID.randomUUID().toString().replaceAll('-', '')
bat "conda env create -n ${env_name} --file=tests/ci_build/conda_env/win64_test.yml"
echo "Installing Python wheel..."
bat """
conda activate ${env_name} && for /R %%i in (python-package\\dist\\*.whl) DO python -m pip install "%%i"
"""
echo "Running Python tests..."
bat "conda activate ${env_name} && python -m pytest -v -s -rxXs --fulltrace tests\\python"
bat """
conda activate ${env_name} && python -m pytest -v -s -rxXs --fulltrace -m "(not slow) and (not mgpu)" tests\\python-gpu
"""
bat "conda env remove --name ${env_name}"
deleteDir()
}
}

Makefile

@@ -86,6 +86,7 @@ cover: check
 )
 endif
 clean:
 	$(RM) -rf build lib bin *~ */*~ */*/*~ */*/*/*~ */*.o */*/*.o */*/*/*.o #xgboost
 	$(RM) -rf build_tests *.gcov tests/cpp/xgboost_test
@@ -122,20 +123,13 @@ Rpack: clean_all
 	cp -r dmlc-core/include xgboost/src/dmlc-core/include
 	cp -r dmlc-core/src xgboost/src/dmlc-core/src
 	cp ./LICENSE xgboost
-# Modify PKGROOT in Makevars.in
 	cat R-package/src/Makevars.in|sed '2s/.*/PKGROOT=./' > xgboost/src/Makevars.in
-# Configure Makevars.win (Windows-specific Makevars, likely using MinGW)
-	cp xgboost/src/Makevars.in xgboost/src/Makevars.win
-	cat xgboost/src/Makevars.in| sed '3s/.*/ENABLE_STD_THREAD=0/' > xgboost/src/Makevars.win
-	sed -i -e 's/@OPENMP_CXXFLAGS@/$$\(SHLIB_OPENMP_CXXFLAGS\)/g' xgboost/src/Makevars.win
-	sed -i -e 's/-pthread/$$\(SHLIB_PTHREAD_FLAGS\)/g' xgboost/src/Makevars.win
-	sed -i -e 's/@ENDIAN_FLAG@/-DDMLC_CMAKE_LITTLE_ENDIAN=1/g' xgboost/src/Makevars.win
-	sed -i -e 's/@BACKTRACE_LIB@//g' xgboost/src/Makevars.win
-	sed -i -e 's/@OPENMP_LIB@//g' xgboost/src/Makevars.win
+	cat R-package/src/Makevars.win|sed '2s/.*/PKGROOT=./' > xgboost/src/Makevars.win
 	rm -f xgboost/src/Makevars.win-e # OSX sed create this extra file; remove it
 	bash R-package/remove_warning_suppression_pragma.sh
 	bash xgboost/remove_warning_suppression_pragma.sh
 	rm xgboost/remove_warning_suppression_pragma.sh
+	rm xgboost/CMakeLists.txt
 	rm -rfv xgboost/tests/helper_scripts/
 R ?= R

NEWS.md (diff too large; not shown)

R-package/CMakeLists.txt

@@ -31,7 +31,7 @@ if (USE_OPENMP)
 endif (USE_OPENMP)
 set_target_properties(
   xgboost-r PROPERTIES
-  CXX_STANDARD 14
+  CXX_STANDARD 17
   CXX_STANDARD_REQUIRED ON
   POSITION_INDEPENDENT_CODE ON)

R-package/DESCRIPTION

@@ -1,12 +1,12 @@
 Package: xgboost
 Type: Package
 Title: Extreme Gradient Boosting
-Version: 1.3.3.1
-Date: 2020-08-28
+Version: 1.7.6.1
+Date: 2023-06-16
 Authors@R: c(
   person("Tianqi", "Chen", role = c("aut"),
          email = "tianqi.tchen@gmail.com"),
-  person("Tong", "He", role = c("aut", "cre"),
+  person("Tong", "He", role = c("aut"),
          email = "hetong007@gmail.com"),
   person("Michael", "Benesty", role = c("aut"),
          email = "michael@benesty.fr"),
@@ -26,9 +26,12 @@ Authors@R: c(
   person("Min", "Lin", role = c("aut")),
   person("Yifeng", "Geng", role = c("aut")),
   person("Yutian", "Li", role = c("aut")),
+  person("Jiaming", "Yuan", role = c("aut", "cre"),
+         email = "jm.yuan@outlook.com"),
   person("XGBoost contributors", role = c("cph"),
          comment = "base XGBoost implementation")
   )
+Maintainer: Jiaming Yuan <jm.yuan@outlook.com>
 Description: Extreme Gradient Boosting, which is an efficient implementation
     of the gradient boosting framework from Chen & Guestrin (2016) <doi:10.1145/2939672.2939785>.
     This package is its R interface. The package includes efficient linear
@@ -53,7 +56,6 @@ Suggests:
     testthat,
     lintr,
     igraph (>= 1.0.1),
-    jsonlite,
     float,
     crayon,
     titanic
@@ -63,6 +65,7 @@ Imports:
     Matrix (>= 1.1-0),
     methods,
     data.table (>= 1.9.6),
-    magrittr (>= 1.5),
-RoxygenNote: 7.1.1
-SystemRequirements: GNU make, C++14
+    jsonlite (>= 1.0),
+RoxygenNote: 7.2.3
+Encoding: UTF-8
+SystemRequirements: GNU make, C++17

R-package/LICENSE

@@ -1,9 +1,9 @@
-Copyright (c) 2014 by Tianqi Chen and Contributors
+Copyright (c) 2014-2023, Tianqi Chen and XGBoost Contributors
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
 Unless required by applicable law or agreed to in writing, software

R-package/NAMESPACE

@@ -36,6 +36,7 @@ export(xgb.create.features)
 export(xgb.cv)
 export(xgb.dump)
 export(xgb.gblinear.history)
+export(xgb.get.config)
 export(xgb.ggplot.deepness)
 export(xgb.ggplot.importance)
 export(xgb.ggplot.shap.summary)
@@ -52,6 +53,7 @@ export(xgb.plot.tree)
 export(xgb.save)
 export(xgb.save.raw)
 export(xgb.serialize)
+export(xgb.set.config)
 export(xgb.train)
 export(xgb.unserialize)
 export(xgboost)
@@ -78,7 +80,8 @@ importFrom(graphics,lines)
 importFrom(graphics,par)
 importFrom(graphics,points)
 importFrom(graphics,title)
-importFrom(magrittr,"%>%")
+importFrom(jsonlite,fromJSON)
+importFrom(jsonlite,toJSON)
 importFrom(stats,median)
 importFrom(stats,predict)
 importFrom(utils,head)

R-package/R/callbacks.R

@@ -188,7 +188,7 @@ cb.reset.parameters <- function(new_params) {
   pnames <- gsub("\\.", "_", names(new_params))
   nrounds <- NULL
-  # run some checks in the begining
+  # run some checks in the beginning
   init <- function(env) {
     nrounds <<- env$end_iteration - env$begin_iteration + 1
@@ -263,10 +263,7 @@ cb.reset.parameters <- function(new_params) {
 #' \itemize{
 #' \item \code{best_score} the evaluation score at the best iteration
 #' \item \code{best_iteration} at which boosting iteration the best score has occurred (1-based index)
-#' \item \code{best_ntreelimit} to use with the \code{ntreelimit} parameter in \code{predict}.
-#'       It differs from \code{best_iteration} in multiclass or random forest settings.
 #' }
-#'
 #' The Same values are also stored as xgb-attributes:
 #' \itemize{
 #' \item \code{best_iteration} is stored as a 0-based iteration index (for interoperability of binary models)
@@ -498,13 +495,12 @@ cb.cv.predict <- function(save_models = FALSE) {
       rep(NA_real_, N)
     }
-    ntreelimit <- NVL(env$basket$best_ntreelimit,
-                      env$end_iteration * env$num_parallel_tree)
+    iterationrange <- c(1, NVL(env$basket$best_iteration, env$end_iteration) + 1)
     if (NVL(env$params[['booster']], '') == 'gblinear') {
-      ntreelimit <- 0 # must be 0 for gblinear
+      iterationrange <- c(1, 1) # must be 0 for gblinear
     }
     for (fd in env$bst_folds) {
-      pr <- predict(fd$bst, fd$watchlist[[2]], ntreelimit = ntreelimit, reshape = TRUE)
+      pr <- predict(fd$bst, fd$watchlist[[2]], iterationrange = iterationrange, reshape = TRUE)
       if (is.matrix(pred)) {
         pred[fd$index, ] <- pr
       } else {
@@ -533,7 +529,7 @@ cb.cv.predict <- function(save_models = FALSE) {
 #' Callback closure for collecting the model coefficients history of a gblinear booster
 #' during its training.
 #'
-#' @param sparse when set to FALSE/TURE, a dense/sparse matrix is used to store the result.
+#' @param sparse when set to FALSE/TRUE, a dense/sparse matrix is used to store the result.
 #'        Sparse format is useful when one expects only a subset of coefficients to be non-zero,
 #'        when using the "thrifty" feature selector with fairly small number of top features
 #'        selected per iteration.
@@ -548,9 +544,11 @@ cb.cv.predict <- function(save_models = FALSE) {
 #'
 #' @return
 #' Results are stored in the \code{coefs} element of the closure.
-#' The \code{\link{xgb.gblinear.history}} convenience function provides an easy way to access it.
+#' The \code{\link{xgb.gblinear.history}} convenience function provides an easy
+#' way to access it.
 #' With \code{xgb.train}, it is either a dense of a sparse matrix.
-#' While with \code{xgb.cv}, it is a list (an element per each fold) of such matrices.
+#' While with \code{xgb.cv}, it is a list (an element per each fold) of such
+#' matrices.
 #'
 #' @seealso
 #' \code{\link{callbacks}}, \code{\link{xgb.gblinear.history}}.
@@ -560,10 +558,9 @@ cb.cv.predict <- function(save_models = FALSE) {
 #' #
 #' # In the iris dataset, it is hard to linearly separate Versicolor class from the rest
 #' # without considering the 2nd order interactions:
-#' require(magrittr)
 #' x <- model.matrix(Species ~ .^2, iris)[,-1]
 #' colnames(x)
-#' dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"))
+#' dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"), nthread = 2)
 #' param <- list(booster = "gblinear", objective = "reg:logistic", eval_metric = "auc",
 #'               lambda = 0.0003, alpha = 0.0003, nthread = 2)
 #' # For 'shotgun', which is a default linear updater, using high eta values may result in
@@ -581,21 +578,21 @@ cb.cv.predict <- function(save_models = FALSE) {
 #' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 0.8,
 #'                  updater = 'coord_descent', feature_selector = 'thrifty', top_k = 1,
 #'                  callbacks = list(cb.gblinear.history()))
-#' xgb.gblinear.history(bst) %>% matplot(type = 'l')
+#' matplot(xgb.gblinear.history(bst), type = 'l')
 #' # Componentwise boosting is known to have similar effect to Lasso regularization.
 #' # Try experimenting with various values of top_k, eta, nrounds,
 #' # as well as different feature_selectors.
 #'
 #' # For xgb.cv:
 #' bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 100, eta = 0.8,
 #'               callbacks = list(cb.gblinear.history()))
 #' # coefficients in the CV fold #3
-#' xgb.gblinear.history(bst)[[3]] %>% matplot(type = 'l')
+#' matplot(xgb.gblinear.history(bst)[[3]], type = 'l')
 #'
 #'
 #' #### Multiclass classification:
 #' #
-#' dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1)
+#' dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1, nthread = 2)
 #' param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
 #'               lambda = 0.0003, alpha = 0.0003, nthread = 2)
 #' # For the default linear updater 'shotgun' it sometimes is helpful
@@ -603,15 +600,15 @@ cb.cv.predict <- function(save_models = FALSE) {
 #' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 70, eta = 0.5,
 #'                  callbacks = list(cb.gblinear.history()))
 #' # Will plot the coefficient paths separately for each class:
-#' xgb.gblinear.history(bst, class_index = 0) %>% matplot(type = 'l')
-#' xgb.gblinear.history(bst, class_index = 1) %>% matplot(type = 'l')
-#' xgb.gblinear.history(bst, class_index = 2) %>% matplot(type = 'l')
+#' matplot(xgb.gblinear.history(bst, class_index = 0), type = 'l')
+#' matplot(xgb.gblinear.history(bst, class_index = 1), type = 'l')
+#' matplot(xgb.gblinear.history(bst, class_index = 2), type = 'l')
 #'
 #' # CV:
 #' bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 70, eta = 0.5,
 #'               callbacks = list(cb.gblinear.history(FALSE)))
-#' # 1st forld of 1st class
-#' xgb.gblinear.history(bst, class_index = 0)[[1]] %>% matplot(type = 'l')
+#' # 1st fold of 1st class
+#' matplot(xgb.gblinear.history(bst, class_index = 0)[[1]], type = 'l')
 #'
 #' @export
 cb.gblinear.history <- function(sparse=FALSE) {
@@ -642,9 +639,14 @@ cb.gblinear.history <- function(sparse=FALSE) {
     if (!is.null(env$bst)) { # xgb.train:
       coefs <<- list2mat(coefs)
     } else { # xgb.cv:
-      # first lapply transposes the list
-      coefs <<- lapply(seq_along(coefs[[1]]), function(i) lapply(coefs, "[[", i)) %>%
-        lapply(function(x) list2mat(x))
+      # second lapply transposes the list
+      coefs <<- lapply(
+        X = lapply(
+          X = seq_along(coefs[[1]]),
+          FUN = function(i) lapply(coefs, "[[", i)
+        ),
+        FUN = list2mat
+      )
     }
   }
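
The `cb.cv.predict` hunk above trades the deprecated `ntreelimit` argument for the 1-based, half-open `iterationrange`. A minimal sketch of the equivalence on the bundled agaricus data (the round counts here are illustrative, not taken from the diff):

    library(xgboost)
    data(agaricus.train, package = "xgboost")
    bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
                   nrounds = 4, objective = "binary:logistic", nthread = 2)
    # Deprecated spelling: count trees/iterations directly.
    p_old <- predict(bst, agaricus.train$data, ntreelimit = 2)
    # Replacement: 1-based half-open range over boosting rounds, i.e. rounds 1..2.
    p_new <- predict(bst, agaricus.train$data, iterationrange = c(1, 3))
    all.equal(p_old, p_new)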

R-package/R/utils.R

@@ -1,6 +1,6 @@
 #
-# This file is for the low level reuseable utility functions
-# that are not supposed to be visibe to a user.
+# This file is for the low level reusable utility functions
+# that are not supposed to be visible to a user.
 #
 #
@@ -178,7 +178,8 @@ xgb.iter.eval <- function(booster_handle, watchlist, iter, feval = NULL) {
   } else {
     res <- sapply(seq_along(watchlist), function(j) {
       w <- watchlist[[j]]
-      preds <- predict(booster_handle, w, outputmargin = TRUE, ntreelimit = 0) # predict using all trees
+      ## predict using all trees
+      preds <- predict(booster_handle, w, outputmargin = TRUE, iterationrange = c(1, 1))
       eval_res <- feval(preds, w)
       out <- eval_res$value
       names(out) <- paste0(evnames[j], "-", eval_res$metric)
@@ -284,7 +285,7 @@ xgb.createFolds <- function(y, k = 10)
   for (i in seq_along(numInClass)) {
     ## create a vector of integers from 1:k as many times as possible without
     ## going over the number of samples in the class. Note that if the number
-    ## of samples in a class is less than k, nothing is producd here.
+    ## of samples in a class is less than k, nothing is produced here.
     seqVector <- rep(seq_len(k), numInClass[i] %/% k)
     ## add enough random integers to get length(seqVector) == numInClass[i]
     if (numInClass[i] %% k > 0) seqVector <- c(seqVector, sample.int(k, numInClass[i] %% k))
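
For reference, the fold construction commented above repeats `1:k` as many whole times as a class allows, then tops up with randomly drawn fold ids; a tiny base-R illustration with hypothetical sizes:

    k <- 5
    n_in_class <- 12
    seqVector <- rep(seq_len(k), n_in_class %/% k)               # 1 2 3 4 5 1 2 3 4 5
    if (n_in_class %% k > 0)
      seqVector <- c(seqVector, sample.int(k, n_in_class %% k))  # two random fold ids
    length(seqVector) == n_in_class                              # TRUE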

R-package/R/xgb.Booster.R

@@ -1,7 +1,7 @@
 # Construct an internal xgboost Booster and return a handle to it.
 # internal utility function
 xgb.Booster.handle <- function(params = list(), cachelist = list(),
-                               modelfile = NULL) {
+                               modelfile = NULL, handle = NULL) {
   if (typeof(cachelist) != "list" ||
       !all(vapply(cachelist, inherits, logical(1), what = 'xgb.DMatrix'))) {
     stop("cachelist must be a list of xgb.DMatrix objects")
@@ -11,6 +11,7 @@ xgb.Booster.handle <- function(params = list(), cachelist = list(),
   if (typeof(modelfile) == "character") {
     ## A filename
     handle <- .Call(XGBoosterCreate_R, cachelist)
+    modelfile <- path.expand(modelfile)
     .Call(XGBoosterLoadModel_R, handle, modelfile[1])
     class(handle) <- "xgb.Booster.handle"
     if (length(params) > 0) {
@@ -19,7 +20,7 @@ xgb.Booster.handle <- function(params = list(), cachelist = list(),
     return(handle)
   } else if (typeof(modelfile) == "raw") {
     ## A memory buffer
-    bst <- xgb.unserialize(modelfile)
+    bst <- xgb.unserialize(modelfile, handle)
     xgb.parameters(bst) <- params
     return (bst)
   } else if (inherits(modelfile, "xgb.Booster")) {
@@ -128,7 +129,7 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
     stop("argument type must be xgb.Booster")
   if (is.null.handle(object$handle)) {
-    object$handle <- xgb.Booster.handle(modelfile = object$raw)
+    object$handle <- xgb.Booster.handle(modelfile = object$raw, handle = object$handle)
   } else {
     if (is.null(object$raw) && saveraw) {
       object$raw <- xgb.serialize(object$handle)
@@ -161,14 +162,17 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
 #' Predicted values based on either xgboost model or model handle object.
 #'
 #' @param object Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}
-#' @param newdata takes \code{matrix}, \code{dgCMatrix}, local data file or \code{xgb.DMatrix}.
+#' @param newdata takes \code{matrix}, \code{dgCMatrix}, \code{dgRMatrix}, \code{dsparseVector},
+#'        local data file or \code{xgb.DMatrix}.
+#'
+#'        For single-row predictions on sparse data, it's recommended to use CSR format. If passing
+#'        a sparse vector, it will take it as a row vector.
 #' @param missing Missing is only used when input is dense matrix. Pick a float value that represents
 #'        missing values in data (e.g., sometimes 0 or some other extreme value is used).
 #' @param outputmargin whether the prediction should be returned in the for of original untransformed
 #'        sum of predictions from boosting iterations' results. E.g., setting \code{outputmargin=TRUE} for
 #'        logistic regression would result in predictions for log-odds instead of probabilities.
-#' @param ntreelimit limit the number of model's trees or boosting iterations used in prediction (see Details).
-#'        It will use all the trees by default (\code{NULL} value).
+#' @param ntreelimit Deprecated, use \code{iterationrange} instead.
 #' @param predleaf whether predict leaf index.
 #' @param predcontrib whether to return feature contributions to individual predictions (see Details).
 #' @param approxcontrib whether to use a fast approximation for feature contributions (see Details).
@@ -178,16 +182,19 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
 #'        or predinteraction flags is TRUE.
 #' @param training whether is the prediction result used for training. For dart booster,
 #'        training predicting will perform dropout.
+#' @param iterationrange Specifies which layer of trees are used in prediction. For
+#'        example, if a random forest is trained with 100 rounds. Specifying
+#'        `iterationrange=(1, 21)`, then only the forests built during [1, 21) (half open set)
+#'        rounds are used in this prediction. It's 1-based index just like R vector. When set
+#'        to \code{c(1, 1)} XGBoost will use all trees.
+#' @param strict_shape Default is \code{FALSE}. When it's set to \code{TRUE}, output
+#'        type and shape of prediction are invariant to model type.
+#'
 #' @param ... Parameters passed to \code{predict.xgb.Booster}
 #'
 #' @details
-#' Note that \code{ntreelimit} is not necessarily equal to the number of boosting iterations
-#' and it is not necessarily equal to the number of trees in a model.
-#' E.g., in a random forest-like model, \code{ntreelimit} would limit the number of trees.
-#' But for multiclass classification, while there are multiple trees per iteration,
-#' \code{ntreelimit} limits the number of boosting iterations.
 #'
-#' Also note that \code{ntreelimit} would currently do nothing for predictions from gblinear,
+#' Note that \code{iterationrange} would currently do nothing for predictions from gblinear,
 #' since gblinear doesn't keep its boosting history.
 #'
 #' One possible practical applications of the \code{predleaf} option is to use the model
@@ -208,7 +215,8 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
 #' of the most important features first. See below about the format of the returned results.
 #'
 #' @return
-#' For regression or binary classification, it returns a vector of length \code{nrows(newdata)}.
+#' The return type is different depending whether \code{strict_shape} is set to \code{TRUE}. By default,
+#' for regression or binary classification, it returns a vector of length \code{nrows(newdata)}.
 #' For multiclass classification, either a \code{num_class * nrows(newdata)} vector or
 #' a \code{(nrows(newdata), num_class)} dimension matrix is returned, depending on
 #' the \code{reshape} value.
@@ -230,6 +238,13 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
 #' For a multiclass case, a list of \code{num_class} elements is returned, where each element is
 #' such an array.
 #'
+#' When \code{strict_shape} is set to \code{TRUE}, the output is always an array. For
+#' normal prediction, the output is a 2-dimension array \code{(num_class, nrow(newdata))}.
+#'
+#' For \code{predcontrib = TRUE}, output is \code{(ncol(newdata) + 1, num_class, nrow(newdata))}
+#' For \code{predinteraction = TRUE}, output is \code{(ncol(newdata) + 1, ncol(newdata) + 1, num_class, nrow(newdata))}
+#' For \code{predleaf = TRUE}, output is \code{(n_trees_in_forest, num_class, n_iterations, nrow(newdata))}
+#'
 #' @seealso
 #' \code{\link{xgb.train}}.
 #'
@@ -252,7 +267,7 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
 #' # use all trees by default
 #' pred <- predict(bst, test$data)
 #' # use only the 1st tree
-#' pred1 <- predict(bst, test$data, ntreelimit = 1)
+#' pred1 <- predict(bst, test$data, iterationrange = c(1, 2))
 #'
 #' # Predicting tree leafs:
 #' # the result is an nsamples X ntrees matrix
@@ -304,94 +319,152 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
 #' all.equal(pred, pred_labels)
 #' # prediction from using only 5 iterations should result
 #' # in the same error as seen in iteration 5:
-#' pred5 <- predict(bst, as.matrix(iris[, -5]), ntreelimit=5)
+#' pred5 <- predict(bst, as.matrix(iris[, -5]), iterationrange=c(1, 6))
 #' sum(pred5 != lb)/length(lb)
 #'
-#'
-#' ## random forest-like model of 25 trees for binary classification:
-#'
-#' set.seed(11)
-#' bst <- xgboost(data = train$data, label = train$label, max_depth = 5,
-#'                nthread = 2, nrounds = 1, objective = "binary:logistic",
-#'                num_parallel_tree = 25, subsample = 0.6, colsample_bytree = 0.1)
-#' # Inspect the prediction error vs number of trees:
-#' lb <- test$label
-#' dtest <- xgb.DMatrix(test$data, label=lb)
-#' err <- sapply(1:25, function(n) {
-#'   pred <- predict(bst, dtest, ntreelimit=n)
-#'   sum((pred > 0.5) != lb)/length(lb)
-#' })
-#' plot(err, type='l', ylim=c(0,0.1), xlab='#trees')
-#'
 #' @rdname predict.xgb.Booster
 #' @export
 predict.xgb.Booster <- function(object, newdata, missing = NA, outputmargin = FALSE, ntreelimit = NULL,
                                 predleaf = FALSE, predcontrib = FALSE, approxcontrib = FALSE, predinteraction = FALSE,
-                                reshape = FALSE, training = FALSE, ...) {
+                                reshape = FALSE, training = FALSE, iterationrange = NULL, strict_shape = FALSE, ...) {
   object <- xgb.Booster.complete(object, saveraw = FALSE)
   if (!inherits(newdata, "xgb.DMatrix"))
-    newdata <- xgb.DMatrix(newdata, missing = missing)
+    newdata <- xgb.DMatrix(newdata, missing = missing, nthread = NVL(object$params[["nthread"]], -1))
   if (!is.null(object[["feature_names"]]) &&
      !is.null(colnames(newdata)) &&
      !identical(object[["feature_names"]], colnames(newdata)))
    stop("Feature names stored in `object` and `newdata` are different!")
-  if (is.null(ntreelimit))
-    ntreelimit <- NVL(object$best_ntreelimit, 0)
-  if (NVL(object$params[['booster']], '') == 'gblinear')
-    ntreelimit <- 0
-  if (ntreelimit < 0)
-    stop("ntreelimit cannot be negative")
-
-  option <- 0L + 1L * as.logical(outputmargin) + 2L * as.logical(predleaf) + 4L * as.logical(predcontrib) +
-    8L * as.logical(approxcontrib) + 16L * as.logical(predinteraction)
-
-  ret <- .Call(XGBoosterPredict_R, object$handle, newdata, option[1],
-               as.integer(ntreelimit), as.integer(training))
+  if (NVL(object$params[['booster']], '') == 'gblinear' || is.null(ntreelimit))
+    ntreelimit <- 0
+
+  if (ntreelimit != 0 && is.null(iterationrange)) {
+    ## only ntreelimit, initialize iteration range
+    iterationrange <- c(0, 0)
+  } else if (ntreelimit == 0 && !is.null(iterationrange)) {
+    ## only iteration range, handle 1-based indexing
+    iterationrange <- c(iterationrange[1] - 1, iterationrange[2] - 1)
+  } else if (ntreelimit != 0 && !is.null(iterationrange)) {
+    ## both are specified, let libgxgboost throw an error
+  } else {
+    ## no limit is supplied, use best
+    if (is.null(object$best_iteration)) {
+      iterationrange <- c(0, 0)
+    } else {
+      ## We don't need to + 1 as R is 1-based index.
+      iterationrange <- c(0, as.integer(object$best_iteration))
+    }
+  }
+  ## Handle the 0 length values.
+  box <- function(val) {
+    if (length(val) == 0) {
+      cval <- vector(, 1)
+      cval[0] <- val
+      return(cval)
+    }
+    return (val)
+  }
+
+  ## We set strict_shape to TRUE then drop the dimensions conditionally
+  args <- list(
+    training = box(training),
+    strict_shape = box(TRUE),
+    iteration_begin = box(as.integer(iterationrange[1])),
+    iteration_end = box(as.integer(iterationrange[2])),
+    ntree_limit = box(as.integer(ntreelimit)),
+    type = box(as.integer(0))
+  )
+
+  set_type <- function(type) {
+    if (args$type != 0) {
+      stop("One type of prediction at a time.")
+    }
+    return(box(as.integer(type)))
+  }
+  if (outputmargin) {
+    args$type <- set_type(1)
+  }
+  if (predcontrib) {
+    args$type <- set_type(if (approxcontrib) 3 else 2)
+  }
+  if (predinteraction) {
+    args$type <- set_type(if (approxcontrib) 5 else 4)
+  }
+  if (predleaf) {
+    args$type <- set_type(6)
+  }
+
+  predts <- .Call(
+    XGBoosterPredictFromDMatrix_R, object$handle, newdata, jsonlite::toJSON(args, auto_unbox = TRUE)
+  )
+  names(predts) <- c("shape", "results")
+  shape <- predts$shape
+  ret <- predts$results
 
   n_ret <- length(ret)
   n_row <- nrow(newdata)
-  npred_per_case <- n_ret / n_row
-
-  if (n_ret %% n_row != 0)
-    stop("prediction length ", n_ret, " is not multiple of nrows(newdata) ", n_row)
+  if (n_row != shape[1]) {
+    stop("Incorrect predict shape.")
+  }
+
+  arr <- array(data = ret, dim = rev(shape))
+
+  cnames <- if (!is.null(colnames(newdata))) c(colnames(newdata), "BIAS") else NULL
+  n_groups <- shape[2]
+
+  ## Needed regardless of whether strict shape is being used.
+  if (predcontrib) {
+    dimnames(arr) <- list(cnames, NULL, NULL)
+  } else if (predinteraction) {
+    dimnames(arr) <- list(cnames, cnames, NULL, NULL)
+  }
+  if (strict_shape) {
+    return(arr) # strict shape is calculated by libxgboost uniformly.
+  }
 
   if (predleaf) {
-    ret <- if (n_ret == n_row) {
-      matrix(ret, ncol = 1)
+    ## Predict leaf
+    arr <- if (n_ret == n_row) {
+      matrix(arr, ncol = 1)
     } else {
-      matrix(ret, nrow = n_row, byrow = TRUE)
+      matrix(arr, nrow = n_row, byrow = TRUE)
    }
   } else if (predcontrib) {
-    n_col1 <- ncol(newdata) + 1
-    n_group <- npred_per_case / n_col1
-    cnames <- if (!is.null(colnames(newdata))) c(colnames(newdata), "BIAS") else NULL
-    ret <- if (n_ret == n_row) {
-      matrix(ret, ncol = 1, dimnames = list(NULL, cnames))
-    } else if (n_group == 1) {
-      matrix(ret, nrow = n_row, byrow = TRUE, dimnames = list(NULL, cnames))
+    ## Predict contribution
+    arr <- aperm(a = arr, perm = c(2, 3, 1)) # [group, row, col]
+    arr <- if (n_ret == n_row) {
+      matrix(arr, ncol = 1, dimnames = list(NULL, cnames))
+    } else if (n_groups != 1) {
+      ## turns array into list of matrices
+      lapply(seq_len(n_groups), function(g) arr[g, , ])
     } else {
-      arr <- array(ret, c(n_col1, n_group, n_row),
-                   dimnames = list(cnames, NULL, NULL)) %>% aperm(c(2, 3, 1)) # [group, row, col]
-      lapply(seq_len(n_group), function(g) arr[g, , ])
+      ## remove the first axis (group)
+      dn <- dimnames(arr)
+      matrix(arr[1, , ], nrow = dim(arr)[2], ncol = dim(arr)[3], dimnames = c(dn[2], dn[3]))
    }
   } else if (predinteraction) {
-    n_col1 <- ncol(newdata) + 1
-    n_group <- npred_per_case / n_col1^2
-    cnames <- if (!is.null(colnames(newdata))) c(colnames(newdata), "BIAS") else NULL
-    ret <- if (n_ret == n_row) {
-      matrix(ret, ncol = 1, dimnames = list(NULL, cnames))
-    } else if (n_group == 1) {
-      array(ret, c(n_col1, n_col1, n_row), dimnames = list(cnames, cnames, NULL)) %>% aperm(c(3, 1, 2))
+    ## Predict interaction
+    arr <- aperm(a = arr, perm = c(3, 4, 1, 2)) # [group, row, col, col]
+    arr <- if (n_ret == n_row) {
+      matrix(arr, ncol = 1, dimnames = list(NULL, cnames))
+    } else if (n_groups != 1) {
+      ## turns array into list of matrices
+      lapply(seq_len(n_groups), function(g) arr[g, , , ])
    } else {
-      arr <- array(ret, c(n_col1, n_col1, n_group, n_row),
-                   dimnames = list(cnames, cnames, NULL, NULL)) %>% aperm(c(3, 4, 1, 2)) # [group, row, col1, col2]
-      lapply(seq_len(n_group), function(g) arr[g, , , ])
+      ## remove the first axis (group)
+      arr <- arr[1, , , , drop = FALSE]
+      array(arr, dim = dim(arr)[2:4], dimnames(arr)[2:4])
    }
-  } else if (reshape && npred_per_case > 1) {
-    ret <- matrix(ret, nrow = n_row, byrow = TRUE)
+  } else {
+    ## Normal prediction
+    arr <- if (reshape && n_groups != 1) {
+      matrix(arr, ncol = n_groups, byrow = TRUE)
+    } else {
+      as.vector(ret)
+    }
   }
-  return(ret)
+  return(arr)
 }
 
 #' @rdname predict.xgb.Booster
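
The rewritten `predict.xgb.Booster` always requests a strict-shape array from libxgboost and only afterwards drops dimensions in R, so `strict_shape = TRUE` exposes the raw layout directly. A sketch of the observable shapes, assuming a binary model on the bundled agaricus data:

    library(xgboost)
    data(agaricus.train, package = "xgboost")
    bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
                   nrounds = 2, objective = "binary:logistic", nthread = 2)
    p <- predict(bst, agaricus.train$data)                       # plain numeric vector
    s <- predict(bst, agaricus.train$data, strict_shape = TRUE)  # array, dim = c(num_class, nrow)
    dim(s)
    contrib <- predict(bst, agaricus.train$data, predcontrib = TRUE)
    dim(contrib)                      # nrow(newdata) x (ncol(newdata) + 1)
    colnames(contrib)[ncol(contrib)]  # "BIAS"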

R-package/R/xgb.DMatrix.R

@@ -1,44 +1,63 @@
 #' Construct xgb.DMatrix object
 #'
 #' Construct xgb.DMatrix object from either a dense matrix, a sparse matrix, or a local file.
-#' Supported input file formats are either a libsvm text file or a binary file that was created previously by
+#' Supported input file formats are either a LIBSVM text file or a binary file that was created previously by
 #' \code{\link{xgb.DMatrix.save}}).
 #'
-#' @param data a \code{matrix} object (either numeric or integer), a \code{dgCMatrix} object, or a character
-#'        string representing a filename.
+#' @param data a \code{matrix} object (either numeric or integer), a \code{dgCMatrix} object,
+#'        a \code{dgRMatrix} object (only when making predictions from a fitted model),
+#'        a \code{dsparseVector} object (only when making predictions from a fitted model, will be
+#'        interpreted as a row vector), or a character string representing a filename.
 #' @param info a named list of additional information to store in the \code{xgb.DMatrix} object.
 #'        See \code{\link{setinfo}} for the specific allowed kinds of
 #' @param missing a float value to represents missing values in data (used only when input is a dense matrix).
 #'        It is useful when a 0 or some other extreme value represents missing values in data.
 #' @param silent whether to suppress printing an informational message after loading from a file.
+#' @param nthread Number of threads used for creating DMatrix.
 #' @param ... the \code{info} data could be passed directly as parameters, without creating an \code{info} list.
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #' xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
 #' dtrain <- xgb.DMatrix('xgb.DMatrix.data')
 #' if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')
 #' @export
-xgb.DMatrix <- function(data, info = list(), missing = NA, silent = FALSE, ...) {
+xgb.DMatrix <- function(data, info = list(), missing = NA, silent = FALSE, nthread = NULL, ...) {
   cnames <- NULL
   if (typeof(data) == "character") {
     if (length(data) > 1)
       stop("'data' has class 'character' and length ", length(data),
            ".\n  'data' accepts either a numeric matrix or a single filename.")
+    data <- path.expand(data)
     handle <- .Call(XGDMatrixCreateFromFile_R, data, as.integer(silent))
   } else if (is.matrix(data)) {
-    handle <- .Call(XGDMatrixCreateFromMat_R, data, missing)
+    handle <- .Call(XGDMatrixCreateFromMat_R, data, missing, as.integer(NVL(nthread, -1)))
     cnames <- colnames(data)
   } else if (inherits(data, "dgCMatrix")) {
-    handle <- .Call(XGDMatrixCreateFromCSC_R, data@p, data@i, data@x, nrow(data))
+    handle <- .Call(
+      XGDMatrixCreateFromCSC_R, data@p, data@i, data@x, nrow(data), as.integer(NVL(nthread, -1))
+    )
     cnames <- colnames(data)
+  } else if (inherits(data, "dgRMatrix")) {
+    handle <- .Call(
+      XGDMatrixCreateFromCSR_R, data@p, data@j, data@x, ncol(data), as.integer(NVL(nthread, -1))
+    )
+    cnames <- colnames(data)
+  } else if (inherits(data, "dsparseVector")) {
+    indptr <- c(0L, as.integer(length(data@i)))
+    ind <- as.integer(data@i) - 1L
+    handle <- .Call(
+      XGDMatrixCreateFromCSR_R, indptr, ind, data@x, length(data), as.integer(NVL(nthread, -1))
+    )
   } else {
     stop("xgb.DMatrix does not support construction from ", typeof(data))
   }
   dmat <- handle
-  attributes(dmat) <- list(.Dimnames = list(NULL, cnames), class = "xgb.DMatrix")
+  attributes(dmat) <- list(class = "xgb.DMatrix")
+  if (!is.null(cnames)) {
+    setinfo(dmat, "feature_name", cnames)
+  }
 
   info <- append(info, list(...))
   for (i in seq_along(info)) {
@@ -51,12 +70,12 @@ xgb.DMatrix <- function(data, info = list(), missing = NA, silent = FALSE, ...)
 # get dmatrix from data, label
 # internal helper method
-xgb.get.DMatrix <- function(data, label = NULL, missing = NA, weight = NULL) {
+xgb.get.DMatrix <- function(data, label = NULL, missing = NA, weight = NULL, nthread = NULL) {
   if (inherits(data, "dgCMatrix") || is.matrix(data)) {
     if (is.null(label)) {
       stop("label must be provided when data is a matrix")
     }
-    dtrain <- xgb.DMatrix(data, label = label, missing = missing)
+    dtrain <- xgb.DMatrix(data, label = label, missing = missing, nthread = nthread)
     if (!is.null(weight)){
       setinfo(dtrain, "weight", weight)
     }
@@ -65,6 +84,7 @@ xgb.get.DMatrix <- function(data, label = NULL, missing = NA, weight = NULL) {
       warning("xgboost: label will be ignored.")
     }
     if (is.character(data)) {
+      data <- path.expand(data)
       dtrain <- xgb.DMatrix(data[1])
     } else if (inherits(data, "xgb.DMatrix")) {
       dtrain <- data
@@ -90,7 +110,7 @@ xgb.get.DMatrix <- function(data, label = NULL, missing = NA, weight = NULL) {
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
 #'
 #' stopifnot(nrow(dtrain) == nrow(train$data))
 #' stopifnot(ncol(dtrain) == ncol(train$data))
@@ -118,7 +138,7 @@ dim.xgb.DMatrix <- function(x) {
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
 #' dimnames(dtrain)
 #' colnames(dtrain)
 #' colnames(dtrain) <- make.names(1:ncol(train$data))
@@ -127,7 +147,9 @@ dim.xgb.DMatrix <- function(x) {
 #' @rdname dimnames.xgb.DMatrix
 #' @export
 dimnames.xgb.DMatrix <- function(x) {
-  attr(x, '.Dimnames')
+  fn <- getinfo(x, "feature_name")
+  ## row names is null.
+  list(NULL, fn)
 }
 #' @rdname dimnames.xgb.DMatrix
@@ -138,13 +160,13 @@ dimnames.xgb.DMatrix <- function(x) {
   if (!is.null(value[[1L]]))
     stop("xgb.DMatrix does not have rownames")
   if (is.null(value[[2]])) {
-    attr(x, '.Dimnames') <- NULL
+    setinfo(x, "feature_name", NULL)
     return(x)
   }
-  if (ncol(x) != length(value[[2]]))
-    stop("can't assign ", length(value[[2]]), " colnames to a ",
-         ncol(x), " column xgb.DMatrix")
-  attr(x, '.Dimnames') <- value
+  if (ncol(x) != length(value[[2]])) {
+    stop("can't assign ", length(value[[2]]), " colnames to a ", ncol(x), " column xgb.DMatrix")
+  }
+  setinfo(x, "feature_name", value[[2]])
   x
 }
@@ -160,9 +182,9 @@ dimnames.xgb.DMatrix <- function(x) {
 #' The \code{name} field can be one of the following:
 #'
 #' \itemize{
-#'     \item \code{label}: label Xgboost learn from ;
+#'     \item \code{label}: label XGBoost learn from ;
 #'     \item \code{weight}: to do a weight rescale ;
-#'     \item \code{base_margin}: base margin is the base prediction Xgboost will boost from ;
+#'     \item \code{base_margin}: base margin is the base prediction XGBoost will boost from ;
 #'     \item \code{nrow}: number of rows of the \code{xgb.DMatrix}.
 #'
 #' }
@@ -171,8 +193,7 @@ dimnames.xgb.DMatrix <- function(x) {
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' labels <- getinfo(dtrain, 'label')
 #' setinfo(dtrain, 'label', 1-labels)
@@ -187,13 +208,17 @@ getinfo <- function(object, ...) UseMethod("getinfo")
 #' @export
 getinfo.xgb.DMatrix <- function(object, name, ...) {
   if (typeof(name) != "character" ||
       length(name) != 1 ||
       !name %in% c('label', 'weight', 'base_margin', 'nrow',
-                   'label_lower_bound', 'label_upper_bound')) {
-    stop("getinfo: name must be one of the following\n",
-         "    'label', 'weight', 'base_margin', 'nrow', 'label_lower_bound', 'label_upper_bound'")
+                   'label_lower_bound', 'label_upper_bound', "feature_type", "feature_name")) {
+    stop(
+      "getinfo: name must be one of the following\n",
+      "    'label', 'weight', 'base_margin', 'nrow', 'label_lower_bound', 'label_upper_bound', 'feature_type', 'feature_name'"
+    )
   }
-  if (name != "nrow"){
+  if (name == "feature_name" || name == "feature_type") {
+    ret <- .Call(XGDMatrixGetStrFeatureInfo_R, object, name)
+  } else if (name != "nrow"){
     ret <- .Call(XGDMatrixGetInfo_R, object, name)
   } else {
     ret <- nrow(object)
@@ -216,16 +241,15 @@ getinfo.xgb.DMatrix <- function(object, name, ...) {
 #' The \code{name} field can be one of the following:
 #'
 #' \itemize{
-#'     \item \code{label}: label Xgboost learn from ;
+#'     \item \code{label}: label XGBoost learn from ;
 #'     \item \code{weight}: to do a weight rescale ;
-#'     \item \code{base_margin}: base margin is the base prediction Xgboost will boost from ;
+#'     \item \code{base_margin}: base margin is the base prediction XGBoost will boost from ;
 #'     \item \code{group}: number of rows in each group (to use with \code{rank:pairwise} objective).
 #' }
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' labels <- getinfo(dtrain, 'label')
 #' setinfo(dtrain, 'label', 1-labels)
@@ -272,6 +296,37 @@ setinfo.xgb.DMatrix <- function(object, name, info, ...) {
     .Call(XGDMatrixSetInfo_R, object, name, as.integer(info))
     return(TRUE)
   }
+  if (name == "feature_weights") {
+    if (length(info) != ncol(object)) {
+      stop("The number of feature weights must equal to the number of columns in the input data")
+    }
+    .Call(XGDMatrixSetInfo_R, object, name, as.numeric(info))
+    return(TRUE)
+  }
+
+  set_feat_info <- function(name) {
+    msg <- sprintf(
+      "The number of %s must equal to the number of columns in the input data. %s vs. %s",
+      name,
+      length(info),
+      ncol(object)
+    )
+    if (!is.null(info)) {
+      info <- as.list(info)
+      if (length(info) != ncol(object)) {
+        stop(msg)
+      }
+    }
+    .Call(XGDMatrixSetStrFeatureInfo_R, object, name, info)
+  }
+  if (name == "feature_name") {
+    set_feat_info("feature_name")
+    return(TRUE)
+  }
+  if (name == "feature_type") {
+    set_feat_info("feature_type")
+    return(TRUE)
+  }
   stop("setinfo: unknown info name ", name)
   return(FALSE)
 }
@@ -290,8 +345,7 @@ setinfo.xgb.DMatrix <- function(object, name, info, ...) {
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' dsub <- slice(dtrain, 1:42)
 #' labels1 <- getinfo(dsub, 'label')
@@ -347,8 +401,7 @@ slice.xgb.DMatrix <- function(object, idxset, ...) {
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' dtrain
 #' print(dtrain, verbose=TRUE)
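
Besides the `nthread` pass-through, the constructor now accepts CSR inputs (`dgRMatrix`) and sparse vectors, both intended for inference. A sketch of single-row prediction with the new types (model and data here are illustrative):

    library(xgboost)
    library(Matrix)
    data(agaricus.train, package = "xgboost")
    bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
                   nrounds = 2, objective = "binary:logistic", nthread = 2)
    one_row <- as(agaricus.train$data[1, , drop = FALSE], "RsparseMatrix")  # dgRMatrix
    one_vec <- as(agaricus.train$data[1, ], "sparseVector")                 # dsparseVector
    predict(bst, one_row)  # CSR is the recommended layout for single rows
    predict(bst, one_vec)  # a sparse vector is taken as one row vector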

R-package/R/xgb.DMatrix.save.R

@@ -7,8 +7,7 @@
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' train <- agaricus.train
-#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #' xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
 #' dtrain <- xgb.DMatrix('xgb.DMatrix.data')
 #' if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')
@@ -19,6 +18,7 @@ xgb.DMatrix.save <- function(dmatrix, fname) {
   if (!inherits(dmatrix, "xgb.DMatrix"))
     stop("dmatrix must be xgb.DMatrix")
+  fname <- path.expand(fname)
   .Call(XGDMatrixSaveBinary_R, dmatrix, fname[1], 0L)
   return(TRUE)
 }

R-package/R/xgb.config.R (new file, 38 lines)

@@ -0,0 +1,38 @@
+#' Global configuration consists of a collection of parameters that can be applied in the global
+#' scope. See \url{https://xgboost.readthedocs.io/en/stable/parameter.html} for the full list of
+#' parameters supported in the global configuration. Use \code{xgb.set.config} to update the
+#' values of one or more global-scope parameters. Use \code{xgb.get.config} to fetch the current
+#' values of all global-scope parameters (listed in
+#' \url{https://xgboost.readthedocs.io/en/stable/parameter.html}).
+#'
+#' @rdname xgbConfig
+#' @title Set and get global configuration
+#' @name xgb.set.config, xgb.get.config
+#' @export xgb.set.config xgb.get.config
+#' @param ... List of parameters to be set, as keyword arguments
+#' @return
+#' \code{xgb.set.config} returns \code{TRUE} to signal success. \code{xgb.get.config} returns
+#' a list containing all global-scope parameters and their values.
+#'
+#' @examples
+#' # Set verbosity level to silent (0)
+#' xgb.set.config(verbosity = 0)
+#' # Now global verbosity level is 0
+#' config <- xgb.get.config()
+#' print(config$verbosity)
+#' # Set verbosity level to warning (1)
+#' xgb.set.config(verbosity = 1)
+#' # Now global verbosity level is 1
+#' config <- xgb.get.config()
+#' print(config$verbosity)
+xgb.set.config <- function(...) {
+  new_config <- list(...)
+  .Call(XGBSetGlobalConfig_R, jsonlite::toJSON(new_config, auto_unbox = TRUE))
+  return(TRUE)
+}
+
+#' @rdname xgbConfig
+xgb.get.config <- function() {
+  config <- .Call(XGBGetGlobalConfig_R)
+  return(jsonlite::fromJSON(config))
+}

R-package/R/xgb.create.features.R

@@ -18,7 +18,7 @@
 #'
 #' International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
 #'
-#' \url{https://research.fb.com/publications/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
+#' \url{https://research.facebook.com/publications/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
 #'
 #' Extract explaining the method:
 #'
@@ -48,8 +48,8 @@
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' data(agaricus.test, package='xgboost')
-#' dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
-#' dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
+#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
 #'
 #' param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
 #' nrounds = 4
@@ -65,8 +65,12 @@
 #' new.features.test <- xgb.create.features(model = bst, agaricus.test$data)
 #'
 #' # learning with new features
-#' new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
-#' new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
+#' new.dtrain <- xgb.DMatrix(
+#'   data = new.features.train, label = agaricus.train$label, nthread = 2
+#' )
+#' new.dtest <- xgb.DMatrix(
+#'   data = new.features.test, label = agaricus.test$label, nthread = 2
+#' )
 #' watchlist <- list(train = new.dtrain)
 #' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
@@ -79,7 +83,7 @@
 #'           accuracy.after, "!\n"))
 #'
 #' @export
-xgb.create.features <- function(model, data, ...){
+xgb.create.features <- function(model, data, ...) {
   check.deprecation(...)
   pred_with_leaf <- predict(model, data, predleaf = TRUE)
   cols <- lapply(as.data.frame(pred_with_leaf), factor)

R-package/R/xgb.cv.R

@@ -101,9 +101,7 @@
 #'        parameter or randomly generated.
 #' \item \code{best_iteration} iteration number with the best evaluation metric value
 #'        (only available with early stopping).
-#' \item \code{best_ntreelimit} the \code{ntreelimit} value corresponding to the best iteration,
-#'        which could further be used in \code{predict} method
-#'        (only available with early stopping).
+#' \item \code{best_ntreelimit} and the \code{ntreelimit} Deprecated attributes, use \code{best_iteration} instead.
 #' \item \code{pred} CV prediction values available when \code{prediction} is set.
 #'        It is either vector or matrix (see \code{\link{cb.cv.predict}}).
 #' \item \code{models} a list of the CV folds' models. It is only available with the explicit
@@ -112,9 +110,9 @@
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
-#' dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 #' cv <- xgb.cv(data = dtrain, nrounds = 3, nthread = 2, nfold = 5, metrics = list("rmse","auc"),
 #'              max_depth = 3, eta = 1, objective = "binary:logistic")
 #' print(cv)
 #' print(cv, verbose=TRUE)
 #'
@@ -194,7 +192,7 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
   # create the booster-folds
   # train_folds
-  dall <- xgb.get.DMatrix(data, label, missing)
+  dall <- xgb.get.DMatrix(data, label, missing, nthread = params$nthread)
   bst_folds <- lapply(seq_along(folds), function(k) {
     dtest <- slice(dall, folds[[k]])
     # code originally contributed by @RolandASc on stackoverflow
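
Since `best_ntreelimit` is deprecated, code consuming the CV result should read `best_iteration` instead. A sketch with early stopping (the parameter values are illustrative):

    library(xgboost)
    data(agaricus.train, package = "xgboost")
    dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
    cv <- xgb.cv(data = dtrain, nrounds = 50, nfold = 5, nthread = 2,
                 max_depth = 3, eta = 1, objective = "binary:logistic",
                 early_stopping_rounds = 3)
    cv$best_iteration  # 1-based index of the best round; replaces best_ntreelimit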

R-package/R/xgb.dump.R

@@ -6,8 +6,6 @@
 #' @param fname the name of the text file where to save the model text dump.
 #'        If not provided or set to \code{NULL}, the model is returned as a \code{character} vector.
 #' @param fmap feature map file representing feature types.
-#'        Detailed description could be found at
-#'        \url{https://github.com/dmlc/xgboost/wiki/Binary-Classification#dump-model}.
 #'        See demo/ for walkthrough example in R, and
 #'        \url{https://github.com/dmlc/xgboost/blob/master/demo/data/featmap.txt}
 #'        for example Format.
@@ -66,6 +64,7 @@ xgb.dump <- function(model, fname = NULL, fmap = "", with_stats=FALSE,
   if (is.null(fname)) {
     return(model_dump)
   } else {
+    fname <- path.expand(fname)
     writeLines(model_dump, fname[1])
     return(TRUE)
   }
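
The added `path.expand` call means `fname` may now contain a tilde; previously such paths were handed to the C API verbatim. A small sketch (the output path is illustrative):

    library(xgboost)
    data(agaricus.train, package = "xgboost")
    bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
                   nrounds = 1, objective = "binary:logistic", nthread = 2)
    dump_path <- "~/xgb_dump.txt"  # "~" is now expanded to the home directory
    xgb.dump(bst, fname = dump_path, with_stats = TRUE)
    file.exists(path.expand(dump_path))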

R-package/R/xgb.importance.R

@@ -96,40 +96,44 @@ xgb.importance <- function(feature_names = NULL, model = NULL, trees = NULL,
   if (!(is.null(feature_names) || is.character(feature_names)))
     stop("feature_names: Has to be a character vector")
-  model_text_dump <- xgb.dump(model = model, with_stats = TRUE)
-  # linear model
-  if (model_text_dump[2] == "bias:"){
-    weights <- which(model_text_dump == "weight:") %>%
-      {model_text_dump[(. + 1):length(model_text_dump)]} %>%
-      as.numeric
-    num_class <- NVL(model$params$num_class, 1)
-    if (is.null(feature_names))
-      feature_names <- seq(to = length(weights) / num_class) - 1
-    if (length(feature_names) * num_class != length(weights))
-      stop("feature_names length does not match the number of features used in the model")
-    result <- if (num_class == 1) {
-      data.table(Feature = feature_names, Weight = weights)[order(-abs(Weight))]
+  model <- xgb.Booster.complete(model)
+  config <- jsonlite::fromJSON(xgb.config(model))
+  if (config$learner$gradient_booster$name == "gblinear") {
+    args <- list(importance_type = "weight", feature_names = feature_names)
+    results <- .Call(
+      XGBoosterFeatureScore_R, model$handle, jsonlite::toJSON(args, auto_unbox = TRUE, null = "null")
+    )
+    names(results) <- c("features", "shape", "weight")
+    n_classes <- if (length(results$shape) == 2) { results$shape[2] } else { 0 }
+    importance <- if (n_classes == 0) {
+      data.table(Feature = results$features, Weight = results$weight)[order(-abs(Weight))]
     } else {
-      data.table(Feature = rep(feature_names, each = num_class),
-                 Weight = weights,
-                 Class = seq_len(num_class) - 1)[order(Class, -abs(Weight))]
+      data.table(
+        Feature = rep(results$features, each = n_classes), Weight = results$weight, Class = seq_len(n_classes) - 1
+      )[order(Class, -abs(Weight))]
     }
-  } else { # tree model
-    result <- xgb.model.dt.tree(feature_names = feature_names,
-                                text = model_text_dump,
-                                trees = trees)[
-      Feature != "Leaf", .(Gain = sum(Quality),
-                           Cover = sum(Cover),
-                           Frequency = .N), by = Feature][
-      , `:=`(Gain = Gain / sum(Gain),
-             Cover = Cover / sum(Cover),
-             Frequency = Frequency / sum(Frequency))][
-      order(Gain, decreasing = TRUE)]
+  } else {
+    concatenated <- list()
+    output_names <- vector()
+    for (importance_type in c("weight", "total_gain", "total_cover")) {
+      args <- list(importance_type = importance_type, feature_names = feature_names, tree_idx = trees)
+      results <- .Call(
+        XGBoosterFeatureScore_R, model$handle, jsonlite::toJSON(args, auto_unbox = TRUE, null = "null")
+      )
+      names(results) <- c("features", "shape", importance_type)
+      concatenated[
+        switch(importance_type, "weight" = "Frequency", "total_gain" = "Gain", "total_cover" = "Cover")
+      ] <- results[importance_type]
+      output_names <- results$features
+    }
+    importance <- data.table(
+      Feature = output_names,
+      Gain = concatenated$Gain / sum(concatenated$Gain),
+      Cover = concatenated$Cover / sum(concatenated$Cover),
+      Frequency = concatenated$Frequency / sum(concatenated$Frequency)
+    )[order(Gain, decreasing = TRUE)]
   }
-  result
+  importance
 }
 # Avoid error messages during CRAN check.
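
Illustration (not part of the diff): importance now comes from the booster's native feature-score call rather than from parsing a text dump. Reusing `bst` from the earlier sketch:

    imp <- xgb.importance(model = bst)
    print(imp)  # a data.table with Feature, Gain, Cover, Frequency for tree boosters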


@@ -5,7 +5,7 @@
 #' @param modelfile the name of the binary input file.
 #'
 #' @details
-#' The input file is expected to contain a model saved in an xgboost-internal binary format
+#' The input file is expected to contain a model saved in an xgboost model format
 #' using either \code{\link{xgb.save}} or \code{\link{cb.save.model}} in R, or using some
 #' appropriate methods from other xgboost interfaces. E.g., a model trained in Python and
 #' saved from there in xgboost format, could be loaded from R.
@@ -38,6 +38,13 @@ xgb.load <- function(modelfile) {
   handle <- xgb.Booster.handle(modelfile = modelfile)
   # re-use modelfile if it is raw so we do not need to serialize
   if (typeof(modelfile) == "raw") {
+    warning(
+      paste(
+        "The support for loading raw booster with `xgb.load` will be ",
+        "discontinued in upcoming release. Use `xgb.load.raw` or",
+        " `xgb.unserialize` instead. "
+      )
+    )
     bst <- xgb.handleToBooster(handle, modelfile)
   } else {
     bst <- xgb.handleToBooster(handle, NULL)


@@ -3,12 +3,21 @@
 #' User can generate raw memory buffer by calling xgb.save.raw
 #'
 #' @param buffer the buffer returned by xgb.save.raw
+#' @param as_booster Return the loaded model as xgb.Booster instead of xgb.Booster.handle.
 #'
 #' @export
-xgb.load.raw <- function(buffer) {
+xgb.load.raw <- function(buffer, as_booster = FALSE) {
   cachelist <- list()
   handle <- .Call(XGBoosterCreate_R, cachelist)
   .Call(XGBoosterLoadModelFromRaw_R, handle, buffer)
   class(handle) <- "xgb.Booster.handle"
-  return (handle)
+  if (as_booster) {
+    booster <- list(handle = handle, raw = NULL)
+    class(booster) <- "xgb.Booster"
+    booster <- xgb.Booster.complete(booster, saveraw = TRUE)
+    return(booster)
+  } else {
+    return (handle)
+  }
 }
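
Illustration (not part of the diff): round-tripping a booster through a raw vector with the new `as_booster` switch, reusing `bst`:

    raw <- xgb.save.raw(bst)
    bst2 <- xgb.load.raw(raw, as_booster = TRUE)  # a full xgb.Booster, not a bare handle
    stopifnot(all.equal(predict(bst, agaricus.train$data),
                        predict(bst2, agaricus.train$data)))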


@@ -87,7 +87,7 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
   }
   if (length(text) < 2 ||
-      sum(grepl('yes=(\\d+),no=(\\d+)', text)) < 1) {
+      sum(grepl('leaf=(\\d+)', text)) < 1) {
     stop("Non-tree model detected! This function can only be used with tree models.")
   }
@@ -116,16 +116,28 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
   branch_rx <- paste0("f(\\d+)<(", anynumber_regex, ")\\] yes=(\\d+),no=(\\d+),missing=(\\d+),",
                       "gain=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
   branch_cols <- c("Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover")
-  td[isLeaf == FALSE,
-     (branch_cols) := {
-       matches <- regmatches(t, regexec(branch_rx, t))
-       # skip some indices with spurious capture groups from anynumber_regex
-       xtr <- do.call(rbind, matches)[, c(2, 3, 5, 6, 7, 8, 10), drop = FALSE]
-       xtr[, 3:5] <- add.tree.id(xtr[, 3:5], Tree)
-       as.data.table(xtr)
-     }]
+  td[
+    isLeaf == FALSE,
+    (branch_cols) := {
+      matches <- regmatches(t, regexec(branch_rx, t))
+      # skip some indices with spurious capture groups from anynumber_regex
+      xtr <- do.call(rbind, matches)[, c(2, 3, 5, 6, 7, 8, 10), drop = FALSE]
+      xtr[, 3:5] <- add.tree.id(xtr[, 3:5], Tree)
+      if (length(xtr) == 0) {
+        as.data.table(
+          list(Feature = "NA", Split = "NA", Yes = "NA", No = "NA", Missing = "NA", Quality = "NA", Cover = "NA")
+        )
+      } else {
+        as.data.table(xtr)
+      }
+    }
+  ]
   # assign feature_names when available
-  if (!is.null(feature_names)) {
+  is_stump <- function() {
+    return(length(td$Feature) == 1 && is.na(td$Feature))
+  }
+  if (!is.null(feature_names) && !is_stump()) {
     if (length(feature_names) <= max(as.numeric(td$Feature), na.rm = TRUE))
       stop("feature_names has less elements than there are features used in the model")
     td[isLeaf == FALSE, Feature := feature_names[as.numeric(Feature) + 1]]
@@ -134,12 +146,18 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
   # parse leaf lines
   leaf_rx <- paste0("leaf=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
   leaf_cols <- c("Feature", "Quality", "Cover")
-  td[isLeaf == TRUE,
-     (leaf_cols) := {
-       matches <- regmatches(t, regexec(leaf_rx, t))
-       xtr <- do.call(rbind, matches)[, c(2, 4)]
-       c("Leaf", as.data.table(xtr))
-     }]
+  td[
+    isLeaf == TRUE,
+    (leaf_cols) := {
+      matches <- regmatches(t, regexec(leaf_rx, t))
+      xtr <- do.call(rbind, matches)[, c(2, 4)]
+      if (length(xtr) == 2) {
+        c("Leaf", as.data.table(xtr[1]), as.data.table(xtr[2]))
+      } else {
+        c("Leaf", as.data.table(xtr))
+      }
+    }
+  ]
   # convert some columns to numeric
   numeric_cols <- c("Split", "Quality", "Cover")
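
Illustration (not part of the diff): a quick look at the parsed table that the stump handling above protects, reusing `bst`; the column names follow the code in this hunk:

    dt <- xgb.model.dt.tree(model = bst)
    head(dt[Feature != "Leaf", .(Tree, Feature, Split, Quality, Cover)])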


@@ -62,6 +62,9 @@
 #' @export
 xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5, plot_width = NULL, plot_height = NULL,
                                  render = TRUE, ...){
+  if (!requireNamespace("DiagrammeR", quietly = TRUE)) {
+    stop("DiagrammeR is required for xgb.plot.multi.trees")
+  }
   check.deprecation(...)
   tree.matrix <- xgb.model.dt.tree(feature_names = feature_names, model = model)
@@ -75,8 +78,8 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5,
   while (tree.matrix[, sum(is.na(abs.node.position))] > 0) {
     yes.row.nodes <- tree.matrix[abs.node.position %in% precedent.nodes & !is.na(Yes)]
     no.row.nodes <- tree.matrix[abs.node.position %in% precedent.nodes & !is.na(No)]
-    yes.nodes.abs.pos <- yes.row.nodes[, abs.node.position] %>% paste0("_0")
-    no.nodes.abs.pos <- no.row.nodes[, abs.node.position] %>% paste0("_1")
+    yes.nodes.abs.pos <- paste0(yes.row.nodes[, abs.node.position], "_0")
+    no.nodes.abs.pos <- paste0(no.row.nodes[, abs.node.position], "_1")
     tree.matrix[ID %in% yes.row.nodes[, Yes], abs.node.position := yes.nodes.abs.pos]
     tree.matrix[ID %in% no.row.nodes[, No], abs.node.position := no.nodes.abs.pos]
@@ -92,19 +95,28 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5,
   nodes.dt <- tree.matrix[
       , .(Quality = sum(Quality))
      , by = .(abs.node.position, Feature)
-    ][, .(Text = paste0(Feature[1:min(length(Feature), features_keep)],
-                        " (",
-                        format(Quality[1:min(length(Quality), features_keep)], digits = 5),
-                        ")") %>%
-                 paste0(collapse = "\n"))
-     , by = abs.node.position]
+    ][, .(Text = paste0(
+            paste0(
+              Feature[1:min(length(Feature), features_keep)],
+              " (",
+              format(Quality[1:min(length(Quality), features_keep)], digits = 5),
+              ")"
+            ),
+            collapse = "\n"
+          )
+        )
+      , by = abs.node.position
+    ]
-  edges.dt <- tree.matrix[Feature != "Leaf", .(abs.node.position, Yes)] %>%
-    list(tree.matrix[Feature != "Leaf", .(abs.node.position, No)]) %>%
-    rbindlist() %>%
-    setnames(c("From", "To")) %>%
-    .[, .N, .(From, To)] %>%
-    .[, N := NULL]
+  edges.dt <- data.table::rbindlist(
+    l = list(
+      tree.matrix[Feature != "Leaf", .(abs.node.position, Yes)],
+      tree.matrix[Feature != "Leaf", .(abs.node.position, No)]
+    )
+  )
+  data.table::setnames(edges.dt, c("From", "To"))
+  edges.dt <- edges.dt[, .N, .(From, To)]
+  edges.dt[, N := NULL]
   nodes <- DiagrammeR::create_node_df(
     n = nrow(nodes.dt),
@@ -120,21 +132,25 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5,
     nodes_df = nodes,
     edges_df = edges,
     attr_theme = NULL
-  ) %>%
-    DiagrammeR::add_global_graph_attrs(
+  )
+  graph <- DiagrammeR::add_global_graph_attrs(
+    graph = graph,
     attr_type = "graph",
     attr = c("layout", "rankdir"),
     value = c("dot", "LR")
-  ) %>%
-    DiagrammeR::add_global_graph_attrs(
+  )
+  graph <- DiagrammeR::add_global_graph_attrs(
+    graph = graph,
     attr_type = "node",
     attr = c("color", "fillcolor", "style", "shape", "fontname"),
     value = c("DimGray", "beige", "filled", "rectangle", "Helvetica")
-  ) %>%
-    DiagrammeR::add_global_graph_attrs(
+  )
+  graph <- DiagrammeR::add_global_graph_attrs(
+    graph = graph,
     attr_type = "edge",
     attr = c("color", "arrowsize", "arrowhead", "fontname"),
-    value = c("DimGray", "1.5", "vee", "Helvetica"))
+    value = c("DimGray", "1.5", "vee", "Helvetica")
+  )
   if (!render) return(invisible(graph))


@@ -33,7 +33,7 @@
 #' @param col_loess a color to use for the loess curves.
 #' @param span_loess the \code{span} parameter in \code{\link[stats]{loess}}'s call.
 #' @param which whether to do univariate or bivariate plotting. NOTE: only 1D is implemented so far.
-#' @param plot whether a plot should be drawn. If FALSE, only a lits of matrices is returned.
+#' @param plot whether a plot should be drawn. If FALSE, only a list of matrices is returned.
 #' @param ... other parameters passed to \code{plot}.
 #'
 #' @details
@@ -157,7 +157,7 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
       plot(x2plot, y, pch = pch, xlab = f, col = col, xlim = x_lim, ylim = y_lim, ylab = ylab, ...)
       grid()
       if (plot_loess) {
-        # compress x to 3 digits, and mean-aggredate y
+        # compress x to 3 digits, and mean-aggregate y
         zz <- data.table(x = signif(x, 3), y)[, .(.N, y = mean(y)), x]
         if (nrow(zz) <= 5) {
           lines(zz$x, zz$y, col = col_loess)


@@ -34,7 +34,7 @@
 #' The branches that also used for missing values are marked as bold
 #' (as in "carrying extra capacity").
 #'
-#' This function uses \href{http://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR.
+#' This function uses \href{https://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR.
 #'
 #' @return
 #'
@@ -98,34 +98,46 @@ xgb.plot.tree <- function(feature_names = NULL, model = NULL, trees = NULL, plot
     data = dt$Feature,
     fontcolor = "black")
-  edges <- DiagrammeR::create_edge_df(
-    from = match(dt[Feature != "Leaf", c(ID)] %>% rep(2), dt$ID),
-    to = match(dt[Feature != "Leaf", c(Yes, No)], dt$ID),
-    label = dt[Feature != "Leaf", paste("<", Split)] %>%
-      c(rep("", nrow(dt[Feature != "Leaf"]))),
-    style = dt[Feature != "Leaf", ifelse(Missing == Yes, "bold", "solid")] %>%
-      c(dt[Feature != "Leaf", ifelse(Missing == No, "bold", "solid")]),
-    rel = "leading_to")
+  if (nrow(dt[Feature != "Leaf"]) != 0) {
+    edges <- DiagrammeR::create_edge_df(
+      from = match(rep(dt[Feature != "Leaf", c(ID)], 2), dt$ID),
+      to = match(dt[Feature != "Leaf", c(Yes, No)], dt$ID),
+      label = c(
+        dt[Feature != "Leaf", paste("<", Split)],
+        rep("", nrow(dt[Feature != "Leaf"]))
+      ),
+      style = c(
+        dt[Feature != "Leaf", ifelse(Missing == Yes, "bold", "solid")],
+        dt[Feature != "Leaf", ifelse(Missing == No, "bold", "solid")]
+      ),
+      rel = "leading_to")
+  } else {
+    edges <- NULL
+  }
   graph <- DiagrammeR::create_graph(
     nodes_df = nodes,
     edges_df = edges,
     attr_theme = NULL
-  ) %>%
-    DiagrammeR::add_global_graph_attrs(
+  )
+  graph <- DiagrammeR::add_global_graph_attrs(
+    graph = graph,
     attr_type = "graph",
     attr = c("layout", "rankdir"),
     value = c("dot", "LR")
-  ) %>%
-    DiagrammeR::add_global_graph_attrs(
+  )
+  graph <- DiagrammeR::add_global_graph_attrs(
+    graph = graph,
     attr_type = "node",
     attr = c("color", "style", "fontname"),
     value = c("DimGray", "filled", "Helvetica")
-  ) %>%
-    DiagrammeR::add_global_graph_attrs(
+  )
+  graph <- DiagrammeR::add_global_graph_attrs(
+    graph = graph,
     attr_type = "edge",
     attr = c("color", "arrowsize", "arrowhead", "fontname"),
-    value = c("DimGray", "1.5", "vee", "Helvetica"))
+    value = c("DimGray", "1.5", "vee", "Helvetica")
+  )
   if (!render) return(invisible(graph))
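
Illustration (not part of the diff): rendering with the refactored DiagrammeR calls, reusing `bst`; both plotting helpers need the optional DiagrammeR package:

    if (requireNamespace("DiagrammeR", quietly = TRUE)) {
      xgb.plot.tree(model = bst, trees = 0)                 # a single tree, 0-based index
      xgb.plot.multi.trees(model = bst, features_keep = 3)  # all trees projected onto one
    }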


@@ -42,6 +42,7 @@ xgb.save <- function(model, fname) {
          if (inherits(model, "xgb.DMatrix")) " Use xgb.DMatrix.save to save an xgb.DMatrix object." else "")
   }
   model <- xgb.Booster.complete(model, saveraw = FALSE)
+  fname <- path.expand(fname)
   .Call(XGBoosterSaveModel_R, model$handle, fname[1])
   return(TRUE)
 }
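
Illustration (not part of the diff): as with `xgb.dump()`, `path.expand()` lets home-relative paths work here; reusing `bst`:

    xgb.save(bst, "~/xgboost.model")  # "~" is expanded before the C call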


@@ -4,6 +4,14 @@
 #' Save xgboost model from xgboost or xgb.train
 #'
 #' @param model the model object.
+#' @param raw_format The format for encoding the booster. Available options are
+#'   \itemize{
+#'     \item \code{json}: Encode the booster into JSON text document.
+#'     \item \code{ubj}: Encode the booster into Universal Binary JSON.
+#'     \item \code{deprecated}: Encode the booster into old customized binary format.
+#'   }
+#'
+#' Right now the default is \code{deprecated} but will be changed to \code{ubj} in upcoming release.
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
@@ -17,7 +25,8 @@
 #' pred <- predict(bst, test$data)
 #'
 #' @export
-xgb.save.raw <- function(model) {
+xgb.save.raw <- function(model, raw_format = "deprecated") {
   handle <- xgb.get.handle(model)
-  .Call(XGBoosterModelToRaw_R, handle)
+  args <- list(format = raw_format)
+  .Call(XGBoosterSaveModelToRaw_R, handle, jsonlite::toJSON(args, auto_unbox = TRUE))
 }
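
Illustration (not part of the diff): choosing the raw encoding explicitly with the new `raw_format` argument, reusing `bst`:

    raw_json <- xgb.save.raw(bst, raw_format = "json")
    raw_ubj  <- xgb.save.raw(bst, raw_format = "ubj")
    bst_json <- xgb.load.raw(raw_json, as_booster = TRUE)  # format is auto-detected on load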


@@ -15,7 +15,7 @@
 #'
 #' 2. Booster Parameters
 #'
-#' 2.1. Parameter for Tree Booster
+#' 2.1. Parameters for Tree Booster
 #'
 #' \itemize{
 #' \item \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1} when it is added to the current approximation. Used to prevent overfitting by making the boosting process more conservative. Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model more robust to overfitting but slower to compute. Default: 0.3
@@ -24,12 +24,14 @@
 #' \item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
 #' \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
 #' \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
-#' \item \code{num_parallel_tree} Experimental parameter. number of trees to grow per round. Useful to test Random Forest through Xgboost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
+#' \item \code{lambda} L2 regularization term on weights. Default: 1
+#' \item \code{alpha} L1 regularization term on weights. (there is no L1 reg on bias because it is not important). Default: 0
+#' \item \code{num_parallel_tree} Experimental parameter. number of trees to grow per round. Useful to test Random Forest through XGBoost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
 #' \item \code{monotone_constraints} A numerical vector consists of \code{1}, \code{0} and \code{-1} with its length equals to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.
 #' \item \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions. Each item of the list represents one permitted interaction where specified features are allowed to interact with each other. Feature index values should start from \code{0} (\code{0} references the first column). Leave argument unspecified for no interaction constraints.
 #' }
 #'
-#' 2.2. Parameter for Linear Booster
+#' 2.2. Parameters for Linear Booster
 #'
 #' \itemize{
 #' \item \code{lambda} L2 regularization term on weights. Default: 0
@@ -49,10 +51,10 @@
 #' \item \code{binary:logistic} logistic regression for binary classification. Output probability.
 #' \item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation.
 #' \item \code{binary:hinge}: hinge loss for binary classification. This makes predictions of 0 or 1, rather than producing probabilities.
-#' \item \code{count:poisson}: poisson regression for count data, output mean of poisson distribution. \code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization).
+#' \item \code{count:poisson}: Poisson regression for count data, output mean of Poisson distribution. \code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization).
 #' \item \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored). Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional hazard function \code{h(t) = h0(t) * HR)}.
 #' \item \code{survival:aft}: Accelerated failure time model for censored survival time data. See \href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time} for details.
-#' \item \code{aft_loss_distribution}: Probabilty Density Function used by \code{survival:aft} and \code{aft-nloglik} metric.
+#' \item \code{aft_loss_distribution}: Probability Density Function used by \code{survival:aft} and \code{aft-nloglik} metric.
 #' \item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class - 1}.
 #' \item \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class.
 #' \item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss.
@@ -124,11 +126,11 @@
 #' Parallelization is automatically enabled if \code{OpenMP} is present.
 #' Number of threads can also be manually specified via \code{nthread} parameter.
 #'
-#' The evaluation metric is chosen automatically by Xgboost (according to the objective)
+#' The evaluation metric is chosen automatically by XGBoost (according to the objective)
 #' when the \code{eval_metric} parameter is not provided.
 #' User may set one or several \code{eval_metric} parameters.
 #' Note that when using a customized metric, only this single metric can be used.
-#' The following is the list of built-in metrics for which Xgboost provides optimized implementation:
+#' The following is the list of built-in metrics for which XGBoost provides optimized implementation:
 #' \itemize{
 #' \item \code{rmse} root mean square error. \url{https://en.wikipedia.org/wiki/Root_mean_square_error}
 #' \item \code{logloss} negative log-likelihood. \url{https://en.wikipedia.org/wiki/Log-likelihood}
@@ -169,9 +171,6 @@
 #' explicitly passed.
 #' \item \code{best_iteration} iteration number with the best evaluation metric value
 #' (only available with early stopping).
-#' \item \code{best_ntreelimit} the \code{ntreelimit} value corresponding to the best iteration,
-#' which could further be used in \code{predict} method
-#' (only available with early stopping).
 #' \item \code{best_score} the best evaluation metric value during early stopping.
 #' (only available with early stopping).
 #' \item \code{feature_names} names of the training dataset features
@@ -193,8 +192,8 @@
 #' data(agaricus.train, package='xgboost')
 #' data(agaricus.test, package='xgboost')
 #'
-#' dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
-#' dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
+#' dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
+#' dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
 #' watchlist <- list(train = dtrain, eval = dtest)
 #'
 #' ## A simple xgb.train example:
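
Illustration (not part of the diff): the newly documented tree-booster regularizers are passed like any other parameter; a sketch reusing `dtrain` and `watchlist` from the example above:

    bst <- xgb.train(params = list(max_depth = 2, eta = 0.3, nthread = 2,
                                   lambda = 1, alpha = 0,          # L2 / L1 terms, per the docs above
                                   objective = "binary:logistic"),
                     data = dtrain, nrounds = 5, watchlist = watchlist)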


@@ -1,11 +1,21 @@
 #' Load the instance back from \code{\link{xgb.serialize}}
 #'
 #' @param buffer the buffer containing booster instance saved by \code{\link{xgb.serialize}}
+#' @param handle An \code{xgb.Booster.handle} object which will be overwritten with
+#'   the new deserialized object. Must be a null handle (e.g. when loading the model through
+#'   `readRDS`). If not provided, a new handle will be created.
+#' @return An \code{xgb.Booster.handle} object.
 #'
 #' @export
-xgb.unserialize <- function(buffer) {
+xgb.unserialize <- function(buffer, handle = NULL) {
   cachelist <- list()
-  handle <- .Call(XGBoosterCreate_R, cachelist)
+  if (is.null(handle)) {
+    handle <- .Call(XGBoosterCreate_R, cachelist)
+  } else {
+    if (!is.null.handle(handle))
+      stop("'handle' is not null/empty. Cannot overwrite existing handle.")
+    .Call(XGBoosterCreateInEmptyObj_R, cachelist, handle)
+  }
   tryCatch(
     .Call(XGBoosterUnserializeFromBuffer_R, handle, buffer),
     error = function(e) {
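
Illustration (not part of the diff): `xgb.serialize()` captures the full booster state, and the buffer can be restored into a fresh handle; reusing `bst`:

    buf <- xgb.serialize(bst)
    h <- xgb.unserialize(buf)  # returns an xgb.Booster.handle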


@@ -9,8 +9,8 @@ xgboost <- function(data = NULL, label = NULL, missing = NA, weight = NULL,
                     early_stopping_rounds = NULL, maximize = NULL,
                     save_period = NULL, save_name = "xgboost.model",
                     xgb_model = NULL, callbacks = list(), ...) {
-  dtrain <- xgb.get.DMatrix(data, label, missing, weight)
+  merged <- check.booster.params(params, ...)
+  dtrain <- xgb.get.DMatrix(data, label, missing, weight, nthread = merged$nthread)
   watchlist <- list(train = dtrain)
@@ -90,7 +90,8 @@ NULL
 #' @importFrom data.table setkey
 #' @importFrom data.table setkeyv
 #' @importFrom data.table setnames
-#' @importFrom magrittr %>%
+#' @importFrom jsonlite fromJSON
+#' @importFrom jsonlite toJSON
 #' @importFrom utils object.size str tail
 #' @importFrom stats predict
 #' @importFrom stats median


@@ -30,4 +30,4 @@ Examples
 Development
 -----------
-* See the [R Package section](https://xgboost.readthedocs.io/en/latest/contribute.html#r-package) of the contributors guide.
+* See the [R Package section](https://xgboost.readthedocs.io/en/latest/contrib/coding_guide.html#r-coding-guideline) of the contributors guide.


@@ -1,4 +1,3 @@
 #!/bin/sh
 rm -f src/Makevars
-rm -f CMakeLists.txt

R-package/configure (vendored): 1841-line diff suppressed because it is too large.


@@ -2,10 +2,25 @@
 AC_PREREQ(2.69)
-AC_INIT([xgboost],[0.6-3],[],[xgboost],[])
-# Use this line to set CC variable to a C compiler
-AC_PROG_CC
+AC_INIT([xgboost],[1.7.6],[],[xgboost],[])
+: ${R_HOME=`R RHOME`}
+if test -z "${R_HOME}"; then
+  echo "could not determine R_HOME"
+  exit 1
+fi
+CXX17=`"${R_HOME}/bin/R" CMD config CXX17`
+CXX17STD=`"${R_HOME}/bin/R" CMD config CXX17STD`
+CXX="${CXX17} ${CXX17STD}"
+CXXFLAGS=`"${R_HOME}/bin/R" CMD config CXXFLAGS`
+CC=`"${R_HOME}/bin/R" CMD config CC`
+CFLAGS=`"${R_HOME}/bin/R" CMD config CFLAGS`
+CPPFLAGS=`"${R_HOME}/bin/R" CMD config CPPFLAGS`
+LDFLAGS=`"${R_HOME}/bin/R" CMD config LDFLAGS`
+AC_LANG(C++)
 ### Check whether backtrace() is part of libc or the external lib libexecinfo
 AC_MSG_CHECKING([Backtrace lib])
@@ -28,12 +43,19 @@ fi
 if test `uname -s` = "Darwin"
 then
-  OPENMP_CXXFLAGS='-Xclang -fopenmp'
-  OPENMP_LIB='-lomp'
+  if command -v brew &> /dev/null
+  then
+    HOMEBREW_LIBOMP_PREFIX=`brew --prefix libomp`
+  else
+    # Homebrew not found
+    HOMEBREW_LIBOMP_PREFIX=''
+  fi
+  OPENMP_CXXFLAGS="-Xpreprocessor -fopenmp -I${HOMEBREW_LIBOMP_PREFIX}/include"
+  OPENMP_LIB="-lomp -L${HOMEBREW_LIBOMP_PREFIX}/lib"
   ac_pkg_openmp=no
   AC_MSG_CHECKING([whether OpenMP will work in a package])
   AC_LANG_CONFTEST([AC_LANG_PROGRAM([[#include <omp.h>]], [[ return (omp_get_max_threads() <= 1); ]])])
-  ${CC} -o conftest conftest.c ${OPENMP_LIB} ${OPENMP_CXXFLAGS} 2>/dev/null && ./conftest && ac_pkg_openmp=yes
+  ${CXX} -o conftest conftest.cpp ${CPPFLAGS} ${LDFLAGS} ${OPENMP_LIB} ${OPENMP_CXXFLAGS} 2>/dev/null && ./conftest && ac_pkg_openmp=yes
   AC_MSG_RESULT([${ac_pkg_openmp}])
   if test "${ac_pkg_openmp}" = no; then
     OPENMP_CXXFLAGS=''


@@ -1,6 +1,6 @@
 basic_walkthrough        Basic feature walkthrough
 caret_wrapper            Use xgboost to train in caret library
-custom_objective         Cutomize loss function, and evaluation metric
+custom_objective         Customize loss function, and evaluation metric
 boost_from_prediction    Boosting from existing prediction
 predict_first_ntree      Predicting using first n trees
 generalized_linear_model Generalized Linear Model
@@ -8,8 +8,8 @@ cross_validation         Cross validation
 create_sparse_matrix     Create Sparse Matrix
 predict_leaf_indices     Predicting the corresponding leaves
 early_stopping           Early Stop in training
-poisson_regression       Poisson Regression on count data
-tweedie_regression       Tweddie Regression
+poisson_regression       Poisson regression on count data
+tweedie_regression       Tweedie regression
 gpu_accelerated          GPU-accelerated tree building algorithms
 interaction_constraints  Interaction constraints among features


@@ -2,7 +2,7 @@ XGBoost R Feature Walkthrough
 ====
 * [Basic walkthrough of wrappers](basic_walkthrough.R)
 * [Train a xgboost model from caret library](caret_wrapper.R)
-* [Cutomize loss function, and evaluation metric](custom_objective.R)
+* [Customize loss function, and evaluation metric](custom_objective.R)
 * [Boosting from existing prediction](boost_from_prediction.R)
 * [Predicting using first n trees](predict_first_ntree.R)
 * [Generalized Linear Model](generalized_linear_model.R)


@@ -40,7 +40,7 @@ print("Train xgboost with verbose 2, also print information about tree")
 bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nrounds = 2,
                nthread = 2, objective = "binary:logistic", verbose = 2)
-# you can also specify data as file path to a LibSVM format input
+# you can also specify data as file path to a LIBSVM format input
 # since we do not have this file with us, the following line is just for illustration
 # bst <- xgboost(data = 'agaricus.train.svm', max_depth = 2, eta = 1, nrounds = 2,objective = "binary:logistic")
@@ -63,7 +63,7 @@ print(paste("sum(abs(pred2-pred))=", sum(abs(pred2 - pred))))
 # save model to R's raw vector
 raw <- xgb.save.raw(bst)
 # load binary model to R
-bst3 <- xgb.load(raw)
+bst3 <- xgb.load.raw(raw)
 pred3 <- predict(bst3, test$data)
 # pred3 should be identical to pred
 print(paste("sum(abs(pred3-pred))=", sum(abs(pred3 - pred))))


@@ -2,17 +2,17 @@ require(xgboost)
 require(Matrix)
 require(data.table)
 if (!require(vcd)) {
-  install.packages('vcd') #Available in Cran. Used for its dataset with categorical values.
+  install.packages('vcd') #Available in CRAN. Used for its dataset with categorical values.
   require(vcd)
 }
-# According to its documentation, Xgboost works only on numbers.
+# According to its documentation, XGBoost works only on numbers.
 # Sometimes the dataset we have to work on have categorical data.
 # A categorical variable is one which have a fixed number of values. By example, if for each observation a variable called "Colour" can have only "red", "blue" or "green" as value, it is a categorical variable.
 #
 # In R, categorical variable is called Factor.
 # Type ?factor in console for more information.
 #
-# In this demo we will see how to transform a dense dataframe with categorical variables to a sparse matrix before analyzing it in Xgboost.
+# In this demo we will see how to transform a dense dataframe with categorical variables to a sparse matrix before analyzing it in XGBoost.
 # The method we are going to see is usually called "one hot encoding".
 #load Arthritis dataset in memory.
@@ -25,13 +25,13 @@ df <- data.table(Arthritis, keep.rownames = FALSE)
 cat("Print the dataset\n")
 print(df)
-# 2 columns have factor type, one has ordinal type (ordinal variable is a categorical variable with values wich can be ordered, here: None > Some > Marked).
+# 2 columns have factor type, one has ordinal type (ordinal variable is a categorical variable with values which can be ordered, here: None > Some > Marked).
 cat("Structure of the dataset\n")
 str(df)
 # Let's add some new categorical features to see if it helps. Of course these feature are highly correlated to the Age feature. Usually it's not a good thing in ML, but Tree algorithms (including boosted trees) are able to select the best features, even in case of highly correlated features.
-# For the first feature we create groups of age by rounding the real age. Note that we transform it to factor (categorical data) so the algorithm treat them as independant values.
+# For the first feature we create groups of age by rounding the real age. Note that we transform it to factor (categorical data) so the algorithm treat them as independent values.
 df[, AgeDiscret := as.factor(round(Age / 10, 0))]
 # Here is an even stronger simplification of the real age with an arbitrary split at 30 years old. I choose this value based on nothing. We will see later if simplifying the information based on arbitrary values is a good strategy (I am sure you already have an idea of how well it will work!).


@@ -22,10 +22,10 @@ xgb.cv(param, dtrain, nrounds, nfold = 5,
        metrics = 'error', showsd = FALSE)
 ###
-# you can also do cross validation with cutomized loss function
+# you can also do cross validation with customized loss function
 # See custom_objective.R
 ##
-print ('running cross validation, with cutomsized loss function')
+print ('running cross validation, with customized loss function')
 logregobj <- function(preds, dtrain) {
   labels <- getinfo(dtrain, "label")


@@ -12,7 +12,7 @@ watchlist <- list(eval = dtest, train = dtrain)
 num_round <- 2
 # user define objective function, given prediction, return gradient and second order gradient
-# this is loglikelihood loss
+# this is log likelihood loss
 logregobj <- function(preds, dtrain) {
   labels <- getinfo(dtrain, "label")
   preds <- 1 / (1 + exp(-preds))
@@ -23,9 +23,9 @@ logregobj <- function(preds, dtrain) {
 # user defined evaluation function, return a pair metric_name, result
 # NOTE: when you do customized loss function, the default prediction value is margin
-# this may make buildin evalution metric not function properly
+# this may make builtin evaluation metric not function properly
 # for example, we are doing logistic loss, the prediction is score before logistic transformation
-# the buildin evaluation error assumes input is after logistic transformation
+# the builtin evaluation error assumes input is after logistic transformation
 # Take this in mind when you use the customization, and maybe you need write customized evaluation function
 evalerror <- function(preds, dtrain) {
   labels <- getinfo(dtrain, "label")


@@ -11,7 +11,7 @@ param <- list(max_depth = 2, eta = 1, nthread = 2, verbosity = 0)
 watchlist <- list(eval = dtest)
 num_round <- 20
 # user define objective function, given prediction, return gradient and second order gradient
-# this is loglikelihood loss
+# this is log likelihood loss
 logregobj <- function(preds, dtrain) {
   labels <- getinfo(dtrain, "label")
   preds <- 1 / (1 + exp(-preds))
@@ -21,9 +21,9 @@ logregobj <- function(preds, dtrain) {
 }
 # user defined evaluation function, return a pair metric_name, result
 # NOTE: when you do customized loss function, the default prediction value is margin
-# this may make buildin evalution metric not function properly
+# this may make builtin evaluation metric not function properly
 # for example, we are doing logistic loss, the prediction is score before logistic transformation
-# the buildin evaluation error assumes input is after logistic transformation
+# the builtin evaluation error assumes input is after logistic transformation
 # Take this in mind when you use the customization, and maybe you need write customized evaluation function
 evalerror <- function(preds, dtrain) {
   labels <- getinfo(dtrain, "label")


@@ -38,10 +38,7 @@ The following additional fields are assigned to the model's R object:
 \itemize{
   \item \code{best_score} the evaluation score at the best iteration
   \item \code{best_iteration} at which boosting iteration the best score has occurred (1-based index)
-  \item \code{best_ntreelimit} to use with the \code{ntreelimit} parameter in \code{predict}.
-        It differs from \code{best_iteration} in multiclass or random forest settings.
 }
 The Same values are also stored as xgb-attributes:
 \itemize{
   \item \code{best_iteration} is stored as a 0-based iteration index (for interoperability of binary models)


@@ -8,16 +8,18 @@ during its training.}
 cb.gblinear.history(sparse = FALSE)
 }
 \arguments{
-\item{sparse}{when set to FALSE/TURE, a dense/sparse matrix is used to store the result.
+\item{sparse}{when set to FALSE/TRUE, a dense/sparse matrix is used to store the result.
 Sparse format is useful when one expects only a subset of coefficients to be non-zero,
 when using the "thrifty" feature selector with fairly small number of top features
 selected per iteration.}
 }
 \value{
 Results are stored in the \code{coefs} element of the closure.
-The \code{\link{xgb.gblinear.history}} convenience function provides an easy way to access it.
+The \code{\link{xgb.gblinear.history}} convenience function provides an easy
+way to access it.
 With \code{xgb.train}, it is either a dense of a sparse matrix.
-While with \code{xgb.cv}, it is a list (an element per each fold) of such matrices.
+While with \code{xgb.cv}, it is a list (an element per each fold) of such
+matrices.
 }
 \description{
 Callback closure for collecting the model coefficients history of a gblinear booster
@@ -36,10 +38,9 @@ Callback function expects the following values to be set in its calling frame:
 #
 # In the iris dataset, it is hard to linearly separate Versicolor class from the rest
 # without considering the 2nd order interactions:
-require(magrittr)
 x <- model.matrix(Species ~ .^2, iris)[,-1]
 colnames(x)
-dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"))
+dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"), nthread = 2)
 param <- list(booster = "gblinear", objective = "reg:logistic", eval_metric = "auc",
               lambda = 0.0003, alpha = 0.0003, nthread = 2)
 # For 'shotgun', which is a default linear updater, using high eta values may result in
@@ -57,21 +58,21 @@ matplot(coef_path, type = 'l')
 bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 0.8,
                  updater = 'coord_descent', feature_selector = 'thrifty', top_k = 1,
                  callbacks = list(cb.gblinear.history()))
-xgb.gblinear.history(bst) \%>\% matplot(type = 'l')
+matplot(xgb.gblinear.history(bst), type = 'l')
 # Componentwise boosting is known to have similar effect to Lasso regularization.
 # Try experimenting with various values of top_k, eta, nrounds,
 # as well as different feature_selectors.
 # For xgb.cv:
 bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 100, eta = 0.8,
               callbacks = list(cb.gblinear.history()))
 # coefficients in the CV fold #3
-xgb.gblinear.history(bst)[[3]] \%>\% matplot(type = 'l')
+matplot(xgb.gblinear.history(bst)[[3]], type = 'l')
 #### Multiclass classification:
 #
-dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1)
+dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1, nthread = 2)
 param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
               lambda = 0.0003, alpha = 0.0003, nthread = 2)
 # For the default linear updater 'shotgun' it sometimes is helpful
@@ -79,15 +80,15 @@ param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
 bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 70, eta = 0.5,
                  callbacks = list(cb.gblinear.history()))
 # Will plot the coefficient paths separately for each class:
-xgb.gblinear.history(bst, class_index = 0) \%>\% matplot(type = 'l')
-xgb.gblinear.history(bst, class_index = 1) \%>\% matplot(type = 'l')
-xgb.gblinear.history(bst, class_index = 2) \%>\% matplot(type = 'l')
+matplot(xgb.gblinear.history(bst, class_index = 0), type = 'l')
+matplot(xgb.gblinear.history(bst, class_index = 1), type = 'l')
+matplot(xgb.gblinear.history(bst, class_index = 2), type = 'l')
 # CV:
 bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 70, eta = 0.5,
               callbacks = list(cb.gblinear.history(FALSE)))
-# 1st forld of 1st class
-xgb.gblinear.history(bst, class_index = 0)[[1]] \%>\% matplot(type = 'l')
+# 1st fold of 1st class
+matplot(xgb.gblinear.history(bst, class_index = 0)[[1]], type = 'l')
 }
 \seealso{
\seealso{ \seealso{


@@ -19,7 +19,7 @@ be directly used with an \code{xgb.DMatrix} object.
 \examples{
 data(agaricus.train, package='xgboost')
 train <- agaricus.train
-dtrain <- xgb.DMatrix(train$data, label=train$label)
+dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
 stopifnot(nrow(dtrain) == nrow(train$data))
 stopifnot(ncol(dtrain) == ncol(train$data))


@@ -26,7 +26,7 @@ Since row names are irrelevant, it is recommended to use \code{colnames} directl
 \examples{
 data(agaricus.train, package='xgboost')
 train <- agaricus.train
-dtrain <- xgb.DMatrix(train$data, label=train$label)
+dtrain <- xgb.DMatrix(train$data, label=train$label, nthread = 2)
 dimnames(dtrain)
 colnames(dtrain)
 colnames(dtrain) <- make.names(1:ncol(train$data))


@@ -23,9 +23,9 @@ Get information of an xgb.DMatrix object
 The \code{name} field can be one of the following:
 \itemize{
-  \item \code{label}: label Xgboost learn from ;
+  \item \code{label}: label XGBoost learn from ;
   \item \code{weight}: to do a weight rescale ;
-  \item \code{base_margin}: base margin is the base prediction Xgboost will boost from ;
+  \item \code{base_margin}: base margin is the base prediction XGBoost will boost from ;
   \item \code{nrow}: number of rows of the \code{xgb.DMatrix}.
 }
@@ -34,8 +34,7 @@ The \code{name} field can be one of the following:
 }
 \examples{
 data(agaricus.train, package='xgboost')
-train <- agaricus.train
-dtrain <- xgb.DMatrix(train$data, label=train$label)
+dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
 labels <- getinfo(dtrain, 'label')
 setinfo(dtrain, 'label', 1-labels)


@@ -17,6 +17,8 @@
predinteraction = FALSE, predinteraction = FALSE,
reshape = FALSE, reshape = FALSE,
training = FALSE, training = FALSE,
iterationrange = NULL,
strict_shape = FALSE,
... ...
) )
@@ -25,7 +27,11 @@
\arguments{ \arguments{
\item{object}{Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}} \item{object}{Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}}
\item{newdata}{takes \code{matrix}, \code{dgCMatrix}, local data file or \code{xgb.DMatrix}.} \item{newdata}{takes \code{matrix}, \code{dgCMatrix}, \code{dgRMatrix}, \code{dsparseVector},
local data file or \code{xgb.DMatrix}.
For single-row predictions on sparse data, it's recommended to use CSR format. If passing
a sparse vector, it will take it as a row vector.}
\item{missing}{Missing is only used when input is dense matrix. Pick a float value that represents \item{missing}{Missing is only used when input is dense matrix. Pick a float value that represents
missing values in data (e.g., sometimes 0 or some other extreme value is used).} missing values in data (e.g., sometimes 0 or some other extreme value is used).}
@@ -34,8 +40,7 @@ missing values in data (e.g., sometimes 0 or some other extreme value is used).}
sum of predictions from boosting iterations' results. E.g., setting \code{outputmargin=TRUE} for sum of predictions from boosting iterations' results. E.g., setting \code{outputmargin=TRUE} for
logistic regression would result in predictions for log-odds instead of probabilities.} logistic regression would result in predictions for log-odds instead of probabilities.}
\item{ntreelimit}{limit the number of model's trees or boosting iterations used in prediction (see Details). \item{ntreelimit}{Deprecated, use \code{iterationrange} instead.}
It will use all the trees by default (\code{NULL} value).}
\item{predleaf}{whether predict leaf index.} \item{predleaf}{whether predict leaf index.}
@@ -52,10 +57,20 @@ or predinteraction flags is TRUE.}
\item{training}{whether is the prediction result used for training. For dart booster, \item{training}{whether is the prediction result used for training. For dart booster,
training predicting will perform dropout.} training predicting will perform dropout.}
\item{iterationrange}{Specifies which layers of trees are used in prediction. For
example, if a random forest is trained with 100 rounds, specifying
\code{iterationrange = c(1, 21)} means that only the forests built during rounds
[1, 21) (a half-open interval) are used in this prediction. The index is 1-based,
just like R vectors. When set to \code{c(1, 1)}, XGBoost uses all trees.}
\item{strict_shape}{Default is \code{FALSE}. When it's set to \code{TRUE}, output
type and shape of prediction are invariant to model type.}
\item{...}{Parameters passed to \code{predict.xgb.Booster}} \item{...}{Parameters passed to \code{predict.xgb.Booster}}
} }
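To make the 1-based, half-open semantics of \code{iterationrange} concrete, here is a
minimal sketch (all object names are illustrative: it assumes a regression booster
\code{bst} fitted with \code{nrounds = 10} on a numeric matrix \code{x}):

p_all   <- predict(bst, x)                            # all 10 iterations
p_first <- predict(bst, x, iterationrange = c(1, 2))  # iteration 1 only: [1, 2)
p_half  <- predict(bst, x, iterationrange = c(1, 6))  # iterations 1..5: [1, 6)
stopifnot(length(p_all) == nrow(x))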
\value{ \value{
For regression or binary classification, it returns a vector of length \code{nrows(newdata)}. The return type differs depending on whether \code{strict_shape} is set to \code{TRUE}. By default,
for regression or binary classification, it returns a vector of length \code{nrows(newdata)}.
For multiclass classification, either a \code{num_class * nrows(newdata)} vector or For multiclass classification, either a \code{num_class * nrows(newdata)} vector or
a \code{(nrows(newdata), num_class)} dimension matrix is returned, depending on a \code{(nrows(newdata), num_class)} dimension matrix is returned, depending on
the \code{reshape} value. the \code{reshape} value.
@@ -76,18 +91,19 @@ two dimensions. The "+ 1" columns corresponds to bias. Summing this array along
produce practically the same result as predict with \code{predcontrib = TRUE}. produce practically the same result as predict with \code{predcontrib = TRUE}.
For a multiclass case, a list of \code{num_class} elements is returned, where each element is For a multiclass case, a list of \code{num_class} elements is returned, where each element is
such an array. such an array.
When \code{strict_shape} is set to \code{TRUE}, the output is always an array. For
normal prediction, the output is a two-dimensional array \code{(num_class, nrow(newdata))}.
For \code{predcontrib = TRUE}, the output is \code{(ncol(newdata) + 1, num_class, nrow(newdata))}.
For \code{predinteraction = TRUE}, the output is \code{(ncol(newdata) + 1, ncol(newdata) + 1, num_class, nrow(newdata))}.
For \code{predleaf = TRUE}, the output is \code{(n_trees_in_forest, num_class, n_iterations, nrow(newdata))}.
} }
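A small sketch of how the shape changes under \code{strict_shape} (hypothetical
binary-classification booster \code{bst} and matrix \code{x}; the dimensions follow
the description above):

p  <- predict(bst, x)                        # plain vector of length nrow(x)
ps <- predict(bst, x, strict_shape = TRUE)   # array of dim c(num_class, nrow(x))
dim(ps)                                      # here: c(1, nrow(x))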
\description{ \description{
Predicted values based on either xgboost model or model handle object. Predicted values based on either xgboost model or model handle object.
} }
\details{ \details{
Note that \code{ntreelimit} is not necessarily equal to the number of boosting iterations Note that \code{iterationrange} would currently do nothing for predictions from gblinear,
and it is not necessarily equal to the number of trees in a model.
E.g., in a random forest-like model, \code{ntreelimit} would limit the number of trees.
But for multiclass classification, while there are multiple trees per iteration,
\code{ntreelimit} limits the number of boosting iterations.
Also note that \code{ntreelimit} would currently do nothing for predictions from gblinear,
since gblinear doesn't keep its boosting history. since gblinear doesn't keep its boosting history.
One possible practical application of the \code{predleaf} option is to use the model
@@ -120,7 +136,7 @@ bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
# use all trees by default # use all trees by default
pred <- predict(bst, test$data) pred <- predict(bst, test$data)
# use only the 1st tree # use only the 1st tree
pred1 <- predict(bst, test$data, ntreelimit = 1) pred1 <- predict(bst, test$data, iterationrange = c(1, 2))
# Predicting tree leafs: # Predicting tree leafs:
# the result is an nsamples X ntrees matrix # the result is an nsamples X ntrees matrix
@@ -172,25 +188,9 @@ str(pred)
all.equal(pred, pred_labels) all.equal(pred, pred_labels)
# prediction from using only 5 iterations should result # prediction from using only 5 iterations should result
# in the same error as seen in iteration 5: # in the same error as seen in iteration 5:
pred5 <- predict(bst, as.matrix(iris[, -5]), ntreelimit=5) pred5 <- predict(bst, as.matrix(iris[, -5]), iterationrange=c(1, 6))
sum(pred5 != lb)/length(lb) sum(pred5 != lb)/length(lb)
## random forest-like model of 25 trees for binary classification:
set.seed(11)
bst <- xgboost(data = train$data, label = train$label, max_depth = 5,
nthread = 2, nrounds = 1, objective = "binary:logistic",
num_parallel_tree = 25, subsample = 0.6, colsample_bytree = 0.1)
# Inspect the prediction error vs number of trees:
lb <- test$label
dtest <- xgb.DMatrix(test$data, label=lb)
err <- sapply(1:25, function(n) {
pred <- predict(bst, dtest, ntreelimit=n)
sum((pred > 0.5) != lb)/length(lb)
})
plot(err, type='l', ylim=c(0,0.1), xlab='#trees')
} }
\references{ \references{
Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions", NIPS Proceedings 2017, \url{https://arxiv.org/abs/1705.07874} Scott M. Lundberg, Su-In Lee, "A Unified Approach to Interpreting Model Predictions", NIPS Proceedings 2017, \url{https://arxiv.org/abs/1705.07874}

View File

@@ -19,8 +19,7 @@ Currently it displays dimensions and presence of info-fields and colnames.
} }
\examples{ \examples{
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')
train <- agaricus.train dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtrain <- xgb.DMatrix(train$data, label=train$label)
dtrain dtrain
print(dtrain, verbose=TRUE) print(dtrain, verbose=TRUE)

View File

@@ -25,16 +25,15 @@ Set information of an xgb.DMatrix object
The \code{name} field can be one of the following: The \code{name} field can be one of the following:
\itemize{ \itemize{
\item \code{label}: label Xgboost learn from ; \item \code{label}: the label XGBoost learns from;
\item \code{weight}: to do a weight rescale ; \item \code{weight}: instance weights used to rescale each row;
\item \code{base_margin}: base margin is the base prediction Xgboost will boost from ; \item \code{base_margin}: the base margin is the base prediction XGBoost will boost from;
\item \code{group}: number of rows in each group (to use with \code{rank:pairwise} objective). \item \code{group}: number of rows in each group (to use with \code{rank:pairwise} objective).
} }
} }
\examples{ \examples{
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')
train <- agaricus.train dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtrain <- xgb.DMatrix(train$data, label=train$label)
labels <- getinfo(dtrain, 'label') labels <- getinfo(dtrain, 'label')
setinfo(dtrain, 'label', 1-labels) setinfo(dtrain, 'label', 1-labels)

View File

@@ -28,8 +28,7 @@ original xgb.DMatrix object
} }
\examples{ \examples{
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')
train <- agaricus.train dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtrain <- xgb.DMatrix(train$data, label=train$label)
dsub <- slice(dtrain, 1:42) dsub <- slice(dtrain, 1:42)
labels1 <- getinfo(dsub, 'label') labels1 <- getinfo(dsub, 'label')

View File

@@ -4,11 +4,20 @@
\alias{xgb.DMatrix} \alias{xgb.DMatrix}
\title{Construct xgb.DMatrix object} \title{Construct xgb.DMatrix object}
\usage{ \usage{
xgb.DMatrix(data, info = list(), missing = NA, silent = FALSE, ...) xgb.DMatrix(
data,
info = list(),
missing = NA,
silent = FALSE,
nthread = NULL,
...
)
} }
\arguments{ \arguments{
\item{data}{a \code{matrix} object (either numeric or integer), a \code{dgCMatrix} object, or a character \item{data}{a \code{matrix} object (either numeric or integer), a \code{dgCMatrix} object,
string representing a filename.} a \code{dgRMatrix} object (only when making predictions from a fitted model),
a \code{dsparseVector} object (only when making predictions from a fitted model, will be
interpreted as a row vector), or a character string representing a filename.}
\item{info}{a named list of additional information to store in the \code{xgb.DMatrix} object. \item{info}{a named list of additional information to store in the \code{xgb.DMatrix} object.
See \code{\link{setinfo}} for the specific allowed kinds of information.}
@@ -18,17 +27,18 @@ It is useful when a 0 or some other extreme value represents missing values in d
\item{silent}{whether to suppress printing an informational message after loading from a file.} \item{silent}{whether to suppress printing an informational message after loading from a file.}
\item{nthread}{Number of threads used for creating DMatrix.}
\item{...}{the \code{info} data could be passed directly as parameters, without creating an \code{info} list.} \item{...}{the \code{info} data could be passed directly as parameters, without creating an \code{info} list.}
} }
\description{ \description{
Construct xgb.DMatrix object from either a dense matrix, a sparse matrix, or a local file. Construct xgb.DMatrix object from either a dense matrix, a sparse matrix, or a local file.
Supported input file formats are either a libsvm text file or a binary file that was created previously by Supported input file formats are either a LIBSVM text file or a binary file that was created previously by
\code{\link{xgb.DMatrix.save}}). \code{\link{xgb.DMatrix.save}}).
} }
\examples{ \examples{
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')
train <- agaricus.train dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtrain <- xgb.DMatrix(train$data, label=train$label)
xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data') xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
dtrain <- xgb.DMatrix('xgb.DMatrix.data') dtrain <- xgb.DMatrix('xgb.DMatrix.data')
if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data') if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')
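A sketch of the new sparse single-row prediction path (assuming a booster \code{bst}
fitted on \code{agaricus.train}; \code{sparseVector} comes from the Matrix package and
is interpreted as one row, with CSR used internally):

library(Matrix)
x_row <- sparseVector(x = c(1, 1), i = c(3, 10),
                      length = ncol(agaricus.train$data))
p_one <- predict(bst, newdata = x_row)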

View File

@@ -16,8 +16,7 @@ Save xgb.DMatrix object to binary file
} }
\examples{ \examples{
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')
train <- agaricus.train dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtrain <- xgb.DMatrix(train$data, label=train$label)
xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data') xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
dtrain <- xgb.DMatrix('xgb.DMatrix.data') dtrain <- xgb.DMatrix('xgb.DMatrix.data')
if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data') if (file.exists('xgb.DMatrix.data')) file.remove('xgb.DMatrix.data')

View File

@@ -29,7 +29,7 @@ Joaquin Quinonero Candela)}
International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014 International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
\url{https://research.fb.com/publications/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}. \url{https://research.facebook.com/publications/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
Extract explaining the method: Extract explaining the method:
@@ -59,8 +59,8 @@ a rule on certain features."
\examples{ \examples{
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost') data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label) dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label) dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic') param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
nrounds = 4 nrounds = 4
@@ -76,8 +76,12 @@ new.features.train <- xgb.create.features(model = bst, agaricus.train$data)
new.features.test <- xgb.create.features(model = bst, agaricus.test$data) new.features.test <- xgb.create.features(model = bst, agaricus.test$data)
# learning with new features # learning with new features
new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label) new.dtrain <- xgb.DMatrix(
new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label) data = new.features.train, label = agaricus.train$label, nthread = 2
)
new.dtest <- xgb.DMatrix(
data = new.features.test, label = agaricus.test$label, nthread = 2
)
watchlist <- list(train = new.dtrain) watchlist <- list(train = new.dtrain)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2) bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)

View File

@@ -135,9 +135,7 @@ An object of class \code{xgb.cv.synchronous} with the following elements:
parameter or randomly generated. parameter or randomly generated.
\item \code{best_iteration} iteration number with the best evaluation metric value \item \code{best_iteration} iteration number with the best evaluation metric value
(only available with early stopping). (only available with early stopping).
\item \code{best_ntreelimit} the \code{ntreelimit} value corresponding to the best iteration, \item \code{best_ntreelimit} Deprecated; use \code{best_iteration} instead.
which could further be used in \code{predict} method
(only available with early stopping).
\item \code{pred} CV prediction values available when \code{prediction} is set. \item \code{pred} CV prediction values available when \code{prediction} is set.
It is either vector or matrix (see \code{\link{cb.cv.predict}}). It is either vector or matrix (see \code{\link{cb.cv.predict}}).
\item \code{models} a list of the CV folds' models. It is only available with the explicit \item \code{models} a list of the CV folds' models. It is only available with the explicit
@@ -160,9 +158,9 @@ Adapted from \url{https://en.wikipedia.org/wiki/Cross-validation_\%28statistics\
} }
\examples{ \examples{
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label) dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
cv <- xgb.cv(data = dtrain, nrounds = 3, nthread = 2, nfold = 5, metrics = list("rmse","auc"), cv <- xgb.cv(data = dtrain, nrounds = 3, nthread = 2, nfold = 5, metrics = list("rmse","auc"),
max_depth = 3, eta = 1, objective = "binary:logistic") max_depth = 3, eta = 1, objective = "binary:logistic")
print(cv) print(cv)
print(cv, verbose=TRUE) print(cv, verbose=TRUE)
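Since \code{best_ntreelimit} is now deprecated, a sketch of reading the preferred
field instead (a hypothetical continuation with early stopping enabled):

cv2 <- xgb.cv(data = dtrain, nrounds = 50, nthread = 2, nfold = 5,
              max_depth = 3, eta = 1, objective = "binary:logistic",
              early_stopping_rounds = 3)
print(cv2$best_iteration)   # use this instead of cv2$best_ntreelimit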

View File

@@ -20,8 +20,6 @@ xgb.dump(
If not provided or set to \code{NULL}, the model is returned as a \code{character} vector.} If not provided or set to \code{NULL}, the model is returned as a \code{character} vector.}
\item{fmap}{feature map file representing feature types. \item{fmap}{feature map file representing feature types.
Detailed description could be found at
\url{https://github.com/dmlc/xgboost/wiki/Binary-Classification#dump-model}.
See demo/ for walkthrough example in R, and See demo/ for walkthrough example in R, and
\url{https://github.com/dmlc/xgboost/blob/master/demo/data/featmap.txt} \url{https://github.com/dmlc/xgboost/blob/master/demo/data/featmap.txt}
for example Format.} for example Format.}

View File

@@ -16,7 +16,7 @@ An object of \code{xgb.Booster} class.
Load xgboost model from the binary model file. Load xgboost model from the binary model file.
} }
\details{ \details{
The input file is expected to contain a model saved in an xgboost-internal binary format The input file is expected to contain a model saved in an xgboost model format
using either \code{\link{xgb.save}} or \code{\link{cb.save.model}} in R, or using some using either \code{\link{xgb.save}} or \code{\link{cb.save.model}} in R, or using some
appropriate methods from other xgboost interfaces. E.g., a model trained in Python and appropriate methods from other xgboost interfaces. E.g., a model trained in Python and
saved from there in xgboost format, could be loaded from R. saved from there in xgboost format, could be loaded from R.
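A cross-format round trip, as a sketch (assuming a fitted booster \code{bst}; the
file extension selects the on-disk variant of the xgboost model format):

xgb.save(bst, "model.json")
bst2 <- xgb.load("model.json")
if (file.exists("model.json")) file.remove("model.json")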

View File

@@ -4,10 +4,12 @@
\alias{xgb.load.raw} \alias{xgb.load.raw}
\title{Load serialised xgboost model from R's raw vector} \title{Load serialised xgboost model from R's raw vector}
\usage{ \usage{
xgb.load.raw(buffer) xgb.load.raw(buffer, as_booster = FALSE)
} }
\arguments{ \arguments{
\item{buffer}{the buffer returned by xgb.save.raw} \item{buffer}{the buffer returned by xgb.save.raw}
\item{as_booster}{Return the loaded model as xgb.Booster instead of xgb.Booster.handle.}
} }
\description{ \description{
User can generate raw memory buffer by calling xgb.save.raw User can generate raw memory buffer by calling xgb.save.raw

View File

@@ -87,7 +87,7 @@ more than 5 distinct values.}
\item{which}{whether to do univariate or bivariate plotting. NOTE: only 1D is implemented so far.} \item{which}{whether to do univariate or bivariate plotting. NOTE: only 1D is implemented so far.}
\item{plot}{whether a plot should be drawn. If FALSE, only a lits of matrices is returned.} \item{plot}{whether a plot should be drawn. If FALSE, only a list of matrices is returned.}
\item{...}{other parameters passed to \code{plot}.} \item{...}{other parameters passed to \code{plot}.}
} }

View File

@@ -67,7 +67,7 @@ The "Yes" branches are marked by the "< split_value" label.
The branches that are also used for missing values are marked as bold
(as in "carrying extra capacity").
This function uses \href{http://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR. This function uses \href{https://www.graphviz.org/}{GraphViz} as a backend of DiagrammeR.
} }
\examples{ \examples{
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')

View File

@@ -5,10 +5,19 @@
\title{Save xgboost model to R's raw vector, \title{Save xgboost model to R's raw vector,
user can call xgb.load.raw to load the model back from raw vector} user can call xgb.load.raw to load the model back from raw vector}
\usage{ \usage{
xgb.save.raw(model) xgb.save.raw(model, raw_format = "deprecated")
} }
\arguments{ \arguments{
\item{model}{the model object.} \item{model}{the model object.}
\item{raw_format}{The format for encoding the booster. Available options are
\itemize{
\item \code{json}: Encode the booster into JSON text document.
\item \code{ubj}: Encode the booster into Universal Binary JSON.
\item \code{deprecated}: Encode the booster into old customized binary format.
}
Currently the default is \code{deprecated}, but it will change to \code{ubj} in an upcoming release.}
} }
\description{ \description{
Save xgboost model from xgboost or xgb.train Save xgboost model from xgboost or xgb.train
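A round-trip sketch with an explicit encoding (assuming a fitted booster \code{bst};
\code{as_booster} is the argument documented for \code{xgb.load.raw} above):

raw_ubj  <- xgb.save.raw(bst, raw_format = "ubj")    # Universal Binary JSON
raw_json <- xgb.save.raw(bst, raw_format = "json")   # JSON text document
bst2 <- xgb.load.raw(raw_ubj, as_booster = TRUE)     # back to an xgb.Booster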

View File

@@ -54,7 +54,7 @@ xgboost(
2. Booster Parameters 2. Booster Parameters
2.1. Parameter for Tree Booster 2.1. Parameters for Tree Booster
\itemize{ \itemize{
\item \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1} when it is added to the current approximation. Used to prevent overfitting by making the boosting process more conservative. Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model more robust to overfitting but slower to compute. Default: 0.3 \item \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1} when it is added to the current approximation. Used to prevent overfitting by making the boosting process more conservative. Lower value for \code{eta} implies larger value for \code{nrounds}: low \code{eta} value means model more robust to overfitting but slower to compute. Default: 0.3
@@ -63,12 +63,14 @@ xgboost(
\item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1 \item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
\item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1 \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
\item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1 \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
\item \code{num_parallel_tree} Experimental parameter. number of trees to grow per round. Useful to test Random Forest through Xgboost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1 \item \code{lambda} L2 regularization term on weights. Default: 1
\item \code{alpha} L1 regularization term on weights. (there is no L1 reg on bias because it is not important). Default: 0
\item \code{num_parallel_tree} Experimental parameter. number of trees to grow per round. Useful to test Random Forest through XGBoost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
\item \code{monotone_constraints} A numerical vector consisting of \code{1}, \code{0} and \code{-1}, with its length equal to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.
\item \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions. Each item of the list represents one permitted interaction where specified features are allowed to interact with each other. Feature index values should start from \code{0} (\code{0} references the first column). Leave argument unspecified for no interaction constraints. \item \code{interaction_constraints} A list of vectors specifying feature indices of permitted interactions. Each item of the list represents one permitted interaction where specified features are allowed to interact with each other. Feature index values should start from \code{0} (\code{0} references the first column). Leave argument unspecified for no interaction constraints.
} }
2.2. Parameter for Linear Booster 2.2. Parameters for Linear Booster
\itemize{ \itemize{
\item \code{lambda} L2 regularization term on weights. Default: 0 \item \code{lambda} L2 regularization term on weights. Default: 0
@@ -88,10 +90,10 @@ xgboost(
\item \code{binary:logistic} logistic regression for binary classification. Output probability. \item \code{binary:logistic} logistic regression for binary classification. Output probability.
\item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation. \item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation.
\item \code{binary:hinge}: hinge loss for binary classification. This makes predictions of 0 or 1, rather than producing probabilities. \item \code{binary:hinge}: hinge loss for binary classification. This makes predictions of 0 or 1, rather than producing probabilities.
\item \code{count:poisson}: poisson regression for count data, output mean of poisson distribution. \code{max_delta_step} is set to 0.7 by default in poisson regression (used to safeguard optimization). \item \code{count:poisson}: Poisson regression for count data, output mean of the Poisson distribution. \code{max_delta_step} is set to 0.7 by default in Poisson regression (used to safeguard optimization).
\item \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored). Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional hazard function \code{h(t) = h0(t) * HR)}. \item \code{survival:cox}: Cox regression for right censored survival time data (negative values are considered right censored). Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional hazard function \code{h(t) = h0(t) * HR)}.
\item \code{survival:aft}: Accelerated failure time model for censored survival time data. See \href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time} for details. \item \code{survival:aft}: Accelerated failure time model for censored survival time data. See \href{https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html}{Survival Analysis with Accelerated Failure Time} for details.
\item \code{aft_loss_distribution}: Probabilty Density Function used by \code{survival:aft} and \code{aft-nloglik} metric. \item \code{aft_loss_distribution}: Probability Density Function used by \code{survival:aft} and \code{aft-nloglik} metric.
\item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class - 1}. \item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class - 1}.
\item \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class. \item \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class.
\item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss. \item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss.
@@ -185,9 +187,6 @@ An object of class \code{xgb.Booster} with the following elements:
explicitly passed. explicitly passed.
\item \code{best_iteration} iteration number with the best evaluation metric value \item \code{best_iteration} iteration number with the best evaluation metric value
(only available with early stopping). (only available with early stopping).
\item \code{best_ntreelimit} the \code{ntreelimit} value corresponding to the best iteration,
which could further be used in \code{predict} method
(only available with early stopping).
\item \code{best_score} the best evaluation metric value during early stopping. \item \code{best_score} the best evaluation metric value during early stopping.
(only available with early stopping). (only available with early stopping).
\item \code{feature_names} names of the training dataset features \item \code{feature_names} names of the training dataset features
@@ -209,11 +208,11 @@ than the \code{xgboost} interface.
Parallelization is automatically enabled if \code{OpenMP} is present. Parallelization is automatically enabled if \code{OpenMP} is present.
Number of threads can also be manually specified via \code{nthread} parameter. Number of threads can also be manually specified via \code{nthread} parameter.
The evaluation metric is chosen automatically by Xgboost (according to the objective) The evaluation metric is chosen automatically by XGBoost (according to the objective)
when the \code{eval_metric} parameter is not provided. when the \code{eval_metric} parameter is not provided.
User may set one or several \code{eval_metric} parameters. User may set one or several \code{eval_metric} parameters.
Note that when using a customized metric, only this single metric can be used. Note that when using a customized metric, only this single metric can be used.
The following is the list of built-in metrics for which Xgboost provides optimized implementation: The following is the list of built-in metrics for which XGBoost provides optimized implementation:
\itemize{ \itemize{
\item \code{rmse} root mean square error. \url{https://en.wikipedia.org/wiki/Root_mean_square_error} \item \code{rmse} root mean square error. \url{https://en.wikipedia.org/wiki/Root_mean_square_error}
\item \code{logloss} negative log-likelihood. \url{https://en.wikipedia.org/wiki/Log-likelihood} \item \code{logloss} negative log-likelihood. \url{https://en.wikipedia.org/wiki/Log-likelihood}
@@ -242,8 +241,8 @@ The following callbacks are automatically created when certain parameters are se
data(agaricus.train, package='xgboost') data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost') data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label) dtrain <- with(agaricus.train, xgb.DMatrix(data, label = label, nthread = 2))
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label) dtest <- with(agaricus.test, xgb.DMatrix(data, label = label, nthread = 2))
watchlist <- list(train = dtrain, eval = dtest) watchlist <- list(train = dtrain, eval = dtest)
## A simple xgb.train example: ## A simple xgb.train example:

View File

@@ -4,10 +4,17 @@
\alias{xgb.unserialize} \alias{xgb.unserialize}
\title{Load the instance back from \code{\link{xgb.serialize}}} \title{Load the instance back from \code{\link{xgb.serialize}}}
\usage{ \usage{
xgb.unserialize(buffer) xgb.unserialize(buffer, handle = NULL)
} }
\arguments{ \arguments{
\item{buffer}{the buffer containing booster instance saved by \code{\link{xgb.serialize}}} \item{buffer}{the buffer containing booster instance saved by \code{\link{xgb.serialize}}}
\item{handle}{An \code{xgb.Booster.handle} object which will be overwritten with
the new deserialized object. Must be a null handle (e.g. when loading the model through
`readRDS`). If not provided, a new handle will be created.}
}
\value{
An \code{xgb.Booster.handle} object.
} }
\description{ \description{
Load the instance back from \code{\link{xgb.serialize}} Load the instance back from \code{\link{xgb.serialize}}
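A minimal sketch of the round trip (assuming a fitted booster \code{bst}; passing an
existing null handle is only needed when restoring after \code{readRDS}):

buf <- xgb.serialize(bst)        # complete snapshot, including configuration
h   <- xgb.unserialize(buf)      # returns an xgb.Booster.handle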

View File

@@ -0,0 +1,39 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.config.R
\name{xgb.set.config, xgb.get.config}
\alias{xgb.set.config, xgb.get.config}
\alias{xgb.set.config}
\alias{xgb.get.config}
\title{Set and get global configuration}
\usage{
xgb.set.config(...)
xgb.get.config()
}
\arguments{
\item{...}{List of parameters to be set, as keyword arguments}
}
\value{
\code{xgb.set.config} returns \code{TRUE} to signal success. \code{xgb.get.config} returns
a list containing all global-scope parameters and their values.
}
\description{
Global configuration consists of a collection of parameters that can be applied in the global
scope. See \url{https://xgboost.readthedocs.io/en/stable/parameter.html} for the full list of
parameters supported in the global configuration. Use \code{xgb.set.config} to update the
values of one or more global-scope parameters. Use \code{xgb.get.config} to fetch the current
values of all global-scope parameters (listed in
\url{https://xgboost.readthedocs.io/en/stable/parameter.html}).
}
\examples{
# Set verbosity level to silent (0)
xgb.set.config(verbosity = 0)
# Now global verbosity level is 0
config <- xgb.get.config()
print(config$verbosity)
# Set verbosity level to warning (1)
xgb.set.config(verbosity = 1)
# Now global verbosity level is 1
config <- xgb.get.config()
print(config$verbosity)
}

View File

@@ -3,7 +3,7 @@ PKGROOT=../../
ENABLE_STD_THREAD=1 ENABLE_STD_THREAD=1
# _*_ mode: Makefile; _*_ # _*_ mode: Makefile; _*_
CXX_STD = CXX14 CXX_STD = CXX17
XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\ XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\ -DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\
@@ -17,9 +17,79 @@ endif
$(foreach v, $(XGB_RFLAGS), $(warning $(v))) $(foreach v, $(XGB_RFLAGS), $(warning $(v)))
PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS) PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= @OPENMP_CXXFLAGS@ @ENDIAN_FLAG@ -pthread PKG_CXXFLAGS= @OPENMP_CXXFLAGS@ @ENDIAN_FLAG@ -pthread $(CXX_VISIBILITY)
PKG_LIBS = @OPENMP_CXXFLAGS@ @OPENMP_LIB@ @ENDIAN_FLAG@ @BACKTRACE_LIB@ -pthread PKG_LIBS = @OPENMP_CXXFLAGS@ @OPENMP_LIB@ @ENDIAN_FLAG@ @BACKTRACE_LIB@ -pthread
OBJECTS= ./xgboost_R.o ./xgboost_custom.o ./xgboost_assert.o ./init.o \
$(PKGROOT)/amalgamation/xgboost-all0.o $(PKGROOT)/amalgamation/dmlc-minimum0.o \
$(PKGROOT)/rabit/src/engine.o $(PKGROOT)/rabit/src/c_api.o \
$(PKGROOT)/rabit/src/allreduce_base.o
OBJECTS= \
./xgboost_R.o \
./xgboost_custom.o \
./init.o \
$(PKGROOT)/src/metric/metric.o \
$(PKGROOT)/src/metric/elementwise_metric.o \
$(PKGROOT)/src/metric/multiclass_metric.o \
$(PKGROOT)/src/metric/rank_metric.o \
$(PKGROOT)/src/metric/auc.o \
$(PKGROOT)/src/metric/survival_metric.o \
$(PKGROOT)/src/objective/objective.o \
$(PKGROOT)/src/objective/regression_obj.o \
$(PKGROOT)/src/objective/multiclass_obj.o \
$(PKGROOT)/src/objective/rank_obj.o \
$(PKGROOT)/src/objective/hinge.o \
$(PKGROOT)/src/objective/aft_obj.o \
$(PKGROOT)/src/objective/adaptive.o \
$(PKGROOT)/src/gbm/gbm.o \
$(PKGROOT)/src/gbm/gbtree.o \
$(PKGROOT)/src/gbm/gbtree_model.o \
$(PKGROOT)/src/gbm/gblinear.o \
$(PKGROOT)/src/gbm/gblinear_model.o \
$(PKGROOT)/src/data/simple_dmatrix.o \
$(PKGROOT)/src/data/data.o \
$(PKGROOT)/src/data/sparse_page_raw_format.o \
$(PKGROOT)/src/data/ellpack_page.o \
$(PKGROOT)/src/data/gradient_index.o \
$(PKGROOT)/src/data/gradient_index_page_source.o \
$(PKGROOT)/src/data/gradient_index_format.o \
$(PKGROOT)/src/data/sparse_page_dmatrix.o \
$(PKGROOT)/src/data/proxy_dmatrix.o \
$(PKGROOT)/src/data/iterative_dmatrix.o \
$(PKGROOT)/src/predictor/predictor.o \
$(PKGROOT)/src/predictor/cpu_predictor.o \
$(PKGROOT)/src/tree/constraints.o \
$(PKGROOT)/src/tree/param.o \
$(PKGROOT)/src/tree/tree_model.o \
$(PKGROOT)/src/tree/tree_updater.o \
$(PKGROOT)/src/tree/updater_approx.o \
$(PKGROOT)/src/tree/updater_colmaker.o \
$(PKGROOT)/src/tree/updater_prune.o \
$(PKGROOT)/src/tree/updater_quantile_hist.o \
$(PKGROOT)/src/tree/updater_refresh.o \
$(PKGROOT)/src/tree/updater_sync.o \
$(PKGROOT)/src/linear/linear_updater.o \
$(PKGROOT)/src/linear/updater_coordinate.o \
$(PKGROOT)/src/linear/updater_shotgun.o \
$(PKGROOT)/src/learner.o \
$(PKGROOT)/src/logging.o \
$(PKGROOT)/src/global_config.o \
$(PKGROOT)/src/collective/communicator.o \
$(PKGROOT)/src/collective/socket.o \
$(PKGROOT)/src/common/charconv.o \
$(PKGROOT)/src/common/column_matrix.o \
$(PKGROOT)/src/common/common.o \
$(PKGROOT)/src/common/hist_util.o \
$(PKGROOT)/src/common/host_device_vector.o \
$(PKGROOT)/src/common/io.o \
$(PKGROOT)/src/common/json.o \
$(PKGROOT)/src/common/numeric.o \
$(PKGROOT)/src/common/pseudo_huber.o \
$(PKGROOT)/src/common/quantile.o \
$(PKGROOT)/src/common/random.o \
$(PKGROOT)/src/common/survival_util.o \
$(PKGROOT)/src/common/threading_utils.o \
$(PKGROOT)/src/common/timer.o \
$(PKGROOT)/src/common/version.o \
$(PKGROOT)/src/c_api/c_api.o \
$(PKGROOT)/src/c_api/c_api_error.o \
$(PKGROOT)/amalgamation/dmlc-minimum0.o \
$(PKGROOT)/rabit/src/engine.o \
$(PKGROOT)/rabit/src/rabit_c_api.o \
$(PKGROOT)/rabit/src/allreduce_base.o

View File

@@ -1,21 +1,9 @@
# package root # package root
PKGROOT=./ PKGROOT=../../
ENABLE_STD_THREAD=0 ENABLE_STD_THREAD=0
# _*_ mode: Makefile; _*_ # _*_ mode: Makefile; _*_
# This file is only used for windows compilation from github
CXX_STD = CXX17
# It will be replaced with Makevars.in for the CRAN version
.PHONY: all xgblib
all: $(SHLIB)
$(SHLIB): xgblib
xgblib:
cp -r ../../src .
cp -r ../../rabit .
cp -r ../../dmlc-core .
cp -r ../../include .
cp -r ../../amalgamation .
CXX_STD = CXX14
XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\ XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\ -DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\
@@ -29,11 +17,79 @@ endif
$(foreach v, $(XGB_RFLAGS), $(warning $(v))) $(foreach v, $(XGB_RFLAGS), $(warning $(v)))
PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS) PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= $(SHLIB_OPENMP_CXXFLAGS) $(SHLIB_PTHREAD_FLAGS) PKG_CXXFLAGS= $(SHLIB_OPENMP_CXXFLAGS) -DDMLC_CMAKE_LITTLE_ENDIAN=1 $(SHLIB_PTHREAD_FLAGS) $(CXX_VISIBILITY)
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(SHLIB_PTHREAD_FLAGS) PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) -DDMLC_CMAKE_LITTLE_ENDIAN=1 $(SHLIB_PTHREAD_FLAGS) -lwsock32 -lws2_32
OBJECTS= ./xgboost_R.o ./xgboost_custom.o ./xgboost_assert.o ./init.o \
$(PKGROOT)/amalgamation/xgboost-all0.o $(PKGROOT)/amalgamation/dmlc-minimum0.o \
$(PKGROOT)/rabit/src/engine.o $(PKGROOT)/rabit/src/c_api.o \
$(PKGROOT)/rabit/src/allreduce_base.o
$(OBJECTS) : xgblib
OBJECTS= \
./xgboost_R.o \
./xgboost_custom.o \
./init.o \
$(PKGROOT)/src/metric/metric.o \
$(PKGROOT)/src/metric/elementwise_metric.o \
$(PKGROOT)/src/metric/multiclass_metric.o \
$(PKGROOT)/src/metric/rank_metric.o \
$(PKGROOT)/src/metric/auc.o \
$(PKGROOT)/src/metric/survival_metric.o \
$(PKGROOT)/src/objective/objective.o \
$(PKGROOT)/src/objective/regression_obj.o \
$(PKGROOT)/src/objective/multiclass_obj.o \
$(PKGROOT)/src/objective/rank_obj.o \
$(PKGROOT)/src/objective/hinge.o \
$(PKGROOT)/src/objective/aft_obj.o \
$(PKGROOT)/src/objective/adaptive.o \
$(PKGROOT)/src/gbm/gbm.o \
$(PKGROOT)/src/gbm/gbtree.o \
$(PKGROOT)/src/gbm/gbtree_model.o \
$(PKGROOT)/src/gbm/gblinear.o \
$(PKGROOT)/src/gbm/gblinear_model.o \
$(PKGROOT)/src/data/simple_dmatrix.o \
$(PKGROOT)/src/data/data.o \
$(PKGROOT)/src/data/sparse_page_raw_format.o \
$(PKGROOT)/src/data/ellpack_page.o \
$(PKGROOT)/src/data/gradient_index.o \
$(PKGROOT)/src/data/gradient_index_page_source.o \
$(PKGROOT)/src/data/gradient_index_format.o \
$(PKGROOT)/src/data/sparse_page_dmatrix.o \
$(PKGROOT)/src/data/proxy_dmatrix.o \
$(PKGROOT)/src/data/iterative_dmatrix.o \
$(PKGROOT)/src/predictor/predictor.o \
$(PKGROOT)/src/predictor/cpu_predictor.o \
$(PKGROOT)/src/tree/constraints.o \
$(PKGROOT)/src/tree/param.o \
$(PKGROOT)/src/tree/tree_model.o \
$(PKGROOT)/src/tree/tree_updater.o \
$(PKGROOT)/src/tree/updater_approx.o \
$(PKGROOT)/src/tree/updater_colmaker.o \
$(PKGROOT)/src/tree/updater_prune.o \
$(PKGROOT)/src/tree/updater_quantile_hist.o \
$(PKGROOT)/src/tree/updater_refresh.o \
$(PKGROOT)/src/tree/updater_sync.o \
$(PKGROOT)/src/linear/linear_updater.o \
$(PKGROOT)/src/linear/updater_coordinate.o \
$(PKGROOT)/src/linear/updater_shotgun.o \
$(PKGROOT)/src/learner.o \
$(PKGROOT)/src/logging.o \
$(PKGROOT)/src/global_config.o \
$(PKGROOT)/src/collective/communicator.o \
$(PKGROOT)/src/collective/socket.o \
$(PKGROOT)/src/common/charconv.o \
$(PKGROOT)/src/common/column_matrix.o \
$(PKGROOT)/src/common/common.o \
$(PKGROOT)/src/common/hist_util.o \
$(PKGROOT)/src/common/host_device_vector.o \
$(PKGROOT)/src/common/io.o \
$(PKGROOT)/src/common/json.o \
$(PKGROOT)/src/common/numeric.o \
$(PKGROOT)/src/common/pseudo_huber.o \
$(PKGROOT)/src/common/quantile.o \
$(PKGROOT)/src/common/random.o \
$(PKGROOT)/src/common/survival_util.o \
$(PKGROOT)/src/common/threading_utils.o \
$(PKGROOT)/src/common/timer.o \
$(PKGROOT)/src/common/version.o \
$(PKGROOT)/src/c_api/c_api.o \
$(PKGROOT)/src/c_api/c_api_error.o \
$(PKGROOT)/amalgamation/dmlc-minimum0.o \
$(PKGROOT)/rabit/src/engine.o \
$(PKGROOT)/rabit/src/rabit_c_api.o \
$(PKGROOT)/rabit/src/allreduce_base.o

View File

@@ -9,6 +9,7 @@
#include <Rinternals.h> #include <Rinternals.h>
#include <stdlib.h> #include <stdlib.h>
#include <R_ext/Rdynload.h> #include <R_ext/Rdynload.h>
#include <R_ext/Visibility.h>
/* FIXME: /* FIXME:
Check these declarations against the C/Fortran source code. Check these declarations against the C/Fortran source code.
@@ -17,69 +18,85 @@ Check these declarations against the C/Fortran source code.
/* .Call calls */ /* .Call calls */
extern SEXP XGBoosterBoostOneIter_R(SEXP, SEXP, SEXP, SEXP); extern SEXP XGBoosterBoostOneIter_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterCreate_R(SEXP); extern SEXP XGBoosterCreate_R(SEXP);
extern SEXP XGBoosterCreateInEmptyObj_R(SEXP, SEXP);
extern SEXP XGBoosterDumpModel_R(SEXP, SEXP, SEXP, SEXP); extern SEXP XGBoosterDumpModel_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterEvalOneIter_R(SEXP, SEXP, SEXP, SEXP); extern SEXP XGBoosterEvalOneIter_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterGetAttrNames_R(SEXP); extern SEXP XGBoosterGetAttrNames_R(SEXP);
extern SEXP XGBoosterGetAttr_R(SEXP, SEXP); extern SEXP XGBoosterGetAttr_R(SEXP, SEXP);
extern SEXP XGBoosterLoadModelFromRaw_R(SEXP, SEXP); extern SEXP XGBoosterLoadModelFromRaw_R(SEXP, SEXP);
extern SEXP XGBoosterSaveModelToRaw_R(SEXP handle, SEXP config);
extern SEXP XGBoosterLoadModel_R(SEXP, SEXP); extern SEXP XGBoosterLoadModel_R(SEXP, SEXP);
extern SEXP XGBoosterSaveJsonConfig_R(SEXP handle); extern SEXP XGBoosterSaveJsonConfig_R(SEXP handle);
extern SEXP XGBoosterLoadJsonConfig_R(SEXP handle, SEXP value); extern SEXP XGBoosterLoadJsonConfig_R(SEXP handle, SEXP value);
extern SEXP XGBoosterSerializeToBuffer_R(SEXP handle); extern SEXP XGBoosterSerializeToBuffer_R(SEXP handle);
extern SEXP XGBoosterUnserializeFromBuffer_R(SEXP handle, SEXP raw); extern SEXP XGBoosterUnserializeFromBuffer_R(SEXP handle, SEXP raw);
extern SEXP XGBoosterModelToRaw_R(SEXP);
extern SEXP XGBoosterPredict_R(SEXP, SEXP, SEXP, SEXP, SEXP); extern SEXP XGBoosterPredict_R(SEXP, SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterPredictFromDMatrix_R(SEXP, SEXP, SEXP);
extern SEXP XGBoosterSaveModel_R(SEXP, SEXP); extern SEXP XGBoosterSaveModel_R(SEXP, SEXP);
extern SEXP XGBoosterSetAttr_R(SEXP, SEXP, SEXP); extern SEXP XGBoosterSetAttr_R(SEXP, SEXP, SEXP);
extern SEXP XGBoosterSetParam_R(SEXP, SEXP, SEXP); extern SEXP XGBoosterSetParam_R(SEXP, SEXP, SEXP);
extern SEXP XGBoosterUpdateOneIter_R(SEXP, SEXP, SEXP); extern SEXP XGBoosterUpdateOneIter_R(SEXP, SEXP, SEXP);
extern SEXP XGCheckNullPtr_R(SEXP); extern SEXP XGCheckNullPtr_R(SEXP);
extern SEXP XGDMatrixCreateFromCSC_R(SEXP, SEXP, SEXP, SEXP); extern SEXP XGDMatrixCreateFromCSC_R(SEXP, SEXP, SEXP, SEXP, SEXP);
extern SEXP XGDMatrixCreateFromCSR_R(SEXP, SEXP, SEXP, SEXP, SEXP);
extern SEXP XGDMatrixCreateFromFile_R(SEXP, SEXP); extern SEXP XGDMatrixCreateFromFile_R(SEXP, SEXP);
extern SEXP XGDMatrixCreateFromMat_R(SEXP, SEXP); extern SEXP XGDMatrixCreateFromMat_R(SEXP, SEXP, SEXP);
extern SEXP XGDMatrixGetInfo_R(SEXP, SEXP); extern SEXP XGDMatrixGetInfo_R(SEXP, SEXP);
extern SEXP XGDMatrixGetStrFeatureInfo_R(SEXP, SEXP);
extern SEXP XGDMatrixNumCol_R(SEXP); extern SEXP XGDMatrixNumCol_R(SEXP);
extern SEXP XGDMatrixNumRow_R(SEXP); extern SEXP XGDMatrixNumRow_R(SEXP);
extern SEXP XGDMatrixSaveBinary_R(SEXP, SEXP, SEXP); extern SEXP XGDMatrixSaveBinary_R(SEXP, SEXP, SEXP);
extern SEXP XGDMatrixSetInfo_R(SEXP, SEXP, SEXP); extern SEXP XGDMatrixSetInfo_R(SEXP, SEXP, SEXP);
extern SEXP XGDMatrixSetStrFeatureInfo_R(SEXP, SEXP, SEXP);
extern SEXP XGDMatrixSliceDMatrix_R(SEXP, SEXP); extern SEXP XGDMatrixSliceDMatrix_R(SEXP, SEXP);
extern SEXP XGBSetGlobalConfig_R(SEXP);
extern SEXP XGBGetGlobalConfig_R(void);
extern SEXP XGBoosterFeatureScore_R(SEXP, SEXP);
static const R_CallMethodDef CallEntries[] = { static const R_CallMethodDef CallEntries[] = {
{"XGBoosterBoostOneIter_R", (DL_FUNC) &XGBoosterBoostOneIter_R, 4}, {"XGBoosterBoostOneIter_R", (DL_FUNC) &XGBoosterBoostOneIter_R, 4},
{"XGBoosterCreate_R", (DL_FUNC) &XGBoosterCreate_R, 1}, {"XGBoosterCreate_R", (DL_FUNC) &XGBoosterCreate_R, 1},
{"XGBoosterCreateInEmptyObj_R", (DL_FUNC) &XGBoosterCreateInEmptyObj_R, 2},
{"XGBoosterDumpModel_R", (DL_FUNC) &XGBoosterDumpModel_R, 4}, {"XGBoosterDumpModel_R", (DL_FUNC) &XGBoosterDumpModel_R, 4},
{"XGBoosterEvalOneIter_R", (DL_FUNC) &XGBoosterEvalOneIter_R, 4}, {"XGBoosterEvalOneIter_R", (DL_FUNC) &XGBoosterEvalOneIter_R, 4},
{"XGBoosterGetAttrNames_R", (DL_FUNC) &XGBoosterGetAttrNames_R, 1}, {"XGBoosterGetAttrNames_R", (DL_FUNC) &XGBoosterGetAttrNames_R, 1},
{"XGBoosterGetAttr_R", (DL_FUNC) &XGBoosterGetAttr_R, 2}, {"XGBoosterGetAttr_R", (DL_FUNC) &XGBoosterGetAttr_R, 2},
{"XGBoosterLoadModelFromRaw_R", (DL_FUNC) &XGBoosterLoadModelFromRaw_R, 2}, {"XGBoosterLoadModelFromRaw_R", (DL_FUNC) &XGBoosterLoadModelFromRaw_R, 2},
{"XGBoosterSaveModelToRaw_R", (DL_FUNC) &XGBoosterSaveModelToRaw_R, 2},
{"XGBoosterLoadModel_R", (DL_FUNC) &XGBoosterLoadModel_R, 2}, {"XGBoosterLoadModel_R", (DL_FUNC) &XGBoosterLoadModel_R, 2},
{"XGBoosterSaveJsonConfig_R", (DL_FUNC) &XGBoosterSaveJsonConfig_R, 1}, {"XGBoosterSaveJsonConfig_R", (DL_FUNC) &XGBoosterSaveJsonConfig_R, 1},
{"XGBoosterLoadJsonConfig_R", (DL_FUNC) &XGBoosterLoadJsonConfig_R, 2}, {"XGBoosterLoadJsonConfig_R", (DL_FUNC) &XGBoosterLoadJsonConfig_R, 2},
{"XGBoosterSerializeToBuffer_R", (DL_FUNC) &XGBoosterSerializeToBuffer_R, 1}, {"XGBoosterSerializeToBuffer_R", (DL_FUNC) &XGBoosterSerializeToBuffer_R, 1},
{"XGBoosterUnserializeFromBuffer_R", (DL_FUNC) &XGBoosterUnserializeFromBuffer_R, 2}, {"XGBoosterUnserializeFromBuffer_R", (DL_FUNC) &XGBoosterUnserializeFromBuffer_R, 2},
{"XGBoosterModelToRaw_R", (DL_FUNC) &XGBoosterModelToRaw_R, 1},
{"XGBoosterPredict_R", (DL_FUNC) &XGBoosterPredict_R, 5}, {"XGBoosterPredict_R", (DL_FUNC) &XGBoosterPredict_R, 5},
{"XGBoosterPredictFromDMatrix_R", (DL_FUNC) &XGBoosterPredictFromDMatrix_R, 3},
{"XGBoosterSaveModel_R", (DL_FUNC) &XGBoosterSaveModel_R, 2}, {"XGBoosterSaveModel_R", (DL_FUNC) &XGBoosterSaveModel_R, 2},
{"XGBoosterSetAttr_R", (DL_FUNC) &XGBoosterSetAttr_R, 3}, {"XGBoosterSetAttr_R", (DL_FUNC) &XGBoosterSetAttr_R, 3},
{"XGBoosterSetParam_R", (DL_FUNC) &XGBoosterSetParam_R, 3}, {"XGBoosterSetParam_R", (DL_FUNC) &XGBoosterSetParam_R, 3},
{"XGBoosterUpdateOneIter_R", (DL_FUNC) &XGBoosterUpdateOneIter_R, 3}, {"XGBoosterUpdateOneIter_R", (DL_FUNC) &XGBoosterUpdateOneIter_R, 3},
{"XGCheckNullPtr_R", (DL_FUNC) &XGCheckNullPtr_R, 1}, {"XGCheckNullPtr_R", (DL_FUNC) &XGCheckNullPtr_R, 1},
{"XGDMatrixCreateFromCSC_R", (DL_FUNC) &XGDMatrixCreateFromCSC_R, 4}, {"XGDMatrixCreateFromCSC_R", (DL_FUNC) &XGDMatrixCreateFromCSC_R, 5},
{"XGDMatrixCreateFromCSR_R", (DL_FUNC) &XGDMatrixCreateFromCSR_R, 5},
{"XGDMatrixCreateFromFile_R", (DL_FUNC) &XGDMatrixCreateFromFile_R, 2}, {"XGDMatrixCreateFromFile_R", (DL_FUNC) &XGDMatrixCreateFromFile_R, 2},
{"XGDMatrixCreateFromMat_R", (DL_FUNC) &XGDMatrixCreateFromMat_R, 2}, {"XGDMatrixCreateFromMat_R", (DL_FUNC) &XGDMatrixCreateFromMat_R, 3},
{"XGDMatrixGetInfo_R", (DL_FUNC) &XGDMatrixGetInfo_R, 2}, {"XGDMatrixGetInfo_R", (DL_FUNC) &XGDMatrixGetInfo_R, 2},
{"XGDMatrixGetStrFeatureInfo_R", (DL_FUNC) &XGDMatrixGetStrFeatureInfo_R, 2},
{"XGDMatrixNumCol_R", (DL_FUNC) &XGDMatrixNumCol_R, 1}, {"XGDMatrixNumCol_R", (DL_FUNC) &XGDMatrixNumCol_R, 1},
{"XGDMatrixNumRow_R", (DL_FUNC) &XGDMatrixNumRow_R, 1}, {"XGDMatrixNumRow_R", (DL_FUNC) &XGDMatrixNumRow_R, 1},
{"XGDMatrixSaveBinary_R", (DL_FUNC) &XGDMatrixSaveBinary_R, 3}, {"XGDMatrixSaveBinary_R", (DL_FUNC) &XGDMatrixSaveBinary_R, 3},
{"XGDMatrixSetInfo_R", (DL_FUNC) &XGDMatrixSetInfo_R, 3}, {"XGDMatrixSetInfo_R", (DL_FUNC) &XGDMatrixSetInfo_R, 3},
{"XGDMatrixSetStrFeatureInfo_R", (DL_FUNC) &XGDMatrixSetStrFeatureInfo_R, 3},
{"XGDMatrixSliceDMatrix_R", (DL_FUNC) &XGDMatrixSliceDMatrix_R, 2}, {"XGDMatrixSliceDMatrix_R", (DL_FUNC) &XGDMatrixSliceDMatrix_R, 2},
{"XGBSetGlobalConfig_R", (DL_FUNC) &XGBSetGlobalConfig_R, 1},
{"XGBGetGlobalConfig_R", (DL_FUNC) &XGBGetGlobalConfig_R, 0},
{"XGBoosterFeatureScore_R", (DL_FUNC) &XGBoosterFeatureScore_R, 2},
{NULL, NULL, 0} {NULL, NULL, 0}
}; };
#if defined(_WIN32) #if defined(_WIN32)
__declspec(dllexport) __declspec(dllexport)
#endif // defined(_WIN32) #endif // defined(_WIN32)
void R_init_xgboost(DllInfo *dll) { void attribute_visible R_init_xgboost(DllInfo *dll) {
R_registerRoutines(dll, NULL, CallEntries, NULL, NULL); R_registerRoutines(dll, NULL, CallEntries, NULL, NULL);
R_useDynamicSymbols(dll, FALSE); R_useDynamicSymbols(dll, FALSE);
} }

View File

@@ -0,0 +1,3 @@
LIBRARY xgboost.dll
EXPORTS
R_init_xgboost

View File

@@ -1,13 +1,23 @@
// Copyright (c) 2014 by Contributors /**
#include <dmlc/logging.h> * Copyright 2014-2022 by XGBoost Contributors
*/
#include <dmlc/common.h>
#include <dmlc/omp.h> #include <dmlc/omp.h>
#include <xgboost/c_api.h> #include <xgboost/c_api.h>
#include <vector> #include <xgboost/data.h>
#include <xgboost/generic_parameters.h>
#include <xgboost/logging.h>
#include <cstdio>
#include <cstring>
#include <sstream>
#include <string> #include <string>
#include <utility> #include <utility>
#include <cstring> #include <vector>
#include <cstdio>
#include <sstream> #include "../../src/c_api/c_api_error.h"
#include "../../src/common/threading_utils.h"
#include "./xgboost_R.h" #include "./xgboost_R.h"
/*! /*!
@@ -34,14 +44,27 @@
error(XGBGetLastError()); \ error(XGBGetLastError()); \
} }
using dmlc::BeginPtr;
using namespace dmlc; xgboost::GenericParameter const *BoosterCtx(BoosterHandle handle) {
CHECK_HANDLE();
auto *learner = static_cast<xgboost::Learner *>(handle);
CHECK(learner);
return learner->Ctx();
}
SEXP XGCheckNullPtr_R(SEXP handle) { xgboost::GenericParameter const *DMatrixCtx(DMatrixHandle handle) {
CHECK_HANDLE();
auto p_m = static_cast<std::shared_ptr<xgboost::DMatrix> *>(handle);
CHECK(p_m);
return p_m->get()->Ctx();
}
XGB_DLL SEXP XGCheckNullPtr_R(SEXP handle) {
return ScalarLogical(R_ExternalPtrAddr(handle) == NULL); return ScalarLogical(R_ExternalPtrAddr(handle) == NULL);
} }
void _DMatrixFinalizer(SEXP ext) { XGB_DLL void _DMatrixFinalizer(SEXP ext) {
R_API_BEGIN(); R_API_BEGIN();
if (R_ExternalPtrAddr(ext) == NULL) return; if (R_ExternalPtrAddr(ext) == NULL) return;
CHECK_CALL(XGDMatrixFree(R_ExternalPtrAddr(ext))); CHECK_CALL(XGDMatrixFree(R_ExternalPtrAddr(ext)));
@@ -49,7 +72,22 @@ void _DMatrixFinalizer(SEXP ext) {
R_API_END(); R_API_END();
} }
SEXP XGDMatrixCreateFromFile_R(SEXP fname, SEXP silent) { XGB_DLL SEXP XGBSetGlobalConfig_R(SEXP json_str) {
R_API_BEGIN();
CHECK_CALL(XGBSetGlobalConfig(CHAR(asChar(json_str))));
R_API_END();
return R_NilValue;
}
XGB_DLL SEXP XGBGetGlobalConfig_R() {
const char* json_str;
R_API_BEGIN();
CHECK_CALL(XGBGetGlobalConfig(&json_str));
R_API_END();
return mkString(json_str);
}
XGB_DLL SEXP XGDMatrixCreateFromFile_R(SEXP fname, SEXP silent) {
SEXP ret; SEXP ret;
R_API_BEGIN(); R_API_BEGIN();
DMatrixHandle handle; DMatrixHandle handle;
@@ -61,8 +99,7 @@ SEXP XGDMatrixCreateFromFile_R(SEXP fname, SEXP silent) {
return ret; return ret;
} }
SEXP XGDMatrixCreateFromMat_R(SEXP mat, XGB_DLL SEXP XGDMatrixCreateFromMat_R(SEXP mat, SEXP missing, SEXP n_threads) {
SEXP missing) {
SEXP ret; SEXP ret;
R_API_BEGIN(); R_API_BEGIN();
SEXP dim = getAttrib(mat, R_DimSymbol); SEXP dim = getAttrib(mat, R_DimSymbol);
@@ -77,14 +114,16 @@ SEXP XGDMatrixCreateFromMat_R(SEXP mat,
din = REAL(mat); din = REAL(mat);
} }
std::vector<float> data(nrow * ncol); std::vector<float> data(nrow * ncol);
#pragma omp parallel for schedule(static) int32_t threads = xgboost::common::OmpGetNumThreads(asInteger(n_threads));
for (omp_ulong i = 0; i < nrow; ++i) {
xgboost::common::ParallelFor(nrow, threads, [&](xgboost::omp_ulong i) {
for (size_t j = 0; j < ncol; ++j) { for (size_t j = 0; j < ncol; ++j) {
data[i * ncol +j] = is_int ? static_cast<float>(iin[i + nrow * j]) : din[i + nrow * j]; data[i * ncol + j] = is_int ? static_cast<float>(iin[i + nrow * j]) : din[i + nrow * j];
} }
} });
DMatrixHandle handle; DMatrixHandle handle;
CHECK_CALL(XGDMatrixCreateFromMat(BeginPtr(data), nrow, ncol, asReal(missing), &handle)); CHECK_CALL(XGDMatrixCreateFromMat_omp(BeginPtr(data), nrow, ncol,
asReal(missing), &handle, threads));
ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue)); ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE); R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
R_API_END(); R_API_END();
@@ -92,10 +131,8 @@ SEXP XGDMatrixCreateFromMat_R(SEXP mat,
return ret; return ret;
} }
SEXP XGDMatrixCreateFromCSC_R(SEXP indptr, XGB_DLL SEXP XGDMatrixCreateFromCSC_R(SEXP indptr, SEXP indices, SEXP data,
SEXP indices, SEXP num_row, SEXP n_threads) {
SEXP data,
SEXP num_row) {
SEXP ret; SEXP ret;
R_API_BEGIN(); R_API_BEGIN();
const int *p_indptr = INTEGER(indptr); const int *p_indptr = INTEGER(indptr);
@@ -111,11 +148,11 @@ SEXP XGDMatrixCreateFromCSC_R(SEXP indptr,
for (size_t i = 0; i < nindptr; ++i) { for (size_t i = 0; i < nindptr; ++i) {
col_ptr_[i] = static_cast<size_t>(p_indptr[i]); col_ptr_[i] = static_cast<size_t>(p_indptr[i]);
} }
#pragma omp parallel for schedule(static) int32_t threads = xgboost::common::OmpGetNumThreads(asInteger(n_threads));
for (int64_t i = 0; i < static_cast<int64_t>(ndata); ++i) { xgboost::common::ParallelFor(ndata, threads, [&](xgboost::omp_ulong i) {
indices_[i] = static_cast<unsigned>(p_indices[i]); indices_[i] = static_cast<unsigned>(p_indices[i]);
data_[i] = static_cast<float>(p_data[i]); data_[i] = static_cast<float>(p_data[i]);
} });
DMatrixHandle handle; DMatrixHandle handle;
CHECK_CALL(XGDMatrixCreateFromCSCEx(BeginPtr(col_ptr_), BeginPtr(indices_), CHECK_CALL(XGDMatrixCreateFromCSCEx(BeginPtr(col_ptr_), BeginPtr(indices_),
BeginPtr(data_), nindptr, ndata, BeginPtr(data_), nindptr, ndata,
@@ -127,7 +164,40 @@ SEXP XGDMatrixCreateFromCSC_R(SEXP indptr,
return ret; return ret;
} }
SEXP XGDMatrixSliceDMatrix_R(SEXP handle, SEXP idxset) { XGB_DLL SEXP XGDMatrixCreateFromCSR_R(SEXP indptr, SEXP indices, SEXP data,
SEXP num_col, SEXP n_threads) {
SEXP ret;
R_API_BEGIN();
const int *p_indptr = INTEGER(indptr);
const int *p_indices = INTEGER(indices);
const double *p_data = REAL(data);
size_t nindptr = static_cast<size_t>(length(indptr));
size_t ndata = static_cast<size_t>(length(data));
size_t ncol = static_cast<size_t>(INTEGER(num_col)[0]);
std::vector<size_t> row_ptr_(nindptr);
std::vector<unsigned> indices_(ndata);
std::vector<float> data_(ndata);
for (size_t i = 0; i < nindptr; ++i) {
row_ptr_[i] = static_cast<size_t>(p_indptr[i]);
}
int32_t threads = xgboost::common::OmpGetNumThreads(asInteger(n_threads));
xgboost::common::ParallelFor(ndata, threads, [&](xgboost::omp_ulong i) {
indices_[i] = static_cast<unsigned>(p_indices[i]);
data_[i] = static_cast<float>(p_data[i]);
});
DMatrixHandle handle;
CHECK_CALL(XGDMatrixCreateFromCSREx(BeginPtr(row_ptr_), BeginPtr(indices_),
BeginPtr(data_), nindptr, ndata,
ncol, &handle));
ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
R_API_END();
UNPROTECT(1);
return ret;
}
XGB_DLL SEXP XGDMatrixSliceDMatrix_R(SEXP handle, SEXP idxset) {
SEXP ret; SEXP ret;
R_API_BEGIN(); R_API_BEGIN();
int len = length(idxset); int len = length(idxset);
@@ -147,7 +217,7 @@ SEXP XGDMatrixSliceDMatrix_R(SEXP handle, SEXP idxset) {
return ret; return ret;
} }
SEXP XGDMatrixSaveBinary_R(SEXP handle, SEXP fname, SEXP silent) { XGB_DLL SEXP XGDMatrixSaveBinary_R(SEXP handle, SEXP fname, SEXP silent) {
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGDMatrixSaveBinary(R_ExternalPtrAddr(handle), CHECK_CALL(XGDMatrixSaveBinary(R_ExternalPtrAddr(handle),
CHAR(asChar(fname)), CHAR(asChar(fname)),
@@ -156,42 +226,76 @@ SEXP XGDMatrixSaveBinary_R(SEXP handle, SEXP fname, SEXP silent) {
return R_NilValue; return R_NilValue;
} }
SEXP XGDMatrixSetInfo_R(SEXP handle, SEXP field, SEXP array) { XGB_DLL SEXP XGDMatrixSetInfo_R(SEXP handle, SEXP field, SEXP array) {
R_API_BEGIN(); R_API_BEGIN();
int len = length(array); int len = length(array);
const char *name = CHAR(asChar(field)); const char *name = CHAR(asChar(field));
auto ctx = DMatrixCtx(R_ExternalPtrAddr(handle));
if (!strcmp("group", name)) { if (!strcmp("group", name)) {
std::vector<unsigned> vec(len); std::vector<unsigned> vec(len);
#pragma omp parallel for schedule(static) xgboost::common::ParallelFor(len, ctx->Threads(), [&](xgboost::omp_ulong i) {
for (int i = 0; i < len; ++i) {
vec[i] = static_cast<unsigned>(INTEGER(array)[i]); vec[i] = static_cast<unsigned>(INTEGER(array)[i]);
} });
CHECK_CALL(XGDMatrixSetUIntInfo(R_ExternalPtrAddr(handle), CHECK_CALL(
CHAR(asChar(field)), XGDMatrixSetUIntInfo(R_ExternalPtrAddr(handle), CHAR(asChar(field)), BeginPtr(vec), len));
BeginPtr(vec), len));
} else { } else {
std::vector<float> vec(len); std::vector<float> vec(len);
#pragma omp parallel for schedule(static) xgboost::common::ParallelFor(len, ctx->Threads(),
for (int i = 0; i < len; ++i) { [&](xgboost::omp_ulong i) { vec[i] = REAL(array)[i]; });
vec[i] = REAL(array)[i]; CHECK_CALL(
} XGDMatrixSetFloatInfo(R_ExternalPtrAddr(handle), CHAR(asChar(field)), BeginPtr(vec), len));
CHECK_CALL(XGDMatrixSetFloatInfo(R_ExternalPtrAddr(handle),
CHAR(asChar(field)),
BeginPtr(vec), len));
} }
R_API_END(); R_API_END();
return R_NilValue; return R_NilValue;
} }
SEXP XGDMatrixGetInfo_R(SEXP handle, SEXP field) { XGB_DLL SEXP XGDMatrixSetStrFeatureInfo_R(SEXP handle, SEXP field, SEXP array) {
R_API_BEGIN();
size_t len{0};
if (!isNull(array)) {
len = length(array);
}
const char *name = CHAR(asChar(field));
std::vector<std::string> str_info;
for (size_t i = 0; i < len; ++i) {
str_info.emplace_back(CHAR(asChar(VECTOR_ELT(array, i))));
}
std::vector<char const*> vec(len);
std::transform(str_info.cbegin(), str_info.cend(), vec.begin(),
[](std::string const &str) { return str.c_str(); });
CHECK_CALL(XGDMatrixSetStrFeatureInfo(R_ExternalPtrAddr(handle), name, vec.data(), len));
R_API_END();
return R_NilValue;
}
XGB_DLL SEXP XGDMatrixGetStrFeatureInfo_R(SEXP handle, SEXP field) {
SEXP ret;
R_API_BEGIN();
char const **out_features{nullptr};
bst_ulong len{0};
const char *name = CHAR(asChar(field));
XGDMatrixGetStrFeatureInfo(R_ExternalPtrAddr(handle), name, &len, &out_features);
if (len > 0) {
ret = PROTECT(allocVector(STRSXP, len));
for (size_t i = 0; i < len; ++i) {
SET_STRING_ELT(ret, i, mkChar(out_features[i]));
}
} else {
ret = PROTECT(R_NilValue);
}
R_API_END();
UNPROTECT(1);
return ret;
}
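These two functions back the string feature fields of `setinfo()`/`getinfo()`; the round trip below mirrors the `feature_type` test added later in this change set:

```r
library(xgboost)

data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
# "c" marks a categorical feature, "q" a quantitative one.
ft <- rep(c("c", "q"), length.out = ncol(dtrain))
setinfo(dtrain, "feature_type", ft)
stopifnot(identical(getinfo(dtrain, "feature_type"), ft))
```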
XGB_DLL SEXP XGDMatrixGetInfo_R(SEXP handle, SEXP field) {
SEXP ret; SEXP ret;
R_API_BEGIN(); R_API_BEGIN();
bst_ulong olen; bst_ulong olen;
const float *res; const float *res;
CHECK_CALL(XGDMatrixGetFloatInfo(R_ExternalPtrAddr(handle), CHECK_CALL(XGDMatrixGetFloatInfo(R_ExternalPtrAddr(handle), CHAR(asChar(field)), &olen, &res));
CHAR(asChar(field)),
&olen,
&res));
ret = PROTECT(allocVector(REALSXP, olen)); ret = PROTECT(allocVector(REALSXP, olen));
for (size_t i = 0; i < olen; ++i) { for (size_t i = 0; i < olen; ++i) {
REAL(ret)[i] = res[i]; REAL(ret)[i] = res[i];
@@ -201,7 +305,7 @@ SEXP XGDMatrixGetInfo_R(SEXP handle, SEXP field) {
return ret; return ret;
} }
SEXP XGDMatrixNumRow_R(SEXP handle) { XGB_DLL SEXP XGDMatrixNumRow_R(SEXP handle) {
bst_ulong nrow; bst_ulong nrow;
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGDMatrixNumRow(R_ExternalPtrAddr(handle), &nrow)); CHECK_CALL(XGDMatrixNumRow(R_ExternalPtrAddr(handle), &nrow));
@@ -209,7 +313,7 @@ SEXP XGDMatrixNumRow_R(SEXP handle) {
return ScalarInteger(static_cast<int>(nrow)); return ScalarInteger(static_cast<int>(nrow));
} }
SEXP XGDMatrixNumCol_R(SEXP handle) { XGB_DLL SEXP XGDMatrixNumCol_R(SEXP handle) {
bst_ulong ncol; bst_ulong ncol;
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGDMatrixNumCol(R_ExternalPtrAddr(handle), &ncol)); CHECK_CALL(XGDMatrixNumCol(R_ExternalPtrAddr(handle), &ncol));
@@ -224,7 +328,7 @@ void _BoosterFinalizer(SEXP ext) {
R_ClearExternalPtr(ext); R_ClearExternalPtr(ext);
} }
SEXP XGBoosterCreate_R(SEXP dmats) { XGB_DLL SEXP XGBoosterCreate_R(SEXP dmats) {
SEXP ret; SEXP ret;
R_API_BEGIN(); R_API_BEGIN();
int len = length(dmats); int len = length(dmats);
@@ -241,7 +345,22 @@ SEXP XGBoosterCreate_R(SEXP dmats) {
return ret; return ret;
} }
SEXP XGBoosterSetParam_R(SEXP handle, SEXP name, SEXP val) { XGB_DLL SEXP XGBoosterCreateInEmptyObj_R(SEXP dmats, SEXP R_handle) {
R_API_BEGIN();
int len = length(dmats);
std::vector<void*> dvec;
for (int i = 0; i < len; ++i) {
dvec.push_back(R_ExternalPtrAddr(VECTOR_ELT(dmats, i)));
}
BoosterHandle handle;
CHECK_CALL(XGBoosterCreate(BeginPtr(dvec), dvec.size(), &handle));
R_SetExternalPtrAddr(R_handle, handle);
R_RegisterCFinalizerEx(R_handle, _BoosterFinalizer, TRUE);
R_API_END();
return R_NilValue;
}
XGB_DLL SEXP XGBoosterSetParam_R(SEXP handle, SEXP name, SEXP val) {
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGBoosterSetParam(R_ExternalPtrAddr(handle), CHECK_CALL(XGBoosterSetParam(R_ExternalPtrAddr(handle),
CHAR(asChar(name)), CHAR(asChar(name)),
@@ -250,7 +369,7 @@ SEXP XGBoosterSetParam_R(SEXP handle, SEXP name, SEXP val) {
return R_NilValue; return R_NilValue;
} }
SEXP XGBoosterUpdateOneIter_R(SEXP handle, SEXP iter, SEXP dtrain) { XGB_DLL SEXP XGBoosterUpdateOneIter_R(SEXP handle, SEXP iter, SEXP dtrain) {
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGBoosterUpdateOneIter(R_ExternalPtrAddr(handle), CHECK_CALL(XGBoosterUpdateOneIter(R_ExternalPtrAddr(handle),
asInteger(iter), asInteger(iter),
@@ -259,17 +378,17 @@ SEXP XGBoosterUpdateOneIter_R(SEXP handle, SEXP iter, SEXP dtrain) {
return R_NilValue; return R_NilValue;
} }
SEXP XGBoosterBoostOneIter_R(SEXP handle, SEXP dtrain, SEXP grad, SEXP hess) { XGB_DLL SEXP XGBoosterBoostOneIter_R(SEXP handle, SEXP dtrain, SEXP grad, SEXP hess) {
R_API_BEGIN(); R_API_BEGIN();
CHECK_EQ(length(grad), length(hess)) CHECK_EQ(length(grad), length(hess))
<< "gradient and hess must have same length"; << "gradient and hess must have same length";
int len = length(grad); int len = length(grad);
std::vector<float> tgrad(len), thess(len); std::vector<float> tgrad(len), thess(len);
#pragma omp parallel for schedule(static) auto ctx = BoosterCtx(R_ExternalPtrAddr(handle));
for (int j = 0; j < len; ++j) { xgboost::common::ParallelFor(len, ctx->Threads(), [&](xgboost::omp_ulong j) {
tgrad[j] = REAL(grad)[j]; tgrad[j] = REAL(grad)[j];
thess[j] = REAL(hess)[j]; thess[j] = REAL(hess)[j];
} });
CHECK_CALL(XGBoosterBoostOneIter(R_ExternalPtrAddr(handle), CHECK_CALL(XGBoosterBoostOneIter(R_ExternalPtrAddr(handle),
R_ExternalPtrAddr(dtrain), R_ExternalPtrAddr(dtrain),
BeginPtr(tgrad), BeginPtr(thess), BeginPtr(tgrad), BeginPtr(thess),
@@ -278,7 +397,7 @@ SEXP XGBoosterBoostOneIter_R(SEXP handle, SEXP dtrain, SEXP grad, SEXP hess) {
return R_NilValue; return R_NilValue;
} }
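`XGBoosterBoostOneIter_R` is reached from R through a custom objective passed to `xgb.train()`; the gradient and hessian come back as plain numeric vectors, which the wrapper now copies into `float` buffers with the booster's own thread count. A small sketch, assuming the standard custom-objective interface:

```r
library(xgboost)

data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

# Hand-written logistic objective: returns per-row gradient and hessian.
logreg_obj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  p <- 1 / (1 + exp(-preds))
  list(grad = p - labels, hess = p * (1 - p))
}

bst <- xgb.train(params = list(max_depth = 2), data = dtrain,
                 nrounds = 2, obj = logreg_obj)
```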
SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames) { XGB_DLL SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames) {
const char *ret; const char *ret;
R_API_BEGIN(); R_API_BEGIN();
CHECK_EQ(length(dmats), length(evnames)) CHECK_EQ(length(dmats), length(evnames))
@@ -303,8 +422,8 @@ SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames) {
return mkString(ret); return mkString(ret);
} }
SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask, XGB_DLL SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask,
SEXP ntree_limit, SEXP training) { SEXP ntree_limit, SEXP training) {
SEXP ret; SEXP ret;
R_API_BEGIN(); R_API_BEGIN();
bst_ulong olen; bst_ulong olen;
@@ -324,36 +443,59 @@ SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask,
return ret; return ret;
} }
SEXP XGBoosterLoadModel_R(SEXP handle, SEXP fname) { XGB_DLL SEXP XGBoosterPredictFromDMatrix_R(SEXP handle, SEXP dmat, SEXP json_config) {
SEXP r_out_shape;
SEXP r_out_result;
SEXP r_out;
R_API_BEGIN();
char const *c_json_config = CHAR(asChar(json_config));
bst_ulong out_dim;
bst_ulong const *out_shape;
float const *out_result;
CHECK_CALL(XGBoosterPredictFromDMatrix(R_ExternalPtrAddr(handle),
R_ExternalPtrAddr(dmat), c_json_config,
&out_shape, &out_dim, &out_result));
r_out_shape = PROTECT(allocVector(INTSXP, out_dim));
size_t len = 1;
for (size_t i = 0; i < out_dim; ++i) {
INTEGER(r_out_shape)[i] = out_shape[i];
len *= out_shape[i];
}
r_out_result = PROTECT(allocVector(REALSXP, len));
auto ctx = BoosterCtx(R_ExternalPtrAddr(handle));
xgboost::common::ParallelFor(len, ctx->Threads(), [&](xgboost::omp_ulong i) {
REAL(r_out_result)[i] = out_result[i];
});
r_out = PROTECT(allocVector(VECSXP, 2));
SET_VECTOR_ELT(r_out, 0, r_out_shape);
SET_VECTOR_ELT(r_out, 1, r_out_result);
R_API_END();
UNPROTECT(3);
return r_out;
}
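The shape vector returned alongside the flat result buffer is what `strict_shape = TRUE` exposes on the R side (see the tests further down). A quick sketch:

```r
library(xgboost)

data(agaricus.train, package = "xgboost")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               nrounds = 2, objective = "binary:logistic", verbose = 0)
predt <- predict(bst, agaricus.train$data, strict_shape = TRUE)
dim(predt)  # c(1, n_rows): dims come from out_shape, values from out_result
```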
XGB_DLL SEXP XGBoosterLoadModel_R(SEXP handle, SEXP fname) {
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGBoosterLoadModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname)))); CHECK_CALL(XGBoosterLoadModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname))));
R_API_END(); R_API_END();
return R_NilValue; return R_NilValue;
} }
SEXP XGBoosterSaveModel_R(SEXP handle, SEXP fname) { XGB_DLL SEXP XGBoosterSaveModel_R(SEXP handle, SEXP fname) {
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGBoosterSaveModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname)))); CHECK_CALL(XGBoosterSaveModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname))));
R_API_END(); R_API_END();
return R_NilValue; return R_NilValue;
} }
SEXP XGBoosterModelToRaw_R(SEXP handle) { XGB_DLL SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw) {
SEXP ret;
R_API_BEGIN();
bst_ulong olen;
const char *raw;
CHECK_CALL(XGBoosterGetModelRaw(R_ExternalPtrAddr(handle), &olen, &raw));
ret = PROTECT(allocVector(RAWSXP, olen));
if (olen != 0) {
memcpy(RAW(ret), raw, olen);
}
R_API_END();
UNPROTECT(1);
return ret;
}
SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw) {
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGBoosterLoadModelFromBuffer(R_ExternalPtrAddr(handle), CHECK_CALL(XGBoosterLoadModelFromBuffer(R_ExternalPtrAddr(handle),
RAW(raw), RAW(raw),
@@ -362,7 +504,23 @@ SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw) {
return R_NilValue; return R_NilValue;
} }
SEXP XGBoosterSaveJsonConfig_R(SEXP handle) { XGB_DLL SEXP XGBoosterSaveModelToRaw_R(SEXP handle, SEXP json_config) {
SEXP ret;
R_API_BEGIN();
bst_ulong olen;
char const *c_json_config = CHAR(asChar(json_config));
char const *raw;
CHECK_CALL(XGBoosterSaveModelToBuffer(R_ExternalPtrAddr(handle), c_json_config, &olen, &raw));
ret = PROTECT(allocVector(RAWSXP, olen));
if (olen != 0) {
std::memcpy(RAW(ret), raw, olen);
}
R_API_END();
UNPROTECT(1);
return ret;
}
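`XGBoosterSaveModelToRaw_R` replaces the old `XGBoosterModelToRaw_R` and adds a serialization-format choice. Its R entry point is `xgb.save.raw()` with `raw_format`, as the model-IO test below exercises:

```r
library(xgboost)

data(agaricus.train, package = "xgboost")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               nrounds = 2, objective = "binary:logistic", verbose = 0)

json_bytes <- xgb.save.raw(bst, raw_format = "json")
ubj_bytes  <- xgb.save.raw(bst, raw_format = "ubj")
bst2 <- xgb.load.raw(json_bytes, as_booster = TRUE)  # round trip
```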
XGB_DLL SEXP XGBoosterSaveJsonConfig_R(SEXP handle) {
const char* ret; const char* ret;
R_API_BEGIN(); R_API_BEGIN();
bst_ulong len {0}; bst_ulong len {0};
@@ -373,14 +531,14 @@ SEXP XGBoosterSaveJsonConfig_R(SEXP handle) {
return mkString(ret); return mkString(ret);
} }
SEXP XGBoosterLoadJsonConfig_R(SEXP handle, SEXP value) { XGB_DLL SEXP XGBoosterLoadJsonConfig_R(SEXP handle, SEXP value) {
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGBoosterLoadJsonConfig(R_ExternalPtrAddr(handle), CHAR(asChar(value)))); CHECK_CALL(XGBoosterLoadJsonConfig(R_ExternalPtrAddr(handle), CHAR(asChar(value))));
R_API_END(); R_API_END();
return R_NilValue; return R_NilValue;
} }
SEXP XGBoosterSerializeToBuffer_R(SEXP handle) { XGB_DLL SEXP XGBoosterSerializeToBuffer_R(SEXP handle) {
SEXP ret; SEXP ret;
R_API_BEGIN(); R_API_BEGIN();
bst_ulong out_len; bst_ulong out_len;
@@ -395,7 +553,7 @@ SEXP XGBoosterSerializeToBuffer_R(SEXP handle) {
return ret; return ret;
} }
SEXP XGBoosterUnserializeFromBuffer_R(SEXP handle, SEXP raw) { XGB_DLL SEXP XGBoosterUnserializeFromBuffer_R(SEXP handle, SEXP raw) {
R_API_BEGIN(); R_API_BEGIN();
CHECK_CALL(XGBoosterUnserializeFromBuffer(R_ExternalPtrAddr(handle), CHECK_CALL(XGBoosterUnserializeFromBuffer(R_ExternalPtrAddr(handle),
RAW(raw), RAW(raw),
@@ -404,7 +562,7 @@ SEXP XGBoosterUnserializeFromBuffer_R(SEXP handle, SEXP raw) {
return R_NilValue; return R_NilValue;
} }
SEXP XGBoosterDumpModel_R(SEXP handle, SEXP fmap, SEXP with_stats, SEXP dump_format) { XGB_DLL SEXP XGBoosterDumpModel_R(SEXP handle, SEXP fmap, SEXP with_stats, SEXP dump_format) {
SEXP out; SEXP out;
R_API_BEGIN(); R_API_BEGIN();
bst_ulong olen; bst_ulong olen;
@@ -441,7 +599,7 @@ SEXP XGBoosterDumpModel_R(SEXP handle, SEXP fmap, SEXP with_stats, SEXP dump_for
return out; return out;
} }
SEXP XGBoosterGetAttr_R(SEXP handle, SEXP name) { XGB_DLL SEXP XGBoosterGetAttr_R(SEXP handle, SEXP name) {
SEXP out; SEXP out;
R_API_BEGIN(); R_API_BEGIN();
int success; int success;
@@ -461,7 +619,7 @@ SEXP XGBoosterGetAttr_R(SEXP handle, SEXP name) {
return out; return out;
} }
SEXP XGBoosterSetAttr_R(SEXP handle, SEXP name, SEXP val) { XGB_DLL SEXP XGBoosterSetAttr_R(SEXP handle, SEXP name, SEXP val) {
R_API_BEGIN(); R_API_BEGIN();
const char *v = isNull(val) ? nullptr : CHAR(asChar(val)); const char *v = isNull(val) ? nullptr : CHAR(asChar(val));
CHECK_CALL(XGBoosterSetAttr(R_ExternalPtrAddr(handle), CHECK_CALL(XGBoosterSetAttr(R_ExternalPtrAddr(handle),
@@ -470,7 +628,7 @@ SEXP XGBoosterSetAttr_R(SEXP handle, SEXP name, SEXP val) {
return R_NilValue; return R_NilValue;
} }
SEXP XGBoosterGetAttrNames_R(SEXP handle) { XGB_DLL SEXP XGBoosterGetAttrNames_R(SEXP handle) {
SEXP out; SEXP out;
R_API_BEGIN(); R_API_BEGIN();
bst_ulong len; bst_ulong len;
@@ -489,3 +647,50 @@ SEXP XGBoosterGetAttrNames_R(SEXP handle) {
UNPROTECT(1); UNPROTECT(1);
return out; return out;
} }
XGB_DLL SEXP XGBoosterFeatureScore_R(SEXP handle, SEXP json_config) {
SEXP out_features_sexp;
SEXP out_scores_sexp;
SEXP out_shape_sexp;
SEXP r_out;
R_API_BEGIN();
char const *c_json_config = CHAR(asChar(json_config));
bst_ulong out_n_features;
char const **out_features;
bst_ulong out_dim;
bst_ulong const *out_shape;
float const *out_scores;
CHECK_CALL(XGBoosterFeatureScore(R_ExternalPtrAddr(handle), c_json_config,
&out_n_features, &out_features,
&out_dim, &out_shape, &out_scores));
out_shape_sexp = PROTECT(allocVector(INTSXP, out_dim));
size_t len = 1;
for (size_t i = 0; i < out_dim; ++i) {
INTEGER(out_shape_sexp)[i] = out_shape[i];
len *= out_shape[i];
}
out_scores_sexp = PROTECT(allocVector(REALSXP, len));
auto ctx = BoosterCtx(R_ExternalPtrAddr(handle));
xgboost::common::ParallelFor(len, ctx->Threads(), [&](xgboost::omp_ulong i) {
REAL(out_scores_sexp)[i] = out_scores[i];
});
out_features_sexp = PROTECT(allocVector(STRSXP, out_n_features));
for (size_t i = 0; i < out_n_features; ++i) {
SET_STRING_ELT(out_features_sexp, i, mkChar(out_features[i]));
}
r_out = PROTECT(allocVector(VECSXP, 3));
SET_VECTOR_ELT(r_out, 0, out_features_sexp);
SET_VECTOR_ELT(r_out, 1, out_shape_sexp);
SET_VECTOR_ELT(r_out, 2, out_scores_sexp);
R_API_END();
UNPROTECT(4);
return r_out;
}
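A hedged usage sketch, assuming `xgb.importance()` is the R-level consumer of `XGBoosterFeatureScore_R` (it produces the Gain/Cover/Frequency table the helper tests check):

```r
library(xgboost)

data(agaricus.train, package = "xgboost")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               nrounds = 2, objective = "binary:logistic", verbose = 0)
imp <- xgb.importance(model = bst)
head(imp)  # one row per feature: Gain, Cover, Frequency
```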
@@ -1,5 +1,5 @@
/*! /*!
* Copyright 2014 (c) by Contributors * Copyright 2014-2022 by XGBoost Contributors
* \file xgboost_R.h * \file xgboost_R.h
* \author Tianqi Chen * \author Tianqi Chen
* \brief R wrapper of xgboost * \brief R wrapper of xgboost
@@ -21,6 +21,19 @@
*/ */
XGB_DLL SEXP XGCheckNullPtr_R(SEXP handle); XGB_DLL SEXP XGCheckNullPtr_R(SEXP handle);
/*!
* \brief Set global configuration
* \param json_str a JSON string representing the list of key-value pairs
* \return R_NilValue
*/
XGB_DLL SEXP XGBSetGlobalConfig_R(SEXP json_str);
/*!
* \brief Get global configuration
* \return JSON string
*/
XGB_DLL SEXP XGBGetGlobalConfig_R();
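On the R side these map to `xgb.set.config()`/`xgb.get.config()`, which round-trip the configuration through the JSON string declared here; the new test file below does the same:

```r
library(xgboost)

old <- xgb.get.config()$verbosity
xgb.set.config(verbosity = 2)
stopifnot(xgb.get.config()$verbosity == 2)
xgb.set.config(verbosity = old)  # restore the previous setting
```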
/*! /*!
* \brief load a data matrix * \brief load a data matrix
* \param fname name of the content * \param fname name of the content
@@ -34,22 +47,35 @@ XGB_DLL SEXP XGDMatrixCreateFromFile_R(SEXP fname, SEXP silent);
* This assumes the matrix is stored in column major format * This assumes the matrix is stored in column major format
* \param data R Matrix object * \param data R Matrix object
* \param missing the value used to represent missing values * \param missing the value used to represent missing values
* \param n_threads Number of threads used to construct the DMatrix from a dense matrix.
* \return created dmatrix * \return created dmatrix
*/ */
XGB_DLL SEXP XGDMatrixCreateFromMat_R(SEXP mat, XGB_DLL SEXP XGDMatrixCreateFromMat_R(SEXP mat,
SEXP missing); SEXP missing,
SEXP n_threads);
/*! /*!
* \brief create a matrix content from CSC format * \brief create a matrix content from CSC format
* \param indptr pointer to column headers * \param indptr pointer to column headers
* \param indices row indices * \param indices row indices
* \param data content of the data * \param data content of the data
* \param num_row number of rows (when set to 0, it is guessed from the data) * \param num_row number of rows (when set to 0, it is guessed from the data)
* \param n_threads Number of threads used to construct the DMatrix from a CSC matrix.
* \return created dmatrix * \return created dmatrix
*/ */
XGB_DLL SEXP XGDMatrixCreateFromCSC_R(SEXP indptr, XGB_DLL SEXP XGDMatrixCreateFromCSC_R(SEXP indptr, SEXP indices, SEXP data, SEXP num_row,
SEXP indices, SEXP n_threads);
SEXP data,
SEXP num_row); /*!
* \brief create a matrix content from CSR format
* \param indptr pointer to row headers
* \param indices column indices
* \param data content of the data
* \param num_col number of columns (when set to 0, it is guessed from the data)
* \param n_threads Number of threads used to construct the DMatrix from a CSR matrix.
* \return created dmatrix
*/
XGB_DLL SEXP XGDMatrixCreateFromCSR_R(SEXP indptr, SEXP indices, SEXP data, SEXP num_col,
SEXP n_threads);
/*! /*!
* \brief create a new dmatrix from sliced content of existing matrix * \brief create a new dmatrix from sliced content of existing matrix
@@ -103,6 +129,14 @@ XGB_DLL SEXP XGDMatrixNumCol_R(SEXP handle);
*/ */
XGB_DLL SEXP XGBoosterCreate_R(SEXP dmats); XGB_DLL SEXP XGBoosterCreate_R(SEXP dmats);
/*!
* \brief create xgboost learner, saving the pointer into an existing R object
* \param dmats a list of dmatrix handles that will be cached
* \param R_handle a clean R external pointer (not holding any object)
*/
XGB_DLL SEXP XGBoosterCreateInEmptyObj_R(SEXP dmats, SEXP R_handle);
/*! /*!
* \brief set parameters * \brief set parameters
* \param handle handle * \param handle handle
@@ -143,7 +177,7 @@ XGB_DLL SEXP XGBoosterBoostOneIter_R(SEXP handle, SEXP dtrain, SEXP grad, SEXP h
XGB_DLL SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames); XGB_DLL SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames);
/*! /*!
* \brief make prediction based on dmat * \brief (Deprecated) make prediction based on dmat
* \param handle handle * \param handle handle
* \param dmat data matrix * \param dmat data matrix
* \param option_mask output_margin:1 predict_leaf:2 * \param option_mask output_margin:1 predict_leaf:2
@@ -152,6 +186,16 @@ XGB_DLL SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evn
*/ */
XGB_DLL SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask, XGB_DLL SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask,
SEXP ntree_limit, SEXP training); SEXP ntree_limit, SEXP training);
/*!
* \brief Run prediction on DMatrix, replacing `XGBoosterPredict_R`
* \param handle handle
* \param dmat data matrix
* \param json_config See `XGBoosterPredictFromDMatrix` in xgboost c_api.h
*
* \return A list containing 2 vectors: the first holds the output shape, the second the prediction results.
*/
XGB_DLL SEXP XGBoosterPredictFromDMatrix_R(SEXP handle, SEXP dmat, SEXP json_config);
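For reference, a hedged example of the `json_config` payload; the keys follow the `XGBoosterPredictFromDMatrix` documentation in `c_api.h`, and the values here are purely illustrative:

```r
# type 0 requests normal prediction; strict_shape asks for the full output shape.
config <- paste0('{"type": 0, "training": false, ',
                 '"iteration_begin": 0, "iteration_end": 0, ',
                 '"strict_shape": true}')
```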
/*! /*!
* \brief load model from existing file * \brief load model from existing file
* \param handle handle * \param handle handle
@@ -176,11 +220,21 @@ XGB_DLL SEXP XGBoosterSaveModel_R(SEXP handle, SEXP fname);
XGB_DLL SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw); XGB_DLL SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw);
/*! /*!
* \brief save model into R's raw array * \brief Save model into R's raw array
*
* \param handle handle * \param handle handle
* \return raw array * \param json_config JSON encoded string storing parameters for the function. Following
* keys are expected in the JSON document:
*
* "format": str
* - json: Output booster will be encoded as JSON.
 * - ubj: Output booster will be encoded as Universal Binary JSON.
 * - deprecated: Output booster will be encoded as the old custom binary format. Do not use
* this format except for compatibility reasons.
*
* \return Raw array
*/ */
XGB_DLL SEXP XGBoosterModelToRaw_R(SEXP handle); XGB_DLL SEXP XGBoosterSaveModelToRaw_R(SEXP handle, SEXP json_config);
/*! /*!
* \brief Save internal parameters as a JSON string * \brief Save internal parameters as a JSON string
@@ -244,4 +298,12 @@ XGB_DLL SEXP XGBoosterSetAttr_R(SEXP handle, SEXP name, SEXP val);
*/ */
XGB_DLL SEXP XGBoosterGetAttrNames_R(SEXP handle); XGB_DLL SEXP XGBoosterGetAttrNames_R(SEXP handle);
/*!
* \brief Get feature scores from the model.
* \param json_config See `XGBoosterFeatureScore` in xgboost c_api.h
* \return A list with the feature names as the first element, the shape of the
* feature scores as the second, and the feature scores themselves as the third.
*/
XGB_DLL SEXP XGBoosterFeatureScore_R(SEXP handle, SEXP json_config);
#endif // XGBOOST_WRAPPER_R_H_ // NOLINT(*) #endif // XGBOOST_WRAPPER_R_H_ // NOLINT(*)
@@ -1,26 +0,0 @@
// Copyright (c) 2014 by Contributors
#include <stdio.h>
#include <stdarg.h>
#include <Rinternals.h>
// implements error handling
void XGBoostAssert_R(int exp, const char *fmt, ...) {
char buf[1024];
if (exp == 0) {
va_list args;
va_start(args, fmt);
vsprintf(buf, fmt, args);
va_end(args);
error("AssertError:%s\n", buf);
}
}
void XGBoostCheck_R(int exp, const char *fmt, ...) {
char buf[1024];
if (exp == 0) {
va_list args;
va_start(args, fmt);
vsprintf(buf, fmt, args);
va_end(args);
error("%s\n", buf);
}
}
@@ -16,7 +16,7 @@ void CustomLogMessage::Log(const std::string& msg) {
namespace xgboost { namespace xgboost {
ConsoleLogger::~ConsoleLogger() { ConsoleLogger::~ConsoleLogger() {
if (cur_verbosity_ == LogVerbosity::kIgnore || if (cur_verbosity_ == LogVerbosity::kIgnore ||
cur_verbosity_ <= global_verbosity_) { cur_verbosity_ <= GlobalVerbosity()) {
dmlc::CustomLogMessage::Log(log_stream_.str()); dmlc::CustomLogMessage::Log(log_stream_.str());
} }
} }
@@ -13,7 +13,7 @@ my_linters <- list(
object_usage_linter = lintr::object_usage_linter, object_usage_linter = lintr::object_usage_linter,
object_length_linter = lintr::object_length_linter, object_length_linter = lintr::object_length_linter,
open_curly_linter = lintr::open_curly_linter, open_curly_linter = lintr::open_curly_linter,
semicolon = lintr::semicolon_terminator_linter, semicolon = lintr::semicolon_terminator_linter(semicolon = c("compound", "trailing")),
seq = lintr::seq_linter, seq = lintr::seq_linter,
spaces_inside_linter = lintr::spaces_inside_linter, spaces_inside_linter = lintr::spaces_inside_linter,
spaces_left_parentheses_linter = lintr::spaces_left_parentheses_linter, spaces_left_parentheses_linter = lintr::spaces_left_parentheses_linter,
@@ -1,4 +1,5 @@
require(xgboost) require(xgboost)
library(Matrix)
context("basic functions") context("basic functions")
@@ -34,6 +35,10 @@ test_that("train and predict binary classification", {
err_pred1 <- sum((pred1 > 0.5) != train$label) / length(train$label) err_pred1 <- sum((pred1 > 0.5) != train$label) / length(train$label)
err_log <- bst$evaluation_log[1, train_error] err_log <- bst$evaluation_log[1, train_error]
expect_lt(abs(err_pred1 - err_log), 10e-6) expect_lt(abs(err_pred1 - err_log), 10e-6)
pred2 <- predict(bst, train$data, iterationrange = c(1, 2))
expect_length(pred1, 6513)
expect_equal(pred1, pred2)
}) })
test_that("parameter validation works", { test_that("parameter validation works", {
@@ -66,7 +71,7 @@ test_that("parameter validation works", {
xgb.train(params = params, data = dtrain, nrounds = nrounds)) xgb.train(params = params, data = dtrain, nrounds = nrounds))
print(output) print(output)
} }
expect_output(incorrect(), "bar, foo") expect_output(incorrect(), '\\\\"bar\\\\", \\\\"foo\\\\"')
}) })
@@ -143,6 +148,24 @@ test_that("train and predict softprob", {
pred_labels <- max.col(mpred) - 1 pred_labels <- max.col(mpred) - 1
err <- sum(pred_labels != lb) / length(lb) err <- sum(pred_labels != lb) / length(lb)
expect_equal(bst$evaluation_log[1, train_merror], err, tolerance = 5e-6) expect_equal(bst$evaluation_log[1, train_merror], err, tolerance = 5e-6)
mpred1 <- predict(bst, as.matrix(iris[, -5]), reshape = TRUE, iterationrange = c(1, 2))
expect_equal(mpred, mpred1)
d <- cbind(
x1 = rnorm(100),
x2 = rnorm(100),
x3 = rnorm(100)
)
y <- sample.int(10, 100, replace = TRUE) - 1
dtrain <- xgb.DMatrix(data = d, info = list(label = y))
booster <- xgb.train(
params = list(tree_method = "hist"), data = dtrain, nrounds = 4, num_class = 10,
objective = "multi:softprob"
)
predt <- predict(booster, as.matrix(d), reshape = TRUE, strict_shape = FALSE)
expect_equal(ncol(predt), 10)
expect_equal(rowSums(predt), rep(1, 100), tolerance = 1e-7)
}) })
test_that("train and predict softmax", { test_that("train and predict softmax", {
@@ -182,10 +205,8 @@ test_that("train and predict RF", {
pred_err_20 <- sum((pred > 0.5) != lb) / length(lb) pred_err_20 <- sum((pred > 0.5) != lb) / length(lb)
expect_equal(pred_err_20, pred_err) expect_equal(pred_err_20, pred_err)
#pred <- predict(bst, train$data, ntreelimit = 1) pred1 <- predict(bst, train$data, iterationrange = c(1, 2))
#pred_err_1 <- sum((pred > 0.5) != lb)/length(lb) expect_equal(pred, pred1)
#expect_lt(pred_err, pred_err_1)
#expect_lt(pred_err, 0.08)
}) })
test_that("train and predict RF with softprob", { test_that("train and predict RF with softprob", {
@@ -331,7 +352,7 @@ test_that("train and predict with non-strict classes", {
expect_error(pr <- predict(bst, train_dense), regexp = NA) expect_error(pr <- predict(bst, train_dense), regexp = NA)
expect_equal(pr0, pr) expect_equal(pr0, pr)
# when someone inhertis from xgb.Booster, it should still be possible to use it as xgb.Booster # when someone inherits from xgb.Booster, it should still be possible to use it as xgb.Booster
class(bst) <- c('super.Booster', 'xgb.Booster') class(bst) <- c('super.Booster', 'xgb.Booster')
expect_error(pr <- predict(bst, train_dense), regexp = NA) expect_error(pr <- predict(bst, train_dense), regexp = NA)
expect_equal(pr0, pr) expect_equal(pr0, pr)
@@ -346,7 +367,7 @@ test_that("max_delta_step works", {
bst1 <- xgb.train(param, dtrain, nrounds, watchlist, verbose = 1) bst1 <- xgb.train(param, dtrain, nrounds, watchlist, verbose = 1)
# model with restricted max_delta_step # model with restricted max_delta_step
bst2 <- xgb.train(param, dtrain, nrounds, watchlist, verbose = 1, max_delta_step = 1) bst2 <- xgb.train(param, dtrain, nrounds, watchlist, verbose = 1, max_delta_step = 1)
# the no-restriction model is expected to have consistently lower loss during the initial interations # the no-restriction model is expected to have consistently lower loss during the initial iterations
expect_true(all(bst1$evaluation_log$train_logloss < bst2$evaluation_log$train_logloss)) expect_true(all(bst1$evaluation_log$train_logloss < bst2$evaluation_log$train_logloss))
expect_lt(mean(bst1$evaluation_log$train_logloss) / mean(bst2$evaluation_log$train_logloss), 0.8) expect_lt(mean(bst1$evaluation_log$train_logloss) / mean(bst2$evaluation_log$train_logloss), 0.8)
}) })
@@ -385,3 +406,72 @@ test_that("Configuration works", {
reloaded_config <- xgb.config(bst) reloaded_config <- xgb.config(bst)
expect_equal(config, reloaded_config); expect_equal(config, reloaded_config);
}) })
test_that("strict_shape works", {
n_rounds <- 2
test_strict_shape <- function(bst, X, n_groups) {
predt <- predict(bst, X, strict_shape = TRUE)
margin <- predict(bst, X, outputmargin = TRUE, strict_shape = TRUE)
contri <- predict(bst, X, predcontrib = TRUE, strict_shape = TRUE)
interact <- predict(bst, X, predinteraction = TRUE, strict_shape = TRUE)
leaf <- predict(bst, X, predleaf = TRUE, strict_shape = TRUE)
n_rows <- nrow(X)
n_cols <- ncol(X)
expect_equal(dim(predt), c(n_groups, n_rows))
expect_equal(dim(margin), c(n_groups, n_rows))
expect_equal(dim(contri), c(n_cols + 1, n_groups, n_rows))
expect_equal(dim(interact), c(n_cols + 1, n_cols + 1, n_groups, n_rows))
expect_equal(dim(leaf), c(1, n_groups, n_rounds, n_rows))
if (n_groups != 1) {
for (g in seq_len(n_groups)) {
expect_lt(max(abs(colSums(contri[, g, ]) - margin[g, ])), 1e-5)
}
}
}
test_iris <- function() {
y <- as.numeric(iris$Species) - 1
X <- as.matrix(iris[, -5])
bst <- xgboost(data = X, label = y,
max_depth = 2, nrounds = n_rounds,
objective = "multi:softprob", num_class = 3, eval_metric = "merror")
test_strict_shape(bst, X, 3)
}
test_agaricus <- function() {
data(agaricus.train, package = 'xgboost')
X <- agaricus.train$data
y <- agaricus.train$label
bst <- xgboost(data = X, label = y, max_depth = 2,
nrounds = n_rounds, objective = "binary:logistic",
eval_metric = 'error', eval_metric = 'auc', eval_metric = "logloss")
test_strict_shape(bst, X, 1)
}
test_iris()
test_agaricus()
})
test_that("'predict' accepts CSR data", {
X <- agaricus.train$data
y <- agaricus.train$label
x_csc <- as(X[1L, , drop = FALSE], "CsparseMatrix")
x_csr <- as(x_csc, "RsparseMatrix")
x_spv <- as(x_csc, "sparseVector")
bst <- xgboost(data = X, label = y, objective = "binary:logistic",
nrounds = 5L, verbose = FALSE)
p_csc <- predict(bst, x_csc)
p_csr <- predict(bst, x_csr)
p_spv <- predict(bst, x_spv)
expect_equal(p_csc, p_csr)
expect_equal(p_csc, p_spv)
})
@@ -0,0 +1,21 @@
context('Test global configuration')
test_that('Global configuration works with verbosity', {
old_verbosity <- xgb.get.config()$verbosity
for (v in c(0, 1, 2, 3)) {
xgb.set.config(verbosity = v)
expect_equal(xgb.get.config()$verbosity, v)
}
xgb.set.config(verbosity = old_verbosity)
expect_equal(xgb.get.config()$verbosity, old_verbosity)
})
test_that('Global configuration works with use_rmm flag', {
old_use_rmm_flag <- xgb.get.config()$use_rmm
for (v in c(TRUE, FALSE)) {
xgb.set.config(use_rmm = v)
expect_equal(xgb.get.config()$use_rmm, v)
}
xgb.set.config(use_rmm = old_use_rmm_flag)
expect_equal(xgb.get.config()$use_rmm, old_use_rmm_flag)
})
@@ -27,6 +27,7 @@ test_that("xgb.DMatrix: saving, loading", {
# save to a local file # save to a local file
dtest1 <- xgb.DMatrix(test_data, label = test_label) dtest1 <- xgb.DMatrix(test_data, label = test_label)
tmp_file <- tempfile('xgb.DMatrix_') tmp_file <- tempfile('xgb.DMatrix_')
on.exit(unlink(tmp_file))
expect_true(xgb.DMatrix.save(dtest1, tmp_file)) expect_true(xgb.DMatrix.save(dtest1, tmp_file))
# read from a local file # read from a local file
expect_output(dtest3 <- xgb.DMatrix(tmp_file), "entries loaded from") expect_output(dtest3 <- xgb.DMatrix(tmp_file), "entries loaded from")
@@ -41,7 +42,20 @@ test_that("xgb.DMatrix: saving, loading", {
dtest4 <- xgb.DMatrix(tmp_file, silent = TRUE) dtest4 <- xgb.DMatrix(tmp_file, silent = TRUE)
expect_equal(dim(dtest4), c(3, 4)) expect_equal(dim(dtest4), c(3, 4))
expect_equal(getinfo(dtest4, 'label'), c(0, 1, 0)) expect_equal(getinfo(dtest4, 'label'), c(0, 1, 0))
unlink(tmp_file)
# check that feature info is saved
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
cnames <- colnames(dtrain)
expect_equal(length(cnames), 126)
tmp_file <- tempfile('xgb.DMatrix_')
xgb.DMatrix.save(dtrain, tmp_file)
dtrain <- xgb.DMatrix(tmp_file)
expect_equal(colnames(dtrain), cnames)
ft <- rep(c("c", "q"), each = length(cnames) / 2)
setinfo(dtrain, "feature_type", ft)
expect_equal(ft, getinfo(dtrain, "feature_type"))
}) })
test_that("xgb.DMatrix: getinfo & setinfo", { test_that("xgb.DMatrix: getinfo & setinfo", {
@@ -0,0 +1,27 @@
library(xgboost)
context("feature weights")
test_that("training with feature weights works", {
nrows <- 1000
ncols <- 9
set.seed(2022)
x <- matrix(rnorm(nrows * ncols), nrow = nrows)
y <- rowSums(x)
weights <- seq(from = 1, to = ncols)
test <- function(tm) {
names <- paste0("f", 1:ncols)
xy <- xgb.DMatrix(data = x, label = y, feature_weights = weights)
params <- list(colsample_bynode = 0.4, tree_method = tm, nthread = 1)
model <- xgb.train(params = params, data = xy, nrounds = 32)
importance <- xgb.importance(model = model, feature_names = names)
expect_equal(dim(importance), c(ncols, 4))
importance <- importance[order(importance$Feature)]
expect_lt(importance[1, Frequency], importance[9, Frequency])
}
for (tm in c("hist", "approx", "exact")) {
test(tm)
}
})
@@ -46,3 +46,31 @@ test_that("gblinear works", {
expect_equal(dim(h), c(n, ncol(dtrain) + 1)) expect_equal(dim(h), c(n, ncol(dtrain) + 1))
expect_s4_class(h, "dgCMatrix") expect_s4_class(h, "dgCMatrix")
}) })
test_that("gblinear early stopping works", {
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
param <- list(
objective = "binary:logistic", eval_metric = "error", booster = "gblinear",
nthread = 2, eta = 0.8, alpha = 0.0001, lambda = 0.0001,
updater = "coord_descent"
)
es_round <- 1
n <- 10
booster <- xgb.train(
param, dtrain, n, list(eval = dtest, train = dtrain), early_stopping_rounds = es_round
)
expect_equal(booster$best_iteration, 5)
predt_es <- predict(booster, dtrain)
n <- booster$best_iteration + es_round
booster <- xgb.train(
param, dtrain, n, list(eval = dtest, train = dtrain), early_stopping_rounds = es_round
)
predt <- predict(booster, dtrain)
expect_equal(predt_es, predt)
})
@@ -1,3 +1,4 @@
library(testthat)
context('Test helper functions') context('Test helper functions')
require(xgboost) require(xgboost)
@@ -110,7 +111,7 @@ test_that("predict feature contributions works", {
pred <- predict(bst.GLM, sparse_matrix, outputmargin = TRUE) pred <- predict(bst.GLM, sparse_matrix, outputmargin = TRUE)
expect_lt(max(abs(rowSums(pred_contr) - pred)), 1e-5) expect_lt(max(abs(rowSums(pred_contr) - pred)), 1e-5)
# manual calculation of linear terms # manual calculation of linear terms
coefs <- xgb.dump(bst.GLM)[-c(1, 2, 4)] %>% as.numeric coefs <- as.numeric(xgb.dump(bst.GLM)[-c(1, 2, 4)])
coefs <- c(coefs[-1], coefs[1]) # intercept must be the last coefs <- c(coefs[-1], coefs[1]) # intercept must be the last
pred_contr_manual <- sweep(cbind(sparse_matrix, 1), 2, coefs, FUN = "*") pred_contr_manual <- sweep(cbind(sparse_matrix, 1), 2, coefs, FUN = "*")
expect_equal(as.numeric(pred_contr), as.numeric(pred_contr_manual), expect_equal(as.numeric(pred_contr), as.numeric(pred_contr_manual),
@@ -130,7 +131,11 @@ test_that("predict feature contributions works", {
pred <- predict(mbst.GLM, as.matrix(iris[, -5]), outputmargin = TRUE, reshape = TRUE) pred <- predict(mbst.GLM, as.matrix(iris[, -5]), outputmargin = TRUE, reshape = TRUE)
pred_contr <- predict(mbst.GLM, as.matrix(iris[, -5]), predcontrib = TRUE) pred_contr <- predict(mbst.GLM, as.matrix(iris[, -5]), predcontrib = TRUE)
expect_length(pred_contr, 3) expect_length(pred_contr, 3)
coefs_all <- xgb.dump(mbst.GLM)[-c(1, 2, 6)] %>% as.numeric %>% matrix(ncol = 3, byrow = TRUE) coefs_all <- matrix(
data = as.numeric(xgb.dump(mbst.GLM)[-c(1, 2, 6)]),
ncol = 3,
byrow = TRUE
)
for (g in seq_along(pred_contr)) { for (g in seq_along(pred_contr)) {
expect_equal(colnames(pred_contr[[g]]), c(colnames(iris[, -5]), "BIAS")) expect_equal(colnames(pred_contr[[g]]), c(colnames(iris[, -5]), "BIAS"))
expect_lt(max(abs(rowSums(pred_contr[[g]]) - pred[, g])), float_tolerance) expect_lt(max(abs(rowSums(pred_contr[[g]]) - pred[, g])), float_tolerance)
@@ -223,7 +228,7 @@ if (grepl('Windows', Sys.info()[['sysname']]) ||
X <- 10^runif(100, -20, 20) X <- 10^runif(100, -20, 20)
if (capabilities('long.double')) { if (capabilities('long.double')) {
X2X <- as.numeric(format(X, digits = 17)) X2X <- as.numeric(format(X, digits = 17))
expect_identical(X, X2X) expect_equal(X, X2X, tolerance = float_tolerance)
} }
# retrieved attributes to be the same as written # retrieved attributes to be the same as written
for (x in X) { for (x in X) {
@@ -238,12 +243,13 @@ if (grepl('Windows', Sys.info()[['sysname']]) ||
test_that("xgb.Booster serializing as R object works", { test_that("xgb.Booster serializing as R object works", {
saveRDS(bst.Tree, 'xgb.model.rds') saveRDS(bst.Tree, 'xgb.model.rds')
bst <- readRDS('xgb.model.rds') bst <- readRDS('xgb.model.rds')
if (file.exists('xgb.model.rds')) file.remove('xgb.model.rds')
dtrain <- xgb.DMatrix(sparse_matrix, label = label) dtrain <- xgb.DMatrix(sparse_matrix, label = label)
expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain), tolerance = float_tolerance) expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain), tolerance = float_tolerance)
expect_equal(xgb.dump(bst.Tree), xgb.dump(bst)) expect_equal(xgb.dump(bst.Tree), xgb.dump(bst))
xgb.save(bst, 'xgb.model') xgb.save(bst, 'xgb.model')
if (file.exists('xgb.model')) file.remove('xgb.model') if (file.exists('xgb.model')) file.remove('xgb.model')
bst <- readRDS('xgb.model.rds')
if (file.exists('xgb.model.rds')) file.remove('xgb.model.rds')
nil_ptr <- new("externalptr") nil_ptr <- new("externalptr")
class(nil_ptr) <- "xgb.Booster.handle" class(nil_ptr) <- "xgb.Booster.handle"
expect_true(identical(bst$handle, nil_ptr)) expect_true(identical(bst$handle, nil_ptr))
@@ -305,7 +311,45 @@ test_that("xgb.importance works with and without feature names", {
# for multiclass # for multiclass
imp.Tree <- xgb.importance(model = mbst.Tree) imp.Tree <- xgb.importance(model = mbst.Tree)
expect_equal(dim(imp.Tree), c(4, 4)) expect_equal(dim(imp.Tree), c(4, 4))
xgb.importance(model = mbst.Tree, trees = seq(from = 0, by = nclass, length.out = nrounds))
trees <- seq(from = 0, by = 2, length.out = 2)
importance <- xgb.importance(feature_names = feature.names, model = bst.Tree, trees = trees)
importance_from_dump <- function() {
model_text_dump <- xgb.dump(model = bst.Tree, with_stats = TRUE, trees = trees)
imp <- xgb.model.dt.tree(
feature_names = feature.names,
text = model_text_dump,
trees = trees
)[
Feature != "Leaf", .(
Gain = sum(Quality),
Cover = sum(Cover),
Frequency = .N
),
by = Feature
][
, `:=`(
Gain = Gain / sum(Gain),
Cover = Cover / sum(Cover),
Frequency = Frequency / sum(Frequency)
)
][
order(Gain, decreasing = TRUE)
]
imp
}
expect_equal(importance_from_dump(), importance, tolerance = 1e-6)
## decision stump
m <- xgboost::xgboost(
data = as.matrix(data.frame(x = c(0, 1))),
label = c(1, 2),
nrounds = 1
)
df <- xgb.model.dt.tree(model = m)
expect_equal(df$Feature, "Leaf")
expect_equal(df$Cover, 2)
}) })
test_that("xgb.importance works with GLM model", { test_that("xgb.importance works with GLM model", {
@@ -1,7 +1,6 @@
context('Test prediction of feature interactions') context('Test prediction of feature interactions')
require(xgboost) require(xgboost)
require(magrittr)
set.seed(123) set.seed(123)
@@ -32,7 +31,7 @@ test_that("predict feature interactions works", {
cont <- predict(b, dm, predcontrib = TRUE) cont <- predict(b, dm, predcontrib = TRUE)
expect_equal(dim(cont), c(N, P + 1)) expect_equal(dim(cont), c(N, P + 1))
# make sure for each row they add up to marginal predictions # make sure for each row they add up to marginal predictions
max(abs(rowSums(cont) - pred)) %>% expect_lt(0.001) expect_lt(max(abs(rowSums(cont) - pred)), 0.001)
# Hand-construct the 'ground truth' feature contributions: # Hand-construct the 'ground truth' feature contributions:
gt_cont <- cbind( gt_cont <- cbind(
2. * X[, 1], 2. * X[, 1],
@@ -52,21 +51,24 @@ test_that("predict feature interactions works", {
expect_equal(dimnames(intr), list(NULL, cn, cn)) expect_equal(dimnames(intr), list(NULL, cn, cn))
# check the symmetry # check the symmetry
max(abs(aperm(intr, c(1, 3, 2)) - intr)) %>% expect_lt(0.00001) expect_lt(max(abs(aperm(intr, c(1, 3, 2)) - intr)), 0.00001)
# sums WRT columns must be close to feature contributions # sums WRT columns must be close to feature contributions
max(abs(apply(intr, c(1, 2), sum) - cont)) %>% expect_lt(0.00001) expect_lt(max(abs(apply(intr, c(1, 2), sum) - cont)), 0.00001)
# diagonal terms for features 3,4,5 must be close to zero # diagonal terms for features 3,4,5 must be close to zero
Reduce(max, sapply(3:P, function(i) max(abs(intr[, i, i])))) %>% expect_lt(0.05) expect_lt(Reduce(max, sapply(3:P, function(i) max(abs(intr[, i, i])))), 0.05)
# BIAS must have no interactions # BIAS must have no interactions
max(abs(intr[, 1:P, P + 1])) %>% expect_lt(0.00001) expect_lt(max(abs(intr[, 1:P, P + 1])), 0.00001)
# interactions other than 2 x 3 must be close to zero # interactions other than 2 x 3 must be close to zero
intr23 <- intr intr23 <- intr
intr23[, 2, 3] <- 0 intr23[, 2, 3] <- 0
Reduce(max, sapply(1:P, function(i) max(abs(intr23[, i, (i + 1):(P + 1)])))) %>% expect_lt(0.05) expect_lt(
Reduce(max, sapply(1:P, function(i) max(abs(intr23[, i, (i + 1):(P + 1)])))),
0.05
)
# Construct the 'ground truth' contributions of interactions directly from the linear terms: # Construct the 'ground truth' contributions of interactions directly from the linear terms:
gt_intr <- array(0, c(N, P + 1, P + 1)) gt_intr <- array(0, c(N, P + 1, P + 1))
@@ -119,23 +121,64 @@ test_that("multiclass feature interactions work", {
dm <- xgb.DMatrix(as.matrix(iris[, -5]), label = as.numeric(iris$Species) - 1) dm <- xgb.DMatrix(as.matrix(iris[, -5]), label = as.numeric(iris$Species) - 1)
param <- list(eta = 0.1, max_depth = 4, objective = 'multi:softprob', num_class = 3) param <- list(eta = 0.1, max_depth = 4, objective = 'multi:softprob', num_class = 3)
b <- xgb.train(param, dm, 40) b <- xgb.train(param, dm, 40)
pred <- predict(b, dm, outputmargin = TRUE) %>% array(c(3, 150)) %>% t pred <- t(
array(
data = predict(b, dm, outputmargin = TRUE),
dim = c(3, 150)
)
)
# SHAP contributions: # SHAP contributions:
cont <- predict(b, dm, predcontrib = TRUE) cont <- predict(b, dm, predcontrib = TRUE)
expect_length(cont, 3) expect_length(cont, 3)
# rewrap them as a 3d array # rewrap them as a 3d array
cont <- unlist(cont) %>% array(c(150, 5, 3)) cont <- array(
data = unlist(cont),
dim = c(150, 5, 3)
)
# make sure for each row they add up to marginal predictions # make sure for each row they add up to marginal predictions
max(abs(apply(cont, c(1, 3), sum) - pred)) %>% expect_lt(0.001) expect_lt(max(abs(apply(cont, c(1, 3), sum) - pred)), 0.001)
# SHAP interaction contributions: # SHAP interaction contributions:
intr <- predict(b, dm, predinteraction = TRUE) intr <- predict(b, dm, predinteraction = TRUE)
expect_length(intr, 3) expect_length(intr, 3)
# rewrap them as a 4d array # rewrap them as a 4d array
intr <- unlist(intr) %>% array(c(150, 5, 5, 3)) %>% aperm(c(4, 1, 2, 3)) # [grp, row, col, col] intr <- aperm(
a = array(
data = unlist(intr),
dim = c(150, 5, 5, 3)
),
perm = c(4, 1, 2, 3) # [grp, row, col, col]
)
# check the symmetry # check the symmetry
max(abs(aperm(intr, c(1, 2, 4, 3)) - intr)) %>% expect_lt(0.00001) expect_lt(max(abs(aperm(intr, c(1, 2, 4, 3)) - intr)), 0.00001)
# sums WRT columns must be close to feature contributions # sums WRT columns must be close to feature contributions
max(abs(apply(intr, c(1, 2, 3), sum) - aperm(cont, c(3, 1, 2)))) %>% expect_lt(0.00001) expect_lt(max(abs(apply(intr, c(1, 2, 3), sum) - aperm(cont, c(3, 1, 2)))), 0.00001)
})
test_that("SHAP single sample works", {
train <- agaricus.train
test <- agaricus.test
booster <- xgboost(
data = train$data,
label = train$label,
max_depth = 2,
nrounds = 4,
objective = "binary:logistic",
)
predt <- predict(
booster,
newdata = train$data[1, , drop = FALSE], predcontrib = TRUE
)
expect_equal(dim(predt), c(1, dim(train$data)[2] + 1))
predt <- predict(
booster,
newdata = train$data[1, , drop = FALSE], predinteraction = TRUE
)
expect_equal(dim(predt), c(1, dim(train$data)[2] + 1, dim(train$data)[2] + 1))
}) })
@@ -0,0 +1,30 @@
context("Test model IO.")
## some other tests are in test_basic.R
require(xgboost)
require(testthat)
data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
train <- agaricus.train
test <- agaricus.test
test_that("load/save raw works", {
nrounds <- 8
booster <- xgboost(
data = train$data, label = train$label,
nrounds = nrounds, objective = "binary:logistic"
)
json_bytes <- xgb.save.raw(booster, raw_format = "json")
ubj_bytes <- xgb.save.raw(booster, raw_format = "ubj")
old_bytes <- xgb.save.raw(booster, raw_format = "deprecated")
from_json <- xgb.load.raw(json_bytes, as_booster = TRUE)
from_ubj <- xgb.load.raw(ubj_bytes, as_booster = TRUE)
json2old <- xgb.save.raw(from_json, raw_format = "deprecated")
ubj2old <- xgb.save.raw(from_ubj, raw_format = "deprecated")
expect_equal(json2old, ubj2old)
expect_equal(json2old, old_bytes)
})
@@ -77,12 +77,14 @@ test_that("Models from previous versions of XGBoost can be loaded", {
model_xgb_ver <- m[2] model_xgb_ver <- m[2]
name <- m[3] name <- m[3]
is_rds <- endsWith(model_file, '.rds') is_rds <- endsWith(model_file, '.rds')
is_json <- endsWith(model_file, '.json')
cpp_warning <- capture.output({ cpp_warning <- capture.output({
# Expect an R warning when a model is loaded from RDS and it was generated by version < 1.1.x # Expect an R warning when a model is loaded from RDS and it was generated by version < 1.1.x
if (is_rds && compareVersion(model_xgb_ver, '1.1.1.1') < 0) { if (is_rds && compareVersion(model_xgb_ver, '1.1.1.1') < 0) {
booster <- readRDS(model_file) booster <- readRDS(model_file)
expect_warning(predict(booster, newdata = pred_data)) expect_warning(predict(booster, newdata = pred_data))
booster <- readRDS(model_file)
expect_warning(run_booster_check(booster, name)) expect_warning(run_booster_check(booster, name))
} else { } else {
if (is_rds) { if (is_rds) {
@@ -94,15 +96,13 @@ test_that("Models from previous versions of XGBoost can be loaded", {
run_booster_check(booster, name) run_booster_check(booster, name)
} }
}) })
if (compareVersion(model_xgb_ver, '1.0.0.0') < 0) { cpp_warning <- paste0(cpp_warning, collapse = ' ')
# Expect a C++ warning when a model was generated in version < 1.0.x if (is_rds && compareVersion(model_xgb_ver, '1.1.1.1') >= 0) {
m <- grepl(paste0('.*Loading model from XGBoost < 1\\.0\\.0, consider saving it again for ', # Expect a C++ warning when a model is loaded from RDS and it was generated by old XGBoost
'improved compatibility.*'), cpp_warning, perl = TRUE) m <- grepl(paste0('.*If you are loading a serialized model ',
expect_true(length(m) > 0 && all(m)) '\\(like pickle in Python, RDS in R\\).*',
} else if (is_rds && model_xgb_ver == '1.1.1.1') { 'for more details about differences between ',
# Expect a C++ warning when a model is loaded from RDS and it was generated by version 1.1.x 'saving model and serializing.*'), cpp_warning, perl = TRUE)
m <- grepl(paste0('.*Attempted to load internal configuration for a model file that was ',
'generated by a previous version of XGBoost.*'), cpp_warning, perl = TRUE)
expect_true(length(m) > 0 && all(m)) expect_true(length(m) > 0 && all(m))
} }
}) })
@@ -19,5 +19,5 @@ test_that("monotone constraints for regression", {
pred.ord <- pred[ind] pred.ord <- pred[ind]
expect_true({ expect_true({
!any(diff(pred.ord) > 0) !any(diff(pred.ord) > 0)
}, "Monotone Contraint Satisfied") }, "Monotone constraint satisfied")
}) })
@@ -1,9 +1,9 @@
context('Test poisson regression model') context('Test Poisson regression model')
require(xgboost) require(xgboost)
set.seed(1994) set.seed(1994)
test_that("poisson regression works", { test_that("Poisson regression works", {
data(mtcars) data(mtcars)
bst <- xgboost(data = as.matrix(mtcars[, -11]), label = mtcars[, 11], bst <- xgboost(data = as.matrix(mtcars[, -11]), label = mtcars[, 11],
objective = 'count:poisson', nrounds = 10, verbose = 0) objective = 'count:poisson', nrounds = 10, verbose = 0)
@@ -1,5 +1,5 @@
--- ---
title: "Understand your dataset with Xgboost" title: "Understand your dataset with XGBoost"
output: output:
rmarkdown::html_vignette: rmarkdown::html_vignette:
css: vignette.css css: vignette.css
@@ -18,9 +18,9 @@ Understand your dataset with XGBoost
Introduction Introduction
------------ ------------
The purpose of this vignette is to show you how to use **Xgboost** to discover and understand your own dataset better. The purpose of this vignette is to show you how to use **XGBoost** to discover and understand your own dataset better.
This vignette is not about predicting anything (see [Xgboost presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)). We will explain how to use **Xgboost** to highlight the *link* between the *features* of your data and the *outcome*. This vignette is not about predicting anything (see [XGBoost presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)). We will explain how to use **XGBoost** to highlight the *link* between the *features* of your data and the *outcome*.
Package loading: Package loading:
@@ -39,7 +39,7 @@ Preparation of the dataset
### Numeric vs. categorical variables ### Numeric vs. categorical variables
**Xgboost** manages only `numeric` vectors. **XGBoost** manages only `numeric` vectors.
What to do when you have *categorical* data? What to do when you have *categorical* data?
@@ -66,7 +66,7 @@ data(Arthritis)
df <- data.table(Arthritis, keep.rownames = FALSE) df <- data.table(Arthritis, keep.rownames = FALSE)
``` ```
> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance for large datasets is [best in class](https://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `Pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of the **Xgboost** **R** package use `data.table`. > `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance for large datasets is [best in class](https://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `Pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of the **XGBoost** **R** package use `data.table`.
The first thing we want to do is to have a look at the first few lines of the `data.table`: The first thing we want to do is to have a look at the first few lines of the `data.table`:
@@ -138,7 +138,7 @@ levels(df[,Treatment])
Next step, we will transform the categorical data to dummy variables. Next step, we will transform the categorical data to dummy variables.
Several encoding methods exist, e.g., [one-hot encoding](https://en.wikipedia.org/wiki/One-hot) is a common approach. Several encoding methods exist, e.g., [one-hot encoding](https://en.wikipedia.org/wiki/One-hot) is a common approach.
We will use the [dummy contrast coding](https://stats.idre.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/) which is popular because it produces "full rank" encoding (also see [this blog post by Max Kuhn](http://appliedpredictivemodeling.com/blog/2013/10/23/the-basics-of-encoding-categorical-data-for-predictive-models)). We will use the [dummy contrast coding](https://stats.oarc.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/) which is popular because it produces "full rank" encoding (also see [this blog post by Max Kuhn](http://appliedpredictivemodeling.com/blog/2013/10/23/the-basics-of-encoding-categorical-data-for-predictive-models)).
The purpose is to transform each value of each *categorical* feature into a *binary* feature `{0, 1}`. The purpose is to transform each value of each *categorical* feature into a *binary* feature `{0, 1}`.
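A sketch of that transformation, roughly the vignette's next step (the `Arthritis` data and `df` come from the chunks above):

```r
library(Matrix)
library(data.table)

data(Arthritis, package = "vcd")
df <- data.table(Arthritis, keep.rownames = FALSE)
# Dummy-encode every categorical column; drop the intercept column.
sparse_matrix <- sparse.model.matrix(Improved ~ ., data = df)[, -1]
```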
@@ -166,7 +166,7 @@ output_vector = df[,Improved] == "Marked"
Build the model Build the model
--------------- ---------------
The code below is fairly standard. For more information, you can look at the documentation of the `xgboost` function (or at the vignette [Xgboost presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)). The code below is fairly standard. For more information, you can look at the documentation of the `xgboost` function (or at the vignette [XGBoost presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)).
```{r} ```{r}
bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 4, bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 4,
@@ -176,7 +176,7 @@ bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 4,
You can see `train-error: 0.XXXXX` lines in the output; each one shows how well the model explains your data, and lower is better. You can see `train-error: 0.XXXXX` lines in the output; each one shows how well the model explains your data, and lower is better.
A model which fits too well may [overfit](https://en.wikipedia.org/wiki/Overfitting) (meaning it copy/paste too much the past, and won't be that good to predict the future). A small value for training error may be a symptom of [overfitting](https://en.wikipedia.org/wiki/Overfitting), meaning the model will not accurately predict the future values.
> Here you can see the numbers decrease until line 7 and then increase. > Here you can see the numbers decrease until line 7 and then increase.
> >
@@ -304,19 +304,19 @@ Linear model may not be that smart in this scenario.
Special Note: What about Random Forests™? Special Note: What about Random Forests™?
----------------------------------------- -----------------------------------------
As you may know, the [Random Forests](https://en.wikipedia.org/wiki/Random_forest) algorithm is a cousin of boosting; both are part of the [ensemble learning](https://en.wikipedia.org/wiki/Ensemble_learning) family. As you may know, the [Random Forests](https://en.wikipedia.org/wiki/Random_forest) algorithm is a cousin of boosting; both are part of the [ensemble learning](https://en.wikipedia.org/wiki/Ensemble_learning) family.
Both train several decision trees on one dataset. The *main* difference is that in Random Forests the trees are independent, while in boosting tree `N+1` focuses its learning on the loss (i.e., what was not well modeled by tree `N`). Both train several decision trees on one dataset. The *main* difference is that in Random Forests the trees are independent, while in boosting tree `N+1` focuses its learning on the loss (i.e., what was not well modeled by tree `N`).
This difference has an impact on a corner case of feature importance analysis: *correlated features*. This difference has an impact on a corner case of feature importance analysis: *correlated features*.
Imagine two features perfectly correlated, feature `A` and feature `B`. For one specific tree, if the algorithm needs one of them, it will choose randomly (true in both boosting and Random Forests). Imagine two features perfectly correlated, feature `A` and feature `B`. For one specific tree, if the algorithm needs one of them, it will choose randomly (true in both boosting and Random Forests).
However, in Random Forests this random choice is made for each tree, because each tree is independent of the others. Therefore, approximately (depending on your parameters), 50% of the trees will choose feature `A` and the other 50% will choose feature `B`, so the *importance* of the information contained in `A` and `B` (which is the same, because they are perfectly correlated) is diluted between `A` and `B`. You won't easily see that this information is important for your prediction! It is even worse when you have 10 correlated features... However, in Random Forests this random choice is made for each tree, because each tree is independent of the others. Therefore, approximately (depending on your parameters), 50% of the trees will choose feature `A` and the other 50% will choose feature `B`, so the *importance* of the information contained in `A` and `B` (which is the same, because they are perfectly correlated) is diluted between `A` and `B`. You won't easily see that this information is important for your prediction! It is even worse when you have 10 correlated features...
In boosting, once a specific link between a feature and the outcome has been learned, the algorithm tries not to refocus on it (that is the theory; reality is not always that simple). Therefore, all the importance will land on feature `A` or on feature `B` (but not both). You will know that one feature plays an important role in the link between the observations and the label; it is still up to you to search for the features correlated with the one detected as important, if you need to know all of them. In boosting, once a specific link between a feature and the outcome has been learned, the algorithm tries not to refocus on it (that is the theory; reality is not always that simple). Therefore, all the importance will land on feature `A` or on feature `B` (but not both). You will know that one feature plays an important role in the link between the observations and the label; it is still up to you to search for the features correlated with the one detected as important, if you need to know all of them.
If you want to try the Random Forests algorithm, you can tweak XGBoost parameters!
For instance, to compute a model with 1000 trees, with a 0.5 sampling factor on both rows and columns:
@@ -326,7 +326,7 @@ data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
# Random Forest - 1000 trees
bst <- xgboost(data = train$data, label = train$label, max_depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree = 0.5, nrounds = 1, objective = "binary:logistic")
# Boosting - 3 rounds
@@ -335,4 +335,4 @@ bst <- xgboost(data = train$data, label = train$label, max_depth = 4, nrounds =
> Note that the parameter `nrounds` is set to `1`.
> [**Random Forests**](https://www.stat.berkeley.edu/~breiman/RandomForests/cc_papers.htm) is a trademark of Leo Breiman and Adele Cutler and is licensed exclusively to Salford Systems for the commercial release of the software.
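Nothing prevents combining the two ideas either. A short sketch (assuming the same `train` object as above): several parallel trees per round *and* several boosting rounds give a boosted random forest.

```r
# Boosted random forest: 10 trees per boosting round, 3 rounds (sketch)
bst_brf <- xgboost(data = train$data, label = train$label, max_depth = 4,
                   num_parallel_tree = 10, subsample = 0.5, colsample_bytree = 0.5,
                   nrounds = 3, objective = "binary:logistic")
```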


@@ -1,5 +1,5 @@
---
title: "XGBoost presentation"
output:
  rmarkdown::html_vignette:
    css: vignette.css
@@ -8,7 +8,7 @@ output:
bibliography: xgboost.bib
author: Tianqi Chen, Tong He, Michaël Benesty
vignette: >
  %\VignetteIndexEntry{XGBoost presentation}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---
@@ -19,9 +19,9 @@ XGBoost R Tutorial
## Introduction
**XGBoost** is short for e**X**treme **G**radient **Boost**ing.
The purpose of this vignette is to show you how to use **XGBoost** to build a model and make predictions.
It is an efficient and scalable implementation of the gradient boosting framework by @friedman2000additive and @friedman2001greedy. Two solvers are included:
@@ -46,10 +46,10 @@ It has several features:
## Installation
### GitHub version
For the weekly updated version (highly recommended), install from *GitHub*:
```{r installGithub, eval=FALSE}
install.packages("drat", repos="https://cran.rstudio.com")
@@ -82,7 +82,7 @@ require(xgboost)
### Dataset presentation
In this example, we are aiming to predict whether a mushroom can be eaten or not (like in many tutorials, the example data are the same as the data you will use in your everyday life :-).
Mushroom data is cited from the UCI Machine Learning Repository [@Bache+Lichman:2013].
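For context, the dataset ships with the package and is loaded roughly as in the snippet below (consistent with the `agaricus` objects used throughout this tutorial):

```r
library(xgboost)
data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
train <- agaricus.train  # list with a sparse feature matrix ($data) and labels ($label)
test <- agaricus.test
```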
@@ -148,7 +148,7 @@ We will train decision tree model using the following parameters:
* `objective = "binary:logistic"`: we will train a binary classification model ; * `objective = "binary:logistic"`: we will train a binary classification model ;
* `max_depth = 2`: the trees won't be deep, because our case is very simple ; * `max_depth = 2`: the trees won't be deep, because our case is very simple ;
* `nthread = 2`: the number of cpu threads we are going to use; * `nthread = 2`: the number of CPU threads we are going to use;
* `nrounds = 2`: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction. * `nrounds = 2`: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction.
```{r trainingSparse, message=F, warning=F}
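# A sketch of the training call these parameters describe, assuming the
# vignette's `train` object loaded from the agaricus data above:
bstSparse <- xgboost(data = train$data, label = train$label, max_depth = 2,
                     eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")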
@@ -180,7 +180,7 @@ bstDMatrix <- xgboost(data = dtrain, max_depth = 2, eta = 1, nthread = 2, nround
**XGBoost** has several features to help you see how the learning progresses internally. The purpose is to help you set the best parameters, which is key to your model's quality.
One of the simplest ways to see the training progress is to set the `verbose` option (see below for more advanced techniques).
```{r trainingVerbose0, message=T, warning=F}
# verbose = 0, no message
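# (sketch, assuming the vignette's `dtrain` DMatrix; verbose = 1 would print
# the evaluation metric, and verbose = 2 also prints information about the trees)
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nthread = 2,
               nrounds = 2, objective = "binary:logistic", verbose = 0)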
@@ -253,7 +253,7 @@ The most important thing to remember is that **to do a classification, you just
*Multiclass* classification works in a similar way.
This metric is **`r round(err, 2)`** and is pretty low: our yummy mushroom model works well!
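For reference, the error rate above is the share of wrong hard predictions at a 0.5 threshold; a sketch of how such an `err` value is obtained (assuming `bst`, `test$data` and `test$label` as in this tutorial):

```r
pred <- predict(bst, test$data)
err <- mean(as.numeric(pred > 0.5) != test$label)
print(paste("test-error=", err))
```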
## Advanced features


@@ -16,7 +16,7 @@ XGBoost from JSON
## Introduction
The purpose of this vignette is to show you how to correctly load and work with an **XGBoost** model that has been dumped to JSON. **XGBoost** internally converts all data to [32-bit floats](https://en.wikipedia.org/wiki/Single-precision_floating-point_format), and the values dumped to JSON are decimal representations of these values. When working with a model that has been parsed from a JSON file, care must be taken to correctly treat:
- the input data, which should be converted to 32-bit floats
- any 32-bit floats that were stored in JSON as decimal representations
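The `fl()` helper used later in this vignette comes from the R [float](https://cran.r-project.org/package=float) package; a minimal sketch (my illustration) of the 32-bit casting involved:

```r
library(float)
x <- c(0.1, 0.2, 0.3)
xf <- fl(x)    # cast doubles down to 32-bit floats
dbl(xf) == x   # FALSE: these values are not exactly representable in 32 bits
```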
@@ -172,9 +172,9 @@ bst_from_json_preds <- ifelse(fl(data$dates)<fl(node$split_condition),
bst_preds == bst_from_json_preds
```
None are exactly equal again. What is going on here? Well, since we are using the value `1` in the calculations, we have introduced a double into the calculation. Because of this, all float values are promoted to 64-bit doubles and the 64-bit version of the exponentiation operator `exp` is also used. On the other hand, **XGBoost** uses the 32-bit version of the exponentiation operator in its [sigmoid function](https://github.com/dmlc/xgboost/blob/54980b8959680a0da06a3fc0ec776e47c8cbb0a1/src/common/math.h#L25-L27).
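The promotion is easy to reproduce in isolation; a small sketch (my illustration, using the same `float` package):

```r
library(float)
x <- fl(0.5)   # a 32-bit float
exp(x)         # stays 32-bit: the float version of exp is used
exp(x) + 1     # `1` is a double, so the result is promoted to 64 bits
```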
How do we fix this? We have to ensure we use the correct data types and the correct operators everywhere. If we use only floats, the float library that we have loaded will ensure the 32-bit float exponentiation operator is applied.
```{r}
# calculate the predictions casting doubles to floats
bst_from_json_preds <- ifelse(fl(data$dates)<fl(node$split_condition),


@@ -2,14 +2,15 @@
===========
[![Build Status](https://xgboost-ci.net/job/xgboost/job/master/badge/icon)](https://xgboost-ci.net/blue/organizations/jenkins/xgboost/activity)
[![Build Status](https://img.shields.io/travis/dmlc/xgboost.svg?label=build&logo=travis&branch=master)](https://travis-ci.org/dmlc/xgboost)
[![XGBoost-CI](https://github.com/dmlc/xgboost/workflows/XGBoost-CI/badge.svg?branch=master)](https://github.com/dmlc/xgboost/actions)
[![Documentation Status](https://readthedocs.org/projects/xgboost/badge/?version=latest)](https://xgboost.readthedocs.org)
[![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE)
[![CRAN Status Badge](http://www.r-pkg.org/badges/version/xgboost)](http://cran.r-project.org/web/packages/xgboost)
[![PyPI version](https://badge.fury.io/py/xgboost.svg)](https://pypi.python.org/pypi/xgboost/)
[![Conda version](https://img.shields.io/conda/vn/conda-forge/py-xgboost.svg)](https://anaconda.org/conda-forge/py-xgboost)
[![Optuna](https://img.shields.io/badge/Optuna-integrated-blue)](https://optuna.org)
[![Twitter](https://img.shields.io/badge/@XGBoostProject--_.svg?style=social&logo=twitter)](https://twitter.com/XGBoostProject)
[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/dmlc/xgboost/badge)](https://api.securityscorecards.dev/projects/github.com/dmlc/xgboost)
[Community](https://xgboost.ai/community) |
[Documentation](https://xgboost.readthedocs.org) |
@@ -24,7 +25,7 @@ The same code runs on major distributed environment (Kubernetes, Hadoop, SGE, MP
License
-------
© Contributors, 2021. Licensed under an [Apache-2](https://github.com/dmlc/xgboost/blob/master/LICENSE) license.
Contribute to XGBoost
---------------------
@@ -46,24 +47,11 @@ Become a sponsor and get a logo here. See details at [Sponsoring the XGBoost Pro
### Sponsors
[[Become a sponsor](https://opencollective.com/xgboost#sponsor)]
<a href="https://www.nvidia.com/en-us/" target="_blank"><img src="https://raw.githubusercontent.com/xgboost-ai/xgboost-ai.github.io/master/images/sponsors/nvidia.jpg" alt="NVIDIA" width="72" height="72"></a>
<a href="https://www.intel.com/" target="_blank"><img src="https://images.opencollective.com/intel-corporation/2fa85c1/logo/256.png" width="72" height="72"></a>
<a href="https://getkoffie.com/?utm_source=opencollective&utm_medium=github&utm_campaign=xgboost" target="_blank"><img src="https://images.opencollective.com/koffielabs/f391ab8/logo/256.png" width="72" height="72"></a>
### Backers
[[Become a backer](https://opencollective.com/xgboost#backer)]
<a href="https://opencollective.com/xgboost#backers" target="_blank"><img src="https://opencollective.com/xgboost/backers.svg?width=890"></a>
## Other sponsors
The sponsors in this list are donating cloud hours in lieu of a cash donation.
<a href="https://aws.amazon.com/" target="_blank"><img src="https://raw.githubusercontent.com/xgboost-ai/xgboost-ai.github.io/master/images/sponsors/aws.png" alt="Amazon Web Services" width="72" height="72"></a>
