Compare commits


817 Commits

Author SHA1 Message Date
fis
3e343159ef Release patch release 1.3.2 2021-01-13 17:35:00 +08:00
Jiaming Yuan
99e802f2ff Remove duplicated DMatrix. (#6592) (#6599) 2021-01-13 04:44:06 +08:00
Jiaming Yuan
6a29afb480 Fix evaluation result for XGBRanker. (#6594) (#6600)
* Remove duplicated code, which fixes typo `evals_result` -> `evals_result_`.
2021-01-13 04:42:43 +08:00
Jiaming Yuan
8e321adac8 Support Solaris. (#6578) (#6588)
* Add system header.

* Remove use of TR1 on Solaris

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2021-01-11 02:31:29 +08:00
Jiaming Yuan
d0ec65520a [backport] Fix best_ntree_limit for dart and gblinear. (#6579) (#6587)
* [backport] Fix `best_ntree_limit` for dart and gblinear. (#6579)

* Backport num group test fix.
2021-01-11 01:46:05 +08:00
Jiaming Yuan
7aec915dcd [Backport] Rename data to X in predict_proba. (#6555) (#6586)
* [Breaking] Rename `data` to `X` in `predict_proba`. (#6555)

Newer scikit-learn versions use keyword arguments, and `X` is the predefined
keyword (see the sketch after this entry).

* Use pip to install latest Python graphviz on Windows CI.

* Suppress health check.
2021-01-10 16:05:17 +08:00
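A minimal usage sketch of the renamed argument from #6555 above, assuming the 1.3 scikit-learn wrapper (the synthetic data is illustrative only):

    import numpy as np
    import xgboost as xgb

    X = np.random.rand(100, 4)
    y = np.random.randint(2, size=100)
    clf = xgb.XGBClassifier().fit(X, y)
    # The first parameter is now named `X`, matching scikit-learn's keyword:
    proba = clf.predict_proba(X=X)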
Philip Hyunsu Cho
a78d0d4110 Release patch release 1.3.1 (#6543) 2020-12-21 23:22:32 -08:00
Jiaming Yuan
76c361431f Remove cupy.array_equal, since it's not compatible with cuPy 7.8 (#6528) (#6535)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-12-20 15:11:50 +08:00
Jiaming Yuan
d95d02132a Fix handling of print period in EvaluationMonitor (#6499) (#6534)
Co-authored-by: Kirill Shvets <kirill.shvets@intel.com>

Co-authored-by: ShvetsKS <33296480+ShvetsKS@users.noreply.github.com>
Co-authored-by: Kirill Shvets <kirill.shvets@intel.com>
2020-12-20 15:07:42 +08:00
Jiaming Yuan
7109c6c1f2 [backport] Move metric configuration into booster. (#6504) (#6533) 2020-12-20 10:36:32 +08:00
Jiaming Yuan
bce7ca313c [backport] Fix save_best. (#6523) 2020-12-18 20:00:29 +08:00
Jiaming Yuan
8be2cd8c91 Enable loading model from <1.0.0 trained with objective='binary:logitraw' (#6517) (#6524)
* Enable loading model from <1.0.0 trained with objective='binary:logitraw'

* Add binary:logitraw in model compatibility testing suite

* Feedback from @trivialfis: Override ProbToMargin() for LogisticRaw

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-12-18 04:10:09 +08:00
Philip Hyunsu Cho
c5f0cdbc72 Hot fix for libgomp vendoring (#6482)
* Hot fix for libgomp vendoring

* Set post0 in setup.py
2020-12-09 10:04:45 -08:00
Jiaming Yuan
1bf3899983 Fix dask ip resolution. (#6475)
This adopts the solution used in dask/dask-xgboost#40, which employs get_host_ip from the dmlc-core tracker.
2020-12-07 16:38:16 -08:00
Jiaming Yuan
c39f6b25f0 Fix filtering callable objects in skl xgb param. (#6466)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-12-07 16:38:16 -08:00
Philip Hyunsu Cho
2b3e301543 [CI] Fix CentOS 6 Docker images (#6467) 2020-12-07 16:38:16 -08:00
Hyunsu Cho
10d3419fa6 Release 1.3.0 2020-12-03 21:35:09 -08:00
Philip Hyunsu Cho
b273e5bd4c Vendor libgomp in the manylinux Python wheel (#6461)
* Vendor libgomp in the manylinux2014_aarch64 wheel

* Use vault repo, since CentOS 6 has reached End-of-Life on Nov 30

* Vendor libgomp in the manylinux2010_x86_64 wheel

* Run verification step inside the container
2020-12-03 21:29:40 -08:00
Philip Hyunsu Cho
3a83fcb0eb Enforce row-major order in cuPy array (#6459) 2020-12-03 21:29:24 -08:00
hzy001
3efc4ea0d1 Fix broken links. (#6455)
Co-authored-by: Hao Ziyu <haoziyu@qiyi.com>
Co-authored-by: fis <jm.yuan@outlook.com>
2020-12-03 21:29:03 -08:00
Jiaming Yuan
a2c778e2d1 Fix period in evaluation monitor. (#6441) 2020-12-03 21:28:45 -08:00
Jiaming Yuan
8a0db293c5 Fix CLI ranking demo. (#6439)
Save model at final round.
2020-12-03 21:28:28 -08:00
Honza Sterba
028ec5f028 Optionally fail when gpu_id is set to an invalid value (#6342) 2020-12-03 21:27:58 -08:00
ShvetsKS
38c80bcec4 Thread local memory allocation for BuildHist (#6358)
* thread mem locality

* fix apply

* cleanup

* fix lint

* fix tests

* simple try

* fix

* fix

* apply comments

* fix comments

* fix

* apply simple comment

Co-authored-by: ShvetsKS <kirill.shvets@intel.com>
2020-12-03 21:27:31 -08:00
Philip Hyunsu Cho
16ff63905d [CI] Upgrade cuDF and RMM to 0.17 nightlies (#6434) 2020-12-03 21:27:01 -08:00
Philip Hyunsu Cho
a9b09919f9 [R] Fix R package installation via CMake (#6423) 2020-12-03 21:26:29 -08:00
Hyunsu Cho
f3b060401a Release 1.3.0 RC1 2020-11-21 11:36:08 -08:00
Jiaming Yuan
42d31d9dcb Fix MPI build. (#6403) 2020-11-21 13:38:21 +08:00
Jiaming Yuan
2ce2a1a4d8 [SKL] Propagate parameters to booster during set_param. (#6416) 2020-11-20 20:37:35 +08:00
zhang_jf
cc581b3b6b Fix misleading exception information: no such param "allow_non_zero_missing" (#6418) 2020-11-20 19:33:34 +08:00
Jiaming Yuan
00218d065a [dask] Update document. [skip ci] (#6413) 2020-11-20 19:16:19 +08:00
Jiaming Yuan
c120822a24 Fix flaky sparse page dmatrix test. (#6417) 2020-11-20 19:15:45 +08:00
Jiaming Yuan
a7b42adb74 Fix dask predict (#6412) 2020-11-20 10:10:52 +08:00
Jiaming Yuan
44a9d69efb Small cleanup to evaluator. (#6400) 2020-11-20 09:33:51 +08:00
Philip Hyunsu Cho
9c9070aea2 Use pytest conventions consistently (#6337)
* Do not derive from unittest.TestCase (not needed for pytest)

* assertRaises -> pytest.raises

* Simplify test_empty_dmatrix with test parametrization

* setUpClass -> setup_class, tearDownClass -> teardown_class

* Don't import unittest; import pytest

* Use plain assert

* Use parametrized tests in more places

* Fix test_gpu_with_sklearn.py

* Put back run_empty_dmatrix_reg / run_empty_dmatrix_cls

* Fix test_eta_decay_gpu_hist

* Add parametrized tests for monotone constraints

* Fix test names

* Remove test parametrization

* Revise test_slice to be not flaky
2020-11-19 17:00:15 -08:00
Philip Hyunsu Cho
c763b50dd0 [CI] Upgrade to MacOS Mojave image (#6406) 2020-11-18 20:29:10 -08:00
Nan Zhu
4d1d5d4010 [jvm-packages] fix potential unit test suites aborted issue (#6373)
* fix race condition

* code cleaning

rm pom.xml-e

* clean again

* fix compilation issue

* recover

* avoid using getOrCreate

* interrupt zombie threads

* safe guard

* fix deadlock

* Update SparkParallelismTracker.scala
2020-11-17 10:59:26 -08:00
Philip Hyunsu Cho
e426b6e040 [R] Do not convert continuous labels to factors (#6380)
* [R] Do not convert continuous labels to factors

* Address reviewer's comment
2020-11-17 09:19:16 -08:00
James Lamb
3cca1c5fa1 [R] remove uses of exists() (#6387) 2020-11-17 15:06:23 +08:00
Jiaming Yuan
3ac173fc8b Fix typo. (#6399) 2020-11-16 16:59:12 -08:00
Nikhil Choudhary
ae1662028a Fixed few grammatical mistakes in doc (#6393) 2020-11-15 13:48:08 +08:00
Philip Hyunsu Cho
5cb24d0d39 Fix broken link in CLI doc (#6396) 2020-11-14 17:58:07 -08:00
ShvetsKS
512b464cfa Disable HT for DMatrix creation (#6386)
Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>
2020-11-14 22:18:33 +08:00
Jiaming Yuan
fcd6fad822 [dask] Small cleanup. (#6391) 2020-11-14 22:15:05 +08:00
Jiaming Yuan
4ccf92ea34 [dask] Fix union of workers. (#6375) 2020-11-13 16:55:05 +08:00
Jiaming Yuan
fcfeb4959c Deprecate positional arguments. (#6365)
Deprecate positional arguments in the following functions (see the sketch after this entry):

- `__init__` for all classes in sklearn module.
- `fit` method for all classes in sklearn module.
- dask interface.
- `set_info` for `DMatrix` class.

Refactor the evaluation matrices handling.
2020-11-13 11:10:30 +08:00
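A hedged sketch of the calling convention #6365 above pushes toward; arguments beyond the data itself are spelled out as keywords:

    import numpy as np
    import xgboost as xgb

    X = np.random.rand(100, 4)
    y = np.random.rand(100)
    # Constructor and fit() extras are passed as keywords, not positionally:
    reg = xgb.XGBRegressor(max_depth=3, n_estimators=10)
    reg.fit(X, y, eval_set=[(X, y)], verbose=False)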
Philip Hyunsu Cho
e5193c21a1 [dask] Allow empty data matrix in AFT survival (#6379)
* [dask] Allow empty data matrix in AFT survival

* Add unit test
2020-11-12 17:49:58 -08:00
Philip Hyunsu Cho
5a33c2f3a0 [CI] Add noLD R test (#6382)
* [CI] Add noLD test

* Make noLD test only trigger with a PR comment

* [CI] Don't install stringi

* Add the Titanic example as a unit test

* Document trigger

* add to index

* Clarify that it needs to be a review comment
2020-11-12 12:41:25 -08:00
Jiaming Yuan
c1a62b5fa2 Expect gpu external memory to fail. (#6381) 2020-11-12 19:24:48 +08:00
Jiaming Yuan
c90f968d92 Update Python documents. (#6376) 2020-11-12 17:51:32 +08:00
Philip Hyunsu Cho
c5645180a6 [R] Fix a crash that occurs with noLD R (#6378) 2020-11-11 21:09:08 -08:00
James Lamb
12d27f43ff [doc] make Dask distributed example copy-pastable (#6345) 2020-11-11 20:22:17 -08:00
Jiaming Yuan
d711d648cb Fix label errors in graph visualization (#6369) 2020-11-11 17:44:59 -08:00
Jiaming Yuan
debeae2509 [R] Fix warnings from R check --as-cran (#6374)
* Remove exit and printf.

* Fix warnings.
2020-11-11 18:39:37 +08:00
Jiaming Yuan
6e12c2a6f8 [dask] Support running on GKE. (#6343)
* Avoid accessing `scheduler_info()['workers']`.
* Avoid calling `client.gather` inside task.
* Avoid using `client.scheduler_address`.
2020-11-11 18:04:34 +08:00
Jiaming Yuan
8a17610666 Implement GPU predict leaf. (#6187) 2020-11-11 17:33:47 +08:00
Philip Hyunsu Cho
7f101d1b33 [CI] Remove R check from Jenkins (#6372)
* Remove R check from Jenkins

* Print stacktrace when CRAN test fail in GitHub Actions

* Add verbose flag in tests/ci_build/print_r_stacktrace.sh

* Fix path in tests/ci_build/print_r_stacktrace.sh
2020-11-10 22:46:54 -08:00
Jiaming Yuan
a5cfa7841e Run R check as cran on action. [skip ci] (#6371)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-11-11 12:02:53 +08:00
Jiaming Yuan
43efadea2e Deterministic data partitioning for external memory (#6317)
* Make external memory data partitioning deterministic.

* Change the meaning of `page_size` from bytes to number of rows.

* Design a data pool.

* Note for external memory.

* Enable unity build on Windows CI.

* Force garbage collect on test.
2020-11-11 06:11:06 +08:00
Jean Lescut-Muller
9564886d9f Update custom_metric_obj.rst (#6367) 2020-11-10 22:29:22 +08:00
Jiaming Yuan
e65e3cf36e Support shared library in system path. (#6362) 2020-11-10 16:04:25 +08:00
Jiaming Yuan
184e2eac7d Add period to evaluation monitor. (#6348) 2020-11-10 07:47:48 +08:00
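A short sketch of the print period from #6348 above, assuming the 1.3 callback module where the monitor is xgboost.callback.EvaluationMonitor:

    import numpy as np
    import xgboost as xgb

    dtrain = xgb.DMatrix(np.random.rand(100, 4), label=np.random.rand(100))
    xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=20,
              evals=[(dtrain, "train")],
              # Log evaluation results only every 5th iteration:
              callbacks=[xgb.callback.EvaluationMonitor(period=5)])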
ShvetsKS
d411f98d26 Simple fix for static schedule in predict (#6357)
Co-authored-by: ShvetsKS <kirill.shvets@intel.com>
2020-11-09 17:01:30 +08:00
Jiaming Yuan
519cee115a Avoid resetting seed for every configuration. (#6349) 2020-11-06 10:28:35 +08:00
James Lamb
f3a4253984 Ignore files from local Dask development (#6346) 2020-11-05 13:54:46 +08:00
Jack Dunn
51e6531315 Fix missing space in warning message (#6340) 2020-11-04 06:03:16 -05:00
Jiaming Yuan
2cc9662005 Support slicing tree model (#6302)
This PR is meant to end the confusion around best_ntree_limit and unify model slicing. With multi-class models and random forests, asking users to understand how to set ntree_limit is difficult and error prone (see the sketch after this entry).

* Implement the save_best option in early stopping.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-11-02 23:27:39 -08:00
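A sketch of the slicing API from #6302 above, assuming the Booster slice syntax it introduces:

    import numpy as np
    import xgboost as xgb

    dtrain = xgb.DMatrix(np.random.rand(100, 4), label=np.random.rand(100))
    booster = xgb.train({"objective": "reg:squarederror"}, dtrain,
                        num_boost_round=30)
    first_ten = booster[:10]          # a Booster holding only the first 10 trees
    pred = first_ten.predict(dtrain)  # no ntree_limit bookkeeping required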
Rory Mitchell
29745c6df2 Fix inclusive scan for large sizes (#6234) 2020-11-03 17:01:43 +13:00
Jiaming Yuan
7756192906 [dask] Fix prediction on DaskDMatrix with multiple meta data. (#6333)
* Unify the meta handling methods.
2020-11-02 19:18:44 -05:00
Jiaming Yuan
5a7b3592ed Optional find_package for sanitizers. (#6329) 2020-11-02 19:17:17 -05:00
Jiaming Yuan
048acf81cd Enable shap sparse test. (#6332) 2020-11-01 20:59:27 +08:00
Igor Moura
5e1e972aea Clean up warnings (#6325) 2020-10-30 23:50:29 +08:00
nabokovas
f0fe18fc28 Add a new github actions badge (#6321) 2020-10-30 17:57:21 +08:00
Jiaming Yuan
6ff331b705 Fix Python callback. (#6320) 2020-10-30 05:03:44 +08:00
Sergio Gavilán
b181a88f9f Reduced some C++ compiler warnings (#6197)
* Removed some warnings

* Rebase with master

* Fixed C++ Google Test errors introduced by the refactoring done to remove warnings

* Undo renaming path -> path_

* Fix style check

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-10-29 12:36:00 -07:00
Jiaming Yuan
c80657b542 Fix flaky data initialization test. (#6318) 2020-10-30 03:11:22 +08:00
Naveed Ahmed Saleem Janvekar
608bda7052 [jvm-packages] add example to handle missing value other than 0 (#5677)
Add an example of handling missing values other than 0 under the "Dealing with missing values" section
2020-10-28 17:24:35 -07:00
Jiaming Yuan
74ea82209b Lazy import dask libraries. (#6309)
* Lazy import dask libraries.

* Lint && fix.

* Use short name.
2020-10-28 15:50:11 -07:00
Jiaming Yuan
dfac5f89e9 Group CLI demo into subdirectory. (#6258)
The CLI is not the most developed interface. Moving its demos into a dedicated directory helps new users avoid it, since most use cases go through a language binding.
2020-10-28 14:40:44 -07:00
James Lamb
6383757dca [R] allow xgb.plot.importance() calls to fill a grid (#6294) 2020-10-28 14:37:28 -07:00
Tanuja Kirthi Doddapaneni
d261ba029a Added USE_NCCL_LIB_PATH option to enable user to set NCCL_LIBRARY during build (#6310)
2020-10-28 14:36:31 -07:00
vcarpani
671971e12e Compiler warnings (#6286)
* Fix warnings for json.h

* Fix warnings for metric.h

* Fix warnings for updater_quantile_hist.cc.

* Fix warnings for updater_histmaker.cc.

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-10-28 13:46:15 -07:00
Jiaming Yuan
e8884c4637 Document tree method for feature weights. (#6312) 2020-10-28 13:42:13 -07:00
Philip Hyunsu Cho
143b278267 Mark flaky tests as XFAIL (#6299)
* Temporarily skip TestGPUUpdaters::test_categorical

* Temporarily skip test_boost_from_prediction[approx]
2020-10-28 11:50:57 -07:00
Jiaming Yuan
c4da967b5c Support unity build. (#6295)
* Support unity build.

* Setup on Windows Jenkins.

* Revert "Setup on Windows Jenkins."

This reverts commit 8345cb8d2b009eec8ae9fa6f16412a7c9b6ec12c.
2020-10-28 11:49:28 -07:00
Philip Hyunsu Cho
f6169c0b16 [CI] Use separate Docker cache for each CUDA version (#6305) 2020-10-28 11:07:00 -07:00
Jiaming Yuan
3310e208fd Fix inplace prediction interval. (#6259)
* Add back the interval in call.
* Make the interval non-optional.
2020-10-28 13:13:59 +08:00
Jiaming Yuan
cc76724762 Reduce warning. (#6273) 2020-10-27 12:24:19 -07:00
DIVYA CHAUHAN
4e9c4f2d73 Create a tutorial for using the C API in a C/C++ application (#6285)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-10-27 12:19:20 -07:00
James Lamb
e1de390e6e [ci] replace 'egrep' with 'grep -E' (#6287) 2020-10-27 12:05:48 -07:00
Rory Mitchell
f0c3ff313f Update GPUTreeShap, add docs (#6281)
* Update GPUTreeShap, add docs

* Fix test

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-10-27 18:22:12 +13:00
Jiaming Yuan
b180223d18 Cleanup RABIT. (#6290)
* Remove recovery and MPI speed tests.
* Remove readme.
* Remove Python binding.
* Add checks in C API.
2020-10-27 08:48:22 +08:00
Akira Funahashi
8e0f5a6fc7 Update plugin instructions for CMake build (#6289) 2020-10-26 17:42:07 -07:00
Philip Hyunsu Cho
c8ec62103a Deprecate LabelEncoder in XGBClassifier; Enable cuDF/cuPy inputs in XGBClassifier (#6269)
* Deprecate LabelEncoder in XGBClassifier; skip LabelEncoder for cuDF/cuPy inputs

* Add unit tests for cuDF and cuPy inputs with XGBClassifier

* Fix lint

* Clarify warning

* Move use_label_encoder option to XGBClassifier constructor

* Add a test for cudf.Series

* Add use_label_encoder to XGBRFClassifier doc

* Address reviewer feedback
2020-10-26 13:20:51 -07:00
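A sketch of opting out of the deprecated encoder via the use_label_encoder flag added in #6269 above; labels must then already be integers in [0, num_class):

    import numpy as np
    import xgboost as xgb

    X = np.random.rand(100, 4)
    y = np.random.randint(3, size=100)  # already encoded as 0, 1, 2
    clf = xgb.XGBClassifier(use_label_encoder=False, eval_metric="mlogloss")
    clf.fit(X, y)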
Jiaming Yuan
bcfab4d726 Revert "Disable JSON full serialization for now. (#6248)" (#6266)
This reverts commit 6d293020fb.
2020-10-27 03:30:47 +08:00
Jiaming Yuan
d61b628bf5 Remove RABIT CMake targets. (#6275)
* Now it's built as part of libxgboost.
* Set correct C API error in RABIT initialization and finalization.
* Remove redundant message.
* Guard the tracker print C API.
2020-10-27 01:30:20 +08:00
Jiaming Yuan
2686d32a36 Skip dask tests on ARM. (#6267)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-10-26 15:09:05 +08:00
Philip Hyunsu Cho
677f676172 Use UserWarning for old callback, as DeprecationWarning is not visible (#6270) 2020-10-22 01:10:52 -07:00
Philip Hyunsu Cho
1300467d36 Fix a typo in is_arm() in testing.py [skip ci] (#6271) 2020-10-22 13:07:14 +08:00
Jiaming Yuan
b5c2a47b20 Drop single point model recovery (#6262)
* Pass rabit params in JVM package.
* Implement timeout using poll timeout parameter.
* Remove OOB data check.
2020-10-21 15:27:03 +08:00
Jiaming Yuan
81c37c28d5 Time the CPU tests on Jenkins. (#6257)
* Time the CPU tests on Jenkins.
* Reduce thread contention.
* Add doc.
* Skip heavy tests on ARM.
2020-10-20 17:19:07 -07:00
Igor Moura
d1254808d5 Clean up C++ warnings (#6213) 2020-10-19 23:02:33 +08:00
Jiaming Yuan
ddf37cca30 Unify thread configuration. (#6186) 2020-10-19 16:05:42 +08:00
Philip Hyunsu Cho
7f6ed5780c [CI] Build a Python wheel for aarch64 platform (#6253) 2020-10-18 22:35:19 -07:00
Jiaming Yuan
5037abeb86 Fix linear gpu input (#6255) 2020-10-19 12:02:36 +08:00
Yuan Tang
cdcdab98b8 Add sponsors link to FUNDING.yml (#6252) 2020-10-18 19:17:11 -07:00
Philip Hyunsu Cho
65ea42bd42 [CI] Reduce testing load with RMM (#6249)
* [CI] Reduce testing load with RMM

* Address reviewer's comment
2020-10-18 19:16:46 -07:00
Manikya Bardhan
549f361b71 Updated winning solutions list (#6254) 2020-10-19 04:06:48 +08:00
Jiaming Yuan
6d293020fb Disable JSON full serialization for now. (#6248)
* Disable JSON serialization for now.

* Multi-class classification checkpoints at each iteration.
This brings significant overhead.

Revert: 90355b4f00

* Set R tests to use binary.
2020-10-16 17:59:54 +08:00
Jiaming Yuan
52452bebb9 Fix cls typo. (#6247) 2020-10-16 16:40:44 +08:00
Yuan Tang
3098d7cee0 Add link to XGBoost's Twitter handle (#6244) 2020-10-15 16:54:34 -07:00
Jiaming Yuan
3da5a69dc9 Fix typo in dask interface. (#6240) 2020-10-15 15:26:29 +08:00
dependabot[bot]
06e453ddf4 Bump junit from 4.11 to 4.13.1 in /jvm-packages/xgboost4j (#6230)
Bumps [junit](https://github.com/junit-team/junit4) from 4.11 to 4.13.1.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.11.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.11...r4.13.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-13 19:46:19 -07:00
dependabot[bot]
b51a717deb Bump junit from 4.11 to 4.13.1 in /jvm-packages/xgboost4j-gpu (#6233)
Bumps [junit](https://github.com/junit-team/junit4) from 4.11 to 4.13.1.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.11.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.11...r4.13.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-13 19:44:56 -07:00
Jiaming Yuan
bed7ae4083 Loop over thrust::reduce. (#6229)
* Check input chunk size of dqdm.
* Add doc for current limitation.
2020-10-14 10:40:56 +13:00
Rory Mitchell
734a911a26 Loop over copy_if (#6201)
* Loop over copy_if

* Catch OOM.

Co-authored-by: fis <jm.yuan@outlook.com>
2020-10-14 10:23:16 +13:00
Wittty-Panda
0fc263ead5 Update the list of winning solutions (#6222) 2020-10-13 20:05:12 +08:00
Jiaming Yuan
b05073bda5 [dask] Test for data initialization. (#6226) 2020-10-13 11:08:35 +08:00
Jiaming Yuan
2443275891 Cleanup Python code. (#6223)
* Remove pathlike as XGBoost 1.2 requires Python 3.6.
* Move conditional import of dask/distributed into dask module.
2020-10-12 15:44:41 +08:00
Jiaming Yuan
70c2039748 Catch all standard exceptions in C API. (#6220)
* `std::bad_alloc` is not guaranteed to be caught.
2020-10-12 14:01:46 +08:00
Jiaming Yuan
2241563f23 Handle duplicated values in sketching. (#6178)
* Accumulate weights in duplicated values.
* Fix device id in iterative dmatrix.
2020-10-10 19:32:44 +08:00
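A toy illustration (not the actual C++ sketching code) of the idea in #6178 above: weights of duplicated feature values are accumulated into a single entry before quantile sketching:

    from collections import defaultdict

    def accumulate_duplicates(values, weights):
        acc = defaultdict(float)
        for v, w in zip(values, weights):
            acc[v] += w               # merge the weight mass of duplicated values
        return sorted(acc.items())    # unique (value, total_weight) pairs

    print(accumulate_duplicates([1.0, 1.0, 2.0], [0.5, 0.5, 1.0]))
    # [(1.0, 1.0), (2.0, 1.0)]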
Jiaming Yuan
ab5b35134f Rework Python callback functions. (#6199)
* Define a new callback interface for Python.
* Deprecate the old callbacks.
* Enable early stopping on dask.
2020-10-10 17:52:36 +08:00
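A minimal sketch of the new interface from #6199 above, assuming the 1.3 TrainingCallback base class and its after_iteration hook:

    import numpy as np
    import xgboost as xgb

    class PrintRMSE(xgb.callback.TrainingCallback):
        def after_iteration(self, model, epoch, evals_log):
            # evals_log maps data name -> metric name -> list of scores.
            print(epoch, evals_log["train"]["rmse"][-1])
            return False  # returning True requests early stopping

    dtrain = xgb.DMatrix(np.random.rand(100, 4), label=np.random.rand(100))
    xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=5,
              evals=[(dtrain, "train")], callbacks=[PrintRMSE()])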
Jiaming Yuan
b5b24354b8 More categorical tests and disable shap sparse test. (#6219)
* Fix tree load with 32 categories.
2020-10-10 16:12:37 +08:00
Philip Hyunsu Cho
c991eb612d [jvm-packages] Fix up build for xgboost4j-gpu, xgboost4j-spark-gpu (#6216)
* [CI] Clean up build for JVM packages

* Use correct path for saving native lib

* Fix groupId of maven-surefire-plugin

* Fix stashing of xgboost4j_jar_gpu

* [CI] Don't run xgboost4j-tester with GPU, since it doesn't use gpu_hist
2020-10-09 14:08:15 -07:00
Jiaming Yuan
70ce5216b5 Add high level tests for categorical data. (#6179)
* Fix unique.
2020-10-09 09:27:23 +08:00
vcarpani
6bc9747df5 Reduce compile warnings (#6198)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-10-08 23:14:59 +08:00
ShvetsKS
a4ce0eae43 CPU predict performance improvement (#6127)
Co-authored-by: ShvetsKS <kirill.shvets@intel.com>
2020-10-08 15:50:21 +03:00
Jiaming Yuan
4cfdcaaf7b Move non-OpenMP gtest to GitHub Actions (#6210) 2020-10-08 00:58:21 -07:00
Jiaming Yuan
ddc4f20e54 Add JSON schema for categorical splits. (#6194) 2020-10-07 17:33:31 +08:00
odidev
a2fea33103 Added arm64 job in Travis-CI (#6200)
Signed-off-by: odidev <odidev@puresoftware.com>
2020-10-07 15:02:09 +08:00
Igor Moura
5908598666 [Doc] Add info on GPU compiler (#6204)
* Add a note about the required compiler version for CUDA.
* Add a link that gives a short explanation of compute capability versions
2020-10-06 11:35:18 +08:00
Yuan Tang
1013224888 Consistent style for build status badge (#6203) 2020-10-05 18:23:21 -07:00
Philip Hyunsu Cho
f121f2738f [CI] Fix Docker build for CUDA 11 (#6202) 2020-10-05 17:54:14 -07:00
Jiaming Yuan
fd58005edf Ignore cachedir by joblib. [skip ci] (#6193) 2020-10-04 14:54:32 +08:00
DIVYA CHAUHAN
750bd0ae9a Update the list of winning solutions using XGBoost (#6192)
Co-authored-by: divya <divyachauhan661@gmail.com>
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-10-03 13:39:58 -07:00
Christian Lorentzen
cf4f019ed6 [Breaking] Change default evaluation metric for classification to logloss / mlogloss (#6183)
* Change DefaultEvalMetric of classification from error to logloss

* Change default binary metric in plugin/example/custom_obj.cc

* Set old error metric in python tests

* Set old error metric in R tests

* Fix missed eval metrics and typos in R tests

* Fix setting eval_metric twice in R tests

* Add warning for empty eval_metric for classification

* Fix Dask tests

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-10-02 12:06:47 -07:00
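A sketch of pinning the old default after #6183 above; since classification now defaults to logloss/mlogloss, the former error metric must be requested explicitly:

    import numpy as np
    import xgboost as xgb

    dtrain = xgb.DMatrix(np.random.rand(100, 4),
                         label=np.random.randint(2, size=100))
    # Restore the pre-change default metric explicitly:
    params = {"objective": "binary:logistic", "eval_metric": "error"}
    xgb.train(params, dtrain, num_boost_round=10, evals=[(dtrain, "train")])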
John Quitto-Graham
e0e4f15d0e Fix a comment in demo to use correct reference (#6190)
Co-authored-by: John Quitto Graham <johnq@dgx07.aselab.nvidia.com>
2020-10-01 13:16:04 -07:00
Philip Hyunsu Cho
eb7946ff25 Hide C++ symbols from dmlc-core (#6188) 2020-10-01 10:07:13 -07:00
lacrosse91
6bc41df2fe [Doc] Add list of winning solutions in data science competitions using XGBoost (#6177) 2020-09-30 14:41:29 -07:00
Jiaming Yuan
f0c63902ff Use default allocator in sketching. (#6182) 2020-09-30 14:55:59 +08:00
Jiaming Yuan
444131a2e6 Add categorical data support to GPU Hist. (#6164) 2020-09-29 11:27:25 +08:00
Jiaming Yuan
798af22ff4 Add categorical data support to GPU predictor. (#6165) 2020-09-29 11:25:34 +08:00
Jiaming Yuan
7622b8cdb8 Enable categorical data support on Python DMatrix. (#6166)
* Only pandas is recognized.
2020-09-29 11:22:56 +08:00
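A hedged sketch of the pandas path from #6166 above, assuming the experimental flag is named enable_categorical; per the commit note, only the pandas category dtype is recognized:

    import numpy as np
    import pandas as pd
    import xgboost as xgb

    df = pd.DataFrame({
        "c": pd.Series(["a", "b", "a", "c"] * 25, dtype="category"),
        "x": np.random.rand(100),
    })
    # Experimental: categorical values are taken from the pandas category codes.
    dtrain = xgb.DMatrix(df, label=np.random.rand(100), enable_categorical=True)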
Jiaming Yuan
52c0b3f100 Fix error message. (#6176) 2020-09-29 11:18:25 +08:00
Rory Mitchell
dda9e1e487 Update GPUTreeshap (#6163)
* Reduce shap test duration

* Test interoperability with shap package

* Add feature interactions

* Update GPUTreeShap
2020-09-28 09:43:47 +13:00
Jiaming Yuan
434a3f35a3 Add TAGS to gitignore. [skip ci] (#6175) 2020-09-27 21:27:40 +08:00
Jiaming Yuan
07355599c2 Option for generating device debug info. (#6168)
* Supply `-G;-src-in-ptx` when `USE_DEVICE_DEBUG` is set and debug mode is selected.
* Refactor CMake script to gather all CUDA configuration.
* Use CMAKE_CUDA_ARCHITECTURES.  Close #6029.
* Add compute 80.  Close #5999
2020-09-27 03:26:56 +08:00
Kyle Nicholson
e6a238c020 Update base margin dask (#6155)
* Add `base-margin`
* Add `output_margin` to regressor.

Co-authored-by: fis <jm.yuan@outlook.com>
2020-09-26 21:30:52 +08:00
Alexander Gugel
03b8fdec74 Add DMatrix usage examples to c-api-demo (#5854)
* Add DMatrix usage examples to c-api-demo

* Add XGDMatrixCreateFromCSREx example

* Add XGDMatrixCreateFromCSCEx example
2020-09-26 02:10:12 -07:00
Philip Hyunsu Cho
2c4dedb7a0 [CI] Test C API demo (#6159)
* Fix CMake install config to use dependencies

* [CI] Test C API demo

* Explicitly cast num_feature, to avoid warning in Linux
2020-09-25 14:49:01 -07:00
Philip Hyunsu Cho
bd2b1eabd0 Add back support for scipy.sparse.coo_matrix (#6162) 2020-09-25 00:49:49 -07:00
Philip Hyunsu Cho
72ef553550 Fall back to CUB allocator if RMM memory pool is not set up (#6150)
* Fall back to CUB allocator if RMM memory pool is not set up

* Fix build

* Prevent memory leak

* Add note about lack of memory initialisation

* Add check for other fast allocators

* Set use_cub_allocator_ to true when RMM is not enabled

* Fix clang-tidy

* Do not demangle symbol; add check to ensure Linux+Clang/GCC combo
2020-09-24 11:04:50 -07:00
Zeno Gantner
5b05f88ba9 Cosmetic fixes in faq.rst (#6161) 2020-09-24 21:05:10 +08:00
Jiaming Yuan
14afdb4d92 Support categorical data in ellpack. (#6140) 2020-09-24 19:28:57 +08:00
Jiaming Yuan
78d72ef936 Add DaskDeviceQuantileDMatrix demo. (#6156) 2020-09-24 14:08:28 +08:00
Philip Hyunsu Cho
678ea40b24 [CI] Upgrade cuDF and RMM to 0.16 nightlies; upgrade to Ubuntu 18.04 (#6157)
* [CI] Upgrade cuDF and RMM to 0.16 nightlies

* Use Ubuntu 18.04 in RMM test, since RMM needs GCC 7+
2020-09-23 19:48:44 -07:00
James Lamb
c686bc0461 [R] remove warning in configure.ac (fixes #6151) (#6152)
* [R] remove warning in configure.ac (fixes #6151)

* update configure
2020-09-22 22:47:38 -07:00
Jiaming Yuan
e033caa3ba Remove linking RMM library. (#6146)
* Remove linking RMM library.

* RMM is now header only.

* Remove remaining reference.
2020-09-22 16:59:33 -07:00
Jiaming Yuan
452ac8ea62 Time GPU tests on CI. (#6141) 2020-09-22 14:25:10 +08:00
Jiaming Yuan
33d80ffad0 [dask] Support more meta data on functional interface. (#6132)
* Add base_margin, label_(lower|upper)_bound.
* Test survival training with dask.
2020-09-21 16:56:37 +08:00
Jiaming Yuan
7065779afa Improve JSON format for categorical features. (#6128)
* Gather categories for all nodes.
2020-09-21 15:35:05 +08:00
Jiaming Yuan
210c131ce7 Support categorical data in GPU sketching. (#6137) 2020-09-21 13:53:06 +08:00
Nan Zhu
c932fb50a1 [jvm-packages]add xgboost4j-gpu/xgboost4j-spark-gpu module to facilitate release (#6136)
* add xgboost4j-gpu/xgboost4j-spark-gpu module to facilitate release

* Update pom.xml
2020-09-20 09:20:38 -07:00
Jiaming Yuan
a069a21e03 Implement intrusive ptr (#6129)
* Use intrusive ptr for JSON.
2020-09-20 20:07:16 +08:00
Jiaming Yuan
e319b63f9e Merge extract cuts into QuantileContainer. (#6125)
* Use pruning for initial summary construction.
2020-09-18 16:36:39 +08:00
Jiaming Yuan
cc82ca167a [dask] Refactor meta data handling. (#6130) 2020-09-18 13:26:40 +08:00
Jiaming Yuan
5384ed85c8 Use caching allocator from RMM, when RMM is enabled (#6131) 2020-09-17 21:51:49 -07:00
neko
6bc9b9dc4f Fix doc for CMake requirement. (#6123) 2020-09-16 17:59:43 +08:00
Philip Hyunsu Cho
9e955fb9b0 [R] Check warnings explicitly for model compatibility tests (#6114)
* [R] Check warnings explicitly for model compatibility tests

* Address reviewer's feedback
2020-09-15 10:49:48 -07:00
Philip Hyunsu Cho
33577ef5d3 Add MAPE metric (#6119) 2020-09-14 18:45:27 -07:00
Rory Mitchell
47350f6acb Allow kwargs in dask predict (#6117) 2020-09-15 13:04:03 +12:00
Jiaming Yuan
b5f52f0b1b Validate weights are positive values. (#6115) 2020-09-15 09:03:55 +08:00
Jiaming Yuan
c6f2b8c841 Upgrade gputreeshap. (#6099)
* Upgrade gputreeshap.

Co-authored-by: Rory Mitchell <r.a.mitchell.nz@gmail.com>
2020-09-15 12:57:22 +12:00
Vitalie Spinu
1453bee3e7 [R] Remove stringi dependency (#6109)
* [R] Fix empty tests and test warnings

* [R] Remove stringi dependency (fix #5905)

* Fix R lint check

* [R] Fix automatic conversion to factor in R < 4.0.0 in xgb.model.dt.tree

* Add `R` Makefile variable

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-09-12 13:18:08 -07:00
Jiaming Yuan
07945290a2 Remove unused RABIT targets. (#6110)
* Remove rabit mock.
* Remove rabit base.
2020-09-11 14:09:44 +08:00
Jiaming Yuan
c92d751ad1 Enable building rabit on Windows (#6105) 2020-09-11 11:54:46 +08:00
Jiaming Yuan
08bdb2efc8 Fix dask doc. [skip ci] (#6108) 2020-09-11 10:56:12 +08:00
Bobby Wang
00b0ad1293 [Doc] add doc for kill_spark_context_on_worker_failure parameter (#6097)
* [Doc] add doc for kill_spark_context_on_worker_failure parameter

* resolve comments
2020-09-09 21:28:44 -07:00
Philip Hyunsu Cho
d0ccb13d09 Work around a compiler bug in MacOS AppleClang 11 (#6103)
* Workaround a compiler bug in MacOS AppleClang

* [CI] Run C++ test with MacOS Catalina + AppleClang 11.0.3

* [CI] Migrate cmake_test on MacOS from Travis CI to GitHub Actions

* Install OpenMP runtime

* [CI] Use CMake to locate lz4 lib
2020-09-09 21:21:55 -07:00
Philip Hyunsu Cho
9338582d79 [CI] Fix CTest by running it in a correct directory (#6104)
* [CI] Fix CTest by running it in a correct directory

* [CI] Do not run dmlc-core unit tests with sanitizer
2020-09-09 10:31:09 -07:00
Jiaming Yuan
3dcd85fab5 Refactor rabit tests (#6096)
* Merge rabit tests into XGBoost.
* Run them on CI.
* Simplification for CMake scripts.
2020-09-09 12:30:29 +08:00
Jiaming Yuan
318bffaa10 Fix custom obj link. [skip ci] (#6100) 2020-09-09 10:55:38 +08:00
Jiaming Yuan
b0001a6e29 Correct style warnings from clang-tidy for rabit. (#6095) 2020-09-08 12:13:58 +08:00
Hristo Iliev
da61d9460b [jvm-packages] Add getNumFeature method (#6075)
* Add getNumFeature to the Java API
* Add getNumFeature to the Scala API
* Add unit tests for getNumFeature

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-09-07 20:57:46 -07:00
Jiaming Yuan
93e9af43bb Unify set index data. (#6062) 2020-09-08 11:38:41 +08:00
Jiaming Yuan
e5d40b39cd [Breaking] Don't save leaf child count in JSON. (#6094)
The field is deprecated and not used anywhere in XGBoost.
2020-09-08 11:11:13 +08:00
Jiaming Yuan
5994f3b14c Don't link imported target. (#6093) 2020-09-07 02:51:09 -07:00
Philip Hyunsu Cho
974ba12f38 Fix CMake build with BUILD_STATIC_LIB option (#6090)
* Fix CMake build with BUILD_STATIC_LIB option

* Disable BUILD_STATIC_LIB option when R/JVM pkg is enabled

* Add objxgboost to install target only when BUILD_STATIC_LIB=ON
2020-09-07 02:38:29 -07:00
Daniel Steinberg
68c55a37d9 Add cache name back to external_memory.py files. (#6088) 2020-09-06 16:01:09 +08:00
Boris Feld
24ca9348f7 Fix typo in xgboost.callback.early_stop docstring (#6071) 2020-09-06 13:37:07 +08:00
Rory Mitchell
2e907abdb8 Updates to GPUTreeShap (#6087)
* Extract paths on device

* Update GPUTreeShap
2020-09-06 13:39:08 +12:00
Bobby Wang
0e2d5669f6 [jvm-packages] cancel job instead of killing SparkContext (#6019)
* cancel job instead of killing SparkContext

This PR changes the default behavior of killing the SparkContext. Instead, it
cancels the affected jobs when a task fails. That means the SparkContext stays
alive even when some exceptions happen.

* add a parameter to control if killing SparkContext

* cancel the jobs the failed task belongs to

* remove the jobId from the map when one job failed.

* resolve comments
2020-09-02 14:20:59 -07:00
Tong He
3912f3de06 Updates from 1.2.0 cran submission (#6077)
* update for 1.2.0 cran submission

* recover cmakelists

* fix unittest from the shap PR

* trigger CI
2020-09-02 20:50:23 +08:00
Philip Hyunsu Cho
9be969cc7a Add release note for 1.2.0 in NEWS.md (#6063)
* Update query_contributors.py to account for pagination

* Add the release note for 1.2.0

* Add release note for patch releases 

* Apply suggestions from code review 

* Fix typo 

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
Co-authored-by: John Zedlewski <904524+JohnZed@users.noreply.github.com>
2020-09-02 00:49:02 -07:00
Anthony D'Amato
ada964f16e Clean the way deterministic paritioning is computed (#6033)
We propose to use only the rowHashCode to compute the partitionKey; adding the FeatureValue hashCode does not bring more value and would make the computation slower. Even though collisions would appear at a 0.2% rate with MurmurHash3, this is bearable for partitioning and has no impact on data balancing (see the toy sketch after this entry).
2020-08-30 14:38:23 -07:00
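A toy Python illustration of the scheme described in #6033 above (the actual change lives in the JVM package; the helper below is hypothetical):

    def partition_key(row_hash: int, num_workers: int) -> int:
        # Only the row hash feeds the key; the feature-value hash is dropped
        # since it adds cost without improving balance.
        return row_hash % num_workers

    print(partition_key(hash((1.0, 2.0, 3.0)), 4))  # deterministic per row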
ShvetsKS
c1ca872d1e Modin DF support (#6055)
* Modin DF support

* mode change

* tests were added, ci env was extended

* mode change

* Remove redundant installation of modin

* Add a pytest skip marker for modin

* Install Modin[ray] from PyPI

* fix interfering

* avoid extra conversion

* delete cv test for modin

* revert cv function

Co-authored-by: ShvetsKS <kirill.shvets@intel.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-08-29 22:33:30 +03:00
FelixYBW
3a990433f9 Set maxBins to 256, aligning with the C++ code in src/tree/param.h (#6066) 2020-08-28 15:06:11 +03:00
Rory Mitchell
9bddecee05 Update GPUTreeShap (#6064)
* Update GPUTreeShap

* Update src/CMakeLists.txt

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-08-27 12:01:53 -07:00
Jiaming Yuan
2fcc4f2886 Unify evaluation functions. (#6037) 2020-08-26 14:23:27 +08:00
Jiaming Yuan
80c8547147 Make binary bin search reusable. (#6058)
* Move binary search row to hist util.
* Remove dead code.
2020-08-26 05:05:11 +08:00
Philip Hyunsu Cho
9c14e430af [CI] Improve JVM test in GitHub Actions (#5930)
* [CI] Improve JVM test in GitHub Actions

* Use env var for Wagon options [skip ci]

* Move the retry flag to pom.xml [skip ci]

* Export env var RABIT_MOCK to run Spark tests [skip ci]

* Correct location of env var

* Re-try up to 5 times [skip ci]

* Don't run distributed training test on Windows

* Fix typo

* Update main.yml
2020-08-25 10:14:46 -07:00
Jiaming Yuan
81d8dd79ca Bump header version. (#6056) 2020-08-26 00:29:00 +08:00
Jiaming Yuan
20c95be625 Expand categorical node. (#6028)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-08-25 18:53:57 +08:00
Rory Mitchell
9a4e8b1d81 GPUTreeShap (#6038) 2020-08-25 12:47:41 +12:00
Philip Hyunsu Cho
b3193052b3 Bump version to 1.3.0 snapshot in master (#6052) 2020-08-23 17:13:46 -07:00
Philip Hyunsu Cho
4729458a36 [jvm-packages] [doc] Update install doc for JVM packages (#6051) 2020-08-23 14:14:53 -07:00
Philip Hyunsu Cho
cfced58c1c [CI] Port CI fixes from the 1.2.0 branch (#6050)
* Fix a unit test on CLI, to handle RC versions

* [CI] Use mgpu machine to run gpu hist unit tests

* [CI] Build GPU-enabled JAR artifact and deploy to xgboost-maven-repo
2020-08-22 23:24:46 -07:00
Jiaming Yuan
a144daf034 Limit tree depth for GPU hist. (#6045) 2020-08-22 19:34:52 +08:00
Jiaming Yuan
b9ebbffc57 Fix plotting test. (#6040)
Previously the test loaded a model generated by `test_basic.py`; now we generate
the model explicitly.

* Cleanup saved files for basic tests.
2020-08-22 13:18:48 +08:00
Jiaming Yuan
7a46515d3d Remove win2016 jvm github action test. (#6042) 2020-08-20 19:39:46 -07:00
Jiaming Yuan
7be2e04bd4 Fix scikit learn cls doc. (#6041) 2020-08-20 19:23:06 -07:00
Philip Hyunsu Cho
1fd29edf66 [CI] Migrate linters to GitHub Actions (#6035)
* [CI] Move lint to GitHub Actions

* [CI] Move Doxygen to GitHub Actions

* [CI] Move Sphinx build test to GitHub Actions

* [CI] Reduce workload for Windows R tests

* [CI] Move clang-tidy to Build stage
2020-08-19 12:33:51 -07:00
ShvetsKS
24f2e6c97e Optimize DMatrix build time. (#5877)
Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>
2020-08-20 01:37:03 +08:00
Jiaming Yuan
29b7fea572 Optimize cpu sketch allreduce for sparse data. (#6009)
* Bypass RABIT serialization reducer and use custom allgather based merging.
2020-08-19 10:03:45 +08:00
Jiaming Yuan
90355b4f00 Make JSON the default full serialization format. (#6027) 2020-08-19 09:57:43 +08:00
Anthony D'Amato
f58e41bad8 Fix deterministic partitioning with dataset containing Double.NaN (#5996)
The functions featureValueOfSparseVector and featureValueOfDenseVector could return Float.NaN if the input vector contained any missing values. This would break the partition key computation, and most of the vectors would end up in the same partition. We fix this by avoiding the NaN and simply using the row hashCode in this case.
We added a test to ensure that the repartition is now indeed uniform for input datasets containing missing values, by checking that the variance of the partition sizes is below a certain threshold.

Signed-off-by: Anthony D'Amato <anthony.damato@hotmail.fr>
2020-08-18 18:55:37 -07:00
Cuong Duong
e51cba6195 Add SHAP summary plot using ggplot2 (#5882)
* add SHAP summary plot using ggplot2

* Update xgb.plot.shap

* Update example in xgb.plot.shap documentation

* update logic, add tests

* whitespace fixes

* whitespace fixes for test_helpers

* namespace for sd function

* explicitly declare variables that are automatically evaluated by data.table

* Fix R lint

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-08-18 18:04:09 -07:00
Qi Zhang
989ddd036f Swap byte-order in binary serializer to support big-endian arch (#5813)
* fixed some endian issues

* Use dmlc::ByteSwap() to simplify code

* Fix lint check

* [CI] Add test for s390x

* Download latest CMake on s390x

* Fix a bug in my code

* Save magic number in dmatrix with byteswap on big-endian machine

* Save version in binary with byteswap on big-endian machine

* Load scalar with byteswap in MetaInfo

* Add a debugging message

* Handle arrays correctly when byteswapping

* EOF can also be 255

* Handle magic number in MetaInfo carefully

* Skip Tree.Load test for big-endian, since the test manually builds little-endian binary model

* Handle missing packages in Python tests

* Don't use boto3 in model compatibility tests

* Add s390 Docker file for local testing

* Add model compatibility tests

* Add R compatibility test

* Revert "Add R compatibility test"

This reverts commit c2d2bdcb7dbae133cbb927fcd20f7e83ee2b18a8.

Co-authored-by: Qi Zhang <q.zhang@ibm.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-08-18 14:47:17 -07:00
Jiaming Yuan
4d99c58a5f Feature weights (#5962) 2020-08-18 19:55:41 +08:00
Jiaming Yuan
a418278064 Merge pull request #6023 from trivialfis/merge-rabit
Merge rabit
2020-08-18 09:01:56 +08:00
Philip Hyunsu Cho
14d5ce712c [CI] Fix Dask Pytest fixture (#6024) 2020-08-17 16:45:22 -07:00
fis
111968ca58 Merge rabit 2020-08-18 03:52:33 +08:00
fis
1c5904df3f Remove rabit. 2020-08-18 03:48:36 +08:00
Jiaming Yuan
d240463b38 Revert "Remove warning about memset. (#6003)" (#6020)
This reverts commit 12e3fb6a6c.
2020-08-17 20:10:15 +08:00
Philip Hyunsu Cho
511bb22ffd [Doc] Add dtreeviz as a showcase example of integration with 3rd-party software (#6013) 2020-08-13 20:53:59 -07:00
Philip Hyunsu Cho
e3ec7b01df [CI] Cancel builds on subsequent pushes (#6011)
* [CI] Cancel builds on subsequent pushes

* Use a more secure method

* test commit
2020-08-13 11:17:39 -07:00
Jiaming Yuan
674c409e9d Remove rabit dependency on public headers. (#6005) 2020-08-13 08:26:20 +08:00
Jiaming Yuan
12e3fb6a6c Remove warning about memset. (#6003) 2020-08-13 08:25:46 +08:00
Philip Hyunsu Cho
9adb812a0a RMM integration plugin (#5873)
* [CI] Add RMM as an optional dependency

* Replace caching allocator with pool allocator from RMM

* Revert "Replace caching allocator with pool allocator from RMM"

This reverts commit e15845d4e72e890c2babe31a988b26503a7d9038.

* Use rmm::mr::get_default_resource()

* Try setting default resource (doesn't work yet)

* Allocate pool_mr in the heap

* Prevent leaking pool_mr handle

* Separate EXPECT_DEATH() in separate test suite suffixed DeathTest

* Turn off death tests for RMM

* Address reviewer's feedback

* Prevent leaking of cuda_mr

* Fix Jenkinsfile syntax

* Remove unnecessary function in Jenkinsfile

* [CI] Install NCCL into RMM container

* Run Python tests

* Try building with RMM, CUDA 10.0

* Do not use RMM for CUDA 10.0 target

* Actually test for test_rmm flag

* Fix TestPythonGPU

* Use CNMeM allocator, since pool allocator doesn't yet support multiGPU

* Use 10.0 container to build RMM-enabled XGBoost

* Revert "Use 10.0 container to build RMM-enabled XGBoost"

This reverts commit 789021fa31112e25b683aef39fff375403060141.

* Fix Jenkinsfile

* [CI] Assign larger /dev/shm to NCCL

* Use 10.2 artifact to run multi-GPU Python tests

* Add CUDA 10.0 -> 11.0 cross-version test; remove CUDA 10.0 target

* Rename Conda env rmm_test -> gpu_test

* Use env var to opt into CNMeM pool for C++ tests

* Use identical CUDA version for RMM builds and tests

* Use Pytest fixtures to enable RMM pool in Python tests

* Move RMM to plugin/CMakeLists.txt; use PLUGIN_RMM

* Use per-device MR; use command arg in gtest

* Set CMake prefix path to use Conda env

* Use 0.15 nightly version of RMM

* Remove unnecessary header

* Fix a unit test when cudf is missing

* Add RMM demos

* Remove print()

* Use HostDeviceVector in GPU predictor

* Simplify pytest setup; use LocalCUDACluster fixture

* Address reviewers' commments

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-08-12 01:26:02 -07:00
Jiaming Yuan
c3ea3b7e37 Fix nightly build doc. [skip ci] (#6004)
* Fix nightly build doc. [skip ci]

* Fix title too short. [skip ci]
2020-08-12 15:00:40 +08:00
Jiaming Yuan
ee70a2380b Unify CPU hist sketching (#5880) 2020-08-12 01:33:06 +08:00
jameskrach
bd6b7f4aa7 [Breaking] Fix .predict() method and add .predict_proba() in xgboost.dask.DaskXGBClassifier (#5986) 2020-08-11 16:11:28 +08:00
Jiaming Yuan
6f7112a848 Move warning about empty dataset. (#5998) 2020-08-11 14:10:51 +08:00
Jiaming Yuan
f93f1c03fc Rabit update. (#5978)
* Remove parameter on JVM Packages.
2020-08-11 09:17:32 +08:00
Jiaming Yuan
0b2a26fa74 Remove skmaker. (#5971) 2020-08-09 15:23:31 +08:00
Vladislav Epifanov
388f975cf5 Introducing DPC++-based plugin (predictor, objective function) supporting oneAPI programming model (#5825)
* Added plugin with DPC++-based predictor and objective function

* Update CMakeLists.txt

* Update regression_obj_oneapi.cc

* Added README.md for OneAPI plugin

* Added OneAPI predictor support to gbtree

* Update README.md

* Merged kernels in gradient computation. Enabled multiple loss functions with DPC++ backend

* Aligned plugin CMake files with latest master changes. Fixed whitespace typos

* Removed debug output

* [CI] Make oneapi_plugin a CMake target

* Added tests for OneAPI plugin for predictor and obj. functions

* Temporarily switched to the default selector for device dispatching in the OneAPI plugin, to enable execution in environments without GPUs

* Updated readme file.

* Fixed USM usage in predictor

* Removed workaround with explicit templated names for DPC++ kernels

* Fixed warnings in plugin tests

* Fix CMake build of gtest

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-08-08 18:40:40 -07:00
Anthony D'Amato
7cf3e9be59 Fix typo in tracker logging (#5994) 2020-08-09 03:45:46 +08:00
James Lamb
589b385ec6 [R] fix uses of 1:length(x) and other small things (#5992) 2020-08-09 03:31:33 +08:00
Jiaming Yuan
801e6b6800 Fix dask predict shape infer. (#5989) 2020-08-08 14:29:22 +08:00
Jiaming Yuan
4acdd7c6f6 Remove stop process. (#143) 2020-08-05 10:12:00 -07:00
Jiaming Yuan
9c6e791e64 Enforce tree order in JSON. (#5974)
* Make JSON model IO more future proof by using tree id in model loading.
2020-08-05 16:44:52 +08:00
Jiaming Yuan
dde9c5aaff Fix missing data warning. (#5969)
* Fix data warning.

* Add numpy/scipy test.
2020-08-05 16:19:12 +08:00
Jiaming Yuan
8599f87597 Update JSON schema. (#5982)
* Update JSON schema for pseudo huber.
* Update JSON model schema.
2020-08-05 15:21:11 +08:00
Jiaming Yuan
9c93531709 Update Python custom objective demo. (#5981) 2020-08-05 12:27:19 +08:00
Jiaming Yuan
1149a7a292 Fix sklearn doc. (#5980) 2020-08-05 12:26:19 +08:00
Jiaming Yuan
b069431c28 Export DaskDeviceQuantileDMatrix in doc. [skip ci] (#5975) 2020-08-05 00:48:10 +08:00
FelixYBW
e6cd74ead3 Set a minimal reducer size and parent_down size (#139)
* set a minimal reducer msg size. Receive the same data size from parent each time.

* When a parent reads from a child, check that it receives the minimal reduce size.
 Fix bug. Rewrite the minimal reducer size check; make sure each transfer is 1~N times the minimal reduce size.

 Assume the minimal reduce size is X; the logic here is:
 1: each child uploads total_size of message
 2: each parent receives at least X of message, up to total_size
 3: parent reduces X, NxX, or total_size of message
 4: parent sends X, NxX, or total_size of message to its parent
 5: parent's parent receives at least X of message, up to total_size, then reduces X, NxX, or total_size of message
 6: parent's parent sends X, NxX, or total_size of message to its children
 7: parent receives X, NxX, or total_size of message and sends it to its children
 8: child receives X, NxX, or total_size of message

 During the whole process, each transfer is a (1~N)xX-byte message, or up to total_size (see the toy sketch after this entry).

 If X is larger than total_size, then allreduce always reduces the whole message and passes it down.

* Follow style check rule

* fix the cpplint check

* fix allreduce_base header seq

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-07-25 12:46:45 -07:00
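A toy Python sketch of the chunking rule described in #139 above (rabit itself is C++; the helper is hypothetical). X is the minimal reduce size; a receiver may accumulate several consecutive chunks (NxX) before reducing:

    def chunk_offsets(total_size: int, x: int):
        """Yield (start, length) transfer chunks: each is X bytes, except
        possibly a smaller final remainder. If X exceeds total_size, the
        whole message goes in a single transfer."""
        start = 0
        while start < total_size:
            length = min(x, total_size - start)
            yield start, length
            start += length

    print(list(chunk_offsets(10, 4)))  # [(0, 4), (4, 4), (8, 2)]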
Philip Hyunsu Cho
74bf00a5ab De-duplicate macro _CRT_SECURE_NO_WARNINGS / _CRT_SECURE_NO_DEPRECATE (#136)
* De-duplicate macro _CRT_SECURE_NO_WARNINGS / _CRT_SECURE_NO_DEPRECATE

* Move all macros to base.h

* Fix CI
2020-06-28 09:51:50 -07:00
Philip Hyunsu Cho
8fe7f5dc43 [CI] Pass new cpplint check (#141) 2020-06-05 00:45:53 -07:00
Jiaming Yuan
a6008d5d93 Add RABIT_DLL tag to definitions of rabit APIs. (#140)
* Add RABIT_DLL tag to definitions of rabit APIs.
* Fix Travis tests.
2020-05-19 18:20:31 +08:00
Philip Hyunsu Cho
4fb34a008d Use 'default' visibility for C symbols (#138) 2020-04-23 20:48:52 -07:00
Philip Hyunsu Cho
2f7fcff4d7 Fix build on FreeBSD (#133) 2020-01-27 12:15:32 -08:00
Nan Zhu
6e563951af fix hanging trainings (#132)
* fix hanging connections

* remove logging
2020-01-27 09:12:02 -08:00
Chen Qin
0d6a853212 fix xgboost build failure introduced by allgather interface (#129)
* fix missing allgather rabit declaration

* fix allgather signature mismatch

* fix type conversion

* fix GetRingPrevRank
2020-01-01 22:45:14 +08:00
Chen Qin
493ad834a1 allow duplicated bootstrap allreduce overwrite previous results (#128)
* allow timeout of 0 to enable immediate exit

* disable duplicated signature check, overwrite results with same key
2019-11-13 10:19:58 +08:00
nateagr
1907b25cd0 Expose RabitAllGatherRing and RabitGetRingPrevRank (#113)
* add unittests

* Expose RabitAllGatherRing and RabitGetRingPrevRank

* Enabled TCP_NODELAY to decrease latency
2019-11-12 19:55:32 +08:00
Jiaming Yuan
90e2239372 Fix cmake variable. (#126) 2019-11-05 01:27:08 -05:00
Chen Qin
2f25347168 allow timeout of 0 to enable immediate exit (#125) 2019-10-22 14:38:55 -07:00
Chen Qin
d22e0809a8 throw dmlc::Error (#120)
* throw dmlc::Error handled by xgboost jni
2019-10-16 13:12:15 -04:00
Philip Hyunsu Cho
33dbc10aab Fix compilation failure on Windows (#119)
* Fix compilation failure on Windows

* Fix lint
2019-10-15 23:37:42 +07:00
Chen Qin
8e2c201d23 fix assert timeout_sec (#117) 2019-10-14 04:44:26 -04:00
Jiaming Yuan
ed9328ceae Fix lint. (#115) 2019-10-13 07:38:29 -04:00
Jiaming Yuan
6dab74689c Add SeekEnd to MemoryFixSizeBuffer. (#109)
* Don't assert buffer size.
2019-10-13 00:09:25 -04:00
Chen Qin
5d1b613910 exit when allreduce/broadcast error cause timeout (#112)
* keep async timeout task

* add missing pthread to cmake

* add tests

* Add a sleep period to avoid flushing the tracker.
2019-10-11 03:39:39 -04:00
Chen Qin
af7281afe3 unittests mock, cleanup (#111)
* cleanup, fix issues introduced after removing the is_bootstrap parameter

* misc

* clean

* add unittests
2019-10-01 13:36:11 -07:00
Chen Qin
ddcc2d85da Clean up cmake script and code includes (#106)
* Clean up CMake scripts and related include paths.
* Add unittests.
2019-09-26 02:29:04 -04:00
Xu Xiao
e92641887b remove unreached code of AllreduceRobust::CheckAndRecover (#108) 2019-09-18 23:06:59 -04:00
Jiaming Yuan
d4ce6807c7 Don't use _builtin_FUNCTION. (#107) 2019-09-18 12:05:23 -04:00
Chen Qin
9a7ac85d7e remove is_bootstrap parameter (#102)
* apply openmp simd

* clean up __builtin detection, move the Windows build check from the xgboost project, add OpenMP support for vectorized reduce

* apply openmp only to rabit

* organize rabit signature

* remove is_bootstrap, use load_checkpoint as implicit flag

* visual studio doesn't support the latest openmp

* organize omp declarations

* replace memory copy with vector cast

* Revert "replace memory copy with vector cast"

This reverts commit 28de4792dcdff40d83d458510d23b7ef0b191d79.

* Revert "orgnize omp declarations"

This reverts commit 31341233d31ce93ccf34d700262b1f3f6690bbfe.

* remove openmp settings, merge into an upcoming pr

* mis

* per feedback, update comments
2019-09-10 11:45:50 -07:00
Chen Qin
5797dcb64e support bootstrap allreduce/broadcast (#98)
* support running rabit tests as an xgboost subproject using xgboost/dmlc-core

* support tracker config set/get

* remove redundant printf

* remove redundant printf

* add c++0x declaration

* log allreduce/broadcast caller, engine should track caller stack for
investigation

* tracker support binary config format

* Revert "tracker support binary config format"

This reverts commit 2a28e5e2b55c200cb621af8d19f17ab1bc62503b.

* remove caller, prototype fetch allreduce/broadcast results from resbuf

* store cached allreduce/broadcast seq_no to tracker

* allow restoring all caches from other nodes

* try new rabit collective cache, todo: recv_link seems down

* link up cache restore with main recovery

* cleanup load cache state

* update cache api

* pass test.mk

* have working tests

* try to unify check into actionsummary

* more logging to debug distributed hist tree method issue

* update rabit interface to support caller signature matching

* split seq_counter and cur_cache_seq into different variables

* still see issue with inf loop

* support debug print caller as well as allreduce op

* cleanup

* remove get/set cache from model_recover, adding recover in
loadcheckpoint

* clarify rabit cache strategy: the cache is set only by a successful collective
call involving all nodes with a unique cache key. If all nodes call
getcache at the same time, we let rabit run the collective call. If some nodes
call getcache while others don't, we backfill the cache from the nodes with
the most entries

* revert caller logs

* fix lint error

* fix engine mpi signature

* support getcache by ref

* allow result buffer to persist to filestream

* add logging

* try fix checkpoint failure recovery case

* use int64_t to avoid an overflow-caused segfault

* try avoid int overflow

* try fix checkpoint failure recovery case

* try to avoid seqno overflowing to negative by offsetting the special flag value;
add the cache seq no to checkpoint/load checkpoint/checkpoint ack to avoid
confusion during cache recovery

* fix cache seq assert error

* remove logging, handle edge case

* add extensive logging of checkpoint state with different seq no

* fix lint errors

* clean up comments before merge back to master

* add logs to allreduce/broadcast/checkpoint

* use unsigned int32 and give seq no a larger range

* address review: remove allreduce dropseq code segment

* use caller signature to filter bootstrap allreduces

* remove get/set cache from empty

* apply signature to reducer

* apply signature to broadcast

* add key to broadcat log

* fix broadcast signature

* fix default _line value for non-Linux systems

* adding comments, remove sleep(1)

* fix osx build issue

* try fix mpi

* fix doc

* fix engine_empty api

* logging, adding more logs, restore immutable assertion

* print unsigned int with ud

* fix lint

* rename seqtype to kSeq and KCache, indicating their usage;
apply the kDiffSeq check to the load_cache routine

* comment allreduce/broadcast log

* allow tests run on arm

* enable flag to turn on / off cache

* add log info alert if user choose to enable rabit bootstrap cache

* add rabit_debug setting so user can use config to turn on

* log flags when user turn on rabit_debug

* force rabit restart if tracker assign -1 rank

* use OPENMP to vecotrize reducer

* address comment

* Revert "address comment"

This reverts commit 1dc61f33e7357dad8fa65528abeb81db92c5f9ed.

* fix checkpoint size print 0

* per feedback, remove DISABLEOPEMP, address race condition

* - remove openmp from this pr
- update name from cache to bootstrapcache

* add default value of signature macros

* remove openmp from cmake file

* Update src/allreduce_robust.cc

Co-Authored-By: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Update src/allreduce_robust.cc

Co-Authored-By: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* run test with cmake

* remove openmp

* fix cmake based tests

* use cmake test fix darwin .dylib issue

* move around rabit_signature definition due to windows build

* misc, add c++ check in CMakeFile

* per feedback

* resolve CMake file

* update rabit version
2019-08-27 18:12:33 -07:00
Nan Zhu
dba32d54d1 allow shutdown multiple times (#99) 2019-07-16 12:41:39 -07:00
Nan Zhu
65b718a5e7 return values in Init and Finalize (#96)
* make init function return values

* address the comments
2019-06-25 20:05:54 -07:00
Nan Zhu
fc85f776f4 allow not stopping the process on error (#97)
* allow not stopping the process on error

* fix merge error
2019-06-25 13:04:39 -07:00
Nan Zhu
a429748e24 allow multiple calls to init (#92) 2019-04-26 18:41:02 -07:00
Chen Qin
5c3b36f346 Allow using external dmlc-core (#91)
* Set `RABIT_BUILD_DMLC=1` if using dmlc-core in rabit

* remove dmlc-core
2019-04-26 15:28:45 +08:00
Chen Qin
e3d51d3e62 [rabit harden] Enable all tests (#90)
* include osx in tests
* address `time_wait` on port assignment
* increase submit attempts.
* cleanup tests
2019-04-24 19:12:11 +08:00
Chen Qin
ecd4bf7aae [rabit harden] replace hardcopy dmlc-core headers with submodule links (#86)
* backport dmlc header changes to rabit

* use gitmodule to reference latest dmlc header files

* include ref to dmlc-core
fix cmake

* update cmake file, add cmake build travis task

* try force using g++-4.8

* per feedback, update cmake
2019-03-23 13:11:29 +08:00
Chen Qin
785d7e54d3 [mpi] add engine_mpi travis build (#83) 2019-03-15 22:58:47 +08:00
Chen Qin
ed06e0c6af [rabit harden] fix rabit tests (#81)
* enable model recovery tests
* force use gcc4.8 in Travis
2019-03-15 07:16:45 +08:00
Jiaming Yuan
1cc34f01db Fix ssize_t definition. (#80)
* Fix linter.
2019-02-18 19:25:08 +08:00
Jiaming Yuan
0101a4719c Remove dmlc logging. (#78)
* Remove dmlc logging header.

* Fix lint.
2019-02-16 18:37:54 -08:00
Jiaming Yuan
05941a5f96 Try fixing mingw build error when using CMake. (#77)
* Try fixing mingw build error when using CMake.

* Check __MINGW32__ .

* Fix linter.
2019-02-16 22:35:43 +08:00
Chen Qin
eb2590b774 workaround macosx java test race condition (#74)
* fix error in dmlc#57, clean up comments and naming

* include missing packages, disable recovery tests for now

* disable local_recover tests until we have a bug fix

* support larger cluster

* fix lint, merge with master

* fix mac osx test failure in https://github.com/dmlc/xgboost/pull/3818

* Update allreduce_robust.cc
2018-10-26 12:39:31 -07:00
Chen Qin
3a35dabfae support larger cluster (#73)
* fix error in dmlc#57, clean up comments and naming

* include missing packages, disable recovery tests for now

* disable local_recover tests until we have a bug fix

* support larger cluster

* fix lint, merge with master
2018-10-22 10:13:45 -07:00
Chen Qin
69cdfae22f disable travis model_recover tests, fix doc generate failure (#71)
* add missing packages used in dmlc submit

* disable local_recovery tests until we have a code fix

* fix doc gen failure
2018-10-19 18:18:16 -07:00
Chen Qin
785bde6f87 add missing packages used in dmlc submit (#70) 2018-10-19 13:04:33 -07:00
Ruifeng Zheng
edc403fb2c init (#60) 2018-07-04 12:31:24 -07:00
Philip Hyunsu Cho
87143deb4c Don't define DMLC_LOG_STACK_TRACE on Solaris (#59)
DMLC_LOG_STACK_TRACE involves use of non-standard header execinfo.h, which
causes compilation failure on Solaris.
2018-06-15 22:33:46 -07:00
trivialfis
fc5072b100 Fix building shared library. (#58) 2018-05-24 09:05:37 -07:00
Will Storey
7bc46b8c75 Allow compiling with -Werror=strict-prototypes (#56)
Without this, with gcc 7.3.0, we see things like:

/xgboost/include/xgboost/c_api.h:98:1: error: function
declaration isn't a prototype [-Werror=strict-prototypes]
 XGB_DLL const char *XGBGetLastError();
  ^~~~~~~
2018-03-18 22:21:35 -07:00
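The warning above is C-specific: in C, an empty parameter list declares a function taking unspecified arguments, so `XGBGetLastError()` is not a prototype. The usual fix is to spell out `(void)`, which is a prototype in C and means the same as `()` in C++. A sketch with a hypothetical linkage macro (the real header uses XGB_DLL):

```cpp
// Hypothetical header excerpt; XGB_EXTERN_C stands in for the real
// export/linkage macros used by the project.
#ifdef __cplusplus
#define XGB_EXTERN_C extern "C"
#else
#define XGB_EXTERN_C
#endif
XGB_EXTERN_C const char *XGBGetLastError(void);  // was: XGBGetLastError();
```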
Dennis O'Brien
440e81db0b Fixed print statements and xrange to be compatible with Python 2 and 3. (#55) 2018-02-26 12:19:04 -08:00
David Hirvonen
0759d5ed2b add cmake w/ relocatable pkgconfig installation (#53) 2018-01-07 14:49:39 -08:00
snehlatamohite
2eb1a1a371 Use -msse2 flag depending upon architecture while compiling the rabit code (#49) 2017-09-01 08:42:45 -07:00
Qiang Kou (KK)
41c96a25a9 To compile on ARM CPU (#46) 2017-07-12 20:24:19 -07:00
Artem Krylysov
0b406754fa Fix C API header compatibility with C compilers (#44) 2017-06-01 09:21:48 -07:00
Ziyue Huang
ab5f203b44 fix error: ‘nullptr’ was not declared in this scope (#43) 2017-04-23 10:44:11 -07:00
tqchen
a1acf23b60 only doc rabit 2017-03-17 22:09:13 -07:00
tqchen
a764d45cfb sync dmlc headers 2017-03-16 10:16:23 -07:00
AbdealiJK
21b5e12913 allreduce_robust.cc: Allow num_global_replica to be 0 (#38)
In some cases, users may not want any global replica of
the data being broadcast/all-reduced. In such cases, set
result_buffer_round to -1 as a flag that replication is not
needed, and check for it.
2016-11-23 19:34:11 -08:00
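A sketch of the flag convention this commit describes (function names and the replica formula are assumed, not the exact upstream code): -1 in result_buffer_round marks "no global replica requested", and consumers test the flag before replicating.

```cpp
#include <algorithm>

// -1 acts as a sentinel meaning replication is disabled entirely.
int ComputeResultBufferRound(int world_size, int num_global_replica) {
  if (num_global_replica == 0) return -1;  // flag: no global replica
  return std::max(world_size / num_global_replica, 1);
}

bool NeedReplication(int result_buffer_round) {
  return result_buffer_round != -1;  // check the flag before replicating
}
```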
Tianqi Chen
032152ad24 Update .travis.yml 2016-11-23 10:14:32 -08:00
kabu4i
af1b7d6e7a Applied FreeBSD support (#37) 2016-11-15 21:10:51 -08:00
tqchen
a9a2a69dc1 Merge branch 'master' of ssh://github.com/tqchen/rabit 2016-08-26 15:06:03 -07:00
tqchen
cd1db1afaa sync dmlc header 2016-08-26 15:05:42 -07:00
tomlaube
1007a26641 Fixing the imports to work with MPI (#30) 2016-08-26 15:04:41 -07:00
elferdo
7e15fdd9c6 FreeBSD does not have fopen64 (as of 10.3). Detect it and replace with fopen. (#29)
2016-08-20 08:35:01 -07:00
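A sketch of that substitution (the guard is assumed, not the exact upstream patch); FreeBSD's plain fopen already handles large files, so the alias is safe there:

```cpp
#include <cstdio>

// FreeBSD has no fopen64, but its fopen is already 64-bit capable.
#if defined(__FreeBSD__) && !defined(fopen64)
#define fopen64 fopen
#endif
```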
Tianqi Chen
2dd7476ad7 Merge pull request #28 from randomjohnnyh/master
Use getaddrinfo instead of gethostbyname for thread safety
2016-07-27 10:40:24 -07:00
Johnny Ho
9d235c31a7 Use getaddrinfo instead of gethostbyname for thread safety 2016-07-27 02:35:02 -04:00
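The motivation for the swap: gethostbyname() returns a pointer to static storage, so concurrent lookups from multiple threads can race, while getaddrinfo() fills a caller-owned result list. A minimal IPv4 resolution sketch (error handling trimmed; not the exact upstream code):

```cpp
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstring>
#include <string>

// Thread-safe host resolution: the addrinfo list belongs to the caller
// and is released with freeaddrinfo().
bool ResolveHost(const std::string &host, sockaddr_in *out) {
  addrinfo hints, *res = nullptr;
  std::memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_INET;
  hints.ai_socktype = SOCK_STREAM;
  if (getaddrinfo(host.c_str(), nullptr, &hints, &res) != 0) return false;
  std::memcpy(out, res->ai_addr, sizeof(sockaddr_in));
  freeaddrinfo(res);
  return true;
}
```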
Tianqi Chen
8f61535b83 Update README.md 2016-05-10 20:14:53 -07:00
Tianqi Chen
b8aec1730c Update README.md 2016-05-10 20:14:29 -07:00
tqchen
e19fced5cb [FIX] rabit on single node 2016-05-10 20:05:59 -07:00
tqchen
849b20b7c8 add distributed checking 2016-04-11 15:43:01 -07:00
tqchen
be50e7b632 Make rabit library thread local 2016-03-01 20:12:51 -08:00
tqchen
aeb4008606 remove connect msg 2016-02-29 16:27:48 -08:00
tqchen
1392e9f3da fix travis 2016-02-29 15:51:36 -08:00
tqchen
225f5258c7 [DMLC] Add dep to dmlc logging 2016-02-29 14:59:44 -08:00
tqchen
56ec4263f9 fix type 2016-02-28 13:19:54 -08:00
tqchen
e3188afbe8 fix 2016-02-28 13:09:18 -08:00
tqchen
c7d53aecc3 add link tag 2016-02-28 09:44:11 -08:00
tqchen
26c87ec6e7 fix test 2016-02-28 09:35:08 -08:00
tqchen
f0f07ecd22 fix 2016-02-27 20:51:00 -08:00
tqchen
e814dc8a4b Fix docstring 2016-02-27 18:13:42 -08:00
tqchen
d45fca0298 fix build 2016-02-27 18:10:58 -08:00
tqchen
7479791f6a refactor: librabit 2016-02-27 10:14:41 -08:00
tqchen
73b6e9bbd0 [TRACKER] remove tracker in rabit, use DMLC 2016-02-27 09:07:40 -08:00
tqchen
112d866dc9 [RABIT] fix rabit in local mode 2016-01-12 21:34:26 -08:00
tqchen
05b958c178 [RABIT] Sync with dmlc 2016-01-09 21:43:29 -08:00
Tianqi Chen
bed63208af Merge pull request #26 from DrAndrey/master
Fix bug with name of sleep function
2015-11-18 09:58:21 -08:00
Andrey
291ab05023 Remove redundant whitespace again 2015-11-18 10:21:03 +03:00
Andrey
de251635b1 Remove redundant whitespace 2015-11-18 00:53:53 +03:00
Andrey
3a6be65a20 Fix bug with name of sleep function 2015-11-17 21:45:52 +03:00
Tianqi Chen
e81a11dd7e Merge pull request #25 from daiyl0320/master
add retry mechanism to ConnectTracker and modify Listen backlog to 128 in rabit_tracker.py
2015-10-20 19:34:01 -07:00
yonglong.dyl
35c3b371ea add retry mechanism to ConnectTracker and modify Listen backlog to 128 in rabit_tracker.py
2015-10-21 10:24:07 +08:00
tqchen
c71ed6fccb try deploy doxygen 2015-08-22 21:37:14 -07:00
tqchen
62e5647a33 try deploy doxygen 2015-08-22 21:33:23 -07:00
tqchen
732f1c634c try 2015-08-21 08:40:55 -07:00
tqchen
2fa6e0245a ok 2015-08-21 08:08:32 -07:00
tqchen
053766503c minor 2015-08-21 08:07:25 -07:00
tqchen
7b59dcb8b8 minor 2015-08-21 07:59:06 -07:00
tqchen
5934950ce2 new doc 2015-08-02 17:54:46 -07:00
tqchen
f5381871a3 ok 2015-08-01 21:40:05 -07:00
tqchen
44b60490f4 new doc 2015-08-01 21:36:09 -07:00
tqchen
387339bf17 add more 2015-07-30 18:16:15 -07:00
tqchen
9d4397aa4a chg 2015-07-30 17:59:16 -07:00
tqchen
2879a4853b chg 2015-07-30 17:58:42 -07:00
tqchen
30e3110170 ok 2015-07-28 23:18:15 -07:00
tqchen
9ff0301515 add link translation 2015-07-28 23:16:48 -07:00
tqchen
6b629c2e81 k 2015-07-27 18:41:17 -07:00
tqchen
32e19558e6 ok 2015-07-27 18:38:22 -07:00
tqchen
8f4839d1d9 fix 2015-07-27 18:34:43 -07:00
tqchen
93137b2e52 ok 2015-07-27 18:34:07 -07:00
tqchen
7eeeb79599 reload recommonmark 2015-07-27 18:33:19 -07:00
tqchen
a8f00cc4a5 minor 2015-07-27 18:16:03 -07:00
tqchen
19b0f019c7 ok 2015-07-27 18:14:01 -07:00
tqchen
dd011849b7 minor 2015-07-27 17:59:22 -07:00
tqchen
c1cdc194e9 minor 2015-07-27 17:50:02 -07:00
tqchen
fcf0f4351a try rst 2015-07-27 17:47:28 -07:00
tqchen
cbc21ae531 try 2015-07-27 17:46:08 -07:00
tqchen
62ddfa7709 tiny 2015-07-26 21:13:35 -07:00
tqchen
aefc05cb91 final change 2015-07-26 21:09:58 -07:00
tqchen
2aee9b4959 minor 2015-07-26 20:57:48 -07:00
tqchen
fe4e7c2b96 ok 2015-07-26 20:56:54 -07:00
tqchen
800198349f change to subtitle 2015-07-26 20:54:10 -07:00
tqchen
5ca33e48ea ok 2015-07-26 20:52:52 -07:00
tqchen
88f7d24de9 update guide 2015-07-26 20:52:34 -07:00
tqchen
29d43ab52f add code 2015-07-26 14:57:24 -07:00
tqchen
fe8bb3b60e minor hack for readthedocs 2015-07-26 14:47:40 -07:00
tqchen
229c71d9b5 Merge branch 'master' of ssh://github.com/dmlc/rabit 2015-07-26 14:46:24 -07:00
tqchen
7424218392 ok 2015-07-26 14:46:16 -07:00
Tianqi Chen
d1d45bbdae Update README.md 2015-07-26 14:43:08 -07:00
Tianqi Chen
1e8813f3bd Update README.md 2015-07-26 14:42:57 -07:00
Tianqi Chen
1ccc9903a1 Update README.md 2015-07-26 14:41:25 -07:00
tqchen
0323e0670e remove readme 2015-07-26 14:22:50 -07:00
tqchen
679a835d38 remove theme 2015-07-26 14:14:19 -07:00
tqchen
7ea5b7c209 remove numpydoc to napoleon 2015-07-26 14:02:43 -07:00
tqchen
b73e2be55e Merge branch 'master' of ssh://github.com/dmlc/rabit
Conflicts:
	doc/python-requirements.txt
2015-07-26 13:52:31 -07:00
tqchen
174228356d ok 2015-07-26 13:51:56 -07:00
Tianqi Chen
1838e25b8a Update python-requirements.txt 2015-07-26 13:05:52 -07:00
tqchen
bc4e957c39 ok 2015-07-26 13:00:18 -07:00
tqchen
fba6fc208c ok 2015-07-26 12:54:21 -07:00
tqchen
025110185e ok 2015-07-26 12:52:37 -07:00
tqchen
d50b905824 ok 2015-07-26 12:46:19 -07:00
tqchen
d4f2509178 ok 2015-07-26 12:43:49 -07:00
tqchen
cdf401a77c ok 2015-07-26 12:40:21 -07:00
tqchen
fef0ef26f1 new doc 2015-07-26 12:29:18 -07:00
tqchen
cef360d782 ok 2015-07-26 12:15:00 -07:00
tqchen
c125d2a8bb ok 2015-07-26 12:14:54 -07:00
tqchen
270a49ee75 add requirments 2015-07-23 22:22:52 -07:00
tqchen
744f9015bb get the basic doc 2015-07-23 22:14:42 -07:00
tqchen
1cb5cad50c Merge branch 'master' of ssh://github.com/dmlc/rabit 2015-07-03 17:42:00 -07:00
tqchen
8cc07ba391 minor 2015-07-03 17:41:52 -07:00
Tianqi Chen
d74f126592 Update .travis.yml 2015-07-03 15:35:47 -07:00
Tianqi Chen
52b3dcdf07 Update .travis.yml 2015-07-03 15:33:38 -07:00
Tianqi Chen
099581b591 Update .travis.yml 2015-07-03 15:31:43 -07:00
Tianqi Chen
1258046f14 Update .travis.yml 2015-07-03 15:29:30 -07:00
Tianqi Chen
7addac910b Update Makefile 2015-07-03 15:23:26 -07:00
Tianqi Chen
0ea7adff92 Update .travis.yml 2015-07-03 15:21:20 -07:00
Tianqi Chen
f858856586 Update travis_script.sh 2015-07-03 15:20:59 -07:00
Tianqi Chen
d8eac4ae27 Update README.md 2015-07-03 15:17:22 -07:00
tqchen
3cc49ad0e8 lint and travis 2015-07-03 15:15:11 -07:00
tqchen
ceedf4ea96 fix 2015-05-28 12:37:06 -07:00
Tianqi Chen
fd8920c71d fix win32 2015-05-28 12:24:26 -07:00
tqchen
8bbed35736 modify 2015-05-28 10:44:19 -07:00
Tianqi Chen
9520b90c4f Merge pull request #14 from dmlc/hjk41
add kLongLong and kULongLong
2015-05-20 05:38:01 +02:00
Chuntao Hong
df14bb1671 fix type 2015-05-20 11:36:17 +08:00
Chuntao Hong
f441dc7ed8 replace tabs with spaces 2015-05-20 11:33:48 +08:00
Chuntao Hong
2467942886 remove unnecessary include 2015-05-20 11:32:16 +08:00
Chuntao Hong
181ef47053 defined long long and ulonglong 2015-05-20 11:27:50 +08:00
Chuntao Hong
1582180e5b use int32_t to define int and int64_t to define long; in VC, long is 32-bit 2015-05-20 10:09:09 +08:00
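The point of the change above, sketched with hypothetical typedef names: MSVC keeps long at 32 bits even on 64-bit Windows, so fixed-width aliases keep serialized sizes portable across compilers.

```cpp
#include <cstdint>

typedef std::int32_t rbt_int;   // always 32-bit
typedef std::int64_t rbt_long;  // always 64-bit, unlike MSVC's long
static_assert(sizeof(rbt_long) == 8, "rbt_long must be 8 bytes");
```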
tqchen
e0b7da0302 fix 2015-05-02 21:47:43 -07:00
tqchen
fa99857467 try fix warning on some platforms 2015-05-01 22:45:11 -07:00
tqchen
24f17df782 ok 2015-04-29 20:23:39 -07:00
tqchen
4fe8d1d66b ok io 2015-04-29 20:21:37 -07:00
tqchen
a5d77ca08d checkin new dmlc interface 2015-04-29 20:17:27 -07:00
tqchen
d1d2ab4599 remove at end 2015-04-28 10:49:44 -07:00
tqchen
e1ddcc2eb7 Merge branch 'master' of ssh://github.com/dmlc/rabit 2015-04-27 15:55:58 -07:00
tqchen
6745667eb0 new dmlc io 2015-04-27 15:55:51 -07:00
tqchen
c5b4610cfe sge scheduler change 2015-04-26 22:08:47 -07:00
tqchen
fed1683b9b minor 2015-04-25 21:24:38 -07:00
Tianqi Chen
c01520f173 change 2015-04-25 21:23:16 -07:00
tqchen
27340f95e4 final minor 2015-04-25 21:19:42 -07:00
Tianqi Chen
e03eabccda allow win32 2015-04-25 21:18:36 -07:00
tqchen
82ca10acb6 better handling at msvc 2015-04-25 20:52:07 -07:00
Tianqi Chen
6601939588 Merge pull request #12 from zjf/patch-2
Update rabit-inl.h
2015-04-23 23:16:49 -07:00
Jianfeng Zhu
df8f917463 Update rabit-inl.h
Fix missing parenthesis
2015-04-24 14:09:47 +08:00
tqchen
c60b284e1f resize during tracker print 2015-04-20 11:37:45 -07:00
tqchen
c67967161e fix io style 2015-04-19 00:21:38 -07:00
tqchen
f52daf9be1 make timer cross platform 2015-04-19 00:01:48 -07:00
tqchen
7568f75f45 new io interface 2015-04-17 20:35:44 -07:00
tqchen
3bf8661ec1 add std before basic 2015-04-13 13:43:34 -07:00
tqchen
18f4d6c0ba remove rabit learn 2015-04-11 20:25:52 -07:00
tqchen
bcfbe51e7e fix dmlc io 2015-04-11 18:16:52 -07:00
tqchen
ad383b084d ok 2015-04-11 17:55:20 -07:00
tqchen
3b8c04a902 Merge branch 'master' of ssh://github.com/dmlc/rabit 2015-04-11 17:35:11 -07:00
tqchen
9dd97cc141 keepup with dmlc core 2015-04-11 17:35:03 -07:00
Ubuntu
ef13aaf379 ch 2015-04-11 05:29:07 +00:00
tqchen
50a66b3855 fix empty engine 2015-04-09 08:44:33 -07:00
tqchen
e08542c635 fix doc 2015-04-08 15:30:56 -07:00
tqchen
e95c96232a remove I prefix from interface, serializable now takes in pointer 2015-04-08 15:25:58 -07:00
tqchen
b15f6cd2ac rabit unifies with dmlc 2015-04-05 09:55:24 -07:00
tqchen
5634ec3008 ok 2015-04-03 22:25:33 -07:00
tqchen
2dd6c2f0c9 Merge branch 'master' of ssh://github.com/dmlc/rabit 2015-03-30 22:18:20 -07:00
tqchen
38d7f999a7 checkin wormhole spliter 2015-03-30 22:18:02 -07:00
Tianqi Chen
8acb96a627 Merge pull request #10 from ryanzz/master
fixed a mistake
2015-03-30 08:46:15 -07:00
ryanzz
911a1f0ce2 fixed a mistake 2015-03-30 16:25:36 +08:00
tqchen
732d8c33d1 interface changes 2015-03-29 22:00:37 -07:00
tqchen
684ea0ad26 interface changes 2015-03-29 22:00:33 -07:00
tqchen
8cb4c02165 add dmlc support 2015-03-28 22:44:10 -07:00
tqchen
be2ff703bc allow adapting wormhole 2015-03-27 17:33:51 -07:00
tqchen
16975b447c try pass on tokens during application submission 2015-03-27 11:04:19 -07:00
tqchen
eb1f4a4003 change auto to ip 2015-03-26 23:26:30 -07:00
tqchen
59e63bc135 minor 2015-03-21 00:38:37 -07:00
tqchen
62330505e1 ok 2015-03-21 00:37:59 -07:00
tqchen
14477f9f5a add namenode 2015-03-21 00:35:30 -07:00
tqchen
75a6d349c6 add libhdfs opts 2015-03-21 00:26:30 -07:00
tqchen
e3c76bfafb minimum fix 2015-03-21 00:25:16 -07:00
tqchen
8b3c435241 chg 2015-03-20 15:11:50 -07:00
tqchen
2035799817 test code 2015-03-20 13:02:46 -07:00
tqchen
7751b2b320 add debug 2015-03-15 23:52:16 -07:00
tqchen
769031375a ok 2015-03-15 23:47:24 -07:00
tqchen
bd346b4844 ok 2015-03-15 23:44:32 -07:00
tqchen
faba1dca6c add testload 2015-03-15 23:42:18 -07:00
tqchen
6f7783e4f6 add testload 2015-03-15 23:42:17 -07:00
tqchen
e5f034040e ok 2015-03-15 23:20:30 -07:00
tqchen
3ed9ec808f chg 2015-03-15 23:19:54 -07:00
tqchen
e552ac401e ask for more RAM in the AM 2015-03-15 23:14:56 -07:00
tqchen
b2505e3d6f only stop NM on success 2015-03-15 23:02:15 -07:00
tqchen
bc696c9273 add queue info 2015-03-15 22:54:09 -07:00
tqchen
f3e867ed97 add option queue 2015-03-15 22:38:51 -07:00
tqchen
5dc843cff3 refactor fileio 2015-03-14 16:46:54 -07:00
tqchen
cd9c81be91 quick fix 2015-03-14 09:20:04 -07:00
tqchen
1e23af2adc add virtual destructor to iseekstream 2015-03-14 00:20:37 -07:00
tqchen
f165ffbc95 fix hdfs 2015-03-13 22:59:04 -07:00
tqchen
8cc650847a allow demo to pass in env 2015-03-13 22:27:36 -07:00
tqchen
fad4d69ee4 ok 2015-03-13 21:38:03 -07:00
tqchen
0fd6197b8b fix more 2015-03-13 21:36:09 -07:00
tqchen
7423837303 fix more 2015-03-13 21:36:08 -07:00
tqchen
d25de54008 add temporal solution, run_yarn_prog.py 2015-03-13 21:13:19 -07:00
tqchen
e5a9e31d13 final attempt 2015-03-13 00:04:51 -07:00
tqchen
ed3bee84c2 add command back 2015-03-12 22:48:30 -07:00
tqchen
07740003b8 add hdfs to resource 2015-03-12 22:43:41 -07:00
tqchen
9b66e7edf2 fix hadoop 2015-03-12 20:57:49 -07:00
tqchen
6812f14886 ok 2015-03-12 09:44:43 -07:00
tqchen
08e1c16dd2 change hadoop prefix back to hadoop home 2015-03-12 09:06:42 -07:00
Tianqi Chen
d6b68286ee Update build.sh 2015-03-12 09:03:02 -07:00
tqchen
146e069000 bugfix: logical boundary for ring buffer 2015-03-11 20:28:34 -07:00
tqchen
19cb685c40 ok 2015-03-12 02:59:50 +00:00
tqchen
4cf3c13750 Merge branch 'master' of ssh://github.com/tqchen/rabit
Conflicts:
	tracker/rabit_tracker.py
2015-03-11 13:35:35 -07:00
tqchen
20daddbeda add tracker 2015-03-11 13:27:23 -07:00
tqchen
c57dad8b17 add ringbased passing and batch schedule 2015-03-11 12:00:19 -07:00
tqchen
295d8a12f1 update 2015-03-10 15:28:10 -07:00
tqchen
994cb02a66 add sge 2015-03-10 15:26:40 -07:00
tqchen
014c86603d OK 2015-03-10 10:51:39 -07:00
tqchen
091634b259 fix 2015-03-09 14:56:01 -07:00
tqchen
d558f6f550 redefine distributed means 2015-03-09 14:43:05 -07:00
tqchen
c8efc01367 more complicated yarn script 2015-03-09 14:36:44 -07:00
tqchen
28ca7becbd add linear readme 2015-03-09 13:12:40 -07:00
tqchen
ca4b20fad1 add linear readme 2015-03-09 13:12:04 -07:00
tqchen
1133628c01 add linear readme 2015-03-09 13:11:17 -07:00
tqchen
6a1167611c update docs 2015-03-09 13:00:34 -07:00
Tianqi Chen
a607047aa1 Update build.sh 2015-03-08 23:55:42 -07:00
tqchen
2c1cfd8be6 complete yarn 2015-03-08 23:51:42 -07:00
tqchen
4f28e32ebd change formatter 2015-03-08 12:29:07 -07:00
tqchen
2fbda812bc fix stdin input 2015-03-08 12:22:11 -07:00
tqchen
3258bcf531 checkin yarn master 2015-03-08 11:03:13 -07:00
tqchen
67ebf81e7a allow setup from env variables 2015-03-07 16:45:31 -08:00
tqchen
9b6bf57e79 fix hdfs 2015-03-07 09:08:21 -08:00
tqchen
395d5c29d5 add make system 2015-03-06 22:30:23 -08:00
tqchen
88ce76767e refactor io, initial hdfs file access need test 2015-03-06 22:17:27 -08:00
tqchen
19be870562 chgs 2015-03-06 21:12:04 -08:00
tqchen
a1bd3c64f0 Merge branch 'master' of ssh://github.com/tqchen/rabit 2015-03-06 21:09:59 -08:00
tqchen
1a573f987b introduce input split 2015-03-06 21:08:04 -08:00
tqchen
29476f1c6b fix timer issue 2015-03-06 20:59:10 -08:00
tqchen
d4ec037f2e fix rabit 2015-03-03 13:12:05 -08:00
tqchen
6612fcf36c Merge branch 'master' of ssh://github.com/tqchen/rabit 2015-03-02 16:10:15 -08:00
tqchen
d29892cb22 add mock option statis 2015-03-02 16:10:08 -08:00
tqchen
4fa054e26e new tracker 2015-03-02 07:32:25 +00:00
tqchen
75c647cd84 update tracker for host IP 2015-03-01 23:27:59 -08:00
tqchen
e4ce8efab5 add hadoop linear example 2015-03-02 04:36:48 +00:00
tqchen
76ecb4a031 add hadoop linear example 2015-03-02 04:35:56 +00:00
Ubuntu
2e1c4c945e add hadoop linear example 2015-03-02 04:35:01 +00:00
tqchen
4db0a62a06 bugfix of lazy prepare 2015-02-11 20:31:46 -08:00
tqchen
87017bd4cd license 2015-02-11 14:49:51 -08:00
tqchen
dc703e1b62 license 2015-02-11 14:48:59 -08:00
tqchen
c171440324 change license to bsd 2015-02-11 14:44:26 -08:00
Tianqi Chen
7db2070598 Update README.md 2015-02-09 20:53:29 -08:00
tqchen
581fe06a9b add mocktest 2015-02-09 20:46:38 -08:00
tqchen
d2f252f87a ok 2015-02-09 20:35:30 -08:00
tqchen
4a5b9e5f78 add all 2015-02-09 20:26:39 -08:00
tqchen
12ee049a74 init version of lbfgs 2015-02-09 17:44:32 -08:00
tqchen
37a28376bb complete lbfgs solver 2015-02-09 11:04:19 -08:00
tqchen
6ade7cba94 complete lbfgs 2015-02-08 23:08:59 -08:00
tqchen
1bb8fe9615 chg makefile 2015-01-30 16:46:10 -08:00
tqchen
fb13cab216 change makefile 2015-01-30 16:30:45 -08:00
Tyler
1479e370f8 fixed small bug in mpi submission script 2015-01-25 00:12:46 -08:00
Tianqi Chen
0ca7a63670 Update README.md 2015-01-22 09:16:46 -08:00
tqchen
5ef4830b55 ok 2015-01-20 20:30:22 -08:00
tqchen
93a13381c1 chg note 2015-01-20 20:27:43 -08:00
tqchen
4ebe657dd7 fix in cxx11 2015-01-19 21:37:02 -08:00
tqchen
85b746394e change def of reducer to take function ptr 2015-01-19 21:24:52 -08:00
tqchen
fe6366eb40 add engine base 2015-01-19 19:11:15 -08:00
Tianqi Chen
a98720ebc9 more deps 2015-01-19 08:20:43 -08:00
tqchen
1db6449b01 remove include in -I, make things easier to direct compile 2015-01-18 21:30:19 -08:00
tqchen
c7282acb2a doc 2015-01-18 19:55:04 -08:00
tqchen
f332750359 minor fix 2015-01-18 18:17:41 -08:00
tqchen
9edb3b306f update doc 2015-01-18 18:14:20 -08:00
tqchen
c46120a46b add win32 ver 2015-01-16 21:10:47 -08:00
Tianqi Chen
537497f520 changes 2015-01-16 21:10:01 -08:00
Tianqi Chen
56a80f431b check in windows solutions, pass small test in windows 2015-01-16 20:56:34 -08:00
tqchen
774d501c1f add languages 2015-01-16 11:13:27 -08:00
tqchen
7396c87249 chg 2015-01-16 10:53:31 -08:00
tqchen
c7533f92bb desgin goal 2015-01-16 10:50:05 -08:00
tqchen
38b7fec37a ok 2015-01-16 10:46:55 -08:00
tqchen
c798fc2a29 change toolkit to rabitlearn 2015-01-16 10:45:54 -08:00
tqchen
f5245c615c ok 2015-01-16 10:12:47 -08:00
nachocano
aebb7998a3 updating doc 2015-01-16 00:45:04 -08:00
nachocano
b87da8fe9a small typo 2015-01-15 10:52:39 -08:00
tqchen
1f35478b82 chg docs 2015-01-15 10:29:32 -08:00
tqchen
6d5ac6446c chg test folder 2015-01-15 10:24:58 -08:00
tqchen
8f23eb11d7 change convention 2015-01-15 10:22:59 -08:00
tqchen
0617281863 phrase python as a lib 2015-01-15 10:09:14 -08:00
nachocano
7d67f6f26d removing section 2015-01-15 01:24:04 -08:00
nachocano
34c8253ad6 Merge branch 'master' of https://github.com/tqchen/allreduce 2015-01-15 01:22:21 -08:00
nachocano
86e61ad6a5 adding changes suggested by Tianqi 2015-01-15 01:21:40 -08:00
tqchen
6dbaddd2b9 ok 2015-01-14 22:11:00 -08:00
tqchen
a7faac2f09 ok 2015-01-14 21:59:45 -08:00
tqchen
f161d2f1e5 fix bug in initialization of routing 2015-01-14 19:40:41 -08:00
tqchen
797fe27efe struct return type version 2015-01-14 15:43:28 -08:00
tqchen
a57c5c5425 add more error reporting when things go wrong, needs review 2015-01-14 15:32:36 -08:00
tqchen
968b33ec79 set all tracker threads to daemon 2015-01-14 12:05:00 -08:00
tqchen
87c7817124 add lazy check, need test, find a race condition 2015-01-14 11:58:43 -08:00
Tianqi Chen
bddfa2fc24 Merge pull request #7 from lqhl/master
update the fault tolerance section
2015-01-14 10:09:04 -08:00
Tianqi Chen
d05df9836b Merge pull request #8 from cblsjtu/master
correct a mistake
2015-01-14 10:07:17 -08:00
Boliang Chen
2f2e481fc3 correct a mistake 2015-01-14 20:53:34 +08:00
Qin Liu
1dda51f1fa update the fault tolerance section 2015-01-14 17:07:30 +08:00
tqchen
348a1e7619 change default behavior to behave normally 2015-01-13 22:21:15 -08:00
tqchen
478d250818 minor change 2015-01-13 20:01:15 -08:00
tqchen
532575b752 ok 2015-01-13 14:41:37 -08:00
tqchen
c127f9650c Merge branch 'master' of ssh://github.com/tqchen/rabit 2015-01-13 14:29:20 -08:00
tqchen
3419cf9aa7 add auto caching of Python in hadoop script, mock test module for Python, with checkpointing 2015-01-13 14:29:10 -08:00
tqchen
877fc42e40 add data 2015-01-13 12:51:55 -08:00
nachocano
f79e5fc041 adding more stuff 2015-01-13 01:00:58 -08:00
nachocano
95c6d7398f adding more stuff 2015-01-13 00:59:20 -08:00
nachocano
5c7967e863 adding link 2015-01-13 00:49:57 -08:00
nachocano
54e2f7e90d adding wrapper section 2015-01-13 00:48:37 -08:00
nachocano
48c42bf189 fixing stuff 2015-01-13 00:18:46 -08:00
nachocano
92c94176c1 adding some changes to kmeans 2015-01-13 00:13:05 -08:00
tqchen
15e085cd32 basic allreduce lib ready 2015-01-12 22:59:36 -08:00
tqchen
2d72c853df checkin broadcast python module 2015-01-12 22:32:13 -08:00
tqchen
9a4a81f100 add wrapper 2015-01-12 21:33:01 -08:00
tqchen
61626aaf85 add more data types 2015-01-12 20:45:07 -08:00
tqchen
5a457d69fc Merge branch 'master' of ssh://github.com/tqchen/rabit
Conflicts:
	tracker/rabit_hadoop.py
2015-01-12 12:03:00 -08:00
tqchen
7572794add add stacklevel for rabit 2015-01-12 12:02:28 -08:00
Tianqi Chen
60a10b3322 Merge pull request #6 from cblsjtu/master
modify some explanation
2015-01-12 08:55:41 -08:00
Boliang Chen
ec3fd9bd2a modify some explanation 2015-01-12 23:46:20 +08:00
Boliang Chen
34cde09b2b modify some explanation 2015-01-12 23:41:45 +08:00
nachocano
8dd94461e1 guide 2015-01-12 00:24:46 -08:00
nachocano
9e04ab62fb adding breaks 2015-01-12 00:23:42 -08:00
nachocano
9907bafa1d fix 2015-01-12 00:20:43 -08:00
nachocano
30f3971bee adding more description to toolkit 2015-01-12 00:14:40 -08:00
tqchen
6b651176a3 yarn is part of hadoop script 2015-01-11 21:28:13 -08:00
tqchen
a120edc56e shorter 2015-01-11 11:48:08 -08:00
tqchen
5146409a1d simpler 2015-01-11 11:47:37 -08:00
tqchen
db2ebf7410 use unified script, auto detect hadoop version 2015-01-11 11:46:12 -08:00
tqchen
bfc3f61010 minor 2015-01-11 11:15:12 -08:00
tqchen
78bfe867e6 unify hadoop and yarn script 2015-01-11 11:13:02 -08:00
Tianqi Chen
03dca6d6b3 Merge pull request #5 from EricChenDM/master
add yarn script
2015-01-11 10:41:08 -08:00
chenshuaihua
b2dec95862 yarn script 2015-01-12 00:09:00 +08:00
chenshuaihua
26b5fdac40 yarn script 2015-01-11 23:54:31 +08:00
chenshuaihua
00323f462a yarn script 2015-01-11 23:32:14 +08:00
chenshuaihua
981f69ff55 yarn script 2015-01-11 23:23:58 +08:00
chenshuaihua
5e843cfbbd yarn script 2015-01-11 23:22:26 +08:00
chenshuaihua
b5ac85f103 yarn script 2015-01-11 23:19:04 +08:00
chenshuaihua
d81fb6a9e6 test 2015-01-11 21:59:38 +08:00
nachocano
d269cb9c50 guide stuff 2015-01-11 01:43:32 -08:00
nachocano
2d97833f48 slightly change 2015-01-11 01:35:04 -08:00
nachocano
eef79067a8 more cosmetic stuff 2015-01-11 01:31:10 -08:00
nachocano
aea4c10847 cosmetic changes to tutorial 2015-01-11 01:07:51 -08:00
Tianqi Chen
7eb4258951 Merge pull request #4 from cblsjtu/master
explain time out
2015-01-10 23:43:01 -08:00
Boliang Chen
c6d0be57d4 explain timeout 2015-01-11 15:39:50 +08:00
Boliang Chen
80b0d06b7e merger from tqchen 2015-01-11 14:56:20 +08:00
Boliang Chen
8685b740cc Merge remote-tracking branch 'tqchen/master' 2015-01-11 14:53:10 +08:00
Boliang Chen
7fa23f2d2f modify default jobname 2015-01-11 14:52:48 +08:00
tqchen
ed264002a0 Merge branch 'master' of ssh://github.com/tqchen/rabit
Conflicts:
	tracker/rabit_hadoop.py
2015-01-10 22:50:38 -08:00
tqchen
2e3361f0e0 fix -f 2015-01-10 22:49:56 -08:00
Boliang Chen
363994f29d Merge remote-tracking branch 'tqchen/master' 2015-01-11 13:46:32 +08:00
Boliang Chen
3f4bf96c5d temp 2015-01-11 13:46:18 +08:00
tqchen
0100fdd18d auto jobname 2015-01-10 21:21:39 -08:00
cblsjtu
c0f85c681e Merge pull request #1 from tqchen/master
merge from tqchen
2015-01-11 11:00:08 +08:00
tqchen
43c129f431 chg script 2015-01-10 17:49:09 -08:00
tqchen
500a57697d chg script 2015-01-10 17:45:53 -08:00
tqchen
c2ab64afe3 fix comment 2015-01-10 10:01:31 -08:00
tqchen
6b30fb2bea update cache script 2015-01-10 09:58:10 -08:00
Tianqi Chen
9d34d2e036 Merge pull request #1 from cblsjtu/master
fix several bugs
2015-01-10 09:29:55 -08:00
Boliang Chen
76c15dffde remove blank 2015-01-11 00:16:05 +08:00
Boliang Chen
d986693fbd fix bugs 2015-01-11 00:14:37 +08:00
Boliang Chen
7f5cb3aa0e modify hs 2015-01-10 10:58:53 +08:00
Boliang Chen
697a01bfb4 har -> jar 2015-01-10 10:54:12 +08:00
tqchen
1b4921977f update doc 2015-01-03 05:20:18 -08:00
tqchen
be355c1e60 minor 2015-01-01 06:06:55 -08:00
tqchen
d10a435d64 correct 2015-01-01 06:06:02 -08:00
tqchen
eb2b086b65 ok 2015-01-01 06:04:02 -08:00
tqchen
08ca3b0849 add more links 2015-01-01 06:02:32 -08:00
tqchen
61f21859d9 add api 2015-01-01 05:57:46 -08:00
tqchen
2bfbbfb381 checkin API doc 2015-01-01 05:48:34 -08:00
tqchen
31a3d22af4 add broadcast 2015-01-01 05:42:38 -08:00
tqchen
90a8505208 update guide 2015-01-01 05:42:03 -08:00
tqchen
06206e1d03 start checkin guides 2014-12-30 06:22:54 -08:00
tqchen
bfb9aa3d77 add native script 2014-12-30 04:37:50 -08:00
tqchen
1bcea65117 change nslave to nworker 2014-12-29 18:44:30 -08:00
tqchen
bdfa1a0220 change nslave to nworker 2014-12-29 18:42:24 -08:00
tqchen
39504825d8 add kmeans example 2014-12-29 18:32:56 -08:00
tqchen
76abd80cb7 change indentation 2014-12-29 18:17:20 -08:00
tqchen
b1340bf310 add auto cache 2014-12-29 06:50:17 -08:00
tqchen
c731e82fae add command 2014-12-29 06:37:07 -08:00
tqchen
491716c418 chg 2014-12-29 06:21:34 -08:00
tqchen
d64d0ef1dc cleanup submission script 2014-12-29 06:11:58 -08:00
tqchen
27d6977a3e cpplint pass 2014-12-28 05:12:07 -08:00
tqchen
15836eb98e add task id 2014-12-22 04:17:23 -08:00
tqchen
0dd51d5dd0 add attempt id for hadoop 2014-12-22 04:12:38 -08:00
tqchen
6e6031cbe9 add mock 2014-12-22 03:59:01 -08:00
tqchen
d82a6ed811 add file command 2014-12-22 03:48:14 -08:00
tqchen
ab7492dbc2 add support for yarn 2014-12-22 03:24:00 -08:00
tqchen
d3433c5946 change script 2014-12-22 01:54:11 -08:00
tqchen
975bcc8261 fix 2014-12-22 01:26:59 -08:00
tqchen
dd8d9646c4 rm mpi dep 2014-12-22 01:25:06 -08:00
tqchen
bb2ecc6ad5 remove c++11 2014-12-22 01:10:14 -08:00
tqchen
7a2ae105ea fix script 2014-12-22 01:03:12 -08:00
tqchen
fd533d9a76 add kmeans 2014-12-22 00:32:08 -08:00
tqchen
5fe3c58b4a add kmeans hadoop 2014-12-22 00:31:01 -08:00
tqchen
dcb6e22a9e add mapred tasks 2014-12-22 00:20:13 -08:00
tqchen
12399a1d42 add more mocktest 2014-12-21 17:59:12 -08:00
tqchen
a624051b85 add keepalive to socket; fix recovery problem when a node is a requester and passes data 2014-12-21 17:55:08 -08:00
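A sketch of the keepalive part (not the exact upstream code): with SO_KEEPALIVE set, the OS periodically probes idle peers, so a crashed node is detected and recovery can start instead of a read blocking forever.

```cpp
#include <sys/socket.h>

// Returns 0 on success, -1 on failure (see errno), like setsockopt itself.
inline int EnableKeepAlive(int sockfd) {
  int opt = 1;
  return setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE, &opt, sizeof(opt));
}
```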
tqchen
cfea4dbe85 fix rabit for single node without initialization 2014-12-21 04:35:32 -08:00
tqchen
e40047f9c2 new mock test 2014-12-20 18:38:54 -08:00
tqchen
10bb407a2c add mock engine 2014-12-20 18:31:33 -08:00
tqchen
ecf91ee081 change usage 2014-12-20 16:54:15 -08:00
tqchen
925d014271 change file structure 2014-12-20 16:19:54 -08:00
tqchen
77d74f6c0d fix bug in lambda allreduce 2014-12-20 05:04:16 -08:00
tqchen
5570e7ceae add complex types 2014-12-19 21:12:10 -08:00
tqchen
e72a869fd1 add complex reducer in 2014-12-19 20:57:53 -08:00
tqchen
2c0a0671ad skip actions when there is only 1 node 2014-12-19 19:21:21 -08:00
tqchen
6151899ce2 add tracker print 2014-12-19 18:40:06 -08:00
tqchen
6bf282c6c2 isolate iserializable 2014-12-19 17:36:42 -08:00
tqchen
8c35cff02c improve script 2014-12-19 04:21:16 -08:00
tqchen
9f42b78a18 improve tracker script 2014-12-19 04:20:45 -08:00
tqchen
69d7f71ae8 change kmeans to using lambda 2014-12-19 02:12:53 -08:00
tqchen
1754fdbf4e enable support for lambda preprocessing function, and c++11 2014-12-19 02:00:43 -08:00
tqchen
58331067f8 cleanup testcases 2014-12-18 23:50:59 -08:00
tqchen
aa2cb38543 ResetLink still not ok 2014-12-18 21:45:38 -08:00
tqchen
6b18ee9edb Merge branch 'master' of ssh://github.com/tqchen/rabit 2014-12-18 19:02:05 -08:00
tqchen
c8faed0b54 pass local model recover test 2014-12-18 18:53:58 -08:00
tqchen
dbd05a65b5 nice fix, start check local check 2014-12-18 18:39:24 -08:00
Tianqi Chen
31403a41cd Update rabit.h 2014-12-09 21:03:41 -08:00
tqchen
3f22596e3c check in license 2014-12-09 20:57:54 -08:00
tqchen
cc5efb8d81 Merge branch 'master' of ssh://github.com/tqchen/rabit 2014-12-09 20:56:33 -08:00
root
5aff7fab29 adding : 2014-12-08 17:15:49 +00:00
root
dfb3961eea changing port 2014-12-08 17:13:42 +00:00
Tianqi Chen
39f2dcdfef Update rabit_tracker.py 2014-12-08 08:36:55 -08:00
tqchen
2750679270 normal state running ok 2014-12-07 20:57:29 -08:00
tqchen
b38fa40fa6 fix ring passing 2014-12-07 20:25:42 -08:00
tqchen
8d570b54c7 add code to help link reuse, start test numreplica 2014-12-07 16:22:02 -08:00
tqchen
e2adce1cc1 add ring setup version 2014-12-07 16:09:28 -08:00
tqchen
322e40c72e Merge branch 'master' of ssh://github.com/tqchen/rabit 2014-12-06 23:00:18 -08:00
tqchen
328cf187ba check in the ring passing 2014-12-06 23:00:10 -08:00
nachocano
20b03e781c to run all executables 2014-12-06 15:37:09 -08:00
nachocano
fcf2f0a03d to stderr 2014-12-06 15:22:29 -08:00
nachocano
cd8ab469ff Merge branch 'master' of https://github.com/tqchen/allreduce 2014-12-06 15:14:19 -08:00
nachocano
659b9cd517 changing number of repetitions 2014-12-06 15:14:14 -08:00
root
52d472c209 using hostfile 2014-12-06 20:30:35 +00:00
nachocano
9ed59e71f6 speed runner 2014-12-06 12:09:40 -08:00
nachocano
e0053c62e1 adding executable 2014-12-06 12:05:08 -08:00
nachocano
8f0d7d1d3e changing to -ho not to conflict with help 2014-12-06 12:01:05 -08:00
nachocano
771891491c Merge branch 'master' of https://github.com/tqchen/allreduce 2014-12-06 11:59:22 -08:00
nachocano
f203d13efc speed runner 2014-12-06 11:59:16 -08:00
nachocano
14e400226a submit mpi to include machine file 2014-12-06 11:33:05 -08:00
tqchen
58f80c5675 Merge branch 'master' of ssh://github.com/tqchen/rabit 2014-12-06 11:25:18 -08:00
tqchen
4a7d84e861 chg string bcast 2014-12-06 11:25:08 -08:00
tqchen
1519f74f3c ok 2014-12-06 11:20:52 -08:00
tqchen
0e012cb05e add speed test 2014-12-06 11:05:24 -08:00
tqchen
19631ecef6 more tracker renaming 2014-12-06 09:24:12 -08:00
tqchen
a569bf2698 change gitignore 2014-12-06 09:19:08 -08:00
tqchen
dc12958fc7 rename master to tracker, to emphasize rabit is p2p in computing 2014-12-06 09:15:31 -08:00
nachocano
67b68ceae6 adding timing 2014-12-05 16:00:47 -08:00
nachocano
54eb5623cb worked on my machine !!! finally 2014-12-05 15:24:00 -08:00
nachocano
d9c22e54de closer, but still does not work... stays in map 100%. I think an exception is being thrown 2014-12-05 13:28:42 -08:00
tqchen
7765e2dc55 add status report 2014-12-05 09:49:26 -08:00
tqchen
ab278513ab ok 2014-12-05 09:39:51 -08:00
Tianqi Chen
e7a22792ac Update submit_job_hadoop.py 2014-12-05 09:14:44 -08:00
Tianqi Chen
e05098cacb Update submit_job_hadoop.py 2014-12-05 09:10:26 -08:00
Tianqi Chen
f9e95ab522 Update submit_job_hadoop.py 2014-12-05 09:09:20 -08:00
nachocano
bb7d6814a7 creating initial version of hadoop submit script. Not working.
Not sure how to get the master URI and port. I believe I cannot do it before I launch the job.

Updating the name from submit_job to submit_job_mpi
2014-12-05 03:27:02 -08:00
nachocano
e00fb99e7b cosmetic 2014-12-04 19:02:11 -08:00
nachocano
e9a3f5169e cosmetic changes 2014-12-04 18:02:07 -08:00
tqchen
1af3e81ada chg robust to reliable 2014-12-04 17:32:22 -08:00
tqchen
7cd5474f1a chg interface 2014-12-04 17:31:40 -08:00
tqchen
821eb21ae2 before make rabit public 2014-12-04 17:30:58 -08:00
tqchen
cc410b8c90 add local model in checkpoint interface, a new goal 2014-12-04 11:09:15 -08:00
tqchen
79e7862583 change note 2014-12-04 09:09:56 -08:00
tqchen
f9d634ce06 change notes 2014-12-04 09:09:29 -08:00
tqchen
65a1cdf8e5 remove doc from main repo 2014-12-04 09:07:36 -08:00
tqchen
67229fd7a9 change model 2014-12-04 09:05:48 -08:00
tqchen
3033177e9e ok 2014-12-03 22:36:16 -08:00
tqchen
656a8fa3a2 ok 2014-12-03 22:32:30 -08:00
tqchen
0e9b64649a ok 2014-12-03 22:30:23 -08:00
tqchen
9da3c6c573 Merge branch 'master' of ssh://github.com/tqchen/rabit 2014-12-03 22:28:59 -08:00
tqchen
09a1305628 chg readme 2014-12-03 22:27:52 -08:00
nachocano
7d314fef78 open for writing 2014-12-03 21:58:58 -08:00
nachocano
dece767084 Revert "open for writing"
This reverts commit 63bf9c7995.
2014-12-03 21:58:33 -08:00
nachocano
63bf9c7995 open for writing 2014-12-03 21:58:17 -08:00
tqchen
1c76483b4b ok 2014-12-03 21:53:34 -08:00
tqchen
9abe6ad4d8 checkin makefile 2014-12-03 21:30:11 -08:00
tqchen
8175df1002 bug fix in kmeans 2014-12-03 20:05:16 -08:00
tqchen
a1a1a8895e add kmeans 2014-12-03 18:23:58 -08:00
tqchen
69af79d45d sparse kmeans 2014-12-03 18:15:28 -08:00
nachocano
e3a95b2d1a Merge branch 'master' of https://github.com/tqchen/allreduce 2014-12-03 15:39:05 -08:00
nachocano
5c23b94069 updating kmeans based on Tianqi feedback. More efficient now 2014-12-03 15:38:58 -08:00
tqchen
85bb6cd027 Merge branch 'master' of ssh://github.com/tqchen/rabit 2014-12-03 15:13:09 -08:00
tqchen
90b9f1a98a add keepalive script 2014-12-03 15:04:30 -08:00
nachocano
55c2a5dc83 Merge branch 'master' of https://github.com/tqchen/allreduce 2014-12-03 14:21:42 -08:00
nachocano
1d0d5bb141 kmeans seems to be working.. not restarting anything though 2014-12-03 14:21:10 -08:00
tqchen
7a983a4079 add keepalive 2014-12-03 13:21:30 -08:00
tqchen
2523288509 basic recovery works 2014-12-03 12:19:08 -08:00
tqchen
8a6768763d bug fixed ver 2014-12-03 11:51:39 -08:00
tqchen
a186f8c3aa ok 2014-12-03 11:19:43 -08:00
tqchen
ceeb6f0690 bug version, check in and rollback 2014-12-03 11:17:39 -08:00
tqchen
f3e5b6e13c ok 2014-12-03 10:00:47 -08:00
tqchen
34f2f887b1 add more broadcast and basic broadcast 2014-12-03 09:59:13 -08:00
nachocano
20b51cc9ce cleaner 2014-12-03 01:44:34 -08:00
nachocano
56aad86231 adding incomplete kmeans.
I'm having a problem with the broadcast, and still need to implement the logic
2014-12-03 01:16:13 -08:00
tqchen
ed1de6df80 change AllReduce to Allreduce 2014-12-02 21:11:48 -08:00
nachocano
8cb5b68cb6 Merge branch 'master' of https://github.com/tqchen/allreduce 2014-12-02 11:28:27 -08:00
nachocano
e4abca9494 changing report folder to doc 2014-12-02 11:28:20 -08:00
tqchen
0a3300d773 rabit run on MPI 2014-12-02 11:20:19 -08:00
nachocano
2fab05c83e adding some design goals. 2014-12-02 11:07:07 -08:00
nachocano
40f7ee1cab adding simple image 2014-12-02 01:49:54 -08:00
nachocano
2c166d7a3a adding some initial skeleton of the report. 2014-12-02 01:19:36 -08:00
tqchen
dcea64c838 check in model recover 2014-12-01 21:41:37 -08:00
tqchen
255218a2f3 change in interface, seems resetlink is still bad 2014-12-01 21:39:51 -08:00
tqchen
b76cd5858c seems ok version 2014-12-01 20:18:25 -08:00
tqchen
46b5d46111 fix one bug, another comes 2014-12-01 19:53:41 -08:00
tqchen
993ff8bb91 find one bug, continue to next one 2014-12-01 19:34:27 -08:00
tqchen
2cde04867f Merge branch 'master' of ssh://github.com/tqchen/rabit 2014-12-01 16:57:33 -08:00
tqchen
337840d29b recover not yet working 2014-12-01 16:57:26 -08:00
Tianqi Chen
fd2c57b8a4 Update engine_robust.cc 2014-12-01 15:32:57 -08:00
tqchen
1c5167d96e rabit seems ready to run 2014-12-01 10:32:30 -08:00
Tianqi Chen
0d63646015 Update README.md 2014-12-01 10:04:10 -08:00
Tianqi Chen
b5367f48f6 Update README.md 2014-12-01 10:03:45 -08:00
Tianqi Chen
62c8ce9657 Update README.md 2014-12-01 10:03:31 -08:00
tqchen
eb2ca06d67 fresh name fresh start 2014-12-01 09:17:05 -08:00
tqchen
16f729115e checkin allreduce recover 2014-11-30 22:41:04 -08:00
tqchen
9355f5faf2 more conservative exception watching 2014-11-30 21:39:22 -08:00
tqchen
8cef2086f5 smarter select for allreduce and bcast 2014-11-30 21:31:45 -08:00
tqchen
f7928c68a3 next round try more careful select design 2014-11-30 21:07:34 -08:00
tqchen
ecb09a23bc add recover data, do a round of review 2014-11-30 20:59:55 -08:00
tqchen
b9b58a1275 bugfix in decide 2014-11-30 17:48:30 -08:00
tqchen
4a6c01c83c minor change in decide 2014-11-30 17:48:02 -08:00
tqchen
27f6f8ea9e bugfix in msg passing 2014-11-30 17:42:18 -08:00
tqchen
d8d648549f finish message passing, do a review on msg passing and decide 2014-11-30 17:40:30 -08:00
tqchen
38cd595235 check in message passing 2014-11-30 16:38:47 -08:00
tqchen
7a60cb7f3e checkin decide request, todo message passing 2014-11-30 16:37:26 -08:00
tqchen
68f13cd739 tight 2014-11-30 11:46:21 -08:00
tqchen
d1ce3c697c inline 2014-11-30 11:45:50 -08:00
tqchen
2e536eda29 check in the recover strategy 2014-11-30 11:42:59 -08:00
tqchen
155ed3a814 seems an OK version of reset; start to work on decide exec 2014-11-29 22:22:51 -08:00
tqchen
5b0bb53184 refactor code style; reset link still needs thought 2014-11-29 20:15:27 -08:00
tqchen
42505f473d finish reset link log 2014-11-29 15:14:43 -08:00
tqchen
98756c068a livelock in oob send recv 2014-11-28 21:58:15 -08:00
tqchen
aa54a038f2 livelock in oob send recv 2014-11-28 21:56:58 -08:00
tqchen
a30075794b initial version of robust engine, add discard link; needs more random mock tests, next milestone will be recovery 2014-11-28 15:56:12 -08:00
nachocano
a8128493c2 execute it like this: ./test.sh 4 4000 testcase0.conf ./
Now we are passing the folder where the round instances are saved.
The problem is that calling utils::Check or utils::Assert on 1 or 2 nodes shuts down all of them. Only those nodes should be shut down, and then this will work. There may be some other mechanism to shut down a particular node. Tianqi?
2014-11-28 01:48:26 -08:00
nachocano
faed8285cd execute it like ./test.sh 4 4000 testcase0.conf to obtain a successful execution
updating mock. It now wraps the calls to sync and reads its config from a configuration file.
I believe it's better not to use the preprocessor directive, i.e. not to put any test code in engine_tcp. I just call the mock in the test_allreduce file. It's a file purely for testing purposes, so it's fine to use the mock there.
2014-11-28 00:16:35 -08:00
nachocano
21f3f3eec4 adding const to variable to comply with google code convention...
may need to change more stuff though. Tianqi, what else do you mean? Spaces, tabs, names?
2014-11-27 17:03:31 -08:00
tqchen
2f1ba40786 change in socket, to pass out error code 2014-11-27 16:17:07 -08:00
nachocano
c565104491 adding some references to mock inside TEST preprocessor directive.
It shouldn't be an assert because it shuts down the process. Instead, it should check the value and return some sort of error, so that we can recover.
The mock contains queues, indexed by the rank of the process. For each node, you can configure the behavior you expect (success or failure for now) when you call any of the methods (AllReduce, Broadcast, LoadCheckPoint and CheckPoint)... If you call AllReduce several times, the outputs will pop from the queue, i.e., first you can retrieve a success, then a failure, and so on.
Pretty basic for now; needs more tuning.
2014-11-26 17:24:29 -08:00
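A compact sketch of the mock described above (all names hypothetical): each rank owns a queue of scripted outcomes, and every mocked call pops the next one, so a test can schedule "succeed, then fail" per node.

```cpp
#include <map>
#include <queue>

enum class Outcome { kSuccess, kFailure };

class MockEngine {
 public:
  // Script the next expected outcome for a given rank.
  void Expect(int rank, Outcome o) { script_[rank].push(o); }
  // Shared entry point for the mocked AllReduce / Broadcast /
  // LoadCheckPoint / CheckPoint calls: pops the next scripted outcome.
  bool Invoke(int rank) {
    std::queue<Outcome>& q = script_[rank];
    if (q.empty()) return true;  // default behavior: succeed
    Outcome o = q.front();
    q.pop();
    return o == Outcome::kSuccess;
  }

 private:
  std::map<int, std::queue<Outcome>> script_;
};
```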
nachocano
54fcff189f dummy mock for now 2014-11-26 16:37:23 -08:00
tqchen
d37f38c455 initial version of allreduce 2014-11-25 16:15:56 -08:00
Tianqi Chen
5e5bdda491 Initial commit 2014-11-25 14:37:18 -08:00
466 changed files with 21706 additions and 6017 deletions

.github/FUNDING.yml (vendored, 1 line changed)

@@ -1 +1,2 @@
open_collective: xgboost
custom: https://xgboost.ai/sponsors


@@ -7,17 +7,110 @@ name: XGBoost-CI
on: [push, pull_request]
env:
R_PACKAGES: c('XML', 'igraph', 'data.table', 'magrittr', 'stringi', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools')
R_PACKAGES: c('XML', 'igraph', 'data.table', 'magrittr', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic')
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
gtest-cpu:
name: Test Google C++ test (CPU)
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [macos-10.15]
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Install system packages
run: |
brew install lz4 ninja libomp
- name: Build gtest binary
run: |
mkdir build
cd build
cmake .. -DGOOGLE_TEST=ON -DUSE_OPENMP=ON -DUSE_DMLC_GTEST=ON -DPLUGIN_LZ4=ON -DPLUGIN_DENSE_PARSER=ON -GNinja
ninja -v
- name: Run gtest binary
run: |
cd build
ctest --extra-verbose
gtest-cpu-nonomp:
name: Test Google C++ unittest (CPU Non-OMP)
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Install system packages
run: |
sudo apt-get install -y --no-install-recommends ninja-build
- name: Build and install XGBoost
shell: bash -l {0}
run: |
mkdir build
cd build
cmake .. -GNinja -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON -DUSE_OPENMP=OFF
ninja -v
- name: Run gtest binary
run: |
cd build
ctest --extra-verbose
c-api-demo:
name: Test installing XGBoost lib + building the C API demo
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: ["ubuntu-latest"]
python-version: ["3.8"]
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Install system packages
run: |
sudo apt-get install -y --no-install-recommends ninja-build
- uses: conda-incubator/setup-miniconda@v2
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
- name: Display Conda env
shell: bash -l {0}
run: |
conda info
conda list
- name: Build and install XGBoost
shell: bash -l {0}
run: |
mkdir build
cd build
cmake .. -DBUILD_STATIC_LIB=ON -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja
ninja -v install
- name: Build and run C API demo
shell: bash -l {0}
run: |
cd demo/c-api/
mkdir build
cd build
cmake .. -GNinja -DCMAKE_PREFIX_PATH=$CONDA_PREFIX
ninja -v
cd ..
./build/api-demo
test-with-jvm:
name: Test JVM on OS ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [windows-latest, windows-2016, ubuntu-latest]
os: [windows-latest, ubuntu-latest]
steps:
- uses: actions/checkout@v2
@@ -35,17 +128,106 @@ jobs:
key: ${{ runner.os }}-m2-${{ hashFiles('./jvm-packages/pom.xml') }}
restore-keys: ${{ runner.os }}-m2
- name: Test JVM packages
- name: Test XGBoost4J
run: |
cd jvm-packages
mvn test -pl :xgboost4j_2.12
mvn test -B -pl :xgboost4j_2.12
- name: Test XGBoost4J-Spark
run: |
rm -rfv build/
cd jvm-packages
mvn -B test
if: matrix.os == 'ubuntu-latest' # Distributed training doesn't work on Windows
env:
RABIT_MOCK: ON
lint:
runs-on: ubuntu-latest
name: Code linting for Python and C++
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: actions/setup-python@v2
with:
python-version: '3.7'
architecture: 'x64'
- name: Install Python packages
run: |
python -m pip install wheel setuptools
python -m pip install pylint cpplint numpy scipy scikit-learn
- name: Run lint
run: |
make lint
doxygen:
runs-on: ubuntu-latest
name: Generate C/C++ API doc using Doxygen
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: actions/setup-python@v2
with:
python-version: '3.7'
architecture: 'x64'
- name: Install system packages
run: |
sudo apt-get install -y --no-install-recommends doxygen graphviz ninja-build
python -m pip install wheel setuptools
python -m pip install awscli
- name: Run Doxygen
run: |
mkdir build
cd build
cmake .. -DBUILD_C_DOC=ON -GNinja
ninja -v doc_doxygen
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Publish
run: |
cd build/
tar cvjf ${{ steps.extract_branch.outputs.branch }}.tar.bz2 doc_doxygen/
python -m awscli s3 cp ./${{ steps.extract_branch.outputs.branch }}.tar.bz2 s3://xgboost-docs/ --acl public-read
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_IAM_S3_UPLOADER }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_IAM_S3_UPLOADER }}
sphinx:
runs-on: ubuntu-latest
name: Build docs using Sphinx
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: actions/setup-python@v2
with:
python-version: '3.7'
architecture: 'x64'
- name: Install system packages
run: |
sudo apt-get install -y --no-install-recommends graphviz
python -m pip install wheel setuptools
python -m pip install -r doc/requirements.txt
- name: Extract branch name
shell: bash
run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: extract_branch
if: github.ref == 'refs/heads/master' || contains(github.ref, 'refs/heads/release_')
- name: Run Sphinx
run: |
make -C doc html
env:
SPHINX_GIT_BRANCH: ${{ steps.extract_branch.outputs.branch }}
lintr:
runs-on: ${{ matrix.config.os }}
name: Run R linters on OS ${{ matrix.config.os }}, R ${{ matrix.config.r }}, Compiler ${{ matrix.config.compiler }}, Build ${{ matrix.config.build }}
strategy:
matrix:
config:
@@ -81,25 +263,17 @@ jobs:
run: |
cd R-package
R.exe CMD INSTALL .
Rscript.exe tests/run_lint.R
Rscript.exe tests/helper_scripts/run_lint.R
test-with-R:
runs-on: ${{ matrix.config.os }}
name: Test R on OS ${{ matrix.config.os }}, R ${{ matrix.config.r }}, Compiler ${{ matrix.config.compiler }}, Build ${{ matrix.config.build }}
strategy:
fail-fast: false
matrix:
config:
- {os: windows-latest, r: 'release', compiler: 'msvc', build: 'autotools'}
- {os: windows-2016, r: 'release', compiler: 'msvc', build: 'autotools'}
- {os: windows-latest, r: 'release', compiler: 'msvc', build: 'cmake'}
- {os: windows-2016, r: 'release', compiler: 'msvc', build: 'cmake'}
- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'autotools'}
- {os: windows-2016, r: 'release', compiler: 'mingw', build: 'autotools'}
- {os: windows-latest, r: 'release', compiler: 'mingw', build: 'cmake'}
- {os: windows-2016, r: 'release', compiler: 'msvc', build: 'cmake'}
- {os: windows-2016, r: 'release', compiler: 'mingw', build: 'cmake'}
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
@@ -130,9 +304,53 @@ jobs:
- uses: actions/setup-python@v2
with:
python-version: '3.6' # Version range or exact version of a Python version to use, using SemVer's version range syntax
architecture: 'x64' # optional x64 or x86. Defaults to x64 if not specified
python-version: '3.7'
architecture: 'x64'
- name: Test R
run: |
python tests/ci_build/test_r_package.py --compiler="${{ matrix.config.compiler }}" --build-tool="${{ matrix.config.build }}"
test-R-CRAN:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
config:
- {r: 'release'}
steps:
- uses: actions/checkout@v2
with:
submodules: 'true'
- uses: r-lib/actions/setup-r@master
with:
r-version: ${{ matrix.config.r }}
- uses: r-lib/actions/setup-tinytex@master
- name: Cache R packages
uses: actions/cache@v2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ runner.os }}-r-${{ matrix.config.r }}-1-${{ hashFiles('R-package/DESCRIPTION') }}
restore-keys: ${{ runner.os }}-r-${{ matrix.config.r }}-2-
- name: Install system packages
run: |
sudo apt-get update && sudo apt-get install libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev
- name: Install dependencies
shell: Rscript {0}
run: |
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
- name: Check R Package
run: |
# Print stacktrace upon success of failure
make Rcheck || tests/ci_build/print_r_stacktrace.sh fail
tests/ci_build/print_r_stacktrace.sh success

.github/workflows/r_nold.yml (vendored, new file, 44 lines)

@@ -0,0 +1,44 @@
# Run R tests with noLD R. Only triggered by a pull request review
# See discussion at https://github.com/dmlc/xgboost/pull/6378
name: XGBoost-R-noLD
on:
pull_request_review_comment:
types: [created]
env:
R_PACKAGES: c('XML', 'igraph', 'data.table', 'magrittr', 'ggplot2', 'DiagrammeR', 'Ckmeans.1d.dp', 'vcd', 'testthat', 'lintr', 'knitr', 'rmarkdown', 'e1071', 'cplm', 'devtools', 'float', 'titanic')
jobs:
test-R-noLD:
if: github.event.comment.body == '/gha run r-nold-test' && contains('OWNER,MEMBER,COLLABORATOR', github.event.comment.author_association)
timeout-minutes: 120
runs-on: ubuntu-latest
container: rhub/debian-gcc-devel-nold
steps:
- name: Install git and system packages
shell: bash
run: |
apt-get update && apt-get install -y git libcurl4-openssl-dev libssl-dev libssh2-1-dev libgit2-dev libxml2-dev
- uses: actions/checkout@v2
with:
submodules: 'true'
- name: Install dependencies
shell: bash
run: |
cat > install_libs.R <<EOT
install.packages(${{ env.R_PACKAGES }},
repos = 'http://cloud.r-project.org',
dependencies = c('Depends', 'Imports', 'LinkingTo'))
EOT
/tmp/R-devel/bin/Rscript install_libs.R
- name: Run R tests
shell: bash
run: |
cd R-package && \
/tmp/R-devel/bin/R CMD INSTALL . && \
/tmp/R-devel/bin/R -q -e "library(testthat); setwd('tests'); source('testthat.R')"

.gitignore (vendored, 12 lines changed)

@@ -71,6 +71,7 @@ build
build_plugin
recommonmark/
tags
TAGS
*.class
target
*.swp
@@ -104,4 +105,13 @@ R-package/src/Makevars
/cmake-build-debug/
# GDB
.gdb_history
.gdb_history
# Python joblib.Memory used in pytest.
cachedir/
# Files from local Dask work
dask-worker-space/
# Jupyter notebook checkpoints
.ipynb_checkpoints/

.gitmodules (vendored, 6 lines changed)

@@ -1,9 +1,9 @@
[submodule "dmlc-core"]
path = dmlc-core
url = https://github.com/dmlc/dmlc-core
[submodule "rabit"]
path = rabit
url = https://github.com/dmlc/rabit
[submodule "cub"]
path = cub
url = https://github.com/NVlabs/cub
[submodule "gputreeshap"]
path = gputreeshap
url = https://github.com/rapidsai/gputreeshap.git


@@ -1,38 +1,40 @@
# disable sudo for container build.
sudo: required
# Enabling test OS X
os:
- linux
- osx
osx_image: xcode10.1
dist: bionic
# Use Build Matrix to do lint and build separately
env:
matrix:
# python package test
- TASK=python_test
# test installation of Python source distribution
- TASK=python_sdist_test
# java package test
- TASK=java_test
# cmake test
- TASK=cmake_test
global:
- secure: "PR16i9F8QtNwn99C5NDp8nptAS+97xwDtXEJJfEiEVhxPaaRkOp0MPWhogCaK0Eclxk1TqkgWbdXFknwGycX620AzZWa/A1K3gAs+GrpzqhnPMuoBJ0Z9qxXTbSJvCyvMbYwVrjaxc/zWqdMU8waWz8A7iqKGKs/SqbQ3rO6v7c="
- secure: "dAGAjBokqm/0nVoLMofQni/fWIBcYSmdq4XvCBX1ZAMDsWnuOfz/4XCY6h2lEI1rVHZQ+UdZkc9PioOHGPZh5BnvE49/xVVWr9c4/61lrDOlkD01ZjSAeoV0fAZq+93V/wPl4QV+MM+Sem9hNNzFSbN5VsQLAiWCSapWsLdKzqA="
matrix:
exclude:
jobs:
include:
- os: linux
arch: amd64
env: TASK=python_sdist_test
- os: linux
arch: arm64
env: TASK=python_sdist_test
- os: linux
arch: arm64
env: TASK=python_test
- os: linux
services:
- docker
- os: osx
arch: amd64
osx_image: xcode10.2
env: TASK=python_test
- os: osx
arch: amd64
osx_image: xcode10.2
env: TASK=python_sdist_test
- os: osx
arch: amd64
osx_image: xcode10.2
env: TASK=java_test
- os: linux
env: TASK=cmake_test
arch: s390x
env: TASK=s390x_test
# dependent brew packages
addons:
@@ -47,6 +49,10 @@ addons:
- wget
- r
update: true
apt:
packages:
- snapd
- unzip
before_install:
- source tests/travis/travis_setup_env.sh


@@ -1,9 +1,10 @@
cmake_minimum_required(VERSION 3.13)
project(xgboost LANGUAGES CXX C VERSION 1.2.0)
project(xgboost LANGUAGES CXX C VERSION 1.3.2)
include(cmake/Utils.cmake)
list(APPEND CMAKE_MODULE_PATH "${xgboost_SOURCE_DIR}/cmake/modules")
cmake_policy(SET CMP0022 NEW)
cmake_policy(SET CMP0079 NEW)
set(CMAKE_POLICY_DEFAULT_CMP0063 NEW)
cmake_policy(SET CMP0063 NEW)
if ((${CMAKE_VERSION} VERSION_GREATER 3.13) OR (${CMAKE_VERSION} VERSION_EQUAL 3.13))
@@ -23,9 +24,11 @@ write_version()
set_default_configuration_release()
#-- Options
## User options
option(BUILD_C_DOC "Build documentation for C APIs using Doxygen." OFF)
option(USE_OPENMP "Build with OpenMP support." ON)
option(BUILD_STATIC_LIB "Build static library" OFF)
option(RABIT_BUILD_MPI "Build MPI" OFF)
## Bindings
option(JVM_BINDINGS "Build JVM bindings" OFF)
option(R_LIB "Build shared library for R package" OFF)
@@ -37,6 +40,7 @@ option(ENABLE_ALL_WARNINGS "Enable all compiler warnings. Only effective for GCC
option(LOG_CAPI_INVOCATION "Log all C API invocations for debugging" OFF)
option(GOOGLE_TEST "Build google tests" OFF)
option(USE_DMLC_GTEST "Use google tests bundled with dmlc-core submodule" OFF)
option(USE_DEVICE_DEBUG "Generate CUDA device debug info." OFF)
option(USE_NVTX "Build with cuda profiling annotations. Developers only." OFF)
set(NVTX_HEADER_DIR "" CACHE PATH "Path to the stand-alone nvtx header")
option(RABIT_MOCK "Build rabit with mock" OFF)
@@ -60,6 +64,9 @@ address, leak, undefined and thread.")
## Plugins
option(PLUGIN_LZ4 "Build lz4 plugin" OFF)
option(PLUGIN_DENSE_PARSER "Build dense parser plugin" OFF)
option(PLUGIN_RMM "Build with RAPIDS Memory Manager (RMM)" OFF)
## TODO: 1. Add check if DPC++ compiler is used for building
option(PLUGIN_UPDATER_ONEAPI "DPC++ updater" OFF)
option(ADD_PKGCONFIG "Add xgboost.pc into system." ON)
#-- Checks for building XGBoost
@@ -69,6 +76,9 @@ endif (USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
if (USE_NCCL AND NOT (USE_CUDA))
message(SEND_ERROR "`USE_NCCL` must be enabled with `USE_CUDA` flag.")
endif (USE_NCCL AND NOT (USE_CUDA))
if (USE_DEVICE_DEBUG AND NOT (USE_CUDA))
message(SEND_ERROR "`USE_DEVICE_DEBUG` must be enabled with `USE_CUDA` flag.")
endif (USE_DEVICE_DEBUG AND NOT (USE_CUDA))
if (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_NCCL=ON to enable BUILD_WITH_SHARED_NCCL.")
endif (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
@@ -82,11 +92,23 @@ endif (R_LIB AND GOOGLE_TEST)
if (USE_AVX)
message(SEND_ERROR "The option 'USE_AVX' is deprecated as experimental AVX features have been removed from XGBoost.")
endif (USE_AVX)
if (PLUGIN_RMM AND NOT (USE_CUDA))
message(SEND_ERROR "`PLUGIN_RMM` must be enabled with `USE_CUDA` flag.")
endif (PLUGIN_RMM AND NOT (USE_CUDA))
if (PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
message(SEND_ERROR "`PLUGIN_RMM` must be used with GCC or Clang compiler.")
endif (PLUGIN_RMM AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") OR (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")))
if (PLUGIN_RMM AND NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux"))
message(SEND_ERROR "`PLUGIN_RMM` must be used with Linux.")
endif (PLUGIN_RMM AND NOT (CMAKE_SYSTEM_NAME STREQUAL "Linux"))
if (ENABLE_ALL_WARNINGS)
if ((NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang") AND (NOT CMAKE_CXX_COMPILER_ID STREQUAL "GNU"))
message(SEND_ERROR "ENABLE_ALL_WARNINGS is only available for Clang and GCC.")
endif ((NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang") AND (NOT CMAKE_CXX_COMPILER_ID STREQUAL "GNU"))
endif (ENABLE_ALL_WARNINGS)
if (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
message(SEND_ERROR "Cannot build a static library libxgboost.a when R or JVM packages are enabled.")
endif (BUILD_STATIC_LIB AND (R_LIB OR JVM_BINDINGS))
#-- Sanitizer
if (USE_SANITIZER)
@@ -106,7 +128,7 @@ if (USE_CUDA)
endif()
set(GEN_CODE "")
format_gencode_flags("${GPU_COMPUTE_VER}" GEN_CODE)
message(STATUS "CUDA GEN_CODE: ${GEN_CODE}")
add_subdirectory(${PROJECT_SOURCE_DIR}/gputreeshap)
endif (USE_CUDA)
if (FORCE_COLORED_OUTPUT AND (CMAKE_GENERATOR STREQUAL "Ninja") AND
@@ -126,9 +148,6 @@ if (USE_OPENMP)
find_package(OpenMP REQUIRED)
endif (USE_OPENMP)
# core xgboost
add_subdirectory(${xgboost_SOURCE_DIR}/src)
# dmlc-core
msvc_use_static_runtime()
add_subdirectory(${xgboost_SOURCE_DIR}/dmlc-core)
@@ -147,40 +166,13 @@ endif (MSVC)
if (ENABLE_ALL_WARNINGS)
target_compile_options(dmlc PRIVATE -Wall -Wextra)
endif (ENABLE_ALL_WARNINGS)
target_link_libraries(objxgboost PUBLIC dmlc)
# rabit
set(RABIT_BUILD_DMLC OFF)
set(DMLC_ROOT ${xgboost_SOURCE_DIR}/dmlc-core)
set(RABIT_WITH_R_LIB ${R_LIB})
add_subdirectory(rabit)
if (RABIT_MOCK)
target_link_libraries(objxgboost PUBLIC rabit_mock_static)
if (MSVC)
target_compile_options(rabit_mock_static PRIVATE
-D_CRT_SECURE_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE)
endif (MSVC)
else()
target_link_libraries(objxgboost PUBLIC rabit)
if (MSVC)
target_compile_options(rabit PRIVATE
-D_CRT_SECURE_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE)
endif (MSVC)
endif(RABIT_MOCK)
foreach(lib rabit rabit_base rabit_empty rabit_mock rabit_mock_static)
# Explicitly link dmlc to rabit, so that configured header (build_config.h)
# from dmlc is correctly applied to rabit.
if (TARGET ${lib})
target_link_libraries(${lib} dmlc ${CMAKE_THREAD_LIBS_INIT})
if (HIDE_CXX_SYMBOLS) # Hide all C++ symbols from Rabit
set_target_properties(${lib} PROPERTIES CXX_VISIBILITY_PRESET hidden)
endif (HIDE_CXX_SYMBOLS)
if (ENABLE_ALL_WARNINGS)
target_compile_options(${lib} PRIVATE -Wall -Wextra)
endif (ENABLE_ALL_WARNINGS)
endif (TARGET ${lib})
endforeach()
# core xgboost
add_subdirectory(${xgboost_SOURCE_DIR}/src)
target_link_libraries(objxgboost PUBLIC dmlc)
# Exports some R specific definitions and objects
if (R_LIB)
@@ -198,14 +190,15 @@ else (BUILD_STATIC_LIB)
endif (BUILD_STATIC_LIB)
target_link_libraries(xgboost PRIVATE objxgboost)
if (USE_NVTX)
enable_nvtx(xgboost)
endif (USE_NVTX)
if (USE_CUDA)
xgboost_set_cuda_flags(xgboost)
endif (USE_CUDA)
#-- Hide all C++ symbols
if (HIDE_CXX_SYMBOLS)
set_target_properties(objxgboost PROPERTIES CXX_VISIBILITY_PRESET hidden)
set_target_properties(xgboost PROPERTIES CXX_VISIBILITY_PRESET hidden)
foreach(target objxgboost xgboost dmlc)
set_target_properties(${target} PROPERTIES CXX_VISIBILITY_PRESET hidden)
endforeach()
endif (HIDE_CXX_SYMBOLS)
target_include_directories(xgboost
@@ -267,7 +260,20 @@ include(GNUInstallDirs)
install(DIRECTORY ${xgboost_SOURCE_DIR}/include/xgboost
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
install(TARGETS xgboost runxgboost
# Install libraries. If `xgboost` is a static lib, specify `objxgboost` also, to avoid the
# following error:
#
# > install(EXPORT ...) includes target "xgboost" which requires target "objxgboost" that is not
# > in any export set.
#
# https://github.com/dmlc/xgboost/issues/6085
if (BUILD_STATIC_LIB)
set(INSTALL_TARGETS xgboost runxgboost objxgboost dmlc)
else (BUILD_STATIC_LIB)
set(INSTALL_TARGETS xgboost runxgboost)
endif (BUILD_STATIC_LIB)
install(TARGETS ${INSTALL_TARGETS}
EXPORT XGBoostTargets
ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}

Jenkinsfile

@@ -38,26 +38,15 @@ pipeline {
agent { label 'job_initializer' }
steps {
script {
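// Jenkins `milestone` pattern: once a newer build passes these milestones,
// any older build still in flight is aborted instead of racing it.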
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
checkoutSrcs()
commit_id = "${GIT_COMMIT}"
}
sh 'python3 tests/jenkins_get_approval.py'
stash name: 'srcs'
milestone ordinal: 1
}
}
stage('Jenkins Linux: Formatting Check') {
agent none
steps {
script {
parallel ([
'clang-tidy': { ClangTidy() },
'lint': { Lint() },
'sphinx-doc': { SphinxDoc() },
'doxygen': { Doxygen() }
])
}
milestone ordinal: 2
}
}
stage('Jenkins Linux: Build') {
@@ -65,22 +54,21 @@ pipeline {
steps {
script {
parallel ([
'clang-tidy': { ClangTidy() },
'build-cpu': { BuildCPU() },
'build-cpu-rabit-mock': { BuildCPUMock() },
'build-cpu-non-omp': { BuildCPUNonOmp() },
// Build reference, distribution-ready Python wheel with CUDA 10.0
// using CentOS 6 image
'build-gpu-cuda10.0': { BuildCUDA(cuda_version: '10.0') },
// The build-gpu-* builds below use Ubuntu image
'build-gpu-cuda10.1': { BuildCUDA(cuda_version: '10.1') },
'build-gpu-cuda10.2': { BuildCUDA(cuda_version: '10.2') },
'build-gpu-cuda10.2': { BuildCUDA(cuda_version: '10.2', build_rmm: true) },
'build-gpu-cuda11.0': { BuildCUDA(cuda_version: '11.0') },
'build-jvm-packages-gpu-cuda10.0': { BuildJVMPackagesWithCUDA(spark_version: '3.0.0', cuda_version: '10.0') },
'build-jvm-packages': { BuildJVMPackages(spark_version: '3.0.0') },
'build-jvm-doc': { BuildJVMDoc() }
])
}
milestone ordinal: 3
}
}
stage('Jenkins Linux: Test') {
@@ -89,20 +77,18 @@ pipeline {
script {
parallel ([
'test-python-cpu': { TestPythonCPU() },
'test-python-gpu-cuda10.2': { TestPythonGPU(host_cuda_version: '10.2') },
// artifact_cuda_version doesn't apply to RMM tests; RMM tests will always match CUDA version between artifact and host env
'test-python-gpu-cuda10.2': { TestPythonGPU(artifact_cuda_version: '10.0', host_cuda_version: '10.2', test_rmm: true) },
'test-python-gpu-cuda11.0-cross': { TestPythonGPU(artifact_cuda_version: '10.0', host_cuda_version: '11.0') },
'test-python-gpu-cuda11.0': { TestPythonGPU(artifact_cuda_version: '11.0', host_cuda_version: '11.0') },
'test-python-mgpu-cuda10.2': { TestPythonGPU(artifact_cuda_version: '10.2', host_cuda_version: '10.2', multi_gpu: true) },
'test-cpp-gpu-cuda10.2': { TestCppGPU(artifact_cuda_version: '10.2', host_cuda_version: '10.2') },
'test-python-mgpu-cuda10.2': { TestPythonGPU(artifact_cuda_version: '10.0', host_cuda_version: '10.2', multi_gpu: true, test_rmm: true) },
'test-cpp-gpu-cuda10.2': { TestCppGPU(artifact_cuda_version: '10.2', host_cuda_version: '10.2', test_rmm: true) },
'test-cpp-gpu-cuda11.0': { TestCppGPU(artifact_cuda_version: '11.0', host_cuda_version: '11.0') },
'test-jvm-jdk8-cuda10.0': { CrossTestJVMwithJDKGPU(artifact_cuda_version: '10.0', host_cuda_version: '10.0') },
'test-jvm-jdk8': { CrossTestJVMwithJDK(jdk_version: '8', spark_version: '3.0.0') },
'test-jvm-jdk11': { CrossTestJVMwithJDK(jdk_version: '11') },
'test-jvm-jdk12': { CrossTestJVMwithJDK(jdk_version: '12') },
'test-r-3.5.3': { TestR(use_r35: true) }
'test-jvm-jdk12': { CrossTestJVMwithJDK(jdk_version: '12') }
])
}
milestone ordinal: 4
}
}
stage('Jenkins Linux: Deploy') {
@@ -113,7 +99,6 @@ pipeline {
'deploy-jvm-packages': { DeployJVMPackages(spark_version: '3.0.0') }
])
}
milestone ordinal: 5
}
}
}
@@ -144,7 +129,7 @@ def ClangTidy() {
echo "Running clang-tidy job..."
def container_type = "clang_tidy"
def docker_binary = "docker"
def dockerArgs = "--build-arg CUDA_VERSION=10.1"
def dockerArgs = "--build-arg CUDA_VERSION_ARG=10.1"
sh """
${dockerRun} ${container_type} ${docker_binary} ${dockerArgs} python3 tests/ci_build/tidy.py
"""
@@ -152,50 +137,6 @@ def ClangTidy() {
}
}
def Lint() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Running lint..."
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} bash -c "source activate cpu_test && make lint"
"""
deleteDir()
}
}
def SphinxDoc() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Running sphinx-doc..."
def container_type = "cpu"
def docker_binary = "docker"
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='-e SPHINX_GIT_BRANCH=${BRANCH_NAME}'"
sh """#!/bin/bash
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} bash -c "source activate cpu_test && make -C doc html"
"""
deleteDir()
}
}
def Doxygen() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Running doxygen..."
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/doxygen.sh ${BRANCH_NAME}
"""
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Uploading doc...'
s3Upload file: "build/${BRANCH_NAME}.tar.bz2", bucket: 'xgboost-docs', acl: 'PublicRead', path: "doxygen/${BRANCH_NAME}.tar.bz2"
}
deleteDir()
}
}
def BuildCPU() {
node('linux && cpu') {
unstash name: 'srcs'
@@ -208,14 +149,14 @@ def BuildCPU() {
# We want to make sure that we use the configured header build/dmlc/build_config.h instead of include/dmlc/build_config_default.h.
# See discussion at https://github.com/dmlc/xgboost/issues/5510
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh -DPLUGIN_LZ4=ON -DPLUGIN_DENSE_PARSER=ON
${dockerRun} ${container_type} ${docker_binary} build/testxgboost
${dockerRun} ${container_type} ${docker_binary} bash -c "cd build && ctest --extra-verbose"
"""
// Sanitizer test
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='-e ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer -e ASAN_OPTIONS=symbolize=1 -e UBSAN_OPTIONS=print_stacktrace=1:log_path=ubsan_error.log --cap-add SYS_PTRACE'"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh -DUSE_SANITIZER=ON -DENABLED_SANITIZERS="address;leak;undefined" \
-DCMAKE_BUILD_TYPE=Debug -DSANITIZER_PATH=/usr/lib/x86_64-linux-gnu/
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} build/testxgboost
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} bash -c "cd build && ctest --exclude-regex AllTestsInDMLCUnitTests --extra-verbose"
"""
stash name: 'xgboost_cli', includes: 'xgboost'
@@ -238,39 +179,31 @@ def BuildCPUMock() {
}
}
def BuildCPUNonOmp() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Build CPU without OpenMP"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh -DUSE_OPENMP=OFF
"""
echo "Running Non-OpenMP C++ test..."
sh """
${dockerRun} ${container_type} ${docker_binary} build/testxgboost
"""
deleteDir()
}
}
def BuildCUDA(args) {
node('linux && cpu_build') {
unstash name: 'srcs'
echo "Build with CUDA ${args.cuda_version}"
def container_type = GetCUDABuildContainerType(args.cuda_version)
def docker_binary = "docker"
def docker_args = "--build-arg CUDA_VERSION=${args.cuda_version}"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
def arch_flag = ""
if (env.BRANCH_NAME != 'master' && !(env.BRANCH_NAME.startsWith('release'))) {
arch_flag = "-DGPU_COMPUTE_VER=75"
}
def wheel_tag = "manylinux2010_x86_64"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_via_cmake.sh -DUSE_CUDA=ON -DUSE_NCCL=ON -DOPEN_MP:BOOL=ON -DHIDE_CXX_SYMBOLS=ON ${arch_flag}
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "cd python-package && rm -rf dist/* && python setup.py bdist_wheel --universal"
${dockerRun} ${container_type} ${docker_binary} ${docker_args} python tests/ci_build/rename_whl.py python-package/dist/*.whl ${commit_id} manylinux2010_x86_64
${dockerRun} ${container_type} ${docker_binary} ${docker_args} python tests/ci_build/rename_whl.py python-package/dist/*.whl ${commit_id} ${wheel_tag}
"""
if (args.cuda_version == ref_cuda_ver) {
sh """
${dockerRun} auditwheel_x86_64 ${docker_binary} auditwheel repair --plat ${wheel_tag} python-package/dist/*.whl
mv -v wheelhouse/*.whl python-package/dist/
# Make sure that libgomp.so is vendored in the wheel
${dockerRun} auditwheel_x86_64 ${docker_binary} bash -c "unzip -l python-package/dist/*.whl | grep libgomp || exit -1"
"""
}
echo 'Stashing Python wheel...'
stash name: "xgboost_whl_cuda${args.cuda_version}", includes: 'python-package/dist/*.whl'
if (args.cuda_version == ref_cuda_ver && (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release'))) {
@@ -280,17 +213,33 @@ def BuildCUDA(args) {
}
echo 'Stashing C++ test executable (testxgboost)...'
stash name: "xgboost_cpp_tests_cuda${args.cuda_version}", includes: 'build/testxgboost'
if (args.build_rmm) {
echo "Build with CUDA ${args.cuda_version} and RMM"
container_type = "rmm"
docker_binary = "docker"
docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
sh """
rm -rf build/
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_via_cmake.sh --conda-env=gpu_test -DUSE_CUDA=ON -DUSE_NCCL=ON -DPLUGIN_RMM=ON ${arch_flag}
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "cd python-package && rm -rf dist/* && python setup.py bdist_wheel --universal"
${dockerRun} ${container_type} ${docker_binary} ${docker_args} python tests/ci_build/rename_whl.py python-package/dist/*.whl ${commit_id} manylinux2010_x86_64
"""
echo 'Stashing Python wheel...'
stash name: "xgboost_whl_rmm_cuda${args.cuda_version}", includes: 'python-package/dist/*.whl'
echo 'Stashing C++ test executable (testxgboost)...'
stash name: "xgboost_cpp_tests_rmm_cuda${args.cuda_version}", includes: 'build/testxgboost'
}
deleteDir()
}
}
def BuildJVMPackagesWithCUDA(args) {
node('linux && gpu') {
node('linux && mgpu') {
unstash name: 'srcs'
echo "Build XGBoost4J-Spark with Spark ${args.spark_version}, CUDA ${args.cuda_version}"
def container_type = "jvm_gpu_build"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION=${args.cuda_version}"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.cuda_version}"
def arch_flag = ""
if (env.BRANCH_NAME != 'master' && !(env.BRANCH_NAME.startsWith('release'))) {
arch_flag = "-DGPU_COMPUTE_VER=75"
@@ -301,7 +250,7 @@ def BuildJVMPackagesWithCUDA(args) {
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_jvm_packages.sh ${args.spark_version} -Duse.cuda=ON $arch_flag
"""
echo "Stashing XGBoost4J JAR with CUDA ${args.cuda_version} ..."
stash name: 'xgboost4j_jar_gpu', includes: "jvm-packages/xgboost4j/target/*.jar,jvm-packages/xgboost4j-spark/target/*.jar,jvm-packages/xgboost4j-example/target/*.jar"
stash name: 'xgboost4j_jar_gpu', includes: "jvm-packages/xgboost4j-gpu/target/*.jar,jvm-packages/xgboost4j-spark-gpu/target/*.jar"
deleteDir()
}
}
@@ -365,38 +314,21 @@ def TestPythonGPU(args) {
echo "Test Python GPU: CUDA ${args.host_cuda_version}"
def container_type = "gpu"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION=${args.host_cuda_version}"
if (args.multi_gpu) {
echo "Using multiple GPUs"
// Allocate extra space in /dev/shm to enable NCCL
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='--shm-size=4g'"
sh """
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh mgpu
"""
} else {
echo "Using a single GPU"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh gpu
"""
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.host_cuda_version}"
def mgpu_indicator = (args.multi_gpu) ? 'mgpu' : 'gpu'
// Allocate extra space in /dev/shm to enable NCCL
def docker_extra_params = (args.multi_gpu) ? "CI_DOCKER_EXTRA_PARAMS_INIT='--shm-size=4g'" : ''
sh "${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh ${mgpu_indicator}"
if (args.test_rmm) {
sh "rm -rfv build/ python-package/dist/"
unstash name: "xgboost_whl_rmm_cuda${args.host_cuda_version}"
unstash name: "xgboost_cpp_tests_rmm_cuda${args.host_cuda_version}"
sh "${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh ${mgpu_indicator} --use-rmm-pool"
}
deleteDir()
}
}
def TestCppRabit() {
node(nodeReq) {
unstash name: 'xgboost_rabit_tests'
unstash name: 'srcs'
echo "Test C++, rabit mock on"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/runxgb.sh xgboost tests/ci_build/approx.conf.in
"""
deleteDir()
}
}
def TestCppGPU(args) {
def nodeReq = 'linux && mgpu'
def artifact_cuda_version = (args.artifact_cuda_version) ?: ref_cuda_ver
@@ -406,26 +338,19 @@ def TestCppGPU(args) {
echo "Test C++, CUDA ${args.host_cuda_version}"
def container_type = "gpu"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION=${args.host_cuda_version}"
def docker_args = "--build-arg CUDA_VERSION_ARG=${args.host_cuda_version}"
sh "${dockerRun} ${container_type} ${docker_binary} ${docker_args} build/testxgboost"
deleteDir()
}
}
def CrossTestJVMwithJDKGPU(args) {
def nodeReq = 'linux && mgpu'
node(nodeReq) {
unstash name: "xgboost4j_jar_gpu"
unstash name: 'srcs'
if (args.spark_version != null) {
echo "Test XGBoost4J on a machine with JDK ${args.jdk_version}, Spark ${args.spark_version}, CUDA ${args.host_cuda_version}"
} else {
echo "Test XGBoost4J on a machine with JDK ${args.jdk_version}, CUDA ${args.host_cuda_version}"
if (args.test_rmm) {
sh "rm -rfv build/"
unstash name: "xgboost_cpp_tests_rmm_cuda${args.host_cuda_version}"
echo "Test C++, CUDA ${args.host_cuda_version} with RMM"
container_type = "rmm"
docker_binary = "nvidia-docker"
docker_args = "--build-arg CUDA_VERSION_ARG=${args.host_cuda_version}"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "source activate gpu_test && build/testxgboost --use-rmm-pool --gtest_filter=-*DeathTest.*"
"""
}
def container_type = "gpu_jvm"
def docker_binary = "nvidia-docker"
def docker_args = "--build-arg CUDA_VERSION=${args.host_cuda_version}"
sh "${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_jvm_gpu_cross.sh"
deleteDir()
}
}
@@ -452,30 +377,13 @@ def CrossTestJVMwithJDK(args) {
}
}
def TestR(args) {
node('linux && cpu') {
unstash name: 'srcs'
echo "Test R package"
def container_type = "rproject"
def docker_binary = "docker"
def use_r35_flag = (args.use_r35) ? "1" : "0"
def docker_args = "--build-arg USE_R35=${use_r35_flag}"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_test_rpkg.sh || tests/ci_build/print_r_stacktrace.sh
"""
deleteDir()
}
}
def DeployJVMPackages(args) {
node('linux && cpu') {
unstash name: 'srcs'
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Deploying to xgboost-maven-repo S3 repo...'
def container_type = "jvm"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/deploy_jvm_packages.sh ${args.spark_version}
${dockerRun} jvm_gpu_build docker --build-arg CUDA_VERSION_ARG=10.0 tests/ci_build/deploy_jvm_packages.sh ${args.spark_version}
"""
}
deleteDir()

Jenkinsfile-win64

@@ -25,12 +25,14 @@ pipeline {
agent { label 'job_initializer' }
steps {
script {
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
checkoutSrcs()
commit_id = "${GIT_COMMIT}"
}
sh 'python3 tests/jenkins_get_approval.py'
stash name: 'srcs'
milestone ordinal: 1
}
}
stage('Jenkins Win64: Build') {
@@ -41,7 +43,6 @@ pipeline {
'build-win64-cuda10.1': { BuildWin64() }
])
}
milestone ordinal: 2
}
}
stage('Jenkins Win64: Test') {
@@ -52,7 +53,6 @@ pipeline {
'test-win64-cuda10.1': { TestWin64() },
])
}
milestone ordinal: 3
}
}
}
@@ -85,7 +85,7 @@ def BuildWin64() {
bat """
mkdir build
cd build
cmake .. -G"Visual Studio 15 2017 Win64" -DUSE_CUDA=ON -DCMAKE_VERBOSE_MAKEFILE=ON -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON ${arch_flag}
cmake .. -G"Visual Studio 15 2017 Win64" -DUSE_CUDA=ON -DCMAKE_VERBOSE_MAKEFILE=ON -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON ${arch_flag} -DCMAKE_UNITY_BUILD=ON
"""
bat """
cd build

Makefile

@@ -134,14 +134,18 @@ Rpack: clean_all
sed -i -e 's/@OPENMP_LIB@//g' xgboost/src/Makevars.win
rm -f xgboost/src/Makevars.win-e # OSX sed create this extra file; remove it
bash R-package/remove_warning_suppression_pragma.sh
bash xgboost/remove_warning_suppression_pragma.sh
rm xgboost/remove_warning_suppression_pragma.sh
rm -rfv xgboost/tests/helper_scripts/
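# The R executable can be overridden, e.g.: make Rcheck R=R-devel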
R ?= R
Rbuild: Rpack
R CMD build --no-build-vignettes xgboost
$(R) CMD build xgboost
rm -rf xgboost
Rcheck: Rbuild
R CMD check xgboost*.tar.gz
$(R) CMD check --as-cran xgboost*.tar.gz
-include build/*.d
-include build/*/*.d

NEWS.md

@@ -3,6 +3,177 @@ XGBoost Change Log
This file records the changes in xgboost library in reverse chronological order.
## v1.2.0 (2020.08.22)
### XGBoost4J-Spark now supports the GPU algorithm (#5171)
* Now XGBoost4J-Spark is able to leverage NVIDIA GPU hardware to speed up training.
* There is on-going work for accelerating the rest of the data pipeline with NVIDIA GPUs (#5950, #5972).
### XGBoost now supports CUDA 11 (#5808)
* It is now possible to build XGBoost with CUDA 11. Note that we do not yet distribute pre-built binaries built with CUDA 11; all current distributions use CUDA 10.0.
### Better guidance for persisting XGBoost models in an R environment (#5940, #5964)
* Users are strongly encouraged to use `xgb.save()` and `xgb.save.raw()` instead of `saveRDS()`, so that the persisted models remain accessible with future releases of XGBoost (see the sketch after this list).
* The previous release (1.1.0) had problems loading models that were saved with `saveRDS()`. This release adds a compatibility layer to restore access to the old RDS files. Note that this is meant to be a temporary measure; users are advised to stop using `saveRDS()` and migrate to `xgb.save()` and `xgb.save.raw()`.
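To make this concrete, a minimal R sketch (assuming the `agaricus.train` demo data shipped with the package):

```r
library(xgboost)
data(agaricus.train, package = "xgboost")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               nrounds = 2, objective = "binary:logistic")

# Preferred: persist with xgb.save()/xgb.load(), readable by future releases
xgb.save(bst, "xgb.model")
bst2 <- xgb.load("xgb.model")

# Or keep a raw byte vector (e.g. inside a larger R object)
raw_bytes <- xgb.save.raw(bst)
bst3 <- xgb.load.raw(raw_bytes)
```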
### New objectives and metrics
* The pseudo-Huber loss `reg:pseudohubererror` is added (#5647). The corresponding metric is `mphe`. Right now, the slope is hard-coded to 1; see the sketch after this list.
* The Accelerated Failure Time objective for survival analysis (`survival:aft`) is now accelerated on GPUs (#5714, #5716). The survival metrics `aft-nloglik` and `interval-regression-accuracy` are also accelerated on GPUs.
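A minimal R sketch of selecting the new objective and metric (assuming the agaricus demo data; the 0/1 labels make this purely illustrative):

```r
library(xgboost)
data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
# Parameter names as given in the entry above
params <- list(objective = "reg:pseudohubererror", eval_metric = "mphe")
bst <- xgb.train(params, dtrain, nrounds = 10,
                 watchlist = list(train = dtrain))
```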
### Improved integration with scikit-learn
* Added `n_features_in_` attribute to the scikit-learn interface to store the number of features used (#5780). This is useful for integrating with some scikit-learn features such as `StackingClassifier`. See [this link](https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep010/proposal.html) for more details.
* `XGBoostError` now inherits `ValueError`, which conforms to scikit-learn's exception requirements (#5696).
### Improved integration with Dask
* The XGBoost Dask API now exposes an asynchronous interface (#5862). See [the document](https://xgboost.readthedocs.io/en/latest/tutorials/dask.html#working-with-asyncio) for details.
* Zero-copy ingestion of GPU arrays via `DaskDeviceQuantileDMatrix` (#5623, #5799, #5800, #5803, #5837, #5874, #5901): Previously, the Dask interface had to make 2 data copies: one for concatenating the Dask partition/block into a single block and another for internal representation. To save memory, we introduce `DaskDeviceQuantileDMatrix`. As long as Dask partitions are resident in the GPU memory, `DaskDeviceQuantileDMatrix` is able to ingest them directly without making copies. This matrix type wraps `DeviceQuantileDMatrix`.
* The prediction function now returns GPU Series type if the input is from Dask-cuDF (#5710). This is to preserve the input data type.
### Robust handling of external data types (#5689, #5893)
- As we support more and more external data types, the handling logic has proliferated throughout the code base and become hard to track. It also became unclear how missing values and threads were handled. We refactored the Python package to collect all data handling logic in a central location, and now we have an explicit list of all supported data types.
### Improvements in GPU-side data matrix (`DeviceQuantileDMatrix`)
* The GPU-side data matrix now implements its own quantile sketching logic, so that data don't have to be transported back to the main memory (#5700, #5747, #5760, #5846, #5870, #5898). The GK sketching algorithm is also now better documented.
- Now we can load extremely sparse datasets such as the URL dataset, although performance is still sub-optimal.
* The GPU-side data matrix now exposes an iterative interface (#5783), so that users are able to construct a matrix from a data iterator. See the [Python demo](https://github.com/dmlc/xgboost/blob/release_1.2.0/demo/guide-python/data_iterator.py).
### New language binding: Swift (#5728)
* Visit https://github.com/kongzii/SwiftXGBoost for more details.
### Robust model serialization with JSON (#5772, #5804, #5831, #5857, #5934)
* We continue efforts from the 1.0.0 release to adopt JSON as the format to save and load models robustly.
* JSON model IO is significantly faster and produces smaller model files.
* Round-trip reproducibility is guaranteed, via the introduction of an efficient float-to-string conversion algorithm known as [the Ryū algorithm](https://dl.acm.org/doi/10.1145/3192366.3192369). The conversion is locale-independent, producing consistent numeric representation regardless of the locale setting of the user's machine.
* We fixed an issue in loading large JSON files to memory.
* It is now possible to load a JSON file from a remote source such as S3.
### Performance improvements
* CPU hist tree method optimization
- Skip missing lookup in hist row partitioning if data is dense. (#5644)
- Specialize training procedures for CPU hist tree method on distributed environment. (#5557)
- Add single-precision histogram for CPU hist. Previously, the gradient histogram for CPU hist was hard-coded to 64-bit; now users can set the parameter `single_precision_histogram` to use a 32-bit histogram instead for faster training (#5624, #5811). A sketch follows this list.
* GPU hist tree method optimization
- Removed some unnecessary synchronizations and better memory allocation pattern. (#5707)
- Optimize GPU hist for wide datasets. Previously, for wide datasets, the atomic operations were performed in global memory; now they can run in shared memory for faster histogram building. Note there is a known small regression on GeForce cards with dense data. (#5795, #5926, #5948, #5631)
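As referenced in the CPU hist entry above, a minimal R sketch of opting into the 32-bit histogram (assuming the agaricus demo data; `single_precision_histogram` is the parameter named in that entry):

```r
library(xgboost)
data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
# 32-bit gradient histogram for the CPU hist tree method
params <- list(tree_method = "hist", single_precision_histogram = TRUE,
               objective = "binary:logistic")
bst <- xgb.train(params, dtrain, nrounds = 50)
```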
### API additions
* Support passing fmap to the importance plot (#5719). Now the importance plot can show actual feature names instead of default ones.
* Support 64-bit seed (#5643).
* A new C API `XGBoosterGetNumFeature` is added for getting the number of features in the booster (#5856).
* Feature names and feature types are now stored in C++ core and saved in binary DMatrix (#5858).
### Breaking: The `predict()` method of `DaskXGBClassifier` now produces class predictions (#5986). Use `predict_proba()` to obtain probability predictions.
* Previously, `DaskXGBClassifier.predict()` produced probability predictions. This is inconsistent with the behavior of other scikit-learn classifiers, where `predict()` returns class predictions. We make a breaking change in the 1.2.0 release so that `DaskXGBClassifier.predict()` now correctly produces class predictions and thus behaves like other scikit-learn classifiers. Furthermore, we introduce the `predict_proba()` method for obtaining probability predictions, again in line with other scikit-learn classifiers.
### Breaking: Custom evaluation metric now receives raw prediction (#5954)
* Previously, the custom evaluation metric received a transformed prediction result when used with a classifier. Now the custom metric will receive a raw (untransformed) prediction and will need to transform the prediction itself. See [demo/guide-python/custom\_softmax.py](https://github.com/dmlc/xgboost/blob/release_1.2.0/demo/guide-python/custom_softmax.py) for an example; a minimal sketch follows this list.
* This change is to make the custom metric behave consistently with the custom objective, which already receives raw prediction (#5564).
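A minimal R sketch of a metric written for this contract (the R interface passes raw margin scores to `feval`), assuming the agaricus demo data:

```r
library(xgboost)
data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

# The metric receives raw (untransformed) margins and must apply the
# logistic transform itself before thresholding.
error_from_raw <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  probs <- 1 / (1 + exp(-preds))
  list(metric = "error", value = mean(as.numeric(probs > 0.5) != labels))
}
bst <- xgb.train(list(objective = "binary:logistic"), dtrain, nrounds = 5,
                 watchlist = list(train = dtrain), feval = error_from_raw)
```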
### Breaking: XGBoost4J-Spark now requires Spark 3.0 and Scala 2.12 (#5836, #5890)
* Starting with version 3.0, Spark can manage GPU resources and allocate them among executors.
* Spark 3.0 dropped support for Scala 2.11 and now only supports Scala 2.12. Thus, XGBoost4J-Spark also only supports Scala 2.12.
### Breaking: XGBoost Python package now requires Python 3.6 and later (#5715)
* Python 3.6 has many useful features such as f-strings.
### Breaking: XGBoost now adopts the C++14 standard (#5664)
* Make sure to use a sufficiently modern C++ compiler that supports C++14, such as Visual Studio 2017, GCC 5.0+, and Clang 3.4+.
### Bug-fixes
* Fix a data race in the prediction function (#5853). As a byproduct, the prediction function now uses a thread-local data store and became thread-safe.
* Restore capability to run prediction when the test input has fewer features than the training data (#5955). This capability is necessary to support predicting with LIBSVM inputs. The previous release (1.1) had broken this capability, so we restore it in this version with better tests.
* Fix OpenMP build with CMake for R package, to support CMake 3.13 (#5895).
* Fix Windows 2016 build (#5902, #5918).
* Fix edge cases in scikit-learn interface with Pandas input by disabling feature validation. (#5953)
* [R] Enable weighted learning to rank (#5945)
* [R] Fix early stopping with custom objective (#5923)
* Fix NDK Build (#5886)
* Add missing explicit template specializations for greater portability (#5921)
* Handle empty rows in data iterators correctly (#5929). This bug affects file loader and JVM data frames.
* Fix `IsDense` (#5702)
* [jvm-packages] Fix wrong method name `setAllowZeroForMissingValue` (#5740)
* Fix shape inference for Dask predict (#5989)
### Usability Improvements, Documentation
* [Doc] Document that CUDA 10.0 is required (#5872)
* Refactored command line interface (CLI). The CLI can now handle user errors and output basic documentation. (#5574)
* Better error handling in Python: use `raise from` syntax to preserve full stacktrace (#5787).
* The JSON model dump now has a formal schema (#5660, #5818). The benefit is to prevent `dump_model()` function from breaking. See [this document](https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html#difference-between-saving-model-and-dumping-model) to understand the difference between saving and dumping models.
* Add a reference to the GPU external memory paper (#5684)
* Document more objective parameters in the R package (#5682)
* Document the existence of pre-built binary wheels for MacOS (#5711)
* Remove `max.depth` in the R gblinear example. (#5753)
* Added conda environment file for building docs (#5773)
* Mention dask blog post in the doc, which introduces using Dask with GPU and some internal workings. (#5789)
* Fix rendering of Markdown docs (#5821)
* Document new objectives and metrics available on GPUs (#5909)
* Better message when no GPU is found. (#5594)
* Remove the use of `silent` parameter from R demos. (#5675)
* Don't use masked array in array interface. (#5730)
* Update affiliation of @terrytangyuan: Ant Financial -> Ant Group (#5827)
* Move the dask tutorial closer to the other distributed tutorials (#5613)
* Update XGBoost + Dask overview documentation (#5961)
* Show `n_estimators` in the docstring of the scikit-learn interface (#6041)
* Fix a typo in a docstring of the scikit-learn interface (#5980)
### Maintenance: testing, continuous integration, build system
* [CI] Remove CUDA 9.0 from CI (#5674, #5745)
* Require CUDA 10.0+ in CMake build (#5718)
* [R] Remove dependency on gendef for Visual Studio builds (fixes #5608) (#5764). This enables building XGBoost with GPU support with R 4.x.
* [R-package] Reduce duplication in configure.ac (#5693)
* Bump com.esotericsoftware to 4.0.2 (#5690)
* Migrate some tests from AppVeyor to GitHub Actions to speed up the tests. (#5911, #5917, #5919, #5922, #5928)
* Reduce cost of the Jenkins CI server (#5884, #5904, #5892). We now enforce a daily budget via an automated monitor. We also dramatically reduced the workload for the Windows platform, since the cloud VM cost is vastly greater for Windows.
* [R] Set up automated R linter (#5944)
* [R] replace uses of T and F with TRUE and FALSE (#5778)
* Update Docker container 'CPU' (#5956)
* Simplify CMake build with modern CMake techniques (#5871)
* Use `hypothesis` package for testing (#5759, #5835, #5849).
* Define `_CRT_SECURE_NO_WARNINGS` to remove unneeded warnings in MSVC (#5434)
* Run all Python demos in CI, to ensure that they don't break (#5651)
* Enhance nvtx support (#5636). We can now use a unified timer between CPU and GPU, and CMake can find nvtx automatically.
* Speed up python test. (#5752)
* Add helper for generating batches of data. (#5756)
* Add c-api-demo to .gitignore (#5855)
* Add option to enable all compiler warnings in GCC/Clang (#5897)
* Make Python model compatibility test runnable locally (#5941)
* Add cupy to Windows CI (#5797)
* [CI] Fix cuDF install; merge 'gpu' and 'cudf' test suite (#5814)
* Update rabit submodule (#5680, #5876)
* Force colored output for Ninja build. (#5959)
* [CI] Assign larger /dev/shm to NCCL (#5966)
* Add missing Pytest marks to AsyncIO unit test (#5968)
* [CI] Use latest cuDF and dask-cudf (#6048)
* Add CMake flag to log C API invocations, to aid debugging (#5925)
* Fix a unit test on CLI, to handle RC versions (#6050)
* [CI] Use mgpu machine to run gpu hist unit tests (#6050)
* [CI] Build GPU-enabled JAR artifact and deploy to xgboost-maven-repo (#6050)
### Maintenance: Refactor code for legibility and maintainability
* Remove dead code in DMatrix initialization. (#5635)
* Catch dmlc error by ref. (#5678)
* Refactor the `gpu_hist` split evaluation in preparation for batched nodes enumeration. (#5610)
* Remove column major specialization. (#5755)
* Remove unused imports in Python (#5776)
* Avoid including `c_api.h` in header files. (#5782)
* Remove unweighted GK quantile, which is unused. (#5816)
* Add Python binding for rabit ops. (#5743)
* Implement `Empty` method for host device vector. (#5781)
* Remove print (#5867)
* Enforce tree order in JSON (#5974)
### Acknowledgement
**Contributors**: Nan Zhu (@CodingCat), @LionOrCatThatIsTheQuestion, Dmitry Mottl (@Mottl), Rory Mitchell (@RAMitchell), @ShvetsKS, Alex Wozniakowski (@a-wozniakowski), Alexander Gugel (@alexanderGugel), @anttisaukko, @boxdot, Andy Adinets (@canonizer), Ram Rachum (@cool-RR), Elliot Hershberg (@elliothershberg), Jason E. Aten, Ph.D. (@glycerine), Philip Hyunsu Cho (@hcho3), @jameskrach, James Lamb (@jameslamb), James Bourbeau (@jrbourbeau), Peter Jung (@kongzii), Lorenz Walthert (@lorenzwalthert), Oleksandr Kuvshynov (@okuvshynov), Rong Ou (@rongou), Shaochen Shi (@shishaochen), Yuan Tang (@terrytangyuan), Jiaming Yuan (@trivialfis), Bobby Wang (@wbo4958), Zhang Zhang (@zhangzhang10)
**Reviewers**: Nan Zhu (@CodingCat), @LionOrCatThatIsTheQuestion, Hao Yang (@QuantHao), Rory Mitchell (@RAMitchell), @ShvetsKS, Egor Smirnov (@SmirnovEgorRu), Alex Wozniakowski (@a-wozniakowski), Amit Kumar (@aktech), Avinash Barnwal (@avinashbarnwal), @boxdot, Andy Adinets (@canonizer), Chandra Shekhar Reddy (@chandrureddy), Ram Rachum (@cool-RR), Cristiano Goncalves (@cristianogoncalves), Elliot Hershberg (@elliothershberg), Jason E. Aten, Ph.D. (@glycerine), Philip Hyunsu Cho (@hcho3), Tong He (@hetong007), James Lamb (@jameslamb), James Bourbeau (@jrbourbeau), Lee Drake (@leedrake5), DougM (@mengdong), Oleksandr Kuvshynov (@okuvshynov), RongOu (@rongou), Shaochen Shi (@shishaochen), Xu Xiao (@sperlingxx), Yuan Tang (@terrytangyuan), Theodore Vasiloudis (@thvasilo), Jiaming Yuan (@trivialfis), Bobby Wang (@wbo4958), Zhang Zhang (@zhangzhang10)
## v1.1.1 (2020.06.06)
This patch release applies the following patches to 1.1.0 release:
* CPU performance improvement in the PyPI wheels (#5720)
* Fix loading old model (#5724)
* Install pkg-config file (#5744)
## v1.1.0 (2020.05.17)
### Better performance on multi-core CPUs (#5244, #5334, #5522)
@@ -203,6 +374,16 @@ Upgrading to latest pip allows us to depend on newer versions of system librarie
**Reviewers**: Nan Zhu (@CodingCat), @LeZhengThu, Rory Mitchell (@RAMitchell), @ShvetsKS, Egor Smirnov (@SmirnovEgorRu), Steve Bronder (@SteveBronder), Nikita Titov (@StrikerRUS), Andrew Kane (@ankane), Avinash Barnwal (@avinashbarnwal), @brydag, Andy Adinets (@canonizer), Chandra Shekhar Reddy (@chandrureddy), Chen Qin (@chenqin), Codecov (@codecov-io), David Díaz Vico (@daviddiazvico), Darby Payne (@dpayne), Jason E. Aten, Ph.D. (@glycerine), Philip Hyunsu Cho (@hcho3), James Lamb (@jameslamb), @johnny-cat, Mu Li (@mli), Mate Soos (@msoos), @rnyak, Rong Ou (@rongou), Sriram Chandramouli (@sriramch), Toby Dylan Hocking (@tdhock), Yuan Tang (@terrytangyuan), Oleksandr Pryimak (@trams), Jiaming Yuan (@trivialfis), Liang-Chi Hsieh (@viirya), Bobby Wang (@wbo4958),
## v1.0.2 (2020.03.03)
This patch release applies the following patches to 1.0.0 release:
* Fix a small typo in sklearn.py that broke multiple eval metrics (#5341)
* Restore loading model from buffer (#5360)
* Use type name for data type check (#5364)
## v1.0.1 (2020.02.21)
This release is identical to the 1.0.0 release, except that it fixes a small bug that rendered 1.0.0 incompatible with Python 3.5. See #5328.
## v1.0.0 (2020.02.19)
This release marks a major milestone for the XGBoost project.

R-package/DESCRIPTION

@@ -1,8 +1,8 @@
Package: xgboost
Type: Package
Title: Extreme Gradient Boosting
Version: 1.2.0.1
Date: 2020-02-21
Version: 1.3.2.1
Date: 2020-08-28
Authors@R: c(
person("Tianqi", "Chen", role = c("aut"),
email = "tianqi.tchen@gmail.com"),
@@ -55,7 +55,8 @@ Suggests:
igraph (>= 1.0.1),
jsonlite,
float,
crayon
crayon,
titanic
Depends:
R (>= 3.3.0)
Imports:
@@ -63,6 +64,5 @@ Imports:
methods,
data.table (>= 1.9.6),
magrittr (>= 1.5),
stringi (>= 0.5.2)
RoxygenNote: 7.1.1
SystemRequirements: GNU make, C++14

R-package/NAMESPACE

@@ -38,6 +38,7 @@ export(xgb.dump)
export(xgb.gblinear.history)
export(xgb.ggplot.deepness)
export(xgb.ggplot.importance)
export(xgb.ggplot.shap.summary)
export(xgb.importance)
export(xgb.load)
export(xgb.load.raw)
@@ -46,6 +47,7 @@ export(xgb.plot.deepness)
export(xgb.plot.importance)
export(xgb.plot.multi.trees)
export(xgb.plot.shap)
export(xgb.plot.shap.summary)
export(xgb.plot.tree)
export(xgb.save)
export(xgb.save.raw)
@@ -79,11 +81,6 @@ importFrom(graphics,title)
importFrom(magrittr,"%>%")
importFrom(stats,median)
importFrom(stats,predict)
importFrom(stringi,stri_detect_regex)
importFrom(stringi,stri_match_first_regex)
importFrom(stringi,stri_replace_all_regex)
importFrom(stringi,stri_replace_first_regex)
importFrom(stringi,stri_split_regex)
importFrom(utils,head)
importFrom(utils,object.size)
importFrom(utils,str)

R-package/R/callbacks.R

@@ -352,9 +352,15 @@ cb.early.stop <- function(stopping_rounds, maximize = FALSE,
finalizer <- function(env) {
if (!is.null(env$bst)) {
attr_best_score <- as.numeric(xgb.attr(env$bst$handle, 'best_score'))
if (best_score != attr_best_score)
stop("Inconsistent 'best_score' values between the closure state: ", best_score,
" and the xgb.attr: ", attr_best_score)
if (best_score != attr_best_score) {
# If the difference is too big, throw an error
if (abs(best_score - attr_best_score) >= 1e-14) {
stop("Inconsistent 'best_score' values between the closure state: ", best_score,
" and the xgb.attr: ", attr_best_score)
}
# If the difference is due to floating-point truncation, update best_score
best_score <- attr_best_score
}
env$bst$best_iteration <- best_iteration
env$bst$best_ntreelimit <- best_ntreelimit
env$bst$best_score <- best_score

R-package/R/utils.R

@@ -20,6 +20,12 @@ NVL <- function(x, val) {
stop("typeof(x) == ", typeof(x), " is not supported by NVL")
}
# List of classification and ranking objectives
.CLASSIFICATION_OBJECTIVES <- function() {
return(c('binary:logistic', 'binary:logitraw', 'binary:hinge', 'multi:softmax',
'multi:softprob', 'rank:pairwise', 'rank:ndcg', 'rank:map'))
}
#
# Low-level functions for boosting --------------------------------------------
@@ -167,9 +173,8 @@ xgb.iter.eval <- function(booster_handle, watchlist, iter, feval = NULL) {
evnames <- names(watchlist)
if (is.null(feval)) {
msg <- .Call(XGBoosterEvalOneIter_R, booster_handle, as.integer(iter), watchlist, as.list(evnames))
msg <- stri_split_regex(msg, '(\\s+|:|\\s+)')[[1]][-1]
res <- as.numeric(msg[c(FALSE, TRUE)]) # even indices are the values
names(res) <- msg[c(TRUE, FALSE)] # odds are the names
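# msg looks like "[1]\ttrain-rmse:0.48\ttest-rmse:0.49"; drop the leading
# iteration tag, then split the name:value pairs into a 2-row matrix
# (row 1 = metric names, row 2 = values).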
mat <- matrix(strsplit(msg, '\\s+|:')[[1]][-1], nrow = 2)
res <- structure(as.numeric(mat[2, ]), names = mat[1, ])
} else {
res <- sapply(seq_along(watchlist), function(j) {
w <- watchlist[[j]]
@@ -188,13 +193,23 @@ xgb.iter.eval <- function(booster_handle, watchlist, iter, feval = NULL) {
# Helper functions for cross validation ---------------------------------------
#
# Possibly convert the labels into factors, depending on the objective.
# The labels are converted into factors only when the given objective refers to the classification
# or ranking tasks.
convert.labels <- function(labels, objective_name) {
if (objective_name %in% .CLASSIFICATION_OBJECTIVES()) {
return(as.factor(labels))
} else {
return(labels)
}
}
# Generates random (stratified if needed) CV folds
generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
# cannot do it for rank
if (exists('objective', where = params) &&
is.character(params$objective) &&
strtrim(params$objective, 5) == 'rank:') {
objective <- params$objective
if (is.character(objective) && strtrim(objective, 5) == 'rank:') {
stop("\n\tAutomatic generation of CV-folds is not implemented for ranking!\n",
"\tConsider providing pre-computed CV-folds through the 'folds=' parameter.\n")
}
@@ -207,19 +222,16 @@ generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
# - For classification, need to convert y labels to factor before making the folds,
# and then do stratification by factor levels.
# - For regression, leave y numeric and do stratification by quantiles.
if (exists('objective', where = params) &&
is.character(params$objective)) {
# If 'objective' provided in params, assume that y is a classification label
# unless objective is reg:squarederror
if (params$objective != 'reg:squarederror')
y <- factor(y)
if (is.character(objective)) {
y <- convert.labels(y, params$objective)
} else {
# If no 'objective' given in params, it means that user either wants to
# use the default 'reg:squarederror' objective or has provided a custom
# obj function. Here, assume classification setting when y has 5 or less
# unique values:
if (length(unique(y)) <= 5)
if (length(unique(y)) <= 5) {
y <- factor(y)
}
}
folds <- xgb.createFolds(y, nfold)
} else {
@@ -349,6 +361,7 @@ NULL
#' # Save as a stand-alone file (JSON); load it with xgb.load()
#' xgb.save(bst, 'xgb.model.json')
#' bst2 <- xgb.load('xgb.model.json')
#' if (file.exists('xgb.model.json')) file.remove('xgb.model.json')
#'
#' # Save as a raw byte vector; load it with xgb.load.raw()
#' xgb_bytes <- xgb.save.raw(bst)
@@ -364,6 +377,7 @@ NULL
#' obj2 <- readRDS('my_object.rds')
#' # Re-construct xgb.Booster object from the bytes
#' bst2 <- xgb.load.raw(obj2$xgb_model_bytes)
#' if (file.exists('my_object.rds')) file.remove('my_object.rds')
#'
#' @name a-compatibility-note-for-saveRDS-save
NULL

R-package/R/xgb.DMatrix.R

@@ -357,7 +357,7 @@ slice.xgb.DMatrix <- function(object, idxset, ...) {
#' @export
print.xgb.DMatrix <- function(x, verbose = FALSE, ...) {
cat('xgb.DMatrix dim:', nrow(x), 'x', ncol(x), ' info: ')
infos <- c()
infos <- character(0)
if (length(getinfo(x, 'label')) > 0) infos <- 'label'
if (length(getinfo(x, 'weight')) > 0) infos <- c(infos, 'weight')
if (length(getinfo(x, 'base_margin')) > 0) infos <- c(infos, 'base_margin')

R-package/R/xgb.cv.R

@@ -36,6 +36,8 @@
#' \item \code{error} binary classification error rate
#' \item \code{rmse} Root mean square error
#' \item \code{logloss} negative log-likelihood function
#' \item \code{mae} Mean absolute error
#' \item \code{mape} Mean absolute percentage error
#' \item \code{auc} Area under curve
#' \item \code{aucpr} Area under PR curve
#' \item \code{merror} Exact matching error, used to evaluate multi-class classification
@@ -79,7 +81,7 @@
#'
#' All observations are used for both training and validation.
#'
#' Adapted from \url{http://en.wikipedia.org/wiki/Cross-validation_\%28statistics\%29#k-fold_cross-validation}
#' Adapted from \url{https://en.wikipedia.org/wiki/Cross-validation_\%28statistics\%29}
#'
#' @return
#' An object of class \code{xgb.cv.synchronous} with the following elements:

R-package/R/xgb.dump.R

@@ -56,10 +56,10 @@ xgb.dump <- function(model, fname = NULL, fmap = "", with_stats=FALSE,
as.character(dump_format))
if (is.null(fname))
model_dump <- stri_replace_all_regex(model_dump, '\t', '')
model_dump <- gsub('\t', '', model_dump, fixed = TRUE)
if (dump_format == "text")
model_dump <- unlist(stri_split_regex(model_dump, '\n'))
model_dump <- unlist(strsplit(model_dump, '\n', fixed = TRUE))
model_dump <- grep('^\\s*$', model_dump, invert = TRUE, value = TRUE)

R-package/R/xgb.ggplot.R

@@ -99,6 +99,84 @@ xgb.ggplot.deepness <- function(model = NULL, which = c("2x1", "max.depth", "med
}
}
#' @rdname xgb.plot.shap.summary
#' @export
xgb.ggplot.shap.summary <- function(data, shap_contrib = NULL, features = NULL, top_n = 10, model = NULL,
trees = NULL, target_class = NULL, approxcontrib = FALSE, subsample = NULL) {
data_list <- xgb.shap.data(
data = data,
shap_contrib = shap_contrib,
features = features,
top_n = top_n,
model = model,
trees = trees,
target_class = target_class,
approxcontrib = approxcontrib,
subsample = subsample,
max_observations = 10000 # 10,000 samples per feature.
)
p_data <- prepare.ggplot.shap.data(data_list, normalize = TRUE)
# Reverse factor levels so that the first level is at the top of the plot
p_data[, "feature" := factor(feature, rev(levels(feature)))]
p <- ggplot2::ggplot(p_data, ggplot2::aes(x = feature, y = p_data$shap_value, colour = p_data$feature_value)) +
ggplot2::geom_jitter(alpha = 0.5, width = 0.1) +
ggplot2::scale_colour_viridis_c(limits = c(-3, 3), option = "plasma", direction = -1) +
ggplot2::geom_abline(slope = 0, intercept = 0, colour = "darkgrey") +
ggplot2::coord_flip()
p
}
#' Combine and melt feature values and SHAP contributions for sample
#' observations.
#'
#' Conforms to data format required for ggplot functions.
#'
#' Internal utility function.
#'
#' @param data_list List containing 'data' and 'shap_contrib' returned by
#' \code{xgb.shap.data()}.
#' @param normalize Whether to standardize feature values to have mean 0 and
#' standard deviation 1 (useful for comparing multiple features on the same
#' plot). Default \code{FALSE}.
#'
#' @return A data.table containing the observation ID, the feature name, the
#' feature value (normalized if specified), and the SHAP contribution value.
prepare.ggplot.shap.data <- function(data_list, normalize = FALSE) {
data <- data_list[["data"]]
shap_contrib <- data_list[["shap_contrib"]]
data <- data.table::as.data.table(as.matrix(data))
if (normalize) {
data[, (names(data)) := lapply(.SD, normalize)]
}
data[, "id" := seq_len(nrow(data))]
data_m <- data.table::melt.data.table(data, id.vars = "id", variable.name = "feature", value.name = "feature_value")
shap_contrib <- data.table::as.data.table(as.matrix(shap_contrib))
shap_contrib[, "id" := seq_len(nrow(shap_contrib))]
shap_contrib_m <- data.table::melt.data.table(shap_contrib, id.vars = "id", variable.name = "feature", value.name = "shap_value")
p_data <- data.table::merge.data.table(data_m, shap_contrib_m, by = c("id", "feature"))
p_data
}
#' Scale feature value to have mean 0, standard deviation 1
#'
#' This is used to compare multiple features on the same plot.
#' Internal utility function
#'
#' @param x Numeric vector
#'
#' @return Numeric vector with mean 0 and sd 1.
normalize <- function(x) {
loc <- mean(x, na.rm = TRUE)
scale <- stats::sd(x, na.rm = TRUE)
(x - loc) / scale
}
# Plot multiple ggplot graph aligned by rows and columns.
# ... the plots
# cols number of columns
@@ -131,5 +209,5 @@ multiplot <- function(..., cols = 1) {
globalVariables(c(
"Cluster", "ggplot", "aes", "geom_bar", "coord_flip", "xlab", "ylab", "ggtitle", "theme",
"element_blank", "element_text", "V1", "Weight"
"element_blank", "element_text", "V1", "Weight", "feature"
))

R-package/R/xgb.model.dt.tree.R

@@ -87,11 +87,11 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
}
if (length(text) < 2 ||
sum(stri_detect_regex(text, 'yes=(\\d+),no=(\\d+)')) < 1) {
sum(grepl('yes=(\\d+),no=(\\d+)', text)) < 1) {
stop("Non-tree model detected! This function can only be used with tree models.")
}
position <- which(!is.na(stri_match_first_regex(text, "booster")))
position <- which(grepl("booster", text, fixed = TRUE))
add.tree.id <- function(node, tree) if (use_int_id) node else paste(tree, node, sep = "-")
@@ -108,9 +108,9 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
}
td <- td[Tree %in% trees & !grepl('^booster', t)]
td[, Node := stri_match_first_regex(t, "(\\d+):")[, 2] %>% as.integer]
td[, Node := as.integer(sub("^([0-9]+):.*", "\\1", t))]
if (!use_int_id) td[, ID := add.tree.id(Node, Tree)]
td[, isLeaf := !is.na(stri_match_first_regex(t, "leaf"))]
td[, isLeaf := grepl("leaf", t, fixed = TRUE)]
# parse branch lines
branch_rx <- paste0("f(\\d+)<(", anynumber_regex, ")\\] yes=(\\d+),no=(\\d+),missing=(\\d+),",
@@ -118,10 +118,11 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
branch_cols <- c("Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover")
td[isLeaf == FALSE,
(branch_cols) := {
# skip some indices with spurious capture groups from anynumber_regex
xtr <- stri_match_first_regex(t, branch_rx)[, c(2, 3, 5, 6, 7, 8, 10), drop = FALSE]
xtr[, 3:5] <- add.tree.id(xtr[, 3:5], Tree)
lapply(seq_len(ncol(xtr)), function(i) xtr[, i])
matches <- regmatches(t, regexec(branch_rx, t))
# skip some indices with spurious capture groups from anynumber_regex
xtr <- do.call(rbind, matches)[, c(2, 3, 5, 6, 7, 8, 10), drop = FALSE]
xtr[, 3:5] <- add.tree.id(xtr[, 3:5], Tree)
as.data.table(xtr)
}]
# assign feature_names when available
if (!is.null(feature_names)) {
@@ -135,8 +136,9 @@ xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
leaf_cols <- c("Feature", "Quality", "Cover")
td[isLeaf == TRUE,
(leaf_cols) := {
xtr <- stri_match_first_regex(t, leaf_rx)[, c(2, 4)]
c("Leaf", lapply(seq_len(ncol(xtr)), function(i) xtr[, i]))
matches <- regmatches(t, regexec(leaf_rx, t))
xtr <- do.call(rbind, matches)[, c(2, 4)]
c("Leaf", as.data.table(xtr))
}]
# convert some columns to numeric

R-package/R/xgb.plot.importance.R

@@ -99,21 +99,20 @@ xgb.plot.importance <- function(importance_matrix = NULL, top_n = NULL, measure
}
if (plot) {
op <- par(no.readonly = TRUE)
mar <- op$mar
original_mar <- par()$mar
# reset margins so this function doesn't have side effects
on.exit({par(mar = original_mar)})
mar <- original_mar
if (!is.null(left_margin))
mar[2] <- left_margin
par(mar = mar)
# reverse the order of rows to have the highest ranked at the top
importance_matrix[nrow(importance_matrix):1,
importance_matrix[rev(seq_len(nrow(importance_matrix))),
barplot(Importance, horiz = TRUE, border = NA, cex.names = cex,
names.arg = Feature, las = 1, ...)]
grid(NULL, NA)
# redraw over the grid
importance_matrix[nrow(importance_matrix):1,
barplot(Importance, horiz = TRUE, border = NA, add = TRUE)]
par(op)
}
invisible(importance_matrix)

R-package/R/xgb.plot.multi.trees.R

@@ -67,7 +67,7 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5,
# first number of the path represents the tree, then the following numbers are related to the path to follow
# root init
root.nodes <- tree.matrix[stri_detect_regex(ID, "\\d+-0"), ID]
root.nodes <- tree.matrix[Node == 0, ID]
tree.matrix[ID %in% root.nodes, abs.node.position := root.nodes]
precedent.nodes <- root.nodes
@@ -86,11 +86,8 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5,
tree.matrix[!is.na(Yes), Yes := paste0(abs.node.position, "_0")]
tree.matrix[!is.na(No), No := paste0(abs.node.position, "_1")]
remove.tree <- . %>% stri_replace_first_regex(pattern = "^\\d+-", replacement = "")
tree.matrix[, `:=`(abs.node.position = remove.tree(abs.node.position),
Yes = remove.tree(Yes),
No = remove.tree(No))]
for (nm in c("abs.node.position", "Yes", "No"))
data.table::set(tree.matrix, j = nm, value = sub("^\\d+-", "", tree.matrix[[nm]]))
nodes.dt <- tree.matrix[
, .(Quality = sum(Quality))

R-package/R/xgb.plot.shap.R

@@ -81,6 +81,7 @@
#' xgb.plot.shap(agaricus.test$data, model = bst, features = "odor=none")
#' contr <- predict(bst, agaricus.test$data, predcontrib = TRUE)
#' xgb.plot.shap(agaricus.test$data, contr, model = bst, top_n = 12, n_col = 3)
#' xgb.ggplot.shap.summary(agaricus.test$data, contr, model = bst, top_n = 12) # Summary plot
#'
#' # multiclass example - plots for each class separately:
#' nclass <- 3
@@ -99,6 +100,7 @@
#' n_col = 2, col = col, pch = 16, pch_NA = 17)
#' xgb.plot.shap(x, model = mbst, trees = trees0 + 2, target_class = 2, top_n = 4,
#' n_col = 2, col = col, pch = 16, pch_NA = 17)
#' xgb.ggplot.shap.summary(x, model = mbst, target_class = 0, top_n = 4) # Summary plot
#'
#' @rdname xgb.plot.shap
#' @export
@@ -109,69 +111,33 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
plot_NA = TRUE, col_NA = rgb(0.7, 0, 1, 0.6), pch_NA = '.', pos_NA = 1.07,
plot_loess = TRUE, col_loess = 2, span_loess = 0.5,
which = c("1d", "2d"), plot = TRUE, ...) {
if (!is.matrix(data) && !inherits(data, "dgCMatrix"))
stop("data: must be either matrix or dgCMatrix")
if (is.null(shap_contrib) && (is.null(model) || !inherits(model, "xgb.Booster")))
stop("when shap_contrib is not provided, one must provide an xgb.Booster model")
if (is.null(features) && (is.null(model) || !inherits(model, "xgb.Booster")))
stop("when features are not provided, one must provide an xgb.Booster model to rank the features")
if (!is.null(shap_contrib) &&
(!is.matrix(shap_contrib) || nrow(shap_contrib) != nrow(data) || ncol(shap_contrib) != ncol(data) + 1))
stop("shap_contrib is not compatible with the provided data")
nsample <- if (is.null(subsample)) min(100000, nrow(data)) else as.integer(subsample * nrow(data))
idx <- sample(1:nrow(data), nsample)
data <- data[idx, ]
if (is.null(shap_contrib)) {
shap_contrib <- predict(model, data, predcontrib = TRUE, approxcontrib = approxcontrib)
} else {
shap_contrib <- shap_contrib[idx, ]
}
data_list <- xgb.shap.data(
data = data,
shap_contrib = shap_contrib,
features = features,
top_n = top_n,
model = model,
trees = trees,
target_class = target_class,
approxcontrib = approxcontrib,
subsample = subsample,
max_observations = 100000
)
data <- data_list[["data"]]
shap_contrib <- data_list[["shap_contrib"]]
features <- colnames(data)
which <- match.arg(which)
if (which == "2d")
stop("2D plots are not implemented yet")
if (is.null(features)) {
imp <- xgb.importance(model = model, trees = trees)
top_n <- as.integer(top_n[1])
if (top_n < 1 && top_n > 100)
stop("top_n: must be an integer within [1, 100]")
features <- imp$Feature[1:min(top_n, NROW(imp))]
}
if (is.character(features)) {
if (is.null(colnames(data)))
stop("Either provide `data` with column names or provide `features` as column indices")
features <- match(features, colnames(data))
}
if (n_col > length(features)) n_col <- length(features)
if (is.list(shap_contrib)) { # multiclass: either choose a class or merge
shap_contrib <- if (!is.null(target_class)) shap_contrib[[target_class + 1]]
else Reduce("+", lapply(shap_contrib, abs))
}
shap_contrib <- shap_contrib[, features, drop = FALSE]
data <- data[, features, drop = FALSE]
cols <- colnames(data)
if (is.null(cols)) cols <- colnames(shap_contrib)
if (is.null(cols)) cols <- paste0('X', 1:ncol(data))
colnames(data) <- cols
colnames(shap_contrib) <- cols
if (plot && which == "1d") {
op <- par(mfrow = c(ceiling(length(features) / n_col), n_col),
oma = c(0, 0, 0, 0) + 0.2,
mar = c(3.5, 3.5, 0, 0) + 0.1,
mgp = c(1.7, 0.6, 0))
for (f in cols) {
for (f in features) {
ord <- order(data[, f])
x <- data[, f][ord]
y <- shap_contrib[, f][ord]
@@ -216,3 +182,108 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
}
invisible(list(data = data, shap_contrib = shap_contrib))
}
#' SHAP contribution dependency summary plot
#'
#' Compare SHAP contributions of different features.
#'
#' A point plot (each point representing one sample from \code{data}) is
#' produced for each feature, with the points plotted on the SHAP value axis.
#' Each point (observation) is coloured based on its feature value. The plot
#' hence allows us to see which features have a negative / positive contribution
#' to the model prediction, and whether the contribution differs for larger
#' or smaller values of the feature. We effectively try to replicate the
#' \code{summary_plot} function from https://github.com/slundberg/shap.
#'
#' @inheritParams xgb.plot.shap
#'
#' @return A \code{ggplot2} object.
#' @export
#'
#' @examples # See \code{\link{xgb.plot.shap}}.
#' @seealso \code{\link{xgb.plot.shap}}, \code{\link{xgb.ggplot.shap.summary}},
#' \url{https://github.com/slundberg/shap}
xgb.plot.shap.summary <- function(data, shap_contrib = NULL, features = NULL, top_n = 10, model = NULL,
trees = NULL, target_class = NULL, approxcontrib = FALSE, subsample = NULL) {
# Only ggplot implementation is available.
xgb.ggplot.shap.summary(data, shap_contrib, features, top_n, model, trees, target_class, approxcontrib, subsample)
}
#' Prepare data for SHAP plots. To be used in xgb.plot.shap, xgb.plot.shap.summary, etc.
#' Internal utility function.
#'
#' @inheritParams xgb.plot.shap
#' @keywords internal
#'
#' @return A list containing: 'data', a matrix containing sample observations
#' and their feature values; 'shap_contrib', a matrix containing the SHAP contribution
#' values for these observations.
xgb.shap.data <- function(data, shap_contrib = NULL, features = NULL, top_n = 1, model = NULL,
trees = NULL, target_class = NULL, approxcontrib = FALSE,
subsample = NULL, max_observations = 100000) {
if (!is.matrix(data) && !inherits(data, "dgCMatrix"))
stop("data: must be either matrix or dgCMatrix")
if (is.null(shap_contrib) && (is.null(model) || !inherits(model, "xgb.Booster")))
stop("when shap_contrib is not provided, one must provide an xgb.Booster model")
if (is.null(features) && (is.null(model) || !inherits(model, "xgb.Booster")))
stop("when features are not provided, one must provide an xgb.Booster model to rank the features")
if (!is.null(shap_contrib) &&
(!is.matrix(shap_contrib) || nrow(shap_contrib) != nrow(data) || ncol(shap_contrib) != ncol(data) + 1))
stop("shap_contrib is not compatible with the provided data")
if (is.character(features) && is.null(colnames(data)))
stop("either provide `data` with column names or provide `features` as column indices")
if (is.null(model$feature_names) && model$nfeatures != ncol(data))
stop("if model has no feature_names, columns in `data` must match features in model")
if (!is.null(subsample)) {
idx <- sample(x = seq_len(nrow(data)), size = as.integer(subsample * nrow(data)), replace = FALSE)
} else {
idx <- seq_len(min(nrow(data), max_observations))
}
data <- data[idx, ]
if (is.null(colnames(data))) {
colnames(data) <- paste0("X", seq_len(ncol(data)))
}
if (!is.null(shap_contrib)) {
if (is.list(shap_contrib)) { # multiclass: either choose a class or merge
shap_contrib <- if (!is.null(target_class)) shap_contrib[[target_class + 1]] else Reduce("+", lapply(shap_contrib, abs))
}
shap_contrib <- shap_contrib[idx, ]
if (is.null(colnames(shap_contrib))) {
colnames(shap_contrib) <- paste0("X", seq_len(ncol(shap_contrib)))  # shap_contrib carries one extra (BIAS) column
}
} else {
shap_contrib <- predict(model, newdata = data, predcontrib = TRUE, approxcontrib = approxcontrib)
if (is.list(shap_contrib)) { # multiclass: either choose a class or merge
shap_contrib <- if (!is.null(target_class)) shap_contrib[[target_class + 1]] else Reduce("+", lapply(shap_contrib, abs))
}
}
if (is.null(features)) {
if (!is.null(model$feature_names)) {
imp <- xgb.importance(model = model, trees = trees)
} else {
imp <- xgb.importance(model = model, trees = trees, feature_names = colnames(data))
}
top_n <- top_n[1]
if (top_n < 1 | top_n > 100) stop("top_n: must be an integer within [1, 100]")
features <- imp$Feature[1:min(top_n, NROW(imp))]
}
if (is.character(features)) {
features <- match(features, colnames(data))
}
shap_contrib <- shap_contrib[, features, drop = FALSE]
data <- data[, features, drop = FALSE]
list(
data = data,
shap_contrib = shap_contrib
)
}
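
A rough illustration of this helper's contract, in the spirit of how the package's unit tests exercise it (`bst` is assumed to be an already-trained xgb.Booster; the function is internal, hence the `:::`):

```
data_list <- xgboost:::xgb.shap.data(data = agaricus.test$data,
                                     model = bst, top_n = 2)
names(data_list)             # "data" "shap_contrib"
dim(data_list$data)          # n_obs x 2 (top 2 features only)
dim(data_list$shap_contrib)  # n_obs x 2, rows matched to 'data'
```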


@@ -130,16 +130,18 @@
#' Note that when using a customized metric, only this single metric can be used.
#' The following is the list of built-in metrics for which Xgboost provides optimized implementation:
#' \itemize{
#' \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
#' \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
#' \item \code{mlogloss} multiclass logloss. \url{http://wiki.fast.ai/index.php/Log_Loss}
#' \item \code{rmse} root mean square error. \url{https://en.wikipedia.org/wiki/Root_mean_square_error}
#' \item \code{logloss} negative log-likelihood. \url{https://en.wikipedia.org/wiki/Log-likelihood}
#' \item \code{mlogloss} multiclass logloss. \url{https://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html}
#' \item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
#' By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
#' A different threshold (e.g., 0.7) can be specified as "error@0.7".
#' \item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
#' \item \code{auc} Area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve} for ranking evaluation.
#' \item \code{mae} Mean absolute error
#' \item \code{mape} Mean absolute percentage error
#' \item \code{auc} Area under the curve. \url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve} for ranking evaluation.
#' \item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
#' \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
#' \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{https://en.wikipedia.org/wiki/NDCG}
#' }
#'
#' The following callbacks are automatically created when certain parameters are set:
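
As a hedged aside on the `error@t` form in the metric list above, the threshold variant is passed like any other metric name (0.7 here is an arbitrary illustration):

```
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               nrounds = 2, max_depth = 2, eta = 1,
               objective = "binary:logistic",
               eval_metric = "error@0.7")  # binarize predictions at 0.7 instead of 0.5
```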


@@ -91,11 +91,6 @@ NULL
#' @importFrom data.table setkeyv
#' @importFrom data.table setnames
#' @importFrom magrittr %>%
#' @importFrom stringi stri_detect_regex
#' @importFrom stringi stri_match_first_regex
#' @importFrom stringi stri_replace_first_regex
#' @importFrom stringi stri_replace_all_regex
#' @importFrom stringi stri_split_regex
#' @importFrom utils object.size str tail
#' @importFrom stats predict
#' @importFrom stats median

R-package/configure

@@ -2732,7 +2732,7 @@ $as_echo "${ac_pkg_openmp}" >&6; }
OPENMP_CXXFLAGS=''
OPENMP_LIB=''
echo '*****************************************************************************************'
echo 'WARNING: OpenMP is unavailable on this Mac OSX system. Training speed may be suboptimal.'
echo ' OpenMP is unavailable on this Mac OSX system. Training speed may be suboptimal.'
echo ' To use all CPU cores for training jobs, you should install OpenMP by running\n'
echo ' brew install libomp'
echo '*****************************************************************************************'


@@ -39,7 +39,7 @@ then
OPENMP_CXXFLAGS=''
OPENMP_LIB=''
echo '*****************************************************************************************'
echo 'WARNING: OpenMP is unavailable on this Mac OSX system. Training speed may be suboptimal.'
echo ' OpenMP is unavailable on this Mac OSX system. Training speed may be suboptimal.'
echo ' To use all CPU cores for training jobs, you should install OpenMP by running\n'
echo ' brew install libomp'
echo '*****************************************************************************************'
@@ -52,4 +52,3 @@ AC_SUBST(ENDIAN_FLAG)
AC_SUBST(BACKTRACE_LIB)
AC_CONFIG_FILES([src/Makevars])
AC_OUTPUT


@@ -36,7 +36,7 @@ treeInteractions <- function(input_tree, input_max_depth) {
interaction_trees <- trees[!is.na(Split) & !is.na(parent_1),
c('Feature', paste0('parent_feat_', 1:(input_max_depth - 1))),
with = FALSE]
interaction_trees_split <- split(interaction_trees, 1:nrow(interaction_trees))
interaction_trees_split <- split(interaction_trees, seq_len(nrow(interaction_trees)))
interaction_list <- lapply(interaction_trees_split, as.character)
# Remove NAs (no parent interaction)
@@ -101,8 +101,8 @@ bst3_interactions <- treeInteractions(bst3_tree, 4)
# Show monotonic constraints still apply by checking scores after incrementing V1
x1 <- sort(unique(x[['V1']]))
for (i in 1:length(x1)){
testdata <- copy(x[, -c('V1')])
for (i in seq_along(x1)){
testdata <- copy(x[, - ('V1')])
testdata[['V1']] <- x1[i]
testdata <- testdata[, paste0('V', 1:10), with = FALSE]
pred <- predict(bst3, as.matrix(testdata))


@@ -43,6 +43,7 @@ bst2 <- xgb.load('xgb.model')
# Save as a stand-alone file (JSON); load it with xgb.load()
xgb.save(bst, 'xgb.model.json')
bst2 <- xgb.load('xgb.model.json')
if (file.exists('xgb.model.json')) file.remove('xgb.model.json')
# Save as a raw byte vector; load it with xgb.load.raw()
xgb_bytes <- xgb.save.raw(bst)
@@ -58,5 +59,6 @@ saveRDS(obj, 'my_object.rds')
obj2 <- readRDS('my_object.rds')
# Re-construct xgb.Booster object from the bytes
bst2 <- xgb.load.raw(obj2$xgb_model_bytes)
if (file.exists('my_object.rds')) file.remove('my_object.rds')
}


@@ -0,0 +1,18 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.ggplot.R
\name{normalize}
\alias{normalize}
\title{Scale feature value to have mean 0, standard deviation 1}
\usage{
normalize(x)
}
\arguments{
\item{x}{Numeric vector}
}
\value{
Numeric vector with mean 0 and sd 1.
}
\description{
This is used to compare multiple features on the same plot.
Internal utility function
}
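
The description above implies the usual standardization; a one-line sketch of the presumed behaviour (the internal implementation may handle edge cases such as zero variance differently):

```
normalize <- function(x) (x - mean(x)) / sd(x)
x <- rnorm(100, mean = 5, sd = 2)
c(mean(normalize(x)), sd(normalize(x)))  # approximately 0 and 1
```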


@@ -0,0 +1,27 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.ggplot.R
\name{prepare.ggplot.shap.data}
\alias{prepare.ggplot.shap.data}
\title{Combine and melt feature values and SHAP contributions for sample
observations.}
\usage{
prepare.ggplot.shap.data(data_list, normalize = FALSE)
}
\arguments{
\item{data_list}{List containing 'data' and 'shap_contrib' returned by
\code{xgb.shap.data()}.}
\item{normalize}{Whether to standardize feature values to have mean 0 and
standard deviation 1 (useful for comparing multiple features on the same
plot). Default \code{FALSE}.}
}
\value{
A data.table containing the observation ID, the feature name, the
feature value (normalized if specified), and the SHAP contribution value.
}
\description{
Conforms to data format required for ggplot functions.
}
\details{
Internal utility function.
}
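
A sketch of how this pairs with xgb.shap.data, following the package's own tests (`bst` and `sparse_matrix` are assumed to be an already-trained booster and its feature matrix; both helpers are internal):

```
data_list <- xgboost:::xgb.shap.data(data = sparse_matrix, model = bst, top_n = 2)
plot_data <- xgboost:::prepare.ggplot.shap.data(data_list, normalize = TRUE)
# long format: one row per (observation, feature) pair
head(plot_data)  # columns: id, feature, feature_value, shap_value
nrow(plot_data) == nrow(sparse_matrix) * 2  # TRUE
```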


@@ -70,6 +70,8 @@ from each CV model. This parameter engages the \code{\link{cb.cv.predict}} callb
\item \code{error} binary classification error rate
\item \code{rmse} Rooted mean square error
\item \code{logloss} negative log-likelihood function
\item \code{mae} Mean absolute error
\item \code{mape} Mean absolute percentage error
\item \code{auc} Area under curve
\item \code{aucpr} Area under PR curve
\item \code{merror} Exact matching error, used to evaluate multi-class classification
@@ -154,7 +156,7 @@ The cross-validation process is then repeated \code{nrounds} times, with each of
All observations are used for both training and validation.
Adapted from \url{http://en.wikipedia.org/wiki/Cross-validation_\%28statistics\%29#k-fold_cross-validation}
Adapted from \url{https://en.wikipedia.org/wiki/Cross-validation_\%28statistics\%29}
}
\examples{
data(agaricus.train, package='xgboost')


@@ -131,6 +131,7 @@ bst <- xgboost(agaricus.train$data, agaricus.train$label, nrounds = 50,
xgb.plot.shap(agaricus.test$data, model = bst, features = "odor=none")
contr <- predict(bst, agaricus.test$data, predcontrib = TRUE)
xgb.plot.shap(agaricus.test$data, contr, model = bst, top_n = 12, n_col = 3)
xgb.ggplot.shap.summary(agaricus.test$data, contr, model = bst, top_n = 12) # Summary plot
# multiclass example - plots for each class separately:
nclass <- 3
@@ -149,6 +150,7 @@ xgb.plot.shap(x, model = mbst, trees = trees0 + 1, target_class = 1, top_n = 4,
n_col = 2, col = col, pch = 16, pch_NA = 17)
xgb.plot.shap(x, model = mbst, trees = trees0 + 2, target_class = 2, top_n = 4,
n_col = 2, col = col, pch = 16, pch_NA = 17)
xgb.ggplot.shap.summary(x, model = mbst, target_class = 0, top_n = 4) # Summary plot
}
\references{


@@ -0,0 +1,78 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.ggplot.R, R/xgb.plot.shap.R
\name{xgb.ggplot.shap.summary}
\alias{xgb.ggplot.shap.summary}
\alias{xgb.plot.shap.summary}
\title{SHAP contribution dependency summary plot}
\usage{
xgb.ggplot.shap.summary(
data,
shap_contrib = NULL,
features = NULL,
top_n = 10,
model = NULL,
trees = NULL,
target_class = NULL,
approxcontrib = FALSE,
subsample = NULL
)
xgb.plot.shap.summary(
data,
shap_contrib = NULL,
features = NULL,
top_n = 10,
model = NULL,
trees = NULL,
target_class = NULL,
approxcontrib = FALSE,
subsample = NULL
)
}
\arguments{
\item{data}{data as a \code{matrix} or \code{dgCMatrix}.}
\item{shap_contrib}{a matrix of SHAP contributions that was computed earlier for the above
\code{data}. When it is NULL, it is computed internally using \code{model} and \code{data}.}
\item{features}{a vector of either column indices or of feature names to plot. When it is NULL,
feature importance is calculated, and \code{top_n} high ranked features are taken.}
\item{top_n}{when \code{features} is NULL, top_n [1, 100] most important features in a model are taken.}
\item{model}{an \code{xgb.Booster} model. It has to be provided when either \code{shap_contrib}
or \code{features} is missing.}
\item{trees}{passed to \code{\link{xgb.importance}} when \code{features = NULL}.}
\item{target_class}{is only relevant for multiclass models. When it is set to a 0-based class index,
only SHAP contributions for that specific class are used.
If it is not set, SHAP importances are averaged over all classes.}
\item{approxcontrib}{passed to \code{\link{predict.xgb.Booster}} when \code{shap_contrib = NULL}.}
\item{subsample}{a random fraction of data points to use for plotting. When it is NULL,
it is set so that up to 100K data points are used.}
}
\value{
A \code{ggplot2} object.
}
\description{
Compare SHAP contributions of different features.
}
\details{
A point plot (each point representing one sample from \code{data}) is
produced for each feature, with the points plotted on the SHAP value axis.
Each point (observation) is coloured based on its feature value. The plot
hence allows us to see which features have a negative / positive contribution
to the model prediction, and whether the contribution differs for larger
or smaller values of the feature. We effectively try to replicate the
\code{summary_plot} function from https://github.com/slundberg/shap.
}
\examples{
# See \code{\link{xgb.plot.shap}}.
}
\seealso{
\code{\link{xgb.plot.shap}}, \code{\link{xgb.ggplot.shap.summary}},
\url{https://github.com/slundberg/shap}
}
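
For multiclass boosters the target_class argument selects a single 0-based class, as in this sketch (x and mbst as in the multiclass example of xgb.plot.shap):

```
# contributions for class 0 only
xgb.ggplot.shap.summary(x, model = mbst, target_class = 0, top_n = 4)
# omitting target_class merges absolute contributions across all classes
xgb.ggplot.shap.summary(x, model = mbst, top_n = 4)
```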


@@ -0,0 +1,55 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.plot.shap.R
\name{xgb.shap.data}
\alias{xgb.shap.data}
\title{Prepare data for SHAP plots. To be used in xgb.plot.shap, xgb.plot.shap.summary, etc.
Internal utility function.}
\usage{
xgb.shap.data(
data,
shap_contrib = NULL,
features = NULL,
top_n = 1,
model = NULL,
trees = NULL,
target_class = NULL,
approxcontrib = FALSE,
subsample = NULL,
max_observations = 1e+05
)
}
\arguments{
\item{data}{data as a \code{matrix} or \code{dgCMatrix}.}
\item{shap_contrib}{a matrix of SHAP contributions that was computed earlier for the above
\code{data}. When it is NULL, it is computed internally using \code{model} and \code{data}.}
\item{features}{a vector of either column indices or of feature names to plot. When it is NULL,
feature importance is calculated, and \code{top_n} high ranked features are taken.}
\item{top_n}{when \code{features} is NULL, top_n [1, 100] most important features in a model are taken.}
\item{model}{an \code{xgb.Booster} model. It has to be provided when either \code{shap_contrib}
or \code{features} is missing.}
\item{trees}{passed to \code{\link{xgb.importance}} when \code{features = NULL}.}
\item{target_class}{is only relevant for multiclass models. When it is set to a 0-based class index,
only SHAP contributions for that specific class are used.
If it is not set, SHAP importances are averaged over all classes.}
\item{approxcontrib}{passed to \code{\link{predict.xgb.Booster}} when \code{shap_contrib = NULL}.}
\item{subsample}{a random fraction of data points to use for plotting. When it is NULL,
it is set so that up to 100K data points are used.}
}
\value{
A list containing: 'data', a matrix containing sample observations
and their feature values; 'shap_contrib', a matrix containing the SHAP contribution
values for these observations.
}
\description{
Prepare data for SHAP plots. To be used in xgb.plot.shap, xgb.plot.shap.summary, etc.
Internal utility function.
}
\keyword{internal}
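
The subsample argument behaves as sketched below (mirroring the package tests; `bst` and `sparse_matrix` are assumed trained and defined):

```
data_list <- xgboost:::xgb.shap.data(data = sparse_matrix, model = bst,
                                     top_n = 2, subsample = 0.8)
# a random 80% of rows is kept, without replacement
nrow(data_list$data) == as.integer(0.8 * nrow(sparse_matrix))  # TRUE
# with subsample = NULL, up to max_observations (1e5) rows are used
```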


@@ -215,16 +215,18 @@ User may set one or several \code{eval_metric} parameters.
Note that when using a customized metric, only this single metric can be used.
The following is the list of built-in metrics for which Xgboost provides optimized implementation:
\itemize{
\item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
\item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
\item \code{mlogloss} multiclass logloss. \url{http://wiki.fast.ai/index.php/Log_Loss}
\item \code{rmse} root mean square error. \url{https://en.wikipedia.org/wiki/Root_mean_square_error}
\item \code{logloss} negative log-likelihood. \url{https://en.wikipedia.org/wiki/Log-likelihood}
\item \code{mlogloss} multiclass logloss. \url{https://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html}
\item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
A different threshold (e.g., 0.7) can be specified as "error@0.7".
\item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
\item \code{auc} Area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve} for ranking evaluation.
\item \code{mae} Mean absolute error
\item \code{mape} Mean absolute percentage error
\item \code{auc} Area under the curve. \url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve} for ranking evaluation.
\item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
\item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
\item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{https://en.wikipedia.org/wiki/NDCG}
}
The following callbacks are automatically created when certain parameters are set:
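
Several eval_metric entries may be requested at once by repeating the key in the parameter list, as the test suite does (dtrain and dtest are assumed xgb.DMatrix objects):

```
param <- list(objective = "binary:logistic",
              eval_metric = "logloss", eval_metric = "auc")
bst <- xgb.train(param, dtrain, nrounds = 5,
                 watchlist = list(train = dtrain, test = dtest))
# both train-logloss/test-logloss and train-auc/test-auc are reported
```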


@@ -8,7 +8,7 @@ CXX_STD = CXX14
XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\
-DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_
-DRABIT_CUSTOMIZE_MSG_
# disable the use of thread_local for 32 bit windows:
ifeq ($(R_OSTYPE)$(WIN),windows)
@@ -19,6 +19,7 @@ $(foreach v, $(XGB_RFLAGS), $(warning $(v)))
PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= @OPENMP_CXXFLAGS@ @ENDIAN_FLAG@ -pthread
PKG_LIBS = @OPENMP_CXXFLAGS@ @OPENMP_LIB@ @ENDIAN_FLAG@ @BACKTRACE_LIB@ -pthread
OBJECTS= ./xgboost_R.o ./xgboost_custom.o ./xgboost_assert.o ./init.o\
$(PKGROOT)/amalgamation/xgboost-all0.o $(PKGROOT)/amalgamation/dmlc-minimum0.o\
$(PKGROOT)/rabit/src/engine_empty.o $(PKGROOT)/rabit/src/c_api.o
OBJECTS= ./xgboost_R.o ./xgboost_custom.o ./xgboost_assert.o ./init.o \
$(PKGROOT)/amalgamation/xgboost-all0.o $(PKGROOT)/amalgamation/dmlc-minimum0.o \
$(PKGROOT)/rabit/src/engine.o $(PKGROOT)/rabit/src/c_api.o \
$(PKGROOT)/rabit/src/allreduce_base.o


@@ -20,7 +20,7 @@ CXX_STD = CXX14
XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\
-DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_
-DRABIT_CUSTOMIZE_MSG_
# disable the use of thread_local for 32 bit windows:
ifeq ($(R_OSTYPE)$(WIN),windows)
@@ -31,8 +31,9 @@ $(foreach v, $(XGB_RFLAGS), $(warning $(v)))
PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= $(SHLIB_OPENMP_CXXFLAGS) $(SHLIB_PTHREAD_FLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(SHLIB_PTHREAD_FLAGS)
OBJECTS= ./xgboost_R.o ./xgboost_custom.o ./xgboost_assert.o ./init.o\
$(PKGROOT)/amalgamation/xgboost-all0.o $(PKGROOT)/amalgamation/dmlc-minimum0.o\
$(PKGROOT)/rabit/src/engine_empty.o $(PKGROOT)/rabit/src/c_api.o
OBJECTS= ./xgboost_R.o ./xgboost_custom.o ./xgboost_assert.o ./init.o \
$(PKGROOT)/amalgamation/xgboost-all0.o $(PKGROOT)/amalgamation/dmlc-minimum0.o \
$(PKGROOT)/rabit/src/engine.o $(PKGROOT)/rabit/src/c_api.o \
$(PKGROOT)/rabit/src/allreduce_base.o
$(OBJECTS) : xgblib


@@ -13,23 +13,6 @@ void CustomLogMessage::Log(const std::string& msg) {
}
} // namespace dmlc
// implements rabit error handling.
extern "C" {
void XGBoostAssert_R(int exp, const char *fmt, ...);
void XGBoostCheck_R(int exp, const char *fmt, ...);
}
namespace rabit {
namespace utils {
extern "C" {
void (*Printf)(const char *fmt, ...) = Rprintf;
void (*Assert)(int exp, const char *fmt, ...) = XGBoostAssert_R;
void (*Check)(int exp, const char *fmt, ...) = XGBoostCheck_R;
void (*Error)(const char *fmt, ...) = error;
}
}
}
namespace xgboost {
ConsoleLogger::~ConsoleLogger() {
if (cur_verbosity_ == LogVerbosity::kIgnore ||


@@ -1,10 +0,0 @@
model_generator_metadata <- function() {
return (list(
kRounds = 2,
kRows = 1000,
kCols = 4,
kForests = 2,
kMaxDepth = 2,
kClasses = 3
))
}


@@ -2,10 +2,16 @@
# of saved model files from XGBoost version 0.90 and 1.0.x.
library(xgboost)
library(Matrix)
source('./generate_models_params.R')
set.seed(0)
metadata <- model_generator_metadata()
metadata <- list(
kRounds = 2,
kRows = 1000,
kCols = 4,
kForests = 2,
kMaxDepth = 2,
kClasses = 3
)
X <- Matrix(data = rnorm(metadata$kRows * metadata$kCols), nrow = metadata$kRows,
ncol = metadata$kCols, sparse = TRUE)
w <- runif(metadata$kRows)
@@ -46,11 +52,16 @@ generate_logistic_model <- function () {
y <- sample(0:1, size = metadata$kRows, replace = TRUE)
stopifnot(max(y) == 1, min(y) == 0)
data <- xgb.DMatrix(X, label = y, weight = w)
params <- list(tree_method = 'hist', num_parallel_tree = metadata$kForests,
max_depth = metadata$kMaxDepth, objective = 'binary:logistic')
booster <- xgb.train(params, data, nrounds = metadata$kRounds)
save_booster(booster, 'logit')
objective <- c('binary:logistic', 'binary:logitraw')
name <- c('logit', 'logitraw')
for (i in seq_len(length(objective))) {
data <- xgb.DMatrix(X, label = y, weight = w)
params <- list(tree_method = 'hist', num_parallel_tree = metadata$kForests,
max_depth = metadata$kMaxDepth, objective = objective[i])
booster <- xgb.train(params, data, nrounds = metadata$kRounds)
save_booster(booster, name[i])
}
}
generate_classification_model <- function () {


@@ -6,21 +6,21 @@ my_linters <- list(
assignment_linter = lintr::assignment_linter,
closed_curly_linter = lintr::closed_curly_linter,
commas_linter = lintr::commas_linter,
# commented_code_linter = lintr::commented_code_linter,
equals_na = lintr::equals_na_linter,
infix_spaces_linter = lintr::infix_spaces_linter,
line_length_linter = lintr::line_length_linter,
no_tab_linter = lintr::no_tab_linter,
object_usage_linter = lintr::object_usage_linter,
# snake_case_linter = lintr::snake_case_linter,
# multiple_dots_linter = lintr::multiple_dots_linter,
object_length_linter = lintr::object_length_linter,
open_curly_linter = lintr::open_curly_linter,
# single_quotes_linter = lintr::single_quotes_linter,
semicolon = lintr::semicolon_terminator_linter,
seq = lintr::seq_linter,
spaces_inside_linter = lintr::spaces_inside_linter,
spaces_left_parentheses_linter = lintr::spaces_left_parentheses_linter,
trailing_blank_lines_linter = lintr::trailing_blank_lines_linter,
trailing_whitespace_linter = lintr::trailing_whitespace_linter,
true_false = lintr::T_and_F_symbol_linter
true_false = lintr::T_and_F_symbol_linter,
unneeded_concatenation = lintr::unneeded_concatenation_linter
)
results <- lapply(


@@ -17,7 +17,8 @@ test_that("train and predict binary classification", {
nrounds <- 2
expect_output(
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = nrounds, objective = "binary:logistic")
eta = 1, nthread = 2, nrounds = nrounds, objective = "binary:logistic",
eval_metric = "error")
, "train-error")
expect_equal(class(bst), "xgb.Booster")
expect_equal(bst$niter, nrounds)
@@ -122,7 +123,7 @@ test_that("train and predict softprob", {
expect_output(
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 3, eta = 0.5, nthread = 2, nrounds = 5,
objective = "multi:softprob", num_class = 3)
objective = "multi:softprob", num_class = 3, eval_metric = "merror")
, "train-merror")
expect_false(is.null(bst$evaluation_log))
expect_lt(bst$evaluation_log[, min(train_merror)], 0.025)
@@ -150,7 +151,7 @@ test_that("train and predict softmax", {
expect_output(
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 3, eta = 0.5, nthread = 2, nrounds = 5,
objective = "multi:softmax", num_class = 3)
objective = "multi:softmax", num_class = 3, eval_metric = "merror")
, "train-merror")
expect_false(is.null(bst$evaluation_log))
expect_lt(bst$evaluation_log[, min(train_merror)], 0.025)
@@ -167,7 +168,7 @@ test_that("train and predict RF", {
lb <- train$label
# single iteration
bst <- xgboost(data = train$data, label = lb, max_depth = 5,
nthread = 2, nrounds = 1, objective = "binary:logistic",
nthread = 2, nrounds = 1, objective = "binary:logistic", eval_metric = "error",
num_parallel_tree = 20, subsample = 0.6, colsample_bytree = 0.1)
expect_equal(bst$niter, 1)
expect_equal(xgb.ntree(bst), 20)
@@ -193,7 +194,8 @@ test_that("train and predict RF with softprob", {
set.seed(11)
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 3, eta = 0.9, nthread = 2, nrounds = nrounds,
objective = "multi:softprob", num_class = 3, verbose = 0,
objective = "multi:softprob", eval_metric = "merror",
num_class = 3, verbose = 0,
num_parallel_tree = 4, subsample = 0.5, colsample_bytree = 0.5)
expect_equal(bst$niter, 15)
expect_equal(xgb.ntree(bst), 15 * 3 * 4)
@@ -245,11 +247,12 @@ test_that("training continuation works", {
expect_equal(bst$raw, bst2$raw)
expect_equal(dim(bst2$evaluation_log), c(2, 2))
# test continuing from a model in file
xgb.save(bst1, "xgboost.model")
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0, xgb_model = "xgboost.model")
xgb.save(bst1, "xgboost.json")
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0, xgb_model = "xgboost.json")
if (!windows_flag && !solaris_flag)
expect_equal(bst$raw, bst2$raw)
expect_equal(dim(bst2$evaluation_log), c(2, 2))
file.remove("xgboost.json")
})
test_that("model serialization works", {
@@ -273,7 +276,7 @@ test_that("xgb.cv works", {
expect_output(
cv <- xgb.cv(data = train$data, label = train$label, max_depth = 2, nfold = 5,
eta = 1., nthread = 2, nrounds = 2, objective = "binary:logistic",
verbose = TRUE)
eval_metric = "error", verbose = TRUE)
, "train-error:")
expect_is(cv, 'xgb.cv.synchronous')
expect_false(is.null(cv$evaluation_log))
@@ -298,7 +301,7 @@ test_that("xgb.cv works with stratified folds", {
eta = 1., nthread = 2, nrounds = 2, objective = "binary:logistic",
verbose = TRUE, stratified = TRUE)
# Stratified folds should result in a different evaluation logs
expect_true(all(cv$evaluation_log[, test_error_mean] != cv2$evaluation_log[, test_error_mean]))
expect_true(all(cv$evaluation_log[, test_logloss_mean] != cv2$evaluation_log[, test_logloss_mean]))
})
test_that("train and predict with non-strict classes", {


@@ -2,6 +2,7 @@
require(xgboost)
require(data.table)
require(titanic)
context("callbacks")
@@ -26,7 +27,8 @@ watchlist <- list(train = dtrain, test = dtest)
err <- function(label, pr) sum((pr > 0.5) != label) / length(label)
param <- list(objective = "binary:logistic", max_depth = 2, nthread = 2)
param <- list(objective = "binary:logistic", eval_metric = "error",
max_depth = 2, nthread = 2)
test_that("cb.print.evaluation works as expected", {
@@ -105,7 +107,8 @@ test_that("cb.evaluation.log works as expected", {
})
param <- list(objective = "binary:logistic", max_depth = 4, nthread = 2)
param <- list(objective = "binary:logistic", eval_metric = "error",
max_depth = 4, nthread = 2)
test_that("can store evaluation_log without printing", {
expect_silent(
@@ -173,16 +176,16 @@ test_that("cb.reset.parameters works as expected", {
})
test_that("cb.save.model works as expected", {
files <- c('xgboost_01.model', 'xgboost_02.model', 'xgboost.model')
files <- c('xgboost_01.json', 'xgboost_02.json', 'xgboost.json')
for (f in files) if (file.exists(f)) file.remove(f)
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist, eta = 1, verbose = 0,
save_period = 1, save_name = "xgboost_%02d.model")
expect_true(file.exists('xgboost_01.model'))
expect_true(file.exists('xgboost_02.model'))
b1 <- xgb.load('xgboost_01.model')
save_period = 1, save_name = "xgboost_%02d.json")
expect_true(file.exists('xgboost_01.json'))
expect_true(file.exists('xgboost_02.json'))
b1 <- xgb.load('xgboost_01.json')
expect_equal(xgb.ntree(b1), 1)
b2 <- xgb.load('xgboost_02.model')
b2 <- xgb.load('xgboost_02.json')
expect_equal(xgb.ntree(b2), 2)
xgb.config(b2) <- xgb.config(bst)
@@ -191,9 +194,9 @@ test_that("cb.save.model works as expected", {
# save_period = 0 saves the last iteration's model
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist, eta = 1, verbose = 0,
save_period = 0)
expect_true(file.exists('xgboost.model'))
b2 <- xgb.load('xgboost.model')
save_period = 0, save_name = 'xgboost.json')
expect_true(file.exists('xgboost.json'))
b2 <- xgb.load('xgboost.json')
xgb.config(b2) <- xgb.config(bst)
expect_equal(bst$raw, b2$raw)
@@ -236,7 +239,7 @@ test_that("early stopping xgb.train works", {
test_that("early stopping using a specific metric works", {
set.seed(11)
expect_output(
bst <- xgb.train(param, dtrain, nrounds = 20, watchlist, eta = 0.6,
bst <- xgb.train(param[-2], dtrain, nrounds = 20, watchlist, eta = 0.6,
eval_metric = "logloss", eval_metric = "auc",
callbacks = list(cb.early.stop(stopping_rounds = 3, maximize = FALSE,
metric_name = 'test_logloss')))
@@ -252,6 +255,26 @@ test_that("early stopping using a specific metric works", {
expect_equal(logloss_log, logloss_pred, tolerance = 1e-5)
})
test_that("early stopping works with titanic", {
# This test was inspired by https://github.com/dmlc/xgboost/issues/5935
# It catches possible issues on noLD R
titanic <- titanic::titanic_train
titanic$Pclass <- as.factor(titanic$Pclass)
dtx <- model.matrix(~ 0 + ., data = titanic[, c("Pclass", "Sex")])
dty <- titanic$Survived
xgboost::xgboost(
data = dtx,
label = dty,
objective = "binary:logistic",
eval_metric = "auc",
nrounds = 100,
early_stopping_rounds = 3
)
expect_true(TRUE) # should not crash
})
test_that("early stopping xgb.cv works", {
set.seed(11)
expect_output(


@@ -47,7 +47,7 @@ test_that("custom objective with early stop works", {
bst <- xgb.train(param, dtrain, 10, watchlist)
expect_equal(class(bst), "xgb.Booster")
train_log <- bst$evaluation_log$train_error
expect_true(all(diff(train_log)) <= 0)
expect_true(all(diff(train_log) <= 0))
})
test_that("custom objective using DMatrix attr works", {


@@ -64,8 +64,8 @@ test_that("xgb.DMatrix: getinfo & setinfo", {
expect_true(setinfo(dtest, 'group', c(50, 50)))
expect_error(setinfo(dtest, 'group', test_label))
# providing character values will give a warning
expect_warning(setinfo(dtest, 'weight', rep('a', nrow(test_data))))
# providing character values will give an error
expect_error(setinfo(dtest, 'weight', rep('a', nrow(test_data))))
# any other label should error
expect_error(setinfo(dtest, 'asdf', test_label))
@@ -99,7 +99,7 @@ test_that("xgb.DMatrix: colnames", {
dtest <- xgb.DMatrix(test_data, label = test_label)
expect_equal(colnames(dtest), colnames(test_data))
expect_error(colnames(dtest) <- 'asdf')
new_names <- make.names(1:ncol(test_data))
new_names <- make.names(seq_len(ncol(test_data)))
expect_silent(colnames(dtest) <- new_names)
expect_equal(colnames(dtest), new_names)
expect_silent(colnames(dtest) <- NULL)


@@ -9,7 +9,8 @@ test_that("train and prediction when gctorture is on", {
test <- agaricus.test
gctorture(TRUE)
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
pred <- predict(bst, test$data)
gctorture(FALSE)
expect_length(pred, length(test$label))
})


@@ -8,7 +8,7 @@ test_that("gblinear works", {
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
param <- list(objective = "binary:logistic", booster = "gblinear",
param <- list(objective = "binary:logistic", eval_metric = "error", booster = "gblinear",
nthread = 2, eta = 0.8, alpha = 0.0001, lambda = 0.0001)
watchlist <- list(eval = dtest, train = dtrain)


@@ -174,7 +174,7 @@ test_that("SHAPs sum to predictions, with or without DART", {
expect_equal(rowSums(shap), pred, tol = tol)
expect_equal(apply(shapi, 1, sum), pred, tol = tol)
for (i in 1 : nrow(d))
for (i in seq_len(nrow(d)))
for (f in list(rowSums, colSums))
expect_equal(f(shapi[i, , ]), shap[i, ], tol = tol)
}
@@ -335,8 +335,8 @@ test_that("xgb.model.dt.tree and xgb.importance work with a single split model",
})
test_that("xgb.plot.tree works with and without feature names", {
xgb.plot.tree(feature_names = feature.names, model = bst.Tree)
xgb.plot.tree(model = bst.Tree)
expect_silent(xgb.plot.tree(feature_names = feature.names, model = bst.Tree))
expect_silent(xgb.plot.tree(model = bst.Tree))
})
test_that("xgb.plot.multi.trees works with and without feature names", {
@@ -351,11 +351,47 @@ test_that("xgb.plot.deepness works", {
xgb.ggplot.deepness(model = bst.Tree)
})
test_that("xgb.shap.data works when top_n is provided", {
data_list <- xgb.shap.data(data = sparse_matrix, model = bst.Tree, top_n = 2)
expect_equal(names(data_list), c("data", "shap_contrib"))
expect_equal(NCOL(data_list$data), 2)
expect_equal(NCOL(data_list$shap_contrib), 2)
expect_equal(NROW(data_list$data), NROW(data_list$shap_contrib))
expect_gt(length(colnames(data_list$data)), 0)
expect_gt(length(colnames(data_list$shap_contrib)), 0)
# for multiclass without target class provided
data_list <- xgb.shap.data(data = as.matrix(iris[, -5]), model = mbst.Tree, top_n = 2)
expect_equal(dim(data_list$shap_contrib), c(nrow(iris), 2))
# for multiclass with target class provided
data_list <- xgb.shap.data(data = as.matrix(iris[, -5]), model = mbst.Tree, top_n = 2, target_class = 0)
expect_equal(dim(data_list$shap_contrib), c(nrow(iris), 2))
})
test_that("xgb.shap.data works with subsampling", {
data_list <- xgb.shap.data(data = sparse_matrix, model = bst.Tree, top_n = 2, subsample = 0.8)
expect_equal(NROW(data_list$data), as.integer(0.8 * nrow(sparse_matrix)))
expect_equal(NROW(data_list$data), NROW(data_list$shap_contrib))
})
test_that("prepare.ggplot.shap.data works", {
data_list <- xgb.shap.data(data = sparse_matrix, model = bst.Tree, top_n = 2)
plot_data <- prepare.ggplot.shap.data(data_list, normalize = TRUE)
expect_s3_class(plot_data, "data.frame")
expect_equal(names(plot_data), c("id", "feature", "feature_value", "shap_value"))
expect_s3_class(plot_data$feature, "factor")
# Each observation should have 1 row for each feature
expect_equal(nrow(plot_data), nrow(sparse_matrix) * 2)
})
test_that("xgb.plot.shap works", {
sh <- xgb.plot.shap(data = sparse_matrix, model = bst.Tree, top_n = 2, col = 4)
expect_equal(names(sh), c("data", "shap_contrib"))
expect_equal(NCOL(sh$data), 2)
expect_equal(NCOL(sh$shap_contrib), 2)
})
test_that("xgb.plot.shap.summary works", {
expect_silent(xgb.plot.shap.summary(data = sparse_matrix, model = bst.Tree, top_n = 2))
expect_silent(xgb.ggplot.shap.summary(data = sparse_matrix, model = bst.Tree, top_n = 2))
})
test_that("check.deprecation works", {
@@ -374,3 +410,26 @@ test_that("check.deprecation works", {
, "\'dumm\' was partially matched to \'dummy\'")
expect_equal(res, list(a = 1, DUMMY = 22))
})
test_that('convert.labels works', {
y <- c(0, 1, 0, 0, 1)
for (objective in c('binary:logistic', 'binary:logitraw', 'binary:hinge')) {
res <- xgboost:::convert.labels(y, objective_name = objective)
expect_s3_class(res, 'factor')
expect_equal(res, factor(res))
}
y <- c(0, 1, 3, 2, 1, 4)
for (objective in c('multi:softmax', 'multi:softprob', 'rank:pairwise', 'rank:ndcg',
'rank:map')) {
res <- xgboost:::convert.labels(y, objective_name = objective)
expect_s3_class(res, 'factor')
expect_equal(res, factor(res))
}
y <- c(1.2, 3.0, -1.0, 10.0)
for (objective in c('reg:squarederror', 'reg:squaredlogerror', 'reg:logistic',
'reg:pseudohubererror', 'count:poisson', 'survival:cox', 'survival:aft',
'reg:gamma', 'reg:tweedie')) {
res <- xgboost:::convert.labels(y, objective_name = objective)
expect_equal(class(res), 'numeric')
}
})


@@ -1,10 +1,16 @@
require(xgboost)
require(jsonlite)
source('../generate_models_params.R')
context("Models from previous versions of XGBoost can be loaded")
metadata <- model_generator_metadata()
metadata <- list(
kRounds = 2,
kRows = 1000,
kCols = 4,
kForests = 2,
kMaxDepth = 2,
kClasses = 3
)
run_model_param_check <- function (config) {
testthat::expect_equal(config$learner$learner_model_param$num_feature, '4')
@@ -33,6 +39,10 @@ run_booster_check <- function (booster, name) {
testthat::expect_equal(config$learner$learner_train_param$objective, 'multi:softmax')
testthat::expect_equal(as.numeric(config$learner$learner_model_param$num_class),
metadata$kClasses)
} else if (name == 'logitraw') {
testthat::expect_equal(get_num_tree(booster), metadata$kForests * metadata$kRounds)
testthat::expect_equal(as.numeric(config$learner$learner_model_param$num_class), 0)
testthat::expect_equal(config$learner$learner_train_param$objective, 'binary:logitraw')
} else if (name == 'logit') {
testthat::expect_equal(get_num_tree(booster), metadata$kForests * metadata$kRounds)
testthat::expect_equal(as.numeric(config$learner$learner_model_param$num_class), 0)
@@ -55,7 +65,7 @@ test_that("Models from previous versions of XGBoost can be loaded", {
zipfile <- file.path(getwd(), file_name)
model_dir <- file.path(getwd(), 'models')
download.file(paste('https://', bucket, '.s3-', region, '.amazonaws.com/', file_name, sep = ''),
destfile = zipfile, mode = 'wb')
destfile = zipfile, mode = 'wb', quiet = TRUE)
unzip(zipfile, overwrite = TRUE)
pred_data <- xgb.DMatrix(matrix(c(0, 0, 0, 0), nrow = 1, ncol = 4))
@@ -66,13 +76,34 @@ test_that("Models from previous versions of XGBoost can be loaded", {
m <- regmatches(model_file, m)[[1]]
model_xgb_ver <- m[2]
name <- m[3]
is_rds <- endsWith(model_file, '.rds')
if (endsWith(model_file, '.rds')) {
booster <- readRDS(model_file)
} else {
booster <- xgb.load(model_file)
cpp_warning <- capture.output({
# Expect an R warning when a model is loaded from RDS and it was generated by version < 1.1.x
if (is_rds && compareVersion(model_xgb_ver, '1.1.1.1') < 0) {
booster <- readRDS(model_file)
expect_warning(predict(booster, newdata = pred_data))
expect_warning(run_booster_check(booster, name))
} else {
if (is_rds) {
booster <- readRDS(model_file)
} else {
booster <- xgb.load(model_file)
}
predict(booster, newdata = pred_data)
run_booster_check(booster, name)
}
})
if (compareVersion(model_xgb_ver, '1.0.0.0') < 0) {
# Expect a C++ warning when a model was generated in version < 1.0.x
m <- grepl(paste0('.*Loading model from XGBoost < 1\\.0\\.0, consider saving it again for ',
'improved compatibility.*'), cpp_warning, perl = TRUE)
expect_true(length(m) > 0 && all(m))
} else if (is_rds && model_xgb_ver == '1.1.1.1') {
# Expect a C++ warning when a model is loaded from RDS and it was generated by version 1.1.x
m <- grepl(paste0('.*Attempted to load internal configuration for a model file that was ',
'generated by a previous version of XGBoost.*'), cpp_warning, perl = TRUE)
expect_true(length(m) > 0 && all(m))
}
predict(booster, newdata = pred_data)
run_booster_check(booster, name)
})
})


@@ -57,7 +57,7 @@ To answer the question above we will convert *categorical* variables to `numeric
In this Vignette we will see how to transform a *dense* `data.frame` (*dense* = few zeroes in the matrix) with *categorical* variables to a very *sparse* matrix (*sparse* = lots of zero in the matrix) of `numeric` features.
The method we are going to see is usually called [one-hot encoding](http://en.wikipedia.org/wiki/One-hot).
The method we are going to see is usually called [one-hot encoding](https://en.wikipedia.org/wiki/One-hot).
The first step is to load `Arthritis` dataset in memory and wrap it with `data.table` package.
@@ -66,7 +66,7 @@ data(Arthritis)
df <- data.table(Arthritis, keep.rownames = FALSE)
```
> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance on large datasets is [best in class](http://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `Pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of **Xgboost** **R** package use `data.table`.
> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance on large datasets is [best in class](https://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `Pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of **Xgboost** **R** package use `data.table`.
The first thing we want to do is to have a look to the first few lines of the `data.table`:
@@ -137,8 +137,8 @@ levels(df[,Treatment])
#### Encoding categorical features
Next step, we will transform the categorical data to dummy variables.
Several encoding methods exist, e.g., [one-hot encoding](http://en.wikipedia.org/wiki/One-hot) is a common approach.
We will use the [dummy contrast coding](http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm#dummy) which is popular because it produces "full rank" encoding (also see [this blog post by Max Kuhn](http://appliedpredictivemodeling.com/blog/2013/10/23/the-basics-of-encoding-categorical-data-for-predictive-models)).
Several encoding methods exist, e.g., [one-hot encoding](https://en.wikipedia.org/wiki/One-hot) is a common approach.
We will use the [dummy contrast coding](https://stats.idre.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/) which is popular because it produces "full rank" encoding (also see [this blog post by Max Kuhn](http://appliedpredictivemodeling.com/blog/2013/10/23/the-basics-of-encoding-categorical-data-for-predictive-models)).
The purpose is to transform each value of each *categorical* feature into a *binary* feature `{0, 1}`.
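
In the vignette's Arthritis setting this encoding step looks roughly like the following (a sketch; the `- 1` drops the intercept so every factor level gets its own binary column):

```
library(Matrix)
sparse_matrix <- sparse.model.matrix(Improved ~ . - 1, data = df)
output_vector <- df[, Improved] == "Marked"  # binary target
```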
@@ -176,7 +176,7 @@ bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 4,
You can see some `train-error: 0.XXXXX` lines followed by a number. It decreases. Each line shows how well the model explains your data. Lower is better.
A model which fits too well may [overfit](http://en.wikipedia.org/wiki/Overfitting) (meaning it copies the past too closely and won't be as good at predicting the future).
A model which fits too well may [overfit](https://en.wikipedia.org/wiki/Overfitting) (meaning it copies the past too closely and won't be as good at predicting the future).
> Here you can see the numbers decrease until line 7 and then increase.
>
@@ -304,7 +304,7 @@ Linear model may not be that smart in this scenario.
Special Note: What about Random Forests™?
-----------------------------------------
As you may know, [Random Forests™](http://en.wikipedia.org/wiki/Random_forest) algorithm is cousin with boosting and both are part of the [ensemble learning](http://en.wikipedia.org/wiki/Ensemble_learning) family.
As you may know, [Random Forests™](https://en.wikipedia.org/wiki/Random_forest) algorithm is cousin with boosting and both are part of the [ensemble learning](https://en.wikipedia.org/wiki/Ensemble_learning) family.
Both train several decision trees on one dataset. The *main* difference is that in Random Forests™ the trees are independent, while in boosting tree `N+1` focuses its learning on the loss (i.e., on what has not been well modeled by tree `N`).
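
A hedged sketch of the contrast in XGBoost terms (parameter values borrowed from the package tests; the agaricus data is assumed loaded):

```
# Random Forests™ style: many independent trees in a single round
bst_rf <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
                  max_depth = 5, nthread = 2, nrounds = 1,
                  objective = "binary:logistic",
                  num_parallel_tree = 20, subsample = 0.6, colsample_bytree = 0.1)
# boosting instead grows trees sequentially: nrounds > 1, num_parallel_tree = 1
```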


@@ -24,7 +24,7 @@
author = "K. Bache and M. Lichman",
year = "2013",
title = "{UCI} Machine Learning Repository",
url = "http://archive.ics.uci.edu/ml",
url = "http://archive.ics.uci.edu/ml/",
institution = "University of California, Irvine, School of Information and Computer Sciences"
}


@@ -68,7 +68,7 @@ The version 0.4-2 is on CRAN, and you can install it by:
install.packages("xgboost")
```
Formerly available versions can be obtained from the CRAN [archive](https://cran.r-project.org/src/contrib/Archive/xgboost)
Formerly available versions can be obtained from the CRAN [archive](https://cran.r-project.org/src/contrib/Archive/xgboost/)
## Learning


@@ -1,13 +1,15 @@
<img src=https://raw.githubusercontent.com/dmlc/dmlc.github.io/master/img/logo-m/xgboost.png width=135/> eXtreme Gradient Boosting
===========
[![Build Status](https://xgboost-ci.net/job/xgboost/job/master/badge/icon?style=plastic)](https://xgboost-ci.net/blue/organizations/jenkins/xgboost/activity)
[![Build Status](https://xgboost-ci.net/job/xgboost/job/master/badge/icon)](https://xgboost-ci.net/blue/organizations/jenkins/xgboost/activity)
[![Build Status](https://img.shields.io/travis/dmlc/xgboost.svg?label=build&logo=travis&branch=master)](https://travis-ci.org/dmlc/xgboost)
[![Build Status](https://ci.appveyor.com/api/projects/status/5ypa8vaed6kpmli8?svg=true)](https://ci.appveyor.com/project/tqchen/xgboost)
[![XGBoost-CI](https://github.com/dmlc/xgboost/workflows/XGBoost-CI/badge.svg?branch=master)](https://github.com/dmlc/xgboost/actions)
[![Documentation Status](https://readthedocs.org/projects/xgboost/badge/?version=latest)](https://xgboost.readthedocs.org)
[![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE)
[![CRAN Status Badge](http://www.r-pkg.org/badges/version/xgboost)](http://cran.r-project.org/web/packages/xgboost)
[![PyPI version](https://badge.fury.io/py/xgboost.svg)](https://pypi.python.org/pypi/xgboost/)
[![Optuna](https://img.shields.io/badge/Optuna-integrated-blue)](https://optuna.org)
[![Twitter](https://img.shields.io/badge/@XGBoostProject--_.svg?style=social&logo=twitter)](https://twitter.com/XGBoostProject)
[Community](https://xgboost.ai/community) |
[Documentation](https://xgboost.readthedocs.org) |


@@ -12,5 +12,3 @@
#include "../dmlc-core/src/data.cc"
#include "../dmlc-core/src/io.cc"
#include "../dmlc-core/src/recordio.cc"


@@ -44,11 +44,11 @@
#if DMLC_ENABLE_STD_THREAD
#include "../src/data/sparse_page_dmatrix.cc"
#include "../src/data/sparse_page_source.cc"
#endif
// trees
#include "../src/tree/param.cc"
#include "../src/tree/split_evaluator.cc"
#include "../src/tree/tree_model.cc"
#include "../src/tree/tree_updater.cc"
#include "../src/tree/updater_colmaker.cc"
@@ -57,7 +57,6 @@
#include "../src/tree/updater_refresh.cc"
#include "../src/tree/updater_sync.cc"
#include "../src/tree/updater_histmaker.cc"
#include "../src/tree/updater_skmaker.cc"
#include "../src/tree/constraints.cc"
// linear
@@ -69,8 +68,10 @@
#include "../src/learner.cc"
#include "../src/logging.cc"
#include "../src/common/common.cc"
#include "../src/common/random.cc"
#include "../src/common/charconv.cc"
#include "../src/common/timer.cc"
#include "../src/common/quantile.cc"
#include "../src/common/host_device_vector.cc"
#include "../src/common/hist_util.cc"
#include "../src/common/json.cc"


@@ -1 +1 @@
@xgboost_VERSION_MAJOR@.@xgboost_VERSION_MINOR@.@xgboost_VERSION_PATCH@-SNAPSHOT
@xgboost_VERSION_MAJOR@.@xgboost_VERSION_MINOR@.@xgboost_VERSION_PATCH@


@@ -6,11 +6,11 @@ function(setup_rpackage_install_target rlib_target build_dir)
install(
DIRECTORY "${xgboost_SOURCE_DIR}/R-package"
DESTINATION "${build_dir}"
REGEX "src/*" EXCLUDE
REGEX "R-package/configure" EXCLUDE
PATTERN "src/*" EXCLUDE
PATTERN "R-package/configure" EXCLUDE
)
install(TARGETS ${rlib_target}
LIBRARY DESTINATION "${build_dir}/R-package/src/"
RUNTIME DESTINATION "${build_dir}/R-package/src/")
install(SCRIPT ${PROJECT_BINARY_DIR}/RPackageInstall.cmake)
endfunction()
endfunction()


@@ -6,24 +6,32 @@
# Add flags
macro(enable_sanitizer sanitizer)
if(${sanitizer} MATCHES "address")
find_package(ASan REQUIRED)
find_package(ASan)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=address")
link_libraries(${ASan_LIBRARY})
if (ASan_FOUND)
link_libraries(${ASan_LIBRARY})
endif (ASan_FOUND)
elseif(${sanitizer} MATCHES "thread")
find_package(TSan REQUIRED)
find_package(TSan)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=thread")
link_libraries(${TSan_LIBRARY})
if (TSan_FOUND)
link_libraries(${TSan_LIBRARY})
endif (TSan_FOUND)
elseif(${sanitizer} MATCHES "leak")
find_package(LSan REQUIRED)
find_package(LSan)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=leak")
link_libraries(${LSan_LIBRARY})
if (LSan_FOUND)
link_libraries(${LSan_LIBRARY})
endif (LSan_FOUND)
elseif(${sanitizer} MATCHES "undefined")
find_package(UBSan REQUIRED)
find_package(UBSan)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=undefined -fno-sanitize-recover=undefined")
link_libraries(${UBSan_LIBRARY})
if (UBSan_FOUND)
link_libraries(${UBSan_LIBRARY})
endif (UBSan_FOUND)
else()
message(FATAL_ERROR "Sanitizer ${sanitizer} not supported.")


@@ -54,23 +54,22 @@ endfunction(msvc_use_static_runtime)
# Set output directory of target, ignoring debug or release
function(set_output_directory target dir)
set_target_properties(${target} PROPERTIES
RUNTIME_OUTPUT_DIRECTORY ${dir}
RUNTIME_OUTPUT_DIRECTORY_DEBUG ${dir}
RUNTIME_OUTPUT_DIRECTORY_RELEASE ${dir}
RUNTIME_OUTPUT_DIRECTORY_RELWITHDEBINFO ${dir}
RUNTIME_OUTPUT_DIRECTORY_MINSIZEREL ${dir}
LIBRARY_OUTPUT_DIRECTORY ${dir}
LIBRARY_OUTPUT_DIRECTORY_DEBUG ${dir}
LIBRARY_OUTPUT_DIRECTORY_RELEASE ${dir}
LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO ${dir}
LIBRARY_OUTPUT_DIRECTORY_MINSIZEREL ${dir}
ARCHIVE_OUTPUT_DIRECTORY ${dir}
ARCHIVE_OUTPUT_DIRECTORY_DEBUG ${dir}
ARCHIVE_OUTPUT_DIRECTORY_RELEASE ${dir}
ARCHIVE_OUTPUT_DIRECTORY_RELWITHDEBINFO ${dir}
ARCHIVE_OUTPUT_DIRECTORY_MINSIZEREL ${dir}
)
set_target_properties(${target} PROPERTIES
RUNTIME_OUTPUT_DIRECTORY ${dir}
RUNTIME_OUTPUT_DIRECTORY_DEBUG ${dir}
RUNTIME_OUTPUT_DIRECTORY_RELEASE ${dir}
RUNTIME_OUTPUT_DIRECTORY_RELWITHDEBINFO ${dir}
RUNTIME_OUTPUT_DIRECTORY_MINSIZEREL ${dir}
LIBRARY_OUTPUT_DIRECTORY ${dir}
LIBRARY_OUTPUT_DIRECTORY_DEBUG ${dir}
LIBRARY_OUTPUT_DIRECTORY_RELEASE ${dir}
LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO ${dir}
LIBRARY_OUTPUT_DIRECTORY_MINSIZEREL ${dir}
ARCHIVE_OUTPUT_DIRECTORY ${dir}
ARCHIVE_OUTPUT_DIRECTORY_DEBUG ${dir}
ARCHIVE_OUTPUT_DIRECTORY_RELEASE ${dir}
ARCHIVE_OUTPUT_DIRECTORY_RELWITHDEBINFO ${dir}
ARCHIVE_OUTPUT_DIRECTORY_MINSIZEREL ${dir})
endfunction(set_output_directory)
# Set a default build type to release if none was specified
@@ -91,7 +90,9 @@ function(format_gencode_flags flags out)
endif()
# Set up architecture flags
if(NOT flags)
if(CUDA_VERSION VERSION_GREATER_EQUAL "10.0")
if (CUDA_VERSION VERSION_GREATER_EQUAL "11.0")
set(flags "35;50;52;60;61;70;75;80")
elseif(CUDA_VERSION VERSION_GREATER_EQUAL "10.0")
set(flags "35;50;52;60;61;70;75")
elseif(CUDA_VERSION VERSION_GREATER_EQUAL "9.0")
set(flags "35;50;52;60;61;70")
@@ -99,15 +100,25 @@ function(format_gencode_flags flags out)
set(flags "35;50;52;60;61")
endif()
endif()
# Generate SASS
foreach(ver ${flags})
set(${out} "${${out}}--generate-code=arch=compute_${ver},code=sm_${ver};")
endforeach()
# Generate PTX for last architecture
list(GET flags -1 ver)
set(${out} "${${out}}--generate-code=arch=compute_${ver},code=compute_${ver};")
set(${out} "${${out}}" PARENT_SCOPE)
if (CMAKE_VERSION VERSION_GREATER_EQUAL "3.18")
cmake_policy(SET CMP0104 NEW)
foreach(ver ${flags})
set(CMAKE_CUDA_ARCHITECTURES "${ver}-real;${ver}-virtual;${CMAKE_CUDA_ARCHITECTURES}")
endforeach()
set(CMAKE_CUDA_ARCHITECTURES "${CMAKE_CUDA_ARCHITECTURES}" PARENT_SCOPE)
message(STATUS "CMAKE_CUDA_ARCHITECTURES: ${CMAKE_CUDA_ARCHITECTURES}")
else()
# Generate SASS
foreach(ver ${flags})
set(${out} "${${out}}--generate-code=arch=compute_${ver},code=sm_${ver};")
endforeach()
# Generate PTX for last architecture
list(GET flags -1 ver)
set(${out} "${${out}}--generate-code=arch=compute_${ver},code=compute_${ver};")
set(${out} "${${out}}" PARENT_SCOPE)
message(STATUS "CUDA GEN_CODE: ${GEN_CODE}")
endif (CMAKE_VERSION VERSION_GREATER_EQUAL "3.18")
endfunction(format_gencode_flags flags)
macro(enable_nvtx target)
@@ -116,3 +127,56 @@ macro(enable_nvtx target)
target_link_libraries(${target} PRIVATE "${NVTX_LIBRARY}")
target_compile_definitions(${target} PRIVATE -DXGBOOST_USE_NVTX=1)
endmacro()
# Set CUDA related flags to target. Must be used after code `format_gencode_flags`.
function(xgboost_set_cuda_flags target)
find_package(OpenMP REQUIRED)
target_link_libraries(${target} PUBLIC OpenMP::OpenMP_CXX)
target_compile_options(${target} PRIVATE
$<$<COMPILE_LANGUAGE:CUDA>:--expt-extended-lambda>
$<$<COMPILE_LANGUAGE:CUDA>:--expt-relaxed-constexpr>
$<$<COMPILE_LANGUAGE:CUDA>:${GEN_CODE}>
$<$<COMPILE_LANGUAGE:CUDA>:-Xcompiler=${OpenMP_CXX_FLAGS}>)
if (CMAKE_VERSION VERSION_GREATER_EQUAL "3.18")
set_property(TARGET ${target} PROPERTY CUDA_ARCHITECTURES ${CMAKE_CUDA_ARCHITECTURES})
endif (CMAKE_VERSION VERSION_GREATER_EQUAL "3.18")
if (USE_DEVICE_DEBUG)
target_compile_options(${target} PRIVATE
$<$<AND:$<CONFIG:DEBUG>,$<COMPILE_LANGUAGE:CUDA>>:-G;-src-in-ptx>)
else (USE_DEVICE_DEBUG)
target_compile_options(${target} PRIVATE
$<$<COMPILE_LANGUAGE:CUDA>:-lineinfo>)
endif (USE_DEVICE_DEBUG)
if (USE_NVTX)
enable_nvtx(${target})
endif (USE_NVTX)
target_compile_definitions(${target} PRIVATE -DXGBOOST_USE_CUDA=1 -DTHRUST_IGNORE_CUB_VERSION_CHECK=1)
target_include_directories(${target} PRIVATE ${xgboost_SOURCE_DIR}/cub/)
if (MSVC)
target_compile_options(${target} PRIVATE
$<$<COMPILE_LANGUAGE:CUDA>:-Xcompiler=/utf-8>)
endif (MSVC)
set_target_properties(${target} PROPERTIES
CUDA_STANDARD 14
CUDA_STANDARD_REQUIRED ON
CUDA_SEPARABLE_COMPILATION OFF)
if (HIDE_CXX_SYMBOLS)
target_compile_options(${target} PRIVATE
$<$<COMPILE_LANGUAGE:CUDA>:-Xcompiler=-fvisibility=hidden>)
endif (HIDE_CXX_SYMBOLS)
if (USE_NCCL)
find_package(Nccl REQUIRED)
target_include_directories(${target} PRIVATE ${NCCL_INCLUDE_DIR})
target_compile_definitions(${target} PRIVATE -DXGBOOST_USE_NCCL=1)
target_link_libraries(${target} PUBLIC ${NCCL_LIBRARY})
endif (USE_NCCL)
endfunction(xgboost_set_cuda_flags)


@@ -22,19 +22,24 @@
#
# NCCL_ROOT - When set, this path is inspected instead of standard library
# locations as the root of the NCCL installation.
# The environment variable NCCL_ROOT overrides this veriable.
# The environment variable NCCL_ROOT overrides this variable.
#
# This module defines
# Nccl_FOUND, whether nccl has been found
# NCCL_INCLUDE_DIR, directory containing header
# NCCL_LIBRARY, directory containing nccl library
# NCCL_LIB_NAME, nccl library name
# USE_NCCL_LIB_PATH, when set, NCCL_LIBRARY path is also inspected for the
# location of the nccl library. This would disable
# switching between static and shared.
#
# This module assumes that the user has already called find_package(CUDA)
if (NCCL_LIBRARY)
# Don't cache NCCL_LIBRARY to enable switching between static and shared.
unset(NCCL_LIBRARY CACHE)
if(NOT USE_NCCL_LIB_PATH)
# Don't cache NCCL_LIBRARY to enable switching between static and shared.
unset(NCCL_LIBRARY CACHE)
endif(NOT USE_NCCL_LIB_PATH)
endif()
if (BUILD_WITH_SHARED_NCCL)


@@ -1,5 +1,24 @@
@PACKAGE_INIT@
include(CMakeFindDependencyMacro)
set(USE_OPENMP @USE_OPENMP@)
set(USE_CUDA @USE_CUDA@)
set(USE_NCCL @USE_NCCL@)
find_dependency(Threads)
if(USE_OPENMP)
find_dependency(OpenMP)
endif()
if(USE_CUDA)
find_dependency(CUDA)
endif()
if(USE_NCCL)
find_dependency(Nccl)
endif()
if(NOT TARGET xgboost::xgboost)
include(${CMAKE_CURRENT_LIST_DIR}/XGBoostTargets.cmake)
endif()
message(STATUS "Found XGBoost (found version \"${xgboost_VERSION}\")")

cub

Submodule cub updated: c3cceac115...af39ee264f


@@ -62,7 +62,7 @@ test:data = "agaricus.txt.test"
We use the tree booster and the logistic regression objective in our setting. This means we accomplish our task using a classic gradient boosting regression tree (GBRT), which is a well-established method for binary classification.
The parameters shown in the example are the most common ones needed to use xgboost.
If you are interested in more parameters, the complete list with detailed descriptions is [here](../../doc/parameter.rst). Besides putting the parameters in the configuration file, we can also set them by passing them as command-line arguments, as below:
If you are interested in more parameters, the complete list with detailed descriptions is [here](https://xgboost.readthedocs.io/en/stable/parameter.html). Besides putting the parameters in the configuration file, we can also set them by passing them as command-line arguments, as below:
```
../../xgboost mushroom.conf max_depth=6
@@ -161,4 +161,3 @@ Eg. ```nthread=10```
Set nthread to the number of physical CPU cores (on Unix this can be found using ```lscpu```).
Some systems report ```Thread(s) per core = 2```; for example, a 4-core CPU with 8 threads. In that case set ```nthread=4```, not 8.


@@ -18,7 +18,7 @@ max_depth = 3
# the number of round to do boosting
num_round = 2
# 0 means do not save any model except the final round model
save_period = 0
save_period = 2
# The path of training data
data = "agaricus.txt.train"
# The path of validation data, used to monitor training process, here [test] sets name of the validation set


@@ -3,13 +3,15 @@
python mapfeat.py
# split train and test
python mknfold.py agaricus.txt 1
-# training and output the models
-../../xgboost mushroom.conf
-# output prediction task=pred
-../../xgboost mushroom.conf task=pred model_in=0002.model
-# print the boosters of 00002.model in dump.raw.txt
-../../xgboost mushroom.conf task=dump model_in=0002.model name_dump=dump.raw.txt
-# use the feature map in printing for better visualization
-../../xgboost mushroom.conf task=dump model_in=0002.model fmap=featmap.txt name_dump=dump.nice.txt
-cat dump.nice.txt
+XGBOOST=../../../xgboost
+# training and output the models
+$XGBOOST mushroom.conf
+# output prediction task=pred
+$XGBOOST mushroom.conf task=pred model_in=0002.model
+# print the boosters of 0002.model in dump.raw.txt
+$XGBOOST mushroom.conf task=dump model_in=0002.model name_dump=dump.raw.txt
+# use the feature map in printing for better visualization
+$XGBOOST mushroom.conf task=dump model_in=0002.model fmap=featmap.txt name_dump=dump.nice.txt
+cat dump.nice.txt


@@ -0,0 +1,11 @@
# This is the example script to run distributed xgboost on AWS.
# Change the following two lines for configuration
export BUCKET=mybucket
# submit the job to YARN
../../../dmlc-core/tracker/dmlc-submit --cluster=yarn --num-workers=2 --worker-cores=2\
../../../xgboost mushroom.aws.conf nthread=2\
data=s3://${BUCKET}/xgb-demo/train\
eval[test]=s3://${BUCKET}/xgb-demo/test\
model_dir=s3://${BUCKET}/xgb-demo/model


@@ -1,6 +1,6 @@
Regression
====
Using XGBoost for regression is very similar to using it for binary classification. We suggest going through the [binary classification demo](../binary_classification) first. In XGBoost, if we use negative log likelihood as the loss function for regression, the training procedure is the same as training a binary classifier with XGBoost.
### Tutorial
The dataset we use is the [computer hardware dataset from the UCI repository](https://archive.ics.uci.edu/ml/datasets/Computer+Hardware). The demo for regression is almost the same as the [binary classification demo](../binary_classification), except for a small difference in the general parameters:
@@ -14,4 +14,3 @@ objective = reg:squarederror
```
The input format is the same as for binary classification, except that the label is now the target regression value. We use squared-error regression here; to use logistic regression (objective = reg:logistic) instead, the labels need to be pre-scaled into [0,1], as sketched below.
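
To make the pre-scaling requirement concrete, a minimal Python sketch; min-max scaling is one assumed choice, and any map into [0,1] works:

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X = rng.randn(100, 6)
y = rng.uniform(50, 500, size=100)  # raw regression targets

# Scale labels into [0, 1] before using objective='reg:logistic'.
y_scaled = (y - y.min()) / (y.max() - y.min())

dtrain = xgb.DMatrix(X, label=y_scaled)
bst = xgb.train({'objective': 'reg:logistic', 'max_depth': 2},
                dtrain, num_boost_round=10)
```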

demo/CLI/regression/mapfeat.py Executable file

@@ -0,0 +1,33 @@
#!/usr/bin/python
fo = open('machine.txt', 'w')
cnt = 6
fmap = {}
for l in open('machine.data'):
    arr = l.split(',')
    fo.write(arr[8])
    for i in range(0, 6):
        fo.write(' %d:%s' % (i, arr[i + 2]))
    if arr[0] not in fmap:
        fmap[arr[0]] = cnt
        cnt += 1
    fo.write(' %d:1' % fmap[arr[0]])
    fo.write('\n')
fo.close()

# create feature map for machine data
fo = open('featmap.txt', 'w')
# list from machine.names
names = [
    'vendor', 'MYCT', 'MMIN', 'MMAX', 'CACH', 'CHMIN', 'CHMAX', 'PRP', 'ERP'
]
for i in range(0, 6):
    fo.write('%d\t%s\tint\n' % (i, names[i + 1]))
for v, k in sorted(fmap.items(), key=lambda x: x[1]):
    fo.write('%d\tvendor=%s\ti\n' % (k, v))
fo.close()

demo/CLI/regression/mknfold.py Executable file

@@ -0,0 +1,28 @@
#!/usr/bin/python
import sys
import random
# both the filename and k are required arguments
if len(sys.argv) < 3:
    print('Usage: <filename> <k> [nfold = 5]')
    exit(0)
random.seed(10)

k = int(sys.argv[2])
if len(sys.argv) > 3:
    nfold = int(sys.argv[3])
else:
    nfold = 5
fi = open(sys.argv[1], 'r')
ftr = open(sys.argv[1] + '.train', 'w')
fte = open(sys.argv[1] + '.test', 'w')
for l in fi:
    if random.randint(1, nfold) == k:
        fte.write(l)
    else:
        ftr.write(l)
fi.close()
ftr.close()
fte.close()


@@ -1,14 +1,9 @@
#!/usr/bin/python
import sys
-if len(sys.argv) < 3:
-    print 'Usage: <csv> <libsvm>'
-    print 'convert a all numerical csv to libsvm'
fo = open(sys.argv[2], 'w')
for l in open(sys.argv[1]):
    arr = l.split(',')
    fo.write('%s' % arr[0])
-    for i in xrange(len(arr) - 1):
+    for i in range(len(arr) - 1):
        fo.write(' %d:%s' % (i, arr[i+1]))
fo.close()


@@ -14,4 +14,4 @@ python csv2libsvm.py YearPredictionMSD.txt yearpredMSD.libsvm
head -n 463715 yearpredMSD.libsvm > yearpredMSD.libsvm.train
tail -n 51630 yearpredMSD.libsvm > yearpredMSD.libsvm.test
echo "finish making the data"
-../../xgboost yearpredMSD.conf
+../../../xgboost yearpredMSD.conf


@@ -60,9 +60,9 @@ This is a list of short codes introducing different functionalities of xgboost p
Most of the examples in this section are based on the CLI or Python version.
However, the parameter settings can be applied to all versions.
-- [Binary classification](binary_classification)
+- [Binary classification](CLI/binary_classification)
- [Multiclass classification](multiclass_classification)
-- [Regression](regression)
+- [Regression](CLI/regression)
- [Learning to Rank](rank)
### Benchmarks
@@ -78,6 +78,14 @@ XGBoost is extensively used by machine learning practitioners to create state of
this is a list of machine learning winning solutions with XGBoost.
Please send pull requests if you find ones that are missing here.
- Benedikt Schifferer, Gilberto Titericz, Chris Deotte, Christof Henkel, Kazuki Onodera, Jiwei Liu, Bojan Tunguz, Even Oldridge, Gabriel De Souza Pereira Moreira and Ahmet Erdem, 1st place winners of the [Twitter RecSys Challenge 2020](https://recsys-twitter.com/), conducted from June to August 2020. [GPU Accelerated Feature Engineering and Training for Recommender Systems](https://medium.com/rapids-ai/winning-solution-of-recsys2020-challenge-gpu-accelerated-feature-engineering-and-training-for-cd67c5a87b1f)
- Eugene Khvedchenya, Jessica Fridrich, Jan Butora, Yassine Yousfi, 1st place winners in [ALASKA2 Image Steganalysis](https://www.kaggle.com/c/alaska2-image-steganalysis/overview). Link to [discussion](https://www.kaggle.com/c/alaska2-image-steganalysis/discussion/168546)
- Dan Ofer, Seffi Cohen, Noa Dagan, Nurit, 1st place in the WiDS Datathon 2020. Link to [discussion](https://www.kaggle.com/c/widsdatathon2020/discussion/133189)
- Chris Deotte, Konstantin Yakovlev 1st place in [IEEE-CIS Fraud Detection](https://www.kaggle.com/c/ieee-fraud-detection/overview). Link to [discussion](https://www.kaggle.com/c/ieee-fraud-detection/discussion/111308)
- Giba, Lucasz, 1st place winners in the [Santander Value Prediction Challenge](https://www.kaggle.com/c/santander-value-prediction-challenge), organized in August 2018. Solution [discussion](https://www.kaggle.com/c/santander-value-prediction-challenge/discussion/65272) and [code](https://www.kaggle.com/titericz/winner-model-giba-single-xgb-lb0-5178/comments)
- Beluga, 2nd place, and Evgeny Nekrasov, 3rd place, in the Statoil/C-CORE Iceberg Classifier Challenge 2018. Link to [discussion](https://www.kaggle.com/c/statoil-iceberg-classifier-challenge/discussion/48294)
- Radek Osmulski, 1st place of the [iMaterialist Challenge (Fashion) at FGVC5](https://www.kaggle.com/c/imaterialist-challenge-fashion-2018/overview). Link to [the winning solution](https://www.kaggle.com/c/imaterialist-challenge-fashion-2018/discussion/57944).
- Maksims Volkovs, Guangwei Yu and Tomi Poutanen, 1st place of the [2017 ACM RecSys challenge](http://2017.recsyschallenge.com/). Link to [paper](http://www.cs.toronto.edu/~mvolkovs/recsys2017_challenge.pdf).
- Vlad Sandulescu, Mihai Chiru, 1st place of the [KDD Cup 2016 competition](https://kddcup2016.azurewebsites.net). Link to [the arxiv paper](http://arxiv.org/abs/1609.02728).
- Marios Michailidis, Mathias Müller and HJ van Veen, 1st place of the [Dato Truly Native? competition](https://www.kaggle.com/c/dato-native). Link to [the Kaggle interview](http://blog.kaggle.com/2015/12/03/dato-winners-interview-1st-place-mad-professors/).
@@ -91,6 +99,11 @@ Please send pull requests if you find ones that are missing here.
- Owen Zhang, 1st place of the [Avito Context Ad Clicks competition](https://www.kaggle.com/c/avito-context-ad-clicks). Link to [the Kaggle interview](http://blog.kaggle.com/2015/08/26/avito-winners-interview-1st-place-owen-zhang/).
- Keiichi Kuroyanagi, 2nd place of the [Airbnb New User Bookings](https://www.kaggle.com/c/airbnb-recruiting-new-user-bookings). Link to [the Kaggle interview](http://blog.kaggle.com/2016/03/17/airbnb-new-user-bookings-winners-interview-2nd-place-keiichi-kuroyanagi-keiku/).
- Marios Michailidis, Mathias Müller and Ning Situ, 1st place in [Homesite Quote Conversion](https://www.kaggle.com/c/homesite-quote-conversion). Link to [the Kaggle interview](http://blog.kaggle.com/2016/04/08/homesite-quote-conversion-winners-write-up-1st-place-kazanova-faron-clobber/).
- Gilberto Titericz, Stanislav Semenov, 1st place in the challenge to classify products into the correct category, organized by Otto Group in 2015. Link to [challenge](https://www.kaggle.com/c/otto-group-product-classification-challenge). Link to [kaggle winning solution](https://www.kaggle.com/c/otto-group-product-classification-challenge/discussion/14335)
- Darius Barušauskas, 1st place winner in [Predicting Red Hat Business Value](https://www.kaggle.com/c/predicting-red-hat-business-value). Link to [interview](https://medium.com/kaggle-blog/red-hat-business-value-competition-1st-place-winners-interview-darius-baru%C5%A1auskas-646692a2841b). Link to [discussion](https://www.kaggle.com/c/predicting-red-hat-business-value/discussion/23786)
- David Austin, Weimin Wang, 1st place winners in the [Iceberg Classifier Challenge](https://www.kaggle.com/c/statoil-iceberg-classifier-challenge/leaderboard). Link to [discussion](https://www.kaggle.com/c/statoil-iceberg-classifier-challenge/discussion/48241)
- Kazuki Onodera, Kazuki Fujikawa, 2nd place winners in [OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction](https://www.kaggle.com/c/stanford-covid-vaccine/overview). Link to [discussion](https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189709)
- Prarthana Bhat, 2nd place winner in the [DYD Competition](https://datahack.analyticsvidhya.com/contest/date-your-data/). Link to [solution](https://github.com/analyticsvidhya/DateYourData/blob/master/Prathna_Bhat_Model.R).
## Talks
- [XGBoost: A Scalable Tree Boosting System](http://datascience.la/xgboost-workshop-and-meetup-talk-with-tianqi-chen/) (video+slides) by Tianqi Chen at the Los Angeles Data Science meetup
@@ -138,6 +151,7 @@ Send a PR to add a one sentence description:)
Open source integrations with XGBoost:
* [Neptune.ai](http://neptune.ai/) - Experiment management and collaboration tool for ML/DL/RL specialists. The integration takes the form of an [XGBoost callback](https://docs.neptune.ai/integrations/xgboost.html) that automatically logs training and evaluation metrics, as well as the saved model (booster), feature importance chart and visualized trees.
* [Optuna](https://optuna.org/) - An open source hyperparameter optimization framework to automate hyperparameter search. Optuna integrates with XGBoost through the [XGBoostPruningCallback](https://optuna.readthedocs.io/en/stable/reference/integration.html#optuna.integration.XGBoostPruningCallback), which lets users easily prune unpromising trials; see the sketch after this list.
* [dtreeviz](https://github.com/parrt/dtreeviz) - A python library for decision tree visualization and model interpretation. Starting from version 1.0, dtreeviz is able to visualize tree ensembles produced by XGBoost.
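
To make the Optuna integration concrete, here is a minimal hedged sketch of `XGBoostPruningCallback`; the dataset, search space, metric key, and trial count are illustrative assumptions:

```python
import numpy as np
import optuna
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split


def objective(trial):
    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
    dtrain = xgb.DMatrix(X_tr, label=y_tr)
    dvalid = xgb.DMatrix(X_va, label=y_va)
    params = {'objective': 'binary:logistic', 'eval_metric': 'error',
              'max_depth': trial.suggest_int('max_depth', 2, 8)}
    # Observation key is '<eval name>-<metric>'; unpromising trials are
    # pruned mid-training based on this metric.
    pruning = optuna.integration.XGBoostPruningCallback(trial, 'valid-error')
    bst = xgb.train(params, dtrain, num_boost_round=100,
                    evals=[(dvalid, 'valid')], callbacks=[pruning])
    preds = bst.predict(dvalid)
    return float(np.mean((preds > 0.5) != y_va))


study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=10)
```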
## Awards
- [John Chambers Award](http://stat-computing.org/awards/jmc/winners.html) - 2016 Winner: XGBoost R Package, by Tong He (Simon Fraser University) and Tianqi Chen (University of Washington)


@@ -1,4 +1,5 @@
-cmake_minimum_required(VERSION 3.12)
+cmake_minimum_required(VERSION 3.13)
project(api-demo LANGUAGES C CXX VERSION 0.0.1)
find_package(xgboost REQUIRED)
add_executable(api-demo c-api-demo.c)
-target_link_libraries(api-demo xgboost::xgboost)
+target_link_libraries(api-demo PRIVATE xgboost::xgboost)


@@ -5,6 +5,7 @@
* \brief A simple example of using xgboost C API.
*/
+#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <xgboost/c_api.h>
@@ -62,7 +63,7 @@ int main(int argc, char** argv) {
  bst_ulong num_feature = 0;
  safe_xgboost(XGBoosterGetNumFeature(booster, &num_feature));
-  printf("num_feature: %llu\n", num_feature);
+  printf("num_feature: %lu\n", (unsigned long)(num_feature));

  // predict
  bst_ulong out_len = 0;
@@ -84,6 +85,86 @@ int main(int argc, char** argv) {
  }
  printf("\n");

  {
    printf("Dense Matrix Example (XGDMatrixCreateFromMat): ");
    const float values[] = {0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0,
        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
        0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0,
        1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        1, 0, 0, 0, 0, 1, 0, 0, 0, 0};

    DMatrixHandle dmat;
    safe_xgboost(XGDMatrixCreateFromMat(values, 1, 127, 0.0, &dmat));

    bst_ulong out_len = 0;
    const float* out_result = NULL;

    safe_xgboost(XGBoosterPredict(booster, dmat, 0, 0, 0, &out_len,
                                  &out_result));
    assert(out_len == 1);

    printf("%1.4f \n", out_result[0]);
    safe_xgboost(XGDMatrixFree(dmat));
  }

  {
    printf("Sparse Matrix Example (XGDMatrixCreateFromCSREx): ");
    const size_t indptr[] = {0, 22};
    const unsigned indices[] = {1, 9, 19, 21, 24, 34, 36, 39, 42, 53, 56, 65,
                                69, 77, 86, 88, 92, 95, 102, 106, 117, 122};
    const float data[] = {1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
                          1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
    DMatrixHandle dmat;

    safe_xgboost(XGDMatrixCreateFromCSREx(indptr, indices, data, 2, 22, 127,
                                          &dmat));

    bst_ulong out_len = 0;
    const float* out_result = NULL;

    safe_xgboost(XGBoosterPredict(booster, dmat, 0, 0, 0, &out_len,
                                  &out_result));
    assert(out_len == 1);

    printf("%1.4f \n", out_result[0]);
    safe_xgboost(XGDMatrixFree(dmat));
  }

  {
    printf("Sparse Matrix Example (XGDMatrixCreateFromCSCEx): ");
    const size_t col_ptr[] = {0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 7, 7, 7, 8,
        8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 11, 11, 11, 11, 11, 11,
        11, 11, 11, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14,
        14, 14, 14, 14, 14, 14, 15, 15, 16, 16, 16, 16, 17, 17, 17, 18, 18, 18,
        18, 18, 18, 18, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
        20, 21, 21, 21, 21, 21, 22, 22, 22, 22, 22};
    const unsigned indices[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                                0, 0, 0, 0, 0, 0};
    const float data[] = {1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
                          1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
    DMatrixHandle dmat;

    safe_xgboost(XGDMatrixCreateFromCSCEx(col_ptr, indices, data, 128, 22, 1,
                                          &dmat));

    bst_ulong out_len = 0;
    const float* out_result = NULL;

    safe_xgboost(XGBoosterPredict(booster, dmat, 0, 0, 0, &out_len,
                                  &out_result));
    assert(out_len == 1);

    printf("%1.4f \n", out_result[0]);
    safe_xgboost(XGDMatrixFree(dmat));
  }

  // free everything
  safe_xgboost(XGBoosterFree(booster));
  safe_xgboost(XGDMatrixFree(dtrain));
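
For readers more at home with the Python package, a rough equivalent of the dense-matrix branch added above; the pre-trained model file name is an assumption, and only a handful of the 127 features are set:

```python
import numpy as np
import xgboost as xgb

# Assumes a model trained on the 127-feature agaricus data was saved earlier.
bst = xgb.Booster(model_file='model.json')

row = np.zeros((1, 127), dtype=np.float32)
row[0, [1, 9, 19, 21]] = 1.0  # turn on a few features, as the C demo does
pred = bst.predict(xgb.DMatrix(row))

assert pred.shape == (1,)
print('%1.4f' % pred[0])
```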


@@ -2,16 +2,13 @@ from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask import array as da
import xgboost as xgb
+from xgboost import dask as dxgb
from xgboost.dask import DaskDMatrix
+import cupy as cp
import argparse


-def main(client):
-    # generate some random data for demonstration
-    m = 100000
-    n = 100
-    X = da.random.random(size=(m, n), chunks=100)
-    y = da.random.random(size=(m, ), chunks=100)
+def using_dask_matrix(client: Client, X, y):
    # DaskDMatrix acts like a normal DMatrix; it works as a proxy for the
    # local DMatrix pieces scattered around the workers.
    dtrain = DaskDMatrix(client, X, y)
@@ -31,15 +28,56 @@ def main(client):
    # you can pass output directly into `predict` too.
    prediction = xgb.dask.predict(client, bst, dtrain)
    prediction = prediction.compute()
    print('Evaluation history:', history)
    return prediction


def using_quantile_device_dmatrix(client: Client, X, y):
    '''`DaskDeviceQuantileDMatrix` is a data type specialized for the `gpu_hist`
    tree method that reduces memory overhead.  When training on a GPU
    pipeline, it's preferred over `DaskDMatrix`.

    .. versionadded:: 1.2.0
    '''
    # Input must be on GPU for `DaskDeviceQuantileDMatrix`.
    X = X.map_blocks(cp.array)
    y = y.map_blocks(cp.array)

    # `DaskDeviceQuantileDMatrix` is used instead of `DaskDMatrix`; be careful
    # that it can not be used for anything other than training.
    dtrain = dxgb.DaskDeviceQuantileDMatrix(client, X, y)
    output = xgb.dask.train(client,
                            {'verbosity': 2,
                             'tree_method': 'gpu_hist'},
                            dtrain,
                            num_boost_round=4)
    prediction = xgb.dask.predict(client, output, X)
    return prediction


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--ddqdm', choices=[0, 1], type=int, default=1,
        help='Whether we should use `DaskDeviceQuantileDMatrix`.')
    args = parser.parse_args()

    # `LocalCUDACluster` is used for assigning GPUs to XGBoost processes.
    # Here `n_workers` represents the number of GPUs since we use one GPU per
    # worker process.
    with LocalCUDACluster(n_workers=2, threads_per_worker=4) as cluster:
        with Client(cluster) as client:
-            main(client)
+            # generate some random data for demonstration
+            m = 100000
+            n = 100
+            X = da.random.random(size=(m, n), chunks=100)
+            y = da.random.random(size=(m,), chunks=100)
+
+            if args.ddqdm == 1:
+                print('Using DaskDeviceQuantileDMatrix')
+                from_ddqdm = using_quantile_device_dmatrix(client, X, y)
+            else:
+                print('Using DMatrix')
+                from_dmatrix = using_dask_matrix(client, X, y)
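
One detail worth noting from the hunk above: `xgb.dask.train` returns a plain dictionary rather than a booster. A minimal CPU-only sketch showing how to unpack it; cluster size and data shapes are illustrative assumptions:

```python
from dask.distributed import Client, LocalCluster
from dask import array as da
import xgboost as xgb

if __name__ == '__main__':
    with LocalCluster(n_workers=2, threads_per_worker=2) as cluster:
        with Client(cluster) as client:
            X = da.random.random((1000, 10), chunks=100)
            y = da.random.random(1000, chunks=100)
            dtrain = xgb.dask.DaskDMatrix(client, X, y)
            output = xgb.dask.train(client, {'tree_method': 'hist'},
                                    dtrain, num_boost_round=4,
                                    evals=[(dtrain, 'train')])
            booster = output['booster']  # a regular xgboost.Booster
            history = output['history']  # nested dict of evaluation logs
            print(history['train']['rmse'])
```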


@@ -1,11 +0,0 @@
# This is the example script to run distributed xgboost on AWS.
# Change the following two lines for configuration
export BUCKET=mybucket
# submit the job to YARN
../../dmlc-core/tracker/dmlc-submit --cluster=yarn --num-workers=2 --worker-cores=2\
../../xgboost mushroom.aws.conf nthread=2\
data=s3://${BUCKET}/xgb-demo/train\
eval[test]=s3://${BUCKET}/xgb-demo/test\
model_dir=s3://${BUCKET}/xgb-demo/model


@@ -1,3 +1,5 @@
# GPU Acceleration Demo
`cover_type.py` shows how to train a model on the [forest cover type](https://archive.ics.uci.edu/ml/datasets/covertype) dataset using GPU acceleration. The forest cover type dataset has 581,012 rows and 54 features, making it time consuming to process. We compare the run-time and accuracy of the GPU and CPU histogram algorithms.
`shap.ipynb` demonstrates using GPU acceleration to compute SHAP values for feature importance.



@@ -0,0 +1,130 @@
'''
Demo for using and defining callback functions.

.. versionadded:: 1.3.0
'''
import xgboost as xgb
import tempfile
import os
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
import argparse


class Plotting(xgb.callback.TrainingCallback):
    '''Plot evaluation result during training.  Only for demonstration purpose
    as it's quite slow to draw.
    '''
    def __init__(self, rounds):
        self.fig = plt.figure()
        self.ax = self.fig.add_subplot(111)
        self.rounds = rounds
        self.lines = {}
        self.fig.show()
        self.x = np.linspace(0, self.rounds, self.rounds)
        plt.ion()

    def _get_key(self, data, metric):
        return f'{data}-{metric}'

    def after_iteration(self, model, epoch, evals_log):
        '''Update the plot.'''
        if not self.lines:
            for data, metric in evals_log.items():
                for metric_name, log in metric.items():
                    key = self._get_key(data, metric_name)
                    expanded = log + [0] * (self.rounds - len(log))
                    self.lines[key], = self.ax.plot(self.x, expanded, label=key)
                    self.ax.legend()
        else:
            # https://pythonspot.com/matplotlib-update-plot/
            for data, metric in evals_log.items():
                for metric_name, log in metric.items():
                    key = self._get_key(data, metric_name)
                    expanded = log + [0] * (self.rounds - len(log))
                    self.lines[key].set_ydata(expanded)
            self.fig.canvas.draw()
        # False to indicate training should not stop.
        return False


def custom_callback():
    '''Demo for defining a custom callback function that plots evaluation
    result during training.'''
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)
    D_train = xgb.DMatrix(X_train, y_train)
    D_valid = xgb.DMatrix(X_valid, y_valid)

    num_boost_round = 100
    plotting = Plotting(num_boost_round)

    # Pass it to the `callbacks` parameter as a list.
    xgb.train(
        {
            'objective': 'binary:logistic',
            'eval_metric': ['error', 'rmse'],
            'tree_method': 'gpu_hist'
        },
        D_train,
        evals=[(D_train, 'Train'), (D_valid, 'Valid')],
        num_boost_round=num_boost_round,
        callbacks=[plotting])


def check_point_callback():
    # only for demo, set a larger value (like 100) in practice as
    # checkpointing is quite slow.
    rounds = 2

    def check(as_pickle):
        for i in range(0, 10, rounds):
            if i == 0:
                continue
            if as_pickle:
                path = os.path.join(tmpdir, 'model_' + str(i) + '.pkl')
            else:
                path = os.path.join(tmpdir, 'model_' + str(i) + '.json')
            assert os.path.exists(path)

    X, y = load_breast_cancer(return_X_y=True)
    m = xgb.DMatrix(X, y)
    # Check point to a temporary directory for demo
    with tempfile.TemporaryDirectory() as tmpdir:
        # Use callback class from xgboost.callback
        # Feel free to subclass/customize it to suit your need.
        check_point = xgb.callback.TrainingCheckPoint(directory=tmpdir,
                                                      iterations=rounds,
                                                      name='model')
        xgb.train({'objective': 'binary:logistic'}, m,
                  num_boost_round=10,
                  verbose_eval=False,
                  callbacks=[check_point])
        check(False)

        # This version of checkpoint saves everything including parameters and
        # model.  See: doc/tutorials/saving_model.rst
        check_point = xgb.callback.TrainingCheckPoint(directory=tmpdir,
                                                      iterations=rounds,
                                                      as_pickle=True,
                                                      name='model')
        xgb.train({'objective': 'binary:logistic'}, m,
                  num_boost_round=10,
                  verbose_eval=False,
                  callbacks=[check_point])
        check(True)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--plot', default=1, type=int)
    args = parser.parse_args()

    check_point_callback()
    if args.plot:
        custom_callback()


@@ -14,15 +14,15 @@ print('running cross validation')
# std_value is standard deviation of the metric
xgb.cv(param, dtrain, num_round, nfold=5,
       metrics={'error'}, seed=0,
-      callbacks=[xgb.callback.print_evaluation(show_stdv=True)])
+      callbacks=[xgb.callback.EvaluationMonitor(show_stdv=True)])

print('running cross validation, disable standard deviation display')
# do cross validation, this will print result out as
# [iteration]  metric_name:mean_value
res = xgb.cv(param, dtrain, num_boost_round=10, nfold=5,
             metrics={'error'}, seed=0,
-            callbacks=[xgb.callback.print_evaluation(show_stdv=False),
-                       xgb.callback.early_stop(3)])
+            callbacks=[xgb.callback.EvaluationMonitor(show_stdv=False),
+                       xgb.callback.EarlyStopping(3)])
print(res)
print('running cross validation, with preprocessing function')
# define the preprocessing function
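
As background for the renames above: XGBoost 1.3 replaced the function-style callbacks (`print_evaluation`, `early_stop`) with classes. A minimal, self-contained sketch of the class-based API used with `xgb.train`; the synthetic dataset and parameter values are illustrative assumptions:

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X = rng.randn(500, 10)
y = (X[:, 0] > 0).astype(float)
dtrain = xgb.DMatrix(X[:400], y[:400])
dvalid = xgb.DMatrix(X[400:], y[400:])

# EarlyStopping stops training when the last metric on the last evaluation
# set shows no improvement for `rounds` consecutive rounds.
bst = xgb.train({'objective': 'binary:logistic', 'eval_metric': 'error'},
                dtrain, num_boost_round=50,
                evals=[(dvalid, 'valid')],
                callbacks=[xgb.callback.EvaluationMonitor(period=1),
                           xgb.callback.EarlyStopping(rounds=3)])
```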


@@ -1,28 +1,28 @@
-import os
-import numpy as np
-import xgboost as xgb
###
# advanced: customized loss function
#
+import os
+import numpy as np
+import xgboost as xgb

print('start running example to use customized objective function')

CURRENT_DIR = os.path.dirname(__file__)
dtrain = xgb.DMatrix(os.path.join(CURRENT_DIR, '../data/agaricus.txt.train'))
dtest = xgb.DMatrix(os.path.join(CURRENT_DIR, '../data/agaricus.txt.test'))

-# note: for customized objective function, we leave objective as default
-# note: what we are getting is margin value in prediction
-# you must know what you are doing
-param = {'max_depth': 2, 'eta': 1}
+# note: what we are getting is margin value in prediction you must know what
+# you are doing
+param = {'max_depth': 2, 'eta': 1, 'objective': 'reg:logistic'}
watchlist = [(dtest, 'eval'), (dtrain, 'train')]
-num_round = 2
+num_round = 10


# user-defined objective function; given prediction, return gradient and
# second order gradient.  This is log likelihood loss.
def logregobj(preds, dtrain):
    labels = dtrain.get_label()
-    preds = 1.0 / (1.0 + np.exp(-preds))
+    preds = 1.0 / (1.0 + np.exp(-preds))  # transform raw leaf weight
    grad = preds - labels
    hess = preds * (1.0 - preds)
    return grad, hess

@@ -31,20 +31,31 @@ def logregobj(preds, dtrain):
# user-defined evaluation function; return a pair (metric_name, result)
# NOTE: when you do customized loss function, the default prediction value is
-# margin. this may make builtin evaluation metric not function properly for
-# example, we are doing logistic loss, the prediction is score before logistic
-# transformation the builtin evaluation error assumes input is after logistic
-# transformation Take this in mind when you use the customization, and maybe
-# you need write customized evaluation function
+# margin, which means the prediction is the score before logistic
+# transformation.
def evalerror(preds, dtrain):
    labels = dtrain.get_label()
+    preds = 1.0 / (1.0 + np.exp(-preds))  # transform raw leaf weight
    # return a pair metric_name, result. The metric name must not contain a
-    # colon (:) or a space since preds are margin(before logistic
-    # transformation, cutoff at 0)
-    return 'my-error', float(sum(labels != (preds > 0.0))) / len(labels)
+    # colon (:) or a space
+    return 'my-error', float(sum(labels != (preds > 0.5))) / len(labels)

+py_evals_result = {}

# training with customized objective; we can also do step-by-step training,
-# simply look at xgboost.py's implementation of train
-bst = xgb.train(param, dtrain, num_round, watchlist, obj=logregobj,
-                feval=evalerror)
+# simply look at training.py's implementation of train
+py_params = param.copy()
+py_params.update({'disable_default_eval_metric': True})
+py_logreg = xgb.train(py_params, dtrain, num_round, watchlist, obj=logregobj,
+                      feval=evalerror, evals_result=py_evals_result)
+
+evals_result = {}
+params = param.copy()
+params.update({'eval_metric': 'error'})
+logreg = xgb.train(params, dtrain, num_boost_round=num_round, evals=watchlist,
+                   evals_result=evals_result)
+
+for i in range(len(py_evals_result['train']['my-error'])):
+    np.testing.assert_almost_equal(py_evals_result['train']['my-error'],
+                                   evals_result['train']['error'])


@@ -142,7 +142,8 @@ def main(args):
    native_results = {}
    # Use the same objective function defined in XGBoost.
-    booster_native = xgb.train({'num_class': kClasses},
+    booster_native = xgb.train({'num_class': kClasses,
+                                'eval_metric': 'merror'},
                                m,
                                num_boost_round=kRounds,
                                evals_result=native_results,


@@ -1,5 +1,7 @@
'''A demo for defining a data iterator.

+.. versionadded:: 1.2.0
+
The demo defines a customized iterator for passing batches of data into
`xgboost.DeviceQuantileDMatrix` and uses this `DeviceQuantileDMatrix` for
training.  The feature is primarily designed to reduce the required GPU
@@ -39,7 +41,7 @@ class IterForDMatrixDemo(xgboost.core.DataIter):
        rng = cupy.random.RandomState(1994)

        self._data = [rng.randn(self.rows, self.cols)] * BATCHES
        self._labels = [rng.randn(self.rows)] * BATCHES
-        self._weights = [rng.randn(self.rows)] * BATCHES
+        self._weights = [rng.uniform(size=self.rows)] * BATCHES

        self.it = 0  # set iterator to 0
        super().__init__()


@@ -7,8 +7,8 @@ import xgboost as xgb
# several cache files with the given prefix will be generated
# currently only supports loading from libsvm files
CURRENT_DIR = os.path.dirname(__file__)
-dtrain = xgb.DMatrix(os.path.join(CURRENT_DIR, '../data/agaricus.txt.train'))
-dtest = xgb.DMatrix(os.path.join(CURRENT_DIR, '../data/agaricus.txt.test'))
+dtrain = xgb.DMatrix(os.path.join(CURRENT_DIR, '../data/agaricus.txt.train#dtrain.cache'))
+dtest = xgb.DMatrix(os.path.join(CURRENT_DIR, '../data/agaricus.txt.test#dtest.cache'))
# specify validation sets to watch performance
param = {'max_depth':2, 'eta':1, 'objective':'binary:logistic'}
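
The `#` fragment appended to the paths above is XGBoost's external-memory syntax, `path#cacheprefix`: the text file is read in batches and paged through on-disk cache files written under the given prefix. A minimal sketch with assumed file names:

```python
import xgboost as xgb

# 'train.libsvm' is loaded in external-memory mode; XGBoost writes its
# cache pages next to it using the 'train.cache' prefix.
dtrain = xgb.DMatrix('train.libsvm#train.cache')
bst = xgb.train({'objective': 'binary:logistic'}, dtrain, num_boost_round=2)
```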


@@ -0,0 +1,49 @@
'''Using feature weights to change column sampling.

.. versionadded:: 1.3.0
'''
import numpy as np
import xgboost
from matplotlib import pyplot as plt
import argparse


def main(args):
    rng = np.random.RandomState(1994)

    kRows = 1000
    kCols = 10

    X = rng.randn(kRows, kCols)
    y = rng.randn(kRows)
    fw = np.ones(shape=(kCols,))
    for i in range(kCols):
        fw[i] *= float(i)

    dtrain = xgboost.DMatrix(X, y)
    dtrain.set_info(feature_weights=fw)

    bst = xgboost.train({'tree_method': 'hist',
                         'colsample_bynode': 0.5},
                        dtrain, num_boost_round=10,
                        evals=[(dtrain, 'd')])
    feature_map = bst.get_fscore()

    # feature zero has 0 weight
    assert feature_map.get('f0', None) is None
    assert max(feature_map.values()) == feature_map.get('f9')

    if args.plot:
        xgboost.plot_importance(bst)
        plt.show()


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--plot',
        type=int,
        default=1,
        help='Set to 0 to disable plotting the evaluation history.')
    args = parser.parse_args()
    main(args)
