Compare commits

462 Commits

Philip Hyunsu Cho
34408a7fdc Release patch release 1.1.1 with faster CPU performance (#5732)
* Fix release degradation (#5720)

* fix release degradation, related to #5666

* fewer resizes

Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>

* Make 1.1.1 patch release

* Disable too-many-function-args pylint warning for predict()

* Fix Windows CI

* Remove cpplint

Co-authored-by: ShvetsKS <33296480+ShvetsKS@users.noreply.github.com>
Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>
2020-06-04 10:56:07 -07:00
Philip Hyunsu Cho
f9b246f5ee Add pkgconfig to cmake (#5744) (#5748)
* Add pkgconfig to cmake

* Move xgboost.pc.in to cmake/

Co-authored-by: Peter Jung <peter@jung.ninja>
Co-authored-by: Peter Jung <peter.jung@heureka.cz>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-06-02 12:04:27 -07:00
Jiaming Yuan
8467880aeb Fix loading old model. (#5724) (#5737)
* Add test.
2020-06-01 04:32:24 +08:00
Jiaming Yuan
e74560c86a [CI] Backport Remove CUDA 9.0 from Windows CI. (#5674) (#5738)
* [CI] Remove CUDA 9.0 from Windows CI. (#5674)

* Remove CUDA 9.0 on Windows CI.

* Require cuda10 tag, to differentiate

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Pylint.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-05-31 21:35:00 +08:00
Hyunsu Cho
882b966536 [Doc] Fix typos in AFT tutorial 2020-05-27 03:09:46 -07:00
Philip Hyunsu Cho
115e4c3360 [R] Fix duplicated libomp.dylib error on Mac OSX (#5701) 2020-05-24 23:38:42 -07:00
Hyunsu Cho
f5d4fddafe Release 1.1.0 2020-05-17 00:26:22 -07:00
Jiaming Yuan
66690f3d07 Add JSON schema to model dump. (#5660) 2020-05-15 12:26:49 +08:00
Rory Mitchell
c42f533ae9 Resolve vector<bool>::iterator crash (#5642) 2020-05-11 18:14:41 +08:00
Philip Hyunsu Cho
751160b69c Upgrade to CUDA 10.0 (#5649)
Co-authored-by: fis <jm.yuan@outlook.com>
2020-05-11 18:04:47 +08:00
Hyunsu Cho
8aaabce7c9 Make RC2 2020-05-04 09:11:38 -07:00
Philip Hyunsu Cho
14543176d1 Fix build on big endian CPUs (#5617)
* Fix build on big endian CPUs

* Clang-tidy
2020-05-04 09:09:22 -07:00
Jason E. Aten, Ph.D
afa6e086cc Clarify meaning of training parameter in XGBoosterPredict() (#5604)
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2020-05-04 09:08:57 -07:00
Philip Hyunsu Cho
636ab6b522 Instruct Mac users to install libomp (#5606) 2020-05-04 09:08:25 -07:00
Philip Hyunsu Cho
6daa6ee4e0 [R] Address warnings to comply with CRAN submission policy (#5600)
* [R] Address warnings to comply with CRAN submission policy

* Include <xgboost/logging.h>
2020-05-04 09:08:16 -07:00
Philip Hyunsu Cho
4979991d5b [CI] Grant public read access to Mac OSX wheels (#5602) 2020-05-04 09:07:56 -07:00
Philip Hyunsu Cho
02faddc5f3 Fix compilation on Mac OSX High Sierra (10.13) (#5597)
* Fix compilation on Mac OSX High Sierra

* [CI] Build Mac OSX binary wheel using Travis CI
2020-05-04 09:07:29 -07:00
Jiaming Yuan
844d7c1d5b Set device in device dmatrix. (#5596) 2020-04-25 13:44:30 +08:00
Hyunsu Cho
3728855ce9 Make RC1 2020-04-24 13:56:54 -07:00
Philip Hyunsu Cho
ef26bc45bf Hide C++ symbols in libxgboost.so when building Python wheel (#5590)
* Hide C++ symbols in libxgboost.so when building Python wheel

* Update Jenkinsfile

* Add test

* Upgrade rabit

* Add setup.py option.

Co-authored-by: fis <jm.yuan@outlook.com>
2020-04-24 13:32:05 -07:00
Rory Mitchell
660be66207 Avoid rabit calls in learner configuration (#5581) 2020-04-24 14:59:29 +12:00
Philip Hyunsu Cho
92913aaf7f [CI] Use Vault repository to re-gain access to devtoolset-4 (#5589)
* [CI] Use Vault repository to re-gain access to devtoolset-4

* Use manylinux2010 tag

* Update Dockerfile.jvm

* Fix rename_whl.py

* Upgrade Pip, to handle manylinux2010 tag

* Update insert_vcomp140.py

* Update test_python.sh
2020-04-23 18:53:54 -07:00
Philip Hyunsu Cho
e4f5b6c84f Port R compatibility patches from 1.0.0 release branch (#5577)
* Don't use memset to set struct when compiling for R

* Support 32-bit Solaris target for R package
2020-04-21 22:51:18 -07:00
Jiaming Yuan
f27b6f9ba6 Update document. (#5572) 2020-04-22 02:37:37 +08:00
Jiaming Yuan
c355ab65ed Enable parameter validation for R. (#5569)
* Enable parameter validation for R.

* Add test.
2020-04-21 11:19:09 -07:00
Jiaming Yuan
564b22cee5 Restore attributes in complete. (#5573) 2020-04-21 11:06:55 -07:00
Rory Mitchell
a734f52807 Use cudaDeviceGetAttribute instead of cudaGetDeviceProperties (#5570) 2020-04-21 14:58:29 +12:00
Andy Adinets
73142041b9 For histograms, opting into maximum shared memory available per block. (#5491) 2020-04-21 14:56:42 +12:00
Jiaming Yuan
9c1103e06c [Breaking] Set output margin to True for custom objective. (#5564)
* Set output margin to True for custom objective in Python and R.

* Add a demo for writing multi-class custom objective function.

* Run tests on selected demos.
2020-04-20 20:44:12 +08:00
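
A minimal sketch of what this breaking change means for user code, on made-up random data with a toy squared-error objective (the function name and data are illustrative): the `predt` handed to a custom objective is now the raw, untransformed margin.

```python
import numpy as np
import xgboost as xgb

def squared_error(predt, dtrain):
    """Toy custom objective; `predt` now holds raw margins,
    not transformed predictions."""
    y = dtrain.get_label()
    grad = predt - y            # first-order gradient
    hess = np.ones_like(predt)  # second-order gradient
    return grad, hess

X, y = np.random.randn(100, 10), np.random.randn(100)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({'tree_method': 'hist'}, dtrain,
                    num_boost_round=10, obj=squared_error)
```

For multi-class objectives the margin is the un-softmaxed score matrix, which is what the new demo mentioned above exercises.
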
Jiaming Yuan
fcbedcedf8 Fix configuration in model loading. (#5562) 2020-04-20 17:25:11 +08:00
Jiaming Yuan
29a4cfe400 Group aware GPU sketching. (#5551)
* Group aware GPU weighted sketching.

* Distribute group weights to each data point.
* Relax the test.
* Validate input meta info.
* Fix metainfo copy ctor.
2020-04-20 17:18:52 +08:00
Liang-Chi Hsieh
397d8f0ee7 [jvm-packages] XGBoost Spark should deal with NaN when parsing evaluation output (#5546) 2020-04-19 23:10:30 -07:00
Jiaming Yuan
b809f5d8b8 Don't set seed on CLI interface. (#5563) 2020-04-20 12:17:03 +08:00
Jiaming Yuan
ccd30e4491 Fix non-openmp build. (#5566)
* Add test to Jenkins.
* Fix threading utils tests.
* Require thread library.
2020-04-20 12:16:38 +08:00
Rory Mitchell
b2827a80e1 Use non-synchronising scan (#5560) 2020-04-20 15:51:34 +12:00
Rory Mitchell
d6d1035950 gpu_hist performance fixes (#5558)
* Remove unnecessary cuda API calls

* Fix histogram memory growth
2020-04-19 12:21:13 +12:00
Jiaming Yuan
e1f22baf8c Fix slice and get info. (#5552) 2020-04-18 18:00:13 +08:00
Jiaming Yuan
c245eb8755 Fix r interaction constraints (#5543)
* Unify the parsing code.

* Cleanup.
2020-04-18 06:53:51 +08:00
Jiaming Yuan
93df871c8c Assert matching length of evaluation inputs. (#5540) 2020-04-18 06:52:55 +08:00
Jiaming Yuan
c69a19e2b1 Fix skl nan tag. (#5538) 2020-04-18 06:52:17 +08:00
Jiaming Yuan
cfee9fae91 Don't use uint for threads. (#5542) 2020-04-17 09:45:42 +08:00
Jiaming Yuan
bb29ce2818 Add missing aft parameters. [skip ci] (#5553) 2020-04-16 12:08:55 -07:00
ShvetsKS
a2d86b8e4b Optimizations for RNG in InitData kernel (#5522)
* optimizations for subsampling in InitData

* optimizations for subsampling in InitData

Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>
2020-04-16 18:24:32 +03:00
Rory Mitchell
e268fb0093 Use thrust functions instead of custom functions (#5544) 2020-04-16 21:41:16 +12:00
Melissa Kohl
6a169cd41a Fix uninitialized value bug in xgboost callback (#5463)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-04-16 07:50:54 +08:00
Jiaming Yuan
468b1594d3 Fix CLI model IO. (#5535)
* Add test for comparing Python and CLI training result.
2020-04-16 07:48:47 +08:00
Philip Hyunsu Cho
0676a19e70 [jvm-packages] [CI] Publish XGBoost4J JARs with Scala 2.11 and 2.12 (#5539) 2020-04-15 09:32:02 -07:00
Philip Hyunsu Cho
ec02f40d42 [CI] Use Ubuntu 18.04 LTS in JVM CI, because 19.04 is EOL (#5537) 2020-04-15 07:32:46 -07:00
Jiaming Yuan
8b04736b81 [dask] dask cudf inplace prediction. (#5512)
* Add inplace prediction for dask-cudf.

* Remove Dockerfile.release, since it's not used anywhere

* Use Conda exclusively in CUDF and GPU containers

* Improve cupy memory copying.

* Add skip marks to tests.

* Add mgpu-cudf category on the CI to run all distributed tests.

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-04-15 18:15:51 +08:00
Rory Mitchell
ca4e05660e Purge device_helpers.cuh (#5534)
* Simplifications with caching_device_vector

* Purge device helpers
2020-04-15 21:51:56 +12:00
Jiaming Yuan
a2f54963b6 Write binary header. (#5532) 2020-04-15 17:47:57 +08:00
Philip Hyunsu Cho
1b1969f20d [jvm-packages] [CI] Create a Maven repository to host SNAPSHOT JARs (#5533) 2020-04-14 19:33:32 -07:00
Kamil A. Kaczmarek
2809fb8b6f Add Neptune and Optuna to list of examples (#5528) 2020-04-14 11:00:50 -07:00
Jiaming Yuan
c90119eb67 Update Python doc. [skip ci] (#5517)
* Update doc for copying booster. [skip ci]

The issue is resolved in #5312.

* Add version for new APIs. [skip ci]
2020-04-14 16:25:20 +08:00
Philip Hyunsu Cho
88b64c8162 Ensure that configured dmlc/build_config.h is picked up by Rabit and XGBoost (#5514)
* Ensure that configured header (build_config.h) from dmlc-core is picked up by Rabit and XGBoost

* Check which Rabit target is being used

* Use CMake 3.13 in all Jenkins tests

* Upgrade CMake in Travis CI

* Install CMake using Kitware installer

* Remove existing CMake (3.12.4)
2020-04-11 23:48:28 -07:00
Nicolas Scozzaro
04f69b43e6 fix typo "customized" (#5515) 2020-04-12 14:43:48 +08:00
Liang-Chi Hsieh
449ab79e0c [CI] Use devtoolset-6 because devtoolset-4 is EOL and no longer available (#5506)
* Use devtoolset-6.

* [CI] Use devtoolset-6 because devtoolset-4 is EOL and no longer available

* CUDA 9.0 doesn't work with devtoolset-6; use devtoolset-4 for GPU build only

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-04-11 19:49:06 -07:00
Jiaming Yuan
b56c902841 [R] R raw serialization. (#5123)
* Add bindings for serialization.
* Change `xgb.save.raw` into full serialization instead of the simple model.
* Add `xgb.load.raw` for deserialization.
* Run devtools.
2020-04-11 17:16:54 +08:00
Jiaming Yuan
a3db79df22 Remove makefiles. (#5513) 2020-04-11 13:25:53 +08:00
Rory Mitchell
093e2227e3 Serialise booster after training to reset state (#5484)
* Serialise booster after training to reset state

* Prevent process_type being set on load

* Check for correct updater sequence
2020-04-11 16:27:12 +12:00
Jiaming Yuan
4a0c8ef237 Update doc for parameter validation. (#5508)
* Update doc for parameter validation.

* Fix github rebase.
2020-04-11 00:43:46 +08:00
Jiaming Yuan
1334aca437 Fix github merge. (#5509) 2020-04-10 22:17:38 +08:00
Jiaming Yuan
866a477319 Unify max nodes. (#5497) 2020-04-10 19:26:35 +08:00
Jiaming Yuan
bd653fad4c Remove distcol updater. (#5507)
Closes #5498.
2020-04-10 12:52:56 +08:00
Jiaming Yuan
7d52c0b8c2 Requires setting leaf stat when expanding tree. (#5501)
* Fix GPU Hist feature importance.
2020-04-10 12:27:03 +08:00
Jiaming Yuan
dc2950fd90 Fix checking booster. (#5505)
* Use `get_params()` instead of `getattr` intrinsic.
2020-04-10 12:21:21 +08:00
Jiaming Yuan
6671b42dd4 Use ellpack for prediction only when sparsepage doesn't exist. (#5504) 2020-04-10 12:15:46 +08:00
Bobby Wang
ad826e913f [jvm-packages]add feature size for LabelPoint and DataBatch (#5303)
* fix type error

* Validate number of features.

* resolve comments

* add feature size for LabelPoint and DataBatch

* pass the feature size to native

* move feature size validating tests into a separate suite

* resolve comments

Co-authored-by: fis <jm.yuan@outlook.com>
2020-04-07 16:49:52 -07:00
Zhang Zhang
8bc595ea1e Fix out-of-bound array access in WQSummary::SetPrune() (#5493) 2020-04-08 10:02:31 +12:00
Rong Ou
a1085396e2 add reference to gpu external memory (#5490) 2020-04-07 11:15:58 +12:00
Yuan Tang
9097e8f0d9 Edits on tutorial for XGBoost job on Kubernetes (#5487) 2020-04-05 07:36:33 -04:00
Paul Kaefer
c362125d7b corrected spelling of 'list' (#5482) 2020-04-05 09:15:08 +08:00
Jiaming Yuan
0012f2ef93 Upgrade clang-tidy on CI. (#5469)
* Correct all clang-tidy errors.
* Upgrade clang-tidy to 10 on CI.

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-04-05 04:42:29 +08:00
Philip Hyunsu Cho
30e94ddd04 Add R code to AFT tutorial [skip ci] (#5486) 2020-04-04 13:06:12 -07:00
Rory Mitchell
15800107ad Small updates to GPU documentation (#5483) 2020-04-04 13:02:27 -07:00
Jiaming Yuan
a9313802ea Fix dump model. (#5485) 2020-04-05 03:52:54 +08:00
Philip Hyunsu Cho
5fc5ec539d Implement robust regularization in 'survival:aft' objective (#5473)
* Robust regularization of AFT gradient and hessian

* Fix AFT doc; expose it to tutorial TOC

* Apply robust regularization to uncensored case too

* Revise unit test slightly

* Fix lint

* Update test_survival.py

* Use GradientPairPrecise

* Remove unused variables
2020-04-04 12:21:24 -07:00
Jiaming Yuan
939973630d Accept other gradient types for split entry. (#5467) 2020-04-03 10:38:44 +08:00
Jiaming Yuan
86beb68ce8 Implement host span. (#5459) 2020-04-03 10:37:51 +08:00
Jiaming Yuan
459b175dc6 Split up test helpers header. (#5455) 2020-04-03 10:36:53 +08:00
Jiaming Yuan
c218d8ffbf Enable parameter validation for skl. (#5477) 2020-04-03 10:23:58 +08:00
Jiaming Yuan
d0b86c75d9 Remove silent parameter. (#5476) 2020-04-03 08:03:26 +08:00
Jiaming Yuan
29c6ad943a Prevent copying SimpleDMatrix. (#5453)
* Set default dtor for SimpleDMatrix to initialize default copy ctor, which is
deleted due to unique ptr.

* Remove commented code.
* Remove warning for calling host function (std::max).
* Remove warning for initialization order.
* Remove warning for unused variables.
2020-04-02 07:01:49 +08:00
Jiaming Yuan
e86030c360 Update dmlc-core. (#5466)
* Copy dmlc travis script to XGBoost.
2020-04-02 04:16:39 +08:00
Jiaming Yuan
babcb996e7 Reduce span check overhead. (#5464) 2020-04-01 22:07:24 +08:00
Rory Mitchell
15f40e51e9 Add support for dlpack, expose python docs for DeviceQuantileDMatrix (#5465) 2020-04-01 23:34:32 +13:00
Jiaming Yuan
6601a641d7 Thread safe, inplace prediction. (#5389)
Normal prediction with DMatrix is now thread safe with locks.  The newly added inplace prediction is lock-free and thread safe.

When data is on device (cupy, cudf), the returned data is also on device.

* Implementation for numpy, csr, cudf and cupy.

* Implementation for dask.

* Remove sync in simple dmatrix.
2020-03-30 15:35:28 +08:00
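
A minimal usage sketch on made-up numpy data; `inplace_predict` is the new lock-free path that skips DMatrix construction, and with cupy/cuDF inputs the result stays on the device:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.randn(100, 10), np.random.randn(100)
booster = xgb.train({'tree_method': 'hist'},
                    xgb.DMatrix(X, label=y), num_boost_round=5)

# Predict straight from the array -- no DMatrix, no lock.
preds = booster.inplace_predict(X)
```
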
James Lamb
7f980e9f83 [R-package] fixed inconsistency in R -e calls in FindLibR.cmake (#5438) 2020-03-28 19:24:21 +08:00
ShvetsKS
27a8e36fc3 Reducing memory consumption for 'hist' method on CPU (#5334) 2020-03-28 14:45:52 +13:00
Rory Mitchell
13b10a6370 Device dmatrix (#5420) 2020-03-28 14:42:21 +13:00
Jiaming Yuan
780de49ddb Resolve travis failure. (#5445)
* Install dependencies by pip.
2020-03-27 19:37:58 +08:00
Jiaming Yuan
4942da64ae Refactor tests with data generator. (#5439) 2020-03-27 06:44:44 +08:00
Jiaming Yuan
7146b91d5a Force compressed buffer to be 4 bytes aligned. (#5441) 2020-03-27 06:43:52 +08:00
Avinash Barnwal
dcf439932a Add Accelerated Failure Time loss for survival analysis task (#4763)
* [WIP] Add lower and upper bounds on the label for survival analysis

* Update test MetaInfo.SaveLoadBinary to account for extra two fields

* Don't clear qids_ for version 2 of MetaInfo

* Add SetInfo() and GetInfo() method for lower and upper bounds

* changes to aft

* Add parameter class for AFT; use enum's to represent distribution and event type

* Add AFT metric

* changes to neg grad to grad

* changes to binomial loss

* changes to overflow

* changes to eps

* changes to code refactoring

* changes to code refactoring

* changes to code refactoring

* Re-factor survival analysis

* Remove aft namespace

* Move function bodies out of AFTNormal and AFTLogistic, to reduce clutter

* Move function bodies out of AFTLoss, to reduce clutter

* Use smart pointer to store AFTDistribution and AFTLoss

* Rename AFTNoiseDistribution enum to AFTDistributionType for clarity

The enum class was not a distribution itself but a distribution type

* Add AFTDistribution::Create() method for convenience

* changes to extreme distribution

* changes to extreme distribution

* changes to extreme

* changes to extreme distribution

* changes to left censored

* deleted cout

* changes to x,mu and sd and code refactoring

* changes to print

* changes to hessian formula in censored and uncensored

* changes to variable names and pow

* changes to Logistic Pdf

* changes to parameter

* Expose lower and upper bound labels to R package

* Use example weights; normalize log likelihood metric

* changes to CHECK

* changes to logistic hessian to standard formula

* changes to logistic formula

* Comply with coding style guideline

* Revert back Rabit submodule

* Revert dmlc-core submodule

* Comply with coding style guideline (clang-tidy)

* Fix an error in AFTLoss::Gradient()

* Add missing files to amalgamation

* Address @RAMitchell's comment: minimize future change in MetaInfo interface

* Fix lint

* Fix compilation error on 32-bit target, when size_t == bst_uint

* Allocate sufficient memory to hold extra label info

* Use OpenMP to speed up

* Fix compilation on Windows

* Address reviewer's feedback

* Add unit tests for probability distributions

* Make Metric subclass of Configurable

* Address reviewer's feedback: Configure() AFT metric

* Add a dummy test for AFT metric configuration

* Complete AFT configuration test; remove debugging print

* Rename AFT parameters

* Clarify test comment

* Add a dummy test for AFT loss for uncensored case

* Fix a bug in AFT loss for uncensored labels

* Complete unit test for AFT loss metric

* Simplify unit tests for AFT metric

* Add unit test to verify aggregate output from AFT metric

* Use EXPECT_* instead of ASSERT_*, so that we run all unit tests

* Use aft_loss_param when serializing AFTObj

This is to be consistent with AFT metric

* Add unit tests for AFT Objective

* Fix OpenMP bug; clarify semantics for shared variables used in OpenMP loops

* Add comments

* Remove AFT prefix from probability distribution; put probability distribution in separate source file

* Add comments

* Define kPI and kEulerMascheroni in probability_distribution.h

* Add probability_distribution.cc to amalgamation

* Remove unnecessary diff

* Address reviewer's feedback: define variables where they're used

* Eliminate all INFs and NANs from AFT loss and gradient

* Add demo

* Add tutorial

* Fix lint

* Use 'survival:aft' to be consistent with 'survival:cox'

* Move sample data to demo/data

* Add visual demo with 1D toy data

* Add Python tests

Co-authored-by: Philip Cho <chohyu01@cs.washington.edu>
2020-03-25 13:52:51 -07:00
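
A minimal sketch of the new objective on made-up interval-censored data; the label bounds and AFT parameters follow the naming introduced by this commit, while the data itself is illustrative:

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
# Lower/upper bounds on survival time; np.inf in the upper
# bound would mark a right-censored observation.
y_lower = rng.uniform(1.0, 5.0, size=100)
y_upper = y_lower + rng.uniform(0.0, 3.0, size=100)

dtrain = xgb.DMatrix(X)
dtrain.set_float_info('label_lower_bound', y_lower)
dtrain.set_float_info('label_upper_bound', y_upper)

params = {'objective': 'survival:aft',
          'eval_metric': 'aft-nloglik',
          'aft_loss_distribution': 'normal',
          'aft_loss_distribution_scale': 1.20}
booster = xgb.train(params, dtrain, num_boost_round=10,
                    evals=[(dtrain, 'train')])
```
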
Rory Mitchell
1de36cdf1e Add link to GPU documentation (#5437) 2020-03-24 09:29:29 +13:00
sriramch
d2231fc840 Ranking metric acceleration on the gpu (#5398) 2020-03-22 19:38:48 +13:00
Jiaming Yuan
cd7d6f7d59 [dask] Fix missing value for scikit-learn interface. (#5435) 2020-03-20 10:56:01 -04:00
James Lamb
4b7e2b7bff [R-package] fixed uses of class() (#5426)
Thank you a lot. Good catch!
2020-03-20 14:51:20 +01:00
Jiaming Yuan
abca9908ba Support pandas SparseArray. (#5431) 2020-03-20 21:40:22 +08:00
James Lamb
3cf665d3ec [R-package] changed FindLibR to take advantage of CMake cache (#5427) 2020-03-20 03:32:15 +08:00
Jiaming Yuan
760d5d0c3c [dask] Accept other inputs for prediction. (#5428)
* Returns a series when input is dataframe.

* Merge assert client.
2020-03-19 17:05:55 +08:00
Jiaming Yuan
8ca06ab329 [dask] Check non-equal when setting threads. (#5421)
* Check non-equal.

`nthread` can be restored from an internal parameter, which was misinterpreted as a
user-defined parameter.

* Check None.
2020-03-17 13:07:20 +08:00
Jiaming Yuan
b51124c158 [dask] Enable gridsearching with skl. (#5417) 2020-03-16 04:51:51 +08:00
Jiaming Yuan
761a5dbdfc [dask] Honor nthreads from dask worker. (#5414) 2020-03-16 04:51:24 +08:00
Jiaming Yuan
21b671aa06 [dask] Order the prediction result. (#5416) 2020-03-15 19:34:04 +08:00
Jiaming Yuan
668e432e2d [dask] Use DMLC_TASK_ID. (#5415) 2020-03-15 16:47:03 +08:00
Jiaming Yuan
fc88105620 Better error message for updating. (#5418) 2020-03-15 16:46:21 +08:00
Jiaming Yuan
ab7a46a1a4 Check whether current updater can modify a tree. (#5406)
* Check whether current updater can modify a tree.

* Fix tree model JSON IO for pruned trees.
2020-03-14 09:24:08 +08:00
Rory Mitchell
b745b7acce Fix memory usage of device sketching (#5407) 2020-03-14 13:43:24 +13:00
Jan Borchmann
bb8c8df39d [dask] passed through verbose for dask fit (#5413) 2020-03-14 06:33:53 +08:00
Jiaming Yuan
45a97ddf32 Split up LearnerImpl. (#5350) 2020-03-12 16:30:23 +08:00
Rory Mitchell
3ad4333b0e Partial rewrite EllpackPage (#5352) 2020-03-11 10:15:53 +13:00
Darby Payne
7a99f8f27f Adding static library option (#5397) 2020-03-10 18:22:15 +08:00
Bart Broere
a931589c96 Fix typo (#5399) 2020-03-09 19:41:39 +08:00
Rory Mitchell
a38e7bd19c Sketching from adapters (#5365)
* Sketching from adapters

* Add weights test
2020-03-07 21:07:58 +13:00
Jiaming Yuan
0dd97c206b Move thread local entry into Learner. (#5396)
* Move thread local entry into Learner.

This is an attempt to work around the CUDA context issue with static variables, where
the CUDA context can be released before the device vector.

* Add PredictionEntry to thread local entry.

This eliminates one copy of prediction vector.

* Don't define CUDA C API in a namespace.
2020-03-07 15:37:39 +08:00
sriramch
1ba6706167 - create a gpu metrics (internal) registry (#5387)
* - create a gpu metrics (internal) registry
  - the objective is to separate the cpu and gpu implementations such that they evolve
    independently. to that end, this approach will:
    - preserve the same metrics configuration (from the end user perspective)
    - internally delegate the responsibility to the gpu metrics builder when there is a
      valid device present
    - decouple the gpu metrics builder from the cpu ones to prevent misuse
    - move away from including the cuda file from within the cc file and segregate the code
      via ifdef's
2020-03-07 15:31:35 +13:00
Jiaming Yuan
8d06878bf9 Deterministic GPU histogram. (#5361)
* Use pre-rounding based method to obtain reproducible floating point
  summation.
* GPU Hist for regression and classification are bit-by-bit reproducible.
* Add doc.
* Switch to thrust reduce for `node_sum_gradient`.
2020-03-04 15:13:28 +08:00
Philip Hyunsu Cho
9775da02d9 Add release note for 1.0.0 in NEWS.md (#5329)
* Add release note for 1.0.0

* Fix a small bug in the Python script that compiles the list of contributors

* Clarify governance of CI infrastructure; now PMC is formally in charge

* Address reviewer comment

* Fix typo
2020-03-03 21:35:43 -08:00
sriramch
5dc8e894c9 Fixes and changes to the ranking metrics computed on cpu (#5380)
* - fixes and changes to the ranking metrics computed on cpu
  - auc/aucpr ranking metric accelerated on cpu
  - fixes to the auc/aucpr metrics
2020-03-03 15:56:36 +13:00
Darius Kharazi
71a8b8c65a Fix simple typo: information.c -> information (#5384)
Closes #5383
2020-03-03 08:50:14 +08:00
Egor Smirnov
1b97eaf7a7 Optimized ApplySplit, BuildHist and UpdatePredictCache functions on CPU (#5244)
* Split up sparse and dense build hist kernels.
* Add `PartitionBuilder`.
2020-02-29 16:11:42 +08:00
sriramch
b81f8cbbc0 Move segment sorter to common (#5378)
- move segment sorter to common
- this is the first of a handful of PRs that split up the larger PR #5326
- it moves this facility to common (from the ranking objective class), so that it can be
    used for metric computation
- it also wraps all the bare device pointers into span.
2020-02-29 15:42:07 +08:00
Jiaming Yuan
2ba8c13b69 Revert "Enable rabit test (#5358)" (#5377)
This reverts commit 9a5efffebe.
2020-02-29 04:25:03 +08:00
Chen Qin
9a5efffebe Enable rabit test (#5358) 2020-02-28 22:29:02 +08:00
Samrat Pandiri
2d76d40dfd Update dask.rst to correct a spelling mistake (#5371)
Change `signle-node` to `single-node`
2020-02-27 20:46:41 +08:00
Jiaming Yuan
a461a9a90a Define lazy isinstance for Python compat. (#5364)
* Avoid importing datatable.
* Fix #5363.
2020-02-26 14:23:33 +08:00
Jiaming Yuan
0fd455e162 Restore loading model from buffer. (#5360) 2020-02-26 11:30:13 +08:00
Jiaming Yuan
f2b8cd2922 Add number of columns to native data iterator. (#5202)
* Change native data iter into an adapter.
2020-02-25 23:42:01 +08:00
Jiaming Yuan
e0509b3307 Fix pruner. (#5335)
* Honor the tree depth.
* Prevent pruning pruned node.
2020-02-25 08:32:46 +08:00
Rory Mitchell
b0ed3f0a66 Remove unnecessary DMatrix methods (#5324) 2020-02-25 12:40:39 +13:00
Jiaming Yuan
655cf17b60 Predict on Ellpack. (#5327)
* Unify GPU prediction node.
* Add `PageExists`.
* Dispatch prediction on input data for GPU Predictor.
2020-02-23 06:27:03 +08:00
daiki katsuragawa
70a91ec3ba Update README.md (#5346) 2020-02-23 02:52:37 +08:00
Philip Hyunsu Cho
cfae247231 Fix a small typo in sklearn.py that broke multiple eval metrics (#5341) 2020-02-22 19:02:37 +08:00
Rong Ou
d6b31df449 update docs for gpu external memory (#5332)
* update docs for gpu external memory

* add hist limitation
2020-02-22 14:57:40 +08:00
Philip Hyunsu Cho
7ac7e8778f Port patches from 1.0.0 branch (#5336)
* Remove f-string, since it's not supported by Python 3.5 (#5330)

* Remove f-string, since it's not supported by Python 3.5

* Add Python 3.5 to CI, to ensure compatibility

* Remove duplicated matplotlib

* Show deprecation notice for Python 3.5

* Fix lint

* Fix lint

* Fix a unit test that mistook MINOR ver for PATCH ver

* Enforce only major version in JSON model schema

* Bump version to 1.1.0-SNAPSHOT
2020-02-21 13:13:21 -08:00
Philip Hyunsu Cho
8aa8ef1031 Display Sponsor button, link to OpenCollective (#5325) 2020-02-19 01:58:21 -08:00
Rory Mitchell
bc96ceb8b2 Refactor SparsePageSource, delete cache files after use (#5321)
* Refactor sparse page source

* Delete temporary cache files

* Log fatal if cache exists

* Log fatal if multiple threads used with prefetcher
2020-02-19 16:43:41 +13:00
Rory Mitchell
b2b2c4e231 Remove SimpleCSRSource (#5315) 2020-02-18 16:49:17 +13:00
Jiaming Yuan
9f77c18b0d Add JVM_CHECK_CALL. (#5199)
* Added a check-call macro in the JVM package, preventing the JVM from executing other
functions when an error has occurred in XGBoost. For example, when prediction fails the JVM
should not try to allocate memory based on the output prediction size.
2020-02-18 11:10:55 +08:00
Jiaming Yuan
0110754a76 Remove update prediction cache from predictors. (#5312)
Move this function into gbtree, using only the updater for doing so. As the predictor now knows exactly how many trees to predict, there's no need for it to update the prediction cache.
2020-02-17 11:35:47 +08:00
Jiaming Yuan
e433a379e4 Fix changing locale. (#5314)
* Fix changing locale.

* Don't use locale guard.

As number parsing is implemented in house, we don't need locale.

* Update doc.
2020-02-17 11:31:13 +08:00
Rory Mitchell
7e32af5c21 Wide dataset quantile performance improvement (#5306) 2020-02-16 10:24:42 +13:00
Jiaming Yuan
ed2465cce4 Add configuration to R interface. (#5217)
* Save and load internal parameter configuration as JSON.
2020-02-16 03:01:58 +08:00
Jiaming Yuan
8ca9744b07 Use scikit-learn in extra dependencies. (#5310) 2020-02-15 07:12:51 +08:00
Jiaming Yuan
c35cdecddd Move prediction cache to Learner. (#5220)
* Move prediction cache into Learner.

* Clean-ups

- Remove duplicated cache in Learner and GBM.
- Remove ad-hoc fix of invalid cache.
- Remove `PredictFromCache` in predictors.
- Remove prediction cache for linear altogether, as it's only moving the
  prediction into training process but doesn't provide any actual overall speed
  gain.
- The cache is now unique to Learner, which means the ownership is no longer
  shared by any other components.

* Changes

- Add version to prediction cache.
- Use weak ptr to check expired DMatrix.
- Pass shared pointer instead of raw pointer.
2020-02-14 13:04:23 +08:00
Rory Mitchell
24ad9dec0b Testing hist_util (#5251)
* Rank tests

* Remove categorical split specialisation

* Extend tests to multiple features, switch to WQSketch

* Add tests for SparseCuts

* Add external memory quantile tests, fix some existing tests
2020-02-14 14:36:43 +13:00
Jiaming Yuan
911a902835 Merge model compatibility fixes from 1.0rc branch. (#5305)
* Port test model compatibility.
* Port logit model fix.

https://github.com/dmlc/xgboost/pull/5248
https://github.com/dmlc/xgboost/pull/5281
2020-02-13 20:41:58 +08:00
Jiaming Yuan
29eeea709a Pass shared pointer instead of raw pointer to Learner. (#5302)
Extracted from https://github.com/dmlc/xgboost/pull/5220 .
2020-02-11 14:16:38 +08:00
Philip Hyunsu Cho
2e0067e790 Update affiliation of @hcho3 (#5292) 2020-02-06 20:58:39 -08:00
Andrew Kane
94828a7c0c Updated Windows build docs (#5283) 2020-02-05 12:19:54 +08:00
Jiaming Yuan
84e395d91e Fix CMake build on Windows with setuptools. (#5280) 2020-02-05 10:47:39 +08:00
Jiaming Yuan
595a00466d Rewrite setup.py. (#5271)
The setup.py is rewritten.  This new script uses only Python code and provides customized
implementations of setuptools commands.  This way users can run most setuptools commands
just like with any other Python library.

* Remove setup_pip.py
* Remove soft links.
* Define customized commands.
* Remove shell script.
* Remove makefile script.
* Update the doc for building from source.
2020-02-04 13:35:42 +08:00
Rong Ou
e4b74c4d22 Gradient based sampling for GPU Hist (#5093)
* Implement gradient based sampling for GPU Hist tree method.
* Add samplers and handle compacted page in GPU Hist.
2020-02-04 10:31:27 +08:00
Philip Hyunsu Cho
c74216f22c Declare Python 3.8 support in setup.py (#5274) 2020-02-03 10:38:52 -08:00
David Díaz Vico
71e7e3b96f Improved sklearn compatibility (#5255) 2020-02-03 13:30:45 +08:00
Jiaming Yuan
a5cc112eea Export JSON config in get_params. (#5256) 2020-02-03 12:46:51 +08:00
Jiaming Yuan
ed0216642f Avoid dask test fixtures. (#5270)
* Fix Travis OSX timeout.

* Fix classifier.
2020-02-03 12:39:20 +08:00
Jiaming Yuan
856b81c727 Ignore gdb_history. [skip ci] (#5257) 2020-02-02 20:40:09 +08:00
Nan Zhu
d7b45fbcaf [jvm-packages] do not use multiple jobs to make checkpoints (#5082)
* temp

* temp

* tep

* address the comments

* fix stylistic issues

* fix

* external checkpoint
2020-02-01 19:36:39 -08:00
Philip Hyunsu Cho
fa26313feb Remove use of std::cout from R package (#5261) 2020-02-01 05:52:19 -08:00
Jiaming Yuan
fe8d72b50b Cleanup warnings. (#5247)
From clang-tidy-9 and gcc-7: Invalid case style, narrowing definition, wrong
initialization order, unused variables.
2020-01-31 14:52:15 +08:00
Philip Hyunsu Cho
adc795929a Fix compilation error due to 64-bit integer narrowing to size_t (#5250) 2020-01-30 21:32:15 -08:00
Jiaming Yuan
472ded549d Save Scikit-Learn attributes into learner attributes. (#5245)
* Remove the recommendation for pickle.

* Save skl attributes in booster.attr

* Test loading scikit-learn model with native booster.
2020-01-30 16:00:18 +08:00
Egor Smirnov
c67163250e Optimized BuildHist function (#5156) 2020-01-29 23:32:57 -08:00
Philip Hyunsu Cho
4240daed4e Make pip install xgboost*.tar.gz work by fixing build-python.sh (#5241)
* Make pip install xgboost*.tar.gz work by fixing build-python.sh

* Simplify install doc

* Add test

* Install Miniconda for Linux target too

* Build XGBoost only once in sdist

* Try importing xgboost after installation

* Don't set PYTHONPATH env var for sdist test
2020-01-28 23:18:23 -08:00
Philip Hyunsu Cho
cb3ed404cf [R] Enable OpenMP with AppleClang in XGBoost R package (#5240)
* [R] Enable OpenMP with AppleClang in XGBoost R package

* Dramatically simplify install doc
2020-01-28 12:37:22 -08:00
Philip Hyunsu Cho
f7105fa44f Update Rabit (#5237) 2020-01-28 02:05:01 -08:00
Jiaming Yuan
43974939f4 Better error message about loading pickled model. (#5236)
Print the link to the new tutorial.
2020-01-28 15:29:14 +08:00
Philip Hyunsu Cho
b513dcd352 Support FreeBSD (#5233)
* Fix build on FreeBSD

* Use __linux__ macro
2020-01-27 22:19:16 -08:00
Jiaming Yuan
ef19480eda Add dart to JSON schema. (#5218)
* Add dart to JSON schema.

* Use spaces instead of tab.
2020-01-28 13:29:09 +08:00
Philip Hyunsu Cho
0c7455276d [R] Robust endian detection in CRAN xgboost build (#5232)
* [R] Robust endian detection in CRAN xgboost build

* Check for external backtrace() lib

* Update Makevars.win
2020-01-27 02:55:13 -08:00
Rory Mitchell
1b3947d929 Make some GPU tests deterministic (#5229) 2020-01-26 11:53:07 +13:00
Jiaming Yuan
3eb1279bbf Config for linear updaters. (#5222) 2020-01-25 11:26:46 +08:00
Jiaming Yuan
40680368cf Add constraint parameters to Scikit-Learn interface. (#5227)
* Add document for constraints.

* Fix a format error in doc for objective function.
2020-01-25 11:12:02 +08:00
Philip Hyunsu Cho
44469a0ca9 Extensible binary serialization format for DMatrix::MetaInfo (#5187)
* Turn xgboost::DataType into C++11 enum class

* New binary serialization format for DMatrix::MetaInfo

* Fix clang-tidy

* Fix c++ test

* Implement new format proposal

* Move helper functions to anonymous namespace; remove unneeded field

* Fix lint

* Add shape.

* Keep only roundtrip test.

* Fix test.

* various fixes

* Update data.cc

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2020-01-23 11:33:17 -08:00
OrdoAbChao
b4f952bd22 [Breaking] Remove Scikit-Learn default parameters (#5130)
* Simplify Scikit-Learn parameter management.

* Copy base class for removing duplicated parameter signatures.
* Set all parameters to None.
* Handle None in set_param.
* Extract the doc.

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2020-01-23 20:25:20 +08:00
Rory Mitchell
aa9a68010b uint not supported in cudf (#5225) 2020-01-23 16:59:18 +13:00
Jiaming Yuan
1891cc766d Fix metainfo from DataFrame. (#5216)
* Fix metainfo from DataFrame.

* Unify helper functions for data and meta.
2020-01-22 16:29:44 +08:00
Rory Mitchell
5d4c24a1fc Fix cupy without cudf import (#5219) 2020-01-22 18:02:39 +13:00
Rory Mitchell
9c56480c61 Support dmatrix construction from cupy array (#5206) 2020-01-22 13:15:27 +13:00
Philip Hyunsu Cho
2a071cebc5 Add CMake option to run Undefined Behavior Sanitizer (UBSan) (#5211)
* Fix related errors.
2020-01-20 16:57:44 +08:00
mattn
ff1342b252 Fix compilation error (#5215) 2020-01-18 23:51:07 +08:00
Crissman Loomis
e526871f0a Add Optuna badge to README.md (#5208)
* Update README.md
2020-01-16 12:22:19 +08:00
Jiaming Yuan
5199b86126 Fix R dart prediction. (#5204)
* Fix R dart prediction and add test.
2020-01-16 12:11:04 +08:00
Jiaming Yuan
808f61081b Update R doc by roxygen2. (#5201) 2020-01-15 10:49:17 +08:00
Philip Hyunsu Cho
0184f2e9f7 Explicitly use UTF-8 codepage when using MSVC (#5197)
* Explicitly use UTF-8 codepage when using MSVC

* Fix build with CUDA enabled
2020-01-14 13:30:34 -08:00
Rory Mitchell
a73e25e15f Implement slice via adapters (#5198) 2020-01-14 12:55:41 +13:00
Kodi Arfer
f100b8d878 [Breaking] Don't drop trees during DART prediction by default (#5115)
* Simplify DropTrees calling logic

* Add `training` parameter for prediction method.

* [Breaking]: Add `training` to C API.

* Change for R and Python custom objective.

* Correct comment.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2020-01-13 21:48:30 +08:00
Jiaming Yuan
7b65698187 Enforce correct data shape. (#5191)
* Fix syncing DMatrix columns.
* notes for tree method.
* Enable feature validation for all interfaces except for jvm.
* Better tests for boosting from predictions.
* Disable validation on JVM.
2020-01-13 15:48:17 +08:00
Rory Mitchell
8cbcc53ccb Remove old cudf constructor code (#5194) 2020-01-10 16:35:23 +13:00
Rory Mitchell
87ebfc1315 Implement cudf construction with adapters. (#5189) 2020-01-09 20:23:06 +13:00
Rory Mitchell
9559f81377 Move SimpleDMatrix constructor to .cc file (#5188) 2020-01-09 14:20:13 +13:00
cpfarrell
9049c7c653 Add new lines for Spark XGBoost missing values section (#5180) 2020-01-07 12:14:16 +08:00
Jiaming Yuan
ee287808fb Lazy initialization of device vector. (#5173)
* Lazy initialization of device vector.

* Fix #5162.

* Disable copy constructor of HostDeviceVector.  Prevents implicit copying.

* Fix CPU build.

* Bring back move assignment operator.
2020-01-07 11:23:05 +08:00
Jiaming Yuan
77cfbff5a7 Fix span constructor. (#5166)
* Make sure copy constructor is used.
2020-01-07 11:19:38 +08:00
Jiaming Yuan
ebc86a3afa Disable parameter validation for Scikit-Learn interface. (#5167)
* Disable parameter validation for now.

Scikit-Learn passes all parameters down to XGBoost, whether they are used or
not.

* Add option `validate_parameters`.
2020-01-07 11:17:31 +08:00
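
A minimal sketch of the new option on made-up data; with validation enabled, parameters unknown to XGBoost (e.g. a typo such as 'max_dept') get flagged instead of being silently ignored:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.randn(50, 5), np.random.randint(0, 2, 50)
dtrain = xgb.DMatrix(X, label=y)

params = {'validate_parameters': True,   # opt in to validation
          'objective': 'binary:logistic',
          'max_depth': 3}
xgb.train(params, dtrain, num_boost_round=5)
```
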
Jiaming Yuan
2b9a62a806 Throw error when not compiled with NCCL. (#5170) 2020-01-07 11:15:19 +08:00
Tim Gates
2d95b9a4b6 Fix simple typo: utilty -> utility (#5182) 2020-01-04 15:28:26 +08:00
Egor Smirnov
7b17e76c5b Optimized EvaluateSplit function (#5138)
* Add block based threading utilities.
2019-12-31 18:18:42 +08:00
Jiaming Yuan
04db125699 Quick fix for memory leak in CPU Hist. (#5153)
Closes https://github.com/dmlc/xgboost/issues/3579 .

* Don't use map.
2019-12-31 14:05:53 +08:00
K.O
018df6004e Fix feature_name created from Int64Index dataframe. (#5081) 2019-12-30 12:26:22 +08:00
Jiaming Yuan
139ccc9902 Fix num_roots to be 1. (#5165) 2019-12-30 02:18:45 +08:00
Jiaming Yuan
d55489af14 Don't use modernize-use-trailing-return-type. (#5169) 2019-12-29 20:18:23 +08:00
Jiaming Yuan
6848d0426f Clean up Python 2 compatibility code. (#5161) 2019-12-27 18:34:53 +08:00
Jiaming Yuan
61286c6e8f Fix wrapping GPU ID and prevent data copying. (#5160)
* Removed some data copying.

* Make sure gpu_id is valid before any configuration is carried out.
2019-12-27 16:51:08 +08:00
sriramch
ee81ba8e1f implementation of map ranking algorithm on gpu (#5129)
* - implementation of the MAP ranking algorithm
  - also applied the suggestions mentioned in the earlier ranking PRs
  - made some performance improvements to the NDCG algo as well
2019-12-27 12:05:37 +13:00
Philip Hyunsu Cho
9b0af6e882 Enable OpenMP with Apple Clang (Mac default compiler) (#5146)
* Add OpenMP as CMake target

* Require CMake 3.12, to allow linking OpenMP target to objxgboost

* Specify OpenMP compiler flag for CUDA host compiler

* Require CMake 3.16+ if the OS is Mac OSX

* Use AppleClang in Mac tests.

* Update dmlc-core
2019-12-26 16:53:12 +08:00
Jiaming Yuan
f3d7877802 Parameter validation (#5157)
* Unused code.

* Split up old colmaker parameters from train param.

* Fix dart.

* Better name.
2019-12-26 11:59:05 +08:00
Jiaming Yuan
ced3660f60 Tests for empty dmatrix. (#5159) 2019-12-26 11:51:54 +08:00
Jiaming Yuan
298ebe68ac [Breaking] Remove learning_rates in Python. (#5155)
* Remove `learning_rates`.

It's been deprecated since we have callback.

* Set `before_iteration` of `reset_learning_rate` to False to preserve
  the initial learning rate, and comply with the term "reset".

Closes #4709.

* Tests for various `tree_method`.
2019-12-24 14:25:48 +08:00
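
A minimal sketch of the callback that replaces the removed `learning_rates` argument, on made-up data:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.randn(100, 10), np.random.randn(100)
dtrain = xgb.DMatrix(X, label=y)

# One learning rate per boosting round, applied via the callback.
rates = [0.3, 0.2, 0.1, 0.05, 0.01]
booster = xgb.train({'objective': 'reg:squarederror'}, dtrain,
                    num_boost_round=len(rates),
                    callbacks=[xgb.callback.reset_learning_rate(rates)])
```
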
Jiaming Yuan
73b1bd2789 Update demo for ranking. (#5154) 2019-12-24 13:39:07 +08:00
Jiaming Yuan
0202e04a8e Add base margin to sklearn interface. (#5151) 2019-12-24 09:43:41 +08:00
Jiaming Yuan
1d0ca49761 Example JSON model parser and Schema. (#5137) 2019-12-23 19:47:35 +08:00
Jiaming Yuan
a4b929385e Note for DaskDMatrix. (#5144)
* Brief introduction to `DaskDMatrix`.

* Add xgboost.dask.train to API doc
2019-12-23 18:55:32 +08:00
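
A minimal sketch of `DaskDMatrix` with `xgboost.dask.train`, assuming a local dask cluster and made-up dask arrays:

```python
from dask import array as da
from dask.distributed import Client, LocalCluster
import xgboost as xgb

if __name__ == '__main__':
    with LocalCluster(n_workers=2) as cluster, Client(cluster) as client:
        X = da.random.random((1000, 10), chunks=(100, 10))
        y = da.random.random(1000, chunks=100)
        # DaskDMatrix holds references to the partitions on each worker.
        dtrain = xgb.dask.DaskDMatrix(client, X, y)
        output = xgb.dask.train(client, {'tree_method': 'hist'},
                                dtrain, num_boost_round=10)
        booster = output['booster']   # trained Booster
        preds = xgb.dask.predict(client, booster, dtrain)
```
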
Jiaming Yuan
c8bdb652c4 Add check for length of weights. (#4872) 2019-12-21 11:30:58 +08:00
Rory Mitchell
3d04a8cc97 Use dynamic types for array interface columns instead of templates (#5108) 2019-12-21 16:08:10 +13:00
Jiaming Yuan
b915788708 Remove benchmark code in GPU test. (#5141)
* Update Jenkins script.
2019-12-21 11:00:21 +08:00
Philip Hyunsu Cho
74f545bde3 [CI] Repair download URL for Maven 3.6.1 (#5139) 2019-12-20 10:07:40 +08:00
Jiaming Yuan
e521bb6f83 Added new train_folds parameter (#5114)
https://stackoverflow.com/questions/32433458/how-to-specify-train-and-test-indices-for-xgb-cv-in-r-package-xgboost/51412073
2019-12-19 15:02:19 +01:00
Philip Hyunsu Cho
37fdfa03f8 [jvm-packages] Comply with scala style convention + fix broken unit test (#5134)
* Fix scala style check

* fix messed unit test
2019-12-18 17:26:58 -08:00
cpfarrell
bc9d88259f [jvm-packages] Allow for bypassing spark missing value check (#4805)
* Allow for bypassing spark missing value check

* Update documentation for dealing with missing values in spark xgboost
2019-12-18 10:48:20 -08:00
Jiaming Yuan
27b3646d29 Tests and documents for new JSON routines. (#5120) 2019-12-18 08:44:27 +08:00
Jiaming Yuan
63ffd2f686 Check against R seed. (#5125)
* Handle it in R instead.
2019-12-17 19:14:59 +08:00
Jiaming Yuan
2fdb34ed2e Fix metric name loading. (#5122) 2019-12-16 10:14:02 +08:00
Jiaming Yuan
3136185bc5 JSON configuration IO. (#5111)
* Add saving/loading JSON configuration.
* Implement Python pickle interface with new IO routines.
* Basic tests for training continuation.
2019-12-15 17:31:53 +08:00
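
A minimal round-trip sketch of the new configuration IO on made-up data:

```python
import json
import numpy as np
import xgboost as xgb

X, y = np.random.randn(50, 5), np.random.randn(50)
booster = xgb.train({'max_depth': 2}, xgb.DMatrix(X, label=y),
                    num_boost_round=2)

config = booster.save_config()   # internal configuration as JSON text
print(list(json.loads(config)))  # top-level keys, e.g. 'learner'
booster.load_config(config)      # restore the saved configuration
```
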
Rory Mitchell
5aa007d7b2 Fix visual studio output library directories (#5119) 2019-12-15 15:08:20 +13:00
Jiaming Yuan
ad4a1c732c Small refinements for JSON model. (#5112)
* Naming consistency.

* Remove duplicated test.
2019-12-11 19:49:01 +08:00
Jiaming Yuan
208ab3b1ff Model IO in JSON. (#5110) 2019-12-11 11:20:40 +08:00
Rory Mitchell
c7cc657a4d Use adapters for SparsePageDMatrix (#5092) 2019-12-11 15:59:23 +13:00
Jiaming Yuan
e089e16e3d Pass pointer to model parameters. (#5101)
* Pass pointer to model parameters.

This PR de-duplicates most of the model parameters, except the one in
`tree_model.h`.  One difficulty is that `base_score` is a model property but can be
changed at runtime by the objective function.  Hence when performing model IO, we
need to save the value provided by users, instead of the one transformed by the
objective.  Here we created an immutable version of `LearnerModelParam` that
represents the value of the model parameter after configuration.
2019-12-10 12:11:22 +08:00
Rory Mitchell
979f74d51a Group builder modified for incremental building (#5098) 2019-12-10 14:33:56 +13:00
Jiaming Yuan
1cb6bcc382 Remove dead code in colmaker. (#5105) 2019-12-10 09:32:37 +08:00
Egor Smirnov
b1789b0346 added tracking execution time for UpdatePredictionCache function (#5107) 2019-12-10 01:32:56 +08:00
Jiaming Yuan
38763aa4fa Update document for tree_method. [skip ci] (#5106) 2019-12-09 22:55:00 +08:00
Jiaming Yuan
608ebbe444 Fix GPU ID and prediction cache from pickle (#5086)
* Hack for saving GPU ID.

* Declare prediction cache on GBTree.

* Add a simple test.

* Add `auto` option for GPU Predictor.
2019-12-07 16:02:06 +08:00
Jiaming Yuan
7ef5b78003 Implement JSON IO for updaters (#5094)
* Implement JSON IO for updaters.

* Remove parameters in split evaluator.
2019-12-07 00:24:00 +08:00
Jiaming Yuan
2dcb62ddfb Add IO utilities. (#5091)
* Add fixed size stream for reading model stream.
* Add file extension.
2019-12-05 22:15:34 +08:00
Jiaming Yuan
64af1ecf86 [Breaking] Remove num roots. (#5059) 2019-12-05 21:58:43 +08:00
Jiaming Yuan
f3d8536702 Don't use 0 for "fresh leaf". (#5084)
* Allow using right child as marker for Exact tree_method.
2019-12-05 11:50:51 +08:00
Jiaming Yuan
df9bdbbcb9 Fix parsing empty vector in parameter. (#5087) 2019-12-05 11:42:01 +08:00
Jiaming Yuan
f5e13dcb9b Implement training observer. (#5088) 2019-12-05 11:12:20 +08:00
Jiaming Yuan
f0ca53d9ec Convenient methods for JSON integer. (#5089)
* Fix parsing empty object.
2019-12-05 11:01:12 +08:00
yage
dcde433402 Fix MacOS build error. (#5080)
* Update build script.
* Update build doc.
2019-12-04 19:34:35 +08:00
Rory Mitchell
e3c34c79be External data adapters (#5044)
* Use external data adapters as lightweight intermediate layer between external data and DMatrix
2019-12-04 10:56:17 +13:00
Kodi Arfer
f2277e7106 Use DART tree weights when computing SHAPs (#5050)
This PR fixes tree weights in dart being ignored when computing contributions.

* Fix ellpack page source link.
* Add tree weights to compute contribution.
2019-12-03 19:55:53 +08:00
Philip Hyunsu Cho
64f4361b47 [CI] Locate vcomp140.dll from System32 directory (#5078) 2019-12-02 02:09:32 -08:00
Jiaming Yuan
761e938dbe Support dask dataframe as y for classifier. (#5077)
* Support dask dataframe as y for classifier.

* Lint.
2019-12-02 11:53:30 +08:00
yage
b9dbfe0931 Update doc for building on OSX (#5074)
Co-Authored-By: Jiaming Yuan <jm.yuan@outlook.com>
2019-11-28 20:14:44 +08:00
Jiaming Yuan
9f52e834dc [doc] Some notes for external memory. (#5065) 2019-11-26 00:22:02 +08:00
Jiaming Yuan
d667ea9335 [CI] Fix Travis tests. (#5062)
- Install wget explicitly to match openssl.
- Install CMake explicitly.
- Use newer miniconda link.
- Reenable unittests.
- gcc@9 + xcode@10 for osx due to missing <_stdio.h>.  Other versions of gcc should also work, but as Homebrew pours gcc@9 by default after updating, I just stick with the latest version.
- Disabled one external memory test for OSX.  Not sure about the thread implementation there, and fixing external memory is beyond the scope of this PR.
- Use Python3 with conda in jvm package.
2019-11-25 03:32:10 +08:00
Jiaming Yuan
04c640f562 Add cuDF DataFrame to doc. (#5053) 2019-11-19 18:29:40 +08:00
Jiaming Yuan
a4f5c86276 Allow using RandomState object from Numpy in sklearn interface. (#5049) 2019-11-19 10:56:39 +08:00
Philip Hyunsu Cho
4d2779663e Require Python 3.5+ in setup.py (#5021) 2019-11-18 18:55:58 -08:00
Jiaming Yuan
98b051269b Assert dask client at early stage. (#5048) 2019-11-19 10:55:26 +08:00
Rory Mitchell
e67388fb8f Some guidelines on device memory usage (#5038)
* Add memory usage demo

* Update documentation
2019-11-17 07:48:24 +13:00
Rong Ou
0afcc55d98 Support multiple batches in gpu_hist (#5014)
* Initial external memory training support for GPU Hist tree method.
2019-11-16 14:50:20 +08:00
Jiaming Yuan
97abcc7ee2 Extract interaction constraint from split evaluator. (#5034)
*  Extract interaction constraints from split evaluator.

The reason for doing so is mostly model IO, where num_feature and interaction_constraints are copied in the split evaluator. Also, the interaction constraint by itself is a feature selector, acting like the column sampler, and it's inefficient to bury it deep in the evaluator chain. Lastly, removing another copied parameter is a win.

*  Enable interaction constraints for the approx tree method.

As the implementation is now split out of the evaluator class, it's also enabled for the approx method.

*  Remove obsolete code in colmaker.

These code paths were never documented nor actually used in the real world, and there isn't a single test for them.

*  Unify the types used for rows and columns.

As input dataset sizes march toward billions of rows, incorrect use of int is subject to overflow; moreover, signed integer overflow is undefined behaviour. This PR starts the process of unifying the index types to unsigned integers. There are optimizations that can exploit this undefined behaviour, but after some testing I don't see them being beneficial to XGBoost.
2019-11-14 20:11:41 +08:00
Sebastian
886bf93ba4 fix: reset hit counter for next batch (#5035) 2019-11-14 10:22:02 +08:00
sriramch
2abe69d774 - ndcg ltr implementation on gpu (#5004)
* - ndcg ltr implementation on gpu
  - this is a follow-up to the pairwise ltr implementation
2019-11-13 11:21:04 +13:00
Philip Hyunsu Cho
f4e7b707c9 Revert #4529 (#5008)
* Revert " Optimize ‘hist’ for multi-core CPU (#4529)"

This reverts commit 4d6590be3c.

* Fix build
2019-11-12 09:35:03 -08:00
KaiJin Ji
1733c9e8f7 Improve operation efficiency for single predict (#5016)
* Improve operation efficiency for single predict
2019-11-10 02:01:28 +08:00
mitama
374648c21a Add better error message for invalid feature names (#5024) 2019-11-10 01:58:14 +08:00
Jiaming Yuan
7663de956c Run training with empty DMatrix. (#4990)
This makes GPU Hist robust in distributed environments, as some workers might not
be associated with any data in either training or evaluation.

* Disable rabit mock test for now: See #5012 .

* Disable dask-cudf test at prediction for now: See #5003

* Launch dask job for all workers despite they might not have any data.
* Check 0 rows in elementwise evaluation metrics.

   Using AUC and AUC-PR still throws an error.  See #4663 for a robust fix.

* Add tests for edge cases.
* Add `LaunchKernel` wrapper handling zero sized grid.
* Move some parts of allreducer into a cu file.
* Don't validate feature names when the booster is empty.

* Sync number of columns in DMatrix.

  As num_feature is required to be the same across all workers in data split
  mode.

* Filtering in the dask interface now by default syncs all boosters that are not
empty, instead of using rank 0.

* Fix Jenkins' GPU tests.

* Install dask-cuda from source in Jenkins' test.

  Now all tests are actually running.

* Restore GPU Hist tree synchronization test.

* Check UUID of running devices.

  The check is only performed on CUDA version >= 10.x, as 9.x doesn't have UUID field.

* Fix CMake policy and project variables.

  Use xgboost_SOURCE_DIR uniformly, add policy for CMake >= 3.13.

* Fix copying data to CPU

* Fix race condition in cpu predictor.

* Fix duplicated DMatrix construction.

* Don't download extra nccl in CI script.
2019-11-06 16:13:13 +08:00
Christopher Cowden
807a244517 Fix repeated split and 0 cover nodes (#5010) 2019-11-06 14:57:22 +08:00
Chen Qin
b29b8c2f34 [jvm-packages] update rabit, surface new changes to spark, add parity and failure tests (#4966)
* [phase 1] expose sets of rabit configurations to spark layer

* add back mutable import

* disable ring_mincount till https://github.com/dmlc/rabit/pull/106d

* Revert "disable ring_mincount till https://github.com/dmlc/rabit/pull/106d"

This reverts commit 65e95a98e24f5eb53c6ba9ef9b2379524258984d.

* apply latest rabit

* fix build error

* apply https://github.com/dmlc/xgboost/pull/4880

* downgrade cmake in rabit

* point to rabit with DMLC_ROOT fix

* relative path of rabit install prefix

* split rabit parameters to another trait

* misc

* misc

* Delete .classpath

* Delete .classpath

* Delete .classpath

* Update XGBoostClassifier.scala

* Update XGBoostRegressor.scala

* Update GeneralParams.scala

* Update GeneralParams.scala

* Update GeneralParams.scala

* Update GeneralParams.scala

* Delete .classpath

* Update RabitParams.scala

* Update .gitignore

* Update .gitignore

* apply rabitParams to training

* use string as rabit parameter value type

* cleanup

* add rabitEnv check

* point to dmlc/rabit

* per feedback

* update private scope

* misc

* update rabit

* add rabit_timeout, fix failing test.

* split tests

* allow build jvm with rabit mock

* pass mock failures to rabit with test

* add mock error and graceful handle rabit assertion error test

* split mvn test

* remove sign for test

* update rabit

* build jvm_packages with rabit mock

* point back to dmlc/rabit

* per feedback, update scala header

* cleanup pom

* per feedback

* try fix lint

* fix lint

* per feedback, remove bootstrap_cache

* per feedback 2

* try replace dev profile with passing mvn property

* fix build error

* remove mvn property and replace with env setting to build test jar

* per feedback

* revert copyright headlines, point to dmlc/rabit

* revert python lint

* remove multiple failure test case as retry is not enabled in spark

* Update core.py

* Update core.py

* per feedback, style fix
2019-11-01 14:21:19 -07:00
Philip Hyunsu Cho
a37691428f Document minimum version required for gtest [skip ci] (#5001) 2019-10-31 15:47:50 -07:00
Jiaming Yuan
6fac40cfb4 Add asan.so.5 to cmake script. (#4999) 2019-10-30 16:03:23 -04:00
Jiaming Yuan
755a606201 Fix dart usegpu. (#4984) 2019-10-28 06:12:04 -04:00
Jiaming Yuan
6ec7e300bd Fix external memory race in colmaker. (#4980)
* Move `GetColDensity` out of omp parallel block.
2019-10-25 04:11:13 -04:00
Philip Hyunsu Cho
96cd7ec2bb [CI] Upload master branch artifacts to S3 root [skip ci] (#4979) 2019-10-23 22:39:04 -07:00
Philip Hyunsu Cho
da6e74f7bb [CI] Upload nightly builds to S3 (#4976)
* Do not store built artifacts in the Jenkins master

* Add wheel renaming script

* Upload wheels to S3 bucket

* Use env.GIT_COMMIT

* Capture git hash correctly

* Add missing import in Jenkinsfile

* Address reviewer's comments

* Put artifacts for pull requests in separate directory

* No wildcard expansion in Windows CMD
2019-10-23 21:16:05 -07:00
Jiaming Yuan
ac457c56a2 Use `UpdateAllowUnknown` for non-model-related parameters. (#4961)
* Use `UpdateAllowUnknown` for non-model-related parameters.

The model parameter cannot pack an additional boolean value due to the binary IO
format.  This commit deals only with non-model-related parameter configuration.

* Add a tidy command line arg for use-dmlc-gtest.
2019-10-23 05:50:12 -04:00
Jiaming Yuan
f24be2efb4 Use configure_file() to configure version only (#4974)
* Avoid writing build_config.h

* Remove build_config.h all together.

* Lint.
2019-10-22 23:47:00 -07:00
Rong Ou
5b1715d97c Write ELLPACK pages to disk (#4879)
* add ellpack source
* add batch param
* extract function to parse cache info
* construct ellpack info separately
* push batch to ellpack page
* write ellpack page.
* make sparse page source reusable
2019-10-22 23:44:32 -04:00
sriramch
310fe60b35 Pairwise ranking objective implementation on gpu (#4873)
* - pairwise ranking objective implementation on gpu
   - there are a couple more algorithms (NDCG and MAP) for which support will be added
     as follow-up PRs
   - with no label groups defined, gradient computation is 90x faster on gpu (120M-instance
     mortgage dataset)
   - it can perform an order of magnitude faster with ~10 groups (and adequate cores
     for the cpu implementation)

* Add JSON config to rank obj.
2019-10-22 23:40:07 -04:00
Jiaming Yuan
5620322a48 [Breaking] Add global versioning. (#4936)
* Use CMake config file for representing version.

* Generate c and Python version file with CMake.

The generated file is written into the source tree.  But unless XGBoost upgrades
its version, there will be no actual modification.  This retains compatibility
with the Makefiles for R.

* Add the XGBoost version to the DMatrix binaries.
* Simplify prefetch detection in CMakeLists.txt
2019-10-22 23:27:26 -04:00
Jiaming Yuan
7e477a2adb Fix data loading (#4862)
* Fix loading text data.
* Fix config regex.
* Try to explain the error better in exception.
* Update doc.
2019-10-22 12:33:14 -04:00
Philip Hyunsu Cho
95295ce026 [CI] Use latest dask (#4973)
* Remove version spec, to use latest dask always
2019-10-22 07:00:13 -04:00
Philip Hyunsu Cho
741fbf47c4 [CI] Update lint configuration to support latest pylint convention (#4971)
* Update lint configuration

* Use gcc 8 consistently in build instruction
2019-10-21 16:40:57 -07:00
Jiaming Yuan
4771bb0d41 Catch exception in transform function omp context. (#4960) 2019-10-21 17:03:38 +08:00
Jiaming Yuan
010b8f1428 Revert "[jvm-packages] update rabit, surface new changes to spark, add parity and failure tests (#4876)" (#4965)
This reverts commit 86ed01c4bb.
2019-10-18 14:02:35 -07:00
Chen Qin
86ed01c4bb [jvm-packages] update rabit, surface new changes to spark, add parity and failure tests (#4876)
* Expose sets of rabit configurations to spark layer
2019-10-18 15:07:31 -04:00
Jiaming Yuan
31030a8d3a Set correct file permission. (#4964) 2019-10-18 12:54:29 -04:00
Jiaming Yuan
ae536756ae Add Model and Configurable interface. (#4945)
* Apply Configurable to objective functions.
* Apply Model to Learner and Regtree, gbm.
* Add Load/SaveConfig to objs.
* Refactor obj tests to use smart pointer.
* Dummy methods for Save/Load Model.
2019-10-18 01:56:02 -04:00
Jiaming Yuan
9fc681001a Copy CMake parameter from dmlc-core. (#4948) 2019-10-17 23:46:32 -04:00
Jacob Kim
a78d4e7aa8 Follow PEP 257 -- Docstring Conventions (#4959) 2019-10-17 23:45:25 -04:00
Rory Mitchell
60748b2071 Use heuristic to select histogram node, avoid rabit call (#4951) 2019-10-18 11:33:54 +13:00
Jiaming Yuan
185e3f1916 Update GPU doc. (#4953) 2019-10-16 05:54:09 -04:00
Jiaming Yuan
7e72a12871 Don't set_params at the end of set_state. (#4947)
* Don't set_params at the end of set_state.

* Also fix another issue found in dask prediction.

* Add note about prediction.

Don't support other prediction modes at the moment.
2019-10-15 10:08:26 -04:00
Jiaming Yuan
2ebdec8aa6 Fix dask prediction. (#4941)
* Fix dask prediction.

* Add better error messages for wrong partition.
2019-10-14 23:19:34 -04:00
Jiaming Yuan
b61d534472 Span: use `size_t` for index_type, add `front` and `back`. (#4935)
* Use `size_t` for index_type.  Add `front` and `back`.

* Remove a batch of `static_cast`.
2019-10-14 09:13:33 -04:00
Peter Badida
a9053aff83 Fix incorrectly displayed Note in the doc (#4943) 2019-10-14 03:45:23 -04:00
Jiaming Yuan
0e0849fa1e Mention dask in readme. [skip ci] (#4942) 2019-10-14 03:44:08 -04:00
Jiaming Yuan
3d46bd0fa5 Ignore columnar alignment requirement. (#4928)
* Better error message for wrong type.
* Fix stride size.
2019-10-13 06:41:43 -04:00
Yuan Tang
05d4751540 Update README.md (#4940) 2019-10-13 02:37:19 -04:00
Yuan Tang
08ff510e48 Mention Kubernetes on README (#4939) 2019-10-13 01:43:09 -04:00
Philip Hyunsu Cho
f7487e4c2a [CI] Run cuDF tests in Jenkins CI server (#4927) 2019-10-13 00:04:54 -04:00
Philip Hyunsu Cho
5b4f28cc46 [CI] Raise timeout threshold in Jenkins (#4938) 2019-10-12 23:47:35 -04:00
Jiaming Yuan
4bbf062ed3 [Breaking] Update sklearn interface. (#4929)
* Remove nthread, seed, silent. Add tree_method, gpu_id, num_parallel_tree. Fix #4909.
* Check data shape. Fix #4896.
* Check element of eval_set is tuple. Fix #4875
*  Add doc for random_state with hogwild. Fixes #4919
2019-10-12 02:50:09 -04:00
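A minimal sketch of the resulting scikit-learn interface (synthetic data; the parameters shown are the ones removed or promoted by this change): `nthread`, `seed`, and `silent` give way to `n_jobs`, `random_state`, and `verbosity`, while `tree_method`, `gpu_id`, and `num_parallel_tree` become constructor arguments.

```python
# Sketch of the post-#4929 scikit-learn interface; data is synthetic.
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=7)
clf = XGBClassifier(
    n_estimators=20,
    tree_method="hist",  # now a first-class constructor argument
    n_jobs=4,            # replaces the removed nthread
    random_state=7,      # replaces the removed seed
)
clf.fit(X, y)
```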
Jiaming Yuan
c2cce4fac3 Update dmlc-core. (#4924)
* Fixed some threading errors.
* Allow updating parameters.
2019-10-09 23:16:45 -04:00
Jiaming Yuan
6c9b6f11da Use cudf.concat explicitly. (#4918)
* Use `cudf.concat` explicitly.

* Add test.
2019-10-10 16:02:10 +13:00
Rory Mitchell
aefb1e5c2f Resolve dask performance issues (#4914)
* Set dask client.map as impure function

* Remove nrows

* Remove slow check in verbose mode
2019-10-10 16:01:30 +13:00
Oleksandr Pryimak
80977182c5 Use bundled gtest (#4900)
* Suggest to use gtest bundled with dmlc

* Use dmlc bundled gtest in all CI scripts

* Make clang-tidy use the dmlc-embedded gtest
2019-10-09 16:26:19 -07:00
Jiaming Yuan
095de3bf5f Export c++ headers in CMake installation. (#4897)
* Move get transpose into cc.

* Clean up headers in host device vector, remove thrust dependency.

* Move span and host device vector into public.

* Install c++ headers.

* Short notes for c and c++.

Co-Authored-By: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2019-10-06 23:53:09 -04:00
Jiaming Yuan
4ab1df5fe6 Check deprecated n_gpus. (#4908) 2019-10-02 02:05:14 -04:00
Jiaming Yuan
7e24a8d245 Improve doc and demo for dask. (#4907)
* Add a readme with link to doc.
* Add more comments in the demonstrations code.
* Workaround https://github.com/dask/distributed/issues/3081 .
2019-09-30 23:59:37 -04:00
Jiaming Yuan
d30e63a0a5 Support feature names/types for cudf. (#4902)
* Implement most of the pandas procedure for cudf except for type conversion.
* Requires an array of interfaces in metainfo.
2019-09-29 15:07:51 -04:00
Vibhu Jawa
2fa8b359e0 Add support for cudf.Series (#4891) 2019-09-25 23:52:28 -04:00
Liangcai Li
82ee2317e8 Add case for LongParam. (#4885)
To support specifying a long parameter as a String, the same as the other basic
types such as Int, Double, etc.
2019-09-25 05:41:53 -07:00
Jiaming Yuan
b8433c455a Rewrite Dask interface. (#4819) 2019-09-25 01:30:14 -04:00
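For orientation, a hedged sketch of the rewritten interface as shipped around this time; the module layout and return shape follow the contemporary docs and may differ in later releases.

```python
# Hedged sketch of the rewritten Dask interface (#4819); sizes are illustrative.
from dask.distributed import Client, LocalCluster
import dask.array as da
import xgboost as xgb

if __name__ == "__main__":
    with Client(LocalCluster(n_workers=2)) as client:
        X = da.random.random((10_000, 10), chunks=(1_000, 10))
        y = da.random.random(10_000, chunks=1_000)
        dtrain = xgb.dask.DaskDMatrix(client, X, y)
        output = xgb.dask.train(client, {"tree_method": "hist"}, dtrain,
                                num_boost_round=10)
        booster = output["booster"]  # a plain xgboost.Booster
        history = output["history"]  # per-round evaluation results
```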
Rong Ou
562bb0ae31 remove device shards (#4867) 2019-09-25 13:15:46 +08:00
Jiaming Yuan
0b89cd1dfa Support gamma in GPU_Hist. (#4874)
* Just prevent building the tree instead of using an explicit pruner.
2019-09-24 10:16:08 +08:00
Jiaming Yuan
a40b72d127 Workaround isnan across different environments. (#4883) 2019-09-23 21:34:27 -04:00
Jiaming Yuan
c7416002e9 Fix DMatrix doc. (#4884) 2019-09-23 01:55:04 -04:00
Nan Zhu
fc8c9b0521 [jvm-packages] enable deterministic repartitioning when checkpoint is enabled (#4807)
* do repartitioning in DataUtil

* keep previous behavior of partitioning without checkpoint

* deterministic repartitioning

* change
2019-09-19 15:21:05 -07:00
Xu Xiao
277e25797b [jvm-packages] refine numAliveCores method of SparkParallelismTracker (#4858)
* refine numAliveCores

* refine XGBoostToMLlibParams

* fix waitForCondition

* resolve conflicts

* Update SparkParallelismTracker.scala
2019-09-19 15:18:29 -07:00
Honza Sterba
22209b7b95 [jvm-packages] Add BigDenseMatrix (#4383)
* Add BigDenseMatrix

* ability to create a DMatrix with arrays larger than Integer.MAX_VALUE in size
* uses sun.misc.Unsafe

* make DMatrix test work from a jar as well
2019-09-18 20:46:14 -07:00
Jiaming Yuan
57106a3459 Fix parsing empty json object. (#4868)
* Fix parsing empty json object.

* Better error message.
2019-09-18 03:31:46 -04:00
Rong Ou
006eb80578 ignore vscode and clion files (#4866)
* ignore vscode and clion files

* ignore all .idea directories
2019-09-17 21:27:40 -04:00
Jiaming Yuan
d669ea1eaa Deprecate set group (#4864)
* Convert jvm package and R package.

* Restore for compatibility.
2019-09-17 21:26:54 -04:00
Philip Hyunsu Cho
0e0955a6d8 Add link to Ruby XGBoost gem (#4856) 2019-09-17 10:40:44 -07:00
Jiaming Yuan
5374f52531 Complete cudf support. (#4850)
* Handles missing value.
* Accept all floating point and integer types.
* Move to cudf 9.0 API.
* Remove requirement on `null_count`.
* Arbitrary column types support.
2019-09-16 23:52:00 -04:00
Rong Ou
125bcec62e Move ellpack page construction into DMatrix (#4833) 2019-09-16 23:50:55 -04:00
Chen Qin
512f037e55 [rabit_bootstrap_cache] let a failed xgb worker recover from other workers (#4808)
* Better recovery support.  Restarting only the failed workers.
2019-09-16 23:31:52 -04:00
Xu Xiao
c89bcc4de5 [blocking] fix parallel eval_split of hist updater (#4851)
* Don't call rabit functions inside parallel loop.
2019-09-13 09:35:03 -04:00
Jiaming Yuan
6a5e805886 Add n_jobs as an alias of nthread. (#4842) 2019-09-09 19:57:12 -04:00
Stephanie Yang
0fc7dcfe6c Add public group getter for java and scala (#4838)
* Add public group getter for java and scala

* Remove unnecessary param from javadoc

* Fix typo

* Fix another typo

* Add semicolon

* Fix javadoc return statement

* Fix missing return statement

* Add a unit test
2019-09-09 10:07:48 -07:00
Jiaming Yuan
f90e7f9aa8 Some comments for row partitioner. (#4832) 2019-09-06 03:01:42 -04:00
Jiaming Yuan
a5f232feb8 Fix calling GPU predictor (#4836)
* Fix calling GPU predictor
2019-09-05 19:09:38 -04:00
Jiaming Yuan
52d44e07fe Monitor for distributed environment. (#4829)
* Collect statistics from other ranks in monitor.

* Workaround old GCC bug.
2019-09-05 13:18:09 +08:00
Jiaming Yuan
c0fbeff0ab Restrict access to cfg_ in gbm. (#4801)
* Restrict access to `cfg_` in gbm.

* Verify having correct updaters.

* Remove `grow_global_histmaker`

This updater is the same as `grow_histmaker`.  The former is not in our
document so we just remove it.
2019-09-02 00:43:19 -04:00
TinkleG
2aed0ae230 Fix auc error in distributed mode (#4798)
Need more work for a complete fix.  See #4663 .
2019-09-01 02:54:14 -04:00
Rong Ou
733ed24dd9 further cleanup of single process multi-GPU code (#4810)
* use subspan in gpu predictor instead of copying
* Revise `HostDeviceVector`
2019-08-30 05:27:23 -04:00
Nan Zhu
0184eb5d02 [jvm-packages] Refactor XGBoost.scala to put all params processing in one place (#4815)
* cleaning checkpoint file after a successful training

* address comments

* refactor xgboost.scala to avoid multiple changes when adding params

* consolidate params

* fix compilation issue

* fix failed test

* fix wrong name

* type conversion
2019-08-28 22:41:05 -07:00
Cyprien Ricque
830e73901d eval_metrics print fixed (#4803) 2019-08-28 23:52:18 -04:00
Oleksandr Pryimak
516955564b Update xgboost-spark doc (#4804) 2019-08-27 10:51:00 -07:00
Rong Ou
38ab79f889 Make HostDeviceVector single gpu only (#4773)
* Make HostDeviceVector single gpu only
2019-08-26 09:51:13 +12:00
Michaël Benesty
41227d1933 fix #4764 (#4800)
check all classes in xgb.get.handle
2019-08-21 21:42:37 -04:00
Jiaming Yuan
6e6216ad67 Skip related tests when sklearn is not installed. (#4791) 2019-08-21 00:32:52 -04:00
Jiaming Yuan
fba298fecb Prevent copying data to host. (#4795) 2019-08-20 23:06:27 -04:00
Jiaming Yuan
3fa2ceb193 Add self. (#4794) 2019-08-20 14:41:30 +08:00
Jiaming Yuan
9700776597 Cudf support. (#4745)
* Initial support for cudf integration.

* Add two C APIs for consuming data and metainfo.

* Add CopyFrom for SimpleCSRSource as a generic function to consume the data.

* Add FromDeviceColumnar for consuming device data.

* Add new MetaInfo::SetInfo for consuming label, weight etc.
2019-08-19 16:51:40 +12:00
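A heavily hedged sketch of the resulting usage. Passing cuDF objects directly to `DMatrix` is what the commit enables; the data and parameters are illustrative, and a CUDA GPU plus the cudf package are required.

```python
import cudf
import xgboost as xgb

df = cudf.DataFrame({"f0": [1.0, 2.0, 3.0], "f1": [0.1, 0.2, 0.3]})
label = cudf.Series([0, 1, 0])
dtrain = xgb.DMatrix(df, label=label)  # consumed via the new columnar C APIs
booster = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=5)
```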
Jiaming Yuan
ab357dd41c Remove plugin, cuda related code in automake & autoconf files (#4789)
* Build plugin example with CMake.

* Remove plugin, cuda related code in automake & autoconf files.

* Fix typo in GPU doc.
2019-08-18 16:54:34 -04:00
Jiaming Yuan
c358d95c44 Remove initializing stringstream reference. (#4788) 2019-08-18 09:59:47 -04:00
Jiaming Yuan
c81238b5c4 Clean up after removing gpu_exact. (#4777)
* Removed unused functions.
* Removed unused parameters.
* Move ValueConstraints into constraints.cuh since it's now only used in GPU_Hist.
2019-08-17 01:05:57 -04:00
Jiaming Yuan
b9b57f2289 Use long key id. (#4783) 2019-08-16 11:19:22 -07:00
Evan Kepner
53d4272c2a add os.PathLike support for file paths to DMatrix and Booster Python classes (#4757) 2019-08-15 04:46:25 -04:00
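A small sketch of what #4757 enables, assuming only numpy and the standard library:

```python
import pathlib
import numpy as np
import xgboost as xgb

dtrain = xgb.DMatrix(np.random.rand(100, 5), label=np.random.rand(100))
booster = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=5)

model_path = pathlib.Path("model.bin")       # os.PathLike instead of str
booster.save_model(model_path)
loaded = xgb.Booster(model_file=model_path)  # accepted here as well
```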
Nan Zhu
7b5cbcc846 [jvm-packages] cleaning checkpoint file after a successful training (#4754)
* cleaning checkpoint file after a successful training

* address comments
2019-08-14 10:57:47 -07:00
Xu Xiao
ef9af33a00 [HOTFIX] distributed training with hist method (#4716)
* add parallel test for hist.EvaluateSplit

* update test_openmp.py

* update test_openmp.py

* update test_openmp.py

* update test_openmp.py

* update test_openmp.py

* fix OMP schedule policy

* fix clang-tidy

* add logging: total_num_bins

* fix

* fix

* test

* replace guided OPENMP policy with static in updater_quantile_hist.cc
2019-08-13 11:27:29 -07:00
Jiaming Yuan
c0ffe65f5c Mimic cuda assert output in span check. (#4762) 2019-08-13 01:44:54 -04:00
Rong Ou
c5b229632d [BREAKING] prevent multi-gpu usage (#4749)
* prevent multi-gpu usage

* fix distributed test

* combine gpu predictor tests

* set upper bound on n_gpus
2019-08-13 09:11:35 +12:00
sriramch
198f3a6c4a Enable natural copies of the batch iterators without the need for the clone method (#4748)
- the synthesized copy constructor should do the appropriate job
2019-08-09 11:47:35 -04:00
Rong Ou
19f9fd5de9 remove the qids_ field in MetaInfo (#4744) 2019-08-08 10:01:59 +08:00
sriramch
f22b1c0348 Fix external memory documentation [skip ci] (#4747)
* - fix external memory documentation [skip ci]
   - to state that it is now supported by the GPU algorithms
2019-08-08 09:27:02 +08:00
Rong Ou
602484e19f Remove some unused functions as reported by cppcheck (#4743) 2019-08-07 02:42:33 -04:00
Bobby
3e2c472944 Fix model parameter recovery (#4738) 2019-08-07 02:32:10 -04:00
Rong Ou
851b5b3808 Remove gpu_exact tree method (#4742) 2019-08-07 11:43:20 +12:00
Jiaming Yuan
2a4df8e29f Add Json integer, remove specialization. (#4739) 2019-08-06 03:10:49 -04:00
Jiaming Yuan
9c469b3844 Move bitfield into common. (#4737)
* Prepare for columnar format support.
2019-08-06 02:49:32 -04:00
Xu Xiao
97eece6ea0 [python package] include dmlc-tracker into xgb python pkg (#4731) 2019-08-05 12:21:07 -04:00
Oleksandr Pryimak
b68de018b8 [jvm-packages] jvm tests should clean up after themselves (#4706) 2019-08-04 14:09:11 -07:00
Jiaming Yuan
4fe0d8203e Specify version macro in CMake. (#4730)
* Specify version macro in CMake.

* Use `XGBOOST_DEFINITIONS` instead.
2019-08-04 06:04:04 -04:00
Rong Ou
6edddd7966 Refactor DMatrix to return batches of different page types (#4686)
* Use explicit template parameter for specifying page type.
2019-08-03 15:10:34 -04:00
Jiaming Yuan
e930a8e54f Remove old Python trouble shooting doc. [skip ci] (#4729) 2019-08-03 12:51:29 -04:00
Rong Ou
cb9a80ca90 Update dmlc-core (#4726) 2019-08-02 03:54:14 -04:00
Philip Hyunsu Cho
166def9f75 [CI] Fix broken installation of Pandas (#4722)
* [CI] Fix broken installation of Pandas

* Update Dockerfile.gpu
2019-07-30 22:03:11 -07:00
Matthew Jones
b43f08bea5 updating rabit commit hash (#4718) 2019-07-30 06:51:54 -04:00
Jiaming Yuan
d2e1e4d5b4 A simple Json implementation for future use. (#4708)
* A simple Json implementation for future use.
2019-07-29 21:17:27 -04:00
Rong Ou
9b9e298ff2 remove RowSet which is no longer being used (#4697) 2019-07-25 17:25:58 -07:00
Tong He
7b74b1b64d fix additional files note (#4699)
* fix additional files note

* Trigger CI

* Trigger CI
2019-07-25 00:37:38 -07:00
Jiaming Yuan
59bc1ef330 Remove VC-2013 support. (#4701)
* Removing it as it is not fully C++11 compliant.
2019-07-25 01:28:51 -04:00
Philip Hyunsu Cho
2758c5acea [CI] Fix broken installation of Pandas (#4704) 2019-07-24 19:03:35 -07:00
Nan Zhu
d5c386ae24 Update CONTRIBUTORS.md 2019-07-24 09:38:47 -07:00
Jiaming Yuan
001aaaee5f Removed deprecated gpu objectives. (#4690) 2019-07-20 23:18:34 -04:00
Philip Hyunsu Cho
d4e0a30582 Upgrade dmlc-core submodule (#4688) 2019-07-20 11:30:05 -07:00
Jiaming Yuan
f0064c07ab Refactor configuration [Part II]. (#4577)
* Refactor configuration [Part II].

* General changes:
** Remove `Init` methods to avoid ambiguity.
** Remove `Configure(std::map<>)` to avoid redundant copying and prepare for
   parameter validation. (`std::vector` is returned from `InitAllowUnknown`).
** Add name to tree updaters for easier debugging.

* Learner changes:
** Make `LearnerImpl` the only source of configuration.

    All configurations are stored and carried out by `LearnerImpl::Configure()`.

** Remove booster in C API.

    Originally kept for "compatibility reasons", but the reason was never stated.  So here
    we just remove it.

** Add a `metric_names_` field in `LearnerImpl`.
** Remove `LazyInit`.  Configuration will always be lazy.
** Run `Configure` before every iteration.

* Predictor changes:
** Allocate both cpu and gpu predictor.
** Remove cpu_predictor from gpu_predictor.

    `GBTree` is now used to dispatch the predictor.

** Remove some GPU Predictor tests.

* IO

No IO changes.  The binary model format stability is tested by comparing
hashing value of save models between two commits
2019-07-20 08:34:56 -04:00
Jiaming Yuan
ad1192e8a3 Remove silent in doc. [skip ci] (#4689) 2019-07-20 05:53:42 -04:00
Nathan Moore
b45258ce66 minor updates to links and grammar (#4673)
updated links to caret data splitting, xgb.dump(with_stats), and some grammar
2019-07-18 16:56:40 -07:00
Philip Hyunsu Cho
4ef6d216b9 Upgrade dmlc-core submodule (#4674) 2019-07-17 18:54:49 -07:00
Tong He
8ac8fbef29 [R] Fix CRAN error for Mac OS X (#4672)
* fix cran error for mac os x

* ignore float on windows check for now
2019-07-17 17:55:52 -07:00
Nan Zhu
1595e3f57b upgrade version num (#4670)
* upgrade version num

* missing changes

* fix version script

* change versions

* rm files

* Update CMakeLists.txt
2019-07-17 15:25:35 -07:00
Nan Zhu
01b0c9047c [jvm-packages] allow chaining predictions (#4667)
* add test for chaining prediction

* update rabit

* Update XGBoostGeneralSuite.scala
2019-07-17 08:50:27 -07:00
koertkuipers
3c506b076e [jvm-packages] upgrade to Scala 2.12 (#4574)
* bump scala to 2.12 which requires java 8 and also newer flink and akka

* put scala version in artifactId

* fix appveyor

* fix for scaladoc issue that looks like https://github.com/scala/bug/issues/10509

* fix ci_build

* update versions in generate_pom.py

* fix generate_pom.py

* Apache does not have a download for a Spark 2.4.3 distro using Scala 2.12 yet, so for now I use a tgz I put on S3

* Upload spark-2.4.3-bin-scala2.12-hadoop2.7.tgz to our own S3

* Update Dockerfile.jvm_cross

* Update Dockerfile.jvm_cross
2019-07-16 08:43:34 -07:00
Oleksandr Pryimak
5544a730f1 Add optional dependencies to setup.py (#4655) 2019-07-16 17:12:43 +08:00
Mathew Wicks
6323ef94ad [jvm-packages] update local dev build process (#4640) 2019-07-15 21:23:06 -07:00
Philip Hyunsu Cho
9975c533c7 Re-organize contributor's guide (#4659)
* Reorganize contributor's doc

* Address comments from @trivialfis

* Address @sriramch's comment: include ABI compatibility guarantee

* Address @rongou's comment

* Postpone ABI compatibility guarantee for now
2019-07-15 20:56:05 -07:00
Oleksandr Pryimak
2973416f2e [jvm-packages] Fix maven warnings (#4664)
* exec plugin was missing a version
* reportPlugins has been deprecated:
  see https://maven.apache.org/plugins/maven-site-plugin/maven-3.html#Classic_configuration_Maven_2__3
2019-07-15 20:25:43 -07:00
Matvey Turkov
61f764946f fixed year to 2019 in conf.py, helpers.h and LICENSE (#4661) 2019-07-15 12:29:12 -04:00
Mingjie Tang
beb7b295a8 Add tutorial for distributed training and batch prediction with Kubernetes (#4621)
* provide the readme

* update for format

* reformat

* reformat -2

* update again

* update format

* update w.r.t yinlou's comments

* Add kubernetes tutorial to Table of Contents

* Style edit
2019-07-14 23:27:27 -07:00
Nan Zhu
3e339d9557 contribute to community doc (#4646)
* add community doc

* update

* update
2019-07-14 21:29:57 -07:00
sriramch
7a388cbf8b Modify caching allocator/vector and fix issues relating to inability to train large datasets (#4615) 2019-07-09 18:33:27 +12:00
Xu Xiao
cd1526d3b1 fix auc error in distributed mode caused by unbalanced dataset (#4645) 2019-07-08 16:01:52 +08:00
Rong Ou
30204b50fe fix spark tests on machines with many cores (#4634) 2019-07-07 16:02:56 -07:00
Philip Hyunsu Cho
d333918f5e [jvm-packages] Expose setMissing method in XGBoostClassificationModel / XGBoostRegressionModel (#4643) 2019-07-07 16:02:44 -07:00
Philip Hyunsu Cho
1aaf4a679d Fix early stopping in the Python package (#4638)
* Fix #4630, #4421: Preserve correct ordering between metrics, and always use last metric for early stopping

* Clarify semantics of early stopping in presence of multiple valid sets and metrics

* Add a test

* Fix lint
2019-07-07 01:01:03 -07:00
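To make the clarified semantics concrete, a sketch with synthetic data: the last entry of `eval_metric` and the last entry of `evals` are the ones that drive early stopping.

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(500, 10), np.random.randint(0, 2, 500)
dtrain = xgb.DMatrix(X[:400], label=y[:400])
dvalid = xgb.DMatrix(X[400:], label=y[400:])
params = {"objective": "binary:logistic",
          "eval_metric": ["auc", "logloss"]}  # logloss (last) decides stopping
booster = xgb.train(params, dtrain, num_boost_round=100,
                    evals=[(dtrain, "train"), (dvalid, "valid")],  # valid (last) is watched
                    early_stopping_rounds=10)
print(booster.best_iteration)
```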
Marcos
562d9ae963 Eliminate FutureWarning: Series.base is deprecated (#4337)
* Remove all references to data.base

Should eliminate the deprecation warning in issue #4300

* Fix lint
2019-07-04 21:06:23 -07:00
Jiaming Yuan
d9a47794a5 Fix CPU hist init for sparse dataset. (#4625)
* Fix CPU hist init for sparse dataset.

* Implement sparse histogram cut.
* Allow empty features.

* Fix windows build, don't use sparse in distributed environment.

* Comments.

* Smaller threshold.

* Fix windows omp.

* Fix msvc lambda capture.

* Fix MSVC macro.

* Fix MSVC initialization list.

* Fix MSVC initialization list x2.

* Preserve categorical feature behavior.

* Rename matrix to sparse cuts.
* Reuse UseGroup.
* Check for categorical data when adding cut.

Co-Authored-By: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Sanity check.

* Fix comments.

* Fix comment.
2019-07-04 16:27:03 -07:00
Philip Hyunsu Cho
b7a1f22d24 Empty evaluation list in early stopping should produce meaningful error message (#4633)
* Empty evaluation list should not break early stopping

* Fix lint

* Update callback.py
2019-07-04 13:27:18 -07:00
Philip Hyunsu Cho
4df246191f Add warning when save_model() is called from scikit-learn interface (#4632) 2019-07-03 23:37:53 -07:00
Philip Hyunsu Cho
96bf91725b Support ndcg- and map- (#4635) 2019-07-03 22:51:48 -07:00
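The trailing `-` changes how query groups with no relevant documents are scored: 0 instead of 1. A minimal ranking sketch with synthetic data:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(8, 3)
y = np.array([1, 0, 0, 0, 0, 0, 0, 0])  # second query group has no positives
dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group([4, 4])                 # two query groups of four rows each
params = {"objective": "rank:pairwise", "eval_metric": "ndcg-"}
booster = xgb.train(params, dtrain, num_boost_round=5,
                    evals=[(dtrain, "train")])
```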
Philip Hyunsu Cho
4e9fad74eb [R] Use built-in label when xgb.DMatrix is given to xgb.cv() (#4631)
* Use built-in label when xgb.DMatrix is given to xgb.cv()

* Add a test

* Fix test

* Bump version number
2019-07-03 01:32:40 -07:00
Oleksandr Pryimak
986fee6022 pytest tests/python fails if no pandas installed (#4620)
* _maybe_pandas_xxx should return their arguments unchanged if pandas is not installed

* Tests should not assume pandas is installed

* Mark tests which require pandas as such
2019-07-01 02:54:08 +08:00
Jiaming Yuan
45876bf41b Fix external memory for get column batches. (#4622)
* Fix external memory for get column batches.

This fixes two bugs:

* Use PushCSC for get column batches.
* Don't remove the created temporary directory before finishing test.

* Check all pages.
2019-06-30 09:56:49 +08:00
Philip Hyunsu Cho
a30176907f Support Dask 2.0 (#4617) 2019-06-27 20:42:35 -07:00
Oleksandr Pryimak
923e6c86ba Add to documentation how to run tests locally (#4610)
* Add to documentation how to build native unit tests

* Add instructions to run Python tests and to use Docker container [skip ci]

* Fix link to pytest chapter

* Add link to Google Test [skip ci]

* Set PYTHONPATH [skip ci]

* Revise test_python.sh for running tests locally

* Update test_python.sh

* Place Docker recommendation notice in a prominent place [skip ci]
2019-06-27 19:02:04 -07:00
Rong Ou
63ec95623d fix gpu predictor when dmatrix is mismatched with model (#4613) 2019-06-28 11:03:02 +12:00
Egor Smirnov
4d6590be3c Optimize ‘hist’ for multi-core CPU (#4529)
* Initial performance optimizations for xgboost

* remove includes

* revert float->double

* fix for CI

* fix for CI

* fix for CI

* fix for CI

* fix for CI

* fix for CI

* fix for CI

* fix for CI

* fix for CI

* fix for CI

* Check existence of _mm_prefetch and __builtin_prefetch

* Fix lint

* optimizations for CPU

* applying review comments

* add some comments, code refactoring

* fixing issues in CI

* adding runtime checks

* remove 1 extra check

* remove extra checks in BuildHist

* remove checks

* add debug info

* added debug info

* revert changes

* added comments

* Apply suggestions from code review

Co-Authored-By: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* apply review comments

* Remove unused function CreateNewNodes()

* Add descriptive comment on node_idx variable in QuantileHistMaker::Builder::BuildHistsBatch()
2019-06-27 11:33:49 -07:00
Nan Zhu
abffbe014e [jvm-packages] delete all constraints from spark layer about obj and eval metrics and handle error in jvm layer (#4560)
* temp

* prediction part

* remove supported*

* add for test

* fix param name

* add rabit

* update rabit

* return value of rabit init

* eliminate compilation warnings

* update rabit

* shutdown

* update rabit again

* check sparkcontext shutdown

* fix logic

* sleep

* fix tests

* test with relaxed threshold

* create new thread each time

* stop for job quitting

* update rabit

* update rabit

* update rabit

* update git modules
2019-06-27 08:47:37 -07:00
Philip Hyunsu Cho
dd01f7c4f5 Use Sphinx 2.1+ to compile documentation [skip ci] (#4609) 2019-06-26 16:26:22 -07:00
Philip Hyunsu Cho
cd3a3f99da Fix doc for customized objective/metric [skip ci] (#4608) 2019-06-26 13:40:34 -07:00
Jiaming Yuan
5b2f805e74 Doc and demo for customized metric and obj. (#4598)
Co-Authored-By: Theodore Vasiloudis <theodoros.vasiloudis@gmail.com>
2019-06-26 16:13:12 +08:00
Jiaming Yuan
8bdf15120a Implement tree model dump with code generator. (#4602)
* Implement tree model dump with a code generator.

* Split up generators.
* Implement graphviz generator.
* Use pattern matching.

* [Breaking] Return a Source in `to_graphviz` instead of Digraph in Python package.


Co-Authored-By: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2019-06-26 15:20:44 +08:00
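A sketch of the Python-facing consequence (requires the graphviz package; filenames are illustrative):

```python
import numpy as np
import xgboost as xgb

dtrain = xgb.DMatrix(np.random.rand(100, 4), label=np.random.rand(100))
booster = xgb.train({"max_depth": 2}, dtrain, num_boost_round=3)

src = xgb.to_graphviz(booster, num_trees=0)  # a graphviz.Source after #4602
src.render("tree0", format="png")            # writes tree0.png
```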
Nan Zhu
fe2de6f415 [jvm-packages]fix silly bug in feature scoring (#4604) 2019-06-25 20:49:01 -07:00
Philip Hyunsu Cho
1f98f18cb8 Add instruction to run formatting checks locally [skip ci] (#4591) 2019-06-24 00:09:09 -07:00
Jiaming Yuan
2cff735126 Update doc for feature constraints and n_gpus. (#4596)
* Update doc for feature constraints. 

* Fix some warnings.

* Clean up doc for `n_gpus`.
2019-06-23 14:37:22 +08:00
Andy Adinets
9fa29ad753 Set reg_lambda=1e-5 for scikit-learn-like random forest classes. (#4558) 2019-06-22 08:02:13 +12:00
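A short usage sketch of the affected wrappers; the printed default reflects this change, assuming no user override.

```python
from sklearn.datasets import make_classification
from xgboost import XGBRFClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
rf = XGBRFClassifier(n_estimators=50, max_depth=4, random_state=0)
print(rf.get_params()["reg_lambda"])  # expected: 1e-05 after this change
rf.fit(X, y)
```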
Philip Hyunsu Cho
30e1cb4e9e Fix docstring for XGBModel.predict() [skip ci] (#4592) 2019-06-21 12:44:42 +08:00
Rong Ou
77fc28427d fix benchmark_tree.py (#4593) 2019-06-21 11:51:48 +12:00
Jiaming Yuan
9494950ee7 Address some sphinx warnings and errors, add doc for building doc. (#4589) 2019-06-20 15:07:36 -07:00
Rong Ou
6125521caf fix compiler warning (#4588) 2019-06-21 04:06:26 +08:00
Jiaming Yuan
fdf27a5b82 Fix race condition in interaction constraint. (#4587)
* Split up the kernel to sync write.

* QueryNode is no-longer used in Query, but kept for testing.
2019-06-21 02:47:48 +08:00
Rory Mitchell
221e163185 Refactor out row partitioning logic from gpu_hist, introduce caching device vectors (#4554) 2019-06-20 18:24:09 +12:00
Philip Hyunsu Cho
0c50f8417a [CI] Specify account ID when logging into ECR Docker registry (#4584)
* [CI] Specify account ID when logging into ECR Docker registry

* Do not display awscli login command
2019-06-19 15:08:42 -07:00
Jiaming Yuan
ae05948e32 Feature interaction for GPU Hist. (#4534)
* GPU hist Interaction Constraints.
* Duplicate related parameters.
* Add tests for CPU interaction constraint.
* Add better error reporting.
* Thorough tests.
2019-06-19 18:11:02 +08:00
Philip Hyunsu Cho
570374effe [CI] Remove CUDA 8.0 from CI pipeline (#4580) 2019-06-18 23:38:03 -07:00
Rong Ou
e94f85f0e4 Deprecate single node multi-gpu mode (#4579)
* deprecate multi-gpu training

* add single node

* add warning
2019-06-19 15:51:38 +12:00
sriramch
6757654337 Optimizations for quantisation on device (#4572)
* - do not create device vectors for the entire sparse page while computing histograms...
   - while creating the compressed histogram indices, the row vector is created for the entire
     sparse page batch. this is needless as we only process chunks at a time based on a slice
     of the total gpu memory
   - this pr will allocate only as much as required to store the appropriate row indices and the entries

* - do not dereference row_ptrs once the device_vector has been created to elide host copies of those counts
   - instead, grab the entry counts directly from the sparsepage
2019-06-19 10:50:25 +12:00
Rong Ou
ba1d848767 Remove doc about not supporting cuda 10.1 (#4578) 2019-06-19 10:44:59 +12:00
Daniel Stahl
7ae11c9284 [jvm-packages] updated kryo dependency to 2.22 (#4575) 2019-06-18 09:26:54 -07:00
sriramch
90f683b25b Set the appropriate device before freeing device memory... (#4566)
* - set the appropriate device before freeing device memory...
   - pr #4532 added a global memory tracker/logger to keep track of number of (de)allocations
     and peak memory usage on a per device basis.
   - this pr adds the appropriate check to make sure that the (de)allocation counts and memory usage
     make sense for the device, since verbosity is typically increased on debug/non-retail builds.
* - pre-create cub allocators and reuse them
   - create them once and not resize them dynamically. we need to ensure that these allocators
     are created and destroyed exactly once so that the appropriate device id's are set
2019-06-18 14:58:05 +12:00
sriramch
a22368d210 Choose the appropriate tree method *only* when the tree method is auto (#4571)
* Remove redundant checks.
2019-06-17 18:16:45 +08:00
Jiaming Yuan
66f9951d70 Mark SparsePageDmatrix destructor default. (#4568) 2019-06-15 11:21:50 +08:00
Jiaming Yuan
c5719cc457 Offload some configurations into GBM. (#4553)
This is part 1 of refactoring configuration.

* Move tree heuristic configurations.
* Split up declarations and definitions for GBTree.
* Implement UseGPU in gbm.
2019-06-14 09:18:51 +08:00
sriramch
a2042b685a - training with external memory - part 2 of 2 (#4526)
* - training with external memory - part 2 of 2
   - when external memory support is enabled, building of histogram indices are
     done incrementally for every sparse page
   - the entire set of input data is divided across multiple GPUs, and the relative
     row positions within each device are tracked when building the compressed histogram buffer
   - this was tested using a mortgage dataset containing ~670M rows before four T4s could be
     saturated
2019-06-12 09:52:56 +12:00
Jiaming Yuan
4591039eba Remove remaining reg:linear. (#4544) 2019-06-11 16:04:09 +08:00
Jiaming Yuan
4e9965cb9d Fix Python demo and doc. (#4545)
* Remove old doc.
* Fix checking __stdin__.
2019-06-11 08:58:41 +08:00
Jiaming Yuan
2f1319f273 Add rmsle metric and reg:squaredlogerror objective (#4541) 2019-06-11 05:48:27 +08:00
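A minimal sketch pairing the new objective with the new metric; targets are kept non-negative since squared log error requires labels greater than -1.

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 5)
y = np.random.rand(200) * 10  # non-negative targets
dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squaredlogerror", "eval_metric": "rmsle"}
booster = xgb.train(params, dtrain, num_boost_round=10,
                    evals=[(dtrain, "train")])
```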
Rory Mitchell
9683fd433e Overload device memory allocation (#4532)
* Group source files, include headers in source files

* Overload device memory allocation
2019-06-10 11:35:13 +12:00
Jiaming Yuan
da21ac0cc2 Fix tweedie metric string. (#4543) 2019-06-09 09:52:29 +08:00
sriramch
59ae42a179 Ensure gcc is at least 5.x (#4538)
* make sure that XGBoost has at least GCC 5.x when building with the GCC toolchain
2019-06-07 05:08:08 +08:00
Jiaming Yuan
afa99e6d9d Use yaml.safe_load. (#4537) 2019-06-07 03:39:25 +08:00
Philip Hyunsu Cho
3f2fe25a32 Fix C++11 config parser (#4521)
* Fix C++11 config parser
* Use raw strings to improve readability of regex
* Fix compilation for GCC 5.x

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
2019-06-03 22:18:16 +08:00
Rory Mitchell
23a10c8339 Refactor histogram building code for gpu_hist (#4528) 2019-06-03 09:50:10 +12:00
Rory Mitchell
399fabed49 Deprecate gpu_exact, bump required cuda version in docs (#4527) 2019-06-03 09:49:05 +12:00
Philip Hyunsu Cho
c2a3902ba3 Fix #4497: Enable feature importance property for DART booster (#4525) 2019-05-31 15:11:57 -07:00
Philip Hyunsu Cho
ea44417754 Enforce exclusion between pred_interactions=True and pred_contribs=True (#4522) 2019-05-31 12:29:23 -07:00
Rory Mitchell
fbbae3386a Smarter choice of histogram construction for distributed gpu_hist (#4519)
* Smarter choice of histogram construction for distributed gpu_hist

* Limit omp team size in ExecuteShards
2019-05-31 14:11:34 +12:00
fuhaoda
dd60fc23e6 Simplify INI-style config reader using C++11 STL (#4478)
* simplify the config.h file

* revise config.h

* revised config.h

* revise format

* revise format issues

* revise whitespace issues

* revise whitespace namespace format issues

* revise namespace format issues

* format issues

* format issues

* format issues

* format issues

* Revert submodule changes

* minor change

* Update src/common/config.h

Co-Authored-By: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* address format issue from trivialfis

* Use correct cub submodule
2019-05-30 11:57:56 -07:00
Jiaming Yuan
b48f895027 Fix prediction from loaded pickle. (#4516) 2019-05-30 15:05:09 +08:00
sriramch
fed665ae8a - training with external memory part 1 of 2 (#4486)
* - training with external memory part 1 of 2
   - this pr focuses on computing the quantiles using multiple gpus on a
     dataset that uses the external cache capabilities
   - there will a follow-up pr soon after this that will support creation
     of histogram indices on large dataset as well
   - both of these changes are required to support training with external memory
   - the sparse pages in dmatrix are taken in batches and the cut matrices
     are incrementally built
   - also snuck in some (perf) changes related to sketches aggregation amongst multiple
     features across multiple sparse page batches. instead of aggregating the summary
     inside each device and merged later, it is aggregated in-place when the device
     is working on different rows but the same feature
2019-05-30 08:18:34 +12:00
sriramch
6e16900711 Fix crash with approx tree method on cpu (#4510) 2019-05-30 01:11:29 +08:00
Jiaming Yuan
c589eff941 De-duplicate GPU parameters. (#4454)
* Only define `gpu_id` and `n_gpus` in `LearnerTrainParam`
* Pass LearnerTrainParam through XGBoost via factory method.
* Disable all GPU usage when GPU related parameters are not specified (fixes XGBoost choosing GPU over-aggressively).
* Test learner train param io.
* Fix gpu pickling.
2019-05-29 11:55:57 +08:00
sriramch
a3fedbeaa8 - fix issues with training with external memory on cpu (#4487)
* - fix issues with training with external memory on cpu
   - use the batch size to determine the correct number of rows in a batch
   - use the right number of threads in omp parallelization if the batch size
     is less than the default omp max threads (applicable for the last batch)

* - handle scenarios where last batch size is < available number of threads
- augment tests such that we can test all scenarios (batch size <, >, = number of threads)
2019-05-29 12:31:30 +12:00
Rory Mitchell
972f693eaf Fix dask API sphinx docstrings (#4507)
* Fix dask API sphinx docstrings

* Update GPU docs page
2019-05-28 16:39:26 +12:00
yellowdolphin
3f7e5d9c47 add dll_path for cygwin users (#4499) 2019-05-27 12:04:28 +08:00
Rory Mitchell
09b90d9329 Add native support for Dask (#4473)
* Add native support for Dask

* Add multi-GPU demo

* Add sklearn example
2019-05-27 13:29:28 +12:00
Jiaming Yuan
55e645c5f5 Revert hist init optimization. (#4502) 2019-05-26 08:57:41 +08:00
Rory Mitchell
8ddd2715ee Add python RF documentation (#4500) 2019-05-24 23:30:24 -07:00
Jiaming Yuan
0ce300e73a [jvm-packages] Add back reg:linear for scala. (#4490)
* Add back reg:linear for scala.

* Fix linter.
2019-05-23 15:02:08 -07:00
Bryan Woods
278562db13 Add support for cross-validation using query ID (#4474)
* adding support for matrix slicing with query ID for cross-validation

* hail mary test of unrar installation for windows tests

* trying to modify tests to run in Github CI

* Remove dependency on wget and unrar

* Save error log from R test

* Relax assertion in test_training

* Use int instead of bool in C function interface

* Revise R interface

* Add XGDMatrixSliceDMatrixEx and keep old XGDMatrixSliceDMatrix for API compatibility
2019-05-23 10:45:02 -07:00
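A hedged sketch of what this unlocks on the Python side: once a DMatrix carries query-group information, `xgb.cv` can slice it without breaking group boundaries (synthetic data, illustrative group sizes).

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(120, 6)
y = np.random.randint(0, 2, 120)
dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group([30, 30, 30, 30])  # four query groups
res = xgb.cv({"objective": "rank:pairwise", "eval_metric": "ndcg"},
             dtrain, num_boost_round=10, nfold=2)
print(res.tail())
```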
Sean Owen
5a567ec249 Ensure pandas DataFrame column names are treated as strings in type error message (#4481) 2019-05-21 16:19:35 +08:00
572 changed files with 48425 additions and 18900 deletions


@@ -1,21 +1,21 @@
Checks: 'modernize-*,-modernize-make-*,-modernize-use-auto,-modernize-raw-string-literal,google-*,-google-default-arguments,-clang-diagnostic-#pragma-messages,readability-identifier-naming'
Checks: 'modernize-*,-modernize-make-*,-modernize-use-auto,-modernize-raw-string-literal,-modernize-avoid-c-arrays,-modernize-use-trailing-return-type,google-*,-google-default-arguments,-clang-diagnostic-#pragma-messages,readability-identifier-naming'
CheckOptions:
- { key: readability-identifier-naming.ClassCase, value: CamelCase }
- { key: readability-identifier-naming.StructCase, value: CamelCase }
- { key: readability-identifier-naming.TypeAliasCase, value: CamelCase }
- { key: readability-identifier-naming.TypedefCase, value: CamelCase }
- { key: readability-identifier-naming.TypeTemplateParameterCase, value: CamelCase }
- { key: readability-identifier-naming.MemberCase, value: lower_case }
- { key: readability-identifier-naming.PrivateMemberSuffix, value: '_' }
- { key: readability-identifier-naming.ProtectedMemberSuffix, value: '_' }
- { key: readability-identifier-naming.EnumCase, value: CamelCase }
- { key: readability-identifier-naming.EnumConstant, value: CamelCase }
- { key: readability-identifier-naming.EnumConstantPrefix, value: k }
- { key: readability-identifier-naming.GlobalConstantCase, value: CamelCase }
- { key: readability-identifier-naming.GlobalConstantPrefix, value: k }
- { key: readability-identifier-naming.StaticConstantCase, value: CamelCase }
- { key: readability-identifier-naming.StaticConstantPrefix, value: k }
- { key: readability-identifier-naming.ConstexprVariableCase, value: CamelCase }
- { key: readability-identifier-naming.ConstexprVariablePrefix, value: k }
- { key: readability-identifier-naming.FunctionCase, value: CamelCase }
- { key: readability-identifier-naming.NamespaceCase, value: lower_case }

.github/FUNDING.yml vendored Normal file

@@ -0,0 +1 @@
open_collective: xgboost

.gitignore vendored

@@ -17,7 +17,7 @@
*.tar.gz
*conf
*buffer
*model
*.model
*pyc
*.train
*.test
@@ -65,14 +65,11 @@ nb-configuration*
.pydevproject
.settings/
build
config.mk
/xgboost
*.data
build_plugin
.idea
recommonmark/
tags
*.iml
*.class
target
*.swp
@@ -90,9 +87,19 @@ lib/
# spark
metastore_db
plugin/updater_gpu/test/cpp/data
/include/xgboost/build_config.h
# files from R-package source install
**/config.status
R-package/src/Makevars
# Visual Studio Code
/.vscode/
# IntelliJ/CLion
.idea
*.iml
/cmake-build-debug/
# GDB
.gdb_history


@@ -1,36 +1,55 @@
# disable sudo for container build.
sudo: required
# Enabling test on Linux and OS X
# Enabling test OS X
os:
- linux
- osx
osx_image: xcode9.3
osx_image: xcode10.1
dist: bionic
# Use Build Matrix to do lint and build seperately
env:
matrix:
# python package test
- TASK=python_test
# test installation of Python source distribution
- TASK=python_sdist_test
# java package test
- TASK=java_test
# cmake test
# - TASK=cmake_test
- TASK=cmake_test
# dependent apt packages
global:
- secure: "PR16i9F8QtNwn99C5NDp8nptAS+97xwDtXEJJfEiEVhxPaaRkOp0MPWhogCaK0Eclxk1TqkgWbdXFknwGycX620AzZWa/A1K3gAs+GrpzqhnPMuoBJ0Z9qxXTbSJvCyvMbYwVrjaxc/zWqdMU8waWz8A7iqKGKs/SqbQ3rO6v7c="
- secure: "dAGAjBokqm/0nVoLMofQni/fWIBcYSmdq4XvCBX1ZAMDsWnuOfz/4XCY6h2lEI1rVHZQ+UdZkc9PioOHGPZh5BnvE49/xVVWr9c4/61lrDOlkD01ZjSAeoV0fAZq+93V/wPl4QV+MM+Sem9hNNzFSbN5VsQLAiWCSapWsLdKzqA="
matrix:
exclude:
- os: linux
env: TASK=python_test
- os: linux
env: TASK=java_test
- os: linux
env: TASK=cmake_test
# dependent brew packages
addons:
homebrew:
packages:
- gcc@7
- cmake
- libomp
- graphviz
- openssl
- libgit2
- wget
- r
update: true
before_install:
- source dmlc-core/scripts/travis/travis_setup_env.sh
- export PYTHONPATH=${PYTHONPATH}:${PWD}/python-package
- source tests/travis/travis_setup_env.sh
- if [ "${TASK}" != "python_sdist_test" ]; then export PYTHONPATH=${PYTHONPATH}:${PWD}/python-package; fi
- echo "MAVEN_OPTS='-Xmx2g -XX:MaxPermSize=1024m -XX:ReservedCodeCacheSize=512m -Dorg.slf4j.simpleLogger.defaultLogLevel=error'" > ~/.mavenrc
install:
@@ -45,7 +64,7 @@ cache:
- ${HOME}/.cache/pip
before_cache:
- dmlc-core/scripts/travis/travis_before_cache.sh
- tests/travis/travis_before_cache.sh
after_failure:
- tests/travis/travis_after_failure.sh


@@ -1,61 +1,92 @@
cmake_minimum_required(VERSION 3.3)
project(xgboost LANGUAGES CXX C VERSION 0.90)
cmake_minimum_required(VERSION 3.13)
project(xgboost LANGUAGES CXX C VERSION 1.1.1)
include(cmake/Utils.cmake)
list(APPEND CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake/modules")
list(APPEND CMAKE_MODULE_PATH "${xgboost_SOURCE_DIR}/cmake/modules")
cmake_policy(SET CMP0022 NEW)
cmake_policy(SET CMP0079 NEW)
cmake_policy(SET CMP0063 NEW)
if ((${CMAKE_VERSION} VERSION_GREATER 3.13) OR (${CMAKE_VERSION} VERSION_EQUAL 3.13))
cmake_policy(SET CMP0077 NEW)
endif ((${CMAKE_VERSION} VERSION_GREATER 3.13) OR (${CMAKE_VERSION} VERSION_EQUAL 3.13))
message(STATUS "CMake version ${CMAKE_VERSION}")
if (MSVC)
cmake_minimum_required(VERSION 3.11)
endif (MSVC)
if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.0)
message(FATAL_ERROR "GCC version must be at least 5.0!")
endif()
include(${xgboost_SOURCE_DIR}/cmake/FindPrefetchIntrinsics.cmake)
find_prefetch_intrinsics()
include(${xgboost_SOURCE_DIR}/cmake/Version.cmake)
write_version()
set_default_configuration_release()
#-- Options
option(BUILD_C_DOC "Build documentation for C APIs using Doxygen." OFF)
option(USE_OPENMP "Build with OpenMP support." ON)
option(BUILD_STATIC_LIB "Build static library" OFF)
## Bindings
option(JVM_BINDINGS "Build JVM bindings" OFF)
option(R_LIB "Build shared library for R package" OFF)
## Dev
option(USE_DEBUG_OUTPUT "Dump internal training results like gradients and predictions to stdout.
Should only be used for debugging." OFF)
option(GOOGLE_TEST "Build google tests" OFF)
option(USE_DMLC_GTEST "Use google tests bundled with dmlc-core submodule (EXPERIMENTAL)" OFF)
option(USE_DMLC_GTEST "Use google tests bundled with dmlc-core submodule" OFF)
option(USE_NVTX "Build with cuda profiling annotations. Developers only." OFF)
set(NVTX_HEADER_DIR "" CACHE PATH "Path to the stand-alone nvtx header")
option(RABIT_MOCK "Build rabit with mock" OFF)
option(HIDE_CXX_SYMBOLS "Build shared library and hide all C++ symbols" OFF)
## CUDA
option(USE_CUDA "Build with GPU acceleration" OFF)
option(USE_NCCL "Build with NCCL to enable multi-GPU support." OFF)
option(USE_NCCL "Build with NCCL to enable distributed GPU support." OFF)
option(BUILD_WITH_SHARED_NCCL "Build with shared NCCL library." OFF)
set(GPU_COMPUTE_VER "" CACHE STRING
"Semicolon separated list of compute versions to be built against, e.g. '35;61'")
if (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_NCCL=ON to enable BUILD_WITH_SHARED_NCCL.")
endif (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
## Copied From dmlc
option(USE_HDFS "Build with HDFS support" OFF)
option(USE_AZURE "Build with AZURE support" OFF)
option(USE_S3 "Build with S3 support" OFF)
## Sanitizers
option(USE_SANITIZER "Use santizer flags" OFF)
option(SANITIZER_PATH "Path to sanitizes.")
set(ENABLED_SANITIZERS "address" "leak" CACHE STRING
"Semicolon separated list of sanitizer names. E.g 'address;leak'. Supported sanitizers are
address, leak and thread.")
address, leak, undefined and thread.")
## Plugins
option(PLUGIN_LZ4 "Build lz4 plugin" OFF)
option(PLUGIN_DENSE_PARSER "Build dense parser plugin" OFF)
option(ADD_PKGCONFIG "Add xgboost.pc into system." ON)
## Deprecation warning
#-- Checks for building XGBoost
if (USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
message(SEND_ERROR "Do not enable `USE_DEBUG_OUTPUT' with release build.")
endif (USE_DEBUG_OUTPUT AND (NOT (CMAKE_BUILD_TYPE MATCHES Debug)))
if (USE_NCCL AND NOT (USE_CUDA))
message(SEND_ERROR "`USE_NCCL` must be enabled with `USE_CUDA` flag.")
endif (USE_NCCL AND NOT (USE_CUDA))
if (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
message(SEND_ERROR "Build XGBoost with -DUSE_NCCL=ON to enable BUILD_WITH_SHARED_NCCL.")
endif (BUILD_WITH_SHARED_NCCL AND (NOT USE_NCCL))
if (JVM_BINDINGS AND R_LIB)
message(SEND_ERROR "`R_LIB' is not compatible with `JVM_BINDINGS' as they both have customized configurations.")
endif (JVM_BINDINGS AND R_LIB)
if (R_LIB AND GOOGLE_TEST)
message(WARNING "Some C++ unittests will fail with `R_LIB` enabled,
as R package redirects some functions to R runtime implementation.")
endif (R_LIB AND GOOGLE_TEST)
if (USE_AVX)
message(WARNING "The option 'USE_AVX' is deprecated as experimental AVX features have been removed from xgboost.")
message(SEND_ERROR "The option 'USE_AVX' is deprecated as experimental AVX features have been removed from XGBoost.")
endif (USE_AVX)
# Sanitizer
#-- Sanitizer
if (USE_SANITIZER)
# Older CMake versions have had troubles with Sanitizer
cmake_minimum_required(VERSION 3.12)
include(cmake/Sanitizer.cmake)
enable_sanitizers("${ENABLED_SANITIZERS}")
endif (USE_SANITIZER)
if (USE_CUDA)
cmake_minimum_required(VERSION 3.12)
SET(USE_OPENMP ON CACHE BOOL "CUDA requires OpenMP" FORCE)
# `export CXX=' is ignored by CMake CUDA.
set(CMAKE_CUDA_HOST_COMPILER ${CMAKE_CXX_COMPILER})
@@ -67,9 +98,20 @@ if (USE_CUDA)
message(STATUS "CUDA GEN_CODE: ${GEN_CODE}")
endif (USE_CUDA)
find_package(Threads REQUIRED)
if (USE_OPENMP)
if (APPLE)
# Require CMake 3.16+ on Mac OSX, as previous versions of CMake had trouble locating
# OpenMP on Mac. See https://github.com/dmlc/xgboost/pull/5146#issuecomment-568312706
cmake_minimum_required(VERSION 3.16)
endif (APPLE)
find_package(OpenMP REQUIRED)
endif (USE_OPENMP)
# dmlc-core
msvc_use_static_runtime()
add_subdirectory(${PROJECT_SOURCE_DIR}/dmlc-core)
add_subdirectory(${xgboost_SOURCE_DIR}/dmlc-core)
set_target_properties(dmlc PROPERTIES
CXX_STANDARD 11
CXX_STANDARD_REQUIRED ON
@@ -77,40 +119,52 @@ set_target_properties(dmlc PROPERTIES
list(APPEND LINKED_LIBRARIES_PRIVATE dmlc)
# rabit
# full rabit doesn't build on windows, so we can't import it as subdirectory
if(MINGW OR R_LIB)
set(RABIT_SOURCES
rabit/src/engine_empty.cc
rabit/src/c_api.cc)
else ()
set(RABIT_SOURCES
rabit/src/allreduce_base.cc
rabit/src/allreduce_robust.cc
rabit/src/engine.cc
rabit/src/c_api.cc)
endif (MINGW OR R_LIB)
add_library(rabit STATIC ${RABIT_SOURCES})
target_include_directories(rabit PRIVATE
$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/dmlc-core/include>
$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/rabit/include/rabit>)
set_target_properties(rabit
PROPERTIES
CXX_STANDARD 11
CXX_STANDARD_REQUIRED ON
POSITION_INDEPENDENT_CODE ON)
list(APPEND LINKED_LIBRARIES_PRIVATE rabit)
set(RABIT_BUILD_DMLC OFF)
set(DMLC_ROOT ${xgboost_SOURCE_DIR}/dmlc-core)
set(RABIT_WITH_R_LIB ${R_LIB})
add_subdirectory(rabit)
if (RABIT_MOCK)
list(APPEND LINKED_LIBRARIES_PRIVATE rabit_mock_static)
else()
list(APPEND LINKED_LIBRARIES_PRIVATE rabit)
endif(RABIT_MOCK)
foreach(lib rabit rabit_base rabit_empty rabit_mock rabit_mock_static)
# Explicitly link dmlc to rabit, so that configured header (build_config.h)
# from dmlc is correctly applied to rabit.
if (TARGET ${lib})
target_link_libraries(${lib} dmlc ${CMAKE_THREAD_LIBS_INIT})
if (HIDE_CXX_SYMBOLS) # Hide all C++ symbols from Rabit
set_target_properties(${lib} PROPERTIES CXX_VISIBILITY_PRESET hidden)
endif (HIDE_CXX_SYMBOLS)
endif (TARGET ${lib})
endforeach()
# Exports some R specific definitions and objects
if (R_LIB)
add_subdirectory(${PROJECT_SOURCE_DIR}/R-package)
add_subdirectory(${xgboost_SOURCE_DIR}/R-package)
endif (R_LIB)
# core xgboost
add_subdirectory(${PROJECT_SOURCE_DIR}/src)
list(APPEND LINKED_LIBRARIES_PRIVATE Threads::Threads ${CMAKE_THREAD_LIBS_INIT})
add_subdirectory(${xgboost_SOURCE_DIR}/plugin)
add_subdirectory(${xgboost_SOURCE_DIR}/src)
target_link_libraries(objxgboost PUBLIC dmlc)
set(XGBOOST_OBJ_SOURCES "${XGBOOST_OBJ_SOURCES};$<TARGET_OBJECTS:objxgboost>")
#-- Shared library
add_library(xgboost SHARED ${XGBOOST_OBJ_SOURCES} ${PLUGINS_SOURCES})
#-- library
if (BUILD_STATIC_LIB)
add_library(xgboost STATIC ${XGBOOST_OBJ_SOURCES})
else (BUILD_STATIC_LIB)
add_library(xgboost SHARED ${XGBOOST_OBJ_SOURCES})
endif (BUILD_STATIC_LIB)
#-- Hide all C++ symbols
if (HIDE_CXX_SYMBOLS)
set_target_properties(objxgboost PROPERTIES CXX_VISIBILITY_PRESET hidden)
set_target_properties(xgboost PROPERTIES CXX_VISIBILITY_PRESET hidden)
endif (HIDE_CXX_SYMBOLS)
target_include_directories(xgboost
INTERFACE
$<INSTALL_INTERFACE:${CMAKE_INSTALL_PREFIX}/include>
@@ -119,22 +173,18 @@ target_link_libraries(xgboost PRIVATE ${LINKED_LIBRARIES_PRIVATE})
# This creates its own shared library `xgboost4j'.
if (JVM_BINDINGS)
add_subdirectory(${PROJECT_SOURCE_DIR}/jvm-packages)
add_subdirectory(${xgboost_SOURCE_DIR}/jvm-packages)
endif (JVM_BINDINGS)
#-- End shared library
#-- CLI for xgboost
add_executable(runxgboost ${PROJECT_SOURCE_DIR}/src/cli_main.cc ${XGBOOST_OBJ_SOURCES})
# For cli_main.cc only
if (USE_OPENMP)
find_package(OpenMP REQUIRED)
target_compile_options(runxgboost PRIVATE ${OpenMP_CXX_FLAGS})
endif (USE_OPENMP)
add_executable(runxgboost ${xgboost_SOURCE_DIR}/src/cli_main.cc ${XGBOOST_OBJ_SOURCES})
target_include_directories(runxgboost
PRIVATE
${PROJECT_SOURCE_DIR}/include
${PROJECT_SOURCE_DIR}/dmlc-core/include
${PROJECT_SOURCE_DIR}/rabit/include)
${xgboost_SOURCE_DIR}/include
${xgboost_SOURCE_DIR}/dmlc-core/include
${xgboost_SOURCE_DIR}/rabit/include)
target_link_libraries(runxgboost PRIVATE ${LINKED_LIBRARIES_PRIVATE})
set_target_properties(
runxgboost PROPERTIES
@@ -143,8 +193,8 @@ set_target_properties(
CXX_STANDARD_REQUIRED ON)
#-- End CLI for xgboost
set_output_directory(runxgboost ${PROJECT_SOURCE_DIR})
set_output_directory(xgboost ${PROJECT_SOURCE_DIR}/lib)
set_output_directory(runxgboost ${xgboost_SOURCE_DIR})
set_output_directory(xgboost ${xgboost_SOURCE_DIR}/lib)
# Ensure these two targets do not build simultaneously, as they produce outputs with conflicting names
add_dependencies(xgboost runxgboost)
@@ -167,11 +217,9 @@ if (BUILD_C_DOC)
endif (BUILD_C_DOC)
include(GNUInstallDirs)
# Exposing only C APIs.
install(FILES
"${PROJECT_SOURCE_DIR}/include/xgboost/c_api.h"
DESTINATION
include/xgboost/)
# Install all headers. Please note that currently the C++ headers does not form an "API".
install(DIRECTORY ${xgboost_SOURCE_DIR}/include/xgboost
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
install(TARGETS xgboost runxgboost
EXPORT XGBoostTargets
@@ -203,21 +251,21 @@ install(
if (GOOGLE_TEST)
enable_testing()
# Unittests.
add_subdirectory(${PROJECT_SOURCE_DIR}/tests/cpp)
add_subdirectory(${xgboost_SOURCE_DIR}/tests/cpp)
add_test(
NAME TestXGBoostLib
COMMAND testxgboost
WORKING_DIRECTORY ${PROJECT_BINARY_DIR})
WORKING_DIRECTORY ${xgboost_BINARY_DIR})
# CLI tests
configure_file(
${PROJECT_SOURCE_DIR}/tests/cli/machine.conf.in
${PROJECT_BINARY_DIR}/tests/cli/machine.conf
${xgboost_SOURCE_DIR}/tests/cli/machine.conf.in
${xgboost_BINARY_DIR}/tests/cli/machine.conf
@ONLY)
add_test(
NAME TestXGBoostCLI
COMMAND runxgboost ${PROJECT_BINARY_DIR}/tests/cli/machine.conf
WORKING_DIRECTORY ${PROJECT_BINARY_DIR})
COMMAND runxgboost ${xgboost_BINARY_DIR}/tests/cli/machine.conf
WORKING_DIRECTORY ${xgboost_BINARY_DIR})
set_tests_properties(TestXGBoostCLI
PROPERTIES
PASS_REGULAR_EXPRESSION ".*test-rmse:0.087.*")
@@ -227,3 +275,12 @@ endif (GOOGLE_TEST)
# replace /MD with /MT. See https://github.com/dmlc/xgboost/issues/4462
# for issues caused by mixing of /MD and /MT flags
msvc_use_static_runtime()
# Add xgboost.pc
if (ADD_PKGCONFIG)
configure_file(${xgboost_SOURCE_DIR}/cmake/xgboost.pc.in ${xgboost_BINARY_DIR}/xgboost.pc @ONLY)
install(
FILES ${xgboost_BINARY_DIR}/xgboost.pc
DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig)
endif (ADD_PKGCONFIG)


@@ -2,34 +2,42 @@ Contributors of DMLC/XGBoost
============================
XGBoost has been developed and used by a group of active community members. Everyone is more than welcome to contribute; it is a great way to make the project better and more accessible to more users.
Committers
Project Management Committee(PMC)
----------
Committers are people who have made substantial contribution to the project and granted write access to the project.
The Project Management Committee (PMC) consists of a group of active committers who moderate the discussion, manage project releases, and propose new committer/PMC members.
* [Tianqi Chen](https://github.com/tqchen), University of Washington
- Tianqi is a Ph.D. student working on large-scale machine learning. He is the creator of the project.
* [Tong He](https://github.com/hetong007), Amazon AI
- Tong is an applied scientist in Amazon AI. He is the maintainer of XGBoost R package.
* [Vadim Khotilovich](https://github.com/khotilov)
- Vadim contributes many improvements in R and core packages.
* [Bing Xu](https://github.com/antinucleon)
- Bing is the original creator of XGBoost Python package and currently the maintainer of [XGBoost.jl](https://github.com/antinucleon/XGBoost.jl).
* [Michael Benesty](https://github.com/pommedeterresautee)
- Michael is a lawyer and data scientist in France. He is the creator of XGBoost interactive analysis module in R.
* [Yuan Tang](https://github.com/terrytangyuan), Ant Financial
- Yuan is a software engineer in Ant Financial. He contributed mostly in R and Python packages.
* [Nan Zhu](https://github.com/CodingCat), Uber
- Nan is a software engineer in Uber. He contributed mostly in JVM packages.
* [Sergei Lebedev](https://github.com/superbobry), Criteo
- Sergei is a software engineer in Criteo. He contributed mostly in JVM packages.
* [Hongliang Liu](https://github.com/phunterlau)
* [Scott Lundberg](http://scottlundberg.com/), University of Washington
- Scott is a Ph.D. student at University of Washington. He is the creator of SHAP, a unified approach to explain the output of machine learning models such as decision tree ensembles. He also helps maintain the XGBoost Julia package.
* [Jiaming Yuan](https://github.com/trivialfis)
- Jiaming contributed to the GPU algorithms. He has also introduced new abstractions to improve the quality of the C++ codebase.
* [Hyunsu Cho](http://hyunsu-cho.io/), NVIDIA
- Hyunsu is the maintainer of the XGBoost Python package. He also manages the Jenkins continuous integration system (https://xgboost-ci.net/). He is the initial author of the CPU 'hist' updater.
* [Rory Mitchell](https://github.com/RAMitchell), University of Waikato
- Rory is a Ph.D. student at University of Waikato. He is the original creator of the GPU training algorithms. He improved the CMake build system and continuous integration.
* [Hyunsu Cho](http://hyunsu-cho.io/), Amazon AI
- Hyunsu is an applied scientist in Amazon AI. He is the maintainer of the XGBoost Python package. He also manages the Jenkins continuous integration system (https://xgboost-ci.net/). He is the initial author of the CPU 'hist' updater.
* [Jiaming](https://github.com/trivialfis)
- Jiaming contributed to the GPU algorithms. He has also introduced new abstractions to improve the quality of the C++ codebase.
* [Hongliang Liu](https://github.com/phunterlau)
Committers
----------
Committers are people who have made substantial contribution to the project and granted write access to the project.
* [Tong He](https://github.com/hetong007), Amazon AI
- Tong is an applied scientist in Amazon AI. He is the maintainer of XGBoost R package.
* [Vadim Khotilovich](https://github.com/khotilov)
- Vadim contributes many improvements in R and core packages.
* [Bing Xu](https://github.com/antinucleon)
- Bing is the original creator of XGBoost Python package and currently the maintainer of [XGBoost.jl](https://github.com/antinucleon/XGBoost.jl).
* [Sergei Lebedev](https://github.com/superbobry), Criteo
- Sergei is a software engineer in Criteo. He contributed mostly in JVM packages.
* [Scott Lundberg](http://scottlundberg.com/), University of Washington
- Scott is a Ph.D. student at University of Washington. He is the creator of SHAP, a unified approach to explain the output of machine learning models such as decision tree ensembles. He also helps maintain the XGBoost Julia package.
Become a Committer
------------------
@@ -89,3 +97,8 @@ List of Contributors
* [Sam Wilkinson](https://samwilkinson.io)
* [Matthew Jones](https://github.com/mt-jones)
* [Jiaxiang Li](https://github.com/JiaxiangBU)
* [Bryan Woods](https://github.com/bryan-woods)
- Bryan added support for cross-validation for the ranking objective
* [Haoda Fu](https://github.com/fuhaoda)
* [Evan Kepner](https://github.com/EvanKepner)
- Evan Kepner added support for os.PathLike file paths in Python

Jenkinsfile vendored

@@ -6,19 +6,25 @@
// Command to run command inside a docker container
dockerRun = 'tests/ci_build/ci_build.sh'
import groovy.transform.Field
@Field
def commit_id // necessary to pass a variable from one stage to another
pipeline {
// Each stage specify its own agent
agent none
environment {
DOCKER_CACHE_REPO = '492475357299.dkr.ecr.us-west-2.amazonaws.com'
DOCKER_CACHE_ECR_ID = '492475357299'
DOCKER_CACHE_ECR_REGION = 'us-west-2'
}
// Setup common job properties
options {
ansiColor('xterm')
timestamps()
timeout(time: 120, unit: 'MINUTES')
timeout(time: 240, unit: 'MINUTES')
buildDiscarder(logRotator(numToKeepStr: '10'))
preserveStashes()
}
@@ -30,6 +36,7 @@ pipeline {
steps {
script {
checkoutSrcs()
commit_id = "${GIT_COMMIT}"
}
stash name: 'srcs'
milestone ordinal: 1
@@ -41,7 +48,6 @@ pipeline {
script {
parallel ([
'clang-tidy': { ClangTidy() },
'lint': { Lint() },
'sphinx-doc': { SphinxDoc() },
'doxygen': { Doxygen() }
])
@@ -55,8 +61,8 @@ pipeline {
script {
parallel ([
'build-cpu': { BuildCPU() },
'build-gpu-cuda8.0': { BuildCUDA(cuda_version: '8.0') },
'build-gpu-cuda9.0': { BuildCUDA(cuda_version: '9.0') },
'build-cpu-rabit-mock': { BuildCPUMock() },
'build-cpu-non-omp': { BuildCPUNonOmp() },
'build-gpu-cuda10.0': { BuildCUDA(cuda_version: '10.0') },
'build-gpu-cuda10.1': { BuildCUDA(cuda_version: '10.1') },
'build-jvm-packages': { BuildJVMPackages(spark_version: '2.4.3') },
@@ -72,7 +78,6 @@ pipeline {
script {
parallel ([
'test-python-cpu': { TestPythonCPU() },
'test-python-gpu-cuda8.0': { TestPythonGPU(cuda_version: '8.0') },
'test-python-gpu-cuda9.0': { TestPythonGPU(cuda_version: '9.0') },
'test-python-gpu-cuda10.0': { TestPythonGPU(cuda_version: '10.0') },
'test-python-gpu-cuda10.1': { TestPythonGPU(cuda_version: '10.1') },
@@ -82,13 +87,23 @@ pipeline {
'test-jvm-jdk8': { CrossTestJVMwithJDK(jdk_version: '8', spark_version: '2.4.3') },
'test-jvm-jdk11': { CrossTestJVMwithJDK(jdk_version: '11') },
'test-jvm-jdk12': { CrossTestJVMwithJDK(jdk_version: '12') },
'test-r-3.4.4': { TestR(use_r35: false) },
'test-r-3.5.3': { TestR(use_r35: true) }
])
}
milestone ordinal: 4
}
}
stage('Jenkins Linux: Deploy') {
agent none
steps {
script {
parallel ([
'deploy-jvm-packages': { DeployJVMPackages(spark_version: '2.4.3') }
])
}
milestone ordinal: 5
}
}
}
}
@@ -113,9 +128,9 @@ def ClangTidy() {
echo "Running clang-tidy job..."
def container_type = "clang_tidy"
def docker_binary = "docker"
def dockerArgs = "--build-arg CUDA_VERSION=9.2"
def dockerArgs = "--build-arg CUDA_VERSION=10.1"
sh """
${dockerRun} ${container_type} ${docker_binary} ${dockerArgs} tests/ci_build/clang_tidy.sh
${dockerRun} ${container_type} ${docker_binary} ${dockerArgs} python3 tests/ci_build/tidy.py
"""
deleteDir()
}
@@ -157,7 +172,6 @@ def Doxygen() {
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/doxygen.sh ${BRANCH_NAME}
"""
archiveArtifacts artifacts: "build/${BRANCH_NAME}.tar.bz2", allowEmptyArchive: true
echo 'Uploading doc...'
s3Upload file: "build/${BRANCH_NAME}.tar.bz2", bucket: 'xgboost-docs', acl: 'PublicRead', path: "doxygen/${BRANCH_NAME}.tar.bz2"
deleteDir()
@@ -171,17 +185,54 @@ def BuildCPU() {
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} rm -fv dmlc-core/include/dmlc/build_config_default.h
# This step is not strictly necessary, but we include it here to ensure that the DMLC_CORE_USE_CMAKE flag is correctly propagated.
# We want to make sure that we use the configured header build/dmlc/build_config.h instead of include/dmlc/build_config_default.h.
# See discussion at https://github.com/dmlc/xgboost/issues/5510
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh
${dockerRun} ${container_type} ${docker_binary} build/testxgboost
"""
// Sanitizer test
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='-e ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer -e ASAN_OPTIONS=symbolize=1 --cap-add SYS_PTRACE'"
def docker_args = "--build-arg CMAKE_VERSION=3.12"
def docker_extra_params = "CI_DOCKER_EXTRA_PARAMS_INIT='-e ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer -e ASAN_OPTIONS=symbolize=1 -e UBSAN_OPTIONS=print_stacktrace=1:log_path=ubsan_error.log --cap-add SYS_PTRACE'"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_via_cmake.sh -DUSE_SANITIZER=ON -DENABLED_SANITIZERS="address" \
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh -DUSE_SANITIZER=ON -DENABLED_SANITIZERS="address;leak;undefined" \
-DCMAKE_BUILD_TYPE=Debug -DSANITIZER_PATH=/usr/lib/x86_64-linux-gnu/
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} build/testxgboost
"""
stash name: 'xgboost_cli', includes: 'xgboost'
deleteDir()
}
}
def BuildCPUMock() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Build CPU with rabit mock"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_mock_cmake.sh
"""
echo 'Stashing rabit C++ test executable (xgboost)...'
stash name: 'xgboost_rabit_tests', includes: 'xgboost'
deleteDir()
}
}
def BuildCPUNonOmp() {
node('linux && cpu') {
unstash name: 'srcs'
echo "Build CPU without OpenMP"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_via_cmake.sh -DUSE_OPENMP=OFF
"""
echo "Running Non-OpenMP C++ test..."
sh """
${dockerRun} ${container_type} ${docker_binary} build/testxgboost
"""
deleteDir()
}
}
@@ -194,17 +245,16 @@ def BuildCUDA(args) {
def docker_binary = "docker"
def docker_args = "--build-arg CUDA_VERSION=${args.cuda_version}"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_via_cmake.sh -DUSE_CUDA=ON -DUSE_NCCL=ON -DOPEN_MP:BOOL=ON
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_via_cmake.sh -DUSE_CUDA=ON -DUSE_NCCL=ON -DOPEN_MP:BOOL=ON -DHIDE_CXX_SYMBOLS=ON
${dockerRun} ${container_type} ${docker_binary} ${docker_args} bash -c "cd python-package && rm -rf dist/* && python setup.py bdist_wheel --universal"
${dockerRun} ${container_type} ${docker_binary} ${docker_args} python3 tests/ci_build/rename_whl.py python-package/dist/*.whl ${commit_id} manylinux2010_x86_64
"""
// Stash wheel for CUDA 8.0 / 9.0 target
if (args.cuda_version == '8.0') {
// Stash wheel for CUDA 10.0 target
if (args.cuda_version == '10.0') {
echo 'Stashing Python wheel...'
stash name: 'xgboost_whl_cuda8', includes: 'python-package/dist/*.whl'
} else if (args.cuda_version == '9.0') {
echo 'Stashing Python wheel...'
stash name: 'xgboost_whl_cuda9', includes: 'python-package/dist/*.whl'
archiveArtifacts artifacts: "python-package/dist/*.whl", allowEmptyArchive: true
stash name: 'xgboost_whl_cuda10', includes: 'python-package/dist/*.whl'
path = ("${BRANCH_NAME}" == 'master') ? '' : "${BRANCH_NAME}/"
s3Upload bucket: 'xgboost-nightly-builds', path: path, acl: 'PublicRead', workingDir: 'python-package/dist', includePathPattern:'**/*.whl'
echo 'Stashing C++ test executable (testxgboost)...'
stash name: 'xgboost_cpp_tests', includes: 'build/testxgboost'
}
@@ -224,7 +274,7 @@ def BuildJVMPackages(args) {
${docker_extra_params} ${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_jvm_packages.sh ${args.spark_version}
"""
echo 'Stashing XGBoost4J JAR...'
stash name: 'xgboost4j_jar', includes: 'jvm-packages/xgboost4j/target/*.jar,jvm-packages/xgboost4j-spark/target/*.jar,jvm-packages/xgboost4j-example/target/*.jar'
stash name: 'xgboost4j_jar', includes: "jvm-packages/xgboost4j/target/*.jar,jvm-packages/xgboost4j-spark/target/*.jar,jvm-packages/xgboost4j-example/target/*.jar"
deleteDir()
}
}
@@ -238,7 +288,6 @@ def BuildJVMDoc() {
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/build_jvm_doc.sh ${BRANCH_NAME}
"""
archiveArtifacts artifacts: "jvm-packages/${BRANCH_NAME}.tar.bz2", allowEmptyArchive: true
echo 'Uploading doc...'
s3Upload file: "jvm-packages/${BRANCH_NAME}.tar.bz2", bucket: 'xgboost-docs', acl: 'PublicRead', path: "${BRANCH_NAME}.tar.bz2"
deleteDir()
@@ -247,13 +296,15 @@ def BuildJVMDoc() {
def TestPythonCPU() {
node('linux && cpu') {
unstash name: 'xgboost_whl_cuda9'
unstash name: 'xgboost_whl_cuda10'
unstash name: 'srcs'
unstash name: 'xgboost_cli'
echo "Test Python CPU"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/test_python.sh cpu
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/test_python.sh cpu-py35
"""
deleteDir()
}
@@ -262,11 +313,7 @@ def TestPythonCPU() {
def TestPythonGPU(args) {
nodeReq = (args.multi_gpu) ? 'linux && mgpu' : 'linux && gpu'
node(nodeReq) {
if (args.cuda_version == '8.0') {
unstash name: 'xgboost_whl_cuda8'
} else {
unstash name: 'xgboost_whl_cuda9'
}
unstash name: 'xgboost_whl_cuda10'
unstash name: 'srcs'
echo "Test Python GPU: CUDA ${args.cuda_version}"
def container_type = "gpu"
@@ -277,12 +324,39 @@ def TestPythonGPU(args) {
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh mgpu
"""
if (args.cuda_version != '9.0') {
echo "Running tests with cuDF..."
sh """
${dockerRun} cudf ${docker_binary} ${docker_args} tests/ci_build/test_python.sh mgpu-cudf
"""
}
} else {
echo "Using a single GPU"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/test_python.sh gpu
"""
if (args.cuda_version != '9.0') {
echo "Running tests with cuDF..."
sh """
${dockerRun} cudf ${docker_binary} ${docker_args} tests/ci_build/test_python.sh cudf
"""
}
}
// For CUDA 10.0 target, run cuDF tests too
deleteDir()
}
}
def TestCppRabit() {
node(nodeReq) {
unstash name: 'xgboost_rabit_tests'
unstash name: 'srcs'
echo "Test C++, rabit mock on"
def container_type = "cpu"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/runxgb.sh xgboost tests/ci_build/approx.conf.in
"""
deleteDir()
}
}
@@ -338,8 +412,23 @@ def TestR(args) {
def use_r35_flag = (args.use_r35) ? "1" : "0"
def docker_args = "--build-arg USE_R35=${use_r35_flag}"
sh """
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_test_rpkg.sh
${dockerRun} ${container_type} ${docker_binary} ${docker_args} tests/ci_build/build_test_rpkg.sh || tests/ci_build/print_r_stacktrace.sh
"""
deleteDir()
}
}
def DeployJVMPackages(args) {
node('linux && cpu') {
unstash name: 'srcs'
if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release')) {
echo 'Deploying to xgboost-maven-repo S3 repo...'
def container_type = "jvm"
def docker_binary = "docker"
sh """
${dockerRun} ${container_type} ${docker_binary} tests/ci_build/deploy_jvm_packages.sh ${args.spark_version}
"""
}
deleteDir()
}
}


@@ -3,6 +3,11 @@
/* Jenkins pipeline for Windows AMD64 target */
import groovy.transform.Field
@Field
def commit_id // necessary to pass a variable from one stage to another
pipeline {
agent none
// Build stages
@@ -12,6 +17,7 @@ pipeline {
steps {
script {
checkoutSrcs()
commit_id = "${GIT_COMMIT}"
}
stash name: 'srcs'
milestone ordinal: 1
@@ -22,7 +28,7 @@ pipeline {
steps {
script {
parallel ([
'build-win64-cuda9.0': { BuildWin64() }
'build-win64-cuda10.0': { BuildWin64() }
])
}
milestone ordinal: 2
@@ -34,7 +40,6 @@ pipeline {
script {
parallel ([
'test-win64-cpu': { TestWin64CPU() },
'test-win64-gpu-cuda9.0': { TestWin64GPU(cuda_target: 'cuda9') },
'test-win64-gpu-cuda10.0': { TestWin64GPU(cuda_target: 'cuda10_0') },
'test-win64-gpu-cuda10.1': { TestWin64GPU(cuda_target: 'cuda10_1') }
])
@@ -61,7 +66,7 @@ def checkoutSrcs() {
}
def BuildWin64() {
node('win64 && build') {
node('win64 && build && cuda10') {
unstash name: 'srcs'
echo "Building XGBoost for Windows AMD64 target..."
bat "nvcc --version"
@@ -76,7 +81,7 @@ def BuildWin64() {
"""
bat """
cd python-package
conda activate && python setup.py bdist_wheel --universal
conda activate && python setup.py bdist_wheel --universal && for /R %%i in (dist\\*.whl) DO python ../tests/ci_build/rename_whl.py "%%i" ${commit_id} win_amd64
"""
echo "Insert vcomp140.dll (OpenMP runtime) into the wheel..."
bat """
@@ -86,9 +91,11 @@ def BuildWin64() {
"""
echo 'Stashing Python wheel...'
stash name: 'xgboost_whl', includes: 'python-package/dist/*.whl'
archiveArtifacts artifacts: "python-package/dist/*.whl", allowEmptyArchive: true
path = ("${BRANCH_NAME}" == 'master') ? '' : "${BRANCH_NAME}/"
s3Upload bucket: 'xgboost-nightly-builds', path: path, acl: 'PublicRead', workingDir: 'python-package/dist', includePathPattern:'**/*.whl'
echo 'Stashing C++ test executable (testxgboost)...'
stash name: 'xgboost_cpp_tests', includes: 'build/testxgboost.exe'
stash name: 'xgboost_cli', includes: 'xgboost.exe'
deleteDir()
}
}
@@ -97,12 +104,17 @@ def TestWin64CPU() {
node('win64 && cpu') {
unstash name: 'srcs'
unstash name: 'xgboost_whl'
unstash name: 'xgboost_cli'
echo "Test Win64 CPU"
echo "Installing Python wheel..."
bat "conda activate && (python -m pip uninstall -y xgboost || cd .)"
bat """
conda activate && for /R %%i in (python-package\\dist\\*.whl) DO python -m pip install "%%i"
"""
echo "Installing Python dependencies..."
bat """
conda activate && conda upgrade scikit-learn pandas numpy
"""
echo "Running Python tests..."
bat "conda activate && python -m pytest -v -s --fulltrace tests\\python"
bat "conda activate && python -m pip uninstall -y xgboost"
@@ -124,6 +136,10 @@ def TestWin64GPU(args) {
bat """
conda activate && for /R %%i in (python-package\\dist\\*.whl) DO python -m pip install "%%i"
"""
echo "Installing Python dependencies..."
bat """
conda activate && conda upgrade scikit-learn pandas numpy
"""
echo "Running Python tests..."
bat """
conda activate && python -m pytest -v -s --fulltrace -m "(not slow) and (not mgpu)" tests\\python-gpu


@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2018 by Contributors
Copyright (c) 2019 by Contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

Makefile

@@ -1,11 +1,3 @@
ifndef config
ifneq ("$(wildcard ./config.mk)","")
config = config.mk
else
config = make/config.mk
endif
endif
ifndef DMLC_CORE
DMLC_CORE = dmlc-core
endif
@@ -30,23 +22,8 @@ ifndef MAKE_OK
endif
$(warning MAKE [$(MAKE)] - $(if $(MAKE_OK),checked OK,PROBLEM))
ifeq ($(OS), Windows_NT)
UNAME="Windows"
else
UNAME=$(shell uname)
endif
include $(config)
ifeq ($(USE_OPENMP), 0)
export NO_OPENMP = 1
endif
include $(DMLC_CORE)/make/dmlc.mk
# include the plugins
ifdef XGB_PLUGINS
include $(XGB_PLUGINS)
endif
# set compiler defaults for OSX versus *nix
# let people override either
OS := $(shell uname)
@@ -67,111 +44,31 @@ export CXX = g++
endif
endif
export LDFLAGS= -pthread -lm $(ADD_LDFLAGS) $(DMLC_LDFLAGS) $(PLUGIN_LDFLAGS)
export CFLAGS= -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude $(ADD_CFLAGS) $(PLUGIN_CFLAGS)
export CFLAGS= -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude $(ADD_CFLAGS)
CFLAGS += -I$(DMLC_CORE)/include -I$(RABIT)/include -I$(GTEST_PATH)/include
#java include path
export JAVAINCFLAGS = -I${JAVA_HOME}/include -I./java
ifeq ($(TEST_COVER), 1)
CFLAGS += -g -O0 -fprofile-arcs -ftest-coverage
else
CFLAGS += -O3 -funroll-loops
ifeq ($(USE_SSE), 1)
CFLAGS += -msse2
endif
endif
ifndef LINT_LANG
LINT_LANG= "all"
endif
ifeq ($(UNAME), Windows)
XGBOOST_DYLIB = lib/xgboost.dll
JAVAINCFLAGS += -I${JAVA_HOME}/include/win32
else
ifeq ($(UNAME), Darwin)
XGBOOST_DYLIB = lib/libxgboost.dylib
CFLAGS += -fPIC
else
XGBOOST_DYLIB = lib/libxgboost.so
CFLAGS += -fPIC
endif
endif
ifeq ($(UNAME), Linux)
LDFLAGS += -lrt
JAVAINCFLAGS += -I${JAVA_HOME}/include/linux
endif
ifeq ($(UNAME), Darwin)
JAVAINCFLAGS += -I${JAVA_HOME}/include/darwin
endif
OPENMP_FLAGS =
ifeq ($(USE_OPENMP), 1)
OPENMP_FLAGS = -fopenmp
else
OPENMP_FLAGS = -DDISABLE_OPENMP
endif
CFLAGS += $(OPENMP_FLAGS)
# specify tensor path
.PHONY: clean all lint clean_all doxygen rcpplint pypack Rpack Rbuild Rcheck java pylint
all: lib/libxgboost.a $(XGBOOST_DYLIB) xgboost
$(DMLC_CORE)/libdmlc.a: $(wildcard $(DMLC_CORE)/src/*.cc $(DMLC_CORE)/src/*/*.cc)
+ cd $(DMLC_CORE); "$(MAKE)" libdmlc.a config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
$(RABIT)/lib/$(LIB_RABIT): $(wildcard $(RABIT)/src/*.cc)
+ cd $(RABIT); "$(MAKE)" lib/$(LIB_RABIT) USE_SSE=$(USE_SSE); cd $(ROOTDIR)
jvm: jvm-packages/lib/libxgboost4j.so
SRC = $(wildcard src/*.cc src/*/*.cc)
ALL_OBJ = $(patsubst src/%.cc, build/%.o, $(SRC)) $(PLUGIN_OBJS)
AMALGA_OBJ = amalgamation/xgboost-all0.o
LIB_DEP = $(DMLC_CORE)/libdmlc.a $(RABIT)/lib/$(LIB_RABIT)
ALL_DEP = $(filter-out build/cli_main.o, $(ALL_OBJ)) $(LIB_DEP)
CLI_OBJ = build/cli_main.o
include tests/cpp/xgboost_test.mk
.PHONY: clean all lint clean_all doxygen rcpplint pypack Rpack Rbuild Rcheck
build/%.o: src/%.cc
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -MM -MT build/$*.o $< >build/$*.d
$(CXX) -c $(CFLAGS) $< -o $@
build_plugin/%.o: plugin/%.cc
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -MM -MT build_plugin/$*.o $< >build_plugin/$*.d
$(CXX) -c $(CFLAGS) $< -o $@
# This should be equivalent to $(ALL_OBJ) except for build/cli_main.o
amalgamation/xgboost-all0.o: amalgamation/xgboost-all0.cc
$(CXX) -c $(CFLAGS) $< -o $@
# Equivalent to lib/libxgboost_all.so
lib/libxgboost_all.so: $(AMALGA_OBJ) $(LIB_DEP)
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -shared -o $@ $(filter %.o %.a, $^) $(LDFLAGS)
lib/libxgboost.a: $(ALL_DEP)
@mkdir -p $(@D)
ar crv $@ $(filter %.o, $?)
lib/xgboost.dll lib/libxgboost.so lib/libxgboost.dylib: $(ALL_DEP)
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -shared -o $@ $(filter %.o %a, $^) $(LDFLAGS)
jvm-packages/lib/libxgboost4j.so: jvm-packages/xgboost4j/src/native/xgboost4j.cpp $(ALL_DEP)
@mkdir -p $(@D)
$(CXX) $(CFLAGS) $(JAVAINCFLAGS) -shared -o $@ $(filter %.cpp %.o %.a, $^) $(LDFLAGS)
xgboost: $(CLI_OBJ) $(ALL_DEP)
$(CXX) $(CFLAGS) -o $@ $(filter %.o %.a, $^) $(LDFLAGS)
rcpplint:
python3 dmlc-core/scripts/lint.py xgboost ${LINT_LANG} R-package/src
@@ -180,17 +77,7 @@ lint: rcpplint
python-package/xgboost/include python-package/xgboost/lib \
python-package/xgboost/make python-package/xgboost/rabit \
python-package/xgboost/src --pylint-rc ${PWD}/python-package/.pylintrc xgboost \
${LINT_LANG} include src plugin python-package
pylint:
flake8 --ignore E501 python-package
flake8 --ignore E501 tests/python
test: $(ALL_TEST)
$(ALL_TEST)
check: test
./tests/cpp/xgboost_test
${LINT_LANG} include src python-package
ifeq ($(TEST_COVER), 1)
cover: check
@@ -200,7 +87,7 @@ cover: check
endif
clean:
$(RM) -rf build build_plugin lib bin *~ */*~ */*/*~ */*/*/*~ */*.o */*/*.o */*/*/*.o #xgboost
$(RM) -rf build lib bin *~ */*~ */*/*~ */*/*/*~ */*.o */*/*.o */*/*/*.o #xgboost
$(RM) -rf build_tests *.gcov tests/cpp/xgboost_test
if [ -d "R-package/src" ]; then \
cd R-package/src; \
@@ -212,36 +99,9 @@ clean_all: clean
cd $(DMLC_CORE); "$(MAKE)" clean; cd $(ROOTDIR)
cd $(RABIT); "$(MAKE)" clean; cd $(ROOTDIR)
doxygen:
doxygen doc/Doxyfile
# create standalone python tar file.
pypack: ${XGBOOST_DYLIB}
cp ${XGBOOST_DYLIB} python-package/xgboost
cd python-package; tar cf xgboost.tar xgboost; cd ..
# create pip source dist (sdist) pack for PyPI
pippack: clean_all
rm -rf xgboost-python
# remove symlinked directories in python-package/xgboost
rm -rf python-package/xgboost/lib
rm -rf python-package/xgboost/dmlc-core
rm -rf python-package/xgboost/include
rm -rf python-package/xgboost/make
rm -rf python-package/xgboost/rabit
rm -rf python-package/xgboost/src
cp -r python-package xgboost-python
cp -r Makefile xgboost-python/xgboost/
cp -r make xgboost-python/xgboost/
cp -r src xgboost-python/xgboost/
cp -r tests xgboost-python/xgboost/
cp -r include xgboost-python/xgboost/
cp -r dmlc-core xgboost-python/xgboost/
cp -r rabit xgboost-python/xgboost/
# Use setup_pip.py instead of setup.py
mv xgboost-python/setup_pip.py xgboost-python/setup.py
# Build sdist tarball
cd xgboost-python; python setup.py sdist; mv dist/*.tar.gz ..; cd ..
cd python-package; python setup.py sdist; mv dist/*.tar.gz ..; cd ..
# Script to make a clean installable R package.
Rpack: clean_all
@@ -262,10 +122,17 @@ Rpack: clean_all
cp -r dmlc-core/include xgboost/src/dmlc-core/include
cp -r dmlc-core/src xgboost/src/dmlc-core/src
cp ./LICENSE xgboost
cat R-package/src/Makevars.in|sed '2s/.*/PKGROOT=./' | sed '3s/.*/ENABLE_STD_THREAD=0/' > xgboost/src/Makevars.in
# Modify PKGROOT in Makevars.in
cat R-package/src/Makevars.in|sed '2s/.*/PKGROOT=./' > xgboost/src/Makevars.in
# Configure Makevars.win (Windows-specific Makevars, likely using MinGW)
cp xgboost/src/Makevars.in xgboost/src/Makevars.win
cat xgboost/src/Makevars.in| sed '3s/.*/ENABLE_STD_THREAD=0/' > xgboost/src/Makevars.win
sed -i -e 's/@OPENMP_CXXFLAGS@/$$\(SHLIB_OPENMP_CXXFLAGS\)/g' xgboost/src/Makevars.win
sed -i -e 's/-pthread/$$\(SHLIB_PTHREAD_FLAGS\)/g' xgboost/src/Makevars.win
sed -i -e 's/@ENDIAN_FLAG@/-DDMLC_CMAKE_LITTLE_ENDIAN=1/g' xgboost/src/Makevars.win
sed -i -e 's/@BACKTRACE_LIB@//g' xgboost/src/Makevars.win
sed -i -e 's/@OPENMP_LIB@//g' xgboost/src/Makevars.win
rm -f xgboost/src/Makevars.win-e # OS X sed creates this extra file; remove it
bash R-package/remove_warning_suppression_pragma.sh
rm xgboost/remove_warning_suppression_pragma.sh
@@ -278,4 +145,3 @@ Rcheck: Rbuild
-include build/*.d
-include build/*/*.d
-include build_plugin/*/*.d

NEWS.md

@@ -3,6 +3,314 @@ XGBoost Change Log
This file records the changes in the xgboost library in reverse chronological order.
## v1.0.0 (2020.02.19)
This release marks a major milestone for the XGBoost project.
### Apache-style governance, contribution policy, and semantic versioning (#4646, #4659)
* Starting with the 1.0.0 release, the XGBoost Project is adopting Apache-style governance. The full community guideline is [available in the doc website](https://xgboost.readthedocs.io/en/release_1.0.0/contrib/community.html). Note that we now have a Project Management Committee (PMC) that will steward the project on a long-term basis. The PMC is also entrusted to run and fund the project's continuous integration (CI) infrastructure (https://xgboost-ci.net).
* We also adopt [semantic versioning](https://semver.org/). See [our release versioning policy](https://xgboost.readthedocs.io/en/release_1.0.0/contrib/release.html).
### Better performance scaling for multi-core CPUs (#4502, #4529, #4716, #4851, #5008, #5107, #5138, #5156)
* Poor performance scaling of the `hist` algorithm for multi-core CPUs has been under investigation (#3810). Previous effort #4529 was replaced with a series of pull requests (#5107, #5138, #5156) aimed at achieving the same performance benefits while keeping the C++ codebase legible. The latest performance benchmark results show [up to 5x speedup on Intel CPUs with many cores](https://github.com/dmlc/xgboost/pull/5156#issuecomment-580024413). Note: #5244, which concludes the effort, will become part of the upcoming release 1.1.0.
### Improved installation experience on Mac OSX (#4672, #5074, #5080, #5146, #5240)
* It used to be quite complicated to install XGBoost on Mac OSX. XGBoost uses OpenMP to distribute work among multiple CPU cores, and Mac's default C++ compiler (Apple Clang) does not come with OpenMP. The existing work-around (using another C++ compiler) was complex and prone to failure with cryptic diagnostics (#4933, #4949, #4969).
* Now it only takes two commands to install XGBoost: `brew install libomp` followed by `pip install xgboost`. The installed XGBoost will use all CPU cores.
* Even better, XGBoost is now available from Homebrew: `brew install xgboost`. See Homebrew/homebrew-core#50467.
* Previously, if you installed the XGBoost R package using the command `install.packages('xgboost')`, it could only use a single CPU core and you would experience slow training performance. With the 1.1.0 release, the R package will use all CPU cores out of the box.
### Distributed XGBoost now available on Kubernetes (#4621, #4939)
* Check out the [tutorial for setting up distributed XGBoost on a Kubernetes cluster](https://xgboost.readthedocs.io/en/release_1.0.0/tutorials/kubernetes.html).
### Ruby binding for XGBoost (#4856)
### New Native Dask interface for multi-GPU and multi-node scaling (#4473, #4507, #4617, #4819, #4907, #4914, #4941, #4942, #4951, #4973, #5048, #5077, #5144, #5270)
* XGBoost now integrates seamlessly with [Dask](https://dask.org/), a lightweight distributed framework for data processing. Together with the first-class support for cuDF data frames (see below), it is now easier than ever to create an end-to-end data pipeline running on one or more NVIDIA GPUs.
* Multi-GPU training with Dask is now up to 20% faster than the previous release (#4914, #4951).
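A hedged sketch of the new interface (assumes `xgboost>=1.0` with the `xgboost.dask` module plus `dask[distributed]` installed; data and parameters are placeholders):

```python
import dask.array as da
import xgboost as xgb
from dask.distributed import Client, LocalCluster

if __name__ == "__main__":
    with Client(LocalCluster(n_workers=2)) as client:
        X = da.random.random((100_000, 20), chunks=(10_000, 20))
        y = da.random.random(100_000, chunks=10_000)
        dtrain = xgb.dask.DaskDMatrix(client, X, y)
        # train() returns a dict with the trained booster and eval history
        output = xgb.dask.train(client, {"tree_method": "hist"}, dtrain,
                                num_boost_round=10)
        booster, history = output["booster"], output["history"]
```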
### First-class support for cuDF data frames and cuPy arrays (#4737, #4745, #4794, #4850, #4891, #4902, #4918, #4927, #4928, #5053, #5189, #5194, #5206, #5219, #5225)
* [cuDF](https://github.com/rapidsai/cudf) is a data frame library for loading and processing tabular data on NVIDIA GPUs. It provides a Pandas-like API.
* [cuPy](https://github.com/cupy/cupy) implements a NumPy-compatible multi-dimensional array on NVIDIA GPUs.
* Now users can keep the data on the GPU memory throughout the end-to-end data pipeline, obviating the need for copying data between the main memory and GPU memory.
* XGBoost can accept any data structure that exposes the `__array_interface__` signature, opening the way to support other columnar formats that are compatible with Apache Arrow.
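For illustration, a minimal sketch of keeping data on the GPU end-to-end (assumes a CUDA-enabled XGBoost build and the `cudf` package; column names and values are made up):

```python
import cudf
import xgboost as xgb

# cuDF frame -> DMatrix -> gpu_hist, without a round trip to host memory.
df = cudf.DataFrame({"f0": [1.0, 2.0, 3.0, 4.0], "f1": [0.5, 0.1, 0.9, 0.3]})
label = cudf.Series([0, 1, 0, 1])
dtrain = xgb.DMatrix(df, label=label)
booster = xgb.train({"tree_method": "gpu_hist",
                     "objective": "binary:logistic"}, dtrain,
                    num_boost_round=5)
```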
### [Feature interaction constraint](https://xgboost.readthedocs.io/en/release_1.0.0/tutorials/feature_interaction_constraint.html) is now available with `approx` and `gpu_hist` algorithms (#4534, #4587, #4596, #5034).
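A sketch of passing a constraint (the nested-list string format follows the feature interaction constraint tutorial; the data here is random placeholder input):

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(200, 5), np.random.rand(200)
# Features 0-1 may interact with each other, and 2-4 with each other,
# but no split path may mix features across the two groups.
params = {"tree_method": "gpu_hist",
          "interaction_constraints": "[[0, 1], [2, 3, 4]]"}
booster = xgb.train(params, xgb.DMatrix(X, label=y), num_boost_round=10)
```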
### Learning to rank is now GPU accelerated (#4873, #5004, #5129)
* Supported ranking objectives: NDCG, MAP, pairwise.
* [Up to 2x improved training performance on GPUs](https://devblogs.nvidia.com/learning-to-rank-with-xgboost-and-gpu/).
### Enable `gamma` parameter for GPU training (#4874, #4953)
* The `gamma` parameter specifies the minimum loss reduction required to add a new split in a tree. A larger value for `gamma` has the effect of pre-pruning the tree, by making it harder to add splits.
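A toy sketch of the effect (random placeholder data; the `gamma` value is arbitrary):

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(1000, 10), np.random.rand(1000)
dtrain = xgb.DMatrix(X, label=y)
# gamma = 0 allows any loss-reducing split; larger values pre-prune the tree
# by demanding at least that much loss reduction per split. Now honored by
# GPU training as well.
booster = xgb.train({"tree_method": "gpu_hist", "gamma": 1.0,
                     "max_depth": 6}, dtrain, num_boost_round=10)
```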
### External memory for GPU training (#4486, #4526, #4747, #4833, #4879, #5014)
* It is now possible to use NVIDIA GPUs even when the size of training data exceeds the available GPU memory. Note that the external memory support for GPU is still experimental. #5093 will further improve performance and will become part of the upcoming release 1.1.0.
* RFC for enabling external memory with GPU algorithms: #4357
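For context, external memory is enabled through the cache-file suffix on the data URI; a sketch (assumes `train.libsvm` exists on disk in LIBSVM format):

```python
import xgboost as xgb

# The '#dtrain.cache' suffix asks XGBoost to page the data through an
# on-disk cache instead of loading everything into (GPU) memory at once.
dtrain = xgb.DMatrix("train.libsvm#dtrain.cache")
booster = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=10)
```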
### Improve Scikit-Learn interface (#4558, #4842, #4929, #5049, #5151, #5130, #5227)
* Many users of XGBoost enjoy the convenience and breadth of Scikit-Learn ecosystem. In this release, we revise the Scikit-Learn API of XGBoost (`XGBRegressor`, `XGBClassifier`, and `XGBRanker`) to achieve feature parity with the traditional XGBoost interface (`xgboost.train()`).
* Insert check to validate data shapes.
* Produce an error message if `eval_set` is not a tuple. An error message is better than silently crashing.
* Allow using `numpy.RandomState` object.
* Add `n_jobs` as an alias of `nthread`.
* Roadmap: #5152
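A hedged sketch of the revised interface described above (random placeholder data):

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.RandomState(42)          # numpy RandomState is now accepted
X, y = rng.rand(500, 8), rng.rand(500)
model = XGBRegressor(n_estimators=50, n_jobs=4,   # n_jobs aliases nthread
                     random_state=rng)
# eval_set must be a list of (X, y) tuples; anything else now raises a
# meaningful error instead of crashing.
model.fit(X, y, eval_set=[(X, y)], verbose=False)
preds = model.predict(X[:5])
```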
### XGBoost4J-Spark: Redesigning checkpointing mechanism
* RFC is available at #4786
* Clean up checkpoint file after a successful training job (#4754): The current implementation in XGBoost4J-Spark does not clean up the checkpoint file after a successful training job. If the user runs another job with the same checkpointing directory, she will get a wrong model because the second job will re-use the checkpoint file left over from the first job. To prevent this scenario, we propose to always clean up the checkpoint file after every successful training job.
* Avoid Multiple Jobs for Checkpointing (#5082): The current method for checkpointing is to collect the booster produced at the last iteration of each checkpoint interval to the Driver and persist it in HDFS. The major issue with this approach is that it needs to re-perform the data preparation for training if the user did not choose to cache the training dataset. To avoid re-performing data prep, we build external-memory checkpointing in the XGBoost4J layer as well.
* Enable deterministic repartitioning when checkpoint is enabled (#4807): The distributed algorithm for gradient boosting assumes a fixed partition of the training data between multiple iterations. In previous versions, there was no guarantee that the data partition would stay the same, especially when a worker goes down and some data had to be recovered from the previous checkpoint. In this release, we make the data partition deterministic by using the hash value of each data row in computing the partition.
### XGBoost4J-Spark: handle errors thrown by the native code (#4560)
* All core logic of XGBoost is written in C++, so XGBoost4J-Spark internally uses the C++ code via Java Native Interface (JNI). #4560 adds proper error handling for any errors or exceptions arising from the C++ code, so that the XGBoost Spark application can be torn down in an orderly fashion.
### XGBoost4J-Spark: Refine method to count the number of alive cores (#4858)
* The `SparkParallelismTracker` class ensures that sufficient number of executor cores are alive. To that end, it is important to query the number of alive cores reliably.
### XGBoost4J: Add `BigDenseMatrix` to store more than `Integer.MAX_VALUE` elements (#4383)
### Robust model serialization with JSON (#4632, #4708, #4739, #4868, #4936, #4945, #4974, #5086, #5087, #5089, #5091, #5094, #5110, #5111, #5112, #5120, #5137, #5218, #5222, #5236, #5245, #5248, #5281)
* In this release, we introduce experimental support for using [JSON](https://www.json.org/json-en.html) to serialize (save/load) XGBoost models and related hyperparameters for training. We would like to eventually replace the old binary format with JSON, since it is an open format and parsers are available in many programming languages and platforms. See [the documentation for model I/O using JSON](https://xgboost.readthedocs.io/en/release_1.0.0/tutorials/saving_model.html). #3980 explains why JSON was chosen over other alternatives.
* To maximize interoperability and compatibility of the serialized models, we now split serialization into two parts (#4855):
1. Model, e.g. decision trees and strictly related metadata like `num_features`.
2. Internal configuration, consisting of training parameters and other configurable parameters. For example, `max_delta_step`, `tree_method`, `objective`, `predictor`, `gpu_id`.
Previously, users often ran into issues where the model file produced by one machine could not be loaded on another machine. For example, models trained using a machine with an NVIDIA GPU could not run on another machine without a GPU (#5291, #5234). The reason is that the old binary format saved some internal configuration that was not universally applicable to all machines, e.g. `predictor='gpu_predictor'`.
Now, the model saving function (`Booster.save_model()` in Python) will save only the model, without internal configuration. This guarantees that your model file can be used anywhere. Internal configuration will be serialized only in limited circumstances such as:
* Multiple nodes in a distributed system exchange model details over the network.
* Model checkpointing, to recover from possible crashes.
This work proved to be useful for parameter validation as well (see below).
* Starting with 1.0.0 release, we will use semantic versioning to indicate whether the model produced by one version of XGBoost would be compatible with another version of XGBoost. Any change in the major version indicates a breaking change in the serialization format.
* We now provide a robust method to save and load scikit-learn related attributes (#5245). Previously, we used Python pickle to save Python attributes related to `XGBClassifier`, `XGBRegressor`, and `XGBRanker` objects. The attributes are necessary to properly interact with scikit-learn. See #4639 for more details. The use of pickling hampered interoperability, as a pickle from one machine may not necessarily work on another machine. Starting with this release, we use an alternative method to serialize the scikit-learn related attributes. The use of Python pickle is now discouraged (#5236, #5281).
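A minimal sketch of the experimental JSON round trip (assuming, as in the tutorial, that a `.json` filename suffix selects the JSON format):

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(100, 4), np.random.randint(2, size=100)
booster = xgb.train({"objective": "binary:logistic"},
                    xgb.DMatrix(X, label=y), num_boost_round=10)
booster.save_model("model.json")              # model only, no internal config
bst2 = xgb.Booster(model_file="model.json")   # portable across machines
```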
### Parameter validation: detection of unused or incorrect parameters (#4553, #4577, #4738, #4801, #4961, #5101, #5157, #5167, #5256)
* A mis-spelled training parameter is a common user mistake. In previous versions of XGBoost, mis-spelled parameters were silently ignored. Starting with the 1.0.0 release, XGBoost will produce a warning message if there are any unused training parameters. Currently, parameter validation is available to R users and Python XGBoost API users. We are working to extend its support to scikit-learn users.
* Configuration steps now have well-defined semantics (#4542, #4738), so we know exactly where and how the internal configurable parameters are changed.
* The user can now use the `save_config()` function to inspect all (used) training parameters. This is helpful for debugging model performance.
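A sketch of inspecting parameters with `save_config()` (the exact JSON layout is an implementation detail and may change):

```python
import json
import numpy as np
import xgboost as xgb

X, y = np.random.rand(100, 4), np.random.rand(100)
booster = xgb.train({"max_depth": 3}, xgb.DMatrix(X, label=y))
config = json.loads(booster.save_config())   # all used training parameters
print(json.dumps(config, indent=2)[:200])    # peek at the configuration
```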
### Allow individual workers to recover from faults (#4808, #4966)
* Status quo: if a worker fails, all workers are shut down and restarted, and learning resumes from the last checkpoint. This involves requesting resources from the scheduler (e.g. Spark) and shuffling all the data again from scratch. Both of these operations can be quite costly and block training for extended periods of time, especially if the training data is big and the number of worker nodes is in the hundreds.
* The proposed solution is to recover only the single node that failed, instead of shutting down all workers. The rest of the cluster waits until the failed worker is bootstrapped and has caught up with the rest.
* See roadmap at #4753. Note that this is work in progress. In particular, the feature is not yet available from XGBoost4J-Spark.
### Accurate prediction for DART models
* Use DART tree weights when computing SHAPs (#5050)
* Don't drop trees during DART prediction by default (#5115)
* Fix DART prediction in R (#5204)
### Make external memory more robust
* Fix issues with training with external memory on cpu (#4487)
* Fix crash with approx tree method on cpu (#4510)
* Fix external memory race in `exact` (#4980). Note: `dmlc::ThreadedIter` is not actually thread-safe. We would like to re-design it in the long term.
### Major refactoring of the `DMatrix` class (#4686, #4744, #4748, #5044, #5092, #5108, #5188, #5198)
* Goal 1: improve performance and reduce memory consumption. Right now, if the user trains a model with a NumPy array as training data, the array gets copied 2-3 times before training begins. We'd like to reduce duplication of the data matrix.
* Goal 2: Expose a common interface to external data, unify the way DMatrix objects are constructed and simplify the process of adding new external data sources. This work is essential for ingesting cuPy arrays.
* Goal 3: Handle missing values consistently.
* RFC: #4354, Roadmap: #5143
* This work is also relevant to external memory support on GPUs.
### Breaking: XGBoost Python package now requires Python 3.5 or newer (#5021, #5274)
* Python 3.4 reached its end-of-life on March 16, 2019, so we now require Python 3.5 or newer.
### Breaking: GPU algorithm now requires CUDA 9.0 and higher (#4527, #4580)
### Breaking: `n_gpus` parameter removed; multi-GPU training now requires a distributed framework (#4579, #4749, #4773, #4810, #4867, #4908)
* #4531 proposed removing support for single-process multi-GPU training. Contributors would focus on multi-GPU support through distributed frameworks such as Dask and Spark, where the framework would be expected to assign a worker process for each GPU independently. By delegating GPU management and data movement to the distributed framework, we can greatly simplify the core XGBoost codebase, make multi-GPU training more robust, and reduce burden for future development.
### Breaking: Some deprecated features have been removed
* ``gpu_exact`` training method (#4527, #4742, #4777). Use ``gpu_hist`` instead.
* ``learning_rates`` parameter in Python (#5155). Use the callback API instead.
* ``num_roots`` (#5059, #5165), since the current training code always uses a single root node.
* GPU-specific objectives (#4690), such as `gpu:reg:linear`. Use objectives without `gpu:` prefix; GPU will be used automatically if your machine has one.
### Breaking: the C API function `XGBoosterPredict()` now asks for an extra parameter `training`.
### Breaking: We now use CMake exclusively to build XGBoost. `Makefile` is being sunset.
* Exception: the R package uses Autotools, as the CRAN ecosystem has not yet adopted CMake widely.
### Performance improvements
* Smarter choice of histogram construction for distributed `gpu_hist` (#4519)
* Optimizations for quantization on device (#4572)
* Introduce caching memory allocator to avoid latency associated with GPU memory allocation (#4554, #4615)
* Optimize the initialization stage of the CPU `hist` algorithm for sparse datasets (#4625)
* Prevent unnecessary data copies from GPU memory to the host (#4795)
* Improve operation efficiency for single prediction (#5016)
* Group builder modified for incremental building, to speed up building large `DMatrix` (#5098)
### Bug-fixes
* Eliminate `FutureWarning: Series.base is deprecated` (#4337)
* Ensure pandas DataFrame column names are treated as strings in type error message (#4481)
* [jvm-packages] Add back `reg:linear` for scala, as it is only deprecated and not meant to be removed yet (#4490)
* Fix library loading for Cygwin users (#4499)
* Fix prediction from loaded pickle (#4516)
* Enforce exclusion between `pred_contribs=True` and `pred_interactions=True` (#4522)
* Do not return dangling reference to local `std::string` (#4543)
* Set the appropriate device before freeing device memory (#4566)
* Mark `SparsePageDmatrix` destructor default. (#4568)
* Choose the appropriate tree method only when the tree method is 'auto' (#4571)
* Fix `benchmark_tree.py` (#4593)
* [jvm-packages] Fix silly bug in feature scoring (#4604)
* Fix GPU predictor when the test data matrix has different number of features than the training data matrix used to train the model (#4613)
* Fix external memory for get column batches. (#4622)
* [R] Use built-in label when xgb.DMatrix is given to xgb.cv() (#4631)
* Fix early stopping in the Python package (#4638)
* Fix AUC error in distributed mode caused by imbalanced dataset (#4645, #4798)
* [jvm-packages] Expose `setMissing` method in `XGBoostClassificationModel` / `XGBoostRegressionModel` (#4643)
* Remove initializing stringstream reference. (#4788)
* [R] `xgb.get.handle` now checks all classes listed for `object` (#4800)
* Do not use `gpu_predictor` unless data comes from GPU (#4836)
* Fix data loading (#4862)
* Workaround `isnan` across different environments. (#4883)
* [jvm-packages] Handle Long-type parameter (#4885)
* Don't `set_params` at the end of `set_state` (#4947). Ensure that the model does not change after pickling and unpickling multiple times.
* C++ exceptions should not crash OpenMP loops (#4960)
* Fix `usegpu` flag in DART. (#4984)
* Run training with empty `DMatrix` (#4990, #5159)
* Ensure that no two processes can use the same GPU (#4990)
* Fix repeated split and 0 cover nodes (#5010)
* Reset histogram hit counter between multiple data batches (#5035)
* Fix `feature_name` created from an Int64Index dataframe. (#5081)
* Don't use 0 for "fresh leaf" (#5084)
* Throw error when user attempts to use multi-GPU training and XGBoost has not been compiled with NCCL (#5170)
* Fix metric name loading (#5122)
* Quick fix for memory leak in CPU `hist` algorithm (#5153)
* Fix wrapping GPU ID and prevent data copying (#5160)
* Fix signature of Span constructor (#5166)
* Lazy initialization of device vector, so that XGBoost compiled with CUDA can run on a machine without any GPU (#5173)
* Model loading should not change system locale (#5314)
* Distributed training jobs would sometimes hang; revert Rabit to fix this regression (dmlc/rabit#132, #5237)
### API changes
* Add support for cross-validation using query ID (#4474)
* Enable feature importance property for DART model (#4525)
* Add `rmsle` metric and `reg:squaredlogerror` objective (#4541)
* All objective and evaluation metrics are now exposed to JVM packages (#4560)
* `dump_model()` and `get_dump()` now support exporting in GraphViz language (#4602)
* Support metrics `ndcg-` and `map-` (#4635)
* [jvm-packages] Allow chaining prediction (transform) in XGBoost4J-Spark (#4667)
* [jvm-packages] Add option to bypass missing value check in the Spark layer (#4805). Only use this option if you know what you are doing.
* [jvm-packages] Add public group getter (#4838)
* `XGDMatrixSetGroup` C API is now deprecated (#4864). Use `XGDMatrixSetUIntInfo` instead.
* [R] Added new `train_folds` parameter to `xgb.cv()` (#5114)
* Ingest meta information from Pandas DataFrame, such as data weights (#5216)
### Maintenance: Refactor code for legibility and maintainability
* De-duplicate GPU parameters (#4454)
* Simplify INI-style config reader using C++11 STL (#4478, #4521)
* Refactor histogram building code for `gpu_hist` (#4528)
* Overload device memory allocator, to enable instrumentation for compiling memory usage statistics (#4532)
* Refactor out row partitioning logic from `gpu_hist` (#4554)
* Remove an unused variable (#4588)
* Implement tree model dump with code generator, to de-duplicate code for generating dumps in 3 different formats (#4602)
* Remove `RowSet` class which is no longer being used (#4697)
* Remove some unused functions as reported by cppcheck (#4743)
* Mimic CUDA assert output in Span check (#4762)
* [jvm-packages] Refactor `XGBoost.scala` to put all params processing in one place (#4815)
* Add some comments for GPU row partitioner (#4832)
* Span: use `size_t` for `index_type`, add `front` and `back`. (#4935)
* Remove dead code in `exact` algorithm (#5034, #5105)
* Unify integer types used for row and column indices (#5034)
* Extract feature interaction constraint from `SplitEvaluator` class. (#5034)
* [Breaking] De-duplicate parameters and docstrings in the constructors of Scikit-Learn models (#5130)
* Remove benchmark code from GPU tests (#5141)
* Clean up Python 2 compatibility code. (#5161)
* Extensible binary serialization format for `DMatrix::MetaInfo` (#5187). This will be useful for implementing censored labels for survival analysis applications.
* Cleanup clang-tidy warnings. (#5247)
### Maintenance: testing, continuous integration, build system
* Use `yaml.safe_load` instead of `yaml.load`. (#4537)
* Ensure GCC is at least 5.x (#4538)
* Remove all mention of `reg:linear` from tests (#4544)
* [jvm-packages] Upgrade to Scala 2.12 (#4574)
* [jvm-packages] Update kryo dependency to 2.22 (#4575)
* [CI] Specify account ID when logging into ECR Docker registry (#4584)
* Use Sphinx 2.1+ to compile documentation (#4609)
* Make Pandas optional for running Python unit tests (#4620)
* Fix spark tests on machines with many cores (#4634)
* [jvm-packages] Update local dev build process (#4640)
* Add optional dependencies to setup.py (#4655)
* [jvm-packages] Fix maven warnings (#4664)
* Remove extraneous files from the R package, to comply with CRAN policy (#4699)
* Remove VC-2013 support, since it is not C++11 compliant (#4701)
* [CI] Fix broken installation of Pandas (#4704, #4722)
* [jvm-packages] Clean up temporary files after running tests (#4706)
* Specify version macro in CMake. (#4730)
* Include dmlc-tracker into XGBoost Python package (#4731)
* [CI] Use long key ID for Ubuntu repository fingerprints. (#4783)
* Remove plugin, cuda related code in automake & autoconf files (#4789)
* Skip related tests when scikit-learn is not installed. (#4791)
* Ignore vscode and clion files (#4866)
* Use bundled Google Test by default (#4900)
* [CI] Raise timeout threshold in Jenkins (#4938)
* Copy CMake parameter from dmlc-core. (#4948)
* Set correct file permission. (#4964)
* [CI] Update lint configuration to support latest pylint convention (#4971)
* [CI] Upload nightly builds to S3 (#4976, #4979)
* Add asan.so.5 to cmake script. (#4999)
* [CI] Fix Travis tests. (#5062)
* [CI] Locate vcomp140.dll from System32 directory (#5078)
* Implement training observer to dump internal states of objects (#5088). This will be useful for debugging.
* Fix visual studio output library directories (#5119)
* [jvm-packages] Comply with scala style convention + fix broken unit test (#5134)
* [CI] Repair download URL for Maven 3.6.1 (#5139)
* Don't use modernize-use-trailing-return-type in clang-tidy. (#5169)
* Explicitly use UTF-8 codepage when using MSVC (#5197)
* Add CMake option to run Undefined Behavior Sanitizer (UBSan) (#5211)
* Make some GPU tests deterministic (#5229)
* [R] Robust endian detection in CRAN xgboost build (#5232)
* Support FreeBSD (#5233)
* Make `pip install xgboost*.tar.gz` work by fixing build-python.sh (#5241)
* Fix compilation error due to 64-bit integer narrowing to `size_t` (#5250)
* Remove use of `std::cout` from R package, to comply with CRAN policy (#5261)
* Update DMLC-Core submodule (#4674, #4688, #4726, #4924)
* Update Rabit submodule (#4560, #4667, #4718, #4808, #4966, #5237)
### Usability Improvements, Documentation
* Add Random Forest API to Python API doc (#4500)
* Fix Python demo and doc. (#4545)
* Remove doc about not supporting cuda 10.1 (#4578)
* Address some sphinx warnings and errors, add doc for building doc. (#4589)
* Add instruction to run formatting checks locally (#4591)
* Fix docstring for `XGBModel.predict()` (#4592)
* Doc and demo for customized metric and objective (#4598, #4608)
* Add to documentation how to run tests locally (#4610)
* Empty evaluation list in early stopping should produce meaningful error message (#4633)
* Fixed year to 2019 in conf.py, helpers.h and LICENSE (#4661)
* Minor updates to links and grammar (#4673)
* Remove `silent` in doc (#4689)
* Remove old Python trouble shooting doc (#4729)
* Add `os.PathLike` support for file paths to DMatrix and Booster Python classes (#4757)
* Update XGBoost4J-Spark doc (#4804)
* Regular formatting for evaluation metrics (#4803)
* [jvm-packages] Refine documentation for handling missing values in XGBoost4J-Spark (#4805)
* Monitor for distributed environment (#4829). This is useful for identifying performance bottlenecks.
* Add check for length of weights and produce a good error message (#4872)
* Fix DMatrix doc (#4884)
* Export C++ headers in CMake installation (#4897)
* Update license year in README.md to 2019 (#4940)
* Fix incorrectly displayed Note in the doc (#4943)
* Follow PEP 257 Docstring Conventions (#4959)
* Document minimum version required for Google Test (#5001)
* Add better error message for invalid feature names (#5024)
* Some guidelines on device memory usage (#5038)
* [doc] Some notes for external memory. (#5065)
* Update document for `tree_method` (#5106)
* Update demo for ranking. (#5154)
* Add new lines for Spark XGBoost missing values section (#5180)
* Fix simple typo: utilty -> utility (#5182)
* Update R doc by roxygen2 (#5201)
* [R] Direct user to use `set.seed()` instead of setting `seed` parameter (#5125)
* Add Optuna badge to `README.md` (#5208)
* Fix compilation error in `c-api-demo.c` (#5215)
### Acknowledgement
**Contributors**: Nan Zhu (@CodingCat), Crissman Loomis (@Crissman), Cyprien Ricque (@Cyprien-Ricque), Evan Kepner (@EvanKepner), K.O. (@Hi-king), KaiJin Ji (@KerryJi), Peter Badida (@KeyWeeUsr), Kodi Arfer (@Kodiologist), Rory Mitchell (@RAMitchell), Egor Smirnov (@SmirnovEgorRu), Jacob Kim (@TheJacobKim), Vibhu Jawa (@VibhuJawa), Marcos (@astrowonk), Andy Adinets (@canonizer), Chen Qin (@chenqin), Christopher Cowden (@cowden), @cpfarrell, @david-cortes, Liangcai Li (@firestarman), @fuhaoda, Philip Hyunsu Cho (@hcho3), @here-nagini, Tong He (@hetong007), Michal Kurka (@michalkurka), Honza Sterba (@honzasterba), @iblumin, @koertkuipers, mattn (@mattn), Mingjie Tang (@merlintang), OrdoAbChao (@mglowacki100), Matthew Jones (@mt-jones), mitama (@nigimitama), Nathan Moore (@nmoorenz), Daniel Stahl (@phillyfan1138), Michaël Benesty (@pommedeterresautee), Rong Ou (@rongou), Sebastian (@sfahnens), Xu Xiao (@sperlingxx), @sriramch, Sean Owen (@srowen), Stephanie Yang (@stpyang), Yuan Tang (@terrytangyuan), Mathew Wicks (@thesuperzapper), Tim Gates (@timgates42), TinkleG (@tinkle1129), Oleksandr Pryimak (@trams), Jiaming Yuan (@trivialfis), Matvey Turkov (@turk0v), Bobby Wang (@wbo4958), yage (@yage99), @yellowdolphin
**Reviewers**: Nan Zhu (@CodingCat), Crissman Loomis (@Crissman), Cyprien Ricque (@Cyprien-Ricque), Evan Kepner (@EvanKepner), John Zedlewski (@JohnZed), KOLANICH (@KOLANICH), KaiJin Ji (@KerryJi), Kodi Arfer (@Kodiologist), Rory Mitchell (@RAMitchell), Egor Smirnov (@SmirnovEgorRu), Nikita Titov (@StrikerRUS), Jacob Kim (@TheJacobKim), Vibhu Jawa (@VibhuJawa), Andrew Kane (@ankane), Arno Candel (@arnocandel), Marcos (@astrowonk), Bryan Woods (@bryan-woods), Andy Adinets (@canonizer), Chen Qin (@chenqin), Thomas Franke (@coding-komek), Peter (@codingforfun), @cpfarrell, Joshua Patterson (@datametrician), @fuhaoda, Philip Hyunsu Cho (@hcho3), Tong He (@hetong007), Honza Sterba (@honzasterba), @iblumin, @jakirkham, Vadim Khotilovich (@khotilov), Keith Kraus (@kkraus14), @koertkuipers, @melonki, Mingjie Tang (@merlintang), OrdoAbChao (@mglowacki100), Daniel Mahler (@mhlr), Matthew Rocklin (@mrocklin), Matthew Jones (@mt-jones), Michaël Benesty (@pommedeterresautee), PSEUDOTENSOR / Jonathan McKinney (@pseudotensor), Rong Ou (@rongou), Vladimir (@sh1ng), Scott Lundberg (@slundberg), Xu Xiao (@sperlingxx), @sriramch, Pasha Stetsenko (@st-pasha), Stephanie Yang (@stpyang), Yuan Tang (@terrytangyuan), Mathew Wicks (@thesuperzapper), Theodore Vasiloudis (@thvasilo), TinkleG (@tinkle1129), Oleksandr Pryimak (@trams), Jiaming Yuan (@trivialfis), Bobby Wang (@wbo4958), yage (@yage99), @yellowdolphin, Yin Lou (@yinlou)
## v0.90 (2019.05.18)
### XGBoost Python package drops Python 2.x (#4379, #4381)


@@ -29,6 +29,10 @@ set_target_properties(
CXX_STANDARD_REQUIRED ON
POSITION_INDEPENDENT_CODE ON)
set(XGBOOST_DEFINITIONS ${R_DEFINITIONS} PARENT_SCOPE)
set(XGBOOST_DEFINITIONS "${XGBOOST_DEFINITIONS};${R_DEFINITIONS}" PARENT_SCOPE)
set(XGBOOST_OBJ_SOURCES $<TARGET_OBJECTS:xgboost-r> PARENT_SCOPE)
set(LINKED_LIBRARIES_PRIVATE ${LINKED_LIBRARIES_PRIVATE} ${LIBR_CORE_LIBRARY} PARENT_SCOPE)
if (USE_OPENMP)
target_link_libraries(xgboost-r PRIVATE OpenMP::OpenMP_CXX)
endif ()


@@ -1,8 +1,8 @@
Package: xgboost
Type: Package
Title: Extreme Gradient Boosting
Version: 0.90.0.1
Date: 2019-05-18
Version: 1.1.1.1
Date: 2020-02-21
Authors@R: c(
person("Tianqi", "Chen", role = c("aut"),
email = "tianqi.tchen@gmail.com"),
@@ -63,5 +63,5 @@ Imports:
data.table (>= 1.9.6),
magrittr (>= 1.5),
stringi (>= 0.5.2)
RoxygenNote: 6.1.0
RoxygenNote: 7.1.0
SystemRequirements: GNU make, C++11


@@ -14,6 +14,7 @@ S3method(setinfo,xgb.DMatrix)
S3method(slice,xgb.DMatrix)
export("xgb.attr<-")
export("xgb.attributes<-")
export("xgb.config<-")
export("xgb.parameters<-")
export(cb.cv.predict)
export(cb.early.stop)
@@ -30,6 +31,7 @@ export(xgb.DMatrix)
export(xgb.DMatrix.save)
export(xgb.attr)
export(xgb.attributes)
export(xgb.config)
export(xgb.create.features)
export(xgb.cv)
export(xgb.dump)
@@ -38,6 +40,7 @@ export(xgb.ggplot.deepness)
export(xgb.ggplot.importance)
export(xgb.importance)
export(xgb.load)
export(xgb.load.raw)
export(xgb.model.dt.tree)
export(xgb.plot.deepness)
export(xgb.plot.importance)
@@ -46,7 +49,9 @@ export(xgb.plot.shap)
export(xgb.plot.tree)
export(xgb.save)
export(xgb.save.raw)
export(xgb.serialize)
export(xgb.train)
export(xgb.unserialize)
export(xgboost)
import(methods)
importClassesFrom(Matrix,dgCMatrix)


@@ -28,7 +28,7 @@ NVL <- function(x, val) {
# Merges booster params with whatever is provided in ...
# plus runs some checks
check.booster.params <- function(params, ...) {
if (typeof(params) != "list")
if (!identical(class(params), "list"))
stop("params must be a list")
# in R interface, allow for '.' instead of '_' in parameter names
@@ -78,7 +78,7 @@ check.booster.params <- function(params, ...) {
if (!is.null(params[['interaction_constraints']]) &&
typeof(params[['interaction_constraints']]) != "character"){
# check input class
if (class(params[['interaction_constraints']]) != 'list') stop('interaction_constraints should be class list')
if (!identical(class(params[['interaction_constraints']]),'list')) stop('interaction_constraints should be class list')
if (!all(unique(sapply(params[['interaction_constraints']], class)) %in% c('numeric','integer'))) {
stop('interaction_constraints should be a list of numeric/integer vectors')
}
@@ -145,7 +145,7 @@ xgb.iter.update <- function(booster_handle, dtrain, iter, obj = NULL) {
if (is.null(obj)) {
.Call(XGBoosterUpdateOneIter_R, booster_handle, as.integer(iter), dtrain)
} else {
pred <- predict(booster_handle, dtrain)
pred <- predict(booster_handle, dtrain, outputmargin = TRUE, training = TRUE)
gpair <- obj(pred, dtrain)
.Call(XGBoosterBoostOneIter_R, booster_handle, dtrain, gpair$grad, gpair$hess)
}


@@ -5,20 +5,34 @@ xgb.Booster.handle <- function(params = list(), cachelist = list(), modelfile =
!all(vapply(cachelist, inherits, logical(1), what = 'xgb.DMatrix'))) {
stop("cachelist must be a list of xgb.DMatrix objects")
}
handle <- .Call(XGBoosterCreate_R, cachelist)
## Load an existing model; dispatch on an on-disk model file or an in-memory buffer
if (!is.null(modelfile)) {
if (typeof(modelfile) == "character") {
## A filename
handle <- .Call(XGBoosterCreate_R, cachelist)
.Call(XGBoosterLoadModel_R, handle, modelfile[1])
class(handle) <- "xgb.Booster.handle"
if (length(params) > 0) {
xgb.parameters(handle) <- params
}
return(handle)
} else if (typeof(modelfile) == "raw") {
.Call(XGBoosterLoadModelFromRaw_R, handle, modelfile)
## A memory buffer
bst <- xgb.unserialize(modelfile)
xgb.parameters(bst) <- params
return (bst)
} else if (inherits(modelfile, "xgb.Booster")) {
## A booster object
bst <- xgb.Booster.complete(modelfile, saveraw = TRUE)
.Call(XGBoosterLoadModelFromRaw_R, handle, bst$raw)
bst <- xgb.unserialize(bst$raw)
xgb.parameters(bst) <- params
return (bst)
} else {
stop("modelfile must be either character filename, or raw booster dump, or xgb.Booster object")
}
}
## Create new model
handle <- .Call(XGBoosterCreate_R, cachelist)
class(handle) <- "xgb.Booster.handle"
if (length(params) > 0) {
xgb.parameters(handle) <- params
@@ -51,11 +65,13 @@ is.null.handle <- function(handle) {
# Return a verified to be valid handle out of either xgb.Booster.handle or xgb.Booster
# internal utility function
xgb.get.handle <- function(object) {
handle <- switch(class(object)[1],
xgb.Booster = object$handle,
xgb.Booster.handle = object,
if (inherits(object, "xgb.Booster")) {
handle <- object$handle
} else if (inherits(object, "xgb.Booster.handle")) {
handle <- object
} else {
stop("argument must be of either xgb.Booster or xgb.Booster.handle class")
)
}
if (is.null.handle(handle)) {
stop("invalid xgb.Booster.handle")
}
@@ -111,9 +127,29 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
if (is.null.handle(object$handle)) {
object$handle <- xgb.Booster.handle(modelfile = object$raw)
} else {
if (is.null(object$raw) && saveraw)
object$raw <- xgb.save.raw(object$handle)
if (is.null(object$raw) && saveraw) {
object$raw <- xgb.serialize(object$handle)
}
}
attrs <- xgb.attributes(object)
if (!is.null(attrs$best_ntreelimit)) {
object$best_ntreelimit <- as.integer(attrs$best_ntreelimit)
}
if (!is.null(attrs$best_iteration)) {
## Convert from 0 based back to 1 based.
object$best_iteration <- as.integer(attrs$best_iteration) + 1
}
if (!is.null(attrs$best_score)) {
object$best_score <- as.numeric(attrs$best_score)
}
if (!is.null(attrs$best_msg)) {
object$best_msg <- attrs$best_msg
}
if (!is.null(attrs$niter)) {
object$niter <- as.integer(attrs$niter)
}
return(object)
}
@@ -137,6 +173,8 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
#' @param reshape whether to reshape the vector of predictions to a matrix form when there are several
#' prediction outputs per case. This option has no effect when either of predleaf, predcontrib,
#' or predinteraction flags is TRUE.
#' @param training whether the prediction result is used for training. For the dart booster,
#' predicting during training will perform dropout.
#' @param ... Parameters passed to \code{predict.xgb.Booster}
#'
#' @details
@@ -286,7 +324,7 @@ xgb.Booster.complete <- function(object, saveraw = TRUE) {
#' @export
predict.xgb.Booster <- function(object, newdata, missing = NA, outputmargin = FALSE, ntreelimit = NULL,
predleaf = FALSE, predcontrib = FALSE, approxcontrib = FALSE, predinteraction = FALSE,
reshape = FALSE, ...) {
reshape = FALSE, training = FALSE, ...) {
object <- xgb.Booster.complete(object, saveraw = FALSE)
if (!inherits(newdata, "xgb.DMatrix"))
@@ -305,7 +343,8 @@ predict.xgb.Booster <- function(object, newdata, missing = NA, outputmargin = FA
option <- 0L + 1L * as.logical(outputmargin) + 2L * as.logical(predleaf) + 4L * as.logical(predcontrib) +
8L * as.logical(approxcontrib) + 16L * as.logical(predinteraction)
ret <- .Call(XGBoosterPredict_R, object$handle, newdata, option[1], as.integer(ntreelimit))
ret <- .Call(XGBoosterPredict_R, object$handle, newdata, option[1],
as.integer(ntreelimit), as.integer(training))
n_ret <- length(ret)
n_row <- nrow(newdata)
@@ -394,7 +433,7 @@ predict.xgb.Booster.handle <- function(object, ...) {
#' That would only matter if attributes need to be set many times.
#' Note, however, that when feeding a handle of an \code{xgb.Booster} object to the attribute setters,
#' the raw model cache of an \code{xgb.Booster} object would not be automatically updated,
#' and it would be user's responsibility to call \code{xgb.save.raw} to update it.
#' and it would be user's responsibility to call \code{xgb.serialize} to update it.
#'
#' The \code{xgb.attributes<-} setter either updates the existing or adds one or several attributes,
#' but it doesn't delete the other existing attributes.
@@ -453,7 +492,7 @@ xgb.attr <- function(object, name) {
}
.Call(XGBoosterSetAttr_R, handle, as.character(name[1]), value)
if (is(object, 'xgb.Booster') && !is.null(object$raw)) {
object$raw <- xgb.save.raw(object$handle)
object$raw <- xgb.serialize(object$handle)
}
object
}
@@ -493,11 +532,41 @@ xgb.attributes <- function(object) {
.Call(XGBoosterSetAttr_R, handle, names(a[i]), a[[i]])
}
if (is(object, 'xgb.Booster') && !is.null(object$raw)) {
object$raw <- xgb.save.raw(object$handle)
object$raw <- xgb.serialize(object$handle)
}
object
}
#' Accessors for model parameters as JSON string.
#'
#' @param object Object of class \code{xgb.Booster}
#' @param value A JSON string.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#'
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' config <- xgb.config(bst)
#'
#' @rdname xgb.config
#' @export
xgb.config <- function(object) {
handle <- xgb.get.handle(object)
.Call(XGBoosterSaveJsonConfig_R, handle);
}
#' @rdname xgb.config
#' @export
`xgb.config<-` <- function(object, value) {
handle <- xgb.get.handle(object)
.Call(XGBoosterLoadJsonConfig_R, handle, value)
object$raw <- NULL # force renew the raw buffer
object <- xgb.Booster.complete(object)
object
}
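A usage sketch (assuming bst is a trained xgb.Booster, e.g. from the example above): the accessor pair round-trips the internal JSON configuration, and loading a config back leaves the booster unchanged.

cfg <- xgb.config(bst)   # JSON string with the booster's internal parameters
xgb.config(bst) <- cfg   # raw cache is renewed, as the setter above shows
stopifnot(identical(cfg, xgb.config(bst)))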
#' Accessors for model parameters.
#'
#' Only the setter for xgboost parameters is currently implemented.
@@ -534,7 +603,7 @@ xgb.attributes <- function(object) {
.Call(XGBoosterSetParam_R, handle, names(p[i]), p[[i]])
}
if (is(object, 'xgb.Booster') && !is.null(object$raw)) {
object$raw <- xgb.save.raw(object$handle)
object$raw <- xgb.serialize(object$handle)
}
object
}

View File

@@ -188,9 +188,10 @@ getinfo <- function(object, ...) UseMethod("getinfo")
getinfo.xgb.DMatrix <- function(object, name, ...) {
if (typeof(name) != "character" ||
length(name) != 1 ||
!name %in% c('label', 'weight', 'base_margin', 'nrow')) {
!name %in% c('label', 'weight', 'base_margin', 'nrow',
'label_lower_bound', 'label_upper_bound')) {
stop("getinfo: name must be one of the following\n",
" 'label', 'weight', 'base_margin', 'nrow'")
" 'label', 'weight', 'base_margin', 'nrow', 'label_lower_bound', 'label_upper_bound'")
}
if (name != "nrow"){
ret <- .Call(XGDMatrixGetInfo_R, object, name)
@@ -243,6 +244,18 @@ setinfo.xgb.DMatrix <- function(object, name, info, ...) {
.Call(XGDMatrixSetInfo_R, object, name, as.numeric(info))
return(TRUE)
}
if (name == "label_lower_bound") {
if (length(info) != nrow(object))
stop("The length of lower-bound labels must equal to the number of rows in the input data")
.Call(XGDMatrixSetInfo_R, object, name, as.numeric(info))
return(TRUE)
}
if (name == "label_upper_bound") {
if (length(info) != nrow(object))
stop("The length of upper-bound labels must equal to the number of rows in the input data")
.Call(XGDMatrixSetInfo_R, object, name, as.numeric(info))
return(TRUE)
}
if (name == "weight") {
if (length(info) != nrow(object))
stop("The length of weights must equal to the number of rows in the input data")

View File

@@ -47,6 +47,8 @@
#' @param folds \code{list} provides a possibility to use a list of pre-defined CV folds
#' (each element must be a vector of test fold's indices). When folds are supplied,
#' the \code{nfold} and \code{stratified} parameters are ignored.
#' @param train_folds \code{list} list specifying which indices to use for training. If \code{NULL}
#' (the default) all indices not specified in \code{folds} will be used for training.
#' @param verbose \code{boolean}, print the statistics during the process
#' @param print_every_n Print each n-th iteration evaluation messages when \code{verbose>0}.
#' Default is 1 which means all messages are printed. This parameter is passed to the
@@ -99,7 +101,7 @@
#' (only available with early stopping).
#' \item \code{pred} CV prediction values available when \code{prediction} is set.
#' It is either vector or matrix (see \code{\link{cb.cv.predict}}).
#' \item \code{models} a liost of the CV folds' models. It is only available with the explicit
#' \item \code{models} a list of the CV folds' models. It is only available with the explicit
#' setting of the \code{cb.cv.predict(save_models = TRUE)} callback.
#' }
#'
@@ -114,7 +116,7 @@
#' @export
xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing = NA,
prediction = FALSE, showsd = TRUE, metrics=list(),
obj = NULL, feval = NULL, stratified = TRUE, folds = NULL,
obj = NULL, feval = NULL, stratified = TRUE, folds = NULL, train_folds = NULL,
verbose = TRUE, print_every_n=1L,
early_stopping_rounds = NULL, maximize = NULL, callbacks = list(), ...) {
@@ -133,8 +135,15 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
# Check the labels
if ( (inherits(data, 'xgb.DMatrix') && is.null(getinfo(data, 'label'))) ||
(!inherits(data, 'xgb.DMatrix') && is.null(label)))
(!inherits(data, 'xgb.DMatrix') && is.null(label))) {
stop("Labels must be provided for CV either through xgb.DMatrix, or through 'label=' when 'data' is matrix")
} else if (inherits(data, 'xgb.DMatrix')) {
if (!is.null(label))
warning("xgb.cv: label will be ignored, since data is of type xgb.DMatrix")
cv_label = getinfo(data, 'label')
} else {
cv_label = label
}
# CV folds
if(!is.null(folds)) {
@@ -144,7 +153,7 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
} else {
if (nfold <= 1)
stop("'nfold' must be > 1")
folds <- generate.cv.folds(nfold, nrow(data), stratified, label, params)
folds <- generate.cv.folds(nfold, nrow(data), stratified, cv_label, params)
}
# Potential TODO: sequential CV
@@ -179,10 +188,15 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
# create the booster-folds
# train_folds
dall <- xgb.get.DMatrix(data, label, missing)
bst_folds <- lapply(seq_along(folds), function(k) {
dtest <- slice(dall, folds[[k]])
dtrain <- slice(dall, unlist(folds[-k]))
# code originally contributed by @RolandASc on stackoverflow
if(is.null(train_folds))
dtrain <- slice(dall, unlist(folds[-k]))
else
dtrain <- slice(dall, train_folds[[k]])
handle <- xgb.Booster.handle(params, list(dtrain, dtest))
list(dtrain = dtrain, bst = handle, watchlist = list(train = dtrain, test=dtest), index = folds[[k]])
})
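A sketch of the new train_folds argument (indices are illustrative; dtrain stands for a labelled xgb.DMatrix): supplying it decouples each fold's training rows from the complement of its test fold.

folds       <- list(1:200, 201:400)      # test indices, one vector per fold
train_folds <- list(401:600, 601:800)    # matching training indices
cv <- xgb.cv(params = list(objective = 'binary:logistic'),
             data = dtrain, nrounds = 10, nfold = 2,  # nfold is ignored when folds is given
             folds = folds, train_folds = train_folds)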

View File

@@ -1,30 +1,30 @@
#' Load xgboost model from binary file
#'
#' Load xgboost model from the binary model file.
#'
#'
#' Load xgboost model from the binary model file.
#'
#' @param modelfile the name of the binary input file.
#'
#' @details
#'
#' @details
#' The input file is expected to contain a model saved in an xgboost-internal binary format
#' using either \code{\link{xgb.save}} or \code{\link{cb.save.model}} in R, or using some
#' appropriate methods from other xgboost interfaces. E.g., a model trained in Python and
#' using either \code{\link{xgb.save}} or \code{\link{cb.save.model}} in R, or using some
#' appropriate methods from other xgboost interfaces. E.g., a model trained in Python and
#' saved from there in xgboost format, could be loaded from R.
#'
#'
#' Note: a model saved as an R-object, has to be loaded using corresponding R-methods,
#' not \code{xgb.load}.
#'
#' @return
#'
#' @return
#' An object of \code{xgb.Booster} class.
#'
#' @seealso
#' \code{\link{xgb.save}}, \code{\link{xgb.Booster.complete}}.
#'
#'
#' @seealso
#' \code{\link{xgb.save}}, \code{\link{xgb.Booster.complete}}.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' xgb.save(bst, 'xgb.model')
#' bst <- xgb.load('xgb.model')

View File

@@ -0,0 +1,14 @@
#' Load serialised xgboost model from R's raw vector
#'
#' A raw memory buffer can be generated by calling \code{xgb.save.raw}.
#'
#' @param buffer the buffer returned by xgb.save.raw
#'
#' @export
xgb.load.raw <- function(buffer) {
cachelist <- list()
handle <- .Call(XGBoosterCreate_R, cachelist)
.Call(XGBoosterLoadModelFromRaw_R, handle, buffer)
class(handle) <- "xgb.Booster.handle"
return (handle)
}

View File

@@ -1,23 +1,23 @@
#' Save xgboost model to R's raw vector,
#' user can call xgb.load to load the model back from raw vector
#'
#' user can call xgb.load.raw to load the model back from raw vector
#'
#' Save xgboost model from xgboost or xgb.train
#'
#'
#' @param model the model object.
#'
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' raw <- xgb.save.raw(bst)
#' bst <- xgb.load(raw)
#' bst <- xgb.load.raw(raw)
#' pred <- predict(bst, test$data)
#'
#' @export
xgb.save.raw <- function(model) {
model <- xgb.get.handle(model)
.Call(XGBoosterModelToRaw_R, model)
handle <- xgb.get.handle(model)
.Call(XGBoosterModelToRaw_R, handle)
}

View File

@@ -0,0 +1,21 @@
#' Serialize the booster instance into R's raw vector. The serialization method differs
#' from \code{\link{xgb.save.raw}} in that the latter saves only the model but not the
#' parameters. This serialization format is not stable across different xgboost versions.
#'
#' @param booster the booster instance
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' raw <- xgb.serialize(bst)
#' bst <- xgb.unserialize(raw)
#'
#' @export
xgb.serialize <- function(booster) {
handle <- xgb.get.handle(booster)
.Call(XGBoosterSerializeToBuffer_R, handle)
}
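Beyond the in-memory round trip in the example, a common pattern (sketch; file name illustrative) is to persist the complete booster state, parameters included, across R sessions:

raw <- xgb.serialize(bst)
saveRDS(raw, 'booster.state.rds')
bst2 <- xgb.unserialize(readRDS('booster.state.rds'))
stopifnot(identical(xgb.serialize(bst2), raw))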

View File

@@ -267,7 +267,7 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
}
# evaluation printing callback
params <- c(params, list(silent = ifelse(verbose > 1, 0, 1)))
params <- c(params)
print_every_n <- max( as.integer(print_every_n), 1L)
if (!has.callbacks(callbacks, 'cb.print.evaluation') &&
verbose) {
@@ -291,8 +291,13 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
callbacks <- add.cb(callbacks, cb.early.stop(early_stopping_rounds,
maximize = maximize, verbose = verbose))
}
# Sort the callbacks into categories
cb <- categorize.callbacks(callbacks)
params['validate_parameters'] <- TRUE
if (!is.null(params[['seed']])) {
warning("xgb.train: `seed` is ignored in R package. Use `set.seed()` instead.")
}
# The tree updating process would need slightly different handling
is_update <- NVL(params[['process_type']], '.') == 'update'
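With validate_parameters now always enabled, misspelled or unknown parameters are reported instead of silently ignored. A sketch (dtrain assumed; the typo is deliberate):

params <- list(max_depth = 2, objective = 'binary:logistic',
               max_dept = 3)  # typo: flagged by the parameter validator
bst <- xgb.train(params, dtrain, nrounds = 2)
# 'seed' in params now only triggers a warning; call set.seed() instead.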

View File

@@ -0,0 +1,12 @@
#' Load the booster instance back from a buffer created by \code{\link{xgb.serialize}}
#'
#' @param buffer the buffer containing booster instance saved by \code{\link{xgb.serialize}}
#'
#' @export
xgb.unserialize <- function(buffer) {
cachelist <- list()
handle <- .Call(XGBoosterCreate_R, cachelist)
.Call(XGBoosterUnserializeFromBuffer_R, handle, buffer)
class(handle) <- "xgb.Booster.handle"
return (handle)
}

View File

@@ -5,8 +5,8 @@
#' @export
xgboost <- function(data = NULL, label = NULL, missing = NA, weight = NULL,
params = list(), nrounds,
verbose = 1, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL,
verbose = 1, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL,
save_period = NULL, save_name = "xgboost.model",
xgb_model = NULL, callbacks = list(), ...) {
@@ -18,16 +18,16 @@ xgboost <- function(data = NULL, label = NULL, missing = NA, weight = NULL,
early_stopping_rounds = early_stopping_rounds, maximize = maximize,
save_period = save_period, save_name = save_name,
xgb_model = xgb_model, callbacks = callbacks, ...)
return(bst)
return (bst)
}
#' Training part from Mushroom Data Set
#'
#'
#' This data set is originally from the Mushroom data set,
#' UCI Machine Learning Repository.
#'
#'
#' This data set includes the following fields:
#'
#'
#' \itemize{
#' \item \code{label} the label for each record
#' \item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
@@ -35,16 +35,16 @@ xgboost <- function(data = NULL, label = NULL, missing = NA, weight = NULL,
#'
#' @references
#' https://archive.ics.uci.edu/ml/datasets/Mushroom
#'
#' Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
#' [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
#'
#' Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
#' [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
#' School of Information and Computer Science.
#'
#'
#' @docType data
#' @keywords datasets
#' @name agaricus.train
#' @usage data(agaricus.train)
#' @format A list containing a label vector, and a dgCMatrix object with 6513
#' @format A list containing a label vector, and a dgCMatrix object with 6513
#' rows and 127 variables
NULL
@@ -52,9 +52,9 @@ NULL
#'
#' This data set is originally from the Mushroom data set,
#' UCI Machine Learning Repository.
#'
#'
#' This data set includes the following fields:
#'
#'
#' \itemize{
#' \item \code{label} the label for each record
#' \item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
@@ -62,16 +62,16 @@ NULL
#'
#' @references
#' https://archive.ics.uci.edu/ml/datasets/Mushroom
#'
#' Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
#' [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
#'
#' Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
#' [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
#' School of Information and Computer Science.
#'
#'
#' @docType data
#' @keywords datasets
#' @name agaricus.test
#' @usage data(agaricus.test)
#' @format A list containing a label vector, and a dgCMatrix object with 1611
#' @format A list containing a label vector, and a dgCMatrix object with 1611
#' rows and 126 variables
NULL
@@ -107,7 +107,7 @@ NULL
#' @importFrom graphics par
#' @importFrom graphics title
#' @importFrom grDevices rgb
#'
#'
#' @import methods
#' @useDynLib xgboost, .registration = TRUE
NULL

R-package/configure (vendored): 1043 lines changed; file diff suppressed because it is too large.

View File

@@ -4,6 +4,21 @@ AC_PREREQ(2.62)
AC_INIT([xgboost],[0.6-3],[],[xgboost],[])
# Use this line to set CC variable to a C compiler
AC_PROG_CC
### Check whether backtrace() is part of libc or the external lib libexecinfo
AC_MSG_CHECKING([Backtrace lib])
AC_MSG_RESULT([])
AC_CHECK_LIB([execinfo], [backtrace], [BACKTRACE_LIB=-lexecinfo], [BACKTRACE_LIB=''])
### Endian detection
AC_MSG_CHECKING([endian])
AC_MSG_RESULT([])
AC_RUN_IFELSE([AC_LANG_PROGRAM([[#include <stdint.h>]], [[const uint16_t endianness = 256; return !!(*(const uint8_t *)&endianness);]])],
[ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=1"],
[ENDIAN_FLAG="-DDMLC_CMAKE_LITTLE_ENDIAN=0"])
OPENMP_CXXFLAGS=""
if test `uname -s` = "Linux"
@@ -13,19 +28,28 @@ fi
if test `uname -s` = "Darwin"
then
OPENMP_CXXFLAGS="\$(SHLIB_OPENMP_CXXFLAGS)"
OPENMP_CXXFLAGS='-Xclang -fopenmp'
OPENMP_LIB='-lomp'
ac_pkg_openmp=no
AC_MSG_CHECKING([whether OpenMP will work in a package])
AC_LANG_CONFTEST(
[AC_LANG_PROGRAM([[#include <omp.h>]], [[ return omp_get_num_threads (); ]])])
PKG_CFLAGS="${OPENMP_CFLAGS}" PKG_LIBS="${OPENMP_CFLAGS}" "$RBIN" CMD SHLIB conftest.c 1>&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD && "$RBIN" --vanilla -q -e "dyn.load(paste('conftest',.Platform\$dynlib.ext,sep=''))" 1>&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD && ac_pkg_openmp=yes
AC_LANG_CONFTEST([AC_LANG_PROGRAM([[#include <omp.h>]], [[ return (omp_get_max_threads() <= 1); ]])])
${CC} -o conftest conftest.c /usr/local/lib/libomp.dylib -Xclang -fopenmp 2>/dev/null && ./conftest && ac_pkg_openmp=yes
AC_MSG_RESULT([${ac_pkg_openmp}])
if test "${ac_pkg_openmp}" = no; then
OPENMP_CXXFLAGS=''
OPENMP_LIB=''
echo '*****************************************************************************************'
echo 'WARNING: OpenMP is unavailable on this Mac OSX system. Training speed may be suboptimal.'
echo ' To use all CPU cores for training jobs, you should install OpenMP by running\n'
echo ' brew install libomp'
echo '*****************************************************************************************'
fi
fi
AC_SUBST(OPENMP_CXXFLAGS)
AC_SUBST(OPENMP_LIB)
AC_SUBST(ENDIAN_FLAG)
AC_SUBST(BACKTRACE_LIB)
AC_CONFIG_FILES([src/Makevars])
AC_OUTPUT
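The endian probe stores 0x0100 in a uint16_t and inspects its first byte, so the test program exits 0 on little-endian machines and configure defines DMLC_CMAKE_LITTLE_ENDIAN=1 there. For reference, the same property is visible from R via .Platform$endian ('little' or 'big').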

View File

@@ -30,7 +30,7 @@ wl <- list(train = dtrain, test = dtest)
# - similar to the 'hist'
# - the fastest option for moderately large datasets
# - current limitations: max_depth < 16, does not implement guided loss
# You can use tree_method = 'gpu_exact' for another GPU accelerated algorithm,
# You can use tree_method = 'gpu_hist' for another GPU accelerated algorithm,
# which is slower, more memory-hungry, but does not use binning.
param <- list(objective = 'reg:logistic', eval_metric = 'auc', subsample = 0.5, nthread = 4,
max_bin = 64, tree_method = 'gpu_hist')

View File

@@ -4,8 +4,10 @@
\name{agaricus.test}
\alias{agaricus.test}
\title{Test part from Mushroom Data Set}
\format{A list containing a label vector, and a dgCMatrix object with 1611
rows and 126 variables}
\format{
A list containing a label vector, and a dgCMatrix object with 1611
rows and 126 variables
}
\usage{
data(agaricus.test)
}
@@ -24,8 +26,8 @@ This data set includes the following fields:
\references{
https://archive.ics.uci.edu/ml/datasets/Mushroom
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
[http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
[http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
School of Information and Computer Science.
}
\keyword{datasets}

View File

@@ -4,8 +4,10 @@
\name{agaricus.train}
\alias{agaricus.train}
\title{Training part from Mushroom Data Set}
\format{A list containing a label vector, and a dgCMatrix object with 6513
rows and 127 variables}
\format{
A list containing a label vector, and a dgCMatrix object with 6513
rows and 127 variables
}
\usage{
data(agaricus.train)
}
@@ -24,8 +26,8 @@ This data set includes the following fields:
\references{
https://archive.ics.uci.edu/ml/datasets/Mushroom
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
[http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
[http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
School of Information and Computer Science.
}
\keyword{datasets}

View File

@@ -4,8 +4,12 @@
\alias{cb.early.stop}
\title{Callback closure to activate the early stopping.}
\usage{
cb.early.stop(stopping_rounds, maximize = FALSE, metric_name = NULL,
verbose = TRUE)
cb.early.stop(
stopping_rounds,
maximize = FALSE,
metric_name = NULL,
verbose = TRUE
)
}
\arguments{
\item{stopping_rounds}{The number of rounds with no improvement in

View File

@@ -5,10 +5,20 @@
\alias{predict.xgb.Booster.handle}
\title{Predict method for eXtreme Gradient Boosting model}
\usage{
\method{predict}{xgb.Booster}(object, newdata, missing = NA,
outputmargin = FALSE, ntreelimit = NULL, predleaf = FALSE,
predcontrib = FALSE, approxcontrib = FALSE, predinteraction = FALSE,
reshape = FALSE, ...)
\method{predict}{xgb.Booster}(
object,
newdata,
missing = NA,
outputmargin = FALSE,
ntreelimit = NULL,
predleaf = FALSE,
predcontrib = FALSE,
approxcontrib = FALSE,
predinteraction = FALSE,
reshape = FALSE,
training = FALSE,
...
)
\method{predict}{xgb.Booster.handle}(object, ...)
}
@@ -39,6 +49,9 @@ It will use all the trees by default (\code{NULL} value).}
prediction outputs per case. This option has no effect when either of predleaf, predcontrib,
or predinteraction flags is TRUE.}
\item{training}{whether the prediction result is used for training. For the dart booster,
predicting during training will perform dropout.}
\item{...}{Parameters passed to \code{predict.xgb.Booster}}
}
\value{

View File

@@ -55,7 +55,7 @@ than for \code{xgb.Booster}, since only just a handle (pointer) would need to be
That would only matter if attributes need to be set many times.
Note, however, that when feeding a handle of an \code{xgb.Booster} object to the attribute setters,
the raw model cache of an \code{xgb.Booster} object would not be automatically updated,
and it would be user's responsibility to call \code{xgb.save.raw} to update it.
and it would be user's responsibility to call \code{xgb.serialize} to update it.
The \code{xgb.attributes<-} setter either updates the existing or adds one or several attributes,
but it doesn't delete the other existing attributes.

View File

@@ -0,0 +1,28 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{xgb.config}
\alias{xgb.config}
\alias{xgb.config<-}
\title{Accessors for model parameters as JSON string.}
\usage{
xgb.config(object)
xgb.config(object) <- value
}
\arguments{
\item{object}{Object of class \code{xgb.Booster}}
\item{value}{A JSON string.}
}
\description{
Accessors for model parameters as JSON string.
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
config <- xgb.config(bst)
}

View File

@@ -87,6 +87,6 @@ accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) /
# Here the accuracy was already good and is now perfect.
cat(paste("The accuracy was", accuracy.before, "before adding leaf features and it is now",
accuracy.after, "!\\n"))
accuracy.after, "!\n"))
}

View File

@@ -4,11 +4,28 @@
\alias{xgb.cv}
\title{Cross Validation}
\usage{
xgb.cv(params = list(), data, nrounds, nfold, label = NULL, missing = NA,
prediction = FALSE, showsd = TRUE, metrics = list(), obj = NULL,
feval = NULL, stratified = TRUE, folds = NULL, verbose = TRUE,
print_every_n = 1L, early_stopping_rounds = NULL, maximize = NULL,
callbacks = list(), ...)
xgb.cv(
params = list(),
data,
nrounds,
nfold,
label = NULL,
missing = NA,
prediction = FALSE,
showsd = TRUE,
metrics = list(),
obj = NULL,
feval = NULL,
stratified = TRUE,
folds = NULL,
train_folds = NULL,
verbose = TRUE,
print_every_n = 1L,
early_stopping_rounds = NULL,
maximize = NULL,
callbacks = list(),
...
)
}
\arguments{
\item{params}{the list of parameters. Commonly used ones are:
@@ -69,6 +86,9 @@ by the values of outcome labels.}
(each element must be a vector of test fold's indices). When folds are supplied,
the \code{nfold} and \code{stratified} parameters are ignored.}
\item{train_folds}{\code{list} list specifying which indices to use for training. If \code{NULL}
(the default) all indices not specified in \code{folds} will be used for training.}
\item{verbose}{\code{boolean}, print the statistics during the process}
\item{print_every_n}{Print each n-th iteration evaluation messages when \code{verbose>0}.
@@ -115,7 +135,7 @@ An object of class \code{xgb.cv.synchronous} with the following elements:
(only available with early stopping).
\item \code{pred} CV prediction values available when \code{prediction} is set.
It is either vector or matrix (see \code{\link{cb.cv.predict}}).
\item \code{models} a liost of the CV folds' models. It is only available with the explicit
\item \code{models} a list of the CV folds' models. It is only available with the explicit
setting of the \code{cb.cv.predict(save_models = TRUE)} callback.
}
}

View File

@@ -4,8 +4,14 @@
\alias{xgb.dump}
\title{Dump an xgboost model in text format.}
\usage{
xgb.dump(model, fname = NULL, fmap = "", with_stats = FALSE,
dump_format = c("text", "json"), ...)
xgb.dump(
model,
fname = NULL,
fmap = "",
with_stats = FALSE,
dump_format = c("text", "json"),
...
)
}
\arguments{
\item{model}{the model object.}

View File

@@ -4,8 +4,14 @@
\alias{xgb.importance}
\title{Importance of features in a model.}
\usage{
xgb.importance(feature_names = NULL, model = NULL, trees = NULL,
data = NULL, label = NULL, target = NULL)
xgb.importance(
feature_names = NULL,
model = NULL,
trees = NULL,
data = NULL,
label = NULL,
target = NULL
)
}
\arguments{
\item{feature_names}{character vector of feature names. If the model already

View File

@@ -0,0 +1,14 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.load.raw.R
\name{xgb.load.raw}
\alias{xgb.load.raw}
\title{Load serialised xgboost model from R's raw vector}
\usage{
xgb.load.raw(buffer)
}
\arguments{
\item{buffer}{the buffer returned by xgb.save.raw}
}
\description{
A raw memory buffer can be generated by calling \code{xgb.save.raw}.
}

View File

@@ -4,8 +4,14 @@
\alias{xgb.model.dt.tree}
\title{Parse a boosted tree model text dump}
\usage{
xgb.model.dt.tree(feature_names = NULL, model = NULL, text = NULL,
trees = NULL, use_int_id = FALSE, ...)
xgb.model.dt.tree(
feature_names = NULL,
model = NULL,
text = NULL,
trees = NULL,
use_int_id = FALSE,
...
)
}
\arguments{
\item{feature_names}{character vector of feature names. If the model already

View File

@@ -5,11 +5,17 @@
\alias{xgb.plot.deepness}
\title{Plot model trees deepness}
\usage{
xgb.ggplot.deepness(model = NULL, which = c("2x1", "max.depth", "med.depth",
"med.weight"))
xgb.ggplot.deepness(
model = NULL,
which = c("2x1", "max.depth", "med.depth", "med.weight")
)
xgb.plot.deepness(model = NULL, which = c("2x1", "max.depth", "med.depth",
"med.weight"), plot = TRUE, ...)
xgb.plot.deepness(
model = NULL,
which = c("2x1", "max.depth", "med.depth", "med.weight"),
plot = TRUE,
...
)
}
\arguments{
\item{model}{either an \code{xgb.Booster} model generated by the \code{xgb.train} function

View File

@@ -5,12 +5,25 @@
\alias{xgb.plot.importance}
\title{Plot feature importance as a bar graph}
\usage{
xgb.ggplot.importance(importance_matrix = NULL, top_n = NULL,
measure = NULL, rel_to_first = FALSE, n_clusters = c(1:10), ...)
xgb.ggplot.importance(
importance_matrix = NULL,
top_n = NULL,
measure = NULL,
rel_to_first = FALSE,
n_clusters = c(1:10),
...
)
xgb.plot.importance(importance_matrix = NULL, top_n = NULL,
measure = NULL, rel_to_first = FALSE, left_margin = 10, cex = NULL,
plot = TRUE, ...)
xgb.plot.importance(
importance_matrix = NULL,
top_n = NULL,
measure = NULL,
rel_to_first = FALSE,
left_margin = 10,
cex = NULL,
plot = TRUE,
...
)
}
\arguments{
\item{importance_matrix}{a \code{data.table} returned by \code{\link{xgb.importance}}.}

View File

@@ -4,8 +4,15 @@
\alias{xgb.plot.multi.trees}
\title{Project all trees on one tree and plot it}
\usage{
xgb.plot.multi.trees(model, feature_names = NULL, features_keep = 5,
plot_width = NULL, plot_height = NULL, render = TRUE, ...)
xgb.plot.multi.trees(
model,
feature_names = NULL,
features_keep = 5,
plot_width = NULL,
plot_height = NULL,
render = TRUE,
...
)
}
\arguments{
\item{model}{produced by the \code{xgb.train} function.}

View File

@@ -4,13 +4,33 @@
\alias{xgb.plot.shap}
\title{SHAP contribution dependency plots}
\usage{
xgb.plot.shap(data, shap_contrib = NULL, features = NULL, top_n = 1,
model = NULL, trees = NULL, target_class = NULL,
approxcontrib = FALSE, subsample = NULL, n_col = 1, col = rgb(0, 0, 1,
0.2), pch = ".", discrete_n_uniq = 5, discrete_jitter = 0.01,
ylab = "SHAP", plot_NA = TRUE, col_NA = rgb(0.7, 0, 1, 0.6),
pch_NA = ".", pos_NA = 1.07, plot_loess = TRUE, col_loess = 2,
span_loess = 0.5, which = c("1d", "2d"), plot = TRUE, ...)
xgb.plot.shap(
data,
shap_contrib = NULL,
features = NULL,
top_n = 1,
model = NULL,
trees = NULL,
target_class = NULL,
approxcontrib = FALSE,
subsample = NULL,
n_col = 1,
col = rgb(0, 0, 1, 0.2),
pch = ".",
discrete_n_uniq = 5,
discrete_jitter = 0.01,
ylab = "SHAP",
plot_NA = TRUE,
col_NA = rgb(0.7, 0, 1, 0.6),
pch_NA = ".",
pos_NA = 1.07,
plot_loess = TRUE,
col_loess = 2,
span_loess = 0.5,
which = c("1d", "2d"),
plot = TRUE,
...
)
}
\arguments{
\item{data}{data as a \code{matrix} or \code{dgCMatrix}.}

View File

@@ -4,9 +4,16 @@
\alias{xgb.plot.tree}
\title{Plot a boosted tree model}
\usage{
xgb.plot.tree(feature_names = NULL, model = NULL, trees = NULL,
plot_width = NULL, plot_height = NULL, render = TRUE,
show_node_id = FALSE, ...)
xgb.plot.tree(
feature_names = NULL,
model = NULL,
trees = NULL,
plot_width = NULL,
plot_height = NULL,
render = TRUE,
show_node_id = FALSE,
...
)
}
\arguments{
\item{feature_names}{names of each feature as a \code{character} vector.}

View File

@@ -3,7 +3,7 @@
\name{xgb.save.raw}
\alias{xgb.save.raw}
\title{Save xgboost model to R's raw vector,
user can call xgb.load to load the model back from raw vector}
user can call xgb.load.raw to load the model back from raw vector}
\usage{
xgb.save.raw(model)
}
@@ -18,10 +18,10 @@ data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
raw <- xgb.save.raw(bst)
bst <- xgb.load(raw)
bst <- xgb.load.raw(raw)
pred <- predict(bst, test$data)
}

View File

@@ -0,0 +1,29 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.serialize.R
\name{xgb.serialize}
\alias{xgb.serialize}
\title{Serialize the booster instance into R's raw vector. The serialization method differs
from \code{\link{xgb.save.raw}} in that the latter saves only the model but not the
parameters. This serialization format is not stable across different xgboost versions.}
\usage{
xgb.serialize(booster)
}
\arguments{
\item{booster}{the booster instance}
}
\description{
Serialize the booster instance into R's raw vector. The serialization method differs
from \code{\link{xgb.save.raw}} in that the latter saves only the model but not the
parameters. This serialization format is not stable across different xgboost versions.
}
\examples{
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
raw <- xgb.serialize(bst)
bst <- xgb.unserialize(raw)
}

View File

@@ -5,15 +5,41 @@
\alias{xgboost}
\title{eXtreme Gradient Boosting Training}
\usage{
xgb.train(params = list(), data, nrounds, watchlist = list(), obj = NULL,
feval = NULL, verbose = 1, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL, save_period = NULL,
save_name = "xgboost.model", xgb_model = NULL, callbacks = list(), ...)
xgb.train(
params = list(),
data,
nrounds,
watchlist = list(),
obj = NULL,
feval = NULL,
verbose = 1,
print_every_n = 1L,
early_stopping_rounds = NULL,
maximize = NULL,
save_period = NULL,
save_name = "xgboost.model",
xgb_model = NULL,
callbacks = list(),
...
)
xgboost(data = NULL, label = NULL, missing = NA, weight = NULL,
params = list(), nrounds, verbose = 1, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL, save_period = NULL,
save_name = "xgboost.model", xgb_model = NULL, callbacks = list(), ...)
xgboost(
data = NULL,
label = NULL,
missing = NA,
weight = NULL,
params = list(),
nrounds,
verbose = 1,
print_every_n = 1L,
early_stopping_rounds = NULL,
maximize = NULL,
save_period = NULL,
save_name = "xgboost.model",
xgb_model = NULL,
callbacks = list(),
...
)
}
\arguments{
\item{params}{the list of parameters.

View File

@@ -0,0 +1,14 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.unserialize.R
\name{xgb.unserialize}
\alias{xgb.unserialize}
\title{Load the booster instance back from a buffer created by \code{\link{xgb.serialize}}}
\usage{
xgb.unserialize(buffer)
}
\arguments{
\item{buffer}{the buffer containing booster instance saved by \code{\link{xgb.serialize}}}
}
\description{
Load the booster instance back from a buffer created by \code{\link{xgb.serialize}}
}

View File

@@ -17,8 +17,8 @@ endif
$(foreach v, $(XGB_RFLAGS), $(warning $(v)))
PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= @OPENMP_CXXFLAGS@ -pthread
PKG_LIBS = @OPENMP_CXXFLAGS@ -pthread
PKG_CXXFLAGS= @OPENMP_CXXFLAGS@ @ENDIAN_FLAG@ -pthread
PKG_LIBS = @OPENMP_CXXFLAGS@ @OPENMP_LIB@ @ENDIAN_FLAG@ @BACKTRACE_LIB@ -pthread
OBJECTS= ./xgboost_R.o ./xgboost_custom.o ./xgboost_assert.o ./init.o\
$(PKGROOT)/amalgamation/xgboost-all0.o $(PKGROOT)/amalgamation/dmlc-minimum0.o\
$(PKGROOT)/rabit/src/engine_empty.o $(PKGROOT)/rabit/src/c_api.o

View File

@@ -23,8 +23,12 @@ extern SEXP XGBoosterGetAttrNames_R(SEXP);
extern SEXP XGBoosterGetAttr_R(SEXP, SEXP);
extern SEXP XGBoosterLoadModelFromRaw_R(SEXP, SEXP);
extern SEXP XGBoosterLoadModel_R(SEXP, SEXP);
extern SEXP XGBoosterSaveJsonConfig_R(SEXP handle);
extern SEXP XGBoosterLoadJsonConfig_R(SEXP handle, SEXP value);
extern SEXP XGBoosterSerializeToBuffer_R(SEXP handle);
extern SEXP XGBoosterUnserializeFromBuffer_R(SEXP handle, SEXP raw);
extern SEXP XGBoosterModelToRaw_R(SEXP);
extern SEXP XGBoosterPredict_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterPredict_R(SEXP, SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterSaveModel_R(SEXP, SEXP);
extern SEXP XGBoosterSetAttr_R(SEXP, SEXP, SEXP);
extern SEXP XGBoosterSetParam_R(SEXP, SEXP, SEXP);
@@ -49,8 +53,12 @@ static const R_CallMethodDef CallEntries[] = {
{"XGBoosterGetAttr_R", (DL_FUNC) &XGBoosterGetAttr_R, 2},
{"XGBoosterLoadModelFromRaw_R", (DL_FUNC) &XGBoosterLoadModelFromRaw_R, 2},
{"XGBoosterLoadModel_R", (DL_FUNC) &XGBoosterLoadModel_R, 2},
{"XGBoosterSaveJsonConfig_R", (DL_FUNC) &XGBoosterSaveJsonConfig_R, 1},
{"XGBoosterLoadJsonConfig_R", (DL_FUNC) &XGBoosterLoadJsonConfig_R, 2},
{"XGBoosterSerializeToBuffer_R", (DL_FUNC) &XGBoosterSerializeToBuffer_R, 1},
{"XGBoosterUnserializeFromBuffer_R", (DL_FUNC) &XGBoosterUnserializeFromBuffer_R, 2},
{"XGBoosterModelToRaw_R", (DL_FUNC) &XGBoosterModelToRaw_R, 1},
{"XGBoosterPredict_R", (DL_FUNC) &XGBoosterPredict_R, 4},
{"XGBoosterPredict_R", (DL_FUNC) &XGBoosterPredict_R, 5},
{"XGBoosterSaveModel_R", (DL_FUNC) &XGBoosterSaveModel_R, 2},
{"XGBoosterSetAttr_R", (DL_FUNC) &XGBoosterSetAttr_R, 3},
{"XGBoosterSetParam_R", (DL_FUNC) &XGBoosterSetParam_R, 3},

View File

@@ -136,9 +136,10 @@ SEXP XGDMatrixSliceDMatrix_R(SEXP handle, SEXP idxset) {
idxvec[i] = INTEGER(idxset)[i] - 1;
}
DMatrixHandle res;
CHECK_CALL(XGDMatrixSliceDMatrix(R_ExternalPtrAddr(handle),
BeginPtr(idxvec), len,
&res));
CHECK_CALL(XGDMatrixSliceDMatrixEx(R_ExternalPtrAddr(handle),
BeginPtr(idxvec), len,
&res,
0));
ret = PROTECT(R_MakeExternalPtr(res, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
R_API_END();
@@ -165,7 +166,9 @@ SEXP XGDMatrixSetInfo_R(SEXP handle, SEXP field, SEXP array) {
for (int i = 0; i < len; ++i) {
vec[i] = static_cast<unsigned>(INTEGER(array)[i]);
}
CHECK_CALL(XGDMatrixSetGroup(R_ExternalPtrAddr(handle), BeginPtr(vec), len));
CHECK_CALL(XGDMatrixSetUIntInfo(R_ExternalPtrAddr(handle),
CHAR(asChar(field)),
BeginPtr(vec), len));
} else {
std::vector<float> vec(len);
#pragma omp parallel for schedule(static)
@@ -173,8 +176,8 @@ SEXP XGDMatrixSetInfo_R(SEXP handle, SEXP field, SEXP array) {
vec[i] = REAL(array)[i];
}
CHECK_CALL(XGDMatrixSetFloatInfo(R_ExternalPtrAddr(handle),
CHAR(asChar(field)),
BeginPtr(vec), len));
CHAR(asChar(field)),
BeginPtr(vec), len));
}
R_API_END();
return R_NilValue;
@@ -292,24 +295,26 @@ SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames) {
vec_sptr.push_back(vec_names[i].c_str());
}
CHECK_CALL(XGBoosterEvalOneIter(R_ExternalPtrAddr(handle),
asInteger(iter),
BeginPtr(vec_dmats),
BeginPtr(vec_sptr),
len, &ret));
asInteger(iter),
BeginPtr(vec_dmats),
BeginPtr(vec_sptr),
len, &ret));
R_API_END();
return mkString(ret);
}
SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask, SEXP ntree_limit) {
SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask,
SEXP ntree_limit, SEXP training) {
SEXP ret;
R_API_BEGIN();
bst_ulong olen;
const float *res;
CHECK_CALL(XGBoosterPredict(R_ExternalPtrAddr(handle),
R_ExternalPtrAddr(dmat),
asInteger(option_mask),
asInteger(ntree_limit),
&olen, &res));
R_ExternalPtrAddr(dmat),
asInteger(option_mask),
asInteger(ntree_limit),
asInteger(training),
&olen, &res));
ret = PROTECT(allocVector(REALSXP, olen));
for (size_t i = 0; i < olen; ++i) {
REAL(ret)[i] = res[i];
@@ -333,15 +338,6 @@ SEXP XGBoosterSaveModel_R(SEXP handle, SEXP fname) {
return R_NilValue;
}
SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw) {
R_API_BEGIN();
CHECK_CALL(XGBoosterLoadModelFromBuffer(R_ExternalPtrAddr(handle),
RAW(raw),
length(raw)));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterModelToRaw_R(SEXP handle) {
SEXP ret;
R_API_BEGIN();
@@ -357,6 +353,57 @@ SEXP XGBoosterModelToRaw_R(SEXP handle) {
return ret;
}
SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw) {
R_API_BEGIN();
CHECK_CALL(XGBoosterLoadModelFromBuffer(R_ExternalPtrAddr(handle),
RAW(raw),
length(raw)));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterSaveJsonConfig_R(SEXP handle) {
const char* ret;
R_API_BEGIN();
bst_ulong len {0};
CHECK_CALL(XGBoosterSaveJsonConfig(R_ExternalPtrAddr(handle),
&len,
&ret));
R_API_END();
return mkString(ret);
}
SEXP XGBoosterLoadJsonConfig_R(SEXP handle, SEXP value) {
R_API_BEGIN();
XGBoosterLoadJsonConfig(R_ExternalPtrAddr(handle), CHAR(asChar(value)));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterSerializeToBuffer_R(SEXP handle) {
SEXP ret;
R_API_BEGIN();
bst_ulong out_len;
const char *raw;
CHECK_CALL(XGBoosterSerializeToBuffer(R_ExternalPtrAddr(handle), &out_len, &raw));
ret = PROTECT(allocVector(RAWSXP, out_len));
if (out_len != 0) {
memcpy(RAW(ret), raw, out_len);
}
R_API_END();
UNPROTECT(1);
return ret;
}
SEXP XGBoosterUnserializeFromBuffer_R(SEXP handle, SEXP raw) {
R_API_BEGIN();
XGBoosterUnserializeFromBuffer(R_ExternalPtrAddr(handle),
RAW(raw),
length(raw));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterDumpModel_R(SEXP handle, SEXP fmap, SEXP with_stats, SEXP dump_format) {
SEXP out;
R_API_BEGIN();

View File

@@ -148,8 +148,10 @@ XGB_DLL SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evn
* \param dmat data matrix
* \param option_mask output_margin:1 predict_leaf:2
* \param ntree_limit limit number of trees used in prediction
* \param training Whether the prediction value is used for training.
*/
XGB_DLL SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask, SEXP ntree_limit);
XGB_DLL SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask,
SEXP ntree_limit, SEXP training);
/*!
* \brief load model from existing file
* \param handle handle
@@ -177,9 +179,39 @@ XGB_DLL SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw);
* \brief save model into R's raw array
* \param handle handle
* \return raw array
*/
*/
XGB_DLL SEXP XGBoosterModelToRaw_R(SEXP handle);
/*!
* \brief Save internal parameters as a JSON string
* \param handle handle
* \return JSON string
*/
XGB_DLL SEXP XGBoosterSaveJsonConfig_R(SEXP handle);
/*!
* \brief Load the JSON string returned by XGBoosterSaveJsonConfig_R
* \param handle handle
* \param value JSON string
* \return R_NilValue
*/
XGB_DLL SEXP XGBoosterLoadJsonConfig_R(SEXP handle, SEXP value);
/*!
* \brief Memory snapshot based serialization method. Saves all internal states
*        into the buffer.
* \param handle handle to booster
* \return raw byte array
*/
XGB_DLL SEXP XGBoosterSerializeToBuffer_R(SEXP handle);
/*!
* \brief Memory snapshot based serialization method. Loads the buffer returned
* from `XGBoosterSerializeToBuffer'.
* \param handle handle to booster
* \param raw raw vector containing the buffer from XGBoosterSerializeToBuffer_R
* \return R_NilValue
*/
XGB_DLL SEXP XGBoosterUnserializeFromBuffer_R(SEXP handle, SEXP raw);
/*!
* \brief dump model into a string
* \param handle handle

View File

@@ -27,7 +27,7 @@ test_that("train and predict binary classification", {
pred <- predict(bst, test$data)
expect_length(pred, 1611)
pred1 <- predict(bst, train$data, ntreelimit = 1)
expect_length(pred1, 6513)
err_pred1 <- sum((pred1 > 0.5) != train$label)/length(train$label)
@@ -35,6 +35,87 @@ test_that("train and predict binary classification", {
expect_lt(abs(err_pred1 - err_log), 10e-6)
})
test_that("parameter validation works", {
p <- list(foo = "bar")
nrounds = 1
set.seed(1994)
d <- cbind(
x1 = rnorm(10),
x2 = rnorm(10),
x3 = rnorm(10))
y <- d[,"x1"] + d[,"x2"]^2 +
ifelse(d[,"x3"] > .5, d[,"x3"]^2, 2^d[,"x3"]) +
rnorm(10)
dtrain <- xgb.DMatrix(data=d, info = list(label=y))
correct <- function() {
params <- list(max_depth = 2, booster = "dart",
rate_drop = 0.5, one_drop = TRUE,
objective = "reg:squarederror")
xgb.train(params = params, data = dtrain, nrounds = nrounds)
}
expect_silent(correct())
incorrect <- function() {
params <- list(max_depth = 2, booster = "dart",
rate_drop = 0.5, one_drop = TRUE,
objective = "reg:squarederror",
foo = "bar", bar = "foo")
output <- capture.output(
xgb.train(params = params, data = dtrain, nrounds = nrounds))
print(output)
}
expect_output(incorrect(), "bar, foo")
})
test_that("dart prediction works", {
nrounds = 32
set.seed(1994)
d <- cbind(
x1 = rnorm(100),
x2 = rnorm(100),
x3 = rnorm(100))
y <- d[,"x1"] + d[,"x2"]^2 +
ifelse(d[,"x3"] > .5, d[,"x3"]^2, 2^d[,"x3"]) +
rnorm(100)
set.seed(1994)
booster_by_xgboost <- xgboost(data = d, label = y, max_depth = 2, booster = "dart",
rate_drop = 0.5, one_drop = TRUE,
eta = 1, nthread = 2, nrounds = nrounds, objective = "reg:squarederror")
pred_by_xgboost_0 <- predict(booster_by_xgboost, newdata = d, ntreelimit = 0)
pred_by_xgboost_1 <- predict(booster_by_xgboost, newdata = d, ntreelimit = nrounds)
expect_true(all(matrix(pred_by_xgboost_0, byrow=TRUE) == matrix(pred_by_xgboost_1, byrow=TRUE)))
pred_by_xgboost_2 <- predict(booster_by_xgboost, newdata = d, training = TRUE)
expect_false(all(matrix(pred_by_xgboost_0, byrow=TRUE) == matrix(pred_by_xgboost_2, byrow=TRUE)))
set.seed(1994)
dtrain <- xgb.DMatrix(data=d, info = list(label=y))
booster_by_train <- xgb.train( params = list(
booster = "dart",
max_depth = 2,
eta = 1,
rate_drop = 0.5,
one_drop = TRUE,
nthread = 1,
tree_method= "exact",
objective = "reg:squarederror"
),
data = dtrain,
nrounds = nrounds
)
pred_by_train_0 <- predict(booster_by_train, newdata = dtrain, ntreelimit = 0)
pred_by_train_1 <- predict(booster_by_train, newdata = dtrain, ntreelimit = nrounds)
pred_by_train_2 <- predict(booster_by_train, newdata = dtrain, training = TRUE)
expect_true(all(matrix(pred_by_train_0, byrow=TRUE) == matrix(pred_by_xgboost_0, byrow=TRUE)))
expect_true(all(matrix(pred_by_train_1, byrow=TRUE) == matrix(pred_by_xgboost_1, byrow=TRUE)))
expect_true(all(matrix(pred_by_train_2, byrow=TRUE) == matrix(pred_by_xgboost_2, byrow=TRUE)))
})
test_that("train and predict softprob", {
lb <- as.numeric(iris$Species) - 1
set.seed(11)
@@ -74,7 +155,7 @@ test_that("train and predict softmax", {
expect_false(is.null(bst$evaluation_log))
expect_lt(bst$evaluation_log[, min(train_merror)], 0.025)
expect_equal(bst$niter * 3, xgb.ntree(bst))
pred <- predict(bst, as.matrix(iris[, -5]))
expect_length(pred, nrow(iris))
err <- sum(pred != lb)/length(lb)
@@ -90,12 +171,12 @@ test_that("train and predict RF", {
num_parallel_tree = 20, subsample = 0.6, colsample_bytree = 0.1)
expect_equal(bst$niter, 1)
expect_equal(xgb.ntree(bst), 20)
pred <- predict(bst, train$data)
pred_err <- sum((pred > 0.5) != lb)/length(lb)
expect_lt(abs(bst$evaluation_log[1, train_error] - pred_err), 10e-6)
#expect_lt(pred_err, 0.03)
pred <- predict(bst, train$data, ntreelimit = 20)
pred_err_20 <- sum((pred > 0.5) != lb)/length(lb)
expect_equal(pred_err_20, pred_err)
@@ -171,6 +252,21 @@ test_that("training continuation works", {
expect_equal(dim(bst2$evaluation_log), c(2, 2))
})
test_that("model serialization works", {
out_path <- "model_serialization"
dtrain <- xgb.DMatrix(train$data, label = train$label)
watchlist = list(train=dtrain)
param <- list(objective = "binary:logistic")
booster <- xgb.train(param, dtrain, nrounds = 4, watchlist)
raw <- xgb.serialize(booster)
saveRDS(raw, out_path)
raw <- readRDS(out_path)
loaded <- xgb.unserialize(raw)
raw_from_loaded <- xgb.serialize(loaded)
expect_equal(raw, raw_from_loaded)
file.remove(out_path)
})
test_that("xgb.cv works", {
set.seed(11)
@@ -191,13 +287,27 @@ test_that("xgb.cv works", {
expect_false(is.null(cv$call))
})
test_that("xgb.cv works with stratified folds", {
dtrain <- xgb.DMatrix(train$data, label = train$label)
set.seed(314159)
cv <- xgb.cv(data = dtrain, max_depth = 2, nfold = 5,
eta = 1., nthread = 2, nrounds = 2, objective = "binary:logistic",
verbose=TRUE, stratified = FALSE)
set.seed(314159)
cv2 <- xgb.cv(data = dtrain, max_depth = 2, nfold = 5,
eta = 1., nthread = 2, nrounds = 2, objective = "binary:logistic",
verbose=TRUE, stratified = TRUE)
# Stratified folds should result in different evaluation logs
expect_true(all(cv$evaluation_log[, test_error_mean] != cv2$evaluation_log[, test_error_mean]))
})
test_that("train and predict with non-strict classes", {
# standard dense matrix input
train_dense <- as.matrix(train$data)
bst <- xgboost(data = train_dense, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 0)
pr0 <- predict(bst, train_dense)
# dense matrix-like input of non-matrix class
class(train_dense) <- 'shmatrix'
expect_true(is.matrix(train_dense))
@@ -207,7 +317,7 @@ test_that("train and predict with non-strict classes", {
, regexp = NA)
expect_error(pr <- predict(bst, train_dense), regexp = NA)
expect_equal(pr0, pr)
# dense matrix-like input of non-matrix class with some inheritance
class(train_dense) <- c('pphmatrix','shmatrix')
expect_true(is.matrix(train_dense))
@@ -217,7 +327,7 @@ test_that("train and predict with non-strict classes", {
, regexp = NA)
expect_error(pr <- predict(bst, train_dense), regexp = NA)
expect_equal(pr0, pr)
# when someone inherits from xgb.Booster, it should still be possible to use it as xgb.Booster
class(bst) <- c('super.Booster', 'xgb.Booster')
expect_error(pr <- predict(bst, train_dense), regexp = NA)
@@ -247,18 +357,28 @@ test_that("colsample_bytree works", {
test_y <- as.numeric(rowSums(test_x) > 0)
colnames(train_x) <- paste0("Feature_", sprintf("%03d", 1:100))
colnames(test_x) <- paste0("Feature_", sprintf("%03d", 1:100))
dtrain <- xgb.DMatrix(train_x, label = train_y)
dtrain <- xgb.DMatrix(train_x, label = train_y)
dtest <- xgb.DMatrix(test_x, label = test_y)
watchlist <- list(train = dtrain, eval = dtest)
# Use colsample_bytree = 0.01, so that roughly one out of 100 features is
# chosen for each tree
param <- list(max_depth = 2, eta = 0, silent = 1, nthread = 2,
## Use colsample_bytree = 0.01, so that roughly one out of 100 features is chosen for
## each tree
param <- list(max_depth = 2, eta = 0, nthread = 2,
colsample_bytree = 0.01, objective = "binary:logistic",
eval_metric = "auc")
set.seed(2)
set.seed(2)
bst <- xgb.train(param, dtrain, nrounds = 100, watchlist, verbose = 0)
xgb.importance(model = bst)
# If colsample_bytree works properly, a variety of features should be used
# in the 100 trees
expect_gte(nrow(xgb.importance(model = bst)), 30)
})
test_that("Configuration works", {
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic",
eval_metric = 'error', eval_metric = 'auc', eval_metric = "logloss")
config <- xgb.config(bst)
xgb.config(bst) <- config
reloaded_config <- xgb.config(bst)
expect_equal(config, reloaded_config);
})

View File

@@ -30,16 +30,16 @@ param <- list(objective = "binary:logistic", max_depth = 2, nthread = 2)
test_that("cb.print.evaluation works as expected", {
bst_evaluation <- c('train-auc'=0.9, 'test-auc'=0.8)
bst_evaluation_err <- NULL
begin_iteration <- 1
end_iteration <- 7
f0 <- cb.print.evaluation(period=0)
f1 <- cb.print.evaluation(period=1)
f5 <- cb.print.evaluation(period=5)
expect_false(is.null(attr(f1, 'call')))
expect_equal(attr(f1, 'name'), 'cb.print.evaluation')
@@ -48,15 +48,15 @@ test_that("cb.print.evaluation works as expected", {
expect_output(f1(), "\\[1\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
expect_output(f5(), "\\[1\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
expect_null(f1())
iteration <- 2
expect_output(f1(), "\\[2\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
expect_silent(f5())
iteration <- 7
expect_output(f1(), "\\[7\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
expect_output(f5(), "\\[7\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
bst_evaluation_err <- c('train-auc'=0.1, 'test-auc'=0.2)
expect_output(f1(), "\\[7\\]\ttrain-auc:0.900000\\+0.100000\ttest-auc:0.800000\\+0.200000")
})
@@ -65,40 +65,40 @@ test_that("cb.evaluation.log works as expected", {
bst_evaluation <- c('train-auc'=0.9, 'test-auc'=0.8)
bst_evaluation_err <- NULL
evaluation_log <- list()
f <- cb.evaluation.log()
expect_false(is.null(attr(f, 'call')))
expect_equal(attr(f, 'name'), 'cb.evaluation.log')
iteration <- 1
expect_silent(f())
expect_equal(evaluation_log,
expect_equal(evaluation_log,
list(c(iter=1, bst_evaluation)))
iteration <- 2
expect_silent(f())
expect_equal(evaluation_log,
expect_equal(evaluation_log,
list(c(iter=1, bst_evaluation), c(iter=2, bst_evaluation)))
expect_silent(f(finalize = TRUE))
expect_equal(evaluation_log,
expect_equal(evaluation_log,
data.table(iter=1:2, train_auc=c(0.9,0.9), test_auc=c(0.8,0.8)))
bst_evaluation_err <- c('train-auc'=0.1, 'test-auc'=0.2)
evaluation_log <- list()
f <- cb.evaluation.log()
iteration <- 1
expect_silent(f())
expect_equal(evaluation_log,
expect_equal(evaluation_log,
list(c(iter=1, c(bst_evaluation, bst_evaluation_err))))
iteration <- 2
expect_silent(f())
expect_equal(evaluation_log,
expect_equal(evaluation_log,
list(c(iter=1, c(bst_evaluation, bst_evaluation_err)),
c(iter=2, c(bst_evaluation, bst_evaluation_err))))
expect_silent(f(finalize = TRUE))
expect_equal(evaluation_log,
expect_equal(evaluation_log,
data.table(iter=1:2,
train_auc_mean=c(0.9,0.9), train_auc_std=c(0.1,0.1),
test_auc_mean=c(0.8,0.8), test_auc_std=c(0.2,0.2)))
@@ -130,18 +130,18 @@ test_that("cb.reset.parameters works as expected", {
bst1 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0,
callbacks = list(cb.reset.parameters(my_par)))
expect_false(is.null(bst1$evaluation_log$train_error))
expect_equal(bst0$evaluation_log$train_error,
expect_equal(bst0$evaluation_log$train_error,
bst1$evaluation_log$train_error)
# same eta but re-set via a function in the callback
set.seed(111)
my_par <- list(eta = function(itr, itr_end) 0.9)
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0,
callbacks = list(cb.reset.parameters(my_par)))
expect_false(is.null(bst2$evaluation_log$train_error))
expect_equal(bst0$evaluation_log$train_error,
expect_equal(bst0$evaluation_log$train_error,
bst2$evaluation_log$train_error)
# different eta re-set as a vector parameter in the callback
set.seed(111)
my_par <- list(eta = c(0.6, 0.5))
@@ -149,7 +149,7 @@ test_that("cb.reset.parameters works as expected", {
callbacks = list(cb.reset.parameters(my_par)))
expect_false(is.null(bst3$evaluation_log$train_error))
expect_false(all(bst0$evaluation_log$train_error == bst3$evaluation_log$train_error))
# resetting multiple parameters at the same time runs with no error
my_par <- list(eta = c(1., 0.5), gamma = c(1, 2), max_depth = c(4, 8))
expect_error(
@@ -175,7 +175,7 @@ test_that("cb.reset.parameters works as expected", {
test_that("cb.save.model works as expected", {
files <- c('xgboost_01.model', 'xgboost_02.model', 'xgboost.model')
for (f in files) if (file.exists(f)) file.remove(f)
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist, eta = 1, verbose = 0,
save_period = 1, save_name = "xgboost_%02d.model")
expect_true(file.exists('xgboost_01.model'))
@@ -184,6 +184,9 @@ test_that("cb.save.model works as expected", {
expect_equal(xgb.ntree(b1), 1)
b2 <- xgb.load('xgboost_02.model')
expect_equal(xgb.ntree(b2), 2)
xgb.config(b2) <- xgb.config(bst)
expect_equal(xgb.config(bst), xgb.config(b2))
expect_equal(bst$raw, b2$raw)
# save_period = 0 saves the last iteration's model
@@ -191,8 +194,9 @@ test_that("cb.save.model works as expected", {
save_period = 0)
expect_true(file.exists('xgboost.model'))
b2 <- xgb.load('xgboost.model')
xgb.config(b2) <- xgb.config(bst)
expect_equal(bst$raw, b2$raw)
for (f in files) if (file.exists(f)) file.remove(f)
})
@@ -211,13 +215,22 @@ test_that("early stopping xgb.train works", {
err_pred <- err(ltest, pred)
err_log <- bst$evaluation_log[bst$best_iteration, test_error]
expect_equal(err_log, err_pred, tolerance = 5e-6)
set.seed(11)
expect_silent(
bst0 <- xgb.train(param, dtrain, nrounds = 20, watchlist, eta = 0.3,
early_stopping_rounds = 3, maximize = FALSE, verbose = 0)
)
expect_equal(bst$evaluation_log, bst0$evaluation_log)
xgb.save(bst, "model.bin")
loaded <- xgb.load("model.bin")
expect_false(is.null(loaded$best_iteration))
expect_equal(loaded$best_iteration, bst$best_ntreelimit)
expect_equal(loaded$best_ntreelimit, bst$best_ntreelimit)
file.remove("model.bin")
})
test_that("early stopping using a specific metric works", {
@@ -285,15 +298,16 @@ test_that("prediction in early-stopping xgb.cv works", {
set.seed(11)
expect_output(
cv <- xgb.cv(param, dtrain, nfold = 5, eta = 0.1, nrounds = 20,
early_stopping_rounds = 5, maximize = FALSE, prediction = TRUE)
early_stopping_rounds = 5, maximize = FALSE, stratified = FALSE,
prediction = TRUE)
, "Stopping. Best iteration")
expect_false(is.null(cv$best_iteration))
expect_lt(cv$best_iteration, 19)
expect_false(is.null(cv$evaluation_log))
expect_false(is.null(cv$pred))
expect_length(cv$pred, nrow(train$data))
err_pred <- mean( sapply(cv$folds, function(f) mean(err(ltrain[f], cv$pred[f]))) )
err_log <- cv$evaluation_log[cv$best_iteration, test_error_mean]
expect_equal(err_pred, err_log, tolerance = 1e-6)

View File

@@ -31,7 +31,6 @@ num_round <- 2
test_that("custom objective works", {
bst <- xgb.train(param, dtrain, num_round, watchlist)
expect_equal(class(bst), "xgb.Booster")
expect_equal(length(bst$raw), 1100)
expect_false(is.null(bst$evaluation_log))
expect_false(is.null(bst$evaluation_log$eval_error))
expect_lt(bst$evaluation_log[num_round, eval_error], 0.03)
@@ -58,5 +57,21 @@ test_that("custom objective using DMatrix attr works", {
param$objective = logregobjattr
bst <- xgb.train(param, dtrain, num_round, watchlist)
expect_equal(class(bst), "xgb.Booster")
expect_equal(length(bst$raw), 1100)
})
test_that("custom objective with multi-class works", {
data = as.matrix(iris[, -5])
label = as.numeric(iris$Species) - 1
dtrain <- xgb.DMatrix(data = data, label = label)
nclasses <- 3
fake_softprob <- function(preds, dtrain) {
expect_true(all(matrix(preds) == 0.5))
grad <- rnorm(dim(as.matrix(preds))[1])
expect_equal(dim(data)[1] * nclasses, dim(as.matrix(preds))[1])
hess <- rnorm(dim(as.matrix(preds))[1])
return (list(grad = grad, hess = hess))
}
param$objective = fake_softprob
bst <- xgb.train(param, dtrain, 1, num_class=nclasses)
})

View File

@@ -50,6 +50,12 @@ test_that("xgb.DMatrix: getinfo & setinfo", {
labels <- getinfo(dtest, 'label')
expect_equal(test_label, getinfo(dtest, 'label'))
expect_true(setinfo(dtest, 'label_lower_bound', test_label))
expect_equal(test_label, getinfo(dtest, 'label_lower_bound'))
expect_true(setinfo(dtest, 'label_upper_bound', test_label))
expect_equal(test_label, getinfo(dtest, 'label_upper_bound'))
expect_true(length(getinfo(dtest, 'weight')) == 0)
expect_true(length(getinfo(dtest, 'base_margin')) == 0)
@@ -59,7 +65,7 @@ test_that("xgb.DMatrix: getinfo & setinfo", {
expect_error(setinfo(dtest, 'group', test_label))
# providing character values will give a warning
expect_warning( setinfo(dtest, 'weight', rep('a', nrow(test_data))) )
expect_warning(setinfo(dtest, 'weight', rep('a', nrow(test_data))))
# any other label should error
expect_error(setinfo(dtest, 'asdf', test_label))

View File

@@ -40,7 +40,7 @@ test_that("gblinear works", {
expect_lt(bst$evaluation_log$eval_error[2], ERR_UL)
bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'thrifty',
top_n = 50, callbacks = list(cb.gblinear.history(sparse = TRUE)))
top_k = 50, callbacks = list(cb.gblinear.history(sparse = TRUE)))
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)
h <- xgb.gblinear.history(bst)
expect_equal(dim(h), c(n, ncol(dtrain) + 1))

View File

@@ -7,8 +7,8 @@ require(vcd, quietly = TRUE)
float_tolerance = 5e-6
# disable some tests for Win32
win32_flag = .Platform$OS.type == "windows" && .Machine$sizeof.pointer != 8
# disable some tests for 32-bit environment
flag_32bit = .Machine$sizeof.pointer != 8
set.seed(1982)
data(Arthritis)
@@ -44,7 +44,7 @@ mbst.GLM <- xgboost(data = as.matrix(iris[, -5]), label = mlabel, verbose = 0,
test_that("xgb.dump works", {
if (!win32_flag)
if (!flag_32bit)
expect_length(xgb.dump(bst.Tree), 200)
dump_file = file.path(tempdir(), 'xgb.model.dump')
expect_true(xgb.dump(bst.Tree, dump_file, with_stats = T))
@@ -54,7 +54,7 @@ test_that("xgb.dump works", {
# JSON format
dmp <- xgb.dump(bst.Tree, dump_format = "json")
expect_length(dmp, 1)
if (!win32_flag)
if (!flag_32bit)
expect_length(grep('nodeid', strsplit(dmp, '\n')[[1]]), 188)
})
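A hedged Python counterpart of the JSON dump check; `Booster.get_dump` is the Python-side analogue of R's `xgb.dump`:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(100, 4), np.random.randint(0, 2, 100)
bst = xgb.train({'objective': 'binary:logistic'}, xgb.DMatrix(X, label=y), 5)
dumps = bst.get_dump(dump_format='json')  # one JSON string per tree
print(sum(d.count('"nodeid"') for d in dumps))
```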
@@ -142,6 +142,44 @@ test_that("predict feature contributions works", {
}
})
test_that("SHAPs sum to predictions, with or without DART", {
d <- cbind(
x1 = rnorm(100),
x2 = rnorm(100),
x3 = rnorm(100))
y <- d[,"x1"] + d[,"x2"]^2 +
ifelse(d[,"x3"] > .5, d[,"x3"]^2, 2^d[,"x3"]) +
rnorm(100)
nrounds <- 30
for (booster in list("gbtree", "dart")) {
fit <- xgboost(
params = c(
list(
booster = booster,
objective = "reg:squarederror",
eval_metric = "rmse"),
if (booster == "dart")
list(rate_drop = .01, one_drop = T)),
data = d,
label = y,
nrounds = nrounds)
pr <- function(...)
predict(fit, newdata = d, ...)
pred <- pr()
shap <- pr(predcontrib = T)
shapi <- pr(predinteraction = T)
tol = 1e-5
expect_equal(rowSums(shap), pred, tol = tol)
expect_equal(apply(shapi, 1, sum), pred, tol = tol)
for (i in 1 : nrow(d))
for (f in list(rowSums, colSums))
expect_equal(f(shapi[i,,]), shap[i,], tol = tol)
}
})
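The same invariant can be checked from Python; a minimal sketch using `pred_contribs` (the last column of the contribution matrix is the bias term):

```python
import numpy as np
import xgboost as xgb

X = np.random.randn(100, 3)
y = X[:, 0] + X[:, 1] ** 2 + np.random.randn(100)
bst = xgb.train({'objective': 'reg:squarederror'}, xgb.DMatrix(X, label=y), 30)
dm = xgb.DMatrix(X)
pred = bst.predict(dm, output_margin=True)
shap = bst.predict(dm, pred_contribs=True)  # shape (n_rows, n_features + 1)
assert np.allclose(shap.sum(axis=1), pred, atol=1e-5)
```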
test_that("xgb-attribute functionality", {
val <- "my attribute value"
list.val <- list(my_attr=val, a=123, b='ok')
@@ -218,7 +256,7 @@ test_that("xgb.model.dt.tree works with and without feature names", {
names.dt.trees <- c("Tree", "Node", "ID", "Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover")
dt.tree <- xgb.model.dt.tree(feature_names = feature.names, model = bst.Tree)
expect_equal(names.dt.trees, names(dt.tree))
if (!win32_flag)
if (!flag_32bit)
expect_equal(dim(dt.tree), c(188, 10))
expect_output(str(dt.tree), 'Feature.*\\"Age\\"')
@@ -245,7 +283,7 @@ test_that("xgb.model.dt.tree throws error for gblinear", {
test_that("xgb.importance works with and without feature names", {
importance.Tree <- xgb.importance(feature_names = feature.names, model = bst.Tree)
if (!win32_flag)
if (!flag_32bit)
expect_equal(dim(importance.Tree), c(7, 4))
expect_equal(colnames(importance.Tree), c("Feature", "Gain", "Cover", "Frequency"))
expect_output(str(importance.Tree), 'Feature.*\\"Age\\"')
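For comparison, the Python route to the same numbers is `Booster.get_score`; a small sketch on synthetic data:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(200, 5), np.random.rand(200)
bst = xgb.train({'objective': 'reg:squarederror'}, xgb.DMatrix(X, label=y), 10)
for typ in ('weight', 'gain', 'cover'):
    print(typ, bst.get_score(importance_type=typ))
```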


@@ -14,25 +14,42 @@ test_that("interaction constraints for regression", {
bst <- xgboost(data = train, label = y, max_depth = 3,
eta = 0.1, nthread = 2, nrounds = 100, verbose = 0,
interaction_constraints = list(c(0,1)))
# Set all observations to have the same x3 values then increment
# by the same amount
preds <- lapply(c(1,2,3), function(x){
tmat <- matrix(c(x1,x2,rep(x,1000)), ncol=3)
return(predict(bst, tmat))
})
preds <- lapply(c(1,2,3), function(x){
tmat <- matrix(c(x1,x2,rep(x,1000)), ncol=3)
return(predict(bst, tmat))
})
# Check incrementing x3 has the same effect on all observations
# since x3 is constrained to be independent of x1 and x2
# and all observations start off from the same x3 value
diff1 <- preds[[2]] - preds[[1]]
test1 <- all(abs(diff1 - diff1[1]) < 1e-4)
diff2 <- preds[[3]] - preds[[2]]
test2 <- all(abs(diff2 - diff2[1]) < 1e-4)
diff1 <- preds[[2]] - preds[[1]]
test1 <- all(abs(diff1 - diff1[1]) < 1e-4)
diff2 <- preds[[3]] - preds[[2]]
test2 <- all(abs(diff2 - diff2[1]) < 1e-4)
expect_true({
test1 & test2
}, "Interaction Contraint Satisfied")
})
test_that("interaction constraints scientific representation", {
rows <- 10
## When number exceeds 1e5, R paste function uses scientific representation.
## See: https://github.com/dmlc/xgboost/issues/5179
cols <- 1e5+10
d <- matrix(rexp(rows, rate=.1), nrow=rows, ncol=cols)
y <- rnorm(rows)
dtrain <- xgb.DMatrix(data=d, info = list(label=y))
inc <- list(c(seq.int(from = 0, to = cols, by = 1)))
with_inc <- xgb.train(data=dtrain, tree_method='hist',
interaction_constraints=inc, nrounds=10)
without_inc <- xgb.train(data=dtrain, tree_method='hist', nrounds=10)
expect_equal(xgb.save.raw(with_inc), xgb.save.raw(without_inc))
})
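The property the first test relies on (a feature constrained away from all others can only shift predictions uniformly) is easy to see from Python as well; a hedged sketch, with the constraint given in the string form the parameter accepts:

```python
import numpy as np
import xgboost as xgb

X = np.random.randn(1000, 3)
y = X[:, 0] * X[:, 1] + X[:, 2]
params = {'tree_method': 'hist', 'interaction_constraints': '[[0, 1]]'}
bst = xgb.train(params, xgb.DMatrix(X, label=y), num_boost_round=50)

base = X.copy()
base[:, 2] = 1.0
shifted = X.copy()
shifted[:, 2] = 2.0
diff = bst.predict(xgb.DMatrix(shifted)) - bst.predict(xgb.DMatrix(base))
# feature 2 never shares a path with features 0/1, so the shift is constant
assert np.allclose(diff, diff[0], atol=1e-4)
```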


@@ -410,7 +410,7 @@ In some very specific cases, like when you want to pilot **XGBoost** from `caret
```{r saveLoadRBinVectorModel, message=F, warning=F}
# save model to R's raw vector
rawVec <- xgb.save.raw(bst)
rawVec <- xgb.serialize(bst)
# print class
print(class(rawVec))


@@ -7,6 +7,7 @@
[![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE)
[![CRAN Status Badge](http://www.r-pkg.org/badges/version/xgboost)](http://cran.r-project.org/web/packages/xgboost)
[![PyPI version](https://badge.fury.io/py/xgboost.svg)](https://pypi.python.org/pypi/xgboost/)
[![Optuna](https://img.shields.io/badge/Optuna-integrated-blue)](https://optuna.org)
[Community](https://xgboost.ai/community) |
[Documentation](https://xgboost.readthedocs.org) |
@@ -17,16 +18,16 @@
XGBoost is an optimized distributed gradient boosting library designed to be highly ***efficient***, ***flexible*** and ***portable***.
It implements machine learning algorithms under the [Gradient Boosting](https://en.wikipedia.org/wiki/Gradient_boosting) framework.
XGBoost provides a parallel tree boosting (also known as GBDT, GBM) that solve many data science problems in a fast and accurate way.
The same code runs on major distributed environment (Hadoop, SGE, MPI) and can solve problems beyond billions of examples.
The same code runs on major distributed environment (Kubernetes, Hadoop, SGE, MPI, Dask) and can solve problems beyond billions of examples.
License
-------
© Contributors, 2016. Licensed under an [Apache-2](https://github.com/dmlc/xgboost/blob/master/LICENSE) license.
© Contributors, 2019. Licensed under an [Apache-2](https://github.com/dmlc/xgboost/blob/master/LICENSE) license.
Contribute to XGBoost
---------------------
XGBoost has been developed and used by a group of active community members. Your help is very valuable to make the package better for everyone.
Checkout the [Community Page](https://xgboost.ai/community)
Checkout the [Community Page](https://xgboost.ai/community).
Reference
---------
@@ -38,7 +39,7 @@ Sponsors
Become a sponsor and get a logo here. See details at [Sponsoring the XGBoost Project](https://xgboost.ai/sponsors). The funds are used to defray the cost of continuous integration and testing infrastructure (https://xgboost-ci.net).
## Open Source Collective sponsors
[![Backers on Open Collective](https://opencollective.com/xgboost/backers/badge.svg)](#backers) [![Sponsors on Open Collective](https://opencollective.com/xgboost/sponsors/badge.svg)](#sponsors)
[![Backers on Open Collective](https://opencollective.com/xgboost/backers/badge.svg)](#backers) [![Sponsors on Open Collective](https://opencollective.com/xgboost/sponsors/badge.svg)](#sponsors)
### Sponsors
[[Become a sponsor](https://opencollective.com/xgboost#sponsor)]


@@ -1,5 +1,5 @@
/*!
* Copyright 2015 by Contributors.
* Copyright 2015-2019 by Contributors.
* \brief XGBoost Amalgamation.
* This offers an alternative way to compile the entire library from this single file.
*
@@ -14,6 +14,7 @@
#include "../src/metric/elementwise_metric.cc"
#include "../src/metric/multiclass_metric.cc"
#include "../src/metric/rank_metric.cc"
#include "../src/metric/survival_metric.cc"
// objectives
#include "../src/objective/objective.cc"
@@ -21,29 +22,32 @@
#include "../src/objective/multiclass_obj.cc"
#include "../src/objective/rank_obj.cc"
#include "../src/objective/hinge.cc"
#include "../src/objective/aft_obj.cc"
// gbms
#include "../src/gbm/gbm.cc"
#include "../src/gbm/gbtree.cc"
#include "../src/gbm/gbtree_model.cc"
#include "../src/gbm/gblinear.cc"
#include "../src/gbm/gblinear_model.cc"
// data
#include "../src/data/data.cc"
#include "../src/data/simple_csr_source.cc"
#include "../src/data/simple_dmatrix.cc"
#include "../src/data/sparse_page_raw_format.cc"
#include "../src/data/ellpack_page.cc"
#include "../src/data/ellpack_page_source.cc"
// prediction
#include "../src/predictor/predictor.cc"
#include "../src/predictor/cpu_predictor.cc"
#if DMLC_ENABLE_STD_THREAD
#include "../src/data/sparse_page_source.cc"
#include "../src/data/sparse_page_dmatrix.cc"
#include "../src/data/sparse_page_writer.cc"
#endif
// tress
// trees
#include "../src/tree/param.cc"
#include "../src/tree/split_evaluator.cc"
#include "../src/tree/tree_model.cc"
#include "../src/tree/tree_updater.cc"
@@ -54,6 +58,7 @@
#include "../src/tree/updater_sync.cc"
#include "../src/tree/updater_histmaker.cc"
#include "../src/tree/updater_skmaker.cc"
#include "../src/tree/constraints.cc"
// linear
#include "../src/linear/linear_updater.cc"
@@ -64,8 +69,14 @@
#include "../src/learner.cc"
#include "../src/logging.cc"
#include "../src/common/common.cc"
#include "../src/common/timer.cc"
#include "../src/common/host_device_vector.cc"
#include "../src/common/hist_util.cc"
#include "../src/common/json.cc"
#include "../src/common/io.cc"
#include "../src/common/survival_util.cc"
#include "../src/common/probability_distribution.cc"
#include "../src/common/version.cc"
// c_api
#include "../src/c_api/c_api.cc"


@@ -2,10 +2,6 @@ environment:
R_ARCH: x64
USE_RTOOLS: true
matrix:
- target: msvc
ver: 2013
generator: "Visual Studio 12 2013 Win64"
configuration: Release
- target: msvc
ver: 2015
generator: "Visual Studio 14 2015 Win64"
@@ -98,7 +94,7 @@ build_script:
cmake .. -G"%generator%" -DCMAKE_CONFIGURATION_TYPES="Release" -DR_LIB=ON &&
cmake --build . --target install --config Release
)
- if /i "%target%" == "jvm" cd jvm-packages && mvn test -pl :xgboost4j
- if /i "%target%" == "jvm" cd jvm-packages && mvn test -pl :xgboost4j_2.12
test_script:
- cd %APPVEYOR_BUILD_FOLDER%


@@ -6,7 +6,7 @@ function (run_doxygen)
endif (NOT DOXYGEN_DOT_FOUND)
configure_file(
${PROJECT_SOURCE_DIR}/doc/Doxyfile.in
${xgboost_SOURCE_DIR}/doc/Doxyfile.in
${CMAKE_CURRENT_BINARY_DIR}/Doxyfile @ONLY)
add_custom_target( doc_doxygen ALL
COMMAND ${DOXYGEN_EXECUTABLE} ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile


@@ -0,0 +1,22 @@
function (find_prefetch_intrinsics)
include(CheckCXXSourceCompiles)
check_cxx_source_compiles("
#include <xmmintrin.h>
int main() {
char data = 0;
const char* address = &data;
_mm_prefetch(address, _MM_HINT_NTA);
return 0;
}
" XGBOOST_MM_PREFETCH_PRESENT)
check_cxx_source_compiles("
int main() {
char data = 0;
const char* address = &data;
__builtin_prefetch(address, 0, 0);
return 0;
}
" XGBOOST_BUILTIN_PREFETCH_PRESENT)
set(XGBOOST_MM_PREFETCH_PRESENT ${XGBOOST_MM_PREFETCH_PRESENT} PARENT_SCOPE)
set(XGBOOST_BUILTIN_PREFETCH_PRESENT ${XGBOOST_BUILTIN_PREFETCH_PRESENT} PARENT_SCOPE)
endfunction (find_prefetch_intrinsics)

cmake/Python_version.in Normal file

@@ -0,0 +1 @@
@xgboost_VERSION_MAJOR@.@xgboost_VERSION_MINOR@.@xgboost_VERSION_PATCH@


@@ -4,24 +4,29 @@
# enable_sanitizers("address;leak")
# Add flags
macro(enable_sanitizer santizer)
if(${santizer} MATCHES "address")
macro(enable_sanitizer sanitizer)
if(${sanitizer} MATCHES "address")
find_package(ASan REQUIRED)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=address")
link_libraries(${ASan_LIBRARY})
elseif(${santizer} MATCHES "thread")
elseif(${sanitizer} MATCHES "thread")
find_package(TSan REQUIRED)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=thread")
link_libraries(${TSan_LIBRARY})
elseif(${santizer} MATCHES "leak")
elseif(${sanitizer} MATCHES "leak")
find_package(LSan REQUIRED)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=leak")
link_libraries(${LSan_LIBRARY})
elseif(${sanitizer} MATCHES "undefined")
find_package(UBSan REQUIRED)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=undefined -fno-sanitize-recover=undefined")
link_libraries(${UBSan_LIBRARY})
else()
message(FATAL_ERROR "Santizer ${santizer} not supported.")
message(FATAL_ERROR "Santizer ${sanitizer} not supported.")
endif()
endmacro()


@@ -58,9 +58,18 @@ function(set_output_directory target dir)
RUNTIME_OUTPUT_DIRECTORY ${dir}
RUNTIME_OUTPUT_DIRECTORY_DEBUG ${dir}
RUNTIME_OUTPUT_DIRECTORY_RELEASE ${dir}
RUNTIME_OUTPUT_DIRECTORY_RELWITHDEBINFO ${dir}
RUNTIME_OUTPUT_DIRECTORY_MINSIZEREL ${dir}
LIBRARY_OUTPUT_DIRECTORY ${dir}
LIBRARY_OUTPUT_DIRECTORY_DEBUG ${dir}
LIBRARY_OUTPUT_DIRECTORY_RELEASE ${dir}
LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO ${dir}
LIBRARY_OUTPUT_DIRECTORY_MINSIZEREL ${dir}
ARCHIVE_OUTPUT_DIRECTORY ${dir}
ARCHIVE_OUTPUT_DIRECTORY_DEBUG ${dir}
ARCHIVE_OUTPUT_DIRECTORY_RELEASE ${dir}
ARCHIVE_OUTPUT_DIRECTORY_RELWITHDEBINFO ${dir}
ARCHIVE_OUTPUT_DIRECTORY_MINSIZEREL ${dir}
)
endfunction(set_output_directory)
@@ -111,7 +120,7 @@ DESTINATION \"${build_dir}/bak\")")
install(CODE "file(REMOVE_RECURSE \"${build_dir}/R-package\")")
install(
DIRECTORY "${PROJECT_SOURCE_DIR}/R-package"
DIRECTORY "${xgboost_SOURCE_DIR}/R-package"
DESTINATION "${build_dir}"
REGEX "src/*" EXCLUDE
REGEX "R-package/configure" EXCLUDE

cmake/Version.cmake Normal file

@@ -0,0 +1,9 @@
function (write_version)
message(STATUS "xgboost VERSION: ${xgboost_VERSION}")
configure_file(
${xgboost_SOURCE_DIR}/cmake/version_config.h.in
${xgboost_SOURCE_DIR}/include/xgboost/version_config.h @ONLY)
configure_file(
${xgboost_SOURCE_DIR}/cmake/Python_version.in
${xgboost_SOURCE_DIR}/python-package/xgboost/VERSION @ONLY)
endfunction (write_version)


@@ -1,7 +1,7 @@
set(ASan_LIB_NAME ASan)
find_library(ASan_LIBRARY
NAMES libasan.so libasan.so.4 libasan.so.3 libasan.so.2 libasan.so.1 libasan.so.0
NAMES libasan.so libasan.so.5 libasan.so.4 libasan.so.3 libasan.so.2 libasan.so.1 libasan.so.0
PATHS ${SANITIZER_PATH} /usr/lib64 /usr/lib /usr/local/lib64 /usr/local/lib ${CMAKE_PREFIX_PATH}/lib)
include(FindPackageHandleStandardArgs)


@@ -50,10 +50,10 @@ function(create_rlib_for_msvc)
\nDo you have Rtools installed with its MinGW's bin/ in PATH?")
endif()
# extract symbols from R.dll into R.def and R.lib import library
execute_process(COMMAND gendef
execute_process(COMMAND ${GENDEF_EXE}
"-" "${LIBR_LIB_DIR}/R.dll"
OUTPUT_FILE "${CMAKE_CURRENT_BINARY_DIR}/R.def")
execute_process(COMMAND dlltool
execute_process(COMMAND ${DLLTOOL_EXE}
"--input-def" "${CMAKE_CURRENT_BINARY_DIR}/R.def"
"--output-lib" "${CMAKE_CURRENT_BINARY_DIR}/R.lib")
endfunction(create_rlib_for_msvc)
@@ -103,12 +103,12 @@ else()
)
# ask R for the include dir
execute_process(
COMMAND ${LIBR_EXECUTABLE} "--slave" "--no-save" "-e" "cat(R.home('include'))"
COMMAND ${LIBR_EXECUTABLE} "--slave" "--vanilla" "-e" "cat(R.home('include'))"
OUTPUT_VARIABLE LIBR_INCLUDE_DIRS
)
# ask R for the lib dir
execute_process(
COMMAND ${LIBR_EXECUTABLE} "--slave" "--no-save" "-e" "cat(R.home('lib'))"
COMMAND ${LIBR_EXECUTABLE} "--slave" "--vanilla" "-e" "cat(R.home('lib'))"
OUTPUT_VARIABLE LIBR_LIB_DIR
)


@@ -0,0 +1,23 @@
if (NVML_LIBRARY)
unset(NVML_LIBRARY CACHE)
endif(NVML_LIBRARY)
set(NVML_LIB_NAME nvml)
find_path(NVML_INCLUDE_DIR
NAMES nvml.h
PATHS ${CUDA_HOME}/include ${CUDA_INCLUDE} /usr/local/cuda/include)
find_library(NVML_LIBRARY
NAMES nvidia-ml)
message(STATUS "Using nvml library: ${NVML_LIBRARY}")
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(NVML DEFAULT_MSG
NVML_INCLUDE_DIR NVML_LIBRARY)
mark_as_advanced(
NVML_INCLUDE_DIR
NVML_LIBRARY
)


@@ -0,0 +1,13 @@
set(UBSan_LIB_NAME UBSan)
find_library(UBSan_LIBRARY
NAMES libubsan.so libubsan.so.5 libubsan.so.4 libubsan.so.3 libubsan.so.2 libubsan.so.1 libubsan.so.0
PATHS ${SANITIZER_PATH} /usr/lib64 /usr/lib /usr/local/lib64 /usr/local/lib ${CMAKE_PREFIX_PATH}/lib)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(UBSan DEFAULT_MSG
UBSan_LIBRARY)
mark_as_advanced(
UBSan_LIBRARY
UBSan_LIB_NAME)

cmake/version_config.h.in Normal file

@@ -0,0 +1,11 @@
/*!
* Copyright 2019 XGBoost contributors
*/
#ifndef XGBOOST_VERSION_CONFIG_H_
#define XGBOOST_VERSION_CONFIG_H_
#define XGBOOST_VER_MAJOR @xgboost_VERSION_MAJOR@
#define XGBOOST_VER_MINOR @xgboost_VERSION_MINOR@
#define XGBOOST_VER_PATCH @xgboost_VERSION_PATCH@
#endif // XGBOOST_VERSION_CONFIG_H_

cmake/xgboost.pc.in Normal file

@@ -0,0 +1,12 @@
prefix=@CMAKE_INSTALL_PREFIX@
version=@xgboost_VERSION@
exec_prefix=${prefix}/bin
libdir=${prefix}/lib
includedir=${prefix}/include
Name: xgboost
Description: XGBoost - Scalable and Flexible Gradient Boosting.
Version: ${version}
Cflags: -I${includedir}
Libs: -L${libdir} -lxgboost


@@ -16,6 +16,7 @@ Contents
- [Tutorials](#tutorials)
- [Usecases](#usecases)
- [Tools using XGBoost](#tools-using-xgboost)
- [Integrations with 3rd party software](#integrations-with-3rd-party-software)
- [Awards](#awards)
- [Windows Binaries](#windows-binaries)
@@ -114,6 +115,7 @@ Please send pull requests if you find ones that are missing here.
- [Complete Guide to Parameter Tuning in XGBoost](http://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/) by Aarshay Jain
- [Practical XGBoost in Python online course](http://education.parrotprediction.teachable.com/courses/practical-xgboost-in-python) by Parrot Prediction
- [Spark and XGBoost using Scala](http://www.elenacuoco.com/2016/10/10/scala-spark-xgboost-classification/) by Elena Cuoco
## Usecases
If you have a particular use case of XGBoost that you would like to highlight,
send a PR to add a one-sentence description :)
@@ -126,14 +128,17 @@ Send a PR to add a one sentence description:)
- [Hanjing Su](https://www.52cs.org) from Tencent data platform team: "We use distributed XGBoost for click through prediction in wechat shopping and lookalikes. The problems involve hundreds millions of users and thousands of features. XGBoost is cleanly designed and can be easily integrated into our production environment, reducing our cost in developments."
- [CNevd](https://github.com/CNevd) from autohome.com ad platform team: "Distributed XGBoost is used for click through rate prediction in our display advertising, XGBoost is highly efficient and flexible and can be easily used on our distributed platform, our ctr made a great improvement with hundred millions samples and millions features due to this awesome XGBoost"
## Tools using XGBoost
- [BayesBoost](https://github.com/mpearmain/BayesBoost) - Bayesian Optimization using xgboost and sklearn API
- [gp_xgboost_gridsearch](https://github.com/vatsan/gp_xgboost_gridsearch) - In-database parallel grid-search for XGBoost on [Greenplum](https://github.com/greenplum-db/gpdb) using PL/Python
- [tpot](https://github.com/rhiever/tpot) - A Python tool that automatically creates and optimizes machine learning pipelines using genetic programming.
## Integrations with 3rd party software
Open source integrations with XGBoost:
* [Neptune.ai](http://neptune.ai/) - Experiment management and collaboration tool for ML/DL/RL specialists. Integration has a form of the [XGBoost callback](https://docs.neptune.ai/integrations/xgboost.html) that automatically logs training and evaluation metrics, as well as saved model (booster), feature importance chart and visualized trees.
* [Optuna](https://optuna.org/) - An open source hyperparameter optimization framework to automate hyperparameter search. Optuna integrates with XGBoost in the [XGBoostPruningCallback](https://optuna.readthedocs.io/en/stable/reference/integration.html#optuna.integration.XGBoostPruningCallback) that let users easily prune unpromising trials.
## Awards
- [John Chambers Award](http://stat-computing.org/awards/jmc/winners.html) - 2016 Winner: XGBoost R Package, by Tong He (Simon Fraser University) and Tianqi Chen (University of Washington)
- [InfoWorlds 2019 Technology of the Year Award](https://www.infoworld.com/article/3336072/application-development/infoworlds-2019-technology-of-the-year-award-winners.html)


@@ -0,0 +1,54 @@
"""
Demo for survival analysis (regression) using Accelerated Failure Time (AFT) model
"""
from sklearn.model_selection import ShuffleSplit
import pandas as pd
import numpy as np
import xgboost as xgb
# The Veterans' Administration Lung Cancer Trial
# The Statistical Analysis of Failure Time Data by Kalbfleisch J. and Prentice R (1980)
df = pd.read_csv('../data/veterans_lung_cancer.csv')
print('Training data:')
print(df)
# Split features and labels
y_lower_bound = df['Survival_label_lower_bound']
y_upper_bound = df['Survival_label_upper_bound']
X = df.drop(['Survival_label_lower_bound', 'Survival_label_upper_bound'], axis=1)
# Split data into training and validation sets
rs = ShuffleSplit(n_splits=2, test_size=.7, random_state=0)
train_index, valid_index = next(rs.split(X))
dtrain = xgb.DMatrix(X.values[train_index, :])
dtrain.set_float_info('label_lower_bound', y_lower_bound[train_index])
dtrain.set_float_info('label_upper_bound', y_upper_bound[train_index])
dvalid = xgb.DMatrix(X.values[valid_index, :])
dvalid.set_float_info('label_lower_bound', y_lower_bound[valid_index])
dvalid.set_float_info('label_upper_bound', y_upper_bound[valid_index])
# Train gradient boosted trees using AFT loss and metric
params = {'verbosity': 0,
'objective': 'survival:aft',
'eval_metric': 'aft-nloglik',
'tree_method': 'hist',
'learning_rate': 0.05,
'aft_loss_distribution': 'normal',
'aft_loss_distribution_scale': 1.20,
'max_depth': 6,
'lambda': 0.01,
'alpha': 0.02}
bst = xgb.train(params, dtrain, num_boost_round=10000,
evals=[(dtrain, 'train'), (dvalid, 'valid')],
early_stopping_rounds=50)
# Run prediction on the validation set
df = pd.DataFrame({'Label (lower bound)': y_lower_bound[valid_index],
'Label (upper bound)': y_upper_bound[valid_index],
'Predicted label': bst.predict(dvalid)})
print(df)
# Show only data points with right-censored labels
print(df[np.isinf(df['Label (upper bound)'])])
# Save trained model
bst.save_model('aft_model.json')
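To score later without retraining, the saved file can be loaded back into a fresh booster; a short continuation of the demo above:

```python
bst2 = xgb.Booster()
bst2.load_model('aft_model.json')
print(bst2.predict(dvalid)[:5])  # same predictions as bst above
```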


@@ -0,0 +1,78 @@
"""
Demo for survival analysis (regression) using Accelerated Failure Time (AFT) model, using Optuna
to tune hyperparameters
"""
from sklearn.model_selection import ShuffleSplit
import pandas as pd
import numpy as np
import xgboost as xgb
import optuna
# The Veterans' Administration Lung Cancer Trial
# The Statistical Analysis of Failure Time Data by Kalbfleisch J. and Prentice R (1980)
df = pd.read_csv('../data/veterans_lung_cancer.csv')
print('Training data:')
print(df)
# Split features and labels
y_lower_bound = df['Survival_label_lower_bound']
y_upper_bound = df['Survival_label_upper_bound']
X = df.drop(['Survival_label_lower_bound', 'Survival_label_upper_bound'], axis=1)
# Split data into training and validation sets
rs = ShuffleSplit(n_splits=2, test_size=.7, random_state=0)
train_index, valid_index = next(rs.split(X))
dtrain = xgb.DMatrix(X.values[train_index, :])
dtrain.set_float_info('label_lower_bound', y_lower_bound[train_index])
dtrain.set_float_info('label_upper_bound', y_upper_bound[train_index])
dvalid = xgb.DMatrix(X.values[valid_index, :])
dvalid.set_float_info('label_lower_bound', y_lower_bound[valid_index])
dvalid.set_float_info('label_upper_bound', y_upper_bound[valid_index])
# Define hyperparameter search space
base_params = {'verbosity': 0,
'objective': 'survival:aft',
'eval_metric': 'aft-nloglik',
'tree_method': 'hist'} # Hyperparameters common to all trials
def objective(trial):
params = {'learning_rate': trial.suggest_loguniform('learning_rate', 0.01, 1.0),
'aft_loss_distribution': trial.suggest_categorical('aft_loss_distribution',
['normal', 'logistic', 'extreme']),
'aft_loss_distribution_scale': trial.suggest_loguniform('aft_loss_distribution_scale', 0.1, 10.0),
'max_depth': trial.suggest_int('max_depth', 3, 8),
'lambda': trial.suggest_loguniform('lambda', 1e-8, 1.0),
'alpha': trial.suggest_loguniform('alpha', 1e-8, 1.0)} # Search space
params.update(base_params)
pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'valid-aft-nloglik')
bst = xgb.train(params, dtrain, num_boost_round=10000,
evals=[(dtrain, 'train'), (dvalid, 'valid')],
early_stopping_rounds=50, verbose_eval=False, callbacks=[pruning_callback])
if bst.best_iteration >= 25:
return bst.best_score
else:
return np.inf # Reject models with < 25 trees
# Run hyperparameter search
study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=200)
print('Completed hyperparameter tuning with best aft-nloglik = {}.'.format(study.best_trial.value))
params = {}
params.update(base_params)
params.update(study.best_trial.params)
# Re-run training with the best hyperparameter combination
print('Re-running the best trial... params = {}'.format(params))
bst = xgb.train(params, dtrain, num_boost_round=10000,
evals=[(dtrain, 'train'), (dvalid, 'valid')],
early_stopping_rounds=50)
# Run prediction on the validation set
df = pd.DataFrame({'Label (lower bound)': y_lower_bound[valid_index],
'Label (upper bound)': y_upper_bound[valid_index],
'Predicted label': bst.predict(dvalid)})
print(df)
# Show only data points with right-censored labels
print(df[np.isinf(df['Label (upper bound)'])])
# Save trained model
bst.save_model('aft_best_model.json')


@@ -0,0 +1,97 @@
"""
Visual demo for survival analysis (regression) with Accelerated Failure Time (AFT) model.
This demo uses 1D toy data and visualizes how XGBoost fits a tree ensemble. The ensemble model
starts out as a flat line and evolves into a step function in order to account for all ranged
labels.
"""
import numpy as np
import xgboost as xgb
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 13})
# Function to visualize censored labels
def plot_censored_labels(X, y_lower, y_upper):
def replace_inf(x, target_value):
x[np.isinf(x)] = target_value
return x
plt.plot(X, y_lower, 'o', label='y_lower', color='blue')
plt.plot(X, y_upper, 'o', label='y_upper', color='fuchsia')
plt.vlines(X, ymin=replace_inf(y_lower, 0.01), ymax=replace_inf(y_upper, 1000),
label='Range for y', color='gray')
# Toy data
X = np.array([1, 2, 3, 4, 5]).reshape((-1, 1))
INF = np.inf
y_lower = np.array([ 10, 15, -INF, 30, 100])
y_upper = np.array([INF, INF, 20, 50, INF])
# Visualize toy data
plt.figure(figsize=(5, 4))
plot_censored_labels(X, y_lower, y_upper)
plt.ylim((6, 200))
plt.legend(loc='lower right')
plt.title('Toy data')
plt.xlabel('Input feature')
plt.ylabel('Label')
plt.yscale('log')
plt.tight_layout()
plt.show(block=True)
# Will be used to visualize XGBoost model
grid_pts = np.linspace(0.8, 5.2, 1000).reshape((-1, 1))
# Train AFT model using XGBoost
dmat = xgb.DMatrix(X)
dmat.set_float_info('label_lower_bound', y_lower)
dmat.set_float_info('label_upper_bound', y_upper)
params = {'max_depth': 3, 'objective':'survival:aft', 'min_child_weight': 0}
accuracy_history = []
def plot_intermediate_model_callback(env):
"""Custom callback to plot intermediate models"""
# Compute y_pred = prediction using the intermediate model, at current boosting iteration
y_pred = env.model.predict(dmat)
# "Accuracy" = the number of data points whose ranged label (y_lower, y_upper) includes
# the corresponding predicted label (y_pred)
acc = np.sum(np.logical_and(y_pred >= y_lower, y_pred <= y_upper)/len(X) * 100)
accuracy_history.append(acc)
# Plot ranged labels as well as predictions by the model
plt.subplot(5, 3, env.iteration + 1)
plot_censored_labels(X, y_lower, y_upper)
y_pred_grid_pts = env.model.predict(xgb.DMatrix(grid_pts))
plt.plot(grid_pts, y_pred_grid_pts, 'r-', label='XGBoost AFT model', linewidth=4)
plt.title('Iteration {}'.format(env.iteration), x=0.5, y=0.8)
plt.xlim((0.8, 5.2))
plt.ylim((1 if np.min(y_pred) < 6 else 6, 200))
plt.yscale('log')
res = {}
plt.figure(figsize=(12,13))
bst = xgb.train(params, dmat, 15, [(dmat, 'train')], evals_result=res,
callbacks=[plot_intermediate_model_callback])
plt.tight_layout()
plt.legend(loc='lower center', ncol=4,
bbox_to_anchor=(0.5, 0),
bbox_transform=plt.gcf().transFigure)
plt.tight_layout()
# Plot negative log likelihood over boosting iterations
plt.figure(figsize=(8,3))
plt.subplot(1, 2, 1)
plt.plot(res['train']['aft-nloglik'], 'b-o', label='aft-nloglik')
plt.xlabel('# Boosting Iterations')
plt.legend(loc='best')
# Plot "accuracy" over boosting iterations
# "Accuracy" = the number of data points whose ranged label (y_lower, y_upper) includes
# the corresponding predicted label (y_pred)
plt.subplot(1, 2, 2)
plt.plot(accuracy_history, 'r-o', label='Accuracy (%)')
plt.xlabel('# Boosting Iterations')
plt.legend(loc='best')
plt.tight_layout()
plt.show()


@@ -156,7 +156,7 @@ If you want to continue boosting from existing model, say 0002.model, use
```
xgboost will load from 0002.model and continue boosting for 2 rounds, saving the output to continue.model. However, beware that the training and evaluation data specified in mushroom.conf should not change when you use this function.
#### Use Multi-Threading
When you are working with a large dataset, you may want to take advantage of parallelism. If your compiler supports OpenMP, xgboost is naturally multi-threaded, to set number of parallel running add ```nthread``` parameter to you configuration.
When you are working with a large dataset, you may want to take advantage of parallelism. If your compiler supports OpenMP, xgboost is naturally multi-threaded, to set number of parallel running add ```nthread``` parameter to your configuration.
E.g. ```nthread=10```
Set nthread to the number of physical CPU cores (on Unix, this can be found using ```lscpu```)


@@ -0,0 +1,4 @@
cmake_minimum_required(VERSION 3.12)
find_package(xgboost REQUIRED)
add_executable(api-demo c-api-demo.c)
target_link_libraries(api-demo xgboost::xgboost)


@@ -36,13 +36,12 @@ int main(int argc, char** argv) {
// https://xgboost.readthedocs.io/en/latest/parameter.html
safe_xgboost(XGBoosterSetParam(booster, "tree_method", use_gpu ? "gpu_hist" : "hist"));
if (use_gpu) {
// set the number of GPUs and the first GPU to use;
// set the GPU to use;
// this is not necessary, but provided here as an illustration
safe_xgboost(XGBoosterSetParam(booster, "n_gpus", "1"));
safe_xgboost(XGBoosterSetParam(booster, "gpu_id", "0"));
} else {
// avoid evaluating objective and metric on a GPU
safe_xgboost(XGBoosterSetParam(booster, "n_gpus", "0"));
safe_xgboost(XGBoosterSetParam(booster, "gpu_id", "-1"));
}
safe_xgboost(XGBoosterSetParam(booster, "objective", "binary:logistic"));
@@ -66,7 +65,7 @@ int main(int argc, char** argv) {
const float* out_result = NULL;
int n_print = 10;
safe_xgboost(XGBoosterPredict(booster, dtest, 0, 0, &out_len, &out_result));
safe_xgboost(XGBoosterPredict(booster, dtest, 0, 0, 0, &out_len, &out_result));
printf("y_pred: ");
for (int i = 0; i < n_print; ++i) {
printf("%1.4f ", out_result[i]);

demo/dask/README.md Normal file

@@ -0,0 +1,6 @@
Dask
====
This directory contains some demonstrations for using `dask` with `XGBoost`.
For an overview, see
https://xgboost.readthedocs.io/en/latest/tutorials/dask.html .

demo/dask/cpu_training.py Normal file

@@ -0,0 +1,41 @@
import xgboost as xgb
from xgboost.dask import DaskDMatrix
from dask.distributed import Client
from dask.distributed import LocalCluster
from dask import array as da
def main(client):
# generate some random data for demonstration
m = 100000
n = 100
X = da.random.random(size=(m, n), chunks=100)
y = da.random.random(size=(m, ), chunks=100)
# DaskDMatrix acts like a normal DMatrix and works as a proxy for local
# DMatrices scattered around the workers.
dtrain = DaskDMatrix(client, X, y)
# Use train method from xgboost.dask instead of xgboost. This
# distributed version of train returns a dictionary containing the
# resulting booster and evaluation history obtained from
# evaluation metrics.
output = xgb.dask.train(client,
{'verbosity': 1,
'tree_method': 'hist'},
dtrain,
num_boost_round=4, evals=[(dtrain, 'train')])
bst = output['booster']
history = output['history']
# you can pass output directly into `predict` too.
prediction = xgb.dask.predict(client, bst, dtrain)
print('Evaluation history:', history)
return prediction
if __name__ == '__main__':
# or use other clusters for scaling
with LocalCluster(n_workers=7, threads_per_worker=4) as cluster:
with Client(cluster) as client:
main(client)
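The `prediction` returned by `xgb.dask.predict` is a lazy dask array; nothing is materialized until `.compute()` is called (as the GPU variant below does). A small sketch of how the entry point above might be extended, reusing the imports and `main` defined in this script:

```python
with LocalCluster(n_workers=7, threads_per_worker=4) as cluster:
    with Client(cluster) as client:
        prediction = main(client)
        print(prediction[:10].compute())  # pull the first ten values locally
```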

demo/dask/gpu_training.py Normal file

@@ -0,0 +1,45 @@
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask import array as da
import xgboost as xgb
from xgboost.dask import DaskDMatrix
def main(client):
# generate some random data for demonstration
m = 100000
n = 100
X = da.random.random(size=(m, n), chunks=100)
y = da.random.random(size=(m, ), chunks=100)
# DaskDMatrix acts like a normal DMatrix and works as a proxy for local
# DMatrices scattered around the workers.
dtrain = DaskDMatrix(client, X, y)
# Use train method from xgboost.dask instead of xgboost. This
# distributed version of train returns a dictionary containing the
# resulting booster and evaluation history obtained from
# evaluation metrics.
output = xgb.dask.train(client,
{'verbosity': 2,
# Golden line for GPU training
'tree_method': 'gpu_hist'},
dtrain,
num_boost_round=4, evals=[(dtrain, 'train')])
bst = output['booster']
history = output['history']
# you can pass output directly into `predict` too.
prediction = xgb.dask.predict(client, bst, dtrain)
prediction = prediction.compute()
print('Evaluation history:', history)
return prediction
if __name__ == '__main__':
# `LocalCUDACluster` is used for assigning GPU to XGBoost processes. Here
# `n_workers` represents the number of GPUs since we use one GPU per worker
# process.
with LocalCUDACluster(n_workers=2, threads_per_worker=4) as cluster:
with Client(cluster) as client:
main(client)


@@ -0,0 +1,39 @@
'''Dask interface demo:
Use scikit-learn regressor interface with CPU histogram tree method.'''
from dask.distributed import Client
from dask.distributed import LocalCluster
from dask import array as da
import xgboost
def main(client):
# generate some random data for demonstration
n = 100
m = 10000
partition_size = 100
X = da.random.random((m, n), partition_size)
y = da.random.random(m, partition_size)
regressor = xgboost.dask.DaskXGBRegressor(verbosity=1, n_estimators=2)
regressor.set_params(tree_method='hist')
# assigning client here is optional
regressor.client = client
regressor.fit(X, y, eval_set=[(X, y)])
prediction = regressor.predict(X)
bst = regressor.get_booster()
history = regressor.evals_result()
print('Evaluation history:', history)
# returned prediction is always a dask array.
assert isinstance(prediction, da.Array)
return bst # returning the trained model
if __name__ == '__main__':
# or use other clusters for scaling
with LocalCluster(n_workers=4, threads_per_worker=1) as cluster:
with Client(cluster) as client:
main(client)


@@ -0,0 +1,42 @@
'''Dask interface demo:
Use scikit-learn regressor interface with GPU histogram tree method.'''
from dask.distributed import Client
# It's recommended to use dask_cuda for GPU assignment
from dask_cuda import LocalCUDACluster
from dask import array as da
import xgboost
def main(client):
# generate some random data for demonstration
n = 100
m = 1000000
partition_size = 10000
X = da.random.random((m, n), partition_size)
y = da.random.random(m, partition_size)
regressor = xgboost.dask.DaskXGBRegressor(verbosity=1)
regressor.set_params(tree_method='gpu_hist')
# assigning client here is optional
regressor.client = client
regressor.fit(X, y, eval_set=[(X, y)])
prediction = regressor.predict(X)
bst = regressor.get_booster()
history = regressor.evals_result()
print('Evaluation history:', history)
# returned prediction is always a dask array.
assert isinstance(prediction, da.Array)
return bst # returning the trained model
if __name__ == '__main__':
# With dask_cuda, one can scale XGBoost up to arbitrary GPU clusters.
# `LocalCUDACluster` is used here only for demonstration purposes.
with LocalCUDACluster() as cluster:
with Client(cluster) as client:
main(client)


@@ -0,0 +1,138 @@
Survival_label_lower_bound,Survival_label_upper_bound,Age_in_years,Karnofsky_score,Months_from_Diagnosis,Celltype=adeno,Celltype=large,Celltype=smallcell,Celltype=squamous,Prior_therapy=no,Prior_therapy=yes,Treatment=standard,Treatment=test
72.0,72.0,69.0,60.0,7.0,0,0,0,1,1,0,1,0
411.0,411.0,64.0,70.0,5.0,0,0,0,1,0,1,1,0
228.0,228.0,38.0,60.0,3.0,0,0,0,1,1,0,1,0
126.0,126.0,63.0,60.0,9.0,0,0,0,1,0,1,1,0
118.0,118.0,65.0,70.0,11.0,0,0,0,1,0,1,1,0
10.0,10.0,49.0,20.0,5.0,0,0,0,1,1,0,1,0
82.0,82.0,69.0,40.0,10.0,0,0,0,1,0,1,1,0
110.0,110.0,68.0,80.0,29.0,0,0,0,1,1,0,1,0
314.0,314.0,43.0,50.0,18.0,0,0,0,1,1,0,1,0
100.0,inf,70.0,70.0,6.0,0,0,0,1,1,0,1,0
42.0,42.0,81.0,60.0,4.0,0,0,0,1,1,0,1,0
8.0,8.0,63.0,40.0,58.0,0,0,0,1,0,1,1,0
144.0,144.0,63.0,30.0,4.0,0,0,0,1,1,0,1,0
25.0,inf,52.0,80.0,9.0,0,0,0,1,0,1,1,0
11.0,11.0,48.0,70.0,11.0,0,0,0,1,0,1,1,0
30.0,30.0,61.0,60.0,3.0,0,0,1,0,1,0,1,0
384.0,384.0,42.0,60.0,9.0,0,0,1,0,1,0,1,0
4.0,4.0,35.0,40.0,2.0,0,0,1,0,1,0,1,0
54.0,54.0,63.0,80.0,4.0,0,0,1,0,0,1,1,0
13.0,13.0,56.0,60.0,4.0,0,0,1,0,1,0,1,0
123.0,inf,55.0,40.0,3.0,0,0,1,0,1,0,1,0
97.0,inf,67.0,60.0,5.0,0,0,1,0,1,0,1,0
153.0,153.0,63.0,60.0,14.0,0,0,1,0,0,1,1,0
59.0,59.0,65.0,30.0,2.0,0,0,1,0,1,0,1,0
117.0,117.0,46.0,80.0,3.0,0,0,1,0,1,0,1,0
16.0,16.0,53.0,30.0,4.0,0,0,1,0,0,1,1,0
151.0,151.0,69.0,50.0,12.0,0,0,1,0,1,0,1,0
22.0,22.0,68.0,60.0,4.0,0,0,1,0,1,0,1,0
56.0,56.0,43.0,80.0,12.0,0,0,1,0,0,1,1,0
21.0,21.0,55.0,40.0,2.0,0,0,1,0,0,1,1,0
18.0,18.0,42.0,20.0,15.0,0,0,1,0,1,0,1,0
139.0,139.0,64.0,80.0,2.0,0,0,1,0,1,0,1,0
20.0,20.0,65.0,30.0,5.0,0,0,1,0,1,0,1,0
31.0,31.0,65.0,75.0,3.0,0,0,1,0,1,0,1,0
52.0,52.0,55.0,70.0,2.0,0,0,1,0,1,0,1,0
287.0,287.0,66.0,60.0,25.0,0,0,1,0,0,1,1,0
18.0,18.0,60.0,30.0,4.0,0,0,1,0,1,0,1,0
51.0,51.0,67.0,60.0,1.0,0,0,1,0,1,0,1,0
122.0,122.0,53.0,80.0,28.0,0,0,1,0,1,0,1,0
27.0,27.0,62.0,60.0,8.0,0,0,1,0,1,0,1,0
54.0,54.0,67.0,70.0,1.0,0,0,1,0,1,0,1,0
7.0,7.0,72.0,50.0,7.0,0,0,1,0,1,0,1,0
63.0,63.0,48.0,50.0,11.0,0,0,1,0,1,0,1,0
392.0,392.0,68.0,40.0,4.0,0,0,1,0,1,0,1,0
10.0,10.0,67.0,40.0,23.0,0,0,1,0,0,1,1,0
8.0,8.0,61.0,20.0,19.0,1,0,0,0,0,1,1,0
92.0,92.0,60.0,70.0,10.0,1,0,0,0,1,0,1,0
35.0,35.0,62.0,40.0,6.0,1,0,0,0,1,0,1,0
117.0,117.0,38.0,80.0,2.0,1,0,0,0,1,0,1,0
132.0,132.0,50.0,80.0,5.0,1,0,0,0,1,0,1,0
12.0,12.0,63.0,50.0,4.0,1,0,0,0,0,1,1,0
162.0,162.0,64.0,80.0,5.0,1,0,0,0,1,0,1,0
3.0,3.0,43.0,30.0,3.0,1,0,0,0,1,0,1,0
95.0,95.0,34.0,80.0,4.0,1,0,0,0,1,0,1,0
177.0,177.0,66.0,50.0,16.0,0,1,0,0,0,1,1,0
162.0,162.0,62.0,80.0,5.0,0,1,0,0,1,0,1,0
216.0,216.0,52.0,50.0,15.0,0,1,0,0,1,0,1,0
553.0,553.0,47.0,70.0,2.0,0,1,0,0,1,0,1,0
278.0,278.0,63.0,60.0,12.0,0,1,0,0,1,0,1,0
12.0,12.0,68.0,40.0,12.0,0,1,0,0,0,1,1,0
260.0,260.0,45.0,80.0,5.0,0,1,0,0,1,0,1,0
200.0,200.0,41.0,80.0,12.0,0,1,0,0,0,1,1,0
156.0,156.0,66.0,70.0,2.0,0,1,0,0,1,0,1,0
182.0,inf,62.0,90.0,2.0,0,1,0,0,1,0,1,0
143.0,143.0,60.0,90.0,8.0,0,1,0,0,1,0,1,0
105.0,105.0,66.0,80.0,11.0,0,1,0,0,1,0,1,0
103.0,103.0,38.0,80.0,5.0,0,1,0,0,1,0,1,0
250.0,250.0,53.0,70.0,8.0,0,1,0,0,0,1,1,0
100.0,100.0,37.0,60.0,13.0,0,1,0,0,0,1,1,0
999.0,999.0,54.0,90.0,12.0,0,0,0,1,0,1,0,1
112.0,112.0,60.0,80.0,6.0,0,0,0,1,1,0,0,1
87.0,inf,48.0,80.0,3.0,0,0,0,1,1,0,0,1
231.0,inf,52.0,50.0,8.0,0,0,0,1,0,1,0,1
242.0,242.0,70.0,50.0,1.0,0,0,0,1,1,0,0,1
991.0,991.0,50.0,70.0,7.0,0,0,0,1,0,1,0,1
111.0,111.0,62.0,70.0,3.0,0,0,0,1,1,0,0,1
1.0,1.0,65.0,20.0,21.0,0,0,0,1,0,1,0,1
587.0,587.0,58.0,60.0,3.0,0,0,0,1,1,0,0,1
389.0,389.0,62.0,90.0,2.0,0,0,0,1,1,0,0,1
33.0,33.0,64.0,30.0,6.0,0,0,0,1,1,0,0,1
25.0,25.0,63.0,20.0,36.0,0,0,0,1,1,0,0,1
357.0,357.0,58.0,70.0,13.0,0,0,0,1,1,0,0,1
467.0,467.0,64.0,90.0,2.0,0,0,0,1,1,0,0,1
201.0,201.0,52.0,80.0,28.0,0,0,0,1,0,1,0,1
1.0,1.0,35.0,50.0,7.0,0,0,0,1,1,0,0,1
30.0,30.0,63.0,70.0,11.0,0,0,0,1,1,0,0,1
44.0,44.0,70.0,60.0,13.0,0,0,0,1,0,1,0,1
283.0,283.0,51.0,90.0,2.0,0,0,0,1,1,0,0,1
15.0,15.0,40.0,50.0,13.0,0,0,0,1,0,1,0,1
25.0,25.0,69.0,30.0,2.0,0,0,1,0,1,0,0,1
103.0,inf,36.0,70.0,22.0,0,0,1,0,0,1,0,1
21.0,21.0,71.0,20.0,4.0,0,0,1,0,1,0,0,1
13.0,13.0,62.0,30.0,2.0,0,0,1,0,1,0,0,1
87.0,87.0,60.0,60.0,2.0,0,0,1,0,1,0,0,1
2.0,2.0,44.0,40.0,36.0,0,0,1,0,0,1,0,1
20.0,20.0,54.0,30.0,9.0,0,0,1,0,0,1,0,1
7.0,7.0,66.0,20.0,11.0,0,0,1,0,1,0,0,1
24.0,24.0,49.0,60.0,8.0,0,0,1,0,1,0,0,1
99.0,99.0,72.0,70.0,3.0,0,0,1,0,1,0,0,1
8.0,8.0,68.0,80.0,2.0,0,0,1,0,1,0,0,1
99.0,99.0,62.0,85.0,4.0,0,0,1,0,1,0,0,1
61.0,61.0,71.0,70.0,2.0,0,0,1,0,1,0,0,1
25.0,25.0,70.0,70.0,2.0,0,0,1,0,1,0,0,1
95.0,95.0,61.0,70.0,1.0,0,0,1,0,1,0,0,1
80.0,80.0,71.0,50.0,17.0,0,0,1,0,1,0,0,1
51.0,51.0,59.0,30.0,87.0,0,0,1,0,0,1,0,1
29.0,29.0,67.0,40.0,8.0,0,0,1,0,1,0,0,1
24.0,24.0,60.0,40.0,2.0,1,0,0,0,1,0,0,1
18.0,18.0,69.0,40.0,5.0,1,0,0,0,0,1,0,1
83.0,inf,57.0,99.0,3.0,1,0,0,0,1,0,0,1
31.0,31.0,39.0,80.0,3.0,1,0,0,0,1,0,0,1
51.0,51.0,62.0,60.0,5.0,1,0,0,0,1,0,0,1
90.0,90.0,50.0,60.0,22.0,1,0,0,0,0,1,0,1
52.0,52.0,43.0,60.0,3.0,1,0,0,0,1,0,0,1
73.0,73.0,70.0,60.0,3.0,1,0,0,0,1,0,0,1
8.0,8.0,66.0,50.0,5.0,1,0,0,0,1,0,0,1
36.0,36.0,61.0,70.0,8.0,1,0,0,0,1,0,0,1
48.0,48.0,81.0,10.0,4.0,1,0,0,0,1,0,0,1
7.0,7.0,58.0,40.0,4.0,1,0,0,0,1,0,0,1
140.0,140.0,63.0,70.0,3.0,1,0,0,0,1,0,0,1
186.0,186.0,60.0,90.0,3.0,1,0,0,0,1,0,0,1
84.0,84.0,62.0,80.0,4.0,1,0,0,0,0,1,0,1
19.0,19.0,42.0,50.0,10.0,1,0,0,0,1,0,0,1
45.0,45.0,69.0,40.0,3.0,1,0,0,0,1,0,0,1
80.0,80.0,63.0,40.0,4.0,1,0,0,0,1,0,0,1
52.0,52.0,45.0,60.0,4.0,0,1,0,0,1,0,0,1
164.0,164.0,68.0,70.0,15.0,0,1,0,0,0,1,0,1
19.0,19.0,39.0,30.0,4.0,0,1,0,0,0,1,0,1
53.0,53.0,66.0,60.0,12.0,0,1,0,0,1,0,0,1
15.0,15.0,63.0,30.0,5.0,0,1,0,0,1,0,0,1
43.0,43.0,49.0,60.0,11.0,0,1,0,0,0,1,0,1
340.0,340.0,64.0,80.0,10.0,0,1,0,0,0,1,0,1
133.0,133.0,65.0,75.0,1.0,0,1,0,0,1,0,0,1
111.0,111.0,64.0,60.0,5.0,0,1,0,0,1,0,0,1
231.0,231.0,67.0,70.0,18.0,0,1,0,0,0,1,0,1
378.0,378.0,65.0,80.0,4.0,0,1,0,0,1,0,0,1
49.0,49.0,37.0,30.0,3.0,0,1,0,0,1,0,0,1


@@ -8,7 +8,11 @@ if you are interested in contributing.
Build XGBoost with Distributed Filesystem Support
-------------------------------------------------
To use distributed xgboost, you only need to turn the options on to build
with distributed filesystems(HDFS or S3) in ```xgboost/make/config.mk```.
with distributed filesystems (HDFS or S3) in CMake:
```
cmake <path/to/xgboost> -DUSE_HDFS=ON -DUSE_S3=ON -DUSE_AZURE=ON
```
Step by Step Tutorial on AWS


@@ -1,7 +1,5 @@
# GPU Acceleration Demo
This demo shows how to train a model on the [forest cover type](https://archive.ics.uci.edu/ml/datasets/covertype) dataset using GPU acceleration. The forest cover type dataset has 581,012 rows and 54 features, making it time consuming to process. We compare the run-time and accuracy of the GPU and CPU histogram algorithms.
`cover_type.py` shows how to train a model on the [forest cover type](https://archive.ics.uci.edu/ml/datasets/covertype) dataset using GPU acceleration. The forest cover type dataset has 581,012 rows and 54 features, making it time consuming to process. We compare the run-time and accuracy of the GPU and CPU histogram algorithms.
This demo requires the [GPU plug-in](https://xgboost.readthedocs.io/en/latest/gpu/index.html) to be built and installed.
The dataset is automatically loaded via the sklearn script.
`memory.py` shows how to repeatedly train xgboost models while freeing memory between iterations.


@@ -0,0 +1,51 @@
import xgboost as xgb
import numpy as np
import time
import pickle
import GPUtil
n = 10000
m = 1000
X = np.random.random((n, m))
y = np.random.random(n)
param = {'objective': 'binary:logistic',
'tree_method': 'gpu_hist'
}
iterations = 5
dtrain = xgb.DMatrix(X, label=y)
# High memory usage
# active bst objects with device memory persist across iterations
boosters = []
for i in range(iterations):
bst = xgb.train(param, dtrain)
boosters.append(bst)
print("Example 1")
GPUtil.showUtilization()
del boosters
# Better memory usage
# The bst object can be destroyed by the python gc, freeing device memory
# The gc may not immediately free the object, so more than one booster can be allocated at a time
boosters = []
for i in range(iterations):
bst = xgb.train(param, dtrain)
boosters.append(pickle.dumps(bst))
print("Example 2")
GPUtil.showUtilization()
del boosters
# Best memory usage
# The gc explicitly frees the booster before starting the next iteration
boosters = []
for i in range(iterations):
bst = xgb.train(param, dtrain)
boosters.append(pickle.dumps(bst))
del bst
print("Example 3")
GPUtil.showUtilization()
del boosters
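If deterministic cleanup matters, an explicit `gc.collect()` after `del` is a cheap safeguard; a minimal variation of Example 3, reusing the names defined in this script:

```python
import gc

boosters = []
for i in range(iterations):
    bst = xgb.train(param, dtrain)
    boosters.append(pickle.dumps(bst))
    del bst
    gc.collect()  # run the collector now so device memory is freed before the next fit
```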


@@ -2,6 +2,8 @@ XGBoost Python Feature Walkthrough
==================================
* [Basic walkthrough of wrappers](basic_walkthrough.py)
* [Customize loss function, and evaluation metric](custom_objective.py)
* [Re-implement RMSLE as customized metric and objective](custom_rmsle.py)
* [Re-Implement `multi:softmax` objective as customized objective](custom_softmax.py)
* [Boosting from existing prediction](boost_from_prediction.py)
* [Predicting using first n trees](predict_first_ntree.py)
* [Generalized Linear Model](generalized_linear_model.py)


@@ -1,16 +1,22 @@
#!/usr/bin/python
#!/usr/bin/env python
import numpy as np
import scipy.sparse
import pickle
import xgboost as xgb
import os
### simple example
# Make sure the demo knows where to load the data.
CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))
XGBOOST_ROOT_DIR = os.path.dirname(os.path.dirname(CURRENT_DIR))
DEMO_DIR = os.path.join(XGBOOST_ROOT_DIR, 'demo')
# simple example
# load file from text file, also binary buffer generated by xgboost
dtrain = xgb.DMatrix('../data/agaricus.txt.train')
dtest = xgb.DMatrix('../data/agaricus.txt.test')
dtrain = xgb.DMatrix(os.path.join(DEMO_DIR, 'data', 'agaricus.txt.train'))
dtest = xgb.DMatrix(os.path.join(DEMO_DIR, 'data', 'agaricus.txt.test'))
# specify parameters via map, definition are same as c++ version
param = {'max_depth':2, 'eta':1, 'silent':1, 'objective':'binary:logistic'}
param = {'max_depth': 2, 'eta': 1, 'objective': 'binary:logistic'}
# specify validations set to watch performance
watchlist = [(dtest, 'eval'), (dtrain, 'train')]
@@ -20,12 +26,14 @@ bst = xgb.train(param, dtrain, num_round, watchlist)
# this is prediction
preds = bst.predict(dtest)
labels = dtest.get_label()
print('error=%f' % (sum(1 for i in range(len(preds)) if int(preds[i] > 0.5) != labels[i]) / float(len(preds))))
print('error=%f' %
(sum(1 for i in range(len(preds)) if int(preds[i] > 0.5) != labels[i]) /
float(len(preds))))
bst.save_model('0001.model')
# dump model
bst.dump_model('dump.raw.txt')
# dump model with feature map
bst.dump_model('dump.nice.txt', '../data/featmap.txt')
bst.dump_model('dump.nice.txt', os.path.join(DEMO_DIR, 'data/featmap.txt'))
# save dmatrix into binary buffer
dtest.save_binary('dtest.buffer')
@@ -50,14 +58,18 @@ assert np.sum(np.abs(preds3 - preds)) == 0
# build dmatrix from scipy.sparse
print('start running example of build DMatrix from scipy.sparse CSR Matrix')
labels = []
row = []; col = []; dat = []
row = []
col = []
dat = []
i = 0
for l in open('../data/agaricus.txt.train'):
for l in open(os.path.join(DEMO_DIR, 'data', 'agaricus.txt.train')):
arr = l.split()
labels.append(int(arr[0]))
for it in arr[1:]:
k,v = it.split(':')
row.append(i); col.append(int(k)); dat.append(float(v))
k, v = it.split(':')
row.append(i)
col.append(int(k))
dat.append(float(v))
i += 1
csr = scipy.sparse.csr_matrix((dat, (row, col)))
dtrain = xgb.DMatrix(csr, label=labels)
@@ -72,8 +84,8 @@ watchlist = [(dtest, 'eval'), (dtrain, 'train')]
bst = xgb.train(param, dtrain, num_round, watchlist)
print('start running example of build DMatrix from numpy array')
# NOTE: npymat is numpy array, we will convert it into scipy.sparse.csr_matrix in internal implementation
# then convert to DMatrix
# NOTE: npymat is numpy array, we will convert it into scipy.sparse.csr_matrix
# in internal implementation then convert to DMatrix
npymat = csr.todense()
dtrain = xgb.DMatrix(npymat, label=labels)
watchlist = [(dtest, 'eval'), (dtrain, 'train')]


@@ -1,5 +1,4 @@
#!/usr/bin/python
import numpy as np
import xgboost as xgb
dtrain = xgb.DMatrix('../data/agaricus.txt.train')
@@ -8,18 +7,19 @@ watchlist = [(dtest, 'eval'), (dtrain, 'train')]
###
# advanced: start from a initial base prediction
#
print ('start running example to start from a initial prediction')
print('start running example to start from a initial prediction')
# specify parameters via map, definition are same as c++ version
param = {'max_depth':2, 'eta':1, 'silent':1, 'objective':'binary:logistic'}
param = {'max_depth': 2, 'eta': 1, 'silent': 1, 'objective': 'binary:logistic'}
# train xgboost for 1 round
bst = xgb.train(param, dtrain, 1, watchlist)
# Note: we need the margin value instead of transformed prediction in set_base_margin
# do predict with output_margin=True, will always give you margin values before logistic transformation
# Note: we need the margin value instead of transformed prediction in
# set_base_margin
# do predict with output_margin=True, will always give you margin values
# before logistic transformation
ptrain = bst.predict(dtrain, output_margin=True)
ptest = bst.predict(dtest, output_margin=True)
dtrain.set_base_margin(ptrain)
dtest.set_base_margin(ptest)
print('this is result of running from initial prediction')
bst = xgb.train(param, dtrain, 1, watchlist)
bst = xgb.train(param, dtrain, 1, watchlist)


@@ -45,7 +45,7 @@ xgb.cv(param, dtrain, num_round, nfold=5,
# you can also do cross validation with customized loss function
# See custom_objective.py
##
print('running cross validation, with cutomsized loss function')
print('running cross validation, with customized loss function')
def logregobj(preds, dtrain):
labels = dtrain.get_label()
preds = 1.0 / (1.0 + np.exp(-preds))


@@ -16,8 +16,9 @@ param = {'max_depth': 2, 'eta': 1, 'silent': 1}
watchlist = [(dtest, 'eval'), (dtrain, 'train')]
num_round = 2
# user define objective function, given prediction, return gradient and second order gradient
# this is log likelihood loss
# user define objective function, given prediction, return gradient and second
# order gradient this is log likelihood loss
def logregobj(preds, dtrain):
labels = dtrain.get_label()
preds = 1.0 / (1.0 + np.exp(-preds))
@@ -25,18 +26,24 @@ def logregobj(preds, dtrain):
hess = preds * (1.0 - preds)
return grad, hess
# user defined evaluation function, return a pair metric_name, result
# NOTE: when you use a customized loss function, the default prediction value
# is the margin. This may make builtin evaluation metrics not function
# properly. For example, with logistic loss the prediction is the score before
# the logistic transformation, while the builtin evaluation error assumes the
# input is after the logistic transformation. Keep this in mind when you use
# the customization; you may need to write a customized evaluation function.
def evalerror(preds, dtrain):
labels = dtrain.get_label()
    # return a pair metric_name, result. The metric name must not contain a
    # colon (:) or a space, since preds are margins (before the logistic
    # transformation, cutoff at 0)
return 'my-error', float(sum(labels != (preds > 0.0))) / len(labels)
# training with a customized objective; we can also do step-by-step training,
# simply look at xgboost.py's implementation of train
bst = xgb.train(param, dtrain, num_round, watchlist, obj=logregobj,
feval=evalerror)
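To see why logregobj returns preds - labels and preds * (1 - preds), here is a hedged finite-difference check of the logistic loss derivatives (purely illustrative, not part of the demo):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y, m, eps = 1.0, 0.3, 1e-4
loss = lambda z: -(y * np.log(sigmoid(z)) + (1 - y) * np.log(1.0 - sigmoid(z)))

num_grad = (loss(m + eps) - loss(m - eps)) / (2 * eps)
num_hess = (loss(m + eps) - 2 * loss(m) + loss(m - eps)) / eps ** 2
assert np.isclose(num_grad, sigmoid(m) - y, atol=1e-5)                 # grad = p - y
assert np.isclose(num_hess, sigmoid(m) * (1 - sigmoid(m)), atol=1e-5)  # hess = p(1-p)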


@@ -0,0 +1,197 @@
'''Demo for defining a customized metric and objective. Notice that for the
sake of simplicity, weight is not used in the following example. In this
script, we implement the Squared Log Error (SLE) objective and RMSLE metric
as customized functions, then compare them with the native implementation in
XGBoost. See doc/tutorials/custom_metric_obj.rst for a step-by-step
walkthrough, with other details.

The `SLE` objective reduces the impact of outliers in the training dataset,
hence here we also compare its performance with standard squared error.
'''
import numpy as np
import xgboost as xgb
from typing import Tuple, Dict, List
from time import time
import argparse
import matplotlib
from matplotlib import pyplot as plt
# shape of generated data.
kRows = 4096
kCols = 16
kOutlier = 10000 # mean of generated outliers
kNumberOfOutliers = 64
kRatio = 0.7
kSeed = 1994
kBoostRound = 20
np.random.seed(seed=kSeed)
def generate_data() -> Tuple[xgb.DMatrix, xgb.DMatrix]:
'''Generate data containing outliers.'''
x = np.random.randn(kRows, kCols)
y = np.random.randn(kRows)
y += np.abs(np.min(y))
# Create outliers
for i in range(0, kNumberOfOutliers):
        ind = np.random.randint(0, len(y))  # randint's upper bound is exclusive
y[ind] += np.random.randint(0, kOutlier)
train_portion = int(kRows * kRatio)
    # rmsle requires all labels to be greater than -1.
assert np.all(y > -1.0)
train_x: np.ndarray = x[: train_portion]
train_y: np.ndarray = y[: train_portion]
dtrain = xgb.DMatrix(train_x, label=train_y)
test_x = x[train_portion:]
test_y = y[train_portion:]
dtest = xgb.DMatrix(test_x, label=test_y)
return dtrain, dtest
def native_rmse(dtrain: xgb.DMatrix,
dtest: xgb.DMatrix) -> Dict[str, Dict[str, List[float]]]:
'''Train using native implementation of Root Mean Squared Loss.'''
print('Squared Error')
squared_error = {
'objective': 'reg:squarederror',
'eval_metric': 'rmse',
'tree_method': 'hist',
'seed': kSeed
}
start = time()
results: Dict[str, Dict[str, List[float]]] = {}
xgb.train(squared_error,
dtrain=dtrain,
num_boost_round=kBoostRound,
evals=[(dtrain, 'dtrain'), (dtest, 'dtest')],
evals_result=results)
print('Finished Squared Error in:', time() - start, '\n')
return results
def native_rmsle(dtrain: xgb.DMatrix,
dtest: xgb.DMatrix) -> Dict[str, Dict[str, List[float]]]:
'''Train using native implementation of Squared Log Error.'''
print('Squared Log Error')
results: Dict[str, Dict[str, List[float]]] = {}
squared_log_error = {
'objective': 'reg:squaredlogerror',
'eval_metric': 'rmsle',
'tree_method': 'hist',
'seed': kSeed
}
start = time()
xgb.train(squared_log_error,
dtrain=dtrain,
num_boost_round=kBoostRound,
evals=[(dtrain, 'dtrain'), (dtest, 'dtest')],
evals_result=results)
print('Finished Squared Log Error in:', time() - start)
return results
def py_rmsle(dtrain: xgb.DMatrix, dtest: xgb.DMatrix) -> Dict:
'''Train using Python implementation of Squared Log Error.'''
def gradient(predt: np.ndarray, dtrain: xgb.DMatrix) -> np.ndarray:
'''Compute the gradient squared log error.'''
y = dtrain.get_label()
return (np.log1p(predt) - np.log1p(y)) / (predt + 1)
def hessian(predt: np.ndarray, dtrain: xgb.DMatrix) -> np.ndarray:
'''Compute the hessian for squared log error.'''
y = dtrain.get_label()
return ((-np.log1p(predt) + np.log1p(y) + 1) /
np.power(predt + 1, 2))
def squared_log(predt: np.ndarray,
dtrain: xgb.DMatrix) -> Tuple[np.ndarray, np.ndarray]:
'''Squared Log Error objective. A simplified version for RMSLE used as
objective function.
        :math:`\frac{1}{2}[\log(pred + 1) - \log(label + 1)]^2`
'''
predt[predt < -1] = -1 + 1e-6
grad = gradient(predt, dtrain)
hess = hessian(predt, dtrain)
return grad, hess
def rmsle(predt: np.ndarray, dtrain: xgb.DMatrix) -> Tuple[str, float]:
''' Root mean squared log error metric.
        :math:`\sqrt{\frac{1}{N}\sum_{i=1}^{N}[\log(pred_i + 1) - \log(label_i + 1)]^2}`
'''
y = dtrain.get_label()
predt[predt < -1] = -1 + 1e-6
elements = np.power(np.log1p(y) - np.log1p(predt), 2)
return 'PyRMSLE', float(np.sqrt(np.sum(elements) / len(y)))
results: Dict[str, Dict[str, List[float]]] = {}
xgb.train({'tree_method': 'hist', 'seed': kSeed,
'disable_default_eval_metric': 1},
dtrain=dtrain,
num_boost_round=kBoostRound,
obj=squared_log,
feval=rmsle,
evals=[(dtrain, 'dtrain'), (dtest, 'dtest')],
evals_result=results)
return results
def plot_history(rmse_evals, rmsle_evals, py_rmsle_evals):
fig, axs = plt.subplots(3, 1)
ax0: matplotlib.axes.Axes = axs[0]
ax1: matplotlib.axes.Axes = axs[1]
ax2: matplotlib.axes.Axes = axs[2]
x = np.arange(0, kBoostRound, 1)
ax0.plot(x, rmse_evals['dtrain']['rmse'], label='train-RMSE')
ax0.plot(x, rmse_evals['dtest']['rmse'], label='test-RMSE')
ax0.legend()
ax1.plot(x, rmsle_evals['dtrain']['rmsle'], label='train-native-RMSLE')
ax1.plot(x, rmsle_evals['dtest']['rmsle'], label='test-native-RMSLE')
ax1.legend()
ax2.plot(x, py_rmsle_evals['dtrain']['PyRMSLE'], label='train-PyRMSLE')
ax2.plot(x, py_rmsle_evals['dtest']['PyRMSLE'], label='test-PyRMSLE')
ax2.legend()
plt.show()
plt.close()
def main(args):
dtrain, dtest = generate_data()
rmse_evals = native_rmse(dtrain, dtest)
rmsle_evals = native_rmsle(dtrain, dtest)
py_rmsle_evals = py_rmsle(dtrain, dtest)
if args.plot != 0:
plot_history(rmse_evals, rmsle_evals, py_rmsle_evals)
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='Arguments for custom RMSLE objective function demo.')
parser.add_argument(
'--plot',
type=int,
default=1,
help='Set to 0 to disable plotting the evaluation history.')
args = parser.parse_args()
main(args)
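As a quick hedged check that gradient and hessian above match the stated SLE formula, a standalone finite-difference comparison (values are illustrative):

import numpy as np

p, y, eps = 2.0, 3.0, 1e-4
sle = lambda q: 0.5 * (np.log1p(q) - np.log1p(y)) ** 2  # 1/2 [log(p+1) - log(y+1)]^2

num_grad = (sle(p + eps) - sle(p - eps)) / (2 * eps)
num_hess = (sle(p + eps) - 2 * sle(p) + sle(p - eps)) / eps ** 2
assert np.isclose(num_grad, (np.log1p(p) - np.log1p(y)) / (p + 1), atol=1e-6)
assert np.isclose(num_hess, (-np.log1p(p) + np.log1p(y) + 1) / (p + 1) ** 2, atol=1e-4)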


@@ -0,0 +1,148 @@
'''Demo for creating a customized multi-class objective function. This demo is
only applicable to XGBoost versions newer than 1.0.0, as earlier versions
return the transformed prediction for the multi-class objective function. More
details in the comments.
'''
import numpy as np
import xgboost as xgb
from matplotlib import pyplot as plt
import argparse
np.random.seed(1994)
kRows = 100
kCols = 10
kClasses = 4 # number of classes
kRounds = 10 # number of boosting rounds.
# Generate some random data for demo.
X = np.random.randn(kRows, kCols)
y = np.random.randint(0, 4, size=kRows)
m = xgb.DMatrix(X, y)
def softmax(x):
'''Softmax function with x as input vector.'''
e = np.exp(x)
return e / np.sum(e)
def softprob_obj(predt: np.ndarray, data: xgb.DMatrix):
'''Loss function. Computing the gradient and approximated hessian (diagonal).
Reimplements the `multi:softprob` inside XGBoost.
'''
labels = data.get_label()
if data.get_weight().size == 0:
# Use 1 as weight if we don't have custom weight.
weights = np.ones((kRows, 1), dtype=float)
else:
weights = data.get_weight()
# The prediction is of shape (rows, classes), each element in a row
# represents a raw prediction (leaf weight, hasn't gone through softmax
# yet). In XGBoost 1.0.0, the prediction is transformed by a softmax
# function, fixed in later versions.
assert predt.shape == (kRows, kClasses)
grad = np.zeros((kRows, kClasses), dtype=float)
hess = np.zeros((kRows, kClasses), dtype=float)
eps = 1e-6
    # Compute the gradient and hessian. The Python loops below are slow and
    # only suitable for a demo. Also, the implementation in native XGBoost
    # core is more robust to numeric overflow, as we don't do anything to
    # mitigate the `exp` in `softmax` here.
for r in range(predt.shape[0]):
target = labels[r]
p = softmax(predt[r, :])
for c in range(predt.shape[1]):
            assert 0 <= target < kClasses
g = p[c] - 1.0 if c == target else p[c]
g = g * weights[r]
h = max((2.0 * p[c] * (1.0 - p[c]) * weights[r]).item(), eps)
grad[r, c] = g
hess[r, c] = h
# Right now (XGBoost 1.0.0), reshaping is necessary
grad = grad.reshape((kRows * kClasses, 1))
hess = hess.reshape((kRows * kClasses, 1))
return grad, hess
def predict(booster, X):
'''A customized prediction function that converts raw prediction to
target class.
'''
# Output margin means we want to obtain the raw prediction obtained from
# tree leaf weight.
predt = booster.predict(X, output_margin=True)
out = np.zeros(kRows)
for r in range(predt.shape[0]):
        # The class with the maximum score (not strictly a probability, since
        # it hasn't gone through softmax yet and so doesn't sum to 1, but the
        # argmax result is the same).
i = np.argmax(predt[r])
out[r] = i
return out
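# NOTE (illustrative aside): the loop in predict() above is equivalent to the
# vectorized `np.argmax(predt, axis=1)`; the explicit loop is kept for clarity.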
def plot_history(custom_results, native_results):
fig, axs = plt.subplots(2, 1)
ax0 = axs[0]
ax1 = axs[1]
x = np.arange(0, kRounds, 1)
ax0.plot(x, custom_results['train']['merror'], label='Custom objective')
ax0.legend()
ax1.plot(x, native_results['train']['merror'], label='multi:softmax')
ax1.legend()
plt.show()
def main(args):
custom_results = {}
# Use our custom objective function
booster_custom = xgb.train({'num_class': kClasses},
m,
num_boost_round=kRounds,
obj=softprob_obj,
evals_result=custom_results,
evals=[(m, 'train')])
predt_custom = predict(booster_custom, m)
native_results = {}
# Use the same objective function defined in XGBoost.
booster_native = xgb.train({'num_class': kClasses},
m,
num_boost_round=kRounds,
evals_result=native_results,
evals=[(m, 'train')])
predt_native = booster_native.predict(m)
# We are reimplementing the loss function in XGBoost, so it should
# be the same for normal cases.
assert np.all(predt_custom == predt_native)
if args.plot != 0:
plot_history(custom_results, native_results)
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='Arguments for custom softmax objective function demo.')
parser.add_argument(
'--plot',
type=int,
default=1,
help='Set to 0 to disable plotting the evaluation history.')
args = parser.parse_args()
main(args)
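A hedged numerical check of the g = p[c] - 1.0 if c == target else p[c] rule used in softprob_obj, verifying the softmax cross-entropy gradient by finite differences (standalone and illustrative only):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())    # shifted for numeric stability
    return e / e.sum()

logits = np.array([0.5, -1.0, 2.0, 0.0])
target, eps = 2, 1e-5
loss = lambda z: -np.log(softmax(z)[target])   # cross-entropy for a single row

p = softmax(logits)
for c in range(len(logits)):
    d = np.zeros_like(logits)
    d[c] = eps
    num = (loss(logits + d) - loss(logits - d)) / (2 * eps)
    ana = p[c] - (1.0 if c == target else 0.0)   # the rule from softprob_obj
    assert np.isclose(num, ana, atol=1e-6)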


@@ -0,0 +1,3 @@
We introduced initial support for saving XGBoost models in JSON format in 1.0.0. Note that
it's still experimental and under development; the output schema is subject to change due to
bug fixes or further refactoring. For an overview, see https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html .
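A minimal hedged sketch of round-tripping a model through the JSON format (the file name is illustrative; XGBoost picks the format from the .json extension):

import numpy as np
import xgboost as xgb

X = np.random.randn(100, 5)
y = np.random.randint(0, 2, size=100)
bst = xgb.train({'objective': 'binary:logistic'}, xgb.DMatrix(X, label=y),
                num_boost_round=5)

bst.save_model('model.json')                   # .json selects the JSON format
loaded = xgb.Booster(model_file='model.json')  # load it back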
