Compare commits

...

89 Commits
v0.7 ... v0.72

Author SHA1 Message Date
Philip Hyunsu Cho
1214081f99 Release version 0.72 (#3337) 2018-06-01 16:00:31 -07:00
Ryota Suzuki
b7cbec4d4b Fix print.xgb.Booster for R (#3338)
* Fix print.xgb.Booster

valid_handle should be TRUE when x$handle is NOT null

* Update xgb.Booster.R

Modify is.null.handle to return TRUE for NULL handle
2018-05-29 11:44:55 -07:00
Kristian Gampong
a510e68dda Add validate_features option for Booster predict (#3323)
* Add validate_features option for Booster predict

* Fix trailing whitespace in docstring
2018-05-29 11:40:49 -07:00
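
As a usage sketch (hedged; the data and names below are illustrative, but `validate_features` is the keyword this PR adds to `Booster.predict()`):

```python
import numpy as np
import xgboost as xgb

# Train a booster whose DMatrix carries explicit feature names.
X = np.random.rand(100, 3)
y = np.random.randint(2, size=100)
dtrain = xgb.DMatrix(X, label=y, feature_names=['f0', 'f1', 'f2'])
bst = xgb.train({'objective': 'binary:logistic'}, dtrain, num_boost_round=5)

# With validate_features=True, prediction data whose feature names do not
# match the training data raises an error; passing False skips the check,
# e.g. for data sources without recorded names.
dtest = xgb.DMatrix(np.random.rand(10, 3), feature_names=['f0', 'f1', 'f2'])
pred = bst.predict(dtest, validate_features=True)
```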
Yanbo Liang
b018ef104f Remove output_margin from XGBClassifier.predict_proba argument list. (#3343) 2018-05-28 10:30:21 -07:00
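
A hedged sketch of what this change means in practice: class probabilities come from `predict_proba()`, while raw margin scores remain available through the underlying `Booster` (data below is illustrative; method names are those of the sklearn wrapper of that era):

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 4)
y = np.random.randint(2, size=100)
clf = xgb.XGBClassifier(n_estimators=5)
clf.fit(X, y)

proba = clf.predict_proba(X)  # class probabilities; no output_margin here
# Raw (untransformed) margin scores now go through the Booster API instead:
margin = clf.get_booster().predict(xgb.DMatrix(X), output_margin=True)
```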
trivialfis
34aeee2961 Fix test_param.cc header path (#3317) 2018-05-28 10:26:29 -07:00
Dave Challis
8efbadcde4 Point rabit submodule at latest commit from master. (#3330) 2018-05-28 10:21:10 -07:00
pdavalo
480e3fd764 Sklearn: validation set weights (#2354)
* Add option to use weights when evaluating metrics in validation sets

* Add test for validation-set weights functionality

* simplify case with no weights for test sets

* fix lint issues
2018-05-23 17:06:20 -07:00
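
A short sketch of the added behaviour, assuming the `sample_weight_eval_set` parameter name this PR introduces in the sklearn wrapper (data illustrative):

```python
import numpy as np
import xgboost as xgb

X_tr, y_tr = np.random.rand(200, 4), np.random.randint(2, size=200)
X_va, y_va = np.random.rand(50, 4), np.random.randint(2, size=50)
w_va = np.random.rand(50)  # per-instance weights for the validation set

clf = xgb.XGBClassifier(n_estimators=10)
# Metrics reported for (X_va, y_va) are now computed with w_va applied.
clf.fit(X_tr, y_tr,
        eval_set=[(X_va, y_va)],
        sample_weight_eval_set=[w_va],
        eval_metric='logloss',
        verbose=False)
```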
Philip Hyunsu Cho
71e226120a For CRAN submission, remove all #pragma's that suppress compiler warnings (#3329)
* For CRAN submission, remove all #pragma's that suppress compiler warnings

A few headers in dmlc-core contain #pragma's that disable compiler warnings,
which is against the CRAN submission policy. Fix the problem by removing
the offending #pragma's as part of the command `make Rbuild`.

This addresses issue #3322.

* Fix script to improve Cygwin/MSYS compatibility

We need this to pass the rmingw CI test

* Remove remove_warning_suppression_pragma.sh from packaged tarball
2018-05-23 09:58:39 -07:00
Thejaswi
d367e4fc6b Fix for issue 3306. (#3324) 2018-05-23 13:42:20 +12:00
Sergei Lebedev
8f6aadd4b7 [jvm-packages] Fixed CheckpointManagerSuite for Scala 2.10 (#3332)
As before, the compilation error is caused by mixing positional and
labelled arguments.
2018-05-19 18:28:11 -07:00
Rory Mitchell
3ee725e3bb Add cuda forwards compatibility (#3316) 2018-05-17 10:59:22 +12:00
Rory Mitchell
f8b7686719 Add cuda 8/9.1 centos 6 builds, test GPU wheel on CPU only container. (#3309)
* Add cuda 8/9.1 centos 6 builds, test GPU wheel on CPU only container.

* Add Google test
2018-05-17 10:57:01 +12:00
Tong He
098075b81b CRAN Submission for 0.71.1 (#3311)
* fix for CRAN manual checks

* fix for CRAN manual checks

* pass local check

* fix variable naming style

* Adding Philip's record
2018-05-14 17:32:39 -07:00
Nan Zhu
49b9f39818 [jvm-packages] update xgboost4j cross build script to be compatible with older glibc (#3307)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* static glibc glibc++

* update to build with glib 2.12

* remove unsupported flags

* update version number

* remove properties

* remove unnecessary command

* update poms
2018-05-10 06:39:44 -07:00
Philip Hyunsu Cho
9a8211f668 Update dmlc-core submodule (#3221)
* Update dmlc-core submodule

* Fix dense_parser to work with the latest dmlc-core

* Specify location of Google Test

* Add more source files in dmlc-minimum to get latest dmlc-core working

* Update dmlc-core submodule
2018-05-09 18:55:29 -07:00
mallniya
039dbe6aec freebsd support in libpath.py (#3247) 2018-05-09 16:13:30 -07:00
Clive Chan
0c0a78c255 Suggest git submodule update instead of delete + reclone (#3214) 2018-05-09 14:39:17 -07:00
Will Storey
747381b520 Improve .gitignore patterns (#3184)
* Adjust xgboost entries in .gitignore

They were overly broad. In particular, this was inconvenient when
working with tools such as fzf that use the .gitignore to decide what to
include. As written, we'd not look into /include/xgboost.

* Make cosmetic improvements to .gitignore

* Remove dmlc-core from .gitignore

This seems unnecessary, and it has the drawback that tools which use
.gitignore to decide which files to skip won't look here, even though
being able to inspect the submodule files with them is useful.
2018-05-09 14:31:59 -07:00
Samuel O. Ronsin
cc79a65ab9 Increase precision of bst_float values in tree dumps (#3298)
* Increase precision of bst_float values in tree dumps

* Increase precision of bst_float values in tree dumps

* Fix lint error and switch precision to right float variable

* Fix clang-tidy error
2018-05-09 14:12:21 -07:00
Brandon Greenwell
d13f1a0f16 Fix typo (#3305) 2018-05-09 10:18:36 -07:00
Rory Mitchell
088bb4b27c Prevent multiclass Hessian approaching 0 (#3304)
* Prevent Hessian in multiclass objective becoming zero

* Set default learning rate to 0.5 for "coord_descent" linear updater
2018-05-09 20:25:51 +12:00
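
For context on the fix: in the multiclass softmax objective, with predicted class probability $p$ and 0/1 label indicator $y$, the per-class gradient and Hessian are (a standard derivation sketch, not a quote of the source)

$$g = p - y, \qquad h = 2\,p\,(1-p),$$

so $h \to 0$ as $p \to 0$ or $p \to 1$. Newton-style leaf weights divide by $\sum h + \lambda$, so a vanishing Hessian destabilises training; the fix clamps the Hessian away from zero, e.g. $h \leftarrow \max\bigl(2\,p\,(1-p),\ \varepsilon\bigr)$ for a small $\varepsilon > 0$ (the exact constant is an assumption here).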
Andrew V. Adinetz
b8a0d66fe6 Multi-GPU HostDeviceVector. (#3287)
* Multi-GPU HostDeviceVector.

- HostDeviceVector instances can now span multiple devices, defined by GPUSet struct
- the interface of HostDeviceVector has been modified accordingly
- GPU objective functions are now multi-GPU
- GPU predicting from cache is now multi-GPU
- avoiding omp_set_num_threads() calls
- other minor changes
2018-05-05 08:00:05 +12:00
Rory Mitchell
90a5c4db9d Update Jenkins CI for GPU (#3294) 2018-05-04 16:50:59 +12:00
Thejaswi
c80d51ccb3 Fix issue #3264, accuracy issues on k80 GPUs. (#3293) 2018-05-04 13:14:08 +12:00
Nan Zhu
e1f57b4417 [jvm-packages] scripts to cross-build and deploy artifacts to github (#3276)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* cross building files

* update

* build with docker

* remove

* temp

* update build script

* update pom

* update

* update version

* upload build

* fix path

* update README.md

* fix compiler version to 4.8.5
2018-04-28 07:41:30 -07:00
Yanbo Liang
4850f67b85 Fix broken link for xgboost-spark example. (#3275) 2018-04-26 06:45:01 -07:00
Thomas J. Leeper
c2b647f26e fix typo in README (#3263) 2018-04-22 09:24:38 -04:00
Nan Zhu
25b2919c44 [jvm-packages] change version of jvm to keep consistent with other pkgs (#3253)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* change version of jvm to keep consistent with other pkgs
2018-04-19 20:48:50 -07:00
Nan Zhu
d9dd485313 [jvm-packages] upgrade spark version to 2.3 (#3254)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* update default spark version to 2.3
2018-04-19 20:15:19 -07:00
Rory Mitchell
a185ddfe03 Implement GPU accelerated coordinate descent algorithm (#3178)
* Implement GPU accelerated coordinate descent algorithm. 

* Exclude external memory tests for GPU
2018-04-20 14:56:35 +12:00
Rory Mitchell
ccf80703ef Clang-tidy static analysis (#3222)
* Clang-tidy static analysis

* Modernise checks

* Google coding standard checks

* Identifier renaming according to Google style
2018-04-19 18:57:13 +12:00
Michal Josífko
3242b0a378 Update rabit submodule to latest version. (#3246) 2018-04-19 13:58:09 +12:00
Philip Hyunsu Cho
842e28fdcd Fix RMinGW build error: dependency 'data.table' not available (#3257)
The R package dependency 'data.table' is apparently unavailable in Windows binary format, resulting in the following build errors:
* https://ci.appveyor.com/project/tqchen/xgboost/build/1.0.1810/job/hhanvg0c2cqpn7bc
* https://ci.appveyor.com/project/tqchen/xgboost/build/1.0.1811/job/hg65t9wb3rt1f5k8

Fix: use type='both' to fall back to the source package when a binary is unavailable
2018-04-18 10:56:44 -07:00
Philip Hyunsu Cho
230cb9b787 Release version 0.71 (#3200) 2018-04-11 21:43:32 +09:00
Nan Zhu
4109818b32 [jvm-packages] add back libsvm notes (#3232)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* add back libsvm notes
2018-04-10 09:00:58 -07:00
Rory Mitchell
443ff746e9 Fix logic in GPU predictor cache lookup (#3217)
* Fix logic in GPU predictor cache lookup

* Add sklearn test for GPU prediction
2018-04-04 15:08:22 +12:00
Rory Mitchell
a1ec7b1716 Change reduce operation from thrust to cub. Fix for cuda 9.1 error (#3218)
* Change reduce operation from thrust to cub. Fix for cuda 9.1 runtime error

* Unit test sum reduce
2018-04-04 14:21:48 +12:00
Philip Hyunsu Cho
017acf54d9 Fix up make pippack command for building source package for PyPI (#3199)
* Now `make pippack` works without any manual action: it will produce
  xgboost-[version].tar.gz, which one can use by typing
  `pip3 install xgboost-[version].tar.gz`.
* Detect OpenMP-capable compilers (clang, gcc-5, gcc-7) on MacOS
2018-03-28 10:32:52 -07:00
Tong He
ace4016c36 Replace cBind by cbind (#3203)
* modify test_helper.R

* fix noLD

* update desc

* fix solaris test

* fix desc

* improve fix

* fix url

* change Matrix cBind to cbind

* fix

* fix error in demo

* fix examples
2018-03-28 10:05:47 -07:00
Philip Hyunsu Cho
b087620661 Condense MinGW installation instruction (#3201) 2018-03-25 03:05:11 -07:00
Yuan (Terry) Tang
92782a8406 Change DESCRIPTION to more modern look (#3179)
So that other things, such as an ORCID, can be added in the comment field.
2018-03-23 10:45:10 -04:00
Arjan van der Velde
04221a7469 rank_metric: add AUC-PR (#3172)
* rank_metric: add AUC-PR

Implementation of the AUC-PR calculation for weighted data, proposed by Keilwagen, Grosse and Grau (https://doi.org/10.1371/journal.pone.0092209)

* rank_metric: fix lint warnings

* Implement tests for AUC-PR and fix implementation

* add aucpr to documentation for other languages
2018-03-23 10:43:47 -04:00
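
For reference, the weighted quantities behind the metric (our notation, following Keilwagen et al. in spirit): given instance weights $w_i$, binary labels $y_i$, scores $s_i$, and a threshold $t$,

$$\mathrm{Prec}(t) = \frac{\sum_i w_i\,y_i\,\mathbf{1}[s_i \ge t]}{\sum_i w_i\,\mathbf{1}[s_i \ge t]}, \qquad \mathrm{Rec}(t) = \frac{\sum_i w_i\,y_i\,\mathbf{1}[s_i \ge t]}{\sum_i w_i\,y_i},$$

and AUC-PR is the area under the curve $\bigl(\mathrm{Rec}(t), \mathrm{Prec}(t)\bigr)$ traced as $t$ sweeps the score range.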
zhaocc
8fb3388af2 fix typo (#3188) 2018-03-21 19:24:29 -04:00
Will Storey
00d9728e4b Fix memory leak in XGDMatrixCreateFromMat_omp() (#3182)
* Fix memory leak in XGDMatrixCreateFromMat_omp()

This replaces the array allocated by new with a std::vector.

Fixes #3161
2018-03-18 15:03:27 +13:00
Will Storey
c85995952f Allow compilation with -Werror=strict-prototypes (#3183) 2018-03-18 12:25:42 +13:00
Rory Mitchell
9fa45d3a9c Fix bug with gpu_predictor caching behaviour (#3177)
* Fixes #3162
2018-03-18 10:35:10 +13:00
Ray Kim
cdc036b752 Fixed performance bug (#3171)
Minor performance improvements to gpu predictor
2018-03-15 09:40:24 +13:00
Rory Mitchell
7a81c87dfa Fix incorrect minimum value in quantile generation (#3167) 2018-03-14 08:21:18 -07:00
Vadim Khotilovich
706be4e5d4 Additional improvements for gblinear (#3134)
* fix rebase conflict

* [core] additional gblinear improvements

* [R] callback for gblinear coefficients history

* force eta=1 for gblinear python tests

* add top_k to GreedyFeatureSelector

* set eta=1 in shotgun test

* [core] fix SparsePage processing in gblinear; col-wise multithreading in greedy updater

* set sorted flag within TryInitColData

* gblinear tests: use scale, add external memory test

* fix multiclass for greedy updater

* fix whitespace

* fix typo
2018-03-13 01:27:13 -05:00
Andrew V. Adinetz
a1b48afa41 Added back UpdatePredictionCache() in updater_gpu_hist.cu. (#3120)
* Added back UpdatePredictionCache() in updater_gpu_hist.cu.

- it had been there before, but wasn't ported to the new version
  of updater_gpu_hist.cu
2018-03-09 15:06:45 +13:00
redditur
d5f1b74ef5 'hist': Monotonic Constraints (#3085)
* Extended monotonic constraints support to 'hist' tree method.

* Added monotonic constraints tests.

* Fix the signature of NoConstraint::CalcSplitGain()

* Document monotonic constraint support in 'hist'

* Update signature of Update to account for latest refactor
2018-03-05 16:45:49 -08:00
Andrea Bergonzo
8937134015 Update build_trouble_shooting.md (#3144) 2018-03-02 16:23:45 -08:00
Philip Hyunsu Cho
32ea70c1c9 Documenting CSV loading into DMatrix (#3137)
* Support CSV file in DMatrix

We'd just need to expose the CSV parser in dmlc-core to the Python wrapper

* Revert extra code; document existing CSV support

CSV support is already there but undocumented

* Add notice about categorical features
2018-02-28 18:41:10 -08:00
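
The documented pattern (per this commit) is a URI-style path handed straight to `DMatrix`; a minimal sketch with an illustrative file name:

```python
import xgboost as xgb

# format=csv selects dmlc-core's CSV parser; label_column picks which
# column holds the label. Categorical features must be pre-encoded
# (see the notice added in this commit).
dtrain = xgb.DMatrix('train.csv?format=csv&label_column=0')
```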
Andrew V. Adinetz
d5992dd881 Replaced std::vector-based interfaces with HostDeviceVector-based interfaces. (#3116)
* Replaced std::vector-based interfaces with HostDeviceVector-based interfaces.

- replacement was performed in the learner, boosters, predictors,
  updaters, and objective functions
- only interfaces used in training were replaced;
  interfaces like PredictInstance() still use std::vector
- refactoring necessary for replacement of interfaces was also performed,
  such as using HostDeviceVector in prediction cache

* HostDeviceVector-based interfaces for custom objective function example plugin.
2018-02-28 13:00:04 +13:00
Yuan (Terry) Tang
11bfa8584d Remove unnecessary dependencies in distributed test (#3132) 2018-02-24 20:24:34 -05:00
Yuan (Terry) Tang
cf89fa7139 Remove additional "/" in external memory doc (#3131) 2018-02-24 14:27:03 -05:00
Yuan (Terry) Tang
5d4cc49080 Update GPU plug-in documentation link (#3130) 2018-02-24 13:37:12 -05:00
Philip Hyunsu Cho
3d7aff5697 Fix doc build (#3126)
* Fix doc build

ReadTheDocs build has been broken for a while due to incompatibilities between
commonmark, recommonmark, and sphinx. See:
* "Recommonmark not working with Sphinx 1.6"
  https://github.com/rtfd/recommonmark/issues/73
* "CommonMark 0.6.0 breaks compatibility"
  https://github.com/rtfd/recommonmark/issues/24
For now, we fix the versions to get the build working again

* Fix search bar
2018-02-21 16:57:30 -08:00
Dmitry Mottl
eb9e30bb30 Minor: fixed dropdown <li> width in xgboost.css (#3121) 2018-02-20 07:24:38 -08:00
Dmitry Mottl
20b733e1a0 Minor: removed extra parenthesis in doc (#3119) 2018-02-20 02:55:29 -08:00
tomisuker
8153ba6fe7 modify build guide from source on macOS (#2993)
* modify build guide from source on macOS

* fix; installation for macOS
2018-02-19 12:20:00 -08:00
Rory Mitchell
dd82b28e20 Update GPU code with dmatrix changes (#3117) 2018-02-17 12:11:48 +13:00
Rory Mitchell
10eb05a63a Refactor linear modelling and add new coordinate descent updater (#3103)
* Refactor linear modelling and add new coordinate descent updater

* Allow unsorted column iterator

* Add prediction cacheing to gblinear
2018-02-17 09:17:01 +13:00
Vadim Khotilovich
9ffe8596f2 [core] fix slow predict-caching with many classes (#3109)
* fix prediction caching inefficiency for multiclass

* silence some warnings

* redundant if

* workaround for R v3.4.3 bug; fixes #3081
2018-02-15 18:31:42 -06:00
Oleg Panichev
cf19caa46a Fix for ZeroDivisionError when verbose_eval equals to 0. (#3115) 2018-02-15 17:58:06 -06:00
Philip Hyunsu Cho
375d75304d Fix typos, addressing issues #2212 and #3090 (#3105) 2018-02-09 11:16:44 -08:00
Felipe Arruda Pontes
81d1b17f9c adding some docs based on core.Boost.predict (#1865) 2018-02-09 06:38:38 -08:00
cinqS
b99f56e386 added mingw64 installation instruction, and library file copy. (#2977)
* added mingw64 installation instruction, and library file copy.

* Change all `libxgboost.dll` to `xgboost.dll`

On Windows, the library file is called `xgboost.dll`, not `libxgboost.dll` as in the build doc previously
2018-02-09 01:54:15 -08:00
Abraham Zhan
874525c152 c_api.cc variable declared inappropriately (#3044)
In line 461, "size_t offset = 0;" should be declared before any computation; otherwise it will cause a compilation error.

```
I:\Libraries\xgboost\src\c_api\c_api.cc(416): error C2146: Missing ";" before "offset" [I:\Libraries\xgboost\build\objxgboost.vcxproj]
```
2018-02-09 01:32:01 -08:00
Scott Lundberg
d878c36c84 Add SHAP interaction effects, fix minor bug, and add cox loss (#3043)
* Add interaction effects and cox loss

* Minimize whitespace changes

* Cox loss now no longer needs a pre-sorted dataset.

* Address code review comments

* Remove mem check, rename to pred_interactions, include bias

* Make lint happy

* More lint fixes

* Fix cox loss indexing

* Fix main effects and tests

* Fix lint

* Use half interaction values on the off-diagonals

* Fix lint again
2018-02-07 20:38:01 -06:00
Jonas
077abb35cd fix relative link to demo (#3066) 2018-02-07 01:09:03 -06:00
Vadim Khotilovich
94e655329f Replacing cout with LOG (#3076)
* change cout to LOG

* lint fix
2018-02-06 02:00:34 -06:00
Sergei Lebedev
7c99e90ecd [jvm-packages] Declared Spark as provided in the POM (#3093)
* [jvm-packages] Explicitly declared Spark dependencies as provided

* Removed noop spark-2.x profile
2018-02-05 10:06:06 -08:00
Peter M. Landwehr
86bf930497 Fix typo: cutomize -> customize (#3073) 2018-02-04 22:56:04 +01:00
Andrew V. Adinetz
24c2e41287 Fixed the bug with illegal memory access in test_large_sizes.py with 4 GPUs. (#3068)
- thrust::copy() called from dvec::copy() for gpairs invoked a GPU kernel instead of
  cudaMemcpy()
- this resulted in illegal memory access if the GPU running the kernel could not access
  the data being copied
- new version of dvec::copy() for thrust::device_ptr iterators calls cudaMemcpy(),
  avoiding the problem.
2018-02-01 16:54:46 +13:00
Tong He
98be9aef9a A fix for CRAN submission of version 0.7-0 (#3061)
* modify test_helper.R

* fix noLD

* update desc

* fix solaris test

* fix desc

* improve fix

* fix url
2018-01-27 17:06:28 -08:00
Vadim Khotilovich
c88bae112e change cmd to cmd.exe in appveyor (#3071) 2018-01-26 12:27:33 -06:00
tomasatdatabricks
5ef684641b Fixed SparkParallelTracker to work with Spark2.3 (#3062) 2018-01-25 04:31:38 +01:00
Rory Mitchell
f87802f00c Fix GPU bugs (#3051)
* Change uint to unsigned int

* Fix no root predictions bug

* Remove redundant splitting due to numerical instability
2018-01-23 13:14:15 +13:00
Yun Ni
8b2f4e2d39 [jvm-packages] Move cache files to TempDirectory and delete this directory after XGBoost job finishes (#3022)
* [jvm-packages] Move cache files to tmp dir and delete on exit

* Delete the cache dir when watches are deleted
2018-01-20 21:13:25 -08:00
Yun Ni
3f3f54bcad [jvm-packages] Update docs and unify the terminology (#3024)
* [jvm-packages] Move cache files to tmp dir and delete on exit

* [jvm-packages] Update docs and unify terminology

* Address CR Comments
2018-01-16 17:16:55 +01:00
Thejaswi
84ab74f3a5 Objective function evaluation on GPU with minimal PCIe transfers (#2935)
* Added GPU objective function and no-copy interface.

- xgboost::HostDeviceVector<T> syncs automatically between host and device
- no-copy interfaces have been added
- default implementations just sync the data to host
  and call the implementations with std::vector
- GPU objective function, predictor, histogram updater process data
  directly on GPU
2018-01-12 21:33:39 +13:00
Nan Zhu
a187ed6c8f [jvm-packages] tiny fix for empty partition in predict (#3014)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* tiny fix for empty partition in predict

* further fix
2018-01-07 08:34:18 -08:00
Yun Ni
740eba42f7 [jvm-packages] Add back the overridden finalize() method for SBooster (#3011)
* Convert SIGSEGV to XGBoostError

* Address CR Comments

* Address CR Comments
2018-01-06 14:07:37 -08:00
Yun Ni
65fb4e3f5c [jvm-packages] Prevent dispose being called on unfinalized JBooster (#3005)
* [jvm-packages] Prevent dispose being called twice when finalizing

* Convert SIGSEGV to XGBoostError

* Avoid creating a new SBooster with the same JBooster

* Address CR Comments
2018-01-06 09:46:52 -08:00
Nan Zhu
9747ea2acb [jvm-packages] fix the pattern in dev script and version mismatch (#3009)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* fix the pattern in dev script and version mismatch
2018-01-06 06:59:38 -08:00
Zhirui Wang
bf43671841 update macOS gcc@5 installation guide (#3003)
After installing ``gcc@5``, ``CMAKE_C_COMPILER`` is not automatically set to gcc-5 in some macOS environments, so the installation of xgboost will still fail. Manually setting the compiler solves the problem.
2018-01-04 11:28:26 -08:00
Nan Zhu
14c6392381 [jvm-packages] add dev script to update version and update versions (#2998)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* add dev script to update version and update versions
2018-01-01 21:28:53 -08:00
Vadim Khotilovich
526801cdb3 [R] fix for the 32 bit windows issue (#2994)
* [R] disable thread_local for 32bit windows

* [R] require C++11 and GNU make in DESCRIPTION

* [R] enable 32+64 build and check in appveyor
2017-12-31 14:18:50 -08:00
207 changed files with 8747 additions and 4191 deletions

.clang-tidy (new file)

@@ -0,0 +1,22 @@
Checks: 'modernize-*,-modernize-make-*,-modernize-raw-string-literal,google-*,-google-default-arguments,-clang-diagnostic-#pragma-messages,readability-identifier-naming'
CheckOptions:
- { key: readability-identifier-naming.ClassCase, value: CamelCase }
- { key: readability-identifier-naming.StructCase, value: CamelCase }
- { key: readability-identifier-naming.TypeAliasCase, value: CamelCase }
- { key: readability-identifier-naming.TypedefCase, value: CamelCase }
- { key: readability-identifier-naming.TypeTemplateParameterCase, value: CamelCase }
- { key: readability-identifier-naming.LocalVariableCase, value: lower_case }
- { key: readability-identifier-naming.MemberCase, value: lower_case }
- { key: readability-identifier-naming.PrivateMemberSuffix, value: '_' }
- { key: readability-identifier-naming.ProtectedMemberSuffix, value: '_' }
- { key: readability-identifier-naming.EnumCase, value: CamelCase }
- { key: readability-identifier-naming.EnumConstant, value: CamelCase }
- { key: readability-identifier-naming.EnumConstantPrefix, value: k }
- { key: readability-identifier-naming.GlobalConstantCase, value: CamelCase }
- { key: readability-identifier-naming.GlobalConstantPrefix, value: k }
- { key: readability-identifier-naming.StaticConstantCase, value: CamelCase }
- { key: readability-identifier-naming.StaticConstantPrefix, value: k }
- { key: readability-identifier-naming.ConstexprVariableCase, value: CamelCase }
- { key: readability-identifier-naming.ConstexprVariablePrefix, value: k }
- { key: readability-identifier-naming.FunctionCase, value: CamelCase }
- { key: readability-identifier-naming.NamespaceCase, value: lower_case }

.gitignore

@@ -15,7 +15,6 @@
*.Rcheck
*.rds
*.tar.gz
#*txt*
*conf
*buffer
*model
@@ -47,13 +46,12 @@ Debug
*.cpage.col
*.cpage
*.Rproj
./xgboost
./xgboost.mpi
./xgboost.mock
#.Rbuildignore
R-package.Rproj
*.cache*
#java
# java
java/xgboost4j/target
java/xgboost4j/tmp
java/xgboost4j-demo/target
@@ -68,10 +66,9 @@ nb-configuration*
.settings/
build
config.mk
xgboost
/xgboost
*.data
build_plugin
dmlc-core
.idea
recommonmark/
tags

.travis.yml

@@ -44,10 +44,12 @@ matrix:
addons:
apt:
sources:
- llvm-toolchain-trusty-5.0
- ubuntu-toolchain-r-test
- george-edison55-precise-backports
packages:
- cmake
- clang
- clang-tidy-5.0
- cmake-data
- doxygen
- wget

CMakeLists.txt

@@ -14,8 +14,8 @@ option(USE_NCCL "Build using NCCL for multi-GPU. Also requires USE_CUDA")
option(JVM_BINDINGS "Build JVM bindings" OFF)
option(GOOGLE_TEST "Build google tests" OFF)
option(R_LIB "Build shared library for R package" OFF)
set(GPU_COMPUTE_VER 35;50;52;60;61 CACHE STRING
"Space separated list of compute versions to be built against")
set(GPU_COMPUTE_VER "" CACHE STRING
"Space separated list of compute versions to be built against, e.g. '35 61'")
# Deprecation warning
if(PLUGIN_UPDATER_GPU)
@@ -106,7 +106,7 @@ endif()
# dmlc-core
add_subdirectory(dmlc-core)
set(LINK_LIBRARIES dmlccore rabit)
set(LINK_LIBRARIES dmlc rabit)
if(USE_CUDA)
@@ -122,16 +122,13 @@ if(USE_CUDA)
add_definitions(-DXGBOOST_USE_NCCL)
endif()
if((CUDA_VERSION_MAJOR EQUAL 9) OR (CUDA_VERSION_MAJOR GREATER 9))
message("CUDA 9.0 detected, adding Volta compute capability (7.0).")
set(GPU_COMPUTE_VER "${GPU_COMPUTE_VER};70")
endif()
set(GENCODE_FLAGS "")
format_gencode_flags("${GPU_COMPUTE_VER}" GENCODE_FLAGS)
message("cuda architecture flags: ${GENCODE_FLAGS}")
set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS};--expt-extended-lambda;--expt-relaxed-constexpr;${GENCODE_FLAGS};-lineinfo;")
if(NOT MSVC)
set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS};-Xcompiler -fPIC; -std=c++11")
set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS};-Xcompiler -fPIC; -Xcompiler -Werror; -std=c++11")
endif()
if(USE_NCCL)

CONTRIBUTORS.md

@@ -7,8 +7,8 @@ Committers
Committers are people who have made substantial contribution to the project and granted write access to the project.
* [Tianqi Chen](https://github.com/tqchen), University of Washington
- Tianqi is a PhD student working on large-scale machine learning; he is the creator of the project.
* [Tong He](https://github.com/hetong007), Simon Fraser University
- Tong is a master student working on data mining, he is the maintainer of xgboost R package.
* [Tong He](https://github.com/hetong007), Amazon AI
- Tong is an applied scientist in Amazon AI, he is the maintainer of xgboost R package.
* [Vadim Khotilovich](https://github.com/khotilov)
- Vadim contributes many improvements in R and core packages.
* [Bing Xu](https://github.com/antinucleon)
@@ -54,7 +54,8 @@ List of Contributors
* [Masaaki Horikoshi](https://github.com/sinhrks)
- Masaaki is the initial creator of xgboost python plotting module.
* [Hongliang Liu](https://github.com/phunterlau)
- Hongliang is the maintainer of xgboost python PyPI package for pip installation.
* [Hyunsu Cho](http://hyunsu-cho.io/)
- Hyunsu is the maintainer of the XGBoost Python package. He is in charge of submitting the Python package to Python Package Index (PyPI). He is also the initial author of the CPU 'hist' updater.
* [daiyl0320](https://github.com/daiyl0320)
- daiyl0320 contributed patches to make the xgboost distributed version more robust and scale stably on TB-scale datasets.
* [Huayi Zhang](https://github.com/irachex)

Jenkinsfile

@@ -7,9 +7,9 @@
dockerRun = 'tests/ci_build/ci_build.sh'
def buildMatrix = [
[ "enabled": true, "os" : "linux", "withGpu": true, "withOmp": true, "pythonVersion": "2.7" ],
[ "enabled": true, "os" : "linux", "withGpu": false, "withOmp": true, "pythonVersion": "2.7" ],
[ "enabled": false, "os" : "osx", "withGpu": false, "withOmp": false, "pythonVersion": "2.7" ],
[ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "9.1" ],
[ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "8.0" ],
[ "enabled": false, "os" : "linux", "withGpu": false, "withNccl": false, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "" ],
]
pipeline {
@@ -69,8 +69,7 @@ def buildFactory(buildName, conf) {
def os = conf["os"]
def nodeReq = conf["withGpu"] ? "${os} && gpu" : "${os}"
def dockerTarget = conf["withGpu"] ? "gpu" : "cpu"
[ ("cmake_${buildName}") : { buildPlatformCmake("cmake_${buildName}", conf, nodeReq, dockerTarget) },
("make_${buildName}") : { buildPlatformMake("make_${buildName}", conf, nodeReq, dockerTarget) }
[ ("${buildName}") : { buildPlatformCmake("${buildName}", conf, nodeReq, dockerTarget) }
]
}
@@ -81,6 +80,10 @@ def buildPlatformCmake(buildName, conf, nodeReq, dockerTarget) {
def opts = cmakeOptions(conf)
// Destination dir for artifacts
def distDir = "dist/${buildName}"
def dockerArgs = ""
if(conf["withGpu"]){
dockerArgs = "--build-arg CUDA_VERSION=" + conf["cudaVersion"]
}
// Build node - this is returned result
node(nodeReq) {
unstash name: 'srcs'
@@ -92,58 +95,33 @@ def buildPlatformCmake(buildName, conf, nodeReq, dockerTarget) {
""".stripMargin('|')
// Invoke command inside docker
sh """
${dockerRun} ${dockerTarget} tests/ci_build/build_via_cmake.sh ${opts}
${dockerRun} ${dockerTarget} tests/ci_build/test_${dockerTarget}.sh
${dockerRun} ${dockerTarget} bash -c "cd python-package; python setup.py bdist_wheel"
${dockerRun} ${dockerTarget} ${dockerArgs} tests/ci_build/build_via_cmake.sh ${opts}
${dockerRun} ${dockerTarget} ${dockerArgs} tests/ci_build/test_${dockerTarget}.sh
${dockerRun} ${dockerTarget} ${dockerArgs} bash -c "cd python-package; python setup.py bdist_wheel"
rm -rf "${distDir}"; mkdir -p "${distDir}/py"
cp xgboost "${distDir}"
cp -r lib "${distDir}"
cp -r python-package/dist "${distDir}/py"
# Test the wheel for compatibility on a barebones CPU container
${dockerRun} release ${dockerArgs} bash -c " \
auditwheel show xgboost-*-py2-none-any.whl
pip install --user python-package/dist/xgboost-*-py2-none-any.whl && \
python -m nose tests/python"
"""
archiveArtifacts artifacts: "${distDir}/**/*.*", allowEmptyArchive: true
}
}
/**
* Build platform via make
*/
def buildPlatformMake(buildName, conf, nodeReq, dockerTarget) {
def opts = makeOptions(conf)
// Destination dir for artifacts
def distDir = "dist/${buildName}"
// Build node
node(nodeReq) {
unstash name: 'srcs'
echo """
|===== XGBoost Make build =====
| dockerTarget: ${dockerTarget}
| makeOpts : ${opts}
|=========================
""".stripMargin('|')
// Invoke command inside docker
sh """
${dockerRun} ${dockerTarget} tests/ci_build/build_via_make.sh ${opts}
"""
}
}
def makeOptions(conf) {
return ([
conf["withGpu"] ? 'PLUGIN_UPDATER_GPU=ON' : 'PLUGIN_UPDATER_GPU=OFF',
conf["withOmp"] ? 'USE_OPENMP=1' : 'USE_OPENMP=0']
).join(" ")
}
def cmakeOptions(conf) {
return ([
conf["withGpu"] ? '-DPLUGIN_UPDATER_GPU:BOOL=ON' : '',
conf["withGpu"] ? '-DUSE_CUDA=ON' : '-DUSE_CUDA=OFF',
conf["withNccl"] ? '-DUSE_NCCL=ON' : '-DUSE_NCCL=OFF',
conf["withOmp"] ? '-DOPEN_MP:BOOL=ON' : '']
).join(" ")
}
def getBuildName(conf) {
def gpuLabel = conf['withGpu'] ? "_gpu" : "_cpu"
def gpuLabel = conf['withGpu'] ? "_cuda" + conf['cudaVersion'] : "_cpu"
def ompLabel = conf['withOmp'] ? "_omp" : ""
def pyLabel = "_py${conf['pythonVersion']}"
return "${conf['os']}${gpuLabel}${ompLabel}${pyLabel}"

Makefile

@@ -198,7 +198,11 @@ endif
clean:
$(RM) -rf build build_plugin lib bin *~ */*~ */*/*~ */*/*/*~ */*.o */*/*.o */*/*/*.o #xgboost
$(RM) -rf build_tests *.gcov tests/cpp/xgboost_test
cd R-package/src; $(RM) -rf rabit src include dmlc-core amalgamation *.so *.dll; cd $(ROOTDIR)
if [ -d "R-package/src" ]; then \
cd R-package/src; \
$(RM) -rf rabit src include dmlc-core amalgamation *.so *.dll; \
cd $(ROOTDIR); \
fi
clean_all: clean
cd $(DMLC_CORE); "$(MAKE)" clean; cd $(ROOTDIR)
@@ -212,16 +216,28 @@ pypack: ${XGBOOST_DYLIB}
cp ${XGBOOST_DYLIB} python-package/xgboost
cd python-package; tar cf xgboost.tar xgboost; cd ..
# create pip installation pack for PyPI
# create pip source dist (sdist) pack for PyPI
pippack: clean_all
rm -rf xgboost-python
# remove symlinked directories in python-package/xgboost
rm -rf python-package/xgboost/lib
rm -rf python-package/xgboost/dmlc-core
rm -rf python-package/xgboost/include
rm -rf python-package/xgboost/make
rm -rf python-package/xgboost/rabit
rm -rf python-package/xgboost/src
cp -r python-package xgboost-python
cp -r Makefile xgboost-python/xgboost/
cp -r make xgboost-python/xgboost/
cp -r src xgboost-python/xgboost/
cp -r tests xgboost-python/xgboost/
cp -r include xgboost-python/xgboost/
cp -r dmlc-core xgboost-python/xgboost/
cp -r rabit xgboost-python/xgboost/
# Use setup_pip.py instead of setup.py
mv xgboost-python/setup_pip.py xgboost-python/setup.py
# Build sdist tarball
cd xgboost-python; python setup.py sdist; mv dist/*.tar.gz ..; cd ..
# Script to make a clean installable R package.
Rpack: clean_all
@@ -245,13 +261,15 @@ Rpack: clean_all
cat R-package/src/Makevars.in|sed '2s/.*/PKGROOT=./' | sed '3s/.*/ENABLE_STD_THREAD=0/' > xgboost/src/Makevars.in
cp xgboost/src/Makevars.in xgboost/src/Makevars.win
sed -i -e 's/@OPENMP_CXXFLAGS@/$$\(SHLIB_OPENMP_CFLAGS\)/g' xgboost/src/Makevars.win
bash R-package/remove_warning_suppression_pragma.sh
rm xgboost/remove_warning_suppression_pragma.sh
Rbuild: Rpack
R CMD build --no-build-vignettes xgboost
rm -rf xgboost
Rcheck: Rbuild
R CMD check xgboost*.tar.gz
R CMD check xgboost*.tar.gz
-include build/*.d
-include build/*/*.d

NEWS.md

@@ -3,6 +3,65 @@ XGBoost Change Log
This file records the changes in xgboost library in reverse chronological order.
## v0.72 (2018.06.01)
* Starting with this release, we plan to make a new release every two months. See #3252 for more details.
* Fix a pathological behavior (near-zero second-order gradients) in multiclass objective (#3304)
* Tree dumps now use high precision in storing floating-point values (#3298)
* Submodules `rabit` and `dmlc-core` have been brought up to date, bringing bug fixes (#3330, #3221).
* GPU support
- Continuous integration tests for GPU code (#3294, #3309)
- GPU accelerated coordinate descent algorithm (#3178)
- Abstract 1D vector class now works with multiple GPUs (#3287)
- Generate PTX code for most recent architecture (#3316)
- Fix a memory bug on NVIDIA K80 cards (#3293)
- Address performance instability for single-GPU, multi-core machines (#3324)
* Python package
- FreeBSD support (#3247)
- Validation of feature names in `Booster.predict()` is now optional (#3323)
* Updated Sklearn API
- Validation sets now support instance weights (#2354)
- `XGBClassifier.predict_proba()` no longer supports the `output_margin` option (#3343). See BREAKING CHANGES below.
* R package:
- Better handling of NULL in `print.xgb.Booster()` (#3338)
- Comply with CRAN policy by removing compiler warning suppression (#3329)
- Updated CRAN submission
* JVM packages
- JVM packages will now use the same versioning scheme as other packages (#3253)
- Update Spark to 2.3 (#3254)
- Add scripts to cross-build and deploy artifacts (#3276, #3307)
- Fix a compilation error for Scala 2.10 (#3332)
* BREAKING CHANGES
- `XGBClassifier.predict_proba()` no longer accepts the parameter `output_margin`. The parameter makes no sense for `predict_proba()`, because the method is meant to predict class probabilities, not raw margin scores.
## v0.71 (2018.04.11)
* This is a minor release, mainly motivated by issues concerning `pip install`, e.g. #2426, #3189, #3118, and #3194.
With this release, users of Linux and MacOS will be able to run `pip install` for the most part.
* Refactored linear booster class (`gblinear`), so as to support multiple coordinate descent updaters (#3103, #3134). See BREAKING CHANGES below.
* Fix slow training for multiclass classification with high number of classes (#3109)
* Fix a corner case in approximate quantile sketch (#3167). Applicable for 'hist' and 'gpu_hist' algorithms
* Fix memory leak in DMatrix (#3182)
* New functionality
- Better linear booster class (#3103, #3134)
- Pairwise SHAP interaction effects (#3043)
- Cox loss (#3043)
- AUC-PR metric for ranking task (#3172)
- Monotonic constraints for 'hist' algorithm (#3085)
* GPU support
- Create an abstract 1D vector class that moves data seamlessly between the main and GPU memory (#2935, #3116, #3068). This eliminates unnecessary PCIe data transfer during training time.
- Fix minor bugs (#3051, #3217)
- Fix compatibility error for CUDA 9.1 (#3218)
* Python package:
- Correctly handle parameter `verbose_eval=0` (#3115)
* R package:
- Eliminate segmentation fault on 32-bit Windows platform (#2994)
* JVM packages
- Fix a memory bug involving double-freeing Booster objects (#3005, #3011)
- Handle empty partition in predict (#3014)
- Update docs and unify terminology (#3024)
- Delete cache files after job finishes (#3022)
- Compatibility fixes for latest Spark versions (#3062, #3093)
* BREAKING CHANGES: Updated linear modelling algorithms. In particular, L1/L2 regularisation penalties are now normalised by the number of training examples. This makes the implementation consistent with sklearn/glmnet. L2 regularisation has also been removed from the intercept. To reproduce linear models with the old regularisation behaviour, the alpha/lambda regularisation parameters can be manually scaled by dividing them by the number of training examples.
## v0.7 (2017.12.30)
* **This version represents a major change from the last release (v0.6), which was released one year and half ago.**
* Updated Sklearn API

R-package/DESCRIPTION

@@ -1,12 +1,34 @@
Package: xgboost
Type: Package
Title: Extreme Gradient Boosting
Version: 0.6.4.8
Date: 2017-12-05
Author: Tianqi Chen <tianqi.tchen@gmail.com>, Tong He <hetong007@gmail.com>,
Michael Benesty <michael@benesty.fr>, Vadim Khotilovich <khotilovich@gmail.com>,
Yuan Tang <terrytangyuan@gmail.com>
Maintainer: Tong He <hetong007@gmail.com>
Version: 0.71.1
Date: 2018-05-11
Authors@R: c(
person("Tianqi", "Chen", role = c("aut"),
email = "tianqi.tchen@gmail.com"),
person("Tong", "He", role = c("aut", "cre"),
email = "hetong007@gmail.com"),
person("Michael", "Benesty", role = c("aut"),
email = "michael@benesty.fr"),
person("Vadim", "Khotilovich", role = c("aut"),
email = "khotilovich@gmail.com"),
person("Yuan", "Tang", role = c("aut"),
email = "terrytangyuan@gmail.com",
comment = c(ORCID = "0000-0001-5243-233X")),
person("Hyunsu", "Cho", role = c("aut"),
email = "chohyu01@cs.washington.edu"),
person("Kailong", "Chen", role = c("aut")),
person("Rory", "Mitchell", role = c("aut")),
person("Ignacio", "Cano", role = c("aut")),
person("Tianyi", "Zhou", role = c("aut")),
person("Mu", "Li", role = c("aut")),
person("Junyuan", "Xie", role = c("aut")),
person("Min", "Lin", role = c("aut")),
person("Yifeng", "Geng", role = c("aut")),
person("Yutian", "Li", role = c("aut")),
person("XGBoost contributors", role = c("cph"),
comment = "base XGBoost implementation")
)
Description: Extreme Gradient Boosting, which is an efficient implementation
of the gradient boosting framework from Chen & Guestrin (2016) <doi:10.1145/2939672.2939785>.
This package is its R interface. The package includes efficient linear
@@ -19,6 +41,7 @@ Description: Extreme Gradient Boosting, which is an efficient implementation
License: Apache License (== 2.0) | file LICENSE
URL: https://github.com/dmlc/xgboost
BugReports: https://github.com/dmlc/xgboost/issues
NeedsCompilation: yes
VignetteBuilder: knitr
Suggests:
knitr,
@@ -38,3 +61,4 @@ Imports:
magrittr (>= 1.5),
stringi (>= 0.5.2)
RoxygenNote: 6.0.1
SystemRequirements: GNU make, C++11

R-package/NAMESPACE

@@ -18,6 +18,7 @@ export("xgb.parameters<-")
export(cb.cv.predict)
export(cb.early.stop)
export(cb.evaluation.log)
export(cb.gblinear.history)
export(cb.print.evaluation)
export(cb.reset.parameters)
export(cb.save.model)
@@ -32,6 +33,7 @@ export(xgb.attributes)
export(xgb.create.features)
export(xgb.cv)
export(xgb.dump)
export(xgb.gblinear.history)
export(xgb.ggplot.deepness)
export(xgb.ggplot.importance)
export(xgb.importance)
@@ -49,10 +51,11 @@ export(xgboost)
import(methods)
importClassesFrom(Matrix,dgCMatrix)
importClassesFrom(Matrix,dgeMatrix)
importFrom(Matrix,cBind)
importFrom(Matrix,colSums)
importFrom(Matrix,sparse.model.matrix)
importFrom(Matrix,sparseMatrix)
importFrom(Matrix,sparseVector)
importFrom(Matrix,t)
importFrom(data.table,":=")
importFrom(data.table,as.data.table)
importFrom(data.table,data.table)

R-package/R/callbacks.R

@@ -524,6 +524,223 @@ cb.cv.predict <- function(save_models = FALSE) {
}
#' Callback closure for collecting the model coefficients history of a gblinear booster
#' during its training.
#'
#' @param sparse when set to FALSE/TRUE, a dense/sparse matrix is used to store the result.
#' Sparse format is useful when one expects only a subset of coefficients to be non-zero,
#' when using the "thrifty" feature selector with a fairly small number of top features
#' selected per iteration.
#'
#' @details
#' To keep things fast and simple, gblinear booster does not internally store the history of linear
#' model coefficients at each boosting iteration. This callback provides a workaround for storing
#' the coefficients' path, by extracting them after each training iteration.
#'
#' Callback function expects the following values to be set in its calling frame:
#' \code{bst} (or \code{bst_folds}).
#'
#' @return
#' Results are stored in the \code{coefs} element of the closure.
#' The \code{\link{xgb.gblinear.history}} convenience function provides an easy way to access it.
#' With \code{xgb.train}, it is either a dense or a sparse matrix.
#' While with \code{xgb.cv}, it is a list (one element per fold) of such matrices.
#'
#' @seealso
#' \code{\link{callbacks}}, \code{\link{xgb.gblinear.history}}.
#'
#' @examples
#' #### Binary classification:
#' #
#' # In the iris dataset, it is hard to linearly separate Versicolor class from the rest
#' # without considering the 2nd order interactions:
#' require(magrittr)
#' x <- model.matrix(Species ~ .^2, iris)[,-1]
#' colnames(x)
#' dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"))
#' param <- list(booster = "gblinear", objective = "reg:logistic", eval_metric = "auc",
#' lambda = 0.0003, alpha = 0.0003, nthread = 2)
#' # For 'shotgun', which is the default linear updater, using high eta values may result in
#' # unstable behaviour in some datasets. With this simple dataset, however, the high learning
#' # rate does not break the convergence, but allows us to illustrate the typical pattern of
#' # "stochastic explosion" behaviour of this lock-free algorithm at early boosting iterations.
#' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 1.,
#' callbacks = list(cb.gblinear.history()))
#' # Extract the coefficients' path and plot them vs boosting iteration number:
#' coef_path <- xgb.gblinear.history(bst)
#' matplot(coef_path, type = 'l')
#'
#' # With the deterministic coordinate descent updater, it is safer to use higher learning rates.
#' # Will try the classical componentwise boosting which selects a single best feature per round:
#' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 0.8,
#' updater = 'coord_descent', feature_selector = 'thrifty', top_k = 1,
#' callbacks = list(cb.gblinear.history()))
#' xgb.gblinear.history(bst) %>% matplot(type = 'l')
#' # Componentwise boosting is known to have similar effect to Lasso regularization.
#' # Try experimenting with various values of top_k, eta, nrounds,
#' # as well as different feature_selectors.
#'
#' # For xgb.cv:
#' bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 100, eta = 0.8,
#' callbacks = list(cb.gblinear.history()))
#' # coefficients in the CV fold #3
#' xgb.gblinear.history(bst)[[3]] %>% matplot(type = 'l')
#'
#'
#' #### Multiclass classification:
#' #
#' dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1)
#' param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
#' lambda = 0.0003, alpha = 0.0003, nthread = 2)
#' # For the default linear updater 'shotgun' it sometimes is helpful
#' # to use smaller eta to reduce instability
#' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 70, eta = 0.5,
#' callbacks = list(cb.gblinear.history()))
#' # Will plot the coefficient paths separately for each class:
#' xgb.gblinear.history(bst, class_index = 0) %>% matplot(type = 'l')
#' xgb.gblinear.history(bst, class_index = 1) %>% matplot(type = 'l')
#' xgb.gblinear.history(bst, class_index = 2) %>% matplot(type = 'l')
#'
#' # CV:
#' bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 70, eta = 0.5,
#' callbacks = list(cb.gblinear.history(FALSE)))
#' # 1st fold of 1st class
#' xgb.gblinear.history(bst, class_index = 0)[[1]] %>% matplot(type = 'l')
#'
#' @export
cb.gblinear.history <- function(sparse=FALSE) {
coefs <- NULL
init <- function(env) {
if (!is.null(env$bst)) { # xgb.train:
coef_path <- list()
} else if (!is.null(env$bst_folds)) { # xgb.cv:
coef_path <- rep(list(), length(env$bst_folds))
} else stop("Parent frame has neither 'bst' nor 'bst_folds'")
}
# convert from list to (sparse) matrix
list2mat <- function(coef_list) {
if (sparse) {
coef_mat <- sparseMatrix(x = unlist(lapply(coef_list, slot, "x")),
i = unlist(lapply(coef_list, slot, "i")),
p = c(0, cumsum(sapply(coef_list, function(x) length(x@x)))),
dims = c(length(coef_list[[1]]), length(coef_list)))
return(t(coef_mat))
} else {
return(do.call(rbind, coef_list))
}
}
finalizer <- function(env) {
if (length(coefs) == 0)
return()
if (!is.null(env$bst)) { # # xgb.train:
coefs <<- list2mat(coefs)
} else { # xgb.cv:
# first lapply transposes the list
coefs <<- lapply(seq_along(coefs[[1]]), function(i) lapply(coefs, "[[", i)) %>%
lapply(function(x) list2mat(x))
}
}
extract.coef <- function(env) {
if (!is.null(env$bst)) { # # xgb.train:
cf <- as.numeric(grep('(booster|bias|weigh)', xgb.dump(env$bst), invert = TRUE, value = TRUE))
if (sparse) cf <- as(cf, "sparseVector")
} else { # xgb.cv:
cf <- vector("list", length(env$bst_folds))
for (i in seq_along(env$bst_folds)) {
dmp <- xgb.dump(xgb.handleToBooster(env$bst_folds[[i]]$bst))
cf[[i]] <- as.numeric(grep('(booster|bias|weigh)', dmp, invert = TRUE, value = TRUE))
if (sparse) cf[[i]] <- as(cf[[i]], "sparseVector")
}
}
cf
}
callback <- function(env = parent.frame(), finalize = FALSE) {
if (is.null(coefs)) init(env)
if (finalize) return(finalizer(env))
cf <- extract.coef(env)
coefs <<- c(coefs, list(cf))
}
attr(callback, 'call') <- match.call()
attr(callback, 'name') <- 'cb.gblinear.history'
callback
}
#' Extract gblinear coefficients history.
#'
#' A helper function to extract the matrix of linear coefficients' history
#' from a gblinear model created while using the \code{cb.gblinear.history()}
#' callback.
#'
#' @param model either an \code{xgb.Booster} or a result of \code{xgb.cv()}, trained
#' using the \code{cb.gblinear.history()} callback.
#' @param class_index zero-based class index to extract the coefficients for only that
#' specific class in a multinomial multiclass model. When it is NULL, all the
#' coefficients are returned. Has no effect in non-multiclass models.
#'
#' @return
#' For an \code{xgb.train} result, a matrix (either dense or sparse) with the columns
#' corresponding to each iteration's coefficients (in the order that \code{xgb.dump()} would
#' return) and the rows corresponding to boosting iterations.
#'
#' For an \code{xgb.cv} result, a list of such matrices is returned with the elements
#' corresponding to CV folds.
#'
#' @export
xgb.gblinear.history <- function(model, class_index = NULL) {
if (!(inherits(model, "xgb.Booster") ||
inherits(model, "xgb.cv.synchronous")))
stop("model must be an object of either xgb.Booster or xgb.cv.synchronous class")
is_cv <- inherits(model, "xgb.cv.synchronous")
if (is.null(model[["callbacks"]]) || is.null(model$callbacks[["cb.gblinear.history"]]))
stop("model must be trained while using the cb.gblinear.history() callback")
if (!is_cv) {
# extract num_class & num_feat from the internal model
dmp <- xgb.dump(model)
if(length(dmp) < 2 || dmp[2] != "bias:")
stop("It does not appear to be a gblinear model")
dmp <- dmp[-c(1,2)]
n <- which(dmp == 'weight:')
if(length(n) != 1)
stop("It does not appear to be a gblinear model")
num_class <- n - 1
num_feat <- (length(dmp) - 4) / num_class
} else {
# in case of CV, the object is expected to have this info
if (model$params$booster != "gblinear")
stop("It does not appear to be a gblinear model")
num_class <- NVL(model$params$num_class, 1)
num_feat <- model$nfeatures
if (is.null(num_feat))
stop("This xgb.cv result does not have nfeatures info")
}
if (!is.null(class_index) &&
num_class > 1 &&
(class_index[1] < 0 || class_index[1] >= num_class))
stop("class_index has to be within [0,", num_class - 1, "]")
coef_path <- environment(model$callbacks$cb.gblinear.history)[["coefs"]]
if (!is.null(class_index) && num_class > 1) {
coef_path <- if (is.list(coef_path)) {
lapply(coef_path,
function(x) x[, seq(1 + class_index, by=num_class, length.out=num_feat)])
} else {
coef_path <- coef_path[, seq(1 + class_index, by=num_class, length.out=num_feat)]
}
}
coef_path
}
#
# Internal utility functions for callbacks ------------------------------------
#

R-package/R/xgb.Booster.R

@@ -37,11 +37,14 @@ xgb.handleToBooster <- function(handle, raw = NULL) {
# Check whether xgb.Booster.handle is null
# internal utility function
is.null.handle <- function(handle) {
if (is.null(handle)) return(TRUE)
if (!identical(class(handle), "xgb.Booster.handle"))
stop("argument type must be xgb.Booster.handle")
if (is.null(handle) || .Call(XGCheckNullPtr_R, handle))
if (.Call(XGCheckNullPtr_R, handle))
return(TRUE)
return(FALSE)
}
@@ -537,7 +540,7 @@ xgb.ntree <- function(bst) {
print.xgb.Booster <- function(x, verbose = FALSE, ...) {
cat('##### xgb.Booster\n')
valid_handle <- is.null.handle(x$handle)
valid_handle <- !is.null.handle(x$handle)
if (!valid_handle)
cat("Handle is invalid! Suggest using xgb.Booster.complete\n")

R-package/R/xgb.create.features.R

@@ -83,5 +83,5 @@ xgb.create.features <- function(model, data, ...){
check.deprecation(...)
pred_with_leaf <- predict(model, data, predleaf = TRUE)
cols <- lapply(as.data.frame(pred_with_leaf), factor)
cBind(data, sparse.model.matrix( ~ . -1, cols))
cbind(data, sparse.model.matrix( ~ . -1, cols))
}

R-package/R/xgb.cv.R

@@ -34,6 +34,7 @@
#' \item \code{rmse} Rooted mean square error
#' \item \code{logloss} negative log-likelihood function
#' \item \code{auc} Area under curve
#' \item \code{aucpr} Area under PR curve
#' \item \code{merror} Exact matching error, used to evaluate multi-class classification
#' }
#' @param obj customized objective function. Returns gradient and second order
@@ -82,12 +83,13 @@
#' \item \code{params} parameters that were passed to the xgboost library. Note that it does not
#' capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
#' \item \code{callbacks} callback functions that were either automatically assigned or
#' explicitely passed.
#' explicitly passed.
#' \item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
#' first column corresponding to iteration number and the rest corresponding to the
#' CV-based evaluation means and standard deviations for the training and test CV-sets.
#' It is created by the \code{\link{cb.evaluation.log}} callback.
#' \item \code{niter} number of boosting iterations.
#' \item \code{nfeatures} number of features in training data.
#' \item \code{folds} the list of CV folds' indices - either those passed through the \code{folds}
#' parameter or randomly generated.
#' \item \code{best_iteration} iteration number with the best evaluation metric value
@@ -184,6 +186,7 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
handle <- xgb.Booster.handle(params, list(dtrain, dtest))
list(dtrain = dtrain, bst = handle, watchlist = list(train = dtrain, test=dtest), index = folds[[k]])
})
rm(dall)
# a "basket" to collect some results from callbacks
basket <- list()
@@ -221,6 +224,7 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
callbacks = callbacks,
evaluation_log = evaluation_log,
niter = end_iteration,
nfeatures = ncol(data),
folds = folds
)
ret <- c(ret, basket)

R-package/R/xgb.dump.R

@@ -30,7 +30,8 @@
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' # save the model in file 'xgb.model.dump'
#' xgb.dump(bst, 'xgb.model.dump', with_stats = TRUE)
#' dump_path = file.path(tempdir(), 'model.dump')
#' xgb.dump(bst, dump_path, with_stats = TRUE)
#'
#' # print the model without saving it to a file
#' print(xgb.dump(bst, with_stats = TRUE))

R-package/R/xgb.train.R

@@ -121,12 +121,13 @@
#' \itemize{
#' \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
#' \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
#' \item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss/}
#' \item \code{mlogloss} multiclass logloss. \url{http://wiki.fast.ai/index.php/Log_Loss}
#' \item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
#' By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
#' Different threshold (e.g., 0.) could be specified as "error@0."
#' \item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
#' \item \code{auc} Area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#'Area_under_curve} for ranking evaluation.
#' \item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
#' \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
#' }
#'
@@ -162,6 +163,7 @@
#' (only available with early stopping).
#' \item \code{feature_names} names of the training dataset features
#' (only when column names were defined in training data).
#' \item \code{nfeatures} number of features in training data.
#' }
#'
#' @seealso
@@ -351,8 +353,8 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
if (inherits(xgb_model, 'xgb.Booster') &&
!is_update &&
!is.null(xgb_model$evaluation_log) &&
all.equal(colnames(evaluation_log),
colnames(xgb_model$evaluation_log))) {
isTRUE(all.equal(colnames(evaluation_log),
colnames(xgb_model$evaluation_log)))) {
evaluation_log <- rbindlist(list(xgb_model$evaluation_log, evaluation_log))
}
bst$evaluation_log <- evaluation_log
@@ -363,6 +365,7 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
bst$callbacks <- callbacks
if (!is.null(colnames(dtrain)))
bst$feature_names <- colnames(dtrain)
bst$nfeatures <- ncol(dtrain)
return(bst)
}


@@ -77,10 +77,11 @@ NULL
# Various imports
#' @importClassesFrom Matrix dgCMatrix dgeMatrix
#' @importFrom Matrix cBind
#' @importFrom Matrix colSums
#' @importFrom Matrix sparse.model.matrix
#' @importFrom Matrix sparseVector
#' @importFrom Matrix sparseMatrix
#' @importFrom Matrix t
#' @importFrom data.table data.table
#' @importFrom data.table is.data.table
#' @importFrom data.table as.data.table

R-package/README.md

@@ -30,4 +30,4 @@ Examples
Development
-----------
* See the [R Package section](https://xgboost.readthedocs.io/en/latest/how_to/contribute.html#r-package) of the contributiors guide.
* See the [R Package section](https://xgboost.readthedocs.io/en/latest/how_to/contribute.html#r-package) of the contributors guide.

R-package/configure.win (new empty file)

R-package/demo/basic_walkthrough.R

@@ -99,7 +99,8 @@ err <- as.numeric(sum(as.integer(pred > 0.5) != label))/length(label)
print(paste("test-error=", err))
# You can dump the tree you learned using xgb.dump into a text file
xgb.dump(bst, "dump.raw.txt", with_stats = T)
dump_path = file.path(tempdir(), 'dump.raw.txt')
xgb.dump(bst, dump_path, with_stats = T)
# Finally, you can check which features are the most important.
print("Most important features (look at column Gain):")

R-package/demo/predict_leaf_indices.R

@@ -32,7 +32,7 @@ create.new.tree.features <- function(model, original.features){
leaf.id <- sort(unique(pred_with_leaf[,i]))
cols[[i]] <- factor(x = pred_with_leaf[,i], level = leaf.id)
}
cBind(original.features, sparse.model.matrix( ~ . -1, as.data.frame(cols)))
cbind(original.features, sparse.model.matrix( ~ . -1, as.data.frame(cols)))
}
# Convert previous features to one hot encoding

R-package/man/cb.gblinear.history.Rd

@@ -0,0 +1,95 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.gblinear.history}
\alias{cb.gblinear.history}
\title{Callback closure for collecting the model coefficients history of a gblinear booster
during its training.}
\usage{
cb.gblinear.history(sparse = FALSE)
}
\arguments{
\item{sparse}{when set to FALSE/TRUE, a dense/sparse matrix is used to store the result.
Sparse format is useful when one expects only a subset of coefficients to be non-zero,
when using the "thrifty" feature selector with a fairly small number of top features
selected per iteration.}
}
\value{
Results are stored in the \code{coefs} element of the closure.
The \code{\link{xgb.gblinear.history}} convenience function provides an easy way to access it.
With \code{xgb.train}, it is either a dense or a sparse matrix.
While with \code{xgb.cv}, it is a list (one element per fold) of such matrices.
}
\description{
Callback closure for collecting the model coefficients history of a gblinear booster
during its training.
}
\details{
To keep things fast and simple, gblinear booster does not internally store the history of linear
model coefficients at each boosting iteration. This callback provides a workaround for storing
the coefficients' path, by extracting them after each training iteration.
Callback function expects the following values to be set in its calling frame:
\code{bst} (or \code{bst_folds}).
}
\examples{
#### Binary classification:
#
# In the iris dataset, it is hard to linearly separate Versicolor class from the rest
# without considering the 2nd order interactions:
require(magrittr)
x <- model.matrix(Species ~ .^2, iris)[,-1]
colnames(x)
dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"))
param <- list(booster = "gblinear", objective = "reg:logistic", eval_metric = "auc",
lambda = 0.0003, alpha = 0.0003, nthread = 2)
# For 'shotgun', which is the default linear updater, using high eta values may result in
# unstable behaviour in some datasets. With this simple dataset, however, the high learning
# rate does not break the convergence, but allows us to illustrate the typical pattern of
# "stochastic explosion" behaviour of this lock-free algorithm at early boosting iterations.
bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 1.,
callbacks = list(cb.gblinear.history()))
# Extract the coefficients' path and plot them vs boosting iteration number:
coef_path <- xgb.gblinear.history(bst)
matplot(coef_path, type = 'l')
# With the deterministic coordinate descent updater, it is safer to use higher learning rates.
# Will try the classical componentwise boosting which selects a single best feature per round:
bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 0.8,
updater = 'coord_descent', feature_selector = 'thrifty', top_k = 1,
callbacks = list(cb.gblinear.history()))
xgb.gblinear.history(bst) \%>\% matplot(type = 'l')
# Componentwise boosting is known to have similar effect to Lasso regularization.
# Try experimenting with various values of top_k, eta, nrounds,
# as well as different feature_selectors.
# For xgb.cv:
bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 100, eta = 0.8,
callbacks = list(cb.gblinear.history()))
# coefficients in the CV fold #3
xgb.gblinear.history(bst)[[3]] \%>\% matplot(type = 'l')
#### Multiclass classification:
#
dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1)
param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
lambda = 0.0003, alpha = 0.0003, nthread = 2)
# For the default linear updater, 'shotgun', it is sometimes helpful
# to use a smaller eta to reduce instability
bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 70, eta = 0.5,
callbacks = list(cb.gblinear.history()))
# Will plot the coefficient paths separately for each class:
xgb.gblinear.history(bst, class_index = 0) \%>\% matplot(type = 'l')
xgb.gblinear.history(bst, class_index = 1) \%>\% matplot(type = 'l')
xgb.gblinear.history(bst, class_index = 2) \%>\% matplot(type = 'l')
# CV:
bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 70, eta = 0.5,
callbacks = list(cb.gblinear.history(FALSE)))
# 1st fold of 1st class
xgb.gblinear.history(bst, class_index = 0)[[1]] \%>\% matplot(type = 'l')
}
\seealso{
\code{\link{callbacks}}, \code{\link{xgb.gblinear.history}}.
}


@@ -51,6 +51,7 @@ from each CV model. This parameter engages the \code{\link{cb.cv.predict}} callb
\item \code{rmse} Rooted mean square error
\item \code{logloss} negative log-likelihood function
\item \code{auc} Area under curve
\item \code{aucpr} Area under PR curve
\item \code{merror} Exact matching error, used to evaluate multi-class classification
}}
@@ -98,12 +99,13 @@ An object of class \code{xgb.cv.synchronous} with the following elements:
\item \code{params} parameters that were passed to the xgboost library. Note that it does not
capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
\item \code{callbacks} callback functions that were either automatically assigned or
explicitely passed.
explicitly passed.
\item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
first column corresponding to iteration number and the rest corresponding to the
CV-based evaluation means and standard deviations for the training and test CV-sets.
It is created by the \code{\link{cb.evaluation.log}} callback.
\item \code{niter} number of boosting iterations.
\item \code{nfeatures} number of features in training data.
\item \code{folds} the list of CV folds' indices - either those passed through the \code{folds}
parameter or randomly generated.
\item \code{best_iteration} iteration number with the best evaluation metric value


@@ -44,7 +44,8 @@ test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
# save the model in file 'xgb.model.dump'
xgb.dump(bst, 'xgb.model.dump', with_stats = TRUE)
dump.path = file.path(tempdir(), 'model.dump')
xgb.dump(bst, dump.path, with_stats = TRUE)
# print the model without saving it to a file
print(xgb.dump(bst, with_stats = TRUE))


@@ -0,0 +1,29 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{xgb.gblinear.history}
\alias{xgb.gblinear.history}
\title{Extract gblinear coefficients history.}
\usage{
xgb.gblinear.history(model, class_index = NULL)
}
\arguments{
\item{model}{either an \code{xgb.Booster} or a result of \code{xgb.cv()}, trained
using the \code{cb.gblinear.history()} callback.}
\item{class_index}{zero-based class index to extract the coefficients for only that
specific class in a multinomial multiclass model. When it is NULL, all the
coefficients are returned. Has no effect in non-multiclass models.}
}
\value{
For an \code{xgb.train} result, a matrix (either dense or sparse) is returned, with
columns corresponding to the model coefficients (in the order that \code{xgb.dump()}
would return them) and rows corresponding to boosting iterations.
For an \code{xgb.cv} result, a list of such matrices is returned with the elements
corresponding to CV folds.
}
\description{
A helper function to extract the matrix of linear coefficients' history
from a gblinear model created while using the \code{cb.gblinear.history()}
callback.
}


@@ -155,6 +155,7 @@ An object of class \code{xgb.Booster} with the following elements:
(only available with early stopping).
\item \code{feature_names} names of the training dataset features
(only when column names were defined in training data).
\item \code{nfeatures} number of features in training data.
}
}
\description{
@@ -179,12 +180,13 @@ The following is the list of built-in metrics for which Xgboost provides optimi
\itemize{
\item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
\item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
\item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss/}
\item \code{mlogloss} multiclass logloss. \url{http://wiki.fast.ai/index.php/Log_Loss}
\item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
Different threshold (e.g., 0.) could be specified as "error@0."
\item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
\item \code{auc} Area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#'Area_under_curve} for ranking evaluation.
\item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
\item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
}


@@ -0,0 +1,14 @@
#!/bin/bash
# remove all #pragma's that suppress compiler warnings
set -e
set -x
for file in xgboost/src/dmlc-core/include/dmlc/*.h
do
sed -i.bak -e 's/^.*#pragma GCC diagnostic.*$//' -e 's/^.*#pragma clang diagnostic.*$//' -e 's/^.*#pragma warning.*$//' "${file}"
done
for file in xgboost/src/dmlc-core/include/dmlc/*.h.bak
do
rm "${file}"
done
set +x
set +e


@@ -10,6 +10,12 @@ XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_
# disable the use of thread_local for 32 bit windows:
ifeq ($(R_OSTYPE)$(WIN),windows)
XGB_RFLAGS += -DDMLC_CXX11_THREAD_LOCAL=0
endif
$(foreach v, $(XGB_RFLAGS), $(warning $(v)))
PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= @OPENMP_CXXFLAGS@ $(SHLIB_PTHREAD_FLAGS)
PKG_LIBS = @OPENMP_CXXFLAGS@ $(SHLIB_PTHREAD_FLAGS)


@@ -4,7 +4,7 @@ ENABLE_STD_THREAD=0
# _*_ mode: Makefile; _*_
# This file is only used for windows compilation from github
# It will be replaced by Makevars in CRAN version
# It will be replaced with Makevars.in for the CRAN version
.PHONY: all xgblib
all: $(SHLIB)
$(SHLIB): xgblib
@@ -22,6 +22,12 @@ XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_
# disable the use of thread_local for 32 bit windows:
ifeq ($(R_OSTYPE)$(WIN),windows)
XGB_RFLAGS += -DDMLC_CXX11_THREAD_LOCAL=0
endif
$(foreach v, $(XGB_RFLAGS), $(warning $(v)))
PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)


@@ -19,10 +19,10 @@ extern SEXP XGBoosterBoostOneIter_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterCreate_R(SEXP);
extern SEXP XGBoosterDumpModel_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterEvalOneIter_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterGetAttr_R(SEXP, SEXP);
extern SEXP XGBoosterGetAttrNames_R(SEXP);
extern SEXP XGBoosterLoadModel_R(SEXP, SEXP);
extern SEXP XGBoosterGetAttr_R(SEXP, SEXP);
extern SEXP XGBoosterLoadModelFromRaw_R(SEXP, SEXP);
extern SEXP XGBoosterLoadModel_R(SEXP, SEXP);
extern SEXP XGBoosterModelToRaw_R(SEXP);
extern SEXP XGBoosterPredict_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterSaveModel_R(SEXP, SEXP);
@@ -45,10 +45,10 @@ static const R_CallMethodDef CallEntries[] = {
{"XGBoosterCreate_R", (DL_FUNC) &XGBoosterCreate_R, 1},
{"XGBoosterDumpModel_R", (DL_FUNC) &XGBoosterDumpModel_R, 4},
{"XGBoosterEvalOneIter_R", (DL_FUNC) &XGBoosterEvalOneIter_R, 4},
{"XGBoosterGetAttr_R", (DL_FUNC) &XGBoosterGetAttr_R, 2},
{"XGBoosterGetAttrNames_R", (DL_FUNC) &XGBoosterGetAttrNames_R, 1},
{"XGBoosterLoadModel_R", (DL_FUNC) &XGBoosterLoadModel_R, 2},
{"XGBoosterGetAttr_R", (DL_FUNC) &XGBoosterGetAttr_R, 2},
{"XGBoosterLoadModelFromRaw_R", (DL_FUNC) &XGBoosterLoadModelFromRaw_R, 2},
{"XGBoosterLoadModel_R", (DL_FUNC) &XGBoosterLoadModel_R, 2},
{"XGBoosterModelToRaw_R", (DL_FUNC) &XGBoosterModelToRaw_R, 1},
{"XGBoosterPredict_R", (DL_FUNC) &XGBoosterPredict_R, 4},
{"XGBoosterSaveModel_R", (DL_FUNC) &XGBoosterSaveModel_R, 2},


@@ -11,6 +11,7 @@ set.seed(1994)
# disable some tests for Win32
windows_flag = .Platform$OS.type == "windows" &&
.Machine$sizeof.pointer != 8
solaris_flag = (Sys.info()['sysname'] == "SunOS")
test_that("train and predict binary classification", {
nrounds = 2
@@ -152,20 +153,20 @@ test_that("training continuation works", {
bst1 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0)
# continue for two more:
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0, xgb_model = bst1)
if (!windows_flag)
if (!windows_flag && !solaris_flag)
expect_equal(bst$raw, bst2$raw)
expect_false(is.null(bst2$evaluation_log))
expect_equal(dim(bst2$evaluation_log), c(4, 2))
expect_equal(bst2$evaluation_log, bst$evaluation_log)
# test continuing from raw model data
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0, xgb_model = bst1$raw)
if (!windows_flag)
if (!windows_flag && !solaris_flag)
expect_equal(bst$raw, bst2$raw)
expect_equal(dim(bst2$evaluation_log), c(2, 2))
# test continuing from a model in file
xgb.save(bst1, "xgboost.model")
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0, xgb_model = "xgboost.model")
if (!windows_flag)
if (!windows_flag && !solaris_flag)
expect_equal(bst$raw, bst2$raw)
expect_equal(dim(bst2$evaluation_log), c(2, 2))
})


@@ -2,18 +2,47 @@ context('Test generalized linear models')
require(xgboost)
test_that("glm works", {
test_that("gblinear works", {
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
expect_equal(class(dtrain), "xgb.DMatrix")
expect_equal(class(dtest), "xgb.DMatrix")
param <- list(objective = "binary:logistic", booster = "gblinear",
nthread = 2, alpha = 0.0001, lambda = 1)
nthread = 2, eta = 0.8, alpha = 0.0001, lambda = 0.0001)
watchlist <- list(eval = dtest, train = dtrain)
num_round <- 2
bst <- xgb.train(param, dtrain, num_round, watchlist)
n <- 5 # iterations
ERR_UL <- 0.005 # upper limit for the test set error
VERB <- 0 # chatterbox switch
param$updater = 'shotgun'
bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'shuffle')
ypred <- predict(bst, dtest)
expect_equal(length(getinfo(dtest, 'label')), 1611)
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)
bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'cyclic',
callbacks = list(cb.gblinear.history()))
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)
h <- xgb.gblinear.history(bst)
expect_equal(dim(h), c(n, ncol(dtrain) + 1))
expect_is(h, "matrix")
param$updater = 'coord_descent'
bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'cyclic')
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)
bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'shuffle')
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)
bst <- xgb.train(param, dtrain, 2, watchlist, verbose = VERB, feature_selector = 'greedy')
expect_lt(bst$evaluation_log$eval_error[2], ERR_UL)
bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'thrifty',
top_n = 50, callbacks = list(cb.gblinear.history(sparse = TRUE)))
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)
h <- xgb.gblinear.history(bst)
expect_equal(dim(h), c(n, ncol(dtrain) + 1))
expect_s4_class(h, "dgCMatrix")
})


@@ -5,6 +5,8 @@ require(data.table)
require(Matrix)
require(vcd, quietly = TRUE)
float_tolerance = 5e-6
set.seed(1982)
data(Arthritis)
df <- data.table(Arthritis, keep.rownames = F)
@@ -40,9 +42,10 @@ mbst.GLM <- xgboost(data = as.matrix(iris[, -5]), label = mlabel, verbose = 0,
test_that("xgb.dump works", {
expect_length(xgb.dump(bst.Tree), 200)
expect_true(xgb.dump(bst.Tree, 'xgb.model.dump', with_stats = T))
expect_true(file.exists('xgb.model.dump'))
expect_gt(file.size('xgb.model.dump'), 8000)
dump_file = file.path(tempdir(), 'xgb.model.dump')
expect_true(xgb.dump(bst.Tree, dump_file, with_stats = T))
expect_true(file.exists(dump_file))
expect_gt(file.size(dump_file), 8000)
# JSON format
dmp <- xgb.dump(bst.Tree, dump_format = "json")
@@ -85,7 +88,8 @@ test_that("predict feature contributions works", {
X <- sparse_matrix
colnames(X) <- NULL
expect_error(pred_contr_ <- predict(bst.Tree, X, predcontrib = TRUE), regexp = NA)
expect_equal(pred_contr, pred_contr_, check.attributes = FALSE)
expect_equal(pred_contr, pred_contr_, check.attributes = FALSE,
tolerance = float_tolerance)
# gbtree binary classifier (approximate method)
expect_error(pred_contr <- predict(bst.Tree, sparse_matrix, predcontrib = TRUE, approxcontrib = TRUE), regexp = NA)
@@ -104,7 +108,8 @@ test_that("predict feature contributions works", {
coefs <- xgb.dump(bst.GLM)[-c(1,2,4)] %>% as.numeric
coefs <- c(coefs[-1], coefs[1]) # intercept must be the last
pred_contr_manual <- sweep(cbind(sparse_matrix, 1), 2, coefs, FUN="*")
expect_equal(as.numeric(pred_contr), as.numeric(pred_contr_manual), 1e-5)
expect_equal(as.numeric(pred_contr), as.numeric(pred_contr_manual),
tolerance = float_tolerance)
# gbtree multiclass
pred <- predict(mbst.Tree, as.matrix(iris[, -5]), outputmargin = TRUE, reshape = TRUE)
@@ -123,11 +128,12 @@ test_that("predict feature contributions works", {
coefs_all <- xgb.dump(mbst.GLM)[-c(1,2,6)] %>% as.numeric %>% matrix(ncol = 3, byrow = TRUE)
for (g in seq_along(pred_contr)) {
expect_equal(colnames(pred_contr[[g]]), c(colnames(iris[, -5]), "BIAS"))
expect_lt(max(abs(rowSums(pred_contr[[g]]) - pred[, g])), 2e-6)
expect_lt(max(abs(rowSums(pred_contr[[g]]) - pred[, g])), float_tolerance)
# manual calculation of linear terms
coefs <- c(coefs_all[-1, g], coefs_all[1, g]) # intercept needs to be the last
pred_contr_manual <- sweep(as.matrix(cbind(iris[,-5], 1)), 2, coefs, FUN="*")
expect_equal(as.numeric(pred_contr[[g]]), as.numeric(pred_contr_manual), 2e-6)
expect_equal(as.numeric(pred_contr[[g]]), as.numeric(pred_contr_manual),
tolerance = float_tolerance)
}
})
@@ -171,14 +177,16 @@ if (grepl('Windows', Sys.info()[['sysname']]) ||
# check that lossless conversion works with 17 digits
# numeric -> character -> numeric
X <- 10^runif(100, -20, 20)
X2X <- as.numeric(format(X, digits = 17))
expect_identical(X, X2X)
if (capabilities('long.double')) {
X2X <- as.numeric(format(X, digits = 17))
expect_identical(X, X2X)
}
# retrieved attributes to be the same as written
for (x in X) {
xgb.attr(bst.Tree, "x") <- x
expect_identical(as.numeric(xgb.attr(bst.Tree, "x")), x)
expect_equal(as.numeric(xgb.attr(bst.Tree, "x")), x, tolerance = float_tolerance)
xgb.attributes(bst.Tree) <- list(a = "A", b = x)
expect_identical(as.numeric(xgb.attr(bst.Tree, "b")), x)
expect_equal(as.numeric(xgb.attr(bst.Tree, "b")), x, tolerance = float_tolerance)
}
})
}
@@ -187,7 +195,7 @@ test_that("xgb.Booster serializing as R object works", {
saveRDS(bst.Tree, 'xgb.model.rds')
bst <- readRDS('xgb.model.rds')
dtrain <- xgb.DMatrix(sparse_matrix, label = label)
expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain))
expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain), tolerance = float_tolerance)
expect_equal(xgb.dump(bst.Tree), xgb.dump(bst))
xgb.save(bst, 'xgb.model')
nil_ptr <- new("externalptr")
@@ -195,7 +203,7 @@ test_that("xgb.Booster serializing as R object works", {
expect_true(identical(bst$handle, nil_ptr))
bst <- xgb.Booster.complete(bst)
expect_true(!identical(bst$handle, nil_ptr))
expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain))
expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain), tolerance = float_tolerance)
})
test_that("xgb.model.dt.tree works with and without feature names", {
@@ -233,13 +241,14 @@ test_that("xgb.importance works with and without feature names", {
expect_output(str(importance.Tree), 'Feature.*\\"Age\\"')
importance.Tree.0 <- xgb.importance(model = bst.Tree)
expect_equal(importance.Tree, importance.Tree.0)
expect_equal(importance.Tree, importance.Tree.0, tolerance = float_tolerance)
# when model contains no feature names:
bst.Tree.x <- bst.Tree
bst.Tree.x$feature_names <- NULL
importance.Tree.x <- xgb.importance(model = bst.Tree)
expect_equal(importance.Tree[, -1, with=FALSE], importance.Tree.x[, -1, with=FALSE])
expect_equal(importance.Tree[, -1, with=FALSE], importance.Tree.x[, -1, with=FALSE],
tolerance = float_tolerance)
imp2plot <- xgb.plot.importance(importance_matrix = importance.Tree)
expect_equal(colnames(imp2plot), c("Feature", "Gain", "Cover", "Frequency", "Importance"))


@@ -7,6 +7,8 @@
#include "../dmlc-core/src/io/recordio_split.cc"
#include "../dmlc-core/src/io/input_split_base.cc"
#include "../dmlc-core/src/io/local_filesys.cc"
#include "../dmlc-core/src/io/filesys.cc"
#include "../dmlc-core/src/io/indexed_recordio_split.cc"
#include "../dmlc-core/src/data.cc"
#include "../dmlc-core/src/io.cc"
#include "../dmlc-core/src/recordio.cc"


@@ -53,10 +53,16 @@
#include "../src/tree/updater_histmaker.cc"
#include "../src/tree/updater_skmaker.cc"
// linear
#include "../src/linear/linear_updater.cc"
#include "../src/linear/updater_coordinate.cc"
#include "../src/linear/updater_shotgun.cc"
// global
#include "../src/learner.cc"
#include "../src/logging.cc"
#include "../src/common/common.cc"
#include "../src/common/host_device_vector.cc"
#include "../src/common/hist_util.cc"
// c_api


@@ -53,7 +53,7 @@ install:
Import-Module "$Env:TEMP\appveyor-tool.ps1"
Bootstrap
$DEPS = "c('data.table','magrittr','stringi','ggplot2','DiagrammeR','Ckmeans.1d.dp','vcd','testthat','igraph','knitr','rmarkdown')"
cmd /c "R.exe -q -e ""install.packages($DEPS, repos='$CRAN', type='win.binary')"" 2>&1"
cmd.exe /c "R.exe -q -e ""install.packages($DEPS, repos='$CRAN', type='both')"" 2>&1"
}
build_script:
@@ -81,7 +81,7 @@ build_script:
- if /i "%target%" == "rmingw" (
make Rbuild &&
ls -l &&
R.exe CMD INSTALL --no-multiarch xgboost*.tar.gz
R.exe CMD INSTALL xgboost*.tar.gz
)
# R package: cmake + VC2015
- if /i "%target%" == "rmsvc" (
@@ -98,10 +98,9 @@ test_script:
# mingw R package: run the R check (which includes unit tests), and also keep the built binary package
- if /i "%target%" == "rmingw" (
set _R_CHECK_CRAN_INCOMING_=FALSE&&
R.exe CMD check xgboost*.tar.gz --no-manual --no-build-vignettes --as-cran --install-args=--build --no-multiarch
R.exe CMD check xgboost*.tar.gz --no-manual --no-build-vignettes --as-cran --install-args=--build
)
# MSVC R package: run only the unit tests
# TODO: create a binary msvc-built package to keep as an artifact
- if /i "%target%" == "rmsvc" (
cd build_rmsvc%ver%\R-package &&
R.exe -q -e "library(testthat); setwd('tests'); source('testthat.R')"


@@ -15,25 +15,21 @@ else
if [[ ! -e ./rabit/Makefile ]]; then
echo ""
echo "Please clone the rabit repository into this directory."
echo "Here are the commands:"
echo "rm -rf rabit"
echo "git clone https://github.com/dmlc/rabit.git rabit"
echo "Please init the rabit submodule:"
echo "git submodule update --init --recursive -- rabit"
not_ready=1
fi
if [[ ! -e ./dmlc-core/Makefile ]]; then
echo ""
echo "Please clone the dmlc-core repository into this directory."
echo "Here are the commands:"
echo "rm -rf dmlc-core"
echo "git clone https://github.com/dmlc/dmlc-core.git dmlc-core"
echo "Please init the dmlc-core submodule:"
echo "git submodule update --init --recursive -- dmlc-core"
not_ready=1
fi
if [[ "${not_ready}" == "1" ]]; then
echo ""
echo "Please fix the errors above and retry the build or reclone the repository with:"
echo "Please fix the errors above and retry the build, or reclone the repository with:"
echo "git clone --recursive https://github.com/dmlc/xgboost.git"
echo ""
exit 1


@@ -54,10 +54,25 @@ function(set_default_configuration_release)
endif()
endfunction(set_default_configuration_release)
# Generate nvcc compiler flags given a list of architectures
# Also generates PTX for the most recent architecture for forwards compatibility
function(format_gencode_flags flags out)
# Set up architecture flags
if(NOT flags)
if((CUDA_VERSION_MAJOR EQUAL 9) OR (CUDA_VERSION_MAJOR GREATER 9))
set(flags "35;50;52;60;61;70")
else()
set(flags "35;50;52;60;61")
endif()
endif()
# Generate SASS
foreach(ver ${flags})
set(${out} "${${out}}-gencode arch=compute_${ver},code=sm_${ver};")
endforeach()
# Generate PTX for last architecture
list(GET flags -1 ver)
set(${out} "${${out}}-gencode arch=compute_${ver},code=compute_${ver};")
set(${out} "${${out}}" PARENT_SCOPE)
endfunction(format_gencode_flags flags)


@@ -117,7 +117,7 @@ else()
# ask R for R_HOME
if(LIBR_EXECUTABLE)
execute_process(
COMMAND ${LIBR_EXECUTABLE} "--slave" "--no-save" "-e" "cat(normalizePath(R.home(), winslash='/'))"
COMMAND ${LIBR_EXECUTABLE} "--slave" "--no-save" "-e" "cat(normalizePath(R.home(),winslash='/'))"
OUTPUT_VARIABLE LIBR_HOME)
endif()
# if R executable not available, query R_HOME path from registry


@@ -2,8 +2,6 @@
This demo shows how to train a model on the [forest cover type](https://archive.ics.uci.edu/ml/datasets/covertype) dataset using GPU acceleration. The forest cover type dataset has 581,012 rows and 54 features, making it time consuming to process. We compare the run-time and accuracy of the GPU and CPU histogram algorithms.
This demo requires the [GPU plug-in](https://github.com/dmlc/xgboost/tree/master/plugin/updater_gpu) to be built and installed.
This demo requires the [GPU plug-in](https://xgboost.readthedocs.io/en/latest/gpu/index.html) to be built and installed.
The dataset is automatically loaded via the sklearn script.
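For reference, here is a rough Python sketch of what the demo does, assuming scikit-learn's `fetch_covtype`; the parameter values are illustrative rather than the demo's exact settings:
```python
import xgboost as xgb
from sklearn.datasets import fetch_covtype
from sklearn.model_selection import train_test_split

# Fetch the 581,012 x 54 forest cover type dataset; shift labels to 0..6
X, y = fetch_covtype(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y - 1, test_size=0.2)
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
param = {'objective': 'multi:softmax', 'num_class': 7}
# Compare the CPU and GPU histogram algorithms (gpu_hist needs the GPU build)
for tm in ['hist', 'gpu_hist']:
    param['tree_method'] = tm
    xgb.train(param, dtrain, num_boost_round=50, evals=[(dtest, 'test')])
```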


@@ -1,7 +1,7 @@
XGBoost Python Feature Walkthrough
==================================
* [Basic walkthrough of wrappers](basic_walkthrough.py)
* [Cutomize loss function, and evaluation metric](custom_objective.py)
* [Customize loss function, and evaluation metric](custom_objective.py)
* [Boosting from existing prediction](boost_from_prediction.py)
* [Predicting using first n trees](predict_first_ntree.py)
* [Generalized Linear Model](generalized_linear_model.py)


@@ -42,7 +42,7 @@ xgb.cv(param, dtrain, num_round, nfold=5,
metrics={'auc'}, seed=0, fpreproc=fpreproc)
###
# you can also do cross validation with cutomized loss function
# you can also do cross validation with customized loss function
# See custom_objective.py
##
print('running cross validation, with customized loss function')


@@ -1,7 +1,5 @@
The documentation of xgboost is generated with recommonmark and sphinx.
You can build it locally by typing "make html" in this folder.
- clone https://github.com/tqchen/recommonmark to root
- type make html
Check out https://recommonmark.readthedocs.org for a guide on how to write markdown with the extensions used in this doc, such as math formulas and tables of contents.


@@ -56,7 +56,7 @@
};
</script>
{% for name in ['jquery.js', 'underscore.js', 'doctools.js', 'searchtools.js'] %}
{% for name in ['jquery.js', 'underscore.js', 'doctools.js', 'searchtools-new.js'] %}
<script type="text/javascript" src="{{ pathto('_static/' + name, 1) }}"></script>
{% endfor %}


@@ -185,7 +185,7 @@ pre {
.dropdown-menu li {
padding: 0px 0px;
width: 120px;
width: 100%;
}
.dropdown-menu li a {
color: #0079b2;


@@ -4,7 +4,7 @@ Installation Guide
This page gives instructions on how to build and install the xgboost package from
scratch on various systems. It consists of two steps:
1. First build the shared library from the C++ codes (`libxgboost.so` for linux/osx and `libxgboost.dll` for windows).
1. First build the shared library from the C++ codes (`libxgboost.so` for Linux/OSX and `xgboost.dll` for Windows).
- Exception: for R-package installation please directly refer to the R package section.
2. Then install the language packages (e.g. Python Package).
@@ -39,7 +39,7 @@ even better to send pull request if you can fix the problem.
Our goal is to build the shared library:
- On Linux/OSX the target library is `libxgboost.so`
- On Windows the target library is `libxgboost.dll`
- On Windows the target library is `xgboost.dll`
The minimal building requirement is
@@ -85,12 +85,33 @@ Now, clone the repository
```bash
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; cp make/config.mk ./config.mk
```
Open config.mk and uncomment these two lines
```config.mk
export CC = gcc
export CXX = g++
```
and replace them with your installed gcc version (5, 6, or 7):
```config.mk
export CC = gcc-7
export CXX = g++-7
```
To find your gcc version:
```bash
gcc --version
```
and build using the following commands
```bash
cd xgboost; cp make/config.mk ./config.mk; make -j4
make -j4
```
Head over to `Python Package Installation` for the next steps.
@@ -111,12 +132,13 @@ After installing [Git for Windows](https://git-for-windows.github.io/), you shou
All the following steps are in the `Git Bash`.
In MinGW, the `make` command goes by the name `mingw32-make`. You can add the following line to your `.bashrc` file.
```bash
alias make='mingw32-make'
```
(On 64-bit Windows, you should get [mingw64](https://sourceforge.net/projects/mingw-w64/) instead.) Make sure
that the path to MinGW is in the system PATH.
To build with MinGW
To build with MinGW, type:
```bash
cp make/mingw64.mk config.mk; make -j4
@@ -130,7 +152,7 @@ cd build
cmake .. -G"Visual Studio 12 2013 Win64"
```
This specifies an out of source build using the MSVC 12 64 bit generator. Open the .sln file in the build directory and build with Visual Studio. To use the Python module you can copy libxgboost.dll into python-package\xgboost.
This specifies an out of source build using the MSVC 12 64 bit generator. Open the .sln file in the build directory and build with Visual Studio. To use the Python module you can copy `xgboost.dll` into python-package\xgboost.
Other versions of Visual Studio may work but are untested.
@@ -148,7 +170,7 @@ $ cd build
$ cmake .. -DUSE_CUDA=ON
$ make -j
```
**Windows requirements** for GPU build: only Visual C++ 2015 or 2013 with CUDA v8.0 were fully tested. Either install Visual C++ 2015 Build Tools separately, or as a part of Visual Studio 2015. If you already have Visual Studio 2017, the Visual C++ 2015 Toolchain component has to be installed using the VS 2017 Installer. Likely, you would need to use the VS2015 x64 Native Tools command prompt to run the cmake commands given below. In some situations, however, things run just fine from the MSYS2 bash command line.
On Windows, when using cmake, check which Generator options are available, and choose one with [arch] replaced by Win64:
```bash
@@ -169,6 +191,8 @@ If build seems to use only a single process, you might try to append an option l
### Windows Binaries
After the build process successfully ends, you will find a `xgboost.dll` library file inside the `./lib/` folder. Copy this file to the API package folder, e.g. `python-package/xgboost` if you are using the *Python* API, and you are good to follow the instructions below.
Unofficial Windows binaries and instructions on how to use them are hosted on [Guido Tapia's blog](http://www.picnet.com.au/blogs/guido/post/2016/09/22/xgboost-windows-x64-binaries-for-download/)
### Customized Building


@@ -14,7 +14,6 @@
import sys
import os, subprocess
import shlex
import urllib
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
@@ -79,6 +78,8 @@ master_doc = 'index'
# Usually you set "language" from the command line for these cases.
language = None
autoclass_content = 'both'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
@@ -164,8 +165,14 @@ def setup(app):
# Add hook for building doxygen xml when needed
# no c++ API for now
# app.connect("builder-inited", generate_doxygen_xml)
urllib.urlretrieve('https://code.jquery.com/jquery-2.2.4.min.js',
'_static/jquery.js')
# urlretrieve got moved in Python 3.x
try:
from urllib import urlretrieve
except ImportError:
from urllib.request import urlretrieve
urlretrieve('https://code.jquery.com/jquery-2.2.4.min.js',
'_static/jquery.js')
app.add_config_value('recommonmark_config', {
'url_resolver': lambda url: github_doc_root + url,
'enable_eval_rst': True,


@@ -11,7 +11,7 @@ filename#cacheprefix
The ```filename``` is the normal path to the libsvm file you want to load, and ```cacheprefix``` is a
path to a cache file that xgboost will use for the external memory cache.
The following code was extracted from [../demo/guide-python/external_memory.py](../demo/guide-python/external_memory.py)
The following code was extracted from [../../demo/guide-python/external_memory.py](../../demo/guide-python/external_memory.py)
```python
dtrain = xgb.DMatrix('../data/agaricus.txt.train#dtrain.cache')
```
@@ -28,7 +28,7 @@ Distributed Version
-------------------
The external memory mode naturally works on distributed version, you can simply set path like
```
data = "hdfs:///path-to-data/#dtrain.cache"
data = "hdfs://path-to-data/#dtrain.cache"
```
xgboost will cache the data to the local disk. When you run on YARN, the current folder is temporary,
so you can directly use ```dtrain.cache``` to cache to the current folder.
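To make this concrete, here is a minimal sketch of external-memory training (the file path and parameter values are placeholders):
```python
import xgboost as xgb

# The '#dtrain.cache' suffix tells xgboost to page the data through
# an on-disk cache with this prefix instead of keeping it all in memory.
dtrain = xgb.DMatrix('../data/agaricus.txt.train#dtrain.cache')
param = {'objective': 'binary:logistic', 'tree_method': 'approx'}
bst = xgb.train(param, dtrain, num_boost_round=10)
```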


@@ -65,8 +65,8 @@ Parameters for Tree Booster
- 'exact': Exact greedy algorithm.
- 'approx': Approximate greedy algorithm using sketching and histogram.
- 'hist': Fast histogram optimized approximate greedy algorithm. It uses some performance improvements such as bins caching.
- 'gpu_exact': GPU implementation of exact algorithm.
- 'gpu_hist': GPU implementation of hist algorithm.
- 'gpu_exact': GPU implementation of exact algorithm.
- 'gpu_hist': GPU implementation of hist algorithm.
* sketch_eps, [default=0.03]
- This is only used for approximate greedy algorithm.
- This roughly translates into ```O(1 / sketch_eps)``` bins.
@@ -96,7 +96,7 @@ Parameters for Tree Booster
- A type of boosting process to run.
- Choices: {'default', 'update'}
- 'default': the normal boosting process which creates new trees.
- 'update': starts from an existing model and only updates its trees. In each boosting iteration, a tree from the initial model is taken, a specified sequence of updater plugins is run for that tree, and a modified tree is added to the new model. The new model would have either the same or smaller number of trees, depending on the number of boosting iteratons performed. Currently, the following built-in updater plugins could be meaningfully used with this process type: 'refresh', 'prune'. With 'update', one cannot use updater plugins that create new nrees.
- 'update': starts from an existing model and only updates its trees. In each boosting iteration, a tree from the initial model is taken, a specified sequence of updater plugins is run for that tree, and a modified tree is added to the new model. The new model would have either the same or smaller number of trees, depending on the number of boosting iterations performed. Currently, the following built-in updater plugins could be meaningfully used with this process type: 'refresh', 'prune'. With 'update', one cannot use updater plugins that create new trees.
* grow_policy, string [default='depthwise']
- Controls a way new nodes are added to the tree.
- Currently supported only if `tree_method` is set to 'hist'.
@@ -142,11 +142,14 @@ Additional parameters for Dart Booster
Parameters for Linear Booster
-----------------------------
* lambda [default=0, alias: reg_lambda]
- L2 regularization term on weights, increase this value will make model more conservative.
- L2 regularization term on weights, increase this value will make model more conservative. Normalised to number of training examples.
* alpha [default=0, alias: reg_alpha]
- L1 regularization term on weights, increase this value will make model more conservative.
* lambda_bias [default=0, alias: reg_lambda_bias]
- L2 regularization term on bias (no L1 reg on bias because it is not important)
- L1 regularization term on weights, increase this value will make model more conservative. Normalised to number of training examples.
* updater [default='shotgun']
- Linear model algorithm (a usage sketch follows this list)
- 'shotgun': Parallel coordinate descent algorithm based on shotgun algorithm. Uses 'hogwild' parallelism and therefore produces a nondeterministic solution on each run.
- 'coord_descent': Ordinary coordinate descent algorithm. Also multithreaded but still produces a deterministic solution.
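For illustration, a minimal sketch of selecting the linear updater through the parameter dict (the dataset path and regularization values are placeholders, not recommendations):
```python
import xgboost as xgb

dtrain = xgb.DMatrix('train.svm.txt')  # placeholder dataset path
param = {
    'booster': 'gblinear',
    'objective': 'binary:logistic',
    'lambda': 0.001,             # L2 term, normalised to the number of training examples
    'alpha': 0.001,              # L1 term, normalised to the number of training examples
    'updater': 'coord_descent',  # deterministic; 'shotgun' is the nondeterministic default
}
bst = xgb.train(param, dtrain, num_boost_round=10)
```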
Parameters for Tweedie Regression
---------------------------------
@@ -165,8 +168,13 @@ Specify the learning task and the corresponding learning objective. The objectiv
- "reg:logistic" --logistic regression
- "binary:logistic" --logistic regression for binary classification, output probability
- "binary:logitraw" --logistic regression for binary classification, output score before logistic transformation
- "gpu:reg:linear", "gpu:reg:logistic", "gpu:binary:logistic", gpu:binary:logitraw" --versions
of the corresponding objective functions evaluated on the GPU; note that like the GPU histogram algorithm,
they can only be used when the entire training session uses the same dataset
- "count:poisson" --poisson regression for count data, output mean of poisson distribution
- max_delta_step is set to 0.7 by default in poisson regression (used to safeguard optimization)
- "survival:cox" --Cox regression for right censored survival time data (negative values are considered right censored).
Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional hazard function h(t) = h0(t) * HR).
- "multi:softmax" --set XGBoost to do multiclass classification using the softmax objective, you also need to set num_class(number of classes)
- "multi:softprob" --same as softmax, but output a vector of ndata * nclass, which can be further reshaped to ndata, nclass matrix. The result contains predicted probability of each data point belonging to each class.
- "rank:pairwise" --set XGBoost to do ranking task by minimizing the pairwise loss
@@ -194,6 +202,7 @@ Specify the learning task and the corresponding learning objective. The objectiv
training repeatedly
- "poisson-nloglik": negative log-likelihood for Poisson regression
- "gamma-nloglik": negative log-likelihood for gamma regression
- "cox-nloglik": negative partial log-likelihood for Cox proportional hazards regression
- "gamma-deviance": residual deviance for gamma regression
- "tweedie-nloglik": negative log-likelihood for Tweedie regression (at a specified value of the tweedie_variance_power parameter)
* seed [default=0]


@@ -25,7 +25,9 @@ Data Interface
--------------
The XGBoost python module is able to load data from:
- libsvm txt format file
- Numpy 2D array, and
- comma-separated values (CSV) file
- Numpy 2D array
- Scipy 2D sparse array, and
- xgboost binary buffer file.
The data is stored in a ```DMatrix``` object.
@@ -35,6 +37,16 @@ The data is stored in a ```DMatrix``` object.
dtrain = xgb.DMatrix('train.svm.txt')
dtest = xgb.DMatrix('test.svm.buffer')
```
* To load a CSV file into ```DMatrix```:
```python
# label_column specifies the index of the column containing the true label
dtrain = xgb.DMatrix('train.csv?format=csv&label_column=0')
dtest = xgb.DMatrix('test.csv?format=csv&label_column=0')
```
(Note that XGBoost does not support categorical features; if your data contains
categorical features, load it as a numpy array first and then perform
[one-hot encoding](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html).)
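As a minimal sketch of that route (assuming scikit-learn is available; the toy arrays are hypothetical):
```python
import numpy as np
import xgboost as xgb
from sklearn.preprocessing import OneHotEncoder

cat = np.array([[0], [2], [1], [0]])          # integer-coded categorical column
num = np.array([[1.5], [0.3], [2.2], [0.7]])  # ordinary numeric column
onehot = OneHotEncoder().fit_transform(cat).toarray()
X = np.hstack([onehot, num])                  # dense design matrix
dtrain = xgb.DMatrix(X, label=[0, 1, 1, 0])
```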
* To load a numpy array into ```DMatrix```:
```python
data = np.random.rand(5, 10) # 5 entities, each contains 10 features

doc/requirements.txt Normal file

@@ -0,0 +1,3 @@
sphinx==1.5.6
commonmark==0.5.4
mock


@@ -1,9 +1,9 @@
DART booster
============
[XGBoost](https://github.com/dmlc/xgboost)) mostly combines a huge number of regression trees with a small learning rate.
[XGBoost](https://github.com/dmlc/xgboost) mostly combines a huge number of regression trees with a small learning rate.
In this situation, trees added early are significant and trees added late are unimportant.
Rasmi et al. proposed a new method to add dropout techniques from the deep neural net community to boosted trees, and reported better results in some situations.
Vinayak and Gilad-Bachrach proposed a new method to add dropout techniques from the deep neural net community to boosted trees, and reported better results in some situations.
This is an introduction to the new tree booster `dart`.


@@ -76,3 +76,15 @@ Some other examples:
- ```(1,0)```: An increasing constraint on the first predictor and no constraint on the second.
- ```(0,-1)```: No constraint on the first predictor and a decreasing constraint on the second.
**Choice of tree construction algorithm**. To use monotonic constraints, be
sure to set the `tree_method` parameter to one of `'exact'`, `'hist'`, or
`'gpu_hist'`.
**Note for the `'hist'` tree construction algorithm**.
If `tree_method` is set to either `'hist'` or `'gpu_hist'`, enabling monotonic
constraints may produce unnecessarily shallow trees. This is because the
`'hist'` method reduces the number of candidate splits to be considered at each
split. Monotonic constraints may wipe out all available split candidates, in
which case no split is made. To reduce the effect, you may want to increase
the `max_bin` parameter to consider more split candidates.
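To tie the above together, a minimal sketch of enabling the constraints (the data and parameter values are hypothetical):
```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 2)
y = 5 * X[:, 0] - 3 * X[:, 1] + 0.1 * np.random.randn(100)
dtrain = xgb.DMatrix(X, label=y)
param = {
    'tree_method': 'hist',             # 'exact' and 'gpu_hist' also support constraints
    'monotone_constraints': '(1,-1)',  # increasing in feature 0, decreasing in feature 1
    'max_bin': 512,                    # more split candidates, per the note above
}
bst = xgb.train(param, dtrain, num_boost_round=50)
```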


@@ -81,20 +81,19 @@ namespace xgboost {
* \brief unsigned integer type used in boost,
* used for feature index and row index.
*/
typedef uint32_t bst_uint;
typedef int32_t bst_int;
using bst_uint = uint32_t; // NOLINT
using bst_int = int32_t; // NOLINT
/*! \brief long integers */
typedef uint64_t bst_ulong; // NOLINT(*)
/*! \brief float type, used for storing statistics */
typedef float bst_float;
using bst_float = float; // NOLINT
namespace detail {
/*! \brief Implementation of gradient statistics pair. Template specialisation
* may be used to overload different gradients types e.g. low precision, high
* precision, integer, floating point. */
template <typename T>
class bst_gpair_internal {
class GradientPairInternal {
/*! \brief gradient statistics */
T grad_;
/*! \brief second order gradient statistics */
@@ -104,23 +103,23 @@ class bst_gpair_internal {
XGBOOST_DEVICE void SetHess(float h) { hess_ = h; }
public:
typedef T value_t;
using ValueT = T;
XGBOOST_DEVICE bst_gpair_internal() : grad_(0), hess_(0) {}
XGBOOST_DEVICE GradientPairInternal() : grad_(0), hess_(0) {}
XGBOOST_DEVICE bst_gpair_internal(float grad, float hess) {
XGBOOST_DEVICE GradientPairInternal(float grad, float hess) {
SetGrad(grad);
SetHess(hess);
}
// Copy constructor if of same value type
XGBOOST_DEVICE bst_gpair_internal(const bst_gpair_internal<T> &g)
: grad_(g.grad_), hess_(g.hess_) {}
XGBOOST_DEVICE GradientPairInternal(const GradientPairInternal<T> &g)
: grad_(g.grad_), hess_(g.hess_) {} // NOLINT
// Copy constructor if different value type - use getters and setters to
// perform conversion
template <typename T2>
XGBOOST_DEVICE bst_gpair_internal(const bst_gpair_internal<T2> &g) {
XGBOOST_DEVICE explicit GradientPairInternal(const GradientPairInternal<T2> &g) {
SetGrad(g.GetGrad());
SetHess(g.GetHess());
}
@@ -128,85 +127,85 @@ class bst_gpair_internal {
XGBOOST_DEVICE float GetGrad() const { return grad_; }
XGBOOST_DEVICE float GetHess() const { return hess_; }
XGBOOST_DEVICE bst_gpair_internal<T> &operator+=(
const bst_gpair_internal<T> &rhs) {
XGBOOST_DEVICE GradientPairInternal<T> &operator+=(
const GradientPairInternal<T> &rhs) {
grad_ += rhs.grad_;
hess_ += rhs.hess_;
return *this;
}
XGBOOST_DEVICE bst_gpair_internal<T> operator+(
const bst_gpair_internal<T> &rhs) const {
bst_gpair_internal<T> g;
XGBOOST_DEVICE GradientPairInternal<T> operator+(
const GradientPairInternal<T> &rhs) const {
GradientPairInternal<T> g;
g.grad_ = grad_ + rhs.grad_;
g.hess_ = hess_ + rhs.hess_;
return g;
}
XGBOOST_DEVICE bst_gpair_internal<T> &operator-=(
const bst_gpair_internal<T> &rhs) {
XGBOOST_DEVICE GradientPairInternal<T> &operator-=(
const GradientPairInternal<T> &rhs) {
grad_ -= rhs.grad_;
hess_ -= rhs.hess_;
return *this;
}
XGBOOST_DEVICE bst_gpair_internal<T> operator-(
const bst_gpair_internal<T> &rhs) const {
bst_gpair_internal<T> g;
XGBOOST_DEVICE GradientPairInternal<T> operator-(
const GradientPairInternal<T> &rhs) const {
GradientPairInternal<T> g;
g.grad_ = grad_ - rhs.grad_;
g.hess_ = hess_ - rhs.hess_;
return g;
}
XGBOOST_DEVICE bst_gpair_internal(int value) {
*this = bst_gpair_internal<T>(static_cast<float>(value),
XGBOOST_DEVICE explicit GradientPairInternal(int value) {
*this = GradientPairInternal<T>(static_cast<float>(value),
static_cast<float>(value));
}
friend std::ostream &operator<<(std::ostream &os,
const bst_gpair_internal<T> &g) {
const GradientPairInternal<T> &g) {
os << g.GetGrad() << "/" << g.GetHess();
return os;
}
};
template<>
inline XGBOOST_DEVICE float bst_gpair_internal<int64_t>::GetGrad() const {
inline XGBOOST_DEVICE float GradientPairInternal<int64_t>::GetGrad() const {
return grad_ * 1e-4f;
}
template<>
inline XGBOOST_DEVICE float bst_gpair_internal<int64_t>::GetHess() const {
inline XGBOOST_DEVICE float GradientPairInternal<int64_t>::GetHess() const {
return hess_ * 1e-4f;
}
template<>
inline XGBOOST_DEVICE void bst_gpair_internal<int64_t>::SetGrad(float g) {
inline XGBOOST_DEVICE void GradientPairInternal<int64_t>::SetGrad(float g) {
grad_ = static_cast<int64_t>(std::round(g * 1e4));
}
template<>
inline XGBOOST_DEVICE void bst_gpair_internal<int64_t>::SetHess(float h) {
inline XGBOOST_DEVICE void GradientPairInternal<int64_t>::SetHess(float h) {
hess_ = static_cast<int64_t>(std::round(h * 1e4));
}
} // namespace detail
/*! \brief gradient statistics pair usually needed in gradient boosting */
typedef detail::bst_gpair_internal<float> bst_gpair;
using GradientPair = detail::GradientPairInternal<float>;
/*! \brief High precision gradient statistics pair */
typedef detail::bst_gpair_internal<double> bst_gpair_precise;
using GradientPairPrecise = detail::GradientPairInternal<double>;
/*! \brief High precision gradient statistics pair with integer backed
* storage. Operators are associative where floating point versions are not
* associative. */
typedef detail::bst_gpair_internal<int64_t> bst_gpair_integer;
using GradientPairInteger = detail::GradientPairInternal<int64_t>;
/*! \brief small eps gap for minimum split decision. */
const bst_float rt_eps = 1e-6f;
const bst_float kRtEps = 1e-6f;
/*! \brief define unsigned long for openmp loop */
typedef dmlc::omp_ulong omp_ulong;
using omp_ulong = dmlc::omp_ulong; // NOLINT
/*! \brief define unsigned int for openmp loop */
typedef dmlc::omp_uint bst_omp_uint;
using bst_omp_uint = dmlc::omp_uint; // NOLINT
/*!
* \brief define compatible keywords in g++


@@ -30,16 +30,16 @@ typedef uint64_t bst_ulong; // NOLINT(*)
/*! \brief handle to DMatrix */
typedef void *DMatrixHandle;
typedef void *DMatrixHandle; // NOLINT(*)
/*! \brief handle to Booster */
typedef void *BoosterHandle;
typedef void *BoosterHandle; // NOLINT(*)
/*! \brief handle to a data iterator */
typedef void *DataIterHandle;
typedef void *DataIterHandle; // NOLINT(*)
/*! \brief handle to a internal data holder. */
typedef void *DataHolderHandle;
typedef void *DataHolderHandle; // NOLINT(*)
/*! \brief Mini batch used in XGBoost Data Iteration */
typedef struct {
typedef struct { // NOLINT(*)
/*! \brief number of rows in the minibatch */
size_t size;
/*! \brief row pointer to the rows in the data */
@@ -66,7 +66,7 @@ typedef struct {
* \param handle The handle to the callback.
* \param batch The data content to be set.
*/
XGB_EXTERN_C typedef int XGBCallbackSetData(
XGB_EXTERN_C typedef int XGBCallbackSetData( // NOLINT(*)
DataHolderHandle handle, XGBoostBatchCSR batch);
/*!
@@ -80,9 +80,8 @@ XGB_EXTERN_C typedef int XGBCallbackSetData(
* \param set_function_handle The handle to be passed to set function.
* \return 0 if we are reaching the end and batch is not returned.
*/
XGB_EXTERN_C typedef int XGBCallbackDataIterNext(
DataIterHandle data_handle,
XGBCallbackSetData* set_function,
XGB_EXTERN_C typedef int XGBCallbackDataIterNext( // NOLINT(*)
DataIterHandle data_handle, XGBCallbackSetData *set_function,
DataHolderHandle set_function_handle);
/*!
@@ -95,7 +94,7 @@ XGB_EXTERN_C typedef int XGBCallbackDataIterNext(
* this function is thread safe and can be called by different thread
* \return const char* error information
*/
XGB_DLL const char *XGBGetLastError();
XGB_DLL const char *XGBGetLastError(void);
/*!
* \brief load a data matrix
@@ -216,11 +215,9 @@ XGB_DLL int XGDMatrixCreateFromMat(const float *data,
* \param nthread number of threads (up to maximum cores available, if <=0 use all cores)
* \return 0 when success, -1 when failure happens
*/
XGB_DLL int XGDMatrixCreateFromMat_omp(const float *data,
bst_ulong nrow,
bst_ulong ncol,
float missing,
DMatrixHandle *out,
XGB_DLL int XGDMatrixCreateFromMat_omp(const float *data, // NOLINT
bst_ulong nrow, bst_ulong ncol,
float missing, DMatrixHandle *out,
int nthread);
/*!
* \brief create a new dmatrix from sliced content of existing matrix


@@ -12,6 +12,7 @@
#include <string>
#include <memory>
#include <vector>
#include <numeric>
#include "./base.h"
namespace xgboost {
@@ -29,44 +30,45 @@ enum DataType {
/*!
* \brief Meta information about dataset, always sit in memory.
*/
struct MetaInfo {
class MetaInfo {
public:
/*! \brief number of rows in the data */
uint64_t num_row;
uint64_t num_row_{0};
/*! \brief number of columns in the data */
uint64_t num_col;
uint64_t num_col_{0};
/*! \brief number of nonzero entries in the data */
uint64_t num_nonzero;
uint64_t num_nonzero_{0};
/*! \brief label of each instance */
std::vector<bst_float> labels;
std::vector<bst_float> labels_;
/*!
* \brief specified root index of each instance,
* can be used for multi task setting
*/
std::vector<bst_uint> root_index;
std::vector<bst_uint> root_index_;
/*!
* \brief the index of begin and end of a group
* needed when the learning task is ranking.
*/
std::vector<bst_uint> group_ptr;
std::vector<bst_uint> group_ptr_;
/*! \brief weights of each instance, optional */
std::vector<bst_float> weights;
std::vector<bst_float> weights_;
/*!
* \brief initialized margins,
* if specified, xgboost will start from this init margin
* can be used to specify initial prediction to boost from.
*/
std::vector<bst_float> base_margin;
std::vector<bst_float> base_margin_;
/*! \brief version flag, used to check version of this info */
static const int kVersion = 1;
/*! \brief default constructor */
MetaInfo() : num_row(0), num_col(0), num_nonzero(0) {}
MetaInfo() = default;
/*!
* \brief Get weight of each instances.
* \param i Instance index.
* \return The weight.
*/
inline bst_float GetWeight(size_t i) const {
return weights.size() != 0 ? weights[i] : 1.0f;
return weights_.size() != 0 ? weights_[i] : 1.0f;
}
/*!
* \brief Get the root index of i-th instance.
@@ -74,7 +76,20 @@ struct MetaInfo {
* \return The pre-defined root index of i-th instance.
*/
inline unsigned GetRoot(size_t i) const {
return root_index.size() != 0 ? root_index[i] : 0U;
return root_index_.size() != 0 ? root_index_[i] : 0U;
}
/*! \brief get sorted indexes (argsort) of labels by absolute value (used by cox loss) */
inline const std::vector<size_t>& LabelAbsSort() const {
if (label_order_cache_.size() == labels_.size()) {
return label_order_cache_;
}
label_order_cache_.resize(labels_.size());
std::iota(label_order_cache_.begin(), label_order_cache_.end(), 0);
const auto l = labels_;
XGBOOST_PARALLEL_SORT(label_order_cache_.begin(), label_order_cache_.end(),
[&l](size_t i1, size_t i2) {return std::abs(l[i1]) < std::abs(l[i2]);});
return label_order_cache_;
}
/*! \brief clear all the information */
void Clear();
@@ -96,6 +111,10 @@ struct MetaInfo {
* \param num Number of elements in the source array.
*/
void SetInfo(const char* key, const void* dptr, DataType dtype, size_t num);
private:
/*! \brief argsort of labels */
mutable std::vector<size_t> label_order_cache_;
};
/*! \brief read-only sparse instance batch in CSR format */
@@ -107,7 +126,7 @@ struct SparseBatch {
/*! \brief feature value */
bst_float fvalue;
/*! \brief default constructor */
Entry() {}
Entry() = default;
/*!
* \brief constructor with index and value
* \param index The feature or row index.
@@ -123,11 +142,11 @@ struct SparseBatch {
/*! \brief an instance of sparse vector in the batch */
struct Inst {
/*! \brief pointer to the elements*/
const Entry *data;
const Entry *data{nullptr};
/*! \brief length of the instance */
bst_uint length;
bst_uint length{0};
/*! \brief constructor */
Inst() : data(0), length(0) {}
Inst() = default;
Inst(const Entry *data, bst_uint length) : data(data), length(length) {}
/*! \brief get i-th pair in the sparse vector*/
inline const Entry& operator[](size_t i) const {
@@ -149,7 +168,7 @@ struct RowBatch : public SparseBatch {
const Entry *data_ptr;
/*! \brief get i-th row from the batch */
inline Inst operator[](size_t i) const {
return Inst(data_ptr + ind_ptr[i], static_cast<bst_uint>(ind_ptr[i + 1] - ind_ptr[i]));
return {data_ptr + ind_ptr[i], static_cast<bst_uint>(ind_ptr[i + 1] - ind_ptr[i])};
}
};
@@ -188,16 +207,16 @@ class DataSource : public dmlc::DataIter<RowBatch> {
* \brief A vector-like structure to represent set of rows.
* But saves the memory when all rows are in the set (common case in xgb)
*/
struct RowSet {
class RowSet {
public:
/*! \return i-th row index */
inline bst_uint operator[](size_t i) const;
/*! \return the size of the set. */
inline size_t size() const;
inline size_t Size() const;
/*! \brief push the index back to the set */
inline void push_back(bst_uint i);
inline void PushBack(bst_uint i);
/*! \brief clear the set */
inline void clear();
inline void Clear();
/*!
* \brief save rowset to file.
* \param fo The file to be saved.
@@ -210,11 +229,11 @@ struct RowSet {
*/
inline bool Load(dmlc::Stream* fi);
/*! \brief constructor */
RowSet() : size_(0) {}
RowSet() = default;
private:
/*! \brief The internal data structure of size */
uint64_t size_;
uint64_t size_{0};
/*! \brief The internal data structure of row set if not all*/
std::vector<bst_uint> rows_;
};
@@ -232,11 +251,11 @@ struct RowSet {
class DMatrix {
public:
/*! \brief default constructor */
DMatrix() : cache_learner_ptr_(nullptr) {}
DMatrix() = default;
/*! \brief meta information of the dataset */
virtual MetaInfo& info() = 0;
virtual MetaInfo& Info() = 0;
/*! \brief meta information of the dataset */
virtual const MetaInfo& info() const = 0;
virtual const MetaInfo& Info() const = 0;
/*!
* \brief get the row iterator, reset to beginning position
* \note Only either RowIterator or column Iterator can be active.
@@ -256,14 +275,16 @@ class DMatrix {
* \param subsample subsample ratio when generating column access.
* \param max_row_perbatch auxiliary information, maximum row used in each column batch.
* this is a hint information that can be ignored by the implementation.
* \param sorted If column features should be in sorted order
* \return Number of column blocks in the column access.
*/
virtual void InitColAccess(const std::vector<bool>& enabled,
float subsample,
size_t max_row_perbatch) = 0;
size_t max_row_perbatch, bool sorted) = 0;
// the following are column meta data, should be able to answer them fast.
/*! \return whether column access is enabled */
virtual bool HaveColAccess() const = 0;
virtual bool HaveColAccess(bool sorted) const = 0;
/*! \return Whether the data columns single column block. */
virtual bool SingleColBlock() const = 0;
/*! \brief get number of non-missing entries in column */
@@ -271,9 +292,9 @@ class DMatrix {
/*! \brief get column density */
virtual float GetColDensity(size_t cidx) const = 0;
/*! \return reference of buffered rowset, in column access */
virtual const RowSet& buffered_rowset() const = 0;
virtual const RowSet& BufferedRowset() const = 0;
/*! \brief virtual destructor */
virtual ~DMatrix() {}
virtual ~DMatrix() = default;
/*!
* \brief Save DMatrix to local file.
* The saved file only works for non-sharded dataset(single machine training).
@@ -323,7 +344,7 @@ class DMatrix {
// allow learner class to access this field.
friend class LearnerImpl;
/*! \brief public field to back ref cached matrix. */
LearnerImpl* cache_learner_ptr_;
LearnerImpl* cache_learner_ptr_{nullptr};
};
// implementation of inline functions
@@ -331,15 +352,15 @@ inline bst_uint RowSet::operator[](size_t i) const {
return rows_.size() == 0 ? static_cast<bst_uint>(i) : rows_[i];
}
inline size_t RowSet::size() const {
inline size_t RowSet::Size() const {
return size_;
}
inline void RowSet::clear() {
inline void RowSet::Clear() {
rows_.clear(); size_ = 0;
}
inline void RowSet::push_back(bst_uint i) {
inline void RowSet::PushBack(bst_uint i) {
if (rows_.size() == 0) {
if (i == size_) {
++size_; return;


@@ -45,7 +45,7 @@ class FeatureMap {
*/
inline void PushBack(int fid, const char *fname, const char *ftype) {
CHECK_EQ(fid, static_cast<int>(names_.size()));
names_.push_back(std::string(fname));
names_.emplace_back(fname);
types_.push_back(GetType(ftype));
}
/*! \brief clear the feature map */
@@ -54,11 +54,11 @@ class FeatureMap {
types_.clear();
}
/*! \return number of known features */
inline size_t size() const {
inline size_t Size() const {
return names_.size();
}
/*! \return name of specific feature */
inline const char* name(size_t idx) const {
inline const char* Name(size_t idx) const {
CHECK_LT(idx, names_.size()) << "FeatureMap feature index exceed bound";
return names_[idx].c_str();
}
@@ -75,7 +75,7 @@ class FeatureMap {
* \return The translated type.
*/
inline static Type GetType(const char* tname) {
using namespace std;
using std::strcmp;
if (!strcmp("i", tname)) return kIndicator;
if (!strcmp("q", tname)) return kQuantitive;
if (!strcmp("int", tname)) return kInteger;


@@ -18,6 +18,7 @@
#include "./data.h"
#include "./objective.h"
#include "./feature_map.h"
#include "../../src/common/host_device_vector.h"
namespace xgboost {
/*!
@@ -26,7 +27,7 @@ namespace xgboost {
class GradientBooster {
public:
/*! \brief virtual destructor */
virtual ~GradientBooster() {}
virtual ~GradientBooster() = default;
/*!
* \brief set configuration from pair iterators.
* \param begin The beginning iterator.
@@ -68,8 +69,9 @@ class GradientBooster {
* the booster may change content of gpair
*/
virtual void DoBoost(DMatrix* p_fmat,
std::vector<bst_gpair>* in_gpair,
HostDeviceVector<GradientPair>* in_gpair,
ObjFunction* obj = nullptr) = 0;
/*!
* \brief generate predictions for given feature matrix
* \param dmat feature matrix
@@ -78,8 +80,8 @@ class GradientBooster {
* we do not limit number of trees, this parameter is only valid for gbtree, but not for gblinear
*/
virtual void PredictBatch(DMatrix* dmat,
std::vector<bst_float>* out_preds,
unsigned ntree_limit = 0) = 0;
HostDeviceVector<bst_float>* out_preds,
unsigned ntree_limit = 0) = 0;
/*!
* \brief online prediction function, predict score for one instance at a time
* NOTE: use the batch prediction interface if possible, batch prediction is usually
@@ -116,10 +118,17 @@ class GradientBooster {
* \param ntree_limit limit the number of trees used in prediction, when it equals 0, this means
* we do not limit number of trees
* \param approximate use a faster (inconsistent) approximation of SHAP values
* \param condition condition on the condition_feature (0=no, -1=cond off, 1=cond on).
* \param condition_feature feature to condition on (i.e. fix) during calculations
*/
virtual void PredictContribution(DMatrix* dmat,
std::vector<bst_float>* out_contribs,
unsigned ntree_limit = 0, bool approximate = false) = 0;
unsigned ntree_limit = 0, bool approximate = false,
int condition = 0, unsigned condition_feature = 0) = 0;
virtual void PredictInteractionContributions(DMatrix* dmat,
std::vector<bst_float>* out_contribs,
unsigned ntree_limit, bool approximate) = 0;
/*!
* \brief dump the model in the requested format
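A sketch (not part of the diff) of the call shape after the switch to HostDeviceVector; the Resize() call is an assumption about host_device_vector.h, and the helper is hypothetical.

```cpp
#include <xgboost/gbm.h>
#include "../../src/common/host_device_vector.h"  // path as used in this changeset

void BoostOnce(xgboost::GradientBooster* booster, xgboost::DMatrix* dmat,
               size_t num_rows) {
  xgboost::HostDeviceVector<xgboost::GradientPair> gpair;
  gpair.Resize(num_rows);               // assumed to mirror std::vector::resize
  booster->DoBoost(dmat, &gpair);       // booster may modify gpair in place

  xgboost::HostDeviceVector<xgboost::bst_float> preds;
  booster->PredictBatch(dmat, &preds);  // predictions also live in an HDV now
}
```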


@@ -37,7 +37,7 @@ namespace xgboost {
class Learner : public rabit::Serializable {
public:
/*! \brief virtual destructor */
virtual ~Learner() {}
~Learner() override = default;
/*!
* \brief set configuration from pair iterators.
* \param begin The beginning iterator.
@@ -62,12 +62,12 @@ class Learner : public rabit::Serializable {
* \brief load model from stream
* \param fi input stream.
*/
virtual void Load(dmlc::Stream* fi) = 0;
void Load(dmlc::Stream* fi) override = 0;
/*!
* \brief save model to stream.
* \param fo output stream
*/
virtual void Save(dmlc::Stream* fo) const = 0;
void Save(dmlc::Stream* fo) const override = 0;
/*!
* \brief update the model for one iteration
* With the specified objective function.
@@ -84,7 +84,7 @@ class Learner : public rabit::Serializable {
*/
virtual void BoostOneIter(int iter,
DMatrix* train,
std::vector<bst_gpair>* in_gpair) = 0;
HostDeviceVector<GradientPair>* in_gpair) = 0;
/*!
* \brief evaluate the model for specific iteration using the configured metrics.
* \param iter iteration number
@@ -105,14 +105,17 @@ class Learner : public rabit::Serializable {
* \param pred_leaf whether to only predict the leaf index of each tree in a boosted tree predictor
* \param pred_contribs whether to only predict the feature contributions
* \param approx_contribs whether to approximate the feature contributions for speed
* \param pred_interactions whether to compute the feature pair contributions
*/
virtual void Predict(DMatrix* data,
bool output_margin,
std::vector<bst_float> *out_preds,
HostDeviceVector<bst_float> *out_preds,
unsigned ntree_limit = 0,
bool pred_leaf = false,
bool pred_contribs = false,
bool approx_contribs = false) const = 0;
bool approx_contribs = false,
bool pred_interactions = false) const = 0;
/*!
* \brief Set additional attribute to the Booster.
* The property will be saved along the booster.
@@ -166,7 +169,7 @@ class Learner : public rabit::Serializable {
*/
inline void Predict(const SparseBatch::Inst &inst,
bool output_margin,
std::vector<bst_float> *out_preds,
HostDeviceVector<bst_float> *out_preds,
unsigned ntree_limit = 0) const;
/*!
* \brief Create a new instance of learner.
@@ -189,9 +192,9 @@ class Learner : public rabit::Serializable {
// implementation of inline functions.
inline void Learner::Predict(const SparseBatch::Inst& inst,
bool output_margin,
std::vector<bst_float>* out_preds,
HostDeviceVector<bst_float>* out_preds,
unsigned ntree_limit) const {
gbm_->PredictInstance(inst, out_preds, ntree_limit);
gbm_->PredictInstance(inst, &out_preds->HostVector(), ntree_limit);
if (!output_margin) {
obj_->PredTransform(out_preds);
}
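To see the widened batch Predict in use, a hypothetical call that requests SHAP interaction values via the new pred_interactions flag:

```cpp
#include <xgboost/learner.h>

void PredictInteractions(const xgboost::Learner& learner, xgboost::DMatrix* dmat,
                         xgboost::HostDeviceVector<xgboost::bst_float>* out) {
  // all flags spell out their defaults except the new pred_interactions
  learner.Predict(dmat, /*output_margin=*/false, out,
                  /*ntree_limit=*/0,
                  /*pred_leaf=*/false,
                  /*pred_contribs=*/false,
                  /*approx_contribs=*/false,
                  /*pred_interactions=*/true);
}
```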


@@ -0,0 +1,67 @@
/*
* Copyright 2018 by Contributors
*/
#pragma once
#include <dmlc/registry.h>
#include <xgboost/base.h>
#include <xgboost/data.h>
#include <functional>
#include <string>
#include <utility>
#include <vector>
#include "../../src/gbm/gblinear_model.h"
#include "../../src/common/host_device_vector.h"
namespace xgboost {
/*!
* \brief interface of linear updater
*/
class LinearUpdater {
public:
/*! \brief virtual destructor */
virtual ~LinearUpdater() = default;
/*!
* \brief Initialize the updater with given arguments.
* \param args arguments to the objective function.
*/
virtual void Init(
const std::vector<std::pair<std::string, std::string> >& args) = 0;
/**
* \brief Updates linear model given gradients.
*
* \param in_gpair The gradient pair statistics of the data.
* \param data Input data matrix.
* \param model Model to be updated.
* \param sum_instance_weight The sum instance weights, used to normalise l1/l2 penalty.
*/
virtual void Update(HostDeviceVector<GradientPair>* in_gpair, DMatrix* data,
gbm::GBLinearModel* model,
double sum_instance_weight) = 0;
/*!
* \brief Create a linear updater given name
* \param name Name of the linear updater.
*/
static LinearUpdater* Create(const std::string& name);
};
/*!
* \brief Registry entry for linear updater.
*/
struct LinearUpdaterReg
: public dmlc::FunctionRegEntryBase<LinearUpdaterReg,
std::function<LinearUpdater*()> > {};
/*!
* \brief Macro to register linear updater.
*/
#define XGBOOST_REGISTER_LINEAR_UPDATER(UniqueId, Name) \
static DMLC_ATTRIBUTE_UNUSED ::xgboost::LinearUpdaterReg& \
__make_##LinearUpdaterReg##_##UniqueId##__ = \
::dmlc::Registry< ::xgboost::LinearUpdaterReg>::Get()->__REGISTER__( \
Name)
} // namespace xgboost
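Registration against this registry follows the usual dmlc pattern; below is a hypothetical updater (class and "my_updater" name invented), with describe/set_body coming from dmlc::FunctionRegEntryBase. It relies on the includes of the header above.

```cpp
class MyLinearUpdater : public xgboost::LinearUpdater {
 public:
  void Init(const std::vector<std::pair<std::string, std::string>>& args) override {}
  void Update(xgboost::HostDeviceVector<xgboost::GradientPair>* in_gpair,
              xgboost::DMatrix* data, xgboost::gbm::GBLinearModel* model,
              double sum_instance_weight) override {
    // fold the gradient statistics into the linear model here
  }
};

XGBOOST_REGISTER_LINEAR_UPDATER(MyLinearUpdater, "my_updater")
.describe("Illustrative linear updater.")
.set_body([]() { return new MyLinearUpdater(); });
```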


@@ -21,7 +21,7 @@ class BaseLogger {
log_stream_ << "[" << dmlc::DateLogger().HumanDate() << "] ";
#endif
}
std::ostream& stream() { return log_stream_; }
std::ostream& stream() { return log_stream_; } // NOLINT
protected:
std::ostringstream log_stream_;


@@ -35,7 +35,7 @@ class Metric {
/*! \return name of metric */
virtual const char* Name() const = 0;
/*! \brief virtual destructor */
virtual ~Metric() {}
virtual ~Metric() = default;
/*!
* \brief create a metric according to name.
* \param name name of the metric.
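For completeness, a one-line sketch of the factory; "rmse" is one of the built-in metric names.

```cpp
#include <memory>
#include <xgboost/metric.h>

std::unique_ptr<xgboost::Metric> metric(xgboost::Metric::Create("rmse"));
```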


@@ -14,13 +14,16 @@
#include <functional>
#include "./data.h"
#include "./base.h"
#include "../../src/common/host_device_vector.h"
namespace xgboost {
/*! \brief interface of objective function */
class ObjFunction {
public:
/*! \brief virtual destructor */
virtual ~ObjFunction() {}
virtual ~ObjFunction() = default;
/*!
* \brief set configuration from pair iterators.
* \param begin The beginning iterator.
@@ -41,10 +44,11 @@ class ObjFunction {
* \param iteration current iteration number.
* \param out_gpair output of get gradient, saves gradient and second order gradient in
*/
virtual void GetGradient(const std::vector<bst_float>& preds,
virtual void GetGradient(HostDeviceVector<bst_float>* preds,
const MetaInfo& info,
int iteration,
std::vector<bst_gpair>* out_gpair) = 0;
HostDeviceVector<GradientPair>* out_gpair) = 0;
/*! \return the default evaluation metric for the objective */
virtual const char* DefaultEvalMetric() const = 0;
// the following functions are optional, most of time default implementation is good enough
@@ -52,13 +56,14 @@ class ObjFunction {
* \brief transform prediction values, this is only called when Prediction is called
* \param io_preds prediction values, saves to this vector as well
*/
virtual void PredTransform(std::vector<bst_float> *io_preds) {}
virtual void PredTransform(HostDeviceVector<bst_float> *io_preds) {}
/*!
* \brief transform prediction values, this is only called when Eval is called,
* usually it redirect to PredTransform
* \param io_preds prediction values, saves to this vector as well
*/
virtual void EvalTransform(std::vector<bst_float> *io_preds) {
virtual void EvalTransform(HostDeviceVector<bst_float> *io_preds) {
this->PredTransform(io_preds);
}
/*!
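A free-standing sketch of squared-error gradients against the new interface; labels are passed explicitly so the example does not lean on MetaInfo field names, and Resize() on HostDeviceVector is an assumption.

```cpp
#include <cstddef>
#include <vector>
#include <xgboost/base.h>
#include "../../src/common/host_device_vector.h"  // path as used in this changeset

void SquaredErrorGradient(xgboost::HostDeviceVector<xgboost::bst_float>* preds,
                          const std::vector<xgboost::bst_float>& labels,
                          xgboost::HostDeviceVector<xgboost::GradientPair>* out_gpair) {
  const std::vector<xgboost::bst_float>& p = preds->HostVector();
  out_gpair->Resize(p.size());  // assumption: Resize() exists on HostDeviceVector
  std::vector<xgboost::GradientPair>& g = out_gpair->HostVector();
  for (std::size_t i = 0; i < p.size(); ++i) {
    xgboost::bst_float grad = p[i] - labels[i];  // d/dp of 0.5 * (p - y)^2
    g[i] = xgboost::GradientPair(grad, 1.0f);    // constant Hessian
  }
}
```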


@@ -13,6 +13,7 @@
#include <utility>
#include <vector>
#include "../../src/gbm/gbtree_model.h"
#include "../../src/common/host_device_vector.h"
// Forward declarations
namespace xgboost {
@@ -35,7 +36,7 @@ namespace xgboost {
class Predictor {
public:
virtual ~Predictor() {}
virtual ~Predictor() = default;
/**
* \fn virtual void Predictor::Init(const std::vector<std::pair<std::string,
@@ -51,10 +52,6 @@ class Predictor {
const std::vector<std::shared_ptr<DMatrix>>& cache);
/**
* \fn virtual void Predictor::PredictBatch( DMatrix* dmat,
* std::vector<bst_float>* out_preds, const gbm::GBTreeModel &model, int
* tree_begin, unsigned ntree_limit = 0) = 0;
*
* \brief Generate batch predictions for a given feature matrix. May use
* cached predictions if available instead of calculating from scratch.
*
@@ -66,7 +63,7 @@ class Predictor {
* limit trees.
*/
virtual void PredictBatch(DMatrix* dmat, std::vector<bst_float>* out_preds,
virtual void PredictBatch(DMatrix* dmat, HostDeviceVector<bst_float>* out_preds,
const gbm::GBTreeModel& model, int tree_begin,
unsigned ntree_limit = 0) = 0;
@@ -140,14 +137,24 @@ class Predictor {
* a vector of length (nfeats + 1) * num_output_group * nsample, arranged in
* that order.
*
* \param [in,out] dmat The input feature matrix.
* \param [in,out] out_contribs The output feature contribs.
* \param model Model to make predictions from.
* \param ntree_limit (Optional) The ntree limit.
* \param approximate Use fast approximate algorithm.
* \param [in,out] dmat The input feature matrix.
* \param [in,out] out_contribs The output feature contribs.
* \param model Model to make predictions from.
* \param ntree_limit (Optional) The ntree limit.
* \param approximate Use fast approximate algorithm.
* \param condition Condition on the condition_feature (0=no, -1=cond off, 1=cond on).
* \param condition_feature Feature to condition on (i.e. fix) during calculations.
*/
virtual void PredictContribution(DMatrix* dmat,
std::vector<bst_float>* out_contribs,
const gbm::GBTreeModel& model,
unsigned ntree_limit = 0,
bool approximate = false,
int condition = 0,
unsigned condition_feature = 0) = 0;
virtual void PredictInteractionContributions(DMatrix* dmat,
std::vector<bst_float>* out_contribs,
const gbm::GBTreeModel& model,
unsigned ntree_limit = 0,
@@ -163,41 +170,14 @@ class Predictor {
static Predictor* Create(std::string name);
protected:
/**
* \fn bool PredictFromCache(DMatrix* dmat, std::vector<bst_float>*
* out_preds, const gbm::GBTreeModel& model, unsigned ntree_limit = 0)
*
* \brief Attempt to predict from cache.
*
* \return True if it succeeds, false if it fails.
*/
bool PredictFromCache(DMatrix* dmat, std::vector<bst_float>* out_preds,
const gbm::GBTreeModel& model,
unsigned ntree_limit = 0);
/**
* \fn void Predictor::InitOutPredictions(const MetaInfo& info,
* std::vector<bst_float>* out_preds, const gbm::GBTreeModel& model) const;
*
* \brief Init out predictions according to base margin.
*
* \param info Dmatrix info possibly containing base margin.
* \param [in,out] out_preds The out preds.
* \param model The model.
*/
void InitOutPredictions(const MetaInfo& info,
std::vector<bst_float>* out_preds,
const gbm::GBTreeModel& model) const;
/**
* \struct PredictionCacheEntry
*
* \brief Contains pointer to input matrix and associated cached predictions.
*/
struct PredictionCacheEntry {
std::shared_ptr<DMatrix> data;
std::vector<bst_float> predictions;
HostDeviceVector<bst_float> predictions;
};
/**
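A sketch of the new condition arguments in use: condition = +1 fixes condition_feature "on", -1 fixes it "off", and differencing two conditioned runs is the building block behind interaction values. The helper is hypothetical; the signature is the one declared above.

```cpp
#include <vector>
#include <xgboost/predictor.h>

void ConditionedShap(xgboost::Predictor* predictor, xgboost::DMatrix* dmat,
                     const xgboost::gbm::GBTreeModel& model, unsigned feature) {
  std::vector<xgboost::bst_float> on, off;
  predictor->PredictContribution(dmat, &on, model, /*ntree_limit=*/0,
                                 /*approximate=*/false, /*condition=*/+1,
                                 /*condition_feature=*/feature);
  predictor->PredictContribution(dmat, &off, model, /*ntree_limit=*/0,
                                 /*approximate=*/false, /*condition=*/-1,
                                 /*condition_feature=*/feature);
  // (on[i] - off[i]) / 2 contributes to the interaction with `feature`
}
```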


@@ -71,70 +71,70 @@ template<typename TSplitCond, typename TNodeStat>
class TreeModel {
public:
/*! \brief data type to indicate split condition */
typedef TNodeStat NodeStat;
using NodeStat = TNodeStat;
/*! \brief auxiliary statistics of node to help tree building */
typedef TSplitCond SplitCond;
using SplitCond = TSplitCond;
/*! \brief tree node */
class Node {
public:
Node() : sindex_(0) {
Node() {
// assert compact alignment
static_assert(sizeof(Node) == 4 * sizeof(int) + sizeof(Info),
"Node: 64 bit align");
}
/*! \brief index of left child */
inline int cleft() const {
inline int LeftChild() const {
return this->cleft_;
}
/*! \brief index of right child */
inline int cright() const {
inline int RightChild() const {
return this->cright_;
}
/*! \brief index of default child when feature is missing */
inline int cdefault() const {
return this->default_left() ? this->cleft() : this->cright();
inline int DefaultChild() const {
return this->DefaultLeft() ? this->LeftChild() : this->RightChild();
}
/*! \brief feature index of split condition */
inline unsigned split_index() const {
inline unsigned SplitIndex() const {
return sindex_ & ((1U << 31) - 1U);
}
/*! \brief when feature is unknown, whether goes to left child */
inline bool default_left() const {
inline bool DefaultLeft() const {
return (sindex_ >> 31) != 0;
}
/*! \brief whether current node is leaf node */
inline bool is_leaf() const {
inline bool IsLeaf() const {
return cleft_ == -1;
}
/*! \return get leaf value of leaf node */
inline bst_float leaf_value() const {
inline bst_float LeafValue() const {
return (this->info_).leaf_value;
}
/*! \return get split condition of the node */
inline TSplitCond split_cond() const {
inline TSplitCond SplitCond() const {
return (this->info_).split_cond;
}
/*! \brief get parent of the node */
inline int parent() const {
inline int Parent() const {
return parent_ & ((1U << 31) - 1);
}
/*! \brief whether current node is left child */
inline bool is_left_child() const {
inline bool IsLeftChild() const {
return (parent_ & (1U << 31)) != 0;
}
/*! \brief whether this node is deleted */
inline bool is_deleted() const {
inline bool IsDeleted() const {
return sindex_ == std::numeric_limits<unsigned>::max();
}
/*! \brief whether current node is root */
inline bool is_root() const {
inline bool IsRoot() const {
return parent_ == -1;
}
/*!
* \brief set the right child
* \param nid node id to right child
*/
inline void set_right_child(int nid) {
inline void SetRightChild(int nid) {
this->cright_ = nid;
}
/*!
@@ -143,7 +143,7 @@ class TreeModel {
* \param split_cond split condition
* \param default_left the default direction when feature is unknown
*/
inline void set_split(unsigned split_index, TSplitCond split_cond,
inline void SetSplit(unsigned split_index, TSplitCond split_cond,
bool default_left = false) {
if (default_left) split_index |= (1U << 31);
this->sindex_ = split_index;
@@ -155,13 +155,13 @@ class TreeModel {
* \param right right index, could be used to store
* additional information
*/
inline void set_leaf(bst_float value, int right = -1) {
inline void SetLeaf(bst_float value, int right = -1) {
(this->info_).leaf_value = value;
this->cleft_ = -1;
this->cright_ = right;
}
/*! \brief mark that this node is deleted */
inline void mark_delete() {
inline void MarkDelete() {
this->sindex_ = std::numeric_limits<unsigned>::max();
}
@@ -181,11 +181,11 @@ class TreeModel {
// pointer to left, right
int cleft_, cright_;
// split feature index, left split or right split depends on the highest bit
unsigned sindex_;
unsigned sindex_{0};
// extra info
Info info_;
// set parent
inline void set_parent(int pidx, bool is_left_child = true) {
inline void SetParent(int pidx, bool is_left_child = true) {
if (is_left_child) pidx |= (1U << 31);
this->parent_ = pidx;
}
@@ -193,35 +193,35 @@ class TreeModel {
protected:
// vector of nodes
std::vector<Node> nodes;
std::vector<Node> nodes_;
// free node space, used during training process
std::vector<int> deleted_nodes;
std::vector<int> deleted_nodes_;
// stats of nodes
std::vector<TNodeStat> stats;
std::vector<TNodeStat> stats_;
// leaf vector, that is used to store additional information
std::vector<bst_float> leaf_vector;
std::vector<bst_float> leaf_vector_;
// allocate a new node,
// !!!!!! NOTE: may cause BUG here, nodes.resize
inline int AllocNode() {
if (param.num_deleted != 0) {
int nd = deleted_nodes.back();
deleted_nodes.pop_back();
int nd = deleted_nodes_.back();
deleted_nodes_.pop_back();
--param.num_deleted;
return nd;
}
int nd = param.num_nodes++;
CHECK_LT(param.num_nodes, std::numeric_limits<int>::max())
<< "number of nodes in the tree exceed 2^31";
nodes.resize(param.num_nodes);
stats.resize(param.num_nodes);
leaf_vector.resize(param.num_nodes * param.size_leaf_vector);
nodes_.resize(param.num_nodes);
stats_.resize(param.num_nodes);
leaf_vector_.resize(param.num_nodes * param.size_leaf_vector);
return nd;
}
// delete a tree node, keep the parent field to allow trace back
inline void DeleteNode(int nid) {
CHECK_GE(nid, param.num_roots);
deleted_nodes.push_back(nid);
nodes[nid].mark_delete();
deleted_nodes_.push_back(nid);
nodes_[nid].MarkDelete();
++param.num_deleted;
}
@@ -232,11 +232,11 @@ class TreeModel {
* \param value new leaf value
*/
inline void ChangeToLeaf(int rid, bst_float value) {
CHECK(nodes[nodes[rid].cleft() ].is_leaf());
CHECK(nodes[nodes[rid].cright()].is_leaf());
this->DeleteNode(nodes[rid].cleft());
this->DeleteNode(nodes[rid].cright());
nodes[rid].set_leaf(value);
CHECK(nodes_[nodes_[rid].LeftChild() ].IsLeaf());
CHECK(nodes_[nodes_[rid].RightChild()].IsLeaf());
this->DeleteNode(nodes_[rid].LeftChild());
this->DeleteNode(nodes_[rid].RightChild());
nodes_[rid].SetLeaf(value);
}
/*!
* \brief collapse a non leaf node to a leaf node, delete its children
@@ -244,12 +244,12 @@ class TreeModel {
* \param value new leaf value
*/
inline void CollapseToLeaf(int rid, bst_float value) {
if (nodes[rid].is_leaf()) return;
if (!nodes[nodes[rid].cleft() ].is_leaf()) {
CollapseToLeaf(nodes[rid].cleft(), 0.0f);
if (nodes_[rid].IsLeaf()) return;
if (!nodes_[nodes_[rid].LeftChild() ].IsLeaf()) {
CollapseToLeaf(nodes_[rid].LeftChild(), 0.0f);
}
if (!nodes[nodes[rid].cright() ].is_leaf()) {
CollapseToLeaf(nodes[rid].cright(), 0.0f);
if (!nodes_[nodes_[rid].RightChild() ].IsLeaf()) {
CollapseToLeaf(nodes_[rid].RightChild(), 0.0f);
}
this->ChangeToLeaf(rid, value);
}
@@ -262,47 +262,47 @@ class TreeModel {
param.num_nodes = 1;
param.num_roots = 1;
param.num_deleted = 0;
nodes.resize(1);
nodes_.resize(1);
}
/*! \brief get node given nid */
inline Node& operator[](int nid) {
return nodes[nid];
return nodes_[nid];
}
/*! \brief get node given nid */
inline const Node& operator[](int nid) const {
return nodes[nid];
return nodes_[nid];
}
/*! \brief get const reference to nodes */
inline const std::vector<Node>& GetNodes() const { return nodes; }
inline const std::vector<Node>& GetNodes() const { return nodes_; }
/*! \brief get node statistics given nid */
inline NodeStat& stat(int nid) {
return stats[nid];
inline NodeStat& Stat(int nid) {
return stats_[nid];
}
/*! \brief get node statistics given nid */
inline const NodeStat& stat(int nid) const {
return stats[nid];
inline const NodeStat& Stat(int nid) const {
return stats_[nid];
}
/*! \brief get leaf vector given nid */
inline bst_float* leafvec(int nid) {
if (leaf_vector.size() == 0) return nullptr;
return& leaf_vector[nid * param.size_leaf_vector];
inline bst_float* Leafvec(int nid) {
if (leaf_vector_.size() == 0) return nullptr;
return& leaf_vector_[nid * param.size_leaf_vector];
}
/*! \brief get leaf vector given nid */
inline const bst_float* leafvec(int nid) const {
if (leaf_vector.size() == 0) return nullptr;
return& leaf_vector[nid * param.size_leaf_vector];
inline const bst_float* Leafvec(int nid) const {
if (leaf_vector_.size() == 0) return nullptr;
return& leaf_vector_[nid * param.size_leaf_vector];
}
/*! \brief initialize the model */
inline void InitModel() {
param.num_nodes = param.num_roots;
nodes.resize(param.num_nodes);
stats.resize(param.num_nodes);
leaf_vector.resize(param.num_nodes * param.size_leaf_vector, 0.0f);
nodes_.resize(param.num_nodes);
stats_.resize(param.num_nodes);
leaf_vector_.resize(param.num_nodes * param.size_leaf_vector, 0.0f);
for (int i = 0; i < param.num_nodes; i ++) {
nodes[i].set_leaf(0.0f);
nodes[i].set_parent(-1);
nodes_[i].SetLeaf(0.0f);
nodes_[i].SetParent(-1);
}
}
/*!
@@ -311,35 +311,35 @@ class TreeModel {
*/
inline void Load(dmlc::Stream* fi) {
CHECK_EQ(fi->Read(&param, sizeof(TreeParam)), sizeof(TreeParam));
nodes.resize(param.num_nodes);
stats.resize(param.num_nodes);
nodes_.resize(param.num_nodes);
stats_.resize(param.num_nodes);
CHECK_NE(param.num_nodes, 0);
CHECK_EQ(fi->Read(dmlc::BeginPtr(nodes), sizeof(Node) * nodes.size()),
sizeof(Node) * nodes.size());
CHECK_EQ(fi->Read(dmlc::BeginPtr(stats), sizeof(NodeStat) * stats.size()),
sizeof(NodeStat) * stats.size());
CHECK_EQ(fi->Read(dmlc::BeginPtr(nodes_), sizeof(Node) * nodes_.size()),
sizeof(Node) * nodes_.size());
CHECK_EQ(fi->Read(dmlc::BeginPtr(stats_), sizeof(NodeStat) * stats_.size()),
sizeof(NodeStat) * stats_.size());
if (param.size_leaf_vector != 0) {
CHECK(fi->Read(&leaf_vector));
CHECK(fi->Read(&leaf_vector_));
}
// chg deleted nodes
deleted_nodes.resize(0);
deleted_nodes_.resize(0);
for (int i = param.num_roots; i < param.num_nodes; ++i) {
if (nodes[i].is_deleted()) deleted_nodes.push_back(i);
if (nodes_[i].IsDeleted()) deleted_nodes_.push_back(i);
}
CHECK_EQ(static_cast<int>(deleted_nodes.size()), param.num_deleted);
CHECK_EQ(static_cast<int>(deleted_nodes_.size()), param.num_deleted);
}
/*!
* \brief save model to stream
* \param fo output stream
*/
inline void Save(dmlc::Stream* fo) const {
CHECK_EQ(param.num_nodes, static_cast<int>(nodes.size()));
CHECK_EQ(param.num_nodes, static_cast<int>(stats.size()));
CHECK_EQ(param.num_nodes, static_cast<int>(nodes_.size()));
CHECK_EQ(param.num_nodes, static_cast<int>(stats_.size()));
fo->Write(&param, sizeof(TreeParam));
CHECK_NE(param.num_nodes, 0);
fo->Write(dmlc::BeginPtr(nodes), sizeof(Node) * nodes.size());
fo->Write(dmlc::BeginPtr(stats), sizeof(NodeStat) * nodes.size());
if (param.size_leaf_vector != 0) fo->Write(leaf_vector);
fo->Write(dmlc::BeginPtr(nodes_), sizeof(Node) * nodes_.size());
fo->Write(dmlc::BeginPtr(stats_), sizeof(NodeStat) * nodes_.size());
if (param.size_leaf_vector != 0) fo->Write(leaf_vector_);
}
/*!
* \brief add child nodes to node
@@ -348,10 +348,10 @@ class TreeModel {
inline void AddChilds(int nid) {
int pleft = this->AllocNode();
int pright = this->AllocNode();
nodes[nid].cleft_ = pleft;
nodes[nid].cright_ = pright;
nodes[nodes[nid].cleft() ].set_parent(nid, true);
nodes[nodes[nid].cright()].set_parent(nid, false);
nodes_[nid].cleft_ = pleft;
nodes_[nid].cright_ = pright;
nodes_[nodes_[nid].LeftChild() ].SetParent(nid, true);
nodes_[nodes_[nid].RightChild()].SetParent(nid, false);
}
/*!
* \brief only add a right child to a leaf node
@@ -359,8 +359,8 @@ class TreeModel {
*/
inline void AddRightChild(int nid) {
int pright = this->AllocNode();
nodes[nid].right = pright;
nodes[nodes[nid].right].set_parent(nid, false);
nodes_[nid].right = pright;
nodes_[nodes_[nid].right].SetParent(nid, false);
}
/*!
* \brief get current depth
@@ -369,9 +369,9 @@ class TreeModel {
*/
inline int GetDepth(int nid, bool pass_rchild = false) const {
int depth = 0;
while (!nodes[nid].is_root()) {
if (!pass_rchild || nodes[nid].is_left_child()) ++depth;
nid = nodes[nid].parent();
while (!nodes_[nid].IsRoot()) {
if (!pass_rchild || nodes_[nid].IsLeftChild()) ++depth;
nid = nodes_[nid].Parent();
}
return depth;
}
@@ -380,9 +380,9 @@ class TreeModel {
* \param nid node id
*/
inline int MaxDepth(int nid) const {
if (nodes[nid].is_leaf()) return 0;
return std::max(MaxDepth(nodes[nid].cleft())+1,
MaxDepth(nodes[nid].cright())+1);
if (nodes_[nid].IsLeaf()) return 0;
return std::max(MaxDepth(nodes_[nid].LeftChild())+1,
MaxDepth(nodes_[nid].RightChild())+1);
}
/*!
* \brief get maximum depth
@@ -395,7 +395,7 @@ class TreeModel {
return maxd;
}
/*! \brief number of extra nodes besides the root */
inline int num_extra_nodes() const {
inline int NumExtraNodes() const {
return param.num_nodes - param.num_roots - param.num_deleted;
}
};
@@ -421,7 +421,7 @@ struct PathElement {
bst_float zero_fraction;
bst_float one_fraction;
bst_float pweight;
PathElement() {}
PathElement() = default;
PathElement(int i, bst_float z, bst_float o, bst_float w) :
feature_index(i), zero_fraction(z), one_fraction(o), pweight(w) {}
};
@@ -457,19 +457,19 @@ class RegTree: public TreeModel<bst_float, RTreeNodeStat> {
* \brief returns the size of the feature vector
* \return the size of the feature vector
*/
inline size_t size() const;
inline size_t Size() const;
/*!
* \brief get ith value
* \param i feature index.
* \return the i-th feature value
*/
inline bst_float fvalue(size_t i) const;
inline bst_float Fvalue(size_t i) const;
/*!
* \brief check whether i-th entry is missing
* \param i feature index.
* \return whether i-th value is missing.
*/
inline bool is_missing(size_t i) const;
inline bool IsMissing(size_t i) const;
private:
/*!
@@ -480,7 +480,7 @@ class RegTree: public TreeModel<bst_float, RTreeNodeStat> {
bst_float fvalue;
int flag;
};
std::vector<Entry> data;
std::vector<Entry> data_;
};
/*!
* \brief get the leaf index
@@ -501,13 +501,33 @@ class RegTree: public TreeModel<bst_float, RTreeNodeStat> {
* \param feat dense feature vector, if the feature is missing the field is set to NaN
* \param root_id starting root index of the instance
* \param out_contribs output vector to hold the contributions
* \param condition fix one feature to either off (-1) on (1) or not fixed (0 default)
* \param condition_feature the index of the feature to fix
*/
inline void CalculateContributions(const RegTree::FVec& feat, unsigned root_id,
bst_float *out_contribs) const;
bst_float *out_contribs,
int condition = 0,
unsigned condition_feature = 0) const;
/*!
* \brief Recursive function that computes the feature attributions for a single tree.
* \param feat dense feature vector, if the feature is missing the field is set to NaN
* \param phi dense output vector of feature attributions
* \param node_index the index of the current node in the tree
* \param unique_depth how many unique features are above the current node in the tree
* \param parent_unique_path a vector of statistics about our current path through the tree
* \param parent_zero_fraction what fraction of the parent path weight is coming as 0 (integrated)
* \param parent_one_fraction what fraction of the parent path weight is coming as 1 (fixed)
* \param parent_feature_index what feature the parent node used to split
* \param condition fix one feature to either off (-1) on (1) or not fixed (0 default)
* \param condition_feature the index of the feature to fix
* \param condition_fraction what fraction of the current weight matches our conditioning feature
*/
inline void TreeShap(const RegTree::FVec& feat, bst_float *phi,
unsigned node_index, unsigned unique_depth,
PathElement *parent_unique_path, bst_float parent_zero_fraction,
bst_float parent_one_fraction, int parent_feature_index) const;
bst_float parent_one_fraction, int parent_feature_index,
int condition, unsigned condition_feature,
bst_float condition_fraction) const;
/*!
* \brief calculate the approximate feature contributions for the given root
@@ -542,63 +562,63 @@ class RegTree: public TreeModel<bst_float, RTreeNodeStat> {
private:
inline bst_float FillNodeMeanValue(int nid);
std::vector<bst_float> node_mean_values;
std::vector<bst_float> node_mean_values_;
};
// implementations of inline functions
// do not need to read if only use the model
inline void RegTree::FVec::Init(size_t size) {
Entry e; e.flag = -1;
data.resize(size);
std::fill(data.begin(), data.end(), e);
data_.resize(size);
std::fill(data_.begin(), data_.end(), e);
}
inline void RegTree::FVec::Fill(const RowBatch::Inst& inst) {
for (bst_uint i = 0; i < inst.length; ++i) {
if (inst[i].index >= data.size()) continue;
data[inst[i].index].fvalue = inst[i].fvalue;
if (inst[i].index >= data_.size()) continue;
data_[inst[i].index].fvalue = inst[i].fvalue;
}
}
inline void RegTree::FVec::Drop(const RowBatch::Inst& inst) {
for (bst_uint i = 0; i < inst.length; ++i) {
if (inst[i].index >= data.size()) continue;
data[inst[i].index].flag = -1;
if (inst[i].index >= data_.size()) continue;
data_[inst[i].index].flag = -1;
}
}
inline size_t RegTree::FVec::size() const {
return data.size();
inline size_t RegTree::FVec::Size() const {
return data_.size();
}
inline bst_float RegTree::FVec::fvalue(size_t i) const {
return data[i].fvalue;
inline bst_float RegTree::FVec::Fvalue(size_t i) const {
return data_[i].fvalue;
}
inline bool RegTree::FVec::is_missing(size_t i) const {
return data[i].flag == -1;
inline bool RegTree::FVec::IsMissing(size_t i) const {
return data_[i].flag == -1;
}
inline int RegTree::GetLeafIndex(const RegTree::FVec& feat, unsigned root_id) const {
int pid = static_cast<int>(root_id);
while (!(*this)[pid].is_leaf()) {
unsigned split_index = (*this)[pid].split_index();
pid = this->GetNext(pid, feat.fvalue(split_index), feat.is_missing(split_index));
auto pid = static_cast<int>(root_id);
while (!(*this)[pid].IsLeaf()) {
unsigned split_index = (*this)[pid].SplitIndex();
pid = this->GetNext(pid, feat.Fvalue(split_index), feat.IsMissing(split_index));
}
return pid;
}
inline bst_float RegTree::Predict(const RegTree::FVec& feat, unsigned root_id) const {
int pid = this->GetLeafIndex(feat, root_id);
return (*this)[pid].leaf_value();
return (*this)[pid].LeafValue();
}
inline void RegTree::FillNodeMeanValues() {
size_t num_nodes = this->param.num_nodes;
if (this->node_mean_values.size() == num_nodes) {
if (this->node_mean_values_.size() == num_nodes) {
return;
}
this->node_mean_values.resize(num_nodes);
this->node_mean_values_.resize(num_nodes);
for (int root_id = 0; root_id < param.num_roots; ++root_id) {
this->FillNodeMeanValue(root_id);
}
@@ -607,40 +627,39 @@ inline void RegTree::FillNodeMeanValues() {
inline bst_float RegTree::FillNodeMeanValue(int nid) {
bst_float result;
auto& node = (*this)[nid];
if (node.is_leaf()) {
result = node.leaf_value();
if (node.IsLeaf()) {
result = node.LeafValue();
} else {
result = this->FillNodeMeanValue(node.cleft()) * this->stat(node.cleft()).sum_hess;
result += this->FillNodeMeanValue(node.cright()) * this->stat(node.cright()).sum_hess;
result /= this->stat(nid).sum_hess;
result = this->FillNodeMeanValue(node.LeftChild()) * this->Stat(node.LeftChild()).sum_hess;
result += this->FillNodeMeanValue(node.RightChild()) * this->Stat(node.RightChild()).sum_hess;
result /= this->Stat(nid).sum_hess;
}
this->node_mean_values[nid] = result;
this->node_mean_values_[nid] = result;
return result;
}
inline void RegTree::CalculateContributionsApprox(const RegTree::FVec& feat, unsigned root_id,
bst_float *out_contribs) const {
CHECK_GT(this->node_mean_values.size(), 0U);
CHECK_GT(this->node_mean_values_.size(), 0U);
// this follows the idea of http://blog.datadive.net/interpreting-random-forests/
bst_float node_value;
unsigned split_index;
int pid = static_cast<int>(root_id);
unsigned split_index = 0;
auto pid = static_cast<int>(root_id);
// update bias value
node_value = this->node_mean_values[pid];
out_contribs[feat.size()] += node_value;
if ((*this)[pid].is_leaf()) {
bst_float node_value = this->node_mean_values_[pid];
out_contribs[feat.Size()] += node_value;
if ((*this)[pid].IsLeaf()) {
// nothing to do anymore
return;
}
while (!(*this)[pid].is_leaf()) {
split_index = (*this)[pid].split_index();
pid = this->GetNext(pid, feat.fvalue(split_index), feat.is_missing(split_index));
bst_float new_value = this->node_mean_values[pid];
while (!(*this)[pid].IsLeaf()) {
split_index = (*this)[pid].SplitIndex();
pid = this->GetNext(pid, feat.Fvalue(split_index), feat.IsMissing(split_index));
bst_float new_value = this->node_mean_values_[pid];
// update feature weight
out_contribs[split_index] += new_value - node_value;
node_value = new_value;
}
bst_float leaf_value = (*this)[pid].leaf_value();
bst_float leaf_value = (*this)[pid].LeafValue();
// update leaf feature weight
out_contribs[split_index] += leaf_value - node_value;
}
@@ -700,7 +719,7 @@ inline bst_float UnwoundPathSum(const PathElement *unique_path, unsigned unique_
/ static_cast<bst_float>((i + 1) * one_fraction);
total += tmp;
next_one_portion = unique_path[i].pweight - tmp * zero_fraction * ((unique_depth - i)
/ static_cast<bst_float>(unique_depth+1));
/ static_cast<bst_float>(unique_depth + 1));
} else {
total += (unique_path[i].pweight / zero_fraction) / ((unique_depth - i)
/ static_cast<bst_float>(unique_depth + 1));
@@ -713,41 +732,49 @@ inline bst_float UnwoundPathSum(const PathElement *unique_path, unsigned unique_
inline void RegTree::TreeShap(const RegTree::FVec& feat, bst_float *phi,
unsigned node_index, unsigned unique_depth,
PathElement *parent_unique_path, bst_float parent_zero_fraction,
bst_float parent_one_fraction, int parent_feature_index) const {
bst_float parent_one_fraction, int parent_feature_index,
int condition, unsigned condition_feature,
bst_float condition_fraction) const {
const auto node = (*this)[node_index];
// stop if we have no weight coming down to us
if (condition_fraction == 0) return;
// extend the unique path
PathElement *unique_path = parent_unique_path + unique_depth;
if (unique_depth > 0) std::copy(parent_unique_path,
parent_unique_path + unique_depth, unique_path);
ExtendPath(unique_path, unique_depth, parent_zero_fraction,
parent_one_fraction, parent_feature_index);
const unsigned split_index = node.split_index();
PathElement *unique_path = parent_unique_path + unique_depth + 1;
std::copy(parent_unique_path, parent_unique_path + unique_depth + 1, unique_path);
if (condition == 0 || condition_feature != static_cast<unsigned>(parent_feature_index)) {
ExtendPath(unique_path, unique_depth, parent_zero_fraction,
parent_one_fraction, parent_feature_index);
}
const unsigned split_index = node.SplitIndex();
// leaf node
if (node.is_leaf()) {
if (node.IsLeaf()) {
for (unsigned i = 1; i <= unique_depth; ++i) {
const bst_float w = UnwoundPathSum(unique_path, unique_depth, i);
const PathElement &el = unique_path[i];
phi[el.feature_index] += w * (el.one_fraction - el.zero_fraction) * node.leaf_value();
phi[el.feature_index] += w * (el.one_fraction - el.zero_fraction)
* node.LeafValue() * condition_fraction;
}
// internal node
} else {
// find which branch is "hot" (meaning x would follow it)
unsigned hot_index = 0;
if (feat.is_missing(split_index)) {
hot_index = node.cdefault();
} else if (feat.fvalue(split_index) < node.split_cond()) {
hot_index = node.cleft();
if (feat.IsMissing(split_index)) {
hot_index = node.DefaultChild();
} else if (feat.Fvalue(split_index) < node.SplitCond()) {
hot_index = node.LeftChild();
} else {
hot_index = node.cright();
hot_index = node.RightChild();
}
const unsigned cold_index = (static_cast<int>(hot_index) == node.cleft() ?
node.cright() : node.cleft());
const bst_float w = this->stat(node_index).sum_hess;
const bst_float hot_zero_fraction = this->stat(hot_index).sum_hess / w;
const bst_float cold_zero_fraction = this->stat(cold_index).sum_hess / w;
const unsigned cold_index = (static_cast<int>(hot_index) == node.LeftChild() ?
node.RightChild() : node.LeftChild());
const bst_float w = this->Stat(node_index).sum_hess;
const bst_float hot_zero_fraction = this->Stat(hot_index).sum_hess / w;
const bst_float cold_zero_fraction = this->Stat(cold_index).sum_hess / w;
bst_float incoming_zero_fraction = 1;
bst_float incoming_one_fraction = 1;
@@ -764,47 +791,57 @@ inline void RegTree::TreeShap(const RegTree::FVec& feat, bst_float *phi,
unique_depth -= 1;
}
// divide up the condition_fraction among the recursive calls
bst_float hot_condition_fraction = condition_fraction;
bst_float cold_condition_fraction = condition_fraction;
if (condition > 0 && split_index == condition_feature) {
cold_condition_fraction = 0;
unique_depth -= 1;
} else if (condition < 0 && split_index == condition_feature) {
hot_condition_fraction *= hot_zero_fraction;
cold_condition_fraction *= cold_zero_fraction;
unique_depth -= 1;
}
TreeShap(feat, phi, hot_index, unique_depth + 1, unique_path,
hot_zero_fraction*incoming_zero_fraction, incoming_one_fraction, split_index);
hot_zero_fraction * incoming_zero_fraction, incoming_one_fraction,
split_index, condition, condition_feature, hot_condition_fraction);
TreeShap(feat, phi, cold_index, unique_depth + 1, unique_path,
cold_zero_fraction*incoming_zero_fraction, 0, split_index);
cold_zero_fraction * incoming_zero_fraction, 0,
split_index, condition, condition_feature, cold_condition_fraction);
}
}
inline void RegTree::CalculateContributions(const RegTree::FVec& feat, unsigned root_id,
bst_float *out_contribs) const {
bst_float *out_contribs,
int condition,
unsigned condition_feature) const {
// find the expected value of the tree's predictions
bst_float base_value = 0.0f;
bst_float total_cover = 0.0f;
for (int i = 0; i < (*this).param.num_nodes; ++i) {
const auto node = (*this)[i];
if (node.is_leaf()) {
const auto cover = this->stat(i).sum_hess;
base_value += cover * node.leaf_value();
total_cover += cover;
}
if (condition == 0) {
bst_float node_value = this->node_mean_values_[static_cast<int>(root_id)];
out_contribs[feat.Size()] += node_value;
}
out_contribs[feat.size()] += base_value / total_cover;
// Preallocate space for the unique path data
const int maxd = this->MaxDepth(root_id) + 1;
PathElement *unique_path_data = new PathElement[(maxd * (maxd + 1)) / 2];
const int maxd = this->MaxDepth(root_id) + 2;
auto *unique_path_data = new PathElement[(maxd * (maxd + 1)) / 2];
TreeShap(feat, out_contribs, root_id, 0, unique_path_data, 1, 1, -1);
TreeShap(feat, out_contribs, root_id, 0, unique_path_data,
1, 1, -1, condition, condition_feature, 1);
delete[] unique_path_data;
}
/*! \brief get next position of the tree given current pid */
inline int RegTree::GetNext(int pid, bst_float fvalue, bool is_unknown) const {
bst_float split_value = (*this)[pid].split_cond();
bst_float split_value = (*this)[pid].SplitCond();
if (is_unknown) {
return (*this)[pid].cdefault();
return (*this)[pid].DefaultChild();
} else {
if (fvalue < split_value) {
return (*this)[pid].cleft();
return (*this)[pid].LeftChild();
} else {
return (*this)[pid].cright();
return (*this)[pid].RightChild();
}
}
}
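As a summary of the renamed node accessors, a hypothetical root-to-leaf walk that mirrors the GetLeafIndex/GetNext logic above:

```cpp
#include <xgboost/tree_model.h>

xgboost::bst_float WalkToLeaf(const xgboost::RegTree& tree,
                              const xgboost::RegTree::FVec& feat) {
  int nid = 0;  // start at the root
  while (!tree[nid].IsLeaf()) {
    const unsigned split = tree[nid].SplitIndex();
    if (feat.IsMissing(split)) {
      nid = tree[nid].DefaultChild();
    } else {
      nid = feat.Fvalue(split) < tree[nid].SplitCond() ? tree[nid].LeftChild()
                                                       : tree[nid].RightChild();
    }
  }
  return tree[nid].LeafValue();
}
```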


@@ -16,6 +16,7 @@
#include "./base.h"
#include "./data.h"
#include "./tree_model.h"
#include "../../src/common/host_device_vector.h"
namespace xgboost {
/*!
@@ -24,7 +25,7 @@ namespace xgboost {
class TreeUpdater {
public:
/*! \brief virtual destructor */
virtual ~TreeUpdater() {}
virtual ~TreeUpdater() = default;
/*!
* \brief Initialize the updater with given arguments.
* \param args arguments to the objective function.
@@ -39,7 +40,7 @@ class TreeUpdater {
* but maybe different random seeds, usually one tree is passed in at a time,
* there can be multiple trees when we train random forest style model
*/
virtual void Update(const std::vector<bst_gpair>& gpair,
virtual void Update(HostDeviceVector<GradientPair>* gpair,
DMatrix* data,
const std::vector<RegTree*>& trees) = 0;
@@ -54,9 +55,10 @@ class TreeUpdater {
* updated by the time this function returns.
*/
virtual bool UpdatePredictionCache(const DMatrix* data,
std::vector<bst_float>* out_preds) {
HostDeviceVector<bst_float>* out_preds) {
return false;
}
/*!
* \brief Create a tree updater given name
* \param name Name of the tree updater.
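A sketch of driving an updater through the new gradient interface; "grow_colmaker" is one of the built-in updater names, and the helper is hypothetical.

```cpp
#include <memory>
#include <vector>
#include <xgboost/tree_updater.h>

void GrowOneTree(xgboost::HostDeviceVector<xgboost::GradientPair>* gpair,
                 xgboost::DMatrix* dmat, xgboost::RegTree* tree) {
  std::unique_ptr<xgboost::TreeUpdater> updater(
      xgboost::TreeUpdater::Create("grow_colmaker"));
  updater->Init({});                     // no extra arguments for the sketch
  updater->Update(gpair, dmat, {tree});  // may pass several trees at once
}
```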


@@ -16,7 +16,58 @@ Apache Flink and Apache Spark.
You can find more about XGBoost on [Documentation](https://xgboost.readthedocs.org/en/latest/jvm/index.html) and [Resource Page](../demo/README.md).
## Add Maven Dependency
XGBoost4J, XGBoost4J-Spark, etc., in the Maven repository are compiled with g++-4.8.5.
### Access SNAPSHOT version
You need to add GitHub as a repository:
<b>maven</b>:
```xml
<repository>
<id>GitHub Repo</id>
<name>GitHub Repo</name>
<url>https://raw.githubusercontent.com/CodingCat/xgboost/maven-repo/</url>
</repository>
```
<b>sbt</b>:
```sbt
resolvers += "GitHub Repo" at "https://raw.githubusercontent.com/CodingCat/xgboost/maven-repo/"
```
then add the dependency as follows:
<b>maven</b>:
```xml
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j</artifactId>
<version>latest_version_num</version>
</dependency>
```
<b>sbt</b>
```sbt
"ml.dmlc" % "xgboost4j" % "latest_version_num"
```
If you want to use `xgboost4j-spark`, simply replace `xgboost4j` with `xgboost4j-spark`.
## Examples
Full code examples for Scala, Java, Apache Spark, and Apache Flink can
be found in the [examples package](https://github.com/dmlc/xgboost/tree/master/jvm-packages/xgboost4j-example).
**NOTE on LIBSVM Format**:
* Use *1-based* ascending indexes for the LIBSVM format in distributed training mode
* Spark does the internal conversion and does not accept 0-based formats
* In contrast, use *0-based* indexes when predicting in normal mode, for instance when using the saved model in the Python package; both conventions are illustrated below
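For illustration (hypothetical values), the same instance written in both conventions; the first line uses 1-based indexes for distributed training with Spark, the second uses 0-based indexes for normal-mode prediction:

```
1 1:0.5 3:1.2
1 0:0.5 2:1.2
```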


@@ -0,0 +1,5 @@
#!/bin/bash
set -x
sudo docker run --rm -m 4g -e JAVA_OPTS='-Xmx6g' --attach stdin --attach stdout --attach stderr --volume `pwd`/../:/xgboost codingcat/xgbrelease:latest /xgboost/jvm-packages/dev/build.sh

jvm-packages/dev/build.sh (new executable file)

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -x
export JAVA_HOME=/usr/lib/jvm/java-1.8.0
export MAVEN_OPTS="-Xmx3000m"
export CMAKE_CXX_COMPILER=/opt/rh/devtoolset-2/root/usr/bin/gcc
export CXX=/opt/rh/devtoolset-2/root/usr/bin/g++
export CC=/opt/rh/devtoolset-2/root/usr/bin/gcc
export PATH=$CXX:$CC:/opt/rh/python27/root/usr/bin/python:$PATH
scl enable devtoolset-2 bash
scl enable python27 bash
rm /usr/bin/python
ln -s /opt/rh/python27/root/usr/bin/python /usr/bin/python
# build xgboost
cd /xgboost/jvm-packages && mvn package


@@ -0,0 +1,44 @@
#!/bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# (Yizhi) This is mainly inspired by the script in apache/spark.
# I did some modification to adapt it to our project.
# (Nan) Modified from MxNet
set -e
if [[ ($# -ne 2) || ( $1 == "--help") || $1 == "-h" ]]; then
echo "Usage: $(basename $0) [-h|--help] <from_version> <to_version>" 1>&2
exit 1
fi
FROM_VERSION=$1
TO_VERSION=$2
sed_i() {
perl -p -000 -e "$1" "$2" > "$2.tmp" && mv "$2.tmp" "$2"
}
export -f sed_i
BASEDIR=$(dirname $0)/..
find "$BASEDIR" -name 'pom.xml' -not -path '*target*' -print \
-exec bash -c \
"sed_i 's/(<artifactId>(xgboost-jvm|xgboost4j.*)<\/artifactId>\s+<version)>'$FROM_VERSION'(<\/version>)/\1>'$TO_VERSION'\3/g' {}" \;


@@ -6,16 +6,15 @@
<groupId>ml.dmlc</groupId>
<artifactId>xgboost-jvm</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
<packaging>pom</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
<maven.version>3.3.9</maven.version>
<flink.version>0.10.2</flink.version>
<spark.version>2.1.0</spark.version>
<spark.version>2.3.0</spark.version>
<scala.version>2.11.8</scala.version>
<scala.binary.version>2.11</scala.binary.version>
</properties>
@@ -34,16 +33,82 @@
</modules>
<profiles>
<profile>
<id>spark-2.x</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<!--<properties>-->
<!--<flink.version>0.10.2</flink.version> -->
<!--<spark.version>2.0.1</spark.version>-->
<!--<scala.version>2.11.8</scala.version>-->
<!--<scala.binary.version>2.11</scala.binary.version>-->
<!--</properties>-->
<id>assembly</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.6</version>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<skipAssembly>true</skipAssembly>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
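The assembly plugin now lives in an opt-in profile rather than in the default build, so the jar-with-dependencies artifact is presumably produced only when that profile is activated, e.g. via standard Maven profile selection with `mvn package -Passembly`.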
<profile>
<id>release-to-github</id>
<distributionManagement>
<repository>
<id>github.repo</id>
<name>Temporary Staging Repository</name>
<url>file://${project.build.directory}/mvn-repo</url>
</repository>
</distributionManagement>
<properties>
<github.global.server>github</github.global.server>
</properties>
<build>
<plugins>
<plugin>
<groupId>com.github.github</groupId>
<artifactId>site-maven-plugin</artifactId>
<version>0.12</version>
<configuration>
<message>Maven artifacts for ${project.version}</message>
<noJekyll>true</noJekyll>
<outputDirectory>${project.build.directory}/mvn-repo</outputDirectory>
<branch>refs/heads/maven-repo</branch>
<excludes>
<exclude>*-with-dependencies.jar</exclude>
</excludes>
<repositoryName>xgboost</repositoryName>
<repositoryOwner>CodingCat</repositoryOwner>
<merge>true</merge>
</configuration>
<executions>
<!-- run site-maven-plugin's 'site' target as part of the build's normal 'deploy' phase -->
<execution>
<goals>
<goal>site</goal>
</goals>
<phase>deploy</phase>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-deploy-plugin</artifactId>
<version>2.8.2</version>
<configuration>
<altDeploymentRepository>internal.repo::default::file://${project.build.directory}/mvn-repo</altDeploymentRepository>
</configuration>
</plugin>
</plugins>
</build>
</profile>
</profiles>
<build>
@@ -173,27 +238,6 @@
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.6</version>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<skipAssembly>true</skipAssembly>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>


@@ -23,7 +23,7 @@ XGBoost4J Code Examples
* [External Memory](src/main/scala/ml/dmlc/xgboost4j/scala/example/ExternalMemory.scala)
## Spark API
* [Distributed Training with Spark](src/main/scala/ml/dmlc/xgboost4j/scala/example/spark/DistTrainWithSpark.scala)
* [Distributed Training with Spark](src/main/scala/ml/dmlc/xgboost4j/scala/example/spark/SparkWithDataFrame.scala)
## Flink API
* [Distributed Training with Flink](src/main/scala/ml/dmlc/xgboost4j/scala/example/flink/DistTrainWithFlink.scala)


@@ -6,10 +6,10 @@
<parent>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost-jvm</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
</parent>
<artifactId>xgboost4j-example</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
<packaging>jar</packaging>
<build>
<plugins>
@@ -26,7 +26,7 @@
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j-spark</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
@@ -37,7 +37,7 @@
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j-flink</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>


@@ -6,10 +6,10 @@
<parent>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost-jvm</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
</parent>
<artifactId>xgboost4j-flink</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
<build>
<plugins>
<plugin>
@@ -26,7 +26,7 @@
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>


@@ -6,7 +6,7 @@
<parent>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost-jvm</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
</parent>
<artifactId>xgboost4j-spark</artifactId>
<build>
@@ -24,7 +24,19 @@
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j</artifactId>
<version>0.7</version>
<version>0.72-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_${scala.binary.version}</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_${scala.binary.version}</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>


@@ -22,12 +22,16 @@ import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.SparkContext
/**
* A class which allows user to save checkpoint boosters every a few rounds. If a previous job
* fails, the job can restart training from a saved booster instead of from scratch. This class
* A class which allows the user to save checkpoints every few rounds. If a previous job fails,
* the job can restart training from a saved checkpoint instead of from scratch. This class
* provides interface and helper methods for the checkpoint functionality.
*
* NOTE: This checkpoint is different from Rabit checkpoint. Rabit checkpoint is a native-level
* checkpoint stored in executor memory. This is a checkpoint which the Spark driver stores on
* HDFS every few iterations.
*
* @param sc the sparkContext object
* @param checkpointPath the hdfs path to store checkpoint boosters
* @param checkpointPath the hdfs path to store checkpoints
*/
private[spark] class CheckpointManager(sc: SparkContext, checkpointPath: String) {
private val logger = LogFactory.getLog("XGBoostSpark")
@@ -49,11 +53,11 @@ private[spark] class CheckpointManager(sc: SparkContext, checkpointPath: String)
}
/**
* Load existing checkpoint with the highest version.
* Load existing checkpoint with the highest version as a Booster object
*
* @return the booster with the highest version, null if no checkpoints available.
*/
private[spark] def loadBooster: Booster = {
private[spark] def loadCheckpointAsBooster: Booster = {
val versions = getExistingVersions
if (versions.nonEmpty) {
val version = versions.max
@@ -68,16 +72,16 @@ private[spark] class CheckpointManager(sc: SparkContext, checkpointPath: String)
}
/**
* Clean up all previous models and save a new model
* Clean up all previous checkpoints and save a new checkpoint
*
* @param model the xgboost model to save
* @param checkpoint the checkpoint to save as an XGBoostModel
*/
private[spark] def updateModel(model: XGBoostModel): Unit = {
private[spark] def updateCheckpoint(checkpoint: XGBoostModel): Unit = {
val fs = FileSystem.get(sc.hadoopConfiguration)
val prevModelPaths = getExistingVersions.map(version => new Path(getPath(version)))
val fullPath = getPath(model.version)
logger.info(s"Saving checkpoint model with version ${model.version} to $fullPath")
model.saveModelAsHadoopFile(fullPath)(sc)
val fullPath = getPath(checkpoint.version)
logger.info(s"Saving checkpoint model with version ${checkpoint.version} to $fullPath")
checkpoint.saveModelAsHadoopFile(fullPath)(sc)
prevModelPaths.foreach(path => fs.delete(path, true))
}
@@ -95,22 +99,22 @@ private[spark] class CheckpointManager(sc: SparkContext, checkpointPath: String)
}
/**
* Calculate a list of checkpoint rounds to save checkpoints based on the savingFreq and
* total number of rounds for the training. Concretely, the saving rounds start with
* prevRounds + savingFreq, and increase by savingFreq in each step until it reaches total
* number of rounds. If savingFreq is 0, the checkpoint will be disabled and the method
* returns Seq(round)
* Calculate a list of checkpoint rounds to save checkpoints based on the checkpointInterval
* and total number of rounds for the training. Concretely, the checkpoint rounds start with
* prevRounds + checkpointInterval, and increase by checkpointInterval in each step until it
* reaches the total number of rounds. If checkpointInterval is 0, checkpointing is disabled
* and the method returns Seq(round)
*
* @param savingFreq the increase on rounds during each step of training
* @param checkpointInterval Period (in iterations) between checkpoints.
* @param round the total number of rounds for the training
* @return a seq of integers, each represent the index of round to save the checkpoints
*/
private[spark] def getSavingRounds(savingFreq: Int, round: Int): Seq[Int] = {
if (checkpointPath.nonEmpty && savingFreq > 0) {
private[spark] def getCheckpointRounds(checkpointInterval: Int, round: Int): Seq[Int] = {
if (checkpointPath.nonEmpty && checkpointInterval > 0) {
val prevRounds = getExistingVersions.map(_ / 2)
val firstSavingRound = (0 +: prevRounds).max + savingFreq
(firstSavingRound until round by savingFreq) :+ round
} else if (savingFreq <= 0) {
val firstCheckpointRound = (0 +: prevRounds).max + checkpointInterval
(firstCheckpointRound until round by checkpointInterval) :+ round
} else if (checkpointInterval <= 0) {
Seq(round)
} else {
throw new IllegalArgumentException("parameters \"checkpoint_path\" should also be set.")
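A worked reading of getCheckpointRounds, assuming a single existing checkpoint with version 20 (versions map to rounds via the `_ / 2` above): with checkpointInterval = 5 and round = 22, prevRounds is Seq(10), the first checkpoint round is 15, and the method returns Seq(15, 20, 22).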
@@ -128,12 +132,12 @@ object CheckpointManager {
" an instance of String.")
}
val savingFreq: Int = params.get("saving_frequency") match {
val checkpointInterval: Int = params.get("checkpoint_interval") match {
case None => 0
case Some(freq: Int) => freq
case _ => throw new IllegalArgumentException("parameter \"saving_frequency\" must be" +
case _ => throw new IllegalArgumentException("parameter \"checkpoint_interval\" must be" +
" an instance of Int.")
}
(checkpointPath, savingFreq)
(checkpointPath, checkpointInterval)
}
}


@@ -17,6 +17,7 @@
package ml.dmlc.xgboost4j.scala.spark
import java.io.File
import java.nio.file.Files
import scala.collection.mutable
import scala.util.Random
@@ -24,6 +25,7 @@ import ml.dmlc.xgboost4j.java.{IRabitTracker, Rabit, XGBoostError, RabitTracker
import ml.dmlc.xgboost4j.scala.rabit.RabitTracker
import ml.dmlc.xgboost4j.scala.{XGBoost => SXGBoost, _}
import ml.dmlc.xgboost4j.{LabeledPoint => XGBLabeledPoint}
import org.apache.commons.io.FileUtils
import org.apache.commons.logging.LogFactory
import org.apache.hadoop.fs.{FSDataInputStream, Path}
import org.apache.spark.rdd.RDD
@@ -120,11 +122,8 @@ object XGBoost extends Serializable {
}
val taskId = TaskContext.getPartitionId().toString
val cacheDirName = if (useExternalMemory) {
val dir = new File(s"${TaskContext.get().stageId()}-cache-$taskId")
if (!(dir.exists() || dir.mkdirs())) {
throw new XGBoostError(s"failed to create cache directory: $dir")
}
Some(dir.toString)
val dir = Files.createTempDirectory(s"${TaskContext.get().stageId()}-cache-$taskId")
Some(dir.toAbsolutePath.toString)
} else {
None
}
@@ -325,23 +324,24 @@ object XGBoost extends Serializable {
      case _ => throw new IllegalArgumentException("parameter \"timeout_request_workers\" must be" +
        " an instance of Long.")
    }
-    val (checkpointPath, savingFeq) = CheckpointManager.extractParams(params)
+    val (checkpointPath, checkpointInterval) = CheckpointManager.extractParams(params)
    val partitionedData = repartitionForTraining(trainingData, nWorkers)
    val sc = trainingData.sparkContext
    val checkpointManager = new CheckpointManager(sc, checkpointPath)
    checkpointManager.cleanUpHigherVersions(round)
-    var prevBooster = checkpointManager.loadBooster
+    var prevBooster = checkpointManager.loadCheckpointAsBooster
    // Train for every ${savingRound} rounds and save the partially completed booster
-    checkpointManager.getSavingRounds(savingFeq, round).map {
-      savingRound: Int =>
+    checkpointManager.getCheckpointRounds(checkpointInterval, round).map {
+      checkpointRound: Int =>
        val tracker = startTracker(nWorkers, trackerConf)
        try {
          val parallelismTracker = new SparkParallelismTracker(sc, timeoutRequestWorkers, nWorkers)
          val overriddenParams = overrideParamsAccordingToTaskCPUs(params, sc)
          val boostersAndMetrics = buildDistributedBoosters(partitionedData, overriddenParams,
-            tracker.getWorkerEnvs, savingRound, obj, eval, useExternalMemory, missing, prevBooster)
+            tracker.getWorkerEnvs, checkpointRound, obj, eval, useExternalMemory, missing,
+            prevBooster)
          val sparkJobThread = new Thread() {
            override def run() {
              // force the job
@@ -359,9 +359,9 @@ object XGBoost extends Serializable {
          model.asInstanceOf[XGBoostClassificationModel].numOfClasses =
            params.getOrElse("num_class", "2").toString.toInt
        }
-        if (savingRound < round) {
+        if (checkpointRound < round) {
          prevBooster = model.booster
-          checkpointManager.updateModel(model)
+          checkpointManager.updateCheckpoint(model)
        }
        model
      } finally {
} finally {
@@ -480,11 +480,7 @@ private class Watches private(
  def delete(): Unit = {
    toMap.values.foreach(_.delete())
    cacheDirName.foreach { name =>
-      for (cacheFile <- new File(name).listFiles()) {
-        if (!cacheFile.delete()) {
-          throw new IllegalStateException(s"failed to delete $cacheFile")
-        }
-      }
+      FileUtils.deleteDirectory(new File(name))
    }
  }


@@ -169,12 +169,12 @@ abstract class XGBoostModel(protected var _booster: Booster)
  def predict(testSet: RDD[MLDenseVector], missingValue: Float): RDD[Array[Float]] = {
    val broadcastBooster = testSet.sparkContext.broadcast(_booster)
    testSet.mapPartitions { testSamples =>
-      val sampleArray = testSamples.toList
-      val numRows = sampleArray.size
-      val numColumns = sampleArray.head.size
+      val sampleArray = testSamples.toArray
+      val numRows = sampleArray.length
+      if (numRows == 0) {
+        Iterator()
+      } else {
+        val numColumns = sampleArray.head.size
        val rabitEnv = Map("DMLC_TASK_ID" -> TaskContext.getPartitionId().toString)
        Rabit.init(rabitEnv.asJava)
        // translate to required format


@@ -71,7 +71,7 @@ trait GeneralParams extends Params {
  val missing = new FloatParam(this, "missing", "the value treated as missing")

  /**
-   * the interval to check whether total numCores is no smaller than nWorkers. default: 30 minutes
+   * the maximum time to wait for the job requesting new workers. default: 30 minutes
   */
  val timeoutRequestWorkers = new LongParam(this, "timeout_request_workers", "the maximum time to" +
    " request new Workers if numCores are insufficient. The timeout will be disabled if this" +
@@ -81,16 +81,19 @@ trait GeneralParams extends Params {
   * The hdfs folder to load and save checkpoint boosters. default: `empty_string`
   */
  val checkpointPath = new Param[String](this, "checkpoint_path", "the hdfs folder to load and " +
-    "save checkpoints. The job will try to load the existing booster as the starting point for " +
-    "training. If saving_frequency is also set, the job will save a checkpoint every a few rounds.")
+    "save checkpoints. If there are existing checkpoints in checkpoint_path, the job will load " +
+    "the checkpoint with the highest version as the starting point for training. If " +
+    "checkpoint_interval is also set, the job will save a checkpoint every few rounds.")

  /**
-   * The frequency to save checkpoint boosters. default: 0
+   * Param for set checkpoint interval (&gt;= 1) or disable checkpoint (-1). E.g. 10 means that
+   * the trained model will get checkpointed every 10 iterations. Note: `checkpoint_path` must
+   * also be set if the checkpoint interval is greater than 0.
   */
-  val savingFrequency = new IntParam(this, "saving_frequency", "if checkpoint_path is also set," +
-    " the job will save checkpoints at this frequency. If the job fails and gets restarted with" +
-    " same setting, it will load the existing booster instead of training from scratch." +
-    " Checkpoint will be disabled if set to 0.")
+  val checkpointInterval: IntParam = new IntParam(this, "checkpointInterval", "set checkpoint " +
+    "interval (>= 1) or disable checkpoint (-1). E.g. 10 means that the trained model will get " +
+    "checkpointed every 10 iterations. Note: `checkpoint_path` must also be set if the checkpoint" +
+    " interval is greater than 0.", (interval: Int) => interval == -1 || interval >= 1)
/**
* Rabit tracker configurations. The parameter must be provided as an instance of the
@@ -128,6 +131,6 @@ trait GeneralParams extends Params {
useExternalMemory -> false, silent -> 0,
customObj -> null, customEval -> null, missing -> Float.NaN,
trackerConf -> TrackerConf(), seed -> 0, timeoutRequestWorkers -> 30 * 60 * 1000L,
checkpointPath -> "", savingFrequency -> 0
checkpointPath -> "", checkpointInterval -> -1
)
}


@@ -45,7 +45,8 @@ trait LearningTaskParams extends Params {
  /**
   * evaluation metrics for validation data, a default metric will be assigned according to
   * objective(rmse for regression, and error for classification, mean average precision for
-   * ranking). options: rmse, mae, logloss, error, merror, mlogloss, auc, ndcg, map, gamma-deviance
+   * ranking). options: rmse, mae, logloss, error, merror, mlogloss, auc, aucpr, ndcg, map,
+   * gamma-deviance
   */
  val evalMetric = new Param[String](this, "eval_metric", "evaluation metrics for validation" +
    " data, a default metric will be assigned according to objective (rmse for regression, and" +
@@ -97,5 +98,5 @@ private[spark] object LearningTaskParams {
"reg:gamma")
val supportedEvalMetrics = HashSet("rmse", "mae", "logloss", "error", "merror", "mlogloss",
"auc", "ndcg", "map", "gamma-deviance")
"auc", "aucpr", "ndcg", "map", "gamma-deviance")
}
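The new entry lets the Spark wrapper's validator accept aucpr, the precision-recall AUC metric exposed by the native library in this release cycle. A quick Python sketch of selecting it, with synthetic data for illustration:

    import numpy as np
    import xgboost as xgb

    X, y = np.random.rand(100, 4), np.random.randint(2, size=100)
    dtrain = xgb.DMatrix(X, label=y)
    params = {"objective": "binary:logistic", "eval_metric": "aucpr"}
    xgb.train(params, dtrain, num_boost_round=5, evals=[(dtrain, "train")])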


@@ -76,11 +76,12 @@ class SparkParallelismTracker(
  }

  private[this] def safeExecute[T](body: => T): T = {
-    sc.listenerBus.listeners.add(0, new TaskFailedListener)
+    val listener = new TaskFailedListener
+    sc.addSparkListener(listener)
    try {
      body
    } finally {
-      sc.listenerBus.listeners.remove(0)
+      sc.listenerBus.removeListener(listener)
    }
  }


@@ -38,30 +38,30 @@ class CheckpointManagerSuite extends FunSuite with BeforeAndAfterAll {
    val trainingRDD = sc.parallelize(Classification.train).map(_.asML).cache()
    val paramMap = Map("eta" -> "1", "max_depth" -> "2", "silent" -> "1",
      "objective" -> "binary:logistic")
-    (XGBoost.trainWithRDD(trainingRDD, paramMap, round = 2, sc.defaultParallelism),
-      XGBoost.trainWithRDD(trainingRDD, paramMap, round = 4, sc.defaultParallelism))
+    (XGBoost.trainWithRDD(trainingRDD, paramMap, round = 2, nWorkers = sc.defaultParallelism),
+      XGBoost.trainWithRDD(trainingRDD, paramMap, round = 4, nWorkers = sc.defaultParallelism))
  }

  test("test update/load models") {
    val tmpPath = Files.createTempDirectory("test").toAbsolutePath.toString
    val manager = new CheckpointManager(sc, tmpPath)
-    manager.updateModel(model4)
+    manager.updateCheckpoint(model4)
    var files = FileSystem.get(sc.hadoopConfiguration).listStatus(new Path(tmpPath))
    assert(files.length == 1)
    assert(files.head.getPath.getName == "4.model")
-    assert(manager.loadBooster.booster.getVersion == 4)
+    assert(manager.loadCheckpointAsBooster.booster.getVersion == 4)

-    manager.updateModel(model8)
+    manager.updateCheckpoint(model8)
    files = FileSystem.get(sc.hadoopConfiguration).listStatus(new Path(tmpPath))
    assert(files.length == 1)
    assert(files.head.getPath.getName == "8.model")
-    assert(manager.loadBooster.booster.getVersion == 8)
+    assert(manager.loadCheckpointAsBooster.booster.getVersion == 8)
  }

  test("test cleanUpHigherVersions") {
    val tmpPath = Files.createTempDirectory("test").toAbsolutePath.toString
    val manager = new CheckpointManager(sc, tmpPath)
-    manager.updateModel(model8)
+    manager.updateCheckpoint(model8)
    manager.cleanUpHigherVersions(round = 8)
    assert(new File(s"$tmpPath/8.model").exists())
@@ -69,12 +69,12 @@ class CheckpointManagerSuite extends FunSuite with BeforeAndAfterAll {
    assert(!new File(s"$tmpPath/8.model").exists())
  }

-  test("test saving rounds") {
+  test("test checkpoint rounds") {
    val tmpPath = Files.createTempDirectory("test").toAbsolutePath.toString
    val manager = new CheckpointManager(sc, tmpPath)
-    assertResult(Seq(7))(manager.getSavingRounds(savingFreq = 0, round = 7))
-    assertResult(Seq(2, 4, 6, 7))(manager.getSavingRounds(savingFreq = 2, round = 7))
-    manager.updateModel(model4)
-    assertResult(Seq(4, 6, 7))(manager.getSavingRounds(2, 7))
+    assertResult(Seq(7))(manager.getCheckpointRounds(checkpointInterval = 0, round = 7))
+    assertResult(Seq(2, 4, 6, 7))(manager.getCheckpointRounds(checkpointInterval = 2, round = 7))
+    manager.updateCheckpoint(model4)
+    assertResult(Seq(4, 6, 7))(manager.getCheckpointRounds(2, 7))
  }
}


@@ -338,7 +338,7 @@ class XGBoostGeneralSuite extends FunSuite with PerTest {
    }
  }

-  test("training with saving checkpoint boosters") {
+  test("training with checkpoint boosters") {
    import DataUtils._
    val eval = new EvalError()
    val trainingRDD = sc.parallelize(Classification.train).map(_.asML)
@@ -347,7 +347,7 @@ class XGBoostGeneralSuite extends FunSuite with PerTest {
    val tmpPath = Files.createTempDirectory("model1").toAbsolutePath.toString
    val paramMap = List("eta" -> "1", "max_depth" -> 2, "silent" -> "1",
      "objective" -> "binary:logistic", "checkpoint_path" -> tmpPath,
-      "saving_frequency" -> 2).toMap
+      "checkpoint_interval" -> 2).toMap
    val prevModel = XGBoost.trainWithRDD(trainingRDD, paramMap, round = 5,
      nWorkers = numWorkers)
    def error(model: XGBoostModel): Float = eval.eval(


@@ -6,10 +6,10 @@
  <parent>
    <groupId>ml.dmlc</groupId>
    <artifactId>xgboost-jvm</artifactId>
-    <version>0.7</version>
+    <version>0.72-SNAPSHOT</version>
  </parent>
  <artifactId>xgboost4j</artifactId>
-  <version>0.7</version>
+  <version>0.72-SNAPSHOT</version>
  <packaging>jar</packaging>

  <dependencies>


@@ -16,8 +16,6 @@
 package ml.dmlc.xgboost4j.scala

-import java.io.IOException
-
 import com.esotericsoftware.kryo.io.{Output, Input}
 import com.esotericsoftware.kryo.{Kryo, KryoSerializable}
 import ml.dmlc.xgboost4j.java.{Booster => JBooster}
@@ -25,6 +23,12 @@ import ml.dmlc.xgboost4j.java.XGBoostError
 import scala.collection.JavaConverters._
 import scala.collection.mutable

+/**
+ * Booster for xgboost, this is a model API that supports interactive build of a XGBoost Model
+ *
+ * DEVELOPER WARNING: A Java Booster must not be shared by more than one Scala Booster
+ * @param booster the java booster object.
+ */
 class Booster private[xgboost4j](private[xgboost4j] var booster: JBooster)
   extends Serializable with KryoSerializable {


@@ -66,7 +66,12 @@ object XGBoost {
      // we have to filter null value for customized obj and eval
      params.filter(_._2 != null).mapValues(_.toString.asInstanceOf[AnyRef]).asJava,
      round, jWatches, metrics, obj, eval, earlyStoppingRound, jBooster)
-    new Booster(xgboostInJava)
+    if (booster == null) {
+      new Booster(xgboostInJava)
+    } else {
+      // Avoid creating a new SBooster with the same JBooster
+      booster
+    }
  }

  /**


@@ -198,4 +198,16 @@ class ScalaBoosterImplSuite extends FunSuite {
    trainBoosterWithFastHisto(trainMat, Map("training" -> trainMat),
      round = 10, paramMap, 0.85f)
  }

+  test("test training from existing model in scala") {
+    val trainMat = new DMatrix("../../demo/data/agaricus.txt.train")
+    val paramMap = List("max_depth" -> "0", "silent" -> "0",
+      "objective" -> "binary:logistic", "tree_method" -> "hist",
+      "grow_policy" -> "depthwise", "max_depth" -> "2", "max_bin" -> "2",
+      "eval_metric" -> "auc").toMap
+    val prevBooster = XGBoost.train(trainMat, paramMap, round = 2)
+    val nextBooster = XGBoost.train(trainMat, paramMap, round = 4, booster = prevBooster)
+    assert(prevBooster == nextBooster)
+  }
}
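The Python package has long had the equivalent continuation hook via the xgb_model argument of xgb.train; a minimal sketch with synthetic data, for comparison:

    import numpy as np
    import xgboost as xgb

    X, y = np.random.rand(100, 4), np.random.randint(2, size=100)
    dtrain = xgb.DMatrix(X, label=y)
    params = {"objective": "binary:logistic", "tree_method": "hist", "max_depth": 2}

    bst = xgb.train(params, dtrain, num_boost_round=2)
    # Resume boosting from the existing model instead of starting from scratch.
    bst = xgb.train(params, dtrain, num_boost_round=4, xgb_model=bst)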


@@ -69,7 +69,7 @@ class DensifyParser : public dmlc::Parser<IndexType> {
  std::vector<xgboost::bst_float> dense_value_;
};

-template<typename IndexType>
+template<typename IndexType, typename DType = real_t>
Parser<IndexType> *
CreateDenseLibSVMParser(const std::string& path,
                        const std::map<std::string, std::string>& args,
@@ -82,5 +82,6 @@ CreateDenseLibSVMParser(const std::string& path,
}
}  // namespace data

-DMLC_REGISTER_DATA_PARSER(uint32_t, dense_libsvm, data::CreateDenseLibSVMParser<uint32_t>);
+DMLC_REGISTER_DATA_PARSER(uint32_t, real_t, dense_libsvm,
+                          data::CreateDenseLibSVMParser<uint32_t __DMLC_COMMA real_t>);
}  // namespace dmlc


@@ -33,30 +33,32 @@ class MyLogistic : public ObjFunction {
  void Configure(const std::vector<std::pair<std::string, std::string> >& args) override {
    param_.InitAllowUnknown(args);
  }
-  void GetGradient(const std::vector<bst_float> &preds,
+  void GetGradient(HostDeviceVector<bst_float> *preds,
                   const MetaInfo &info,
                   int iter,
-                   std::vector<bst_gpair> *out_gpair) override {
-    out_gpair->resize(preds.size());
-    for (size_t i = 0; i < preds.size(); ++i) {
+                   HostDeviceVector<GradientPair> *out_gpair) override {
+    out_gpair->Resize(preds->Size());
+    std::vector<bst_float>& preds_h = preds->HostVector();
+    std::vector<GradientPair>& out_gpair_h = out_gpair->HostVector();
+    for (size_t i = 0; i < preds_h.size(); ++i) {
      bst_float w = info.GetWeight(i);
      // scale the negative examples!
-      if (info.labels[i] == 0.0f) w *= param_.scale_neg_weight;
+      if (info.labels_[i] == 0.0f) w *= param_.scale_neg_weight;
      // logistic transformation
-      bst_float p = 1.0f / (1.0f + std::exp(-preds[i]));
+      bst_float p = 1.0f / (1.0f + std::exp(-preds_h[i]));
      // this is the gradient
-      bst_float grad = (p - info.labels[i]) * w;
+      bst_float grad = (p - info.labels_[i]) * w;
      // this is the second order gradient
      bst_float hess = p * (1.0f - p) * w;
-      out_gpair->at(i) = bst_gpair(grad, hess);
+      out_gpair_h.at(i) = GradientPair(grad, hess);
    }
  }
  const char* DefaultEvalMetric() const override {
    return "error";
  }
-  void PredTransform(std::vector<bst_float> *io_preds) override {
+  void PredTransform(HostDeviceVector<bst_float> *io_preds) override {
    // transform margin value to probability.
-    std::vector<bst_float> &preds = *io_preds;
+    std::vector<bst_float> &preds = io_preds->HostVector();
    for (size_t i = 0; i < preds.size(); ++i) {
      preds[i] = 1.0f / (1.0f + std::exp(-preds[i]));
    }
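For Python users, the same weighted-logistic objective can be expressed through the obj callback of xgb.train. The sketch below is an illustrative analog of MyLogistic, not part of the library; scale_neg_weight here is an ordinary closure variable, not a recognized parameter:

    import numpy as np
    import xgboost as xgb

    def weighted_logistic(scale_neg_weight):
        def objective(preds, dtrain):
            labels = dtrain.get_label()
            weights = dtrain.get_weight()
            if weights.size == 0:          # DMatrix carries no explicit weights
                weights = np.ones_like(labels)
            # scale the negative examples, as in MyLogistic::GetGradient
            weights = np.where(labels == 0.0, weights * scale_neg_weight, weights)
            p = 1.0 / (1.0 + np.exp(-preds))   # logistic transform of the margin
            return (p - labels) * weights, p * (1.0 - p) * weights
        return objective

    # usage: bst = xgb.train(params, dtrain, num_boost_round=10,
    #                        obj=weighted_logistic(scale_neg_weight=2.0))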


@@ -26,6 +26,11 @@ Please install ``gcc@5`` from `Homebrew <https://brew.sh/>`_::
   brew install gcc@5

+After installing ``gcc@5``, set it as your compiler::
+
+  export CC=gcc-5
+  export CXX=g++-5
+
 Linux
 -----


@@ -10,7 +10,7 @@ Linux platform (also Mac OS X in general)
------------
**Trouble 0**: I see error messages like this when install from github using `python setup.py install`.

-    XGBoostLibraryNotFound: Cannot find XGBoost Library in the candidate path, did you install compilers and run build.sh in root path?
+    XGBoostLibraryNotFound: Cannot find XGBoost Library in the candidate path, did you install compilers and run build.sh in root path?
    List of candidates:
      /home/dmlc/anaconda/lib/python2.7/site-packages/xgboost-0.4-py2.7.egg/xgboost/libxgboostwrapper.so
      /home/dmlc/anaconda/lib/python2.7/site-packages/xgboost-0.4-py2.7.egg/xgboost/../../wrapper/libxgboostwrapper.so


@@ -12,7 +12,10 @@ sys.path.insert(0, '.')
 # please don't use this file for installing from github
-if os.name != 'nt':  # if not windows, compile and install
-    os.system('sh ./xgboost/build-python.sh')
+if os.name != 'nt':
+    # if not windows, compile and install
+    if len(sys.argv) < 2 or sys.argv[1] != 'sdist':
+        # do not build for sdist
+        os.system('sh ./xgboost/build-python.sh')
 else:
     print('Windows users please use github installation.')
     sys.exit()
@@ -30,16 +33,14 @@ class BinaryDistribution(Distribution):
 # We can not import `xgboost.libpath` in setup.py directly since xgboost/__init__.py
 # import `xgboost.core` and finally will import `numpy` and `scipy` which are setup
 # `install_requires`. That's why we're using `exec` here.
-libpath_py = os.path.join(CURRENT_DIR, 'xgboost/libpath.py')
-libpath = {'__file__': libpath_py}
-exec(compile(open(libpath_py, "rb").read(), libpath_py, 'exec'), libpath, libpath)
-
-LIB_PATH = libpath['find_lib_path']()
+# do not import libpath for sdist
+if len(sys.argv) < 2 or sys.argv[1] != 'sdist':
+    libpath_py = os.path.join(CURRENT_DIR, 'xgboost/libpath.py')
+    libpath = {'__file__': libpath_py}
+    exec(compile(open(libpath_py, "rb").read(), libpath_py, 'exec'), libpath, libpath)
+
+    LIB_PATH = libpath['find_lib_path']()

 # to deploy to pip, please use
 # make pythonpack
 # python setup.py register sdist upload
 # and be sure to test it firstly using "python setup.py register sdist upload -r pypitest"
 setup(name='xgboost',
       version=open(os.path.join(CURRENT_DIR, 'xgboost/VERSION')).read().strip(),
       description='XGBoost Python Package',

@@ -1 +1 @@
-0.7
+0.72


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/bin/sh
 # This is a simple script to make xgboost in MAC and Linux for python wrapper only
 # Basically, it first try to make with OpenMP, if fails, disable OpenMP and make it again.
 # This will automatically make xgboost for MAC users who don't have OpenMP support.
@@ -9,22 +9,44 @@
 # note: this script is build for python package only, and it might have some filename
 # conflict with build.sh which is for everything.
 set -e
+set -x

 #pushd xgboost
 oldpath=`pwd`
 cd ./xgboost/

+if echo "${OSTYPE}" | grep -q "darwin"; then
+  LIB_XGBOOST=libxgboost.dylib
+  # Use OpenMP-capable compiler if possible
+  if which g++-5; then
+    export CC=gcc-5
+    export CXX=g++-5
+  elif which g++-7; then
+    export CC=gcc-7
+    export CXX=g++-7
+  elif which clang++; then
+    export CC=clang
+    export CXX=clang++
+  fi
+else
+  LIB_XGBOOST=libxgboost.so
+fi
+
 #remove the pre-compiled .so and trigger the system's on-the-fly compiling
 make clean
-if make lib/libxgboost.so -j4; then
+if make lib/${LIB_XGBOOST} -j4; then
     echo "Successfully build multi-thread xgboost"
 else
     echo "-----------------------------"
     echo "Building multi-thread xgboost failed"
     echo "Start to build single-thread xgboost"
     make clean
-    make lib/libxgboost.so -j4 USE_OPENMP=0
+    make lib/${LIB_XGBOOST} -j4 USE_OPENMP=0
     echo "Successfully build single-thread xgboost"
     echo "If you want multi-threaded version"
     echo "See additional instructions in doc/build.md"
 fi
 cd $oldpath
+set +x


@@ -50,7 +50,7 @@ def print_evaluation(period=1, show_stdv=True):
"""
def callback(env):
"""internal function"""
if env.rank != 0 or len(env.evaluation_result_list) == 0 or period is False:
if env.rank != 0 or len(env.evaluation_result_list) == 0 or period is False or period == 0:
return
i = env.iteration
if (i % period == 0 or i + 1 == env.begin_iteration or i + 1 == env.end_iteration):
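With this guard, period=0 now silences the callback instead of falling through to a modulo-by-zero in i % period. A usage sketch with synthetic data, assuming the 0.72 Python package:

    import numpy as np
    import xgboost as xgb

    X, y = np.random.rand(100, 5), np.random.randint(2, size=100)
    dtrain = xgb.DMatrix(X, label=y)

    xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=4,
              evals=[(dtrain, "train")],
              callbacks=[xgb.callback.print_evaluation(period=0)])  # prints nothing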


@@ -235,8 +235,6 @@ class DMatrix(object):
                 feature_names=None, feature_types=None,
                 nthread=None):
        """
-        Data matrix used in XGBoost.
-
        Parameters
        ----------
        data : string/numpy array/scipy.sparse/pd.DataFrame
@@ -628,9 +626,8 @@ class DMatrix(object):
        feature_names : list or None
        """
        if self._feature_names is None:
-            return ['f{0}'.format(i) for i in range(self.num_col())]
-        else:
-            return self._feature_names
+            self._feature_names = ['f{0}'.format(i) for i in range(self.num_col())]
+        return self._feature_names
@property
def feature_types(self):
@@ -706,7 +703,7 @@ class DMatrix(object):
class Booster(object):
-    """"A Booster of of XGBoost.
+    """A Booster of XGBoost.

    Booster is the model of xgboost, that contains low level routines for
    training, prediction and evaluation.
@@ -716,8 +713,7 @@ class Booster(object):
    def __init__(self, params=None, cache=(), model_file=None):
        # pylint: disable=invalid-name
-        """Initialize the Booster.
-
+        """
        Parameters
        ----------
        params : dict
@@ -992,7 +988,8 @@ class Booster(object):
        return self.eval_set([(data, name)], iteration)

    def predict(self, data, output_margin=False, ntree_limit=0, pred_leaf=False,
-                pred_contribs=False, approx_contribs=False):
+                pred_contribs=False, approx_contribs=False, pred_interactions=False,
+                validate_features=True):
        """
        Predict with data.
@@ -1019,14 +1016,25 @@ class Booster(object):
            in both tree 1 and tree 0.
        pred_contribs : bool
-            When this option is on, the output will be a matrix of (nsample, nfeats+1)
+            When this is True the output will be a matrix of size (nsample, nfeats + 1)
            with each record indicating the feature contributions (SHAP values) for that
-            prediction. The sum of all feature contributions is equal to the prediction.
-            Note that the bias is added as the final column, on top of the regular features.
+            prediction. The sum of all feature contributions is equal to the raw untransformed
+            margin value of the prediction. Note the final column is the bias term.
        approx_contribs : bool
            Approximate the contributions of each feature
+        pred_interactions : bool
+            When this is True the output will be a matrix of size (nsample, nfeats + 1, nfeats + 1)
+            indicating the SHAP interaction values for each pair of features. The sum of each
+            row (or column) of the interaction values equals the corresponding SHAP value (from
+            pred_contribs), and the sum of the entire matrix equals the raw untransformed margin
+            value of the prediction. Note the last row and column correspond to the bias term.
+        validate_features : bool
+            When this is True, validate that the Booster's and data's feature_names are identical.
+            Otherwise, it is assumed that the feature_names are the same.

        Returns
        -------
        prediction : numpy array
@@ -1040,8 +1048,11 @@ class Booster(object):
            option_mask |= 0x04
        if approx_contribs:
            option_mask |= 0x08
+        if pred_interactions:
+            option_mask |= 0x10

-        self._validate_features(data)
+        if validate_features:
+            self._validate_features(data)

        length = c_bst_ulong()
        preds = ctypes.POINTER(ctypes.c_float)()
@@ -1055,8 +1066,22 @@ class Booster(object):
            preds = preds.astype(np.int32)
        nrow = data.num_row()
        if preds.size != nrow and preds.size % nrow == 0:
-            ncol = int(preds.size / nrow)
-            preds = preds.reshape(nrow, ncol)
+            chunk_size = int(preds.size / nrow)
+            if pred_interactions:
+                ngroup = int(chunk_size / ((data.num_col() + 1) * (data.num_col() + 1)))
+                if ngroup == 1:
+                    preds = preds.reshape(nrow, data.num_col() + 1, data.num_col() + 1)
+                else:
+                    preds = preds.reshape(nrow, ngroup, data.num_col() + 1, data.num_col() + 1)
+            elif pred_contribs:
+                ngroup = int(chunk_size / (data.num_col() + 1))
+                if ngroup == 1:
+                    preds = preds.reshape(nrow, data.num_col() + 1)
+                else:
+                    preds = preds.reshape(nrow, ngroup, data.num_col() + 1)
+            else:
+                preds = preds.reshape(nrow, chunk_size)
        return preds
def save_model(self, fname):
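Taken together, the reshaping rules above give pred_contribs an (nrow, nfeats + 1) result and pred_interactions an (nrow, nfeats + 1, nfeats + 1) result for single-group models. A short sketch against synthetic data, assuming the 0.72 Python package:

    import numpy as np
    import xgboost as xgb

    X, y = np.random.rand(50, 4), np.random.rand(50)
    dtrain = xgb.DMatrix(X, label=y)
    bst = xgb.train({"objective": "reg:linear"}, dtrain, num_boost_round=5)

    contribs = bst.predict(dtrain, pred_contribs=True)          # shape (50, 5)
    interactions = bst.predict(dtrain, pred_interactions=True)  # shape (50, 5, 5)

    # Per-row contributions sum to the raw untransformed margin, as documented.
    margin = bst.predict(dtrain, output_margin=True)
    assert np.allclose(contribs.sum(axis=1), margin, atol=1e-4)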


@@ -34,7 +34,7 @@ def find_lib_path():
        # hack for pip installation when copy all parent source directory here
        dll_path.append(os.path.join(curr_path, './windows/Release/'))
        dll_path = [os.path.join(p, 'xgboost.dll') for p in dll_path]
-    elif sys.platform.startswith('linux'):
+    elif sys.platform.startswith('linux') or sys.platform.startswith('freebsd'):
        dll_path = [os.path.join(p, 'libxgboost.so') for p in dll_path]
    elif sys.platform == 'darwin':
        dll_path = [os.path.join(p, 'libxgboost.dylib') for p in dll_path]


@@ -215,7 +215,8 @@ class XGBModel(XGBModelBase):
        return xgb_params

    def fit(self, X, y, sample_weight=None, eval_set=None, eval_metric=None,
-            early_stopping_rounds=None, verbose=True, xgb_model=None):
+            early_stopping_rounds=None, verbose=True, xgb_model=None,
+            sample_weight_eval_set=None):
        # pylint: disable=missing-docstring,invalid-name,attribute-defined-outside-init
        """
        Fit the gradient boosting model
@@ -231,6 +232,9 @@ class XGBModel(XGBModelBase):
        eval_set : list, optional
            A list of (X, y) tuple pairs to use as a validation set for
            early-stopping
+        sample_weight_eval_set : list, optional
+            A list of the form [L_1, L_2, ..., L_n], where each L_i is a list of
+            instance weights on the i-th validation set.
        eval_metric : str, callable, optional
            If a str, should be a built-in evaluation metric to use. See
            doc/parameter.md. If callable, a custom evaluation metric. The call
@@ -263,9 +267,14 @@ class XGBModel(XGBModelBase):
        trainDmatrix = DMatrix(X, label=y, missing=self.missing, nthread=self.n_jobs)

        evals_result = {}
        if eval_set is not None:
-            evals = list(DMatrix(x[0], label=x[1], missing=self.missing,
-                                 nthread=self.n_jobs) for x in eval_set)
+            if sample_weight_eval_set is None:
+                sample_weight_eval_set = [None] * len(eval_set)
+            evals = list(
+                DMatrix(eval_set[i][0], label=eval_set[i][1], missing=self.missing,
+                        weight=sample_weight_eval_set[i], nthread=self.n_jobs)
+                for i in range(len(eval_set)))
            evals = list(zip(evals, ["validation_{}".format(i) for i in
                                     range(len(evals))]))
        else:
@@ -408,7 +417,8 @@ class XGBClassifier(XGBModel, XGBClassifierBase):
                         random_state, seed, missing, **kwargs)

    def fit(self, X, y, sample_weight=None, eval_set=None, eval_metric=None,
-            early_stopping_rounds=None, verbose=True, xgb_model=None):
+            early_stopping_rounds=None, verbose=True, xgb_model=None,
+            sample_weight_eval_set=None):
        # pylint: disable = attribute-defined-outside-init,arguments-differ
        """
        Fit gradient boosting classifier
@@ -424,6 +434,9 @@ class XGBClassifier(XGBModel, XGBClassifierBase):
        eval_set : list, optional
            A list of (X, y) pairs to use as a validation set for
            early-stopping
+        sample_weight_eval_set : list, optional
+            A list of the form [L_1, L_2, ..., L_n], where each L_i is a list of
+            instance weights on the i-th validation set.
        eval_metric : str, callable, optional
            If a str, should be a built-in evaluation metric to use. See
            doc/parameter.md. If callable, a custom evaluation metric. The call
@@ -478,11 +491,13 @@ class XGBClassifier(XGBModel, XGBClassifierBase):
        training_labels = self._le.transform(y)

        if eval_set is not None:
-            # TODO: use sample_weight if given?
+            if sample_weight_eval_set is None:
+                sample_weight_eval_set = [None] * len(eval_set)
            evals = list(
-                DMatrix(x[0], label=self._le.transform(x[1]),
-                        missing=self.missing, nthread=self.n_jobs)
-                for x in eval_set
+                DMatrix(eval_set[i][0], label=self._le.transform(eval_set[i][1]),
+                        missing=self.missing, weight=sample_weight_eval_set[i],
+                        nthread=self.n_jobs)
+                for i in range(len(eval_set))
            )
            nevals = len(evals)
            eval_names = ["validation_{}".format(i) for i in range(nevals)]
@@ -520,6 +535,24 @@ class XGBClassifier(XGBModel, XGBClassifierBase):
        return self

    def predict(self, data, output_margin=False, ntree_limit=0):
+        """
+        Predict with `data`.
+        NOTE: This function is not thread safe.
+        For each booster object, predict can only be called from one thread.
+        If you want to run prediction using multiple thread, call xgb.copy() to make copies
+        of model object and then call predict
+
+        Parameters
+        ----------
+        data : DMatrix
+            The dmatrix storing the input.
+        output_margin : bool
+            Whether to output the raw untransformed margin value.
+        ntree_limit : int
+            Limit number of trees in the prediction; defaults to 0 (use all trees).
+
+        Returns
+        -------
+        prediction : numpy array
+        """
        test_dmatrix = DMatrix(data, missing=self.missing, nthread=self.n_jobs)
        class_probs = self.get_booster().predict(test_dmatrix,
                                                 output_margin=output_margin,
@@ -531,10 +564,26 @@ class XGBClassifier(XGBModel, XGBClassifierBase):
            column_indexes[class_probs > 0.5] = 1
        return self._le.inverse_transform(column_indexes)

-    def predict_proba(self, data, output_margin=False, ntree_limit=0):
+    def predict_proba(self, data, ntree_limit=0):
+        """
+        Predict the probability of each `data` example being of a given class.
+        NOTE: This function is not thread safe.
+        For each booster object, predict can only be called from one thread.
+        If you want to run prediction using multiple thread, call xgb.copy() to make copies
+        of model object and then call predict
+
+        Parameters
+        ----------
+        data : DMatrix
+            The dmatrix storing the input.
+        ntree_limit : int
+            Limit number of trees in the prediction; defaults to 0 (use all trees).
+
+        Returns
+        -------
+        prediction : numpy array
+            a numpy array with the probability of each data example being of a given class.
+        """
        test_dmatrix = DMatrix(data, missing=self.missing, nthread=self.n_jobs)
        class_probs = self.get_booster().predict(test_dmatrix,
-                                                 output_margin=output_margin,
                                                 ntree_limit=ntree_limit)
        if self.objective == "multi:softprob":
            return class_probs
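A minimal sketch of the new validation-set weighting in the sklearn wrapper, with synthetic data for illustration; each entry of sample_weight_eval_set carries one weight per instance of the corresponding (X, y) pair in eval_set:

    import numpy as np
    from xgboost import XGBClassifier

    X, y = np.random.rand(200, 6), np.random.randint(2, size=200)
    X_val, y_val = np.random.rand(50, 6), np.random.randint(2, size=50)
    w_val = np.random.rand(50)   # one weight per validation instance

    clf = XGBClassifier(n_estimators=20)
    clf.fit(X, y,
            eval_set=[(X_val, y_val)],
            sample_weight_eval_set=[w_val],  # one list per (X, y) pair in eval_set
            eval_metric="logloss",
            early_stopping_rounds=5)
    print(clf.predict_proba(X_val)[:3])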
