Compare commits


140 Commits

Author SHA1 Message Date
Philip Hyunsu Cho
6852d0afd5 Separate out restricted and unrestricted tasks (#3736) 2018-09-27 23:20:06 -07:00
Philip Hyunsu Cho
c0bd296354 Fix #3730: scikit-learn 0.20 compatibility fix (#3731)
* Fix #3730: scikit-learn 0.20 compatibility fix

sklearn.cross_validation has been removed from scikit-learn 0.20,
so replace it with sklearn.model_selection

* Display test names for Python tests for clarity
2018-09-27 15:04:25 -07:00
Philip Hyunsu Cho
09142c94f5 Disable flaky tests in R-package/tests/testthat/test_update.R (#3723) 2018-09-26 14:57:46 -07:00
Philip Cho
ba4244ef51 Mask tests for 32-bit Windows that fail due to difference between x87 and SSE 2018-09-05 16:18:20 -07:00
Philip Hyunsu Cho
a46b0ac2d2 Fix CRAN check by removing reference to std::cerr (#3660)
* Fix CRAN check by removing reference to std::cerr

* Mask tests that fail on 32-bit Windows R
2018-09-05 12:05:31 -07:00
gorogm
4bc7e94603 Link fixed. (#3640) 2018-09-05 12:04:46 -07:00
Philip Hyunsu Cho
a899e8f4cd Document CUDA requirement, lack of external memory on GPU (#3624)
* Document fact that GPU doesn't support external memory

* Document CUDA requirement
2018-09-05 12:03:46 -07:00
Philip Cho
f9a833f525 Update Python API doc (#3619)
* Show inherited members of XGBRegressor in API doc, since XGBRegressor uses default methods from XGBModel

* Add table of contents to Python API doc

* Skip JVM doc download if not available

* Show inherited members for XGBRegressor

* Add docstring to XGBRegressor.predict()

* Fix rendering errors in Python docstrings

* Fix lint
2018-09-05 12:02:11 -07:00
Grant W Schneider
1afd2f1b2d Remove errant $ (#3618) 2018-09-05 11:55:14 -07:00
Philip Hyunsu Cho
b1d76d533d Fix #3609: Removed unused parameter 'use_buffer' (#3610) 2018-09-05 11:54:54 -07:00
Philip Hyunsu Cho
9d70655c42 Fix #3598: document that custom objective can't contain colon (:) (#3601) 2018-09-05 11:54:19 -07:00
Grace Lam
dd1fda449c Add JSON dump functionality documentation (#3600) 2018-09-05 11:53:31 -07:00
Jakob Richter
324f3b5259 replace nround with nrounds to match actual parameter (#3592) 2018-09-05 11:52:34 -07:00
Philip Cho
24e08c2638 Add version to doc sidebar 2018-08-13 01:46:05 -07:00
Philip Hyunsu Cho
96826a3515 Release version 0.80 (#3541)
* Up versions

* Write release note for 0.80
2018-08-13 01:38:37 -07:00
Mathew
06ef4db4cc Fix Spark 2.2 Support (Amending #3062) (#3325)
This pull request amends the broken #3062 to allow Spark 2.2 to work.

Please note this won't work in Spark <=2.1 as sc.removeSparkListener was implemented in Spark 2.2. (So perhaps a more general method is better, although that is what was attempted in #3062)

This PR fixes: #3208, #3151 and the discussion in #1927.

I do find it strange that #3062 does not work in Spark 2.2; it's probably due to some sort of public/private issue in the org.apache.spark.scheduler.LiveListenerBus class inheritance (in Spark itself). The error is: `java.lang.NoSuchMethodError: org.apache.spark.scheduler.LiveListenerBus.removeListener(Ljava/lang/Object;)V`
2018-08-12 18:35:20 -07:00
Rory Mitchell
645996b12f Remove accidental SparsePage copies (#3583) 2018-08-12 17:49:38 -07:00
Philip Hyunsu Cho
0b607fb884 Add link to XGBoost4J-Spark tutorial on AWS Yarn tutorial (#3582) 2018-08-12 07:27:28 -07:00
Philip Hyunsu Cho
4202332783 Clarify multi-GPU training, binary wheels, Pandas integration (#3581)
* Clarify multi-GPU training, binary wheels, Pandas integration

* Add a note about multi-GPU on gpu/index.rst
2018-08-11 19:21:28 -07:00
Matthew Tovbin
7300002516 [jvm-packages] Use treeLimit param in getTreeLimit (#3575) 2018-08-10 09:38:58 -07:00
Philip Hyunsu Cho
9c647d8130 Bring XGBoost4J Intro up-to-date (#3574) 2018-08-10 09:08:19 -07:00
Philip Hyunsu Cho
2e7c3a0ed5 Refined logic for locating git branch inside ReadTheDocs (#3573) 2018-08-09 15:28:12 -07:00
Philip Hyunsu Cho
aa4ee6a0e4 [BLOCKING] Adding JVM doc build to Jenkins CI (#3567)
* Adding Java/Scala doc build to Jenkins CI

* Deploy built doc to S3 bucket

* Build doc only for branches

* Build doc first, to get doc faster for branch updates

* Have ReadTheDocs download doc tarball from S3

* Update JVM doc links

* Put doc build commands in a script

* Specify Spark 2.3+ requirement for XGBoost4J-Spark

* Build GPU wheel without NCCL, to reduce binary size
2018-08-09 13:27:01 -07:00
Matthew Tovbin
bad76048d1 Eliminate use of System.out + proper error logging (#3572) 2018-08-09 10:06:17 -07:00
Rory Mitchell
bbb771f32e Refactor parts of fast histogram utilities (#3564)
* Refactor parts of fast histogram utilities

* Removed byte packing from column matrix
2018-08-09 17:59:57 +12:00
Philip Hyunsu Cho
3c72654e3b Revert "Fix #3485, #3540: Don't use dropout for predicting test sets" (#3563)
* Revert "Fix #3485, #3540: Don't use dropout for predicting test sets (#3556)"

This reverts commit 44811f2330.

* Document behavior of predict() for DART booster

* Add notice to parameter.rst
2018-08-08 09:48:55 -07:00
Zeno Gantner
e3e776bd58 grammar fixes and typos (#3568) 2018-08-08 09:48:27 -07:00
Nan Zhu
1c08b3b2ea [jvm-packages] enable predictLeaf/predictContrib/treeLimit in 0.8 (#3532)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* partial finish

* no test

* add test cases

* add test cases

* address comments

* add test for regressor

* fix typo
2018-08-07 14:01:18 -07:00
Philip Hyunsu Cho
246ec92163 Update broken links (#3565)
Fix #3559
Fix #3562
2018-08-07 05:27:39 -07:00
trivialfis
55caad6e49 Remove redundant FindGTest.cmake. (#3533)
During removal of FindGTest.cmake, also

* Fix gtest include dirs.
* Remove some blanks and use PWD for gtest dir.
2018-08-07 10:08:08 +12:00
Henry Gouk
69454d9487 Implementation of hinge loss for binary classification (#3477) 2018-08-07 10:06:42 +12:00
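A minimal usage sketch of the new `binary:hinge` objective through the Python package; the toy data and parameter values below are illustrative, not taken from the commit:

```python
# Hinge loss for binary classification: predictions are hard 0/1 labels
# rather than probabilities. Data here is synthetic.
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y = (X[:, 0] > 0.5).astype(int)
dtrain = xgb.DMatrix(X, label=y)

params = {'objective': 'binary:hinge', 'max_depth': 3, 'eta': 0.3}
bst = xgb.train(params, dtrain, num_boost_round=10)
print(bst.predict(dtrain)[:5])  # hard 0/1 outputs
```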
Philip Hyunsu Cho
44811f2330 Fix #3485, #3540: Don't use dropout for predicting test sets (#3556)
* Fix #3485, #3540: Don't use dropout for predicting test sets

Dropout (for DART) should only be used at training time.

* Add regression test
2018-08-05 10:17:21 -07:00
Philip Hyunsu Cho
109473dae2 Fix #3545: XGDMatrixCreateFromCSCEx silently discards empty trailing rows (#3553)
* Fix #3545: XGDMatrixCreateFromCSCEx silently discards empty trailing rows

Description: The bug is triggered when

1. The data matrix has empty rows at the bottom. More precisely, the rows
   `n-k+1`, `n-k+2`, ..., `n` of the matrix have missing values in all
   dimensions (`n` number of instances, `k` number of trailing rows)
2. The data matrix is given in Compressed Sparse Column (CSC) format.

Diagnosis: When the CSC matrix is converted to Compressed Sparse Row (CSR)
format (the common format used for DMatrix), the trailing empty rows
are silently ignored. More specifically, the row pointer (`offset`) of the
newly created CSR matrix does not account for these rows.

Fix: Modify the row pointer.

* Add regression test
2018-08-05 10:15:42 -07:00
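A small sketch reproducing the scenario described above, assuming a SciPy CSC input whose last row is entirely missing; the expected output reflects the fixed behavior:

```python
# The last row of the dense matrix has no nonzero entries, so it becomes an
# empty trailing row once converted to CSC. Before the fix this row could be
# silently dropped during the CSC-to-CSR conversion inside DMatrix.
import numpy as np
import scipy.sparse
import xgboost as xgb

dense = np.array([[1.0, 0.0],
                  [0.0, 2.0],
                  [0.0, 0.0]])   # trailing empty row
csc = scipy.sparse.csc_matrix(dense)

dmat = xgb.DMatrix(csc)
print(dmat.num_row())  # expected with the fix: 3
```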
Philip Hyunsu Cho
8c633d1ca3 Fix #3505: Prevent undefined behavior due to incorrectly sized base_margin (#3555)
The base margin will need to have length `[num_class] * [number of data points]`.
Otherwise, the array holding prediction results will be only partially
initialized, causing undefined behavior.

Fix: check the length of the base margin. If the length is not correct,
use the global bias (`base_score`) instead. Warn the user about the
substitution.
2018-08-05 10:14:07 -07:00
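An illustrative sketch of the length requirement described above, using a hypothetical 3-class problem with synthetic data:

```python
# For multi-class models the base margin must hold num_class values per row,
# i.e. num_rows * num_class entries in total; a shorter array triggers the
# fallback to the global bias described above.
import numpy as np
import xgboost as xgb

num_rows, num_class = 100, 3
X = np.random.rand(num_rows, 4)
y = np.random.randint(0, num_class, size=num_rows)

dtrain = xgb.DMatrix(X, label=y)
dtrain.set_base_margin(np.zeros(num_rows * num_class))  # correct length

params = {'objective': 'multi:softprob', 'num_class': num_class}
bst = xgb.train(params, dtrain, num_boost_round=5)
```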
Philip Hyunsu Cho
4a429a7c4f Add reg:tweedie to supported objectives in XGBoost4J-Spark (#3552) 2018-08-05 07:42:59 -07:00
Philip Hyunsu Cho
7fefd6865d Fix #3402: wrong fid crashes distributed algorithm (#3535)
* Fix #3402: wrong fid crashes distributed algorithm

The bug was introduced by the recent DMatrix refactor (#3301). It was partially
fixed by #3408 but the example in #3402 was still failing. The example in #3402
will succeed after this fix is applied.

* Explicitly specify "this" to prevent compile error

* Add regression test

* Add distributed test to Travis matrix

* Install kubernetes Python package as dependency of dmlc tracker

* Add Python dependencies

* Add compile step

* Reduce size of regression test case

* Further reduce size of test
2018-08-04 19:20:04 -07:00
Nan Zhu
31d1baba3d [jvm-packages] Tutorial of XGBoost4J-Spark (#3534)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* add new

* update doc

* finish Gang Scheduling

* more

* intro

* Add sections: Prediction, Model persistence and ML pipeline.

* Add XGBoost4j-Spark MLlib pipeline example

* partial finished version

* finish the doc

* adjust code

* fix the doc

* use rst

* Convert XGBoost4J-Spark tutorial to reST

* Bring XGBoost4J up to date

* add note about using hdfs

* remove duplicate file

* fix descriptions

* update doc

* Wrap HDFS/S3 export support as a note

* update

* wrap indexing_mode example in code block
2018-08-03 21:17:50 -07:00
trivialfis
34dc9155ab Use __CUDA__ macro with __NVCC__. (#3539)
* __CUDA__ is defined in clang. Making the change won't make clang
compile xgboost, but syntax checking from clang is at least partially
working.
2018-08-02 22:04:23 +12:00
Philip Hyunsu Cho
70026655b0 Clarify supported OSes for XGBoost4J published JARs (#3547) 2018-08-01 19:51:44 -07:00
Philip Hyunsu Cho
437b368b1f Update dmlc-core submodule (#3546)
This brings many goodies, including:

* Ability to specify delimiter and weight_column for CSV files:
```python
dtrain = xgboost.DMatrix('train.csv?format=csv&label_column=0&weight_column=1&delimiter= ')
```
* Ability to choose between 0-based and 1-based indexing for LIBSVM/LIBFM files:
```python
dtrain = xgboost.DMatrix('train.libsvm?indexing_mode=1')    # use 1-based indexing
dtest = xgboost.DMatrix('test.libsvm')                      # use 0-based indexing (default)
dtest2 = xgboost.DMatrix('test2.libsvm?indexing_mode=-1')  # use heuristic to detect 0-based / 1-based
```
* Fix a bug in float parsing (issue dmlc/dmlc-core#440)
2018-08-01 15:15:40 -07:00
Nan Zhu
6cf97b4eae [jvm-packages] consider spark.task.cpus when controlling parallelism (#3530)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* consider spark.task.cpus when controlling parallelism

* fix bug

* fix conf setup

* calculate requestedCores within ParallelismController

* enforce spark.task.cpus = 1

* unify unit test case framework

* enable spark ui
2018-07-31 06:19:45 -07:00
trivialfis
860263f814 Enable building with sanitizers. (#3525) 2018-07-31 17:25:47 +12:00
Nan Zhu
b546321c83 [jvm-packages] the current version of xgboost does not consider missing value in prediction (#3529)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* consider missing value in prediction

* handle single prediction instance

* fix type conversion
2018-07-30 14:16:24 -07:00
wenduowang
3b62e75f2e Fix bug of using list(x) function when x is string (#3432)
* Fix bug of using list(x) function when x is string

list('abcdcba') = ['a', 'b', 'c', 'd', 'c', 'b', 'a']

* Allow feature_names/feature_types to be of any type

If feature_names/feature_types is an iterable (e.g. a tuple or list), convert the value to a list, except when it is a string; otherwise construct a list containing the single value

* Delete excess whitespace

* Fix whitespace to pass lint
2018-07-30 07:36:34 -07:00
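A short sketch of the intended behavior on a small synthetic matrix; the feature names are illustrative:

```python
# feature_names may be any iterable of strings (tuple, list, ...); a single
# string is treated as one name rather than being split into characters.
import numpy as np
import xgboost as xgb

X = np.random.rand(10, 3)
dmat = xgb.DMatrix(X, feature_names=('age', 'height', 'weight'))  # tuple works
print(dmat.feature_names)  # ['age', 'height', 'weight']
```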
jqmp
dd07c25d12 Fix typo in ElasticNet threshold function (#3527) 2018-07-30 14:08:14 +12:00
Philip Hyunsu Cho
2bb9b9d3db Fix typo in parameter.rst, gblinear section (#3518) 2018-07-28 18:58:15 -07:00
Nan Zhu
b5178d3d99 [jvm-packages] a better explanation about the inconsistent issue (#3524) 2018-07-28 17:34:39 -07:00
hlsc
5850a2558a fix DMatrix load_row_split bug (#3431) 2018-07-28 17:21:30 -07:00
trivialfis
8973f2cb0e Fix building dmlc-core from xgboost. (#3522)
Move building dmlc-core before adding DMLC_LOG_CUSTOMIZE.

Fix #3520.
2018-07-28 10:35:11 -07:00
Uddeshya Singh
3363b9142e Update faq.rst (#3521)
Just fixing a minor typo
2018-07-28 10:34:14 -07:00
Rory Mitchell
07ff52d54c Dynamically allocate GPU histogram memory (#3519)
* Expand histogram memory dynamically to prevent large allocations for large tree depths (e.g. > 15)

* Remove GPU memory allocation messages. These are misleading as a large number of allocations are now dynamic.

* Fix appveyor R test
2018-07-28 21:22:41 +12:00
Brandon Greenwell
b5fad42da2 Issue warning when requesting bivariate plotting (#3516) 2018-07-27 16:15:37 -07:00
Philip Hyunsu Cho
8a5209c55e Fix model saving for 'count:poisson': max_delta_step as Booster attribute (#3515)
* Save max_delta_step as an extra attribute of Booster

Fixes #3509 and #3026, where `max_delta_step` parameter gets lost during serialization.

* fix lint

* Use camel case for global constant

* disable local variable case in clang-tidy
2018-07-27 09:55:54 -07:00
Andy Adinets
cc6a5a3666 Added finding quantiles on GPU. (#3393)
* Added finding quantiles on GPU.

- this includes datasets where weights are assigned to data rows
- as the quantiles found by the new algorithm are not the same
  as those found by the old one, test thresholds in
    tests/python-gpu/test_gpu_updaters.py have been adjusted.

* Adjustments and improved testing for finding quantiles on the GPU.

- added C++ tests for the DeviceSketch() function
- reduced one of the thresholds in test_gpu_updaters.py
- adjusted the cuts found by the find_cuts_k kernel
2018-07-27 14:03:16 +12:00
Nan Zhu
e2f09db77a [jvm-packages] minor fix for parameter name in example (#3507) 2018-07-25 19:57:40 -07:00
Rory Mitchell
a725272e19 Correct mistake from dmatrix refactor (#3408) 2018-07-24 15:03:36 +12:00
jqmp
e9a97e0d88 Add total_gain and total_cover importance measures (#3498)
Add `'total_gain'` and `'total_cover'` as possible `importance_type`
arguments to `Booster.get_score` in the Python package.

`get_score` already accepts a `'gain'` argument, which returns each
feature's average gain over all of its splits.  `'total_gain'` does the
same, but returns a total rather than an average.  This seems more
intuitively meaningful, and also matches the behavior of the R package's
`xgb.importance` function.

I also added an analogous `'total_cover'` command for consistency.

This should resolve #3484.
2018-07-23 00:30:55 -07:00
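A usage sketch of the new importance types on synthetic data; parameter values are illustrative:

```python
# 'gain' reports the average gain per split of each feature, while the new
# 'total_gain' / 'total_cover' report sums over all splits of the feature.
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 4)
y = np.random.rand(200)
bst = xgb.train({'max_depth': 3}, xgb.DMatrix(X, label=y), num_boost_round=10)

print(bst.get_score(importance_type='gain'))
print(bst.get_score(importance_type='total_gain'))
print(bst.get_score(importance_type='total_cover'))
```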
KOLANICH
a1505de631 Added configuration for python into .editorconfig (#3494)
* Added configuration for python into .editorconfig

* Fixed forgotten change in the number of spaces
2018-07-23 00:24:10 -07:00
KOLANICH
a393d44c5d Improved library loading a bit (#3481)
* Improved library loading a bit

* Fixed indentation.

* Fixes according to the discussion

* Moved the comment to a separate line.
* specified exception type
2018-07-20 16:03:44 -07:00
Philip Hyunsu Cho
8e90b60c4d Fix relpath in setup.py on Windows (#3493)
* Fix relpath in setup.py on Windows

Fixes #3480.

* Use only one lib file; use 4 space indent
2018-07-20 12:28:08 -07:00
Philip Hyunsu Cho
05b089405d Doc modernization (#3474)
* Change doc build to reST exclusively

* Rewrite Intro doc in reST; create toctree

* Update parameter and contribute

* Convert tutorials to reST

* Convert Python tutorials to reST

* Convert CLI and Julia docs to reST

* Enable markdown for R vignettes

* Done migrating to reST

* Add guzzle_sphinx_theme to requirements

* Add breathe to requirements

* Fix search bar

* Add link to user forum
2018-07-19 14:22:16 -07:00
Yanbo Liang
c004cea788 Expose setCustomObj & setCustomEval for XGBoostClassifier & XGBoostRegressor. (#3486) 2018-07-17 21:16:51 -07:00
KOLANICH
b6dcbf0e07 Added .editorconfig (#3478) 2018-07-17 20:05:55 -07:00
Rory Mitchell
0f145a0365 Resolve GPU bug on large files (#3472)
Remove calls to thrust copy, fix indexing bug
2018-07-16 20:43:45 +12:00
Rory Mitchell
1b59316444 Updates for GPU CI tests (#3467)
* Fail GPU CI after test failure

* Fix GPU linear tests

* Reduced number of GPU tests to speed up CI

* Remove static allocations of device memory

* Resolve illegal memory access for updater_fast_hist.cc

* Fix broken r tests dependency

* Update python install documentation for GPU
2018-07-16 18:05:53 +12:00
Henry Gouk
a13e29ece1 Add LASSO (#3429)
* Allow multiple split constraints

* Replace RidgePenalty with ElasticNet

* Add test for checking Ridge, LASSO, and Elastic Net are implemented
2018-07-15 16:38:26 +12:00
Yanbo Liang
2f8764955c [JVM-packages] Support single instance prediction. (#3464)
* Support single instance prediction.

* Address comments.
2018-07-12 14:17:53 -07:00
Thejaswi
2200939416 Upgrading to NCCL2 (#3404)
* Upgrading to NCCL2

* Part II of the NCCL2 upgrade

 - Doc updates to build with nccl2
 - Dockerfile.gpu update for a correct CI build with nccl2
 - Updated FindNccl package to have env-var NCCL_ROOT to take precedence

* Upgrading to v9.2 for CI workflow, since it has the nccl2 binaries available

* Added NCCL2 license + copy the nccl binaries into /usr location for the FindNccl module to find

* Set LD_LIBRARY_PATH variable to pick nccl2 binary at runtime

* Need the nccl2 library download instructions inside Dockerfile.release as well

* Use NCCL2 as a static library
2018-07-10 00:42:15 -07:00
Thejaswi
a6331925d2 Upgrade cuda version to 9.2 for CI workflows (#3460)
- Needed by issue #3404
 - v9.1 doesn't have an NCCL2 release
2018-07-08 23:04:51 -07:00
Philip Hyunsu Cho
b40959042c Document 0.72.1 version (#3458) 2018-07-08 15:42:09 -07:00
kodonnell
6bed54ac39 python sklearn api: defaulting to best_ntree_limit if defined, otherwise current behaviour (#3445)
* python sklearn api: defaulting to best_ntree_limit if defined, otherwise current behaviour

* Fix whitespace
2018-07-08 14:35:52 -07:00
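A sketch of the resulting behavior, assuming the fit-time `early_stopping_rounds` argument of that era and synthetic data:

```python
# When early stopping is used, predict() now truncates at best_ntree_limit
# automatically instead of using all boosting rounds.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 5)
y = np.random.rand(500)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2)

model = xgb.XGBRegressor(n_estimators=200)
model.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], early_stopping_rounds=10)

preds = model.predict(X_val)  # uses the best iteration found during training
```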
ngoyal2707
cb017d0c9a [jvm-packages] removed old group_data from spark api (#3451) 2018-07-07 22:21:01 -07:00
Nan Zhu
aa90e5c6ce [jvm-packages] disable booster setup for xgboost4j-spark (#3456)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* disable booster setup in spark

* check in parameter conversion

* fix compilation issue

* update exception type
2018-07-07 21:57:24 -07:00
Philip Hyunsu Cho
66e74d2223 Fix get_uint_info() (#3442)
* Add regression test
2018-07-05 20:06:59 -07:00
Philip Hyunsu Cho
48d6e68690 Add callback interface to re-direct console output (#3438)
* Add callback interface to re-direct console output

* Exempt TrackerLogger from custom logging

* Fix lint
2018-07-05 11:32:30 -07:00
Philip Hyunsu Cho
45bf4fbffb Add a notice for binary PyPI wheel (#3443) 2018-07-05 08:28:43 -07:00
Tianqi Chen
01aff45f26 Update README.md 2018-07-04 13:09:32 -07:00
Tianqi Chen
e62639c59b [DOCS] Update link to readme (#3437) 2018-07-04 12:24:33 -07:00
Yanbo Liang
aec6299c49 [jvm-packages] Expose nativeBooster for XGBoostClassificationModel and XGBoostRegressionModel. (#3428) 2018-07-01 15:06:16 -07:00
Nikita Titov
295252249e fixed MinGW missed dll (#3430) 2018-07-01 16:43:33 +00:00
liuliang01
0cf88d036f Add qid like ranklib format (#2749)
* add qid for https://github.com/dmlc/xgboost/issues/2748

* change names

* change spaces

* change qid to bst_uint type

* change qid type to size_t

* change qid first to SIZE_MAX

* change qid type from size_t to uint64_t

* update dmlc-core

* fix qids name error

* fix group_ptr_ error

* Style fix

* Add qid handling logic to SparsePage

* New MetaInfo format + backward compatibility fix

The old MetaInfo format (1.0) doesn't contain the qid field. We still want to be able
to read MetaInfo files saved in the old format. Also, define a new format
(2.0) that contains the qid field. This way, we can distinguish files that
contain qid from those that do not.

* Update MetaInfo test

* Simplify group assignment logic

* Explicitly set qid=nullptr in NativeDataIter

NativeDataIter's callback does not support the qid field. Users of NativeDataIter
will need to call the setGroup() function separately to set group information.

* Save qids_ in SaveBinary()

* Upgrade dmlc-core submodule

* Add a test for reading qid

* Add contributor

* Check the size of qids_

* Document qid format
2018-06-30 20:24:03 +00:00
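A hedged illustration of the qid column described above; the file name, toy rows, and parameters are hypothetical:

```python
# Each line carries a RankLib-style qid, so query-group information is read
# directly from the LIBSVM file instead of a separate .group file.
import xgboost as xgb

libsvm_lines = [
    "1 qid:1 1:0.4 2:0.9",
    "0 qid:1 1:0.1 2:0.2",
    "0 qid:2 1:0.7 2:0.3",
    "1 qid:2 1:0.8 2:0.5",
]
with open("rank_train.txt", "w") as f:
    f.write("\n".join(libsvm_lines) + "\n")

dtrain = xgb.DMatrix("rank_train.txt")
bst = xgb.train({"objective": "rank:pairwise"}, dtrain, num_boost_round=5)
```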
Oliver Laslett
18813a26ab allow arbitrary cross validation fold indices (#3353)
* allow arbitrary cross validation fold indices

 - use training indices passed to `folds` parameter in `training.cv`
 - update doc string

* add tests for arbitrary fold indices
2018-06-30 19:23:49 +00:00
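A sketch of passing explicit fold indices to `xgb.cv`; the split shown is an arbitrary example:

```python
# Each entry of `folds` is a (train_index, test_index) pair; cv() uses these
# splits verbatim instead of generating its own.
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 4)
y = np.random.rand(100)
dtrain = xgb.DMatrix(X, label=y)

folds = [
    (np.arange(0, 80), np.arange(80, 100)),
    (np.arange(20, 100), np.arange(0, 20)),
]
results = xgb.cv({'max_depth': 2}, dtrain, num_boost_round=10, folds=folds)
print(results)
```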
Mike Liu
594bcea83e Save and load model in sklearn API (#3192)
* Add (load|save)_model to XGBModel

* Add docstring

* Fix docstring

* Fix mixed use of space and tab

* Add a test

* Fix Flake8 style errors
2018-06-30 19:21:49 +00:00
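A sketch of the new sklearn-API persistence; the file name is arbitrary and the data synthetic:

```python
# save_model()/load_model() round-trip a fitted sklearn-style estimator
# through XGBoost's binary model format.
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 4)
y = np.random.rand(100)

model = xgb.XGBRegressor(n_estimators=10).fit(X, y)
model.save_model('xgb_sklearn.model')

restored = xgb.XGBRegressor()
restored.load_model('xgb_sklearn.model')
preds = restored.predict(X)
```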
Rory Mitchell
24fde92660 Build universal wheels using GPU CI (#3424) 2018-06-29 13:45:24 +00:00
Yun Ni
30d10ab035 Convert handle == nullptr from SegFault to user-friendly error. (#3021)
* Convert SegFault to user-friendly error.

* Apply the change to DMatrix API as well
2018-06-29 06:30:26 +00:00
cinqS
8bec8d5e9a Better doc for save_model() / load_model() (#3143)
Be clear that they do not save Python-specific attributes
2018-06-29 04:24:33 +00:00
pdesahb
12e34f32e2 Fix tweedie handling of base_score (#3295)
* fix tweedie margin calculations

* add entry to contributors
2018-06-28 15:43:05 +00:00
Henry Gouk
64b8cffde3 Refactor of FastHistMaker to allow for custom regularisation methods (#3335)
* Refactor to allow for custom regularisation methods

* Implement compositional SplitEvaluator framework

* Fixed segfault when no monotone_constraints are supplied.

* Change pid to parentID

* test_monotone_constraints.py now passes

* Refactor ColMaker and DistColMaker to use SplitEvaluator

* Performance optimisation when no monotone_constraints specified

* Fix linter messages

* Fix a few more linter errors

* Update the amalgamation

* Add bounds check

* Add check for leaf node

* Fix linter error in param.h

* Fix clang-tidy errors on CI

* Fix incorrect function name

* Fix clang-tidy error in updater_fast_hist.cc

* Enable SSE2 for Win32 R MinGW

Addresses https://github.com/dmlc/xgboost/pull/3335#issuecomment-400535752

* Add contributor
2018-06-28 07:37:25 +00:00
Philip Hyunsu Cho
cafc621914 Do not unzip google test archive if exists (#3416) 2018-06-28 04:10:39 +00:00
Philip Hyunsu Cho
e2743548ed Fix wget for google tests in tests (#3414)
CI tests were failing because wget prompts the user for a response
whenever the Google Test archive is already on disk.

Fix: use the `-nc` option to skip the download when the archive already
exists.
2018-06-27 22:12:56 +00:00
Rory Mitchell
a0a1df1aba Refactor python tests (#3410)
* Add unit test utility

* Refactor updater tests. Add coverage for histmaker.
2018-06-27 11:20:27 +12:00
Adam Johnston
0988fb191f [jvm-packages] avoid use of Seq.apply in buildGroups (#3413) 2018-06-26 16:00:28 -07:00
ngoyal2707
5cd851ccef added code for instance-based weighting for rank objectives (#3379)
* added code for instance-based weighting for rank objectives

* Fix lint
2018-06-22 15:10:59 -07:00
Nan Zhu
d062c6f61b [jvm-packages] Maven central release stuffs (#3401)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* maven central release
2018-06-22 06:41:28 -07:00
PSEUDOTENSOR / Jonathan McKinney
9ac163d0bb Allow import via python datatable. (#3272)
* Allow import via python datatable.

* Write unit tests

* Refactor dt API functions

* Refactor python code

* Lint fixes

* Address review comments
2018-06-20 13:16:18 -07:00
James
eecf341ea7 [jvm-packages] Added latest version number example (#3374)
* Added latest version number example

* Added latest version number example
2018-06-18 22:09:39 -07:00
Thejaswi
0e78034607 Shared memory atomics while building histogram (#3384)
* Use shared memory atomics for building histograms, whenever possible
2018-06-19 16:03:09 +12:00
Yanbo Liang
2c4359e914 [jvm-packages] XGBoost Spark integration refactor (#3387)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* [jvm-packages] XGBoost Spark integration refactor. (#3313)

* XGBoost Spark integration refactor.

* Make corresponding update for xgboost4j-example

* Address comments.

* [jvm-packages] Refactor XGBoost-Spark params to make it compatible with both XGBoost and Spark MLLib (#3326)

* Refactor XGBoost-Spark params to make it compatible with both XGBoost and Spark MLLib

* Fix extra space.

* [jvm-packages] XGBoost Spark supports ranking with group data. (#3369)

* XGBoost Spark supports ranking with group data.

* Use Iterator.duplicate to prevent OOM.

* Update CheckpointManagerSuite.scala

* Resolve conflicts
2018-06-18 15:39:18 -07:00
Tong He
e6696337e4 Fix CRAN check for lintr (#3372)
* fix CRAN check

* Update submodules dmlc-core and rabit

* Add knitr to rmingw test
2018-06-18 12:53:52 -07:00
Bruce Qu
578a0c7ddb params confusion fixed (#3386) 2018-06-15 13:17:35 -07:00
Gorkem Ozkaya
34e3edfb1a Update index.md (#3228) 2018-06-07 21:51:06 -07:00
ngoyal2707
902ecbade8 added python doc string for nthreads to dmatrix (#3363) 2018-06-08 14:16:30 +12:00
Rory Mitchell
a96039141a Dmatrix refactor stage 1 (#3301)
* Use sparse page as singular CSR matrix representation

* Simplify dmatrix methods

* Reduce statefullness of batch iterators

* BREAKING CHANGE: Remove prob_buffer_row parameter. Users are instead recommended to sample their dataset as a preprocessing step before using XGBoost.
2018-06-07 10:25:58 +12:00
Andy Adinets
286dccb8e8 GPU binning and compression. (#3319)
* GPU binning and compression.

- binning and index compression are done inside the DeviceShard constructor
- in case of a DMatrix with multiple row batches, it is first converted into a single row batch
2018-06-05 17:15:13 +12:00
Rory Mitchell
3f7696ff53 Cleanup old artefacts in Jenkins (#3361) 2018-06-05 15:16:37 +12:00
Philip Hyunsu Cho
bd01acdfbc Save outputs in high precision in CLI prediction (#3356)
Currently, `CLIPredict()` saves prediction results in the default 6-digit precision, which causes precision loss. This PR sets the precision to a level at which the conversion back to `bst_float` is lossless.

Related: #3298.
2018-06-03 14:15:47 -07:00
Nan Zhu
f66731181f Update 0.8 version num (#3358)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* update 0.80
2018-06-02 07:06:01 -07:00
Philip Hyunsu Cho
1214081f99 Release version 0.72 (#3337) 2018-06-01 16:00:31 -07:00
Ryota Suzuki
b7cbec4d4b Fix print.xgb.Booster for R (#3338)
* Fix print.xgb.Booster

valid_handle should be TRUE when x$handle is NOT null

* Update xgb.Booster.R

Modify is.null.handle to return TRUE for NULL handle
2018-05-29 11:44:55 -07:00
Kristian Gampong
a510e68dda Add validate_features option for Booster predict (#3323)
* Add validate_features option for Booster predict

* Fix trailing whitespace in docstring
2018-05-29 11:40:49 -07:00
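A sketch of the new option on `Booster.predict()`; data and names are synthetic:

```python
# validate_features=False skips the feature-name check at prediction time,
# useful when the caller knows the column layout already matches.
import numpy as np
import xgboost as xgb

X = np.random.rand(50, 3)
y = np.random.rand(50)
bst = xgb.train({'max_depth': 2},
                xgb.DMatrix(X, label=y, feature_names=['a', 'b', 'c']),
                num_boost_round=5)

dtest = xgb.DMatrix(np.random.rand(5, 3))             # no feature names set
preds = bst.predict(dtest, validate_features=False)   # skip the name check
```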
Yanbo Liang
b018ef104f Remove output_margin from XGBClassifier.predict_proba argument list. (#3343) 2018-05-28 10:30:21 -07:00
trivialfis
34aeee2961 Fix test_param.cc header path (#3317) 2018-05-28 10:26:29 -07:00
Dave Challis
8efbadcde4 Point rabit submodule at latest commit from master. (#3330) 2018-05-28 10:21:10 -07:00
pdavalo
480e3fd764 Sklearn: validation set weights (#2354)
* Add option to use weights when evaluating metrics in validation sets

* Add test for validation-set weights functionality

* simplify case with no weights for test sets

* fix lint issues
2018-05-23 17:06:20 -07:00
Philip Hyunsu Cho
71e226120a For CRAN submission, remove all #pragma's that suppress compiler warnings (#3329)
* For CRAN submission, remove all #pragma's that suppress compiler warnings

A few headers in dmlc-core contain #pragma's that disable compiler warnings,
which is against the CRAN submission policy. Fix the problem by removing
the offending #pragma's as part of the command `make Rbuild`.

This addresses issue #3322.

* Fix script to improve Cygwin/MSYS compatibility

We need this to pass rmingw CI test

* Remove remove_warning_suppression_pragma.sh from packaged tarball
2018-05-23 09:58:39 -07:00
Thejaswi
d367e4fc6b Fix for issue 3306. (#3324) 2018-05-23 13:42:20 +12:00
Sergei Lebedev
8f6aadd4b7 [jvm-packages] Fixed CheckpointManagerSuite for Scala 2.10 (#3332)
As before, the compilation error is caused by mixing positional and
labelled arguments.
2018-05-19 18:28:11 -07:00
Rory Mitchell
3ee725e3bb Add cuda forwards compatibility (#3316) 2018-05-17 10:59:22 +12:00
Rory Mitchell
f8b7686719 Add cuda 8/9.1 centos 6 builds, test GPU wheel on CPU only container. (#3309)
* Add cuda 8/9.1 centos 6 builds, test GPU wheel on CPU only container.

* Add Google test
2018-05-17 10:57:01 +12:00
Tong He
098075b81b CRAN Submission for 0.71.1 (#3311)
* fix for CRAN manual checks

* fix for CRAN manual checks

* pass local check

* fix variable naming style

* Adding Philip's record
2018-05-14 17:32:39 -07:00
Nan Zhu
49b9f39818 [jvm-packages] update xgboost4j cross build script to be compatible with older glibc (#3307)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* static glibc glibc++

* update to build with glibc 2.12

* remove unsupported flags

* update version number

* remove properties

* remove unnecessary command

* update poms
2018-05-10 06:39:44 -07:00
Philip Hyunsu Cho
9a8211f668 Update dmlc-core submodule (#3221)
* Update dmlc-core submodule

* Fix dense_parser to work with the latest dmlc-core

* Specify location of Google Test

* Add more source files in dmlc-minimum to get latest dmlc-core working

* Update dmlc-core submodule
2018-05-09 18:55:29 -07:00
mallniya
039dbe6aec freebsd support in libpath.py (#3247) 2018-05-09 16:13:30 -07:00
Clive Chan
0c0a78c255 Suggest git submodule update instead of delete + reclone (#3214) 2018-05-09 14:39:17 -07:00
Will Storey
747381b520 Improve .gitignore patterns (#3184)
* Adjust xgboost entries in .gitignore

They were overly broad. In particular, this was inconvenient when
working with tools such as fzf that use the .gitignore to decide what to
include. As written, we'd not look into /include/xgboost.

* Make cosmetic improvements to .gitignore

* Remove dmlc-core from .gitignore

This seems unnecessary and has the drawback that tools that use
.gitignore to decide which files to skip won't look here, while being
able to inspect the submodule files with those tools is useful.
2018-05-09 14:31:59 -07:00
Samuel O. Ronsin
cc79a65ab9 Increase precision of bst_float values in tree dumps (#3298)
* Increase precision of bst_float values in tree dumps

* Increase precision of bst_float values in tree dumps

* Fix lint error and switch precision to right float variable

* Fix clang-tidy error
2018-05-09 14:12:21 -07:00
Brandon Greenwell
d13f1a0f16 Fix typo (#3305) 2018-05-09 10:18:36 -07:00
Rory Mitchell
088bb4b27c Prevent multiclass Hessian approaching 0 (#3304)
* Prevent the Hessian in the multiclass objective from becoming zero

* Set default learning rate to 0.5 for "coord_descent" linear updater
2018-05-09 20:25:51 +12:00
Andrew V. Adinetz
b8a0d66fe6 Multi-GPU HostDeviceVector. (#3287)
* Multi-GPU HostDeviceVector.

- HostDeviceVector instances can now span multiple devices, defined by GPUSet struct
- the interface of HostDeviceVector has been modified accordingly
- GPU objective functions are now multi-GPU
- GPU predicting from cache is now multi-GPU
- avoiding omp_set_num_threads() calls
- other minor changes
2018-05-05 08:00:05 +12:00
Rory Mitchell
90a5c4db9d Update Jenkins CI for GPU (#3294) 2018-05-04 16:50:59 +12:00
Thejaswi
c80d51ccb3 Fix issue #3264, accuracy issues on k80 GPUs. (#3293) 2018-05-04 13:14:08 +12:00
Nan Zhu
e1f57b4417 [jvm-packages] scripts to cross-build and deploy artifacts to github (#3276)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* cross building files

* update

* build with docker

* remove

* temp

* update build script

* update pom

* update

* update version

* upload build

* fix path

* update README.md

* fix compiler version to 4.8.5
2018-04-28 07:41:30 -07:00
Yanbo Liang
4850f67b85 Fix broken link for xgboost-spark example. (#3275) 2018-04-26 06:45:01 -07:00
Thomas J. Leeper
c2b647f26e fix typo in README (#3263) 2018-04-22 09:24:38 -04:00
Nan Zhu
25b2919c44 [jvm-packages] change version of jvm to keep consistent with other pkgs (#3253)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* change version of jvm to keep consistent with other pkgs
2018-04-19 20:48:50 -07:00
Nan Zhu
d9dd485313 [jvm-packages] upgrade spark version to 2.3 (#3254)
* add back train method but mark as deprecated

* add back train method but mark as deprecated

* fix scalastyle error

* fix scalastyle error

* update default spark version to 2.3
2018-04-19 20:15:19 -07:00
Rory Mitchell
a185ddfe03 Implement GPU accelerated coordinate descent algorithm (#3178)
* Implement GPU accelerated coordinate descent algorithm. 

* Exclude external memory tests for GPU
2018-04-20 14:56:35 +12:00
Rory Mitchell
ccf80703ef Clang-tidy static analysis (#3222)
* Clang-tidy static analysis

* Modernise checks

* Google coding standard checks

* Identifier renaming according to Google style
2018-04-19 18:57:13 +12:00
Michal Josífko
3242b0a378 Update rabit submodule to latest version. (#3246) 2018-04-19 13:58:09 +12:00
Philip Hyunsu Cho
842e28fdcd Fix RMinGW build error: dependency 'data.table' not available (#3257)
The R package dependency 'data.table' is apparently unavailable in Windows binary format, resulting in the following build errors:
* https://ci.appveyor.com/project/tqchen/xgboost/build/1.0.1810/job/hhanvg0c2cqpn7bc
* https://ci.appveyor.com/project/tqchen/xgboost/build/1.0.1811/job/hg65t9wb3rt1f5k8

Fix: use type='both' to fall back to source when binary is unavailable
2018-04-18 10:56:44 -07:00
321 changed files with 15420 additions and 12386 deletions

.clang-tidy (new file, 21 lines changed)

@@ -0,0 +1,21 @@
Checks: 'modernize-*,-modernize-make-*,-modernize-raw-string-literal,google-*,-google-default-arguments,-clang-diagnostic-#pragma-messages,readability-identifier-naming'
CheckOptions:
- { key: readability-identifier-naming.ClassCase, value: CamelCase }
- { key: readability-identifier-naming.StructCase, value: CamelCase }
- { key: readability-identifier-naming.TypeAliasCase, value: CamelCase }
- { key: readability-identifier-naming.TypedefCase, value: CamelCase }
- { key: readability-identifier-naming.TypeTemplateParameterCase, value: CamelCase }
- { key: readability-identifier-naming.MemberCase, value: lower_case }
- { key: readability-identifier-naming.PrivateMemberSuffix, value: '_' }
- { key: readability-identifier-naming.ProtectedMemberSuffix, value: '_' }
- { key: readability-identifier-naming.EnumCase, value: CamelCase }
- { key: readability-identifier-naming.EnumConstant, value: CamelCase }
- { key: readability-identifier-naming.EnumConstantPrefix, value: k }
- { key: readability-identifier-naming.GlobalConstantCase, value: CamelCase }
- { key: readability-identifier-naming.GlobalConstantPrefix, value: k }
- { key: readability-identifier-naming.StaticConstantCase, value: CamelCase }
- { key: readability-identifier-naming.StaticConstantPrefix, value: k }
- { key: readability-identifier-naming.ConstexprVariableCase, value: CamelCase }
- { key: readability-identifier-naming.ConstexprVariablePrefix, value: k }
- { key: readability-identifier-naming.FunctionCase, value: CamelCase }
- { key: readability-identifier-naming.NamespaceCase, value: lower_case }

.editorconfig (new file, 11 lines changed)

@@ -0,0 +1,11 @@
root = true
[*]
charset=utf-8
indent_style = space
indent_size = 2
insert_final_newline = true
[*.py]
indent_style = space
indent_size = 4

.github/ISSUE_TEMPLATE.md (new file, 7 lines changed)

@@ -0,0 +1,7 @@
Thanks for participating in the XGBoost community! We use https://discuss.xgboost.ai for any general usage questions and discussions. The issue tracker is used for actionable items such as feature proposal discussions, roadmaps, and bug tracking. You are always welcome to post on the forum first :)
Issues that are inactive for a period of time may get closed. We adopt this policy so that we won't lose track of actionable issues that may fall to the bottom of the pile. Feel free to open a new issue if you feel there is an additional problem that needs attention when an old one gets closed.
For bug reports, to help the developer act on the issues, please include a description of your environment, preferably a minimum script to reproduce the problem.
For feature proposals, list clear, small actionable items so we can track the progress of the change.

.gitignore (7 lines changed)

@@ -15,7 +15,6 @@
*.Rcheck
*.rds
*.tar.gz
#*txt*
*conf
*buffer
*model
@@ -47,13 +46,12 @@ Debug
*.cpage.col
*.cpage
*.Rproj
./xgboost
./xgboost.mpi
./xgboost.mock
#.Rbuildignore
R-package.Rproj
*.cache*
#java
# java
java/xgboost4j/target
java/xgboost4j/tmp
java/xgboost4j-demo/target
@@ -68,10 +66,9 @@ nb-configuration*
.settings/
build
config.mk
xgboost
/xgboost
*.data
build_plugin
dmlc-core
.idea
recommonmark/
tags

.gitmodules (3 lines changed)

@@ -4,9 +4,6 @@
[submodule "rabit"]
path = rabit
url = https://github.com/dmlc/rabit
[submodule "nccl"]
path = nccl
url = https://github.com/dmlc/nccl
[submodule "cub"]
path = cub
url = https://github.com/NVlabs/cub

.travis.yml

@@ -26,6 +26,8 @@ env:
- TASK=cmake_test
# c++ test
- TASK=cpp_test
# distributed test
- TASK=distributed_test
matrix:
exclude:
@@ -39,15 +41,19 @@ matrix:
env: TASK=python_lightweight_test
- os: osx
env: TASK=cpp_test
- os: osx
env: TASK=distributed_test
# dependent apt packages
addons:
apt:
sources:
- llvm-toolchain-trusty-5.0
- ubuntu-toolchain-r-test
- george-edison55-precise-backports
packages:
- cmake
- clang
- clang-tidy-5.0
- cmake-data
- doxygen
- wget

CMakeLists.txt

@@ -8,14 +8,18 @@ set_default_configuration_release()
msvc_use_static_runtime()
# Options
option(USE_CUDA "Build with GPU acceleration")
option(USE_AVX "Build with AVX instructions. May not produce identical results due to approximate math." OFF)
option(USE_NCCL "Build using NCCL for multi-GPU. Also requires USE_CUDA")
option(USE_CUDA "Build with GPU acceleration")
option(USE_AVX "Build with AVX instructions. May not produce identical results due to approximate math." OFF)
option(USE_NCCL "Build using NCCL for multi-GPU. Also requires USE_CUDA")
option(JVM_BINDINGS "Build JVM bindings" OFF)
option(GOOGLE_TEST "Build google tests" OFF)
option(R_LIB "Build shared library for R package" OFF)
set(GPU_COMPUTE_VER 35;50;52;60;61 CACHE STRING
"Space separated list of compute versions to be built against")
option(USE_SANITIZER "Use santizer flags" OFF)
set(GPU_COMPUTE_VER "" CACHE STRING
"Space separated list of compute versions to be built against, e.g. '35 61'")
set(ENABLED_SANITIZERS "address" "leak" CACHE STRING
"Semicolon separated list of sanitizer names. E.g 'address;leak'. Supported sanitizers are
address, leak and thread.")
# Deprecation warning
if(PLUGIN_UPDATER_GPU)
@@ -39,6 +43,15 @@ else()
# Performance
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -funroll-loops")
endif()
if(WIN32 AND MINGW)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -static-libstdc++")
endif()
# Sanitizer
if(USE_SANITIZER)
include(cmake/Sanitizer.cmake)
enable_sanitizers("${ENABLED_SANITIZERS}")
endif(USE_SANITIZER)
# AVX
if(USE_AVX)
@@ -50,6 +63,12 @@ if(USE_AVX)
add_definitions(-DXGBOOST_USE_AVX)
endif()
# dmlc-core
add_subdirectory(dmlc-core)
set(LINK_LIBRARIES dmlc rabit)
# enable custom logging
add_definitions(-DDMLC_LOG_CUSTOMIZE=1)
# compiled code customizations for R package
if(R_LIB)
@@ -70,7 +89,7 @@ include_directories (
${PROJECT_SOURCE_DIR}/rabit/include
)
file(GLOB_RECURSE SOURCES
file(GLOB_RECURSE SOURCES
src/*.cc
src/*.h
include/*.h
@@ -103,47 +122,36 @@ else()
add_library(rabit STATIC ${RABIT_SOURCES})
endif()
# dmlc-core
add_subdirectory(dmlc-core)
set(LINK_LIBRARIES dmlccore rabit)
if(USE_CUDA)
find_package(CUDA 8.0 REQUIRED)
cmake_minimum_required(VERSION 3.5)
add_definitions(-DXGBOOST_USE_CUDA)
include_directories(cub)
if(USE_NCCL)
include_directories(nccl/src)
find_package(Nccl REQUIRED)
include_directories(${NCCL_INCLUDE_DIR})
add_definitions(-DXGBOOST_USE_NCCL)
endif()
if((CUDA_VERSION_MAJOR EQUAL 9) OR (CUDA_VERSION_MAJOR GREATER 9))
message("CUDA 9.0 detected, adding Volta compute capability (7.0).")
set(GPU_COMPUTE_VER "${GPU_COMPUTE_VER};70")
endif()
set(GENCODE_FLAGS "")
format_gencode_flags("${GPU_COMPUTE_VER}" GENCODE_FLAGS)
message("cuda architecture flags: ${GENCODE_FLAGS}")
set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS};--expt-extended-lambda;--expt-relaxed-constexpr;${GENCODE_FLAGS};-lineinfo;")
if(NOT MSVC)
set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS};-Xcompiler -fPIC; -std=c++11")
endif()
if(USE_NCCL)
add_subdirectory(nccl)
set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS};-Xcompiler -fPIC; -Xcompiler -Werror; -std=c++11")
endif()
cuda_add_library(gpuxgboost ${CUDA_SOURCES} STATIC)
if(USE_NCCL)
target_link_libraries(gpuxgboost nccl)
link_directories(${NCCL_LIBRARY})
target_link_libraries(gpuxgboost ${NCCL_LIB_NAME})
endif()
list(APPEND LINK_LIBRARIES gpuxgboost)
list(APPEND LINK_LIBRARIES gpuxgboost)
endif()
@@ -224,12 +232,12 @@ endif()
# Test
if(GOOGLE_TEST)
find_package(GTest REQUIRED)
enable_testing()
find_package(GTest REQUIRED)
file(GLOB_RECURSE TEST_SOURCES "tests/cpp/*.cc")
auto_source_group("${TEST_SOURCES}")
include_directories(${GTEST_INCLUDE_DIR})
include_directories(${GTEST_INCLUDE_DIRS})
if(USE_CUDA)
file(GLOB_RECURSE CUDA_TEST_SOURCES "tests/cpp/*.cu")

CONTRIBUTORS.md

@@ -7,8 +7,8 @@ Committers
Committers are people who have made substantial contribution to the project and granted write access to the project.
* [Tianqi Chen](https://github.com/tqchen), University of Washington
- Tianqi is a PhD working on large-scale machine learning, he is the creator of the project.
* [Tong He](https://github.com/hetong007), Simon Fraser University
- Tong is a master student working on data mining, he is the maintainer of xgboost R package.
* [Tong He](https://github.com/hetong007), Amazon AI
- Tong is an applied scientist in Amazon AI, he is the maintainer of xgboost R package.
* [Vadim Khotilovich](https://github.com/khotilov)
- Vadim contributes many improvements in R and core packages.
* [Bing Xu](https://github.com/antinucleon)
@@ -54,7 +54,8 @@ List of Contributors
* [Masaaki Horikoshi](https://github.com/sinhrks)
- Masaaki is the initial creator of xgboost python plotting module.
* [Hongliang Liu](https://github.com/phunterlau)
- Hongliang is the maintainer of xgboost python PyPI package for pip installation.
* [Hyunsu Cho](http://hyunsu-cho.io/)
- Hyunsu is the maintainer of the XGBoost Python package. He is in charge of submitting the Python package to Python Package Index (PyPI). He is also the initial author of the CPU 'hist' updater.
* [daiyl0320](https://github.com/daiyl0320)
- daiyl0320 contributed patch to xgboost distributed version more robust, and scales stably on TB scale datasets.
* [Huayi Zhang](https://github.com/irachex)
@@ -72,3 +73,8 @@ List of Contributors
* [Gideon Whitehead](https://github.com/gaw89)
* [Yi-Lin Juang](https://github.com/frankyjuang)
* [Andrew Hannigan](https://github.com/andrewhannigan)
* [Andy Adinets](https://github.com/canonizer)
* [Henry Gouk](https://github.com/henrygouk)
* [Pierre de Sahb](https://github.com/pdesahb)
* [liuliang01](https://github.com/liuliang01)
- liuliang01 added support for the qid column for LibSVM input format. This makes ranking task easier in distributed setting.

(old issue template, deleted)

@@ -1,44 +0,0 @@
For bugs or installation issues, please provide the following information.
The more information you provide, the more easily we will be able to offer
help and advice.
## Environment info
Operating System:
Compiler:
Package used (python/R/jvm/C++):
`xgboost` version used:
If installing from source, please provide
1. The commit hash (`git rev-parse HEAD`)
2. Logs will be helpful (If logs are large, please upload as attachment).
If you are using jvm package, please
1. add [jvm-packages] in the title to make it quickly be identified
2. the gcc version and distribution
If you are using python package, please provide
1. The python version and distribution
2. The command to install `xgboost` if you are not installing from source
If you are using R package, please provide
1. The R `sessionInfo()`
2. The command to install `xgboost` if you are not installing from source
## Steps to reproduce
1.
2.
3.
## What have you tried?
1.
2.
3.

Jenkinsfile (121 lines changed)

@@ -3,13 +3,20 @@
// Jenkins pipeline
// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/
import groovy.transform.Field
/* Unrestricted tasks: tasks that do NOT generate artifacts */
// Command to run command inside a docker container
dockerRun = 'tests/ci_build/ci_build.sh'
def dockerRun = 'tests/ci_build/ci_build.sh'
// Utility functions
@Field
def utils
def buildMatrix = [
[ "enabled": true, "os" : "linux", "withGpu": true, "withOmp": true, "pythonVersion": "2.7" ],
[ "enabled": true, "os" : "linux", "withGpu": false, "withOmp": true, "pythonVersion": "2.7" ],
[ "enabled": false, "os" : "osx", "withGpu": false, "withOmp": false, "pythonVersion": "2.7" ],
[ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "9.2" ],
[ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "8.0" ],
[ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": false, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "8.0" ],
]
pipeline {
@@ -26,20 +33,25 @@ pipeline {
// Build stages
stages {
stage('Get sources') {
agent any
stage('Jenkins: Get sources') {
agent {
label 'unrestricted'
}
steps {
checkoutSrcs()
script {
utils = load('tests/ci_build/jenkins_tools.Groovy')
utils.checkoutSrcs()
}
stash name: 'srcs', excludes: '.git/'
milestone label: 'Sources ready', ordinal: 1
}
}
stage('Build & Test') {
stage('Jenkins: Build & Test') {
steps {
script {
parallel (buildMatrix.findAll{it['enabled']}.collectEntries{ c ->
def buildName = getBuildName(c)
buildFactory(buildName, c)
def buildName = utils.getBuildName(c)
utils.buildFactory(buildName, c, false, this.&buildPlatformCmake)
})
}
}
@@ -47,40 +59,17 @@ pipeline {
}
}
// initialize source codes
def checkoutSrcs() {
retry(5) {
try {
timeout(time: 2, unit: 'MINUTES') {
checkout scm
sh 'git submodule update --init'
}
} catch (exc) {
deleteDir()
error "Failed to fetch source codes"
}
}
}
/**
* Creates cmake and make builds
*/
def buildFactory(buildName, conf) {
def os = conf["os"]
def nodeReq = conf["withGpu"] ? "${os} && gpu" : "${os}"
def dockerTarget = conf["withGpu"] ? "gpu" : "cpu"
[ ("cmake_${buildName}") : { buildPlatformCmake("cmake_${buildName}", conf, nodeReq, dockerTarget) },
("make_${buildName}") : { buildPlatformMake("make_${buildName}", conf, nodeReq, dockerTarget) }
]
}
/**
* Build platform and test it via cmake.
*/
def buildPlatformCmake(buildName, conf, nodeReq, dockerTarget) {
def opts = cmakeOptions(conf)
def opts = utils.cmakeOptions(conf)
// Destination dir for artifacts
def distDir = "dist/${buildName}"
def dockerArgs = ""
if(conf["withGpu"]){
dockerArgs = "--build-arg CUDA_VERSION=" + conf["cudaVersion"]
}
// Build node - this is returned result
node(nodeReq) {
unstash name: 'srcs'
@@ -92,60 +81,8 @@ def buildPlatformCmake(buildName, conf, nodeReq, dockerTarget) {
""".stripMargin('|')
// Invoke command inside docker
sh """
${dockerRun} ${dockerTarget} tests/ci_build/build_via_cmake.sh ${opts}
${dockerRun} ${dockerTarget} tests/ci_build/test_${dockerTarget}.sh
${dockerRun} ${dockerTarget} bash -c "cd python-package; python setup.py bdist_wheel"
rm -rf "${distDir}"; mkdir -p "${distDir}/py"
cp xgboost "${distDir}"
cp -r lib "${distDir}"
cp -r python-package/dist "${distDir}/py"
"""
archiveArtifacts artifacts: "${distDir}/**/*.*", allowEmptyArchive: true
}
}
/**
* Build platform via make
*/
def buildPlatformMake(buildName, conf, nodeReq, dockerTarget) {
def opts = makeOptions(conf)
// Destination dir for artifacts
def distDir = "dist/${buildName}"
// Build node
node(nodeReq) {
unstash name: 'srcs'
echo """
|===== XGBoost Make build =====
| dockerTarget: ${dockerTarget}
| makeOpts : ${opts}
|=========================
""".stripMargin('|')
// Invoke command inside docker
sh """
${dockerRun} ${dockerTarget} tests/ci_build/build_via_make.sh ${opts}
${dockerRun} ${dockerTarget} ${dockerArgs} tests/ci_build/build_via_cmake.sh ${opts}
${dockerRun} ${dockerTarget} ${dockerArgs} tests/ci_build/test_${dockerTarget}.sh
"""
}
}
def makeOptions(conf) {
return ([
conf["withGpu"] ? 'PLUGIN_UPDATER_GPU=ON' : 'PLUGIN_UPDATER_GPU=OFF',
conf["withOmp"] ? 'USE_OPENMP=1' : 'USE_OPENMP=0']
).join(" ")
}
def cmakeOptions(conf) {
return ([
conf["withGpu"] ? '-DPLUGIN_UPDATER_GPU:BOOL=ON' : '',
conf["withOmp"] ? '-DOPEN_MP:BOOL=ON' : '']
).join(" ")
}
def getBuildName(conf) {
def gpuLabel = conf['withGpu'] ? "_gpu" : "_cpu"
def ompLabel = conf['withOmp'] ? "_omp" : ""
def pyLabel = "_py${conf['pythonVersion']}"
return "${conf['os']}${gpuLabel}${ompLabel}${pyLabel}"
}

Jenkinsfile-restricted (new file, 121 lines changed)

@@ -0,0 +1,121 @@
#!/usr/bin/groovy
// -*- mode: groovy -*-
// Jenkins pipeline
// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/
import groovy.transform.Field
/* Restricted tasks: tasks generating artifacts, such as binary wheels and
documentation */
// Command to run command inside a docker container
def dockerRun = 'tests/ci_build/ci_build.sh'
// Utility functions
@Field
def utils
def buildMatrix = [
[ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "9.2" ],
[ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "8.0" ],
[ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": false, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "8.0" ],
]
pipeline {
// Each stage specify its own agent
agent none
// Setup common job properties
options {
ansiColor('xterm')
timestamps()
timeout(time: 120, unit: 'MINUTES')
buildDiscarder(logRotator(numToKeepStr: '10'))
}
// Build stages
stages {
stage('Jenkins: Get sources') {
agent {
label 'restricted'
}
steps {
script {
utils = load('tests/ci_build/jenkins_tools.Groovy')
utils.checkoutSrcs()
}
stash name: 'srcs', excludes: '.git/'
milestone label: 'Sources ready', ordinal: 1
}
}
stage('Jenkins: Build doc') {
agent {
label 'linux && cpu && restricted'
}
steps {
unstash name: 'srcs'
script {
def commit_id = "${GIT_COMMIT}"
def branch_name = "${GIT_LOCAL_BRANCH}"
echo 'Building doc...'
dir ('jvm-packages') {
sh "bash ./build_doc.sh ${commit_id}"
archiveArtifacts artifacts: "${commit_id}.tar.bz2", allowEmptyArchive: true
echo 'Deploying doc...'
withAWS(credentials:'xgboost-doc-bucket') {
s3Upload file: "${commit_id}.tar.bz2", bucket: 'xgboost-docs', acl: 'PublicRead', path: "${branch_name}.tar.bz2"
}
}
}
}
}
stage('Jenkins: Build artifacts') {
steps {
script {
parallel (buildMatrix.findAll{it['enabled']}.collectEntries{ c ->
def buildName = utils.getBuildName(c)
utils.buildFactory(buildName, c, true, this.&buildPlatformCmake)
})
}
}
}
}
}
/**
* Build platform and test it via cmake.
*/
def buildPlatformCmake(buildName, conf, nodeReq, dockerTarget) {
def opts = utils.cmakeOptions(conf)
// Destination dir for artifacts
def distDir = "dist/${buildName}"
def dockerArgs = ""
if(conf["withGpu"]){
dockerArgs = "--build-arg CUDA_VERSION=" + conf["cudaVersion"]
}
// Build node - this is returned result
node(nodeReq) {
unstash name: 'srcs'
echo """
|===== XGBoost CMake build =====
| dockerTarget: ${dockerTarget}
| cmakeOpts : ${opts}
|=========================
""".stripMargin('|')
// Invoke command inside docker
sh """
${dockerRun} ${dockerTarget} ${dockerArgs} tests/ci_build/build_via_cmake.sh ${opts}
${dockerRun} ${dockerTarget} ${dockerArgs} bash -c "cd python-package; rm -f dist/*; python setup.py bdist_wheel --universal"
rm -rf "${distDir}"; mkdir -p "${distDir}/py"
cp xgboost "${distDir}"
cp -r lib "${distDir}"
cp -r python-package/dist "${distDir}/py"
# Test the wheel for compatibility on a barebones CPU container
${dockerRun} release ${dockerArgs} bash -c " \
auditwheel show xgboost-*-py2-none-any.whl
pip install --user python-package/dist/xgboost-*-none-any.whl && \
python -m nose tests/python"
"""
archiveArtifacts artifacts: "${distDir}/**/*.*", allowEmptyArchive: true
}
}

Makefile

@@ -68,7 +68,7 @@ endif
endif
export LDFLAGS= -pthread -lm $(ADD_LDFLAGS) $(DMLC_LDFLAGS) $(PLUGIN_LDFLAGS)
export CFLAGS= -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude $(ADD_CFLAGS) $(PLUGIN_CFLAGS)
export CFLAGS= -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude $(ADD_CFLAGS) $(PLUGIN_CFLAGS)
CFLAGS += -I$(DMLC_CORE)/include -I$(RABIT)/include -I$(GTEST_PATH)/include
#java include path
export JAVAINCFLAGS = -I${JAVA_HOME}/include -I./java
@@ -261,13 +261,15 @@ Rpack: clean_all
cat R-package/src/Makevars.in|sed '2s/.*/PKGROOT=./' | sed '3s/.*/ENABLE_STD_THREAD=0/' > xgboost/src/Makevars.in
cp xgboost/src/Makevars.in xgboost/src/Makevars.win
sed -i -e 's/@OPENMP_CXXFLAGS@/$$\(SHLIB_OPENMP_CFLAGS\)/g' xgboost/src/Makevars.win
bash R-package/remove_warning_suppression_pragma.sh
rm xgboost/remove_warning_suppression_pragma.sh
Rbuild: Rpack
R CMD build --no-build-vignettes xgboost
rm -rf xgboost
Rcheck: Rbuild
R CMD check xgboost*.tar.gz
R CMD check xgboost*.tar.gz
-include build/*.d
-include build/*/*.d

NEWS.md (82 lines changed)

@@ -3,6 +3,88 @@ XGBoost Change Log
This file records the changes in xgboost library in reverse chronological order.
## v0.80 (2018.08.13)
* **JVM packages received a major upgrade**: To consolidate the APIs and improve the user experience, we refactored the design of XGBoost4J-Spark in a significant manner. (#3387)
- Consolidated APIs: It is now much easier to integrate XGBoost models into a Spark ML pipeline. Users can control behaviors like output leaf prediction results by setting corresponding column names. Training is now more consistent with other Estimators in Spark MLLIB: there is now one single method `fit()` to train decision trees.
- Better user experience: we refactored the parameter-related modules in XGBoost4J-Spark to provide both camel-case (Spark ML style) and underscore (XGBoost style) parameters
- A brand-new tutorial is [available](https://xgboost.readthedocs.io/en/release_0.80/jvm/xgboost4j_spark_tutorial.html) for XGBoost4J-Spark.
- Latest API documentation is now hosted at https://xgboost.readthedocs.io/.
* XGBoost documentation now keeps track of multiple versions:
- Latest master: https://xgboost.readthedocs.io/en/latest
- 0.80 stable: https://xgboost.readthedocs.io/en/release_0.80
- 0.72 stable: https://xgboost.readthedocs.io/en/release_0.72
* Ranking task now uses instance weights (#3379)
* Fix inaccurate decimal parsing (#3546)
* New functionality
- Query ID column support in LIBSVM data files (#2749). This is convenient for performing ranking task in distributed setting.
- Hinge loss for binary classification (`binary:hinge`) (#3477)
- Ability to specify delimiter and instance weight column for CSV files (#3546)
- Ability to use 1-based indexing instead of 0-based (#3546)
* GPU support
- Quantile sketch, binning, and index compression are now performed on GPU, eliminating PCIe transfer for 'gpu_hist' algorithm (#3319, #3393)
- Upgrade to NCCL2 for multi-GPU training (#3404).
- Use shared memory atomics for faster training (#3384).
- Dynamically allocate GPU memory, to prevent large allocations for deep trees (#3519)
- Fix memory copy bug for large files (#3472)
* Python package
- Importing data from Python datatable (#3272)
- Pre-built binary wheels available for 64-bit Linux and Windows (#3424, #3443)
- Add new importance measures 'total_gain', 'total_cover' (#3498)
- Sklearn API now supports saving and loading models (#3192)
- Arbitrary cross validation fold indices (#3353)
- `predict()` function in Sklearn API uses `best_ntree_limit` if available, to make early stopping easier to use (#3445)
- Informational messages are now directed to Python's `print()` rather than standard output (#3438). This way, messages appear inside Jupyter notebooks.
* R package
- Oracle Solaris support, per CRAN policy (#3372)
* JVM packages
- Single-instance prediction (#3464)
- Pre-built JARs are now available from Maven Central (#3401)
- Add NULL pointer check (#3021)
- Consider `spark.task.cpus` when controlling parallelism (#3530)
- Handle missing values in prediction (#3529)
- Eliminate outputs of `System.out` (#3572)
* Refactored C++ DMatrix class for simplicity and de-duplication (#3301)
* Refactored C++ histogram facilities (#3564)
* Refactored constraints / regularization mechanism for split finding (#3335, #3429). Users may specify an elastic net (L2 + L1 regularization) on leaf weights as well as monotonic constraints on test nodes. The refactor will be useful for a future addition of feature interaction constraints.
* Statically link `libstdc++` for MinGW32 (#3430)
* Enable loading from `group`, `base_margin` and `weight` (see [here](http://xgboost.readthedocs.io/en/latest/tutorials/input_format.html#auxiliary-files-for-additional-information)) for Python, R, and JVM packages (#3431)
* Fix model saving for `count:poisson` so that `max_delta_step` doesn't get truncated (#3515)
* Fix loading of sparse CSC matrix (#3553)
* Fix incorrect handling of `base_score` parameter for Tweedie regression (#3295)
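As a quick illustration of the new hinge loss objective (`binary:hinge`) listed above, here is a minimal R sketch; the agaricus demo data and the parameter values are illustrative only and are not part of the release notes.

```r
# Minimal sketch of the binary:hinge objective, assuming the bundled agaricus demo data.
# With the hinge objective, predictions are expected to be hard 0/1 decisions rather
# than probabilities.
library(xgboost)
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
param <- list(max_depth = 2, eta = 1, silent = 1, objective = 'binary:hinge')
bst <- xgb.train(params = param, data = dtrain, nrounds = 2, nthread = 2)
pred <- predict(bst, dtest)
err <- mean(pred != agaricus.test$label)
print(paste("test-error =", err))
```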
## v0.72.1 (2018.07.08)
This version is only applicable for the Python package. The content is identical to that of v0.72.
## v0.72 (2018.06.01)
* Starting with this release, we plan to make a new release every two months. See #3252 for more details.
* Fix a pathological behavior (near-zero second-order gradients) in multiclass objective (#3304)
* Tree dumps now use high precision in storing floating-point values (#3298)
* Submodules `rabit` and `dmlc-core` have been brought up to date, bringing bug fixes (#3330, #3221).
* GPU support
- Continuous integration tests for GPU code (#3294, #3309)
- GPU accelerated coordinate descent algorithm (#3178)
- Abstract 1D vector class now works with multiple GPUs (#3287)
- Generate PTX code for most recent architecture (#3316)
- Fix a memory bug on NVIDIA K80 cards (#3293)
- Address performance instability for single-GPU, multi-core machines (#3324)
* Python package
- FreeBSD support (#3247)
- Validation of feature names in `Booster.predict()` is now optional (#3323)
* Updated Sklearn API
- Validation sets now support instance weights (#2354)
- `XGBClassifier.predict_proba()` no longer supports the `output_margin` option (#3343). See BREAKING CHANGES below.
* R package:
- Better handling of NULL in `print.xgb.Booster()` (#3338)
- Comply with CRAN policy by removing compiler warning suppression (#3329)
- Updated CRAN submission
* JVM packages
- JVM packages will now use the same versioning scheme as other packages (#3253)
- Update Spark to 2.3 (#3254)
- Add scripts to cross-build and deploy artifacts (#3276, #3307)
- Fix a compilation error for Scala 2.10 (#3332)
* BREAKING CHANGES
- `XGBClassifier.predict_proba()` no longer accepts the parameter `output_margin`. The parameter makes no sense for `predict_proba()` because the method is meant to predict class probabilities, not raw margin scores.
## v0.71 (2018.04.11)
* This is a minor release, mainly motivated by issues concerning `pip install`, e.g. #2426, #3189, #3118, and #3194.
With this release, users of Linux and MacOS will be able to run `pip install` for the most part.

View File

@@ -1,8 +1,8 @@
Package: xgboost
Type: Package
Title: Extreme Gradient Boosting
Version: 0.71.1
Date: 2018-04-11
Version: 0.80.1
Date: 2018-08-13
Authors@R: c(
person("Tianqi", "Chen", role = c("aut"),
email = "tianqi.tchen@gmail.com"),
@@ -14,7 +14,20 @@ Authors@R: c(
email = "khotilovich@gmail.com"),
person("Yuan", "Tang", role = c("aut"),
email = "terrytangyuan@gmail.com",
comment = c(ORCID = "0000-0001-5243-233X"))
comment = c(ORCID = "0000-0001-5243-233X")),
person("Hyunsu", "Cho", role = c("aut"),
email = "chohyu01@cs.washington.edu"),
person("Kailong", "Chen", role = c("aut")),
person("Rory", "Mitchell", role = c("aut")),
person("Ignacio", "Cano", role = c("aut")),
person("Tianyi", "Zhou", role = c("aut")),
person("Mu", "Li", role = c("aut")),
person("Junyuan", "Xie", role = c("aut")),
person("Min", "Lin", role = c("aut")),
person("Yifeng", "Geng", role = c("aut")),
person("Yutian", "Li", role = c("aut")),
person("XGBoost contributors", role = c("cph"),
comment = "base XGBoost implementation")
)
Description: Extreme Gradient Boosting, which is an efficient implementation
of the gradient boosting framework from Chen & Guestrin (2016) <doi:10.1145/2939672.2939785>.
@@ -28,6 +41,7 @@ Description: Extreme Gradient Boosting, which is an efficient implementation
License: Apache License (== 2.0) | file LICENSE
URL: https://github.com/dmlc/xgboost
BugReports: https://github.com/dmlc/xgboost/issues
NeedsCompilation: yes
VignetteBuilder: knitr
Suggests:
knitr,
@@ -37,6 +51,7 @@ Suggests:
Ckmeans.1d.dp (>= 3.3.1),
vcd (>= 1.3),
testthat,
lintr,
igraph (>= 1.0.1)
Depends:
R (>= 3.3.0)

View File

@@ -168,7 +168,7 @@ cb.evaluation.log <- function() {
#' at the beginning of each iteration.
#'
#' Note that when training is resumed from some previous model, and a function is used to
#' reset a parameter value, the \code{nround} argument in this function would be the
#' reset a parameter value, the \code{nrounds} argument in this function would be the
#' number of boosting rounds in the current training.
#'
#' Callback function expects the following values to be set in its calling frame:
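A minimal R sketch of how this callback is typically wired up, assuming the agaricus demo data; the eta schedule is illustrative only.

```r
# Decay eta over the boosting rounds with cb.reset.parameters(); each schedule entry
# may be a vector of length nrounds or a function(iteration, nrounds).
library(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
nrounds <- 4
eta_schedule <- list(eta = function(iteration, nrounds) 1 / iteration)
bst <- xgb.train(params = list(max_depth = 2, objective = "binary:logistic"),
                 data = dtrain, nrounds = nrounds, nthread = 2,
                 watchlist = list(train = dtrain),
                 callbacks = list(cb.reset.parameters(eta_schedule)))
```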
@@ -691,11 +691,6 @@ cb.gblinear.history <- function(sparse=FALSE) {
#' For an \code{xgb.cv} result, a list of such matrices is returned with the elements
#' corresponding to CV folds.
#'
#' @examples
#' \dontrun{
#' See \code{\link{cv.gblinear.history}}
#' }
#'
#' @export
xgb.gblinear.history <- function(model, class_index = NULL) {
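For context, a minimal sketch of how the coefficient history is recorded and then read back with this function, assuming the agaricus demo data and a gblinear booster; parameter values are illustrative only.

```r
# Record per-round gblinear coefficients with cb.gblinear.history(), then extract
# them as a matrix (one row per boosting round) with xgb.gblinear.history().
library(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
param <- list(booster = "gblinear", objective = "binary:logistic", eta = 0.5, lambda = 0.1)
bst <- xgb.train(params = param, data = dtrain, nrounds = 10,
                 watchlist = list(train = dtrain),
                 callbacks = list(cb.gblinear.history()))
coef_path <- xgb.gblinear.history(bst)
dim(coef_path)  # rows: boosting rounds, columns: model coefficients
```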

View File

@@ -37,11 +37,14 @@ xgb.handleToBooster <- function(handle, raw = NULL) {
# Check whether xgb.Booster.handle is null
# internal utility function
is.null.handle <- function(handle) {
if (is.null(handle)) return(TRUE)
if (!identical(class(handle), "xgb.Booster.handle"))
stop("argument type must be xgb.Booster.handle")
if (is.null(handle) || .Call(XGCheckNullPtr_R, handle))
if (.Call(XGCheckNullPtr_R, handle))
return(TRUE)
return(FALSE)
}
@@ -537,7 +540,7 @@ xgb.ntree <- function(bst) {
print.xgb.Booster <- function(x, verbose = FALSE, ...) {
cat('##### xgb.Booster\n')
valid_handle <- is.null.handle(x$handle)
valid_handle <- !is.null.handle(x$handle)
if (!valid_handle)
cat("Handle is invalid! Suggest using xgb.Booster.complete\n")

View File

@@ -52,9 +52,9 @@
#' dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
#'
#' param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
#' nround = 4
#' nrounds = 4
#'
#' bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
#' bst = xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)
#'
#' # Model accuracy without new features
#' accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) /
@@ -68,7 +68,7 @@
#' new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
#' new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
#' watchlist <- list(train = new.dtrain)
#' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nround, nthread = 2)
#' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
#'
#' # Model accuracy with new features
#' accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) /

View File

@@ -83,7 +83,7 @@
#' \item \code{params} parameters that were passed to the xgboost library. Note that it does not
#' capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
#' \item \code{callbacks} callback functions that were either automatically assigned or
#' explicitely passed.
#' explicitly passed.
#' \item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
#' first column corresponding to iteration number and the rest corresponding to the
#' CV-based evaluation means and standard deviations for the training and test CV-sets.

View File

@@ -30,7 +30,8 @@
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' # save the model in file 'xgb.model.dump'
#' xgb.dump(bst, 'xgb.model.dump', with_stats = TRUE)
#' dump_path = file.path(tempdir(), 'model.dump')
#' xgb.dump(bst, dump_path, with_stats = TRUE)
#'
#' # print the model without saving it to a file
#' print(xgb.dump(bst, with_stats = TRUE))

View File

@@ -212,6 +212,7 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
}
if (plot && which == "2d") {
# TODO
warning("Bivariate plotting is currently not available.")
}
invisible(list(data = data, shap_contrib = shap_contrib))
}

View File

@@ -22,7 +22,7 @@
#' \item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. The larger, the more conservative the algorithm will be.
#' \item \code{max_depth} maximum depth of a tree. Default: 6
#' \item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
#' \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nround}. Default: 1
#' \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
#' \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
#' \item \code{num_parallel_tree} Experimental parameter. Number of trees to grow per round. Useful to test Random Forest through XGBoost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{nrounds = 1}) accordingly. Default: 1
#' \item \code{monotone_constraints} A numerical vector consisting of \code{1}, \code{0} and \code{-1}, with its length equal to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.
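A minimal R sketch of the `monotone_constraints` parameter documented above, on a small simulated regression problem; the data and all numbers are illustrative only.

```r
# Constrain the first feature to a monotonically increasing effect and leave the
# second feature unconstrained, as described by the monotone_constraints vector.
library(xgboost)
set.seed(1)
x <- matrix(runif(1000 * 2), ncol = 2)
y <- 3 * x[, 1] + rnorm(1000, sd = 0.1)
bst <- xgboost(data = x, label = y, max_depth = 3, eta = 0.3, nrounds = 20,
               objective = "reg:linear",
               monotone_constraints = c(1, 0))
```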

View File

@@ -30,4 +30,4 @@ Examples
Development
-----------
* See the [R Package section](https://xgboost.readthedocs.io/en/latest/how_to/contribute.html#r-package) of the contributiors guide.
* See the [R Package section](https://xgboost.readthedocs.io/en/latest/how_to/contribute.html#r-package) of the contributors guide.

View File

@@ -99,7 +99,8 @@ err <- as.numeric(sum(as.integer(pred > 0.5) != label))/length(label)
print(paste("test-error=", err))
# You can dump the tree you learned using xgb.dump into a text file
xgb.dump(bst, "dump.raw.txt", with_stats = T)
dump_path = file.path(tempdir(), 'dump.raw.txt')
xgb.dump(bst, dump_path, with_stats = T)
# Finally, you can check which features are the most important.
print("Most important features (look at column Gain):")

View File

@@ -5,20 +5,20 @@ data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
nround <- 2
nrounds <- 2
param <- list(max_depth=2, eta=1, silent=1, nthread=2, objective='binary:logistic')
cat('running cross validation\n')
# do cross validation, this will print result out as
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
xgb.cv(param, dtrain, nround, nfold=5, metrics={'error'})
xgb.cv(param, dtrain, nrounds, nfold=5, metrics={'error'})
cat('running cross validation, disable standard deviation display\n')
# do cross validation, this will print result out as
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
xgb.cv(param, dtrain, nround, nfold=5,
xgb.cv(param, dtrain, nrounds, nfold=5,
metrics='error', showsd = FALSE)
###
@@ -43,9 +43,9 @@ evalerror <- function(preds, dtrain) {
param <- list(max_depth=2, eta=1, silent=1,
objective = logregobj, eval_metric = evalerror)
# train with customized objective
xgb.cv(params = param, data = dtrain, nrounds = nround, nfold = 5)
xgb.cv(params = param, data = dtrain, nrounds = nrounds, nfold = 5)
# do cross validation with prediction values for each fold
res <- xgb.cv(params = param, data = dtrain, nrounds = nround, nfold = 5, prediction = TRUE)
res <- xgb.cv(params = param, data = dtrain, nrounds = nrounds, nfold = 5, prediction = TRUE)
res$evaluation_log
length(res$pred)

View File

@@ -7,10 +7,10 @@ dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
watchlist <- list(eval = dtest, train = dtrain)
nround = 2
nrounds = 2
# training the model for two rounds
bst = xgb.train(param, dtrain, nround, nthread = 2, watchlist)
bst = xgb.train(param, dtrain, nrounds, nthread = 2, watchlist)
cat('start testing prediction from first n trees\n')
labels <- getinfo(dtest,'label')

View File

@@ -11,10 +11,10 @@ dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
nround = 4
nrounds = 4
# training the model for two rounds
bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
bst = xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)
# Model accuracy without new features
accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) / length(agaricus.test$label)
@@ -43,7 +43,7 @@ new.features.test <- create.new.tree.features(bst, agaricus.test$data)
new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
watchlist <- list(train = new.dtrain)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nround, nthread = 2)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
# Model accuracy with new features
accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) / length(agaricus.test$label)

View File

@@ -22,7 +22,7 @@ This is a "pre-iteration" callback function used to reset booster's parameters
at the beginning of each iteration.
Note that when training is resumed from some previous model, and a function is used to
reset a parameter value, the \code{nround} argument in this function would be the
reset a parameter value, the \code{nrounds} argument in this function would be the
number of boosting rounds in the current training.
Callback function expects the following values to be set in its calling frame:

View File

@@ -63,9 +63,9 @@ dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
nround = 4
nrounds = 4
bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
bst = xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)
# Model accuracy without new features
accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) /
@@ -79,7 +79,7 @@ new.features.test <- xgb.create.features(model = bst, agaricus.test$data)
new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
watchlist <- list(train = new.dtrain)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nround, nthread = 2)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
# Model accuracy with new features
accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) /

View File

@@ -99,7 +99,7 @@ An object of class \code{xgb.cv.synchronous} with the following elements:
\item \code{params} parameters that were passed to the xgboost library. Note that it does not
capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
\item \code{callbacks} callback functions that were either automatically assigned or
explicitely passed.
explicitly passed.
\item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
first column corresponding to iteration number and the rest corresponding to the
CV-based evaluation means and standard deviations for the training and test CV-sets.

View File

@@ -44,7 +44,8 @@ test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
# save the model in file 'xgb.model.dump'
xgb.dump(bst, 'xgb.model.dump', with_stats = TRUE)
dump.path = file.path(tempdir(), 'model.dump')
xgb.dump(bst, dump.path, with_stats = TRUE)
# print the model without saving it to a file
print(xgb.dump(bst, with_stats = TRUE))

View File

@@ -27,9 +27,3 @@ A helper function to extract the matrix of linear coefficients' history
from a gblinear model created while using the \code{cb.gblinear.history()}
callback.
}
\examples{
\dontrun{
See \\code{\\link{cv.gblinear.history}}
}
}

View File

@@ -35,7 +35,7 @@ xgboost(data = NULL, label = NULL, missing = NA, weight = NULL,
\item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. The larger, the more conservative the algorithm will be.
\item \code{max_depth} maximum depth of a tree. Default: 6
\item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
\item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nround}. Default: 1
\item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
\item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
\item \code{num_parallel_tree} Experimental parameter. Number of trees to grow per round. Useful to test Random Forest through XGBoost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{nrounds = 1}) accordingly. Default: 1
\item \code{monotone_constraints} A numerical vector consisting of \code{1}, \code{0} and \code{-1}, with its length equal to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.

View File

@@ -0,0 +1,14 @@
#!/bin/bash
# remove all #pragma's that suppress compiler warnings
set -e
set -x
for file in xgboost/src/dmlc-core/include/dmlc/*.h
do
sed -i.bak -e 's/^.*#pragma GCC diagnostic.*$//' -e 's/^.*#pragma clang diagnostic.*$//' -e 's/^.*#pragma warning.*$//' "${file}"
done
for file in xgboost/src/dmlc-core/include/dmlc/*.h.bak
do
rm "${file}"
done
set +x
set +e

View File

@@ -77,6 +77,18 @@ test_that("xgb.DMatrix: slice, dim", {
expect_equal(getinfo(dsub1, 'label'), getinfo(dsub2, 'label'))
})
test_that("xgb.DMatrix: slice, trailing empty rows", {
data(agaricus.train, package='xgboost')
train_data <- agaricus.train$data
train_label <- agaricus.train$label
dtrain <- xgb.DMatrix(data=train_data, label=train_label)
slice(dtrain, 6513L)
train_data[6513, ] <- 0
dtrain <- xgb.DMatrix(data=train_data, label=train_label)
slice(dtrain, 6513L)
expect_equal(nrow(dtrain), 6513)
})
test_that("xgb.DMatrix: colnames", {
dtest <- xgb.DMatrix(test_data, label=test_label)
expect_equal(colnames(dtest), colnames(test_data))

View File

@@ -9,7 +9,7 @@ test_that("train and prediction when gctorture is on", {
test <- agaricus.test
gctorture(TRUE)
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
pred <- predict(bst, test$data)
gctorture(FALSE)
})

View File

@@ -7,6 +7,9 @@ require(vcd, quietly = TRUE)
float_tolerance = 5e-6
# disable some tests for Win32
win32_flag = .Platform$OS.type == "windows" && .Machine$sizeof.pointer != 8
set.seed(1982)
data(Arthritis)
df <- data.table(Arthritis, keep.rownames = F)
@@ -41,15 +44,18 @@ mbst.GLM <- xgboost(data = as.matrix(iris[, -5]), label = mlabel, verbose = 0,
test_that("xgb.dump works", {
expect_length(xgb.dump(bst.Tree), 200)
expect_true(xgb.dump(bst.Tree, 'xgb.model.dump', with_stats = T))
expect_true(file.exists('xgb.model.dump'))
expect_gt(file.size('xgb.model.dump'), 8000)
if (!win32_flag)
expect_length(xgb.dump(bst.Tree), 200)
dump_file = file.path(tempdir(), 'xgb.model.dump')
expect_true(xgb.dump(bst.Tree, dump_file, with_stats = T))
expect_true(file.exists(dump_file))
expect_gt(file.size(dump_file), 8000)
# JSON format
dmp <- xgb.dump(bst.Tree, dump_format = "json")
expect_length(dmp, 1)
expect_length(grep('nodeid', strsplit(dmp, '\n')[[1]]), 188)
if (!win32_flag)
expect_length(grep('nodeid', strsplit(dmp, '\n')[[1]]), 188)
})
test_that("xgb.dump works for gblinear", {
@@ -209,7 +215,8 @@ test_that("xgb.model.dt.tree works with and without feature names", {
names.dt.trees <- c("Tree", "Node", "ID", "Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover")
dt.tree <- xgb.model.dt.tree(feature_names = feature.names, model = bst.Tree)
expect_equal(names.dt.trees, names(dt.tree))
expect_equal(dim(dt.tree), c(188, 10))
if (!win32_flag)
expect_equal(dim(dt.tree), c(188, 10))
expect_output(str(dt.tree), 'Feature.*\\"Age\\"')
dt.tree.0 <- xgb.model.dt.tree(model = bst.Tree)
@@ -235,7 +242,8 @@ test_that("xgb.model.dt.tree throws error for gblinear", {
test_that("xgb.importance works with and without feature names", {
importance.Tree <- xgb.importance(feature_names = feature.names, model = bst.Tree)
expect_equal(dim(importance.Tree), c(7, 4))
if (!win32_flag)
expect_equal(dim(importance.Tree), c(7, 4))
expect_equal(colnames(importance.Tree), c("Feature", "Gain", "Cover", "Frequency"))
expect_output(str(importance.Tree), 'Feature.*\\"Age\\"')

View File

@@ -7,6 +7,10 @@ data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
# Disable flaky tests for 32-bit Windows.
# See https://github.com/dmlc/xgboost/issues/3720
win32_flag = .Platform$OS.type == "windows" && .Machine$sizeof.pointer != 8
test_that("updating the model works", {
watchlist = list(train = dtrain, test = dtest)
@@ -29,7 +33,9 @@ test_that("updating the model works", {
tr1r <- xgb.model.dt.tree(model = bst1r)
# all should be the same when no subsampling
expect_equal(bst1$evaluation_log, bst1r$evaluation_log)
expect_equal(tr1, tr1r, tolerance = 0.00001, check.attributes = FALSE)
if (!win32_flag) {
expect_equal(tr1, tr1r, tolerance = 0.00001, check.attributes = FALSE)
}
# the same boosting with subsampling with an extra 'refresh' updater:
p2r <- modifyList(p2, list(updater = 'grow_colmaker,prune,refresh', refresh_leaf = FALSE))
@@ -38,7 +44,9 @@ test_that("updating the model works", {
tr2r <- xgb.model.dt.tree(model = bst2r)
# should be the same evaluation but different gains and larger cover
expect_equal(bst2$evaluation_log, bst2r$evaluation_log)
expect_equal(tr2[Feature == 'Leaf']$Quality, tr2r[Feature == 'Leaf']$Quality)
if (!win32_flag) {
expect_equal(tr2[Feature == 'Leaf']$Quality, tr2r[Feature == 'Leaf']$Quality)
}
expect_gt(sum(abs(tr2[Feature != 'Leaf']$Quality - tr2r[Feature != 'Leaf']$Quality)), 100)
expect_gt(sum(tr2r$Cover) / sum(tr2$Cover), 1.5)
@@ -61,7 +69,9 @@ test_that("updating the model works", {
expect_gt(sum(tr2u$Cover) / sum(tr2$Cover), 1.5)
# the results should be the same as for the model with an extra 'refresh' updater
expect_equal(bst2r$evaluation_log, bst2u$evaluation_log)
expect_equal(tr2r, tr2u, tolerance = 0.00001, check.attributes = FALSE)
if (!win32_flag) {
expect_equal(tr2r, tr2u, tolerance = 0.00001, check.attributes = FALSE)
}
# process type 'update' for no-subsampling model, refreshing only the tree stats from TEST data:
p1ut <- modifyList(p1, list(process_type = 'update', updater = 'refresh', refresh_leaf = FALSE))

View File

@@ -6,46 +6,28 @@
[![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE)
[![CRAN Status Badge](http://www.r-pkg.org/badges/version/xgboost)](http://cran.r-project.org/web/packages/xgboost)
[![PyPI version](https://badge.fury.io/py/xgboost.svg)](https://pypi.python.org/pypi/xgboost/)
[![Gitter chat for developers at https://gitter.im/dmlc/xgboost](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/dmlc/xgboost?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[Community](https://xgboost.ai/community) |
[Documentation](https://xgboost.readthedocs.org) |
[Resources](demo/README.md) |
[Installation](https://xgboost.readthedocs.org/en/latest/build.html) |
[Release Notes](NEWS.md) |
[RoadMap](https://github.com/dmlc/xgboost/issues/873)
[Contributors](CONTRIBUTORS.md) |
[Release Notes](NEWS.md)
XGBoost is an optimized distributed gradient boosting library designed to be highly ***efficient***, ***flexible*** and ***portable***.
It implements machine learning algorithms under the [Gradient Boosting](https://en.wikipedia.org/wiki/Gradient_boosting) framework.
XGBoost provides a parallel tree boosting (also known as GBDT, GBM) that solves many data science problems in a fast and accurate way.
The same code runs on major distributed environments (Hadoop, SGE, MPI) and can solve problems beyond billions of examples.
What's New
----------
* [XGBoost GPU support with fast histogram algorithm](https://github.com/dmlc/xgboost/tree/master/plugin/updater_gpu)
* [XGBoost4J: Portable Distributed XGboost in Spark, Flink and Dataflow](http://dmlc.ml/2016/03/14/xgboost4j-portable-distributed-xgboost-in-spark-flink-and-dataflow.html), see [JVM-Package](https://github.com/dmlc/xgboost/tree/master/jvm-packages)
* [Story and Lessons Behind the Evolution of XGBoost](http://homes.cs.washington.edu/~tqchen/2016/03/10/story-and-lessons-behind-the-evolution-of-xgboost.html)
* [Tutorial: Distributed XGBoost on AWS with YARN](https://xgboost.readthedocs.io/en/latest/tutorials/aws_yarn.html)
* [XGBoost brick](NEWS.md) Release
Ask a Question
--------------
* For reporting bugs please use the [xgboost/issues](https://github.com/dmlc/xgboost/issues) page.
* For generic questions or to share your experience using XGBoost please use the [XGBoost User Group](https://groups.google.com/forum/#!forum/xgboost-user/)
Help to Make XGBoost Better
---------------------------
XGBoost has been developed and used by a group of active community members. Your help is very valuable to make the package better for everyone.
- Check out [call for contributions](https://github.com/dmlc/xgboost/issues?q=is%3Aissue+label%3Acall-for-contribution+is%3Aopen) and [Roadmap](https://github.com/dmlc/xgboost/issues/873) to see what can be improved, or open an issue if you want something.
- Contribute to the [documents and examples](https://github.com/dmlc/xgboost/blob/master/doc/) to share your experience with other users.
- Add your stories and experience to [Awesome XGBoost](demo/README.md).
- Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md) after your patch has been merged.
- Please also update [NEWS.md](NEWS.md) on changes and improvements in API and docs.
License
-------
© Contributors, 2016. Licensed under an [Apache-2](https://github.com/dmlc/xgboost/blob/master/LICENSE) license.
Contribute to XGBoost
---------------------
XGBoost has been developed and used by a group of active community members. Your help is very valuable to make the package better for everyone.
Check out the [Community Page](https://xgboost.ai/community)
Reference
---------
- Tianqi Chen and Carlos Guestrin. [XGBoost: A Scalable Tree Boosting System](http://arxiv.org/abs/1603.02754). In 22nd SIGKDD Conference on Knowledge Discovery and Data Mining, 2016
- XGBoost originates from research project at University of Washington, see also the [Project Page at UW](http://dmlc.cs.washington.edu/xgboost.html).
- Tianqi Chen and Carlos Guestrin. [XGBoost: A Scalable Tree Boosting System](http://arxiv.org/abs/1603.02754). In 22nd SIGKDD Conference on Knowledge Discovery and Data Mining, 2016
- XGBoost originates from a research project at the University of Washington.

View File

@@ -7,6 +7,8 @@
#include "../dmlc-core/src/io/recordio_split.cc"
#include "../dmlc-core/src/io/input_split_base.cc"
#include "../dmlc-core/src/io/local_filesys.cc"
#include "../dmlc-core/src/io/filesys.cc"
#include "../dmlc-core/src/io/indexed_recordio_split.cc"
#include "../dmlc-core/src/data.cc"
#include "../dmlc-core/src/io.cc"
#include "../dmlc-core/src/recordio.cc"

View File

@@ -20,6 +20,7 @@
#include "../src/objective/regression_obj.cc"
#include "../src/objective/multiclass_obj.cc"
#include "../src/objective/rank_obj.cc"
#include "../src/objective/hinge.cc"
// gbms
#include "../src/gbm/gbm.cc"
@@ -43,6 +44,7 @@
#endif
// tress
#include "../src/tree/split_evaluator.cc"
#include "../src/tree/tree_model.cc"
#include "../src/tree/tree_updater.cc"
#include "../src/tree/updater_colmaker.cc"

View File

@@ -52,8 +52,10 @@ install:
Invoke-WebRequest http://raw.github.com/krlmlr/r-appveyor/master/scripts/appveyor-tool.ps1 -OutFile "$Env:TEMP\appveyor-tool.ps1"
Import-Module "$Env:TEMP\appveyor-tool.ps1"
Bootstrap
$DEPS = "c('data.table','magrittr','stringi','ggplot2','DiagrammeR','Ckmeans.1d.dp','vcd','testthat','igraph','knitr','rmarkdown')"
cmd.exe /c "R.exe -q -e ""install.packages($DEPS, repos='$CRAN', type='win.binary')"" 2>&1"
$DEPS = "c('data.table','magrittr','stringi','ggplot2','DiagrammeR','Ckmeans.1d.dp','vcd','testthat','lintr','knitr','rmarkdown')"
cmd.exe /c "R.exe -q -e ""install.packages($DEPS, repos='$CRAN', type='both')"" 2>&1"
$BINARY_DEPS = "c('XML','igraph')"
cmd.exe /c "R.exe -q -e ""install.packages($BINARY_DEPS, repos='$CRAN', type='win.binary')"" 2>&1"
}
build_script:

View File

@@ -15,25 +15,21 @@ else
if [[ ! -e ./rabit/Makefile ]]; then
echo ""
echo "Please clone the rabit repository into this directory."
echo "Here are the commands:"
echo "rm -rf rabit"
echo "git clone https://github.com/dmlc/rabit.git rabit"
echo "Please init the rabit submodule:"
echo "git submodule update --init --recursive -- rabit"
not_ready=1
fi
if [[ ! -e ./dmlc-core/Makefile ]]; then
echo ""
echo "Please clone the dmlc-core repository into this directory."
echo "Here are the commands:"
echo "rm -rf dmlc-core"
echo "git clone https://github.com/dmlc/dmlc-core.git dmlc-core"
echo "Please init the dmlc-core submodule:"
echo "git submodule update --init --recursive -- dmlc-core"
not_ready=1
fi
if [[ "${not_ready}" == "1" ]]; then
echo ""
echo "Please fix the errors above and retry the build or reclone the repository with:"
echo "Please fix the errors above and retry the build, or reclone the repository with:"
echo "git clone --recursive https://github.com/dmlc/xgboost.git"
echo ""
exit 1

58
cmake/Sanitizer.cmake Normal file
View File

@@ -0,0 +1,58 @@
# Set appropriate compiler and linker flags for sanitizers.
#
# Usage of this module:
# enable_sanitizers("address;leak")
# Add flags
macro(enable_sanitizer sanitizer)
if(${sanitizer} MATCHES "address")
find_package(ASan REQUIRED)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=address")
link_libraries(${ASan_LIBRARY})
elseif(${sanitizer} MATCHES "thread")
find_package(TSan REQUIRED)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=thread")
link_libraries(${TSan_LIBRARY})
elseif(${sanitizer} MATCHES "leak")
find_package(LSan REQUIRED)
set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=leak")
link_libraries(${LSan_LIBRARY})
else()
message(FATAL_ERROR "Sanitizer ${sanitizer} not supported.")
endif()
endmacro()
macro(enable_sanitizers SANITIZERS)
# Check sanitizers compatibility.
# Ideally, we should use if(san IN_LIST SANITIZERS) ... endif()
# But I haven't figured out how to make it work.
foreach ( _san ${SANITIZERS} )
string(TOLOWER ${_san} _san)
if (_san MATCHES "thread")
if (${_use_other_sanitizers})
message(FATAL_ERROR
"thread sanitizer is not compatible with ${_san} sanitizer.")
endif()
set(_use_thread_sanitizer 1)
else ()
if (${_use_thread_sanitizer})
message(FATAL_ERROR
"${_san} sanitizer is not compatible with thread sanitizer.")
endif()
set(_use_other_sanitizers 1)
endif()
endforeach()
message("Sanitizers: ${SANITIZERS}")
foreach( _san ${SANITIZERS} )
string(TOLOWER ${_san} _san)
enable_sanitizer(${_san})
endforeach()
message("Sanitizers compile flags: ${SAN_COMPILE_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_COMPILE_FLAGS}")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_COMPILE_FLAGS}")
endmacro()

View File

@@ -54,10 +54,25 @@ function(set_default_configuration_release)
endif()
endfunction(set_default_configuration_release)
# Generate nvcc compiler flags given a list of architectures
# Also generates PTX for the most recent architecture for forwards compatibility
function(format_gencode_flags flags out)
# Set up architecture flags
if(NOT flags)
if((CUDA_VERSION_MAJOR EQUAL 9) OR (CUDA_VERSION_MAJOR GREATER 9))
set(flags "35;50;52;60;61;70")
else()
set(flags "35;50;52;60;61")
endif()
endif()
# Generate SASS
foreach(ver ${flags})
set(${out} "${${out}}-gencode arch=compute_${ver},code=sm_${ver};")
endforeach()
# Generate PTX for last architecture
list(GET flags -1 ver)
set(${out} "${${out}}-gencode arch=compute_${ver},code=compute_${ver};")
set(${out} "${${out}}" PARENT_SCOPE)
endfunction(format_gencode_flags flags)

View File

@@ -0,0 +1,13 @@
set(ASan_LIB_NAME ASan)
find_library(ASan_LIBRARY
NAMES libasan.so libasan.so.4
PATHS /usr/lib64 /usr/lib /usr/local/lib64 /usr/local/lib)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(ASan DEFAULT_MSG
ASan_LIBRARY)
mark_as_advanced(
ASan_LIBRARY
ASan_LIB_NAME)

View File

@@ -1,79 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Tries to find GTest headers and libraries.
#
# Usage of this module as follows:
#
# find_package(GTest)
#
# Variables used by this module, they can change the default behaviour and need
# to be set before calling find_package:
#
# GTest_HOME - When set, this path is inspected instead of standard library
# locations as the root of the GTest installation.
# The environment variable GTEST_HOME overrides this variable.
#
# This module defines
# GTEST_INCLUDE_DIR, directory containing headers
# GTEST_LIBS, directory containing gtest libraries
# GTEST_STATIC_LIB, path to libgtest.a
# GTEST_SHARED_LIB, path to libgtest's shared library
# GTEST_FOUND, whether gtest has been found
find_path(GTEST_INCLUDE_DIR NAMES gtest/gtest.h gtest.h PATHS ${CMAKE_SOURCE_DIR}/gtest/include NO_DEFAULT_PATH)
find_library(GTEST_LIBRARIES NAMES gtest PATHS ${CMAKE_SOURCE_DIR}/gtest/lib NO_DEFAULT_PATH)
if (GTEST_INCLUDE_DIR )
message(STATUS "Found the GTest includes: ${GTEST_INCLUDE_DIR}")
endif ()
if (GTEST_INCLUDE_DIR AND GTEST_LIBRARIES)
set(GTEST_FOUND TRUE)
get_filename_component( GTEST_LIBS ${GTEST_LIBRARIES} PATH )
set(GTEST_LIB_NAME gtest)
set(GTEST_STATIC_LIB ${GTEST_LIBS}/${CMAKE_STATIC_LIBRARY_PREFIX}${GTEST_LIB_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX})
set(GTEST_MAIN_STATIC_LIB ${GTEST_LIBS}/${CMAKE_STATIC_LIBRARY_PREFIX}${GTEST_LIB_NAME}_main${CMAKE_STATIC_LIBRARY_SUFFIX})
set(GTEST_SHARED_LIB ${GTEST_LIBS}/${CMAKE_SHARED_LIBRARY_PREFIX}${GTEST_LIB_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX})
else ()
set(GTEST_FOUND FALSE)
endif ()
if (GTEST_FOUND)
if (NOT GTest_FIND_QUIETLY)
message(STATUS "Found the GTest library: ${GTEST_LIBRARIES}")
endif ()
else ()
if (NOT GTest_FIND_QUIETLY)
set(GTEST_ERR_MSG "Could not find the GTest library. Looked in ")
if ( _gtest_roots )
set(GTEST_ERR_MSG "${GTEST_ERR_MSG} in ${_gtest_roots}.")
else ()
set(GTEST_ERR_MSG "${GTEST_ERR_MSG} system search paths.")
endif ()
if (GTest_FIND_REQUIRED)
message(FATAL_ERROR "${GTEST_ERR_MSG}")
else (GTest_FIND_REQUIRED)
message(STATUS "${GTEST_ERR_MSG}")
endif (GTest_FIND_REQUIRED)
endif ()
endif ()
mark_as_advanced(
GTEST_INCLUDE_DIR
GTEST_LIBS
GTEST_LIBRARIES
GTEST_STATIC_LIB
GTEST_SHARED_LIB
)

View File

@@ -0,0 +1,13 @@
set(LSan_LIB_NAME lsan)
find_library(LSan_LIBRARY
NAMES liblsan.so liblsan.so.0 liblsan.so.0.0.0
PATHS /usr/lib64 /usr/lib /usr/local/lib64 /usr/local/lib)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(LSan DEFAULT_MSG
LSan_LIBRARY)
mark_as_advanced(
LSan_LIBRARY
LSan_LIB_NAME)

View File

@@ -0,0 +1,58 @@
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Tries to find NCCL headers and libraries.
#
# Usage of this module as follows:
#
# find_package(NCCL)
#
# Variables used by this module, they can change the default behaviour and need
# to be set before calling find_package:
#
# NCCL_ROOT - When set, this path is inspected instead of standard library
# locations as the root of the NCCL installation.
# The environment variable NCCL_ROOT overrides this variable.
#
# This module defines
# Nccl_FOUND, whether nccl has been found
# NCCL_INCLUDE_DIR, directory containing header
# NCCL_LIBRARY, directory containing nccl library
# NCCL_LIB_NAME, nccl library name
#
# This module assumes that the user has already called find_package(CUDA)
set(NCCL_LIB_NAME nccl_static)
find_path(NCCL_INCLUDE_DIR
NAMES nccl.h
PATHS $ENV{NCCL_ROOT}/include ${NCCL_ROOT}/include ${CUDA_INCLUDE_DIRS} /usr/include)
find_library(NCCL_LIBRARY
NAMES ${NCCL_LIB_NAME}
PATHS $ENV{NCCL_ROOT}/lib ${NCCL_ROOT}/lib ${CUDA_INCLUDE_DIRS}/../lib /usr/lib)
if (NCCL_INCLUDE_DIR AND NCCL_LIBRARY)
get_filename_component(NCCL_LIBRARY ${NCCL_LIBRARY} PATH)
endif ()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Nccl DEFAULT_MSG
NCCL_INCLUDE_DIR NCCL_LIBRARY)
mark_as_advanced(
NCCL_INCLUDE_DIR
NCCL_LIBRARY
NCCL_LIB_NAME
)

View File

@@ -0,0 +1,13 @@
set(TSan_LIB_NAME tsan)
find_library(TSan_LIBRARY
NAMES libtsan.so libtsan.so.0 libtsan.so.0.0.0
PATHS /usr/lib64 /usr/lib /usr/local/lib64 /usr/local/lib)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(TSan DEFAULT_MSG
TSan_LIBRARY)
mark_as_advanced(
TSan_LIBRARY
TSan_LIB_NAME)

View File

@@ -80,12 +80,6 @@ booster = gblinear
# L2 regularization term on weights, default 0
lambda = 0.01
# L1 regularization term on weights, default 0
If ```agaricus.txt.test.buffer``` exists, and automatically loads from binary buffer if possible, this can speedup training process when you do training many times. You can disable it by setting ```use_buffer=0```.
- Buffer file can also be used as standalone input, i.e if buffer file exists, but original agaricus.txt.test was removed, xgboost will still run
* Deviation from LibSVM input format: xgboost is compatible with LibSVM format, with the following minor differences:
- xgboost allows feature index starts from 0
- for binary classification, the label is 1 for positive, 0 for negative, instead of +1,-1
- the feature indices in each line *do not* need to be sorted
alpha = 0.01
# L2 regularization term on bias, default 0
lambda_bias = 0.01
@@ -102,7 +96,7 @@ After training, we can use the output model to get the prediction of the test da
For binary classification, the output predictions are probability confidence scores in [0,1], corresponding to the probability of the label being positive.
#### Dump Model
This is a preliminary feature, so far only tree model support text dump. XGBoost can display the tree models in text files and we can scan the model in an easy way:
This is a preliminary feature, so only tree models support text dump. XGBoost can display the tree models in text or JSON files, and we can scan the model in an easy way:
```
../../xgboost mushroom.conf task=dump model_in=0002.model name_dump=dump.raw.txt
../../xgboost mushroom.conf task=dump model_in=0002.model fmap=featmap.txt name_dump=dump.nice.txt

View File

@@ -33,10 +33,10 @@ def logregobj(preds, dtrain):
# Keep this in mind when you use the customization; you may need to write a customized evaluation function
def evalerror(preds, dtrain):
labels = dtrain.get_label()
# return a pair metric_name, result
# return a pair metric_name, result. The metric name must not contain a colon (:)
# since preds are margin(before logistic transformation, cutoff at 0)
return 'error', float(sum(labels != (preds > 0.0))) / len(labels)
# training with customized objective, we can also do step by step training
# simply look at xgboost.py's implementation of train
bst = xgb.train(param, dtrain, num_round, watchlist, logregobj, evalerror)
bst = xgb.train(param, dtrain, num_round, watchlist, obj=logregobj, feval=evalerror)

View File

@@ -24,9 +24,9 @@ param <- list("objective" = "binary:logitraw",
"silent" = 1,
"nthread" = 16)
watchlist <- list("train" = xgmat)
nround = 120
nrounds = 120
print ("loading data end, start to boost trees")
bst = xgb.train(param, xgmat, nround, watchlist );
bst = xgb.train(param, xgmat, nrounds, watchlist );
# save out model
xgb.save(bst, "higgs.model")
print ('finish training')

View File

@@ -39,9 +39,9 @@ for (i in 1:length(threads)){
"silent" = 1,
"nthread" = thread)
watchlist <- list("train" = xgmat)
nround = 120
nrounds = 120
print ("loading data end, start to boost trees")
bst = xgb.train(param, xgmat, nround, watchlist );
bst = xgb.train(param, xgmat, nrounds, watchlist );
# save out model
xgb.save(bst, "higgs.model")
print ('finish training')

View File

@@ -23,13 +23,13 @@ param <- list("objective" = "multi:softprob",
"nthread" = 8)
# Run Cross Validation
cv.nround = 50
cv.nrounds = 50
bst.cv = xgb.cv(param=param, data = x[trind,], label = y,
nfold = 3, nrounds=cv.nround)
nfold = 3, nrounds=cv.nrounds)
# Train the model
nround = 50
bst = xgboost(param=param, data = x[trind,], label = y, nrounds=nround)
nrounds = 50
bst = xgboost(param=param, data = x[trind,], label = y, nrounds=nrounds)
# Make prediction
pred = predict(bst,x[teind,])

View File

@@ -121,19 +121,19 @@ param <- list("objective" = "multi:softprob",
"eval_metric" = "mlogloss",
"num_class" = numberOfClasses)
cv.nround <- 5
cv.nrounds <- 5
cv.nfold <- 3
bst.cv = xgb.cv(param=param, data = trainMatrix, label = y,
nfold = cv.nfold, nrounds = cv.nround)
nfold = cv.nfold, nrounds = cv.nrounds)
```
> As we can see the error rate is low on the test dataset (for a model trained in about 5 minutes).
Finally, we are ready to train the real model!!!
```{r modelTraining}
nround = 50
bst = xgboost(param=param, data = trainMatrix, label = y, nrounds=nround)
nrounds = 50
bst = xgboost(param=param, data = trainMatrix, label = y, nrounds=nrounds)
```
Model understanding
@@ -142,7 +142,7 @@ Model understanding
Feature importance
------------------
So far, we have built a model made of **`r nround`** trees.
So far, we have built a model made of **`r nrounds`** trees.
To build a tree, the dataset is divided recursively several times. At the end of the process, you get groups of observations (here, these observations are properties regarding **Otto** products).

View File

@@ -14,8 +14,15 @@ For more usage details please refer to the [binary classification demo](../binar
Instructions
====
The dataset for ranking demo is from LETOR04 MQ2008 fold1,
You can use the following command to run the example
The dataset for ranking demo is from LETOR04 MQ2008 fold1.
You can use the following command to run the example:
Get the data: ./wgetdata.sh
Run the example: ./runexp.sh
Get the data:
```
./wgetdata.sh
```
Run the example:
```
./runexp.sh
```

View File

@@ -1,4 +1,4 @@
#!/bin/bash
wget http://research.microsoft.com/en-us/um/beijing/projects/letor/LETOR4.0/Data/MQ2008.rar
wget https://s3-us-west-2.amazonaws.com/xgboost-examples/MQ2008.rar
unrar x MQ2008.rar
mv -f MQ2008/Fold1/*.txt .

View File

@@ -222,7 +222,7 @@ The code below is very usual. For more information, you can look at the document
```r
bst <- xgboost(data = sparse_matrix, label = output_vector, max.depth = 4,
eta = 1, nthread = 2, nround = 10,objective = "binary:logistic")
eta = 1, nthread = 2, nrounds = 10,objective = "binary:logistic")
```
```
@@ -244,7 +244,7 @@ A model which fits too well may [overfit](http://en.wikipedia.org/wiki/Overfitti
> Here you can see the numbers decrease until line 7 and then increase.
>
> It probably means we are overfitting. To fix that I should reduce the number of rounds to `nround = 4`. I will let things like that because I don't really care for the purpose of this example :-)
> It probably means we are overfitting. To fix that, I should reduce the number of rounds to `nrounds = 4`. I will leave it as is, because I don't really care for the purpose of this example :-)
Feature importance
------------------
@@ -448,7 +448,7 @@ train <- agaricus.train
test <- agaricus.test
#Random Forest™ - 1000 trees
bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree =0.5, nround = 1, objective = "binary:logistic")
bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree =0.5, nrounds = 1, objective = "binary:logistic")
```
```
@@ -457,7 +457,7 @@ bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parall
```r
#Boosting - 3 rounds
bst <- xgboost(data = train$data, label = train$label, max.depth = 4, nround = 3, objective = "binary:logistic")
bst <- xgboost(data = train$data, label = train$label, max.depth = 4, nrounds = 3, objective = "binary:logistic")
```
```

View File

@@ -1,17 +0,0 @@
XGBoost R Package
=================
[![CRAN Status Badge](http://www.r-pkg.org/badges/version/xgboost)](http://cran.r-project.org/web/packages/xgboost)
[![CRAN Downloads](http://cranlogs.r-pkg.org/badges/xgboost)](http://cran.rstudio.com/web/packages/xgboost/index.html)
You have found the XGBoost R Package!
Get Started
-----------
* Checkout the [Installation Guide](../build.md) contains instructions to install xgboost, and [Tutorials](#tutorials) for examples on how to use xgboost for various tasks.
* Please visit [walk through example](../../R-package/demo).
Tutorials
---------
- [Introduction to XGBoost in R](xgboostPresentation.md)
- [Discover your data with XGBoost in R](discoverYourData.md)

28
doc/R-package/index.rst Normal file
View File

@@ -0,0 +1,28 @@
#################
XGBoost R Package
#################
.. raw:: html
<a href="http://cran.r-project.org/web/packages/xgboost"><img alt="CRAN Status Badge" src="http://www.r-pkg.org/badges/version/xgboost"></a>
<a href="http://cran.rstudio.com/web/packages/xgboost/index.html"><img alt="CRAN Downloads" src="http://cranlogs.r-pkg.org/badges/xgboost"></a>
You have found the XGBoost R Package!
***********
Get Started
***********
* Check out the :doc:`Installation Guide </build>`, which contains instructions to install xgboost, and the :doc:`Tutorials </tutorials/index>` for examples of how to use XGBoost for various tasks.
* Read the `API documentation <https://cran.r-project.org/web/packages/xgboost/xgboost.pdf>`_.
* Please visit `Walk-through Examples <https://github.com/dmlc/xgboost/tree/master/R-package/demo>`_.
*********
Tutorials
*********
.. toctree::
:maxdepth: 2
:titlesonly:
Introduction to XGBoost in R <xgboostPresentation>
Understanding your dataset with XGBoost <discoverYourData>

View File

@@ -178,11 +178,11 @@ We will train decision tree model using the following parameters:
* `objective = "binary:logistic"`: we will train a binary classification model;
* `max.depth = 2`: the trees won't be deep, because our case is very simple;
* `nthread = 2`: the number of CPU threads we are going to use;
* `nround = 2`: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction.
* `nrounds = 2`: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction.
```r
bstSparse <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
bstSparse <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
```
```
@@ -200,7 +200,7 @@ Alternatively, you can put your dataset in a *dense* matrix, i.e. a basic **R**
```r
bstDense <- xgboost(data = as.matrix(train$data), label = train$label, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
bstDense <- xgboost(data = as.matrix(train$data), label = train$label, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
```
```
@@ -215,7 +215,7 @@ bstDense <- xgboost(data = as.matrix(train$data), label = train$label, max.depth
```r
dtrain <- xgb.DMatrix(data = train$data, label = train$label)
bstDMatrix <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
bstDMatrix <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
```
```
@@ -232,13 +232,13 @@ One of the simplest way to see the training progress is to set the `verbose` opt
```r
# verbose = 0, no message
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic", verbose = 0)
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 0)
```
```r
# verbose = 1, print evaluation metric
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic", verbose = 1)
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 1)
```
```
@@ -249,7 +249,7 @@ bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, o
```r
# verbose = 2, also print information about tree
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic", verbose = 2)
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 2)
```
```
@@ -372,7 +372,7 @@ For the purpose of this example, we use `watchlist` parameter. It is a list of `
```r
watchlist <- list(train=dtrain, test=dtest)
bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nround=2, watchlist=watchlist, objective = "binary:logistic")
bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nrounds=2, watchlist=watchlist, objective = "binary:logistic")
```
```
@@ -380,7 +380,7 @@ bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nround=2, watchli
## [1] train-error:0.022263 test-error:0.021726
```
**XGBoost** has computed at each round the same average error metric than seen above (we set `nround` to 2, that is why we have two lines). Obviously, the `train-error` number is related to the training dataset (the one the algorithm learns from) and the `test-error` number to the test dataset.
**XGBoost** has computed at each round the same average error metric as seen above (we set `nrounds` to 2, that is why we have two lines). Obviously, the `train-error` number is related to the training dataset (the one the algorithm learns from) and the `test-error` number to the test dataset.
Both training and test error related metrics are very similar, and in some way, it makes sense: what we have learned from the training dataset matches the observations from the test dataset.
@@ -390,7 +390,7 @@ For a better understanding of the learning progression, you may want to have som
```r
bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nround=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nrounds=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
```
```
@@ -407,7 +407,7 @@ Until now, all the learnings we have performed were based on boosting trees. **X
```r
bst <- xgb.train(data=dtrain, booster = "gblinear", max.depth=2, nthread = 2, nround=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
bst <- xgb.train(data=dtrain, booster = "gblinear", max.depth=2, nthread = 2, nrounds=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
```
```
@@ -445,7 +445,7 @@ dtrain2 <- xgb.DMatrix("dtrain.buffer")
```
```r
bst <- xgb.train(data=dtrain2, max.depth=2, eta=1, nthread = 2, nround=2, watchlist=watchlist, objective = "binary:logistic")
bst <- xgb.train(data=dtrain2, max.depth=2, eta=1, nthread = 2, nrounds=2, watchlist=watchlist, objective = "binary:logistic")
```
```

23
doc/_static/custom.css vendored Normal file
View File

@@ -0,0 +1,23 @@
div.breathe-sectiondef.container {
width: 100%;
}
div.literal-block-wrapper.container {
width: 100%;
}
.red {
color: red;
}
table {
border: 0;
}
td, th {
padding: 1px 8px 1px 5px;
border-top: 0;
border-bottom: 1px solid #aaa;
border-left: 0;
border-right: 0;
}

View File

@@ -1,752 +0,0 @@
/*
* searchtools.js_t
* ~~~~~~~~~~~~~~~~
*
* Sphinx JavaScript utilities for the full-text search.
*
* :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
/* Non-minified version JS is _stemmer.js if file is provided */
/**
* Porter Stemmer
*/
var Stemmer = function() {
var step2list = {
ational: 'ate',
tional: 'tion',
enci: 'ence',
anci: 'ance',
izer: 'ize',
bli: 'ble',
alli: 'al',
entli: 'ent',
eli: 'e',
ousli: 'ous',
ization: 'ize',
ation: 'ate',
ator: 'ate',
alism: 'al',
iveness: 'ive',
fulness: 'ful',
ousness: 'ous',
aliti: 'al',
iviti: 'ive',
biliti: 'ble',
logi: 'log'
};
var step3list = {
icate: 'ic',
ative: '',
alize: 'al',
iciti: 'ic',
ical: 'ic',
ful: '',
ness: ''
};
var c = "[^aeiou]"; // consonant
var v = "[aeiouy]"; // vowel
var C = c + "[^aeiouy]*"; // consonant sequence
var V = v + "[aeiou]*"; // vowel sequence
var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0
var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1
var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1
var s_v = "^(" + C + ")?" + v; // vowel in stem
this.stemWord = function (w) {
var stem;
var suffix;
var firstch;
var origword = w;
if (w.length < 3)
return w;
var re;
var re2;
var re3;
var re4;
firstch = w.substr(0,1);
if (firstch == "y")
w = firstch.toUpperCase() + w.substr(1);
// Step 1a
re = /^(.+?)(ss|i)es$/;
re2 = /^(.+?)([^s])s$/;
if (re.test(w))
w = w.replace(re,"$1$2");
else if (re2.test(w))
w = w.replace(re2,"$1$2");
// Step 1b
re = /^(.+?)eed$/;
re2 = /^(.+?)(ed|ing)$/;
if (re.test(w)) {
var fp = re.exec(w);
re = new RegExp(mgr0);
if (re.test(fp[1])) {
re = /.$/;
w = w.replace(re,"");
}
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1];
re2 = new RegExp(s_v);
if (re2.test(stem)) {
w = stem;
re2 = /(at|bl|iz)$/;
re3 = new RegExp("([^aeiouylsz])\\1$");
re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re2.test(w))
w = w + "e";
else if (re3.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
else if (re4.test(w))
w = w + "e";
}
}
// Step 1c
re = /^(.+?)y$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(s_v);
if (re.test(stem))
w = stem + "i";
}
// Step 2
re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step2list[suffix];
}
// Step 3
re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step3list[suffix];
}
// Step 4
re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
re2 = /^(.+?)(s|t)(ion)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
if (re.test(stem))
w = stem;
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1] + fp[2];
re2 = new RegExp(mgr1);
if (re2.test(stem))
w = stem;
}
// Step 5
re = /^(.+?)e$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
re2 = new RegExp(meq1);
re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re.test(stem) || (re2.test(stem) && !(re3.test(stem))))
w = stem;
}
re = /ll$/;
re2 = new RegExp(mgr1);
if (re.test(w) && re2.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
// and turn initial Y back to y
if (firstch == "y")
w = firstch.toLowerCase() + w.substr(1);
return w;
}
}
/**
* Simple result scoring code.
*/
var Scorer = {
// Implement the following function to further tweak the score for each result
// The function takes a result array [filename, title, anchor, descr, score]
// and returns the new score.
/*
score: function(result) {
return result[4];
},
*/
// query matches the full name of an object
objNameMatch: 11,
// or matches in the last dotted part of the object name
objPartialMatch: 6,
// Additive scores depending on the priority of the object
objPrio: {0: 15, // used to be importantResults
1: 5, // used to be objectResults
2: -5}, // used to be unimportantResults
// Used when the priority is not in the mapping.
objPrioDefault: 0,
// query found in title
title: 15,
// query found in terms
term: 5
};
var splitChars = (function() {
var result = {};
var singles = [96, 180, 187, 191, 215, 247, 749, 885, 903, 907, 909, 930, 1014, 1648,
1748, 1809, 2416, 2473, 2481, 2526, 2601, 2609, 2612, 2615, 2653, 2702,
2706, 2729, 2737, 2740, 2857, 2865, 2868, 2910, 2928, 2948, 2961, 2971,
2973, 3085, 3089, 3113, 3124, 3213, 3217, 3241, 3252, 3295, 3341, 3345,
3369, 3506, 3516, 3633, 3715, 3721, 3736, 3744, 3748, 3750, 3756, 3761,
3781, 3912, 4239, 4347, 4681, 4695, 4697, 4745, 4785, 4799, 4801, 4823,
4881, 5760, 5901, 5997, 6313, 7405, 8024, 8026, 8028, 8030, 8117, 8125,
8133, 8181, 8468, 8485, 8487, 8489, 8494, 8527, 11311, 11359, 11687, 11695,
11703, 11711, 11719, 11727, 11735, 12448, 12539, 43010, 43014, 43019, 43587,
43696, 43713, 64286, 64297, 64311, 64317, 64319, 64322, 64325, 65141];
var i, j, start, end;
for (i = 0; i < singles.length; i++) {
result[singles[i]] = true;
}
var ranges = [[0, 47], [58, 64], [91, 94], [123, 169], [171, 177], [182, 184], [706, 709],
[722, 735], [741, 747], [751, 879], [888, 889], [894, 901], [1154, 1161],
[1318, 1328], [1367, 1368], [1370, 1376], [1416, 1487], [1515, 1519], [1523, 1568],
[1611, 1631], [1642, 1645], [1750, 1764], [1767, 1773], [1789, 1790], [1792, 1807],
[1840, 1868], [1958, 1968], [1970, 1983], [2027, 2035], [2038, 2041], [2043, 2047],
[2070, 2073], [2075, 2083], [2085, 2087], [2089, 2307], [2362, 2364], [2366, 2383],
[2385, 2391], [2402, 2405], [2419, 2424], [2432, 2436], [2445, 2446], [2449, 2450],
[2483, 2485], [2490, 2492], [2494, 2509], [2511, 2523], [2530, 2533], [2546, 2547],
[2554, 2564], [2571, 2574], [2577, 2578], [2618, 2648], [2655, 2661], [2672, 2673],
[2677, 2692], [2746, 2748], [2750, 2767], [2769, 2783], [2786, 2789], [2800, 2820],
[2829, 2830], [2833, 2834], [2874, 2876], [2878, 2907], [2914, 2917], [2930, 2946],
[2955, 2957], [2966, 2968], [2976, 2978], [2981, 2983], [2987, 2989], [3002, 3023],
[3025, 3045], [3059, 3076], [3130, 3132], [3134, 3159], [3162, 3167], [3170, 3173],
[3184, 3191], [3199, 3204], [3258, 3260], [3262, 3293], [3298, 3301], [3312, 3332],
[3386, 3388], [3390, 3423], [3426, 3429], [3446, 3449], [3456, 3460], [3479, 3481],
[3518, 3519], [3527, 3584], [3636, 3647], [3655, 3663], [3674, 3712], [3717, 3718],
[3723, 3724], [3726, 3731], [3752, 3753], [3764, 3772], [3774, 3775], [3783, 3791],
[3802, 3803], [3806, 3839], [3841, 3871], [3892, 3903], [3949, 3975], [3980, 4095],
[4139, 4158], [4170, 4175], [4182, 4185], [4190, 4192], [4194, 4196], [4199, 4205],
[4209, 4212], [4226, 4237], [4250, 4255], [4294, 4303], [4349, 4351], [4686, 4687],
[4702, 4703], [4750, 4751], [4790, 4791], [4806, 4807], [4886, 4887], [4955, 4968],
[4989, 4991], [5008, 5023], [5109, 5120], [5741, 5742], [5787, 5791], [5867, 5869],
[5873, 5887], [5906, 5919], [5938, 5951], [5970, 5983], [6001, 6015], [6068, 6102],
[6104, 6107], [6109, 6111], [6122, 6127], [6138, 6159], [6170, 6175], [6264, 6271],
[6315, 6319], [6390, 6399], [6429, 6469], [6510, 6511], [6517, 6527], [6572, 6592],
[6600, 6607], [6619, 6655], [6679, 6687], [6741, 6783], [6794, 6799], [6810, 6822],
[6824, 6916], [6964, 6980], [6988, 6991], [7002, 7042], [7073, 7085], [7098, 7167],
[7204, 7231], [7242, 7244], [7294, 7400], [7410, 7423], [7616, 7679], [7958, 7959],
[7966, 7967], [8006, 8007], [8014, 8015], [8062, 8063], [8127, 8129], [8141, 8143],
[8148, 8149], [8156, 8159], [8173, 8177], [8189, 8303], [8306, 8307], [8314, 8318],
[8330, 8335], [8341, 8449], [8451, 8454], [8456, 8457], [8470, 8472], [8478, 8483],
[8506, 8507], [8512, 8516], [8522, 8525], [8586, 9311], [9372, 9449], [9472, 10101],
[10132, 11263], [11493, 11498], [11503, 11516], [11518, 11519], [11558, 11567],
[11622, 11630], [11632, 11647], [11671, 11679], [11743, 11822], [11824, 12292],
[12296, 12320], [12330, 12336], [12342, 12343], [12349, 12352], [12439, 12444],
[12544, 12548], [12590, 12592], [12687, 12689], [12694, 12703], [12728, 12783],
[12800, 12831], [12842, 12880], [12896, 12927], [12938, 12976], [12992, 13311],
[19894, 19967], [40908, 40959], [42125, 42191], [42238, 42239], [42509, 42511],
[42540, 42559], [42592, 42593], [42607, 42622], [42648, 42655], [42736, 42774],
[42784, 42785], [42889, 42890], [42893, 43002], [43043, 43055], [43062, 43071],
[43124, 43137], [43188, 43215], [43226, 43249], [43256, 43258], [43260, 43263],
[43302, 43311], [43335, 43359], [43389, 43395], [43443, 43470], [43482, 43519],
[43561, 43583], [43596, 43599], [43610, 43615], [43639, 43641], [43643, 43647],
[43698, 43700], [43703, 43704], [43710, 43711], [43715, 43738], [43742, 43967],
[44003, 44015], [44026, 44031], [55204, 55215], [55239, 55242], [55292, 55295],
[57344, 63743], [64046, 64047], [64110, 64111], [64218, 64255], [64263, 64274],
[64280, 64284], [64434, 64466], [64830, 64847], [64912, 64913], [64968, 65007],
[65020, 65135], [65277, 65295], [65306, 65312], [65339, 65344], [65371, 65381],
[65471, 65473], [65480, 65481], [65488, 65489], [65496, 65497]];
for (i = 0; i < ranges.length; i++) {
start = ranges[i][0];
end = ranges[i][1];
for (j = start; j <= end; j++) {
result[j] = true;
}
}
return result;
})();
function splitQuery(query) {
var result = [];
var start = -1;
for (var i = 0; i < query.length; i++) {
if (splitChars[query.charCodeAt(i)]) {
if (start !== -1) {
result.push(query.slice(start, i));
start = -1;
}
} else if (start === -1) {
start = i;
}
}
if (start !== -1) {
result.push(query.slice(start));
}
return result;
}
/**
* Search Module
*/
var Search = {
_index : null,
_queued_query : null,
_pulse_status : -1,
init : function() {
var params = $.getQueryParameters();
if (params.q) {
var query = params.q[0];
$('input[name="q"]')[0].value = query;
this.performSearch(query);
}
},
loadIndex : function(url) {
$.ajax({type: "GET", url: url, data: null,
dataType: "script", cache: true,
complete: function(jqxhr, textstatus) {
if (textstatus != "success") {
document.getElementById("searchindexloader").src = url;
}
}});
},
setIndex : function(index) {
var q;
this._index = index;
if ((q = this._queued_query) !== null) {
this._queued_query = null;
Search.query(q);
}
},
hasIndex : function() {
return this._index !== null;
},
deferQuery : function(query) {
this._queued_query = query;
},
stopPulse : function() {
this._pulse_status = 0;
},
startPulse : function() {
if (this._pulse_status >= 0)
return;
function pulse() {
var i;
Search._pulse_status = (Search._pulse_status + 1) % 4;
var dotString = '';
for (i = 0; i < Search._pulse_status; i++)
dotString += '.';
Search.dots.text(dotString);
if (Search._pulse_status > -1)
window.setTimeout(pulse, 500);
}
pulse();
},
/**
* perform a search for something (or wait until index is loaded)
*/
performSearch : function(query) {
// create the required interface elements
this.out = $('#search-results');
this.title = $('<h2>' + _('Searching') + '</h2>').appendTo(this.out);
this.dots = $('<span></span>').appendTo(this.title);
this.status = $('<p style="display: none"></p>').appendTo(this.out);
this.output = $('<ul class="search"/>').appendTo(this.out);
$('#search-progress').text(_('Preparing search...'));
this.startPulse();
// index already loaded, the browser was quick!
if (this.hasIndex())
this.query(query);
else
this.deferQuery(query);
},
/**
* execute search (requires search index to be loaded)
*/
query : function(query) {
var i;
var stopwords = ["a","and","are","as","at","be","but","by","for","if","in","into","is","it","near","no","not","of","on","or","such","that","the","their","then","there","these","they","this","to","was","will","with"];
// stem the searchterms and add them to the correct list
var stemmer = new Stemmer();
var searchterms = [];
var excluded = [];
var hlterms = [];
var tmp = splitQuery(query);
var objectterms = [];
for (i = 0; i < tmp.length; i++) {
if (tmp[i] !== "") {
objectterms.push(tmp[i].toLowerCase());
}
if ($u.indexOf(stopwords, tmp[i].toLowerCase()) != -1 || tmp[i].match(/^\d+$/) ||
tmp[i] === "") {
// skip this "word"
continue;
}
// stem the word
var word = stemmer.stemWord(tmp[i].toLowerCase());
var toAppend;
// select the correct list
if (word[0] == '-') {
toAppend = excluded;
word = word.substr(1);
}
else {
toAppend = searchterms;
hlterms.push(tmp[i].toLowerCase());
}
// only add if not already in the list
if (!$u.contains(toAppend, word))
toAppend.push(word);
}
var highlightstring = '?highlight=' + $.urlencode(hlterms.join(" "));
// console.debug('SEARCH: searching for:');
// console.info('required: ', searchterms);
// console.info('excluded: ', excluded);
// prepare search
var terms = this._index.terms;
var titleterms = this._index.titleterms;
// array of [filename, title, anchor, descr, score]
var results = [];
$('#search-progress').empty();
// lookup as object
for (i = 0; i < objectterms.length; i++) {
var others = [].concat(objectterms.slice(0, i),
objectterms.slice(i+1, objectterms.length));
results = results.concat(this.performObjectSearch(objectterms[i], others));
}
// lookup as search terms in fulltext
results = results.concat(this.performTermsSearch(searchterms, excluded, terms, titleterms));
// let the scorer override scores with a custom scoring function
if (Scorer.score) {
for (i = 0; i < results.length; i++)
results[i][4] = Scorer.score(results[i]);
}
// now sort the results by score (in opposite order of appearance, since the
// display function below uses pop() to retrieve items) and then
// alphabetically
results.sort(function(a, b) {
var left = a[4];
var right = b[4];
if (left > right) {
return 1;
} else if (left < right) {
return -1;
} else {
// same score: sort alphabetically
left = a[1].toLowerCase();
right = b[1].toLowerCase();
return (left > right) ? -1 : ((left < right) ? 1 : 0);
}
});
// for debugging
//Search.lastresults = results.slice(); // a copy
//console.info('search results:', Search.lastresults);
// print the results
var resultCount = results.length;
function displayNextItem() {
// results left, load the summary and display it
if (results.length) {
var item = results.pop();
var listItem = $('<li style="display:none"></li>');
if (DOCUMENTATION_OPTIONS.FILE_SUFFIX === '') {
// dirhtml builder
var dirname = item[0] + '/';
if (dirname.match(/\/index\/$/)) {
dirname = dirname.substring(0, dirname.length-6);
} else if (dirname == 'index/') {
dirname = '';
}
listItem.append($('<a/>').attr('href',
DOCUMENTATION_OPTIONS.URL_ROOT + dirname +
highlightstring + item[2]).html(item[1]));
} else {
// normal html builders
listItem.append($('<a/>').attr('href',
item[0] + DOCUMENTATION_OPTIONS.FILE_SUFFIX +
highlightstring + item[2]).html(item[1]));
}
if (item[3]) {
listItem.append($('<span> (' + item[3] + ')</span>'));
Search.output.append(listItem);
listItem.slideDown(5, function() {
displayNextItem();
});
} else if (DOCUMENTATION_OPTIONS.HAS_SOURCE) {
$.ajax({url: DOCUMENTATION_OPTIONS.URL_ROOT + '_sources/' + item[0] + '.md.txt',
dataType: "text",
complete: function(jqxhr, textstatus) {
var data = jqxhr.responseText;
if (data !== '' && data !== undefined) {
listItem.append(Search.makeSearchSummary(data, searchterms, hlterms));
}
Search.output.append(listItem);
listItem.slideDown(5, function() {
displayNextItem();
});
}});
} else {
// no source available, just display title
Search.output.append(listItem);
listItem.slideDown(5, function() {
displayNextItem();
});
}
}
// search finished, update title and status message
else {
Search.stopPulse();
Search.title.text(_('Search Results'));
if (!resultCount)
Search.status.text(_('Your search did not match any documents. Please make sure that all words are spelled correctly and that you\'ve selected enough categories.'));
else
Search.status.text(_('Search finished, found %s page(s) matching the search query.').replace('%s', resultCount));
Search.status.fadeIn(500);
}
}
displayNextItem();
},
/**
* search for object names
*/
performObjectSearch : function(object, otherterms) {
var filenames = this._index.docnames;
var objects = this._index.objects;
var objnames = this._index.objnames;
var titles = this._index.titles;
var i;
var results = [];
for (var prefix in objects) {
for (var name in objects[prefix]) {
var fullname = (prefix ? prefix + '.' : '') + name;
if (fullname.toLowerCase().indexOf(object) > -1) {
var score = 0;
var parts = fullname.split('.');
// check for different match types: exact matches of full name or
// "last name" (i.e. last dotted part)
if (fullname == object || parts[parts.length - 1] == object) {
score += Scorer.objNameMatch;
// matches in last name
} else if (parts[parts.length - 1].indexOf(object) > -1) {
score += Scorer.objPartialMatch;
}
var match = objects[prefix][name];
var objname = objnames[match[1]][2];
var title = titles[match[0]];
// If more than one term searched for, we require other words to be
// found in the name/title/description
if (otherterms.length > 0) {
var haystack = (prefix + ' ' + name + ' ' +
objname + ' ' + title).toLowerCase();
var allfound = true;
for (i = 0; i < otherterms.length; i++) {
if (haystack.indexOf(otherterms[i]) == -1) {
allfound = false;
break;
}
}
if (!allfound) {
continue;
}
}
var descr = objname + _(', in ') + title;
var anchor = match[3];
if (anchor === '')
anchor = fullname;
else if (anchor == '-')
anchor = objnames[match[1]][1] + '-' + fullname;
// add custom score for some objects according to scorer
if (Scorer.objPrio.hasOwnProperty(match[2])) {
score += Scorer.objPrio[match[2]];
} else {
score += Scorer.objPrioDefault;
}
results.push([filenames[match[0]], fullname, '#'+anchor, descr, score]);
}
}
}
return results;
},
/**
* search for full-text terms in the index
*/
performTermsSearch : function(searchterms, excluded, terms, titleterms) {
var filenames = this._index.docnames;
var titles = this._index.titles;
var i, j, file;
var fileMap = {};
var scoreMap = {};
var results = [];
// perform the search on the required terms
for (i = 0; i < searchterms.length; i++) {
var word = searchterms[i];
var files = [];
var _o = [
{files: terms[word], score: Scorer.term},
{files: titleterms[word], score: Scorer.title}
];
// no match but word was a required one
if ($u.every(_o, function(o){return o.files === undefined;})) {
break;
}
// found search word in contents
$u.each(_o, function(o) {
var _files = o.files;
if (_files === undefined)
return
if (_files.length === undefined)
_files = [_files];
files = files.concat(_files);
// set score for the word in each file to Scorer.term
for (j = 0; j < _files.length; j++) {
file = _files[j];
if (!(file in scoreMap))
scoreMap[file] = {}
scoreMap[file][word] = o.score;
}
});
// create the mapping
for (j = 0; j < files.length; j++) {
file = files[j];
if (file in fileMap)
fileMap[file].push(word);
else
fileMap[file] = [word];
}
}
// now check if the files don't contain excluded terms
for (file in fileMap) {
var valid = true;
// check if all requirements are matched
if (fileMap[file].length != searchterms.length)
continue;
// ensure that none of the excluded terms is in the search result
for (i = 0; i < excluded.length; i++) {
if (terms[excluded[i]] == file ||
titleterms[excluded[i]] == file ||
$u.contains(terms[excluded[i]] || [], file) ||
$u.contains(titleterms[excluded[i]] || [], file)) {
valid = false;
break;
}
}
// if we have still a valid result we can add it to the result list
if (valid) {
// select one (max) score for the file.
// for better ranking, we should calculate ranking by using words statistics like basic tf-idf...
var score = $u.max($u.map(fileMap[file], function(w){return scoreMap[file][w]}));
results.push([filenames[file], titles[file], '', null, score]);
}
}
return results;
},
/**
* helper function to return a node containing the
* search summary for a given text. keywords is a list
* of stemmed words, hlwords is the list of normal, unstemmed
* words. the first one is used to find the occurrence, the
* latter for highlighting it.
*/
makeSearchSummary : function(text, keywords, hlwords) {
var textLower = text.toLowerCase();
var start = 0;
$.each(keywords, function() {
var i = textLower.indexOf(this.toLowerCase());
if (i > -1)
start = i;
});
start = Math.max(start - 120, 0);
var excerpt = ((start > 0) ? '...' : '') +
$.trim(text.substr(start, 240)) +
((start + 240 - text.length) ? '...' : '');
var rv = $('<div class="context"></div>').text(excerpt);
$.each(hlwords, function() {
rv = rv.highlightText(this, 'highlighted');
});
return rv;
}
};
/* Search initialization removed for Read the Docs */
$(document).ready(function() {
Search.init();
});


@@ -1,5 +0,0 @@
<div class="container">
<div class="footer">
<p> © 2015-2016 DMLC. All rights reserved. </p>
</div>
</div>


@@ -1,58 +0,0 @@
<div class="splash">
<div class="container">
<div class="row">
<div class="col-lg-12">
<h1>Scalable and Flexible Gradient Boosting</h1>
<div id="social">
<span>
<iframe src="https://ghbtns.com/github-btn.html?user=dmlc&repo=xgboost&type=star&count=true&v=2"
frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<iframe src="https://ghbtns.com/github-btn.html?user=dmlc&repo=xgboost&type=fork&count=true&v=2"
frameborder="0" scrolling="0" width="100px" height="20px"></iframe>
</span>
</div> <!-- end of social -->
<div class="get_start">
<a href="get_started/" class="get_start_btn">Get Started</a>
</div> <!-- end of get started button -->
</div>
</div>
</div>
</div>
<div class="section-tout">
<div class="container">
<div class="row">
<div class="col-lg-4 col-sm-6">
<h3><i class="fa fa-flag"></i> Flexible</h3>
<p>Supports regression, classification, ranking and user defined objectives.
</p>
</div>
<div class="col-lg-4 col-sm-6">
<h3><i class="fa fa-cube"></i> Portable</h3>
<p>Runs on Windows, Linux and OS X, as well as various cloud Platforms</p>
</div>
<div class="col-lg-4 col-sm-6">
<h3><i class="fa fa-wrench"></i>Multiple Languages</h3>
<p>Supports multiple languages including C++, Python, R, Java, Scala, Julia.</p>
</div>
<div class="col-lg-4 col-sm-6">
<h3><i class="fa fa-cogs"></i> Battle-tested</h3>
<p>Wins many data science and machine learning challenges.
Used in production by multiple companies.
</p>
</div>
<div class="col-lg-4 col-sm-6">
<h3><i class="fa fa-cloud"></i>Distributed on Cloud</h3>
<p>Supports distributed training on multiple machines, including AWS,
GCE, Azure, and Yarn clusters. Can be integrated with Flink, Spark and other cloud dataflow systems.</p>
</div>
<div class="col-lg-4 col-sm-6">
<h3><i class="fa fa-rocket"></i> Performance</h3>
<p>The well-optimized backend system for the best performance with limited resources.
The distributed version solves problems beyond billions of examples with same code.
</p>
</div>
</div>
</div>
</div>
</div>


@@ -1,156 +0,0 @@
{%- block doctype -%}
<!DOCTYPE html>
{%- endblock %}
{%- set reldelim1 = reldelim1 is not defined and ' &raquo;' or reldelim1 %}
{%- set reldelim2 = reldelim2 is not defined and ' |' or reldelim2 %}
{%- set render_sidebar = (not embedded) and (not theme_nosidebar|tobool) and
(sidebars != []) %}
{%- set url_root = pathto('', 1) %}
{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
{%- if not embedded and docstitle %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- else %}
{%- set titlesuffix = "" %}
{%- endif %}
{%- macro searchform(classes, button) %}
<form class="{{classes}}" role="search" action="{{ pathto('search') }}" method="get">
<div class="form-group">
<input type="text" name="q" class="form-control" {{ 'placeholder="Search"' if not button }} >
</div>
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
{% if button %}
<input type="submit" class="btn btn-default" value="search">
{% endif %}
</form>
{%- endmacro %}
{%- macro sidebarglobal() %}
<ul class="globaltoc">
{{ toctree(maxdepth=2|toint, collapse=False,includehidden=theme_globaltoc_includehidden|tobool) }}
</ul>
{%- endmacro %}
{%- macro sidebar() %}
{%- if render_sidebar %}
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
{%- block sidebartoc %}
{%- include "localtoc.html" %}
{%- endblock %}
</div>
</div>
{%- endif %}
{%- endmacro %}
{%- macro script() %}
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: '{{ url_root }}',
VERSION: '{{ release|e }}',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '{{ '' if no_search_suffix else file_suffix }}',
HAS_SOURCE: {{ has_source|lower }}
};
</script>
{% for name in ['jquery.js', 'underscore.js', 'doctools.js', 'searchtools-new.js'] %}
<script type="text/javascript" src="{{ pathto('_static/' + name, 1) }}"></script>
{% endfor %}
<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<!-- {%- for scriptfile in script_files %} -->
<!-- <script type="text/javascript" src="{{ pathto(scriptfile, 1) }}"></script> -->
<!-- {%- endfor %} -->
{%- endmacro %}
{%- macro css() %}
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" crossorigin="anonymous">
{% if pagename == 'index' %}
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css">
{%- else %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
<link rel="stylesheet" href="{{ pathto('_static/pygments.css', 1) }}" type="text/css" />
{%- endif %}
<link rel="stylesheet" href="{{ pathto('_static/xgboost.css', 1) }}">
{%- endmacro %}
<html lang="en">
<head>
<meta charset="{{ encoding }}">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
{# The above 3 meta tags *must* come first in the head; any other head content
must come *after* these tags. #}
{{ metatags }}
{%- block htmltitle %}
{%- if pagename != 'index' %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{%- else %}
<title>XGBoost Documents</title>
{%- endif %}
{%- endblock %}
{{ css() }}
{%- if not embedded %}
{{ script() }}
{%- if use_opensearch %}
<link rel="search" type="application/opensearchdescription+xml"
title="{% trans docstitle=docstitle|e %}Search within {{ docstitle }}{% endtrans %}"
href="{{ pathto('_static/opensearch.xml', 1) }}"/>
{%- endif %}
{%- if favicon %}
<link rel="shortcut icon" href="{{ pathto('_static/' + favicon, 1) }}"/>
{%- endif %}
{%- endif %}
{%- block linktags %}
{%- if hasdoc('about') %}
<link rel="author" title="{{ _('About these documents') }}" href="{{ pathto('about') }}" />
{%- endif %}
{%- if hasdoc('genindex') %}
<link rel="index" title="{{ _('Index') }}" href="{{ pathto('genindex') }}" />
{%- endif %}
{%- if hasdoc('search') %}
<link rel="search" title="{{ _('Search') }}" href="{{ pathto('search') }}" />
{%- endif %}
{%- if hasdoc('copyright') %}
<link rel="copyright" title="{{ _('Copyright') }}" href="{{ pathto('copyright') }}" />
{%- endif %}
{%- if parents %}
<link rel="up" title="{{ parents[-1].title|striptags|e }}" href="{{ parents[-1].link|e }}" />
{%- endif %}
{%- if next %}
<link rel="next" title="{{ next.title|striptags|e }}" href="{{ next.link|e }}" />
{%- endif %}
{%- if prev %}
<link rel="prev" title="{{ prev.title|striptags|e }}" href="{{ prev.link|e }}" />
{%- endif %}
{%- endblock %}
{%- block extrahead %} {% endblock %}
<link rel="icon" type="image/png" href="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-icon.png">
</head>
<body role="document">
{%- include "navbar.html" %}
{% if pagename != 'index' %}
<div class="container">
<div class="row">
{{ sidebar() }}
<div class="content">
{% block body %} {% endblock %}
{%- include "footer.html" %}
</div>
</div>
</div>
{%- else %}
{%- include "index.html" %}
{%- include "footer.html" %}
{%- endif %} <!-- pagename != index -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js" integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" crossorigin="anonymous"></script>
</body>
</html>


@@ -1,41 +0,0 @@
<div class="navbar navbar-default navbar-fixed-top">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
</div>
<div id="navbar" class="navbar-collapse collapse">
<ul id="navbar" class="navbar navbar-left">
<li> <a href="{{url_root}}">XGBoost</a> </li>
{% for name in ['Get Started', 'Tutorials', 'How To'] %}
<li> <a href="{{url_root}}{{name.lower()|replace(" ", "_")}}/index.html">{{name}}</a> </li>
{% endfor %}
{% for name in ['Packages'] %}
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="true">{{name}} <span class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="{{url_root}}python/index.html">Python</a></li>
<li><a href="{{url_root}}R-package/index.html">R</a></li>
<li><a href="{{url_root}}jvm/index.html">JVM</a></li>
<li><a href="{{url_root}}julia/index.html">Julia</a></li>
<li><a href="{{url_root}}cli/index.html">CLI</a></li>
<li><a href="{{url_root}}gpu/index.html">GPU</a></li>
</ul>
</li>
{% endfor %}
<li> <a href="{{url_root}}/parameter.html"> Knobs </a> </li>
<li> {{searchform('', False)}} </li>
</ul>
<!--
<ul id="navbar" class="navbar navbar-right">
<li> <a href="{{url_root}}index.html"><span class="flag-icon flag-icon-us"></span></a> </li>
<li> <a href="{{url_root}}/zh/index.html"><span class="flag-icon flag-icon-cn"></span></a> </li>
</ul>
navbar -->
</div>
</div>
</div>


@@ -1,2 +0,0 @@
[theme]
inherit = basic


@@ -1,232 +0,0 @@
/* header section */
.splash{
padding:5em 0 1em 0;
background-color:#0079b2;
/* background-image:url(../img/bg.jpg); */
background-size:cover;
background-attachment:fixed;
color:#fff;
text-align:center
}
.splash h1{
font-size: 40px;
margin-bottom: 20px;
}
.splash .social{
margin:2em 0
}
.splash .get_start {
margin:2em 0
}
.splash .get_start_btn {
border: 2px solid #FFFFFF;
border-radius: 5px;
color: #FFFFFF;
display: inline-block;
font-size: 26px;
padding: 9px 20px;
}
.section-tout{
padding:3em 0 3em;
border-bottom:1px solid rgba(0,0,0,.05);
background-color:#eaf1f1
}
.section-tout .fa{
margin-right:.5em
}
.section-tout h3{
font-size:20px;
}
.section-tout p {
margin-bottom:2em
}
.section-inst{
padding:3em 0 3em;
border-bottom:1px solid rgba(0,0,0,.05);
text-align:center
}
.section-inst p {
margin-bottom:2em
}
.section-inst img {
-webkit-filter: grayscale(90%); /* Chrome, Safari, Opera */
filter: grayscale(90%);
margin-bottom:2em
}
.section-inst img:hover {
-webkit-filter: grayscale(0%); /* Chrome, Safari, Opera */
filter: grayscale(0%);
}
.footer{
padding-top: 40px;
}
.footer li{
float:right;
margin-right:1.5em;
margin-bottom:1.5em
}
.footer p{
font-size: 15px;
color: #888;
clear:right;
margin-bottom:0
}
/* sidebar */
div.sphinxsidebar {
margin-top: 20px;
margin-left: 0;
position: fixed;
overflow-y: scroll;
width: 250px;
top: 52px;
bottom: 0;
display: none
}
div.sphinxsidebar ul { padding: 0 }
div.sphinxsidebar ul ul { margin-left: 15px }
@media (min-width:1200px) {
.content { float: right; width: 66.66666667%; margin-right: 5% }
div.sphinxsidebar {display: block}
}
.github-btn { border: 0; overflow: hidden }
.container {
margin-right: auto;
margin-left: auto;
padding-left: 15px;
padding-right: 15px
}
body>.container {
padding-top: 80px
}
body {
font-size: 16px;
}
pre {
font-size: 14px;
}
/* navbar */
.navbar {
background-color:#0079b2;
border: 0px;
height: 65px;
}
.navbar-right li {
display:inline-block;
vertical-align:top;
padding: 22px 4px;
}
.navbar-left li {
display:inline-block;
vertical-align:top;
padding: 17px 10px;
/* margin: 0 5px; */
}
.navbar-left li a {
font-size: 22px;
color: #fff;
}
.navbar-left > li > a:hover{
color:#fff;
}
.flag-icon {
background-size: contain;
background-position: 50%;
background-repeat: no-repeat;
position: relative;
display: inline-block;
width: 1.33333333em;
line-height: 1em;
}
.flag-icon:before {
content: "\00a0";
}
.flag-icon-cn {
background-image: url(./cn.svg);
}
.flag-icon-us {
background-image: url(./us.svg);
}
/* .flags { */
/* padding: 10px; */
/* } */
.navbar-brand >img {
width: 110px;
}
.dropdown-menu li {
padding: 0px 0px;
width: 100%;
}
.dropdown-menu li a {
color: #0079b2;
font-size: 20px;
}
.section h1 {
padding-top: 90px;
margin-top: -60px;
padding-bottom: 10px;
font-size: 28px;
}
.section h2 {
padding-top: 80px;
margin-top: -60px;
padding-bottom: 10px;
font-size: 22px;
}
.section h3 {
padding-top: 80px;
margin-top: -64px;
padding-bottom: 8px;
}
.section h4 {
padding-top: 80px;
margin-top: -64px;
padding-bottom: 8px;
}
dt {
margin-top: -76px;
padding-top: 76px;
}
dt:target, .highlighted {
background-color: #fff;
}
.section code.descname {
font-size: 1em;
}


@@ -1,377 +0,0 @@
Installation Guide
==================
This page gives instructions on how to build and install the xgboost package from
scratch on various systems. It consists of two steps:
1. First build the shared library from the C++ codes (`libxgboost.so` for Linux/OSX and `xgboost.dll` for Windows).
- Exception: for R-package installation please directly refer to the R package section.
2. Then install the language packages (e.g. Python Package).
***Important*** the newest version of xgboost uses submodule to maintain packages. So when you clone the repo, remember to use the recursive option as follows.
```bash
git clone --recursive https://github.com/dmlc/xgboost
```
For windows users who use github tools, you can open the git shell, and type the following command.
```bash
git submodule init
git submodule update
```
Please refer to [Trouble Shooting Section](#trouble-shooting) first if you had any problem
during installation. If the instructions do not work for you, please feel free
to ask questions at [xgboost/issues](https://github.com/dmlc/xgboost/issues), or
even better to send pull request if you can fix the problem.
## Contents
- [Build the Shared Library](#build-the-shared-library)
- [Building on Ubuntu/Debian](#building-on-ubuntu-debian)
- [Building on macOS](#building-on-macos)
- [Building on Windows](#building-on-windows)
- [Building with GPU support](#building-with-gpu-support)
- [Windows Binaries](#windows-binaries)
- [Customized Building](#customized-building)
- [Python Package Installation](#python-package-installation)
- [R Package Installation](#r-package-installation)
- [Trouble Shooting](#trouble-shooting)
## Build the Shared Library
Our goal is to build the shared library:
- On Linux/OSX the target library is `libxgboost.so`
- On Windows the target library is `xgboost.dll`
The minimal building requirement is
- A recent c++ compiler supporting C++ 11 (g++-4.8 or higher)
We can edit `make/config.mk` to change the compile options, and then build by
`make`. If everything goes well, we can go to the specific language installation section.
### Building on Ubuntu/Debian
On Ubuntu, one builds xgboost by
```bash
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; make -j4
```
### Building on macOS
**Install with pip - simple method**
First, make sure you obtained *gcc-5* (newer version does not work with this method yet). Note: installation of `gcc` can take a while (~ 30 minutes)
```bash
brew install gcc5
```
You might need to run the following command with `sudo` if you run into some permission errors:
```bash
pip install xgboost
```
**Build from the source code - advanced method**
First, obtain gcc-7.x.x with brew (https://brew.sh/) if you want multi-threaded version, otherwise, Clang is ok if OpenMP / multi-threaded is not required. Note: installation of `gcc` can take a while (~ 30 minutes)
```bash
brew install gcc
```
Now, clone the repository
```bash
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; cp make/config.mk ./config.mk
```
Open config.mk and uncomment these two lines
```config.mk
export CC = gcc
export CXX = g++
```
and replace these two lines into(5 or 6 or 7; depending on your gcc-version)
```config.mk
export CC = gcc-7
export CXX = g++-7
```
To find your gcc version
```bash
gcc-version
```
and build using the following commands
```bash
make -j4
```
head over to `Python Package Installation` for the next steps
### Building on Windows
You need to first clone the xgboost repo with recursive option clone the submodules.
If you are using github tools, you can open the git-shell, and type the following command.
We recommend using [Git for Windows](https://git-for-windows.github.io/)
because it brings a standard bash shell. This will highly ease the installation process.
```bash
git submodule init
git submodule update
```
XGBoost support both build by MSVC or MinGW. Here is how you can build xgboost library using MinGW.
After installing [Git for Windows](https://git-for-windows.github.io/), you should have a shortcut `Git Bash`.
All the following steps are in the `Git Bash`.
In MinGW, `make` command comes with the name `mingw32-make`. You can add the following line into the `.bashrc` file.
```bash
alias make='mingw32-make'
```
(On 64-bit Windows, you should get [mingw64](https://sourceforge.net/projects/mingw-w64/) instead.) Make sure
that the path to MinGW is in the system PATH.
To build with MinGW, type:
```bash
cp make/mingw64.mk config.mk; make -j4
```
To build with Visual Studio 2013 use cmake. Make sure you have a recent version of cmake added to your path and then from the xgboost directory:
```bash
mkdir build
cd build
cmake .. -G"Visual Studio 12 2013 Win64"
```
This specifies an out of source build using the MSVC 12 64 bit generator. Open the .sln file in the build directory and build with Visual Studio. To use the Python module you can copy `xgboost.dll` into python-package\xgboost.
Other versions of Visual Studio may work but are untested.
### Building with GPU support
XGBoost can be built with GPU support for both Linux and Windows using cmake. GPU support works with the Python package as well as the CLI version. See [Installing R package with GPU support](#installing-r-package-with-gpu-support) for special instructions for R.
An up-to-date version of the CUDA toolkit is required.
From the command line on Linux starting from the xgboost directory:
```bash
$ mkdir build
$ cd build
$ cmake .. -DUSE_CUDA=ON
$ make -j
```
**Windows requirements** for GPU build: only Visual C++ 2015 or 2013 with CUDA v8.0 were fully tested. Either install Visual C++ 2015 Build Tools separately, or as a part of Visual Studio 2015. If you already have Visual Studio 2017, the Visual C++ 2015 Toolchain componenet has to be installed using the VS 2017 Installer. Likely, you would need to use the VS2015 x64 Native Tools command prompt to run the cmake commands given below. In some situations, however, things run just fine from MSYS2 bash command line.
On Windows, using cmake, see what options for Generators you have for cmake, and choose one with [arch] replaced by Win64:
```bash
cmake -help
```
Then run cmake as:
```bash
$ mkdir build
$ cd build
$ cmake .. -G"Visual Studio 14 2015 Win64" -DUSE_CUDA=ON
```
To speed up compilation, compute version specific to your GPU could be passed to cmake as, e.g., `-DGPU_COMPUTE_VER=50`.
The above cmake configuration run will create an xgboost.sln solution file in the build directory. Build this solution in release mode as a x64 build, either from Visual studio or from command line:
```
cmake --build . --target xgboost --config Release
```
If build seems to use only a single process, you might try to append an option like ` -- /m:6` to the above command.
### Windows Binaries
After the build process successfully ends, you will find a `xgboost.dll` library file inside `./lib/` folder, copy this file to the the API package folder like `python-package/xgboost` if you are using *python* API. And you are good to follow the below instructions.
Unofficial windows binaries and instructions on how to use them are hosted on [Guido Tapia's blog](http://www.picnet.com.au/blogs/guido/post/2016/09/22/xgboost-windows-x64-binaries-for-download/)
### Customized Building
The configuration of xgboost can be modified by ```config.mk```
- modify configuration on various distributed filesystem such as HDFS/Amazon S3/...
- First copy [make/config.mk](../make/config.mk) to the project root, on which
any local modification will be ignored by git, then modify the according flags.
## Python Package Installation
The python package is located at [python-package](../python-package).
There are several ways to install the package:
1. Install system-widely, which requires root permission
```bash
cd python-package; sudo python setup.py install
```
You will however need Python `distutils` module for this to
work. It is often part of the core python package or it can be installed using your
package manager, e.g. in Debian use
```bash
sudo apt-get install python-setuptools
```
*NOTE: If you recompiled xgboost, then you need to reinstall it again to
make the new library take effect*
2. Only set the environment variable `PYTHONPATH` to tell python where to find
the library. For example, assume we cloned `xgboost` on the home directory
`~`. then we can added the following line in `~/.bashrc`.
It is ***recommended for developers*** who may change the codes. The changes will be immediately reflected once you pulled the code and rebuild the project (no need to call ```setup``` again)
```bash
export PYTHONPATH=~/xgboost/python-package
```
3. Install only for the current user.
```bash
cd python-package; python setup.py develop --user
```
4. If you are installing the latest xgboost version which requires compilation, add MinGW to the system PATH:
```python
import os
os.environ['PATH'] = os.environ['PATH'] + ';C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'
```
## R Package Installation
### Installing pre-packaged version
You can install xgboost from CRAN just like any other R package:
```r
install.packages("xgboost")
```
Or you can install it from our weekly updated drat repo:
```r
install.packages("drat", repos="https://cran.rstudio.com")
drat:::addRepo("dmlc")
install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source")
```
For OSX users, single threaded version will be installed. To install multi-threaded version,
first follow [Building on OSX](#building-on-osx) to get the OpenMP enabled compiler, then:
- Set the `Makevars` file in highest piority for R.
The point is, there are three `Makevars` : `~/.R/Makevars`, `xgboost/R-package/src/Makevars`, and `/usr/local/Cellar/r/3.2.0/R.framework/Resources/etc/Makeconf` (the last one obtained by running `file.path(R.home("etc"), "Makeconf")` in R), and `SHLIB_OPENMP_CXXFLAGS` is not set by default!! After trying, it seems that the first one has highest piority (surprise!).
Then inside R, run
```R
install.packages("drat", repos="https://cran.rstudio.com")
drat:::addRepo("dmlc")
install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source")
```
### Installing the development version
Make sure you have installed git and a recent C++ compiler supporting C++11 (e.g., g++-4.8 or higher).
On Windows, Rtools must be installed, and its bin directory has to be added to PATH during the installation.
And see the previous subsection for an OSX tip.
Due to the use of git-submodules, `devtools::install_github` can no longer be used to install the latest version of R package.
Thus, one has to run git to check out the code first:
```bash
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost
git submodule init
git submodule update
cd R-package
R CMD INSTALL .
```
If the last line fails because of "R: command not found", it means that R was not set up to run from command line.
In this case, just start R as you would normally do and run the following:
```r
setwd('wherever/you/cloned/it/xgboost/R-package/')
install.packages('.', repos = NULL, type="source")
```
The package could also be built and installed with cmake (and Visual C++ 2015 on Windows) using instructions from the next section, but without GPU support (omit the `-DUSE_CUDA=ON` cmake parameter).
If all fails, try [building the shared library](#build-the-shared-library) to see whether a problem is specific to R package or not.
### Installing R package with GPU support
The procedure and requirements are similar as in [Building with GPU support](#building-with-gpu-support), so make sure to read it first.
On Linux, starting from the xgboost directory:
```bash
mkdir build
cd build
cmake .. -DUSE_CUDA=ON -DR_LIB=ON
make install -j
```
When default target is used, an R package shared library would be built in the `build` area.
The `install` target, in addition, assembles the package files with this shared library under `build/R-package`, and runs `R CMD INSTALL`.
On Windows, cmake with Visual C++ Build Tools (or Visual Studio) has to be used to build an R package with GPU support. Rtools must also be installed (perhaps, some other MinGW distributions with `gendef.exe` and `dlltool.exe` would work, but that was not tested).
```bash
mkdir build
cd build
cmake .. -G"Visual Studio 14 2015 Win64" -DUSE_CUDA=ON -DR_LIB=ON
cmake --build . --target install --config Release
```
When `--target xgboost` is used, an R package dll would be built under `build/Release`.
The `--target install`, in addition, assembles the package files with this dll under `build/R-package`, and runs `R CMD INSTALL`.
If cmake can't find your R during the configuration step, you might provide the location of its executable to cmake like this: `-DLIBR_EXECUTABLE="C:/Program Files/R/R-3.4.1/bin/x64/R.exe"`.
If on Windows you get a "permission denied" error when trying to write to ...Program Files/R/... during the package installation, create a `.Rprofile` file in your personal home directory (if you don't already have one in there), and add a line to it which specifies the location of your R packages user library, like the following:
```r
.libPaths( unique(c("C:/Users/USERNAME/Documents/R/win-library/3.4", .libPaths())))
```
You might find the exact location by running `.libPaths()` in R GUI or RStudio.
## Trouble Shooting
1. **Compile failed after `git pull`**
Please first update the submodules, clean all and recompile:
```bash
git submodule update && make clean_all && make -j4
```
2. **Compile failed after `config.mk` is modified**
Need to clean all first:
```bash
make clean_all && make -j4
```
3. **Makefile: dmlc-core/make/dmlc.mk: No such file or directory**
We need to recursively clone the submodule, you can do:
```bash
git submodule init
git submodule update
```
Alternatively, do another clone
```bash
git clone https://github.com/dmlc/xgboost --recursive
```

445
doc/build.rst Normal file

@@ -0,0 +1,445 @@
##################
Installation Guide
##################
.. note:: Pre-built binary wheel for Python
If you are planning to use Python, consider installing XGBoost from a pre-built binary wheel, available from the Python Package Index (PyPI). You may download and install it by running
.. code-block:: bash
# Ensure that you are downloading one of the following:
# * xgboost-{version}-py2.py3-none-manylinux1_x86_64.whl
# * xgboost-{version}-py2.py3-none-win_amd64.whl
pip3 install xgboost
* The binary wheel will support GPU algorithms (`gpu_exact`, `gpu_hist`) on machines with NVIDIA GPUs. **However, it will not support multi-GPU training; only single GPU will be used.** To enable multi-GPU training, download and install the binary wheel from `this page <https://s3-us-west-2.amazonaws.com/xgboost-wheels/list.html>`_.
* Currently, we provide binary wheels for 64-bit Linux and Windows. See the short sketch below for a quick check of the installed wheel.
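As a quick sanity check after running ``pip3 install xgboost``, the following minimal sketch imports the package, builds a ``DMatrix`` from synthetic data, and requests the ``gpu_hist`` algorithm mentioned above. The data and parameter values are illustrative assumptions; drop ``tree_method`` (or set it to ``hist``) if no NVIDIA GPU is available.
.. code-block:: python
# Minimal sketch: confirm the wheel installed and request the GPU algorithm.
# The random data and parameter values below are illustrative only.
import numpy as np
import xgboost as xgb
print(xgb.__version__)                      # confirm the installed version
X = np.random.rand(200, 5)                  # synthetic features
y = np.random.randint(2, size=200)          # synthetic binary labels
dtrain = xgb.DMatrix(X, label=y)
params = {'objective': 'binary:logistic', 'tree_method': 'gpu_hist'}
bst = xgb.train(params, dtrain, num_boost_round=5)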
****************************
Building XGBoost from source
****************************
This page gives instructions on how to build and install XGBoost from scratch on various systems. It consists of two steps:
1. First build the shared library from the C++ codes (``libxgboost.so`` for Linux/OSX and ``xgboost.dll`` for Windows).
(For R-package installation, please directly refer to `R Package Installation`_.)
2. Then install the language packages (e.g. Python Package).
.. note:: Use of Git submodules
XGBoost uses Git submodules to manage dependencies. So when you clone the repo, remember to specify ``--recursive`` option:
.. code-block:: bash
git clone --recursive https://github.com/dmlc/xgboost
For Windows users who use GitHub tools, you can open the Git shell and type the following commands:
.. code-block:: batch
git submodule init
git submodule update
Please refer to the `Trouble Shooting`_ section first if you have any problems
during installation. If the instructions do not work for you, please feel free
to ask questions at `the user forum <https://discuss.xgboost.ai>`_.
**Contents**
* `Building the Shared Library`_
- `Building on Ubuntu/Debian`_
- `Building on OSX`_
- `Building on Windows`_
- `Building with GPU support`_
- `Customized Building`_
* `Python Package Installation`_
* `R Package Installation`_
* `Trouble Shooting`_
***************************
Building the Shared Library
***************************
Our goal is to build the shared library:
- On Linux/OSX the target library is ``libxgboost.so``
- On Windows the target library is ``xgboost.dll``
The minimal building requirement is
- A recent C++ compiler supporting C++11 (g++-4.8 or higher)
We can edit ``make/config.mk`` to change the compile options, and then build by
``make``. If everything goes well, we can go to the specific language installation section.
Building on Ubuntu/Debian
=========================
On Ubuntu, one builds XGBoost by running
.. code-block:: bash
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; make -j4
Building on OSX
===============
Install with pip: simple method
--------------------------------
First, make sure you have obtained ``gcc-5`` (newer versions do not work with this method yet). Note: installation of ``gcc`` can take a while (~30 minutes).
.. code-block:: bash
brew install gcc@5
Then install XGBoost with ``pip``:
.. code-block:: bash
pip3 install xgboost
You might need to run the command with ``sudo`` if you run into permission errors.
Build from the source code - advanced method
--------------------------------------------
First, obtain ``gcc-7`` with Homebrew (https://brew.sh/) if you want the multi-threaded version; Clang is okay if multithreading is not required. Note: installation of ``gcc`` can take a while (~30 minutes).
.. code-block:: bash
brew install gcc@7
Now, clone the repository:
.. code-block:: bash
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; cp make/config.mk ./config.mk
Open ``config.mk`` and uncomment these two lines:
.. code-block:: bash
export CC = gcc
export CXX = g++
and replace these two lines as follows: (specify the GCC version)
.. code-block:: bash
export CC = gcc-7
export CXX = g++-7
Now, you may build XGBoost using the following command:
.. code-block:: bash
make -j4
You may now continue to `Python Package Installation`_.
Building on Windows
===================
You need to first clone the XGBoost repo with the ``--recursive`` option so that the submodules are cloned as well.
We recommend you use `Git for Windows <https://git-for-windows.github.io/>`_, as it comes with a standard Bash shell; this greatly eases the installation process.
.. code-block:: bash
git submodule init
git submodule update
XGBoost supports compilation with both Microsoft Visual Studio and MinGW.
Compile XGBoost using MinGW
---------------------------
After installing `Git for Windows <https://git-for-windows.github.io/>`_, you should have a shortcut named ``Git Bash``. You should run all subsequent steps in ``Git Bash``.
In MinGW, ``make`` command comes with the name ``mingw32-make``. You can add the following line into the ``.bashrc`` file:
.. code-block:: bash
alias make='mingw32-make'
(On 64-bit Windows, you should get `MinGW64 <https://sourceforge.net/projects/mingw-w64/>`_ instead.) Make sure
that the path to MinGW is in the system PATH.
To build with MinGW, type:
.. code-block:: bash
cp make/mingw64.mk config.mk; make -j4
Compile XGBoost with Microsoft Visual Studio
--------------------------------------------
To build with Visual Studio, we will need CMake. Make sure to install a recent version of CMake. Then run the following from the root of the XGBoost directory:
.. code-block:: bash
mkdir build
cd build
cmake .. -G"Visual Studio 12 2013 Win64"
This specifies an out-of-source build using the MSVC 12 64-bit generator. Open the ``.sln`` file in the build directory and build with Visual Studio. To use the Python module you can copy ``xgboost.dll`` into ``python-package/xgboost``.
After the build process successfully ends, you will find an ``xgboost.dll`` library file inside the ``./lib/`` folder. Copy this file to the API package folder (e.g. ``python-package/xgboost``) if you are using the Python API.
Unofficial Windows binaries and instructions on how to use them are hosted on `Guido Tapia's blog <http://www.picnet.com.au/blogs/guido/post/2016/09/22/xgboost-windows-x64-binaries-for-download/>`_.
.. _build_gpu_support:
Building with GPU support
=========================
XGBoost can be built with GPU support for both Linux and Windows using CMake. GPU support works with the Python package as well as the CLI version. See `Installing R package with GPU support`_ for special instructions for R.
An up-to-date version of the CUDA toolkit is required.
From the command line on Linux starting from the XGBoost directory:
.. code-block:: bash
mkdir build
cd build
cmake .. -DUSE_CUDA=ON
make -j
.. note:: Enabling multi-GPU training
By default, multi-GPU training is disabled and only a single GPU will be used. To enable multi-GPU training, set the option ``USE_NCCL=ON``. Multi-GPU training depends on NCCL2, available at `this link <https://developer.nvidia.com/nccl>`_. Since NCCL2 is only available for Linux machines, **multi-GPU training is available only for Linux**.
.. code-block:: bash
mkdir build
cd build
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON
make -j
On Windows, see what options for generators you have for CMake, and choose one with ``[arch]`` replaced with Win64:
.. code-block:: bash
cmake -help
Then run CMake as follows:
.. code-block:: bash
mkdir build
cd build
cmake .. -G"Visual Studio 14 2015 Win64" -DUSE_CUDA=ON
.. note:: Visual Studio 2017 Win64 Generator may not work
Choosing the Visual Studio 2017 generator may cause compilation failure. When it happens, specify the 2015 compiler by adding the ``-T`` option:
.. code-block:: bash
make .. -G"Visual Studio 15 2017 Win64" -T v140,cuda=8.0 -DR_LIB=ON -DUSE_CUDA=ON
To speed up compilation, the compute version specific to your GPU could be passed to cmake as, e.g., ``-DGPU_COMPUTE_VER=50``.
The above CMake configuration run will create an ``xgboost.sln`` solution file in the build directory. Build this solution in Release mode as an x64 build, either from Visual Studio or from the command line:
.. code-block:: bash
cmake --build . --target xgboost --config Release
To speed up compilation, run multiple jobs in parallel by appending option ``-- /MP``.
Customized Building
===================
The configuration file ``config.mk`` modifies several compilation flags:
- Whether to enable support for various distributed filesystems such as HDFS and Amazon S3
- Which compiler to use
- And some more
To customize, first copy ``make/config.mk`` to the project root and then modify the copy.
Python Package Installation
===========================
The python package is located at ``python-package/``.
There are several ways to install the package (a short check of which copy Python actually imports is sketched after this list):
1. Install system-wide, which requires root permission:
.. code-block:: bash
cd python-package; sudo python setup.py install
You will, however, need the Python ``distutils`` module for this to
work. It is often part of the core Python distribution, or it can be installed using your
package manager; e.g. on Debian, use
.. code-block:: bash
sudo apt-get install python-setuptools
.. note:: Re-compiling XGBoost
If you recompiled XGBoost, then you need to reinstall it to make the new library take effect.
2. Only set the environment variable ``PYTHONPATH`` to tell python where to find
the library. For example, assume we cloned ``xgboost`` in the home directory
``~``. Then we can add the following line to ``~/.bashrc``.
This option is **recommended for developers** who change the code frequently. The changes are reflected immediately once you pull the code and rebuild the project (no need to call ``setup`` again).
.. code-block:: bash
export PYTHONPATH=~/xgboost/python-package
3. Install only for the current user.
.. code-block:: bash
cd python-package; python setup.py develop --user
4. If you are installing the latest XGBoost version which requires compilation, add MinGW to the system PATH:
.. code-block:: python
import os
os.environ['PATH'] = os.environ['PATH'] + ';C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'
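Because the system-wide, ``--user``, and ``PYTHONPATH`` methods above can coexist, it is easy to end up importing a different copy than the one you just built. The following minimal sketch shows one way to confirm which installation Python actually resolves:
.. code-block:: python
# Minimal sketch: report which XGBoost copy is imported and its version.
# Useful when more than one of the installation methods above has been used.
import xgboost
print(xgboost.__file__)     # filesystem path of the imported package
print(xgboost.__version__)  # version string of that copy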
R Package Installation
======================
Installing pre-packaged version
-------------------------------
You can install xgboost from CRAN just like any other R package:
.. code-block:: R
install.packages("xgboost")
Or you can install it from our weekly updated drat repo:
.. code-block:: R
install.packages("drat", repos="https://cran.rstudio.com")
drat:::addRepo("dmlc")
install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source")
For OSX users, the single-threaded version will be installed. To install the multi-threaded version,
first follow `Building on OSX`_ to get the OpenMP-enabled compiler. Then:
- Set the ``Makevars`` file with the highest priority for R.
The point is, there are three ``Makevars`` files: ``~/.R/Makevars``, ``xgboost/R-package/src/Makevars``, and ``/usr/local/Cellar/r/3.2.0/R.framework/Resources/etc/Makeconf`` (the last one obtained by running ``file.path(R.home("etc"), "Makeconf")`` in R), and ``SHLIB_OPENMP_CXXFLAGS`` is not set by default. In practice, the first one takes the highest priority.
Then inside R, run
.. code-block:: R
install.packages("drat", repos="https://cran.rstudio.com")
drat:::addRepo("dmlc")
install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source")
Installing the development version
----------------------------------
Make sure you have installed git and a recent C++ compiler supporting C++11 (e.g., g++-4.8 or higher).
On Windows, Rtools must be installed, and its bin directory has to be added to PATH during the installation.
Also see the previous subsection for an OSX tip.
Due to the use of git submodules, ``devtools::install_github`` can no longer be used to install the latest version of the R package.
Thus, one has to run git to check out the code first:
.. code-block:: bash
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost
git submodule init
git submodule update
cd R-package
R CMD INSTALL .
If the last line fails because of the error ``R: command not found``, it means that R was not set up to run from the command line.
In this case, just start R as you would normally do and run the following:
.. code-block:: R
setwd('wherever/you/cloned/it/xgboost/R-package/')
install.packages('.', repos = NULL, type="source")
The package could also be built and installed with cmake (and Visual C++ 2015 on Windows) using instructions from the next section, but without GPU support (omit the ``-DUSE_CUDA=ON`` cmake parameter).
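For reference, a minimal sketch of that CPU-only cmake build on Linux (the same commands as in the GPU section below, with ``-DUSE_CUDA=ON`` omitted):
.. code-block:: bash

   mkdir build
   cd build
   cmake .. -DR_LIB=ON
   make install -j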
If all else fails, try `Building the shared library`_ to see whether the problem is specific to the R package or not.
Installing R package with GPU support
-------------------------------------
The procedure and requirements are similar to those in `Building with GPU support`_, so make sure to read it first.
On Linux, starting from the XGBoost directory type:
.. code-block:: bash
mkdir build
cd build
cmake .. -DUSE_CUDA=ON -DR_LIB=ON
make install -j
When the default target is used, an R package shared library is built in the ``build`` area.
The ``install`` target, in addition, assembles the package files with this shared library under ``build/R-package``, and runs ``R CMD INSTALL``.
On Windows, cmake with Visual C++ Build Tools (or Visual Studio) has to be used to build an R package with GPU support. Rtools must also be installed (some other MinGW distributions with ``gendef.exe`` and ``dlltool.exe`` might work, but this has not been tested).
.. code-block:: bash
mkdir build
cd build
cmake .. -G"Visual Studio 14 2015 Win64" -DUSE_CUDA=ON -DR_LIB=ON
cmake --build . --target install --config Release
When ``--target xgboost`` is used, an R package DLL is built under ``build/Release``.
The ``--target install``, in addition, assembles the package files with this DLL under ``build/R-package``, and runs ``R CMD INSTALL``.
If cmake can't find your R during the configuration step, you might provide the location of its executable to cmake like this: ``-DLIBR_EXECUTABLE="C:/Program Files/R/R-3.4.1/bin/x64/R.exe"``.
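For example, the Windows configuration line from above would then become (the R path is illustrative and should match your actual installation):
.. code-block:: bash

   cmake .. -G"Visual Studio 14 2015 Win64" -DUSE_CUDA=ON -DR_LIB=ON -DLIBR_EXECUTABLE="C:/Program Files/R/R-3.4.1/bin/x64/R.exe"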
If on Windows you get a "permission denied" error when trying to write to ...Program Files/R/... during the package installation, create a ``.Rprofile`` file in your personal home directory (if you don't already have one in there), and add a line to it which specifies the location of your R packages user library, like the following:
.. code-block:: R
.libPaths( unique(c("C:/Users/USERNAME/Documents/R/win-library/3.4", .libPaths())))
You might find the exact location by running ``.libPaths()`` in R GUI or RStudio.
Troubleshooting
================
1. Compile failed after ``git pull``
Please first update the submodules, clean all and recompile:
.. code-block:: bash
git submodule update && make clean_all && make -j4
2. Compile failed after ``config.mk`` is modified
Need to clean all first:
.. code-block:: bash
make clean_all && make -j4
3. ``Makefile: dmlc-core/make/dmlc.mk: No such file or directory``
We need to recursively clone the submodule:
.. code-block:: bash
git submodule init
git submodule update
Alternatively, do another clone:
.. code-block:: bash
git clone https://github.com/dmlc/xgboost --recursive

doc/cli.rst Normal file

@@ -0,0 +1,5 @@
############################
XGBoost Command Line version
############################
See `XGBoost Command Line walkthrough <https://github.com/dmlc/xgboost/blob/master/demo/binary_classification/README.md>`_.


@@ -1,3 +0,0 @@
# XGBoost Command Line version
See [XGBoost Command Line walkthrough](https://github.com/dmlc/xgboost/blob/master/demo/binary_classification/README.md)


@@ -11,9 +11,26 @@
#
# All configuration values have a default; values that are commented out
# serve to show the default.
from subprocess import call
from sh.contrib import git
import urllib.request
from urllib.error import HTTPError
from recommonmark.parser import CommonMarkParser
import sys
import re
import os, subprocess
import shlex
import guzzle_sphinx_theme
git_branch = [re.sub(r'origin/', '', x.lstrip(' ')) for x in str(git.branch('-r', '--contains', 'HEAD')).rstrip('\n').split('\n')]
git_branch = [x for x in git_branch if 'HEAD' not in x]
print('git_branch = {}'.format(git_branch[0]))
try:
filename, _ = urllib.request.urlretrieve('https://s3-us-west-2.amazonaws.com/xgboost-docs/{}.tar.bz2'.format(git_branch[0]))
call('if [ -d tmp ]; then rm -rf tmp; fi; mkdir -p tmp/jvm; cd tmp/jvm; tar xvf {}'.format(filename), shell=True)
except HTTPError:
print('JVM doc not found. Skipping...')
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
@@ -22,13 +39,11 @@ libpath = os.path.join(curr_path, '../python-package/')
sys.path.insert(0, libpath)
sys.path.insert(0, curr_path)
from sphinx_util import MarkdownParser, AutoStructify
# -- mock out modules
import mock
MOCK_MODULES = ['numpy', 'scipy', 'scipy.sparse', 'sklearn', 'matplotlib', 'pandas', 'graphviz']
for mod_name in MOCK_MODULES:
sys.modules[mod_name] = mock.Mock()
# -- General configuration ------------------------------------------------
@@ -38,11 +53,6 @@ author = u'%s developers' % project
copyright = u'2016, %s' % author
github_doc_root = 'https://github.com/dmlc/xgboost/tree/master/doc/'
# add markdown parser
MarkdownParser.github_doc_root = github_doc_root
source_parsers = {
'.md': MarkdownParser,
}
os.environ['XGBOOST_BUILD_DOC'] = '1'
# Version information.
import xgboost
@@ -55,14 +65,23 @@ extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.napoleon',
'sphinx.ext.mathjax',
'sphinx.ext.intersphinx',
'breathe'
]
# Breathe extension variables
breathe_projects = {"xgboost": "doxyxml/"}
breathe_default_project = "xgboost"
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
source_parsers = {
'.md': CommonMarkParser,
}
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = ['.rst', '.md']
# The encoding of source files.
@@ -89,6 +108,7 @@ autoclass_content = 'both'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
html_extra_path = ['./tmp']
# The reST default role (used for this markup: `text`) to use for all
# documents.
@@ -119,11 +139,23 @@ todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
html_theme_path = ['_static']
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# html_theme = 'alabaster'
html_theme = 'xgboost-theme'
html_theme_path = guzzle_sphinx_theme.html_theme_path()
html_theme = 'guzzle_sphinx_theme'
# Register the theme as an extension to generate a sitemap.xml
extensions.append("guzzle_sphinx_theme")
# Guzzle theme options (see theme.conf for more information)
html_theme_options = {
# Set the name of the project to appear in the sidebar
"project_nav_name": "XGBoost (0.80)"
}
html_sidebars = {
'**': ['logo-text.html', 'globaltoc.html', 'searchbox.html']
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
@@ -145,38 +177,27 @@ latex_documents = [
author, 'manual'),
]
intersphinx_mapping = {'python': ('https://docs.python.org/3.6', None),
'numpy': ('http://docs.scipy.org/doc/numpy/', None),
'scipy': ('http://docs.scipy.org/doc/scipy/reference/', None),
'pandas': ('http://pandas-docs.github.io/pandas-docs-travis/', None),
'sklearn': ('http://scikit-learn.org/stable', None)}
# hook for doxygen
def run_doxygen(folder):
"""Run the doxygen make command in the designated folder."""
try:
retcode = subprocess.call("cd %s; make doxygen" % folder, shell=True)
if retcode < 0:
sys.stderr.write("doxygen terminated by signal %s" % (-retcode))
except OSError as e:
sys.stderr.write("doxygen execution failed: %s" % e)
"""Run the doxygen make command in the designated folder."""
try:
retcode = subprocess.call("cd %s; make doxygen" % folder, shell=True)
if retcode < 0:
sys.stderr.write("doxygen terminated by signal %s" % (-retcode))
except OSError as e:
sys.stderr.write("doxygen execution failed: %s" % e)
def generate_doxygen_xml(app):
"""Run the doxygen make commands if we're on the ReadTheDocs server"""
read_the_docs_build = os.environ.get('READTHEDOCS', None) == 'True'
if read_the_docs_build:
run_doxygen('..')
"""Run the doxygen make commands if we're on the ReadTheDocs server"""
read_the_docs_build = os.environ.get('READTHEDOCS', None) == 'True'
if read_the_docs_build:
run_doxygen('..')
def setup(app):
# Add hook for building doxygen xml when needed
# no c++ API for now
# app.connect("builder-inited", generate_doxygen_xml)
# urlretrieve got moved in Python 3.x
try:
from urllib import urlretrieve
except ImportError:
from urllib.request import urlretrieve
urlretrieve('https://code.jquery.com/jquery-2.2.4.min.js',
'_static/jquery.js')
app.add_config_value('recommonmark_config', {
'url_resolver': lambda url: github_doc_root + url,
'enable_eval_rst': True,
}, True,
)
app.add_transform(AutoStructify)
app.add_javascript('jquery.js')
app.add_stylesheet('custom.css')

doc/contribute.rst Normal file

@@ -0,0 +1,256 @@
#####################
Contribute to XGBoost
#####################
XGBoost has been developed and used by a group of active community members.
Everyone is more than welcome to contribute. It is a way to make the project better and more accessible to more users.
- Please add your name to `CONTRIBUTORS.md <https://github.com/dmlc/xgboost/blob/master/CONTRIBUTORS.md>`_ after your patch has been merged.
- Please also update `NEWS.md <https://github.com/dmlc/xgboost/blob/master/NEWS.md>`_ to add a note on your changes to the API or to the XGBoost documentation.
**Guidelines**
* `Submit Pull Request`_
* `Git Workflow Howtos`_
- `How to resolve conflict with master`_
- `How to combine multiple commits into one`_
- `What is the consequence of force push`_
* `Documents`_
* `Testcases`_
* `Sanitizers`_
* `Examples`_
* `Core Library`_
* `Python Package`_
* `R Package`_
*******************
Submit Pull Request
*******************
* Before submitting, please rebase your code on the most recent version of master. You can do it by
.. code-block:: bash
git remote add upstream https://github.com/dmlc/xgboost
git fetch upstream
git rebase upstream/master
* If you have multiple small commits,
it might be good to merge them together (use ``git rebase`` and then squash) into more meaningful groups.
* Send the pull request!
- Fix the problems reported by automatic checks.
- If you are contributing a new module, consider adding a testcase in `tests <https://github.com/dmlc/xgboost/tree/master/tests>`_.
*******************
Git Workflow Howtos
*******************
How to resolve conflict with master
===================================
- First rebase to most recent master
.. code-block:: bash
# The first two steps can be skipped after you do it once.
git remote add upstream https://github.com/dmlc/xgboost
git fetch upstream
git rebase upstream/master
- Git may show some conflicts it cannot merge, say ``conflicted.py``.
- Manually modify the file to resolve the conflict.
- After you have resolved the conflict, mark it as resolved by
.. code-block:: bash
git add conflicted.py
- Then you can continue the rebase by
.. code-block:: bash
git rebase --continue
- Finally, push to your fork; you may need to force push here.
.. code-block:: bash
git push --force
How to combine multiple commits into one
========================================
Sometimes we want to combine multiple commits, especially when later commits are only fixes to previous ones,
to create a PR with a set of meaningful commits. You can do it with the following steps.
- Before doing so, configure the default editor of git if you haven't done so before.
.. code-block:: bash
git config core.editor the-editor-you-like
- Assume we want to merge the last 3 commits; type the following commands
.. code-block:: bash
git rebase -i HEAD~3
- It will pop up a text editor. Set the first commit as ``pick``, and change later ones to ``squash``.
- After you save the file, another text editor will pop up asking you to modify the combined commit message.
- Push the changes to your fork; you need to force push.
.. code-block:: bash
git push --force
What is the consequence of force push
=====================================
The previous two tips require a force push because we altered the history of the commits.
It is fine to force push to your own fork, as long as the changed commits are only yours.
*********
Documents
*********
* Documentation is built using Sphinx.
* Each document is written in `reStructuredText <http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_.
* You can build the documents locally to see the effect, as in the sketch below.
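A minimal sketch, assuming the Sphinx dependencies used by ``doc/conf.py`` (such as ``guzzle_sphinx_theme``, ``recommonmark`` and ``breathe``) are already installed:
.. code-block:: bash

   cd doc
   make html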
*********
Testcases
*********
* All the testcases are in `tests <https://github.com/dmlc/xgboost/tree/master/tests>`_.
* We use Python nose for Python test cases.
**********
Sanitizers
**********
Sanitizers are bundled with GCC and Clang/LLVM by default. One can enable
sanitizers with GCC >= 4.8 or LLVM >= 3.1, but some distributions might package
sanitizers separately. Here is a list of supported sanitizers with
corresponding library names:
- Address sanitizer: libasan
- Leak sanitizer: liblsan
- Thread sanitizer: libtsan
Memory sanitizer is exclusive to LLVM, hence not supported in XGBoost.
How to build XGBoost with sanitizers
====================================
One can build XGBoost with sanitizer support by specifying ``-DUSE_SANITIZER=ON``.
By default, the address sanitizer and leak sanitizer are used when you turn the
``USE_SANITIZER`` flag on. You can always change the default by providing a
semicolon-separated list of sanitizers to ``ENABLED_SANITIZERS``. Note that the thread
sanitizer is not compatible with the other two sanitizers.
.. code-block:: bash
cmake -DUSE_SANITIZER=ON -DENABLED_SANITIZERS="address;leak" /path/to/xgboost
How to use sanitizers with CUDA support
=======================================
Running XGBoost on CUDA with the address sanitizer (asan) will raise a memory error.
To use asan with CUDA correctly, you need to configure asan via the ``ASAN_OPTIONS``
environment variable:
.. code-block:: bash
ASAN_OPTIONS=protect_shadow_gap=0 ../testxgboost
For details, please consult the `official documentation <https://github.com/google/sanitizers/wiki>`_ for sanitizers.
********
Examples
********
* Use cases and examples are in `demo <https://github.com/dmlc/xgboost/tree/master/demo>`_.
* We are super excited to hear about your story. If you have blog posts,
tutorials, or code solutions using XGBoost, please tell us and we will add
a link in the example pages.
************
Core Library
************
- Follow `Google style for C++ <https://google.github.io/styleguide/cppguide.html>`_.
- Use C++11 features such as smart pointers, braced initializers, lambda functions, and ``std::thread``.
- We use Doxygen to document all the interface code.
- You can reproduce the linter checks by running ``make lint``
**************
Python Package
**************
- Always add docstrings to new functions in numpydoc format.
- You can reproduce the linter checks by typing ``make lint``
*********
R Package
*********
Code Style
==========
- We follow Google's C++ Style guide for C++ code.
- This is mainly to be consistent with the rest of the project.
- Another reason is we will be able to check style automatically with a linter.
- You can check the style of the code by typing the following command at root folder.
.. code-block:: bash
make rcpplint
- When needed, you can disable the linter warning of certain line with ``// NOLINT(*)`` comments.
- We use `roxygen <https://cran.r-project.org/web/packages/roxygen2/vignettes/roxygen2.html>`_ for documenting the R package.
Rmarkdown Vignettes
===================
Rmarkdown vignettes are placed in `R-package/vignettes <https://github.com/dmlc/xgboost/tree/master/R-package/vignettes>`_.
These Rmarkdown files are not compiled. We host the compiled version on `doc/R-package <https://github.com/dmlc/xgboost/tree/master/doc/R-package>`_.
The following steps are needed to add a new Rmarkdown vignette:
- Add the original rmarkdown to ``R-package/vignettes``.
- Modify ``doc/R-package/Makefile`` to add the markdown files to be built.
- Clone the `dmlc/web-data <https://github.com/dmlc/web-data>`_ repo to the folder ``doc``.
- Now type the following command in ``doc/R-package``:
.. code-block:: bash
make the-markdown-to-make.md
- This will generate the markdown, as well as the figures in ``doc/web-data/xgboost/knitr``.
- Modify the ``doc/R-package/index.md`` to point to the generated markdown.
- Add the generated figure to the ``dmlc/web-data`` repo.
- If you have already cloned the repo to ``doc``, this means running ``git add``.
- Create PR for both the markdown and ``dmlc/web-data``.
- You can also build the document locally by typing the following command in the ``doc`` directory:
.. code-block:: bash
make html
The reason we do this is to avoid an exploding repo size due to generated images.
R package versioning
====================
Since version 0.6.4.3, we have adopted a versioning system that uses x.y.z (or ``core_major.core_minor.cran_release``)
format for CRAN releases and an x.y.z.p (or ``core_major.core_minor.cran_release.patch``) format for development patch versions.
This approach is similar to the one described in Yihui Xie's
`blog post on R Package Versioning <https://yihui.name/en/2013/06/r-package-versioning/>`_,
except we need an additional field to accommodate the x.y core library version.
Each new CRAN release bumps up the 3rd field, while developments in-between CRAN releases
would be marked by an additional 4th field on the top of an existing CRAN release version.
Some additional consideration is needed when the core library version changes.
E.g., after the core changes from 0.6 to 0.7, the R package development version would become 0.7.0.1, working towards
a 0.7.1 CRAN release. The 0.7.0 would not be released to CRAN, unless it would require almost no additional development.
Registering native routines in R
================================
According to `R extension manual <https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Registering-native-routines>`_,
it is good practice to register native routines and to disable symbol search. When any changes or additions are made to the
C++ interface of the R package, please make corresponding changes in ``src/init.c`` as well.


@@ -1,46 +1,50 @@
##########################
Frequently Asked Questions
========================
This document contains frequently asked questions about xgboost.
##########################
This document contains frequently asked questions about XGBoost.
**********************
How to tune parameters
----------------------
See [Parameter Tunning Guide](how_to/param_tuning.md)
**********************
See :doc:`Parameter Tuning Guide </tutorials/param_tuning>`.
************************
Description on the model
------------------------
See [Introduction to Boosted Trees](model.md)
************************
See :doc:`Introduction to Boosted Trees </tutorials/model>`.
********************
I have a big dataset
--------------------
XGBoost is designed to be memory efficient. Usually it can handle problems as long as the data fit into your memory
(This usually means millions of instances).
If you are running out of memory, checkout [external memory version](how_to/external_memory.md) or
[distributed version](../demo/distributed-training) of xgboost.
********************
XGBoost is designed to be memory efficient. Usually it can handle problems as long as the data fit into your memory.
(This usually means millions of instances)
If you are running out of memory, checkout :doc:`external memory version </tutorials/external_memory>` or
:doc:`distributed version </tutorials/aws_yarn>` of XGBoost.
Running xgboost on Platform X (Hadoop/Yarn, Mesos)
--------------------------------------------------
**************************************************
Running XGBoost on Platform X (Hadoop/Yarn, Mesos)
**************************************************
The distributed version of XGBoost is designed to be portable to various environment.
Distributed XGBoost can be ported to any platform that supports [rabit](https://github.com/dmlc/rabit).
You can directly run xgboost on Yarn. In theory Mesos and other resource allocation engines can be easily supported as well.
Distributed XGBoost can be ported to any platform that supports `rabit <https://github.com/dmlc/rabit>`_.
You can directly run XGBoost on Yarn. In theory Mesos and other resource allocation engines can be easily supported as well.
Why not implement distributed xgboost on top of X (Spark, Hadoop)
-----------------------------------------------------------------
*****************************************************************
Why not implement distributed XGBoost on top of X (Spark, Hadoop)
*****************************************************************
The first fact we need to know is going distributed does not necessarily solve all the problems.
Instead, it creates more problems such as more communication overhead and fault tolerance.
The ultimate question will still come back to how to push the limit of each computation node
and use less resources to complete the task (thus with less communication and chance of failure).
To achieve these, we decide to reuse the optimizations in the single node xgboost and build distributed version on top of it.
To achieve these, we decide to reuse the optimizations in the single node XGBoost and build distributed version on top of it.
The demand of communication in machine learning is rather simple, in the sense that we can depend on a limited set of API (in our case rabit).
Such design allows us to reuse most of the code, while being portable to major platforms such as Hadoop/Yarn, MPI, SGE.
Most importantly, it pushes the limit of the computation resources we can use.
*****************************************
How can I port the model to my own system
-----------------------------------------
*****************************************
The model and data format of XGBoost is exchangeable,
which means the model trained by one language can be loaded in another.
This means you can train the model using R, while running prediction using
@@ -48,26 +52,26 @@ Java or C++, which are more common in production systems.
You can also train the model using distributed versions,
and load them in from Python to do some interactive analysis.
*************************
Do you support LambdaMART
-------------------------
Yes, xgboost implements LambdaMART. Checkout the objective section in [parameters](parameter.md)
*************************
Yes, XGBoost implements LambdaMART. Checkout the objective section in :doc:`parameters </parameter>`.
******************************
How to deal with Missing Value
------------------------------
xgboost supports missing value by default.
******************************
XGBoost supports missing value by default.
In tree algorithms, branch directions for missing values are learned during training.
Note that the gblinear booster treats missing values as zeros.
**************************************
Slightly different result between runs
--------------------------------------
**************************************
This could happen, due to non-determinism in floating point summation order and multi-threading.
Though the general accuracy will usually remain the same.
**********************************************************
Why do I see different results with sparse and dense data?
--------------------------------------------------------
**********************************************************
"Sparse" elements are treated as if they were "missing" by the tree booster, and as zeros by the linear booster.
For tree models, it is important to use consistent data formats during training and scoring.

doc/get_started.rst Normal file

@@ -0,0 +1,94 @@
########################
Get Started with XGBoost
########################
This is a quick start tutorial showing snippets for you to quickly try out XGBoost
on the demo dataset on a binary classification task.
********************************
Links to Other Helpful Resources
********************************
- See :doc:`Installation Guide </build>` on how to install XGBoost.
- See :doc:`Text Input Format </tutorials/input_format>` on using text format for specifying training/testing data.
- See :doc:`Tutorials </tutorials/index>` for tips and tutorials.
- See `Learning to use XGBoost by Examples <https://github.com/dmlc/xgboost/tree/master/demo>`_ for more code examples.
******
Python
******
.. code-block:: python
import xgboost as xgb
# read in data
dtrain = xgb.DMatrix('demo/data/agaricus.txt.train')
dtest = xgb.DMatrix('demo/data/agaricus.txt.test')
# specify parameters via map
param = {'max_depth':2, 'eta':1, 'silent':1, 'objective':'binary:logistic' }
num_round = 2
bst = xgb.train(param, dtrain, num_round)
# make prediction
preds = bst.predict(dtest)
***
R
***
.. code-block:: R
# load data
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
# fit model
bst <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic")
# predict
pred <- predict(bst, test$data)
*****
Julia
*****
.. code-block:: julia
using XGBoost
# read data
train_X, train_Y = readlibsvm("demo/data/agaricus.txt.train", (6513, 126))
test_X, test_Y = readlibsvm("demo/data/agaricus.txt.test", (1611, 126))
# fit model
num_round = 2
bst = xgboost(train_X, num_round, label=train_Y, eta=1, max_depth=2)
# predict
pred = predict(bst, test_X)
*****
Scala
*****
.. code-block:: scala
import ml.dmlc.xgboost4j.scala.DMatrix
import ml.dmlc.xgboost4j.scala.XGBoost
object XGBoostScalaExample {
def main(args: Array[String]) {
// read training data, available at xgboost/demo/data
val trainData =
new DMatrix("/path/to/agaricus.txt.train")
// define parameters
val paramMap = List(
"eta" -> 0.1,
"max_depth" -> 2,
"objective" -> "binary:logistic").toMap
// number of iterations
val round = 2
// train the model
val model = XGBoost.train(trainData, paramMap, round)
// run prediction
val predTrain = model.predict(trainData)
// save model to the file.
model.saveModel("/local/path/to/model")
}
}


@@ -1,79 +0,0 @@
# Get Started with XGBoost
This is a quick start tutorial showing snippets for you to quickly try out xgboost
on the demo dataset on a binary classification task.
## Links to Helpful Other Resources
- See [Installation Guide](../build.md) on how to install xgboost.
- See [How to pages](../how_to/index.md) on various tips on using xgboost.
- See [Tutorials](../tutorials/index.md) on tutorials on specific tasks.
- See [Learning to use XGBoost by Examples](../../demo) for more code examples.
## Python
```python
import xgboost as xgb
# read in data
dtrain = xgb.DMatrix('demo/data/agaricus.txt.train')
dtest = xgb.DMatrix('demo/data/agaricus.txt.test')
# specify parameters via map
param = {'max_depth':2, 'eta':1, 'silent':1, 'objective':'binary:logistic' }
num_round = 2
bst = xgb.train(param, dtrain, num_round)
# make prediction
preds = bst.predict(dtest)
```
## R
```r
# load data
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
# fit model
bst <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nround = 2,
nthread = 2, objective = "binary:logistic")
# predict
pred <- predict(bst, test$data)
```
## Julia
```julia
using XGBoost
# read data
train_X, train_Y = readlibsvm("demo/data/agaricus.txt.train", (6513, 126))
test_X, test_Y = readlibsvm("demo/data/agaricus.txt.test", (1611, 126))
# fit model
num_round = 2
bst = xgboost(train_X, num_round, label=train_Y, eta=1, max_depth=2)
# predict
pred = predict(bst, test_X)
```
## Scala
```scala
import ml.dmlc.xgboost4j.scala.DMatrix
import ml.dmlc.xgboost4j.scala.XGBoost
object XGBoostScalaExample {
def main(args: Array[String]) {
// read trainining data, available at xgboost/demo/data
val trainData =
new DMatrix("/path/to/agaricus.txt.train")
// define parameters
val paramMap = List(
"eta" -> 0.1,
"max_depth" -> 2,
"objective" -> "binary:logistic").toMap
// number of iterations
val round = 2
// train the model
val model = XGBoost.train(trainData, paramMap, round)
// run prediction
val predTrain = model.predict(trainData)
// save model to the file.
model.saveModel("/local/path/to/model")
}
}
```

View File

@@ -1,105 +0,0 @@
XGBoost GPU Support
===================
This page contains information about GPU algorithms supported in XGBoost.
To install GPU support, checkout the [build and installation instructions](../build.md).
# CUDA Accelerated Tree Construction Algorithms
This plugin adds GPU accelerated tree construction and prediction algorithms to XGBoost.
## Usage
Specify the 'tree_method' parameter as one of the following algorithms.
### Algorithms
```eval_rst
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tree_method | Description |
+==============+=======================================================================================================================================================================+
| gpu_exact | The standard XGBoost tree construction algorithm. Performs exact search for splits. Slower and uses considerably more memory than 'gpu_hist' |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| gpu_hist | Equivalent to the XGBoost fast histogram algorithm. Much faster and uses considerably less memory. NOTE: Will run very slowly on GPUs older than Pascal architecture. |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```
### Supported parameters
```eval_rst
.. |tick| unicode:: U+2714
.. |cross| unicode:: U+2718
+----------------------+------------+-----------+
| parameter | gpu_exact | gpu_hist |
+======================+============+===========+
| subsample | |cross| | |tick| |
+----------------------+------------+-----------+
| colsample_bytree | |cross| | |tick| |
+----------------------+------------+-----------+
| colsample_bylevel | |cross| | |tick| |
+----------------------+------------+-----------+
| max_bin | |cross| | |tick| |
+----------------------+------------+-----------+
| gpu_id | |tick| | |tick| |
+----------------------+------------+-----------+
| n_gpus | |cross| | |tick| |
+----------------------+------------+-----------+
| predictor | |tick| | |tick| |
+----------------------+------------+-----------+
| grow_policy | |cross| | |tick| |
+----------------------+------------+-----------+
| monotone_constraints | |cross| | |tick| |
+----------------------+------------+-----------+
```
GPU accelerated prediction is enabled by default for the above mentioned 'tree_method' parameters but can be switched to CPU prediction by setting 'predictor':'cpu_predictor'. This could be useful if you want to conserve GPU memory. Likewise when using CPU algorithms, GPU accelerated prediction can be enabled by setting 'predictor':'gpu_predictor'.
The device ordinal can be selected using the 'gpu_id' parameter, which defaults to 0.
Multiple GPUs can be used with the grow_gpu_hist parameter using the n_gpus parameter. which defaults to 1. If this is set to -1 all available GPUs will be used. If gpu_id is specified as non-zero, the gpu device order is mod(gpu_id + i) % n_visible_devices for i=0 to n_gpus-1. As with GPU vs. CPU, multi-GPU will not always be faster than a single GPU due to PCI bus bandwidth that can limit performance.
This plugin currently works with the CLI, python and R - see installation guide for details.
Python example:
```python
param['gpu_id'] = 0
param['max_bin'] = 16
param['tree_method'] = 'gpu_hist'
```
## Benchmarks
To run benchmarks on synthetic data for binary classification:
```bash
$ python tests/benchmark/benchmark.py
```
Training time time on 1,000,000 rows x 50 columns with 500 boosting iterations and 0.25/0.75 test/train split on i7-6700K CPU @ 4.00GHz and Pascal Titan X.
```eval_rst
+--------------+----------+
| tree_method | Time (s) |
+==============+==========+
| gpu_hist | 13.87 |
+--------------+----------+
| hist | 63.55 |
+--------------+----------+
| gpu_exact | 161.08 |
+--------------+----------+
| exact | 1082.20 |
+--------------+----------+
```
[See here](http://dmlc.ml/2016/12/14/GPU-accelerated-xgboost.html) for additional performance benchmarks of the 'gpu_exact' tree_method.
## References
[Mitchell R, Frank E. (2017) Accelerating the XGBoost algorithm using GPU computing. PeerJ Computer Science 3:e127 https://doi.org/10.7717/peerj-cs.127](https://peerj.com/articles/cs-127/)
[Nvidia Parallel Forall: Gradient Boosting, Decision Trees and XGBoost with CUDA](https://devblogs.nvidia.com/parallelforall/gradient-boosting-decision-trees-xgboost-cuda/)
## Author
Rory Mitchell
Jonathan C. McKinney
Shankara Rao Thejaswi Nanditale
Vinay Deshpande
... and the rest of the H2O.ai and NVIDIA team.
Please report bugs to the xgboost/issues page.

121
doc/gpu/index.rst Normal file
View File

@@ -0,0 +1,121 @@
###################
XGBoost GPU Support
###################
This page contains information about GPU algorithms supported in XGBoost.
To install GPU support, checkout the :doc:`/build`.
.. note:: CUDA 8.0, Compute Capability 3.5 required
The GPU algorithms in XGBoost require a graphics card with compute capability 3.5 or higher, with
CUDA toolkits 8.0 or later.
(See `this list <https://en.wikipedia.org/wiki/CUDA#GPUs_supported>`_ to look up compute capability of your GPU card.)
*********************************************
CUDA Accelerated Tree Construction Algorithms
*********************************************
Tree construction (training) and prediction can be accelerated with CUDA-capable GPUs.
Usage
=====
Specify the ``tree_method`` parameter as one of the following algorithms.
Algorithms
----------
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tree_method | Description |
+==============+=======================================================================================================================================================================+
| gpu_exact | The standard XGBoost tree construction algorithm. Performs exact search for splits. Slower and uses considerably more memory than ``gpu_hist``. |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| gpu_hist | Equivalent to the XGBoost fast histogram algorithm. Much faster and uses considerably less memory. NOTE: Will run very slowly on GPUs older than Pascal architecture. |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Supported parameters
--------------------
.. |tick| unicode:: U+2714
.. |cross| unicode:: U+2718
+--------------------------+---------------+--------------+
| parameter | ``gpu_exact`` | ``gpu_hist`` |
+==========================+===============+==============+
| ``subsample`` | |cross| | |tick| |
+--------------------------+---------------+--------------+
| ``colsample_bytree`` | |cross| | |tick| |
+--------------------------+---------------+--------------+
| ``colsample_bylevel`` | |cross| | |tick| |
+--------------------------+---------------+--------------+
| ``max_bin`` | |cross| | |tick| |
+--------------------------+---------------+--------------+
| ``gpu_id`` | |tick| | |tick| |
+--------------------------+---------------+--------------+
| ``n_gpus`` | |cross| | |tick| |
+--------------------------+---------------+--------------+
| ``predictor`` | |tick| | |tick| |
+--------------------------+---------------+--------------+
| ``grow_policy`` | |cross| | |tick| |
+--------------------------+---------------+--------------+
| ``monotone_constraints`` | |cross| | |tick| |
+--------------------------+---------------+--------------+
GPU accelerated prediction is enabled by default for the above mentioned ``tree_method`` parameters but can be switched to CPU prediction by setting ``predictor`` to ``cpu_predictor``. This could be useful if you want to conserve GPU memory. Likewise when using CPU algorithms, GPU accelerated prediction can be enabled by setting ``predictor`` to ``gpu_predictor``.
The device ordinal can be selected using the ``gpu_id`` parameter, which defaults to 0.
Multiple GPUs can be used with the ``gpu_hist`` tree method using the ``n_gpus`` parameter, which defaults to 1. If this is set to -1, all available GPUs will be used. If ``gpu_id`` is specified as non-zero, the GPU device order is ``mod(gpu_id + i) % n_visible_devices`` for ``i=0`` to ``n_gpus-1``. As with GPU vs. CPU, multi-GPU training will not always be faster than a single GPU, since PCI bus bandwidth can limit performance.
.. note:: Enabling multi-GPU training
Default installation may not enable multi-GPU training. To use multiple GPUs, make sure to read :ref:`build_gpu_support`.
The GPU algorithms currently work with CLI, Python and R packages. See :doc:`/build` for details.
.. code-block:: python
:caption: Python example
param['gpu_id'] = 0
param['max_bin'] = 16
param['tree_method'] = 'gpu_hist'
Benchmarks
==========
You can run benchmarks on synthetic data for binary classification:
.. code-block:: bash
python tests/benchmark/benchmark.py
Training time on 1,000,000 rows x 50 columns with 500 boosting iterations and 0.25/0.75 test/train split on i7-6700K CPU @ 4.00GHz and Pascal Titan X yields the following results:
+--------------+----------+
| tree_method | Time (s) |
+==============+==========+
| gpu_hist | 13.87 |
+--------------+----------+
| hist | 63.55 |
+--------------+----------+
| gpu_exact | 161.08 |
+--------------+----------+
| exact | 1082.20 |
+--------------+----------+
See `GPU Accelerated XGBoost <https://xgboost.ai/2016/12/14/GPU-accelerated-xgboost.html>`_ and `Updates to the XGBoost GPU algorithms <https://xgboost.ai/2018/07/04/gpu-xgboost-update.html>`_ for additional performance benchmarks of the ``gpu_exact`` and ``gpu_hist`` tree methods.
**********
References
**********
`Mitchell R, Frank E. (2017) Accelerating the XGBoost algorithm using GPU computing. PeerJ Computer Science 3:e127 https://doi.org/10.7717/peerj-cs.127 <https://peerj.com/articles/cs-127/>`_
`Nvidia Parallel Forall: Gradient Boosting, Decision Trees and XGBoost with CUDA <https://devblogs.nvidia.com/parallelforall/gradient-boosting-decision-trees-xgboost-cuda/>`_
Authors
=======
* Rory Mitchell
* Jonathan C. McKinney
* Shankara Rao Thejaswi Nanditale
* Vinay Deshpande
* ... and the rest of the H2O.ai and NVIDIA team.
Please report bugs to the user forum https://discuss.xgboost.ai/.

View File

@@ -1,164 +0,0 @@
Contribute to XGBoost
=====================
XGBoost has been developed and used by a group of active community members.
Everyone is more than welcome to contribute. It is a way to make the project better and more accessible to more users.
- Please add your name to [CONTRIBUTORS.md](../../CONTRIBUTORS.md) after your patch has been merged.
- Please also update [NEWS.md](../../NEWS.md) to add note on your changes to the API or added a new document.
Guidelines
----------
* [Submit Pull Request](#submit-pull-request)
* [Git Workflow Howtos](#git-workflow-howtos)
- [How to resolve conflict with master](#how-to-resolve-conflict-with-master)
- [How to combine multiple commits into one](#how-to-combine-multiple-commits-into-one)
- [What is the consequence of force push](#what-is-the-consequence-of-force-push)
* [Document](#document)
* [Testcases](#testcases)
* [Examples](#examples)
* [Core Library](#core-library)
* [Python Package](#python-package)
* [R Package](#r-package)
Submit Pull Request
-------------------
* Before submit, please rebase your code on the most recent version of master, you can do it by
```bash
git remote add upstream https://github.com/dmlc/xgboost
git fetch upstream
git rebase upstream/master
```
* If you have multiple small commits,
it might be good to merge them together(use git rebase then squash) into more meaningful groups.
* Send the pull request!
- Fix the problems reported by automatic checks
- If you are contributing a new module, consider add a testcase in [tests](../tests)
Git Workflow Howtos
-------------------
### How to resolve conflict with master
- First rebase to most recent master
```bash
# The first two steps can be skipped after you do it once.
git remote add upstream https://github.com/dmlc/xgboost
git fetch upstream
git rebase upstream/master
```
- The git may show some conflicts it cannot merge, say ```conflicted.py```.
- Manually modify the file to resolve the conflict.
- After you resolved the conflict, mark it as resolved by
```bash
git add conflicted.py
```
- Then you can continue rebase by
```bash
git rebase --continue
```
- Finally push to your fork, you may need to force push here.
```bash
git push --force
```
### How to combine multiple commits into one
Sometimes we want to combine multiple commits, especially when later commits are only fixes to previous ones,
to create a PR with set of meaningful commits. You can do it by following steps.
- Before doing so, configure the default editor of git if you haven't done so before.
```bash
git config core.editor the-editor-you-like
```
- Assume we want to merge last 3 commits, type the following commands
```bash
git rebase -i HEAD~3
```
- It will pop up an text editor. Set the first commit as ```pick```, and change later ones to ```squash```.
- After you saved the file, it will pop up another text editor to ask you modify the combined commit message.
- Push the changes to your fork, you need to force push.
```bash
git push --force
```
### What is the consequence of force push
The previous two tips requires force push, this is because we altered the path of the commits.
It is fine to force push to your own fork, as long as the commits changed are only yours.
Documents
---------
* The document is created using sphinx and [recommonmark](http://recommonmark.readthedocs.org/en/latest/)
* You can build document locally to see the effect.
Testcases
---------
* All the testcases are in [tests](../tests)
* We use python nose for python test cases.
Examples
--------
* Usecases and examples will be in [demo](../demo)
* We are super excited to hear about your story, if you have blogposts,
tutorials code solutions using xgboost, please tell us and we will add
a link in the example pages.
Core Library
------------
- Follow Google C style for C++.
- We use doxygen to document all the interface code.
- You can reproduce the linter checks by typing ```make lint```
Python Package
--------------
- Always add docstring to the new functions in numpydoc format.
- You can reproduce the linter checks by typing ```make lint```
R Package
---------
### Code Style
- We follow Google's C++ Style guide on C++ code.
- This is mainly to be consistent with the rest of the project.
- Another reason is we will be able to check style automatically with a linter.
- You can check the style of the code by typing the following command at root folder.
```bash
make rcpplint
```
- When needed, you can disable the linter warning of certain line with ```// NOLINT(*)``` comments.
- We use [roxygen](https://cran.r-project.org/web/packages/roxygen2/vignettes/roxygen2.html) for documenting the R package.
### Rmarkdown Vignettes
Rmarkdown vignettes are placed in [R-package/vignettes](../R-package/vignettes)
These Rmarkdown files are not compiled. We host the compiled version on [doc/R-package](R-package)
The following steps are followed to add a new Rmarkdown vignettes:
- Add the original rmarkdown to ```R-package/vignettes```
- Modify ```doc/R-package/Makefile``` to add the markdown files to be build
- Clone the [dmlc/web-data](https://github.com/dmlc/web-data) repo to folder ```doc```
- Now type the following command on ```doc/R-package```
```bash
make the-markdown-to-make.md
```
- This will generate the markdown, as well as the figures into ```doc/web-data/xgboost/knitr```
- Modify the ```doc/R-package/index.md``` to point to the generated markdown.
- Add the generated figure to the ```dmlc/web-data``` repo.
- If you already cloned the repo to doc, this means a ```git add```
- Create PR for both the markdown and ```dmlc/web-data```
- You can also build the document locally by typing the following command at ```doc```
```bash
make html
```
The reason we do this is to avoid exploded repo size due to generated images sizes.
### R package versioning
Since version 0.6.4.3, we have adopted a versioning system that uses an ```x.y.z``` (or ```core_major.core_minor.cran_release```)
format for CRAN releases and an ```x.y.z.p``` (or ```core_major.core_minor.cran_release.patch```) format for development patch versions.
This approach is similar to the one described in Yihui Xie's
[blog post on R Package Versioning](https://yihui.name/en/2013/06/r-package-versioning/),
except we need an additional field to accomodate the ```x.y``` core library version.
Each new CRAN release bumps up the 3rd field, while developments in-between CRAN releases
would be marked by an additional 4th field on the top of an existing CRAN release version.
Some additional consideration is needed when the core library version changes.
E.g., after the core changes from 0.6 to 0.7, the R package development version would become 0.7.0.1, working towards
a 0.7.1 CRAN release. The 0.7.0 would not be released to CRAN, unless it would require almost no additional development.
### Registering native routines in R
According to [R extension manual](https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Registering-native-routines),
it is good practice to register native routines and to disable symbol search. When any changes or additions are made to the
C++ interface of the R package, please make corresponding changes in ```src/init.c``` as well.

View File

@@ -1,42 +0,0 @@
Using XGBoost External Memory Version(beta)
===========================================
There is no big difference between using external memory version and in-memory version.
The only difference is the filename format.
The external memory version takes in the following filename format
```
filename#cacheprefix
```
The ```filename``` is the normal path to libsvm file you want to load in, ```cacheprefix``` is a
path to a cache file that xgboost will use for external memory cache.
The following code was extracted from [../../demo/guide-python/external_memory.py](../../demo/guide-python/external_memory.py)
```python
dtrain = xgb.DMatrix('../data/agaricus.txt.train#dtrain.cache')
```
You can find that there is additional ```#dtrain.cache``` following the libsvm file, this is the name of cache file.
For CLI version, simply use ```"../data/agaricus.txt.train#dtrain.cache"``` in filename.
Performance Note
----------------
* the parameter ```nthread``` should be set to number of ***real*** cores
- Most modern CPU offer hyperthreading, which means you can have a 4 core cpu with 8 threads
- Set nthread to be 4 for maximum performance in such case
Distributed Version
-------------------
The external memory mode naturally works on distributed version, you can simply set path like
```
data = "hdfs://path-to-data/#dtrain.cache"
```
xgboost will cache the data to the local position. When you run on YARN, the current folder is temporal
so that you can directly use ```dtrain.cache``` to cache to current folder.
Usage Note
----------
* This is a experimental version
- If you like to try and test it, report results to https://github.com/dmlc/xgboost/issues/244
* Currently only importing from libsvm format is supported
- Contribution of ingestion from other common external memory data source is welcomed

View File

@@ -1,17 +0,0 @@
# XGBoost How To
This page contains guidelines to use and develop XGBoost.
## Installation
- [How to Install XGBoost](../build.md)
## Use XGBoost in Specific Ways
- [Parameter tuning guide](param_tuning.md)
- [Use out of core computation for large dataset](external_memory.md)
- [Use XGBoost GPU algorithms](../gpu/index.md)
## Develop and Hack XGBoost
- [Contribute to XGBoost](contribute.md)
## Frequently Ask Questions
- [FAQ](../faq.md)

View File

@@ -1,16 +0,0 @@
XGBoost Documentation
=====================
This document is hosted at http://xgboost.readthedocs.org/. You can also browse most of the documents in github directly.
These are used to generate the index used in search.
* [Python Package Document](python/index.md)
* [R Package Document](R-package/index.md)
* [Java/Scala Package Document](jvm/index.md)
* [Julia Package Document](julia/index.md)
* [CLI Package Document](cli/index.md)
* [GPU Support Document](gpu/index.md)
- [Howto Documents](how_to/index.md)
- [Get Started Documents](get_started/index.md)
- [Tutorials](tutorials/index.md)

doc/index.rst Normal file

@@ -0,0 +1,30 @@
#####################
XGBoost Documentation
#####################
**XGBoost** is an optimized distributed gradient boosting library designed to be highly **efficient**, **flexible** and **portable**.
It implements machine learning algorithms under the `Gradient Boosting <https://en.wikipedia.org/wiki/Gradient_boosting>`_ framework.
XGBoost provides a parallel tree boosting (also known as GBDT, GBM) that solves many data science problems in a fast and accurate way.
The same code runs on major distributed environments (Hadoop, SGE, MPI) and can solve problems beyond billions of examples.
********
Contents
********
.. toctree::
:maxdepth: 2
:titlesonly:
build
get_started
tutorials/index
faq
XGBoost User Forum <https://discuss.xgboost.ai>
GPU support <gpu/index>
parameter
Python package <python/index>
R package <R-package/index>
JVM package <jvm/index>
Julia package <julia>
CLI interface <cli>
contribute

View File

@@ -1,56 +0,0 @@
Text Input Format of DMatrix
============================
## Basic Input Format
As we have mentioned, XGBoost takes LibSVM format. For training or predicting, XGBoost takes an instance file with the format as below:
train.txt
```
1 101:1.2 102:0.03
0 1:2.1 10001:300 10002:400
0 0:1.3 1:0.3
1 0:0.01 1:0.3
0 0:0.2 1:0.3
```
Each line represent a single instance, and in the first line '1' is the instance label,'101' and '102' are feature indices, '1.2' and '0.03' are feature values. In the binary classification case, '1' is used to indicate positive samples, and '0' is used to indicate negative samples. We also support probability values in [0,1] as label, to indicate the probability of the instance being positive.
Additional Information
----------------------
Note: these additional information are only applicable to single machine version of the package.
### Group Input Format
As XGBoost supports accomplishing [ranking task](../demo/rank), we support the group input format. In ranking task, instances are categorized into different groups in real world scenarios, for example, in the learning to rank web pages scenario, the web page instances are grouped by their queries. Except the instance file mentioned in the group input format, XGBoost need an file indicating the group information. For example, if the instance file is the "train.txt" shown above,
and the group file is as below:
train.txt.group
```
2
3
```
This means that, the data set contains 5 instances, and the first two instances are in a group and the other three are in another group. The numbers in the group file are actually indicating the number of instances in each group in the instance file in order.
While configuration, you do not have to indicate the path of the group file. If the instance file name is "xxx", XGBoost will check whether there is a file named "xxx.group" in the same directory and decides whether to read the data as group input format.
### Instance Weight File
XGBoost supports providing each instance an weight to differentiate the importance of instances. For example, if we provide an instance weight file for the "train.txt" file in the example as below:
train.txt.weight
```
1
0.5
0.5
1
0.5
```
It means that XGBoost will emphasize more on the first and fourth instance that is to say positive instances while training.
The configuration is similar to configuring the group information. If the instance file name is "xxx", XGBoost will check whether there is a file named "xxx.weight" in the same directory and if there is, will use the weights while training models. Weights will be included into an "xxx.buffer" file that is created by XGBoost automatically. If you want to update the weights, you need to delete the "xxx.buffer" file prior to launching XGBoost.
### Initial Margin file
XGBoost supports providing each instance an initial margin prediction. For example, if we have a initial prediction using logistic regression for "train.txt" file, we can create the following file:
train.txt.base_margin
```
-0.4
1.0
3.4
```
XGBoost will take these values as initial margin prediction and boost from that. An important note about base_margin is that it should be margin prediction before transformation, so if you are doing logistic loss, you will need to put in value before logistic transformation. If you are using XGBoost predictor, use pred_margin=1 to output margin values.

doc/julia.rst Normal file

@@ -0,0 +1,5 @@
##########
XGBoost.jl
##########
See `XGBoost.jl Project page <https://github.com/dmlc/XGBoost.jl>`_.


@@ -1,3 +0,0 @@
# XGBoost.jl
See [XGBoost.jl Project page](https://github.com/dmlc/XGBoost.jl)


@@ -1,49 +0,0 @@
XGBoost JVM Package
===================
[![Build Status](https://travis-ci.org/dmlc/xgboost.svg?branch=master)](https://travis-ci.org/dmlc/xgboost)
[![GitHub license](http://dmlc.github.io/img/apache2.svg)](../LICENSE)
You have found the XGBoost JVM Package!
Installation
------------
Currently, XGBoost4J only support installation from source. Building XGBoost4J using Maven requires Maven 3 or newer, Java 7+ and CMake 3.2+ for compiling the JNI bindings.
Before you install XGBoost4J, you need to define environment variable `JAVA_HOME` as your JDK directory to ensure that your compiler can find `jni.h` correctly, since XGBoost4J relies on JNI to implement the interaction between the JVM and native libraries.
After your `JAVA_HOME` is defined correctly, it is as simple as run `mvn package` under jvm-packages directory to install XGBoost4J. You can also skip the tests by running `mvn -DskipTests=true package`, if you are sure about the correctness of your local setup.
To publish the artifacts to your local maven repository, run
mvn install
Or, if you would like to skip tests, run
mvn -DskipTests install
This command will publish the xgboost binaries, the compiled java classes as well as the java sources to your local repository. Then you can use XGBoost4J in your Java projects by including the following dependency in `pom.xml`:
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j</artifactId>
<version>0.7</version>
</dependency>
After integrating with the DataFrame/Dataset APIs of Spark 2.0, XGBoost4J-Spark only supports compilation against Spark 2.x. You can build XGBoost4J-Spark as a component of XGBoost4J by running `mvn package`, and you can specify the version of Spark with `mvn -Dspark.version=2.0.0 package`. (To continue working with Spark 1.x, users should update pom.xml by modifying the properties `spark.version`, `scala.version`, and `scala.binary.version`. Users also need to change the implementation by replacing SparkSession with SQLContext and the type of API parameters from Dataset[_] to DataFrame.)
Contents
--------
* [Java Overview Tutorial](java_intro.md)
Resources
---------
* [Code Examples](https://github.com/dmlc/xgboost/tree/master/jvm-packages/xgboost4j-example)
* [Java API Docs](http://dmlc.ml/docs/javadocs/index.html)
## Scala API Docs
* [XGBoost4J](http://dmlc.ml/docs/scaladocs/xgboost4j/index.html)
* [XGBoost4J-Spark](http://dmlc.ml/docs/scaladocs/xgboost4j-spark/index.html)
* [XGBoost4J-Flink](http://dmlc.ml/docs/scaladocs/xgboost4j-flink/index.html)

155
doc/jvm/index.rst Normal file
View File

@@ -0,0 +1,155 @@
###################
XGBoost JVM Package
###################
.. raw:: html
<a href="https://travis-ci.org/dmlc/xgboost">
<img alt="Build Status" src="https://travis-ci.org/dmlc/xgboost.svg?branch=master">
</a>
<a href="https://github.com/dmlc/xgboost/blob/master/LICENSE">
<img alt="GitHub license" src="http://dmlc.github.io/img/apache2.svg">
</a>
You have found the XGBoost JVM Package!
************
Installation
************
Installation from source
========================
Building XGBoost4J using Maven requires Maven 3 or newer, Java 7+ and CMake 3.2+ for compiling the JNI bindings.
Before you install XGBoost4J, you need to define environment variable ``JAVA_HOME`` as your JDK directory to ensure that your compiler can find ``jni.h`` correctly, since XGBoost4J relies on JNI to implement the interaction between the JVM and native libraries.
After your ``JAVA_HOME`` is defined correctly, installing XGBoost4J is as simple as running ``mvn package`` under the jvm-packages directory. You can also skip the tests by running ``mvn -DskipTests=true package``, if you are sure about the correctness of your local setup.
To publish the artifacts to your local maven repository, run
.. code-block:: bash
mvn install
Or, if you would like to skip tests, run
.. code-block:: bash
mvn -DskipTests install
This command will publish the xgboost binaries, the compiled java classes as well as the java sources to your local repository. Then you can use XGBoost4J in your Java projects by including the following dependency in ``pom.xml``:
.. code-block:: xml
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j</artifactId>
<version>latest_source_version_num</version>
</dependency>
For sbt, please add the repository and dependency in build.sbt as following:
.. code-block:: scala
resolvers += "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
"ml.dmlc" % "xgboost4j" % "latest_source_version_num"
If you want to use XGBoost4J-Spark, replace ``xgboost4j`` with ``xgboost4j-spark``.
.. note:: XGBoost4J-Spark requires Spark 2.3+
XGBoost4J-Spark now requires Spark 2.3+. The latest versions of XGBoost4J-Spark use facilities of `org.apache.spark.ml.param.shared` extensively to provide tight integration with the Spark MLLIB framework, and these facilities are not fully available on earlier versions of Spark.
Installation from maven repo
============================
Access release version
----------------------
.. code-block:: xml
:caption: maven
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j</artifactId>
<version>latest_version_num</version>
</dependency>
.. code-block:: scala
:caption: sbt
"ml.dmlc" % "xgboost4j" % "latest_version_num"
This will check out the latest stable version from Maven Central.
For the latest release version number, please check `here <https://github.com/dmlc/xgboost/releases>`_.
If you want to use XGBoost4J-Spark, replace ``xgboost4j`` with ``xgboost4j-spark``.
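For example, the sbt dependency would then read (``latest_version_num`` is the same placeholder as above):
.. code-block:: scala
:caption: sbt
"ml.dmlc" % "xgboost4j-spark" % "latest_version_num"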
Access SNAPSHOT version
-----------------------
You need to add GitHub as repo:
.. code-block:: xml
:caption: maven
<repository>
<id>GitHub Repo</id>
<name>GitHub Repo</name>
<url>https://raw.githubusercontent.com/CodingCat/xgboost/maven-repo/</url>
</repository>
.. code-block:: scala
:caption: sbt
resolvers += "GitHub Repo" at "https://raw.githubusercontent.com/CodingCat/xgboost/maven-repo/"
Then add dependency as following:
.. code-block:: xml
:caption: maven
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j</artifactId>
<version>latest_version_num</version>
</dependency>
.. code-block:: scala
:caption: sbt
"ml.dmlc" % "xgboost4j" % "latest_version_num"
For the latest release version number, please check `here <https://github.com/CodingCat/xgboost/tree/maven-repo/ml/dmlc/xgboost4j>`_.
.. note:: Windows not supported by published JARs
The published JARs from Maven Central and GitHub currently support only Linux and macOS. Windows users should consider building XGBoost4J / XGBoost4J-Spark from source. Alternatively, check out pre-built JARs from `criteo-forks/xgboost-jars <https://github.com/criteo-forks/xgboost-jars>`_.
Enabling OpenMP for Mac OS
--------------------------
If you are on Mac OS and using a compiler that supports OpenMP, you need to go to the file ``xgboost/jvm-packages/create_jni.py`` and comment out the line
.. code-block:: python
CONFIG["USE_OPENMP"] = "OFF"
in order to get the benefit of multi-threading.
********
Contents
********
.. toctree::
:maxdepth: 2
java_intro
XGBoost4J-Spark Tutorial <xgboost4j_spark_tutorial>
Code Examples <https://github.com/dmlc/xgboost/tree/master/jvm-packages/xgboost4j-example>
XGBoost4J Java API <javadocs/index>
XGBoost4J Scala API <scaladocs/xgboost4j/index>
XGBoost4J-Spark Scala API <scaladocs/xgboost4j-spark/index>
XGBoost4J-Flink Scala API <scaladocs/xgboost4j-flink/index>

View File

@@ -1,143 +0,0 @@
XGBoost4J Java API
==================
This tutorial introduces the Java API for XGBoost.
## Data Interface
Like the xgboost Python module, xgboost4j uses ```DMatrix``` to handle data.
LIBSVM text format files, sparse matrices in CSR/CSC format, and dense matrices
are supported.
* To import ```DMatrix``` :
```java
import org.dmlc.xgboost4j.DMatrix;
```
* To load libsvm text format file, the usage is like :
```java
DMatrix dmat = new DMatrix("train.svm.txt");
```
* Loading a sparse matrix in CSR/CSC format is a little more involved. The usage is as follows:
suppose a sparse matrix :
1 0 2 0
4 0 0 3
3 1 2 0
for CSR format
```java
long[] rowHeaders = new long[] {0,2,4,7};
float[] data = new float[] {1f,2f,4f,3f,3f,1f,2f};
int[] colIndex = new int[] {0,2,0,3,0,1,2};
DMatrix dmat = new DMatrix(rowHeaders, colIndex, data, DMatrix.SparseType.CSR);
```
for CSC format
```java
long[] colHeaders = new long[] {0,3,4,6,7};
float[] data = new float[] {1f,4f,3f,1f,2f,2f,3f};
int[] rowIndex = new int[] {0,1,2,2,0,2,1};
DMatrix dmat = new DMatrix(colHeaders, rowIndex, data, DMatrix.SparseType.CSC);
```
* To load a 3x2 dense matrix, the usage is:
suppose a matrix :
1 2
3 4
5 6
```java
float[] data = new float[] {1f,2f,3f,4f,5f,6f};
int nrow = 3;
int ncol = 2;
float missing = 0.0f;
DMatrix dmat = new DMatrix(data, nrow, ncol, missing);
```
* To set weight :
```java
float[] weights = new float[] {1f,2f,1f};
dmat.setWeight(weights);
```
## Setting Parameters
* In xgboost4j, any ```Iterable<Entry<String, Object>>``` object can be used as parameters.
* To set parameters, for single-valued params you can simply use the entrySet of a Map:
```java
Map<String, Object> paramMap = new HashMap<>() {
{
put("eta", 1.0);
put("max_depth", 2);
put("silent", 1);
put("objective", "binary:logistic");
put("eval_metric", "logloss");
}
};
Iterable<Entry<String, Object>> params = paramMap.entrySet();
```
* When multiple values share the same param key, a List<Entry<String, Object>> is a good choice, e.g.:
```java
List<Entry<String, Object>> params = new ArrayList<Entry<String, Object>>() {
{
add(new SimpleEntry<String, Object>("eta", 1.0));
add(new SimpleEntry<String, Object>("max_depth", 2.0));
add(new SimpleEntry<String, Object>("silent", 1));
add(new SimpleEntry<String, Object>("objective", "binary:logistic"));
}
};
```
## Training Model
With parameters and data, you are able to train a booster model.
* Import ```Trainer``` and ```Booster``` :
```java
import org.dmlc.xgboost4j.Booster;
import org.dmlc.xgboost4j.util.Trainer;
```
* Training
```java
DMatrix trainMat = new DMatrix("train.svm.txt");
DMatrix validMat = new DMatrix("valid.svm.txt");
//specify a watchList to see the performance
//any Iterable<Entry<String, DMatrix>> object could be used as watchList
List<Entry<String, DMatrix>> watchs = new ArrayList<>();
watchs.add(new SimpleEntry<>("train", trainMat));
watchs.add(new SimpleEntry<>("test", testMat));
int round = 2;
Booster booster = Trainer.train(params, trainMat, round, watchs, null, null);
```
* Saving model
After training, you can save model and dump it out.
```java
booster.saveModel("model.bin");
```
* Dump Model and Feature Map
```java
booster.dumpModel("modelInfo.txt", false)
//dump with featureMap
booster.dumpModel("modelInfo.txt", "featureMap.txt", false)
```
* Load a model
```java
Params param = new Params() {
{
put("silent", 1);
put("nthread", 6);
}
};
Booster booster = new Booster(param, "model.bin");
```
## Prediction
After training and loading a model, you can use it to predict on other data. The prediction results will be a two-dimensional float array of shape (nsample, nclass); for leaf prediction, the shape would be (nsample, nclass*ntrees).
```java
DMatrix dtest = new DMatrix("test.svm.txt");
//predict
float[][] predicts = booster.predict(dtest);
//predict leaf
float[][] leafPredicts = booster.predict(dtest, 0, true);
```

160
doc/jvm/java_intro.rst Normal file
View File

@@ -0,0 +1,160 @@
##############################
Getting Started with XGBoost4J
##############################
This tutorial introduces the Java API for XGBoost.
**************
Data Interface
**************
Like the XGBoost python module, XGBoost4J uses DMatrix to handle data.
LIBSVM txt format file, sparse matrix in CSR/CSC format, and dense matrix are
supported.
* The first step is to import DMatrix:
.. code-block:: java
import ml.dmlc.xgboost4j.java.DMatrix;
* Use DMatrix constructor to load data from a libsvm text format file:
.. code-block:: java
DMatrix dmat = new DMatrix("train.svm.txt");
* Pass arrays to DMatrix constructor to load from sparse matrix.
Suppose we have a sparse matrix
.. code-block:: none
1 0 2 0
4 0 0 3
3 1 2 0
We can express the sparse matrix in `Compressed Sparse Row (CSR) <https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_(CSR,_CRS_or_Yale_format)>`_ format:
.. code-block:: java
long[] rowHeaders = new long[] {0,2,4,7};
float[] data = new float[] {1f,2f,4f,3f,3f,1f,2f};
int[] colIndex = new int[] {0,2,0,3,0,1,2};
int numColumn = 4;
DMatrix dmat = new DMatrix(rowHeaders, colIndex, data, DMatrix.SparseType.CSR, numColumn);
... or in `Compressed Sparse Column (CSC) <https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_column_(CSC_or_CCS)>`_ format:
.. code-block:: java
long[] colHeaders = new long[] {0,3,4,6,7};
float[] data = new float[] {1f,4f,3f,1f,2f,2f,3f};
int[] rowIndex = new int[] {0,1,2,2,0,2,1};
int numRow = 3;
DMatrix dmat = new DMatrix(colHeaders, rowIndex, data, DMatrix.SparseType.CSC, numRow);
* You may also load your data from a dense matrix. Let's assume we have a matrix of form
.. code-block:: none
1 2
3 4
5 6
Using `row-major layout <https://en.wikipedia.org/wiki/Row-_and_column-major_order>`_, we specify the dense matrix as follows:
.. code-block:: java
float[] data = new float[] {1f,2f,3f,4f,5f,6f};
int nrow = 3;
int ncol = 2;
float missing = 0.0f;
DMatrix dmat = new DMatrix(data, nrow, ncol, missing);
* To set weight:
.. code-block:: java
float[] weights = new float[] {1f,2f,1f};
dmat.setWeight(weights);
******************
Setting Parameters
******************
To set parameters, parameters are specified as a Map:
.. code-block:: java
Map<String, Object> params = new HashMap<String, Object>() {
{
put("eta", 1.0);
put("max_depth", 2);
put("silent", 1);
put("objective", "binary:logistic");
put("eval_metric", "logloss");
}
};
**************
Training Model
**************
With parameters and data, you are able to train a booster model.
* Import Booster and XGBoost:
.. code-block:: java
import ml.dmlc.xgboost4j.java.Booster;
import ml.dmlc.xgboost4j.java.XGBoost;
* Training
.. code-block:: java
DMatrix trainMat = new DMatrix("train.svm.txt");
DMatrix validMat = new DMatrix("valid.svm.txt");
// Specify a watch list to see model accuracy on data sets
Map<String, DMatrix> watches = new HashMap<String, DMatrix>() {
{
put("train", trainMat);
put("test", testMat);
}
};
int nround = 2;
Booster booster = XGBoost.train(trainMat, params, nround, watches, null, null);
* Saving model
After training, you can save model and dump it out.
.. code-block:: java
booster.saveModel("model.bin");
* Generating a model dump with feature map
.. code-block:: java
// dump without feature map
String[] model_dump = booster.getModelDump(null, false);
// dump with feature map
String[] model_dump_with_feature_map = booster.getModelDump("featureMap.txt", false);
* Load a model
.. code-block:: java
Booster booster = XGBoost.loadModel("model.bin");
**********
Prediction
**********
After training and loading a model, you can use it to make predictions on other data. The result will be a two-dimensional float array of shape ``(nsample, nclass)``; for ``predictLeaf()``, the result would be of shape ``(nsample, nclass*ntrees)``.
.. code-block:: java
DMatrix dtest = new DMatrix("test.svm.txt");
// predict
float[][] predicts = booster.predict(dtest);
// predict leaf
float[][] leafPredicts = booster.predictLeaf(dtest, 0);

View File

@@ -0,0 +1,3 @@
==================
XGBoost4J Java API
==================

View File

@@ -0,0 +1,3 @@
=========================
XGBoost4J-Flink Scala API
=========================

View File

@@ -0,0 +1,3 @@
=========================
XGBoost4J-Spark Scala API
=========================

View File

@@ -0,0 +1,3 @@
===================
XGBoost4J Scala API
===================

View File

@@ -1,187 +0,0 @@
---
layout: post
title: XGBoost4J: Portable Distributed Tree Boosting in DataFlow
date: 2016-03-15 12:00:00
author: Nan Zhu, Tianqi Chen
comments: true
---
## Introduction
[XGBoost](https://github.com/dmlc/xgboost) is a library designed and optimized for tree boosting. The gradient boosted trees model was originally proposed by Friedman et al. By embracing multi-threading and introducing regularization, XGBoost delivers higher computational power and more accurate predictions. **More than half of the winning solutions in machine learning challenges** hosted at Kaggle adopt XGBoost ([Incomplete list](https://github.com/dmlc/xgboost/tree/master/demo#machine-learning-challenge-winning-solutions)).
XGBoost provides native interfaces for C++, R, Python, Julia and Java users.
It is used by both [data exploration and production scenarios](https://github.com/dmlc/xgboost/tree/master/demo#usecases) to solve real world machine learning problems.
The distributed XGBoost is described in the [recently published paper](http://arxiv.org/abs/1603.02754).
In short, the XGBoost system runs magnitudes faster than existing alternatives of distributed ML,
and uses far fewer resources. The reader is more than welcomed to refer to the paper for more details.
Despite the current great success, one of our ultimate goals is to make XGBoost even more available for all production scenarios.
Programming languages and data processing/storage systems based on the Java Virtual Machine (JVM) play significant roles in the BigData ecosystem. [Hadoop](http://hadoop.apache.org/), [Spark](http://spark.apache.org/) and the more recently introduced [Flink](http://flink.apache.org/) are very useful solutions for general large-scale data processing.
On the other side, the emerging demands of machine learning and deep learning
have inspired many excellent machine learning libraries.
Many of these machine learning libraries (e.g. [XGBoost](https://github.com/dmlc/xgboost)/[MxNet](https://github.com/dmlc/mxnet))
require new computation abstractions and native support (e.g. C++ for GPU computing).
They are also often [much more efficient](http://arxiv.org/abs/1603.02754).
The gap between the implementation fundamentals of the general data processing frameworks and the more specific machine learning libraries/systems prevents a smooth connection between these two types of systems, and thus brings unnecessary inconvenience to the end user. The common workflow is to use systems like Spark/Flink to preprocess/clean data, pass the results to machine learning systems like [XGBoost](https://github.com/dmlc/xgboost)/[MxNet](https://github.com/dmlc/mxnet) via the file system, and then conduct the machine learning phase. Jumping across two types of systems creates inconvenience for the users and brings additional overhead to the operators of the infrastructure.
We want best of both worlds, so we can use the data processing frameworks like Spark and Flink together with
the best distributed machine learning solutions.
To resolve the situation, we introduce the new-brewed [XGBoost4J](https://github.com/dmlc/xgboost/tree/master/jvm-packages),
<b>XGBoost</b> for <b>J</b>VM Platform. We aim to provide the clean Java/Scala APIs and the integration with the most popular data processing systems developed in JVM-based languages.
## Unix Philosophy in Machine Learning
XGBoost and XGBoost4J adopt the Unix philosophy.
XGBoost **does its best at one thing -- tree boosting** and is **designed to work with other systems**.
We strongly believe that a machine learning solution should not be restricted to a certain language or a certain platform.
Specifically, users will be able to use distributed XGBoost in both Spark and Flink, and possibly more frameworks in the future.
We have made the API portable so it **can be easily ported to other Dataflow frameworks provided by the Cloud**.
XGBoost4J shares its core with other XGBoost libraries, which means data scientists can use R/Python
to read and visualize models trained in a distributed fashion.
It also means that users can start with the single-machine version for exploration,
which can already handle hundreds of millions of examples.
## System Overview
In the following figure, we describe the overall architecture of XGBoost4J. XGBoost4J provides the Java/Scala API calling the core functionality of the XGBoost library. Most importantly, it not only supports single-machine model training, but also provides an abstraction layer which masks the differences between the underlying data processing engines and scales training to distributed servers.
![XGBoost4J Architecture](https://raw.githubusercontent.com/dmlc/web-data/master/xgboost/xgboost4j.png)
By calling the XGBoost4J API, users can scale model training to the cluster. XGBoost4J launches XGBoost workers inside Spark/Flink tasks and runs them across the cluster. The communication among the distributed model training tasks and the XGBoost4J runtime environment goes through [Rabit](https://github.com/dmlc/rabit).
With the abstraction of XGBoost4J, users can build a unified data analytic application ranging from Extract-Transform-Load, data exploration, and machine learning model training to the final data product service. The following figure illustrates an example application built on top of Apache Spark. The application seamlessly embeds XGBoost into the processing pipeline and exchanges data with other Spark-based processing phases through Spark's distributed memory layer.
![XGBoost4J Architecture](https://raw.githubusercontent.com/dmlc/web-data/master/xgboost/unified_pipeline.png)
## Single-machine Training Walk-through
In this section, we will work through the APIs of XGBoost4J by examples.
We will be using Scala for demonstration, but we also have a complete API for Java users.
To start the model training and evaluation, we need to prepare the training and test set:
```scala
val trainMax = new DMatrix("../../demo/data/agaricus.txt.train")
val testMax = new DMatrix("../../demo/data/agaricus.txt.test")
```
After preparing the data, we can train our model:
```scala
val params = new mutable.HashMap[String, Any]()
params += "eta" -> 1.0
params += "max_depth" -> 2
params += "silent" -> 1
params += "objective" -> "binary:logistic"
val watches = new mutable.HashMap[String, DMatrix]
watches += "train" -> trainMax
watches += "test" -> testMax
val round = 2
// train a model
val booster = XGBoost.train(trainMax, params.toMap, round, watches.toMap)
```
We then evaluate our model:
```scala
val predicts = booster.predict(testMax)
```
`predict` outputs the prediction results, and you can define a customized evaluation method to derive your own metrics (see the examples in [Customized Evaluation Metric in Java](https://github.com/dmlc/xgboost/blob/master/jvm-packages/xgboost4j-example/src/main/java/ml/dmlc/xgboost4j/java/example/CustomObjective.java) and [Customized Evaluation Metric in Scala](https://github.com/dmlc/xgboost/blob/master/jvm-packages/xgboost4j-example/src/main/scala/ml/dmlc/xgboost4j/scala/example/CustomObjective.scala)).
## Distributed Model Training with Distributed Dataflow Frameworks
The most exciting part of this XGBoost4J release is the integration with the distributed dataflow frameworks. The most popular data processing frameworks fall into this category, e.g. [Apache Spark](http://spark.apache.org/), [Apache Flink](http://flink.apache.org/), etc. In this part, we will walk through the steps to build unified data analytic applications containing data preprocessing and distributed model training with Spark and Flink. (Currently, we only provide a Scala API for the integration with Spark and Flink.)
Similar to the single-machine training, we need to prepare the training and test dataset.
### Spark Example
In Spark, the dataset is represented as the [Resilient Distributed Dataset (RDD)](http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds), we can utilize the Spark-distributed tools to parse libSVM file and wrap it as the RDD:
```scala
val trainRDD = MLUtils.loadLibSVMFile(sc, inputTrainPath).repartition(args(1).toInt)
```
We move forward to train the models:
```scala
val xgboostModel = XGBoost.train(trainRDD, paramMap, numRound, numWorkers)
```
The next step is to evaluate the model. You can predict either locally or in a distributed fashion:
```scala
// testSet is an RDD containing testset data represented as
// org.apache.spark.mllib.regression.LabeledPoint
val testSet = MLUtils.loadLibSVMFile(sc, inputTestPath)
// local prediction
// import methods in DataUtils to convert Iterator[org.apache.spark.mllib.regression.LabeledPoint]
// to Iterator[ml.dmlc.xgboost4j.LabeledPoint] automatically
import DataUtils._
xgboostModel.predict(new DMatrix(testSet.collect().iterator))
// distributed prediction
xgboostModel.predict(testSet)
```
### Flink example
In Flink, we represent training data as Flink's [DataSet](https://ci.apache.org/projects/flink/flink-docs-master/apis/batch/index.html)
```scala
val trainData = MLUtils.readLibSVM(env, "/path/to/data/agaricus.txt.train")
```
Model Training can be done as follows
```scala
val xgboostModel = XGBoost.train(trainData, paramMap, round)
```
Prediction can be done as follows:
```scala
// testData is a Dataset containing testset data represented as
// org.apache.flink.ml.math.Vector.LabeledVector
val testData = MLUtils.readLibSVM(env, "/path/to/data/agaricus.txt.test")
// local prediction
xgboostModel.predict(testData.collect().iterator)
// distributed prediction
xgboostModel.predict(testData.map{x => x.vector})
```
## Road Map
This is the first release of the XGBoost4J package, and we are actively working toward more features in the next release. You can watch our progress in the [XGBoost4J Road Map](https://github.com/dmlc/xgboost/issues/935).
While we are trying our best to keep changes to the APIs to a minimum, they are still subject to incompatible changes.
## Further Readings
If you are interested in knowing more about XGBoost, you can find rich resources in
- [The github repository of XGBoost](https://github.com/dmlc/xgboost)
- [The comprehensive documentation site for XGBoost](http://xgboost.readthedocs.org/en/latest/index.html)
- [An introduction to the gradient boosting model](http://xgboost.readthedocs.org/en/latest/model.html)
- [Tutorials for the R package](http://xgboost.readthedocs.org/en/latest/R-package/index.html)
- [Introduction of the Parameters](http://xgboost.readthedocs.org/en/latest/parameter.html)
- [Awesome XGBoost, a curated list of examples, tutorials, blogs about XGBoost usecases](https://github.com/dmlc/xgboost/tree/master/demo)
## Acknowledgements
We would like to send many thanks to [Zixuan Huang](https://github.com/yanqingmen), the early developer of XGBoost for Java.

View File

@@ -1,139 +0,0 @@
## Introduction
In March 2016, we released the first version of [XGBoost4J](http://dmlc.ml/2016/03/14/xgboost4j-portable-distributed-xgboost-in-spark-flink-and-dataflow.html), a set of packages providing Java/Scala interfaces of XGBoost and the integration with prevalent JVM-based distributed data processing platforms, like Spark/Flink.
The integrations with Spark/Flink, a.k.a. <b>XGBoost4J-Spark</b> and <b>XGBoost-Flink</b>, received tremendously positive feedback from the community. They enable users to build a unified pipeline, embedding XGBoost into the data processing system based on widely-deployed frameworks like Spark. The following figure shows the general architecture of such a pipeline with the first version of <b>XGBoost4J-Spark</b>, where the data processing is based on the low-level [Resilient Distributed Dataset (RDD)](http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) abstraction.
![XGBoost4J Architecture](https://raw.githubusercontent.com/dmlc/web-data/master/xgboost/unified_pipeline.png)
Over the last months, we have had a lot of communication with users and gained a deeper understanding of their latest usage scenarios and requirements:
* XGBoost keeps gaining more and more deployments in the production environment and the adoption in machine learning competitions [Link](http://datascience.la/xgboost-workshop-and-meetup-talk-with-tianqi-chen/).
* While Spark is still the mainstream data processing tool in most of scenarios, more and more users are porting their RDD-based Spark programs to [DataFrame/Dataset APIs](http://spark.apache.org/docs/latest/sql-programming-guide.html) for the well-designed interfaces to manipulate structured data and the [significant performance improvement](https://databricks.com/blog/2016/07/26/introducing-apache-spark-2-0.html).
* Spark itself has presented a clear roadmap that DataFrame/Dataset would be the base of the latest and future features, e.g. latest version of [ML pipeline](http://spark.apache.org/docs/latest/ml-guide.html) and [Structured Streaming](http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html).
Based on this feedback from the users, we observed a gap between the original RDD-based XGBoost4J-Spark and the users' latest usage scenarios as well as the future direction of the Spark ecosystem. To fill this gap, we started working on the <b><i>integration of XGBoost and Spark's DataFrame/Dataset abstraction</i></b> in September. In this blog, we will introduce <b>the latest version of XGBoost4J-Spark</b>, which allows the user to work with DataFrame/Dataset directly and embed XGBoost into Spark's ML pipeline seamlessly.
## A Full Integration of XGBoost and DataFrame/Dataset
The following figure illustrates the new pipeline architecture with the latest XGBoost4J-Spark.
![XGBoost4J New Architecture](https://raw.githubusercontent.com/dmlc/web-data/master/xgboost/unified_pipeline_new.png)
Different from the previous version, users are able to use both the low- and high-level memory abstractions in Spark, i.e. RDD and DataFrame/Dataset. The DataFrame/Dataset abstraction allows the user to manipulate structured datasets and utilize the built-in routines in Spark or User Defined Functions (UDF) to explore the value distribution in columns before feeding data into the machine learning phase of the pipeline. In the following example, the structured sales records are saved in a JSON file, parsed as a DataFrame through Spark's API and fed to XGBoost training in two lines of Scala code.
```scala
// load sales records saved in json files
val salesDF = spark.read.json("sales.json")
// call XGBoost API to train with the DataFrame-represented training set
val xgboostModel = XGBoost.trainWithDataFrame(
salesDF, paramMap, numRound, nWorkers, useExternalMemory)
```
By integrating with DataFrame/Dataset, XGBoost4J-Spark not only enables users to call DataFrame/Dataset APIs directly but also makes DataFrame/Dataset-based Spark features available to XGBoost users, e.g. the ML package.
### Integration with ML Package
The ML package of Spark provides a set of convenient tools for feature extraction/transformation/selection. Additionally, with the model selection tool in the ML package, users can select the best model through an automatic parameter searching process defined through ML package APIs. After integrating with the DataFrame/Dataset abstraction, these convenient features of the ML package are also available to XGBoost users.
#### Feature Extraction/Transformation/Selection
The following example shows a feature transformer which converts the string-typed storeType feature to the numeric storeTypeIndex. The transformed DataFrame is then fed to train XGBoost model.
```scala
import org.apache.spark.ml.feature.StringIndexer
// load sales records saved in json files
val salesDF = spark.read.json("sales.json")
// transform the string-represented storeType feature to numeric storeTypeIndex
val indexer = new StringIndexer()
.setInputCol("storeType")
.setOutputCol("storeTypeIndex")
// drop the extra column
val indexed = indexer.fit(salesDF).transform(salesDF).drop("storeType")
// use the transformed dataframe as training dataset
val xgboostModel = XGBoost.trainWithDataFrame(
indexed, paramMap, numRound, nWorkers, useExternalMemory)
```
#### Pipelining
Spark ML package allows the user to build a complete pipeline from feature extraction/transformation/selection to model training. We integrate XGBoost with ML package and make it feasible to embed XGBoost into such a pipeline seamlessly. The following example shows how to build such a pipeline consisting of feature transformers and the XGBoost estimator.
```scala
import org.apache.spark.ml.feature.StringIndexer
// load sales records saved in json files
val salesDF = spark.read.json("sales.json")
// transform the string-represented storeType feature to numeric storeTypeIndex
val indexer = new StringIndexer()
.setInputCol("storeType")
.setOutputCol("storeTypeIndex")
// assemble the columns in dataframe into a vector
val vectorAssembler = new VectorAssembler()
.setInputCols(Array("storeId", "storeTypeIndex", ...))
.setOutputCol("features")
// construct the pipeline
val pipeline = new Pipeline().setStages(
  Array(indexer, ..., vectorAssembler, new XGBoostEstimator(Map[String, Any]("num_rounds" -> 100))))
// use the transformed dataframe as training dataset
val xgboostModel = pipeline.fit(salesDF)
// predict with the trained model
val salesTestDF = spark.read.json("sales_test.json")
val salesRecordsWithPred = xgboostModel.transform(salesTestDF)
```
#### Model Selection
The most critical operation for maximizing the power of XGBoost is to select the optimal parameters for the model. Tuning parameters manually is a tedious and labor-intensive process. With the latest version of XGBoost4J-Spark, we can utilize the Spark model selection tool to automate this process. The following example shows a code snippet utilizing [TrainValidationSplit](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.tuning.TrainValidationSplit) and [RegressionEvaluator](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.evaluation.RegressionEvaluator) to search the optimal combination of two XGBoost parameters, [max_depth and eta](https://github.com/dmlc/xgboost/blob/master/doc/parameter.md). The model producing the minimum cost function value defined by RegressionEvaluator is selected and used to generate the prediction for the test set.
```scala
// create XGBoostEstimator
val xgbEstimator = new XGBoostEstimator(xgboostParam).setFeaturesCol("features").
setLabelCol("sales")
val paramGrid = new ParamGridBuilder()
.addGrid(xgbEstimator.maxDepth, Array(5, 6))
.addGrid(xgbEstimator.eta, Array(0.1, 0.4))
.build()
val tv = new TrainValidationSplit()
.setEstimator(xgbEstimator)
.setEvaluator(new RegressionEvaluator().setLabelCol("sales"))
.setEstimatorParamMaps(paramGrid)
.setTrainRatio(0.8)
val salesTestDF = spark.read.json("sales_test.json")
val salesRecordsWithPred = xgboostModel.transform(salesTestDF)
```
## Summary
Through the latest XGBoost4J-Spark, XGBoost users can build a more efficient data processing pipeline which works with the DataFrame/Dataset APIs to handle structured data with excellent performance, while simultaneously embracing the powerful XGBoost to explore insights from the dataset and transform those insights into action. Additionally, XGBoost4J-Spark seamlessly connects XGBoost with the Spark ML package, which makes the job of feature extraction/transformation/selection and parameter tuning much easier than before.
The latest version of XGBoost4J-Spark is available in the [GitHub Repository](https://github.com/dmlc/xgboost), and the latest API docs are available [here](http://xgboost.readthedocs.io/en/latest/jvm/index.html).
## Portable Machine Learning Systems
XGBoost is one of the projects incubated by [Distributed Machine Learning Community (DMLC)](http://dmlc.ml/), which also creates several other popular projects on machine learning systems ([Link](https://github.com/dmlc/)), e.g. one of the most popular deep learning frameworks, [MXNet](http://mxnet.io/). We strongly believe that machine learning solution should not be restricted to certain language or certain platform. We realize this design philosophy in several projects, like XGBoost and MXNet. We are willing to see more contributions from the community in this direction.
## Further Readings
If you are interested in knowing more about XGBoost, you can find rich resources in
- [The github repository of XGBoost](https://github.com/dmlc/xgboost)
- [The comprehensive documentation site for XGBoost](http://xgboost.readthedocs.org/en/latest/index.html)
- [An introduction to the gradient boosting model](http://xgboost.readthedocs.org/en/latest/model.html)
- [Tutorials for the R package](http://xgboost.readthedocs.org/en/latest/R-package/index.html)
- [Introduction of the Parameters](http://xgboost.readthedocs.org/en/latest/parameter.html)
- [Awesome XGBoost, a curated list of examples, tutorials, blogs about XGBoost usecases](https://github.com/dmlc/xgboost/tree/master/demo)

View File

@@ -0,0 +1,513 @@
#######################################
XGBoost4J-Spark Tutorial (version 0.8+)
#######################################
**XGBoost4J-Spark** is a project aiming to seamlessly integrate XGBoost and Apache Spark by fitting XGBoost into Apache Spark's MLLIB framework. With this integration, users can not only use the high-performance algorithm implementation of XGBoost, but also leverage the powerful data processing engine of Spark for:
* Feature Engineering: feature extraction, transformation, dimensionality reduction, and selection, etc.
* Pipelines: constructing, evaluating, and tuning ML Pipelines
* Persistence: persist and load machine learning models and even whole Pipelines
This tutorial covers the end-to-end process of building a machine learning pipeline with XGBoost4J-Spark. We will discuss:
* Using Spark to preprocess data to fit to XGBoost/XGBoost4J-Spark's data interface
* Training a XGBoost model with XGBoost4J-Spark
* Serving XGBoost model (prediction) with Spark
* Building a Machine Learning Pipeline with XGBoost4J-Spark
* Running XGBoost4J-Spark in Production
.. contents::
:backlinks: none
:local:
********************************************
Build an ML Application with XGBoost4J-Spark
********************************************
Refer to XGBoost4J-Spark Dependency
===================================
Before we go into the tour of how to use XGBoost4J-Spark, we give a brief introduction to building a machine learning application with XGBoost4J-Spark. The first thing you need to do is to add the dependency from Maven Central.
You can add the following dependency to your ``pom.xml``:
.. code-block:: xml
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j-spark</artifactId>
<version>latest_version_num</version>
</dependency>
For the latest release version number, please check `here <https://github.com/dmlc/xgboost/releases>`_.
We also publish functionality that will be included in the coming release in the form of snapshot versions. To access it, you can add a dependency on the snapshot artifacts. We publish snapshot versions to a GitHub-based repo, so add the following repository to your ``pom.xml``:
.. code-block:: xml
<repository>
<id>XGBoost4J-Spark Snapshot Repo</id>
<name>XGBoost4J-Spark Snapshot Repo</name>
<url>https://raw.githubusercontent.com/CodingCat/xgboost/maven-repo/</url>
</repository>
and then refer to the snapshot dependency by adding:
.. code-block:: xml
<dependency>
<groupId>ml.dmlc</groupId>
<artifactId>xgboost4j</artifactId>
<version>next_version_num-SNAPSHOT</version>
</dependency>
.. note:: XGBoost4J-Spark requires Spark 2.3+
XGBoost4J-Spark now requires Spark 2.3+. The latest versions of XGBoost4J-Spark use facilities of `org.apache.spark.ml.param.shared` extensively to provide tight integration with the Spark MLLIB framework, and these facilities are not fully available on earlier versions of Spark.
Data Preparation
================
As aforementioned, XGBoost4J-Spark seamlessly integrates Spark and XGBoost. The integration enables
users to apply various types of transformation over the training/test datasets with the convenient
and powerful data processing framework, Spark.
In this section, we use `Iris <https://archive.ics.uci.edu/ml/datasets/iris>`_ dataset as an example to
showcase how we use Spark to transform raw dataset and make it fit to the data interface of XGBoost.
The Iris dataset is shipped in CSV format. Each instance contains 4 features, "sepal length", "sepal width",
"petal length" and "petal width". In addition, it contains the "class" column, which is essentially the label with three possible values: "Iris Setosa", "Iris Versicolour" and "Iris Virginica".
Read Dataset with Spark's Built-In Reader
-----------------------------------------
The first thing in data transformation is to load the dataset as Spark's structured data abstraction, DataFrame.
.. code-block:: scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{DoubleType, StringType, StructField, StructType}
val spark = SparkSession.builder().getOrCreate()
val schema = new StructType(Array(
StructField("sepal length", DoubleType, true),
StructField("sepal width", DoubleType, true),
StructField("petal length", DoubleType, true),
StructField("petal width", DoubleType, true),
StructField("class", StringType, true)))
val rawInput = spark.read.schema(schema).csv("input_path")
In the first line, we create an instance of `SparkSession <http://spark.apache.org/docs/latest/sql-programming-guide.html#starting-point-sparksession>`_, which is the entry point of any Spark program working with DataFrames. The ``schema`` variable defines the schema of the DataFrame wrapping the Iris data. With this explicitly set schema, we can define the column names as well as their types; otherwise the column names would be the default ones derived by Spark, such as ``_c0``, etc. Finally, we can use Spark's built-in CSV reader to load the Iris CSV file as a DataFrame named ``rawInput``.
Spark also contains many built-in readers for other formats. The latest version of Spark supports CSV, JSON, Parquet, and LIBSVM.
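For example, a file in LIBSVM format could be loaded with the corresponding reader in the same way (the path below is a placeholder):
.. code-block:: scala
val libsvmInput = spark.read.format("libsvm").load("input_path_libsvm")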
Transform Raw Iris Dataset
--------------------------
To make the Iris dataset recognizable to XGBoost, we need to:
1. Transform String-typed label, i.e. "class", to Double-typed label.
2. Assemble the feature columns as a vector to fit to the data interface of Spark ML framework.
To convert String-typed label to Double, we can use Spark's built-in feature transformer `StringIndexer <https://spark.apache.org/docs/2.3.1/api/scala/index.html#org.apache.spark.ml.feature.StringIndexer>`_.
.. code-block:: scala
import org.apache.spark.ml.feature.StringIndexer
val stringIndexer = new StringIndexer().
setInputCol("class").
setOutputCol("classIndex").
fit(rawInput)
val labelTransformed = stringIndexer.transform(rawInput).drop("class")
With a newly created StringIndexer instance:
1. we set the input column, i.e. the column containing the String-typed label;
2. we set the output column, i.e. the column to contain the Double-typed label;
3. then we ``fit`` the StringIndexer with our input DataFrame ``rawInput``, so that Spark internals can gather information such as the total number of distinct values, etc.
Now we have a StringIndexer which is ready to be applied to our input DataFrame. To execute the transformation logic of StringIndexer, we ``transform`` the input DataFrame ``rawInput``; to keep the DataFrame concise,
we drop the column "class" and keep only the feature columns and the transformed Double-typed label column (in the last line of the above code snippet).
``fit`` and ``transform`` are two key operations in MLLIB. Basically, ``fit`` produces a "transformer", e.g. StringIndexer, and each transformer applies the ``transform`` method on a DataFrame to add new column(s) containing transformed features/labels or prediction results, etc. To understand more about ``fit`` and ``transform``, you can find more details `here <http://spark.apache.org/docs/latest/ml-pipeline.html#pipeline-components>`_.
Similarly, we can use another transformer, `VectorAssembler <https://spark.apache.org/docs/2.3.1/api/scala/index.html#org.apache.spark.ml.feature.VectorAssembler>`_, to assemble feature columns "sepal length", "sepal width", "petal length" and "petal width" as a vector.
.. code-block:: scala
import org.apache.spark.ml.feature.VectorAssembler
val vectorAssembler = new VectorAssembler().
setInputCols(Array("sepal length", "sepal width", "petal length", "petal width")).
setOutputCol("features")
val xgbInput = vectorAssembler.transform(labelTransformed).select("features", "classIndex")
Now, we have a DataFrame containing only two columns, "features" which contains vector-represented
"sepal length", "sepal width", "petal length" and "petal width" and "classIndex" which has Double-typed
labels. A DataFrame like this (containing vector-represented features and numeric labels) can be fed to XGBoost4J-Spark's training engine directly.
Training
========
XGBoost supports both regression and classification. While we use the Iris dataset in this tutorial to show how we use XGBoost/XGBoost4J-Spark to resolve a multi-class classification problem, the usage for regression is very similar to classification.
To train an XGBoost model for classification, we first need to create an XGBoostClassifier:
.. code-block:: scala
import ml.dmlc.xgboost4j.scala.spark.XGBoostClassifier
val xgbParam = Map("eta" -> 0.1f,
"max_depth" -> 2,
"objective" -> "multi:softprob",
"num_class" -> 3,
"num_round" -> 100,
"num_workers" -> 2)
val xgbClassifier = new XGBoostClassifier(xgbParam).
setFeaturesCol("features").
setLabelCol("classIndex")
The available parameters for training an XGBoost model can be found :doc:`here </parameter>`. In XGBoost4J-Spark, we support not only the default set of parameters but also the camel-case variants of these parameters to keep consistent with Spark's MLLIB parameters.
Specifically, each parameter in :doc:`this page </parameter>` has its
equivalent form in XGBoost4J-Spark with camel case. For example, to set ``max_depth`` for each tree, you can pass the parameter just like what we did in the above code snippet (as ``max_depth`` wrapped in a Map), or you can do it through setters in XGBoostClassifier:
.. code-block:: scala
val xgbClassifier = new XGBoostClassifier().
setFeaturesCol("features").
setLabelCol("classIndex")
xgbClassifier.setMaxDepth(2)
After we set the XGBoostClassifier parameters and feature/label columns, we can build a transformer, XGBoostClassificationModel, by fitting the XGBoostClassifier with the input DataFrame. This ``fit`` operation is essentially the training process, and the generated model can then be used in prediction.
.. code-block:: scala
val xgbClassificationModel = xgbClassifier.fit(xgbInput)
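As noted above, regression usage is very similar. A minimal sketch (assuming a DataFrame ``regressionInput`` whose ``features`` column holds the feature vectors and whose ``label`` column holds a numeric target; the parameter values are arbitrary):
.. code-block:: scala
import ml.dmlc.xgboost4j.scala.spark.XGBoostRegressor
// reg:linear is the squared-error regression objective in this release
val xgbRegressor = new XGBoostRegressor(Map(
  "eta" -> 0.1f,
  "max_depth" -> 2,
  "objective" -> "reg:linear",
  "num_round" -> 100,
  "num_workers" -> 2)).
  setFeaturesCol("features").
  setLabelCol("label")
val xgbRegressionModel = xgbRegressor.fit(regressionInput)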
Prediction
==========
XGBoost4j-Spark supports two ways for model serving: batch prediction and single instance prediction.
Batch Prediction
----------------
When we get a model, either an XGBoostClassificationModel or an XGBoostRegressionModel, it takes a DataFrame, reads the column containing feature vectors, predicts for each feature vector, and outputs a new DataFrame with the following columns by default:
* XGBoostClassificationModel will output margins (``rawPredictionCol``) and probabilities (``probabilityCol``) for each possible label, as well as the eventual prediction label (``predictionCol``).
* XGBoostRegressionModel will output the prediction label (``predictionCol``).
Batch prediction expects the user to pass the test set in the form of a DataFrame. XGBoost4J-Spark starts an XGBoost worker for each partition of the DataFrame for parallel prediction and generates prediction results for the whole DataFrame in a batch.
.. code-block:: scala
val xgbClassificationModel = xgbClassifier.fit(xgbInput)
val results = xgbClassificationModel.transform(testSet)
With the above code snippet, we get a DataFrame ``results`` containing the margin, the probability for each class, and the prediction for each instance:
.. code-block:: none
+-----------------+----------+--------------------+--------------------+----------+
| features|classIndex| rawPrediction| probability|prediction|
+-----------------+----------+--------------------+--------------------+----------+
|[5.1,3.5,1.4,0.2]| 0.0|[3.45569849014282...|[0.99579632282257...| 0.0|
|[4.9,3.0,1.4,0.2]| 0.0|[3.45569849014282...|[0.99618089199066...| 0.0|
|[4.7,3.2,1.3,0.2]| 0.0|[3.45569849014282...|[0.99643349647521...| 0.0|
|[4.6,3.1,1.5,0.2]| 0.0|[3.45569849014282...|[0.99636095762252...| 0.0|
|[5.0,3.6,1.4,0.2]| 0.0|[3.45569849014282...|[0.99579632282257...| 0.0|
|[5.4,3.9,1.7,0.4]| 0.0|[3.45569849014282...|[0.99428516626358...| 0.0|
|[4.6,3.4,1.4,0.3]| 0.0|[3.45569849014282...|[0.99643349647521...| 0.0|
|[5.0,3.4,1.5,0.2]| 0.0|[3.45569849014282...|[0.99579632282257...| 0.0|
|[4.4,2.9,1.4,0.2]| 0.0|[3.45569849014282...|[0.99618089199066...| 0.0|
|[4.9,3.1,1.5,0.1]| 0.0|[3.45569849014282...|[0.99636095762252...| 0.0|
|[5.4,3.7,1.5,0.2]| 0.0|[3.45569849014282...|[0.99428516626358...| 0.0|
|[4.8,3.4,1.6,0.2]| 0.0|[3.45569849014282...|[0.99643349647521...| 0.0|
|[4.8,3.0,1.4,0.1]| 0.0|[3.45569849014282...|[0.99618089199066...| 0.0|
|[4.3,3.0,1.1,0.1]| 0.0|[3.45569849014282...|[0.99618089199066...| 0.0|
|[5.8,4.0,1.2,0.2]| 0.0|[3.45569849014282...|[0.97809928655624...| 0.0|
|[5.7,4.4,1.5,0.4]| 0.0|[3.45569849014282...|[0.97809928655624...| 0.0|
|[5.4,3.9,1.3,0.4]| 0.0|[3.45569849014282...|[0.99428516626358...| 0.0|
|[5.1,3.5,1.4,0.3]| 0.0|[3.45569849014282...|[0.99579632282257...| 0.0|
|[5.7,3.8,1.7,0.3]| 0.0|[3.45569849014282...|[0.97809928655624...| 0.0|
|[5.1,3.8,1.5,0.3]| 0.0|[3.45569849014282...|[0.99579632282257...| 0.0|
+-----------------+----------+--------------------+--------------------+----------+
Single instance prediction
--------------------------
XGBoostClassificationModel and XGBoostRegressionModel also support making predictions on a single instance.
They accept a single Vector as features and output the prediction label.
However, the overhead of single-instance prediction is high due to the internal overhead of XGBoost; use it carefully!
.. code-block:: scala
val features = xgbInput.head().getAs[Vector]("features")
val result = xgbClassificationModel.predict(features)
Model Persistence
=================
Model and pipeline persistence
------------------------------
A data scientist produces an ML model and hands it over to an engineering team for deployment in a production environment. Conversely, a trained model may be used by data scientists, for example as a baseline, across the process of data exploration. So it is important to support model persistence to make models available across usage scenarios and programming languages.
XGBoost4J-Spark supports saving and loading XGBoostClassifier/XGBoostClassificationModel and XGBoostRegressor/XGBoostRegressionModel. It also supports saving and loading an ML pipeline which includes these estimators and models.
We can save the XGBoostClassificationModel to file system:
.. code-block:: scala
val xgbClassificationModelPath = "/tmp/xgbClassificationModel"
xgbClassificationModel.write.overwrite().save(xgbClassificationModelPath)
and then load the model in another session:
.. code-block:: scala
import ml.dmlc.xgboost4j.scala.spark.XGBoostClassificationModel
val xgbClassificationModel2 = XGBoostClassificationModel.load(xgbClassificationModelPath)
xgbClassificationModel2.transform(xgbInput)
With regard to ML pipeline save and load, please refer to the next section.
Interact with Other Bindings of XGBoost
---------------------------------------
After we train a model with XGBoost4J-Spark on a massive dataset, sometimes we want to serve the model on a single machine or integrate it with other single-node libraries for further processing. XGBoost4J-Spark supports exporting the model to a local path:
.. code-block:: scala
val nativeModelPath = "/tmp/nativeModel"
xgbClassificationModel.nativeBooster.saveModel(nativeModelPath)
Then we can load this model with single node Python XGBoost:
.. code-block:: python
import xgboost as xgb
bst = xgb.Booster({'nthread': 4})
bst.load_model(nativeModelPath)
.. note:: Using HDFS and S3 for exporting the models with nativeBooster.saveModel()
When interacting with other language bindings, XGBoost also supports saving-models-to and loading-models-from file systems other than the local one. You can use HDFS and S3 by prefixing the path with ``hdfs://`` and ``s3://`` respectively. However, for this capability, you must do **one** of the following:
1. Build XGBoost4J-Spark with the steps described in `here <https://xgboost.readthedocs.io/en/latest/jvm/index.html#installation-from-source>`_, but turning `USE_HDFS <https://github.com/dmlc/xgboost/blob/e939192978a0c152ad7b49b744630e99d54cffa8/jvm-packages/create_jni.py#L18>`_ (or USE_S3, etc. in the same place) switch on. With this approach, you can reuse the above code example by replacing "nativeModelPath" with a HDFS path.
- However, if you build with USE_HDFS, etc. you have to ensure that the involved shared object file, e.g. libhdfs.so, is put in the LIBRARY_PATH of your cluster. To avoid the complicated cluster environment configuration, choose the other option.
2. Use bindings of HDFS, S3, etc. to pass model files around. Here are the steps (taking HDFS as an example):
- Create a new file with
.. code-block:: scala
val outputStream = fs.create("hdfs_path")
where "fs" is an instance of `org.apache.hadoop.fs.FileSystem <https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/fs/FileSystem.html>`_ class in Hadoop.
- Pass the returned OutputStream in the first step to nativeBooster.saveModel():
.. code-block:: scala
xgbClassificationModel.nativeBooster.saveModel(outputStream)
- Download the file from HDFS in other languages and load it with a pre-built version of XGBoost (which does not require libhdfs.so). (The function "download_from_hdfs" is a helper function to be implemented by the user.)
.. code-block:: python
import xgboost as xgb
bst = xgb.Booster({'nthread': 4})
local_path = download_from_hdfs("hdfs_path")
bst.load_model(local_path)
.. note:: Consistency issue between XGBoost4J-Spark and other bindings
There is a consistency issue between XGBoost4J-Spark and other language bindings of XGBoost.
When users use Spark to load training/test data in LIBSVM format with the following code snippet:
.. code-block:: scala
spark.read.format("libsvm").load("trainingset_libsvm")
Spark assumes that the dataset is using 1-based indexing (feature indices starting with 1). However, when you do prediction with other bindings of XGBoost (e.g. the Python API of XGBoost), XGBoost assumes that the dataset is using 0-based indexing (feature indices starting with 0) by default. This creates a pitfall for users who train a model with Spark but predict with a dataset in the same format in other bindings of XGBoost. The solution is to transform the dataset to 0-based indexing before you predict with, for example, the Python API, or to append ``?indexing_mode=1`` to your file path when loading with DMatrix. For example in Python:
.. code-block:: python
xgb.DMatrix('test.libsvm?indexing_mode=1')
*******************************************
Building a ML Pipeline with XGBoost4J-Spark
*******************************************
Basic ML Pipeline
=================
A Spark ML pipeline can combine multiple algorithms or functions into a single pipeline.
It covers everything from feature extraction, transformation and selection to model training and prediction.
XGBoost4J-Spark makes it feasible to embed XGBoost into such a pipeline seamlessly.
The following example shows how to build such a pipeline consisting of Spark MLlib feature transformer
and XGBoostClassifier estimator.
We still use `Iris <https://archive.ics.uci.edu/ml/datasets/iris>`_ dataset and the ``rawInput`` DataFrame.
First we need to split the dataset into training and test dataset.
.. code-block:: scala
val Array(training, test) = rawInput.randomSplit(Array(0.8, 0.2), 123)
Then we build the ML pipeline, which includes 4 stages:
* Assemble all features into a single vector column.
* From string label to indexed double label.
* Use XGBoostClassifier to train classification model.
* Convert indexed double label back to original string label.
We have shown the first three steps in the earlier sections, and the last step is finished with a new transformer `IndexToString <https://spark.apache.org/docs/2.3.1/api/scala/index.html#org.apache.spark.ml.feature.IndexToString>`_:
.. code-block:: scala
val labelConverter = new IndexToString()
.setInputCol("prediction")
.setOutputCol("realLabel")
.setLabels(stringIndexer.labels)
We need to organize these steps as a Pipeline in Spark ML framework and evaluate the whole pipeline to get a PipelineModel:
.. code-block:: scala
import org.apache.spark.ml.feature._
import org.apache.spark.ml.Pipeline
val pipeline = new Pipeline()
.setStages(Array(assembler, stringIndexer, booster, labelConverter))
val model = pipeline.fit(training)
After we get the PipelineModel, we can make prediction on the test dataset and evaluate the model accuracy.
.. code-block:: scala
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
val prediction = model.transform(test)
val evaluator = new MulticlassClassificationEvaluator()
val accuracy = evaluator.evaluate(prediction)
Pipeline with Hyper-parameter Tuning
=====================================
The most critical operation for maximizing the power of XGBoost is to select the optimal parameters for the model. Tuning parameters manually is a tedious and labor-intensive process. With the latest version of XGBoost4J-Spark, we can utilize the Spark model selection tool to automate this process.
The following example shows the code snippet utilizing CrossValidation and MulticlassClassificationEvaluator
to search the optimal combination of two XGBoost parameters, ``max_depth`` and ``eta``. (See :doc:`/parameter`.)
The model producing the maximum accuracy defined by MulticlassClassificationEvaluator is selected and used to generate the prediction for the test set.
.. code-block:: scala

   import org.apache.spark.ml.tuning._
   import org.apache.spark.ml.PipelineModel
   import ml.dmlc.xgboost4j.scala.spark.XGBoostClassificationModel

   val paramGrid = new ParamGridBuilder()
     .addGrid(booster.maxDepth, Array(3, 8))
     .addGrid(booster.eta, Array(0.2, 0.6))
     .build()
   val cv = new CrossValidator()
     .setEstimator(pipeline)
     .setEvaluator(evaluator)
     .setEstimatorParamMaps(paramGrid)
     .setNumFolds(3)

   val cvModel = cv.fit(training)

   val bestModel = cvModel.bestModel.asInstanceOf[PipelineModel].stages(2)
     .asInstanceOf[XGBoostClassificationModel]
   bestModel.extractParamMap()
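Spark's ``CrossValidatorModel`` also records the average cross-validated metric for each point in the parameter grid. A minimal sketch for inspecting them (reusing ``cvModel`` and ``paramGrid`` from above; the printed format is illustrative):

.. code-block:: scala

   // Average cross-validated accuracy for each parameter combination,
   // in the same order as the entries of paramGrid.
   cvModel.avgMetrics.zip(paramGrid).foreach { case (metric, params) =>
     println(s"accuracy = $metric for:\n$params")
   }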
*********************************
Run XGBoost4J-Spark in Production
*********************************
XGBoost4J-Spark is one of the most important steps toward bringing XGBoost into production environments more easily. In this section, we introduce three key features for running XGBoost4J-Spark in production.
Parallel/Distributed Training
=============================
The massive size of training datasets is one of the most significant characteristics of production environments. To ensure that training in XGBoost scales with the data size, XGBoost4J-Spark bridges the distributed/parallel processing framework of Spark with the parallel/distributed training mechanism of XGBoost.

In XGBoost4J-Spark, each XGBoost worker is wrapped by a Spark task, and the training dataset in Spark's memory space is fed to XGBoost workers in a way that is transparent to the user.

In the code snippet where we build XGBoostClassifier, we set the parameter ``num_workers`` (or ``numWorkers``).
This parameter controls how many parallel workers we want to have when training an XGBoostClassificationModel.
.. note:: Regarding OpenMP optimization

   By default, we allocate one core per XGBoost worker. Therefore, the OpenMP optimization within each XGBoost worker does not take effect, and the parallelization of training is achieved
   by running multiple workers (i.e. Spark tasks) at the same time.

   If you do want OpenMP optimization, you have to:

   1. set ``nthread`` to a value larger than 1 when creating XGBoostClassifier/XGBoostRegressor
   2. set ``spark.task.cpus`` in Spark to the same value as ``nthread`` (as sketched below)
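For example, a minimal sketch of such a configuration (the values below are illustrative, not recommendations) is to run 2 workers with 2 OpenMP threads each, while telling Spark to reserve 2 cores per task:

.. code-block:: scala

   // Illustrative values only: 2 XGBoost workers, each running 2 OpenMP threads.
   // spark.task.cpus must be set to the same value as nthread, e.g. when
   // submitting the job: spark-submit --conf spark.task.cpus=2 ...
   val xgbParam = Map("eta" -> 0.1f,
     "objective" -> "multi:softprob",
     "num_class" -> 3,
     "num_round" -> 100,
     "num_workers" -> 2,
     "nthread" -> 2)
   val xgbClassifier = new XGBoostClassifier(xgbParam).
     setFeaturesCol("features").
     setLabelCol("classIndex")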
Gang Scheduling
===============
XGBoost uses the `AllReduce <http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/>`_
algorithm to synchronize the stats, e.g. histogram values, of each worker during training. Therefore XGBoost4J-Spark requires that all of the ``nthread * numWorkers`` cores be available before training runs.

In a production environment where many users share the same cluster, it's hard to guarantee that your XGBoost4J-Spark application can get all the requested resources for every run. By default, the communication layer in XGBoost will block the whole application when it requires more resources to become available. This usually causes unnecessary resource waste, as it holds on to the resources that are already allocated while trying to claim more. Additionally, this usually happens silently and does not draw the attention of users.

XGBoost4J-Spark allows the user to set up a timeout threshold for claiming resources from the cluster. If the application cannot get enough resources within this time period, it fails instead of hanging indefinitely and wasting resources. To enable this feature, set the timeout with XGBoostClassifier/XGBoostRegressor:
.. code-block:: scala

   xgbClassifier.setTimeoutRequestWorkers(60000L)
or pass ``timeout_request_workers`` in the parameter map when building the XGBoostClassifier:
.. code-block:: scala

   val xgbParam = Map("eta" -> 0.1f,
     "max_depth" -> 2,
     "objective" -> "multi:softprob",
     "num_class" -> 3,
     "num_round" -> 100,
     "num_workers" -> 2,
     "timeout_request_workers" -> 60000L)
   val xgbClassifier = new XGBoostClassifier(xgbParam).
     setFeaturesCol("features").
     setLabelCol("classIndex")
If XGBoost4J-Spark cannot get enough resources to run the two XGBoost workers, the application fails. Users can set up an external mechanism to monitor the status of the application and get notified in such a case.
Checkpoint During Training
==========================
Transient failures are also commonly seen in production environments. To simplify the design of XGBoost,
we stop training if any of the distributed workers fails. However, if training fails after having run for a long time, it would be a great waste of resources.

We support creating checkpoints during training to facilitate more efficient recovery from failure. To enable this feature, set how many iterations we train before building each checkpoint with ``setCheckpointInterval``, and the location of checkpoints with ``setCheckpointPath``:
.. code-block:: scala

   xgbClassifier.setCheckpointInterval(2)
   xgbClassifier.setCheckpointPath("/checkpoint_path")
An equivalent way is to pass the parameters to XGBoostClassifier's constructor:
.. code-block:: scala

   val xgbParam = Map("eta" -> 0.1f,
     "max_depth" -> 2,
     "objective" -> "multi:softprob",
     "num_class" -> 3,
     "num_round" -> 100,
     "num_workers" -> 2,
     "checkpoint_path" -> "/checkpoints_path",
     "checkpoint_interval" -> 2)
   val xgbClassifier = new XGBoostClassifier(xgbParam).
     setFeaturesCol("features").
     setLabelCol("classIndex")
If the training fails during these 100 rounds, the next run of training will start by reading the latest checkpoint file in ``/checkpoints_path`` and resume from the iteration at which that checkpoint was built, continuing until the next failure or until the specified 100 rounds are completed.
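Recovery itself does not require a separate code path. As a sketch (assuming the same ``xgbParam`` map as above), simply re-submitting the same training job resumes from the most recent checkpoint:

.. code-block:: scala

   // Re-running fit() with an unchanged checkpoint_path picks up the latest
   // checkpoint under /checkpoints_path instead of starting at iteration 0.
   val recoveredModel = new XGBoostClassifier(xgbParam).
     setFeaturesCol("features").
     setLabelCol("classIndex").
     fit(training)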
