* Fail GPU CI after test failure
* Fix GPU linear tests
* Reduced number of GPU tests to speed up CI
* Remove static allocations of device memory
* Resolve illegal memory access for updater_fast_hist.cc
* Fix broken R tests dependency
* Update python install documentation for GPU
* Upgrading to NCCL2
* Part II of the NCCL2 upgrade
- Doc updates to build with nccl2
- Dockerfile.gpu update for a correct CI build with nccl2
- Updated the FindNccl module so that the NCCL_ROOT environment variable takes precedence
* Upgrading the CI workflow to CUDA 9.2, since it has the NCCL2 binaries available
* Added the NCCL2 license and copied the NCCL binaries into /usr so the FindNccl module can find them
* Set the LD_LIBRARY_PATH variable to pick up the NCCL2 binary at runtime
* Add the NCCL2 library download instructions to Dockerfile.release as well
* Use NCCL2 as a static library
* Add qid support for https://github.com/dmlc/xgboost/issues/2748
* Change names
* Change spaces
* Change qid to bst_uint type
* Change qid type to size_t
* Change first qid to SIZE_MAX
* Change qid type from size_t to uint64_t
* Update dmlc-core
* Fix qids naming error
* Fix group_ptr_ error
* Style fix
* Add qid handling logic to SparsePage
* New MetaInfo format + backward compatibility fix
The old MetaInfo format (1.0) doesn't contain the qid field. We still want to be able to read MetaInfo files saved in the old format, so we also define a new format (2.0) that contains the qid field. This way, we can distinguish files that contain qid from those that do not.
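To illustrate the idea, here is a minimal Python sketch of a version-gated read; the field names, layout, and version encoding are hypothetical and do not mirror MetaInfo's actual binary layout:
```python
import struct

def read_vector(f, fmt):
    # Read a length-prefixed array (illustrative layout only).
    n = struct.unpack("<Q", f.read(8))[0]
    return list(struct.unpack(f"<{n}{fmt}", f.read(n * struct.calcsize(fmt))))

def read_meta_info(f):
    # A 1.0 file simply lacks the qid field; a 2.0 file declares its
    # version up front, so the reader knows whether to expect it.
    version = struct.unpack("<I", f.read(4))[0]
    info = {"labels": read_vector(f, "f"), "group_ptr": read_vector(f, "I")}
    if version >= 2:  # qid exists only in format 2.0+
        info["qids"] = read_vector(f, "Q")
    return info
```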
* Update MetaInfo test
* Simplify group assignment logic
* Explicitly set qid=nullptr in NativeDataIter
NativeDataIter's callback does not support the qid field, so users of NativeDataIter need to call the setGroup() function separately to set group information.
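As a sketch of the same pattern in the Python API, where the analogous call is DMatrix.set_group() (data and group sizes here are illustrative):
```python
import numpy as np
import xgboost as xgb

# Group information cannot come through the iterator callback,
# so it is attached to the DMatrix after construction.
X = np.random.rand(7, 4)
y = np.random.randint(2, size=7)
dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group([3, 4])  # two query groups: first 3 rows, then 4 rows
```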
* Save qids_ in SaveBinary()
* Upgrade dmlc-core submodule
* Add a test for reading qid
* Add contributor
* Check the size of qids_
* Document qid format
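For reference, qid is specified in LIBSVM-style text files by tagging each row with its query group; rows with the same qid are expected to be contiguous (values below are illustrative):
```
1 qid:1 1:0.4 2:0.3
0 qid:1 1:0.1
0 qid:2 1:0.5 2:0.8
1 qid:2 2:0.2
```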
* Refactor to allow for custom regularisation methods
* Implement compositional SplitEvaluator framework
* Fixed segfault when no monotone_constraints are supplied.
* Change pid to parentID
* test_monotone_constraints.py now passes
* Refactor ColMaker and DistColMaker to use SplitEvaluator
* Performance optimisation when no monotone_constraints are specified
* Fix linter messages
* Fix a few more linter errors
* Update the amalgamation
* Add bounds check
* Add check for leaf node
* Fix linter error in param.h
* Fix clang-tidy errors on CI
* Fix incorrect function name
* Fix clang-tidy error in updater_fast_hist.cc
* Enable SSE2 for Win32 R MinGW
Addresses https://github.com/dmlc/xgboost/pull/3335#issuecomment-400535752
* Add contributor
* Use SparsePage as the single CSR matrix representation
* Simplify DMatrix methods
* Reduce statefulness of batch iterators
* BREAKING CHANGE: Remove prob_buffer_row parameter. Users are instead recommended to sample their dataset as a preprocessing step before using XGBoost.
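With prob_buffer_row gone, row subsampling for memory savings happens outside XGBoost. A minimal sketch of the recommended preprocessing, assuming the Python API (values illustrative):
```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100000, 20)
y = np.random.randint(2, size=100000)

# Keep ~10% of rows up front, replacing the old prob_buffer_row=0.1 behaviour.
mask = np.random.rand(X.shape[0]) < 0.1
dtrain = xgb.DMatrix(X[mask], label=y[mask])
```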
* GPU binning and compression.
- binning and index compression are done inside the DeviceShard constructor
- in case of a DMatrix with multiple row batches, it is first converted into a single row batch
Currently, `CLIPredict()` saves prediction results with the default 6-digit precision, which causes precision loss. This PR raises the precision so that the conversion back to `bst_float` is lossless; a short illustration follows the commit list below.
Related: #3298.
* Increase precision of bst_float values in tree dumps
* Fix lint error and switch precision to the right float variable
* Fix clang-tidy error
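To see why 6 digits are not enough: a 32-bit float needs up to 9 significant decimal digits to round-trip losslessly. A quick Python check (illustrative, not the actual C++ code path):
```python
import numpy as np

x = np.float32(1) / np.float32(3)   # a representative bst_float value
print(np.float32(f"{x:.6g}") == x)  # False: 6 digits lose low-order bits
print(np.float32(f"{x:.9g}") == x)  # True: 9 digits round-trip any float32
```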
* Multi-GPU HostDeviceVector.
- HostDeviceVector instances can now span multiple devices, defined by GPUSet struct
- the interface of HostDeviceVector has been modified accordingly
- GPU objective functions are now multi-GPU
- GPU predicting from cache is now multi-GPU
- omp_set_num_threads() calls are now avoided
- other minor changes
* rank_metric: add AUC-PR
Implementation of the AUC-PR calculation for weighted data, as proposed by Keilwagen, Grosse, and Grau (https://doi.org/10.1371/journal.pone.0092209). A usage sketch follows this commit list.
* rank_metric: fix lint warnings
* Implement tests for AUC-PR and fix implementation
* add aucpr to documentation for other languages
* fix rebase conflict
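A usage sketch with the Python API, where the new metric is selected via eval_metric (data here is illustrative):
```python
import numpy as np
import xgboost as xgb

X = np.random.rand(500, 10)
y = np.random.randint(2, size=500)
w = np.random.rand(500)  # per-instance weights

dtrain = xgb.DMatrix(X, label=y, weight=w)
params = {"objective": "binary:logistic", "eval_metric": "aucpr"}
bst = xgb.train(params, dtrain, num_boost_round=10, evals=[(dtrain, "train")])
```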
* [core] additional gblinear improvements
* [R] callback for gblinear coefficients history
* force eta=1 for gblinear python tests
* add top_k to GreedyFeatureSelector (usage sketch after this commit list)
* set eta=1 in shotgun test
* [core] fix SparsePage processing in gblinear; col-wise multithreading in greedy updater
* set sorted flag within TryInitColData
* gblinear tests: use scale, add external memory test
* fix multiclass for greedy updater
* fix whitespace
* fix typo
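A sketch of the gblinear knobs touched by this PR, assuming the Python API (values illustrative):
```python
import numpy as np
import xgboost as xgb

X = np.random.rand(1000, 50)
y = X[:, 0] * 2.0 + np.random.randn(1000) * 0.1
dtrain = xgb.DMatrix(X, label=y)

params = {
    "booster": "gblinear",
    "updater": "coord_descent",   # greedy selection runs under this updater
    "feature_selector": "greedy",
    "top_k": 5,                   # consider only the top 5 features per pass
    "eta": 1.0,
}
bst = xgb.train(params, dtrain, num_boost_round=20)
```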
* Extended monotonic constraints support to the 'hist' tree method (usage sketch after this commit list).
* Added monotonic constraints tests.
* Fix the signature of NoConstraint::CalcSplitGain()
* Document monotonic constraint support in 'hist'
* Update signature of Update to account for latest refactor
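A usage sketch with the Python API; the constraint string follows the documented monotone_constraints format, one entry per feature (data illustrative):
```python
import numpy as np
import xgboost as xgb

X = np.random.rand(2000, 2)
y = X[:, 0] - X[:, 1] + np.random.randn(2000) * 0.01
dtrain = xgb.DMatrix(X, label=y)

params = {
    "tree_method": "hist",             # constraints now work with 'hist' too
    "monotone_constraints": "(1,-1)",  # increasing in f0, decreasing in f1
}
bst = xgb.train(params, dtrain, num_boost_round=50)
```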
* Replaced std::vector-based interfaces with HostDeviceVector-based interfaces.
- replacement was performed in the learner, boosters, predictors,
updaters, and objective functions
- only interfaces used in training were replaced;
interfaces like PredictInstance() still use std::vector
- refactoring necessary for replacement of interfaces was also performed,
such as using HostDeviceVector in prediction cache
* HostDeviceVector-based interfaces for custom objective function example plugin.
In line 461, `size_t offset = 0;` should be declared before any calculation; otherwise it causes a compilation error:
```
I:\Libraries\xgboost\src\c_api\c_api.cc(416): error C2146: Missing ";" before "offset" [I:\Libraries\xgboost\build\objxgboost.vcxproj]
```
* Add interaction effects and Cox loss (usage sketch after this commit list)
* Minimize whitespace changes
* Cox loss no longer needs a pre-sorted dataset.
* Address code review comments
* Remove mem check, rename to pred_interactions, include bias
* Make lint happy
* More lint fixes
* Fix cox loss indexing
* Fix main effects and tests
* Fix lint
* Use half interaction values on the off-diagonals
* Fix lint again
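A sketch of the new prediction options in the Python API; pred_contribs and pred_interactions are the flags exposed on Booster.predict() (data illustrative):
```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y = np.random.randint(2, size=100)
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)

# Per-feature contributions: shape (n_samples, n_features + 1); last column is the bias.
contribs = bst.predict(dtrain, pred_contribs=True)

# Pairwise interaction effects: shape (n_samples, n_features + 1, n_features + 1);
# as noted above, off-diagonal entries carry half of each interaction's value.
interactions = bst.predict(dtrain, pred_interactions=True)
```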
- thrust::copy() called from dvec::copy() for gpairs invoked a GPU kernel instead of
cudaMemcpy()
- this resulted in illegal memory access if the GPU running the kernel could not access
the data being copied
- new version of dvec::copy() for thrust::device_ptr iterators calls cudaMemcpy(),
avoiding the problem.
* Added GPU objective function and no-copy interface.
- xgboost::HostDeviceVector<T> syncs automatically between host and device
- no-copy interfaces have been added
- default implementations just sync the data to host
and call the implementations with std::vector
- GPU objective function, predictor, and histogram updater process data
directly on the GPU
* Fix #2905
* Fix gpu_exact test failures
* Fix bug in GPU prediction where multiple calls to batch prediction can produce incorrect results
* Fix GPU documentation formatting
- Implement colsampling, subsampling for gpu_hist_experimental
- Optimised multi-GPU implementation for gpu_hist_experimental
- Make NCCL optional
- Add Volta architecture flag
- Optimise RegLossObj
- Add timing utilities for debug verbose mode
- Bump required CUDA version to 8.0