227 Commits

Author SHA1 Message Date
Jiaming Yuan
fa3715f584
[Dask] Asyncio support. (#5862) 2020-07-30 06:23:58 +08:00
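A rough sketch of what asynchronous usage looks like after this change, following the asynchronous pattern documented for the dask module; the exact awaitable surface depends on the XGBoost and dask versions, and the data here is synthetic:

```python
import asyncio
import xgboost as xgb
from dask import array as da
from dask.distributed import Client

async def train():
    # With an asynchronous client, the dask interface becomes awaitable.
    async with Client(asynchronous=True) as client:
        X = da.random.random((10_000, 10), chunks=(1_000, 10))
        y = da.random.random(10_000, chunks=(1_000,))
        dtrain = await xgb.dask.DaskDMatrix(client, X, y)
        output = await xgb.dask.train(client, {"tree_method": "hist"},
                                      dtrain, num_boost_round=10)
        return output["booster"]

booster = asyncio.run(train())
```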
Philip Hyunsu Cho
ac9136ee49
Further improvements and savings in Jenkins pipeline (#5904)
* Publish artifacts only on the master and release branches

* Build CUDA only for Compute Capability 7.5 when building PRs

* Run all Windows jobs in a single worker image

* Build nightly XGBoost4J SNAPSHOT JARs with Scala 2.12 only

* Show skipped Python tests on Windows

* Make Graphviz optional for Python tests

* Add back C++ tests

* Unstash xgboost_cpp_tests

* Fix label to CUDA 10.1

* Install cuPy for CUDA 10.1

* Install jsonschema

* Address reviewer's feedback
2020-07-18 03:30:40 -07:00
Jiaming Yuan
7c2686146e
Dask device dmatrix (#5901)
* Fix softprob with empty dmatrix.
2020-07-17 13:17:43 +08:00
Jiaming Yuan
029a8b533f
Simplify the data backends. (#5893) 2020-07-16 15:17:31 +08:00
Jiaming Yuan
a3ec964346
Accept iterator in device dmatrix. (#5783)
* Remove Device DMatrix.
2020-07-07 21:44:48 +08:00
Jiaming Yuan
048d969be4
Implement GK sketching on GPU. (#5846)
* Implement GK sketching on GPU.
* Strong tests on quantile building.
* Handle sparse dataset by binary searching the column index.
* Hypothesis test on dask.
2020-07-07 12:16:21 +08:00
Rory Mitchell
abdf894fcf
Add cupy to Windows CI (#5797)
* Add cupy to Windows CI

* Update Jenkinsfile-win64

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Update Jenkinsfile-win64

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Update tests/python-gpu/test_gpu_prediction.py

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-06-17 21:55:09 -07:00
Rory Mitchell
b47b5ac771
Use hypothesis (#5759)
* Use hypothesis

* Allow int64 array interface for groups

* Add packages to Windows CI

* Add to travis

* Make sure device index is set correctly

* Fix dask-cudf test

* appveyor
2020-06-16 12:45:59 +12:00
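As a rough illustration of the property-based style that hypothesis enables for these tests (the strategies, parameter ranges, and the property asserted below are made up for the example, not the ones used in the test suite):

```python
import numpy as np
import xgboost as xgb
from hypothesis import given, settings, strategies as st

@given(rows=st.integers(16, 200), cols=st.integers(1, 20),
       max_depth=st.integers(1, 8))
@settings(deadline=None, max_examples=20)
def test_training_is_well_behaved(rows, cols, max_depth):
    # hypothesis generates many (rows, cols, max_depth) combinations.
    rng = np.random.RandomState(1994)
    X, y = rng.randn(rows, cols), rng.randn(rows)
    dtrain = xgb.DMatrix(X, label=y)
    booster = xgb.train({"max_depth": max_depth, "tree_method": "hist"},
                        dtrain, num_boost_round=8)
    # Property: predictions must be finite for any generated input.
    assert np.all(np.isfinite(booster.predict(dtrain)))
```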
Rory Mitchell
359023c0fa
Speed up python test (#5752)
* Speed up tests

* Prevent DeviceQuantileDMatrix initialisation with numpy

* Use joblib.memory

* Use RandomState
2020-06-05 11:39:24 +12:00
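A sketch of the caching approach mentioned above: `joblib.Memory` lets an expensive synthetic dataset be generated once and reused across tests. The function name and cache path here are illustrative, not the ones in the test suite:

```python
import numpy as np
from joblib import Memory
from sklearn.datasets import make_regression

memory = Memory("./cachedir", verbose=0)

@memory.cache
def make_cached_regression(n_samples=10_000, n_features=50):
    # Computed on the first call, loaded from the on-disk cache afterwards.
    X, y = make_regression(n_samples=n_samples, n_features=n_features,
                           random_state=1994)
    return X, y

X, y = make_cached_regression()
```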
Jiaming Yuan
35e2205256
[dask] Return GPU Series when input is from cuDF. (#5710)
* Refactor predict function.
2020-05-28 17:51:20 +08:00
Jiaming Yuan
5af8161a1a
Implement Python data handler. (#5689)
* Define data handlers for DMatrix.
* Throw ValueError in the scikit-learn interface.
2020-05-22 11:53:55 +08:00
Jiaming Yuan
9ad40901a8
Upgrade to CUDA 10.0 (#5649) (#5652)
Co-authored-by: fis <jm.yuan@outlook.com>

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2020-05-11 22:27:36 +08:00
Rory Mitchell
b9649e7b8e
Refactor gpu_hist split evaluation (#5610)
* Refactor

* Rewrite evaluate splits

* Add more tests
2020-04-30 08:58:12 +12:00
Jiaming Yuan
7d93932423
Better message when no GPU is found. (#5594) 2020-04-26 10:00:57 +08:00
Jiaming Yuan
e726dd9902
Set device in device dmatrix. (#5596) 2020-04-25 13:42:53 +08:00
Jiaming Yuan
29a4cfe400
Group aware GPU sketching. (#5551)
* Group aware GPU weighted sketching.

* Distribute group weights to each data point.
* Relax the test.
* Validate input meta info.
* Fix metainfo copy ctor.
2020-04-20 17:18:52 +08:00
Jiaming Yuan
8b04736b81
[dask] dask cudf inplace prediction. (#5512)
* Add inplace prediction for dask-cudf.

* Remove Dockerfile.release, since it's not used anywhere

* Use Conda exclusively in CUDF and GPU containers

* Improve cupy memory copying.

* Add skip marks to tests.

* Add mgpu-cudf category on the CI to run all distributed tests.

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
2020-04-15 18:15:51 +08:00
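A rough sketch of inplace prediction through the dask interface with cuDF-backed data; `dask_cudf.from_cudf` and `xgb.dask.inplace_predict` are assumed to follow their documented APIs of this era, and the cluster setup and data are illustrative:

```python
import cudf
import dask_cudf
import numpy as np
import xgboost as xgb
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

with Client(LocalCUDACluster()) as client:
    df = cudf.DataFrame({"a": np.random.rand(10_000),
                         "b": np.random.rand(10_000),
                         "y": np.random.rand(10_000)})
    ddf = dask_cudf.from_cudf(df, npartitions=4)
    X, y = ddf[["a", "b"]], ddf["y"]

    dtrain = xgb.dask.DaskDMatrix(client, X, y)
    output = xgb.dask.train(client, {"tree_method": "gpu_hist"},
                            dtrain, num_boost_round=10)
    # Inplace prediction consumes the distributed GPU frame directly,
    # without building another DMatrix.
    preds = xgb.dask.inplace_predict(client, output["booster"], X)
```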
Rory Mitchell
15f40e51e9
Add support for dlpack, expose python docs for DeviceQuantileDMatrix (#5465) 2020-04-01 23:34:32 +13:00
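A short sketch of the two construction paths this touches: building a `DMatrix` from a dlpack capsule and building a `DeviceQuantileDMatrix` straight from device memory. cupy is used here, and passing device labels is an assumption about what this version accepts:

```python
import cupy as cp
import xgboost as xgb

X = cp.random.rand(1000, 10)
y = cp.random.rand(1000)

# dlpack path: consume a capsule exported by another GPU library.
dtrain = xgb.DMatrix(X.toDlpack(), label=y)
booster = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=10)

# DeviceQuantileDMatrix: quantised directly from device memory for gpu_hist.
dqm = xgb.DeviceQuantileDMatrix(X, label=y)
booster = xgb.train({"tree_method": "gpu_hist"}, dqm, num_boost_round=10)
```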
Jiaming Yuan
6601a641d7
Thread safe, inplace prediction. (#5389)
Normal prediction with DMatrix is now thread safe via locks. The newly added inplace prediction is lock-free and thread safe.

When data is on device (cupy, cudf), the returned data is also on device.

* Implementation for numpy, csr, cudf and cupy.

* Implementation for dask.

* Remove sync in simple dmatrix.
2020-03-30 15:35:28 +08:00
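A minimal sketch of the two paths described above: `Booster.inplace_predict` consumes the raw array directly instead of going through a `DMatrix`, and with device input (cupy, cuDF) the result stays on the device. The data below is synthetic:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(1000, 10), np.random.rand(1000)
booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, label=y),
                    num_boost_round=10)

# Classic path: goes through a DMatrix and is guarded by a lock.
p1 = booster.predict(xgb.DMatrix(X))

# Inplace path: no DMatrix construction, lock free, safe for concurrent callers.
p2 = booster.inplace_predict(X)
assert np.allclose(p1, p2)
```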
Rory Mitchell
13b10a6370
Device dmatrix (#5420) 2020-03-28 14:42:21 +13:00
Jiaming Yuan
0dd97c206b
Move thread local entry into Learner. (#5396)
* Move thread local entry into Learner.

This is an attempt to work around a CUDA context issue with static variables, where
the CUDA context can be released before the device vector.

* Add PredictionEntry to thread local entry.

This eliminates one copy of the prediction vector.

* Don't define CUDA C API in a namespace.
2020-03-07 15:37:39 +08:00
Jiaming Yuan
8d06878bf9
Deterministic GPU histogram. (#5361)
* Use a pre-rounding based method to obtain reproducible floating point
  summation.
* GPU Hist for regression and classification are bit-by-bit reproducible.
* Add doc.
* Switch to thrust reduce for `node_sum_gradient`.
2020-03-04 15:13:28 +08:00
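The actual change lives in the CUDA histogram kernels, but the pre-rounding idea can be shown in a few lines of Python: rounding every addend onto a power-of-two grid coarse enough that the whole sum is exact makes the result independent of summation order. This is a toy illustration only, not XGBoost's implementation:

```python
import math
import random

def pre_round(values):
    """Round values onto a grid so that summing them is exact in doubles."""
    n = len(values)
    max_abs = max(abs(v) for v in values) or 1.0
    # Grid spacing: a power of two chosen so all partial sums stay within
    # the 53-bit mantissa, hence every addition is exact and order-free.
    delta = 2.0 ** (math.ceil(math.log2(2 * n * max_abs)) - 52)
    return [round(v / delta) * delta for v in values]

grads = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
rounded = pre_round(grads)
total = sum(rounded)
random.shuffle(rounded)
assert sum(rounded) == total  # bit-identical regardless of summation order
```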
Rory Mitchell
24ad9dec0b
Testing hist_util (#5251)
* Rank tests

* Remove categorical split specialisation

* Extend tests to multiple features, switch to WQSketch

* Add tests for SparseCuts

* Add external memory quantile tests, fix some existing tests
2020-02-14 14:36:43 +13:00
Rory Mitchell
1b3947d929
Make some GPU tests deterministic (#5229) 2020-01-26 11:53:07 +13:00
Rory Mitchell
aa9a68010b
uint not supported in cudf (#5225) 2020-01-23 16:59:18 +13:00
Rory Mitchell
9c56480c61
Support dmatrix construction from cupy array (#5206) 2020-01-22 13:15:27 +13:00
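With this change a cupy array can be handed to the `DMatrix` constructor directly; a minimal sketch, keeping labels on the host to stay conservative about what this particular PR covers:

```python
import cupy as cp
import numpy as np
import xgboost as xgb

X = cp.random.rand(5000, 16)        # feature matrix stays in GPU memory
y = np.random.rand(5000)            # labels kept on the host here

dtrain = xgb.DMatrix(X, label=y)    # consumed via __cuda_array_interface__
booster = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=10)
```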
Jiaming Yuan
7b65698187
Enforce correct data shape. (#5191)
* Fix syncing DMatrix columns.
* notes for tree method.
* Enable feature validation for all interfaces except for jvm.
* Better tests for boosting from predictions.
* Disable validation on JVM.
2020-01-13 15:48:17 +08:00
Jiaming Yuan
ebc86a3afa
Disable parameter validation for Scikit-Learn interface. (#5167)
* Disable parameter validation for now.

Scikit-Learn passes all parameters down to XGBoost, whether they are used or
not.

* Add option `validate_parameters`.
2020-01-07 11:17:31 +08:00
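The option added here is an ordinary booster parameter; a small sketch of opting back into validation, which reports parameters that are passed but never used (data and values are illustrative):

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(100, 5), np.random.rand(100)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "validate_parameters": True,   # complain about parameters XGBoost never uses
    "tree_method": "hist",
    "eta": 0.3,
}
booster = xgb.train(params, dtrain, num_boost_round=5)
```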
Jiaming Yuan
61286c6e8f
Fix wrapping GPU ID and prevent data copying. (#5160)
* Removed some data copying.

* Make sure gpu_id is valid before any configuration is carried out.
2019-12-27 16:51:08 +08:00
sriramch
ee81ba8e1f Implementation of the MAP ranking algorithm on GPU (#5129)
* - Implementation of the MAP ranking algorithm
  - Also addressed the suggestions from the earlier ranking PRs
  - Made some performance improvements to the NDCG algorithm as well
2019-12-27 12:05:37 +13:00
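A rough sketch of exercising the new objective: a ranking task needs query group sizes on the DMatrix, and `rank:map` combined with `gpu_hist` runs the GPU path this commit adds. The data and group sizes below are synthetic:

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X = rng.rand(300, 10)
y = rng.randint(0, 2, size=300)      # binary relevance labels
group = np.full(10, 30)              # ten queries of 30 documents each

dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group(group)

params = {"objective": "rank:map", "tree_method": "gpu_hist", "eta": 0.1}
booster = xgb.train(params, dtrain, num_boost_round=20)
```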
Jiaming Yuan
ced3660f60
Tests for empty dmatrix. (#5159) 2019-12-26 11:51:54 +08:00
Jiaming Yuan
298ebe68ac
[Breaking] Remove learning_rates in Python. (#5155)
* Remove `learning_rates`.

It's been deprecated since we have callbacks.

* Set `before_iteration` of `reset_learning_rate` to False to preserve
  the initial learning rate and comply with the term "reset".

Closes #4709.

* Tests for various `tree_method`.
2019-12-24 14:25:48 +08:00
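The replacement for the removed argument is the callback mentioned above; a minimal sketch of scheduling the learning rate per boosting round with the callback API of this era (the decay schedule itself is made up):

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(500, 10), np.random.rand(500)
dtrain = xgb.DMatrix(X, label=y)

# One learning rate per boosting round, replacing the old `learning_rates=`.
rates = [0.3 * (0.9 ** i) for i in range(50)]

booster = xgb.train(
    {"tree_method": "hist"},
    dtrain,
    num_boost_round=50,
    callbacks=[xgb.callback.reset_learning_rate(rates)],
)
```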
Jiaming Yuan
b915788708
Remove benchmark code in GPU test. (#5141)
* Update Jenkins script.
2019-12-21 11:00:21 +08:00
Jiaming Yuan
3136185bc5
JSON configuration IO. (#5111)
* Add saving/loading JSON configuration.
* Implement Python pickle interface with new IO routines.
* Basic tests for training continuation.
2019-12-15 17:31:53 +08:00
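From Python, the new IO routines surface as a JSON document describing the booster's configuration; a small sketch of dumping and restoring it, with pickling now riding on the same path. Attribute names inside the JSON are version dependent:

```python
import json
import pickle
import numpy as np
import xgboost as xgb

X, y = np.random.rand(200, 8), np.random.rand(200)
booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, label=y),
                    num_boost_round=5)

# Internal configuration serialised as a JSON document.
config_str = booster.save_config()
print(json.loads(config_str)["learner"]["gradient_booster"]["name"])

# The same document can be loaded back; pickling uses these routines too.
booster.load_config(config_str)
restored = pickle.loads(pickle.dumps(booster))
```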
Jiaming Yuan
608ebbe444
Fix GPU ID and prediction cache from pickle (#5086)
* Hack for saving GPU ID.

* Declare prediction cache on GBTree.

* Add a simple test.

* Add `auto` option for GPU Predictor.
2019-12-07 16:02:06 +08:00
Rong Ou
0afcc55d98 Support multiple batches in gpu_hist (#5014)
* Initial external memory training support for GPU Hist tree method.
2019-11-16 14:50:20 +08:00
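A sketch of what external memory training looked like at the time: the data path carries a cache suffix, batches are streamed from that on-disk cache, and combined with `gpu_hist` this exercises the new multi-batch path. The file `train.libsvm` is assumed to exist:

```python
import xgboost as xgb

# The '#' suffix asks XGBoost to page the data through an on-disk cache
# instead of holding everything in memory at once.
dtrain = xgb.DMatrix("train.libsvm#train.cache")

params = {"tree_method": "gpu_hist", "max_depth": 6}
booster = xgb.train(params, dtrain, num_boost_round=10)
```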
Jiaming Yuan
97abcc7ee2
Extract interaction constraint from split evaluator. (#5034)
*  Extract interaction constraints from split evaluator.

The reason for doing so is mostly model IO, where num_feature and interaction_constraints are copied in the split evaluator. Also, the interaction constraint is itself a feature selector, acting like the column sampler, so it is inefficient to bury it deep in the evaluator chain. Lastly, removing another copied parameter is a win.

*  Enable interaction constraints for the approx tree method.

Now that the implementation is split out from the evaluator class, it is also enabled for the approx method.

*  Remove obsolete code in colmaker.

It was never documented nor actually used in the real world, and there isn't a single test for those code blocks.

*  Unify the types used for rows and columns.

As input datasets approach billions of rows, incorrect use of int is subject to overflow, and signed integer overflow is undefined behaviour. This PR starts unifying the index types to unsigned integers. There is an optimization that can exploit this undefined behaviour, but after some testing I don't see it being beneficial to XGBoost.
2019-11-14 20:11:41 +08:00
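A small sketch of the user-facing parameter this code backs; with the extraction described above it can now also be used with the approx tree method. The feature grouping below is arbitrary:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(500, 6), np.random.rand(500)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "tree_method": "approx",
    # Features may only interact within their own group.
    "interaction_constraints": "[[0, 1, 2], [3, 4, 5]]",
}
booster = xgb.train(params, dtrain, num_boost_round=10)
```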
sriramch
2abe69d774 NDCG LTR implementation on GPU (#5004)
* - NDCG LTR implementation on GPU
  - This is a follow-up to the pairwise LTR implementation
2019-11-13 11:21:04 +13:00
Jiaming Yuan
7663de956c
Run training with empty DMatrix. (#4990)
This makes GPU Hist robust in distributed environments, as some workers might not
be associated with any data in either training or evaluation.

* Disable rabit mock test for now: See #5012 .

* Disable dask-cudf test at prediction for now: See #5003

* Launch the dask job on all workers even though they might not have any data.
* Check 0 rows in elementwise evaluation metrics.

   Using AUC and AUC-PR still throws an error.  See #4663 for a robust fix.

* Add tests for edge cases.
* Add `LaunchKernel` wrapper handling zero sized grid.
* Move some parts of allreducer into a cu file.
* Don't validate feature names when the booster is empty.

* Sync number of columns in DMatrix.

  As num_feature is required to be the same across all workers in data split
  mode.

* Filtering in the dask interface now by default syncs all boosters that are not
empty, instead of using rank 0.

* Fix Jenkins' GPU tests.

* Install dask-cuda from source in Jenkins' test.

  Now all tests are actually running.

* Restore GPU Hist tree synchronization test.

* Check UUID of running devices.

  The check is only performed on CUDA versions >= 10.x, as 9.x doesn't have the UUID field.

* Fix CMake policy and project variables.

  Use xgboost_SOURCE_DIR uniformly, add policy for CMake >= 3.13.

* Fix copying data to CPU

* Fix race condition in cpu predictor.

* Fix duplicated DMatrix construction.

* Don't download extra nccl in CI script.
2019-11-06 16:13:13 +08:00
Jiaming Yuan
7e72a12871
Don't set_params at the end of set_state. (#4947)
* Don't set_params at the end of set_state.

* Also fix another issue found in dask prediction.

* Add note about prediction.

Don't support other prediction modes at the moment.
2019-10-15 10:08:26 -04:00
Jiaming Yuan
3d46bd0fa5
Ignore columnar alignment requirement. (#4928)
* Better error message for wrong type.
* Fix stride size.
2019-10-13 06:41:43 -04:00
Philip Hyunsu Cho
f7487e4c2a [CI] Run cuDF tests in Jenkins CI server (#4927) 2019-10-13 00:04:54 -04:00
Jiaming Yuan
4bbf062ed3
[Breaking] Update sklearn interface. (#4929)
* Remove nthread, seed, silent. Add tree_method, gpu_id, num_parallel_tree. Fix #4909.
* Check data shape. Fix #4896.
* Check element of eval_set is tuple. Fix #4875
*  Add doc for random_state with hogwild. Fixes #4919
2019-10-12 02:50:09 -04:00
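A before/after sketch of the breaking change: the removed constructor arguments and their replacements, plus the newly exposed ones. Specific values are illustrative:

```python
from xgboost import XGBClassifier

# Removed: nthread, seed, silent.
# clf = XGBClassifier(nthread=4, seed=0, silent=True)

# Replacements and newly exposed parameters:
clf = XGBClassifier(
    n_jobs=4,                 # replaces nthread
    random_state=0,           # replaces seed
    verbosity=1,              # replaces silent
    tree_method="hist",       # now settable from the sklearn wrapper
    num_parallel_tree=1,
)
```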
Jiaming Yuan
6c9b6f11da Use cudf.concat explicitly. (#4918)
* Use `cudf.concat` explicitly.

* Add test.
2019-10-10 16:02:10 +13:00
Jiaming Yuan
d30e63a0a5
Support feature names/types for cudf. (#4902)
* Implement most of the pandas procedure for cudf except for type conversion.
* Requires an array of interfaces in metainfo.
2019-09-29 15:07:51 -04:00
Vibhu Jawa
2fa8b359e0 Add support for cudf.Series (#4891) 2019-09-25 23:52:28 -04:00
Jiaming Yuan
5374f52531
Complete cudf support. (#4850)
* Handles missing value.
* Accept all floating point and integer types.
* Move to cudf 9.0 API.
* Remove requirement on `null_count`.
* Arbitrary column types support.
2019-09-16 23:52:00 -04:00
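With the items above in place, a cuDF DataFrame of mixed floating point and integer columns can be fed to the DMatrix constructor directly; a minimal sketch, assuming a working cudf installation and keeping labels on the host:

```python
import cudf
import numpy as np
import xgboost as xgb

f0 = np.random.rand(1000)
f0[::50] = np.nan                    # missing values are handled
df = cudf.DataFrame({
    "f0": f0,
    "f1": np.random.randint(0, 10, size=1000).astype(np.int32),
})
y = np.random.rand(1000)

dtrain = xgb.DMatrix(df, label=y)
booster = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=10)
```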
Rong Ou
38ab79f889 Make HostDeviceVector single gpu only (#4773)
* Make HostDeviceVector single gpu only
2019-08-26 09:51:13 +12:00
Jiaming Yuan
3fa2ceb193
Add self. (#4794) 2019-08-20 14:41:30 +08:00
Jiaming Yuan
9700776597 Cudf support. (#4745)
* Initial support for cudf integration.

* Add two C APIs for consuming data and metainfo.

* Add CopyFrom for SimpleCSRSource as a generic function to consume the data.

* Add FromDeviceColumnar for consuming device data.

* Add new MetaInfo::SetInfo for consuming label, weight etc.
2019-08-19 16:51:40 +12:00