1484 Commits

Author SHA1 Message Date
Jiaming Yuan
6e18d3a290
[pyspark] Handle the device parameter in pyspark. (#9390)
- Handle the new `device` parameter in PySpark.
- Deprecate the old `use_gpu` parameter.
2023-07-18 08:47:03 +08:00
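A minimal sketch of the user-facing change, assuming the `xgboost.spark` estimators; parameter values other than the device selection are illustrative only:

```python
from xgboost.spark import SparkXGBClassifier

# Deprecated: GPU training requested through the boolean flag.
clf_old = SparkXGBClassifier(use_gpu=True, num_workers=4)

# New: the unified `device` parameter. Ordinals are managed by Spark,
# so plain "cuda" is used rather than "cuda:0".
clf_new = SparkXGBClassifier(device="cuda", num_workers=4)
```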
Jiaming Yuan
0a07900b9f
Fix integer overflow. (#9380) 2023-07-15 21:11:02 +08:00
Jiaming Yuan
16eb41936d
Handle the new device parameter in dask and demos. (#9386)

- Check that no ordinal is specified in the dask interface.
- Update demos.
- Update dask doc.
- Update the condition for QDM.
2023-07-15 19:11:20 +08:00
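A hedged sketch of the corresponding dask usage, assuming the `xgboost.dask` interface and a running distributed client; the training data `X`, `y` are placeholders:

```python
import xgboost as xgb
from dask.distributed import Client

def train_on_gpus(client: Client, X, y):
    # Quantile-based DMatrix variant used for hist training.
    dtrain = xgb.dask.DaskQuantileDMatrix(client, X, y)
    # The device is given without an ordinal ("cuda", not "cuda:0");
    # each dask worker is expected to own a single GPU.
    return xgb.dask.train(
        client,
        {"tree_method": "hist", "device": "cuda"},
        dtrain,
        num_boost_round=100,
    )
```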
Jiaming Yuan
04aff3af8e
Define the new device parameter. (#9362) 2023-07-13 19:30:25 +08:00
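A short sketch of the parameter itself, assuming the single-node Python package with a CUDA-enabled build; the synthetic data is illustrative only:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(256, 8), np.random.rand(256)

# `device` selects where computation happens: "cpu", "cuda", or a specific
# ordinal such as "cuda:0". It replaces the older gpu_id-style parameters.
reg = xgb.XGBRegressor(tree_method="hist", device="cuda:0", n_estimators=10)
reg.fit(X, y)
```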
Rong Ou
3632242e0b
Support column split with GPU quantile (#9370) 2023-07-11 12:15:56 +08:00
Jiaming Yuan
97ed944209
Unify the hist tree method for different devices. (#9363) 2023-07-11 10:04:39 +08:00
Jiaming Yuan
20c52f07d2
Support exporting cut values (#9356) 2023-07-08 15:32:41 +08:00
edumugi
c3124813e8
Support numpy vertical split (#9365) 2023-07-08 13:18:12 +08:00
Jiaming Yuan
59787b23af
Allow empty page in external memory. (#9361) 2023-07-08 09:24:35 +08:00
Rong Ou
15ca12a77e
Fix NCCL test hang (#9367) 2023-07-07 11:21:35 +08:00
Jiaming Yuan
41c6813496
Preserve order of saved updaters config. (#9355)
- Save the updater sequence as an array instead of an object.
- Warn only once.

Compatibility is kept, but we should be able to break it later: the config is not loaded from pickled models, and it is declared to be unstable.
2023-07-05 20:20:07 +08:00
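A hedged illustration of where the saved updater sequence can be inspected, assuming a trained booster; the exact location of the `updater` field in the JSON is an assumption and may differ between versions:

```python
import json
import numpy as np
import xgboost as xgb

X, y = np.random.rand(128, 4), np.random.rand(128)
booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, y), num_boost_round=5)

config = json.loads(booster.save_config())
# The updater sequence is now serialized as an ordered array rather than a
# name-keyed object, so the order is preserved when the config is reloaded.
print(config["learner"]["gradient_booster"]["updater"])
```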
Jiaming Yuan
645037e376
Improve test coverage with predictor configuration. (#9354)

- Test with ext memory.
- Test with QDM.
- Test with dart.
2023-07-05 15:17:22 +08:00
Jiaming Yuan
d0916849a6
Remove unused weight from buffer for cat features. (#9341) 2023-07-04 01:07:09 +08:00
Jiaming Yuan
39390cc2ee
[breaking] Remove the predictor param, allow fallback to prediction using DMatrix. (#9129)
- A `DeviceOrd` struct is implemented to indicate the device. It will eventually replace the `gpu_id` parameter.
- The `predictor` parameter is removed.
- Fallback to `DMatrix` when `inplace_predict` is not available.
- The heuristic for choosing a predictor is only used during training.
2023-07-03 19:23:54 +08:00
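A small sketch of the prediction path after this change, assuming the core Python API; the fallback behavior is summarized in the comments:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(512, 8), np.random.rand(512)
booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, y), num_boost_round=10)

# There is no `predictor` parameter anymore; the device used for prediction
# follows the booster configuration.
predt_inplace = booster.inplace_predict(X)

# When in-place prediction is not available for an input type, prediction
# falls back to going through a DMatrix.
predt_dmatrix = booster.predict(xgb.DMatrix(X))
```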
Rong Ou
3a0f787703
Support column split in GPU predictor (#9343) 2023-07-03 04:05:34 +08:00
Rong Ou
f90771eec6
Fix device communicator dependency (#9346) 2023-06-29 10:34:30 +08:00
Jiaming Yuan
f4798718c7
Use hist as the default tree method. (#9320) 2023-06-27 23:04:24 +08:00
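A minimal sketch of the behavioral change, assuming the sklearn wrapper with synthetic data:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(256, 8), np.random.rand(256)

# No tree_method specified: the histogram-based `hist` method is now the
# default instead of the old `auto` heuristic.
model = xgb.XGBRegressor(n_estimators=10).fit(X, y)
```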
Jiaming Yuan
bc267dd729
Use ptr from mmap for GHistIndexMatrix and ColumnMatrix. (#9315)

- Define a resource for holding various types of memory pointers.
- Define ref vector for holding resources.
- Swap the underlying resources for GHist and ColumnM.
- Add documentation for current status.
- s390x support is removed. It should still work if you can compile XGBoost; all the old workaround code did was get GCC to compile.
2023-06-27 19:05:46 +08:00
Jiaming Yuan
54da4b3185
Cleanup to prepare for using mmap pointer in external memory. (#9317)
- Update SparseDMatrix comment.
- Use a pointer in the bitfield. We will replace the `std::vector<bool>` in `ColumnMatrix` with a bitfield.
- Clean up the page source. The timer is removed as it's inaccurate once we swap the mmap pointer into the page.
2023-06-22 06:43:11 +08:00
Jiaming Yuan
ee6809e642
Use mmap for external memory. (#9282)
- Have basic infrastructure for mmap.
- Release file write handle.
2023-06-19 18:52:55 +08:00
Rong Ou
d8beb517ed
Support bitwise allreduce in NCCL communicator (#9300) 2023-06-17 01:56:50 +08:00
Rong Ou
e70810be8a
Refactor device communicator to make allreduce more flexible (#9295) 2023-06-14 03:53:03 +08:00
ZHAOKAI WANG
2b76061659
remove redundant method in expand_entry (#9283) 2023-06-10 05:18:21 +08:00
Jiaming Yuan
ea0deeca68
Disable dense optimization in hist for distributed training. (#9272) 2023-06-10 02:31:34 +08:00
Jiaming Yuan
1fcc26a6f8
Set ndcg to default for LTR. (#8822)
- Add documentation.
- Add tests.
- Use `ndcg` with `topk` as default.
2023-06-09 23:31:33 +08:00
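A hedged sketch of learning-to-rank usage after this change, assuming the sklearn ranking wrapper; the synthetic relevance labels and query ids are placeholders:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 5, size=100)              # graded relevance labels
qid = np.sort(rng.integers(0, 10, size=100))  # query ids, grouped together

# The evaluation metric for learning-to-rank now defaults to `ndcg`
# truncated with a top-k cutoff.
ranker = xgb.XGBRanker(objective="rank:ndcg", n_estimators=10)
ranker.fit(X, y, qid=qid)
```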
Rong Ou
ff122d61ff
More tests for cpu predictor with column split (#9270) 2023-06-08 22:47:19 +08:00
ZHAOKAI WANG
84d3fcb7ea
Fix cpu_predictor categorical feature dispatch (#9256) 2023-06-08 01:24:04 +08:00
Jiaming Yuan
0cba2cdbb0
Support linalg data structures in check device. (#9243) 2023-06-06 09:47:24 +08:00
Rong Ou
962a20693f
More support for column split in cpu predictor (#9244)
- Added column split support to `PredictInstance` and `PredictLeaf`.
- Refactoring of tests.
2023-06-05 08:05:38 +08:00
Jiaming Yuan
9fbde21e9d
Rework the precision metric. (#9222)
- Rework the precision metric for both CPU and GPU.
- Mention it in the document.
- Cleanup old support code for GPU ranking metric.
- Deterministic GPU implementation.

* Drop support for classification.

2023-06-02 20:49:43 +08:00
ZHAOKAI WANG
fa2ab1f021
Fix spelling in the TreeRefresher note (#9223) 2023-05-31 20:27:27 +08:00
Jiaming Yuan
17fd3f55e9
Optimize adapter element counting on GPU. (#9209)
- Implement a simple `IterSpan` for passing iterators with size.
- Use shared memory for column size counts.
- Use one thread for each sample in row count to reduce atomic operations.
2023-05-30 23:28:43 +08:00
Jiaming Yuan
097f11b6e0
Support CUDA f16 without transformation. (#9207)
- Support f16 from cupy.
- Include CUDA header explicitly.
- Cleanup cmake nvtx support.
2023-05-30 20:54:31 +08:00
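A hedged sketch of the cupy f16 path, assuming a CUDA-enabled build with cupy installed; the data shapes are illustrative only:

```python
import cupy as cp
import xgboost as xgb

X = cp.random.standard_normal((1000, 16), dtype=cp.float32).astype(cp.float16)
y = cp.random.standard_normal(1000, dtype=cp.float32)

# float16 cupy arrays are consumed directly; previously the data had to be
# transformed (e.g. cast to float32) before being passed to XGBoost.
dtrain = xgb.QuantileDMatrix(X, y)
booster = xgb.train({"tree_method": "hist", "device": "cuda"}, dtrain, 10)
```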
Jiaming Yuan
ae7450ce54
Skip optional synchronization in thrust. (#9212) 2023-05-30 17:23:09 +08:00
Jiaming Yuan
03bc6e6427
Remove unused variables. (#9210)
- Remove unused variables.
- Remove signed comparison warnings.
2023-05-28 05:24:15 +08:00
Jiaming Yuan
053aababd4
Avoid thrust logical operation. (#9199)
The Thrust implementation of `thrust::all_of/any_of/none_of` divides the input into small batches so that it can bail out early. This is not ideal for data validation, where we expect all data to be valid: the strategy leads to excessive kernel launches and stream synchronizations.

* Use reduce from dh instead.
2023-05-27 01:36:58 +08:00
Rong Ou
5b69534b43
Support column split in multi-target hist (#9171) 2023-05-26 16:56:05 +08:00
Rong Ou
acd363033e
Fix running MGPU gtests (#9200) 2023-05-26 05:26:38 +08:00
Stephan T. Lavavej
7375bd058b
Fix IndexTransformIter. (#9155) 2023-05-12 21:25:54 +08:00
Stephan T. Lavavej
779b82c098
Avoid redefining macros. (#9154) 2023-05-11 15:59:25 -07:00
Rong Ou
603f8ce2fa
Support hist in the partition builder under column split (#9120) 2023-05-11 05:24:29 +08:00
Rong Ou
52311dcec9
Fix multi-threaded gtests (#9148) 2023-05-10 19:15:32 +08:00
Jiaming Yuan
85988a3178
Wait for data CUDA stream instead of sync. (#9144)

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
2023-05-09 09:52:21 +08:00
Jiaming Yuan
55968ed3fa
Fix monotone constraints on CPU. (#9122) 2023-05-06 01:07:54 +08:00
Jiaming Yuan
08ce495b5d
Use Booster context in DMatrix. (#8896)
- Pass context from booster to DMatrix.
- Use context instead of integer for `n_threads`.
- Check the consistency of the `max_bin` configuration.
- Test for all combinations of initialization options.
2023-04-28 21:47:14 +08:00
Jiaming Yuan
1f9a57d17b
[Breaking] Require format to be specified in input URI. (#9077)
Previously, we used `libsvm` as the default when the format was not specified. However, the dmlc data parser is not particularly robust against errors, and the most common type of error is an undefined format.

In addition, we now recommend that users use other data loaders instead. We will continue maintaining the parsers, as they are currently used for many internal tests, including federated learning.
2023-04-28 19:45:15 +08:00
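A minimal sketch of the user-facing difference, assuming a libsvm-formatted text file named `train.txt`:

```python
import xgboost as xgb

# Before: the format could be omitted and libsvm was assumed.
# dtrain = xgb.DMatrix("train.txt")

# After: the format has to be stated explicitly in the URI.
dtrain = xgb.DMatrix("train.txt?format=libsvm")
```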
Jiaming Yuan
17ff471616
Optimize array interface input. (#9090) 2023-04-28 18:01:58 +08:00
Jiaming Yuan
e206b899ef
Rework MAP and Pairwise for LTR. (#9075) 2023-04-28 02:39:12 +08:00
Jiaming Yuan
0e470ef606
Optimize prediction with QuantileDMatrix. (#9096)
- Reduce overhead in `FVecDrop`.
- Reduce overhead caused by `HostVector()` calls.
2023-04-28 00:51:41 +08:00
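A short sketch of predicting through a `QuantileDMatrix`, assuming in-memory numpy data; passing the training matrix as `ref` reuses its quantile cuts:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(1024, 16), np.random.rand(1024)
dtrain = xgb.QuantileDMatrix(X, y)
booster = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=10)

# Building the test matrix with `ref=dtrain` shares the quantile cuts from
# training, so prediction does not recompute the sketch.
dtest = xgb.QuantileDMatrix(X, ref=dtrain)
predt = booster.predict(dtest)
```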
Rong Ou
a320b402a5
More refactoring to take advantage of collective aggregators (#9081) 2023-04-26 03:36:09 +08:00