629 Commits

Author SHA1 Message Date
Jiaming Yuan
ee6809e642
Use mmap for external memory. (#9282)
- Add basic infrastructure for mmap.
- Release file write handle.
2023-06-19 18:52:55 +08:00
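The mmap-based cache itself is internal C++, but the idea is general: map the on-disk page read-only and let the OS fault pages in lazily instead of copying the whole cache into memory. A minimal Python sketch of that technique (the file name is hypothetical, not XGBoost's actual external-memory format):

```python
import mmap

# Hypothetical cache file name; illustrative only, not XGBoost's real page format.
CACHE_PATH = "gradient_index.page"

with open(CACHE_PATH, "rb") as f:
    # Map the file read-only: the OS faults pages in on demand, so the process
    # never has to hold the whole cache in its own heap at once.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mapped:
        header = bytes(mapped[:16])  # slicing copies only the bytes we touch
        print(len(mapped), header)
```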
amdsc21
5ca7daaa13 merge latest changes 2023-06-15 21:39:14 +02:00
Rong Ou
e70810be8a
Refactor device communicator to make allreduce more flexible (#9295) 2023-06-14 03:53:03 +08:00
ZHAOKAI WANG
2b76061659
remove redundant method in expand_entry (#9283) 2023-06-10 05:18:21 +08:00
amdsc21
5f78360949 merge changes Jun092023 2023-06-09 22:41:33 +02:00
Jiaming Yuan
ea0deeca68
Disable dense optimization in hist for distributed training. (#9272) 2023-06-10 02:31:34 +08:00
Jiaming Yuan
1fcc26a6f8
Set ndcg to default for LTR. (#8822)
- Add documentation.
- Add tests.
- Use `ndcg` with `topk` as default.
2023-06-09 23:31:33 +08:00
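As a user-level illustration of the new default (a sketch with synthetic data; the `ndcg@8` truncation value is arbitrary), learning-to-rank with `rank:ndcg` and a top-k truncated metric looks roughly like this:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = rng.integers(0, 4, size=100)      # graded relevance labels
qid = np.repeat(np.arange(10), 10)    # 10 queries with 10 documents each

dtrain = xgb.DMatrix(X, label=y, qid=qid)
params = {
    "objective": "rank:ndcg",  # LTR objective; ndcg is now the default metric
    "eval_metric": "ndcg@8",   # truncated (top-k) variant, k chosen arbitrarily
}
bst = xgb.train(params, dtrain, num_boost_round=10, evals=[(dtrain, "train")])
```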
Your Name
42867a4805 sync Jun 1 2023-06-01 15:55:06 -07:00
ZHAOKAI WANG
fa2ab1f021
Fix spelling in TreeRefresher note (#9223) 2023-05-31 20:27:27 +08:00
Jiaming Yuan
03bc6e6427
Remove unused variables. (#9210)
- Remove unused variables.
- Remove signed comparison warnings.
2023-05-28 05:24:15 +08:00
Rong Ou
5b69534b43
Support column split in multi-target hist (#9171) 2023-05-26 16:56:05 +08:00
amdsc21
b22644fc10 add hip.h 2023-05-20 01:25:33 +02:00
amdsc21
8cad8c693c sync up May15 2023 2023-05-15 18:59:18 +02:00
Rong Ou
603f8ce2fa
Support hist in the partition builder under column split (#9120) 2023-05-11 05:24:29 +08:00
Jiaming Yuan
55968ed3fa
Fix monotone constraints on CPU. (#9122) 2023-05-06 01:07:54 +08:00
amdsc21
5446c501af merge 23Mar01 2023-05-02 00:05:58 +02:00
amdsc21
313a74b582 add Shap Magic to check if use cat 2023-05-01 21:55:14 +02:00
Jiaming Yuan
08ce495b5d
Use Booster context in DMatrix. (#8896)
- Pass context from booster to DMatrix.
- Use context instead of integer for `n_threads`.
- Check configuration consistency for `max_bin`.
- Test for all combinations of initialization options.
2023-04-28 21:47:14 +08:00
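From the Python side, the visible part of this change is that the `max_bin` passed to a `QuantileDMatrix` has to agree with the booster's `max_bin`. A minimal sketch with synthetic data (the values are arbitrary):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
y = rng.normal(size=256)

# The same max_bin is used to quantize the DMatrix and to configure training;
# mismatching values are what the new consistency check catches.
dtrain = xgb.QuantileDMatrix(X, label=y, max_bin=128)
bst = xgb.train({"tree_method": "hist", "max_bin": 128}, dtrain, num_boost_round=5)
```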
Jiaming Yuan
0e470ef606
Optimize prediction with QuantileDMatrix. (#9096)
- Reduce overhead in `FVecDrop`.
- Reduce overhead caused by `HostVector()` calls.
2023-04-28 00:51:41 +08:00
Rong Ou
8dbe0510de
More collective aggregators (#9060) 2023-04-22 03:32:05 +08:00
Jiaming Yuan
7032981350
Fix timer annotation. (#9057) 2023-04-21 22:53:58 +08:00
amdsc21
65d83e288f fix device query 2023-04-19 19:53:26 +02:00
amdsc21
acad01afc9 sync Mar 29 2023-03-30 00:46:50 +02:00
Rong Ou
ff26cd3212
More tests for column split and vertical federated learning (#8985)
Added some more tests for the learner and fit_stump, for both column-wise distributed learning and vertical federated learning.

Also moved the `IsRowSplit` and `IsColumnSplit` methods from the `DMatrix` to the `MetaInfo` since in some places we only have access to the `MetaInfo`. Added a new convenience method `IsVerticalFederatedLearning`.

Some refactoring of the testing fixtures.
2023-03-28 16:40:26 +08:00
amdsc21
c50cc424bc sync Mar 27 2023 2023-03-27 18:54:41 +02:00
Jiaming Yuan
acc110c251
[MT-TREE] Support prediction cache and model slicing. (#8968)
- Fix prediction range.
- Support prediction cache in mt-hist.
- Support model slicing.
- Make the booster a Python iterable by defining `__iter__`.
- Cleanup removed/deprecated parameters.
- A new field `iteration_indptr` in the output model, pointing to the range of trees for each iteration.
2023-03-27 23:10:54 +08:00
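The slicing and iteration behaviour mentioned above is exposed through the Python `Booster`. A rough sketch (synthetic multi-target data; the exact JSON location of `iteration_indptr` is an assumption):

```python
import json
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
y = rng.normal(size=(64, 3))          # three targets

dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=6)

sliced = bst[2:4]                     # model slicing: keep rounds 2 and 3
for per_round in bst:                 # __iter__ yields one booster per iteration
    print(per_round.predict(dtrain).shape)

# iteration_indptr in the saved JSON model; the path below is an assumption.
model = json.loads(bst.save_raw(raw_format="json"))
print(model["learner"]["gradient_booster"]["model"].get("iteration_indptr"))
```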
amdsc21
8c77e936d1 tune grid size 2023-03-26 17:45:19 +02:00
amdsc21
7ee4734d3a rm device_helpers.hip.h from cu 2023-03-26 00:24:11 +01:00
amdsc21
ee582f03c3 rm device_helpers.hip.h from cuh 2023-03-25 23:35:57 +01:00
amdsc21
7fbc561e17 initial merge 2023-03-25 04:31:55 +01:00
Jiaming Yuan
151882dd26
Initial support for multi-target tree. (#8616)
* Implement multi-target for hist.

- Add new hist tree builder.
- Move data fetchers for tests.
- Dispatch function calls in gbm based on the tree type.
2023-03-22 23:49:56 +08:00
Rong Ou
b240f055d3
Support vertical federated learning (#8932) 2023-03-22 14:25:26 +08:00
amdsc21
595cd81251 add max shared mem workaround 2023-03-19 20:08:42 +01:00
Jiaming Yuan
9b6cc0ed07
Refactor hist to prepare for multi-target builder. (#8928)
- Extract the builder from the updater class. We need a new builder for multi-target.
- Extract `UpdateTree`; it can be reused by different builders. Eventually, other tree
  updaters can use it as well.
2023-03-17 17:21:04 +08:00
Jiaming Yuan
a093770f36
Partitioner for multi-target tree. (#8922) 2023-03-16 18:49:34 +08:00
amdsc21
a79a35c22c add warp size 2023-03-15 22:00:26 +01:00
Jiaming Yuan
26209a42a5
Define git attributes for renormalization. (#8921) 2023-03-16 02:43:11 +08:00
amdsc21
4484c7f073 disable Optin Shared Mem 2023-03-15 02:10:16 +01:00
Jiaming Yuan
8685556af2
Implement hist evaluator for multi-target tree. (#8908) 2023-03-15 01:42:51 +08:00
amdsc21
364df7db0f fix ../tree/gpu_hist/evaluate_splits.hip bugs, size 64 2023-03-14 06:17:21 +01:00
Jiaming Yuan
9bade7203a
Remove public access to tree model param. (#8902)
* Make tree model param a private member.
* The numbers of features and targets are immutable after construction.

This is to reduce the number of places where configuration can be run.
2023-03-13 20:55:10 +08:00
Jiaming Yuan
5ba3509dd3
Define multi expand entry. (#8895) 2023-03-13 19:31:05 +08:00
amdsc21
7d96758382 macro format 2023-03-11 06:57:24 +01:00
amdsc21
f0b8c02f15 merge latest changes 2023-03-10 22:10:20 +01:00
Jiaming Yuan
6deaec8027
Pass obj info by reference instead of by value. (#8889)
- Pass obj info into tree updater as const pointer.

This way we don't have to initialize the learner model param before configuring gbm,
which breaks up the dependency between configurations.
2023-03-11 01:38:28 +08:00
amdsc21
bde3107c3e fix macro XGBOOST_USE_HIP 2023-03-10 07:01:25 +01:00
amdsc21
1c58ff61d1 finish fit_stump.cu 2023-03-10 00:46:29 +01:00
amdsc21
1530c03f7d finish constraints.cu 2023-03-09 22:43:51 +01:00
amdsc21
309268de02 finish updater_gpu_hist.cu 2023-03-09 22:40:44 +01:00
amdsc21
500428cc0f finish row_partitioner.cu 2023-03-09 22:31:11 +01:00