XGBoost Plugin for Federated Learning

This folder contains the plugin for federated learning. Follow these steps to build and test it.

Install gRPC

Refer to the installation guide from the gRPC website.
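If you prefer building gRPC from source, the sketch below follows the general shape of the upstream build instructions. The version tag and the /opt/grpc install prefix are placeholders, not requirements; pick whatever matches your environment.

# Sketch of a from-source gRPC build; the tag and prefix are examples only.
git clone -b v1.49.1 --depth 1 --recurse-submodules https://github.com/grpc/grpc
cd grpc
mkdir -p cmake/build
cd cmake/build
cmake ../.. -GNinja \
 -DgRPC_INSTALL=ON \
 -DgRPC_BUILD_TESTS=OFF \
 -DCMAKE_INSTALL_PREFIX=/opt/grpc
ninja install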

Build the Plugin

# Under xgboost source tree.
mkdir build
cd build
cmake .. -GNinja \
 -DPLUGIN_FEDERATED=ON \
 -DUSE_CUDA=ON \
 -DUSE_NCCL=ON
ninja
cd ../python-package
pip install -e .

If CMake fails to locate gRPC, you may need to pass -DCMAKE_PREFIX_PATH=<grpc path> to CMake.
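
For example, if gRPC was installed under a hypothetical /opt/grpc prefix:

cmake .. -GNinja \
 -DPLUGIN_FEDERATED=ON \
 -DCMAKE_PREFIX_PATH=/opt/grpc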

Test Federated XGBoost

# Under xgboost source tree.
cd tests/distributed
# This tests both CPU training (`hist`) and GPU training (`gpu_hist`).
./runtests-federated.sh
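
For reference, the flow the test exercises is one federated gRPC server process plus one training process per worker. The Python sketch below mirrors that flow; the run_federated_server helper and the communicator parameters (dmlc_communicator, federated_server_address, federated_world_size, federated_rank) follow the test suite at the time of writing and may differ between XGBoost versions, so treat the script itself as authoritative.

# Minimal sketch of a federated run: a server process plus one worker
# process per rank. API names follow the test suite and may change
# across XGBoost versions -- illustrative only.
import multiprocessing
import time

import numpy as np
import xgboost as xgb
import xgboost.federated

PORT = 9091     # arbitrary free port
WORLD_SIZE = 2  # number of federated workers


def run_server() -> None:
    # Blocks, relaying allreduce/allgather/broadcast among the workers.
    xgboost.federated.run_federated_server(PORT, WORLD_SIZE)


def run_worker(rank: int) -> None:
    ctx_args = {
        "dmlc_communicator": "federated",
        "federated_server_address": f"localhost:{PORT}",
        "federated_world_size": WORLD_SIZE,
        "federated_rank": rank,
    }
    with xgb.collective.CommunicatorContext(**ctx_args):
        # Each worker would normally load only its own data shard; random
        # data keeps this sketch self-contained.
        rng = np.random.default_rng(rank)
        X, y = rng.random((100, 10)), rng.integers(2, size=100)
        dtrain = xgb.DMatrix(X, label=y)
        bst = xgb.train(
            {"tree_method": "hist", "objective": "binary:logistic"},
            dtrain,
            num_boost_round=4,
        )
        if rank == 0:
            bst.save_model("federated_model.json")


if __name__ == "__main__":
    server = multiprocessing.Process(target=run_server, daemon=True)
    server.start()
    time.sleep(1)  # crude wait for the server to start listening
    workers = [
        multiprocessing.Process(target=run_worker, args=(r,))
        for r in range(WORLD_SIZE)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    server.terminate()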