Test federated plugin using GitHub action. (#10336)

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
Jiaming Yuan 2024-05-29 02:28:14 +08:00 committed by GitHub
parent 7ae5c972f9
commit 7354955cbb
GPG Key ID: B5690EEEBB952194
4 changed files with 27 additions and 27 deletions


@@ -156,8 +156,9 @@ jobs:
       - name: Build and install XGBoost shared library
         run: |
           cd build
-          cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja
+          cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja -DPLUGIN_FEDERATED=ON -DGOOGLE_TEST=ON
           ninja -v install
+          ./testxgboost
           cd -
       - name: Build and run C API demo with shared
         run: |
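For reference, the CI step above amounts to the following local build (a sketch; it assumes you are in the XGBoost source tree with a conda environment providing the build dependencies activated):

```shell
# Configure with the federated plugin and the C++ unit tests enabled,
# mirroring the CI step above (a sketch, not the exact CI environment).
mkdir -p build
cd build
cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja \
    -DPLUGIN_FEDERATED=ON -DGOOGLE_TEST=ON
ninja -v install
# Run the C++ test suite, which now includes the federated plugin tests.
./testxgboost
cd -
```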


@@ -134,7 +134,7 @@ From the command line on Linux starting from the XGBoost directory:
 .. note:: Specifying compute capability

-   To speed up compilation, the compute version specific to your GPU could be passed to cmake as, e.g., ``-DGPU_COMPUTE_VER=50``. A quick explanation and numbers for some architectures can be found `in this page <https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/>`_.
+   To speed up compilation, the compute version specific to your GPU could be passed to cmake as, e.g., ``-DCMAKE_CUDA_ARCHITECTURES=75``. A quick explanation and numbers for some architectures can be found `in this page <https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/>`_.

 .. note:: Faster distributed GPU training with NCCL
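The renamed flag in the hunk above can be used like this (a sketch; ``75`` targets Turing-class GPUs, so substitute the compute capability of your own card):

```shell
# Restrict compilation to a single CUDA architecture to speed up the
# build (75 is an example value; pick the one matching your GPU).
cmake .. -DUSE_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=75
make -j4
```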
@@ -147,6 +147,8 @@ From the command line on Linux starting from the XGBoost directory:
      cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DNCCL_ROOT=/path/to/nccl2
      make -j4

+Some additional flags are available for NCCL: ``BUILD_WITH_SHARED_NCCL`` enables building XGBoost with NCCL as a shared library, while ``USE_DLOPEN_NCCL`` enables XGBoost to load NCCL at runtime using ``dlopen``.
+
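The two NCCL flags added above could be used as follows (a sketch; the NCCL path is a placeholder, and whether the flags should be combined is not stated in this commit, so they are shown separately):

```shell
# Option 1: link NCCL as a shared library instead of a static one.
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DNCCL_ROOT=/path/to/nccl2 \
    -DBUILD_WITH_SHARED_NCCL=ON

# Option 2: defer loading NCCL to runtime via dlopen.
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DUSE_DLOPEN_NCCL=ON
```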
 On Windows, run CMake as follows:

 .. code-block:: bash
@@ -165,6 +167,17 @@ The above cmake configuration run will create an ``xgboost.sln`` solution file in the build directory.
 To speed up compilation, run multiple jobs in parallel by appending option ``-- /MP``.
+Federated Learning
+==================
+
+The federated learning plugin requires ``grpc`` and ``protobuf``. To install gRPC, refer
+to the `installation guide from the gRPC website
+<https://grpc.io/docs/languages/cpp/quickstart/>`_. Alternatively, one can use the
+``libgrpc`` and ``protobuf`` packages from conda-forge if conda is available. After
+obtaining the required dependencies, enable the flag ``-DPLUGIN_FEDERATED=ON`` when
+running CMake. Note that only Linux is supported for the federated plugin.
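Putting the new section's steps together (a sketch; it assumes conda is available and that the conda-forge packages satisfy the gRPC dependency, otherwise build gRPC per its quickstart guide first):

```shell
# Install the federated plugin's dependencies from conda-forge.
conda install -c conda-forge libgrpc protobuf

# Configure and build XGBoost with the federated plugin enabled
# (Linux only, per the note above).
mkdir -p build
cd build
cmake .. -GNinja -DPLUGIN_FEDERATED=ON
ninja
```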
 .. _build_python:

 ***********************************
@@ -228,11 +241,12 @@ There are several ways to build and install the package from source:
 3. Editable installation

-   To further enable rapid development and iteration, we provide an **editable installation**.
-   In an editable installation, the installed package is simply a symbolic link to your
-   working copy of the XGBoost source code. So every change you make to your source
-   directory will be immediately visible to the Python interpreter. Here is how to
-   install XGBoost as an editable installation:
+   To further enable rapid development and iteration, we provide an **editable
+   installation**. In an editable installation, the installed package is simply a symbolic
+   link to your working copy of the XGBoost source code. So every change you make to your
+   source directory will be immediately visible to the Python interpreter. To install
+   XGBoost as an editable installation, first build the shared library as previously
+   described, then install the Python package:

    .. code-block:: bash
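The code block belonging to this hunk is truncated in this view; consistent with the commands elsewhere in this commit, an editable installation would look roughly like (a sketch):

```shell
# Build the shared library first, as described earlier in the document.
mkdir -p build
cd build
cmake .. -GNinja
ninja

# Then install the Python package as a symlink to the source tree.
cd ../python-package
pip install -e .
```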


@@ -1,33 +1,16 @@
 XGBoost Plugin for Federated Learning
 =====================================

-This folder contains the plugin for federated learning. Follow these steps to build and test it.
+This folder contains the plugin for federated learning.

-Install gRPC
-------------
-
-Refer to the [installation guide from the gRPC website](https://grpc.io/docs/languages/cpp/quickstart/).
-
-Build the Plugin
-----------------
-
-```shell
-# Under xgboost source tree.
-mkdir build
-cd build
-cmake .. -GNinja \
-  -DPLUGIN_FEDERATED=ON \
-  -DUSE_CUDA=ON \
-  -DUSE_NCCL=ON
-ninja
-cd ../python-package
-pip install -e .
-```
-
-If CMake fails to locate gRPC, you may need to pass `-DCMAKE_PREFIX_PATH=<grpc path>` to CMake.
+See [build instruction](../../doc/build.rst) for how to build the plugin.

 Test Federated XGBoost
 ----------------------

 ```shell
 # Under xgboost source tree.
-cd tests/distributed
+cd tests/distributed/test_federated
 # This tests both CPU training (`hist`) and GPU training (`gpu_hist`).
 ./runtests-federated.sh
 ```


@@ -8,3 +8,5 @@ dependencies:
 - c-compiler
 - cxx-compiler
 - gtest
+- protobuf
+- libgrpc