Experimental Support of Vertical Federated XGBoost using NVFlare
This directory contains a demo of Vertical Federated Learning using NVFlare. In the vertical setting, each participant holds the same data instances (rows) but a different subset of the features (columns).
Training with CPU only
To run the demo, first build XGBoost with the federated learning plugin enabled (see the README).
Install NVFlare (note that currently NVFlare only supports Python 3.8):
pip install nvflare
Prepare the data (note that this step downloads the HIGGS dataset, which is 2.6 GB compressed and 7.5 GB uncompressed, so make sure you have enough disk space and a fast internet connection):
./prepare_data.sh
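For reference, here is a hypothetical sketch of what a vertical split of HIGGS looks like; the actual column assignment is handled by prepare_data.sh, and the file names and split point below are assumptions for illustration only:

import pandas as pd

# In a vertical split, every site keeps all rows but only some columns.
# HIGGS has 29 columns: the label followed by 28 features.
df = pd.read_csv("HIGGS.csv", header=None)
df.iloc[:, :15].to_csv("site-1.csv", header=False, index=False)  # label + first 14 features
df.iloc[:, 15:].to_csv("site-2.csv", header=False, index=False)  # remaining 14 features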
Start the NVFlare federated server:
/tmp/nvflare/poc/server/startup/start.sh
In another terminal, start the first worker:
/tmp/nvflare/poc/site-1/startup/start.sh
And the second worker:
/tmp/nvflare/poc/site-2/startup/start.sh
Then start the admin CLI:
/tmp/nvflare/poc/admin/startup/fl_admin.sh
In the admin CLI, run the following command:
submit_job vertical-xgboost
Once the training finishes, the model files should be written to
/tmp/nvflare/poc/site-1/run_1/test.model.json and /tmp/nvflare/poc/site-2/run_1/test.model.json,
respectively.
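To inspect a trained model, you can load it with the regular XGBoost Python API. A minimal sketch, assuming the site-1 output path above:

import xgboost as xgb

# Load the JSON model written by the federated job and report its size.
bst = xgb.Booster()
bst.load_model("/tmp/nvflare/poc/site-1/run_1/test.model.json")
print(bst.num_boosted_rounds(), "boosted rounds")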
Finally, shut down everything from the admin CLI, using admin as the password:
shutdown client
shutdown server
Training with GPUs
GPUs are not yet supported by vertical federated XGBoost.