This makes GPU Hist robust in distributed environments, as some workers might not be associated with any data in either training or evaluation.

* Disable the rabit mock test for now: see #5012.
* Disable the dask-cudf prediction test for now: see #5003.
* Launch dask jobs on all workers, even those that might not have any data.
* Check for 0 rows in elementwise evaluation metrics. Using AUC and AUC-PR still throws an error; see #4663 for a robust fix.
* Add tests for edge cases.
* Add a `LaunchKernel` wrapper that handles zero-sized grids (a sketch of the idea follows this list).
* Move some parts of the allreducer into a .cu file.
* Don't validate feature names when the booster is empty.
* Sync the number of columns in DMatrix, since `num_feature` is required to be the same across all workers in data-split mode (sketched below).
* Filtering in the dask interface now by default syncs every booster that is not empty, instead of using rank 0.
* Fix Jenkins' GPU tests.
* Install dask-cuda from source in Jenkins' tests; now all tests are actually running.
* Restore the GPU Hist tree synchronization test.
* Check the UUID of running devices (sketched below). The check is only performed on CUDA >= 10.x, as 9.x doesn't have the UUID field.
* Fix CMake policy and project variables: use `xgboost_SOURCE_DIR` uniformly and add a policy for CMake >= 3.13.
* Fix copying data to CPU.
* Fix a race condition in the CPU predictor.
* Fix duplicated DMatrix construction.
* Don't download an extra NCCL in the CI script.
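
To make the zero-sized-grid issue concrete: launching a CUDA kernel with an empty grid is an error, so a worker holding no rows has to turn the launch into a no-op. The sketch below shows the idea only; the name `LaunchKernelGuarded` and its signature are placeholders, not the actual `dh::LaunchKernel` API.

```cuda
#include <cstddef>

// Minimal sketch: skip the launch entirely when the grid would be empty.
// Launching a kernel with zero blocks is a CUDA configuration error, so a
// worker that holds no data must make the launch a no-op instead.
// (The <<<>>> syntax requires compilation by nvcc, i.e. a .cu file.)
template <typename Kernel, typename... Args>
void LaunchKernelGuarded(std::size_t n_items, std::size_t block_size,
                         Kernel kernel, Args... args) {
  if (n_items == 0) { return; }  // empty worker: nothing to launch
  std::size_t n_blocks = (n_items + block_size - 1) / block_size;
  kernel<<<n_blocks, block_size>>>(args...);
}
```

A launch routed through such a wrapper degrades gracefully on empty workers, which is what lets the same code path run on workers with and without data.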
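The column-count sync is the same story on the communication side: a worker with no data would report zero features, so the counts are reconciled across workers. A minimal sketch using rabit's allreduce; `SyncFeatureCount` is a hypothetical helper, and the exact reduction used for `DMatrix` may differ.

```cuda
#include <rabit/rabit.h>

// Sketch: an empty worker would report 0 columns, so reconcile num_feature
// by taking the maximum across all workers before training starts.
void SyncFeatureCount(unsigned* num_feature) {
  rabit::Allreduce<rabit::op::Max>(num_feature, 1);
}
```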
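Finally, the device-UUID check: `cudaDeviceProp` only carries a `uuid` field from CUDA 10 onward, which is why the check is skipped on 9.x. A rough sketch of reading the field (`PrintDeviceUuid` is illustrative, not the actual check):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Sketch: read and print the 16-byte device UUID.  cudaDeviceProp gained the
// `uuid` field in CUDA 10, so the lookup is guarded by the runtime version.
void PrintDeviceUuid(int device) {
#if CUDART_VERSION >= 10000
  cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, device);
  for (int i = 0; i < 16; ++i) {
    std::printf("%02x",
                static_cast<unsigned>(
                    static_cast<unsigned char>(prop.uuid.bytes[i])));
  }
  std::printf("\n");
#else
  (void)device;  // no UUID field before CUDA 10; skip the check
#endif
}
```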
#!/bin/bash

cp make/travis.mk config.mk

make -f dmlc-core/scripts/packages.mk lz4

if [ "${TRAVIS_OS_NAME}" == "osx" ]; then
    echo 'USE_OPENMP=0' >> config.mk
else
    # use g++-4.8 for linux
    export CXX=g++-4.8
fi

if [ "${TASK}" == "python_test" ]; then
    make all || exit -1
    echo "-------------------------------"
    source activate python3
    python --version
    conda install numpy scipy pandas matplotlib scikit-learn

    python -m pip install graphviz pytest pytest-cov codecov
    python -m pip install dask distributed "dask[dataframe]"
    python -m pip install https://h2o-release.s3.amazonaws.com/datatable/stable/datatable-0.7.0/datatable-0.7.0-cp37-cp37m-linux_x86_64.whl
    python -m pytest -v --fulltrace -s tests/python --cov=python-package/xgboost || exit -1
    codecov
fi

if [ "${TASK}" == "java_test" ]; then
    set -e
    export RABIT_MOCK=ON
    cd jvm-packages
    mvn -q clean install -DskipTests -Dmaven.test.skip
    mvn -q test
fi

if [ "${TASK}" == "cmake_test" ]; then
    set -e

    # Reject raw kernel launches; dh::LaunchKernel handles zero-sized grids.
    if grep -n -R '<<<.*>>>\(.*\)' src include | grep --invert "NOLINT"; then
        echo 'Do not use raw CUDA execution configuration syntax with <<<blocks, threads>>>.' \
             'Try `dh::LaunchKernel` instead.'
        exit -1
    fi

    # Build and run the C++ tests.
    rm -rf build
    mkdir build && cd build
    PLUGINS="-DPLUGIN_LZ4=ON -DPLUGIN_DENSE_PARSER=ON"
    CC=gcc-7 CXX=g++-7 cmake .. -DGOOGLE_TEST=ON -DUSE_DMLC_GTEST=ON ${PLUGINS}
    make
    ./testxgboost
    cd ..
    rm -rf build
fi