This makes GPU Hist robust in distributed environments, where some workers might not be associated with any data in either training or evaluation.

* Disable rabit mock test for now. See #5012.
* Disable dask-cudf test at prediction for now. See #5003.
* Launch the dask job on all workers, even those that might not have any data.
* Check for 0 rows in elementwise evaluation metrics. Using AUC and AUC-PR still throws an error; see #4663 for a robust fix.
* Add tests for edge cases.
* Add a `LaunchKernel` wrapper that handles zero-sized grids (a minimal sketch of the idea follows this list).
* Move some parts of the allreducer into a .cu file.
* Don't validate feature names when the booster is empty.
* Sync the number of columns in DMatrix, as num_feature is required to be the same across all workers in data split mode.
* Filtering in the dask interface now by default syncs any booster that is not empty, instead of always using rank 0.
* Fix Jenkins' GPU tests.
* Install dask-cuda from source in Jenkins' tests, so that all tests are actually run.
* Restore the GPU Hist tree synchronization test.
* Check the UUID of running devices. The check is only performed on CUDA >= 10.x, as 9.x doesn't have the UUID field.
* Fix CMake policy and project variables: use xgboost_SOURCE_DIR uniformly and add a policy for CMake >= 3.13.
* Fix copying data to CPU.
* Fix a race condition in the CPU predictor.
* Fix duplicated DMatrix construction.
* Don't download extra NCCL in the CI script.
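The zero-sized-grid item above refers to guarding CUDA kernel launches on workers that hold no data, since launching a kernel with a grid dimension of 0 is a CUDA configuration error. Below is a minimal, self-contained sketch of that idea; the wrapper name SafeLaunchKernel and the example kernel are hypothetical and are not taken from the XGBoost code base.

#include <cstddef>
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical wrapper (not XGBoost's actual LaunchKernel): skip the launch
// entirely when the grid is empty, because a launch with zero blocks fails
// with cudaErrorInvalidConfiguration.
template <typename Kernel, typename... Args>
void SafeLaunchKernel(Kernel kernel, dim3 grid, dim3 block, Args... args) {
  if (grid.x == 0 || grid.y == 0 || grid.z == 0) {
    return;  // this worker holds no data; nothing to launch
  }
  kernel<<<grid, block>>>(args...);
}

// Example kernel for illustration only.
__global__ void ScaleKernel(float* data, size_t n, float factor) {
  size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    data[i] *= factor;
  }
}

int main() {
  size_t const n = 0;       // a worker that received zero rows
  float* data = nullptr;    // no device allocation needed when n == 0
  dim3 const block(256);
  dim3 const grid(static_cast<unsigned>((n + block.x - 1) / block.x));  // 0 blocks
  // Without the guard this launch would report an invalid configuration.
  SafeLaunchKernel(ScaleKernel, grid, block, data, n, 2.0f);
  std::printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));
  return 0;
}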
ARG CUDA_VERSION
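# CUDA_VERSION has no default here; it must be supplied at build time,
# for example: docker build --build-arg CUDA_VERSION=<major.minor> ...
# (illustrative invocation, not taken from the CI scripts)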
FROM nvidia/cuda:$CUDA_VERSION-devel-centos6

# Environment
ENV DEBIAN_FRONTEND noninteractive

# Install all basic requirements
RUN \
    yum -y update && \
    yum install -y tar unzip wget xz git centos-release-scl yum-utils && \
    yum-config-manager --enable centos-sclo-rh-testing && \
    yum -y update && \
    yum install -y devtoolset-4-gcc devtoolset-4-binutils devtoolset-4-gcc-c++ && \
    # Python
    wget https://repo.continuum.io/miniconda/Miniconda3-4.5.12-Linux-x86_64.sh && \
    bash Miniconda3-4.5.12-Linux-x86_64.sh -b -p /opt/python && \
    # CMake
    wget -nv -nc https://cmake.org/files/v3.12/cmake-3.12.0-Linux-x86_64.sh --no-check-certificate && \
    bash cmake-3.12.0-Linux-x86_64.sh --skip-license --prefix=/usr

# NCCL2 (License: https://docs.nvidia.com/deeplearning/sdk/nccl-sla/index.html)
RUN \
    export CUDA_SHORT=`echo $CUDA_VERSION | egrep -o '[0-9]+\.[0-9]'` && \
    export NCCL_VERSION=2.4.8-1 && \
    wget https://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/x86_64/nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm && \
    rpm -i nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm && \
    yum -y update && \
    yum install -y libnccl-${NCCL_VERSION}+cuda${CUDA_SHORT} libnccl-devel-${NCCL_VERSION}+cuda${CUDA_SHORT} libnccl-static-${NCCL_VERSION}+cuda${CUDA_SHORT} && \
    rm -f nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm;

ENV PATH=/opt/python/bin:$PATH
ENV CC=/opt/rh/devtoolset-4/root/usr/bin/gcc
ENV CXX=/opt/rh/devtoolset-4/root/usr/bin/c++
ENV CPP=/opt/rh/devtoolset-4/root/usr/bin/cpp

# Install Python packages
RUN \
    pip install numpy pytest scipy scikit-learn wheel kubernetes urllib3==1.22

ENV GOSU_VERSION 1.10

# Install lightweight sudo (not bound to TTY)
RUN set -ex; \
    wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-amd64" && \
    chmod +x /usr/local/bin/gosu && \
    gosu nobody true

# Default entry-point to use if running locally
# It will preserve attributes of created files
COPY entrypoint.sh /scripts/

WORKDIR /workspace
ENTRYPOINT ["/scripts/entrypoint.sh"]