diff --git a/demo/rmm_plugin/README.rst b/demo/rmm_plugin/README.rst
index 4742507d2..28b816eb2 100644
--- a/demo/rmm_plugin/README.rst
+++ b/demo/rmm_plugin/README.rst
@@ -1,5 +1,5 @@
-Using XGBoost with RAPIDS Memory Manager (RMM) plugin (EXPERIMENTAL)
-====================================================================
+Using XGBoost with RAPIDS Memory Manager (RMM) plugin
+=====================================================
 
 `RAPIDS Memory Manager (RMM) <https://github.com/rapidsai/rmm>`__ library provides a collection of
 efficient memory allocators for NVIDIA GPUs. It is now possible to use
@@ -47,5 +47,15 @@ the global configuration ``use_rmm``:
     with xgb.config_context(use_rmm=True):
         clf = xgb.XGBClassifier(tree_method="hist", device="cuda")
 
-Depending on the choice of memory pool size or type of allocator, this may have negative
-performance impact.
+Depending on the choice of memory pool size and the type of the allocator, this can make
+memory usage more consistent, at the cost of slightly degraded performance.
+
+*******************************
+No Device Ordinal for Multi-GPU
+*******************************
+
+Since the RMM memory pool is pre-allocated on a specific device, changing the CUDA
+device ordinal in XGBoost can result in the memory error ``cudaErrorIllegalAddress``. Use
+the ``CUDA_VISIBLE_DEVICES`` environment variable instead of the ``device="cuda:1"``
+parameter to select the device. For distributed training, distributed computing
+frameworks such as ``dask-cuda`` are responsible for device management.
\ No newline at end of file
diff --git a/doc/parameter.rst b/doc/parameter.rst
index 00f0eaea6..a77655922 100644
--- a/doc/parameter.rst
+++ b/doc/parameter.rst
@@ -25,7 +25,11 @@ Global Configuration
 The following parameters can be set in the global scope, using :py:func:`xgboost.config_context()` (Python) or ``xgb.set.config()`` (R).
 
 * ``verbosity``: Verbosity of printing messages. Valid values of 0 (silent), 1 (warning), 2 (info), and 3 (debug).
-* ``use_rmm``: Whether to use RAPIDS Memory Manager (RMM) to allocate GPU memory. This option is only applicable when XGBoost is built (compiled) with the RMM plugin enabled. Valid values are ``true`` and ``false``.
+
+* ``use_rmm``: Whether to use RAPIDS Memory Manager (RMM) to allocate GPU cache
+  memory. The primary memory is always allocated from the RMM pool when XGBoost is
+  built (compiled) with the RMM plugin enabled. Valid values are ``true`` and
+  ``false``. See :doc:`/python/rmm-examples/index` for details.
 
 ******************
 General Parameters
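
As a usage sketch for the patched README: the snippet below pre-allocates an RMM pool
and then enables ``use_rmm``, assuming XGBoost was built with the RMM plugin and the
``rmm`` package is installed. The 4 GiB pool size and the synthetic dataset are
arbitrary illustrations, not recommendations.

.. code-block:: python

    import rmm
    import xgboost as xgb
    from sklearn.datasets import make_classification

    # Pre-allocate an RMM pool on the current device before any other CUDA
    # work; the 4 GiB size is an arbitrary example value.
    rmm.reinitialize(pool_allocator=True, initial_pool_size=4 * 1024**3)

    X, y = make_classification(n_samples=10_000, n_features=20)

    # With the plugin compiled in, use_rmm=True routes cache allocations
    # through the RMM pool as well.
    with xgb.config_context(use_rmm=True):
        clf = xgb.XGBClassifier(tree_method="hist", device="cuda")
        clf.fit(X, y)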
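
Similarly, a sketch of the device-selection advice from the new multi-GPU section: pin
the process to a physical GPU through ``CUDA_VISIBLE_DEVICES`` before CUDA is
initialized, so that the RMM pool and XGBoost agree on the device. The choice of GPU 1
here is an arbitrary example.

.. code-block:: python

    import os

    # Must happen before anything initializes CUDA in this process.
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"

    import rmm
    import xgboost as xgb

    # The pool is created on the only visible device, physical GPU 1.
    rmm.reinitialize(pool_allocator=True)

    with xgb.config_context(use_rmm=True):
        # Plain "cuda" now resolves to that same device; avoid "cuda:1".
        clf = xgb.XGBClassifier(tree_method="hist", device="cuda")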