[doc] Fix broken links. (#7341)

* Fix most of the link-check warnings from Sphinx.
* Remove duplicate explicit target name.
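
For context: reStructuredText treats two named hyperlinks with the same link text but different targets as a "Duplicate explicit target name" error, which is why the diff below switches single-underscore (named) references to double-underscore (anonymous) ones. A minimal sketch of the difference, using hypothetical example URLs:

```rst
.. Named targets: two links with the same text ("here") but different
   URLs trigger Sphinx's "Duplicate explicit target name" warning.

See examples `here <https://example.com/a>`_.
See more examples `here <https://example.com/b>`_.

.. Anonymous references (double underscore) register no target name,
   so the same link text can be reused freely.

See examples `here <https://example.com/a>`__.
See more examples `here <https://example.com/b>`__.
```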
Jiaming Yuan
2021-10-20 14:45:30 +08:00
committed by GitHub
parent f53da412aa
commit 376b448015
7 changed files with 24 additions and 17 deletions


@@ -95,13 +95,13 @@ XGBoost makes use of `GPUTreeShap <https://github.com/rapidsai/gputreeshap>`_ as
shap_interaction_values = model.predict(dtrain, pred_interactions=True)
See examples `here
-<https://github.com/dmlc/xgboost/tree/master/demo/gpu_acceleration>`_.
+<https://github.com/dmlc/xgboost/tree/master/demo/gpu_acceleration>`__.
Multi-node Multi-GPU Training
=============================
XGBoost supports fully distributed GPU training using `Dask <https://dask.org/>`_. For
getting started see our tutorial :doc:`/tutorials/dask` and worked examples `here
-<https://github.com/dmlc/xgboost/tree/master/demo/dask>`_, also Python documentation
+<https://github.com/dmlc/xgboost/tree/master/demo/dask>`__, also Python documentation
:ref:`dask_api` for complete reference.
@@ -238,7 +238,7 @@ Working memory is allocated inside the algorithm proportional to the number of r
The quantile finding algorithm also uses some amount of working device memory. It is able to operate in batches, but is not currently well optimised for sparse data.
-If you are getting out-of-memory errors on a big dataset, try the `external memory version <../tutorials/external_memory.html>`_.
+If you are getting out-of-memory errors on a big dataset, try the :doc:`external memory version </tutorials/external_memory>`.
Developer notes
===============
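
The last hunk swaps a raw relative HTML link for a Sphinx `:doc:` cross-reference. The advantage, shown in this sketch drawn from the hunk itself, is that Sphinx resolves `:doc:` targets against the source tree and warns at build time if the target document moves or is renamed, whereas a hand-written `.html` path silently breaks:

```rst
.. Raw HTML link: bypasses Sphinx's reference resolution and breaks
   if the output layout or file name changes.

try the `external memory version <../tutorials/external_memory.html>`_.

.. :doc: role: resolved and checked by Sphinx at build time.

try the :doc:`external memory version </tutorials/external_memory>`.
```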