[doc] Use cross references in sphinx doc. (#7522)

* Use cross references instead of URL.
* Fix auto doc for callback.
Jiaming Yuan 2022-01-05 03:21:25 +08:00 committed by GitHub
parent eb1efb54b5
commit 54582f641a
11 changed files with 93 additions and 73 deletions

View File

@@ -16,10 +16,13 @@ Before running XGBoost, we must set three types of parameters: general parameter
    :backlinks: none
    :local:
 
+.. _global_config:
+
 ********************
 Global Configuration
 ********************
-The following parameters can be set in the global scope, using ``xgb.config_context()`` (Python) or ``xgb.set.config()`` (R).
+The following parameters can be set in the global scope, using :py:func:`xgboost.config_context()` (Python) or ``xgb.set.config()`` (R).
 
 * ``verbosity``: Verbosity of printing messages. Valid values of 0 (silent), 1 (warning), 2 (info), and 3 (debug).
 * ``use_rmm``: Whether to use RAPIDS Memory Manager (RMM) to allocate GPU memory. This option is only applicable when XGBoost is built (compiled) with the RMM plugin enabled. Valid values are ``true`` and ``false``.
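
As a quick illustration of the Python side of this hunk, a minimal sketch of the global configuration context (the ``verbosity`` value here is arbitrary):

.. code-block:: python

    import xgboost as xgb

    # Temporarily silence XGBoost inside the ``with`` block; the previous
    # global configuration is restored on exit.
    with xgb.config_context(verbosity=0):
        assert xgb.get_config()["verbosity"] == 0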

View File

@@ -2,10 +2,11 @@
 Callback Functions
 ##################
-This document gives a basic walkthrough of callback function used in XGBoost Python
-package. In XGBoost 1.3, a new callback interface is designed for Python package, which
-provides the flexibility of designing various extension for training. Also, XGBoost has a
-number of pre-defined callbacks for supporting early stopping, checkpoints etc.
+This document gives a basic walkthrough of :ref:`callback API <callback_api>` used in
+XGBoost Python package. In XGBoost 1.3, a new callback interface is designed for Python
+package, which provides the flexibility of designing various extension for training.
+Also, XGBoost has a number of pre-defined callbacks for supporting early stopping,
+checkpoints etc.
 
 Using builtin callbacks
@@ -14,8 +15,8 @@ Using builtin callbacks
 By default, training methods in XGBoost have parameters like ``early_stopping_rounds`` and
 ``verbose``/``verbose_eval``, when specified the training procedure will define the
 corresponding callbacks internally. For example, when ``early_stopping_rounds`` is
-specified, ``EarlyStopping`` callback is invoked inside iteration loop. You can also pass
-this callback function directly into XGBoost:
+specified, :py:class:`EarlyStopping <xgboost.callback.EarlyStopping>` callback is invoked
+inside iteration loop. You can also pass this callback function directly into XGBoost:
 
 .. code-block:: python
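
The example body itself falls outside this hunk; a hedged sketch of what passing the callback directly looks like (the data setup and parameter values are illustrative, not from the diff):

.. code-block:: python

    import numpy as np
    import xgboost as xgb
    from xgboost.callback import EarlyStopping

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 4)), rng.integers(0, 2, size=200)
    dtrain = xgb.DMatrix(X[:150], label=y[:150])
    dvalid = xgb.DMatrix(X[150:], label=y[150:])

    # Equivalent to passing early_stopping_rounds=3: stop when the validation
    # metric fails to improve for 3 consecutive rounds.
    booster = xgb.train(
        {"objective": "binary:logistic"},
        dtrain,
        num_boost_round=100,
        evals=[(dvalid, "validation")],
        callbacks=[EarlyStopping(rounds=3, save_best=True)],
    )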
@@ -54,6 +55,7 @@ this callback function directly into XGBoost:
 Defining your own callback
 --------------------------
-XGBoost provides an callback interface class: ``xgboost.callback.TrainingCallback``, user
-defined callbacks should inherit this class and override corresponding methods. There's a
-working example in `demo/guide-python/callbacks.py <https://github.com/dmlc/xgboost/tree/master/demo/guide-python/callbacks.py>`_
+XGBoost provides an callback interface class: :py:class:`TrainingCallback
+<xgboost.callback.TrainingCallback>`, user defined callbacks should inherit this class and
+override corresponding methods. There's a working example in
+:ref:`sphx_glr_python_examples_callbacks.py`.
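
A minimal sketch of such a user-defined callback, assuming only the public :py:class:`TrainingCallback` interface (the class name and print message are made up for illustration):

.. code-block:: python

    from xgboost.callback import TrainingCallback

    class PrintRound(TrainingCallback):
        """A toy callback that reports progress after each boosting round."""

        def after_iteration(self, model, epoch, evals_log):
            print(f"finished round {epoch}")
            return False  # returning True would signal the training loop to stop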

View File

@@ -77,15 +77,29 @@ Plotting API
 Callback API
 ------------
-.. autofunction:: xgboost.callback.TrainingCallback
+.. automodule:: xgboost.callback
+
+.. autoclass:: xgboost.callback.TrainingCallback
+    :members:
 
-.. autofunction:: xgboost.callback.EvaluationMonitor
+.. autoclass:: xgboost.callback.EvaluationMonitor
+    :members:
+    :inherited-members:
+    :show-inheritance:
 
-.. autofunction:: xgboost.callback.EarlyStopping
+.. autoclass:: xgboost.callback.EarlyStopping
+    :members:
+    :inherited-members:
+    :show-inheritance:
 
-.. autofunction:: xgboost.callback.LearningRateScheduler
+.. autoclass:: xgboost.callback.LearningRateScheduler
+    :members:
+    :inherited-members:
+    :show-inheritance:
 
-.. autofunction:: xgboost.callback.TrainingCheckPoint
+.. autoclass:: xgboost.callback.TrainingCheckPoint
+    :members:
+    :inherited-members:
+    :show-inheritance:
 
 .. _dask_api:

View File

@@ -1,6 +1,6 @@
-####################
-XGBoost Tree Methods
-####################
+############
+Tree Methods
+############
 
 For training boosted tree models, there are 2 parameters used for choosing algorithms,
 namely ``updater`` and ``tree_method``. XGBoost has 4 builtin tree methods, namely
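
For context on the parameter this page documents, a small sketch of selecting a tree method (``hist`` is an arbitrary choice here; ``auto`` would let XGBoost pick the most conservative method instead):

.. code-block:: python

    import numpy as np
    import xgboost as xgb

    X, y = np.random.randn(256, 8), np.random.randn(256)
    booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, label=y),
                        num_boost_round=10)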

View File

@@ -146,7 +146,8 @@ We will be able to see XGBoost printing something like:
 Notice that the parameter ``disable_default_eval_metric`` is used to suppress the default metric
 in XGBoost.
 
-For fully reproducible source code and comparison plots, see `custom_rmsle.py <https://github.com/dmlc/xgboost/tree/master/demo/guide-python/custom_rmsle.py>`_.
+For fully reproducible source code and comparison plots, see
+:ref:`sphx_glr_python_examples_custom_rmsle.py`.
 
 *********************
 Reverse Link Function
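
To make the ``disable_default_eval_metric`` remark above concrete, a hedged sketch of a custom metric in the spirit of the referenced RMSLE demo (the data and metric name are illustrative):

.. code-block:: python

    import numpy as np
    import xgboost as xgb

    def rmsle(predt: np.ndarray, dtrain: xgb.DMatrix):
        """Root mean squared log error, computed on the raw predictions."""
        y = dtrain.get_label()
        predt[predt < -1] = -1 + 1e-6  # keep log1p well defined
        elements = np.power(np.log1p(y) - np.log1p(predt), 2)
        return "PyRMSLE", float(np.sqrt(np.sum(elements) / len(y)))

    X, y = np.random.rand(128, 4), np.random.rand(128)
    dtrain = xgb.DMatrix(X, label=y)
    xgb.train({"disable_default_eval_metric": 1}, dtrain,
              evals=[(dtrain, "train")], custom_metric=rmsle)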
@@ -261,8 +262,7 @@ available in XGBoost:
 We use ``multi:softmax`` to illustrate the differences of transformed prediction. With
 ``softprob`` the output prediction array has shape ``(n_samples, n_classes)`` while for
 ``softmax`` it's ``(n_samples, )``. A demo for multi-class objective function is also
-available at `demo/guide-python/custom_softmax.py
-<https://github.com/dmlc/xgboost/tree/master/demo/guide-python/custom_softmax.py>`_
+available at :ref:`sphx_glr_python_examples_custom_softmax.py`.
 
 **********************
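
A short runnable check of the shape difference described above (random data, purely illustrative):

.. code-block:: python

    import numpy as np
    import xgboost as xgb

    X = np.random.randn(100, 4)
    y = np.random.randint(0, 3, size=100)  # 3 classes
    dtrain = xgb.DMatrix(X, label=y)

    booster = xgb.train({"objective": "multi:softprob", "num_class": 3}, dtrain)
    print(booster.predict(dtrain).shape)  # (100, 3): one probability per class

    booster = xgb.train({"objective": "multi:softmax", "num_class": 3}, dtrain)
    print(booster.predict(dtrain).shape)  # (100,): the predicted class index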

View File

@@ -1,7 +1,11 @@
 # coding: utf-8
 # pylint: disable=invalid-name, too-many-statements, no-self-use
 # pylint: disable=too-many-arguments
-"""Training Library containing training routines."""
+"""Callback library containing training routines. See :doc:`Callback Functions
+</python/callbacks>` for a quick introduction.
+"""
 from abc import ABC
 import collections
 import os

View File

@@ -30,8 +30,8 @@ def config_doc(*, header=None, extra_note=None, parameters=None, returns=None,
     {header}
 
     Global configuration consists of a collection of parameters that can be applied in the
-    global scope. See https://xgboost.readthedocs.io/en/stable/parameter.html for the full
-    list of parameters supported in the global configuration.
+    global scope. See :ref:`global_config` for the full list of parameters supported in
+    the global configuration.
 
     {extra_note}

View File

@@ -1817,9 +1817,8 @@ class Booster:
         .. note::
 
-            See `Prediction
-            <https://xgboost.readthedocs.io/en/latest/prediction.html>`_
-            for issues like thread safety and a summary of outputs from this function.
+            See :doc:`Prediction </prediction>` for issues like thread safety and a
+            summary of outputs from this function.
 
         Parameters
         ----------
@@ -1945,8 +1944,8 @@ class Booster:
         base_margin: Any = None,
         strict_shape: bool = False
     ):
-        """Run prediction in-place, Unlike ``predict`` method, inplace prediction does
-        not cache the prediction result.
+        """Run prediction in-place, Unlike :py:meth:`predict` method, inplace prediction does not
+        cache the prediction result.
 
         Calling only ``inplace_predict`` in multiple threads is safe and lock
         free. But the safety does not hold when used in conjunction with other
@@ -1971,7 +1970,7 @@ class Booster:
             ``predictor`` to ``gpu_predictor`` for running prediction on CuPy
             array or CuDF DataFrame.
         iteration_range :
-            See :py:meth:`xgboost.Booster.predict` for details.
+            See :py:meth:`predict` for details.
         predict_type :
             * `value` Output model prediction values.
             * `margin` Output the raw untransformed margin value.
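
A minimal sketch of the in-place prediction path documented here (random data, assumed regression setup):

.. code-block:: python

    import numpy as np
    import xgboost as xgb

    X, y = np.random.randn(64, 4), np.random.randn(64)
    booster = xgb.train({"objective": "reg:squarederror"},
                        xgb.DMatrix(X, label=y), num_boost_round=5)

    # The array is consumed directly: no DMatrix is built and no prediction
    # cache is written, which is what makes concurrent calls safe.
    preds = booster.inplace_predict(X)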
@@ -2127,9 +2126,8 @@ class Booster:
         The model is saved in an XGBoost internal format which is universal among the
         various XGBoost interfaces. Auxiliary attributes of the Python Booster object
         (such as feature_names) will not be saved when using binary format. To save those
-        attributes, use JSON instead. See: `Model IO
-        <https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html>`_ for more
-        info.
+        attributes, use JSON instead. See :doc:`Model IO </tutorials/saving_model>` for
+        more info.
 
         Parameters
         ----------
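
A hedged round-trip sketch of the JSON format recommended in this docstring (file name is arbitrary):

.. code-block:: python

    import numpy as np
    import xgboost as xgb

    X, y = np.random.randn(32, 4), np.random.randn(32)
    booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=2)

    booster.save_model("model.json")  # JSON keeps attributes like feature names

    loaded = xgb.Booster()
    loaded.load_model("model.json")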
@@ -2165,9 +2163,8 @@ class Booster:
         The model is loaded from XGBoost format which is universal among the various
         XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as
         feature_names) will not be loaded when using binary format. To save those
-        attributes, use JSON instead. See: `Model IO
-        <https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html>`_ for more
-        info.
+        attributes, use JSON instead. See :doc:`Model IO </tutorials/saving_model>` for
+        more info.
 
         Parameters
         ----------
@@ -2215,7 +2212,7 @@ class Booster:
         return features.value
 
     def dump_model(self, fout, fmap='', with_stats=False, dump_format="text"):
-        """Dump model into a text or JSON file. Unlike `save_model`, the
+        """Dump model into a text or JSON file. Unlike :py:meth:`save_model`, the
         output format is primarily used for visualization or interpretation,
         hence it's more human readable but cannot be loaded back to XGBoost.
@@ -2258,9 +2255,9 @@ class Booster:
         with_stats: bool = False,
         dump_format: str = "text"
     ) -> List[str]:
-        """Returns the model dump as a list of strings. Unlike `save_model`, the
-        output format is primarily used for visualization or interpretation,
-        hence it's more human readable but cannot be loaded back to XGBoost.
+        """Returns the model dump as a list of strings. Unlike :py:meth:`save_model`, the output
+        format is primarily used for visualization or interpretation, hence it's more
+        human readable but cannot be loaded back to XGBoost.
 
         Parameters
         ----------
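
To show the dump-versus-save distinction these two docstrings draw, a small sketch (again with arbitrary data and file names):

.. code-block:: python

    import numpy as np
    import xgboost as xgb

    X, y = np.random.randn(32, 4), np.random.randn(32)
    booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=2)

    booster.dump_model("dump.json", dump_format="json")  # for inspection only
    per_tree = booster.get_dump()  # list of strings, one per tree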

View File

@@ -3,9 +3,8 @@
 # pylint: disable=too-many-lines, fixme
 # pylint: disable=too-few-public-methods
 # pylint: disable=import-error
-"""Dask extensions for distributed training. See
-https://xgboost.readthedocs.io/en/latest/tutorials/dask.html for simple
-tutorial. Also xgboost/demo/dask for some examples.
+"""Dask extensions for distributed training. See :doc:`Distributed XGBoost with Dask
+</tutorials/dask>` for simple tutorial. Also xgboost/demo/dask for some examples.
 
 There are two sets of APIs in this module, one is the functional API including
 ``train`` and ``predict`` methods. Another is stateful Scikit-Learner wrapper
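
A rough sketch of the functional Dask API this module docstring describes, under the usual assumptions (a local cluster and dask.array inputs; shapes and parameters are illustrative):

.. code-block:: python

    import dask.array as da
    from dask.distributed import Client, LocalCluster
    import xgboost as xgb

    with Client(LocalCluster()) as client:
        X = da.random.random((1000, 10), chunks=(100, 10))
        y = da.random.random((1000,), chunks=(100,))
        dtrain = xgb.dask.DaskDMatrix(client, X, y)
        output = xgb.dask.train(client, {"tree_method": "hist"}, dtrain,
                                num_boost_round=10)
        preds = xgb.dask.predict(client, output["booster"], X)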

View File

@@ -122,10 +122,10 @@ __model_doc = f'''
     booster: Optional[str]
         Specify which booster to use: gbtree, gblinear or dart.
     tree_method: Optional[str]
-        Specify which tree method to use. Default to auto. If this parameter
-        is set to default, XGBoost will choose the most conservative option
-        available. It's recommended to study this option from the parameters
-        document: https://xgboost.readthedocs.io/en/latest/treemethod.html.
+        Specify which tree method to use. Default to auto. If this parameter is set to
+        default, XGBoost will choose the most conservative option available. It's
+        recommended to study this option from the parameters document :doc:`tree method
+        </treemethod>`
     n_jobs : Optional[int]
         Number of parallel threads used to run xgboost. When used with other Scikit-Learn
         algorithms like grid search, you may choose which algorithm to parallelize and
@@ -167,14 +167,14 @@ __model_doc = f'''
     num_parallel_tree: Optional[int]
         Used for boosting random forest.
     monotone_constraints : Optional[Union[Dict[str, int], str]]
-        Constraint of variable monotonicity. See tutorial for more
-        information.
+        Constraint of variable monotonicity. See :doc:`tutorial </tutorials/monotonic>`
+        for more information.
     interaction_constraints : Optional[Union[str, List[Tuple[str]]]]
-        Constraints for interaction representing permitted interactions. The
-        constraints must be specified in the form of a nest list, e.g. [[0, 1],
-        [2, 3, 4]], where each inner list is a group of indices of features
-        that are allowed to interact with each other. See tutorial for more
-        information
+        Constraints for interaction representing permitted interactions. The
+        constraints must be specified in the form of a nested list, e.g. ``[[0, 1], [2,
+        3, 4]]``, where each inner list is a group of indices of features that are
+        allowed to interact with each other. See :doc:`tutorial
+        </tutorials/feature_interaction_constraint>` for more information
     importance_type: Optional[str]
         The feature importance type for the feature_importances\\_ property:
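
A hedged sketch of the two constraint parameters documented in this hunk (the constraint values are made up for a 3-feature model: feature 0 increasing, feature 1 free, feature 2 decreasing; features 0 and 1 may interact, feature 2 only with itself):

.. code-block:: python

    import numpy as np
    from xgboost import XGBRegressor

    X, y = np.random.randn(64, 3), np.random.randn(64)

    model = XGBRegressor(
        monotone_constraints="(1,0,-1)",
        interaction_constraints=[[0, 1], [2]],
    )
    model.fit(X, y)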
@@ -216,9 +216,8 @@ __model_doc = f'''
         For advanced usage on Early stopping like directly choosing to maximize instead of
         minimize, see :py:obj:`xgboost.callback.EarlyStopping`.
 
-        See `Custom Objective and Evaluation Metric
-        <https://xgboost.readthedocs.io/en/latest/tutorials/custom_metric_obj.html>`_ for
-        more.
+        See :doc:`Custom Objective and Evaluation Metric </tutorials/custom_metric_obj>`
+        for more.
 
         .. note::
@@ -243,7 +242,7 @@ __model_doc = f'''
         Activates early stopping. Validation metric needs to improve at least once in
         every **early_stopping_rounds** round(s) to continue training. Requires at least
-        one item in **eval_set** in :py:meth:`xgboost.sklearn.XGBModel.fit`.
+        one item in **eval_set** in :py:meth:`fit`.
 
         The method returns the model from the last iteration (not the best one). If
         there's more than one item in **eval_set**, the last entry will be used for early
@@ -251,7 +250,8 @@ __model_doc = f'''
         will be used for early stopping.
 
         If early stopping occurs, the model will have three additional fields:
-        ``clf.best_score``, ``clf.best_iteration`` and ``clf.best_ntree_limit``.
+        :py:attr:`best_score`, :py:attr:`best_iteration` and
+        :py:attr:`best_ntree_limit`.
 
         .. note::
@@ -268,9 +268,8 @@ __model_doc = f'''
                 save_best=True)]
 
     kwargs : dict, optional
-        Keyword arguments for XGBoost Booster object. Full documentation of
-        parameters can be found here:
-        https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst.
+        Keyword arguments for XGBoost Booster object. Full documentation of parameters
+        can be found :doc:`here </parameter>`.
 
         Attempting to set a parameter via the constructor args and \\*\\*kwargs
        dict simultaneously will result in a TypeError.
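
To make the early-stopping attributes in this doc concrete, a hedged sketch with the scikit-learn wrapper, assuming a build where ``early_stopping_rounds`` is accepted as a constructor argument as documented above (data and split are arbitrary):

.. code-block:: python

    import numpy as np
    from xgboost import XGBClassifier

    X, y = np.random.randn(200, 5), np.random.randint(0, 2, 200)
    clf = XGBClassifier(n_estimators=100, early_stopping_rounds=5)

    # hold out the last 50 rows as the evaluation set
    clf.fit(X[:150], y[:150], eval_set=[(X[150:], y[150:])])
    print(clf.best_score, clf.best_iteration)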
@@ -1102,6 +1101,7 @@ class XGBModel(XGBModelBase):
     @property
     def n_features_in_(self) -> int:
+        """Number of features seen during :py:meth:`fit`."""
         booster = self.get_booster()
         return booster.num_features()
@@ -1128,10 +1128,15 @@ class XGBModel(XGBModelBase):
     @property
     def best_score(self) -> float:
+        """The best score obtained by early stopping."""
        return float(self._early_stopping_attr('best_score'))
 
     @property
     def best_iteration(self) -> int:
+        """The best iteration obtained by early stopping. This attribute is 0-based,
+        for instance if the best iteration is the first round, then best_iteration is 0.
+        """
         return int(self._early_stopping_attr('best_iteration'))
 
     @property

View File

@@ -76,9 +76,8 @@ def train(
         List of validation sets for which metrics will evaluated during training.
         Validation metrics will help us track the performance of the model.
     obj
-        Custom objective function. See `Custom Objective
-        <https://xgboost.readthedocs.io/en/latest/tutorials/custom_metric_obj.html>`_ for
-        details.
+        Custom objective function. See :doc:`Custom Objective
+        </tutorials/custom_metric_obj>` for details.
     feval :
         .. deprecated:: 1.6.0
             Use `custom_metric` instead.
@@ -134,9 +133,8 @@ def train(
         .. versionadded 1.6.0
 
-        Custom metric function. See `Custom Metric
-        <https://xgboost.readthedocs.io/en/latest/tutorials/custom_metric_obj.html>`_ for
-        details.
+        Custom metric function. See :doc:`Custom Metric </tutorials/custom_metric_obj>`
+        for details.
 
     Returns
     -------
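
For the ``obj`` parameter cross-referenced here, a minimal custom objective sketch (plain squared error, the simplest case; the function name and data are illustrative):

.. code-block:: python

    import numpy as np
    import xgboost as xgb

    def squared_error(predt: np.ndarray, dtrain: xgb.DMatrix):
        """Return the gradient and hessian of squared error."""
        y = dtrain.get_label()
        return predt - y, np.ones_like(predt)

    X, y = np.random.randn(64, 4), np.random.randn(64)
    booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=5,
                        obj=squared_error)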
@@ -387,9 +385,8 @@ def cv(params, dtrain, num_boost_round=10, nfold=3, stratified=False, folds=None
         Evaluation metrics to be watched in CV.
     obj :
-        Custom objective function. See `Custom Objective
-        <https://xgboost.readthedocs.io/en/latest/tutorials/custom_metric_obj.html>`_ for
-        details.
+        Custom objective function. See :doc:`Custom Objective
+        </tutorials/custom_metric_obj>` for details.
     feval : function
         .. deprecated:: 1.6.0
@@ -434,9 +431,8 @@ def cv(params, dtrain, num_boost_round=10, nfold=3, stratified=False, folds=None
         .. versionadded 1.6.0
 
-        Custom metric function. See `Custom Metric
-        <https://xgboost.readthedocs.io/en/latest/tutorials/custom_metric_obj.html>`_ for
-        details.
+        Custom metric function. See :doc:`Custom Metric </tutorials/custom_metric_obj>`
+        for details.
 
     Returns
     -------