[doc] Fix typo. [skip ci] (#7311)

Jiaming Yuan 2021-10-12 19:10:18 +08:00 committed by GitHub
parent 0bd8f21e4e
commit 406c70ba0e


@@ -50,7 +50,7 @@ can plot the model and calculate the global feature importance:
 # Get a graph
 graph = xgb.to_graphviz(clf, num_trees=1)
 # Or get a matplotlib axis
-ax = xgb.plot_tree(reg, num_trees=1)
+ax = xgb.plot_tree(clf, num_trees=1)
 # Get feature importances
 clf.feature_importances_
@@ -60,8 +60,8 @@ idea is create dataframe with category feature type, and tell XGBoost to use ``g
 with parameter ``enable_categorical``. See `this demo
 <https://github.com/dmlc/xgboost/blob/master/demo/guide-python/categorical.py>`_ for a
 worked example using categorical data with ``scikit-learn`` interface. For using it with
-the Kaggle tutorial dataset, see `<this demo
-https://github.com/dmlc/xgboost/blob/master/demo/guide-python/cat_in_the_dat.py>`_
+the Kaggle tutorial dataset, see `this demo
+<https://github.com/dmlc/xgboost/blob/master/demo/guide-python/cat_in_the_dat.py>`_
 **********************
@@ -114,5 +114,5 @@ Next Steps
 **********
 As of XGBoost 1.5, the feature is highly experimental and have limited features like CPU
-training is not yet supported. Please see `<this issue>
-https://github.com/dmlc/xgboost/issues/6503`_ for progress.
+training is not yet supported. Please see `this issue
+<https://github.com/dmlc/xgboost/issues/6503>`_ for progress.