scikit-learn API section documentation correction (#3967)

* Update description of early_stopping_rounds

The description of early_stopping_rounds was inconsistent in the scikit-learn API section: the fit docstring says that when early stopping occurs, the model from the last iteration is returned (not the best one), while the predict docstring says that when predict is called without ntree_limit specified, ntree_limit is set to best_ntree_limit.

Thus, reading the fit part, one could think that the best iteration has to be specified explicitly when calling predict, whereas the predict part shows that the best iteration is used by default; it is the last iteration that has to be requested explicitly if needed.
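A minimal sketch of that behaviour with the scikit-learn wrapper of this era (the dataset, split and hyper-parameter values below are illustrative only, not part of the change):

    import xgboost as xgb
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

    clf = xgb.XGBClassifier(n_estimators=1000)
    clf.fit(X_train, y_train,
            eval_set=[(X_valid, y_valid)],
            early_stopping_rounds=10,
            verbose=False)

    # fit() returns the model trained up to the last iteration, but records
    # where validation error stopped improving:
    print(clf.best_iteration, clf.best_ntree_limit)

    # predict() defaults ntree_limit to best_ntree_limit, so the best
    # iteration is used unless another value is passed; ntree_limit=0
    # means "use all trees", i.e. the last iteration.
    pred_best = clf.predict(X_valid)
    pred_last = clf.predict(X_valid, ntree_limit=0)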

* Update sklearn.py

* Update sklearn.py

Fix doc according to the python_lightweight_test error
lyxthe 2018-12-14 09:27:04 +01:00 committed by Philip Hyunsu Cho
parent 3d81c48d3f
commit 53f695acf2


@@ -631,11 +631,11 @@ class XGBClassifier(XGBModel, XGBClassifierBase):
         early_stopping_rounds : int, optional
             Activates early stopping. Validation error needs to decrease at
             least every <early_stopping_rounds> round(s) to continue training.
-            Requires at least one item in evals. If there's more than one,
-            will use the last. Returns the model from the last iteration
-            (not the best one). If early stopping occurs, the model will
-            have three additional fields: bst.best_score, bst.best_iteration
-            and bst.best_ntree_limit.
+            Requires at least one item in evals. If there's more than one,
+            will use the last. If early stopping occurs, the model will have
+            three additional fields: bst.best_score, bst.best_iteration and
+            bst.best_ntree_limit (bst.best_ntree_limit is the ntree_limit parameter
+            default value in predict method if not any other value is specified).
             (Use bst.best_ntree_limit to get the correct value if num_parallel_tree
             and/or num_class appears in the parameters)
         verbose : bool
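The bracketed note about num_parallel_tree / num_class can likewise be sketched with the native API (the random data and parameter values below are illustrative, not taken from the patch):

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)
    X = rng.randn(300, 5)
    y = rng.randint(0, 3, size=300)          # 3-class problem
    dtrain = xgb.DMatrix(X[:200], label=y[:200])
    dvalid = xgb.DMatrix(X[200:], label=y[200:])

    params = {'objective': 'multi:softprob', 'num_class': 3}
    bst = xgb.train(params, dtrain, num_boost_round=100,
                    evals=[(dvalid, 'valid')],
                    early_stopping_rounds=5, verbose_eval=False)

    # With num_class (and/or num_parallel_tree) set, each boosting round adds
    # more than one tree, so pass bst.best_ntree_limit rather than a value
    # derived from bst.best_iteration as the ntree_limit for prediction.
    pred = bst.predict(dvalid, ntree_limit=bst.best_ntree_limit)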