Guard against index error in prediction. (#6982)
* Remove `best_ntree_limit` from documents.
@@ -165,9 +165,7 @@ def early_stop(stopping_rounds, maximize=False, verbose=True):
         If there's more than one, will use the last.
         Returns the model from the last iteration (not the best one).
         If early stopping occurs, the model will have three additional fields:
-        ``bst.best_score``, ``bst.best_iteration`` and ``bst.best_ntree_limit``.
-        (Use ``bst.best_ntree_limit`` to get the correct value if ``num_parallel_tree``
-        and/or ``num_class`` appears in the parameters)
+        ``bst.best_score``, ``bst.best_iteration``.

    Parameters
    ----------
@@ -687,7 +687,7 @@ class XGBModel(XGBModelBase):
            used for early stopping.

            If early stopping occurs, the model will have three additional fields:
-            ``clf.best_score``, ``clf.best_iteration`` and ``clf.best_ntree_limit``.
+            ``clf.best_score``, ``clf.best_iteration``.
        verbose :
            If `verbose` and an evaluation set is used, writes the evaluation metric
            measured on the validation set to stderr.
@@ -144,10 +144,7 @@ def train(params, dtrain, num_boost_round=10, evals=(), obj=None, feval=None,
        If there's more than one metric in the **eval_metric** parameter given in
        **params**, the last metric will be used for early stopping.
        If early stopping occurs, the model will have three additional fields:
-        ``bst.best_score``, ``bst.best_iteration`` and ``bst.best_ntree_limit``. Use
-        ``bst.best_ntree_limit`` to get the correct value if ``num_parallel_tree`` and/or
-        ``num_class`` appears in the parameters. ``best_ntree_limit`` is the result of
-        ``num_parallel_tree * best_iteration``.
+        ``bst.best_score``, ``bst.best_iteration``.
    evals_result: dict
        This dictionary stores the evaluation results of all the items in watchlist.