Clarification for learning_rates
parent 4db3dfee7d
commit b8bc85b534
@@ -50,7 +50,9 @@ def train(params, dtrain, num_boost_round=10, evals=(), obj=None, feval=None,
     If `verbose_eval` then the evaluation metric on the validation set, if
     given, is printed at each boosting stage.
     learning_rates: list or function
-        Learning rate for each boosting round (yields learning rate decay).
+        List of learning rates for each boosting round,
+        or a customized function that calculates eta in terms of the
+        current round number and the total number of boosting rounds (e.g. to yield learning rate decay)
         - list l: eta = l[boosting round]
         - function f: eta = f(boosting round, num_boost_round)
     xgb_model : file name of stored xgb model or 'Booster' instance
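For context, a minimal sketch of how both forms of learning_rates might be used, assuming an xgboost release from the era of this commit in which train() still accepts the keyword directly (the toy data, parameter values, and the eta_decay helper are illustrative, not part of the commit):

import numpy as np
import xgboost as xgb

# Toy binary-classification data (illustrative only).
X = np.random.rand(100, 5)
y = np.random.randint(2, size=100)
dtrain = xgb.DMatrix(X, label=y)

params = {"objective": "binary:logistic", "max_depth": 3}

# List form: one eta per boosting round, i.e. eta = l[boosting round];
# the list length has to match num_boost_round.
bst = xgb.train(params, dtrain, num_boost_round=4,
                learning_rates=[0.3, 0.2, 0.1, 0.05])

# Function form: eta = f(boosting round, num_boost_round), here a
# simple linear decay over the scheduled rounds.
def eta_decay(current_round, total_rounds):
    return 0.3 * (1.0 - current_round / float(total_rounds))

bst = xgb.train(params, dtrain, num_boost_round=4,
                learning_rates=eta_decay)

Later xgboost releases dropped this keyword in favor of learning-rate callbacks, so the sketch above applies only to the API as documented in this commit.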