Feature weights (#5962)
@@ -107,6 +107,10 @@ Parameters for Tree Booster
'colsample_bynode':0.5}`` with 64 features will leave 8 features to choose from at
each split.
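The arithmetic behind that example (assuming, as in the surrounding doc, that ``colsample_bytree``, ``colsample_bylevel`` and ``colsample_bynode`` are all set to 0.5) can be checked directly:

```python
# Each colsample_* stage samples from the subset left by the previous one,
# so the fractions multiply: 64 * 0.5 * 0.5 * 0.5 = 8 features per split.
features = 64
for frac in (0.5, 0.5, 0.5):  # bytree, bylevel, bynode
    features = int(features * frac)
print(features)  # 8
```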
On the Python interface, one can set ``feature_weights`` for ``DMatrix`` to define the
probability of each feature being selected when using column sampling. There is a
similar parameter for the ``fit`` method in the sklearn interface.
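A minimal sketch of the sampling semantics this describes (not XGBoost's implementation; the weights and feature count below are made up for illustration): the weights act as selection probabilities in a weighted draw without replacement over the columns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: feature 0 is four times as likely as each other
# feature to be picked during column sub-sampling.
weights = np.array([4.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])

# e.g. colsample_bynode=0.5 with 8 features keeps 4 per split.
k = len(weights) // 2

# Weighted sampling without replacement, mirroring how feature_weights
# biases which columns survive column sub-sampling.
chosen = rng.choice(len(weights), size=k, replace=False,
                    p=weights / weights.sum())
print(sorted(chosen))
```

On the sklearn interface, the doc notes a similar parameter accepted by ``fit``, so the same weight array would be passed there instead of being attached to the ``DMatrix``.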
* ``lambda`` [default=1, alias: ``reg_lambda``]
- L2 regularization term on weights. Increasing this value will make model more conservative.
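  For reference, ``lambda`` is the :math:`\lambda` in the regularization term of XGBoost's objective, as given in the model introduction (:math:`T` is the number of leaves and :math:`w_j` the leaf weights):

  .. math::

     \Omega(f) = \gamma T + \frac{1}{2}\lambda \sum_{j=1}^{T} w_j^2

  Larger :math:`\lambda` shrinks the leaf weights toward zero, which is why increasing it makes the model more conservative.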
@@ -224,7 +228,7 @@ Parameters for Tree Booster
list is a group of indices of features that are allowed to interact with each other.
See the tutorial for more information.
Additional parameters for ``hist`` and ```gpu_hist`` tree method
Additional parameters for ``hist`` and ``gpu_hist`` tree method
================================================================
* ``single_precision_histogram``, [default=``false``]