Implement robust regularization in 'survival:aft' objective (#5473)
* Robust regularization of AFT gradient and hessian
* Fix AFT doc; expose it to tutorial TOC
* Apply robust regularization to uncensored case too
* Revise unit test slightly
* Fix lint
* Update test_survival.py
* Use GradientPairPrecise
* Remove unused variables
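The commit message above refers to "robust regularization" of the AFT gradient and hessian. The actual change lives in XGBoost's C++ objective code; purely as an illustration of the general idea (function name, clamp bound, and epsilon below are assumptions, not XGBoost's actual values), it can be sketched as clamping the gradient to a finite range and keeping the hessian strictly positive:

```python
import numpy as np

def regularize_grad_pair(grad, hess, eps=1e-16, bound=15.0):
    """Illustrative sketch only, not XGBoost's implementation.

    Clamp the gradient into [-bound, bound] and force the hessian into
    (eps, bound], so that extreme likelihood values (e.g. for censored
    observations far from the current prediction) cannot produce inf/NaN
    gradients or a degenerate Newton step from a zero hessian.
    """
    grad = float(np.clip(grad, -bound, bound))
    hess = max(float(min(hess, bound)), eps)
    return grad, hess
```

Without some safeguard of this kind, a heavily censored observation can push the raw gradient or hessian to overflow, which then poisons the whole boosting round.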
committed by GitHub
parent 939973630d
commit 5fc5ec539d
@@ -68,7 +68,7 @@ Note that this model is a generalized form of a linear regression model :math:`Y
\ln{Y} = \mathcal{T}(\mathbf{x}) + \sigma Z
-where :math:`\mathcal{T}(\mathbf{x})` represents the output from a decision tree ensemble, given input :math:`\mathbf{x}`. Since :math:`Z` is a random variable, we have a likelihood defined for the expression :math:`\ln{Y} = \mathcal{T}(\mathbf{x}) + \sigma Z`. So the goal for XGBoost is to maximize the (log) likelihood by fitting a good tree ensemble :math:`\mathbf{x}`.
+where :math:`\mathcal{T}(\mathbf{x})` represents the output from a decision tree ensemble, given input :math:`\mathbf{x}`. Since :math:`Z` is a random variable, we have a likelihood defined for the expression :math:`\ln{Y} = \mathcal{T}(\mathbf{x}) + \sigma Z`. So the goal for XGBoost is to maximize the (log) likelihood by fitting a good tree ensemble :math:`\mathcal{T}(\mathbf{x})`.
**********
How to use
@@ -18,6 +18,7 @@ See `Awesome XGBoost <https://github.com/dmlc/xgboost/tree/master/demo>`_ for mo
monotonic
rf
feature_interaction_constraint
aft_survival_analysis
input_format
param_tuning
external_memory