% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.importance.R
\name{xgb.importance}
\alias{xgb.importance}
\title{Importance of features in a model.}
\usage{
xgb.importance(feature_names = NULL, model = NULL, data = NULL,
  label = NULL, target = NULL)
}
\arguments{
\item{feature_names}{character vector of feature names. If the model already
contains feature names, those will be used when \code{feature_names = NULL}
(the default). A non-null \code{feature_names} can be provided to override
the ones in the model.}
\item{model}{object of class \code{xgb.Booster}.}
\item{data}{deprecated.}
\item{label}{deprecated.}
\item{target}{deprecated.}
}
\value{
For a tree model, a \code{data.table} with the following columns:
\itemize{
\item \code{Feature} names of the features used in the model;
\item \code{Gain} the fractional contribution of each feature to the model,
based on the total gain of this feature's splits. A higher percentage means
a more important predictive feature.
\item \code{Cover} metric of the number of observations related to this feature;
\item \code{Frequency} percentage representing the relative number of times
a feature has been used in trees.
}
A linear model's importance \code{data.table} has only two columns:
\itemize{
\item \code{Feature} names of the features used in the model;
\item \code{Weight} the linear coefficient of this feature.
}
If \code{feature_names} is not provided and the model does not contain feature names,
the index of the features will be used instead. Because the index is extracted from
the model dump (based on C++ code), it starts at 0 (as in C/C++ or Python)
instead of 1 (as is usual in R).
}
\description{
Creates a \code{data.table} of feature importances in a model.
}
\details{
This function works for both linear and tree models.
For linear models, the importance is the absolute magnitude of the linear coefficients.
Consequently, to obtain a meaningful importance ranking for a linear model, the features
need to be on the same scale (which is also recommended when using either L1 or L2
regularization). The examples below illustrate both model types.
}
\examples{
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
               eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
xgb.importance(model = bst)
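
# A minimal sketch for a linear booster (an assumption for illustration: reusing
# the same agaricus data with booster = "gblinear"); here the importance table
# has Feature and Weight columns instead of Gain/Cover/Frequency:
bst_linear <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
                      booster = "gblinear", nthread = 2, nrounds = 2,
                      objective = "binary:logistic")
xgb.importance(model = bst_linear)

# A non-null feature_names overrides any names stored in the model:
xgb.importance(feature_names = colnames(agaricus.train$data), model = bst)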
}