[R] various R code maintenance (#1964)

* [R] xgb.save must work when handle is nil but raw exists

* [R] print.xgb.Booster should still print other info when handle is nil

* [R] rename internal function xgb.Booster to xgb.Booster.handle to make its intent clear

* [R] rename xgb.Booster.check to xgb.Booster.complete and make it visible; more docs (see the sketch after this list)

* [R] storing evaluation_log should depend only on watchlist, not on verbose

* [R] reduce the excessive chattiness of unit tests

* [R] only disable some tests on Windows when it's not 64-bit

* [R] clean-up xgb.DMatrix

* [R] test xgb.DMatrix loading from libsvm text file

* [R] store feature_names in xgb.Booster, use them from utility functions

* [R] remove non-functional co-occurrence computation from xgb.importance

* [R] verbose=0 is enough without a callback

* [R] added forgotten xgb.Booster.complete.Rd; CRAN check fixes

* [R] update installation instructions
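
A minimal sketch tying several of these bullets together, assuming the xgboost R package API as of this commit; the training parameters and file paths are illustrative:

    library(xgboost)

    # a small toy model (illustrative parameters)
    data(agaricus.train, package = "xgboost")
    bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
                   max_depth = 2, eta = 1, nrounds = 2,
                   objective = "binary:logistic", verbose = 0)

    # A booster restored through R serialization comes back with a nil handle
    # but with its raw model bytes intact:
    fname <- file.path(tempdir(), "bst.rds")
    saveRDS(bst, fname)
    bst2 <- readRDS(fname)

    # Per the first bullet, xgb.save now works even while the handle is nil,
    # as long as the raw bytes are present:
    xgb.save(bst2, file.path(tempdir(), "bst.model"))

    # xgb.Booster.complete (formerly the internal xgb.Booster.check, now
    # exported) re-creates the C++ handle from the raw bytes:
    bst2 <- xgb.Booster.complete(bst2)

    # feature_names stored in the booster are picked up by utility functions,
    # so e.g. xgb.importance no longer needs them passed explicitly:
    xgb.importance(model = bst2)
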
Authored by Vadim Khotilovich on 2017-01-21 13:22:46 -06:00; committed by Tianqi Chen
parent a073a2c3d4
commit 2b5b96d760
27 changed files with 561 additions and 327 deletions

File: R-package/man/xgb.importance.Rd

@@ -2,64 +2,65 @@
 % Please edit documentation in R/xgb.importance.R
 \name{xgb.importance}
 \alias{xgb.importance}
-\title{Show importance of features in a model}
+\title{Importance of features in a model.}
 \usage{
 xgb.importance(feature_names = NULL, model = NULL, data = NULL,
-  label = NULL, target = function(x) ((x + label) == 2))
+  label = NULL, target = NULL)
 }
 \arguments{
-\item{feature_names}{names of each feature as a \code{character} vector. Can be extracted from a sparse matrix (see example). If model dump already contains feature names, this argument should be \code{NULL}.}
+\item{feature_names}{character vector of feature names. If the model already
+contains feature names, those would be used when \code{feature_names=NULL} (default value).
+Non-null \code{feature_names} could be provided to override those in the model.}
-\item{model}{generated by the \code{xgb.train} function.}
+\item{model}{object of class \code{xgb.Booster}.}
-\item{data}{the dataset used for the training step. Will be used with \code{label} parameter for co-occurence computation. More information in \code{Detail} part. This parameter is optional.}
+\item{data}{deprecated.}
-\item{label}{the label vector used for the training step. Will be used with \code{data} parameter for co-occurence computation. More information in \code{Detail} part. This parameter is optional.}
+\item{label}{deprecated.}
-\item{target}{a function which returns \code{TRUE} or \code{1} when an observation should be count as a co-occurence and \code{FALSE} or \code{0} otherwise. Default function is provided for computing co-occurences in a binary classification. The \code{target} function should have only one parameter. This parameter will be used to provide each important feature vector after having applied the split condition, therefore these vector will be only made of 0 and 1 only, whatever was the information before. More information in \code{Detail} part. This parameter is optional.}
+\item{target}{deprecated.}
 }
 \value{
-A \code{data.table} of the features used in the model with their average gain (and their weight for boosted tree model) in the model.
+For a tree model, a \code{data.table} with the following columns:
+\itemize{
+  \item \code{Features} names of the features used in the model;
+  \item \code{Gain} represents fractional contribution of each feature to the model based on
+        the total gain of this feature's splits. Higher percentage means a more important
+        predictive feature.
+  \item \code{Cover} metric of the number of observation related to this feature;
+  \item \code{Frequency} percentage representing the relative number of times
+        a feature have been used in trees.
+}
+A linear model's importance \code{data.table} has only two columns:
+\itemize{
+  \item \code{Features} names of the features used in the model;
+  \item \code{Weight} the linear coefficient of this feature.
+}
+If you don't provide or \code{model} doesn't have \code{feature_names},
+index of the features will be used instead. Because the index is extracted from the model dump
+(based on C++ code), it starts at 0 (as in C/C++ or Python) instead of 1 (usual in R).
 }
 \description{
-Create a \code{data.table} of the most important features of a model.
+Creates a \code{data.table} of feature importances in a model.
 }
 \details{
-This function is for both linear and tree models.
+This function works for both linear and tree models.
-\code{data.table} is returned by the function.
-The columns are:
-\itemize{
-  \item \code{Features} name of the features as provided in \code{feature_names} or already present in the model dump;
-  \item \code{Gain} contribution of each feature to the model. For boosted tree model, each gain of each feature of each tree is taken into account, then average per feature to give a vision of the entire model. Highest percentage means important feature to predict the \code{label} used for the training (only available for tree models);
-  \item \code{Cover} metric of the number of observation related to this feature (only available for tree models);
-  \item \code{Weight} percentage representing the relative number of times a feature have been taken into trees.
-}
-If you don't provide \code{feature_names}, index of the features will be used instead.
-Because the index is extracted from the model dump (made on the C++ side), it starts at 0 (usual in C++) instead of 1 (usual in R).
-Co-occurence count
-------------------
-The gain gives you indication about the information of how a feature is important in making a branch of a decision tree more pure. However, with this information only, you can't know if this feature has to be present or not to get a specific classification. In the example code, you may wonder if odor=none should be \code{TRUE} to not eat a mushroom.
-Co-occurence computation is here to help in understanding this relation between a predictor and a specific class. It will count how many observations are returned as \code{TRUE} by the \code{target} function (see parameters). When you execute the example below, there are 92 times only over the 3140 observations of the train dataset where a mushroom have no odor and can be eaten safely.
-If you need to remember only one thing: unless you want to leave us early, don't eat a mushroom which has no odor :-)
+For linear models, the importance is the absolute magnitude of linear coefficients.
+For that reason, in order to obtain a meaningful ranking by importance for a linear model,
+the features need to be on the same scale (which you also would want to do when using either
+L1 or L2 regularization).
 }
 \examples{
 data(agaricus.train, package='xgboost')
 bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
                eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
-xgb.importance(colnames(agaricus.train$data), model = bst)
-# Same thing with co-occurence computation this time
-xgb.importance(colnames(agaricus.train$data), model = bst,
-               data = agaricus.train$data, label = agaricus.train$label)
+xgb.importance(model = bst)
 }
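
For context on the new \value section above, a short sketch of the two output shapes, assuming the post-commit API; the gblinear call is illustrative and not part of the documented example:

    library(xgboost)
    data(agaricus.train, package = "xgboost")

    # tree booster: importance table with Features/Gain/Cover/Frequency columns
    bst_tree <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
                        max_depth = 2, eta = 1, nrounds = 2,
                        objective = "binary:logistic", verbose = 0)
    xgb.importance(model = bst_tree)

    # linear booster: the table has only Features/Weight columns, where Weight
    # is the raw coefficient; features should be on a common scale for the
    # ranking to be meaningful, as the details section above notes
    bst_lin <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
                       booster = "gblinear", nrounds = 2,
                       objective = "binary:logistic", verbose = 0)
    xgb.importance(model = bst_lin)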