parent d943720883
commit 674024c53a
@@ -18,7 +18,7 @@
 #'
 #' International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
 #'
-#' \url{https://research.facebook.com/publications/758569837499391/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
+#' \url{https://research.fb.com/publications/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
 #'
 #' Extract explaining the method:
 #'
@@ -14,7 +14,7 @@
 #' When this option is on, the model dump comes with two additional statistics:
 #' gain is the approximate loss function gain we get in each split;
 #' cover is the sum of second order gradient in each node.
-#' @param dump_fomat either 'text' or 'json' format could be specified.
+#' @param dump_format either 'text' or 'json' format could be specified.
 #' @param ... currently not used
 #'
 #' @return
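The hunk above concerns the with-stats dump of xgb.dump(). A minimal usage sketch, assuming the agaricus data bundled with the package and the argument names with_stats / dump_format (the training parameters here are illustrative):

library(xgboost)

# Train a tiny model on the bundled agaricus data.
data(agaricus.train, package = "xgboost")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               max_depth = 2, eta = 1, nrounds = 2,
               objective = "binary:logistic")

# with_stats adds the per-split 'gain' and per-node 'cover' statistics;
# dump_format selects the 'text' or 'json' representation.
# With no fname, the dump is returned as a character vector.
dump_txt <- xgb.dump(bst, with_stats = TRUE, dump_format = "text")
head(dump_txt)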
@@ -119,7 +119,7 @@
 #' \itemize{
 #' \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
 #' \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
-#' \item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss}
+#' \item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss/}
 #' \item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
 #' By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
 #' Different threshold (e.g., 0.) could be specified as "error@0."
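For the first two metrics in the list above, a sketch of the standard definitions behind the linked articles, with n observations, labels y_i and predictions \hat{y}_i:

\[
\mathrm{rmse} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2},
\qquad
\mathrm{logloss} = -\frac{1}{n}\sum_{i=1}^{n}\left[ y_i \log \hat{y}_i + (1 - y_i)\log\left(1 - \hat{y}_i\right) \right]
\]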
@@ -29,7 +29,7 @@ Joaquin Quinonero Candela)}
 
 International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
 
-\url{https://research.facebook.com/publications/758569837499391/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
+\url{https://research.fb.com/publications/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
 
 Extract explaining the method:
 
@@ -24,9 +24,9 @@ When this option is on, the model dump comes with two additional statistics:
 gain is the approximate loss function gain we get in each split;
 cover is the sum of second order gradient in each node.}
 
-\item{...}{currently not used}
+\item{dump_format}{either 'text' or 'json' format could be specified.}
 
-\item{dump_fomat}{either 'text' or 'json' format could be specified.}
+\item{...}{currently not used}
 }
 \value{
 if fname is not provided or set to \code{NULL} the function will return the model as a \code{character} vector. Otherwise it will return \code{TRUE}.
@@ -174,7 +174,7 @@ The folloiwing is the list of built-in metrics for which Xgboost provides optimi
 \itemize{
 \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
 \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
-\item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss}
+\item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss/}
 \item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
 By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
 Different threshold (e.g., 0.) could be specified as "error@0."
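A minimal sketch of selecting one of the built-in metrics listed above, including a non-default threshold for error; it assumes the classic xgb.train() interface with a watchlist (argument names have shifted across xgboost releases), the bundled agaricus data, and an illustrative 0.7 threshold:

library(xgboost)

data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest  <- xgb.DMatrix(agaricus.test$data,  label = agaricus.test$label)

# 'error' uses the default 0.5 threshold; 'error@0.7' counts a prediction
# as positive only when it exceeds 0.7.
params <- list(objective = "binary:logistic", eval_metric = "error@0.7")
bst <- xgb.train(params, dtrain, nrounds = 5,
                 watchlist = list(train = dtrain, eval = dtest))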