diff --git a/R-package/R/xgb.create.features.R b/R-package/R/xgb.create.features.R
index f875b32fe..4365552e7 100644
--- a/R-package/R/xgb.create.features.R
+++ b/R-package/R/xgb.create.features.R
@@ -18,7 +18,7 @@
 #'
 #' International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
 #'
-#' \url{https://research.facebook.com/publications/758569837499391/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
+#' \url{https://research.fb.com/publications/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
 #'
 #' Extract explaining the method:
 #'
diff --git a/R-package/R/xgb.dump.R b/R-package/R/xgb.dump.R
index 9bd9a2135..86e97b4e7 100644
--- a/R-package/R/xgb.dump.R
+++ b/R-package/R/xgb.dump.R
@@ -14,7 +14,7 @@
 #' When this option is on, the model dump comes with two additional statistics:
 #' gain is the approximate loss function gain we get in each split;
 #' cover is the sum of second order gradient in each node.
-#' @param dump_fomat either 'text' or 'json' format could be specified.
+#' @param dump_format either 'text' or 'json' format could be specified.
 #' @param ... currently not used
 #'
 #' @return
diff --git a/R-package/R/xgb.train.R b/R-package/R/xgb.train.R
index 20270605d..2ed2194d3 100644
--- a/R-package/R/xgb.train.R
+++ b/R-package/R/xgb.train.R
@@ -119,7 +119,7 @@
 #' \itemize{
 #'   \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
 #'   \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
-#'   \item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss}
+#'   \item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss/}
 #'   \item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
 #'         By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
 #'         Different threshold (e.g., 0.) could be specified as "error@0."
diff --git a/R-package/man/xgb.create.features.Rd b/R-package/man/xgb.create.features.Rd
index 679203833..4f799444b 100644
--- a/R-package/man/xgb.create.features.Rd
+++ b/R-package/man/xgb.create.features.Rd
@@ -29,7 +29,7 @@ Joaquin Quinonero Candela)}
 
 International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
 
-\url{https://research.facebook.com/publications/758569837499391/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
+\url{https://research.fb.com/publications/practical-lessons-from-predicting-clicks-on-ads-at-facebook/}.
 
 Extract explaining the method:
 
diff --git a/R-package/man/xgb.dump.Rd b/R-package/man/xgb.dump.Rd
index 206e32022..2ec26c743 100644
--- a/R-package/man/xgb.dump.Rd
+++ b/R-package/man/xgb.dump.Rd
@@ -24,9 +24,9 @@ When this option is on, the model dump comes with two additional statistics:
 gain is the approximate loss function gain we get in each split;
 cover is the sum of second order gradient in each node.}
 
-\item{...}{currently not used}
+\item{dump_format}{either 'text' or 'json' format could be specified.}
 
-\item{dump_fomat}{either 'text' or 'json' format could be specified.}
+\item{...}{currently not used}
 }
 \value{
 if fname is not provided or set to \code{NULL} the function will return the model as a \code{character} vector. Otherwise it will return \code{TRUE}.
diff --git a/R-package/man/xgb.train.Rd b/R-package/man/xgb.train.Rd
index aea9c0a1b..4f37b78b8 100644
--- a/R-package/man/xgb.train.Rd
+++ b/R-package/man/xgb.train.Rd
@@ -174,7 +174,7 @@ The folloiwing is the list of built-in metrics for which Xgboost provides optimi
 \itemize{
   \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
   \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
-  \item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss}
+  \item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss/}
  \item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
        By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
        Different threshold (e.g., 0.) could be specified as "error@0."
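The corrected argument name in use — a minimal sketch, not part of the patch, assuming the xgboost R package is installed and using its bundled agaricus demo data:

```r
library(xgboost)

# Bundled demo data and a tiny model, just to have something to dump.
data(agaricus.train, package = "xgboost")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               max_depth = 2, nrounds = 2, objective = "binary:logistic")

# 'dump_format' (previously documented under the misspelling 'dump_fomat')
# selects the output format: 'text' or 'json'.
dump_text <- xgb.dump(bst, with_stats = TRUE, dump_format = "text")
dump_json <- xgb.dump(bst, dump_format = "json")
```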