Merge branch 'master' of github.com:tqchen/xgboost

This commit is contained in:
antinucleon 2014-09-03 00:38:06 -06:00
commit 2182ebcba1
2 changed files with 8 additions and 11 deletions

View File

@@ -1,18 +1,18 @@
Package: xgboost
Type: Package
Title: eXtreme Gradient Boosting
Version: 0.3-0
Version: 0.3-1
Date: 2014-08-23
Author: Tianqi Chen <tianqi.tchen@gmail.com>, Tong He <hetong007@gmail.com>
Maintainer: Tong He <hetong007@gmail.com>
Description: This package is an R wrapper of xgboost, which is short for eXtreme
Gradient Boosting. It is an efficient and scalable implementation of
the gradient boosting framework. The package includes an efficient linear model
solver and tree learning algorithm. The package can automatically do
solver and tree learning algorithms. The package can automatically do
parallel computation with OpenMP, and it can be more than 10 times faster
than existing gradient boosting packages such as gbm. It supports various
objective functions, including regression, classification and ranking. The
package is made to be extensible, so that user are also allowed to define
package is made to be extensible, so that users are also allowed to define
their own objectives easily.
License: Apache License (== 2.0) | file LICENSE
URL: https://github.com/tqchen/xgboost
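
A minimal usage sketch of what this description promises (assuming the agaricus demo data that ships with the package; exact argument names such as nround may differ between package versions):

<<Basic usage sketch>>=
library(xgboost)
# bundled binary-classification demo data (mushroom dataset)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
# a small boosted-tree model: nthread > 1 uses OpenMP parallelism,
# and objective selects one of the supported loss functions
bst <- xgboost(data = train$data, label = train$label,
               max.depth = 2, eta = 1, nthread = 2, nround = 2,
               objective = "binary:logistic")
@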

View File

@@ -52,8 +52,7 @@ This is an introductory document of using the \verb@xgboost@ package in R.
and scalable implementation of the gradient boosting framework by \citet{friedman2001greedy}.
The package includes an efficient linear model solver and tree learning algorithms.
It supports various objective functions, including regression, classification
and ranking. The package is made to be extendible, so that user are also allowed
to define there own objectives easily. It has several features:
and ranking. The package is made to be extensible, so that users are also allowed to define their own objectives easily. It has several features:
\begin{enumerate}
\item{Speed: }{\verb@xgboost@ can automatically do parallel computation on
Windows and Linux, with OpenMP. It is generally over 10 times faster than
@@ -137,13 +136,10 @@ diris = xgb.DMatrix('iris.xgb.DMatrix')
\section{Advanced Examples}
The function \verb@xgboost@ is a simple function with less parameters, in order
to be R-friendly. The core training function is wrapped in \verb@xgb.train@. It
is more flexible than \verb@xgboost@, but it requires users to read the document
a bit more carefully.
The function \verb@xgboost@ is a simple function with fewer parameters, in order
to be R-friendly. The core training function is wrapped in \verb@xgb.train@. It is more flexible than \verb@xgboost@, but it requires users to read the documentation a bit more carefully.
\verb@xgb.train@ only accept a \verb@xgb.DMatrix@ object as its input, while it
supports advanced features as custom objective and evaluation functions.
\verb@xgb.train@ only accepts an \verb@xgb.DMatrix@ object as its input, while it supports advanced features such as custom objective and evaluation functions.
<<Customized loss function>>=
logregobj <- function(preds, dtrain) {
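  # Sketch of a typical body for a logistic-loss custom objective (an
  # assumption, not necessarily the vignette's exact code): the function must
  # return the first- and second-order gradients of the loss w.r.t. the raw scores.
  labels <- getinfo(dtrain, "label")
  preds <- 1 / (1 + exp(-preds))        # raw scores -> probabilities
  grad <- preds - labels                # gradient of the logistic loss
  hess <- preds * (1 - preds)           # hessian of the logistic loss
  return(list(grad = grad, hess = hess))
}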
@@ -213,3 +209,4 @@ competition.
\bibliography{xgboost}
\end{document}