text vignette

commit 56068b5453
parent 56e9bff11f
@@ -25,7 +25,7 @@ body{
   line-height: 1;
   max-width: 800px;
-  padding: 20px;
+  padding: 10px;
   font-size: 17px;
   text-align: justify;
   text-justify: inter-word;
@@ -33,9 +33,10 @@ body{
 
 
 p {
-  line-height: 150%;
+  line-height: 140%;
 /*  max-width: 540px; */
   max-width: 960px;
+  margin-bottom: 5px;
   font-weight: 400;
 /*  color: #333333 */
 }
@@ -46,7 +47,7 @@ h1, h2, h3, h4 {
   font-weight: 400;
 }
 
-h2, h3, h4, h5, p {
+h2, h3, h4, h5 {
   margin-bottom: 20px;
   padding: 0;
 }
@@ -86,6 +87,7 @@ h6 {
   font-variant:small-caps;
   font-style: italic;
 }
+
 a {
   color: #606AAA;
   margin: 0;
@@ -101,6 +103,7 @@ a:hover {
 a:visited {
   color: gray;
 }
+
 ul, ol {
   padding: 0;
   margin: 0px 0px 0px 50px;
@@ -138,9 +141,10 @@ code {
 }
 
 
-p code {
+li code, p code {
   background: #CDCDCD;
   color: #606AAA;
+  padding: 0px 5px 0px 5px;
 }
 
 code.r, code.cpp {
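For reference, after this hunk the inline-code rule reads roughly as below: list items (`li code`) now get the same chip styling as paragraphs, plus the new horizontal padding (values copied from the diff):

```css
/* Sketch of the combined rule after this commit */
li code, p code {
  background: #CDCDCD;           /* light grey chip behind inline code */
  color: #606AAA;                /* same accent color as links */
  padding: 0px 5px 0px 5px;      /* horizontal padding added in this commit */
}
```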
@@ -22,8 +22,8 @@ This is an introductory document for using the \verb@xgboost@ package in *R*.
 
 It is an efficient and scalable implementation of gradient boosting framework by @friedman2001greedy. Two solvers are included:
 
-- *linear model*
-- *tree learning* algorithm
+- *linear* model ;
+- *tree learning* algorithm.
 
 It supports various objective functions, including *regression*, *classification* and *ranking*. The package is made to be extendible, so that users are also allowed to define their own objective function easily.
 
@@ -48,7 +48,7 @@ Installation
 
 The first step is to install the package.
 
-For up-to-date version (which is *highly* recommended), install from Github:
+For up-to-date version (which is *highly* recommended), install from *Github*:
 
 ```{r installGithub, eval=FALSE}
 devtools::install_github('tqchen/xgboost',subdir='R-package')
@@ -56,7 +56,7 @@ devtools::install_github('tqchen/xgboost',subdir='R-package')
 
 > *Windows* user will need to install [RTools](http://cran.r-project.org/bin/windows/Rtools/) first.
 
-For stable version on CRAN, run:
+For stable version on *CRAN*, run:
 
 ```{r installCran, eval=FALSE}
 install.packages('xgboost')
@@ -194,11 +194,11 @@ print(paste("test-error=", err))
 
 > We remind you that the algorithm has never seen the `test` data before.
 
-Here, we have just computed a simple metric: the average error:
+Here, we have just computed a simple metric, the average error.
 
-* `as.numeric(pred > 0.5)` applies our rule that when the probability (== prediction == regression) is over `0.5` the observation is classified as `1` and `0` otherwise ;
-* `probabilityVectorPreviouslyComputed != test$label` computes the vector of error between true data and computed probabilities ;
-* `mean(vectorOfErrors)` computes the average error itself.
+1. `as.numeric(pred > 0.5)` applies our rule that when the probability (== prediction == regression) is over `0.5` the observation is classified as `1` and `0` otherwise ;
+2. `probabilityVectorPreviouslyComputed != test$label` computes the vector of error between true data and computed probabilities ;
+3. `mean(vectorOfErrors)` computes the average error itself.
 
 The most important thing to remember is that **to do a classification basically, you just do a regression and then apply a threeshold**.
 
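The three-step error computation described in this hunk can be sketched end to end. The numbers below are made up for illustration; in the vignette, `pred` comes from calling `predict` on the test set and `label` is `test$label`:

```r
# Illustrative sketch of the average-error computation (hypothetical values).
pred  <- c(0.9, 0.2, 0.8, 0.1, 0.6)   # made-up predicted probabilities
label <- c(1, 0, 0, 1, 1)             # made-up true labels

prediction <- as.numeric(pred > 0.5)  # step 1: threshold at 0.5 -> classes 0/1
errors     <- prediction != label     # step 2: TRUE where prediction is wrong
err        <- mean(errors)            # step 3: average error
print(paste("test-error=", err))      # "test-error= 0.4"
```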