Fix broken links. (#6455)
Co-authored-by: Hao Ziyu <haoziyu@qiyi.com>
Co-authored-by: fis <jm.yuan@outlook.com>
parent a2c778e2d1
commit 3efc4ea0d1
@@ -62,7 +62,7 @@ test:data = "agaricus.txt.test"
We use the tree booster and logistic regression objective in our setting. This indicates that we accomplish our task using the classic gradient boosting regression tree (GBRT), which is a promising method for binary classification.

The parameters shown in the example give the most common ones that are needed to use XGBoost.
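For reference, such a configuration file might look roughly like the sketch below (the values are illustrative, not necessarily the demo's exact settings):

```
# illustrative training configuration for the mushroom demo
booster = gbtree
objective = binary:logistic
eta = 1.0
max_depth = 3
num_round = 2
data = "agaricus.txt.train"
eval[test] = "agaricus.txt.test"
test:data = "agaricus.txt.test"
```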
-If you are interested in more parameter settings, the complete parameter settings and detailed descriptions are [here](../../doc/parameter.rst). Besides putting the parameters in the configuration file, we can set them by passing them as command-line arguments, as below:
+If you are interested in more parameter settings, the complete parameter settings and detailed descriptions are [here](https://xgboost.readthedocs.io/en/stable/parameter.html). Besides putting the parameters in the configuration file, we can set them by passing them as command-line arguments, as below:

```
../../xgboost mushroom.conf max_depth=6
```
@@ -161,4 +161,3 @@ Eg. ```nthread=10```

Set nthread to the number of your physical CPU cores (on Unix, this can be found using ```lscpu```).
Some systems will have ```Thread(s) per core = 2```; for example, a 4-core CPU with 8 threads. In such a case, set ```nthread=4``` and not 8.
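For example, on a typical Linux machine (output illustrative):

```
$ lscpu | grep -E 'Thread|Core|Socket'
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
```

Here 1 socket × 4 cores per socket = 4 physical cores, so ```nthread=4``` is the right setting even though the machine exposes 8 logical CPUs.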
@@ -1,6 +1,6 @@
Regression
====
Using XGBoost for regression is very similar to using it for binary classification. We suggest that you refer to the [binary classification demo](../binary_classification) first. In XGBoost, if we use negative log likelihood as the loss function for regression, the training procedure is the same as training a binary classifier with XGBoost.

### Tutorial
The dataset we use is the [computer hardware dataset from the UCI repository](https://archive.ics.uci.edu/ml/datasets/Computer+Hardware). The demo for regression is almost the same as the [binary classification demo](../binary_classification), except for a small difference in the general parameters:
@@ -14,4 +14,3 @@ objective = reg:squarederror
```

The input format is the same as for binary classification, except that the label is now the target regression value. We use squared error regression here; if we want to use logistic regression (```objective = reg:logistic```), the labels need to be pre-scaled into [0,1].

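As a sketch, one way to min-max scale the labels (the first column of a LibSVM-format file) into [0,1]; the file names here are hypothetical:

```
# two passes over the same file: first compute min/max of the label column, then rescale it
awk 'NR==FNR { if (FNR==1 || $1<mn) mn=$1; if (FNR==1 || $1>mx) mx=$1; next }
     { $1 = ($1-mn)/(mx-mn); print }' machine.txt.train machine.txt.train > machine.scaled.train
```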
@@ -60,9 +60,9 @@ This is a list of short codes introducing different functionalities of xgboost packages.
Most of the examples in this section are based on the CLI or Python version.
However, the parameter settings can be applied to all versions.

-- [Binary classification](binary_classification)
+- [Binary classification](CLI/binary_classification)
- [Multiclass classification](multiclass_classification)
-- [Regression](regression)
+- [Regression](CLI/regression)
- [Learning to Rank](rank)

### Benchmarks
@@ -2,7 +2,6 @@
Introduction to Boosted Trees
#############################
XGBoost stands for "Extreme Gradient Boosting", where the term "Gradient Boosting" originates from the paper *Greedy Function Approximation: A Gradient Boosting Machine*, by Friedman.
-This is a tutorial on gradient boosted trees, and most of the content is based on `these slides <http://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf>`_ by Tianqi Chen, the original author of XGBoost.

**Gradient boosted trees** have been around for a while, and there is a lot of material on the topic.
This tutorial will explain boosted trees in a self-contained and principled way using the elements of supervised learning.