From c2ba4fb95742c9fe927ee2c5f180ef81a20d33d1 Mon Sep 17 00:00:00 2001
From: hzy001
Date: Wed, 2 Dec 2020 17:39:12 +0800
Subject: [PATCH] Fix broken links. (#6455)

Co-authored-by: Hao Ziyu
Co-authored-by: fis
---
 demo/CLI/binary_classification/README.md | 3 +--
 demo/CLI/regression/README.md            | 3 +--
 demo/README.md                           | 4 ++--
 doc/tutorials/model.rst                  | 1 -
 4 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/demo/CLI/binary_classification/README.md b/demo/CLI/binary_classification/README.md
index 8947adf89..048058cfc 100644
--- a/demo/CLI/binary_classification/README.md
+++ b/demo/CLI/binary_classification/README.md
@@ -62,7 +62,7 @@ test:data = "agaricus.txt.test"
 
 We use the tree booster and logistic regression objective in our setting. This indicates that we accomplish our task using classic gradient boosting regression tree(GBRT), which is a promising method for binary classification.
 The parameters shown in the example gives the most common ones that are needed to use xgboost.
-If you are interested in more parameter settings, the complete parameter settings and detailed descriptions are [here](../../doc/parameter.rst). Besides putting the parameters in the configuration file, we can set them by passing them as arguments as below:
+If you are interested in more parameter settings, the complete parameter settings and detailed descriptions are [here](https://xgboost.readthedocs.io/en/stable/parameter.html). Besides putting the parameters in the configuration file, we can set them by passing them as arguments as below:
 
 ```
 ../../xgboost mushroom.conf max_depth=6
@@ -161,4 +161,3 @@ Eg. ```nthread=10```
 
 Set nthread to be the number of your real cpu (On Unix, this can be found using ```lscpu```)
 Some systems will have ```Thread(s) per core = 2```, for example, a 4 core cpu with 8 threads, in such case set ```nthread=4``` and not 8.
-
diff --git a/demo/CLI/regression/README.md b/demo/CLI/regression/README.md
index 0a87f37a8..2525f9824 100644
--- a/demo/CLI/regression/README.md
+++ b/demo/CLI/regression/README.md
@@ -1,6 +1,6 @@
 Regression
 ====
-Using XGBoost for regression is very similar to using it for binary classification. We suggest that you can refer to the [binary classification demo](../binary_classification) first. In XGBoost if we use negative log likelihood as the loss function for regression, the training procedure is same as training binary classifier of XGBoost. 
+Using XGBoost for regression is very similar to using it for binary classification. We suggest that you can refer to the [binary classification demo](../binary_classification) first. In XGBoost if we use negative log likelihood as the loss function for regression, the training procedure is same as training binary classifier of XGBoost.
 
 ### Tutorial
 The dataset we used is the [computer hardware dataset from UCI repository](https://archive.ics.uci.edu/ml/datasets/Computer+Hardware). The demo for regression is almost the same as the [binary classification demo](../binary_classification), except a little difference in general parameter:
@@ -14,4 +14,3 @@
 objective = reg:squarederror
 ```
 The input format is same as binary classification, except that the label is now the target regression values. We use linear regression here, if we want use objective = reg:logistic logistic regression, the label needed to be pre-scaled into [0,1]. 
-
diff --git a/demo/README.md b/demo/README.md
index 4bf3a08d0..b94b43b61 100644
--- a/demo/README.md
+++ b/demo/README.md
@@ -60,9 +60,9 @@ This is a list of short codes introducing different functionalities of xgboost p
 
 Most of examples in this section are based on CLI or python version. However, the parameter settings can be applied to all versions
 
-- [Binary classification](binary_classification)
+- [Binary classification](CLI/binary_classification)
 - [Multiclass classification](multiclass_classification)
-- [Regression](regression)
+- [Regression](CLI/regression)
 - [Learning to Rank](rank)
 
 ### Benchmarks
diff --git a/doc/tutorials/model.rst b/doc/tutorials/model.rst
index 4720e7250..a3c0dfb37 100644
--- a/doc/tutorials/model.rst
+++ b/doc/tutorials/model.rst
@@ -2,7 +2,6 @@
 Introduction to Boosted Trees
 #############################
 XGBoost stands for "Extreme Gradient Boosting", where the term "Gradient Boosting" originates from the paper *Greedy Function Approximation: A Gradient Boosting Machine*, by Friedman.
-This is a tutorial on gradient boosted trees, and most of the content is based on `these slides `_ by Tianqi Chen, the original author of XGBoost.
 The **gradient boosted trees** has been around for a while, and there are a lot of materials on the topic.
 This tutorial will explain boosted trees in a self-contained and principled way using the elements of supervised learning.