diff --git a/demo/distributed-training/README.md b/demo/distributed-training/README.md
index 0879b828b..43709021f 100644
--- a/demo/distributed-training/README.md
+++ b/demo/distributed-training/README.md
@@ -18,6 +18,6 @@ Checkout [this tutorial](https://xgboost.readthedocs.org/en/latest/tutorials/aws
Model Analysis
--------------
-XGBoost is exchangable across all bindings and platforms.
+XGBoost is exchangeable across all bindings and platforms.
This means you can use python or R to analyze the learnt model and do prediction.
For example, you can use the [plot_model.ipynb](plot_model.ipynb) to visualize the learnt model.
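+
+As a minimal sketch of the same idea (assuming the distributed job saved its model to a local file named `xgboost.model`, and that the optional `matplotlib`/`graphviz` dependencies are installed), you can load and visualize the learnt model from Python:
+
+```python
+import xgboost as xgb
+import matplotlib.pyplot as plt
+
+# load the model file produced by the distributed training job (file name assumed)
+bst = xgb.Booster(model_file='xgboost.model')
+
+# plot feature importance and the first tree of the learnt model
+xgb.plot_importance(bst)
+xgb.plot_tree(bst, num_trees=0)
+plt.show()
+```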
diff --git a/demo/kaggle-higgs/README.md b/demo/kaggle-higgs/README.md
index 8b6dead19..d202a99bd 100644
--- a/demo/kaggle-higgs/README.md
+++ b/demo/kaggle-higgs/README.md
@@ -1,7 +1,7 @@
Highlights
=====
Higgs challenge ends recently, xgboost is being used by many users. This list highlights the xgboost solutions of players
-* Blogpost by phunther: [Winning solution of Kaggle Higgs competition: what a single model can do](http://no2147483647.wordpress.com/2014/09/17/winning-solution-of-kaggle-higgs-competition-what-a-single-model-can-do/)
+* Blogpost by phunther: [Winning solution of Kaggle Higgs competition: what a single model can do](http://no2147483647.wordpress.com/2014/09/17/winning-solution-of-kaggle-higgs-competition-what-a-single-model-can-do/)
* The solution by Tianqi Chen and Tong He [Link](https://github.com/hetong007/higgsml)
Guide for Kaggle Higgs Challenge
@@ -9,7 +9,7 @@ Guide for Kaggle Higgs Challenge
This is the folder giving example of how to use XGBoost Python Module to run Kaggle Higgs competition
-This script will achieve about 3.600 AMS score in public leadboard. To get start, you need do following step:
+This script will achieve about 3.600 AMS score on the public leaderboard. To get started, you need to do the following steps:
1. Compile the XGBoost python lib
```bash
@@ -28,5 +28,4 @@ speedtest.py compares xgboost's speed on this dataset with sklearn.GBM
Using R module
=====
-* Alternatively, you can run using R, higgs-train.R and higgs-pred.R.
-
+* Alternatively, you can run using R, higgs-train.R and higgs-pred.R.
diff --git a/demo/kaggle-otto/understandingXGBoostModel.Rmd b/demo/kaggle-otto/understandingXGBoostModel.Rmd
index e04277d4e..e125db831 100644
--- a/demo/kaggle-otto/understandingXGBoostModel.Rmd
+++ b/demo/kaggle-otto/understandingXGBoostModel.Rmd
@@ -152,9 +152,9 @@ Each group at each division level is called a branch and the deepest level is ca
In the final model, these *leafs* are supposed to be as pure as possible for each tree, meaning in our case that each *leaf* should be made of one class of **Otto** product only (of course it is not true, but that's what we try to achieve in a minimum of splits).
-**Not all *splits* are equally important**. Basically the first *split* of a tree will have more impact on the purity that, for instance, the deepest *split*. Intuitively, we understand that the first *split* makes most of the work, and the following *splits* focus on smaller parts of the dataset which have been missclassified by the first *tree*.
+**Not all *splits* are equally important**. Basically the first *split* of a tree will have more impact on the purity than, for instance, the deepest *split*. Intuitively, we understand that the first *split* does most of the work, and the following *splits* focus on smaller parts of the dataset which have been misclassified by the first *tree*.
-In the same way, in Boosting we try to optimize the missclassification at each round (it is called the *loss*). So the first *tree* will do the big work and the following trees will focus on the remaining, on the parts not correctly learned by the previous *trees*.
+In the same way, in Boosting we try to optimize the misclassification at each round (it is called the *loss*). So the first *tree* will do the bulk of the work and the following trees will focus on the remaining parts, those not correctly learned by the previous *trees*.
The improvement brought by each *split* can be measured, it is the *gain*.
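+
+For reference, here is a sketch of how this *gain* is computed, following the regularized objective used by **XGBoost** (here $G_L, H_L$ and $G_R, H_R$ are the sums of gradients and hessians falling into the left and right branches of the candidate *split*, and $\lambda$, $\gamma$ are regularization parameters):
+
+$$Gain = \frac{1}{2}\left[\frac{G_L^2}{H_L+\lambda} + \frac{G_R^2}{H_R+\lambda} - \frac{(G_L+G_R)^2}{H_L+H_R+\lambda}\right] - \gamma$$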
@@ -200,7 +200,7 @@ This function gives a color to each bar. These colors represent groups of featur
From here you can take several actions. For instance you can remove the less important feature (feature selection process), or go deeper in the interaction between the most important features and labels.
-Or you can just reason about why these features are so importat (in **Otto** challenge we can't go this way because there is not enough information).
+Or you can just reason about why these features are so important (in **Otto** challenge we can't go this way because there is not enough information).
Tree graph
----------
@@ -217,7 +217,7 @@ xgb.plot.tree(feature_names = names, model = bst, n_first_tree = 2)
We are just displaying the first two trees here.
-On simple models the first two trees may be enough. Here, it might not be the case. We can see from the size of the trees that the intersaction between features is complicated.
+On simple models the first two trees may be enough. Here, it might not be the case. We can see from the size of the trees that the interaction between features is complicated.
Besides, **XGBoost** generate `k` trees at each round for a `k`-classification problem. Therefore the two trees illustrated here are trying to classify data into different classes.
Going deeper
@@ -226,6 +226,6 @@ Going deeper
There are 4 documents you may also be interested in:
* [xgboostPresentation.Rmd](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd): general presentation
-* [discoverYourData.Rmd](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/discoverYourData.Rmd): explaining feature analysus
+* [discoverYourData.Rmd](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/discoverYourData.Rmd): explaining feature analysis
* [Feature Importance Analysis with XGBoost in Tax audit](http://fr.slideshare.net/MichaelBENESTY/feature-importance-analysis-with-xgboost-in-tax-audit): use case
* [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/): very good book to have a good understanding of the model
diff --git a/demo/rank/README.md b/demo/rank/README.md
index 2ce9c60de..06cace675 100644
--- a/demo/rank/README.md
+++ b/demo/rank/README.md
@@ -1,22 +1,21 @@
Learning to rank
====
-XGBoost supports accomplishing ranking tasks. In ranking scenario, data are often grouped and we need the [group information file](../../doc/input_format.md#group-input-format) to specify ranking tasks. The model used in XGBoost for ranking is the LambdaRank, this function is not yet completed. Currently, we provide pairwise rank.
+XGBoost supports ranking tasks. In a ranking scenario, data are often grouped, and we need the [group information file](../../doc/input_format.md#group-input-format) to specify ranking tasks. The model used in XGBoost for ranking is LambdaRank; this function is not yet completed. Currently, we provide pairwise rank.
### Parameters
-The configuration setting is similar to the regression and binary classification setting,except user need to specify the objectives:
+The configuration setting is similar to the regression and binary classification settings, except that the user needs to specify the objective:
```
...
objective="rank:pairwise"
...
```
-For more usage details please refer to the [binary classification demo](../binary_classification),
+For more usage details, please refer to the [binary classification demo](../binary_classification).
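+
+For illustration only (the demo itself is driven by the CLI configuration above), a rough Python sketch of the same setup, assuming hypothetical file names `mq2008.train` and `mq2008.train.group`, could look like:
+
+```python
+import xgboost as xgb
+
+# libsvm-format features/labels, plus a group file with one group size per line
+dtrain = xgb.DMatrix('mq2008.train')
+dtrain.set_group([int(line) for line in open('mq2008.train.group')])
+
+params = {'objective': 'rank:pairwise', 'eta': 0.1, 'max_depth': 6}
+bst = xgb.train(params, dtrain, num_boost_round=4)
+```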
Instructions
====
-The dataset for ranking demo is from LETOR04 MQ2008 fold1,
+The dataset for the ranking demo is from LETOR04 MQ2008 fold1.
You can use the following command to run the example
Get the data: ./wgetdata.sh
Run the example: ./runexp.sh
-
diff --git a/doc/faq.md b/doc/faq.md
index 3dd55bd5e..70cd4b000 100644
--- a/doc/faq.md
+++ b/doc/faq.md
@@ -41,7 +41,7 @@ Most importantly, it pushes the limit of the computation resources we can use.
How can I port the model to my own system
-----------------------------------------
-The model and data format of XGBoost is exchangable,
+The model and data format of XGBoost are exchangeable,
which means the model trained by one language can be loaded in another.
This means you can train the model using R, while running prediction using
Java or C++, which are more common in production systems.
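+
+As a small sketch of this workflow (assuming the model was saved to a file named `model.bin` by whichever binding trained it, e.g. with `xgb.save(bst, 'model.bin')` in R, and a libsvm-format test file whose name is assumed here), loading and predicting from Python looks like:
+
+```python
+import xgboost as xgb
+
+# load a model file written by another binding (R, Java, CLI, ...)
+bst = xgb.Booster(model_file='model.bin')
+
+dtest = xgb.DMatrix('test.svm.txt')
+preds = bst.predict(dtest)
+```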
diff --git a/doc/get_started/index.md b/doc/get_started/index.md
index cf2a13026..13d843ad6 100644
--- a/doc/get_started/index.md
+++ b/doc/get_started/index.md
@@ -36,7 +36,6 @@ bst <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, n
nthread = 2, objective = "binary:logistic")
# predict
pred <- predict(bst, test$data)
-
```
## Julia
diff --git a/doc/how_to/contribute.md b/doc/how_to/contribute.md
index e248f7d4b..c056313f0 100644
--- a/doc/how_to/contribute.md
+++ b/doc/how_to/contribute.md
@@ -138,7 +138,7 @@ make the-markdown-to-make.md
- Add the generated figure to the ```dmlc/web-data``` repo.
- If you already cloned the repo to doc, this means a ```git add```
- Create PR for both the markdown and ```dmlc/web-data```
-- You can also build the document locally by typing the followig command at ```doc```
+- You can also build the document locally by typing the following command at ```doc```
```bash
make html
```
diff --git a/doc/how_to/index.md b/doc/how_to/index.md
index afc69f777..8359805f7 100644
--- a/doc/how_to/index.md
+++ b/doc/how_to/index.md
@@ -6,7 +6,7 @@ This page contains guidelines to use and develop mxnets.
- [How to Install XGBoost](../build.md)
## Use XGBoost in Specific Ways
-- [Parameter tunning guide](param_tuning.md)
+- [Parameter tuning guide](param_tuning.md)
- [Use out of core computation for large dataset](external_memory.md)
## Develop and Hack XGBoost
diff --git a/doc/input_format.md b/doc/input_format.md
index b2bb49255..fcefc5eae 100644
--- a/doc/input_format.md
+++ b/doc/input_format.md
@@ -12,8 +12,7 @@ train.txt
1 0:0.01 1:0.3
0 0:0.2 1:0.3
```
-Each line represent a single instance, and in the first line '1' is the instance label,'101' and '102' are feature indices, '1.2' and '0.03' are feature values. In the binary classification case, '1' is used to indicate positive samples, and '0' is used to indicate negative samples. We also support probability values in [0,1] as label, to indicate the probability of the instanc
-e being positive.
+Each line represents a single instance, and in the first line '1' is the instance label, '101' and '102' are feature indices, '1.2' and '0.03' are feature values. In the binary classification case, '1' is used to indicate positive samples, and '0' is used to indicate negative samples. We also support probability values in [0,1] as the label, to indicate the probability of the instance being positive.
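+
+As a minimal sketch (using the Python package; file name as above), such a file can be loaded directly into a `DMatrix`:
+
+```python
+import xgboost as xgb
+
+# load the libsvm-format text file shown above
+dtrain = xgb.DMatrix('train.txt')
+print(dtrain.num_row(), dtrain.num_col())
+```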
Additional Information
----------------------
@@ -54,4 +53,4 @@ train.txt.base_margin
1.0
3.4
```
-XGBoost will take these values as intial margin prediction and boost from that. An important note about base_margin is that it should be margin prediction before transformation, so if you are doing logistic loss, you will need to put in value before logistic transformation. If you are using XGBoost predictor, use pred_margin=1 to output margin values.
+XGBoost will take these values as the initial margin prediction and boost from that. An important note about base_margin is that it should be the margin prediction before transformation, so if you are doing logistic loss, you will need to put in the value before the logistic transformation. If you are using the XGBoost predictor, use pred_margin=1 to output margin values.
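+
+A hedged sketch of supplying the same information through the Python package (assuming the margins are stored one per line in `train.txt.base_margin`, as above):
+
+```python
+import numpy as np
+import xgboost as xgb
+
+dtrain = xgb.DMatrix('train.txt')
+
+# base margins must be untransformed margin scores, one per instance
+margins = np.loadtxt('train.txt.base_margin')
+dtrain.set_base_margin(margins)
+```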
diff --git a/doc/jvm/index.md b/doc/jvm/index.md
index 0f01fb042..13c2aa709 100644
--- a/doc/jvm/index.md
+++ b/doc/jvm/index.md
@@ -17,7 +17,7 @@ To publish the artifacts to your local maven repository, run
mvn install
-Or, if you would like to skip tests, run
+Or, if you would like to skip tests, run
mvn -DskipTests install
@@ -32,7 +32,7 @@ This command will publish the xgboost binaries, the compiled java classes as wel
-After integrating with Dataframe/Dataset APIs of Spark 2.0, XGBoost4J-Spark only supports compile with Spark 2.x. You can build XGBoost4J-Spark as a component of XGBoost4J by running `mvn package`, and you can specify the version of spark with `mvn -Dspark.version=2.0.0 package`. (To continue working with Spark 1.x, the users are supposed to update pom.xml by modifying the properties like `spark.version`, `scala.version`, and `scala.binary.version`. Users also need to change the implemention by replacing SparkSession with SQLContext and the type of API parameters from Dataset[_] to Dataframe)
+After integrating with the Dataframe/Dataset APIs of Spark 2.0, XGBoost4J-Spark only supports compiling with Spark 2.x. You can build XGBoost4J-Spark as a component of XGBoost4J by running `mvn package`, and you can specify the version of Spark with `mvn -Dspark.version=2.0.0 package`. (To continue working with Spark 1.x, the users are supposed to update pom.xml by modifying the properties like `spark.version`, `scala.version`, and `scala.binary.version`. Users also need to change the implementation by replacing SparkSession with SQLContext and the type of API parameters from Dataset[_] to Dataframe.)
Contents
--------
diff --git a/doc/jvm/java_intro.md b/doc/jvm/java_intro.md
index 9e145f369..4ddfeb954 100644
--- a/doc/jvm/java_intro.md
+++ b/doc/jvm/java_intro.md
@@ -133,7 +133,7 @@ Booster booster = new Booster(param, "model.bin");
```
## Prediction
-after training and loading a model, you use it to predict other data, the predict results will be a two-dimension float array (nsample, nclass) ,for predict leaf, it would be (nsample, nclass*ntrees)
+After training or loading a model, you can use it to predict on other data. The prediction results will be a two-dimensional float array of shape (nsample, nclass); for leaf prediction, it would be (nsample, nclass*ntrees).
```java
DMatrix dtest = new DMatrix("test.svm.txt");
//predict
diff --git a/doc/jvm/xgboost4j-intro.md b/doc/jvm/xgboost4j-intro.md
index bc0cefc6f..50c1ad898 100644
--- a/doc/jvm/xgboost4j-intro.md
+++ b/doc/jvm/xgboost4j-intro.md
@@ -26,7 +26,7 @@ They are also often [much more efficient](http://arxiv.org/abs/1603.02754).
The gap between the implementation fundamentals of the general data processing frameworks and the more specific machine learning libraries/systems prohibits the smooth connection between these two types of systems, thus brings unnecessary inconvenience to the end user. The common workflow to the user is to utilize the systems like Spark/Flink to preprocess/clean data, pass the results to machine learning systems like [XGBoost](https://github.com/dmlc/xgboost)/[MxNet](https://github.com/dmlc/mxnet)) via the file systems and then conduct the following machine learning phase. This process jumping across two types of systems creates certain inconvenience for the users and brings additional overhead to the operators of the infrastructure.
-We want best of both worlds, so we can use the data processing frameworks like Spark and Flink toghether with
+We want the best of both worlds, so we can use data processing frameworks like Spark and Flink together with
the best distributed machine learning solutions.
To resolve the situation, we introduce the new-brewed [XGBoost4J](https://github.com/dmlc/xgboost/tree/master/jvm-packages),
XGBoost for JVM Platform. We aim to provide the clean Java/Scala APIs and the integration with the most popular data processing systems developed in JVM-based languages.
diff --git a/doc/jvm/xgboost4j_full_integration.md b/doc/jvm/xgboost4j_full_integration.md
index d7023ea21..562721f03 100644
--- a/doc/jvm/xgboost4j_full_integration.md
+++ b/doc/jvm/xgboost4j_full_integration.md
@@ -1,6 +1,6 @@
-## Introduction
+## Introduction
-On March 2016, we released the first version of [XGBoost4J](http://dmlc.ml/2016/03/14/xgboost4j-portable-distributed-xgboost-in-spark-flink-and-dataflow.html), which is a set of packages providing Java/Scala interfaces of XGBoost and the integration with prevalent JVM-based distributed data processing platforms, like Spark/Flink.
+In March 2016, we released the first version of [XGBoost4J](http://dmlc.ml/2016/03/14/xgboost4j-portable-distributed-xgboost-in-spark-flink-and-dataflow.html), which is a set of packages providing Java/Scala interfaces of XGBoost and the integration with prevalent JVM-based distributed data processing platforms, like Spark/Flink.
The integrations with Spark/Flink, a.k.a. XGBoost4J-Spark and XGBoost-Flink, receive the tremendous positive feedbacks from the community. It enables users to build a unified pipeline, embedding XGBoost into the data processing system based on the widely-deployed frameworks like Spark. The following figure shows the general architecture of such a pipeline with the first version of XGBoost4J-Spark, where the data processing is based on the low-level [Resilient Distributed Dataset (RDD)](http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) abstraction.
@@ -12,14 +12,14 @@ In the last months, we have a lot of communication with the users and gain the d
* While Spark is still the mainstream data processing tool in most of scenarios, more and more users are porting their RDD-based Spark programs to [DataFrame/Dataset APIs](http://spark.apache.org/docs/latest/sql-programming-guide.html) for the well-designed interfaces to manipulate structured data and the [significant performance improvement](https://databricks.com/blog/2016/07/26/introducing-apache-spark-2-0.html).
-* Spark itself has presented a clear roadmap that DataFrame/Dataset would be the base of the latest and future features, e.g. latest version of [ML pipeline](http://spark.apache.org/docs/latest/ml-guide.html) and [Structured Streaming](http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html).
+* Spark itself has presented a clear roadmap that DataFrame/Dataset would be the base of the latest and future features, e.g. latest version of [ML pipeline](http://spark.apache.org/docs/latest/ml-guide.html) and [Structured Streaming](http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html).
-Based on these feedbacks from the users, we observe a gap between the original RDD-based XGBoost4J-Spark and the users' latest usage scenario as well as the future direction of Spark ecosystem. To fill this gap, we start working on the integration of XGBoost and Spark's DataFrame/Dataset abstraction in September. In this blog, we will introduce the latest version of XGBoost4J-Spark which allows the user to work with DataFrame/Dataset directly and embed XGBoost to Spark's ML pipeline seamlessly.
+Based on this feedback from the users, we observed a gap between the original RDD-based XGBoost4J-Spark and the users' latest usage scenarios, as well as the future direction of the Spark ecosystem. To fill this gap, we started working on the integration of XGBoost and Spark's DataFrame/Dataset abstraction in September. In this blog, we will introduce the latest version of XGBoost4J-Spark, which allows the user to work with DataFrame/Dataset directly and embed XGBoost into Spark's ML pipeline seamlessly.
## A Full Integration of XGBoost and DataFrame/Dataset
-The following figure illustrates the new pipeline architecture with the latest XGBoost4J-Spark.
+The following figure illustrates the new pipeline architecture with the latest XGBoost4J-Spark.

@@ -49,7 +49,7 @@ import org.apache.spark.ml.feature.StringIndexer
// load sales records saved in json files
val salesDF = spark.read.json("sales.json")
-// transfrom the string-represented storeType feature to numeric storeTypeIndex
+// transform the string-represented storeType feature to numeric storeTypeIndex
val indexer = new StringIndexer()
.setInputCol("storeType")
.setOutputCol("storeTypeIndex")
@@ -71,7 +71,7 @@ import org.apache.spark.ml.feature.StringIndexer
// load sales records saved in json files
val salesDF = spark.read.json("sales.json")
-// transfrom the string-represented storeType feature to numeric storeTypeIndex
+// transform the string-represented storeType feature to numeric storeTypeIndex
val indexer = new StringIndexer()
.setInputCol("storeType")
.setOutputCol("storeTypeIndex")
@@ -99,7 +99,7 @@ val salesRecordsWithPred = xgboostModel.transform(salesTestDF)
The most critical operation to maximize the power of XGBoost is to select the optimal parameters for the model. Tuning parameters manually is a tedious and labor-consuming process. With the latest version of XGBoost4J-Spark, we can utilize the Spark model selecting tool to automate this process. The following example shows the code snippet utilizing [TrainValidationSplit](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.tuning.TrainValidationSplit) and [RegressionEvaluator](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.evaluation.RegressionEvaluator) to search the optimal combination of two XGBoost parameters, [max_depth and eta] (https://github.com/dmlc/xgboost/blob/master/doc/parameter.md). The model producing the minimum cost function value defined by RegressionEvaluator is selected and used to generate the prediction for the test set.
```scala
-// create XGBoostEstimator
+// create XGBoostEstimator
val xgbEstimator = new XGBoostEstimator(xgboostParam).setFeaturesCol("features").
setLabelCol("sales")
val paramGrid = new ParamGridBuilder()
@@ -137,5 +137,3 @@ If you are interested in knowing more about XGBoost, you can find rich resources
- [Tutorials for the R package](xgboost.readthedocs.org/en/latest/R-package/index.html)
- [Introduction of the Parameters](http://xgboost.readthedocs.org/en/latest/parameter.html)
- [Awesome XGBoost, a curated list of examples, tutorials, blogs about XGBoost usecases](https://github.com/dmlc/xgboost/tree/master/demo)
-
-
diff --git a/doc/tutorials/aws_yarn.md b/doc/tutorials/aws_yarn.md
index fb1dcd8fa..20cdd620b 100644
--- a/doc/tutorials/aws_yarn.md
+++ b/doc/tutorials/aws_yarn.md
@@ -49,7 +49,7 @@ Now we can open the browser, and type(replace the DNS with the master DNS)
```
ec2-xx-xx-xx.us-west-2.compute.amazonaws.com:8088
```
-This will show the job tracker of the YARN cluster. Note that we may wait a few minutes before the master finishes bootstraping and starts the
+This will show the job tracker of the YARN cluster. Note that we may need to wait a few minutes before the master finishes bootstrapping and starts the
job tracker.
After master machine gets up, we can freely add more slave machines to the cluster.
@@ -158,7 +158,7 @@ Application application_1456461717456_0015 finished with state FINISHED at 14564
Analyze the Model
-----------------
After the model is trained, we can analyse the learnt model and use it for future prediction task.
-XGBoost is a portable framework, the model in all platforms are ***exchangable***.
+XGBoost is a portable framework, and the models on all platforms are ***exchangeable***.
This means we can load the trained model in python/R/Julia and take benefit of data science pipelines
in these languages to do model analysis and prediction.
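+
+For example (a sketch assuming you copied the trained model from S3 to a hypothetical local file `final.model`), you can inspect it from Python:
+
+```python
+import xgboost as xgb
+
+# load a local copy of the model trained on the YARN cluster (file name assumed)
+bst = xgb.Booster(model_file='final.model')
+
+# analyze the learnt model, e.g. feature importance and the raw tree dump
+print(bst.get_fscore())
+print(bst.get_dump()[0])
+```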