From 878f3079484fc5fa9e1f90a476bdded23d9e55de Mon Sep 17 00:00:00 2001
From: LevineHuang
Date: Thu, 30 Nov 2017 03:22:09 +0800
Subject: [PATCH] Fix minor typos (#2842)

* Some minor changes to the code style

Some minor changes to the code style in file basic_walkthrough.py

* coding style changes

* coding style changes according to PEP8

* Update basic_walkthrough.py

* Fix minor typo

* Minor edits to coding style

Minor edits to coding style following the proposals of PEP8.
---
 demo/binary_classification/README.md    | 6 +++---
 demo/multiclass_classification/train.py | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/demo/binary_classification/README.md b/demo/binary_classification/README.md
index 665027880..0a35b5987 100644
--- a/demo/binary_classification/README.md
+++ b/demo/binary_classification/README.md
@@ -1,8 +1,8 @@
 Binary Classification
 =====================
 This is the quick start tutorial for xgboost CLI version.
-Here we demonstrate how to use XGBoost for a binary classification task. Before getting started, make sure you compile xgboost in the root directory of the project by typing ```make```
-The script runexp.sh can be used to run the demo. Here we use [mushroom dataset](https://archive.ics.uci.edu/ml/datasets/Mushroom) from UCI machine learning repository.
+Here we demonstrate how to use XGBoost for a binary classification task. Before getting started, make sure you compile xgboost in the root directory of the project by typing ```make```.
+The script 'runexp.sh' can be used to run the demo. Here we use [mushroom dataset](https://archive.ics.uci.edu/ml/datasets/Mushroom) from UCI machine learning repository.
 
 ### Tutorial
 #### Generate Input Data
@@ -80,7 +80,7 @@ booster = gblinear
 # L2 regularization term on weights, default 0
 lambda = 0.01
 # L1 regularization term on weights, default 0
-f ```agaricus.txt.test.buffer``` exists, and automatically loads from binary buffer if possible, this can speedup training process when you do training many times. You can disable it by setting ```use_buffer=0```.
+If ```agaricus.txt.test.buffer``` exists, and automatically loads from binary buffer if possible, this can speedup training process when you do training many times. You can disable it by setting ```use_buffer=0```.
 - Buffer file can also be used as standalone input, i.e if buffer file exists, but original agaricus.txt.test was removed, xgboost will still run
 * Deviation from LibSVM input format: xgboost is compatible with LibSVM format, with the following minor differences:
   - xgboost allows feature index starts from 0
diff --git a/demo/multiclass_classification/train.py b/demo/multiclass_classification/train.py
index 6a43c6dee..4dbce8216 100755
--- a/demo/multiclass_classification/train.py
+++ b/demo/multiclass_classification/train.py
@@ -7,7 +7,7 @@ import xgboost as xgb
 # label need to be 0 to num_class -1
 data = np.loadtxt('./dermatology.data', delimiter=',',
-                  converters={33: lambda x:int(x == '?'), 34: lambda x:int(x)-1})
+                  converters={33: lambda x:int(x == '?'), 34: lambda x:int(x) - 1})
 sz = data.shape
 train = data[:int(sz[0] * 0.7), :]
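For readers of the train.py hunk above: the two converters turn the dermatology dataset's '?' marker (missing age, column 33) into a 0/1 flag and shift the 1-based class in column 34 down to the 0-based labels that XGBoost's multiclass objective expects. Below is a minimal, illustrative sketch of how that converted array is typically fed into training; the feature/label column split, the parameter values, and the number of boosting rounds are assumptions for illustration and are not part of this patch.

import numpy as np
import xgboost as xgb

# Column 33: '?' (missing age) -> 0/1 flag; column 34: class 1..6 -> label 0..5.
data = np.loadtxt('./dermatology.data', delimiter=',',
                  converters={33: lambda x: int(x == '?'),
                              34: lambda x: int(x) - 1})

sz = data.shape
train = data[:int(sz[0] * 0.7), :]   # 70% / 30% split, as in the demo
test = data[int(sz[0] * 0.7):, :]

# Columns 0..33 are features; column 34 holds the 0-based class label.
xg_train = xgb.DMatrix(train[:, :34], label=train[:, 34])
xg_test = xgb.DMatrix(test[:, :34], label=test[:, 34])

# Illustrative parameters: softmax over the 6 dermatology classes.
param = {'objective': 'multi:softmax', 'num_class': 6, 'eta': 0.1, 'max_depth': 6}
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
bst = xgb.train(param, xg_train, 5, watchlist)

pred = bst.predict(xg_test)          # one predicted class index per test row
print('test error: %f' % (np.sum(pred != test[:, 34]) / test.shape[0]))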