diff --git a/multi-node/hadoop/README.md b/multi-node/hadoop/README.md
index a403af474..d1dde8ba3 100644
--- a/multi-node/hadoop/README.md
+++ b/multi-node/hadoop/README.md
@@ -2,16 +2,17 @@ Distributed XGBoost: Hadoop Version
 ====
 * The script in this fold shows an example of how to run distributed xgboost on hadoop platform.
 * It relies on [Rabit Library](https://github.com/tqchen/rabit) (Reliable Allreduce and Broadcast Interface) and Hadoop Streaming. Rabit provides an interface to aggregate gradient values and split statistics, that allow xgboost to run reliably on hadoop. You do not need to care how to update model in each iteration, just use the script ```rabit_hadoop.py```. For those who want to know how it exactly works, plz refer to the main page of [Rabit](https://github.com/tqchen/rabit).
-* Quick start: run ```bash run_binary_classification.sh ```
+* Quick start: run ```bash run_mushroom.sh```
   - This is the hadoop version of binary classification example in the demo folder.
-  - More info of the binary classification task can be refered to https://github.com/tqchen/xgboost/wiki/Binary-Classification.
+  - More information on the usage of xgboost can be found on the [wiki page](https://github.com/tqchen/xgboost/wiki).
 
 Before you run the script
 ====
-* Make sure you have set up the hadoop environment. Otherwise you should run single machine examples in the demo fold.
+* Make sure you have set up the hadoop environment.
+* If you only want single machine multi-threading, try the single machine examples in the [demo folder](../../demo).
 * Build: run ```bash build.sh``` in the root folder, it will automatically download rabit and build xgboost.
-* Check whether the environment variable $HADOOP_HOME exists (e.g. run ```echo $HADOOP_HOME```). If not, plz set up hadoop-streaming.jar path in rabit_hadoop.py.
-
+* Check whether the environment variable $HADOOP_HOME exists (e.g. run ```echo $HADOOP_HOME```). If not, please set up the hadoop-streaming.jar path in rabit_hadoop.py.
+
 How to Use
 ====
 * Input data format: LIBSVM format. The example here uses generated data in demo/data folder.
@@ -19,24 +20,32 @@ How to Use
 * Use rabit ```rabit_hadoop.py``` to submit training task to hadoop, and save the final model file.
 * Get the final model file from HDFS, and locally do prediction as well as visualization of model.
 
-XGBoost: Single machine verison VS Hadoop version
+Single machine vs Hadoop version
 ====
 If you have used xgboost (single machine version) before, this section will show you how to run xgboost on hadoop with a slight modification on conf file.
-* Hadoop version needs to set up how many slave nodes/machines/workers you would like to use at first.
-* IO: instead of reading and writing file locally, hadoop version use "stdin" to read training file and use "stdout" to store the final model file. Therefore, you should change the parameters "data" and "model_out" in conf file to ```data = stdin; model_out = stdout```.
+* Hadoop version needs to know up front how many slave nodes/machines/workers you would like to use.
+* IO: instead of reading and writing files locally, the hadoop version uses "stdin" to read the training file and "stdout" to store the final model file. Therefore, you should change the parameters "data" and "model_out" in the conf file to ```data=stdin``` and ```model_out=stdout```, as in the snippet below.
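+  - A minimal sketch of the relevant part of the conf file after this change (everything else stays the same; compare ```mushroom.hadoop.conf``` below):
+```
+# read training data from Hadoop Streaming's stdin
+data = stdin
+# write the final model to stdout, so Hadoop stores it in HDFS as part-00000
+model_out = stdout
+```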
 * File cache: ```rabit_hadoop.py``` also provide several ways to cache necesary files, including binary file (xgboost), conf file, small size of dataset which used for eveluation during the training process, and so on.
   - Any file used in config file, excluding stdin, should be cached in the script. ```rabit_hadoop.py``` will automatically cache files in the command line. For example, ```rabit_hadoop.py -n 3 -i $hdfsPath/agaricus.txt.train -o $hdfsPath/mushroom.final.model $localPath/xgboost mushroom.hadoop.conf``` will cache "xgboost" and "mushroom.hadoop.conf".
   - You could also use "-f" to manually cache one or more files, like ```-f file1 -f file2``` or ```-f file1#file2``` (use "#" to spilt file names).
   - The local path of cached files in command is "./".
   - Since the cached files will be packaged and delivered to hadoop slave nodes, the cached file should not be large. For instance, trying to cache files of GB size may reduce the performance.
-* Hadoop version also support evaluting each training round. You just need to modify parameters "eval_train" and "eval[test]" in conf file and cache the evaluation file.
-* Hadoop version now can only save the final model.
-* Predict locally. Althought the hadoop version supports training process, you should do prediction locally, just the same as single machine version.
-* The hadoop version now can only save the final model.
-* More details of hadoop version can be referred to the usage of ```rabit_hadoop.py```.
+* Hadoop version also supports evaluating each training round. You just need to set ```eval_train=1``` in the conf file (as done in ```mushroom.hadoop.conf``` below).
+* For more details on job submission, refer to the usage of ```rabit_hadoop.py```.
+* The model saved by the hadoop version is compatible with the single machine version.
 
 Notes
 ====
-* The code has been tested on MapReduce 1 (MRv1), it should be ok and recommended to run on MapReduce 2 (MRv2, YARN).
-* The code is multi-threaded, so you want to run one xgboost per node/worker, which means the parameter should be less than the number of slaves/workers.
-
+* The code has been tested on MapReduce 1 (MRv1) and YARN.
+  - We recommend running it on MapReduce 2 (MRv2, YARN) so that multi-threading can be enabled.
+* The code is optimized with multi-threading, so you will want to run one xgboost per node/worker for best performance.
+  - You will want to set ```nthread``` to be the number of cores you have on each machine (see the example below).
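+  - For example (worker and thread counts here are hypothetical), a submission using 2 workers with 4 threads each: ```../../rabit/tracker/rabit_hadoop.py -n 2 -nt 4 -i $hdfsPath/agaricus.txt.train -o $hdfsPath/mushroom.final.model ../../xgboost mushroom.hadoop.conf nthread=4```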
+  - You will need YARN to specify the number of cores for each worker.
diff --git a/multi-node/hadoop/mushroom.hadoop.conf b/multi-node/hadoop/mushroom.hadoop.conf
index 15e05f2da..a4e885d54 100644
--- a/multi-node/hadoop/mushroom.hadoop.conf
+++ b/multi-node/hadoop/mushroom.hadoop.conf
@@ -32,3 +32,5 @@ model_out = stdout
 
 # split pattern of xgboost
 dsplit = row
+# evaluate on training data as well each round
+eval_train = 1
diff --git a/multi-node/hadoop/run_hadoop_mushroom.sh b/multi-node/hadoop/run_hadoop_mushroom.sh
deleted file mode 100755
index 1e7c9a1d0..000000000
--- a/multi-node/hadoop/run_hadoop_mushroom.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-if [ "$#" -lt 2 ];
-then
-    echo "Usage: <nworkers> <path_in_HDFS>"
-    exit -1
-fi
-
-# put the local training file to HDFS
-hadoop fs -mkdir $2/data
-hadoop fs -put ../../demo/data/agaricus.txt.train $2/data
-
-# training and output the final model file
-../../rabit/tracker/rabit_hadoop.py -n $1 -i $2/data/agaricus.txt.train -o $2/mushroom.final.model ../../xgboost mushroom.hadoop.conf
-
-# get the final model file
-hadoop fs -get $2/mushroom.final.model/part-00000 ./mushroom.final.model
-
-# output prediction task=pred of test:data
-../../xgboost mushroom.hadoop.conf task=pred model_in=mushroom.final.model test:data=../../demo/data/agaricus.txt.test
-# print the boosters of final.model in dump.raw.txt
-../../xgboost mushroom.hadoop.conf task=dump model_in=mushroom.final.model name_dump=dump.raw.txt
-# use the feature map in printing for better visualization
-../../xgboost mushroom.hadoop.conf task=dump model_in=mushroom.final.model fmap=../../demo/data/featmap.txt name_dump=dump.nice.txt
-cat dump.nice.txt
diff --git a/multi-node/hadoop/run_mushroom.sh b/multi-node/hadoop/run_mushroom.sh
new file mode 100755
index 000000000..5f133298e
--- /dev/null
+++ b/multi-node/hadoop/run_mushroom.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+if [ "$#" -lt 3 ];
+then
+    echo "Usage: <nworkers> <nthreads> <path_in_HDFS>"
+    exit -1
+fi
+
+# put the local training file to HDFS
+hadoop fs -mkdir $3/data
+hadoop fs -put ../../demo/data/agaricus.txt.train $3/data
+
+../../rabit/tracker/rabit_hadoop.py -n $1 -nt $2 -i $3/data/agaricus.txt.train -o $3/mushroom.final.model ../../xgboost mushroom.hadoop.conf nthread=$2
+
+# get the final model file
+hadoop fs -get $3/mushroom.final.model/part-00000 ./final.model
+
+# output prediction task=pred
+../../xgboost mushroom.hadoop.conf task=pred model_in=final.model test:data=../../demo/data/agaricus.txt.test
+# print the boosters of final.model in dump.raw.txt
+../../xgboost mushroom.hadoop.conf task=dump model_in=final.model name_dump=dump.raw.txt
+# use the feature map in printing for better visualization
+../../xgboost mushroom.hadoop.conf task=dump model_in=final.model fmap=../../demo/data/featmap.txt name_dump=dump.nice.txt
+cat dump.nice.txt
diff --git a/src/xgboost_main.cpp b/src/xgboost_main.cpp
index 9440c791a..db37cbd1d 100644
--- a/src/xgboost_main.cpp
+++ b/src/xgboost_main.cpp
@@ -32,7 +32,7 @@ class BoostLearnTask {
       }
     }
     // do not save anything when save to stdout
-    if (model_out == "stdout") {
+    if (model_out == "stdout" || name_pred == "stdout") {
      this->SetParam("silent", "1");
      save_period = 0;
    }
@@ -235,12 +235,17 @@ class BoostLearnTask {
     std::vector<float> preds;
     if (!silent) printf("start prediction...\n");
     learner.Predict(*data, pred_margin != 0, &preds, ntree_limit);
-    if (!silent) printf("writing prediction to %s\n", name_pred.c_str());
-    FILE *fo = utils::FopenCheck(name_pred.c_str(), "w");
-    for (size_t i = 0; i < preds.size(); i++) {
-      fprintf(fo, "%f\n", preds[i]);
+    if (!silent) printf("writing prediction to %s\n", name_pred.c_str());
+    FILE *fo;
+    if (name_pred != "stdout") {
+      fo = utils::FopenCheck(name_pred.c_str(), "w");
+    } else {
+      fo = stdout;
     }
-    fclose(fo);
+    for (size_t i = 0; i < preds.size(); ++i) {
+      fprintf(fo, "%g\n", preds[i]);
+    }
+    if (fo != stdout) fclose(fo);
   }
  private:
   /*! \brief whether silent */
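For reference, a minimal usage sketch of the pieces above (the worker count, thread count, and HDFS working directory are hypothetical placeholders):

```bash
# submit training with 2 workers and 4 threads per worker,
# using /user/me/xgb as the working directory on HDFS
bash run_mushroom.sh 2 4 /user/me/xgb

# with the xgboost_main.cpp change above, predictions can also be written to
# stdout instead of a file by setting name_pred=stdout, e.g. for piping:
../../xgboost mushroom.hadoop.conf task=pred model_in=final.model \
    test:data=../../demo/data/agaricus.txt.test name_pred=stdout
```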