Compare commits


509 Commits
0.47 ... v0.60

Author SHA1 Message Date
Tianqi Chen
4a8d63b6c8 Tag version 0.6 (#1422) 2016-07-29 11:23:06 -07:00
Vadim Khotilovich
75f401481f no exception throwing within omp parallel; set nthread in Learner (#1421) 2016-07-29 10:08:03 -07:00
Baltazar Bieniek
89c4f67f59 Class function returns more than one value (#1417)
Fixes a bug where the code would fail when the class function returns more than one value.
2016-07-29 10:07:09 -07:00
Fangzhou
a8adf16228 fix bug: doing rabit call after finalize in spark prediction phase (#1420) 2016-07-28 23:11:20 -05:00
Johnny Ho
328e8e4c69 Update rabit repository (#1409) 2016-07-27 11:40:42 -07:00
Vadim Khotilovich
d5c143367d [R-package] GPL2 dependency reduction and some fixes (#1401)
* [R] do not remove zero coefficients from gblinear dump

* [R] switch from stringr to stringi

* fix #1399

* [R] separate ggplot backend, add base r graphics, cleanup, more plots, tests

* add missing include in amalgamation - fixes building R package in linux

* add forgotten file

* [R] fix DESCRIPTION

* [R] fix travis check issue and some cleanup
2016-07-27 00:05:04 -07:00
Tianqi Chen
f6423056c0 Update dmlc-core (#1408) 2016-07-26 10:53:31 -07:00
Tianqi Chen
c3eb4f7000 Move model doc images to web-data (#1397) 2016-07-23 23:53:22 -07:00
Earthson Lu
d29edc677c fix #1377 spark-mllib scope: default => provided (#1381) 2016-07-20 23:10:49 -04:00
Shengwen Yang
7089301b62 Metrics for gamma regression (#1369)
* Add deviance metric for gamma regression

* Simplify the computation of nloglik for gamma regression

* Add a description for gamma-deviance

* Minor fix
2016-07-18 09:10:44 -05:00
Yuan (Terry) Tang
c60a356273 Remove pypi downloads badge (#1365) 2016-07-16 13:36:05 -04:00
anpark
0e61c514a7 fix duplicate loop over output_group when predict (#1342)
* fix sparse page source meta info empty when load from dmatrix

* fix duplicate loop over output_group when predict
2016-07-13 10:03:10 -07:00
convexquad
313764b3be Expose predictLeaf functionality in Scala XGBoostModel (#1351) 2016-07-12 06:55:24 -04:00
Titouan Lorieul
75d9be55de [py] fix label encoding of eval sets in sklearn API (#1244) 2016-07-11 05:29:46 -05:00
Yuan (Terry) Tang
197b4c6b18 Update DESCRIPTION (#1348) 2016-07-10 09:59:16 -07:00
Yuan (Terry) Tang
5f179340a8 Merge pull request #1347 from marugari/prototype_dart
add Dart tutorial
2016-07-10 09:19:44 -05:00
marugari
c332eb5a2b add Dart tutorial 2016-07-10 20:12:42 +09:00
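
A note on the Dart tutorial above: DART applies dropout to the boosted tree ensemble during training. Below is a minimal sketch of enabling it through the Python API; the data is synthetic and the parameter values are illustrative, not taken from the tutorial.

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)
    dtrain = xgb.DMatrix(rng.randn(100, 4), label=rng.randint(0, 2, 100))
    params = {
        'booster': 'dart',               # use DART instead of gbtree
        'objective': 'binary:logistic',
        'rate_drop': 0.1,                # fraction of trees dropped per round
        'skip_drop': 0.5,                # chance of skipping dropout entirely
        'sample_type': 'uniform',
        'normalize_type': 'tree',
    }
    bst = xgb.train(params, dtrain, num_boost_round=50)
    # for dart, set ntree_limit to a nonzero value at prediction time,
    # otherwise dropout is applied to the prediction as well
    preds = bst.predict(dtrain, ntree_limit=50)
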
Rahul
f14c160f4f [jvm-packages][xgboost4j-spark][Minor] Move sparkContext dependency from the XGBoostModel (#1335)
* Move sparkContext dependency from the XGBoostModel

* Update Spark example to declare SparkContext as implict
2016-07-08 06:43:33 -04:00
anpark
3f32b3f0eb fix sparse page source meta info empty when load from dmatrix (#1336) 2016-07-07 21:17:35 -07:00
Shengwen Yang
77d17f6264 Add support for Gamma regression (#1258)
* Add support for Gamma regression

* Use base_score to replace the lp_bias

* Remove the lp_bias config block

* Add a demo for running gamma regression in Python

* Typo fix

* Revise the description for objective

* Add a script to generate the autoclaims dataset
2016-07-06 10:22:46 -07:00
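
Together with the gamma-deviance metric from #1369 above, this makes gamma regression usable end to end from the Python API. A minimal sketch on synthetic data; gamma regression expects positive targets.

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)
    dtrain = xgb.DMatrix(rng.randn(100, 4),
                         label=rng.gamma(2.0, 2.0, 100))  # positive labels
    params = {
        'objective': 'reg:gamma',         # objective added in #1258
        'eval_metric': 'gamma-deviance',  # metric added in #1369
    }
    bst = xgb.train(params, dtrain, num_boost_round=50,
                    evals=[(dtrain, 'train')])
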
Ryan Curtin
f74e2439e0 Fix spelling error. (#1331) 2016-07-05 12:58:24 -07:00
JP Rosevear
13445e3522 Check for visual studio 12.0 and newer for c++11 support (#1330) 2016-07-04 18:32:20 -07:00
Vadim Khotilovich
11efa038bd [R-package] various fixes for R CMD check (#1328)
* [R] fix xgb.create.features

* [R] fixes for R CMD check
2016-07-04 10:40:35 -07:00
RAMitchell
f8d23b97be Add build instructions for Visual Studio 2013 (#1327) 2016-07-03 21:30:50 -07:00
Tong He
44ed6d5674 Merge pull request #1264 from khotilov/r_callbacks
[R-package] callbacks per #892
2016-07-03 13:43:29 -07:00
Vadim Khotilovich
4fb1b8a5a7 Merge branch 'master' into r_callbacks 2016-07-03 15:04:58 -05:00
Muhammad Haseeb Tariq
7533191af7 Typos in README (#1326)
* Inconsistency in libsvm formats

* note on libsvm formats

* typos in README

* Update README.md

* Update README.md

* Update README.md
2016-07-03 15:14:35 -04:00
Muhammad Haseeb Tariq
14f9697025 Inconsistency in libsvm formats (#1325)
* Inconsistency in libsvm formats

* note on libsvm formats
2016-07-03 10:49:41 -07:00
RAMitchell
93196eb811 cmake build system (#1314)
* Changed c api to compile under MSVC

* Include functional.h header for MSVC

* Add cmake build
2016-07-02 19:07:35 -07:00
Frank
3b73824842 Fix ambiguous call to abs(c or c++). (#1308) 2016-06-29 14:28:28 -07:00
Vadim Khotilovich
344d7b4699 [R] disable for now some of the RF tests that fail in travis 2016-06-27 02:49:23 -05:00
Vadim Khotilovich
ae0ca486ed added name to DESCRIPTION 2016-06-27 02:23:37 -05:00
Vadim Khotilovich
4b2eedc186 fix merge conflicts 2016-06-27 02:18:59 -05:00
Vadim Khotilovich
e1a52e896c [R] rm renamed CB's docs 2016-06-27 02:01:54 -05:00
Vadim Khotilovich
fd4300b95a [R] additional and modified tests 2016-06-27 02:00:46 -05:00
Vadim Khotilovich
3b6b344561 [R] adopt demos and vignettes to a more consistent parameter style 2016-06-27 02:00:39 -05:00
Vadim Khotilovich
a0aa305268 [R] docs update - callbacks and parameter style 2016-06-27 01:59:58 -05:00
Vadim Khotilovich
e9eb34fabc [R] parameter style consistency 2016-06-27 01:58:03 -05:00
Vadim Khotilovich
56bd442b31 [R] simplified the code; parameter style consistency 2016-06-27 01:57:57 -05:00
Vadim Khotilovich
8473b18c3d [R] consolidate importFrom-s; parameter style 2016-06-27 01:50:03 -05:00
Vadim Khotilovich
b9aeeda074 [R] in predict: doc, examples, reshape parameter 2016-06-27 01:49:57 -05:00
Vadim Khotilovich
c342614a81 [R] add parameter deprecation related utilities; code style 2016-06-27 01:49:51 -05:00
Vadim Khotilovich
76650c096f [R] CB naming change; cv-prediction as CB; add.cb function to ensure proper CB order; docs; minor fixes + changes 2016-06-27 01:49:47 -05:00
Nan Zhu
bd5b07873e [jvm-packages] create dmatrix with specified missing value (#1272)
* create dmatrix with specified missing value

* update dmlc-core

* support for predict method in spark package

repartitioning

work around

* add more elements to work around training set empty partition issue
2016-06-21 17:35:17 -04:00
Nan Zhu
c9a73fe2a9 explicitly throw exception when detecting empty partition in training dataset (#1281) 2016-06-15 16:03:37 -04:00
Bill Chambers
465e5dfb87 Broken Link in README (#1275) 2016-06-13 15:41:24 -07:00
Tong He
9cb872b879 Merge pull request #914 from catena/master
R: fix "bestInd" and add "best_ntreelimit" to xgb.Booster
2016-06-13 11:13:00 -07:00
catena
661c062bd9 add best_ntreelimit attribute 2016-06-13 12:45:20 +05:30
Vladimir
aaf0a73486 fixed error when eval False (#1271) 2016-06-12 09:36:36 -07:00
Yoshinori Nakano
7cfeb5f012 fix Dart::NormalizeTrees (#1265) 2016-06-09 15:28:24 -07:00
Vadim Khotilovich
4e1269b522 print.xgb.cv fix - Rd too 2016-06-09 10:12:20 -05:00
Vadim Khotilovich
79704cdfb4 print.xgb.cv fix 2016-06-09 09:29:19 -05:00
Vadim Khotilovich
f34f9fb9f7 R-callbacks tests + other tests brushup 2016-06-09 02:53:37 -05:00
Vadim Khotilovich
2e0ffcc303 R-callbacks docs 2016-06-09 02:52:09 -05:00
Vadim Khotilovich
422b0000a8 R-callbacks refactor 2016-06-09 02:46:13 -05:00
Vadim Khotilovich
754f3a6e07 protection against returning 0-length vector 2016-06-09 02:45:02 -05:00
Vadim Khotilovich
bdf14007b5 print method; construct from initial xgb.Booster 2016-06-09 02:43:25 -05:00
Vadim Khotilovich
264c222fe0 Merge remote-tracking branch 'upstream/master' 2016-06-09 02:32:26 -05:00
Yoshinori Nakano
949d1e3027 add Dart booster (#1220) 2016-06-08 14:04:01 -07:00
Shengwen Yang
e034fdf74c Fix issue #1236: cli_main crashes when dumping count:poisson model (#1253) 2016-06-07 21:52:47 -07:00
Szilard Pafka
e2c1aa8b51 link to talk (video+slides) by Tianqi at Los Angeles Data Science meetup (#1254)
* link to talk (video+slides) by Tianqi

* benchmark
2016-06-07 21:43:52 -07:00
Vadim Khotilovich
9a48a40cf1 Fixes for multiple and default metric (#1239)
* fix multiple evaluation metrics

* create DefaultEvalMetric only when really necessary

* py test for #1239

* make travis happy
2016-06-04 22:17:35 -07:00
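
The fix above concerns evaluating several metrics per round. Since a Python dict can hold only one value per key, the parameters can be passed as a list of pairs so that eval_metric appears more than once; a minimal sketch on synthetic data:

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)
    dtrain = xgb.DMatrix(rng.randn(100, 4), label=rng.randint(0, 2, 100))
    # a list of pairs lets 'eval_metric' repeat, unlike a dict
    params = [('objective', 'binary:logistic'),
              ('eval_metric', 'logloss'),
              ('eval_metric', 'auc')]
    bst = xgb.train(params, dtrain, num_boost_round=10,
                    evals=[(dtrain, 'train')])
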
Vadim Khotilovich
26a82621a2 make travis happy 2016-06-04 22:38:24 -05:00
Vadim Khotilovich
ba04a1d552 py test for #1239 2016-06-04 22:08:10 -05:00
Yuan (Terry) Tang
9ef86072f4 Merge pull request #1241 from KhaoticMind/master
[py] Preserve the actual objective used on the booster - Fixed #1215
2016-06-01 09:11:49 -05:00
Antonio Augusto Santos
19129b289c Preserve the actual objective used on the booster
Save the actual objective used on xgboost.train.

Not saving it was causing problems in predict_proba, as in issue #1215
2016-05-31 19:01:10 -03:00
Vadim Khotilovich
22ad94d281 create DefaultEvalMetric only when really necessary 2016-05-31 08:20:25 -05:00
Vadim Khotilovich
64b9dcf7b5 fix multiple evaluation metrics 2016-05-31 08:20:17 -05:00
Tianqi Chen
6e3463097d Merge pull request #1232 from zl1zl/master
fix cli_main crashes when using count:poisson regression
2016-05-27 20:22:50 -07:00
Zhongliang Li
1dde863c98 fix cli_main crashes when using count:poisson regression 2016-05-26 10:03:29 -07:00
Tianqi Chen
2ef81e0673 Merge pull request #1228 from albertotb/master
XGBModel docstring
2016-05-25 10:38:48 -07:00
Alberto Torres
118eb2f1bb Merge pull request #1 from albertotb/sklearn-docstring
Update sklearn.py
2016-05-25 15:02:41 +02:00
Alberto Torres
af2e9ebd82 Update sklearn.py 2016-05-25 15:00:11 +02:00
Tianqi Chen
5c14daffe2 Merge pull request #1221 from ryaninhust/master
[DATA] fix instance weights loading
2016-05-24 20:19:31 -07:00
Nan Zhu
c6631ad2ed specify spark version (#1224) 2016-05-24 18:19:32 -04:00
yuanbowen
5898f1c59e [DATA] fix instance weights loading 2016-05-23 18:40:41 +08:00
Nan Zhu
c85b9012c6 [jvm-packages] xgboost4j-spark external memory (#1219)
* implement external memory support for XGBoost4J

* remove extra space

* enable external memory for prediction

* update doc
2016-05-22 14:01:28 -04:00
Tianqi Chen
587999755f Merge pull request #1218 from tqchen/master
[DATA] fix async data writing
2016-05-21 19:40:41 -07:00
tqchen
d816208797 [DATA] fix async data writing 2016-05-21 18:46:36 -07:00
Tianqi Chen
a4d8c1b49f redirects funding info to UW page 2016-05-21 11:07:55 -07:00
Tianqi Chen
cc5112d405 Merge pull request #1214 from tqchen/master
add style
2016-05-20 13:11:48 -07:00
tqchen
2c0c06639c add style 2016-05-20 13:11:27 -07:00
Tianqi Chen
47f359ca9f Merge pull request #1213 from tqchen/master
[DOC] refactor doc
2016-05-20 13:10:11 -07:00
tqchen
84ae514d7e [DOC] refactor doc 2016-05-20 13:09:42 -07:00
Tianqi Chen
e4ea166d05 Merge pull request #1211 from tqchen/master
[PYTHON] Refactor training API to use callback
2016-05-19 21:47:25 -07:00
tqchen
149589c583 [PYTHON] Refactor training API to use callback 2016-05-19 21:31:23 -07:00
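
The refactor above routes per-round behaviour (printing, early stopping, learning-rate decay) through a callbacks argument to xgb.train. A minimal sketch of a user callback, assuming each callback is a plain function receiving an env record that carries the round number and the evaluation results, which is the shape introduced by this refactor:

    import numpy as np
    import xgboost as xgb

    def log_round(env):
        # env bundles the state of the current boosting round
        print('round', env.iteration, env.evaluation_result_list)

    rng = np.random.RandomState(0)
    dtrain = xgb.DMatrix(rng.randn(100, 4), label=rng.randint(0, 2, 100))
    bst = xgb.train({'objective': 'binary:logistic'}, dtrain,
                    num_boost_round=5, evals=[(dtrain, 'train')],
                    callbacks=[log_round])
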
Tianqi Chen
03996dd4e8 Update NEWS.md 2016-05-19 11:04:48 -07:00
Michaël Benesty
51154f42fe Merge pull request #1118 from khotilov/parsing_speedup
[R-package] xgb.model.dt.tree up to x100 faster
2016-05-17 17:48:11 +02:00
Vadim Khotilovich
611b317057 make travis happy 2016-05-17 00:24:06 -05:00
Vadim Khotilovich
2b8b18583f some more xgb.model.dt.tree improvements 2016-05-17 00:24:06 -05:00
Vadim Khotilovich
be65949ba2 xgb.model.dt.tree up to x100 faster 2016-05-17 00:24:06 -05:00
Tianqi Chen
49bbd72d08 Merge pull request #1198 from khotilov/xgb_attributes
More functionality for model attributes
2016-05-16 09:31:58 -07:00
Vadim Khotilovich
ffed95eec0 py: replace attr_names() with attributes() 2016-05-15 22:04:38 -05:00
Vadim Khotilovich
26b36714ea doxygen suggested fix 2016-05-15 03:05:19 -05:00
Vadim Khotilovich
185fef3fce fixes for lint 2016-05-15 02:35:37 -05:00
Vadim Khotilovich
a13a3a4d76 attr_names for python interface; attribute deletion via set_attr 2016-05-15 02:05:10 -05:00
Vadim Khotilovich
8664217a5a [R] more attribute handling functionality 2016-05-14 18:19:18 -05:00
Vadim Khotilovich
ea9285dd4f methods to delete an attribute and get names of available attributes 2016-05-14 18:19:18 -05:00
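
The attribute work above gives get/set/delete access to key-value attributes stored inside a model, on both the R and Python sides. A minimal sketch of the Python API on a toy booster:

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)
    dtrain = xgb.DMatrix(rng.randn(50, 3), label=rng.randint(0, 2, 50))
    bst = xgb.train({}, dtrain, num_boost_round=1)

    bst.set_attr(run_id='42')     # set an attribute
    print(bst.attr('run_id'))     # read one attribute back: '42'
    print(bst.attributes())       # dict of all stored attributes
    bst.set_attr(run_id=None)     # deletion via set_attr, per the commit above
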
Tianqi Chen
9c26566eb0 Merge pull request #1190 from geneorama/geneorama-patch-1
add `scale_pos_weight` to parameter documentation
2016-05-11 15:06:21 -07:00
Tianqi Chen
07003e8342 Merge pull request #1191 from SeanBE/fix-build-doc
make grammar and spelling fixes to build doc
2016-05-11 15:06:04 -07:00
Sean Löfgren
bf322322fe make grammar and spelling fixes to build doc 2016-05-11 22:56:29 +01:00
Gene Leynes
2e7abffffb add scale_pos_weight to parameter documentation 2016-05-11 14:10:36 -05:00
Tianqi Chen
617aeb912b Merge pull request #1186 from tqchen/master
Update rabit to latest
2016-05-10 20:18:14 -07:00
tqchen
44d4a62631 Update rabit to latest 2016-05-10 20:07:22 -07:00
Tianqi Chen
bab69919d2 Merge pull request #1181 from shaynekang/visible_deprecation_warning
Fix VisibleDeprecationWarning
2016-05-07 11:11:58 -07:00
Shayne Kang
bf24d6ae98 fix VisibleDeprecationWarning 2016-05-08 01:44:04 +09:00
Yuan (Terry) Tang
840481d215 Merge pull request #1180 from borundev/master
call to DMatrix was missing 'missing=self.missing'
2016-05-07 10:42:00 -04:00
Borun Dev Chowdhury
fc02f8a2dc cosmetic change
cosmetic change: add a space after the comma from the previous edit.
2016-05-07 12:33:37 +02:00
borundev
95bcff90af XGBModel.fit had a call to DMatrix without missing=self.missing. fixed that 2016-05-07 12:32:03 +02:00
Tianqi Chen
6e79ba831a Merge pull request #1166 from khotilov/r_api_fix
[R-package] C-API fix; attribute accessors
2016-05-06 20:35:28 -07:00
Yuan (Terry) Tang
b92e2252b0 Merge pull request #1173 from tlorieul/sklearn_get_trees_leaves
[py] added apply function in sklearn API to return the predicted leaves
2016-05-04 08:02:08 -05:00
Titouan Lorieul
3ab8f0b13d [py] added apply function in sklearn API to return the predicted leaves 2016-05-04 12:27:30 +02:00
Saiwing Yeung
28cdc10259 Fixed a typo (#1172)
panda -> Pandas
2016-05-03 19:29:22 -05:00
Alistair Johnson
6750c8b743 Added other feature importances in python package (#1135)
* added new function to calculate other feature importances

* added capability to plot other feature importance measures

* changed plotting default to fscore

* added info on importance_type to boilerplate comment

* updated text of error statement

* added self module name to fix call

* added unit test for feature importances

* style fixes
2016-05-02 12:25:24 -05:00
Vadim Khotilovich
5a78118396 use short-circuiting scalar && 2016-05-02 01:01:22 -05:00
Vadim Khotilovich
79c7c9e5bb R accessors for model attributes 2016-05-02 00:20:44 -05:00
Vadim Khotilovich
0839aed380 fix attribute accessors C-interface for R 2016-05-02 00:19:38 -05:00
Yuan (Terry) Tang
c2c61eefd9 Merge pull request #1164 from sinhrks/fix_doc
DOC/TST: Fix Python sklearn dep
2016-05-01 16:18:03 -05:00
Vadim Khotilovich
b5fb437aa7 learner attribute setter & getter for R interface 2016-05-01 15:40:51 -05:00
Vadim Khotilovich
b588479f66 .Call-interface functions need to return SEXP 2016-05-01 15:40:51 -05:00
sinhrks
9da2f3e613 DOC/TST: Fix Python sklearn dep 2016-05-01 17:27:43 +09:00
Tianqi Chen
2f2ad21de4 Merge pull request #1153 from khotilov/seed_in_configure
Fixes for repeated Configure calls
2016-04-30 10:43:14 -07:00
Yuan (Terry) Tang
da85a4e923 Merge pull request #1161 from Far0n/eta_decay_fix
[py] eta decay bugfix
2016-04-30 10:50:52 -04:00
Faron
ad3f49e881 [py] eta decay bugfix 2016-04-30 15:51:57 +02:00
Yuan (Terry) Tang
9bc2ac4bd0 Merge pull request #1158 from sinhrks/feature_bug
Bug mixing DMatrix's with and without feature names
2016-04-30 09:20:58 -04:00
sinhrks
6bab164d80 Bug mixing DMatrix's with and without feature names 2016-04-30 14:42:57 +09:00
Yuan (Terry) Tang
ff4dda2102 Merge pull request #1159 from saiwing-yeung/my-fix
fixed typo
2016-04-30 00:38:18 -04:00
Saiwing Yeung
a6909e389f fixed typo
panda -> Pandas
2016-04-30 09:24:39 +08:00
Yuan (Terry) Tang
3434083d3e Merge pull request #1017 from Far0n/hist
[py] split value histograms
2016-04-28 13:52:49 -05:00
Faron
cf607e2448 [py] split value histograms 2016-04-28 20:26:21 +02:00
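
#1017 above adds per-feature histograms of the split thresholds used across the ensemble, which is handy for seeing where the model cuts a feature. A minimal sketch on synthetic data; 'f2' is the default name xgboost assigns to the third column when no feature names are given.

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)
    dtrain = xgb.DMatrix(rng.randn(200, 4), label=rng.randint(0, 2, 200))
    bst = xgb.train({'max_depth': 3}, dtrain, num_boost_round=10)
    # histogram of the thresholds at which feature 'f2' was split
    print(bst.get_split_value_histogram('f2', as_pandas=False))
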
Yuan (Terry) Tang
6691d5c3f4 Merge pull request #1141 from sinhrks/pandas_features
BUG: XGBClassifier.feature_importances_ raises ValueError if input is…
2016-04-27 18:07:51 -05:00
sinhrks
c55cc809e5 BUG: XGBClassifier.feature_importances_ raises ValueError if input is pandas DataFrame 2016-04-27 21:50:03 +09:00
Vadim Khotilovich
24e3c5773e Merge branch 'master' into seed_in_configure 2016-04-26 22:47:01 -05:00
Vadim Khotilovich
811c6ef58b obey the lint 2016-04-26 22:11:19 -05:00
Tianqi Chen
4149854633 Merge pull request #1068 from Laurae2/master
Updated obsolete installation instructions
2016-04-26 19:50:06 -07:00
Tianqi Chen
43c073d8c5 Merge pull request #1142 from sinhrks/flake8
Enable flake8 for Python
2016-04-26 19:47:42 -07:00
Tianqi Chen
5d7a69663b Merge pull request #1145 from khotilov/error_at_threshold
ability to specify threshold for the error metric
2016-04-26 19:46:37 -07:00
Vadim Khotilovich
3e0732dea9 in Configure, set random seed only for uninitialized model 2016-04-26 02:03:22 -05:00
Vadim Khotilovich
0527b17c9d avoid collecting duplicate parameters in Booster::cfg_ 2016-04-25 22:08:53 -05:00
Vadim Khotilovich
1160d0bf25 ability to specify threshold for the error metric 2016-04-25 01:29:04 -05:00
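
The commit above lets the binary classification error metric be evaluated at a cut-off other than 0.5 by appending the threshold to the metric name. A minimal sketch on synthetic data:

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)
    dtrain = xgb.DMatrix(rng.randn(100, 4), label=rng.randint(0, 2, 100))
    params = {
        'objective': 'binary:logistic',
        'eval_metric': 'error@0.7',  # count predictions above 0.7 as positive
    }
    bst = xgb.train(params, dtrain, num_boost_round=10,
                    evals=[(dtrain, 'train')])
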
sinhrks
8fc2456c87 Enable flake8 2016-04-24 17:32:31 +09:00
Tianqi Chen
b3c9e6a0db Merge pull request #1139 from hxd1011/patch-2
Update parameter.md
2016-04-22 12:12:01 -07:00
hxd1011
04ace6311b Update parameter.md
add some detail on how the parameters affect model complexity, and a comment on the base score.
2016-04-22 14:42:11 -04:00
Nan Zhu
060350f64c Merge pull request #1103 from dmlc/revert-1100-master
Revert "updating JVM docs"
2016-04-11 22:36:37 -04:00
Nan Zhu
e6de01baaf Revert "updating JVM docs" 2016-04-11 22:00:45 -04:00
Tianqi Chen
f2557ce530 Merge pull request #1102 from tqchen/master
allow common python output in single node
2016-04-11 16:04:52 -07:00
Tianqi Chen
db4c5bc627 Merge pull request #1100 from avloss/master
updating JVM docs
2016-04-11 15:49:33 -07:00
tqchen
49f3892942 allow common python output in single node 2016-04-11 15:48:16 -07:00
avl055
f75d78f686 updating JVM docs
adding "-DskipTests" to the JVM docs; without this flag, building takes forever
2016-04-10 23:52:09 +01:00
Yuan (Terry) Tang
59610c49df Merge pull request #1098 from zyxue/patch-1
improved docstring for folds in cv function
2016-04-09 14:09:10 -04:00
zyxue
79b35da308 improved docstring for folds in cv function 2016-04-09 10:21:56 -07:00
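
For reference on the folds argument documented above: xgb.cv can take pre-built splits instead of slicing the data itself. A minimal sketch, assuming folds is given as a list of (train indices, test indices) pairs, the form the docstring describes:

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)
    dtrain = xgb.DMatrix(rng.randn(100, 4), label=rng.randint(0, 2, 100))
    idx = np.arange(100)
    # two hand-built (train_indices, test_indices) pairs
    folds = [(idx[:50], idx[50:]), (idx[50:], idx[:50])]
    res = xgb.cv({'objective': 'binary:logistic'}, dtrain,
                 num_boost_round=10, folds=folds)
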
Yuan (Terry) Tang
c791894668 Merge pull request #1097 from Far0n/patch-1
winning solution
2016-04-09 07:44:43 -04:00
Far0n
d7e5095e7c Update README.md
Winning solution added: "Homesite Quote Conversion" (@Kaggle)
2016-04-09 09:12:49 +02:00
Tong He
4af55518f2 Merge pull request #1050 from khotilov/S4toS3
[R-package] Convert to S3; some new DMatrix methods
2016-04-03 11:52:08 -07:00
Vadim Khotilovich
25965227b3 Merge branch 'master' into S4toS3 2016-04-02 14:30:44 -05:00
Tianqi Chen
714901eac5 Merge pull request #1071 from khotilov/make_fix
fix Makefile to use MAKE variable
2016-03-30 23:22:21 -07:00
Vadim Khotilovich
de63993543 Merge branch 'master' into make_fix 2016-03-31 01:19:52 -05:00
Vadim Khotilovich
9f177d7353 fix Makefile to use MAKE variable 2016-03-31 00:47:51 -05:00
Tianqi Chen
babf1d7840 Merge pull request #1048 from WojciechMigda/xgboostercreate-api-fix
XGBoosterCreate api unified to use const DMatrixHandle[] argument
2016-03-30 20:56:10 -07:00
Laurae2
77136baf2c Updated obsolete installation instructions
Fixed local compilation, and installation for R package and Python
package. Modified the corresponding documents.
2016-03-30 17:43:54 +02:00
WojciechMigda
30a306b974 Merge branch 'master' into xgboostercreate-api-fix 2016-03-30 11:25:21 +02:00
Nan Zhu
6eda06256e Merge pull request #1057 from CodingCat/master
update doc
2016-03-28 20:09:53 -04:00
Nan Zhu
e27977d416 Merge branch 'master' into master 2016-03-28 19:03:04 -04:00
CodingCat
daeee84e4d update doc 2016-03-28 19:02:37 -04:00
Vadim Khotilovich
33131e2e13 make travis happy 2016-03-27 20:28:40 -05:00
Vadim Khotilovich
fb5291271e fix print.xgb.DMatrix doc 2016-03-27 19:42:26 -05:00
Vadim Khotilovich
4b760762f9 added unit tests for xgb.DMatrix 2016-03-27 19:23:08 -05:00
Vadim Khotilovich
71f402ac16 convert S4 to S3; add some extra methods to DMatrix 2016-03-27 19:22:22 -05:00
Vadim Khotilovich
d27bfb61b0 consolidated DMatrix&Booster stuff into xgb.DMatrix.R & xgb.Booster.R 2016-03-27 19:17:13 -05:00
Vadim Khotilovich
1d504d6c6c added XGDMatrixNumCol_R function 2016-03-27 19:11:22 -05:00
Wojciech Migda
6a5eb47789 XGBoosterCreate api unified to use const DMatrix[] argument 2016-03-26 19:42:58 +01:00
Nan Zhu
605c23e0dc Merge pull request #1037 from CodingCat/allow_empty_partitions
[jvm-packages] allow empty partitions
2016-03-23 15:04:51 -04:00
Nan Zhu
dfafce4cfd Merge branch 'master' into allow_empty_partitions 2016-03-23 12:30:33 -04:00
CodingCat
d8535313eb allow empty partitions 2016-03-23 12:30:06 -04:00
Tianqi Chen
03dfffca15 Merge pull request #1028 from kilojoules/patch-6
More verbose error message: which fields have improper data types
2016-03-22 21:02:17 -07:00
Julian Quick
bbb9ce1641 Verbose message: which fields have improper data types
A more verbose error message letting the user know which fields have improper data types
2016-03-22 14:13:29 -06:00
Tianqi Chen
1625dab1cb Merge pull request #1025 from andyandy1992/master
Fixed typos.
2016-03-22 08:49:32 -07:00
Andrew Smith
5efc1ee3a4 Fixed typos. 2016-03-22 12:54:18 +00:00
Nan Zhu
c135703655 Merge pull request #1012 from CodingCat/master
update installation doc
2016-03-19 08:58:41 -04:00
Nan Zhu
8842176888 Merge branch 'master' into master 2016-03-19 08:16:18 -04:00
CodingCat
f1114688a7 update installation doc 2016-03-19 08:15:56 -04:00
Tianqi Chen
7c555d5bc6 Update README.md 2016-03-18 15:28:38 -07:00
Nan Zhu
11415f228f Merge pull request #1010 from CodingCat/master
[jvm-packages] adjust numWorkers for test
2016-03-18 11:30:31 -04:00
Nan Zhu
14a031a1ab Merge branch 'master' into master 2016-03-18 10:36:20 -04:00
CodingCat
55ab1c6a22 adjust numWorkers for test 2016-03-18 10:34:36 -04:00
Nan Zhu
a2146708bd Merge pull request #1008 from CodingCat/master
typo fix
2016-03-18 07:12:14 -04:00
Nan Zhu
0b998d5249 Merge branch 'master' into master 2016-03-18 06:56:06 -04:00
Tianqi Chen
0e0f0e75f6 Merge pull request #1009 from Keiku/work
Add a new winning solution to demo/README.md
2016-03-17 20:33:04 -07:00
Keiku
7016bee98f Add a new winning solution to demo/README.md 2016-03-18 12:06:36 +09:00
Nan Zhu
ffe7af572c Merge branch 'master' into master 2016-03-17 23:00:50 -04:00
Tianqi Chen
375a8a97a6 Merge pull request #1006 from kilojoules/patch-4
a more verbose field mismatch error message
2016-03-17 19:56:33 -07:00
CodingCat
cc0722a4aa typo fix 2016-03-17 22:17:08 -04:00
Julian Quick
2cd109fb98 a more verbose field mismatch error message
This error message can be hard to understand when there are several fields, as shown in the example below. This improves the error message, letting the user know which fields were unexpected or missing.

    import xgboost as xgb
    import pandas as pd
    train = pd.DataFrame({'a':[1], 'b':[2], 'c':[3], 'd':[4], 'f':[2], 'g':2, 'etc etc etc':[11]})
    dtrain = xgb.DMatrix(train.drop('d', axis=1), train.d)
    test = pd.DataFrame({'a':[1], 'b':[2], 'c':[1], 'd':[4], 'e':[2], 'f':[2], 'g':2, 'etc etc etc':[11]})
    dtest = xgb.DMatrix(test)
    modl = xgb.train({}, dtrain)
    modl.predict(dtest)
    
    
    # ValueError: feature_names mismatch: [u'a', u'b', u'c', u'etc etc etc', u'f', u'g'] [u'a', u'b', u'c', u'd', u'e', u'etc etc etc', u'f', u'g']
2016-03-17 18:13:30 -06:00
Tianqi Chen
c449dc6874 Merge pull request #1003 from DrAndrey/master
change type of xgbclassifier.classes_ from list to numpy array
2016-03-17 09:51:05 -07:00
DAndrey
311f7c8f47 change type of xgbclassifier.classes_ from list to numpy array 2016-03-17 16:54:33 +03:00
Tianqi Chen
612ccd0bb7 Merge pull request #1000 from CodingCat/installation_doc
[jvm-packages] run native lib building command from maven
2016-03-16 14:06:19 -07:00
CodingCat
6f273a8c21 update docs 2016-03-16 17:00:44 -04:00
CodingCat
a31a978471 run native lib building command from maven 2016-03-16 16:47:08 -04:00
Tianqi Chen
6321bc20ea Merge pull request #996 from CNevd/patch-1
Update README.md
2016-03-15 19:34:36 -07:00
CNevd
73d5965961 Update README.md 2016-03-16 10:28:22 +08:00
Tianqi Chen
c2fa67757a Merge pull request #995 from ohld/year-update
update year in LICENSE, conf.py and README.md files
2016-03-15 09:34:10 -07:00
Okhlopkov Daniil Olegovich
5829eb3cf2 update year in LICENSE, conf.py and README.md files
The year in these files was not up to date
2016-03-15 16:51:34 +03:00
Tianqi Chen
9a5489ee15 Merge pull request #992 from tqchen/master
[FLINK] remove nWorker from API
2016-03-14 16:29:58 -07:00
tqchen
90f7220736 [FLINK] remove nWorker from API 2016-03-14 16:18:35 -07:00
Tianqi Chen
084ed6224d Update README.md 2016-03-14 15:21:39 -07:00
Tianqi Chen
29ad3ab2c3 Merge pull request #991 from CodingCat/master
xgboost4j intro
2016-03-14 14:07:11 -07:00
CodingCat
d1c5280f4b xgboost4j intro 2016-03-14 16:44:03 -04:00
Nan Zhu
00caf2c956 Merge pull request #988 from CodingCat/master
[jvm-packages] getter of XGBoostModel
2016-03-14 07:54:03 -04:00
CodingCat
c3e56017cc Merge branch 'master' of https://github.com/dmlc/xgboost 2016-03-14 07:27:53 -04:00
CodingCat
3a951d0ab8 getter of XGBoostModel 2016-03-14 07:26:51 -04:00
Nan Zhu
362eb4bd02 Merge pull request #983 from CodingCat/master
[jvm-packages] upgrade spark version to 1.6.1
2016-03-13 23:04:16 -04:00
Nan Zhu
e3fa7753f5 Merge branch 'master' into master 2016-03-13 22:46:38 -04:00
CodingCat
6f92f1c117 update spark version to 1.6.1 2016-03-13 22:46:06 -04:00
Tianqi Chen
e1b2ad2e5e Merge pull request #980 from tqchen/master
[METHOD] add tree method option to prefer faster algo
2016-03-13 12:25:03 -07:00
tqchen
a2714fe052 [METHOD] add tree method option to prefer faster algo 2016-03-13 12:24:47 -07:00
Tianqi Chen
4454c7b72a Update README.md 2016-03-13 12:13:41 -07:00
Tianqi Chen
5fb09dc0ab Merge pull request #979 from CodingCat/kryo
[jvm-packages] support kryo serialization
2016-03-13 11:25:01 -07:00
Tianqi Chen
05b692590d Merge pull request #978 from CodingCat/worker_num
[jvm-packages] jvm doc index
2016-03-13 11:24:33 -07:00
Nan Zhu
a382c1698a Merge branch 'master' into worker_num 2016-03-13 11:57:28 -04:00
Nan Zhu
cc1e608a3d Merge branch 'master' into kryo 2016-03-13 11:57:17 -04:00
CodingCat
f2ef958ebb support kryo serialization 2016-03-13 11:55:14 -04:00
CodingCat
9011acf52b jvm doc index 2016-03-13 09:20:51 -04:00
Nan Zhu
3ce33563f2 Merge pull request #975 from CodingCat/worker_num
force the user to set number of workers
2016-03-12 13:47:27 -05:00
CodingCat
16b9e92328 force the user to set number of workers 2016-03-12 13:33:57 -05:00
Nan Zhu
980898f3fb Merge pull request #971 from CodingCat/set_nthread
set nthread to spark.task.cpus by default
2016-03-11 20:19:56 -05:00
CodingCat
5f441a29a8 set nthread to spark.task.cpus by default 2016-03-11 20:07:09 -05:00
Tianqi Chen
cbabaeba0c Merge pull request #969 from tqchen/master
JVM API Update
2016-03-11 12:36:27 -08:00
Tianqi Chen
57987100bc Merge pull request #3 from CodingCat/fix_examples
adjust the API signature as well as the docs
2016-03-11 12:29:33 -08:00
CodingCat
a3b2e76230 update README for jvm-packages 2016-03-11 15:28:55 -05:00
CodingCat
400b1faecc adjust the API signature as well as the docs 2016-03-11 15:22:44 -05:00
CodingCat
97e4dcde98 Merge branch 'master' of https://github.com/tqchen/xgboost into fix_examples 2016-03-11 15:13:54 -05:00
tqchen
2a6ac6fd34 Update JVM Doc 2016-03-11 11:53:24 -08:00
tqchen
79f2d0cf70 Update JVM Doc 2016-03-11 11:50:21 -08:00
Tianqi Chen
2dac506773 Merge pull request #968 from CodingCat/master
XGBoost4J intro
2016-03-11 10:59:42 -08:00
Nan Zhu
61db5aa575 Merge branch 'master' into master 2016-03-11 13:58:24 -05:00
CodingCat
6b9442a7f8 XGBoost4J intro 2016-03-11 13:58:00 -05:00
CodingCat
ab68a0ccc7 fix examples 2016-03-11 13:57:03 -05:00
Nan Zhu
acdd23e789 Merge pull request #967 from CodingCat/master
[jvm-packages] change the API name
2016-03-11 10:59:16 -05:00
CodingCat
aca0096b33 more updates for Flink
more fix
2016-03-11 10:15:49 -05:00
CodingCat
43d7a85bc9 change the API name since we support not only HDFS and local file system 2016-03-11 10:05:32 -05:00
Nan Zhu
8e3ce908fe Merge pull request #965 from shaform/fs
Support cases where the user loads the dataset and saves the model to two different file systems
2016-03-11 09:25:02 -05:00
Shaform
6558ef3273 support different types of filesystems 2016-03-11 22:06:40 +08:00
Nan Zhu
00e7e4eef0 Merge pull request #964 from CodingCat/master
fix create_jni.sh
2016-03-11 09:00:20 -05:00
CodingCat
51b0e7010c fix create_jni.sh 2016-03-11 08:46:44 -05:00
Tianqi Chen
39359edbd8 Merge pull request #962 from tqchen/master
Fix continue training in CLI
2016-03-10 22:11:45 -08:00
Tianqi Chen
04f7fe9c36 Merge pull request #961 from tqchen/master
Fix multi-class loading
2016-03-10 19:39:28 -08:00
tqchen
59d59a968d Fix continue training in CLI 2016-03-10 19:39:09 -08:00
tqchen
ec2fb5bc48 Fix multi-class loading 2016-03-10 19:22:26 -08:00
Tianqi Chen
d02bd41623 Merge pull request #959 from tqchen/master
Fix continue training in CLI
2016-03-10 18:12:54 -08:00
tqchen
96b17971ac Fix continue training in CLI 2016-03-10 12:43:25 -08:00
Tianqi Chen
845f80ec22 Merge pull request #958 from tqchen/master
fix link
2016-03-10 12:28:46 -08:00
tqchen
d35cab6911 fix link 2016-03-10 12:28:29 -08:00
Tianqi Chen
5b3ece2ca9 Merge pull request #957 from tqchen/master
Add Reference
2016-03-10 12:27:41 -08:00
tqchen
f3e2878784 Add Reference 2016-03-10 12:26:01 -08:00
Tianqi Chen
7b2e128c91 Merge pull request #951 from tcfuji/master
Add tpot to Tools using XGBoost
2016-03-10 07:53:29 -08:00
Tianqi Chen
d913cbbfbc Merge pull request #954 from CodingCat/worker_num
[jvm-packages] allow the user to specify the worker number and avoid unnecessary shuffle
2016-03-10 07:53:20 -08:00
CodingCat
d47df5c1d8 allow the user to specify the worker number and avoid unnecessary shuffle 2016-03-10 06:58:30 -05:00
CodingCat
e0a3f1c000 nthread no larger than spark.task.cpus 2016-03-10 05:51:07 -05:00
Ted Fujimoto
13486cf672 Add tpot to Tools using XGBoost 2016-03-09 23:27:34 -08:00
Tianqi Chen
bbe2b2f0b6 Merge pull request #949 from CodingCat/scala_examples
fix typo in README
2016-03-09 16:00:27 -08:00
Nan Zhu
e0555d2ddc Merge branch 'master' into scala_examples 2016-03-09 17:23:22 -05:00
CodingCat
4e86c8c866 fix typo in README 2016-03-09 17:22:19 -05:00
Tianqi Chen
db7a4e2ada Merge pull request #944 from CodingCat/scala_examples
Scala examples
2016-03-09 10:08:07 -08:00
CodingCat
7e30ada8c1 update README 2016-03-09 13:05:08 -05:00
Nan Zhu
b398c145a9 Merge branch 'master' into scala_examples 2016-03-09 12:57:41 -05:00
Tianqi Chen
a23e091be1 Merge pull request #946 from saiias/modify_makefile
modify Makefile
2016-03-09 09:52:22 -08:00
CodingCat
005b1276d0 remove duplicate in stream close 2016-03-09 12:33:49 -05:00
CodingCat
852c5a4b32 code formatting in XGBoostModel 2016-03-09 12:31:35 -05:00
CodingCat
c9830cd8b1 remove spark/flink examples 2016-03-09 12:31:35 -05:00
CodingCat
8cfa752fa0 add scala examples 2016-03-09 12:31:35 -05:00
Tianqi Chen
f64516c8d0 Merge pull request #942 from CodingCat/revise_java
refactor jvm API
2016-03-09 09:30:12 -08:00
saiias
357c89fa93 modify Makefile 2016-03-09 23:11:48 +09:00
CodingCat
a08cc8aad4 allow the user to define how many workers they need 2016-03-08 18:46:53 -05:00
CodingCat
909c6af330 add test resources manually 2016-03-08 18:43:30 -05:00
CodingCat
fa03aaeb63 revise current API 2016-03-08 17:18:55 -05:00
Tianqi Chen
9911771b02 Merge pull request #939 from tqchen/master
update dmlc-core
2016-03-08 08:02:02 -08:00
tqchen
7aafd8f777 update dmlc-core 2016-03-08 08:01:47 -08:00
Tianqi Chen
8a12a78427 Merge pull request #936 from tqchen/master
Fix broken tracker
2016-03-07 21:00:39 -08:00
tqchen
7cbb9da0e6 Fix broken tracker 2016-03-07 21:00:20 -08:00
Tianqi Chen
6f5632dd6e Merge pull request #934 from tqchen/master
[Spark] Refactor train, predict, add save
2016-03-06 21:57:38 -08:00
tqchen
435a0425b9 [Spark] Refactor train, predict, add save 2016-03-06 21:51:08 -08:00
Tianqi Chen
3402953633 Update README.md 2016-03-06 21:15:43 -08:00
Tianqi Chen
0fd433ff0f Update README.md 2016-03-06 21:15:32 -08:00
Tianqi Chen
d4161bdeec Merge pull request #932 from tqchen/master
Add doc badge
2016-03-06 21:09:32 -08:00
tqchen
dda226ab85 Add doc badge 2016-03-06 21:09:13 -08:00
Tianqi Chen
3d8f2fb1b9 Merge pull request #931 from tqchen/master
Add JVM Package
2016-03-06 21:05:35 -08:00
tqchen
8cc8f227c3 Add JVM Package 2016-03-06 21:05:17 -08:00
Tianqi Chen
8c6cbe7608 Merge pull request #930 from tqchen/master
Fix rabit
2016-03-06 20:54:49 -08:00
tqchen
bec7332eea Fix rabit 2016-03-06 20:54:27 -08:00
Tianqi Chen
54f13ab9e7 Merge pull request #929 from tqchen/master
[DOC-JVM] Refactor JVM docs
2016-03-06 20:52:40 -08:00
tqchen
c05c5bc7bc [DOC-JVM] Refactor JVM docs 2016-03-06 20:42:01 -08:00
Tianqi Chen
79f9fceb6b Merge pull request #927 from CodingCat/spark_example
Spark example
2016-03-06 15:46:54 -08:00
CodingCat
c211a80633 log tracker exit value in logger
capture InterruptedException
2016-03-06 17:37:18 -05:00
CodingCat
718a9d8c96 use another thread to control spark job 2016-03-06 15:46:27 -05:00
CodingCat
6499422e90 fix the merge 2016-03-06 15:22:05 -05:00
CodingCat
16008ebfb8 merge with master 2016-03-06 15:16:55 -05:00
CodingCat
50337d1906 fix rabitEnv 2016-03-06 14:56:49 -05:00
Tianqi Chen
cf2a7851eb Merge pull request #926 from tqchen/master
[JVM] Refactor, add filesys API
2016-03-06 11:49:01 -08:00
CodingCat
808e30f9fc example of DistTrainWithSpark and trigger job with foreachPartition 2016-03-06 14:34:23 -05:00
tqchen
56f7a414d1 [JVM] Refactor, add filesys API 2016-03-06 11:33:48 -08:00
CodingCat
f768edfede adjust the return values of RabitTracker.waitFor(), remove typesafe.Config 2016-03-06 08:44:04 -05:00
Tianqi Chen
457ff82e33 Merge pull request #919 from Far0n/master
Complete Guide to Parameter Tuning in XGBoost
2016-03-05 20:18:48 -08:00
Tianqi Chen
300c16d0f6 Merge pull request #923 from tqchen/master
[FLINK] Make runnable flink
2016-03-05 18:04:54 -08:00
tqchen
99dc311f6d [FLINK] Make runnable flink 2016-03-05 17:59:22 -08:00
Tianqi Chen
3ddddfce79 Merge pull request #922 from CodingCat/label
spark with new labeledpoint
2016-03-05 17:04:32 -08:00
CodingCat
130ca7b00c test case for XGBoostSpark 2016-03-05 19:41:26 -05:00
CodingCat
f0647ec76d test resources 2016-03-05 18:18:07 -05:00
CodingCat
5c1af13f84 distributed in RDD 2016-03-05 17:50:40 -05:00
CodingCat
fb41e4e673 spark with new labeledpoint
fix import order
2016-03-05 17:22:34 -05:00
Tianqi Chen
74bda4bfc5 Merge pull request #921 from tqchen/master
[JVM] Add LabeledPoint read support
2016-03-05 14:18:20 -08:00
tqchen
514df14baf [JVM] Add LabeledPoint read support
fix
2016-03-05 13:36:33 -08:00
Far0n
3a34c53f57 Complete Guide to Parameter Tuning in XGBoost 2016-03-05 21:51:47 +01:00
Tianqi Chen
ae032b12b4 Merge pull request #918 from tqchen/master
Add Labeled Point, minor fix build
2016-03-05 12:21:22 -08:00
tqchen
ac8e950227 Add Labeled Point, minor fix build 2016-03-05 12:12:43 -08:00
Tianqi Chen
51d8595372 Merge pull request #911 from CodingCat/spark_xgboost
[WIP]xgboost4j-spark
2016-03-05 11:49:24 -08:00
CodingCat
bb43177eb1 merge 2016-03-05 14:40:30 -05:00
tqchen
e8560c7909 [refactor] move java package to namespace java 2016-03-05 14:04:13 -05:00
tqchen
ae969a0e69 [refactor] move java package to namespace java 2016-03-05 14:00:04 -05:00
tqchen
81dbf564a4 [Flink] Check 2016-03-05 13:57:26 -05:00
CodingCat
2cec10c46f try to get more memory from Travis 2016-03-05 08:44:56 -05:00
CodingCat
b2d705ffb0 framework of xgboost-spark
iterator

return java iterator and recover test
2016-03-05 08:44:55 -05:00
CodingCat
1540773340 sketch of xgboost-spark
chooseBestBooster shall be in Boosters

remove tracker.py

rename XGBoost

remove cross-validation
2016-03-05 08:44:55 -05:00
Tianqi Chen
4568692daf Merge pull request #913 from tqchen/master
update libsvm file to start with 1 index
2016-03-05 00:02:47 -08:00
tqchen
a894ab6898 update libsvm file to start with 1 index 2016-03-05 00:01:42 -08:00
Tianqi Chen
e23a24be8c Merge pull request #912 from tqchen/master
[JVM] Add Iterator loading API
2016-03-04 18:14:30 -08:00
tqchen
86871d4be9 [JVM] Add Iterator loading API 2016-03-04 17:37:46 -08:00
Tianqi Chen
770b3451ca Merge pull request #907 from tqchen/master
[DIST] Enable multiple threads, make rabit and xgboost thread-safe
2016-03-04 08:24:00 -08:00
Tianqi Chen
04bdbca63f Merge pull request #2 from CodingCat/tianqi
revise the RabitTracker Impl & delete FileUtil class
2016-03-04 08:02:58 -08:00
CodingCat
416e1434e7 change initTracker() to static 2016-03-04 10:55:02 -05:00
CodingCat
10a1517502 revise the RabitTracker Impl
delete FileUtil class

fix bugs
2016-03-04 10:08:37 -05:00
tqchen
0df2ed80c8 [JVM] Make JVM Serializable 2016-03-03 21:04:02 -08:00
tqchen
e80d3db64b [DIST] Enable multiple threads and the tracker; make rabit and xgboost more thread-safe by using thread-local variables. 2016-03-03 20:36:14 -08:00
Tianqi Chen
12dc92f7e0 Merge pull request #906 from CodingCat/style
apply google-java-style indentation and impose import orders
2016-03-03 14:39:05 -08:00
CodingCat
e3dc67c6a0 apply google-java-style indentation and impose import orders.... 2016-03-03 12:59:18 -05:00
Tianqi Chen
0f367a6ade Merge pull request #904 from tqchen/master
[JVM-PKG] Update JNI to include Rabit interface
2016-03-02 22:44:46 -08:00
tqchen
c428a93adc [JVM-PKG] add distributed test simple case 2016-03-02 22:27:55 -08:00
tqchen
5c9e50148a [JVM-PKG] Update JNI to include rabit codes 2016-03-02 22:12:17 -08:00
tqchen
ced6d45e01 Update rabit 2016-03-02 20:53:34 -08:00
Tianqi Chen
0515e4ec28 Merge pull request #903 from CodingCat/jvm_package
add test cases for Scala API
2016-03-02 16:28:36 -08:00
CodingCat
8c220f51fc add default values for Scala API 2016-03-02 17:21:42 -05:00
CodingCat
cbf5eba9c0 add maven-assembly plugins 2016-03-02 17:11:15 -05:00
Nan Zhu
8e0c3b08c7 Merge branch 'master' into jvm_package 2016-03-02 15:26:39 -05:00
CodingCat
5e309f1ce8 add test cases for Scala API 2016-03-02 15:24:13 -05:00
Tianqi Chen
7d9457d72f Merge pull request #890 from CodingCat/jvm_package
[WIP] refactor xgboost4j to create jvm-packages
2016-03-01 21:01:01 -08:00
CodingCat
f8fff6c6fc rename files/packages 2016-03-01 23:48:35 -05:00
CodingCat
55e36893cd add style check for java and scala code 2016-03-01 20:53:50 -05:00
CodingCat
3b246c2420 re-structure Java API, add Scala API and consolidate the names of Java/Scala API 2016-03-01 20:53:41 -05:00
Tianqi Chen
fc4c88fceb Merge pull request #897 from tqchen/master
[PYTHON-DIST] Distributed xgboost python training API.
2016-02-29 17:11:00 -08:00
Tianqi Chen
ec9df13c70 Merge pull request #898 from maximsch2/patch-1
Describe colsample_bylevel
2016-02-29 17:01:59 -08:00
Maxim Grechkin
ba805d0fca Describe colsample_bylevel 2016-02-29 16:59:58 -08:00
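
For context on the parameter described above: colsample_bylevel subsamples columns once per tree depth level, on top of colsample_bytree's per-tree sampling. A minimal sketch with illustrative values:

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)
    dtrain = xgb.DMatrix(rng.randn(100, 8), label=rng.randint(0, 2, 100))
    params = {
        'objective': 'binary:logistic',
        'colsample_bytree': 0.8,   # sample 80% of columns per tree
        'colsample_bylevel': 0.5,  # then 50% of those at each depth level
    }
    bst = xgb.train(params, dtrain, num_boost_round=10)
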
tqchen
ecb3a271be [PYTHON-DIST] Distributed xgboost python training API. 2016-02-29 16:54:13 -08:00
Yuan (Terry) Tang
51bb556898 Merge pull request #895 from terrytangyuan/sklearn
Fixed #858: Separate dependencies and lightweight test env for Python
2016-02-29 11:26:11 -06:00
terrytangyuan
ae3962f757 Exclude osx for lightweight python test 2016-02-29 11:01:28 -06:00
Yuan (Terry) Tang
fdd520d774 Merge branch 'master' into sklearn 2016-02-29 10:50:55 -06:00
Tianqi Chen
728b65cec0 Merge pull request #880 from pauloalves86/master
Improve compatibility with sklearn
2016-02-29 08:37:53 -08:00
Paulo Alves
3d56caaab5 dmlc-core updated 2016-02-29 09:32:50 -03:00
Paulo Alves
b7985466a4 Merge remote-tracking branch 'upstream/master' 2016-02-29 08:53:48 -03:00
terrytangyuan
803a6fe474 Separate dependencies and lightweight test env for Python 2016-02-28 20:11:10 -06:00
Tianqi Chen
5f70b4df7a Merge pull request #891 from tqchen/master
[PYTHON] Simplify training logic, update rabit lib
2016-02-28 14:10:21 -08:00
tqchen
4a16b729fc [PYTHON] Simplify training logic, update rabit lib 2016-02-28 13:20:55 -08:00
Tianqi Chen
a868d803d0 Merge pull request #886 from zhengruifeng/mlog
add url for mlogloss
2016-02-28 09:27:26 -08:00
Ruifeng Zheng
07ed143cd6 Merge branch 'master' into mlog 2016-02-28 10:43:41 +08:00
Zheng RuiFeng
c7dc0cb50e update dmlc-core 2016-02-28 10:34:45 +08:00
Tianqi Chen
19f5f027a6 Merge pull request #889 from tqchen/master
[TEST] Fix travis test when reading hdfs
2016-02-27 18:15:52 -08:00
tqchen
90bc7f8f6b [TEST] Fix travis test when reading hdfs 2016-02-27 18:15:32 -08:00
Tianqi Chen
38ba66a5bd Merge pull request #887 from vatsan/patch-1
Plugging in gp_xgboost_gridsearch
2016-02-26 21:08:24 -08:00
Srivatsan Ramanujam
44d5ac7d37 Plugging in gp_xgboost_gridsearch 2016-02-26 19:15:32 -08:00
Tianqi Chen
758a77de9c Fix testcase after update and allow hdfs load 2016-02-26 17:04:51 -08:00
Tianqi Chen
be810e4f16 Merge pull request #885 from ChrisBarker-NOAA/patch-1
fix PyPi Description issue
2016-02-26 16:54:55 -08:00
Chris Barker
ed5781fa55 fix PyPi Description issue
the description field was set to what should be the long_description field -- making a bit of a mess on PyPi
2016-02-26 16:54:13 -08:00
Zheng RuiFeng
edf68b5933 add url for mlogloss 2016-02-26 20:46:40 +08:00
Paulo Alves
592004b38f XGBClassifier.feature_importances_ compatible with sklearn RFECV 2016-02-26 08:56:07 -03:00
Paulo Alves
81257dcfb4 Update upstream 2016-02-26 08:52:57 -03:00
Tianqi Chen
84e9ca000e Update README.md 2016-02-25 22:01:46 -08:00
Tianqi Chen
c15b7aa9cc Update index.md 2016-02-25 22:01:23 -08:00
Tianqi Chen
c50df8f3b5 Merge pull request #878 from tqchen/master
temp compatibility with sklearn
2016-02-25 21:57:45 -08:00
tqchen
ebc802756f temp compatibility with sklearn 2016-02-25 21:57:00 -08:00
Tianqi Chen
7b9cee3cbd Merge pull request #877 from tqchen/master
[DIST] Add Distributed XGBoost on AWS Tutorial
2016-02-25 21:52:13 -08:00
tqchen
a71ba04109 [DIST] Add Distributed XGBoost on AWS Tutorial 2016-02-25 21:51:37 -08:00
Tianqi Chen
61d9edcaa4 Merge pull request #867 from wyj2046/master
because this code tests pickling the booster, change bst2 -> bst3
2016-02-25 20:25:47 -08:00
Tianqi Chen
1e435ee3ec Update README.md 2016-02-25 17:18:07 -08:00
Tianqi Chen
eae0aa256e Update README.md 2016-02-25 17:17:28 -08:00
Tianqi Chen
1176f9ac1b Merge pull request #876 from tqchen/master
[DOC] reorg docs
2016-02-25 14:08:48 -08:00
tqchen
6b02317ea8 [DOC] reorg docs 2016-02-25 14:08:30 -08:00
Tianqi Chen
76c320e9f0 Merge pull request #875 from tqchen/master
Fix model save problem in YARN
2016-02-25 13:15:52 -08:00
tqchen
02e98e5d45 [CLI] Fix model save problem 2016-02-25 13:15:23 -08:00
tqchen
d66c17881e Update readme 2016-02-25 13:11:51 -08:00
Tianqi Chen
17b5ca7351 Merge pull request #872 from tqchen/master
[TRAVIS] Fix script
2016-02-25 12:39:49 -08:00
tqchen
b69219df05 [doc] update news 2016-02-25 12:38:47 -08:00
tqchen
80239aaf00 [TRAVIS] Fix script 2016-02-25 12:17:40 -08:00
Tianqi Chen
bb0d163d22 Merge pull request #860 from zhengruifeng/mae
Add "mean absolute error" to metrics
2016-02-25 12:17:03 -08:00
Tianqi Chen
4c40fdb73a Merge pull request #864 from phunterlau/master
Awesome-XGBoost page
2016-02-25 12:15:15 -08:00
phunterlau
80595d6cc5 move TOC under title 2016-02-25 11:32:11 -08:00
Yuan (Terry) Tang
319091b3f4 Merge pull request #868 from catena/master
minor fix: in sklearn.py return attribute best_ntree_limit if early stopped
2016-02-25 07:26:43 -06:00
catena
790dc877c3 return best_ntree_limit if early stopped 2016-02-25 13:42:19 +05:30
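
With the change above, the sklearn wrapper exposes best_ntree_limit after early stopping, so prediction can be capped at the best round. A minimal sketch on synthetic data:

    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.RandomState(0)
    X, y = rng.randn(200, 5), rng.randint(0, 2, 200)
    X_val, y_val = rng.randn(50, 5), rng.randint(0, 2, 50)

    model = XGBClassifier(n_estimators=500)
    model.fit(X, y, eval_set=[(X_val, y_val)],
              eval_metric='logloss', early_stopping_rounds=10)
    # use only the trees found up to the best round
    preds = model.predict(X_val, ntree_limit=model.best_ntree_limit)
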
王煜杰
d52d0ee9ed because this code tests pickling the booster, change bst2 -> bst3 2016-02-25 14:49:33 +08:00
phunterlau
ef7d26eb07 add TOC, simplied text in the solution section 2016-02-24 21:56:51 -08:00
phunterlau
d9614dfbe8 Awesome-XGBoost, first commit 2016-02-24 17:36:20 -08:00
Tianqi Chen
cdbafafc04 Merge pull request #848 from Kontinuation/master
Minor fix on installation guide and (the probably deprecated) build script
2016-02-24 16:33:32 -08:00
Kontinuation
54a9f30e92 Minor fix on installation guide and (the probably deprecated) build script 2016-02-24 12:46:37 +08:00
Ruifeng Zheng
10af94c77e Merge branch 'master' into mae 2016-02-24 11:28:55 +08:00
Zheng RuiFeng
2c7c27e297 create mae 2016-02-24 11:15:31 +08:00
Tianqi Chen
b3a81a216d Merge pull request #859 from thirdwing/master
[Doc] documents update. close #821
2016-02-23 18:39:36 -08:00
Qiang Kou
1cc0a44264 [Doc] documents update:
(1) install_github is not supported due to the usage of submodules

(2) remove part of the markdown which is not displayed correctly, see
https://xgboost.readthedocs.org/en/latest/R-package/discoverYourData.html
2016-02-23 14:49:12 -05:00
Tianqi Chen
d063eaccb1 Delete training.py 2016-02-23 08:48:50 -08:00
Tianqi Chen
49f6b384e3 Merge pull request #849 from ivallesp/master
Request for solving the problem in the tests of my contribution
2016-02-20 19:29:04 -08:00
ivallesp
c17d0ef560 changed the param show_progress by verbose_eval in cv and aggcv functions 2016-02-21 01:28:55 +01:00
Tianqi Chen
532615a32a Merge pull request #827 from ivallesp/master
Muting the remaining messages when show_progress=False
2016-02-19 08:16:25 -08:00
ivallesp
ed5c98f0ee re-using the verbose-eval parameter in the cv and aggcv methods and tests adapted 2016-02-19 17:14:57 +01:00
Yuan (Terry) Tang
c7f2f3f5b7 Merge pull request #845 from thirdwing/master
[Documents] update windows instructions
2016-02-18 18:48:37 -06:00
Qiang Kou
41052f0d6e update windows instructions 2016-02-18 16:36:04 -08:00
Yuan (Terry) Tang
75d23c8bb2 Merge pull request #833 from AlexisMignon/master
Added the possibility to use custom objective function in the sklearn…
2016-02-18 09:36:38 -06:00
Alexis Mignon
a46706c82e Merge branch 'master' into master 2016-02-17 09:35:30 +01:00
Tianqi Chen
2baea12d97 Merge pull request #818 from webgeist/master
Add feature_importances_ property for XGBClassifier
2016-02-16 10:19:04 -08:00
Yuan (Terry) Tang
ba4ec551ed Merge pull request #836 from hetong007/master
fix cran, update R-package version to 0.4-3
2016-02-16 09:28:31 -06:00
hetong007
371ff20a3b fix cran, update version to 0.4-3 2016-02-16 20:37:26 +08:00
Alexis Mignon
52e9085579 Merge branch 'master' of github.com:AlexisMignon/xgboost 2016-02-16 11:00:57 +01:00
Alexis Mignon
6e27d7539f - Added test cases for the use of custom objective functions
- Made the indentation more consistent with pep8
2016-02-16 10:59:25 +01:00
Alexis Mignon
07bd149b68 Created decorator function so that custom objective function passed to the constructor are more consistent with the sklearn conventions. Added comments in the doc string 2016-02-16 10:58:22 +01:00
Alexis Mignon
5c29eeac18 Merge branch 'master' into master 2016-02-16 10:16:58 +01:00
Yuan (Terry) Tang
29c7cfcbbf Merge pull request #823 from Far0n/py_cv
stratified cv for python wrapper
2016-02-15 13:22:55 -06:00
Alexis Mignon
c8714f587a Added the possibility to use custom objective function in the sklearn API 2016-02-15 17:13:13 +01:00
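
The change above wraps a user objective so it can be written in sklearn's (y_true, y_pred) order rather than the internal (preds, DMatrix) order. A minimal sketch of a squared-error objective under that convention; the gradient and hessian formulas are standard, and the constructor wiring follows this commit:

    import numpy as np
    from xgboost import XGBRegressor

    def squared_error(y_true, y_pred):
        # gradient and hessian of 0.5 * (y_pred - y_true)**2
        return y_pred - y_true, np.ones_like(y_true)

    rng = np.random.RandomState(0)
    X, y = rng.randn(100, 4), rng.randn(100)
    model = XGBRegressor(objective=squared_error).fit(X, y)
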
Faron
4b3a053913 stratified cv for python wrapper
finalize docstring
2016-02-15 16:06:17 +01:00
Yuan (Terry) Tang
9b2b81e6a4 Merge pull request #830 from fsimond/patch-1
Fix CV which was monitoring train-metric
2016-02-15 06:09:39 -06:00
Florian
2443cb9ca8 Fix CV which was monitoring train-metric
https://github.com/dmlc/xgboost/issues/807
2016-02-15 12:00:35 +01:00
Tianqi Chen
70d9732765 Merge pull request #816 from tqchen/master
[DISK] Major improvements in external memory, add support to group back
2016-02-10 15:31:20 -08:00
tqchen
413f119c7e Update dmlc-core 2016-02-10 13:11:21 -08:00
Pavel Gladkov
31c0408cb4 add feature_importances_ property for XGBClassifier 2016-02-10 23:01:33 +03:00
tqchen
63c4ad7617 [APPROX] Make global proposal default, add group ptr solution 2016-02-10 11:19:10 -08:00
tqchen
ce4d59ed69 [TREE] Enable global proposal for faster speed 2016-02-10 11:19:10 -08:00
tqchen
2f2080a337 [TREE] Remove gap constraint, make tree construction more robust 2016-02-10 11:17:54 -08:00
Ubuntu
c36195795a increase shard 2016-02-10 11:17:18 -08:00
Ubuntu
724eda2435 remove reserve for more aggressive memory generation 2016-02-10 11:17:18 -08:00
Ubuntu
46be6181b5 [DIST] fix distributed setting 2016-02-10 11:17:18 -08:00
tqchen
5218438716 [DMLC] update dmlccore 2016-02-10 11:17:18 -08:00
tqchen
b27b51f60e [PLUGIN] Add densify parser 2016-02-10 11:17:18 -08:00
tqchen
88e362732f [DMLC] Update dmlc 2016-02-10 11:17:17 -08:00
tqchen
a500fbc9b0 [TREE] switch to two pass 2016-02-10 11:17:17 -08:00
tqchen
523afcbcd2 [TREE] Cleanup some functions, add utility function for two pass 2016-02-10 11:17:17 -08:00
tqchen
52227a8920 [TREE] Refactor histmaker 2016-02-10 11:17:17 -08:00
tqchen
468bc7725a [METRIC] change metric accumulator to double 2016-02-10 11:17:17 -08:00
tqchen
88447ca32e [MEM] Add rowset struct to save memory with billion level rows 2016-02-10 11:17:17 -08:00
tqchen
2230f1273f [DISK] Add shard option to disk 2016-02-10 11:17:17 -08:00
Tianqi Chen
72961d914b Merge pull request #812 from samuel-liyi/master
fsplit value
2016-02-08 09:16:40 -08:00
Yuan (Terry) Tang
5345990dae Merge pull request #813 from angadgill/patch-2
Update build.md
2016-02-08 08:18:52 -06:00
Angad Gill
c9e09c9875 Update build.md
Minor typo
2016-02-08 01:20:43 -08:00
samuel-liyi
d3540aacc5 change the formula of fsplit value 2016-02-08 15:00:04 +08:00
Tianqi Chen
eb169e4f73 Merge pull request #788 from maximsch2/fix-missing
Make missing handling consistent with sklearn's portion of the Python package
2016-01-28 21:14:31 -08:00
Maxim Grechkin
f5e96eba72 Make missing handling consistent with sklearn's portion of the python package 2016-01-28 14:16:11 -08:00
Yuan (Terry) Tang
21d5ec7275 Merge pull request #778 from dmlc/terrytangyuan-patch-1
Update installation instructions for R package
2016-01-24 23:44:43 -06:00
Yuan (Terry) Tang
be58d6f9d6 Update installation instructions for R package 2016-01-24 19:38:36 -06:00
Yuan (Terry) Tang
5d9b80cd8b Merge pull request #774 from thirdwing/master
[R] update doc (close #760, close #773)
2016-01-24 12:19:07 -05:00
Qiang Kou
bdeb095a7d [R] update doc; add drat repo 2016-01-24 11:42:24 -05:00
Tianqi Chen
1ab0c3c248 Merge pull request #768 from moutai/patch-1
[docs] Fix typo in release notes
2016-01-21 09:55:42 -08:00
Moussa Taifi
51f0e469cb [docs] Fix typo in release notes
small typo fix
thanks
2016-01-21 10:22:11 -05:00
Yuan (Terry) Tang
015c3e0b45 Merge pull request #767 from jenshaase/patch-1
Python Package Installation Documentation Bug
2016-01-21 09:13:21 -06:00
Jens Haase
3077571976 Python Package Installation Documentation Bug 2016-01-21 15:53:18 +01:00
Tianqi Chen
52c8d09ba8 Update build.md 2016-01-20 11:40:29 -08:00
Tianqi Chen
e5b1bd39a0 Merge pull request #761 from aayush26/patch-1
line 100: updated changed path
2016-01-20 10:01:42 -08:00
Tianqi Chen
6f5b68095b Merge pull request #763 from bzEq/issue751
fix signature of __deepcopy__ method
2016-01-20 10:01:32 -08:00
Kai Luo
d9e50fd7f3 __copy__ calls __deepcopy__ with an argument 2016-01-20 19:57:20 +08:00
Kai Luo
5cd765e935 fix signature of __deepcopy__ method 2016-01-20 17:18:11 +08:00
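
For reference on the fix above: Python's copy protocol calls __deepcopy__ with a memo dict, so the method must accept two arguments. A minimal, illustrative sketch on an arbitrary class (not xgboost's actual code):

    import copy

    class Model(object):
        def __copy__(self):            # copy.copy passes no extra argument
            return Model()

        def __deepcopy__(self, memo):  # copy.deepcopy passes a memo dict
            return Model()

    m = copy.deepcopy(Model())
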
Aayush Kumar Singha
5af97e5e47 line 100: updated changed path 2016-01-20 07:17:02 +05:30
Tianqi Chen
ef4dcce737 Merge pull request #759 from dmlc/brick
Merge Brick into master
2016-01-19 09:24:58 -08:00
Tianqi Chen
fb0ced2639 Merge pull request #755 from tqchen/brick
Brick
2016-01-16 11:52:59 -08:00
tqchen
8e7f2679d5 [DOC] Update R doc 2016-01-16 11:52:33 -08:00
tqchen
e7d8ed71d6 [DOC] cleanup distributed training 2016-01-16 11:00:40 -08:00
tqchen
df7c7930d0 [WINDOWS] Remove windows 2016-01-16 10:30:07 -08:00
tqchen
219e58d453 Minor wordings to doc 2016-01-16 10:25:12 -08:00
tqchen
1495a43cea [R] make all customizations to meet strict standard of cran 2016-01-16 10:25:12 -08:00
tqchen
634db18a0f [TRAVIS] cleanup travis script 2016-01-16 10:25:12 -08:00
tqchen
fd173e260f [FIX] change evaluation to more precision 2016-01-16 10:25:12 -08:00
tqchen
67fbf8d264 [TEST] add partial load option 2016-01-16 10:25:12 -08:00
tqchen
6de1c86d18 [LZ4] enable 16 bit index 2016-01-16 10:25:11 -08:00
tqchen
c4d389c5df [LZ] Improve lz4 format 2016-01-16 10:25:11 -08:00
tqchen
31d8e93ef3 [FIX] fix plugin system 2016-01-16 10:25:11 -08:00
tqchen
96f4542a67 [PLUGIN] Add plugin system 2016-01-16 10:25:11 -08:00
tqchen
36c389ac46 [DATA] Isolate the format of page file 2016-01-16 10:25:11 -08:00
黄子轩
a662340fda modify java wrapper settings for new refactor 2016-01-16 10:25:11 -08:00
tqchen
263b7befde [LOG] Simplify README.md, add change logs. 2016-01-16 10:25:11 -08:00
tqchen
2dc6c2dc52 [R] enable R compile
[R] Enable R build for windows and linux
2016-01-16 10:24:02 -08:00
tqchen
72347e2d45 [DATA] Make it fully compatible with rank 2016-01-16 10:24:01 -08:00
tqchen
ef1021e759 [IO] Enable external memory 2016-01-16 10:24:01 -08:00
tqchen
5f28617d7d [REFACTOR] completely remove old src 2016-01-16 10:24:01 -08:00
tqchen
d75e3ed05d [LIBXGBOOST] pass demo running. 2016-01-16 10:24:01 -08:00
tqchen
cee148ed64 [CLI] initial refactor of CLI 2016-01-16 10:24:01 -08:00
tqchen
0d95e863c9 [LEARNER] refactor learner 2016-01-16 10:24:01 -08:00
tqchen
4b4b36d047 [GBM] remove need to explicit InitModel, rename save/load 2016-01-16 10:24:01 -08:00
tqchen
82ceb4de0a [LEARNER] Init learner interface 2016-01-16 10:24:01 -08:00
tqchen
084f5f4715 [Make] refactor build script to use config file 2016-01-16 10:24:01 -08:00
tqchen
7e6f00ee11 [MAKE] fix makefile 2016-01-16 10:24:01 -08:00
tqchen
9042b9e2c7 [GBM] Finish migrate all gbms 2016-01-16 10:24:01 -08:00
tqchen
e4567bbc47 [REFACTOR] Add alias, allow missing variables, init gbm interface 2016-01-16 10:24:01 -08:00
tqchen
4f26d98150 [Update] remove rabit subtree, use submodule, move code 2016-01-16 10:24:01 -08:00
tqchen
d4677b6561 [TREE] finish move of updater 2016-01-16 10:24:01 -08:00
tqchen
4adc4cf0b9 [TREE] Move the files to target refactor location 2016-01-16 10:24:01 -08:00
tqchen
3128e1705b [TREE] Refactor colmaker 2016-01-16 10:24:01 -08:00
tqchen
20043f63a6 [TREE] Move colmaker 2016-01-16 10:24:01 -08:00
tqchen
c8ccb61b9e [TREE] Enable updater registry 2016-01-16 10:24:01 -08:00
tqchen
a62a66d545 [TREE] Finalize regression tree refactor 2016-01-16 10:24:01 -08:00
tqchen
844e8a153d [TREE] Refactor to new logging 2016-01-16 10:24:01 -08:00
tqchen
05115adbff [TREE] move tree model 2016-01-16 10:24:01 -08:00
tqchen
b4d0bb5a6d [METRIC] all metric move finished 2016-01-16 10:24:01 -08:00
tqchen
dedd87662b [OBJ] Add basic objective function and registry 2016-01-16 10:24:01 -08:00
tqchen
46bcba7173 [DATA] basic data refactor done, basic version of csr source. 2016-01-16 10:24:00 -08:00
tqchen
3d708e4788 latest data 2016-01-16 10:24:00 -08:00
tqchen
7ff91fe5f9 Data interface ready 2016-01-16 10:24:00 -08:00
tqchen
d530e0c14f [REFACTOR] cleanup structure 2016-01-16 10:24:00 -08:00
tqchen
5ed4dc4f60 fix makefile warning when cc is defined 2016-01-16 10:24:00 -08:00
505 changed files with 39608 additions and 36489 deletions

.gitignore (24 changed lines)

@@ -20,12 +20,12 @@
*buffer
*model
*pyc
*train
*test
*.train
*.test
*.tar
*group
*rar
*vali
*data
*sdf
Release
*exe*
@@ -36,7 +36,6 @@ ipch
*log
Debug
*suo
*test*
.Rhistory
*.dll
*i386
@@ -51,12 +50,9 @@ Debug
./xgboost
./xgboost.mpi
./xgboost.mock
rabit
#.Rbuildignore
R-package.Rproj
*.cache*
R-package/inst
R-package/src
#java
java/xgboost4j/target
java/xgboost4j/tmp
@@ -65,9 +61,21 @@ java/xgboost4j-demo/data/
java/xgboost4j-demo/tmp/
java/xgboost4j-demo/model/
nb-configuration*
dmlc-core
# Eclipse
.project
.cproject
.pydevproject
.settings/
build
config.mk
xgboost
*.data
build_plugin
dmlc-core
.idea
recommonmark/
tags
*.iml
*.class
target
*.swp

.gitmodules (new file, 6 additions)

@@ -0,0 +1,6 @@
[submodule "dmlc-core"]
path = dmlc-core
url = https://github.com/dmlc/dmlc-core
[submodule "rabit"]
path = rabit
url = https://github.com/dmlc/rabit

.travis.yml

@@ -1,4 +1,5 @@
sudo: true
# disable sudo for container build.
sudo: false
# Enabling test on Linux and OS X
os:
@@ -8,51 +9,66 @@ os:
# Use Build Matrix to do lint and build seperately
env:
matrix:
- TASK=lint LINT_LANG=cpp
- TASK=lint LINT_LANG=python
- TASK=R-package CXX=g++
- TASK=python-package CXX=g++
- TASK=python-package3 CXX=g++
- TASK=java-package CXX=g++
- TASK=build CXX=g++
- TASK=build-with-dmlc CXX=g++
# code lint
- TASK=lint
# r package test
- TASK=r_test
# python package test
- TASK=python_test
- TASK=python_lightweight_test
# java package test
- TASK=java_test
# cmake test
- TASK=cmake_test
os:
- linux
- osx
matrix:
exclude:
- os: osx
env: TASK=lint
- os: linux
env: TASK=r_test
- os: osx
env: TASK=java_test
- os: osx
env: TASK=python_lightweight_test
# dependent apt packages
addons:
apt:
packages:
- doxygen
- libopenmpi-dev
- wget
- libcurl4-openssl-dev
- unzip
- python-numpy
- python-scipy
- graphviz
before_install:
- scripts/travis_osx_install.sh
- git clone https://github.com/dmlc/dmlc-core
- export TRAVIS=dmlc-core/scripts/travis/
- source dmlc-core/scripts/travis/travis_setup_env.sh
- export PYTHONPATH=${PYTHONPATH}:${PWD}/python-package
- source ${TRAVIS}/travis_setup_env.sh
- echo "MAVEN_OPTS='-Xmx2048m -XX:MaxPermSize=1024m -XX:ReservedCodeCacheSize=512m'" > ~/.mavenrc
install:
- pip install cpplint pylint --user `whoami`
- source tests/travis/setup.sh
script:
- tests/travis/run_test.sh
script: scripts/travis_script.sh
cache:
directories:
- ${HOME}/.cache/usr
- ${HOME}/.cache/pip
before_cache:
- dmlc-core/scripts/travis/travis_before_cache.sh
after_failure:
- scripts/travis_after_failure.sh
- tests/travis/travis_after_failure.sh
notifications:
email:
on_success: change
on_failure: always

CHANGES.md

@@ -1,61 +0,0 @@
Change Log
==========
xgboost-0.1
-----------
* Initial release
xgboost-0.2x
------------
* Python module
* Weighted samples instances
* Initial version of pairwise rank
xgboost-0.3
-----------
* Faster tree construction module
- Allows subsampling columns during tree construction via ```bst:col_samplebytree=ratio```
* Support for boosting from initial predictions
* Experimental version of LambdaRank
* Linear booster is now parallelized, using parallel coordinate descent.
* Add [Code Guide](src/README.md) for customizing objective function and evaluation
* Add R module
xgboost-0.4
-----------
* Distributed version of xgboost that runs on YARN, scales to billions of examples
* Direct save/load data and model from/to S3 and HDFS
* Feature importance visualization in R module, by Michael Benesty
* Predict leaf index
* Poisson regression for counts data
* Early stopping option in training
* Native save load support in R and python
- xgboost models now can be saved using save/load in R
- xgboost python model is now picklable
* sklearn wrapper is supported in python module
* Experimental External memory version
xgboost-0.47
------------
* Changes in R library
- fixed possible problem of poisson regression.
- switched from 0 to NA for missing values.
- exposed access to additional model parameters.
* Changes in Python library
- throws exception instead of crash terminal when a parameter error happens.
- has importance plot and tree plot functions.
- accepts different learning rates for each boosting round.
- allows model training continuation from previously saved model.
- allows early stopping in CV.
- allows feval to return a list of tuples.
- allows eval_metric to handle additional format.
- improved compatibility in sklearn module.
- additional parameters added for sklearn wrapper.
- added pip installation functionality.
- supports more Pandas DataFrame dtypes.
- added best_ntree_limit attribute, in addition to best_score and best_iteration.
* Java api is ready for use
* Added more test cases and continuous integration to make each build more robust.
ongoing at master
------------------

79
CMakeLists.txt Normal file

@@ -0,0 +1,79 @@
cmake_minimum_required (VERSION 2.6)
project (xgboost)
find_package(OpenMP)
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS} -fPIC")
# Make sure we are using C++11
# Visual Studio 12.0 and newer supports enough c++11 to make this work
if(MSVC AND MSVC_VERSION LESS 1800)
message(STATUS "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.")
else()
# GCC 4.6 with c++0x supports enough to make this work
include(CheckCXXCompilerFlag)
CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11)
CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
if(COMPILER_SUPPORTS_CXX11)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
elseif(COMPILER_SUPPORTS_CXX0X)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x")
else()
message(STATUS "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.")
endif()
endif()
#Make sure we are using the static runtime
if(MSVC)
set(variables
CMAKE_C_FLAGS_DEBUG
CMAKE_C_FLAGS_MINSIZEREL
CMAKE_C_FLAGS_RELEASE
CMAKE_C_FLAGS_RELWITHDEBINFO
CMAKE_CXX_FLAGS_DEBUG
CMAKE_CXX_FLAGS_MINSIZEREL
CMAKE_CXX_FLAGS_RELEASE
CMAKE_CXX_FLAGS_RELWITHDEBINFO
)
foreach(variable ${variables})
if(${variable} MATCHES "/MD")
string(REGEX REPLACE "/MD" "/MT" ${variable} "${${variable}}")
endif()
endforeach()
endif()
include_directories (
${PROJECT_SOURCE_DIR}/include
${PROJECT_SOURCE_DIR}/dmlc-core/include
${PROJECT_SOURCE_DIR}/rabit/include
)
file(GLOB SOURCES
src/c_api/*.cc
src/common/*.cc
src/data/*.cc
src/gbm/*.cc
src/metric/*.cc
src/objective/*.cc
src/tree/*.cc
src/*.cc
)
set(RABIT_SOURCES
rabit/src/allreduce_base.cc
rabit/src/allreduce_robust.cc
rabit/src/engine.cc
rabit/src/c_api.cc
)
add_subdirectory(dmlc-core)
add_library(rabit STATIC ${RABIT_SOURCES})
add_executable(xgboost ${SOURCES})
add_library(libxgboost SHARED ${SOURCES})
target_link_libraries(xgboost dmlccore rabit)
target_link_libraries(libxgboost dmlccore rabit)

CONTRIBUTORS.md

@@ -59,3 +59,4 @@ List of Contributors
* [Sam Thomson](https://github.com/sammthomson)
* [ganesh-krishnan](https://github.com/ganesh-krishnan)
* [Damien Carol](https://github.com/damiencarol)
* [Alex Bain](https://github.com/convexquad)

LICENSE

@@ -1,9 +1,9 @@
Copyright (c) 2014 by Contributors
Copyright (c) 2016 by Contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software

306
Makefile

@@ -1,18 +1,60 @@
export CC = $(if $(shell which gcc-5 2>/dev/null),gcc-5,gcc)
export CXX = $(if $(shell which g++-5 2>/dev/null),g++-5,g++)
export MPICXX = mpicxx
export LDFLAGS= -pthread -lm
export CFLAGS = -Wall -O3 -msse2 -Wno-unknown-pragmas -funroll-loops
# java include path
export JAVAINCFLAGS = -I${JAVA_HOME}/include -I./java
ifeq ($(OS), Windows_NT)
export CXX = g++ -m64
export CC = gcc -m64
ifndef config
ifneq ("$(wildcard ./config.mk)","")
config = config.mk
else
config = make/config.mk
endif
endif
UNAME= $(shell uname)
ifndef DMLC_CORE
DMLC_CORE = dmlc-core
endif
ifndef RABIT
RABIT = rabit
endif
ROOTDIR = $(CURDIR)
ifeq ($(OS), Windows_NT)
UNAME="Windows"
else
UNAME=$(shell uname)
endif
include $(config)
ifeq ($(USE_OPENMP), 0)
export NO_OPENMP = 1
endif
include $(DMLC_CORE)/make/dmlc.mk
# include the plugins
include $(XGB_PLUGINS)
# use customized config file
ifndef CC
export CC = $(if $(shell which gcc-5),gcc-5,gcc)
endif
ifndef CXX
export CXX = $(if $(shell which g++-5),g++-5,g++)
endif
export LDFLAGS= -pthread -lm $(ADD_LDFLAGS) $(DMLC_LDFLAGS) $(PLUGIN_LDFLAGS)
export CFLAGS= -std=c++0x -Wall -O3 -msse2 -Wno-unknown-pragmas -funroll-loops -Iinclude $(ADD_CFLAGS) $(PLUGIN_CFLAGS)
CFLAGS += -I$(DMLC_CORE)/include -I$(RABIT)/include
#java include path
export JAVAINCFLAGS = -I${JAVA_HOME}/include -I./java
ifndef LINT_LANG
LINT_LANG= "all"
endif
ifneq ($(UNAME), Windows)
CFLAGS += -fPIC
XGBOOST_DYLIB = lib/libxgboost.so
else
XGBOOST_DYLIB = lib/libxgboost.dll
endif
ifeq ($(UNAME), Linux)
LDFLAGS += -lrt
@@ -23,192 +65,124 @@ ifeq ($(UNAME), Darwin)
JAVAINCFLAGS += -I${JAVA_HOME}/include/darwin
endif
ifeq ($(no_omp),1)
ifeq ($(USE_OPENMP), 1)
CFLAGS += -fopenmp
else
CFLAGS += -DDISABLE_OPENMP
else
#CFLAGS += -fopenmp
ifeq ($(omp_mac_static),1)
#CFLAGS += -fopenmp -Bstatic
CFLAGS += -static-libgcc -static-libstdc++ -L. -fopenmp
#LDFLAGS += -Wl,--whole-archive -lpthread -Wl --no-whole-archive
else
CFLAGS += -fopenmp
endif
endif
# by default use c++11
ifeq ($(cxx11),1)
CFLAGS += -std=c++11
endif
# handling dmlc
ifdef dmlc
ifndef config
ifneq ("$(wildcard $(dmlc)/config.mk)","")
config = $(dmlc)/config.mk
else
config = $(dmlc)/make/config.mk
endif
endif
include $(config)
include $(dmlc)/make/dmlc.mk
LDFLAGS+= $(DMLC_LDFLAGS)
LIBDMLC=$(dmlc)/libdmlc.a
else
LIBDMLC=dmlc_simple.o
endif
ifndef WITH_FPIC
WITH_FPIC = 1
endif
ifeq ($(WITH_FPIC), 1)
CFLAGS += -fPIC
endif
ifeq ($(OS), Windows_NT)
LIBRABIT = subtree/rabit/lib/librabit_empty.a
SLIB = wrapper/xgboost_wrapper.dll
else
LIBRABIT = subtree/rabit/lib/librabit.a
SLIB = wrapper/libxgboostwrapper.so
endif
# java lib
JLIB = java/libxgboost4j.so
# specify tensor path
BIN = xgboost
MOCKBIN = xgboost.mock
OBJ = updater.o gbm.o io.o main.o dmlc_simple.o
MPIBIN =
ifeq ($(WITH_FPIC), 1)
TARGET = $(BIN) $(OBJ) $(SLIB)
else
TARGET = $(BIN)
endif
.PHONY: clean all lint clean_all doxygen rcpplint pypack Rpack Rbuild Rcheck java pylint
ifndef LINT_LANG
LINT_LANG= "all"
endif
.PHONY: clean all mpi python Rpack lint
all: lib/libxgboost.a $(XGBOOST_DYLIB) xgboost
all: $(TARGET)
mpi: $(MPIBIN)
$(DMLC_CORE)/libdmlc.a: $(wildcard $(DMLC_CORE)/src/*.cc $(DMLC_CORE)/src/*/*.cc)
+ cd $(DMLC_CORE); $(MAKE) libdmlc.a config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
python: wrapper/libxgboostwrapper.so
# now the wrapper takes in two files. io and wrapper part
updater.o: src/tree/updater.cpp src/tree/*.hpp src/*.h src/tree/*.h src/utils/*.h
dmlc_simple.o: src/io/dmlc_simple.cpp src/utils/*.h
gbm.o: src/gbm/gbm.cpp src/gbm/*.hpp src/gbm/*.h
io.o: src/io/io.cpp src/io/*.hpp src/utils/*.h src/learner/dmatrix.h src/*.h
main.o: src/xgboost_main.cpp src/utils/*.h src/*.h src/learner/*.hpp src/learner/*.h
xgboost: updater.o gbm.o io.o main.o $(LIBRABIT) $(LIBDMLC)
wrapper/xgboost_wrapper.dll wrapper/libxgboostwrapper.so: wrapper/xgboost_wrapper.cpp src/utils/*.h src/*.h src/learner/*.hpp src/learner/*.h updater.o gbm.o io.o $(LIBRABIT) $(LIBDMLC)
$(RABIT)/lib/$(LIB_RABIT): $(wildcard $(RABIT)/src/*.cc)
+ cd $(RABIT); $(MAKE) lib/$(LIB_RABIT); cd $(ROOTDIR)
java: java/libxgboost4j.so
java/libxgboost4j.so: java/xgboost4j_wrapper.cpp wrapper/xgboost_wrapper.cpp src/utils/*.h src/*.h src/learner/*.hpp src/learner/*.h updater.o gbm.o io.o $(LIBRABIT) $(LIBDMLC)
jvm: jvm-packages/lib/libxgboost4j.so
# dependency on rabit
subtree/rabit/lib/librabit.a: subtree/rabit/src/engine.cc
+ cd subtree/rabit;make lib/librabit.a; cd ../..
subtree/rabit/lib/librabit_empty.a: subtree/rabit/src/engine_empty.cc
+ cd subtree/rabit;make lib/librabit_empty.a; cd ../..
subtree/rabit/lib/librabit_mock.a: subtree/rabit/src/engine_mock.cc
+ cd subtree/rabit;make lib/librabit_mock.a; cd ../..
subtree/rabit/lib/librabit_mpi.a: subtree/rabit/src/engine_mpi.cc
+ cd subtree/rabit;make lib/librabit_mpi.a; cd ../..
SRC = $(wildcard src/*.cc src/*/*.cc)
ALL_OBJ = $(patsubst src/%.cc, build/%.o, $(SRC)) $(PLUGIN_OBJS)
AMALGA_OBJ = amalgamation/xgboost-all0.o
LIB_DEP = $(DMLC_CORE)/libdmlc.a $(RABIT)/lib/$(LIB_RABIT)
ALL_DEP = $(filter-out build/cli_main.o, $(ALL_OBJ)) $(LIB_DEP)
CLI_OBJ = build/cli_main.o
$(BIN) :
$(CXX) $(CFLAGS) -fPIC -o $@ $(filter %.cpp %.o %.c %.cc %.a, $^) $(LDFLAGS)
build/%.o: src/%.cc
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -MM -MT build/$*.o $< >build/$*.d
$(CXX) -c $(CFLAGS) -c $< -o $@
$(MOCKBIN) :
$(CXX) $(CFLAGS) -o $@ $(filter %.cpp %.o %.c %.cc %.a, $^) $(LDFLAGS)
build_plugin/%.o: plugin/%.cc
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -MM -MT build_plugin/$*.o $< >build_plugin/$*.d
$(CXX) -c $(CFLAGS) -c $< -o $@
$(SLIB) :
$(CXX) $(CFLAGS) -fPIC -shared -o $@ $(filter %.cpp %.o %.c %.a %.cc, $^) $(LDFLAGS) $(DLLFLAGS)
# This should be equivalent to $(ALL_OBJ) except for build/cli_main.o
amalgamation/xgboost-all0.o: amalgamation/xgboost-all0.cc
$(CXX) -c $(CFLAGS) -c $< -o $@
$(JLIB) :
$(CXX) $(CFLAGS) -fPIC -shared -o $@ $(filter %.cpp %.o %.c %.a %.cc, $^) $(LDFLAGS) $(JAVAINCFLAGS)
# Equivalent to lib/libxgboost_all.so
lib/libxgboost_all.so: $(AMALGA_OBJ) $(LIB_DEP)
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -shared -o $@ $(filter %.o %.a, $^) $(LDFLAGS)
$(OBJ) :
$(CXX) -c $(CFLAGS) -o $@ $(firstword $(filter %.cpp %.c %.cc, $^) )
lib/libxgboost.a: $(ALL_DEP)
@mkdir -p $(@D)
ar crv $@ $(filter %.o, $?)
$(MPIOBJ) :
$(MPICXX) -c $(CFLAGS) -o $@ $(firstword $(filter %.cpp %.c, $^) )
lib/libxgboost.dll lib/libxgboost.so: $(ALL_DEP)
@mkdir -p $(@D)
$(CXX) $(CFLAGS) -shared -o $@ $(filter %.o %a, $^) $(LDFLAGS)
$(MPIBIN) :
$(MPICXX) $(CFLAGS) -o $@ $(filter %.cpp %.o %.c %.cc %.a, $^) $(LDFLAGS)
jvm-packages/lib/libxgboost4j.so: jvm-packages/xgboost4j/src/native/xgboost4j.cpp $(ALL_DEP)
@mkdir -p $(@D)
$(CXX) $(CFLAGS) $(JAVAINCFLAGS) -shared -o $@ $(filter %.cpp %.o %.a, $^) $(LDFLAGS)
install:
cp -f -r $(BIN) $(INSTALL_PATH)
xgboost: $(CLI_OBJ) $(ALL_DEP)
$(CXX) $(CFLAGS) -o $@ $(filter %.o %.a, $^) $(LDFLAGS)
rcpplint:
python2 dmlc-core/scripts/lint.py xgboost ${LINT_LANG} R-package/src
lint: rcpplint
python2 dmlc-core/scripts/lint.py xgboost ${LINT_LANG} include src plugin python-package
pylint:
flake8 --ignore E501 python-package
flake8 --ignore E501 tests/python
clean:
$(RM) -rf build build_plugin lib bin *~ */*~ */*/*~ */*/*/*~ */*.o */*/*.o */*/*/*.o xgboost
clean_all: clean
cd $(DMLC_CORE); $(MAKE) clean; cd $(ROOTDIR)
cd $(RABIT); $(MAKE) clean; cd $(ROOTDIR)
doxygen:
doxygen doc/Doxyfile
# create standalone python tar file.
pypack: ${XGBOOST_DYLIB}
cp ${XGBOOST_DYLIB} python-package/xgboost
cd python-package; tar cf xgboost.tar xgboost; cd ..
# Script to make a clean installable R package.
Rpack:
make clean
cd subtree/rabit;make clean;cd ..
$(MAKE) clean_all
rm -rf xgboost xgboost*.tar.gz
cp -r R-package xgboost
rm -rf xgboost/src/*.o xgboost/src/*.so xgboost/src/*.dll
rm -rf xgboost/src/*/*.o
rm -rf subtree/rabit/src/*.o
rm -rf xgboost/demo/*.model xgboost/demo/*.buffer xgboost/demo/*.txt
rm -rf xgboost/demo/runall.R
cp -r src xgboost/src/src
mkdir xgboost/src/subtree
mkdir xgboost/src/subtree/rabit
cp -r subtree/rabit/include xgboost/src/subtree/rabit/include
cp -r subtree/rabit/src xgboost/src/subtree/rabit/src
rm -rf xgboost/src/subtree/rabit/src/*.o
mkdir xgboost/src/wrapper
cp wrapper/xgboost_wrapper.h xgboost/src/wrapper
cp wrapper/xgboost_wrapper.cpp xgboost/src/wrapper
cp -r include xgboost/src/include
cp -r amalgamation xgboost/src/amalgamation
mkdir -p xgboost/src/rabit
cp -r rabit/include xgboost/src/rabit/include
cp -r rabit/src xgboost/src/rabit/src
rm -rf xgboost/src/rabit/src/*.o
mkdir -p xgboost/src/dmlc-core
cp -r dmlc-core/include xgboost/src/dmlc-core/include
cp -r dmlc-core/src xgboost/src/dmlc-core/src
cp ./LICENSE xgboost
cat R-package/src/Makevars|sed '2s/.*/PKGROOT=./' > xgboost/src/Makevars
cat R-package/src/Makevars|sed '2s/.*/PKGROOT=./' | sed '3s/.*/ENABLE_STD_THREAD=0/' > xgboost/src/Makevars
cp xgboost/src/Makevars xgboost/src/Makevars.win
# R CMD build --no-build-vignettes xgboost
# R CMD build xgboost
# rm -rf xgboost
# R CMD check --as-cran xgboost*.tar.gz
Rbuild:
make Rpack
R CMD build xgboost
$(MAKE) Rpack
R CMD build --no-build-vignettes xgboost
rm -rf xgboost
Rcheck:
make Rbuild
R CMD check --as-cran xgboost*.tar.gz
$(MAKE) Rbuild
R CMD check xgboost*.tar.gz
pythonpack:
#for pip maintainer only
cd subtree/rabit;make clean;cd ..
rm -rf xgboost-deploy xgboost*.tar.gz
cp -r python-package xgboost-deploy
#cp *.md xgboost-deploy/
cp LICENSE xgboost-deploy/
cp Makefile xgboost-deploy/xgboost
cp -r wrapper xgboost-deploy/xgboost
cp -r subtree xgboost-deploy/xgboost
cp -r multi-node xgboost-deploy/xgboost
cp -r windows xgboost-deploy/xgboost
cp -r src xgboost-deploy/xgboost
cp python-package/setup_pip.py xgboost-deploy/setup.py
#make python
pythonbuild:
make pythonpack
python setup.py install
pythoncheck:
make pythonbuild
python -c 'import xgboost;print xgboost.core.find_lib_path()'
# lint requires dmlc to be in current folder
lint:
dmlc-core/scripts/lint.py xgboost $(LINT_LANG) src wrapper R-package python-package
clean:
$(RM) -rf $(OBJ) $(BIN) $(MPIBIN) $(MPIOBJ) $(SLIB) *.o */*.o */*/*.o *~ */*~ */*/*~
cd subtree/rabit; make clean; cd ..
-include build/*.d
-include build/*/*.d
-include build_plugin/*/*.d

104
NEWS.md Normal file

@@ -0,0 +1,104 @@
XGBoost Change Log
==================
This file records the changes in xgboost library in reverse chronological order.
## v0.6 (2016.07.29)
* Version 0.5 is skipped due to major improvements in the core
* Major refactor of core library.
- Goal: more flexible and modular code as a portable library.
- Switch to use of c++11 standard code.
- Random number generator defaults to ```std::mt19937```.
- Share the data loading pipeline and logging module from dmlc-core.
- Enable registry pattern to allow optional plug-in of objective, metric, tree constructor, and data loader.
- Future plugin modules can be put into xgboost/plugin and register back to the library.
- Replace most of the raw pointers with smart pointers, for RAII safety.
* Add an official `tree_method` parameter for choosing the tree construction algorithm, including the approximate one.
- The default behavior now switches to the faster algorithm when appropriate.
- Users get a message when the approximate algorithm is chosen.
* Change library name to libxgboost.so
* Backward compatibility
- The binary buffer file is not backward compatible with previous versions.
- The model file is backward compatible on 64-bit platforms.
* The model file is compatible between 64-bit and 32-bit platforms (not yet tested).
* The external memory version and other advanced features will be exposed to the R library as well on Linux.
- Previously, some of these features were blocked due to C++11 and threading limits.
- The Windows version is still blocked because Rtools does not support ```std::thread```.
* rabit and dmlc-core are maintained through git submodule
- Anyone can open PR to update these dependencies now.
* Improvements
- Rabit and xgboost libs are now thread-safe and use thread-local PRNGs
- This fixes some of the previous problems when running xgboost in multiple threads.
* JVM Package
- Enable xgboost4j for java and scala
- XGBoost distributed now runs on Flink and Spark.
* Support listing model attributes as metadata.
- https://github.com/dmlc/xgboost/pull/1198
- https://github.com/dmlc/xgboost/pull/1166
* Support callback API
- https://github.com/dmlc/xgboost/issues/892
- https://github.com/dmlc/xgboost/pull/1211
- https://github.com/dmlc/xgboost/pull/1264
* Support new booster DART (dropout in tree boosting)
- https://github.com/dmlc/xgboost/pull/1220
* Add CMake build system
- https://github.com/dmlc/xgboost/pull/1314
## v0.47 (2016.01.14)
* Changes in R library
- fixed possible problem of poisson regression.
- switched from 0 to NA for missing values.
- exposed access to additional model parameters.
* Changes in Python library
- throws exception instead of crash terminal when a parameter error happens.
- has importance plot and tree plot functions.
- accepts different learning rates for each boosting round.
- allows model training continuation from previously saved model.
- allows early stopping in CV.
- allows feval to return a list of tuples.
- allows eval_metric to handle additional format.
- improved compatibility in sklearn module.
- additional parameters added for sklearn wrapper.
- added pip installation functionality.
- supports more Pandas DataFrame dtypes.
- added best_ntree_limit attribute, in addition to best_score and best_iteration.
* Java api is ready for use
* Added more test cases and continuous integration to make each build more robust.
## v0.4 (2015.05.11)
* Distributed version of xgboost that runs on YARN, scales to billions of examples
* Direct save/load data and model from/to S3 and HDFS
* Feature importance visualization in R module, by Michael Benesty
* Predict leaf index
* Poisson regression for counts data
* Early stopping option in training
* Native save load support in R and python
- xgboost models now can be saved using save/load in R
- xgboost python model is now picklable
* sklearn wrapper is supported in python module
* Experimental External memory version
## v0.3 (2014.09.07)
* Faster tree construction module
- Allows subsampling columns during tree construction via ```bst:col_samplebytree=ratio```
* Support for boosting from initial predictions
* Experimental version of LambdaRank
* Linear booster is now parallelized, using parallel coordinate descent.
* Add [Code Guide](src/README.md) for customizing objective function and evaluation
* Add R module
## v0.2x (2014.05.20)
* Python module
* Weighted samples instances
* Initial version of pairwise rank
## v0.1 (2014.03.26)
* Initial release
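Two of the v0.6 items above are easy to demonstrate from R. A minimal sketch, assuming the agaricus demo data that ships with the R package (parameter values follow this changelog; not part of the diff):

library(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
# explicitly request the approximate tree construction algorithm;
# a message is printed when it is chosen automatically
bst_approx <- xgb.train(params = list(objective = 'binary:logistic',
                                      tree_method = 'approx'),
                        data = dtrain, nrounds = 5)
# the new DART booster (dropout in tree boosting)
bst_dart <- xgb.train(params = list(booster = 'dart',
                                    objective = 'binary:logistic'),
                      data = dtrain, nrounds = 5)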

R-package/DESCRIPTION

@@ -1,10 +1,11 @@
Package: xgboost
Type: Package
Title: Extreme Gradient Boosting
Version: 0.4-2
Version: 0.6-0
Date: 2015-08-01
Author: Tianqi Chen <tianqi.tchen@gmail.com>, Tong He <hetong007@gmail.com>,
Michael Benesty <michael@benesty.fr>
Michael Benesty <michael@benesty.fr>, Vadim Khotilovich <khotilovich@gmail.com>,
Yuan Tang <terrytangyuan@gmail.com>
Maintainer: Tong He <hetong007@gmail.com>
Description: Extreme Gradient Boosting, which is an efficient implementation
of gradient boosting framework. This package is its R interface. The package
@@ -20,6 +21,7 @@ BugReports: https://github.com/dmlc/xgboost/issues
VignetteBuilder: knitr
Suggests:
knitr,
rmarkdown,
ggplot2 (>= 1.0.1),
DiagrammeR (>= 0.8.1),
Ckmeans.1d.dp (>= 3.3.1),
@@ -33,5 +35,5 @@ Imports:
methods,
data.table (>= 1.9.6),
magrittr (>= 1.5),
stringr (>= 0.6.2)
stringi (>= 0.5.2)
RoxygenNote: 5.0.1

R-package/NAMESPACE

@@ -1,13 +1,38 @@
# Generated by roxygen2: do not edit by hand
S3method("[",xgb.DMatrix)
S3method("dimnames<-",xgb.DMatrix)
S3method(dim,xgb.DMatrix)
S3method(dimnames,xgb.DMatrix)
S3method(getinfo,xgb.DMatrix)
S3method(predict,xgb.Booster)
S3method(predict,xgb.Booster.handle)
S3method(print,xgb.Booster)
S3method(print,xgb.DMatrix)
S3method(print,xgb.cv.synchronous)
S3method(setinfo,xgb.DMatrix)
S3method(slice,xgb.DMatrix)
export("xgb.attr<-")
export("xgb.attributes<-")
export("xgb.parameters<-")
export(cb.cv.predict)
export(cb.early.stop)
export(cb.evaluation.log)
export(cb.print.evaluation)
export(cb.reset.parameters)
export(cb.save.model)
export(getinfo)
export(setinfo)
export(slice)
export(xgb.DMatrix)
export(xgb.DMatrix.save)
export(xgb.attr)
export(xgb.attributes)
export(xgb.create.features)
export(xgb.cv)
export(xgb.dump)
export(xgb.ggplot.deepness)
export(xgb.ggplot.importance)
export(xgb.importance)
export(xgb.load)
export(xgb.model.dt.tree)
@@ -19,8 +44,6 @@ export(xgb.save)
export(xgb.save.raw)
export(xgb.train)
export(xgboost)
exportMethods(nrow)
exportMethods(predict)
import(methods)
importClassesFrom(Matrix,dgCMatrix)
importClassesFrom(Matrix,dgeMatrix)
@@ -30,19 +53,19 @@ importFrom(Matrix,sparse.model.matrix)
importFrom(Matrix,sparseVector)
importFrom(data.table,":=")
importFrom(data.table,as.data.table)
importFrom(data.table,copy)
importFrom(data.table,data.table)
importFrom(data.table,fread)
importFrom(data.table,rbindlist)
importFrom(data.table,set)
importFrom(data.table,setkey)
importFrom(data.table,setkeyv)
importFrom(data.table,setnames)
importFrom(magrittr,"%>%")
importFrom(magrittr,add)
importFrom(magrittr,not)
importFrom(stringr,str_detect)
importFrom(stringr,str_extract)
importFrom(stringr,str_extract_all)
importFrom(stringr,str_match)
importFrom(stringr,str_replace)
importFrom(stringr,str_split)
importFrom(stringr,str_trim)
importFrom(stats,predict)
importFrom(stringi,stri_detect_regex)
importFrom(stringi,stri_match_first_regex)
importFrom(stringi,stri_replace_all_regex)
importFrom(stringi,stri_replace_first_regex)
importFrom(stringi,stri_split_regex)
importFrom(utils,object.size)
importFrom(utils,str)
importFrom(utils,tail)
useDynLib(xgboost)
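The import changes above come from the package's switch from stringr to stringi. A rough sketch of the correspondence for the newly imported functions (examples are illustrative only, not taken from the package code):

library(stringi)
stri_detect_regex('train-error:0.25', ':')                   # ~ stringr::str_detect
stri_match_first_regex('train-error:0.25', '(\\S+):(\\S+)')  # ~ stringr::str_match
stri_replace_all_regex('train-error', '-', '_')              # ~ stringr::str_replace_all
stri_split_regex('a b  c', '\\s+')                           # ~ stringr::str_split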

608
R-package/R/callbacks.R Normal file

@@ -0,0 +1,608 @@
#' Callback closures for booster training.
#'
#' These are used to perform various service tasks either during boosting iterations or at the end.
#' This approach helps to modularize many such tasks without bloating the main training methods,
#' and it offers additional flexibility.
#'
#' @details
#' By default, a callback function is run after each boosting iteration.
#' An R-attribute \code{is_pre_iteration} could be set for a callback to define a pre-iteration function.
#'
#' When a callback function has \code{finalize} parameter, its finalizer part will also be run after
#' the boosting is completed.
#'
#' WARNING: side-effects!!! Be aware that these callback functions access and modify things in
#' the environment from which they are called, which is a fairly uncommon thing to do in R.
#'
#' To write a custom callback closure, make sure you first understand the main concepts about R environments.
#' Check either R documentation on \code{\link[base]{environment}} or the
#' \href{http://adv-r.had.co.nz/Environments.html}{Environments chapter} from the "Advanced R"
#' book by Hadley Wickham. Further, the best option is to read the code of some of the existing callbacks -
#' choose ones that do something similar to what you want to achieve. Also, you would need to get familiar
#' with the objects available inside of the \code{xgb.train} and \code{xgb.cv} internal environments.
#'
#' @seealso
#' \code{\link{cb.print.evaluation}},
#' \code{\link{cb.evaluation.log}},
#' \code{\link{cb.reset.parameters}},
#' \code{\link{cb.early.stop}},
#' \code{\link{cb.save.model}},
#' \code{\link{cb.cv.predict}},
#' \code{\link{xgb.train}},
#' \code{\link{xgb.cv}}
#'
#' @name callbacks
NULL
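To make the closure pattern above concrete, here is a minimal custom callback sketch (the name cb.print.iteration is hypothetical, shown for illustration only); it reads the documented iteration and end_iteration values from its calling frame:

cb.print.iteration <- function() {
  callback <- function(env = parent.frame()) {
    # report progress using values set by xgb.train in the calling frame
    cat(sprintf('iteration %d of %d\n', env$iteration, env$end_iteration))
  }
  attr(callback, 'call') <- match.call()
  attr(callback, 'name') <- 'cb.print.iteration'
  callback
}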
#
# Callbacks -------------------------------------------------------------------
#
#' Callback closure for printing the result of evaluation
#'
#' @param period results are printed every \code{period} iterations
#'
#' @details
#' The callback function prints the result of evaluation at every \code{period} iterations.
#' The initial and the last iteration's evaluations are always printed.
#'
#' Callback function expects the following values to be set in its calling frame:
#' \code{bst_evaluation} (also \code{bst_evaluation_err} when available),
#' \code{iteration},
#' \code{begin_iteration},
#' \code{end_iteration}.
#'
#' @seealso
#' \code{\link{callbacks}}
#'
#' @export
cb.print.evaluation <- function(period=1) {
callback <- function(env = parent.frame()) {
if (length(env$bst_evaluation) == 0 ||
period == 0 ||
NVL(env$rank, 0) != 0 )
return()
i <- env$iteration
if ((i-1) %% period == 0 ||
i == env$begin_iteration ||
i == env$end_iteration) {
msg <- format.eval.string(i, env$bst_evaluation, env$bst_evaluation_err)
cat(msg, '\n')
}
}
attr(callback, 'call') <- match.call()
attr(callback, 'name') <- 'cb.print.evaluation'
callback
}
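A usage sketch for this callback, assuming the agaricus demo data shipped with the package:

library(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
# print the train error only every 10th iteration
bst <- xgb.train(params = list(objective = 'binary:logistic'),
                 data = dtrain, nrounds = 50,
                 watchlist = list(train = dtrain),
                 callbacks = list(cb.print.evaluation(period = 10)))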
#' Callback closure for logging the evaluation history
#'
#' @details
#' This callback function appends the current iteration evaluation results \code{bst_evaluation}
#' available in the calling parent frame to the \code{evaluation_log} list in a calling frame.
#'
#' The finalizer callback (called with \code{finalize = TRUE} in the end) converts
#' the \code{evaluation_log} list into a final data.table.
#'
#' The iteration evaluation result \code{bst_evaluation} must be a named numeric vector.
#'
#' Note: in the column names of the final data.table, the dash '-' character is replaced with
#' the underscore '_' in order to make the column names more like regular R identifiers.
#'
#' Callback function expects the following values to be set in its calling frame:
#' \code{evaluation_log},
#' \code{bst_evaluation},
#' \code{iteration}.
#'
#' @seealso
#' \code{\link{callbacks}}
#'
#' @export
cb.evaluation.log <- function() {
mnames <- NULL
init <- function(env) {
if (!is.list(env$evaluation_log))
stop("'evaluation_log' has to be a list")
mnames <<- names(env$bst_evaluation)
if (is.null(mnames) || any(mnames == ""))
stop("bst_evaluation must have non-empty names")
mnames <<- gsub('-', '_', names(env$bst_evaluation))
if(!is.null(env$bst_evaluation_err))
mnames <<- c(paste0(mnames, '_mean'), paste0(mnames, '_std'))
}
finalizer <- function(env) {
env$evaluation_log <- as.data.table(t(simplify2array(env$evaluation_log)))
setnames(env$evaluation_log, c('iter', mnames))
if(!is.null(env$bst_evaluation_err)) {
# rearrange col order from _mean,_mean,...,_std,_std,...
# to be _mean,_std,_mean,_std,...
len <- length(mnames)
means <- mnames[1:(len/2)]
stds <- mnames[(len/2 + 1):len]
cnames <- numeric(len)
cnames[c(TRUE, FALSE)] <- means
cnames[c(FALSE, TRUE)] <- stds
env$evaluation_log <- env$evaluation_log[, c('iter', cnames), with=FALSE]
}
}
callback <- function(env = parent.frame(), finalize = FALSE) {
if (is.null(mnames))
init(env)
if (finalize)
return(finalizer(env))
ev <- env$bst_evaluation
if(!is.null(env$bst_evaluation_err))
ev <- c(ev, env$bst_evaluation_err)
env$evaluation_log <- c(env$evaluation_log,
list(c(iter = env$iteration, ev)))
}
attr(callback, 'call') <- match.call()
attr(callback, 'name') <- 'cb.evaluation.log'
callback
}
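This callback is added automatically whenever a watchlist is supplied, and the finalized log ends up on the model object; a sketch (dtrain as in the earlier example):

bst <- xgb.train(params = list(objective = 'binary:logistic'),
                 data = dtrain, nrounds = 10,
                 watchlist = list(train = dtrain))
bst$evaluation_log   # a data.table with columns 'iter' and 'train_error'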
#' Callback closure for resetting the booster's parameters at each iteration.
#'
#' @param new_params a list where each element corresponds to a parameter that needs to be reset.
#' Each element's value must be either a vector of values of length \code{nrounds}
#' to be set at each iteration,
#' or a function of two parameters \code{learning_rates(iteration, nrounds)}
#' which returns a new parameter value by using the current iteration number
#' and the total number of boosting rounds.
#'
#' @details
#' This is a "pre-iteration" callback function used to reset booster's parameters
#' at the beginning of each iteration.
#'
#' Note that when training is resumed from some previous model, and a function is used to
#' reset a parameter value, the \code{nrounds} argument in this function would be
#' the number of boosting rounds in the current training.
#'
#' Callback function expects the following values to be set in its calling frame:
#' \code{bst} or \code{bst_folds},
#' \code{iteration},
#' \code{begin_iteration},
#' \code{end_iteration}.
#'
#' @seealso
#' \code{\link{callbacks}}
#'
#' @export
cb.reset.parameters <- function(new_params) {
if (typeof(new_params) != "list")
stop("'new_params' must be a list")
pnames <- gsub("\\.", "_", names(new_params))
nrounds <- NULL
# run some checks in the beginning
init <- function(env) {
nrounds <<- env$end_iteration - env$begin_iteration + 1
if (is.null(env$bst) && is.null(env$bst_folds))
stop("Parent frame has neither 'bst' nor 'bst_folds'")
# Some parameters are not allowed to be changed,
# since changing them mid-boosting would wreak havoc
not_allowed <- pnames %in%
c('num_class', 'num_output_group', 'size_leaf_vector', 'updater_seq')
if (any(not_allowed))
stop('Parameters ', paste(pnames[not_allowed]), " cannot be changed during boosting.")
for (n in pnames) {
p <- new_params[[n]]
if (is.function(p)) {
if (length(formals(p)) != 2)
stop("Parameter '", n, "' is a function but not of two arguments")
} else if (is.numeric(p) || is.character(p)) {
if (length(p) != nrounds)
stop("Length of '", n, "' has to be equal to 'nrounds'")
} else {
stop("Parameter '", n, "' is not a function or a vector")
}
}
}
callback <- function(env = parent.frame()) {
if (is.null(nrounds))
init(env)
i <- env$iteration
pars <- lapply(new_params, function(p) {
if (is.function(p))
return(p(i, nrounds))
p[i]
})
if (!is.null(env$bst)) {
xgb.parameters(env$bst$handle) <- pars
} else {
for (fd in env$bst_folds)
xgb.parameters(fd$bst$handle) <- pars
}
}
attr(callback, 'is_pre_iteration') <- TRUE
attr(callback, 'call') <- match.call()
attr(callback, 'name') <- 'cb.reset.parameters'
callback
}
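For example, a sketch of per-iteration learning-rate decay (dtrain as in the earlier examples):

decayed_eta <- function(iteration, nrounds) 0.5 * 0.98 ^ (iteration - 1)
bst <- xgb.train(params = list(objective = 'binary:logistic', eta = 0.5),
                 data = dtrain, nrounds = 20,
                 callbacks = list(cb.reset.parameters(list(eta = decayed_eta))))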
#' Callback closure to activate the early stopping.
#'
#' @param stopping_rounds The number of rounds with no improvement in
#' the evaluation metric in order to stop the training.
#' @param maximize whether to maximize the evaluation metric
#' @param metric_name the name of an evaluation column to use as the criterion for early
#' stopping. If not set, the last column would be used.
#' Let's say the test data in \code{watchlist} was labelled as \code{dtest},
#' and one wants to use the AUC in test data for early stopping regardless of where
#' it is in the \code{watchlist}, then one of the following would need to be set:
#' \code{metric_name='dtest-auc'} or \code{metric_name='dtest_auc'}.
#' All dash '-' characters in metric names are considered equivalent to '_'.
#' @param verbose whether to print the early stopping information.
#'
#' @details
#' This callback function determines the condition for early stopping
#' by setting the \code{stop_condition = TRUE} flag in its calling frame.
#'
#' The following additional fields are assigned to the model's R object:
#' \itemize{
#' \item \code{best_score} the evaluation score at the best iteration
#' \item \code{best_iteration} at which boosting iteration the best score has occurred (1-based index)
#' \item \code{best_ntreelimit} to use with the \code{ntreelimit} parameter in \code{predict}.
#' It differs from \code{best_iteration} in multiclass or random forest settings.
#' }
#'
#' The same values are also stored as xgb-attributes:
#' \itemize{
#' \item \code{best_iteration} is stored as a 0-based iteration index (for interoperability of binary models)
#' \item \code{best_msg} message string is also stored.
#' }
#'
#' At least one data element is required in the evaluation watchlist for early stopping to work.
#'
#' Callback function expects the following values to be set in its calling frame:
#' \code{stop_condition},
#' \code{bst_evaluation},
#' \code{rank},
#' \code{bst} (or \code{bst_folds} and \code{basket}),
#' \code{iteration},
#' \code{begin_iteration},
#' \code{end_iteration},
#' \code{num_parallel_tree}.
#'
#' @seealso
#' \code{\link{callbacks}},
#' \code{\link{xgb.attr}}
#'
#' @export
cb.early.stop <- function(stopping_rounds, maximize=FALSE,
metric_name=NULL, verbose=TRUE) {
# state variables
best_iteration <- -1
best_ntreelimit <- -1
best_score <- Inf
best_msg <- NULL
metric_idx <- 1
init <- function(env) {
if (length(env$bst_evaluation) == 0)
stop("For early stopping, watchlist must have at least one element")
eval_names <- gsub('-', '_', names(env$bst_evaluation))
if (!is.null(metric_name)) {
metric_idx <<- which(gsub('-', '_', metric_name) == eval_names)
if (length(metric_idx) == 0)
stop("'metric_name' for early stopping is not one of the following:\n",
paste(eval_names, collapse=' '), '\n')
}
if (is.null(metric_name) &&
length(env$bst_evaluation) > 1) {
metric_idx <<- length(eval_names)
if (verbose)
cat('Multiple eval metrics are present. Will use ',
eval_names[metric_idx], ' for early stopping.\n', sep = '')
}
metric_name <<- eval_names[metric_idx]
# maximize is usually NULL when not set in xgb.train and for built-in metrics
if (is.null(maximize))
maximize <<- ifelse(grepl('(_auc|_map|_ndcg)', metric_name), TRUE, FALSE)
if (verbose && NVL(env$rank, 0) == 0)
cat("Will train until ", metric_name, " hasn't improved in ",
stopping_rounds, " rounds.\n\n", sep = '')
best_iteration <<- 1
if (maximize) best_score <<- -Inf
env$stop_condition <- FALSE
if (!is.null(env$bst)) {
if (class(env$bst) != 'xgb.Booster')
stop("'bst' in the parent frame must be an 'xgb.Booster'")
if (!is.null(best_score <- xgb.attr(env$bst$handle, 'best_score'))) {
best_score <<- as.numeric(best_score)
best_iteration <<- as.numeric(xgb.attr(env$bst$handle, 'best_iteration')) + 1
best_msg <<- xgb.attr(env$bst$handle, 'best_msg')
} else {
xgb.attributes(env$bst$handle) <- list(best_iteration = best_iteration - 1,
best_score = best_score)
}
} else if (is.null(env$bst_folds) || is.null(env$basket)) {
stop("Parent frame has neither 'bst' nor ('bst_folds' and 'basket')")
}
}
finalizer <- function(env) {
if (!is.null(env$bst)) {
attr_best_score = as.numeric(xgb.attr(env$bst$handle, 'best_score'))
if (best_score != attr_best_score)
stop("Inconsistent 'best_score' values between the closure state: ", best_score,
" and the xgb.attr: ", attr_best_score)
env$bst$best_iteration = best_iteration
env$bst$best_ntreelimit = best_ntreelimit
env$bst$best_score = best_score
} else {
env$basket$best_iteration <- best_iteration
env$basket$best_ntreelimit <- best_ntreelimit
}
}
callback <- function(env = parent.frame(), finalize = FALSE) {
if (best_iteration < 0)
init(env)
if (finalize)
return(finalizer(env))
i <- env$iteration
score = env$bst_evaluation[metric_idx]
if (( maximize && score > best_score) ||
(!maximize && score < best_score)) {
best_msg <<- format.eval.string(i, env$bst_evaluation, env$bst_evaluation_err)
best_score <<- score
best_iteration <<- i
best_ntreelimit <<- best_iteration * env$num_parallel_tree
# save the property to attributes, so they will occur in checkpoint
if (!is.null(env$bst)) {
xgb.attributes(env$bst) <- list(
best_iteration = best_iteration - 1, # convert to 0-based index
best_score = best_score,
best_msg = best_msg,
best_ntreelimit = best_ntreelimit)
}
} else if (i - best_iteration >= stopping_rounds) {
env$stop_condition <- TRUE
env$end_iteration <- i
if (verbose && NVL(env$rank, 0) == 0)
cat("Stopping. Best iteration:\n", best_msg, "\n\n", sep = '')
}
}
attr(callback, 'call') <- match.call()
attr(callback, 'name') <- 'cb.early.stop'
callback
}
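A usage sketch with a held-out evaluation set, assuming the agaricus demo data (dtrain as in the earlier examples):

data(agaricus.test, package = 'xgboost')
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
bst <- xgb.train(params = list(objective = 'binary:logistic'),
                 data = dtrain, nrounds = 100,
                 watchlist = list(train = dtrain, test = dtest),
                 callbacks = list(cb.early.stop(stopping_rounds = 5,
                                                metric_name = 'test_error',
                                                maximize = FALSE)))
bst$best_iteration   # see also bst$best_score and bst$best_ntreelimit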
#' Callback closure for saving a model file.
#'
#' @param save_period save the model to disk after every
#' \code{save_period} iterations; 0 means save the model at the end.
#' @param save_name the name or path for the saved model file.
#' It can contain a \code{\link[base]{sprintf}} formatting specifier
#' to include the integer iteration number in the file name.
#' E.g., with \code{save_name} = 'xgboost_%04d.model',
#' the file saved at iteration 50 would be named "xgboost_0050.model".
#'
#' @details
#' This callback function allows saving an xgb-model file, either periodically every \code{save_period} iterations or at the end of training.
#'
#' Callback function expects the following values to be set in its calling frame:
#' \code{bst},
#' \code{iteration},
#' \code{begin_iteration},
#' \code{end_iteration}.
#'
#' @seealso
#' \code{\link{callbacks}}
#'
#' @export
cb.save.model <- function(save_period = 0, save_name = "xgboost.model") {
if (save_period < 0)
stop("'save_period' cannot be negative")
callback <- function(env = parent.frame()) {
if (is.null(env$bst))
stop("'save_model' callback requires the 'bst' booster object in its calling frame")
if ((save_period > 0 && (env$iteration - env$begin_iteration) %% save_period == 0) ||
(save_period == 0 && env$iteration == env$end_iteration))
xgb.save(env$bst, sprintf(save_name, env$iteration))
}
attr(callback, 'call') <- match.call()
attr(callback, 'name') <- 'cb.save.model'
callback
}
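For example, to checkpoint every 10 iterations with the iteration number embedded in the file name (dtrain as in the earlier examples):

bst <- xgb.train(params = list(objective = 'binary:logistic'),
                 data = dtrain, nrounds = 30,
                 callbacks = list(cb.save.model(save_period = 10,
                                                save_name = 'xgboost_%04d.model')))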
#' Callback closure for returning cross-validation based predictions.
#'
#' @param save_models a flag for whether to save the folds' models.
#'
#' @details
#' This callback function saves predictions for all of the test folds,
#' and also allows saving the folds' models.
#'
#' It is a "finalizer" callback and it uses early stopping information whenever it is available,
#' thus it must be run after the early stopping callback if the early stopping is used.
#'
#' Callback function expects the following values to be set in its calling frame:
#' \code{bst_folds},
#' \code{basket},
#' \code{data},
#' \code{end_iteration},
#' \code{num_parallel_tree},
#' \code{num_class}.
#'
#' @return
#' Predictions are returned inside of the \code{pred} element, which is either a vector or a matrix,
#' depending on the number of prediction outputs per data row. The order of predictions corresponds
#' to the order of rows in the original dataset. Note that when a custom \code{folds} list is
#' provided in \code{xgb.cv}, the predictions would only be returned properly when this list is a
#' non-overlapping list of k sets of indices, as in a standard k-fold CV. The predictions would not be
#' meaningful when user-provided folds have overlapping indices as in, e.g., random sampling splits.
#' When some of the indices in the training dataset are not included into user-provided \code{folds},
#' their prediction value would be \code{NA}.
#'
#' @seealso
#' \code{\link{callbacks}}
#'
#' @export
cb.cv.predict <- function(save_models = FALSE) {
finalizer <- function(env) {
if (is.null(env$basket) || is.null(env$bst_folds))
stop("'cb.cv.predict' callback requires 'basket' and 'bst_folds' lists in its calling frame")
N <- nrow(env$data)
pred <-
if (env$num_class > 1) {
matrix(NA_real_, N, env$num_class)
} else {
rep(NA_real_, N)
}
ntreelimit <- NVL(env$basket$best_ntreelimit,
env$end_iteration * env$num_parallel_tree)
for (fd in env$bst_folds) {
pr <- predict(fd$bst, fd$watchlist[[2]], ntreelimit = ntreelimit, reshape = TRUE)
if (is.matrix(pred)) {
pred[fd$index,] <- pr
} else {
pred[fd$index] <- pr
}
}
env$basket$pred <- pred
if (save_models) {
env$basket$models <- lapply(env$bst_folds, function(fd) {
xgb.attr(fd$bst, 'niter') <- env$end_iteration - 1
xgb.Booster.check(xgb.handleToBooster(fd$bst), saveraw = TRUE)
})
}
}
callback <- function(env = parent.frame(), finalize = FALSE) {
if (finalize)
return(finalizer(env))
}
attr(callback, 'call') <- match.call()
attr(callback, 'name') <- 'cb.cv.predict'
callback
}
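A usage sketch returning out-of-fold predictions and the per-fold models (dtrain as in the earlier examples):

cv <- xgb.cv(params = list(objective = 'binary:logistic'),
             data = dtrain, nrounds = 10, nfold = 5,
             callbacks = list(cb.cv.predict(save_models = TRUE)))
head(cv$pred)      # out-of-fold predictions, in the row order of dtrain
length(cv$models)  # one xgb.Booster per fold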
#
# Internal utility functions for callbacks ------------------------------------
#
# Format the evaluation metric string
format.eval.string <- function(iter, eval_res, eval_err=NULL) {
if (length(eval_res) == 0)
stop('no evaluation results')
enames <- names(eval_res)
if (is.null(enames))
stop('evaluation results must have names')
iter <- sprintf('[%d]\t', iter)
if (!is.null(eval_err)) {
if (length(eval_res) != length(eval_err))
stop('eval_res & eval_err lengths mismatch')
res <- paste0(sprintf("%s:%f+%f", enames, eval_res, eval_err), collapse='\t')
} else {
res <- paste0(sprintf("%s:%f", enames, eval_res), collapse='\t')
}
return(paste0(iter, res))
}
# Extract callback names from the list of callbacks
callback.names <- function(cb_list) {
unlist(lapply(cb_list, function(x) attr(x, 'name')))
}
# Extract callback calls from the list of callbacks
callback.calls <- function(cb_list) {
unlist(lapply(cb_list, function(x) attr(x, 'call')))
}
# Add a callback cb to the list and make sure that
# cb.early.stop and cb.cv.predict are at the end of the list
# with cb.cv.predict being the last (when present)
add.cb <- function(cb_list, cb) {
cb_list <- c(cb_list, cb)
names(cb_list) <- callback.names(cb_list)
if ('cb.early.stop' %in% names(cb_list)) {
cb_list <- c(cb_list, cb_list['cb.early.stop'])
# this removes only the first one
cb_list['cb.early.stop'] <- NULL
}
if ('cb.cv.predict' %in% names(cb_list)) {
cb_list <- c(cb_list, cb_list['cb.cv.predict'])
cb_list['cb.cv.predict'] <- NULL
}
cb_list
}
# Sort callbacks list into categories
categorize.callbacks <- function(cb_list) {
list(
pre_iter = Filter(function(x) {
pre <- attr(x, 'is_pre_iteration')
!is.null(pre) && pre
}, cb_list),
post_iter = Filter(function(x) {
pre <- attr(x, 'is_pre_iteration')
is.null(pre) || !pre
}, cb_list),
finalize = Filter(function(x) {
'finalize' %in% names(formals(x))
}, cb_list)
)
}
# Check whether all callback functions with names given by 'query_names' are present in the 'cb_list'.
has.callbacks <- function(cb_list, query_names) {
if (length(cb_list) < length(query_names))
return(FALSE)
if (!is.list(cb_list) ||
any(sapply(cb_list, class) != 'function')) {
stop('`cb_list` must be a list of callback functions')
}
cb_names <- callback.names(cb_list)
if (!is.character(cb_names) ||
length(cb_names) != length(cb_list) ||
any(cb_names == "")) {
stop('All callbacks in the `cb_list` must have a non-empty `name` attribute')
}
if (!is.character(query_names) ||
length(query_names) == 0 ||
any(query_names == "")) {
stop('query_names must be a non-empty vector of non-empty character names')
}
return(all(query_names %in% cb_names))
}

(deleted R file: the S4 getinfo generic and method for xgb.DMatrix)

@@ -1,55 +0,0 @@
setClass('xgb.DMatrix')
#' Get information of an xgb.DMatrix object
#'
#' Get information of an xgb.DMatrix object
#'
#' The information can be one of the following:
#'
#' \itemize{
#' \item \code{label}: the label XGBoost learns from;
#' \item \code{weight}: weights for rescaling each instance;
#' \item \code{base_margin}: the base prediction XGBoost will boost from;
#' \item \code{nrow}: number of rows of the \code{xgb.DMatrix}.
#' }
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#' labels <- getinfo(dtrain, 'label')
#' setinfo(dtrain, 'label', 1-labels)
#' labels2 <- getinfo(dtrain, 'label')
#' stopifnot(all(labels2 == 1-labels))
#' @rdname getinfo
#' @export
getinfo <- function(object, ...){
UseMethod("getinfo")
}
#' @param object Object of class \code{xgb.DMatrix}
#' @param name the name of the field to get
#' @param ... other parameters
#' @rdname getinfo
#' @method getinfo xgb.DMatrix
setMethod("getinfo", signature = "xgb.DMatrix",
definition = function(object, name) {
if (typeof(name) != "character") {
stop("xgb.getinfo: name must be character")
}
if (class(object) != "xgb.DMatrix") {
stop("xgb.setinfo: first argument dtrain must be xgb.DMatrix")
}
if (name != "label" && name != "weight" &&
name != "base_margin" && name != "nrow") {
stop(paste("xgb.getinfo: unknown info name", name))
}
if (name != "nrow"){
ret <- .Call("XGDMatrixGetInfo_R", object, name, PACKAGE = "xgboost")
} else {
ret <- xgb.numrow(object)
}
return(ret)
})

(deleted R file: the S4 nrow method for xgb.DMatrix)

@@ -1,19 +0,0 @@
setGeneric("nrow")
#' @title Number of xgb.DMatrix rows
#' @description \code{nrow} returns the number of rows present in the \code{xgb.DMatrix}.
#' @param x Object of class \code{xgb.DMatrix}
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#' stopifnot(nrow(dtrain) == nrow(train$data))
#'
#' @export
setMethod("nrow",
signature = "xgb.DMatrix",
definition = function(x) {
xgb.numrow(x)
}
)

(deleted R file: the S4 predict method for xgb.Booster)

@@ -1,80 +0,0 @@
setClass("xgb.Booster.handle")
setClass("xgb.Booster",
slots = c(handle = "xgb.Booster.handle",
raw = "raw"))
#' Predict method for eXtreme Gradient Boosting model
#'
#' Predicted values based on xgboost model object.
#'
#' @param object Object of class "xgb.Booster"
#' @param newdata takes \code{matrix}, \code{dgCMatrix}, local data file or
#' \code{xgb.DMatrix}.
#' @param missing only used when the input is a dense matrix; pick a float
#' value that represents missing values. Sometimes data uses 0 or another extreme value to represent missing values.
#' @param outputmargin whether the prediction should be shown as the original
#' value of the sum of functions; when outputmargin=TRUE, the prediction is the
#' untransformed margin value. In logistic regression, outputmargin=TRUE will
#' output the value before the logistic transformation.
#' @param ntreelimit limit the number of trees used in prediction; this parameter is
#' only valid for gbtree, not for gblinear. Set it to a value bigger
#' than 0 to enable. All trees are used by default.
#' @param predleaf whether to predict leaf indices instead. If set to TRUE, the output will be a matrix object.
#'
#' @details
#' The purpose of the \code{ntreelimit} option is to let the user train a model with lots
#' of trees but use only the first trees for prediction, to avoid overfitting
#' (without having to train a new model with fewer trees).
#'
#' The \code{predleaf} option is inspired by Section 3.1 of the paper
#' \code{Practical Lessons from Predicting Clicks on Ads at Facebook}.
#' The idea is to use the model as a generator of new features which capture non-linear
#' relationships among the original features.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
#' pred <- predict(bst, test$data)
#' @export
setMethod("predict", signature = "xgb.Booster",
definition = function(object, newdata, missing = NA,
outputmargin = FALSE, ntreelimit = NULL, predleaf = FALSE) {
if (class(object) != "xgb.Booster"){
stop("predict: model in prediction must be of class xgb.Booster")
} else {
object <- xgb.Booster.check(object, saveraw = FALSE)
}
if (class(newdata) != "xgb.DMatrix") {
newdata <- xgb.DMatrix(newdata, missing = missing)
}
if (is.null(ntreelimit)) {
ntreelimit <- 0
} else {
if (ntreelimit < 1){
stop("predict: ntreelimit must be equal to or greater than 1")
}
}
option <- 0
if (outputmargin) {
option <- option + 1
}
if (predleaf) {
option <- option + 2
}
ret <- .Call("XGBoosterPredict_R", object$handle, newdata, as.integer(option),
as.integer(ntreelimit), PACKAGE = "xgboost")
if (predleaf){
len <- getinfo(newdata, "nrow")
if (length(ret) == len){
ret <- matrix(ret,ncol = 1)
} else {
ret <- matrix(ret, ncol = len)
ret <- t(ret)
}
}
return(ret)
})
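Although this S4 method is removed in the refactor, leaf-index prediction itself remains available; a sketch of generating leaf-index features as described above (bst and the agaricus demo data as in the earlier sketches):

data(agaricus.test, package = 'xgboost')
leaf_idx <- predict(bst, agaricus.test$data, predleaf = TRUE)
dim(leaf_idx)   # one row per test case, one column per tree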

(deleted R file: the S4 predict method for xgb.Booster.handle)

@@ -1,18 +0,0 @@
#' Predict method for eXtreme Gradient Boosting model handle
#'
#' Predicted values based on xgb.Booster.handle object.
#'
#' @param object Object of class "xgb.Booster.handle"
#' @param ... Parameters passed to \code{predict.xgb.Booster}
#'
setMethod("predict", signature = "xgb.Booster.handle",
definition = function(object, ...) {
if (class(object) != "xgb.Booster.handle"){
stop("predict: model in prediction must be of class xgb.Booster.handle")
}
bst <- xgb.handleToBooster(object)
ret <- predict(bst, ...)
return(ret)
})

(deleted R file: the S4 setinfo method for xgb.DMatrix)

@@ -1,37 +0,0 @@
#' Set information of an xgb.DMatrix object
#'
#' Set information of an xgb.DMatrix object
#'
#' It can be one of the following:
#'
#' \itemize{
#' \item \code{label}: the label XGBoost learns from;
#' \item \code{weight}: weights for rescaling each instance;
#' \item \code{base_margin}: the base prediction XGBoost will boost from;
#' \item \code{group}.
#' }
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#' labels <- getinfo(dtrain, 'label')
#' setinfo(dtrain, 'label', 1-labels)
#' labels2 <- getinfo(dtrain, 'label')
#' stopifnot(all(labels2 == 1-labels))
#' @rdname setinfo
#' @export
setinfo <- function(object, ...){
UseMethod("setinfo")
}
#' @param object Object of class "xgb.DMatrix"
#' @param name the name of the field to set
#' @param info the specific field of information to set
#' @param ... other parameters
#' @rdname setinfo
#' @method setinfo xgb.DMatrix
setMethod("setinfo", signature = "xgb.DMatrix",
definition = function(object, name, info) {
xgb.setinfo(object, name, info)
})

(deleted R file: the S4 slice method for xgb.DMatrix)

@@ -1,44 +0,0 @@
setClass('xgb.DMatrix')
#' Get a new DMatrix containing the specified rows of
#' the original xgb.DMatrix object
#'
#' Get a new DMatrix containing the specified rows of
#' the original xgb.DMatrix object
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#' dsub <- slice(dtrain, 1:3)
#' @rdname slice
#' @export
slice <- function(object, ...){
UseMethod("slice")
}
#' @param object Object of class "xgb.DMatrix"
#' @param idxset an integer vector of indices of rows needed
#' @param ... other parameters
#' @rdname slice
#' @method slice xgb.DMatrix
setMethod("slice", signature = "xgb.DMatrix",
definition = function(object, idxset, ...) {
if (class(object) != "xgb.DMatrix") {
stop("slice: first argument dtrain must be xgb.DMatrix")
}
ret <- .Call("XGDMatrixSliceDMatrix_R", object, idxset,
PACKAGE = "xgboost")
attr_list <- attributes(object)
nr <- xgb.numrow(object)
len <- sapply(attr_list,length)
ind <- which(len == nr)
if (length(ind) > 0) {
nms <- names(attr_list)[ind]
for (i in 1:length(ind)) {
attr(ret,nms[i]) <- attr(object,nms[i])[idxset]
}
}
return(structure(ret, class = "xgb.DMatrix"))
})

R-package/R/utils.R

@@ -1,305 +1,224 @@
#' @importClassesFrom Matrix dgCMatrix dgeMatrix
#' @import methods
#
# This file is for the low-level reusable utility functions
# that are not supposed to be visible to the user.
#
# depends on matrix
.onLoad <- function(libname, pkgname) {
library.dynam("xgboost", pkgname, libname)
}
.onUnload <- function(libpath) {
library.dynam.unload("xgboost", libpath)
#
# General helper utilities ----------------------------------------------------
#
# SQL-style NVL shortcut.
NVL <- function(x, val) {
if (is.null(x))
return(val)
if (is.vector(x)) {
x[is.na(x)] <- val
return(x)
}
if (typeof(x) == 'closure')
return(x)
stop('unsupported type of x for NVL')
}
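A few examples of the fallback behavior NVL implements (illustrative only):

NVL(NULL, 0)          # 0
NVL(c(1, NA, 3), 0)   # 1 0 3 -- NA elements are replaced
f <- function(x) x
identical(NVL(f, 0), f)   # TRUE -- closures pass through unchanged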
# set information into dmatrix, this mutate dmatrix
xgb.setinfo <- function(dmat, name, info) {
if (class(dmat) != "xgb.DMatrix") {
stop("xgb.setinfo: first argument dtrain must be xgb.DMatrix")
#
# Low-level functions for boosting --------------------------------------------
#
# Merges booster params with whatever is provided in ...
# plus runs some checks
check.booster.params <- function(params, ...) {
if (typeof(params) != "list")
stop("params must be a list")
# in R interface, allow for '.' instead of '_' in parameter names
names(params) <- gsub("\\.", "_", names(params))
# merge parameters from the params and the dots-expansion
dot_params <- list(...)
names(dot_params) <- gsub("\\.", "_", names(dot_params))
if (length(intersect(names(params),
names(dot_params))) > 0)
stop("Same parameters in 'params' and in the call are not allowed. Please check your 'params' list.")
params <- c(params, dot_params)
# providing a parameter multiple times only makes sense for 'eval_metric'
name_freqs <- table(names(params))
multi_names <- setdiff(names(name_freqs[name_freqs > 1]), 'eval_metric')
if (length(multi_names) > 0) {
warning("The following parameters were provided multiple times:\n\t",
paste(multi_names, collapse=', '), "\n Only the last value for each of them will be used.\n")
# While xgboost itself would choose the last value for a multi-parameter,
# will do some clean-up here b/c multi-parameters could be used further in R code, and R would
# pick the 1st (not the last) value when multiple elements with the same name are present in a list.
for (n in multi_names) {
del_idx <- which(n == names(params))
del_idx <- del_idx[-length(del_idx)]
params[[del_idx]] <- NULL
}
}
if (name == "label") {
if (length(info) != xgb.numrow(dmat))
stop("The length of labels must equal to the number of rows in the input data")
.Call("XGDMatrixSetInfo_R", dmat, name, as.numeric(info),
PACKAGE = "xgboost")
return(TRUE)
# for multiclass, expect num_class to be set
if (typeof(params[['objective']]) == "character" &&
substr(NVL(params[['objective']], 'x'), 1, 6) == 'multi:') {
if (as.numeric(NVL(params[['num_class']], 0)) < 2)
stop("'num_class' > 1 parameter must be set for multiclass classification")
}
if (name == "weight") {
if (length(info) != xgb.numrow(dmat))
stop("The length of weights must equal to the number of rows in the input data")
.Call("XGDMatrixSetInfo_R", dmat, name, as.numeric(info),
PACKAGE = "xgboost")
return(TRUE)
}
if (name == "base_margin") {
# if (length(info)!=xgb.numrow(dmat))
# stop("The length of base margin must equal to the number of rows in the input data")
.Call("XGDMatrixSetInfo_R", dmat, name, as.numeric(info),
PACKAGE = "xgboost")
return(TRUE)
}
if (name == "group") {
if (sum(info) != xgb.numrow(dmat))
stop("The sum of groups must equal to the number of rows in the input data")
.Call("XGDMatrixSetInfo_R", dmat, name, as.integer(info),
PACKAGE = "xgboost")
return(TRUE)
}
stop(paste("xgb.setinfo: unknown info name", name))
return(FALSE)
return(params)
}
# construct a Booster from cachelist
xgb.Booster <- function(params = list(), cachelist = list(), modelfile = NULL) {
  if (typeof(cachelist) != "list") {
    stop("xgb.Booster: only accepts list of DMatrix as cachelist")
  }
  for (dm in cachelist) {
    if (class(dm) != "xgb.DMatrix") {
      stop("xgb.Booster: only accepts list of DMatrix as cachelist")
    }
  }
  handle <- .Call("XGBoosterCreate_R", cachelist, PACKAGE = "xgboost")
  if (length(params) != 0) {
    for (i in seq_along(params)) {
      p <- params[i]
      .Call("XGBoosterSetParam_R", handle, gsub("\\.", "_", names(p)), as.character(p),
            PACKAGE = "xgboost")
    }
  }
  if (!is.null(modelfile)) {
    if (typeof(modelfile) == "character") {
      .Call("XGBoosterLoadModel_R", handle, modelfile, PACKAGE = "xgboost")
    } else if (typeof(modelfile) == "raw") {
      .Call("XGBoosterLoadModelFromRaw_R", handle, modelfile, PACKAGE = "xgboost")
    } else {
      stop("xgb.Booster: modelfile must be character or raw vector")
    }
  }
  return(structure(handle, class = "xgb.Booster.handle"))
}
# Performs some checks related to custom objective function.
# WARNING: has side-effects and can modify 'params' and 'obj' in its calling frame
check.custom.obj <- function(env = parent.frame()) {
  if (!is.null(env$params[['objective']]) && !is.null(env$obj))
    stop("Setting objectives in 'params' and 'obj' at the same time is not allowed")
  if (!is.null(env$obj) && typeof(env$obj) != 'closure')
    stop("'obj' must be a function")
  # handle the case when custom objective function was provided through params
  if (!is.null(env$params[['objective']]) &&
      typeof(env$params$objective) == 'closure') {
    env$obj <- env$params$objective
    p <- env$params
    p$objective <- NULL
    env$params <- p
  }
}
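# A minimal sketch of how check.custom.obj() is used by a calling function
# ('my.train' below is hypothetical, for illustration only):
my.train <- function(params = list(), obj = NULL) {
  check.custom.obj()  # may move a closure from params$objective into obj
  list(params = params, obj = obj)
}
# res <- my.train(params = list(objective = function(preds, dtrain) NULL))
# # res$params no longer contains 'objective'; res$obj is now the closure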
# convert xgb.Booster.handle to xgb.Booster
xgb.handleToBooster <- function(handle, raw = NULL) {
  bst <- list(handle = handle, raw = raw)
  class(bst) <- "xgb.Booster"
  return(bst)
}
# Performs some checks related to custom evaluation function.
# WARNING: has side-effects and can modify 'params' and 'feval' in its calling frame
check.custom.eval <- function(env = parent.frame()) {
  if (!is.null(env$params[['eval_metric']]) && !is.null(env$feval))
    stop("Setting evaluation metrics in 'params' and 'feval' at the same time is not allowed")
  if (!is.null(env$feval) && typeof(env$feval) != 'closure')
    stop("'feval' must be a function")
  if (!is.null(env$feval) && is.null(env$maximize))
    stop("Please set 'maximize' to indicate whether the metric needs to be maximized or not")
  # handle a situation when custom eval function was provided through params
  if (!is.null(env$params[['eval_metric']]) &&
      typeof(env$params$eval_metric) == 'closure') {
    env$feval <- env$params$eval_metric
    p <- env$params
    p[which(names(p) == 'eval_metric')] <- NULL
    env$params <- p
  }
}
# Check whether an xgb.Booster object is complete
xgb.Booster.check <- function(bst, saveraw = TRUE)
{
isnull <- is.null(bst$handle)
if (!isnull) {
isnull <- .Call("XGCheckNullPtr_R", bst$handle, PACKAGE="xgboost")
}
if (isnull) {
bst$handle <- xgb.Booster(modelfile = bst$raw)
} else {
if (is.null(bst$raw) && saveraw)
bst$raw <- xgb.save.raw(bst$handle)
}
return(bst)
}
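# Illustration (sketch): a typical use of xgb.Booster.check() is reviving a
# booster whose external pointer went stale, e.g. after restoring a saved R
# session ('model.rds' is hypothetical):
# bst <- readRDS('model.rds')    # the handle inside is a null pointer now
# bst <- xgb.Booster.check(bst)  # re-creates the handle from bst$raw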
## ---- The following are low-level iterative functions; they are not needed
## unless you want to use them directly -------------------------------------
# get dmatrix from data, label
xgb.get.DMatrix <- function(data, label = NULL, missing = NA, weight = NULL) {
inClass <- class(data)
if (inClass == "dgCMatrix" || inClass == "matrix") {
if (is.null(label)) {
stop("xgboost: need label when data is a matrix")
}
dtrain <- xgb.DMatrix(data, label = label, missing = missing)
if (!is.null(weight)){
xgb.setinfo(dtrain, "weight", weight)
}
} else {
if (!is.null(label)) {
warning("xgboost: label will be ignored.")
}
if (inClass == "character") {
dtrain <- xgb.DMatrix(data)
} else if (inClass == "xgb.DMatrix") {
dtrain <- data
} else if (inClass == "data.frame") {
stop("xgboost only support numerical matrix input,
use 'data.matrix' to transform the data.")
} else {
stop("xgboost: Invalid input of data")
}
}
return (dtrain)
}
xgb.numrow <- function(dmat) {
nrow <- .Call("XGDMatrixNumRow_R", dmat, PACKAGE="xgboost")
return(nrow)
}
# iteratively update booster with customized statistics
xgb.iter.boost <- function(booster, dtrain, gpair) {
  if (class(booster) != "xgb.Booster.handle") {
    stop("xgb.iter.boost: first argument must be type xgb.Booster.handle")
  }
  if (class(dtrain) != "xgb.DMatrix") {
    stop("xgb.iter.boost: second argument must be type xgb.DMatrix")
  }
.Call("XGBoosterBoostOneIter_R", booster, dtrain, gpair$grad, gpair$hess, PACKAGE = "xgboost")
return(TRUE)
}
# Update booster with dtrain for one iteration
xgb.iter.update <- function(booster, dtrain, iter, obj = NULL) {
  if (class(booster) != "xgb.Booster.handle") {
    stop("xgb.iter.update: first argument must be type xgb.Booster.handle")
  }
  if (class(dtrain) != "xgb.DMatrix") {
    stop("xgb.iter.update: second argument must be type xgb.DMatrix")
  }
  if (is.null(obj)) {
    .Call("XGBoosterUpdateOneIter_R", booster, as.integer(iter), dtrain,
          PACKAGE = "xgboost")
  } else {
    # use a customized objective to compute gradient statistics and boost
    pred <- predict(booster, dtrain)
    gpair <- obj(pred, dtrain)
    xgb.iter.boost(booster, dtrain, gpair)
  }
  return(TRUE)
}
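# Example (sketch) of a customized objective that could be passed as 'obj':
# it must return the per-observation gradient and hessian of the loss.
squarederrobj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  grad <- preds - labels          # d/dpred of 0.5 * (pred - label)^2
  hess <- rep(1, length(preds))   # the second derivative is constant
  list(grad = grad, hess = hess)
}
# xgb.iter.update(handle, dtrain, iter = 0, obj = squarederrobj)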
# iteratively evaluate one iteration
xgb.iter.eval <- function(booster, watchlist, iter, feval = NULL, prediction = FALSE) {
  if (class(booster) != "xgb.Booster.handle") {
    stop("xgb.eval: first argument must be type xgb.Booster.handle")
  }
  if (typeof(watchlist) != "list") {
    stop("xgb.eval: only accepts list of DMatrix as watchlist")
  }
  for (w in watchlist) {
    if (class(w) != "xgb.DMatrix") {
      stop("xgb.eval: watchlist can only contain xgb.DMatrix")
    }
  }
  msg <- ""
  if (length(watchlist) != 0) {
    if (is.null(feval)) {
      evnames <- list()
      for (i in seq_along(watchlist)) {
        w <- watchlist[i]
        if (length(names(w)) == 0) {
          stop("xgb.eval: a name tag must be present for every element in watchlist")
        }
        evnames <- append(evnames, names(w))
      }
      msg <- .Call("XGBoosterEvalOneIter_R", booster, as.integer(iter), watchlist,
                   evnames, PACKAGE = "xgboost")
    } else {
      msg <- paste("[", iter, "]", sep="")
      for (j in seq_along(watchlist)) {
        w <- watchlist[j]
        if (length(names(w)) == 0) {
          stop("xgb.eval: a name tag must be present for every element in watchlist")
        }
        preds <- predict(booster, w[[1]])
        ret <- feval(preds, w[[1]])
        msg <- paste(msg, "\t", names(w), "-", ret$metric, ":", ret$value, sep="")
      }
    }
  }
  if (prediction) {
    preds <- predict(booster, watchlist[[2]])
    return(list(msg, preds))
  }
  return(msg)
}
# Evaluate one iteration.
# Returns a named vector of evaluation metrics
# with the names in a 'datasetname-metricname' format.
xgb.iter.eval <- function(booster, watchlist, iter, feval = NULL) {
  if (class(booster) != "xgb.Booster.handle")
    stop("first argument type must be xgb.Booster.handle")
  if (length(watchlist) == 0)
    return(NULL)
  evnames <- names(watchlist)
  if (is.null(feval)) {
    msg <- .Call("XGBoosterEvalOneIter_R", booster, as.integer(iter), watchlist,
                 as.list(evnames), PACKAGE = "xgboost")
    msg <- stri_split_regex(msg, '(\\s+|:|\\s+)')[[1]][-1]
    res <- as.numeric(msg[c(FALSE, TRUE)])  # even indices are the values
    names(res) <- msg[c(TRUE, FALSE)]       # odd indices are the names
  } else {
    res <- sapply(seq_along(watchlist), function(j) {
      w <- watchlist[[j]]
      preds <- predict(booster, w)  # predict using all trees
      eval_res <- feval(preds, w)
      out <- eval_res$value
      names(out) <- paste0(evnames[j], "-", eval_res$metric)
      out
    })
  }
  return(res)
}
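# Example (sketch) of a customized evaluation function for 'feval': it must
# return list(metric = <name>, value = <value>) for the given predictions.
evalerror <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  err <- mean(as.numeric(preds > 0) != labels)
  list(metric = "error", value = err)
}
# xgb.iter.eval(handle, list(train = dtrain), iter = 0, feval = evalerror)
# would return a named vector like c(`train-error` = 0.0253)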
#------------------------------------------
# helper functions for cross validation
#
xgb.cv.mknfold <- function(dall, nfold, param, stratified, folds) {
  if (nfold <= 1) {
    stop("nfold must be bigger than 1")
  }
  if (is.null(folds)) {
    if (exists('objective', where=param) && is.character(param$objective) &&
        strtrim(param[['objective']], 5) == 'rank:') {
      stop("\tAutomatic creation of CV-folds is not implemented for ranking!\n",
           "\tConsider providing pre-computed CV-folds through the folds parameter.")
    }
    y <- getinfo(dall, 'label')
    randidx <- sample(1 : xgb.numrow(dall))
    if (stratified & length(y) == length(randidx)) {
      y <- y[randidx]
      #
      # WARNING: some heuristic logic is employed to identify classification setting!
      #
      # For classification, need to convert y labels to factor before making the folds,
      # and then do stratification by factor levels.
      # For regression, leave y numeric and do stratification by quantiles.
      if (exists('objective', where=param) && is.character(param$objective)) {
        # If 'objective' provided in params, assume that y is a classification label
        # unless objective is reg:linear
        if (param[['objective']] != 'reg:linear') y <- factor(y)
      } else {
        # If no 'objective' given in params, it means that user either wants to use
        # the default 'reg:linear' objective or has provided a custom obj function.
        # Here, assume classification setting when y has 5 or less unique values:
        if (length(unique(y)) <= 5) y <- factor(y)
      }
      folds <- xgb.createFolds(y, nfold)
    } else {
      # make simple non-stratified folds
      kstep <- length(randidx) %/% nfold
      folds <- list()
      for (i in 1:(nfold - 1)) {
        folds[[i]] <- randidx[1:kstep]
        randidx <- setdiff(randidx, folds[[i]])
      }
      folds[[nfold]] <- randidx
    }
  }
  ret <- list()
  for (k in 1:nfold) {
    dtest <- slice(dall, folds[[k]])
    didx <- c()
    for (i in 1:nfold) {
      if (i != k) {
        didx <- append(didx, folds[[i]])
      }
    }
    dtrain <- slice(dall, didx)
    bst <- xgb.Booster(param, list(dtrain, dtest))
    watchlist <- list(train=dtrain, test=dtest)
    ret[[k]] <- list(dtrain=dtrain, booster=bst, watchlist=watchlist, index=folds[[k]])
  }
  return(ret)
}
# Helper functions for cross validation ---------------------------------------
#
# Generates random (stratified if needed) CV folds
generate.cv.folds <- function(nfold, nrows, stratified, label, params) {
  # cannot do it for rank
  if (exists('objective', where=params) &&
      is.character(params$objective) &&
      strtrim(params$objective, 5) == 'rank:') {
    stop("\n\tAutomatic generation of CV-folds is not implemented for ranking!\n",
         "\tConsider providing pre-computed CV-folds through the 'folds=' parameter.\n")
  }
  # shuffle
  rnd_idx <- sample(1:nrows)
  if (stratified &&
      length(label) == length(rnd_idx)) {
    y <- label[rnd_idx]
    # WARNING: some heuristic logic is employed to identify classification setting!
    # - For classification, need to convert y labels to factor before making the folds,
    #   and then do stratification by factor levels.
    # - For regression, leave y numeric and do stratification by quantiles.
    if (exists('objective', where=params) &&
        is.character(params$objective)) {
      # If 'objective' provided in params, assume that y is a classification label
      # unless objective is reg:linear
      if (params$objective != 'reg:linear')
        y <- factor(y)
    } else {
      # If no 'objective' given in params, it means that user either wants to use
      # the default 'reg:linear' objective or has provided a custom obj function.
      # Here, assume classification setting when y has 5 or less unique values:
      if (length(unique(y)) <= 5)
        y <- factor(y)
    }
    folds <- xgb.createFolds(y, nfold)
  } else {
    # make simple non-stratified folds
    kstep <- length(rnd_idx) %/% nfold
    folds <- list()
    for (i in 1:(nfold - 1)) {
      folds[[i]] <- rnd_idx[1:kstep]
      rnd_idx <- rnd_idx[-(1:kstep)]
    }
    folds[[nfold]] <- rnd_idx
  }
  return(folds)
}
xgb.cv.aggcv <- function(res, showsd = TRUE) {
header <- res[[1]]
ret <- header[1]
for (i in 2:length(header)) {
kv <- strsplit(header[i], ":")[[1]]
ret <- paste(ret, "\t", kv[1], ":", sep="")
stats <- c()
stats[1] <- as.numeric(kv[2])
for (j in 2:length(res)) {
tkv <- strsplit(res[[j]][i], ":")[[1]]
stats[j] <- as.numeric(tkv[2])
}
ret <- paste(ret, sprintf("%f", mean(stats)), sep="")
if (showsd) {
ret <- paste(ret, sprintf("+%f", stats::sd(stats)), sep="")
}
}
return (ret)
}
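# Illustration (hypothetical values): xgb.cv.aggcv() folds the per-fold
# evaluation strings into "mean+sd" summaries across the CV folds:
# res <- list(c("[1]", "train-rmse:0.10", "test-rmse:0.15"),
#             c("[1]", "train-rmse:0.12", "test-rmse:0.17"))
# xgb.cv.aggcv(res, showsd = TRUE)
# # gives "[1]\ttrain-rmse:0.110000+0.014142\ttest-rmse:0.160000+0.014142"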
# Creates CV folds stratified by the values of y.
# It was borrowed from caret::createFolds and simplified
# by always returning an unnamed list of fold indices.
xgb.createFolds <- function(y, k = 10)
{
  if (is.numeric(y)) {
    ## Group the numeric data based on their magnitudes
    ## and sample within those groups.
@@ -318,7 +237,7 @@ xgb.createFolds <- function(y, k = 10)
                   include.lowest = TRUE)
  }
  if (k < length(y)) {
    ## reset levels so that the possible levels and
    ## the levels in the vector are the same
    y <- factor(as.character(y))
@@ -328,19 +247,83 @@ xgb.createFolds <- function(y, k = 10)
    ## For each class, balance the fold allocation as far
    ## as possible, then resample the remainder.
    ## The final assignment of folds is also randomized.
    for (i in 1:length(numInClass)) {
      ## create a vector of integers from 1:k as many times as possible without
      ## going over the number of samples in the class. Note that if the number
      ## of samples in a class is less than k, nothing is produced here.
      seqVector <- rep(1:k, numInClass[i] %/% k)
      ## add enough random integers to get length(seqVector) == numInClass[i]
      if (numInClass[i] %% k > 0) seqVector <- c(seqVector, sample(1:k, numInClass[i] %% k))
      ## shuffle the integers for fold assignment and assign to this class's data
      foldVector[which(y == dimnames(numInClass)$y[i])] <- sample(seqVector)
    }
  } else {
    foldVector <- seq(along = y)
  }
  out <- split(seq(along = y), foldVector)
  names(out) <- NULL
  out
}
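# Illustration (assuming the parts of xgb.createFolds elided above): for a
# factor y, the folds preserve class proportions, e.g. a 90/10 class balance
# yields roughly 18/2 observations per fold with k = 5:
# y <- factor(rep(c(0, 1), times = c(90, 10)))
# folds <- xgb.createFolds(y, k = 5)
# sapply(folds, function(idx) table(y[idx]))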
#
# Deprecation notice utilities ------------------------------------------------
#
#' Deprecation notices.
#'
#' Some of the parameter names have been changed in order to make the code style more uniform.
#' The deprecated parameters will be removed in the next release.
#'
#' To see all the current deprecated and new parameters, check the \code{xgboost:::depr_par_lut} table.
#'
#' A deprecation warning is shown when any of the deprecated parameters is used in a call.
#' An additional warning is shown when there was a partial match to a deprecated parameter
#' (as R is able to partially match parameter names).
#'
#' @name xgboost-deprecated
NULL
# Lookup table for the deprecated parameters bookkeeping
depr_par_lut <- matrix(c(
'print.every.n', 'print_every_n',
'early.stop.round', 'early_stopping_rounds',
'training.data', 'data',
'with.stats', 'with_stats',
'numberOfClusters', 'n_clusters',
'features.keep', 'features_keep',
'plot.height','plot_height',
'plot.width','plot_width',
'dummy', 'DUMMY'
), ncol=2, byrow = TRUE)
colnames(depr_par_lut) <- c('old', 'new')
# Checks the dot-parameters for deprecated names
# (including partial matching), gives a deprecation warning,
# and sets new parameters to the old parameters' values within its parent frame.
# WARNING: has side-effects
check.deprecation <- function(..., env = parent.frame()) {
pars <- list(...)
# exact and partial matches
all_match <- pmatch(names(pars), depr_par_lut[,1])
# indices of matched pars' names
idx_pars <- which(!is.na(all_match))
if (length(idx_pars) == 0) return()
# indices of matched LUT rows
idx_lut <- all_match[idx_pars]
# which of idx_lut were the exact matches?
ex_match <- depr_par_lut[idx_lut,1] %in% names(pars)
for (i in seq_along(idx_pars)) {
pars_par <- names(pars)[idx_pars[i]]
old_par <- depr_par_lut[idx_lut[i], 1]
new_par <- depr_par_lut[idx_lut[i], 2]
if (!ex_match[i]) {
warning("'", pars_par, "' was partially matched to '", old_par,"'")
}
.Deprecated(new_par, old=old_par, package = 'xgboost')
if (new_par != 'NULL') {
eval(parse(text = paste(new_par, '<-', pars[[pars_par]])), envir = env)
}
}
}
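# Illustration (sketch, 'f' is hypothetical): a function accepting '...' can
# map old-style dotted parameter names onto the new ones via check.deprecation():
f <- function(print_every_n = 1L, ...) {
  check.deprecation(...)
  print_every_n
}
# f(print.every.n = 5)  # gives a deprecation warning and returns 5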

R-package/R/xgb.Booster.R (new file, 486 lines)
@@ -0,0 +1,486 @@
# Construct a Booster from cachelist
# internal utility function
xgb.Booster <- function(params = list(), cachelist = list(), modelfile = NULL) {
if (typeof(cachelist) != "list" ||
any(sapply(cachelist, class) != 'xgb.DMatrix')) {
stop("xgb.Booster only accepts list of DMatrix as cachelist")
}
handle <- .Call("XGBoosterCreate_R", cachelist, PACKAGE = "xgboost")
if (!is.null(modelfile)) {
if (typeof(modelfile) == "character") {
.Call("XGBoosterLoadModel_R", handle, modelfile, PACKAGE = "xgboost")
} else if (typeof(modelfile) == "raw") {
.Call("XGBoosterLoadModelFromRaw_R", handle, modelfile, PACKAGE = "xgboost")
} else if (class(modelfile) == "xgb.Booster") {
modelfile <- xgb.Booster.check(modelfile, saveraw=TRUE)
.Call("XGBoosterLoadModelFromRaw_R", handle, modelfile$raw, PACKAGE = "xgboost")
} else {
stop("modelfile must be either character filename, or raw booster dump, or xgb.Booster object")
}
}
class(handle) <- "xgb.Booster.handle"
if (length(params) > 0) {
xgb.parameters(handle) <- params
}
return(handle)
}
# Convert xgb.Booster.handle to xgb.Booster
# internal utility function
xgb.handleToBooster <- function(handle, raw = NULL) {
bst <- list(handle = handle, raw = raw)
class(bst) <- "xgb.Booster"
return(bst)
}
# Return a verified to be valid handle out of either xgb.Booster.handle or xgb.Booster
# internal utility function
xgb.get.handle <- function(object) {
handle <- switch(class(object)[1],
xgb.Booster = object$handle,
xgb.Booster.handle = object,
stop("argument must be of either xgb.Booster or xgb.Booster.handle class")
)
if (is.null(handle) || .Call("XGCheckNullPtr_R", handle, PACKAGE="xgboost")) {
stop("invalid xgb.Booster.handle")
}
handle
}
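# Illustration: both forms resolve to the same validated handle, e.g.:
# h1 <- xgb.get.handle(bst)          # from an xgb.Booster object
# h2 <- xgb.get.handle(bst$handle)   # from an xgb.Booster.handle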
# Check whether an xgb.Booster object is complete
# internal utility function
xgb.Booster.check <- function(bst, saveraw = TRUE) {
if (class(bst) != "xgb.Booster")
stop("argument type must be xgb.Booster")
isnull <- is.null(bst$handle)
if (!isnull) {
isnull <- .Call("XGCheckNullPtr_R", bst$handle, PACKAGE="xgboost")
}
if (isnull) {
bst$handle <- xgb.Booster(modelfile = bst$raw)
} else {
if (is.null(bst$raw) && saveraw)
bst$raw <- xgb.save.raw(bst$handle)
}
return(bst)
}
#' Predict method for eXtreme Gradient Boosting model
#'
#' Predicted values based on either xgboost model or model handle object.
#'
#' @param object Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}
#' @param newdata takes \code{matrix}, \code{dgCMatrix}, local data file or \code{xgb.DMatrix}.
#' @param missing Missing is only used when input is dense matrix. Pick a float value that represents
#' missing values in data (e.g., sometimes 0 or some other extreme value is used).
#' @param outputmargin whether the prediction should be returned in the form of the original untransformed
#' sum of predictions from boosting iterations' results. E.g., setting \code{outputmargin=TRUE} for
#' logistic regression would result in predictions for log-odds instead of probabilities.
#' @param ntreelimit limit the number of model's trees or boosting iterations used in prediction (see Details).
#' It will use all the trees by default (\code{NULL} value).
#' @param predleaf whether predict leaf index instead.
#' @param reshape whether to reshape the vector of predictions to a matrix form when there are several
#' prediction outputs per case. This option has no effect when \code{predleaf = TRUE}.
#' @param ... Parameters passed to \code{predict.xgb.Booster}
#'
#' @details
#' Note that \code{ntreelimit} is not necessarily equal to the number of boosting iterations,
#' nor is it necessarily equal to the number of trees in a model.
#' E.g., in a random forest-like model, \code{ntreelimit} would limit the number of trees,
#' whereas for multiclass classification, where there are multiple trees per iteration,
#' \code{ntreelimit} limits the number of boosting iterations.
#'
#' Also note that \code{ntreelimit} would currently do nothing for predictions from gblinear,
#' since gblinear doesn't keep its boosting history.
#'
#' One possible practical application of the \code{predleaf} option is to use the model
#' as a generator of new features which capture non-linearity and interactions,
#' e.g., as implemented in \code{\link{xgb.create.features}}.
#'
#' @return
#' For regression or binary classification, it returns a vector of length \code{nrows(newdata)}.
#' For multiclass classification, either a \code{num_class * nrows(newdata)} vector or
#' a \code{(nrows(newdata), num_class)} dimension matrix is returned, depending on
#' the \code{reshape} value.
#'
#' When \code{predleaf = TRUE}, the output is a matrix object with the
#' number of columns corresponding to the number of trees.
#'
#' @seealso
#' \code{\link{xgb.train}}.
#'
#' @examples
#' ## binary classification:
#'
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#'
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' # use all trees by default
#' pred <- predict(bst, test$data)
#' # use only the 1st tree
#' pred <- predict(bst, test$data, ntreelimit = 1)
#'
#'
#' ## multiclass classification in iris dataset:
#'
#' lb <- as.numeric(iris$Species) - 1
#' num_class <- 3
#' set.seed(11)
#' bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
#' max_depth = 4, eta = 0.5, nthread = 2, nrounds = 10, subsample = 0.5,
#' objective = "multi:softprob", num_class = num_class)
#' # predict for softmax returns num_class probability numbers per case:
#' pred <- predict(bst, as.matrix(iris[, -5]))
#' str(pred)
#' # reshape it to a num_class-columns matrix
#' pred <- matrix(pred, ncol=num_class, byrow=TRUE)
#' # convert the probabilities to softmax labels
#' pred_labels <- max.col(pred) - 1
#' # the following should result in the same error as seen in the last iteration
#' sum(pred_labels != lb)/length(lb)
#'
#' # compare that to the predictions from softmax:
#' set.seed(11)
#' bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
#' max_depth = 4, eta = 0.5, nthread = 2, nrounds = 10, subsample = 0.5,
#' objective = "multi:softmax", num_class = num_class)
#' pred <- predict(bst, as.matrix(iris[, -5]))
#' str(pred)
#' all.equal(pred, pred_labels)
#' # prediction from using only 5 iterations should result
#' # in the same error as seen in iteration 5:
#' pred5 <- predict(bst, as.matrix(iris[, -5]), ntreelimit=5)
#' sum(pred5 != lb)/length(lb)
#'
#'
#' ## random forest-like model of 25 trees for binary classification:
#'
#' set.seed(11)
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 5,
#' nthread = 2, nrounds = 1, objective = "binary:logistic",
#' num_parallel_tree = 25, subsample = 0.6, colsample_bytree = 0.1)
#' # Inspect the prediction error vs number of trees:
#' lb <- test$label
#' dtest <- xgb.DMatrix(test$data, label=lb)
#' err <- sapply(1:25, function(n) {
#' pred <- predict(bst, dtest, ntreelimit=n)
#' sum((pred > 0.5) != lb)/length(lb)
#' })
#' plot(err, type='l', ylim=c(0,0.1), xlab='#trees')
#'
#' @rdname predict.xgb.Booster
#' @export
predict.xgb.Booster <- function(object, newdata, missing = NA,
outputmargin = FALSE, ntreelimit = NULL, predleaf = FALSE, reshape = FALSE, ...) {
object <- xgb.Booster.check(object, saveraw = FALSE)
if (class(newdata) != "xgb.DMatrix")
newdata <- xgb.DMatrix(newdata, missing = missing)
if (is.null(ntreelimit))
ntreelimit <- NVL(object$best_ntreelimit, 0)
if (ntreelimit < 0)
stop("ntreelimit cannot be negative")
option <- 0L + 1L * as.logical(outputmargin) + 2L * as.logical(predleaf)
ret <- .Call("XGBoosterPredict_R", object$handle, newdata, option[1],
as.integer(ntreelimit), PACKAGE = "xgboost")
if (length(ret) %% nrow(newdata) != 0)
stop("prediction length ", length(ret)," is not multiple of nrows(newdata) ", nrow(newdata))
npred_per_case <- length(ret) / nrow(newdata)
if (predleaf){
len <- nrow(newdata)
ret <- if (length(ret) == len) {
matrix(ret, ncol = 1)
} else {
t(matrix(ret, ncol = len))
}
} else if (reshape && npred_per_case > 1) {
ret <- matrix(ret, ncol = length(ret) / nrow(newdata), byrow = TRUE)
}
return(ret)
}
#' @rdname predict.xgb.Booster
#' @export
predict.xgb.Booster.handle <- function(object, ...) {
bst <- xgb.handleToBooster(object)
ret <- predict(bst, ...)
return(ret)
}
#' Accessors for serializable attributes of a model.
#'
#' These methods allow manipulation of the key-value attribute strings of an xgboost model.
#'
#' @param object Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}.
#' @param name a non-empty character string specifying which attribute is to be accessed.
#' @param value a value of an attribute for \code{xgb.attr<-}; for \code{xgb.attributes<-}
#' it's a list (or an object coercible to a list) with the names of attributes to set
#' and the elements corresponding to attribute values.
#' Non-character values are converted to character.
#' When attribute value is not a scalar, only the first index is used.
#' Use \code{NULL} to remove an attribute.
#'
#' @details
#' The primary purpose of xgboost model attributes is to store some meta-data about the model.
#' Note that they are a separate concept from the object attributes in R.
#' Specifically, they refer to key-value strings that can be attached to an xgboost model,
#' stored together with the model's binary representation, and accessed later
#' (from R or any other interface).
#' In contrast, any R-attribute assigned to an R-object of \code{xgb.Booster} class
#' would not be saved by \code{xgb.save} because an xgboost model is an external memory object
#' and its serialization is handled externally.
#' Also, setting an attribute that has the same name as one of xgboost's parameters wouldn't
#' change the value of that parameter for a model.
#' Use \code{\link{xgb.parameters<-}} to set or change model parameters.
#'
#' The attribute setters would usually work more efficiently for \code{xgb.Booster.handle}
#' than for \code{xgb.Booster}, since only a handle (pointer) would need to be copied.
#' That would only matter if attributes need to be set many times.
#' Note, however, that when feeding a handle of an \code{xgb.Booster} object to the attribute setters,
#' the raw model cache of an \code{xgb.Booster} object would not be automatically updated,
#' and it would be user's responsibility to call \code{xgb.save.raw} to update it.
#'
#' The \code{xgb.attributes<-} setter either updates the existing or adds one or several attributes,
#' but it doesn't delete the other existing attributes.
#'
#' @return
#' \code{xgb.attr} returns either a string value of an attribute
#' or \code{NULL} if an attribute wasn't stored in a model.
#'
#' \code{xgb.attributes} returns a list of all attributes stored in a model
#' or \code{NULL} if a model has no stored attributes.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#'
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#'
#' xgb.attr(bst, "my_attribute") <- "my attribute value"
#' print(xgb.attr(bst, "my_attribute"))
#' xgb.attributes(bst) <- list(a = 123, b = "abc")
#'
#' xgb.save(bst, 'xgb.model')
#' bst1 <- xgb.load('xgb.model')
#' print(xgb.attr(bst1, "my_attribute"))
#' print(xgb.attributes(bst1))
#'
#' # deletion:
#' xgb.attr(bst1, "my_attribute") <- NULL
#' print(xgb.attributes(bst1))
#' xgb.attributes(bst1) <- list(a = NULL, b = NULL)
#' print(xgb.attributes(bst1))
#'
#' @rdname xgb.attr
#' @export
xgb.attr <- function(object, name) {
if (is.null(name) || nchar(as.character(name[1])) == 0) stop("invalid attribute name")
handle <- xgb.get.handle(object)
.Call("XGBoosterGetAttr_R", handle, as.character(name[1]), PACKAGE="xgboost")
}
#' @rdname xgb.attr
#' @export
`xgb.attr<-` <- function(object, name, value) {
if (is.null(name) || nchar(as.character(name[1])) == 0) stop("invalid attribute name")
handle <- xgb.get.handle(object)
if (!is.null(value)) {
# Coerce the elements to be scalar strings.
# Q: should we warn user about non-scalar elements?
value <- as.character(value[1])
}
.Call("XGBoosterSetAttr_R", handle, as.character(name[1]), value, PACKAGE="xgboost")
if (is(object, 'xgb.Booster') && !is.null(object$raw)) {
object$raw <- xgb.save.raw(object$handle)
}
object
}
#' @rdname xgb.attr
#' @export
xgb.attributes <- function(object) {
handle <- xgb.get.handle(object)
attr_names <- .Call("XGBoosterGetAttrNames_R", handle, PACKAGE="xgboost")
if (is.null(attr_names)) return(NULL)
res <- lapply(attr_names, function(x) {
.Call("XGBoosterGetAttr_R", handle, x, PACKAGE="xgboost")
})
names(res) <- attr_names
res
}
#' @rdname xgb.attr
#' @export
`xgb.attributes<-` <- function(object, value) {
a <- as.list(value)
if (is.null(names(a)) || any(nchar(names(a)) == 0)) {
stop("attribute names cannot be empty strings")
}
# Coerce the elements to be scalar strings.
# Q: should we warn a user about non-scalar elements?
a <- lapply(a, function(x) {
if (is.null(x)) return(NULL)
as.character(x[1])
})
handle <- xgb.get.handle(object)
for (i in seq_along(a)) {
.Call("XGBoosterSetAttr_R", handle, names(a[i]), a[[i]], PACKAGE="xgboost")
}
if (is(object, 'xgb.Booster') && !is.null(object$raw)) {
object$raw <- xgb.save.raw(object$handle)
}
object
}
#' Accessors for model parameters.
#'
#' Only the setter for xgboost parameters is currently implemented.
#'
#' @param object Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}.
#' @param value a list (or an object coercible to a list) with the names of parameters to set
#' and the elements corresponding to parameter values.
#'
#' @details
#' Note that the setter would usually work more efficiently for \code{xgb.Booster.handle}
#' than for \code{xgb.Booster}, since only a handle would need to be copied.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#'
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#'
#' xgb.parameters(bst) <- list(eta = 0.1)
#'
#' @rdname xgb.parameters
#' @export
`xgb.parameters<-` <- function(object, value) {
if (length(value) == 0) return(object)
p <- as.list(value)
if (is.null(names(p)) || any(nchar(names(p)) == 0)) {
stop("parameter names cannot be empty strings")
}
names(p) <- gsub("\\.", "_", names(p))
p <- lapply(p, function(x) as.character(x)[1])
handle <- xgb.get.handle(object)
for (i in seq_along(p)) {
.Call("XGBoosterSetParam_R", handle, names(p[i]), p[[i]], PACKAGE = "xgboost")
}
if (is(object, 'xgb.Booster') && !is.null(object$raw)) {
object$raw <- xgb.save.raw(object$handle)
}
object
}
# Extract # of trees in a model
# TODO: either add a getter to C-interface, or simply set an 'ntree' attribute after each iteration
# internal utility function
xgb.ntree <- function(bst) {
length(grep('^booster', xgb.dump(bst)))
}
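# Illustration: xgb.dump() emits one "booster[i]" header line per tree, so for
# a 2-round gbtree model xgb.ntree() returns 2 ('bst' below is hypothetical):
# bst <- xgboost(data = train$data, label = train$label, nrounds = 2,
#                objective = "binary:logistic")
# xgb.ntree(bst)  # 2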
#' Print xgb.Booster
#'
#' Print information about xgb.Booster.
#'
#' @param x an xgb.Booster object
#' @param verbose whether to print detailed data (e.g., attribute values)
#' @param ... not currently used
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' attr(bst, 'myattr') <- 'memo'
#'
#' print(bst)
#' print(bst, verbose=TRUE)
#'
#' @method print xgb.Booster
#' @export
print.xgb.Booster <- function(x, verbose=FALSE, ...) {
cat('##### xgb.Booster\n')
if (is.null(x$handle) || .Call("XGCheckNullPtr_R", x$handle, PACKAGE="xgboost")) {
cat("handle is invalid\n")
return(x)
}
cat('raw: ')
if (!is.null(x$raw)) {
cat(format(object.size(x$raw), units="auto"), '\n')
} else {
cat('NULL\n')
}
if (!is.null(x$call)) {
cat('call:\n ')
print(x$call)
}
if (!is.null(x$params)) {
cat('params (as set within xgb.train):\n')
cat( ' ',
paste(names(x$params),
paste0('"', unlist(x$params), '"'),
sep=' = ', collapse=', '), '\n', sep='')
}
# TODO: need an interface to access all the xgboosts parameters
attrs <- xgb.attributes(x)
if (length(attrs) > 0) {
cat('xgb.attributes:\n')
if (verbose) {
cat( paste(paste0(' ',names(attrs)),
paste0('"', unlist(attrs), '"'),
sep=' = ', collapse='\n'), '\n', sep='')
} else {
cat(' ', paste(names(attrs), collapse=', '), '\n', sep='')
}
}
if (!is.null(x$callbacks) && length(x$callbacks) > 0) {
cat('callbacks:\n')
lapply(callback.calls(x$callbacks), function(x) {
cat(' ')
print(x)
})
}
cat('niter: ', x$niter, '\n', sep='')
# TODO: uncomment when faster xgb.ntree is implemented
#cat('ntree: ', xgb.ntree(x), '\n', sep='')
for (n in setdiff(names(x), c('handle', 'raw', 'call', 'params', 'callbacks','evaluation_log','niter'))) {
if (is.atomic(x[[n]])) {
cat(n, ': ', x[[n]], '\n', sep='')
} else {
cat(n, ':\n\t', sep='')
print(x[[n]])
}
}
if (!is.null(x$evaluation_log)) {
cat('evaluation_log:\n')
print(x$evaluation_log, row.names = FALSE, topn = 2)
}
invisible(x)
}

R-package/R/xgb.DMatrix.R
@@ -1,9 +1,9 @@
#' Construct xgb.DMatrix object
#'
#' Construct xgb.DMatrix object from dense matrix, sparse matrix
#' or local file (that was created previously by saving an \code{xgb.DMatrix}).
#'
#' @param data a \code{matrix} object, a \code{dgCMatrix} object or a character representing a filename
#' @param info a list of information of the xgb.DMatrix object
#' @param missing Missing is only used when input is a dense matrix. Pick a float
#' value that represents missing values. Sometimes, 0 or another extreme value is used to represent missing values.
@@ -18,27 +18,350 @@
#' dtrain <- xgb.DMatrix('xgb.DMatrix.data')
#' @export
xgb.DMatrix <- function(data, info = list(), missing = NA, ...) {
cnames <- NULL
if (typeof(data) == "character") {
handle <- .Call("XGDMatrixCreateFromFile_R", data, as.integer(FALSE),
PACKAGE = "xgboost")
} else if (is.matrix(data)) {
handle <- .Call("XGDMatrixCreateFromMat_R", data, missing,
PACKAGE = "xgboost")
cnames <- colnames(data)
} else if (class(data) == "dgCMatrix") {
handle <- .Call("XGDMatrixCreateFromCSC_R", data@p, data@i, data@x,
PACKAGE = "xgboost")
cnames <- colnames(data)
} else {
stop(paste("xgb.DMatrix: does not support to construct from ",
typeof(data)))
}
  dmat <- handle
  attributes(dmat) <- list(.Dimnames = list(NULL, cnames), class = "xgb.DMatrix")
info <- append(info, list(...))
if (length(info) == 0)
return(dmat)
for (i in 1:length(info)) {
p <- info[i]
    setinfo(dmat, names(p), p[[1]])
}
return(dmat)
}
# get dmatrix from data, label
# internal helper method
xgb.get.DMatrix <- function(data, label = NULL, missing = NA, weight = NULL) {
inClass <- class(data)
if ("dgCMatrix" %in% inClass || "matrix" %in% inClass ) {
if (is.null(label)) {
stop("xgboost: need label when data is a matrix")
}
dtrain <- xgb.DMatrix(data, label = label, missing = missing)
if (!is.null(weight)){
setinfo(dtrain, "weight", weight)
}
} else {
if (!is.null(label)) {
warning("xgboost: label will be ignored.")
}
if (inClass == "character") {
dtrain <- xgb.DMatrix(data)
} else if (inClass == "xgb.DMatrix") {
dtrain <- data
} else if (inClass == "data.frame") {
stop("xgboost only support numerical matrix input,
use 'data.matrix' to transform the data.")
} else {
stop("xgboost: Invalid input of data")
}
}
return (dtrain)
}
#' Dimensions of xgb.DMatrix
#'
#' Returns a vector of numbers of rows and of columns in an \code{xgb.DMatrix}.
#' @param x Object of class \code{xgb.DMatrix}
#'
#' @details
#' Note: since \code{nrow} and \code{ncol} internally use \code{dim}, they can also
#' be directly used with an \code{xgb.DMatrix} object.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#'
#' stopifnot(nrow(dtrain) == nrow(train$data))
#' stopifnot(ncol(dtrain) == ncol(train$data))
#' stopifnot(all(dim(dtrain) == dim(train$data)))
#'
#' @export
dim.xgb.DMatrix <- function(x) {
c(.Call("XGDMatrixNumRow_R", x, PACKAGE="xgboost"),
.Call("XGDMatrixNumCol_R", x, PACKAGE="xgboost"))
}
#' Handling of column names of \code{xgb.DMatrix}
#'
#' Only column names are supported for \code{xgb.DMatrix}, thus setting of
#' row names would have no effect and returned row names would be NULL.
#'
#' @param x object of class \code{xgb.DMatrix}
#' @param value a list of two elements: the first one is ignored
#' and the second one is column names
#'
#' @details
#' Generic \code{dimnames} methods are used by \code{colnames}.
#' Since row names are irrelevant, it is recommended to use \code{colnames} directly.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#' dimnames(dtrain)
#' colnames(dtrain)
#' colnames(dtrain) <- make.names(1:ncol(train$data))
#' print(dtrain, verbose=TRUE)
#'
#' @rdname dimnames.xgb.DMatrix
#' @export
dimnames.xgb.DMatrix <- function(x) {
attr(x, '.Dimnames')
}
#' @rdname dimnames.xgb.DMatrix
#' @export
`dimnames<-.xgb.DMatrix` <- function(x, value) {
if (!is.list(value) || length(value) != 2L)
stop("invalid 'dimnames' given: must be a list of two elements")
if (!is.null(value[[1L]]))
stop("xgb.DMatrix does not have rownames")
if (is.null(value[[2]])) {
attr(x, '.Dimnames') <- NULL
return(x)
}
if (ncol(x) != length(value[[2]]))
stop("can't assign ", length(value[[2]]), " colnames to a ",
ncol(x), " column xgb.DMatrix")
attr(x, '.Dimnames') <- value
x
}
#' Get information of an xgb.DMatrix object
#'
#' Get information of an xgb.DMatrix object
#' @param object Object of class \code{xgb.DMatrix}
#' @param name the name of the information field to get (see details)
#' @param ... other parameters
#'
#' @details
#' The \code{name} field can be one of the following:
#'
#' \itemize{
#' \item \code{label}: label XGBoost will learn from;
#' \item \code{weight}: weights to rescale each instance's contribution to the loss;
#' \item \code{base_margin}: base margin is the base prediction XGBoost will boost from;
#' \item \code{nrow}: number of rows of the \code{xgb.DMatrix}.
#' }
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#'
#' labels <- getinfo(dtrain, 'label')
#' setinfo(dtrain, 'label', 1-labels)
#'
#' labels2 <- getinfo(dtrain, 'label')
#' stopifnot(all(labels2 == 1-labels))
#' @rdname getinfo
#' @export
getinfo <- function(object, ...) UseMethod("getinfo")
#' @rdname getinfo
#' @export
getinfo.xgb.DMatrix <- function(object, name, ...) {
if (typeof(name) != "character" ||
length(name) != 1 ||
!name %in% c('label', 'weight', 'base_margin', 'nrow')) {
stop("getinfo: name must one of the following\n",
" 'label', 'weight', 'base_margin', 'nrow'")
}
if (name != "nrow"){
ret <- .Call("XGDMatrixGetInfo_R", object, name, PACKAGE = "xgboost")
} else {
ret <- nrow(object)
}
if (length(ret) == 0) return(NULL)
return(ret)
}
#' Set information of an xgb.DMatrix object
#'
#' Set information of an xgb.DMatrix object
#'
#' @param object Object of class "xgb.DMatrix"
#' @param name the name of the field to get
#' @param info the specific field of information to set
#' @param ... other parameters
#'
#' @details
#' The \code{name} field can be one of the following:
#'
#' \itemize{
#' \item \code{label}: label XGBoost will learn from;
#' \item \code{weight}: weights to rescale each instance's contribution to the loss;
#' \item \code{base_margin}: base margin is the base prediction XGBoost will boost from;
#' \item \code{group}: group sizes for ranking tasks.
#' }
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#'
#' labels <- getinfo(dtrain, 'label')
#' setinfo(dtrain, 'label', 1-labels)
#' labels2 <- getinfo(dtrain, 'label')
#' stopifnot(all.equal(labels2, 1-labels))
#' @rdname setinfo
#' @export
setinfo <- function(object, ...) UseMethod("setinfo")
#' @rdname setinfo
#' @export
setinfo.xgb.DMatrix <- function(object, name, info, ...) {
if (name == "label") {
if (length(info) != nrow(object))
stop("The length of labels must equal to the number of rows in the input data")
.Call("XGDMatrixSetInfo_R", object, name, as.numeric(info),
PACKAGE = "xgboost")
return(TRUE)
}
if (name == "weight") {
if (length(info) != nrow(object))
stop("The length of weights must equal to the number of rows in the input data")
.Call("XGDMatrixSetInfo_R", object, name, as.numeric(info),
PACKAGE = "xgboost")
return(TRUE)
}
if (name == "base_margin") {
# if (length(info)!=nrow(object))
# stop("The length of base margin must equal to the number of rows in the input data")
.Call("XGDMatrixSetInfo_R", object, name, as.numeric(info),
PACKAGE = "xgboost")
return(TRUE)
}
if (name == "group") {
if (sum(info) != nrow(object))
stop("The sum of groups must equal to the number of rows in the input data")
.Call("XGDMatrixSetInfo_R", object, name, as.integer(info),
PACKAGE = "xgboost")
return(TRUE)
}
stop(paste("setinfo: unknown info name", name))
}
#' Get a new DMatrix containing the specified rows of
#' original xgb.DMatrix object
#'
#' Get a new DMatrix containing the specified rows of
#' original xgb.DMatrix object
#'
#' @param object Object of class "xgb.DMatrix"
#' @param idxset an integer vector of indices of rows needed
#' @param colset currently not used (columns subsetting is not available)
#' @param ... other parameters (currently not used)
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#'
#' dsub <- slice(dtrain, 1:42)
#' labels1 <- getinfo(dsub, 'label')
#' dsub <- dtrain[1:42, ]
#' labels2 <- getinfo(dsub, 'label')
#' all.equal(labels1, labels2)
#'
#' @rdname slice.xgb.DMatrix
#' @export
slice <- function(object, ...) UseMethod("slice")
#' @rdname slice.xgb.DMatrix
#' @export
slice.xgb.DMatrix <- function(object, idxset, ...) {
if (class(object) != "xgb.DMatrix") {
stop("slice: first argument dtrain must be xgb.DMatrix")
}
ret <- .Call("XGDMatrixSliceDMatrix_R", object, idxset, PACKAGE = "xgboost")
attr_list <- attributes(object)
nr <- nrow(object)
len <- sapply(attr_list, length)
ind <- which(len == nr)
if (length(ind) > 0) {
nms <- names(attr_list)[ind]
for (i in 1:length(ind)) {
attr(ret, nms[i]) <- attr(object, nms[i])[idxset]
}
}
return(structure(ret, class = "xgb.DMatrix"))
}
#' @rdname slice.xgb.DMatrix
#' @export
`[.xgb.DMatrix` <- function(object, idxset, colset=NULL) {
slice(object, idxset)
}
#' Print xgb.DMatrix
#'
#' Print information about xgb.DMatrix.
#' Currently it displays dimensions and presence of info-fields and colnames.
#'
#' @param x an xgb.DMatrix object
#' @param verbose whether to print colnames (when present)
#' @param ... not currently used
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' dtrain <- xgb.DMatrix(train$data, label=train$label)
#'
#' dtrain
#' print(dtrain, verbose=TRUE)
#'
#' @method print xgb.DMatrix
#' @export
print.xgb.DMatrix <- function(x, verbose=FALSE, ...) {
cat('xgb.DMatrix dim:', nrow(x), 'x', ncol(x), ' info: ')
infos <- c()
if(length(getinfo(x, 'label')) > 0) infos <- 'label'
if(length(getinfo(x, 'weight')) > 0) infos <- c(infos, 'weight')
if(length(getinfo(x, 'base_margin')) > 0) infos <- c(infos, 'base_margin')
if (length(infos) == 0) infos <- 'NA'
cat(infos)
cnames <- colnames(x)
cat(' colnames:')
if (verbose & !is.null(cnames)) {
cat("\n'")
cat(cnames, sep="','")
cat("'")
} else {
if (is.null(cnames)) cat(' no')
else cat(' yes')
}
cat("\n")
invisible(x)
}

R-package/R/xgb.DMatrix.save.R
@@ -2,8 +2,8 @@
#'
#' Save xgb.DMatrix object to binary file
#'
#' @param DMatrix the DMatrix object
#' @param fname the name of the binary file.
#' @param dmatrix the \code{xgb.DMatrix} object
#' @param fname the name of the file to write.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
@@ -12,15 +12,12 @@
#' xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
#' dtrain <- xgb.DMatrix('xgb.DMatrix.data')
#' @export
xgb.DMatrix.save <- function(DMatrix, fname) {
if (typeof(fname) != "character") {
stop("xgb.save: fname must be character")
}
if (class(DMatrix) == "xgb.DMatrix") {
.Call("XGDMatrixSaveBinary_R", DMatrix, fname, as.integer(FALSE),
PACKAGE = "xgboost")
return(TRUE)
}
stop("xgb.DMatrix.save: the input must be xgb.DMatrix")
return(FALSE)
xgb.DMatrix.save <- function(dmatrix, fname) {
if (typeof(fname) != "character")
stop("fname must be character")
if (class(dmatrix) != "xgb.DMatrix")
stop("the input data must be xgb.DMatrix")
.Call("XGDMatrixSaveBinary_R", dmatrix, fname, 0L, PACKAGE = "xgboost")
return(TRUE)
}

R-package/R/xgb.create.features.R
@@ -2,12 +2,9 @@
#'
#' May improve the learning by adding new features to the training data based on the decision trees from a previously learned model.
#'
#' @importFrom magrittr %>%
#' @importFrom Matrix cBind
#' @importFrom Matrix sparse.model.matrix
#'
#' @param model decision tree boosting model learned on the original data
#' @param training.data original data (usually provided as a \code{dgCMatrix} matrix)
#' @param data original data (usually provided as a \code{dgCMatrix} matrix)
#' @param ... currently not used
#'
#' @return \code{dgCMatrix} matrix including both the original data and the new features.
#'
@@ -17,7 +14,7 @@
#' \strong{Practical Lessons from Predicting Clicks on Ads at Facebook}
#'
#' \emph{(Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yan, xin Shi, Antoine Atallah, Ralf Herbrich, Stuart Bowers,
#' Joaquin Quiñonero Candela)}
#' Joaquin Quinonero Candela)}
#'
#' International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
#'
@@ -25,7 +22,7 @@
#'
#' Extract explaining the method:
#'
#' "\emph{We found that boosted decision trees are a powerful and very
#' "We found that boosted decision trees are a powerful and very
#' convenient way to implement non-linear and tuple transformations
#' of the kind we just described. We treat each individual
#' tree as a categorical feature that takes as value the
@@ -46,7 +43,7 @@
#' based transformation as a supervised feature encoding that
#' converts a real-valued vector into a compact binary-valued
#' vector. A traversal from root node to a leaf node represents
#' a rule on certain features.}"
#' a rule on certain features."
#'
#' @examples
#' data(agaricus.train, package='xgboost')
@@ -54,7 +51,7 @@
#' dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
#' dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
#'
#' param <- list(max.depth=2, eta=1, silent=1, objective='binary:logistic')
#' param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
#' nround = 4
#'
#' bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
@@ -79,13 +76,9 @@
#' cat(paste("The accuracy was", accuracy.before, "before adding leaf features and it is now", accuracy.after, "!\n"))
#'
#' @export
xgb.create.features <- function(model, training.data){
pred_with_leaf = predict(model, training.data, predleaf = TRUE)
cols <- list()
for(i in 1:length(trees)){
# max is not the real max but it s not important for the purpose of adding features
leaf.id <- sort(unique(pred_with_leaf[,i]))
cols[[i]] <- factor(x = pred_with_leaf[,i], level = leaf.id)
}
cBind(training.data, sparse.model.matrix( ~ . -1, as.data.frame(cols)))
}
xgb.create.features <- function(model, data, ...){
check.deprecation(...)
pred_with_leaf <- predict(model, data, predleaf = TRUE)
cols <- lapply(as.data.frame(pred_with_leaf), factor)
cBind(data, sparse.model.matrix( ~ . -1, cols))
}

R-package/R/xgb.cv.R
@@ -2,17 +2,6 @@
#'
#' The cross validation function of xgboost
#'
#' @importFrom data.table data.table
#' @importFrom data.table as.data.table
#' @importFrom magrittr %>%
#' @importFrom data.table :=
#' @importFrom data.table rbindlist
#' @importFrom stringr str_extract_all
#' @importFrom stringr str_extract
#' @importFrom stringr str_split
#' @importFrom stringr str_replace
#' @importFrom stringr str_match
#'
#' @param params the list of parameters. Commonly used ones are:
#' \itemize{
#' \item \code{objective} objective function, common ones are
@@ -21,21 +10,23 @@
#' \item \code{binary:logistic} logistic regression for classification
#' }
#' \item \code{eta} step size of each boosting step
#' \item \code{max.depth} maximum depth of the tree
#' \item \code{max_depth} maximum depth of the tree
#' \item \code{nthread} number of thread used in training, if not set, all threads are used
#' }
#'
#' See \link{xgb.train} for further details.
#' See \code{\link{xgb.train}} for further details.
#' See also demo/ for walkthrough example in R.
#' @param data takes an \code{xgb.DMatrix} or \code{Matrix} as the input.
#' @param nrounds the max number of iterations
#' @param nfold the original dataset is randomly partitioned into \code{nfold} equal size subsamples.
#' @param label option field, when data is \code{Matrix}
#' @param missing Missing is only used when input is a dense matrix. Pick a float
#' value that represents missing values. Sometimes, 0 or another extreme value is used to represent missing values.
#' @param prediction A logical value indicating whether to return the prediction vector.
#' @param showsd \code{boolean}, whether show standard deviation of cross validation
#' @param metrics list of evaluation metrics to be used in cross validation,
#' @param label vector of response values. Should be provided only when data is \code{DMatrix}.
#' @param missing is only used when input is a dense matrix. By default is set to NA, which means
#' that NA values should be considered as 'missing' by the algorithm.
#' Sometimes, 0 or other extreme value might be used to represent missing values.
#' @param prediction A logical value indicating whether to return the test fold predictions
#' from each CV model. This parameter engages the \code{\link{cb.cv.predict}} callback.
#' @param showsd \code{boolean}, whether to show standard deviation of cross validation
#' @param metrics list of evaluation metrics to be used in cross validation,
#' when it is not specified, the evaluation metric is chosen according to objective function.
#' Possible options are:
#' \itemize{
@@ -46,32 +37,33 @@
#' \item \code{merror} Exact matching error, used to evaluate multi-class classification
#' }
#' @param obj customized objective function. Returns gradient and second order
#' gradient with given prediction and dtrain.
#' gradient with given prediction and dtrain.
#' @param feval customized evaluation function. Returns
#' \code{list(metric='metric-name', value='metric-value')} with given
#' prediction and dtrain.
#' @param stratified \code{boolean} whether sampling of folds should be stratified by the values of labels in \code{data}
#' @param folds \code{list} provides a possibility of using a list of pre-defined CV folds (each element must be a vector of fold's indices).
#' If folds are supplied, the nfold and stratified parameters would be ignored.
#' \code{list(metric='metric-name', value='metric-value')} with given
#' prediction and dtrain.
#' @param stratified a \code{boolean} indicating whether sampling of folds should be stratified
#' by the values of outcome labels.
#' @param folds \code{list} provides a possibility to use a list of pre-defined CV folds
#' (each element must be a vector of test fold's indices). When folds are supplied,
#' the \code{nfold} and \code{stratified} parameters are ignored.
#' @param verbose \code{boolean}, print the statistics during the process
#' @param print.every.n Print every N progress messages when \code{verbose>0}. Default is 1 which means all messages are printed.
#' @param early.stop.round If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' keeps getting worse consecutively for \code{k} rounds.
#' @param maximize If \code{feval} and \code{early.stop.round} are set, then \code{maximize} must be set as well.
#' \code{maximize=TRUE} means the larger the evaluation score the better.
#'
#' @param print_every_n Print each n-th iteration evaluation messages when \code{verbose>0}.
#' Default is 1 which means all messages are printed. This parameter is passed to the
#' \code{\link{cb.print.evaluation}} callback.
#' @param early_stopping_rounds If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' doesn't improve for \code{k} rounds.
#' Setting this parameter engages the \code{\link{cb.early.stop}} callback.
#' @param maximize If \code{feval} and \code{early_stopping_rounds} are set,
#' then this parameter must be set as well.
#' When it is \code{TRUE}, it means the larger the evaluation score the better.
#' This parameter is passed to the \code{\link{cb.early.stop}} callback.
#' @param callbacks a list of callback functions to perform various task during boosting.
#' See \code{\link{callbacks}}. Some of the callbacks are automatically created depending on the
#' parameters' values. User can provide either existing or their own callback methods in order
#' to customize the training process.
#' @param ... other parameters to pass to \code{params}.
#'
#' @return
#' If \code{prediction = TRUE}, a list with the following elements is returned:
#' \itemize{
#' \item \code{dt} a \code{data.table} with each mean and standard deviation stat for training set and test set
#' \item \code{pred} an array or matrix (for multiclass classification) with predictions for each CV-fold for the model having been trained on the data in all other folds.
#' }
#'
#' If \code{prediction = FALSE}, just a \code{data.table} with each mean and standard deviation stat for training set and test set is returned.
#'
#' @details
#' The original sample is randomly partitioned into \code{nfold} equal size subsamples.
#'
@@ -83,168 +75,228 @@
#'
#' Adapted from \url{http://en.wikipedia.org/wiki/Cross-validation_\%28statistics\%29#k-fold_cross-validation}
#'
#' @return
#' An object of class \code{xgb.cv.synchronous} with the following elements:
#' \itemize{
#' \item \code{call} a function call.
#' \item \code{params} parameters that were passed to the xgboost library. Note that it does not
#' capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
#' \item \code{callbacks} callback functions that were either automatically assigned or
#' explicitly passed.
#' \item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
#' first column corresponding to iteration number and the rest corresponding to the
#' CV-based evaluation means and standard deviations for the training and test CV-sets.
#' It is created by the \code{\link{cb.evaluation.log}} callback.
#' \item \code{niter} number of boosting iterations.
#' \item \code{folds} the list of CV folds' indices - either those passed through the \code{folds}
#' parameter or randomly generated.
#' \item \code{best_iteration} iteration number with the best evaluation metric value
#' (only available with early stopping).
#' \item \code{best_ntreelimit} the \code{ntreelimit} value corresponding to the best iteration,
#' which could further be used in \code{predict} method
#' (only available with early stopping).
#' \item \code{pred} CV prediction values available when \code{prediction} is set.
#' It is either vector or matrix (see \code{\link{cb.cv.predict}}).
#' \item \code{models} a list of the CV folds' models. It is only available with the explicit
#' setting of the \code{cb.cv.predict(save_models = TRUE)} callback.
#' }
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
#' history <- xgb.cv(data = dtrain, nround=3, nthread = 2, nfold = 5, metrics=list("rmse","auc"),
#' max.depth =3, eta = 1, objective = "binary:logistic")
#' print(history)
#' cv <- xgb.cv(data = dtrain, nrounds = 3, nthread = 2, nfold = 5, metrics = list("rmse","auc"),
#' max_depth = 3, eta = 1, objective = "binary:logistic")
#' print(cv)
#' print(cv, verbose=TRUE)
#'
#' @export
xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing = NA,
prediction = FALSE, showsd = TRUE, metrics=list(),
obj = NULL, feval = NULL, stratified = TRUE, folds = NULL, verbose = T, print.every.n=1L,
early.stop.round = NULL, maximize = NULL, ...) {
if (typeof(params) != "list") {
stop("xgb.cv: first argument params must be list")
}
if(!is.null(folds)) {
if(class(folds) != "list" | length(folds) < 2) {
stop("folds must be a list with 2 or more elements that are vectors of indices for each CV-fold")
}
nfold <- length(folds)
}
if (nfold <= 1) {
stop("nfold must be bigger than 1")
}
dtrain <- xgb.get.DMatrix(data, label, missing)
dot.params <- list(...)
nms.params <- names(params)
nms.dot.params <- names(dot.params)
if (length(intersect(nms.params,nms.dot.params)) > 0)
stop("Duplicated defined term in parameters. Please check your list of params.")
params <- append(params, dot.params)
params <- append(params, list(silent=1))
for (mc in metrics) {
params <- append(params, list("eval_metric"=mc))
}
obj = NULL, feval = NULL, stratified = TRUE, folds = NULL,
verbose = TRUE, print_every_n=1L,
early_stopping_rounds = NULL, maximize = NULL, callbacks = list(), ...) {
# customized objective and evaluation metric interface
if (!is.null(params$objective) && !is.null(obj))
stop("xgb.cv: cannot assign two different objectives")
if (!is.null(params$objective))
if (class(params$objective) == 'function') {
obj <- params$objective
params[['objective']] <- NULL
}
# if (!is.null(params$eval_metric) && !is.null(feval))
# stop("xgb.cv: cannot assign two different evaluation metrics")
if (!is.null(params$eval_metric))
if (class(params$eval_metric) == 'function') {
feval <- params$eval_metric
params[['eval_metric']] <- NULL
}
check.deprecation(...)
params <- check.booster.params(params, ...)
# TODO: should we deprecate the redundant 'metrics' parameter?
for (m in metrics)
params <- c(params, list("eval_metric" = m))
check.custom.obj()
check.custom.eval()
# Early Stopping
if (!is.null(early.stop.round)){
if (!is.null(feval) && is.null(maximize))
stop('Please set maximize to note whether the model is maximizing the evaluation or not.')
if (is.null(maximize) && is.null(params$eval_metric))
stop('Please set maximize to note whether the model is maximizing the evaluation or not.')
if (is.null(maximize))
{
if (params$eval_metric %in% c('rmse','logloss','error','merror','mlogloss')) {
maximize <- FALSE
} else {
maximize <- TRUE
}
}
#if (is.null(params[['eval_metric']]) && is.null(feval))
# stop("Either 'eval_metric' or 'feval' must be provided for CV")
# Labels
if (class(data) == 'xgb.DMatrix')
labels <- getinfo(data, 'label')
if (is.null(labels))
stop("Labels must be provided for CV either through xgb.DMatrix, or through 'label=' when 'data' is matrix")
# CV folds
if(!is.null(folds)) {
if(class(folds) != "list" || length(folds) < 2)
stop("'folds' must be a list with 2 or more elements that are vectors of indices for each CV-fold")
nfold <- length(folds)
} else {
if (nfold <= 1)
stop("'nfold' must be > 1")
folds <- generate.cv.folds(nfold, nrow(data), stratified, label, params)
}
# Potential TODO: sequential CV
#if (strategy == 'sequential')
# stop('Sequential CV strategy is not yet implemented')
if (maximize) {
bestScore <- 0
} else {
bestScore <- Inf
}
bestInd <- 0
earlyStopflag <- FALSE
# verbosity & evaluation printing callback:
params <- c(params, list(silent = 1))
print_every_n <- max( as.integer(print_every_n), 1L)
if (!has.callbacks(callbacks, 'cb.print.evaluation') && verbose) {
callbacks <- add.cb(callbacks, cb.print.evaluation(print_every_n))
}
# evaluation log callback: always is on in CV
evaluation_log <- list()
if (!has.callbacks(callbacks, 'cb.evaluation.log')) {
callbacks <- add.cb(callbacks, cb.evaluation.log())
}
# Early stopping callback
stop_condition <- FALSE
if (!is.null(early_stopping_rounds) &&
!has.callbacks(callbacks, 'cb.early.stop')) {
callbacks <- add.cb(callbacks, cb.early.stop(early_stopping_rounds,
maximize=maximize, verbose=verbose))
}
# CV-predictions callback
if (prediction &&
!has.callbacks(callbacks, 'cb.cv.predict')) {
callbacks <- add.cb(callbacks, cb.cv.predict(save_models=FALSE))
}
# Sort the callbacks into categories
cb <- categorize.callbacks(callbacks)
if (length(metrics) > 1)
warning('Only the first metric is used for the early stopping process.')
}
# create the booster-folds
dall <- xgb.get.DMatrix(data, label, missing)
bst_folds <- lapply(1:length(folds), function(k) {
dtest <- slice(dall, folds[[k]])
dtrain <- slice(dall, unlist(folds[-k]))
bst <- xgb.Booster(params, list(dtrain, dtest))
list(dtrain=dtrain, bst=bst, watchlist=list(train=dtrain, test=dtest), index=folds[[k]])
})
# a "basket" to collect some results from callbacks
basket <- list()
xgb_folds <- xgb.cv.mknfold(dtrain, nfold, params, stratified, folds)
obj_type <- params[['objective']]
mat_pred <- FALSE
if (!is.null(obj_type) && obj_type == 'multi:softprob')
{
num_class <- params[['num_class']]
if (is.null(num_class))
stop('must set num_class to use softmax')
predictValues <- matrix(0,xgb.numrow(dtrain),num_class)
mat_pred <- TRUE
}
else
predictValues <- rep(0,xgb.numrow(dtrain))
history <- c()
print.every.n <- max(as.integer(print.every.n), 1L)
for (i in 1:nrounds) {
msg <- list()
for (k in 1:nfold) {
fd <- xgb_folds[[k]]
succ <- xgb.iter.update(fd$booster, fd$dtrain, i - 1, obj)
msg[[k]] <- xgb.iter.eval(fd$booster, fd$watchlist, i - 1, feval) %>% str_split("\t") %>% .[[1]]
}
ret <- xgb.cv.aggcv(msg, showsd)
history <- c(history, ret)
if(verbose)
if (0 == (i - 1L) %% print.every.n)
cat(ret, "\n", sep="")
# extract parameters that can affect the relationship b/w #trees and #iterations
num_class <- max(as.numeric(NVL(params[['num_class']], 1)), 1)
num_parallel_tree <- max(as.numeric(NVL(params[['num_parallel_tree']], 1)), 1)
# early stopping
if (!is.null(early.stop.round)){
score <- strsplit(ret,'\\s+')[[1]][2 + length(metrics)]
score <- strsplit(score,'\\+|:')[[1]][[2]]
score <- as.numeric(score)
if ( (maximize && score > bestScore) || (!maximize && score < bestScore)) {
bestScore <- score
bestInd <- i
} else {
if (i - bestInd >= early.stop.round) {
earlyStopflag <- TRUE
cat('Stopping. Best iteration:', bestInd, '\n')
break
}
}
}
}
# those are fixed for CV (no training continuation)
begin_iteration <- 1
end_iteration <- nrounds
# synchronous CV boosting: run CV folds' models within each iteration
for (iteration in begin_iteration:end_iteration) {
for (f in cb$pre_iter) f()
msg <- lapply(bst_folds, function(fd) {
xgb.iter.update(fd$bst, fd$dtrain, iteration - 1, obj)
xgb.iter.eval(fd$bst, fd$watchlist, iteration - 1, feval)
})
msg <- simplify2array(msg)
bst_evaluation <- rowMeans(msg)
bst_evaluation_err <- sqrt(rowMeans(msg^2) - bst_evaluation^2)
for (f in cb$post_iter) f()
if (stop_condition) break
}
for (f in cb$finalize) f(finalize=TRUE)
if (prediction) {
for (k in 1:nfold) {
fd <- xgb_folds[[k]]
if (!is.null(early.stop.round) && earlyStopflag) {
res <- xgb.iter.eval(fd$booster, fd$watchlist, bestInd - 1, feval, prediction)
} else {
res <- xgb.iter.eval(fd$booster, fd$watchlist, nrounds - 1, feval, prediction)
}
if (mat_pred) {
pred_mat <- matrix(res[[2]],num_class,length(fd$index))
predictValues[fd$index,] <- t(pred_mat)
} else {
predictValues[fd$index] <- res[[2]]
}
}
}
# the CV result
ret <- list(
call = match.call(),
params = params,
callbacks = callbacks,
evaluation_log = evaluation_log,
niter = end_iteration,
folds = folds
)
ret <- c(ret, basket)
colnames <- str_split(string = history[1], pattern = "\t")[[1]] %>% .[2:length(.)] %>% str_extract(".*:") %>% str_replace(":","") %>% str_replace("-", ".")
colnamesMean <- paste(colnames, "mean")
if(showsd) colnamesStd <- paste(colnames, "std")
colnames <- c()
if(showsd) for(i in 1:length(colnamesMean)) colnames <- c(colnames, colnamesMean[i], colnamesStd[i])
else colnames <- colnamesMean
type <- rep(x = "numeric", times = length(colnames))
dt <- utils::read.table(text = "", colClasses = type, col.names = colnames) %>% as.data.table
split <- str_split(string = history, pattern = "\t")
for(line in split) dt <- line[2:length(line)] %>% str_extract_all(pattern = "\\d*\\.+\\d*") %>% unlist %>% as.numeric %>% as.list %>% {rbindlist( list( dt, .), use.names = F, fill = F)}
if (prediction) {
return( list( dt = dt,pred = predictValues))
}
return(dt)
class(ret) <- 'xgb.cv.synchronous'
invisible(ret)
}
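# --- editor's illustration (not package source) ---
# A minimal sketch of driving the callbacks mechanism above from user code;
# assumes 'dtrain' is an xgb.DMatrix as in the documentation examples.
# cb.cv.predict(save_models = TRUE) collects the per-fold models mentioned
# in the return-value docs.
cv <- xgb.cv(params = list(objective = "binary:logistic", max_depth = 3, eta = 1),
             data = dtrain, nrounds = 10, nfold = 5,
             callbacks = list(cb.cv.predict(save_models = TRUE)))
str(cv$models)  # a list of nfold xgb.Booster models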
# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by data.table...
globalVariables(".")
#' Print xgb.cv result
#'
#' Prints formatted results of \code{xgb.cv}.
#'
#' @param x an \code{xgb.cv.synchronous} object
#' @param verbose whether to print detailed data
#' @param ... passed to \code{data.table.print}
#'
#' @details
#' When not verbose, only the evaluation results are printed,
#' including the best iteration (when available).
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' train <- agaricus.train
#' cv <- xgb.cv(data = train$data, label = train$label, nfold = 5, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' print(cv)
#' print(cv, verbose=TRUE)
#'
#' @rdname print.xgb.cv
#' @method print xgb.cv.synchronous
#' @export
print.xgb.cv.synchronous <- function(x, verbose=FALSE, ...) {
cat('##### xgb.cv ', length(x$folds), '-folds\n', sep='')
if (verbose) {
if (!is.null(x$call)) {
cat('call:\n ')
print(x$call)
}
if (!is.null(x$params)) {
cat('params (as set within xgb.cv):\n')
cat( ' ',
paste(names(x$params),
paste0('"', unlist(x$params), '"'),
sep=' = ', collapse=', '), '\n', sep='')
}
if (!is.null(x$callbacks) && length(x$callbacks) > 0) {
cat('callbacks:\n')
lapply(callback.calls(x$callbacks), function(x) {
cat(' ')
print(x)
})
}
for (n in c('niter', 'best_iteration', 'best_ntreelimit')) {
if (is.null(x[[n]]))
next
cat(n, ': ', x[[n]], '\n', sep='')
}
if (!is.null(x$pred)) {
cat('pred:\n')
str(x$pred)
}
}
if (verbose)
cat('evaluation_log:\n')
print(x$evaluation_log, row.names = FALSE, ...)
if (!is.null(x$best_iteration)) {
cat('Best iteration:\n')
print(x$evaluation_log[x$best_iteration], row.names = FALSE, ...)
}
invisible(x)
}


@@ -2,11 +2,6 @@
#'
#' Save an xgboost model to a text file. The dump can be parsed later.
#'
#' @importFrom magrittr %>%
#' @importFrom stringr str_replace
#' @importFrom data.table fread
#' @importFrom data.table :=
#' @importFrom data.table setnames
#' @param model the model object.
#' @param fname the name of the text file where to save the model text dump. If not provided or set to \code{NULL} the function will return the model as a \code{character} vector.
#' @param fmap feature map file representing the type of feature.
@@ -15,10 +10,11 @@
#' See demo/ for a walkthrough example in R, and
#' \url{https://github.com/dmlc/xgboost/blob/master/demo/data/featmap.txt}
#' for an example of the format.
#' @param with.stats whether dump statistics of splits
#' @param with_stats whether dump statistics of splits
#' When this option is on, the model dump comes with two additional statistics:
#' gain is the approximate loss function gain we get in each split;
#' cover is the sum of the second order gradient in each node.
#' @param ... currently not used
#'
#' @return
#' If \code{fname} is not provided or set to \code{NULL}, the function returns the model as a \code{character} vector; otherwise it returns \code{TRUE}.
@@ -28,43 +24,36 @@
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#' # save the model in file 'xgb.model.dump'
#' xgb.dump(bst, 'xgb.model.dump', with.stats = TRUE)
#' xgb.dump(bst, 'xgb.model.dump', with_stats = TRUE)
#'
#' # print the model without saving it to a file
#' print(xgb.dump(bst))
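#' # an illustrative sketch (editor's addition): with_stats = TRUE appends the
#' # gain and cover statistics described above to each split line
#' head(xgb.dump(bst, with_stats = TRUE))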
#' @export
xgb.dump <- function(model = NULL, fname = NULL, fmap = "", with.stats=FALSE) {
if (class(model) != "xgb.Booster") {
stop("model: argument must be type xgb.Booster")
xgb.dump <- function(model = NULL, fname = NULL, fmap = "", with_stats=FALSE, ...) {
check.deprecation(...)
if (class(model) != "xgb.Booster")
stop("model: argument must be of type xgb.Booster")
if (!(class(fname) %in% c("character", "NULL") && length(fname) <= 1))
stop("fname: argument must be of type character (when provided)")
if (!(class(fmap) %in% c("character", "NULL") && length(fmap) <= 1))
stop("fmap: argument must be of type character (when provided)")
model <- xgb.Booster.check(model)
model_dump <- .Call("XGBoosterDumpModel_R", model$handle, fmap, as.integer(with_stats), PACKAGE = "xgboost")
if (is.null(fname))
model_dump <- stri_replace_all_regex(model_dump, '\t', '')
model_dump <- unlist(stri_split_regex(model_dump, '\n'))
model_dump <- grep('^\\s*$', model_dump, invert = TRUE, value = TRUE)
if (is.null(fname)) {
return(model_dump)
} else {
model <- xgb.Booster.check(model)
}
if (!(class(fname) %in% c("character", "NULL") && length(fname) <= 1)) {
stop("fname: argument must be type character (when provided)")
}
if (!(class(fmap) %in% c("character", "NULL") && length(fname) <= 1)) {
stop("fmap: argument must be type character (when provided)")
}
longString <- .Call("XGBoosterDumpModel_R", model$handle, fmap, as.integer(with.stats), PACKAGE = "xgboost")
dt <- fread(paste(longString, collapse = ""), sep = "\n", header = F)
setnames(dt, "Lines")
if(is.null(fname)) {
result <- dt[Lines != "0"][, Lines := str_replace(Lines, "^\t+", "")][Lines != ""][, paste(Lines)]
return(result)
} else {
result <- dt[Lines != "0"][Lines != ""][, paste(Lines)] %>% writeLines(fname)
writeLines(model_dump, fname)
return(TRUE)
}
}
# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by data.table...
globalVariables(c("Lines", "."))

R-package/R/xgb.ggplot.R

@@ -0,0 +1,135 @@
# ggplot backend for the xgboost plotting facilities
#' @rdname xgb.plot.importance
#' @export
xgb.ggplot.importance <- function(importance_matrix = NULL, top_n = NULL, measure = NULL,
rel_to_first = FALSE, n_clusters = c(1:10), ...) {
importance_matrix <- xgb.plot.importance(importance_matrix, top_n = top_n, measure = measure,
rel_to_first = rel_to_first, plot = FALSE, ...)
if (!requireNamespace("ggplot2", quietly = TRUE)) {
stop("ggplot2 package is required", call. = FALSE)
}
if (!requireNamespace("Ckmeans.1d.dp", quietly = TRUE)) {
stop("Ckmeans.1d.dp package is required", call. = FALSE)
}
clusters <- suppressWarnings(
Ckmeans.1d.dp::Ckmeans.1d.dp(importance_matrix$Importance, n_clusters)
)
importance_matrix[, Cluster := as.character(clusters$cluster)]
plot <-
ggplot2::ggplot(importance_matrix,
ggplot2::aes(x = factor(Feature, levels = rev(Feature)), y = Importance, width = 0.05),
environment = environment()) +
ggplot2::geom_bar(ggplot2::aes(fill = Cluster), stat = "identity", position = "identity") +
ggplot2::coord_flip() +
ggplot2::xlab("Features") +
ggplot2::ggtitle("Feature importance") +
ggplot2::theme(plot.title = ggplot2::element_text(lineheight = .9, face = "bold"),
panel.grid.major.y = ggplot2::element_blank())
return(plot)
}
#' @rdname xgb.plot.deepness
#' @export
xgb.ggplot.deepness <- function(model = NULL, which = c("2x1", "max.depth", "med.depth", "med.weight")) {
if (!requireNamespace("ggplot2", quietly = TRUE))
stop("ggplot2 package is required for plotting the graph deepness.", call. = FALSE)
which <- match.arg(which)
dt_depths <- xgb.plot.deepness(model = model, plot = FALSE)
dt_summaries <- dt_depths[, .(.N, Cover = mean(Cover)), Depth]
setkey(dt_summaries, 'Depth')
if (which == "2x1") {
p1 <-
ggplot2::ggplot(dt_summaries) +
ggplot2::geom_bar(ggplot2::aes(x = Depth, y = N), stat = "Identity") +
ggplot2::xlab("") +
ggplot2::ylab("Number of leafs") +
ggplot2::ggtitle("Model complexity") +
ggplot2::theme(
plot.title = ggplot2::element_text(lineheight = 0.9, face = "bold"),
panel.grid.major.y = ggplot2::element_blank(),
axis.ticks = ggplot2::element_blank(),
axis.text.x = ggplot2::element_blank()
)
p2 <-
ggplot2::ggplot(dt_summaries) +
ggplot2::geom_bar(ggplot2::aes(x = Depth, y = Cover), stat = "Identity") +
ggplot2::xlab("Leaf depth") +
ggplot2::ylab("Weighted cover")
multiplot(p1, p2, cols = 1)
return(invisible(list(p1, p2)))
} else if (which == "max.depth") {
p <-
ggplot2::ggplot(dt_depths[, max(Depth), Tree]) +
ggplot2::geom_jitter(ggplot2::aes(x = Tree, y = V1),
height = 0.15, alpha=0.4, size=3, stroke=0) +
ggplot2::xlab("tree #") +
ggplot2::ylab("Max tree leaf depth")
return(p)
} else if (which == "med.depth") {
p <-
ggplot2::ggplot(dt_depths[, median(as.numeric(Depth)), Tree]) +
ggplot2::geom_jitter(ggplot2::aes(x = Tree, y = V1),
height = 0.15, alpha=0.4, size=3, stroke=0) +
ggplot2::xlab("tree #") +
ggplot2::ylab("Median tree leaf depth")
return(p)
} else if (which == "med.weight") {
p <-
ggplot2::ggplot(dt_depths[, median(abs(Weight)), Tree]) +
ggplot2::geom_point(ggplot2::aes(x = Tree, y = V1),
alpha=0.4, size=3, stroke=0) +
ggplot2::xlab("tree #") +
ggplot2::ylab("Median absolute leaf weight")
return(p)
}
}
# Plot multiple ggplot graph aligned by rows and columns.
# ... the plots
# cols number of columns
# internal utility function
multiplot <- function(..., cols = 1) {
plots <- list(...)
num_plots = length(plots)
layout <- matrix(seq(1, cols * ceiling(num_plots / cols)),
ncol = cols, nrow = ceiling(num_plots / cols))
if (num_plots == 1) {
print(plots[[1]])
} else {
grid::grid.newpage()
grid::pushViewport(grid::viewport(layout = grid::grid.layout(nrow(layout), ncol(layout))))
for (i in 1:num_plots) {
# Get the i,j matrix positions of the regions that contain this subplot
matchidx <- as.data.table(which(layout == i, arr.ind = TRUE))
print(
plots[[i]], vp = grid::viewport(
layout.pos.row = matchidx$row,
layout.pos.col = matchidx$col
)
)
}
}
}
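# a minimal usage note (editor's addition): multiplot is the internal helper
# used by xgb.ggplot.deepness above to stack two ggplot objects, e.g.
#   multiplot(p1, p2, cols = 1)   # draws p1 above p2 in a single column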
globalVariables(c(
"Cluster", "ggplot", "aes", "geom_bar", "coord_flip", "xlab", "ylab", "ggtitle", "theme",
"element_blank", "element_text"
))


@@ -2,14 +2,6 @@
#'
#' Create a \code{data.table} of the most important features of a model.
#'
#' @importFrom data.table data.table
#' @importFrom data.table setnames
#' @importFrom data.table :=
#' @importFrom magrittr %>%
#' @importFrom Matrix colSums
#' @importFrom Matrix cBind
#' @importFrom Matrix sparseVector
#'
#' @param feature_names names of each feature as a \code{character} vector. Can be extracted from a sparse matrix (see example). If model dump already contains feature names, this argument should be \code{NULL}.
#' @param model generated by the \code{xgb.train} function.
#' @param data the dataset used for the training step. Will be used with \code{label} parameter for co-occurrence computation. More information in the \code{Details} section. This parameter is optional.
@@ -46,14 +38,13 @@
#' @examples
#' data(agaricus.train, package='xgboost')
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#'
#' # agaricus.train$data@@Dimnames[[2]] represents the column names of the sparse matrix.
#' xgb.importance(agaricus.train$data@@Dimnames[[2]], model = bst)
#' xgb.importance(colnames(agaricus.train$data), model = bst)
#'
#' # Same thing with co-occurrence computation this time
#' xgb.importance(agaricus.train$data@@Dimnames[[2]], model = bst, data = agaricus.train$data, label = agaricus.train$label)
#' xgb.importance(colnames(agaricus.train$data), model = bst, data = agaricus.train$data, label = agaricus.train$label)
#'
#' @export
xgb.importance <- function(feature_names = NULL, model = NULL, data = NULL, label = NULL, target = function(x) ( (x + label) == 2)){
@@ -84,7 +75,7 @@ xgb.importance <- function(feature_names = NULL, model = NULL, data = NULL, labe
data.table(Feature = feature_names, Weight = weights)
}
model.text.dump <- xgb.dump(model = model, with.stats = T)
model.text.dump <- xgb.dump(model = model, with_stats = T)
if(model.text.dump[2] == "bias:"){
result <- model.text.dump %>% linearDump(feature_names, .)
@@ -112,4 +103,4 @@ xgb.importance <- function(feature_names = NULL, model = NULL, data = NULL, labe
# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by data.table...
globalVariables(c(".", "Feature", "Split", "No", "Missing", "MissingNo", "RealCover"))
globalVariables(c(".", ".N", "Gain", "Frequency", "Feature", "Split", "No", "Missing", "MissingNo", "RealCover"))


@@ -9,8 +9,8 @@
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' xgb.save(bst, 'xgb.model')
#' bst <- xgb.load('xgb.model')
#' pred <- predict(bst, test$data)
@@ -26,6 +26,6 @@ xgb.load <- function(modelfile) {
} else {
bst <- xgb.handleToBooster(handle, NULL)
}
bst <- xgb.Booster.check(bst)
bst <- xgb.Booster.check(bst, saveraw = TRUE)
return(bst)
}


@@ -1,153 +1,122 @@
#' Parse boosted tree model text dump
#' Parse a boosted tree model text dump
#'
#' Parse a boosted tree model text dump and return a \code{data.table}.
#' Parse a boosted tree model text dump into a \code{data.table} structure.
#'
#' @importFrom data.table data.table
#' @importFrom data.table set
#' @importFrom data.table rbindlist
#' @importFrom data.table copy
#' @importFrom data.table :=
#' @importFrom magrittr %>%
#' @importFrom magrittr not
#' @importFrom magrittr add
#' @importFrom stringr str_extract
#' @importFrom stringr str_split
#' @importFrom stringr str_trim
#' @param feature_names names of each feature as a character vector. Can be extracted from a sparse matrix (see example). If the model already contains feature names, this argument should be \code{NULL} (default value).
#' @param model object created by the \code{xgb.train} function.
#' @param text \code{character} vector generated by the \code{xgb.dump} function. Model dump must include the gain per feature and per tree (parameter \code{with.stats = TRUE} in function \code{xgb.dump}).
#' @param n_first_tree limit the plot to the \code{n} first trees. If set to \code{NULL}, all trees of the model are plotted. Performance can be low depending of the size of the model.
#' @param feature_names character vector of feature names. If the model already
#' contains feature names, this argument should be \code{NULL} (default value)
#' @param model object of class \code{xgb.Booster}
#' @param text \code{character} vector previously generated by the \code{xgb.dump}
#' function (where parameter \code{with_stats = TRUE} should have been set).
#' @param n_first_tree limit the parsing to the \code{n} first trees.
#' If set to \code{NULL}, all trees of the model are parsed.
#'
#' @return A \code{data.table} of the features used in the model with their gain, cover and few other information.
#' @return
#' A \code{data.table} with detailed information about model trees' nodes.
#'
#' @details
#' General function to convert a text dump of a tree model into a \code{data.table}.
#'
#' The purpose is to help the user explore the model and get a better understanding of it.
#'
#' The columns of the \code{data.table} are:
#'
#' \itemize{
#' \item \code{ID}: unique identifier of a node ;
#' \item \code{Feature}: feature used in the tree to operate a split. When Leaf is indicated, it is the end of a branch ;
#' \item \code{Split}: value of the chosen feature where is operated the split ;
#' \item \code{Yes}: ID of the feature for the next node in the branch when the split condition is met ;
#' \item \code{No}: ID of the feature for the next node in the branch when the split condition is not met ;
#' \item \code{Missing}: ID of the feature for the next node in the branch for observation where the feature used for the split are not provided ;
#' \item \code{Quality}: it's the gain related to the split in this specific node ;
#' \item \code{Cover}: metric to measure the number of observation affected by the split ;
#' \item \code{Tree}: ID of the tree. It is included in the main ID ;
#' \item \code{Yes.Feature}, \code{No.Feature}, \code{Yes.Cover}, \code{No.Cover}, \code{Yes.Quality} and \code{No.Quality}: data related to the pointer in \code{Yes} or \code{No} column ;
#' \item \code{Tree}: ID of a tree in a model
#' \item \code{Node}: ID of a node in a tree
#' \item \code{ID}: unique identifier of a node in a model
#' \item \code{Feature}: for a branch node, it's a feature id or name (when available);
#' for a leaf node, it is simply labeled as \code{'Leaf'}
#' \item \code{Split}: location of the split for a branch node (split condition is always "less than")
#' \item \code{Yes}: ID of the next node when the split condition is met
#' \item \code{No}: ID of the next node when the split condition is not met
#' \item \code{Missing}: ID of the next node when branch value is missing
#' \item \code{Quality}: either the split gain (change in loss) or the leaf value
#' \item \code{Cover}: metric related to the number of observations either seen by a split
#' or collected by a leaf during training.
#' }
#'
#'
#' @examples
#' # Basic use:
#'
#' data(agaricus.train, package='xgboost')
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#'
#' # agaricus.train$data@@Dimnames[[2]] represents the column names of the sparse matrix.
#' xgb.model.dt.tree(feature_names = agaricus.train$data@@Dimnames[[2]], model = bst)
#' (dt <- xgb.model.dt.tree(colnames(agaricus.train$data), bst))
#'
#'
#' # How to match feature names of splits that are following a current 'Yes' branch:
#'
#' merge(dt, dt[, .(ID, Y.Feature=Feature)], by.x='Yes', by.y='ID', all.x=TRUE)[order(Tree,Node)]
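#' # An illustrative sketch (editor's addition): total gain contributed by each
#' # feature, using the Quality column described above
#' dt[Feature != "Leaf", .(TotalGain = sum(Quality)), by = Feature][order(-TotalGain)]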
#'
#' @export
xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL, n_first_tree = NULL){
xgb.model.dt.tree <- function(feature_names = NULL, model = NULL, text = NULL,
n_first_tree = NULL){
if (!class(feature_names) %in% c("character", "NULL")) {
stop("feature_names: Has to be a vector of character or NULL if the model dump already contains feature name. Look at this function documentation to see where to get feature names.")
stop("feature_names: Has to be a vector of character\n",
" or NULL if the model dump already contains feature names.\n",
" Look at this function documentation to see where to get feature names.")
}
if (class(model) != "xgb.Booster" & class(text) != "character") {
"model: Has to be an object of class xgb.Booster model generaged by the xgb.train function.\n" %>%
paste0("text: Has to be a vector of character or NULL if a path to the model dump has already been provided.") %>%
stop()
stop("Either 'model' has to be an object of class xgb.Booster\n",
" or 'text' has to be a character vector with the result of xgb.dump\n",
" (or NULL if the model was provided).")
}
if (!class(n_first_tree) %in% c("numeric", "NULL") | length(n_first_tree) > 1) {
stop("n_first_tree: Has to be a numeric vector of size 1.")
}
if(is.null(text)){
text <- xgb.dump(model = model, with.stats = T)
if(is.null(text)){
text <- xgb.dump(model = model, with_stats = T)
}
position <- str_match(text, "booster") %>% is.na %>% not %>% which %>% c(length(text) + 1)
extract <- function(x, pattern) str_extract(x, pattern) %>% str_split("=") %>% lapply(function(x) x[2] %>% as.numeric) %>% unlist
n_round <- min(length(position) - 1, n_first_tree)
addTreeId <- function(x, i) paste(i,x,sep = "-")
allTrees <- data.table()
position <- which(!is.na(stri_match_first_regex(text, "booster")))
add.tree.id <- function(x, i) paste(i, x, sep = "-")
anynumber_regex <- "[-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?"
for (i in 1:n_round){
td <- data.table(t=text)
td[position, Tree := 1L]
td[, Tree := cumsum(ifelse(is.na(Tree), 0L, Tree)) - 1L]
n_first_tree <- min(max(td$Tree), n_first_tree)
td <- td[Tree <= n_first_tree & !grepl('^booster', t)]
td[, Node := stri_match_first_regex(t, "(\\d+):")[,2] %>% as.numeric ]
td[, ID := add.tree.id(Node, Tree)]
td[, isLeaf := !is.na(stri_match_first_regex(t, "leaf"))]
tree <- text[(position[i] + 1):(position[i + 1] - 1)]
# avoid tree made of a leaf only (no split)
if(length(tree) < 2) next
treeID <- i - 1
notLeaf <- str_match(tree, "leaf") %>% is.na
leaf <- notLeaf %>% not %>% tree[.]
branch <- notLeaf %>% tree[.]
idBranch <- str_extract(branch, "\\d*:") %>% str_replace(":", "") %>% addTreeId(treeID)
idLeaf <- str_extract(leaf, "\\d*:") %>% str_replace(":", "") %>% addTreeId(treeID)
featureBranch <- str_extract(branch, "f\\d*<") %>% str_replace("<", "") %>% str_replace("f", "") %>% as.numeric
if(!is.null(feature_names)){
featureBranch <- feature_names[featureBranch + 1]
}
featureLeaf <- rep("Leaf", length(leaf))
splitBranch <- str_extract(branch, paste0("<",anynumber_regex,"\\]")) %>% str_replace("<", "") %>% str_replace("\\]", "")
splitLeaf <- rep(NA, length(leaf))
yesBranch <- extract(branch, "yes=\\d*") %>% addTreeId(treeID)
yesLeaf <- rep(NA, length(leaf))
noBranch <- extract(branch, "no=\\d*") %>% addTreeId(treeID)
noLeaf <- rep(NA, length(leaf))
missingBranch <- extract(branch, "missing=\\d+") %>% addTreeId(treeID)
missingLeaf <- rep(NA, length(leaf))
qualityBranch <- extract(branch, paste0("gain=",anynumber_regex))
qualityLeaf <- extract(leaf, paste0("leaf=",anynumber_regex))
coverBranch <- extract(branch, "cover=\\d*\\.*\\d*")
coverLeaf <- extract(leaf, "cover=\\d*\\.*\\d*")
dt <- data.table(ID = c(idBranch, idLeaf), Feature = c(featureBranch, featureLeaf), Split = c(splitBranch, splitLeaf), Yes = c(yesBranch, yesLeaf), No = c(noBranch, noLeaf), Missing = c(missingBranch, missingLeaf), Quality = c(qualityBranch, qualityLeaf), Cover = c(coverBranch, coverLeaf))[order(ID)][,Tree := treeID]
allTrees <- rbindlist(list(allTrees, dt), use.names = T, fill = F)
}
yes <- allTrees[!is.na(Yes), Yes]
set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
j = "Yes.Feature",
value = allTrees[ID %in% yes, Feature])
set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
j = "Yes.Cover",
value = allTrees[ID %in% yes, Cover])
set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
j = "Yes.Quality",
value = allTrees[ID %in% yes, Quality])
no <- allTrees[!is.na(No), No]
set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
j = "No.Feature",
value = allTrees[ID %in% no, Feature])
set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
j = "No.Cover",
value = allTrees[ID %in% no, Cover])
set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
j = "No.Quality",
value = allTrees[ID %in% no, Quality])
allTrees
# parse branch lines
td[isLeaf==FALSE, c("Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover") := {
rx <- paste0("f(\\d+)<(", anynumber_regex, ")\\] yes=(\\d+),no=(\\d+),missing=(\\d+),",
"gain=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
# skip some indices with spurious capture groups from anynumber_regex
xtr <- stri_match_first_regex(t, rx)[, c(2,3,5,6,7,8,10)]
xtr[, 3:5] <- add.tree.id(xtr[, 3:5], Tree)
lapply(1:ncol(xtr), function(i) xtr[,i])
}]
# assign feature_names when available
td[isLeaf==FALSE & !is.null(feature_names),
Feature := feature_names[as.numeric(Feature) + 1] ]
# parse leaf lines
td[isLeaf==TRUE, c("Feature", "Quality", "Cover") := {
rx <- paste0("leaf=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
xtr <- stri_match_first_regex(t, rx)[, c(2,4)]
c("Leaf", lapply(1:ncol(xtr), function(i) xtr[,i]))
}]
# convert some columns to numeric
numeric_cols <- c("Quality", "Cover")
td[, (numeric_cols) := lapply(.SD, as.numeric), .SDcols=numeric_cols]
td[, t := NULL]
td[, isLeaf := NULL]
td[order(Tree, Node)]
}
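# --- editor's illustration (not package source): what the branch regex captures ---
# anynumber_regex contributes one (exponent) capture group per use, so columns
# 4, 9 and 11 of the match matrix are spurious -- hence the c(2,3,5,6,7,8,10)
# selection in the code above. A made-up dump line for demonstration:
anynumber_regex <- "[-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?"
line <- "0:[f29<2.00001] yes=1,no=2,missing=1,gain=4000.53,cover=1628.25"
rx <- paste0("f(\\d+)<(", anynumber_regex, ")\\] yes=(\\d+),no=(\\d+),missing=(\\d+),",
             "gain=(", anynumber_regex, "),cover=(", anynumber_regex, ")")
stringi::stri_match_first_regex(line, rx)[, c(2,3,5,6,7,8,10)]
# -> "29" "2.00001" "1" "2" "1" "4000.53" "1628.25"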
# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by data.table...
globalVariables(c("ID", "Tree", "Yes", ".", ".N", "Feature", "Cover", "Quality", "No", "Gain", "Frequency"))
globalVariables(c("Tree", "Node", "ID", "Feature", "t", "isLeaf",".SD", ".SDcols"))


@@ -1,153 +1,142 @@
#' Plot multiple graphs at the same time
#'
#' Plot multiple graphs aligned by rows and columns.
#'
#' @importFrom data.table data.table
#' @param cols number of columns
#' @return NULL
multiplot <- function(..., cols = 1) {
plots <- list(...)
numPlots = length(plots)
layout <- matrix(seq(1, cols * ceiling(numPlots / cols)),
ncol = cols, nrow = ceiling(numPlots / cols))
if (numPlots == 1) {
print(plots[[1]])
} else {
grid::grid.newpage()
grid::pushViewport(grid::viewport(layout = grid::grid.layout(nrow(layout), ncol(layout))))
for (i in 1:numPlots) {
# Get the i,j matrix positions of the regions that contain this subplot
matchidx <- as.data.table(which(layout == i, arr.ind = TRUE))
print(
plots[[i]], vp = grid::viewport(
layout.pos.row = matchidx$row,
layout.pos.col = matchidx$col
)
)
}
}
}
#' Parse the graph to extract vector of edges
#' @param element igraph object containing the path from the root to the leaf.
edge.parser <- function(element) {
edges.vector <- igraph::as_ids(element)
t <- tail(edges.vector, n = 1)
l <- length(edges.vector)
list(t,l)
}
#' Extract path from root to leaf from data.table
#' @param dt.tree data.table containing the nodes and edges of the trees
get.paths.to.leaf <- function(dt.tree) {
dt.not.leaf.edges <-
dt.tree[Feature != "Leaf",.(ID, Yes, Tree)] %>% list(dt.tree[Feature != "Leaf",.(ID, No, Tree)]) %>% rbindlist(use.names = F)
trees <- dt.tree[,unique(Tree)]
paths <- list()
for (tree in trees) {
graph <-
igraph::graph_from_data_frame(dt.not.leaf.edges[Tree == tree])
paths.tmp <-
igraph::shortest_paths(graph, from = paste0(tree, "-0"), to = dt.tree[Tree == tree &
Feature == "Leaf", c(ID)])
paths <- c(paths, paths.tmp$vpath)
}
paths
}
#' Plot model trees deepness
#'
#' Generate a graph to plot the distribution of deepness among trees.
#'
#' @importFrom data.table data.table
#' @importFrom data.table rbindlist
#' @importFrom data.table setnames
#' @importFrom data.table :=
#' @importFrom magrittr %>%
#' @param model dump generated by the \code{xgb.train} function.
#'
#' @return Two graphs showing the distribution of the model deepness.
#'
#' Visualizes distributions related to depth of tree leafs.
#' \code{xgb.plot.deepness} uses base R graphics, while \code{xgb.ggplot.deepness} uses the ggplot backend.
#'
#' @param model either an \code{xgb.Booster} model generated by the \code{xgb.train} function
#' or a data.table result of the \code{xgb.model.dt.tree} function.
#' @param plot (base R barplot) whether a barplot should be produced.
#' If FALSE, only a data.table is returned.
#' @param which which distribution to plot (see details).
#' @param ... other parameters passed to \code{barplot} or \code{plot}.
#'
#' @details
#' Display both the number of \code{leaf} and the distribution of \code{weighted observations}
#' by tree deepness level.
#'
#' The purpose of this function is to help the user to find the best trade-off to set
#' the \code{max.depth} and \code{min_child_weight} parameters according to the bias / variance trade-off.
#'
#' See \link{xgb.train} for more information about these parameters.
#'
#' The graph is made of two parts:
#'
#' When \code{which="2x1"}, two distributions with respect to the leaf depth
#' are plotted on top of each other:
#' \itemize{
#' \item Count: number of leaf per level of deepness;
#' \item Weighted cover: normalized weighted cover per leaf (weighted number of instances).
#' \item the distribution of the number of leaves in a tree model at a certain depth;
#' \item the distribution of average weighted number of observations ("cover")
#' ending up in leafs at certain depth.
#' }
#' Those could be helpful in determining sensible ranges of the \code{max_depth}
#' and \code{min_child_weight} parameters.
#'
#' When \code{which="max.depth"} or \code{which="med.depth"}, plots of either maximum or median depth
#' per tree with respect to tree number are created, while \code{which="med.weight"} allows one to see how
#' a tree's median absolute leaf weight changes through the iterations.
#'
#' This function is inspired by the blog post \url{http://aysent.github.io/2015/11/08/random-forest-leaf-visualization.html}
#' This function was inspired by the blog post
#' \url{http://aysent.github.io/2015/11/08/random-forest-leaf-visualization.html}.
#'
#' @return
#'
#' Other than producing plots (when \code{plot=TRUE}), the \code{xgb.plot.deepness} function
#' silently returns a processed data.table where each row corresponds to a terminal leaf in a tree model,
#' and contains information about the leaf's depth, cover, and weight (which is used in calculating predictions).
#'
#' The \code{xgb.ggplot.deepness} silently returns either a list of two ggplot graphs when \code{which="2x1"}
#' or a single ggplot graph for the other \code{which} options.
#'
#' @seealso
#'
#' \code{\link{xgb.train}}, \code{\link{xgb.model.dt.tree}}.
#'
#' @examples
#'
#' data(agaricus.train, package='xgboost')
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 15,
#' eta = 1, nthread = 2, nround = 30, objective = "binary:logistic",
#' min_child_weight = 50)
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 15,
#' eta = 0.1, nthread = 2, nrounds = 50, objective = "binary:logistic",
#' subsample = 0.5, min_child_weight = 2)
#'
#' xgb.plot.deepness(model = bst)
#' xgb.plot.deepness(bst)
#' xgb.ggplot.deepness(bst)
#'
#' xgb.plot.deepness(bst, which='max.depth', pch=16, col=rgb(0,0,1,0.3), cex=2)
#'
#' xgb.plot.deepness(bst, which='med.weight', pch=16, col=rgb(0,0,1,0.3), cex=2)
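#' # an illustrative sketch (editor's addition): inspect the silently returned
#' # per-leaf table (with depth, cover and weight columns, per the docs above)
#' dd <- xgb.plot.deepness(bst, plot = FALSE)
#' head(dd)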
#'
#' @rdname xgb.plot.deepness
#' @export
xgb.plot.deepness <- function(model = NULL) {
if (!requireNamespace("ggplot2", quietly = TRUE)) {
stop("ggplot2 package is required for plotting the graph deepness.",
call. = FALSE)
xgb.plot.deepness <- function(model = NULL, which = c("2x1", "max.depth", "med.depth", "med.weight"),
plot = TRUE, ...) {
if (!(class(model) == "xgb.Booster" || is.data.table(model)))
stop("model: Has to be either an xgb.Booster model generaged by the xgb.train function\n",
"or a data.table result of the xgb.importance function")
if (!requireNamespace("igraph", quietly = TRUE))
stop("igraph package is required for plotting the graph deepness.", call. = FALSE)
which <- match.arg(which)
dt_tree <- model
if (class(model) == "xgb.Booster")
dt_tree <- xgb.model.dt.tree(model = model)
if (!all(c("Feature", "Tree", "ID", "Yes", "No", "Cover") %in% colnames(dt_tree)))
stop("Model tree columns are not as expected!\n",
" Note that this function works only for tree models.")
dt_depths <- merge(get.leaf.depth(dt_tree), dt_tree[, .(ID, Cover, Weight=Quality)], by = "ID")
setkeyv(dt_depths, c("Tree", "ID"))
# count by depth levels, and also calculate average cover at a depth
dt_summaries <- dt_depths[, .(.N, Cover = mean(Cover)), Depth]
setkey(dt_summaries, "Depth")
if (plot) {
if (which == "2x1") {
op <- par(no.readonly = TRUE)
par(mfrow=c(2,1),
oma = c(3,1,3,1) + 0.1,
mar = c(1,4,1,0) + 0.1)
dt_summaries[, barplot(N, border=NA, ylab = 'Number of leaves', ...)]
dt_summaries[, barplot(Cover, border=NA, ylab = "Weighted cover", names.arg=Depth, ...)]
title("Model complexity", xlab = "Leaf depth", outer = TRUE, line = 1)
par(op)
} else if (which == "max.depth") {
dt_depths[, max(Depth), Tree][
, plot(jitter(V1, amount = 0.1) ~ Tree, ylab = 'Max tree leaf depth', xlab = "tree #", ...)]
} else if (which == "med.depth") {
dt_depths[, median(as.numeric(Depth)), Tree][
, plot(jitter(V1, amount = 0.1) ~ Tree, ylab = 'Median tree leaf depth', xlab = "tree #", ...)]
} else if (which == "med.weight") {
dt_depths[, median(abs(Weight)), Tree][
, plot(V1 ~ Tree, ylab = 'Median absolute leaf weight', xlab = "tree #", ...)]
}
}
if (!requireNamespace("igraph", quietly = TRUE)) {
stop("igraph package is required for plotting the graph deepness.",
call. = FALSE)
}
if (!requireNamespace("grid", quietly = TRUE)) {
stop("grid package is required for plotting the graph deepness.",
call. = FALSE)
}
if (class(model) != "xgb.Booster") {
stop("model: Has to be an object of class xgb.Booster model generaged by the xgb.train function.")
}
dt.tree <- xgb.model.dt.tree(model = model)
dt.edge.elements <- data.table()
paths <- get.paths.to.leaf(dt.tree)
dt.edge.elements <-
lapply(paths, edge.parser) %>% rbindlist %>% setnames(c("last.edge", "size")) %>%
merge(dt.tree, by.x = "last.edge", by.y = "ID") %>% rbind(dt.edge.elements)
dt.edge.summarize <-
dt.edge.elements[, .(.N, Cover = sum(Cover)), size][,Cover:= Cover / sum(Cover)]
p1 <-
ggplot2::ggplot(dt.edge.summarize) + ggplot2::geom_line(ggplot2::aes(x = size, y = N, group = 1)) +
ggplot2::xlab("") + ggplot2::ylab("Count") + ggplot2::ggtitle("Model complexity") +
ggplot2::theme(
plot.title = ggplot2::element_text(lineheight = 0.9, face = "bold"),
panel.grid.major.y = ggplot2::element_blank(),
axis.ticks = ggplot2::element_blank(),
axis.text.x = ggplot2::element_blank()
)
p2 <-
ggplot2::ggplot(dt.edge.summarize) + ggplot2::geom_line(ggplot2::aes(x = size, y = Cover, group = 1)) +
ggplot2::xlab("From root to leaf path length") + ggplot2::ylab("Weighted cover")
multiplot(p1,p2,cols = 1)
invisible(dt_depths)
}
# Extract path depths from root to leaf
# from data.table containing the nodes and edges of the trees.
# internal utility function
get.leaf.depth <- function(dt_tree) {
# extract tree graph's edges
dt_edges <- rbindlist(list(
dt_tree[Feature != "Leaf", .(ID, To=Yes, Tree)],
dt_tree[Feature != "Leaf", .(ID, To=No, Tree)]
))
# whether "To" is a leaf:
dt_edges <-
merge(dt_edges,
dt_tree[Feature == "Leaf", .(ID, Leaf = TRUE)],
all.x = TRUE, by.x = "To", by.y = "ID")
dt_edges[is.na(Leaf), Leaf := FALSE]
dt_edges[, {
graph <- igraph::graph_from_data_frame(.SD[,.(ID, To)])
# min(ID) in a tree is a root node
paths_tmp <- igraph::shortest_paths(graph, from = min(ID), to = To[Leaf == TRUE])
# list of paths to each leaf in a tree
paths <- lapply(paths_tmp$vpath, names)
# combine into a resulting path lengths table for a tree
data.table(Depth = sapply(paths, length), ID = To[Leaf == TRUE])
}, by = Tree]
}
# Avoid error messages during CRAN check.
@@ -155,6 +144,6 @@ xgb.plot.deepness <- function(model = NULL) {
# They are mainly column names inferred by data.table...
globalVariables(
c(
"Feature", "Count", "ggplot", "aes", "geom_bar", "xlab", "ylab", "ggtitle", "theme", "element_blank", "element_text", "ID", "Yes", "No", "Tree"
".N", "N", "Depth", "Quality", "Cover", "Tree", "ID", "Yes", "No", "Feature"
)
)


@@ -1,79 +1,125 @@
#' Plot feature importance bar graph
#' Plot feature importance as a bar graph
#'
#' Read a data.table containing feature importance details and plot it (for both GLM and Trees).
#' Represents previously calculated feature importance as a bar graph.
#' \code{xgb.plot.importance} uses base R graphics, while \code{xgb.ggplot.importance} uses the ggplot backend.
#'
#' @importFrom magrittr %>%
#' @param importance_matrix a \code{data.table} returned by the \code{xgb.importance} function.
#' @param numberOfClusters a \code{numeric} vector containing the min and the max range of the possible number of clusters of bars.
#'
#' @return A \code{ggplot2} bar graph representing each feature by a horizontal bar. Longer is the bar, more important is the feature. Features are classified by importance and clustered by importance. The group is represented through the color of the bar.
#' @param importance_matrix a \code{data.table} returned by \code{\link{xgb.importance}}.
#' @param top_n maximal number of top features to include into the plot.
#' @param measure the name of importance measure to plot.
#' When \code{NULL}, 'Gain' would be used for trees and 'Weight' would be used for gblinear.
#' @param rel_to_first whether importance values should be represented as relative to the highest ranked feature.
#' See Details.
#' @param left_margin (base R barplot) allows adjusting the left margin size to fit feature names.
#' When it is NULL, the existing \code{par('mar')} is used.
#' @param cex (base R barplot) passed as \code{cex.names} parameter to \code{barplot}.
#' @param plot (base R barplot) whether a barplot should be produced.
#' If FALSE, only a data.table is returned.
#' @param n_clusters (ggplot only) a \code{numeric} vector containing the min and the max range
#' of the possible number of clusters of bars.
#' @param ... other parameters passed to \code{barplot} (except horiz, border, cex.names, names.arg, and las).
#'
#' @details
#' The purpose of this function is to easily represent the importance of each feature of a model.
#' The function returns a ggplot graph, therefore each of its characteristics can be overridden (to customize it).
#' In particular you may want to override the title of the graph. To do so, add \code{+ ggtitle("A GRAPH NAME")} next to the value returned by this function.
#' The graph represents each feature as a horizontal bar of length proportional to the importance of a feature.
#' Features are shown ranked in a decreasing importance order.
#' It works for importances from both \code{gblinear} and \code{gbtree} models.
#'
#' When \code{rel_to_first = FALSE}, the values would be plotted as they were in \code{importance_matrix}.
#' For gbtree model, that would mean being normalized to the total of 1
#' ("what is feature's importance contribution relative to the whole model?").
#' For linear models, \code{rel_to_first = FALSE} would show actual values of the coefficients.
#' Setting \code{rel_to_first = TRUE} allows one to see the picture from the perspective of
#' "what is feature's importance contribution relative to the most important feature?"
#'
#' The ggplot-backend method also performs 1-D clustering of the importance values,
#' with bar colors corresponding to different clusters that have somewhat similar importance values.
#'
#' @return
#' The \code{xgb.plot.importance} function creates a \code{barplot} (when \code{plot=TRUE})
#' and silently returns a processed data.table with \code{top_n} features sorted by importance.
#'
#' The \code{xgb.ggplot.importance} function returns a ggplot graph which could be customized afterwards.
#' E.g., to change the title of the graph, add \code{+ ggtitle("A GRAPH NAME")} to the result.
#'
#' @seealso
#' \code{\link[graphics]{barplot}}.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.train)
#'
#' #Both datasets are lists with two items, a sparse matrix and labels
#' #(labels = the outcome column which will be learned).
#' #Each column of the sparse Matrix is a feature in one hot encoding format.
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 3,
#' eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
#'
#' #agaricus.train$data@@Dimnames[[2]] represents the column names of the sparse matrix.
#' importance_matrix <- xgb.importance(agaricus.train$data@@Dimnames[[2]], model = bst)
#' xgb.plot.importance(importance_matrix)
#' importance_matrix <- xgb.importance(colnames(agaricus.train$data), model = bst)
#'
#' xgb.plot.importance(importance_matrix, rel_to_first = TRUE, xlab = "Relative importance")
#'
#' (gg <- xgb.ggplot.importance(importance_matrix, measure = "Frequency", rel_to_first = TRUE))
#' gg + ggplot2::ylab("Frequency")
#'
#' @rdname xgb.plot.importance
#' @export
xgb.plot.importance <-
function(importance_matrix = NULL, numberOfClusters = c(1:10)) {
if (!"data.table" %in% class(importance_matrix)) {
stop("importance_matrix: Should be a data.table.")
}
if (!requireNamespace("ggplot2", quietly = TRUE)) {
stop("ggplot2 package is required for plotting the importance", call. = FALSE)
}
if (!requireNamespace("Ckmeans.1d.dp", quietly = TRUE)) {
stop("Ckmeans.1d.dp package is required for plotting the importance", call. = FALSE)
}
if(isTRUE(all.equal(colnames(importance_matrix), c("Feature", "Gain", "Cover", "Frequency")))){
y.axe.name <- "Gain"
} else if(isTRUE(all.equal(colnames(importance_matrix), c("Feature", "Weight")))){
y.axe.name <- "Weight"
} else {
stop("Importance matrix is not correct (column names issue)")
}
# To avoid issues in clustering when co-occurrences are used
importance_matrix <-
importance_matrix[, .(Gain.or.Weight = sum(get(y.axe.name))), by = Feature]
clusters <-
suppressWarnings(Ckmeans.1d.dp::Ckmeans.1d.dp(importance_matrix[,Gain.or.Weight], numberOfClusters))
importance_matrix[,"Cluster":= clusters$cluster %>% as.character]
plot <-
ggplot2::ggplot(
importance_matrix, ggplot2::aes(
x = stats::reorder(Feature, Gain.or.Weight), y = Gain.or.Weight, width = 0.05
), environment = environment()
) + ggplot2::geom_bar(ggplot2::aes(fill = Cluster), stat = "identity", position =
"identity") + ggplot2::coord_flip() + ggplot2::xlab("Features") + ggplot2::ylab(y.axe.name) + ggplot2::ggtitle("Feature importance") + ggplot2::theme(
plot.title = ggplot2::element_text(lineheight = .9, face = "bold"), panel.grid.major.y = ggplot2::element_blank()
)
return(plot)
xgb.plot.importance <- function(importance_matrix = NULL, top_n = NULL, measure = NULL,
rel_to_first = FALSE, left_margin = 10, cex = NULL, plot = TRUE, ...) {
check.deprecation(...)
if (!"data.table" %in% class(importance_matrix)) {
stop("importance_matrix: Should be a data.table.")
}
imp_names <- colnames(importance_matrix)
if (is.null(measure)) {
if (all(c("Feature", "Gain") %in% imp_names)) {
measure <- "Gain"
} else if (all(c("Feature", "Weight") %in% imp_names)) {
measure <- "Weight"
} else {
stop("Importance matrix column names are not as expected!")
}
} else {
if (!measure %in% imp_names)
stop("Invalid `measure`")
if (!"Feature" %in% imp_names)
stop("Importance matrix column names are not as expected!")
}
# also aggregate, just in case when the values were not yet summed up by feature
importance_matrix <- importance_matrix[, Importance := sum(get(measure)), by = Feature]
# make sure it's ordered
importance_matrix <- importance_matrix[order(-abs(Importance))]
if (!is.null(top_n)) {
top_n <- min(top_n, nrow(importance_matrix))
importance_matrix <- head(importance_matrix, top_n)
}
if (rel_to_first) {
importance_matrix[, Importance := Importance/max(abs(Importance))]
}
if (is.null(cex)) {
cex <- 2.5/log2(1 + nrow(importance_matrix))
}
if (plot) {
op <- par(no.readonly = TRUE)
mar <- op$mar
if (!is.null(left_margin))
mar[2] <- left_margin
par(mar = mar)
# reverse the order of rows to have the highest ranked at the top
importance_matrix[nrow(importance_matrix):1,
barplot(Importance, horiz=TRUE, border=NA, cex.names=cex,
names.arg=Feature, las=1, ...)]
grid(NULL, NA)
# redraw over the grid
importance_matrix[nrow(importance_matrix):1,
barplot(Importance, horiz=TRUE, border=NA, add=TRUE)]
par(op)
}
invisible(importance_matrix)
}
# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by data.table...
globalVariables(
c(
"Feature", "Gain.or.Weight", "Cluster", "ggplot", "aes", "geom_bar", "coord_flip", "xlab", "ylab", "ggtitle", "theme", "element_blank", "element_text", "Gain.or.Weight"
)
)
globalVariables(c("Feature", "Importance"))


@@ -2,19 +2,12 @@
#'
#' Visualization of the ensemble of trees as a single collective unit.
#'
#' @importFrom data.table data.table
#' @importFrom data.table rbindlist
#' @importFrom data.table setnames
#' @importFrom data.table :=
#' @importFrom magrittr %>%
#' @importFrom stringr str_detect
#' @importFrom stringr str_extract
#'
#' @param model dump generated by the \code{xgb.train} function.
#' @param feature_names names of each feature as a \code{character} vector. Can be extracted from a sparse matrix (see example). If model dump already contains feature names, this argument should be \code{NULL}.
#' @param features.keep number of features to keep in each position of the multi trees.
#' @param plot.width width in pixels of the graph to produce
#' @param plot.height height in pixels of the graph to produce
#' @param features_keep number of features to keep in each position of the multi trees.
#' @param plot_width width in pixels of the graph to produce
#' @param plot_height height in pixels of the graph to produce
#' @param ... currently not used
#'
#' @return A \code{DiagrammeR} of the projected ensemble of trees.
#'
@@ -34,7 +27,7 @@
#' Moreover, the trees tend to reuse the same features.
#'
#' The function will project each tree onto one common tree, and keep for each position the
#' \code{features.keep} first features (based on Gain per feature measure).
#' \code{features_keep} first features (based on Gain per feature measure).
#'
#' This function is inspired by this blog post:
#' \url{https://wellecks.wordpress.com/2015/02/21/peering-into-the-black-box-visualizing-lambdamart/}
@@ -42,20 +35,21 @@
#' @examples
#' data(agaricus.train, package='xgboost')
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 15,
#' eta = 1, nthread = 2, nround = 30, objective = "binary:logistic",
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 15,
#' eta = 1, nthread = 2, nrounds = 30, objective = "binary:logistic",
#' min_child_weight = 50)
#'
#' p <- xgb.plot.multi.trees(model = bst, feature_names = agaricus.train$data@Dimnames[[2]], features.keep = 3)
#' p <- xgb.plot.multi.trees(model = bst, feature_names = colnames(agaricus.train$data), features_keep = 3)
#' print(p)
#'
#' @export
xgb.plot.multi.trees <- function(model, feature_names = NULL, features.keep = 5, plot.width = NULL, plot.height = NULL){
xgb.plot.multi.trees <- function(model, feature_names = NULL, features_keep = 5, plot_width = NULL, plot_height = NULL, ...){
check.deprecation(...)
tree.matrix <- xgb.model.dt.tree(feature_names = feature_names, model = model)
# first number of the path represents the tree, then the following numbers are related to the path to follow
# root init
root.nodes <- tree.matrix[str_detect(ID, "\\d+-0"), ID]
root.nodes <- tree.matrix[stri_detect_regex(ID, "\\d+-0"), ID]
tree.matrix[ID %in% root.nodes, abs.node.position:=root.nodes]
precedent.nodes <- root.nodes
@@ -76,11 +70,11 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features.keep = 5,
remove.tree <- . %>% str_replace(pattern = "^\\d+-", replacement = "")
remove.tree <- . %>% stri_replace_first_regex(pattern = "^\\d+-", replacement = "")
tree.matrix[,`:=`(abs.node.position=remove.tree(abs.node.position), Yes=remove.tree(Yes), No=remove.tree(No))]
nodes.dt <- tree.matrix[,.(Quality = sum(Quality)),by = .(abs.node.position, Feature)][,.(Text =paste0(Feature[1:min(length(Feature), features.keep)], " (", Quality[1:min(length(Quality), features.keep)], ")") %>% paste0(collapse = "\n")), by=abs.node.position]
nodes.dt <- tree.matrix[,.(Quality = sum(Quality)),by = .(abs.node.position, Feature)][,.(Text =paste0(Feature[1:min(length(Feature), features_keep)], " (", Quality[1:min(length(Quality), features_keep)], ")") %>% paste0(collapse = "\n")), by=abs.node.position]
edges.dt <- tree.matrix[Feature != "Leaf",.(abs.node.position, Yes)] %>% list(tree.matrix[Feature != "Leaf",.(abs.node.position, No)]) %>% rbindlist() %>% setnames(c("From", "To")) %>% .[,.N,.(From, To)] %>% .[,N:=NULL]
nodes <- DiagrammeR::create_nodes(nodes = nodes.dt[,abs.node.position],
@@ -104,11 +98,11 @@ xgb.plot.multi.trees <- function(model, feature_names = NULL, features.keep = 5,
edges_df = edges,
graph_attrs = "rankdir = LR")
DiagrammeR::render_graph(graph, width = plot.width, height = plot.height)
DiagrammeR::render_graph(graph, width = plot_width, height = plot_height)
}
globalVariables(
c(
"Feature", "no.nodes.abs.pos", "ID", "Yes", "No", "Tree", "yes.nodes.abs.pos", "abs.node.position"
".N", "N", "From", "To", "Text", "Feature", "no.nodes.abs.pos", "ID", "Yes", "No", "Tree", "yes.nodes.abs.pos", "abs.node.position"
)
)
)


@@ -2,14 +2,12 @@
#'
#' Read a tree model text dump and plot the model.
#'
#' @importFrom data.table data.table
#' @importFrom data.table :=
#' @importFrom magrittr %>%
#' @param feature_names names of each feature as a \code{character} vector. Can be extracted from a sparse matrix (see example). If model dump already contains feature names, this argument should be \code{NULL}.
#' @param model generated by the \code{xgb.train} function; passing the model directly avoids the creation of a dump file.
#' @param n_first_tree limit the plot to the n first trees. If \code{NULL}, all trees of the model are plotted. Performance can be low for huge models.
#' @param plot.width the width of the diagram in pixels.
#' @param plot.height the height of the diagram in pixels.
#' @param plot_width the width of the diagram in pixels.
#' @param plot_height the height of the diagram in pixels.
#' @param ... currently not used.
#'
#' @return A \code{DiagrammeR} of the model.
#'
@@ -28,15 +26,14 @@
#' @examples
#' data(agaricus.train, package='xgboost')
#'
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#'
#' # agaricus.train$data@@Dimnames[[2]] represents the column names of the sparse matrix.
#' xgb.plot.tree(feature_names = agaricus.train$data@@Dimnames[[2]], model = bst)
#' xgb.plot.tree(feature_names = colnames(agaricus.train$data), model = bst)
#'
#' @export
xgb.plot.tree <- function(feature_names = NULL, model = NULL, n_first_tree = NULL, plot.width = NULL, plot.height = NULL){
xgb.plot.tree <- function(feature_names = NULL, model = NULL, n_first_tree = NULL, plot_width = NULL, plot_height = NULL, ...){
check.deprecation(...)
if (class(model) != "xgb.Booster") {
stop("model: Has to be an object of class xgb.Booster model generaged by the xgb.train function.")
}
@@ -75,7 +72,7 @@ xgb.plot.tree <- function(feature_names = NULL, model = NULL, n_first_tree = NUL
edges_df = edges,
graph_attrs = "rankdir = LR")
DiagrammeR::render_graph(graph, width = plot.width, height = plot.height)
DiagrammeR::render_graph(graph, width = plot_width, height = plot_height)
}
# Avoid error messages during CRAN check.


@@ -3,29 +3,25 @@
#' Save xgboost model from xgboost or xgb.train
#'
#' @param model the model object.
#' @param fname the name of the binary file.
#' @param fname the name of the file to write.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' xgb.save(bst, 'xgb.model')
#' bst <- xgb.load('xgb.model')
#' pred <- predict(bst, test$data)
#' @export
xgb.save <- function(model, fname) {
if (typeof(fname) != "character") {
stop("xgb.save: fname must be character")
}
if (class(model) == "xgb.Booster") {
model <- xgb.Booster.check(model)
.Call("XGBoosterSaveModel_R", model$handle, fname, PACKAGE = "xgboost")
return(TRUE)
}
stop("xgb.save: the input must be xgb.Booster. Use xgb.DMatrix.save to save
xgb.DMatrix object.")
return(FALSE)
if (typeof(fname) != "character")
stop("fname must be character")
if (class(model) != "xgb.Booster")
stop("the input must be xgb.Booster. Use xgb.DMatrix.save to save xgb.DMatrix object.")
.Call("XGBoosterSaveModel_R", model$handle, fname, PACKAGE = "xgboost")
return(TRUE)
}

View File

@@ -10,20 +10,14 @@
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
#' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
#' eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
#' raw <- xgb.save.raw(bst)
#' bst <- xgb.load(raw)
#' pred <- predict(bst, test$data)
#'
#' @export
xgb.save.raw <- function(model) {
if (class(model) == "xgb.Booster"){
model <- model$handle
}
if (class(model) == "xgb.Booster.handle") {
raw <- .Call("XGBoosterModelToRaw_R", model, PACKAGE = "xgboost")
return(raw)
}
stop("xgb.raw: the input must be xgb.Booster.handle. Use xgb.DMatrix.save to save
xgb.DMatrix object.")
model <- xgb.get.handle(model)
.Call("XGBoosterModelToRaw_R", model, PACKAGE = "xgboost")
}

View File

@@ -1,8 +1,10 @@
#' eXtreme Gradient Boosting Training
#'
#' An advanced interface for training xgboost model. Look at \code{\link{xgboost}} function for a simpler interface.
#' \code{xgb.train} is an advanced interface for training an xgboost model. The \code{xgboost} function provides a simpler interface.
#'
#' @param params the list of parameters.
#' The complete list of parameters is available at \url{http://xgboost.readthedocs.io/en/latest/parameter.html}.
#' Below is a shorter summary:
#'
#' 1. General Parameters
#'
@@ -43,68 +45,139 @@
#' \item \code{binary:logistic} logistic regression for binary classification. Output probability.
#' \item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation.
#' \item \code{num_class} set the number of classes. To use only with multiclass objectives.
#' \item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class}.
#' \item \code{multi:softprob} same as softmax, but output a vector of ndata * nclass, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class.
#' \item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class - 1}.
#' \item \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class.
#' \item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss.
#' }
#' \item \code{base_score} the initial prediction score of all instances, global bias. Default: 0.5
#' \item \code{eval_metric} evaluation metrics for validation data. Users can pass a self-defined function to it. Default: metric will be assigned according to objective (rmse for regression, error for classification, and mean average precision for ranking). The list is provided in the details section.
#' }
#'
#' @param data takes an \code{xgb.DMatrix} as the input.
#' @param data input dataset. \code{xgb.train} takes only an \code{xgb.DMatrix} as the input.
#' \code{xgboost}, in addition, also accepts \code{matrix}, \code{dgCMatrix}, or local data file.
#' @param nrounds the max number of iterations
#' @param watchlist what information should be printed when \code{verbose=1} or
#' \code{verbose=2}. Watchlist is used to specify validation set monitoring
#' during training. For example user can specify
#' watchlist=list(validation1=mat1, validation2=mat2) to watch
#' the performance of each round's model on mat1 and mat2
#' \code{verbose=2}. Watchlist is used to specify validation set monitoring
#' during training. For example user can specify
#' watchlist=list(validation1=mat1, validation2=mat2) to watch
#' the performance of each round's model on mat1 and mat2
#'
#' @param obj customized objective function. Returns gradient and second order
#' gradient with given prediction and dtrain,
#' gradient with given prediction and dtrain.
#' @param feval customized evaluation function. Returns
#' \code{list(metric='metric-name', value='metric-value')} with given
#' prediction and dtrain,
#' \code{list(metric='metric-name', value='metric-value')} with given
#' prediction and dtrain.
#' @param verbose If 0, xgboost will stay silent. If 1, xgboost will print
#' information of performance. If 2, xgboost will print information of both
#' @param print.every.n Print every N progress messages when \code{verbose>0}. Default is 1 which means all messages are printed.
#' @param early.stop.round If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' keeps getting worse consecutively for \code{k} rounds.
#' @param maximize If \code{feval} and \code{early.stop.round} are set, then \code{maximize} must be set as well.
#' \code{maximize=TRUE} means the larger the evaluation score the better.
#' @param save_period save the model to the disk in every \code{save_period} rounds, 0 means no such action.
#' information of performance. If 2, xgboost will print some additional information.
#' Setting \code{verbose > 0} automatically engages the \code{\link{cb.evaluation.log}} and
#' \code{\link{cb.print.evaluation}} callback functions.
#' @param print_every_n Print each n-th iteration evaluation messages when \code{verbose>0}.
#' Default is 1 which means all messages are printed. This parameter is passed to the
#' \code{\link{cb.print.evaluation}} callback.
#' @param early_stopping_rounds If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' doesn't improve for \code{k} rounds.
#' Setting this parameter engages the \code{\link{cb.early.stop}} callback.
#' @param maximize If \code{feval} and \code{early_stopping_rounds} are set,
#' then this parameter must be set as well.
#' When it is \code{TRUE}, it means the larger the evaluation score the better.
#' This parameter is passed to the \code{\link{cb.early.stop}} callback.
#' @param save_period when non-NULL, the model is saved to disk after every \code{save_period} rounds;
#' 0 means save at the end. The saving is handled by the \code{\link{cb.save.model}} callback.
#' @param save_name the name or path for periodically saved model file.
#' @param xgb_model a previously built model to continue the training from.
#' Could be either an object of class \code{xgb.Booster}, or its raw data, or the name of a
#' file with a previously saved model.
#' @param callbacks a list of callback functions to perform various tasks during boosting.
#' See \code{\link{callbacks}}. Some of the callbacks are automatically created depending on the
#' parameters' values. Users can provide either existing or their own callback methods in order
#' to customize the training process.
#' @param ... other parameters to pass to \code{params}.
#' @param label vector of response values. Should not be provided when data is
#' a local data file name or an \code{xgb.DMatrix}.
#' @param missing by default set to NA, which means that NA values should be considered as 'missing'
#' by the algorithm. Sometimes, 0 or another extreme value might be used to represent missing values.
#' This parameter is only used when the input is a dense matrix.
#' @param weight a vector indicating the weight for each row of the input.
#'
#' @details
#' This is the training function for \code{xgboost}.
#' These are the training functions for \code{xgboost}.
#'
#' It supports advanced features such as \code{watchlist}, customized objective function (\code{feval}),
#' therefore it is more flexible than \code{\link{xgboost}} function.
#' The \code{xgb.train} interface supports advanced features such as \code{watchlist},
#' customized objective and evaluation metric functions, therefore it is more flexible
#' than the \code{\link{xgboost}} interface.
#'
#' Parallelization is automatically enabled if \code{OpenMP} is present.
#' Number of threads can also be manually specified via \code{nthread} parameter.
#'
#' \code{eval_metric} parameter (not listed above) is set automatically by Xgboost but can be overriden by parameter. Below is provided the list of different metric optimized by Xgboost to help you to understand how it works inside or to use them with the \code{watchlist} parameter.
#' The evaluation metric is chosen automatically by Xgboost (according to the objective)
#' when the \code{eval_metric} parameter is not provided.
#' Users may set one or several \code{eval_metric} parameters.
#' Note that when using a customized metric, only this single metric can be used.
#' The following is the list of built-in metrics for which Xgboost provides an optimized implementation:
#' \itemize{
#' \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
#' \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
#' \item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss}
#' \item \code{error} Binary classification error rate. It is calculated as \code{(wrong cases) / (all cases)}. For the predictions, the evaluation will regard the instances with prediction value larger than 0.5 as positive instances, and the others as negative instances.
#' \item \code{merror} Multiclass classification error rate. It is calculated as \code{(wrong cases) / (all cases)}.
#' \item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
#' By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
#' A different threshold (e.g., 0.) could be specified as "error@0."
#' \item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
#' \item \code{auc} Area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve} for ranking evaluation.
#' \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
#' }
#'
#' Full list of parameters is available in the Wiki \url{https://github.com/dmlc/xgboost/wiki/Parameters}.
#'
#' This function only accepts an \code{\link{xgb.DMatrix}} object as the input.
#' The following callbacks are automatically created when certain parameters are set:
#' \itemize{
#' \item \code{cb.print.evaluation} is turned on when \code{verbose > 0};
#' and the \code{print_every_n} parameter is passed to it.
#' \item \code{cb.evaluation.log} is on when \code{verbose > 0} and \code{watchlist} is present.
#' \item \code{cb.early.stop}: when \code{early_stopping_rounds} is set.
#' \item \code{cb.save.model}: when \code{save_period > 0} is set.
#' }
#'
#' @return
#' An object of class \code{xgb.Booster} with the following elements:
#' \itemize{
#' \item \code{handle} a handle (pointer) to the xgboost model in memory.
#' \item \code{raw} a cached memory dump of the xgboost model saved as R's \code{raw} type.
#' \item \code{niter} number of boosting iterations.
#' \item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
#' first column corresponding to iteration number and the rest corresponding to evaluation
#' metrics' values. It is created by the \code{\link{cb.evaluation.log}} callback.
#' \item \code{call} a function call.
#' \item \code{params} parameters that were passed to the xgboost library. Note that it does not
#' capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
#' \item \code{callbacks} callback functions that were either automatically assigned or
#' explicitly passed.
#' \item \code{best_iteration} iteration number with the best evaluation metric value
#' (only available with early stopping).
#' \item \code{best_ntreelimit} the \code{ntreelimit} value corresponding to the best iteration,
#' which can further be used in the \code{predict} method
#' (only available with early stopping).
#' \item \code{best_score} the best evaluation metric value during early stopping
#' (only available with early stopping).
#' }
#'
#' @seealso
#' \code{\link{callbacks}},
#' \code{\link{predict.xgb.Booster}},
#' \code{\link{xgb.cv}}
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#'
#' dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
#' dtest <- dtrain
#' dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
#' watchlist <- list(eval = dtest, train = dtrain)
#'
#' ## A simple xgb.train example:
#' param <- list(max_depth = 2, eta = 1, silent = 1,
#' objective = "binary:logistic", eval_metric = "auc")
#' bst <- xgb.train(param, dtrain, nthread = 2, nrounds = 2, watchlist)
#'
#' ## An xgb.train example where custom objective and evaluation metric are used:
#' logregobj <- function(preds, dtrain) {
#' labels <- getinfo(dtrain, "label")
#' preds <- 1/(1 + exp(-preds))
@@ -117,121 +190,145 @@
#' err <- as.numeric(sum(labels != (preds > 0)))/length(labels)
#' return(list(metric = "error", value = err))
#' }
#' param <- list(max.depth = 2, eta = 1, silent = 1, objective=logregobj,eval_metric=evalerror)
#' bst <- xgb.train(param, dtrain, nthread = 2, nround = 2, watchlist)
#' bst <- xgb.train(param, dtrain, nthread = 2, nrounds = 2, watchlist)
#'
#' ## An xgb.train example of using variable learning rates at each iteration:
#' my_etas <- list(eta = c(0.5, 0.1))
#' bst <- xgb.train(param, dtrain, nthread = 2, nrounds = 2, watchlist,
#' callbacks = list(cb.reset.parameters(my_etas)))
#'
#' ## Explicit use of the cb.evaluation.log callback makes it possible to run
#' ## xgb.train silently while still storing the evaluation results:
#' bst <- xgb.train(param, dtrain, nthread = 2, nrounds = 2, watchlist,
#' verbose = 0, callbacks = list(cb.evaluation.log()))
#' print(bst$evaluation_log)
#'
#' ## An 'xgboost' interface example:
#' bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
#' max_depth = 2, eta = 1, nthread = 2, nrounds = 2,
#' objective = "binary:logistic")
#' pred <- predict(bst, agaricus.test$data)
#'
#' @rdname xgb.train
#' @export
xgb.train <- function(params=list(), data, nrounds, watchlist = list(),
obj = NULL, feval = NULL, verbose = 1, print.every.n=1L,
early.stop.round = NULL, maximize = NULL,
save_period = 0, save_name = "xgboost.model", ...) {
xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
obj = NULL, feval = NULL, verbose = 1, print_every_n=1L,
early_stopping_rounds = NULL, maximize = NULL,
save_period = NULL, save_name = "xgboost.model",
xgb_model = NULL, callbacks = list(), ...) {
check.deprecation(...)
params <- check.booster.params(params, ...)
check.custom.obj()
check.custom.eval()
# data & watchlist checks
dtrain <- data
if (typeof(params) != "list") {
stop("xgb.train: first argument params must be list")
}
if (class(dtrain) != "xgb.DMatrix") {
stop("xgb.train: second argument dtrain must be xgb.DMatrix")
}
if (verbose > 1) {
params <- append(params, list(silent = 0))
} else {
params <- append(params, list(silent = 1))
}
if (length(watchlist) != 0 && verbose == 0) {
warning('watchlist is provided but verbose=0, no evaluation information will be printed')
if (class(dtrain) != "xgb.DMatrix")
stop("second argument dtrain must be xgb.DMatrix")
if (length(watchlist) > 0) {
if (typeof(watchlist) != "list" ||
!all(sapply(watchlist, class) == "xgb.DMatrix"))
stop("watchlist must be a list of xgb.DMatrix elements")
evnames <- names(watchlist)
if (is.null(evnames) || any(evnames == ""))
stop("each element of the watchlist must have a name tag")
}
fit.call <- match.call()
dot.params <- list(...)
nms.params <- names(params)
nms.dot.params <- names(dot.params)
if (length(intersect(nms.params,nms.dot.params)) > 0)
stop("Duplicated term in parameters. Please check your list of params.")
params <- append(params, dot.params)
# customized objective and evaluation metric interface
if (!is.null(params$objective) && !is.null(obj))
stop("xgb.train: cannot assign two different objectives")
if (!is.null(params$objective))
if (class(params$objective) == 'function') {
obj <- params$objective
params$objective <- NULL
}
if (!is.null(params$eval_metric) && !is.null(feval))
stop("xgb.train: cannot assign two different evaluation metrics")
if (!is.null(params$eval_metric))
if (class(params$eval_metric) == 'function') {
feval <- params$eval_metric
params$eval_metric <- NULL
}
# Early stopping
if (!is.null(early.stop.round)){
if (!is.null(feval) && is.null(maximize))
stop('Please set maximize to note whether the model is maximizing the evaluation or not.')
if (length(watchlist) == 0)
stop('For early stopping you need at least one set in watchlist.')
if (is.null(maximize) && is.null(params$eval_metric))
stop('Please set maximize to note whether the model is maximizing the evaluation or not.')
if (is.null(maximize))
{
if (params$eval_metric %in% c('rmse','logloss','error','merror','mlogloss')) {
maximize <- FALSE
} else {
maximize <- TRUE
}
}
if (maximize) {
bestScore <- 0
} else {
bestScore <- Inf
}
bestInd <- 0
earlyStopflag = FALSE
if (length(watchlist) > 1)
warning('Only the first data set in watchlist is used for early stopping process.')
# evaluation printing callback
params <- c(params, list(silent = ifelse(verbose > 1, 0, 1)))
print_every_n <- max( as.integer(print_every_n), 1L)
if (!has.callbacks(callbacks, 'cb.print.evaluation') && verbose) {
callbacks <- add.cb(callbacks, cb.print.evaluation(print_every_n))
}
# evaluation log callback: it is automatically enabled only when verbose > 0
evaluation_log <- list()
if (verbose > 0 &&
!has.callbacks(callbacks, 'cb.evaluation.log') &&
length(watchlist) > 0) {
callbacks <- add.cb(callbacks, cb.evaluation.log())
}
# Model saving callback
if (!is.null(save_period) &&
!has.callbacks(callbacks, 'cb.save.model')) {
callbacks <- add.cb(callbacks, cb.save.model(save_period, save_name))
}
# Early stopping callback
stop_condition <- FALSE
if (!is.null(early_stopping_rounds) &&
!has.callbacks(callbacks, 'cb.early.stop')) {
callbacks <- add.cb(callbacks, cb.early.stop(early_stopping_rounds,
maximize=maximize, verbose=verbose))
}
# Sort the callbacks into categories
cb <- categorize.callbacks(callbacks)
handle <- xgb.Booster(params, append(watchlist, dtrain))
# Construct a booster (either a new one or load from xgb_model)
handle <- xgb.Booster(params, append(watchlist, dtrain), xgb_model)
bst <- xgb.handleToBooster(handle)
print.every.n <- max( as.integer(print.every.n), 1L)
for (i in 1:nrounds) {
succ <- xgb.iter.update(bst$handle, dtrain, i - 1, obj)
if (length(watchlist) != 0) {
msg <- xgb.iter.eval(bst$handle, watchlist, i - 1, feval)
if (0 == ( (i - 1) %% print.every.n))
cat(paste(msg, "\n", sep = ""))
if (!is.null(early.stop.round))
{
score <- strsplit(msg,':|\\s+')[[1]][3]
score <- as.numeric(score)
if ( (maximize && score > bestScore) || (!maximize && score < bestScore)) {
bestScore <- score
bestInd <- i
} else {
earlyStopflag = TRUE
if (i - bestInd >= early.stop.round) {
cat('Stopping. Best iteration:', bestInd, '\n')
break
}
}
}
}
if (save_period > 0) {
if (i %% save_period == 0) {
xgb.save(bst, save_name)
}
# extract parameters that can affect the relationship b/w #trees and #iterations
num_class <- max(as.numeric(NVL(params[['num_class']], 1)), 1)
num_parallel_tree <- max(as.numeric(NVL(params[['num_parallel_tree']], 1)), 1)
# When the 'xgb_model' was set, find out how many boosting iterations it has
niter_skip <- 0
if (!is.null(xgb_model)) {
niter_skip <- as.numeric(xgb.attr(bst, 'niter')) + 1
if (length(niter_skip) == 0) {
niter_skip <- xgb.ntree(bst) %/% (num_parallel_tree * num_class)
}
}
bst <- xgb.Booster.check(bst)
if (!is.null(early.stop.round)) {
bst$bestScore <- bestScore
bst$bestInd <- bestInd
# TODO: distributed code
rank <- 0
begin_iteration <- niter_skip + 1
end_iteration <- niter_skip + nrounds
# the main loop for boosting iterations
for (iteration in begin_iteration:end_iteration) {
for (f in cb$pre_iter) f()
xgb.iter.update(bst$handle, dtrain, iteration - 1, obj)
bst_evaluation <- numeric(0)
if (length(watchlist) > 0)
bst_evaluation <- xgb.iter.eval(bst$handle, watchlist, iteration - 1, feval)
xgb.attr(bst$handle, 'niter') <- iteration - 1
for (f in cb$post_iter) f()
if (stop_condition) break
}
for (f in cb$finalize) f(finalize=TRUE)
bst <- xgb.Booster.check(bst, saveraw = TRUE)
# store the total number of boosting iterations
bst$niter <- end_iteration
# store the evaluation results
if (length(evaluation_log) > 0 &&
nrow(evaluation_log) > 0) {
# include the previous compatible history when available
if (class(xgb_model) == 'xgb.Booster' &&
!is.null(xgb_model$evaluation_log) &&
all.equal(colnames(evaluation_log),
colnames(xgb_model$evaluation_log))) {
evaluation_log <- rbindlist(list(xgb_model$evaluation_log, evaluation_log))
}
bst$evaluation_log <- evaluation_log
}
attr(bst, "call") <- fit.call
attr(bst, "params") <- params
bst$call <- match.call()
bst$params <- params
bst$callbacks <- callbacks
return(bst)
}
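
Since `xgb_model` is a new argument in this interface, a minimal sketch of continued training may help; it assumes the `param`, `dtrain`, and `watchlist` objects from the examples above, and relies on the `niter` bookkeeping shown in the function body.

```r
# Hedged sketch: continue boosting on top of a previously built model
# (assumes param, dtrain and watchlist as defined in the examples above).
bst1 <- xgb.train(param, dtrain, nrounds = 2, watchlist)
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, xgb_model = bst1)
# per the iteration bookkeeping above, bst2 should now hold 4 rounds
print(bst2$niter)
```
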

View File

@@ -1,83 +1,28 @@
#' eXtreme Gradient Boosting (Tree) library
#'
#' A simple interface for training xgboost model. Look at \code{\link{xgb.train}} function for a more advanced interface.
#'
#' @param data takes \code{matrix}, \code{dgCMatrix}, local data file or
#' \code{xgb.DMatrix}.
#' @param label the response variable. User should not set this field,
#' if data is local data file or \code{xgb.DMatrix}.
#' @param params the list of parameters.
#'
#' Commonly used ones are:
#' \itemize{
#' \item \code{objective} objective function, common ones are
#' \itemize{
#' \item \code{reg:linear} linear regression
#' \item \code{binary:logistic} logistic regression for classification
#' }
#' \item \code{eta} step size of each boosting step
#' \item \code{max.depth} maximum depth of the tree
#' \item \code{nthread} number of thread used in training, if not set, all threads are used
#' }
#'
#' Look at \code{\link{xgb.train}} for a more complete list of parameters or \url{https://github.com/dmlc/xgboost/wiki/Parameters} for the full list.
#'
#' See also \code{demo/} for walkthrough example in R.
#'
#' @param nrounds the max number of iterations
#' @param verbose If 0, xgboost will stay silent. If 1, xgboost will print
#' information of performance. If 2, xgboost will print information of both
#' performance and construction progress information
#' @param print.every.n Print every N progress messages when \code{verbose>0}. Default is 1 which means all messages are printed.
#' @param missing Missing is only used when input is dense matrix, pick a float
#' value that represents missing value. Sometimes a data use 0 or other extreme value to represents missing values.
#' @param weight a vector indicating the weight for each row of the input.
#' @param early.stop.round If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' keeps getting worse consecutively for \code{k} rounds.
#' @param maximize If \code{feval} and \code{early.stop.round} are set, then \code{maximize} must be set as well.
#' \code{maximize=TRUE} means the larger the evaluation score the better.
#' @param save_period save the model to the disk in every \code{save_period} rounds, 0 means no such action.
#' @param save_name the name or path for periodically saved model file.
#' @param ... other parameters to pass to \code{params}.
#'
#' @details
#' This is the modeling function for Xgboost.
#'
#' Parallelization is automatically enabled if \code{OpenMP} is present.
#'
#' Number of threads can also be manually specified via \code{nthread} parameter.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#' eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
#' pred <- predict(bst, test$data)
#'
# Simple interface for training an xgboost model.
# Its documentation is combined with xgb.train.
#
#' @rdname xgb.train
#' @export
xgboost <- function(data = NULL, label = NULL, missing = NA, weight = NULL,
params = list(), nrounds,
verbose = 1, print.every.n = 1L, early.stop.round = NULL,
maximize = NULL, save_period = 0, save_name = "xgboost.model", ...) {
verbose = 1, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL,
save_period = 0, save_name = "xgboost.model",
xgb_model = NULL, callbacks = list(), ...) {
dtrain <- xgb.get.DMatrix(data, label, missing, weight)
params <- append(params, list(...))
if (verbose > 0) {
watchlist <- list(train = dtrain)
} else {
watchlist <- list()
}
bst <- xgb.train(params, dtrain, nrounds, watchlist, verbose = verbose, print.every.n=print.every.n,
early.stop.round = early.stop.round, maximize = maximize,
save_period = save_period, save_name = save_name)
watchlist <- list()
if (verbose > 0)
watchlist$train <- dtrain
bst <- xgb.train(params, dtrain, nrounds, watchlist, verbose = verbose, print_every_n=print_every_n,
early_stopping_rounds = early_stopping_rounds, maximize = maximize,
save_period = save_period, save_name = save_name,
xgb_model = xgb_model, callbacks = callbacks, ...)
return(bst)
}
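
The simple interface forwards the new callback-related arguments to `xgb.train`. A hedged sketch: with `verbose > 0` the training set is watched, so early stopping can trigger on the training metric.

```r
# Hedged sketch of early stopping through the simple interface
# (assumes the agaricus data shipped with the package).
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               max_depth = 2, eta = 1, nthread = 2, nrounds = 20,
               objective = "binary:logistic", early_stopping_rounds = 3)
```
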
#' Training part from Mushroom Data Set
#'
#' This data set is originally from the Mushroom data set,
@@ -131,3 +76,29 @@ NULL
#' @format A list containing a label vector, and a dgCMatrix object with 1611
#' rows and 126 variables
NULL
# Various imports
#' @importClassesFrom Matrix dgCMatrix dgeMatrix
#' @importFrom Matrix cBind
#' @importFrom Matrix colSums
#' @importFrom Matrix sparse.model.matrix
#' @importFrom Matrix sparseVector
#' @importFrom data.table data.table
#' @importFrom data.table as.data.table
#' @importFrom data.table :=
#' @importFrom data.table rbindlist
#' @importFrom data.table setkey
#' @importFrom data.table setkeyv
#' @importFrom data.table setnames
#' @importFrom magrittr %>%
#' @importFrom stringi stri_detect_regex
#' @importFrom stringi stri_match_first_regex
#' @importFrom stringi stri_replace_first_regex
#' @importFrom stringi stri_replace_all_regex
#' @importFrom stringi stri_split_regex
#' @importFrom utils object.size str tail
#' @importFrom stats predict
#'
#' @import methods
#' @useDynLib xgboost
NULL

View File

@@ -1,8 +1,14 @@
R package for xgboost
=====================
XGBoost R Package for Scalable GBM
==================================
[![CRAN Status Badge](http://www.r-pkg.org/badges/version/xgboost)](http://cran.r-project.org/web/packages/xgboost)
[![CRAN Downloads](http://cranlogs.r-pkg.org/badges/xgboost)](http://cran.rstudio.com/web/packages/xgboost/index.html)
[![Documentation Status](https://readthedocs.org/projects/xgboost/badge/?version=latest)](http://xgboost.readthedocs.org/en/latest/R-package/index.html)
Resources
---------
* [XGBoost R Package Online Documentation](http://xgboost.readthedocs.org/en/latest/R-package/index.html)
- Check this out for detailed documents, examples and tutorials.
Installation
------------
@@ -13,32 +19,55 @@ We are [on CRAN](https://cran.r-project.org/web/packages/xgboost/index.html) now
install.packages('xgboost')
```
For up-to-date version, please install from github. Windows user will need to install [RTools](http://cran.r-project.org/bin/windows/Rtools/) first.
You can also install from our weekly updated drat repo:
```r
install.packages("drat", repos="https://cran.rstudio.com")
drat:::addRepo("dmlc")
install.packages("xgboost", repos="http://dmlc.ml/drat/", type="source")
```
***Important*** Due to the usage of submodules, `install_github` is no longer supported for installing the
latest version of the R package.
For up-to-date version, please install from github.
Windows users will need to install [RTools](http://cran.r-project.org/bin/windows/Rtools/) first. They also need to download [MinGW-W64](http://iweb.dl.sourceforge.net/project/mingw-w64/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/installer/mingw-w64-install.exe) and select the x86_64 architecture during installation.
Run the following command to add MinGW to PATH in Windows if not already added.
```cmd
PATH %PATH%;C:\Program Files\mingw-w64\x86_64-5.3.0-posix-seh-rt_v4-rev0\mingw64\bin
```
To compile xgboost at the root of your drive (so that it ends up in C:\xgboost, as assumed below), run the following bash script.
```bash
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost
git submodule init
git submodule update
alias make='mingw32-make'
cd dmlc-core
make -j4
cd ../rabit
make lib/librabit_empty.a -j4
cd ..
cp make/mingw64.mk config.mk
make -j4
```
Run the following R script to install xgboost package from the root directory.
```r
devtools::install_github('dmlc/xgboost',subdir='R-package')
install.packages('devtools') # if not installed
setwd('C:/xgboost/')
library(devtools)
install('R-package')
```
For more detailed installation instructions, please see [here](http://xgboost.readthedocs.org/en/latest/build.html#r-package-installation).
Examples
--------
* Please visit the [walkthrough examples](demo).
* See also the [example scripts](../demo/kaggle-higgs) for Kaggle Higgs Challenge, including [speedtest script](../demo/kaggle-higgs/speedtest.R) on this dataset and the one related to [Otto challenge](../demo/kaggle-otto), including a [RMarkdown documentation](../demo/kaggle-otto/understandingXGBoostModel.Rmd).
Notes
-----
If you face an issue installing the package using ```devtools::install_github```, with an error like this (even after updating libxml and RCurl, as many forums suggest):
```
devtools::install_github('dmlc/xgboost',subdir='R-package')
Downloading github repo dmlc/xgboost@master
Error in function (type, msg, asError = TRUE) :
Peer certificate cannot be authenticated with given CA certificates
```
To get around this, you can build the package locally as mentioned [here](https://github.com/dmlc/xgboost/issues/347):
```
1. Clone the current repository and set your workspace to xgboost/R-package/
2. Run R CMD INSTALL --build . in terminal to get the tarball.
3. Run install.packages('path_to_the_tarball',repo=NULL) in R to install.
```

View File

@@ -1,7 +1,8 @@
require(xgboost)
require(methods)
# we load in the agaricus dataset
# In this example, we are aiming to predict whether a mushroom can be eaten
# In this example, we are aiming to predict whether a mushroom is edible
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
@@ -15,33 +16,33 @@ class(train$data)
# note: we are putting in a sparse matrix here, xgboost naturally handles sparse input
# use a sparse matrix when your features are sparse (e.g. when you are using one-hot encoded vectors)
print("Training xgboost with sparseMatrix")
bst <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nround = 2,
bst <- xgboost(data = train$data, label = train$label, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic")
# alternatively, you can put in dense matrix, i.e. basic R-matrix
print("Training xgboost with Matrix")
bst <- xgboost(data = as.matrix(train$data), label = train$label, max.depth = 2, eta = 1, nround = 2,
bst <- xgboost(data = as.matrix(train$data), label = train$label, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic")
# you can also put in an xgb.DMatrix object, which stores the label, data and other metadata needed for advanced features
print("Training xgboost with xgb.DMatrix")
dtrain <- xgb.DMatrix(data = train$data, label = train$label)
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nround = 2, nthread = 2,
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nrounds = 2, nthread = 2,
objective = "binary:logistic")
# Verbose = 0,1,2
print("Train xgboost with verbose 0, no message")
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nround = 2,
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic", verbose = 0)
print("Train xgboost with verbose 1, print evaluation metric")
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nround = 2,
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic", verbose = 1)
print("Train xgboost with verbose 2, also print information about tree")
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nround = 2,
bst <- xgboost(data = dtrain, max_depth = 2, eta = 1, nrounds = 2,
nthread = 2, objective = "binary:logistic", verbose = 2)
# you can also specify data as file path to a LibSVM format input
# since we do not have this file with us, the following line is just for illustration
# bst <- xgboost(data = 'agaricus.train.svm', max.depth = 2, eta = 1, nround = 2,objective = "binary:logistic")
# bst <- xgboost(data = 'agaricus.train.svm', max_depth = 2, eta = 1, nrounds = 2,objective = "binary:logistic")
#--------------------basic prediction using xgboost--------------
# you can do prediction using the following line
@@ -64,8 +65,8 @@ raw = xgb.save.raw(bst)
# load binary model to R
bst3 <- xgb.load(raw)
pred3 <- predict(bst3, test$data)
# pred2 should be identical to pred
print(paste("sum(abs(pred3-pred))=", sum(abs(pred2-pred))))
# pred3 should be identical to pred
print(paste("sum(abs(pred3-pred))=", sum(abs(pred3-pred))))
#----------------Advanced features --------------
# to use advanced features, we need to put data in xgb.DMatrix
@@ -77,19 +78,19 @@ watchlist <- list(train=dtrain, test=dtest)
# to train with watchlist, use xgb.train, which contains more advanced features
# watchlist allows us to monitor the evaluation result on all data in the list
print("Train xgboost using xgb.train with watchlist")
bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nround=2, watchlist=watchlist,
bst <- xgb.train(data=dtrain, max_depth=2, eta=1, nrounds=2, watchlist=watchlist,
nthread = 2, objective = "binary:logistic")
# we can change evaluation metrics, or use multiple evaluation metrics
print("train xgboost using xgb.train with watchlist, watch logloss and error")
bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nround=2, watchlist=watchlist,
eval.metric = "error", eval.metric = "logloss",
bst <- xgb.train(data=dtrain, max_depth=2, eta=1, nrounds=2, watchlist=watchlist,
eval_metric = "error", eval_metric = "logloss",
nthread = 2, objective = "binary:logistic")
# xgb.DMatrix can also be saved using xgb.DMatrix.save
xgb.DMatrix.save(dtrain, "dtrain.buffer")
# to load it in, simply call xgb.DMatrix
dtrain2 <- xgb.DMatrix("dtrain.buffer")
bst <- xgb.train(data=dtrain2, max.depth=2, eta=1, nround=2, watchlist=watchlist,
bst <- xgb.train(data=dtrain2, max_depth=2, eta=1, nrounds=2, watchlist=watchlist,
nthread = 2, objective = "binary:logistic")
# information can be extracted from xgb.DMatrix using getinfo
label = getinfo(dtest, "label")
@@ -98,11 +99,11 @@ err <- as.numeric(sum(as.integer(pred > 0.5) != label))/length(label)
print(paste("test-error=", err))
# You can dump the tree you learned using xgb.dump into a text file
xgb.dump(bst, "dump.raw.txt", with.stats = T)
xgb.dump(bst, "dump.raw.txt", with_stats = T)
# Finally, you can check which features are the most important.
print("Most important features (look at column Gain):")
imp_matrix <- xgb.importance(feature_names = train$data@Dimnames[[2]], model = bst)
imp_matrix <- xgb.importance(feature_names = colnames(train$data), model = bst)
print(imp_matrix)
# Feature importance bar plot by gain

View File

@@ -11,8 +11,8 @@ watchlist <- list(eval = dtest, train = dtrain)
#
print('start running example to start from an initial prediction')
# train xgboost for 1 round
param <- list(max.depth=2,eta=1,nthread = 2, silent=1,objective='binary:logistic')
bst <- xgb.train( param, dtrain, 1, watchlist )
param <- list(max_depth=2, eta=1, nthread = 2, silent=1, objective='binary:logistic')
bst <- xgb.train(param, dtrain, 1, watchlist)
# Note: we need the margin value instead of the transformed prediction in set_base_margin
# predicting with outputmargin=TRUE always gives margin values before the logistic transformation
ptrain <- predict(bst, dtrain, outputmargin=TRUE)
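
What typically follows in this demo is attaching the margins as `base_margin`, so that further boosting starts from this prediction rather than from zero; a hedged sketch of that continuation:

```r
# Hedged continuation sketch: attach the margins so the next training call
# boosts from this prediction instead of from the default base score.
ptest <- predict(bst, dtest, outputmargin = TRUE)
setinfo(dtrain, "base_margin", ptrain)
setinfo(dtest, "base_margin", ptest)
# one more boosting round, now starting from the stored margins
bst2 <- xgb.train(param, dtrain, 1, watchlist)
```
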

View File

@@ -65,11 +65,10 @@ output_vector = df[,Y:=0][Improved == "Marked",Y:=1][,Y]
# Following is the same process as other demo
cat("Learning...\n")
bst <- xgboost(data = sparse_matrix, label = output_vector, max.depth = 9,
eta = 1, nthread = 2, nround = 10,objective = "binary:logistic")
bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 9,
eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic")
# sparse_matrix@Dimnames[[2]] represents the column names of the sparse matrix.
importance <- xgb.importance(feature_names = sparse_matrix@Dimnames[[2]], model = bst)
importance <- xgb.importance(feature_names = colnames(sparse_matrix), model = bst)
print(importance)
# According to the matrix below, the most important feature in this dataset for predicting whether the treatment will work is Age. The second most important feature is whether a placebo was received; sex is third. Then come our generated features (AgeDiscret), whose contribution is very low (see the Gain column).

View File

@@ -6,7 +6,7 @@ dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
nround <- 2
param <- list(max.depth=2,eta=1,silent=1,nthread = 2, objective='binary:logistic')
param <- list(max_depth=2, eta=1, silent=1, nthread=2, objective='binary:logistic')
cat('running cross validation\n')
# do cross validation, this will print result out as
@@ -19,7 +19,7 @@ cat('running cross validation, disable standard deviation display\n')
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
xgb.cv(param, dtrain, nround, nfold=5,
metrics={'error'}, showsd = FALSE)
metrics='error', showsd = FALSE)
###
# you can also do cross validation with a customized loss function
@@ -40,12 +40,12 @@ evalerror <- function(preds, dtrain) {
return(list(metric = "error", value = err))
}
param <- list(max.depth=2,eta=1,silent=1,
param <- list(max_depth=2, eta=1, silent=1,
objective = logregobj, eval_metric = evalerror)
# train with customized objective
xgb.cv(params = param, data = dtrain, nrounds = nround, nfold = 5)
# do cross validation with prediction values for each fold
res <- xgb.cv(params = param, data = dtrain, nrounds = nround, nfold = 5, prediction = TRUE)
res$dt
res$evaluation_log
length(res$pred)
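
Since `res$pred` holds the out-of-fold predictions, a hedged sketch of using them follows; with this custom objective the predictions are raw margins, so 0 is the decision threshold.

```r
# Hedged sketch: compute a cross-validated error from the out-of-fold
# margin predictions returned by xgb.cv(..., prediction = TRUE).
labels <- getinfo(dtrain, "label")
cv_err <- mean(as.numeric(res$pred > 0) != labels)
print(paste("out-of-fold CV error:", cv_err))
```
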

View File

@@ -33,7 +33,7 @@ evalerror <- function(preds, dtrain) {
return(list(metric = "error", value = err))
}
param <- list(max.depth=2, eta=1, nthread = 2, silent=1,
param <- list(max_depth=2, eta=1, nthread = 2, silent=1,
objective=logregobj, eval_metric=evalerror)
print ('start training with user customized objective')
# training with customized objective, we can also do step by step training
@@ -57,7 +57,7 @@ logregobjattr <- function(preds, dtrain) {
hess <- preds * (1 - preds)
return(list(grad = grad, hess = hess))
}
param <- list(max.depth=2, eta=1, nthread = 2, silent=1,
param <- list(max_depth=2, eta=1, nthread = 2, silent=1,
objective=logregobjattr, eval_metric=evalerror)
print ('start training with user customized objective, with additional attributes in DMatrix')
# training with customized objective, we can also do step by step training

View File

@@ -7,7 +7,7 @@ dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
# note: for customized objective function, we leave objective as default
# note: what we are getting is margin value in prediction
# you must know what you are doing
param <- list(max.depth=2,eta=1,nthread = 2, silent=1)
param <- list(max_depth=2, eta=1, nthread = 2, silent=1)
watchlist <- list(eval = dtest)
num_round <- 20
# user define objective function, given prediction, return gradient and second order gradient
@@ -34,7 +34,7 @@ print ('start training with early Stopping setting')
bst <- xgb.train(param, dtrain, num_round, watchlist,
objective = logregobj, eval_metric = evalerror, maximize = FALSE,
early.stop.round = 3)
early_stopping_rounds = 3)
bst <- xgb.cv(param, dtrain, num_round, nfold = 5,
objective = logregobj, eval_metric = evalerror,
maximize = FALSE, early.stop.round = 3)
maximize = FALSE, early_stopping_rounds = 3)

View File

@@ -5,7 +5,7 @@ data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
param <- list(max.depth=2,eta=1,silent=1,objective='binary:logistic')
param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
watchlist <- list(eval = dtest, train = dtrain)
nround = 2

View File

@@ -10,7 +10,7 @@ data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
param <- list(max.depth=2, eta=1, silent=1, objective='binary:logistic')
param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
nround = 4
# training the model for two rounds

View File

@@ -0,0 +1,38 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{callbacks}
\alias{callbacks}
\title{Callback closures for booster training.}
\description{
These are used to perform various service tasks either during boosting iterations or at the end.
This approach helps to modularize many such tasks without bloating the main training methods.
}
\details{
By default, a callback function is run after each boosting iteration.
An R-attribute \code{is_pre_iteration} could be set for a callback to define a pre-iteration function.
When a callback function has a \code{finalize} parameter, its finalizer part will also be run after
the boosting is completed.
WARNING: side-effects!!! Be aware that these callback functions access and modify things in
the environment from which they are called, which is a fairly uncommon thing to do in R.
To write a custom callback closure, make sure you first understand the main concepts of R environments.
Check either R documentation on \code{\link[base]{environment}} or the
\href{http://adv-r.had.co.nz/Environments.html}{Environments chapter} from the "Advanced R"
book by Hadley Wickham. Further, the best option is to read the code of some of the existing callbacks -
choose ones that do something similar to what you want to achieve. Also, you would need to get familiar
with the objects available inside of the \code{xgb.train} and \code{xgb.cv} internal environments.
}
\seealso{
\code{\link{cb.print.evaluation}},
\code{\link{cb.evaluation.log}},
\code{\link{cb.reset.parameters}},
\code{\link{cb.early.stop}},
\code{\link{cb.save.model}},
\code{\link{cb.cv.predict}},
\code{\link{xgb.train}},
\code{\link{xgb.cv}}
}
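
A hypothetical minimal callback, sketched after the pattern this page describes; the `env = parent.frame()` convention, the `iteration` variable, and the `name` attribute are assumptions based on this documentation and the built-in callbacks, not a confirmed API contract.

```r
# Hypothetical minimal callback closure: prints the loop counter that
# xgb.train is assumed to keep in its calling frame.
cb.print.iteration <- function() {
  callback <- function(env = parent.frame()) {
    cat("finished iteration", env$iteration, "\n")
  }
  attr(callback, 'name') <- 'cb.print.iteration'
  callback
}
# usage sketch: xgb.train(param, dtrain, nrounds = 2, watchlist,
#                         callbacks = list(cb.print.iteration()))
```
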

View File

@@ -0,0 +1,43 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.cv.predict}
\alias{cb.cv.predict}
\title{Callback closure for returning cross-validation based predictions.}
\usage{
cb.cv.predict(save_models = FALSE)
}
\arguments{
\item{save_models}{a flag for whether to save the folds' models.}
}
\value{
Predictions are returned inside of the \code{pred} element, which is either a vector or a matrix,
depending on the number of prediction outputs per data row. The order of predictions corresponds
to the order of rows in the original dataset. Note that when a custom \code{folds} list is
provided in \code{xgb.cv}, the predictions would only be returned properly when this list is a
non-overlapping list of k sets of indices, as in a standard k-fold CV. The predictions would not be
meaningful when user-provided folds have overlapping indices as in, e.g., random sampling splits.
When some of the indices in the training dataset are not included in user-provided \code{folds},
their prediction value would be \code{NA}.
}
\description{
Callback closure for returning cross-validation based predictions.
}
\details{
This callback function saves predictions for all of the test folds,
and optionally also saves the folds' models.
It is a "finalizer" callback and it uses early stopping information whenever it is available,
thus it must be run after the early stopping callback if the early stopping is used.
Callback function expects the following values to be set in its calling frame:
\code{bst_folds},
\code{basket},
\code{data},
\code{end_iteration},
\code{num_parallel_tree},
\code{num_class}.
}
\seealso{
\code{\link{callbacks}}
}
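
A hedged usage sketch, assuming `param` and `dtrain` from the cross-validation demo; `prediction = TRUE` in `xgb.cv` is believed to engage the same callback implicitly, and the `res$models` element is an assumption tied to `save_models = TRUE`.

```r
# Hedged sketch: explicit use of cb.cv.predict, also keeping the fold models.
res <- xgb.cv(param, dtrain, nrounds = 4, nfold = 5,
              callbacks = list(cb.cv.predict(save_models = TRUE)))
str(res$pred)       # out-of-fold predictions, ordered as the input rows
length(res$models)  # one model per fold when save_models = TRUE
```
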

View File

@@ -0,0 +1,63 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.early.stop}
\alias{cb.early.stop}
\title{Callback closure to activate the early stopping.}
\usage{
cb.early.stop(stopping_rounds, maximize = FALSE, metric_name = NULL,
verbose = TRUE)
}
\arguments{
\item{stopping_rounds}{The number of rounds with no improvement in
the evaluation metric in order to stop the training.}
\item{maximize}{whether to maximize the evaluation metric}
\item{metric_name}{the name of an evaluation column to use as a criterion for early
stopping. If not set, the last column would be used.
Let's say the test data in \code{watchlist} was labelled as \code{dtest},
and one wants to use the AUC in test data for early stopping regardless of where
it is in the \code{watchlist}, then one of the following would need to be set:
\code{metric_name='dtest-auc'} or \code{metric_name='dtest_auc'}.
All dash '-' characters in metric names are considered equivalent to '_'.}
\item{verbose}{whether to print the early stopping information.}
}
\description{
Callback closure to activate the early stopping.
}
\details{
This callback function determines the condition for early stopping
by setting the \code{stop_condition = TRUE} flag in its calling frame.
The following additional fields are assigned to the model's R object:
\itemize{
\item \code{best_score} the evaluation score at the best iteration
\item \code{best_iteration} at which boosting iteration the best score has occurred (1-based index)
\item \code{best_ntreelimit} to use with the \code{ntreelimit} parameter in \code{predict}.
It differs from \code{best_iteration} in multiclass or random forest settings.
}
The same values are also stored as xgb-attributes:
\itemize{
\item \code{best_iteration} is stored as a 0-based iteration index (for interoperability of binary models)
\item \code{best_msg} message string is also stored.
}
At least one data element is required in the evaluation watchlist for early stopping to work.
Callback function expects the following values to be set in its calling frame:
\code{stop_condition},
\code{bst_evaluation},
\code{rank},
\code{bst} (or \code{bst_folds} and \code{basket}),
\code{iteration},
\code{begin_iteration},
\code{end_iteration},
\code{num_parallel_tree}.
}
\seealso{
\code{\link{callbacks}},
\code{\link{xgb.attr}}
}
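
A hedged sketch of explicit early stopping on the test-set AUC, using the `metric_name` convention described above; it assumes the `dtrain`, `dtest`, and `watchlist` objects used throughout this document.

```r
# Hedged sketch: stop on the 'eval' (test) AUC regardless of its
# position in the watchlist, via an explicit cb.early.stop callback.
param <- list(max_depth = 2, eta = 1, silent = 1,
              objective = "binary:logistic", eval_metric = "auc")
bst <- xgb.train(param, dtrain, nrounds = 50, watchlist,
                 callbacks = list(cb.early.stop(stopping_rounds = 3,
                                                maximize = TRUE,
                                                metric_name = "eval-auc")))
print(bst$best_iteration)
```
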

View File

@@ -0,0 +1,32 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.evaluation.log}
\alias{cb.evaluation.log}
\title{Callback closure for logging the evaluation history}
\usage{
cb.evaluation.log()
}
\description{
Callback closure for logging the evaluation history
}
\details{
This callback function appends the current iteration evaluation results \code{bst_evaluation}
available in the calling parent frame to the \code{evaluation_log} list in a calling frame.
The finalizer callback (called with \code{finalize = TRUE} at the end) converts
the \code{evaluation_log} list into a final data.table.
The iteration evaluation result \code{bst_evaluation} must be a named numeric vector.
Note: in the column names of the final data.table, the dash '-' character is replaced with
the underscore '_' in order to make the column names more like regular R identifiers.
Callback function expects the following values to be set in its calling frame:
\code{evaluation_log},
\code{bst_evaluation},
\code{iteration}.
}
\seealso{
\code{\link{callbacks}}
}

View File

@@ -0,0 +1,28 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.print.evaluation}
\alias{cb.print.evaluation}
\title{Callback closure for printing the result of evaluation}
\usage{
cb.print.evaluation(period = 1)
}
\arguments{
\item{period}{results are printed every \code{period} iterations}
}
\description{
Callback closure for printing the result of evaluation
}
\details{
The callback function prints the result of evaluation every \code{period} iterations.
The initial and the last iteration's evaluations are always printed.
Callback function expects the following values to be set in its calling frame:
\code{bst_evaluation} (also \code{bst_evaluation_err} when available),
\code{iteration},
\code{begin_iteration},
\code{end_iteration}.
}
\seealso{
\code{\link{callbacks}}
}
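
A hedged sketch of explicit use; this is assumed to have the same effect as `verbose = 1` with `print_every_n = 5` in `xgb.train`.

```r
# Hedged sketch: print evaluation only every 5th iteration via an
# explicit callback (assumes param, dtrain, watchlist from earlier).
bst <- xgb.train(param, dtrain, nrounds = 20, watchlist, verbose = 0,
                 callbacks = list(cb.print.evaluation(period = 5)))
```
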

View File

@@ -0,0 +1,37 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.reset.parameters}
\alias{cb.reset.parameters}
\title{Callback closure for resetting the booster's parameters at each iteration.}
\usage{
cb.reset.parameters(new_params)
}
\arguments{
\item{new_params}{a list where each element corresponds to a parameter that needs to be reset.
Each element's value must be either a vector of values of length \code{nrounds}
to be set at each iteration,
or a function of two parameters \code{learning_rates(iteration, nrounds)}
which returns a new parameter value by using the current iteration number
and the total number of boosting rounds.}
}
\description{
Callback closure for resetting the booster's parameters at each iteration.
}
\details{
This is a "pre-iteration" callback function used to reset booster's parameters
at the beginning of each iteration.
Note that when training is resumed from some previous model, and a function is used to
reset a parameter value, the \code{nrounds} argument in this function would be the
number of boosting rounds in the current training.
Callback function expects the following values to be set in its calling frame:
\code{bst} or \code{bst_folds},
\code{iteration},
\code{begin_iteration},
\code{end_iteration}.
}
\seealso{
\code{\link{callbacks}}
}
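
A hedged sketch of the function form (the vector form appears in the `xgb.train` examples above); the decay schedule here is illustrative only.

```r
# Hedged sketch: a learning rate that decays geometrically with the
# iteration number (assumes param, dtrain, watchlist from earlier).
my_eta <- function(iteration, nrounds) 0.5 * 0.99 ^ (iteration - 1)
bst <- xgb.train(param, dtrain, nrounds = 10, watchlist,
                 callbacks = list(cb.reset.parameters(list(eta = my_eta))))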

View File

@@ -0,0 +1,34 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.save.model}
\alias{cb.save.model}
\title{Callback closure for saving a model file.}
\usage{
cb.save.model(save_period = 0, save_name = "xgboost.model")
}
\arguments{
\item{save_period}{save the model to disk after every
\code{save_period} iterations; 0 means save the model at the end.}
\item{save_name}{the name or path for the saved model file.
It can contain a \code{\link[base]{sprintf}} formatting specifier
to include the integer iteration number in the file name.
E.g., with \code{save_name} = 'xgboost_%04d.model',
the file saved at iteration 50 would be named "xgboost_0050.model".}
}
\description{
Callback closure for saving a model file.
}
\details{
This callback function saves an xgb-model file, either periodically after every \code{save_period} iterations or at the end.
Callback function expects the following values to be set in its calling frame:
\code{bst},
\code{iteration},
\code{begin_iteration},
\code{end_iteration}.
}
\seealso{
\code{\link{callbacks}}
}
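
A hedged sketch using the sprintf-style name described above; file names and the round count are illustrative.

```r
# Hedged sketch: snapshot every 10 iterations, producing
# xgboost_0010.model, xgboost_0020.model, xgboost_0030.model.
bst <- xgb.train(param, dtrain, nrounds = 30, watchlist,
                 callbacks = list(cb.save.model(save_period = 10,
                                                save_name = "xgboost_%04d.model")))
```
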

View File

@@ -0,0 +1,29 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{dim.xgb.DMatrix}
\alias{dim.xgb.DMatrix}
\title{Dimensions of xgb.DMatrix}
\usage{
\method{dim}{xgb.DMatrix}(x)
}
\arguments{
\item{x}{Object of class \code{xgb.DMatrix}}
}
\description{
Returns a vector of numbers of rows and of columns in an \code{xgb.DMatrix}.
}
\details{
Note: since \code{nrow} and \code{ncol} internally use \code{dim}, they can also
be directly used with an \code{xgb.DMatrix} object.
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
stopifnot(nrow(dtrain) == nrow(train$data))
stopifnot(ncol(dtrain) == ncol(train$data))
stopifnot(all(dim(dtrain) == dim(train$data)))
}

View File

@@ -0,0 +1,36 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{dimnames.xgb.DMatrix}
\alias{dimnames.xgb.DMatrix}
\alias{dimnames<-.xgb.DMatrix}
\title{Handling of column names of \code{xgb.DMatrix}}
\usage{
\method{dimnames}{xgb.DMatrix}(x)
\method{dimnames}{xgb.DMatrix}(x) <- value
}
\arguments{
\item{x}{object of class \code{xgb.DMatrix}}
\item{value}{a list of two elements: the first one is ignored
and the second one is column names}
}
\description{
Only column names are supported for \code{xgb.DMatrix}, thus setting
row names has no effect, and the returned row names would be NULL.
}
\details{
Generic \code{dimnames} methods are used by \code{colnames}.
Since row names are irrelevant, it is recommended to use \code{colnames} directly.
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
dimnames(dtrain)
colnames(dtrain)
colnames(dtrain) <- make.names(1:ncol(train$data))
print(dtrain, verbose=TRUE)
}

View File

@@ -1,15 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.plot.deepness.R
\name{edge.parser}
\alias{edge.parser}
\title{Parse the graph to extract vector of edges}
\usage{
edge.parser(element)
}
\arguments{
\item{element}{igraph object containing the path from the root to the leaf.}
}
\description{
Parse the graph to extract vector of edges
}

View File

@@ -1,15 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.plot.deepness.R
\name{get.paths.to.leaf}
\alias{get.paths.to.leaf}
\title{Extract path from root to leaf from data.table}
\usage{
get.paths.to.leaf(dt.tree)
}
\arguments{
\item{dt.tree}{data.table containing the nodes and edges of the trees}
}
\description{
Extract path from root to leaf from data.table
}

View File

@@ -1,27 +1,26 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/getinfo.xgb.DMatrix.R
\docType{methods}
% Please edit documentation in R/xgb.DMatrix.R
\name{getinfo}
\alias{getinfo}
\alias{getinfo,xgb.DMatrix-method}
\alias{getinfo.xgb.DMatrix}
\title{Get information of an xgb.DMatrix object}
\usage{
getinfo(object, ...)
\S4method{getinfo}{xgb.DMatrix}(object, name)
\method{getinfo}{xgb.DMatrix}(object, name, ...)
}
\arguments{
\item{object}{Object of class \code{xgb.DMatrix}}
\item{...}{other parameters}
\item{name}{the name of the field to get}
\item{name}{the name of the information field to get (see details)}
}
\description{
Get information of an xgb.DMatrix object
}
\details{
The information can be one of the following:
The \code{name} field can be one of the following:
\itemize{
\item \code{label}: the label Xgboost learns from;
@@ -34,8 +33,10 @@ The information can be one of the following:
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
labels <- getinfo(dtrain, 'label')
setinfo(dtrain, 'label', 1-labels)
labels2 <- getinfo(dtrain, 'label')
stopifnot(all(labels2 == 1-labels))
}
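The same accessors work for the other information fields; a short sketch using the \code{weight} field (the unit weights are illustrative only):

# set and read back per-row weights
w <- rep(1, nrow(dtrain))
setinfo(dtrain, 'weight', w)
head(getinfo(dtrain, 'weight'))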


@@ -1,15 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.plot.deepness.R
\name{multiplot}
\alias{multiplot}
\title{Plot multiple graphs at the same time}
\usage{
multiplot(..., cols = 1)
}
\arguments{
\item{cols}{number of columns}
}
\description{
Plot multiple graph aligned by rows and columns.
}


@@ -1,23 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/nrow.xgb.DMatrix.R
\docType{methods}
\name{nrow,xgb.DMatrix-method}
\alias{nrow,xgb.DMatrix-method}
\title{Number of xgb.DMatrix rows}
\usage{
\S4method{nrow}{xgb.DMatrix}(x)
}
\arguments{
\item{x}{Object of class \code{xgb.DMatrix}}
}
\description{
\code{nrow} return the number of rows present in the \code{xgb.DMatrix}.
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
stopifnot(nrow(dtrain) == nrow(train$data))
}


@@ -1,53 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/predict.xgb.Booster.R
\docType{methods}
\name{predict,xgb.Booster-method}
\alias{predict,xgb.Booster-method}
\title{Predict method for eXtreme Gradient Boosting model}
\usage{
\S4method{predict}{xgb.Booster}(object, newdata, missing = NA,
outputmargin = FALSE, ntreelimit = NULL, predleaf = FALSE)
}
\arguments{
\item{object}{Object of class "xgb.Boost"}
\item{newdata}{takes \code{matrix}, \code{dgCMatrix}, local data file or
\code{xgb.DMatrix}.}
\item{missing}{Missing is only used when input is dense matrix, pick a float
value that represents missing value. Sometime a data use 0 or other extreme value to represents missing values.}
\item{outputmargin}{whether the prediction should be shown in the original
value of sum of functions, when outputmargin=TRUE, the prediction is
untransformed margin value. In logistic regression, outputmargin=T will
output value before logistic transformation.}
\item{ntreelimit}{limit number of trees used in prediction, this parameter is
only valid for gbtree, but not for gblinear. set it to be value bigger
than 0. It will use all trees by default.}
\item{predleaf}{whether predict leaf index instead. If set to TRUE, the output will be a matrix object.}
}
\description{
Predicted values based on xgboost model object.
}
\details{
The option \code{ntreelimit} purpose is to let the user train a model with lots
of trees but use only the first trees for prediction to avoid overfitting
(without having to train a new model with less trees).
The option \code{predleaf} purpose is inspired from §3.1 of the paper
\code{Practical Lessons from Predicting Clicks on Ads at Facebook}.
The idea is to use the model as a generator of new features which capture non linear link
from original features.
}
\examples{
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
pred <- predict(bst, test$data)
}


@@ -1,18 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/predict.xgb.Booster.handle.R
\docType{methods}
\name{predict,xgb.Booster.handle-method}
\alias{predict,xgb.Booster.handle-method}
\title{Predict method for eXtreme Gradient Boosting model handle}
\usage{
\S4method{predict}{xgb.Booster.handle}(object, ...)
}
\arguments{
\item{object}{Object of class "xgb.Boost.handle"}
\item{...}{Parameters pass to \code{predict.xgb.Booster}}
}
\description{
Predicted values based on xgb.Booster.handle object.
}


@@ -0,0 +1,129 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{predict.xgb.Booster}
\alias{predict.xgb.Booster}
\alias{predict.xgb.Booster.handle}
\title{Predict method for eXtreme Gradient Boosting model}
\usage{
\method{predict}{xgb.Booster}(object, newdata, missing = NA,
outputmargin = FALSE, ntreelimit = NULL, predleaf = FALSE,
reshape = FALSE, ...)
\method{predict}{xgb.Booster.handle}(object, ...)
}
\arguments{
\item{object}{Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}}
\item{newdata}{takes \code{matrix}, \code{dgCMatrix}, local data file or \code{xgb.DMatrix}.}
\item{missing}{Missing is only used when input is a dense matrix. Pick a float value that represents
missing values in data (e.g., sometimes 0 or some other extreme value is used).}
\item{outputmargin}{whether the prediction should be returned in the form of the original untransformed
sum of predictions from boosting iterations' results. E.g., setting \code{outputmargin=TRUE} for
logistic regression would result in predictions for log-odds instead of probabilities.}
\item{ntreelimit}{limit the number of model's trees or boosting iterations used in prediction (see Details).
It will use all the trees by default (\code{NULL} value).}
\item{predleaf}{whether to predict leaf index instead.}
\item{reshape}{whether to reshape the vector of predictions to a matrix form when there are several
prediction outputs per case. This option has no effect when \code{predleaf = TRUE}.}
\item{...}{Parameters passed to \code{predict.xgb.Booster}}
}
\value{
For regression or binary classification, it returns a vector of length \code{nrow(newdata)}.
For multiclass classification, either a \code{num_class * nrow(newdata)} vector or
a \code{(nrow(newdata), num_class)} dimension matrix is returned, depending on
the \code{reshape} value.
When \code{predleaf = TRUE}, the output is a matrix object with the
number of columns corresponding to the number of trees.
}
\description{
Predicted values based on either xgboost model or model handle object.
}
\details{
Note that \code{ntreelimit} is not necessarily equal to the number of boosting iterations
and it is not necessarily equal to the number of trees in a model.
E.g., in a random forest-like model, \code{ntreelimit} would limit the number of trees,
while for multiclass classification, where there are multiple trees per iteration,
\code{ntreelimit} limits the number of boosting iterations.
Also note that \code{ntreelimit} would currently do nothing for predictions from gblinear,
since gblinear doesn't keep its boosting history.
One possible practical application of the \code{predleaf} option is to use the model
as a generator of new features which capture non-linearity and interactions,
e.g., as implemented in \code{\link{xgb.create.features}}.
}
\examples{
## binary classification:
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
# use all trees by default
pred <- predict(bst, test$data)
# use only the 1st tree
pred <- predict(bst, test$data, ntreelimit = 1)
## multiclass classification in iris dataset:
lb <- as.numeric(iris$Species) - 1
num_class <- 3
set.seed(11)
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 4, eta = 0.5, nthread = 2, nrounds = 10, subsample = 0.5,
objective = "multi:softprob", num_class = num_class)
# predict for softprob returns num_class probability numbers per case:
pred <- predict(bst, as.matrix(iris[, -5]))
str(pred)
# reshape it to a num_class-columns matrix
pred <- matrix(pred, ncol=num_class, byrow=TRUE)
# convert the probabilities to softmax labels
pred_labels <- max.col(pred) - 1
# the following should result in the same error as seen in the last iteration
sum(pred_labels != lb)/length(lb)
# compare that to the predictions from softmax:
set.seed(11)
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 4, eta = 0.5, nthread = 2, nrounds = 10, subsample = 0.5,
objective = "multi:softmax", num_class = num_class)
pred <- predict(bst, as.matrix(iris[, -5]))
str(pred)
all.equal(pred, pred_labels)
# prediction from using only 5 iterations should result
# in the same error as seen in iteration 5:
pred5 <- predict(bst, as.matrix(iris[, -5]), ntreelimit=5)
sum(pred5 != lb)/length(lb)
## random forest-like model of 25 trees for binary classification:
set.seed(11)
bst <- xgboost(data = train$data, label = train$label, max_depth = 5,
nthread = 2, nrounds = 1, objective = "binary:logistic",
num_parallel_tree = 25, subsample = 0.6, colsample_bytree = 0.1)
# Inspect the prediction error vs number of trees:
lb <- test$label
dtest <- xgb.DMatrix(test$data, label=lb)
err <- sapply(1:25, function(n) {
pred <- predict(bst, dtest, ntreelimit=n)
sum((pred > 0.5) != lb)/length(lb)
})
plot(err, type='l', ylim=c(0,0.1), xlab='#trees')
}
\seealso{
\code{\link{xgb.train}}.
}
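To complement the examples above, a minimal sketch of the \code{predleaf} option, assuming the binary classification model \code{bst} and the \code{test} data from the first example:

# one column per tree; each entry is the index of the leaf reached
pred_leaf <- predict(bst, test$data, predleaf = TRUE)
str(pred_leaf)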


@@ -0,0 +1,30 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{print.xgb.Booster}
\alias{print.xgb.Booster}
\title{Print xgb.Booster}
\usage{
\method{print}{xgb.Booster}(x, verbose = FALSE, ...)
}
\arguments{
\item{x}{an xgb.Booster object}
\item{verbose}{whether to print detailed data (e.g., attribute values)}
\item{...}{not currently used}
}
\description{
Print information about xgb.Booster.
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
attr(bst, 'myattr') <- 'memo'
print(bst)
print(bst, verbose=TRUE)
}


@@ -0,0 +1,29 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.DMatrix.R
\name{print.xgb.DMatrix}
\alias{print.xgb.DMatrix}
\title{Print xgb.DMatrix}
\usage{
\method{print}{xgb.DMatrix}(x, verbose = FALSE, ...)
}
\arguments{
\item{x}{an xgb.DMatrix object}
\item{verbose}{whether to print colnames (when present)}
\item{...}{not currently used}
}
\description{
Print information about xgb.DMatrix.
Currently it displays dimensions and presence of info-fields and colnames.
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
dtrain
print(dtrain, verbose=TRUE)
}


@@ -0,0 +1,32 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.cv.R
\name{print.xgb.cv.synchronous}
\alias{print.xgb.cv.synchronous}
\title{Print xgb.cv result}
\usage{
\method{print}{xgb.cv.synchronous}(x, verbose = FALSE, ...)
}
\arguments{
\item{x}{an \code{xgb.cv.synchronous} object}
\item{verbose}{whether to print detailed data}
\item{...}{passed to \code{print.data.table}}
}
\description{
Prints formatted results of \code{xgb.cv}.
}
\details{
When not verbose, it would only print the evaluation results,
including the best iteration (when available).
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
cv <- xgb.cv(data = train$data, label = train$label, nfold = 5, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
print(cv)
print(cv, verbose=TRUE)
}


@@ -1,14 +1,13 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/setinfo.xgb.DMatrix.R
\docType{methods}
% Please edit documentation in R/xgb.DMatrix.R
\name{setinfo}
\alias{setinfo}
\alias{setinfo,xgb.DMatrix-method}
\alias{setinfo.xgb.DMatrix}
\title{Set information of an xgb.DMatrix object}
\usage{
setinfo(object, ...)
\S4method{setinfo}{xgb.DMatrix}(object, name, info)
\method{setinfo}{xgb.DMatrix}(object, name, info, ...)
}
\arguments{
\item{object}{Object of class "xgb.DMatrix"}
@@ -23,7 +22,7 @@ setinfo(object, ...)
Set information of an xgb.DMatrix object
}
\details{
It can be one of the following:
The \code{name} field can be one of the following:
\itemize{
\item \code{label}: the label Xgboost learns from ;
@@ -36,9 +35,10 @@ It can be one of the following:
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
labels <- getinfo(dtrain, 'label')
setinfo(dtrain, 'label', 1-labels)
labels2 <- getinfo(dtrain, 'label')
stopifnot(all(labels2 == 1-labels))
stopifnot(all.equal(labels2, 1-labels))
}


@@ -1,22 +1,26 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/slice.xgb.DMatrix.R
\docType{methods}
% Please edit documentation in R/xgb.DMatrix.R
\name{slice}
\alias{[.xgb.DMatrix}
\alias{slice}
\alias{slice,xgb.DMatrix-method}
\alias{slice.xgb.DMatrix}
\title{Get a new DMatrix containing the specified rows of
original xgb.DMatrix object}
\usage{
slice(object, ...)
\S4method{slice}{xgb.DMatrix}(object, idxset, ...)
\method{slice}{xgb.DMatrix}(object, idxset, ...)
\method{[}{xgb.DMatrix}(object, idxset, colset = NULL)
}
\arguments{
\item{object}{Object of class "xgb.DMatrix"}
\item{...}{other parameters}
\item{...}{other parameters (currently not used)}
\item{idxset}{an integer vector of indices of rows needed}
\item{colset}{currently not used (column subsetting is not available)}
}
\description{
Get a new DMatrix containing the specified rows of
@@ -26,6 +30,12 @@ orginal xgb.DMatrix object
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
dsub <- slice(dtrain, 1:3)
dsub <- slice(dtrain, 1:42)
labels1 <- getinfo(dsub, 'label')
dsub <- dtrain[1:42, ]
labels2 <- getinfo(dsub, 'label')
all.equal(labels1, labels2)
}
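As an illustrative check, the sliced labels can also be compared against the corresponding subset of the original label vector:

stopifnot(all.equal(labels1, train$label[1:42]))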


@@ -7,8 +7,7 @@
xgb.DMatrix(data, info = list(), missing = NA, ...)
}
\arguments{
\item{data}{a \code{matrix} object, a \code{dgCMatrix} object or a character
indicating the data file.}
\item{data}{a \code{matrix} object, a \code{dgCMatrix} object or a character representing a filename}
\item{info}{a list of information of the xgb.DMatrix object}
@@ -18,7 +17,8 @@ value that represents missing value. Sometime a data use 0 or other extreme valu
\item{...}{other information to pass to \code{info}.}
}
\description{
Contruct xgb.DMatrix object from dense matrix, sparse matrix or local file.
Construct xgb.DMatrix object from dense matrix, sparse matrix
or local file (that was created previously by saving an \code{xgb.DMatrix}).
}
\examples{
data(agaricus.train, package='xgboost')


@@ -4,12 +4,12 @@
\alias{xgb.DMatrix.save}
\title{Save xgb.DMatrix object to binary file}
\usage{
xgb.DMatrix.save(DMatrix, fname)
xgb.DMatrix.save(dmatrix, fname)
}
\arguments{
\item{DMatrix}{the DMatrix object}
\item{dmatrix}{the \code{xgb.DMatrix} object}
\item{fname}{the name of the binary file.}
\item{fname}{the name of the file to write.}
}
\description{
Save xgb.DMatrix object to binary file
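A round-trip sketch of saving an \code{xgb.DMatrix} and loading it back (the file name is arbitrary):

data(agaricus.train, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
xgb.DMatrix.save(dtrain, 'xgb.DMatrix.data')
# the saved binary file can be loaded back by the xgb.DMatrix constructor
dtrain2 <- xgb.DMatrix('xgb.DMatrix.data')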

R-package/man/xgb.attr.Rd

@@ -0,0 +1,86 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{xgb.attr}
\alias{xgb.attr}
\alias{xgb.attr<-}
\alias{xgb.attributes}
\alias{xgb.attributes<-}
\title{Accessors for serializable attributes of a model.}
\usage{
xgb.attr(object, name)
xgb.attr(object, name) <- value
xgb.attributes(object)
xgb.attributes(object) <- value
}
\arguments{
\item{object}{Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}.}
\item{name}{a non-empty character string specifying which attribute is to be accessed.}
\item{value}{a value of an attribute for \code{xgb.attr<-}; for \code{xgb.attributes<-}
it's a list (or an object coercible to a list) with the names of attributes to set
and the elements corresponding to attribute values.
Non-character values are converted to character.
When an attribute value is not a scalar, only its first element is used.
Use \code{NULL} to remove an attribute.}
}
\value{
\code{xgb.attr} returns either a string value of an attribute
or \code{NULL} if an attribute wasn't stored in a model.
\code{xgb.attributes} returns a list of all attributes stored in a model
or \code{NULL} if a model has no stored attributes.
}
\description{
These methods allow manipulation of the key-value attribute strings of an xgboost model.
}
\details{
The primary purpose of xgboost model attributes is to store some meta-data about the model.
Note that they are a separate concept from the object attributes in R.
Specifically, they refer to key-value strings that can be attached to an xgboost model,
stored together with the model's binary representation, and accessed later
(from R or any other interface).
In contrast, any R-attribute assigned to an R-object of \code{xgb.Booster} class
would not be saved by \code{xgb.save} because an xgboost model is an external memory object
and its serialization is handled externally.
Also, setting an attribute that has the same name as one of xgboost's parameters wouldn't
change the value of that parameter for a model.
Use \code{\link{xgb.parameters<-}} to set or change model parameters.
The attribute setters would usually work more efficiently for \code{xgb.Booster.handle}
than for \code{xgb.Booster}, since only a handle (pointer) would need to be copied.
That would only matter if attributes need to be set many times.
Note, however, that when feeding a handle of an \code{xgb.Booster} object to the attribute setters,
the raw model cache of an \code{xgb.Booster} object would not be automatically updated,
and it would be the user's responsibility to call \code{xgb.save.raw} to update it.
The \code{xgb.attributes<-} setter either updates existing attributes or adds new ones,
but it doesn't delete the other existing attributes.
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
xgb.attr(bst, "my_attribute") <- "my attribute value"
print(xgb.attr(bst, "my_attribute"))
xgb.attributes(bst) <- list(a = 123, b = "abc")
xgb.save(bst, 'xgb.model')
bst1 <- xgb.load('xgb.model')
print(xgb.attr(bst1, "my_attribute"))
print(xgb.attributes(bst1))
# deletion:
xgb.attr(bst1, "my_attribute") <- NULL
print(xgb.attributes(bst1))
xgb.attributes(bst1) <- list(a = NULL, b = NULL)
print(xgb.attributes(bst1))
}


@@ -4,12 +4,14 @@
\alias{xgb.create.features}
\title{Create new features from a previously learned model}
\usage{
xgb.create.features(model, training.data)
xgb.create.features(model, data, ...)
}
\arguments{
\item{model}{decision tree boosting model learned on the original data}
\item{training.data}{original data (usually provided as a \code{dgCMatrix} matrix)}
\item{data}{original data (usually provided as a \code{dgCMatrix} matrix)}
\item{...}{currently not used}
}
\value{
\code{dgCMatrix} matrix including both the original data and the new features.
@@ -23,7 +25,7 @@ This is the function inspired from the paragraph 3.1 of the paper:
\strong{Practical Lessons from Predicting Clicks on Ads at Facebook}
\emph{(Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, Stuart Bowers,
Joaquin Quiñonero Candela)}
Joaquin Quinonero Candela)}
International Workshop on Data Mining for Online Advertising (ADKDD) - August 24, 2014
@@ -31,7 +33,7 @@ International Workshop on Data Mining for Online Advertising (ADKDD) - August 24
Extract explaining the method:
"\emph{We found that boosted decision trees are a powerful and very
"We found that boosted decision trees are a powerful and very
convenient way to implement non-linear and tuple transformations
of the kind we just described. We treat each individual
tree as a categorical feature that takes as value the
@@ -52,7 +54,7 @@ We can understand boosted decision tree
based transformation as a supervised feature encoding that
converts a real-valued vector into a compact binary-valued
vector. A traversal from root node to a leaf node represents
a rule on certain features.}"
a rule on certain features."
}
\examples{
data(agaricus.train, package='xgboost')
@@ -60,7 +62,7 @@ data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
param <- list(max.depth=2, eta=1, silent=1, objective='binary:logistic')
param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
nround = 4
bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
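# A hedged continuation sketch (not part of the hunk above): generating the
# new feature matrices from the trained model, per the updated signature.
new.features.train <- xgb.create.features(model = bst, agaricus.train$data)
new.features.test <- xgb.create.features(model = bst, agaricus.test$data)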


@@ -6,8 +6,9 @@
\usage{
xgb.cv(params = list(), data, nrounds, nfold, label = NULL, missing = NA,
prediction = FALSE, showsd = TRUE, metrics = list(), obj = NULL,
feval = NULL, stratified = TRUE, folds = NULL, verbose = T,
print.every.n = 1L, early.stop.round = NULL, maximize = NULL, ...)
feval = NULL, stratified = TRUE, folds = NULL, verbose = TRUE,
print_every_n = 1L, early_stopping_rounds = NULL, maximize = NULL,
callbacks = list(), ...)
}
\arguments{
\item{params}{the list of parameters. Commonly used ones are:
@@ -18,11 +19,11 @@ xgb.cv(params = list(), data, nrounds, nfold, label = NULL, missing = NA,
\item \code{binary:logistic} logistic regression for classification
}
\item \code{eta} step size of each boosting step
\item \code{max.depth} maximum depth of the tree
\item \code{max_depth} maximum depth of the tree
\item \code{nthread} number of thread used in training, if not set, all threads are used
}
See \link{xgb.train} for further details.
See \code{\link{xgb.train}} for further details.
See also demo/ for walkthrough example in R.}
\item{data}{takes an \code{xgb.DMatrix} or \code{Matrix} as the input.}
@@ -31,16 +32,18 @@ xgb.cv(params = list(), data, nrounds, nfold, label = NULL, missing = NA,
\item{nfold}{the original dataset is randomly partitioned into \code{nfold} equal size subsamples.}
\item{label}{option field, when data is \code{Matrix}}
\item{label}{vector of response values. Should be provided only when data is not an \code{xgb.DMatrix}.}
\item{missing}{Missing is only used when input is dense matrix, pick a float
value that represents missing value. Sometime a data use 0 or other extreme value to represents missing values.}
\item{missing}{is only used when input is a dense matrix. By default it is set to NA, which means
that NA values should be considered as 'missing' by the algorithm.
Sometimes, 0 or other extreme value might be used to represent missing values.}
\item{prediction}{A logical value indicating whether to return the prediction vector.}
\item{prediction}{A logical value indicating whether to return the test fold predictions
from each CV model. This parameter engages the \code{\link{cb.cv.predict}} callback.}
\item{showsd}{\code{boolean}, whether show standard deviation of cross validation}
\item{showsd}{\code{boolean}, whether to show standard deviation of cross validation}
\item{metrics, }{list of evaluation metrics to be used in corss validation,
\item{metrics, }{list of evaluation metrics to be used in cross validation,
when it is not specified, the evaluation metric is chosen according to objective function.
Possible options are:
\itemize{
@@ -58,32 +61,61 @@ gradient with given prediction and dtrain.}
\code{list(metric='metric-name', value='metric-value')} with given
prediction and dtrain.}
\item{stratified}{\code{boolean} whether sampling of folds should be stratified by the values of labels in \code{data}}
\item{stratified}{a \code{boolean} indicating whether sampling of folds should be stratified
by the values of outcome labels.}
\item{folds}{\code{list} provides a possibility of using a list of pre-defined CV folds (each element must be a vector of fold's indices).
If folds are supplied, the nfold and stratified parameters would be ignored.}
\item{folds}{\code{list} provides a possibility to use a list of pre-defined CV folds
(each element must be a vector of test fold's indices). When folds are supplied,
the \code{nfold} and \code{stratified} parameters are ignored.}
\item{verbose}{\code{boolean}, print the statistics during the process}
\item{print.every.n}{Print every N progress messages when \code{verbose>0}. Default is 1 which means all messages are printed.}
\item{print_every_n}{Print each n-th iteration evaluation messages when \code{verbose>0}.
Default is 1 which means all messages are printed. This parameter is passed to the
\code{\link{cb.print.evaluation}} callback.}
\item{early.stop.round}{If \code{NULL}, the early stopping function is not triggered.
\item{early_stopping_rounds}{If \code{NULL}, the early stopping function is not triggered.
If set to an integer \code{k}, training with a validation set will stop if the performance
keeps getting worse consecutively for \code{k} rounds.}
doesn't improve for \code{k} rounds.
Setting this parameter engages the \code{\link{cb.early.stop}} callback.}
\item{maximize}{If \code{feval} and \code{early.stop.round} are set, then \code{maximize} must be set as well.
\code{maximize=TRUE} means the larger the evaluation score the better.}
\item{maximize}{If \code{feval} and \code{early_stopping_rounds} are set,
then this parameter must be set as well.
When it is \code{TRUE}, it means the larger the evaluation score the better.
This parameter is passed to the \code{\link{cb.early.stop}} callback.}
\item{callbacks}{a list of callback functions to perform various tasks during boosting.
See \code{\link{callbacks}}. Some of the callbacks are automatically created depending on the
parameters' values. User can provide either existing or their own callback methods in order
to customize the training process.}
\item{...}{other parameters to pass to \code{params}.}
}
\value{
If \code{prediction = TRUE}, a list with the following elements is returned:
An object of class \code{xgb.cv.synchronous} with the following elements:
\itemize{
\item \code{dt} a \code{data.table} with each mean and standard deviation stat for training set and test set
\item \code{pred} an array or matrix (for multiclass classification) with predictions for each CV-fold for the model having been trained on the data in all other folds.
\item \code{call} a function call.
\item \code{params} parameters that were passed to the xgboost library. Note that it does not
capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
\item \code{callbacks} callback functions that were either automatically assigned or
explicitly passed.
\item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
first column corresponding to iteration number and the rest corresponding to the
CV-based evaluation means and standard deviations for the training and test CV-sets.
It is created by the \code{\link{cb.evaluation.log}} callback.
\item \code{niter} number of boosting iterations.
\item \code{folds} the list of CV folds' indices - either those passed through the \code{folds}
parameter or randomly generated.
\item \code{best_iteration} iteration number with the best evaluation metric value
(only available with early stopping).
\item \code{best_ntreelimit} the \code{ntreelimit} value corresponding to the best iteration,
which could further be used in \code{predict} method
(only available with early stopping).
\item \code{pred} CV prediction values available when \code{prediction} is set.
It is either vector or matrix (see \code{\link{cb.cv.predict}}).
\item \code{models} a list of the CV folds' models. It is only available with the explicit
setting of the \code{cb.cv.predict(save_models = TRUE)} callback.
}
If \code{prediction = FALSE}, just a \code{data.table} with each mean and standard deviation stat for training set and test set is returned.
}
\description{
The cross validation function of xgboost
@@ -102,8 +134,10 @@ Adapted from \url{http://en.wikipedia.org/wiki/Cross-validation_\%28statistics\%
\examples{
data(agaricus.train, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
history <- xgb.cv(data = dtrain, nround=3, nthread = 2, nfold = 5, metrics=list("rmse","auc"),
max.depth =3, eta = 1, objective = "binary:logistic")
print(history)
cv <- xgb.cv(data = dtrain, nrounds = 3, nthread = 2, nfold = 5, metrics = list("rmse","auc"),
max_depth = 3, eta = 1, objective = "binary:logistic")
print(cv)
print(cv, verbose=TRUE)
}
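The elements listed in the value section above can be inspected directly; a hedged sketch, with early stopping engaged so that the best-iteration fields become available:

print(cv$evaluation_log)
# engage early stopping so that best_iteration gets populated
cv2 <- xgb.cv(data = dtrain, nrounds = 20, nthread = 2, nfold = 5,
              max_depth = 3, eta = 1, objective = "binary:logistic",
              early_stopping_rounds = 3, metrics = "error")
cv2$best_iteration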


@@ -4,7 +4,7 @@
\alias{xgb.dump}
\title{Save xgboost model to text file}
\usage{
xgb.dump(model = NULL, fname = NULL, fmap = "", with.stats = FALSE)
xgb.dump(model = NULL, fname = NULL, fmap = "", with_stats = FALSE, ...)
}
\arguments{
\item{model}{the model object.}
@@ -18,10 +18,12 @@ See demo/ for walkthrough example in R, and
\url{https://github.com/dmlc/xgboost/blob/master/demo/data/featmap.txt}
for an example format.}
\item{with.stats}{whether dump statistics of splits
\item{with_stats}{whether to dump statistics of the splits
When this option is on, the model dump comes with two additional statistics:
gain is the approximate loss function gain we get in each split;
cover is the sum of second order gradient in each node.}
\item{...}{currently not used}
}
\value{
If \code{fname} is not provided or set to \code{NULL}, the function will return the model as a \code{character} vector. Otherwise it will return \code{TRUE}.
@@ -34,10 +36,10 @@ data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
# save the model in file 'xgb.model.dump'
xgb.dump(bst, 'xgb.model.dump', with.stats = TRUE)
xgb.dump(bst, 'xgb.model.dump', with_stats = TRUE)
# print the model without saving it to a file
print(xgb.dump(bst))


@@ -52,14 +52,13 @@ If you need to remember one thing only: until you want to leave us early, don't
\examples{
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
# agaricus.train$data@Dimnames[[2]] represents the column names of the sparse matrix.
xgb.importance(agaricus.train$data@Dimnames[[2]], model = bst)
xgb.importance(colnames(agaricus.train$data), model = bst)
# Same thing with co-occurence computation this time
xgb.importance(agaricus.train$data@Dimnames[[2]], model = bst, data = agaricus.train$data, label = agaricus.train$label)
xgb.importance(colnames(agaricus.train$data), model = bst, data = agaricus.train$data, label = agaricus.train$label)
}


@@ -17,8 +17,8 @@ data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
xgb.save(bst, 'xgb.model')
bst <- xgb.load('xgb.model')
pred <- predict(bst, test$data)


@@ -2,54 +2,60 @@
% Please edit documentation in R/xgb.model.dt.tree.R
\name{xgb.model.dt.tree}
\alias{xgb.model.dt.tree}
\title{Parse boosted tree model text dump}
\title{Parse a boosted tree model text dump}
\usage{
xgb.model.dt.tree(feature_names = NULL, model = NULL, text = NULL,
n_first_tree = NULL)
}
\arguments{
\item{feature_names}{names of each feature as a character vector. Can be extracted from a sparse matrix (see example). If the model already contains feature names, this argument should be \code{NULL} (default value).}
\item{feature_names}{character vector of feature names. If the model already
contains feature names, this argument should be \code{NULL} (default value)}
\item{model}{object created by the \code{xgb.train} function.}
\item{model}{object of class \code{xgb.Booster}}
\item{text}{\code{character} vector generated by the \code{xgb.dump} function. Model dump must include the gain per feature and per tree (parameter \code{with.stats = TRUE} in function \code{xgb.dump}).}
\item{text}{\code{character} vector previously generated by the \code{xgb.dump}
function (where parameter \code{with_stats = TRUE} should have been set).}
\item{n_first_tree}{limit the plot to the \code{n} first trees. If set to \code{NULL}, all trees of the model are plotted. Performance can be low depending of the size of the model.}
\item{n_first_tree}{limit the parsing to the \code{n} first trees.
If set to \code{NULL}, all trees of the model are parsed.}
}
\value{
A \code{data.table} of the features used in the model with their gain, cover and few other information.
}
\description{
Parse a boosted tree model text dump and return a \code{data.table}.
}
\details{
General function to convert a text dump of tree model to a \code{data.table}.
The purpose is to help user to explore the model and get a better understanding of it.
A \code{data.table} with detailed information about model trees' nodes.
The columns of the \code{data.table} are:
\itemize{
\item \code{ID}: unique identifier of a node ;
\item \code{Feature}: feature used in the tree to operate a split. When Leaf is indicated, it is the end of a branch ;
\item \code{Split}: value of the chosen feature where is operated the split ;
\item \code{Yes}: ID of the feature for the next node in the branch when the split condition is met ;
\item \code{No}: ID of the feature for the next node in the branch when the split condition is not met ;
\item \code{Missing}: ID of the feature for the next node in the branch for observation where the feature used for the split are not provided ;
\item \code{Quality}: it's the gain related to the split in this specific node ;
\item \code{Cover}: metric to measure the number of observation affected by the split ;
\item \code{Tree}: ID of the tree. It is included in the main ID ;
\item \code{Yes.Feature}, \code{No.Feature}, \code{Yes.Cover}, \code{No.Cover}, \code{Yes.Quality} and \code{No.Quality}: data related to the pointer in \code{Yes} or \code{No} column ;
\item \code{Tree}: ID of a tree in a model
\item \code{Node}: ID of a node in a tree
\item \code{ID}: unique identifier of a node in a model
\item \code{Feature}: for a branch node, it's a feature id or name (when available);
for a leaf node, it simply labels it as \code{'Leaf'}
\item \code{Split}: location of the split for a branch node (split condition is always "less than")
\item \code{Yes}: ID of the next node when the split condition is met
\item \code{No}: ID of the next node when the split condition is not met
\item \code{Missing}: ID of the next node when branch value is missing
\item \code{Quality}: either the split gain (change in loss) or the leaf value
\item \code{Cover}: metric related to the number of observations either seen by a split
or collected by a leaf during training.
}
}
\description{
Parse a boosted tree model text dump into a \code{data.table} structure.
}
\examples{
# Basic use:
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
# agaricus.train$data@Dimnames[[2]] represents the column names of the sparse matrix.
xgb.model.dt.tree(feature_names = agaricus.train$data@Dimnames[[2]], model = bst)
(dt <- xgb.model.dt.tree(colnames(agaricus.train$data), bst))
# How to match feature names of splits that are following a current 'Yes' branch:
merge(dt, dt[, .(ID, Y.Feature=Feature)], by.x='Yes', by.y='ID', all.x=TRUE)[order(Tree,Node)]
}


@@ -0,0 +1,32 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.Booster.R
\name{xgb.parameters<-}
\alias{xgb.parameters<-}
\title{Accessors for model parameters.}
\usage{
xgb.parameters(object) <- value
}
\arguments{
\item{object}{Object of class \code{xgb.Booster} or \code{xgb.Booster.handle}.}
\item{value}{a list (or an object coercible to a list) with the names of parameters to set
and the elements corresponding to parameter values.}
}
\description{
Only the setter for xgboost parameters is currently implemented.
}
\details{
Note that the setter would usually work more efficiently for \code{xgb.Booster.handle}
than for \code{xgb.Booster}, since only just a handle would need to be copied.
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
xgb.parameters(bst) <- list(eta = 0.1)
}


@@ -1,46 +1,74 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.plot.deepness.R
\name{xgb.plot.deepness}
% Please edit documentation in R/xgb.ggplot.R, R/xgb.plot.deepness.R
\name{xgb.ggplot.deepness}
\alias{xgb.ggplot.deepness}
\alias{xgb.plot.deepness}
\title{Plot model trees deepness}
\usage{
xgb.plot.deepness(model = NULL)
xgb.ggplot.deepness(model = NULL, which = c("2x1", "max.depth", "med.depth",
"med.weight"))
xgb.plot.deepness(model = NULL, which = c("2x1", "max.depth", "med.depth",
"med.weight"), plot = TRUE, ...)
}
\arguments{
\item{model}{dump generated by the \code{xgb.train} function.}
\item{model}{either an \code{xgb.Booster} model generated by the \code{xgb.train} function
or a data.table result of the \code{xgb.model.dt.tree} function.}
\item{which}{which distribution to plot (see details).}
\item{plot}{(base R barplot) whether a barplot should be produced.
If FALSE, only a data.table is returned.}
\item{...}{other parameters passed to \code{barplot} or \code{plot}.}
}
\value{
Two graphs showing the distribution of the model deepness.
Other than producing plots (when \code{plot=TRUE}), the \code{xgb.plot.deepness} function
silently returns a processed data.table where each row corresponds to a terminal leaf in a tree model,
and contains information about the leaf's depth, cover, and weight (which is used in calculating predictions).
The \code{xgb.ggplot.deepness} function silently returns either a list of two ggplot graphs when \code{which="2x1"}
or a single ggplot graph for the other \code{which} options.
}
\description{
Generate a graph to plot the distribution of deepness among trees.
Visualizes distributions related to the depth of tree leaves.
\code{xgb.plot.deepness} uses base R graphics, while \code{xgb.ggplot.deepness} uses the ggplot backend.
}
\details{
Display both the number of \code{leaf} and the distribution of \code{weighted observations}
by tree deepness level.
The purpose of this function is to help the user to find the best trade-off to set
the \code{max.depth} and \code{min_child_weight} parameters according to the bias / variance trade-off.
See \link{xgb.train} for more information about these parameters.
The graph is made of two parts:
When \code{which="2x1"}, two distributions with respect to the leaf depth
are plotted on top of each other:
\itemize{
\item Count: number of leaf per level of deepness;
\item Weighted cover: noramlized weighted cover per leaf (weighted number of instances).
\item the distribution of the number of leaves in a tree model at a certain depth;
\item the distribution of the average weighted number of observations ("cover")
ending up in leaves at a certain depth.
}
Those could be helpful in determining sensible ranges of the \code{max_depth}
and \code{min_child_weight} parameters.
This function is inspired by the blog post \url{http://aysent.github.io/2015/11/08/random-forest-leaf-visualization.html}
When \code{which="max.depth"} or \code{which="med.depth"}, plots of either maximum or median depth
per tree with respect to tree number are created. And \code{which="med.weight"} allows one to see how
a tree's median absolute leaf weight changes through the iterations.
This function was inspired by the blog post
\url{http://aysent.github.io/2015/11/08/random-forest-leaf-visualization.html}.
}
\examples{
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 15,
eta = 1, nthread = 2, nround = 30, objective = "binary:logistic",
min_child_weight = 50)
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 15,
eta = 0.1, nthread = 2, nrounds = 50, objective = "binary:logistic",
subsample = 0.5, min_child_weight = 2)
xgb.plot.deepness(model = bst)
xgb.plot.deepness(bst)
xgb.ggplot.deepness(bst)
xgb.plot.deepness(bst, which='max.depth', pch=16, col=rgb(0,0,1,0.3), cex=2)
xgb.plot.deepness(bst, which='med.weight', pch=16, col=rgb(0,0,1,0.3), cex=2)
}
\seealso{
\code{\link{xgb.train}}, \code{\link{xgb.model.dt.tree}}.
}


@@ -1,40 +1,82 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.plot.importance.R
\name{xgb.plot.importance}
% Please edit documentation in R/xgb.ggplot.R, R/xgb.plot.importance.R
\name{xgb.ggplot.importance}
\alias{xgb.ggplot.importance}
\alias{xgb.plot.importance}
\title{Plot feature importance bar graph}
\title{Plot feature importance as a bar graph}
\usage{
xgb.plot.importance(importance_matrix = NULL, numberOfClusters = c(1:10))
xgb.ggplot.importance(importance_matrix = NULL, top_n = NULL,
measure = NULL, rel_to_first = FALSE, n_clusters = c(1:10), ...)
xgb.plot.importance(importance_matrix = NULL, top_n = NULL,
measure = NULL, rel_to_first = FALSE, left_margin = 10, cex = NULL,
plot = TRUE, ...)
}
\arguments{
\item{importance_matrix}{a \code{data.table} returned by the \code{xgb.importance} function.}
\item{importance_matrix}{a \code{data.table} returned by \code{\link{xgb.importance}}.}
\item{numberOfClusters}{a \code{numeric} vector containing the min and the max range of the possible number of clusters of bars.}
\item{top_n}{maximal number of top features to include into the plot.}
\item{measure}{the name of importance measure to plot.
When \code{NULL}, 'Gain' would be used for trees and 'Weight' would be used for gblinear.}
\item{rel_to_first}{whether importance values should be represented as relative to the highest ranked feature.
See Details.}
\item{n_clusters}{(ggplot only) a \code{numeric} vector containing the min and the max range
of the possible number of clusters of bars.}
\item{...}{other parameters passed to \code{barplot} (except horiz, border, cex.names, names.arg, and las).}
\item{left_margin}{(base R barplot) allows adjusting the left margin size to fit feature names.
When it is NULL, the existing \code{par('mar')} is used.}
\item{cex}{(base R barplot) passed as \code{cex.names} parameter to \code{barplot}.}
\item{plot}{(base R barplot) whether a barplot should be produced.
If FALSE, only a data.table is returned.}
}
\value{
A \code{ggplot2} bar graph representing each feature by a horizontal bar. Longer is the bar, more important is the feature. Features are classified by importance and clustered by importance. The group is represented through the color of the bar.
The \code{xgb.plot.importance} function creates a \code{barplot} (when \code{plot=TRUE})
and silently returns a processed data.table with \code{top_n} features sorted by importance.
The \code{xgb.ggplot.importance} function returns a ggplot graph which could be customized afterwards.
E.g., to change the title of the graph, add \code{+ ggtitle("A GRAPH NAME")} to the result.
}
\description{
Read a data.table containing feature importance details and plot it (for both GLM and Trees).
Represents previously calculated feature importance as a bar graph.
\code{xgb.plot.importance} uses base R graphics, while \code{xgb.ggplot.importance} uses the ggplot backend.
}
\details{
The purpose of this function is to easily represent the importance of each feature of a model.
The function returns a ggplot graph, therefore each of its characteristic can be overriden (to customize it).
In particular you may want to override the title of the graph. To do so, add \code{+ ggtitle("A GRAPH NAME")} next to the value returned by this function.
The graph represents each feature as a horizontal bar of length proportional to the importance of a feature.
Features are shown ranked in a decreasing importance order.
It works for importances from both \code{gblinear} and \code{gbtree} models.
When \code{rel_to_first = FALSE}, the values would be plotted as they were in \code{importance_matrix}.
For gbtree model, that would mean being normalized to the total of 1
("what is feature's importance contribution relative to the whole model?").
For linear models, \code{rel_to_first = FALSE} would show actual values of the coefficients.
Setting \code{rel_to_first = TRUE} allows one to see the picture from the perspective of
"what is feature's importance contribution relative to the most important feature?"
The ggplot-backend method also performs 1-D clustering of the importance values,
with bar colors corresponding to different clusters that have somewhat similar importance values.
}
\examples{
data(agaricus.train, package='xgboost')
data(agaricus.train)
#Both datasets are lists with two items, a sparse matrix and labels
#(labels = outcome column which will be learned).
#Each column of the sparse Matrix is a feature in one hot encoding format.
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 3,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
importance_matrix <- xgb.importance(colnames(agaricus.train$data), model = bst)
#agaricus.train$data@Dimnames[[2]] represents the column names of the sparse matrix.
importance_matrix <- xgb.importance(agaricus.train$data@Dimnames[[2]], model = bst)
xgb.plot.importance(importance_matrix)
xgb.plot.importance(importance_matrix, rel_to_first = TRUE, xlab = "Relative importance")
(gg <- xgb.ggplot.importance(importance_matrix, measure = "Frequency", rel_to_first = TRUE))
gg + ggplot2::ylab("Frequency")
}
\seealso{
\code{\link[graphics]{barplot}}.
}


@@ -4,19 +4,21 @@
\alias{xgb.plot.multi.trees}
\title{Project all trees on one tree and plot it}
\usage{
xgb.plot.multi.trees(model, feature_names = NULL, features.keep = 5,
plot.width = NULL, plot.height = NULL)
xgb.plot.multi.trees(model, feature_names = NULL, features_keep = 5,
plot_width = NULL, plot_height = NULL, ...)
}
\arguments{
\item{model}{dump generated by the \code{xgb.train} function.}
\item{feature_names}{names of each feature as a \code{character} vector. Can be extracted from a sparse matrix (see example). If model dump already contains feature names, this argument should be \code{NULL}.}
\item{features.keep}{number of features to keep in each position of the multi trees.}
\item{features_keep}{number of features to keep in each position of the multi trees.}
\item{plot.width}{width in pixels of the graph to produce}
\item{plot_width}{width in pixels of the graph to produce}
\item{plot.height}{height in pixels of the graph to produce}
\item{plot_height}{height in pixels of the graph to produce}
\item{...}{currently not used}
}
\value{
A \code{DiagrammeR} object of the projected trees plot.
@@ -39,7 +41,7 @@ its deepness (therefore in a boosting model, all trees have the same shape).
Moreover, the trees tend to reuse the same features.
The function will project each tree on one, and keep for each position the
\code{features.keep} first features (based on Gain per feature measure).
\code{features_keep} first features (based on Gain per feature measure).
This function is inspired by this blog post:
\url{https://wellecks.wordpress.com/2015/02/21/peering-into-the-black-box-visualizing-lambdamart/}
@@ -47,11 +49,11 @@ This function is inspired by this blog post:
\examples{
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 15,
eta = 1, nthread = 2, nround = 30, objective = "binary:logistic",
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 15,
eta = 1, nthread = 2, nrounds = 30, objective = "binary:logistic",
min_child_weight = 50)
p <- xgb.plot.multi.trees(model = bst, feature_names = agaricus.train$data@Dimnames[[2]], features.keep = 3)
p <- xgb.plot.multi.trees(model = bst, feature_names = colnames(agaricus.train$data), features_keep = 3)
print(p)
}


@@ -5,7 +5,7 @@
\title{Plot a boosted tree model}
\usage{
xgb.plot.tree(feature_names = NULL, model = NULL, n_first_tree = NULL,
plot.width = NULL, plot.height = NULL)
plot_width = NULL, plot_height = NULL, ...)
}
\arguments{
\item{feature_names}{names of each feature as a \code{character} vector. Can be extracted from a sparse matrix (see example). If model dump already contains feature names, this argument should be \code{NULL}.}
@@ -14,9 +14,11 @@ xgb.plot.tree(feature_names = NULL, model = NULL, n_first_tree = NULL,
\item{n_first_tree}{limit the plot to the n first trees. If \code{NULL}, all trees of the model are plotted. Performance can be low for huge models.}
\item{plot.width}{the width of the diagram in pixels.}
\item{plot_width}{the width of the diagram in pixels.}
\item{plot.height}{the height of the diagram in pixels.}
\item{plot_height}{the height of the diagram in pixels.}
\item{...}{currently not used.}
}
\value{
A \code{DiagrammeR} plot of the model.
@@ -38,11 +40,10 @@ The function uses \href{http://www.graphviz.org/}{GraphViz} library for that pur
\examples{
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
# agaricus.train$data@Dimnames[[2]] represents the column names of the sparse matrix.
xgb.plot.tree(feature_names = agaricus.train$data@Dimnames[[2]], model = bst)
xgb.plot.tree(feature_names = colnames(agaricus.train$data), model = bst)
}


@@ -9,7 +9,7 @@ xgb.save(model, fname)
\arguments{
\item{model}{the model object.}
\item{fname}{the name of the binary file.}
\item{fname}{the name of the file to write.}
}
\description{
Save an xgboost model produced by \code{xgboost} or \code{xgb.train}
@@ -19,8 +19,8 @@ data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
xgb.save(bst, 'xgb.model')
bst <- xgb.load('xgb.model')
pred <- predict(bst, test$data)


@@ -18,10 +18,11 @@ data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
raw <- xgb.save.raw(bst)
bst <- xgb.load(raw)
pred <- predict(bst, test$data)
}


@@ -1,16 +1,24 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.train.R
% Please edit documentation in R/xgb.train.R, R/xgboost.R
\name{xgb.train}
\alias{xgb.train}
\alias{xgboost}
\title{eXtreme Gradient Boosting Training}
\usage{
xgb.train(params = list(), data, nrounds, watchlist = list(), obj = NULL,
feval = NULL, verbose = 1, print.every.n = 1L,
early.stop.round = NULL, maximize = NULL, save_period = 0,
save_name = "xgboost.model", ...)
feval = NULL, verbose = 1, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL, save_period = NULL,
save_name = "xgboost.model", xgb_model = NULL, callbacks = list(), ...)
xgboost(data = NULL, label = NULL, missing = NA, weight = NULL,
params = list(), nrounds, verbose = 1, print_every_n = 1L,
early_stopping_rounds = NULL, maximize = NULL, save_period = 0,
save_name = "xgboost.model", xgb_model = NULL, callbacks = list(), ...)
}
\arguments{
\item{params}{the list of parameters.
The complete list of parameters is available at \url{http://xgboost.readthedocs.io/en/latest/parameter.html}.
Below is a shorter summary:
1. General Parameters
@@ -51,81 +59,152 @@ xgb.train(params = list(), data, nrounds, watchlist = list(), obj = NULL,
\item \code{binary:logistic} logistic regression for binary classification. Output probability.
\item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation.
\item \code{num_class} set the number of classes. To use only with multiclass objectives.
\item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class}.
\item \code{multi:softprob} same as softmax, but output a vector of ndata * nclass, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class.
\item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to \code{num_class - 1}.
\item \code{multi:softprob} same as softmax, but prediction outputs a vector of ndata * nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class.
\item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss.
}
\item \code{base_score} the initial prediction score of all instances, global bias. Default: 0.5
\item \code{eval_metric} evaluation metrics for validation data. Users can pass a self-defined function to it. Default: metric will be assigned according to objective (rmse for regression, and error for classification, mean average precision for ranking). The full list is provided in the details section.
}}
\item{data}{takes an \code{xgb.DMatrix} as the input.}
\item{data}{input dataset. \code{xgb.train} takes only an \code{xgb.DMatrix} as the input.
\code{xgboost}, in addition, also accepts \code{matrix}, \code{dgCMatrix}, or local data file.}
\item{nrounds}{the max number of iterations}
\item{watchlist}{what information should be printed when \code{verbose=1} or
\code{verbose=2}. Watchlist is used to specify validation set monitoring
during training. For example user can specify
watchlist=list(validation1=mat1, validation2=mat2) to watch
the performance of each round's model on mat1 and mat2}
watchlist=list(validation1=mat1, validation2=mat2) to watch
the performance of each round's model on mat1 and mat2}
\item{obj}{customized objective function. Returns gradient and second order
gradient with given prediction and dtrain,}
gradient with given prediction and dtrain.}
\item{feval}{customized evaluation function. Returns
\code{list(metric='metric-name', value='metric-value')} with given
prediction and dtrain,}
prediction and dtrain.}
\item{verbose}{If 0, xgboost will stay silent. If 1, xgboost will print
information of performance. If 2, xgboost will print information of both}
information of performance. If 2, xgboost will print some additional information.
Setting \code{verbose > 0} automatically engages the \code{\link{cb.evaluation.log}} and
\code{\link{cb.print.evaluation}} callback functions.}
\item{print.every.n}{Print every N progress messages when \code{verbose>0}. Default is 1 which means all messages are printed.}
\item{print_every_n}{Print each n-th iteration evaluation messages when \code{verbose>0}.
Default is 1 which means all messages are printed. This parameter is passed to the
\code{\link{cb.print.evaluation}} callback.}
\item{early.stop.round}{If \code{NULL}, the early stopping function is not triggered.
\item{early_stopping_rounds}{If \code{NULL}, the early stopping function is not triggered.
If set to an integer \code{k}, training with a validation set will stop if the performance
keeps getting worse consecutively for \code{k} rounds.}
doesn't improve for \code{k} rounds.
Setting this parameter engages the \code{\link{cb.early.stop}} callback.}
\item{maximize}{If \code{feval} and \code{early_stopping_rounds} are set,
then this parameter must be set as well.
When it is \code{TRUE}, it means the larger the evaluation score the better.
This parameter is passed to the \code{\link{cb.early.stop}} callback.}
\item{save_period}{when it is non-NULL, the model is saved to disk after every \code{save_period} rounds;
0 means save at the end. The saving is handled by the \code{\link{cb.save.model}} callback.}
\item{save_name}{the name or path for periodically saved model file.}
\item{xgb_model}{a previously built model to continue the training from
(see the sketch after this arguments list).
Could be either an object of class \code{xgb.Booster}, or its raw data, or the name of a
file with a previously saved model.}
\item{callbacks}{a list of callback functions to perform various tasks during boosting.
See \code{\link{callbacks}}. Some of the callbacks are automatically created depending on the
parameters' values. Users can provide either existing or their own callback methods in order
to customize the training process.}
\item{...}{other parameters to pass to \code{params}.}
\item{label}{vector of response values. Should not be provided when data is
a local data file name or an \code{xgb.DMatrix}.}
\item{missing}{by default is set to NA, which means that NA values should be considered as 'missing'
by the algorithm. Sometimes, 0 or another extreme value might be used to represent missing values.
This parameter is only used when input is a dense matrix.}
\item{weight}{a vector indicating the weight for each row of the input.}
}
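To make the \code{xgb_model} continuation concrete, here is a minimal sketch (an editor's illustration, not part of this changeset), assuming \code{param} and \code{dtrain} as constructed in the examples below:

## Sketch: continue training from a previously built model
bst1 <- xgb.train(param, dtrain, nrounds = 2)
## Two more rounds on top of bst1. An xgb.Booster object, its $raw data,
## or the name of a file with a saved model are all accepted by xgb_model.
bst2 <- xgb.train(param, dtrain, nrounds = 2, xgb_model = bst1)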
\value{
An object of class \code{xgb.Booster} with the following elements:
\itemize{
\item \code{handle} a handle (pointer) to the xgboost model in memory.
\item \code{raw} a cached memory dump of the xgboost model saved as R's \code{raw} type.
\item \code{niter} number of boosting iterations.
\item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
first column corresponding to iteration number and the rest corresponding to evaluation
metrics' values. It is created by the \code{\link{cb.evaluation.log}} callback.
\item \code{call} a function call.
\item \code{params} parameters that were passed to the xgboost library. Note that it does not
capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
\item \code{callbacks} callback functions that were either automatically assigned or
explicitly passed.
\item \code{best_iteration} iteration number with the best evaluation metric value
(only available with early stopping).
\item \code{best_ntreelimit} the \code{ntreelimit} value corresponding to the best iteration,
which could further be used in \code{predict} method
(only available with early stopping).
\item \code{best_score} the best evaluation metric value during early stopping
(only available with early stopping).
}
}
\description{
\code{xgb.train} is an advanced interface for training an xgboost model. The \code{xgboost} function provides a simpler interface.
}
\details{
These are the training functions for \code{xgboost}.
The \code{xgb.train} interface supports advanced features such as \code{watchlist},
customized objective and evaluation metric functions, therefore it is more flexible
than the \code{\link{xgboost}} interface.
Parallelization is automatically enabled if \code{OpenMP} is present.
The number of threads can also be manually specified via the \code{nthread} parameter.
The evaluation metric is chosen automatically by Xgboost (according to the objective)
when the \code{eval_metric} parameter is not provided.
Users may set one or several \code{eval_metric} parameters (a short sketch follows the metrics list below).
Note that when using a customized metric, only this single metric can be used.
The following is the list of built-in metrics for which Xgboost provides optimized implementations:
\itemize{
\item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
\item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
\item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss}
\item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
A different threshold (e.g., 0.) could be specified as "error@0."
\item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
\item \code{auc} Area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#'Area_under_curve} for ranking evaluation.
\item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
}
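As a quick illustration of watching several of these built-in metrics at once, here is a minimal sketch (an editor's illustration, assuming the \code{dtrain} and \code{dtest} objects from the examples below):

## Sketch: several eval_metric entries are all evaluated every round
param <- list(objective = "binary:logistic", max_depth = 2, eta = 1,
              eval_metric = "auc", eval_metric = "logloss")
bst <- xgb.train(param, dtrain, nrounds = 2,
                 watchlist = list(train = dtrain, eval = dtest))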
The full list of parameters is available in the Wiki \url{https://github.com/dmlc/xgboost/wiki/Parameters}.
The \code{xgb.train} function only accepts an \code{\link{xgb.DMatrix}} object as its input.
The following callbacks are automatically created when certain parameters are set (a minimal sketch of the early stopping case follows this list):
\itemize{
\item \code{cb.print.evaluation} is turned on when \code{verbose > 0};
and the \code{print_every_n} parameter is passed to it.
\item \code{cb.evaluation.log} is on when \code{verbose > 0} and \code{watchlist} is present.
\item \code{cb.early.stop}: when \code{early_stopping_rounds} is set.
\item \code{cb.save.model}: when \code{save_period > 0} is set.
}
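For instance, a minimal sketch of the early stopping case (an editor's illustration, assuming \code{dtrain} and \code{dtest} as in the examples below):

## Sketch: early_stopping_rounds automatically engages cb.early.stop;
## bst$best_iteration and bst$best_score are populated when it stops early.
param <- list(objective = "binary:logistic", max_depth = 2, eta = 0.3)
bst <- xgb.train(param, dtrain, nrounds = 20,
                 watchlist = list(eval = dtest, train = dtrain),
                 early_stopping_rounds = 3, maximize = FALSE)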
}
\examples{
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
watchlist <- list(eval = dtest, train = dtrain)
## A simple xgb.train example:
param <- list(max_depth = 2, eta = 1, silent = 1,
objective = "binary:logistic", eval_metric = "auc")
bst <- xgb.train(param, dtrain, nthread = 2, nrounds = 2, watchlist)
## An xgb.train example where custom objective and evaluation metric are used:
logregobj <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")
preds <- 1/(1 + exp(-preds))
grad <- preds - labels
hess <- preds * (1 - preds)
return(list(grad = grad, hess = hess))
}
evalerror <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")
err <- as.numeric(sum(labels != (preds > 0)))/length(labels)
return(list(metric = "error", value = err))
}
param <- list(max_depth = 2, eta = 1, silent = 1, objective = logregobj, eval_metric = evalerror)
bst <- xgb.train(param, dtrain, nthread = 2, nrounds = 2, watchlist)
## An xgb.train example of using variable learning rates at each iteration:
my_etas <- list(eta = c(0.5, 0.1))
bst <- xgb.train(param, dtrain, nthread = 2, nrounds = 2, watchlist,
callbacks = list(cb.reset.parameters(my_etas)))
## Explicit use of the cb.evaluation.log callback allows to run
## xgb.train silently but still store the evaluation results:
bst <- xgb.train(param, dtrain, nthread = 2, nrounds = 2, watchlist,
verbose = 0, callbacks = list(cb.evaluation.log()))
print(bst$evaluation_log)
## An 'xgboost' interface example:
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
max_depth = 2, eta = 1, nthread = 2, nrounds = 2,
objective = "binary:logistic")
pred <- predict(bst, agaricus.test$data)
}
\seealso{
\code{\link{callbacks}},
\code{\link{predict.xgb.Booster}},
\code{\link{xgb.cv}}
}


@@ -0,0 +1,17 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{xgboost-deprecated}
\alias{xgboost-deprecated}
\title{Deprecation notices.}
\description{
At this time, some of the parameter names were changed in order to make the code style more uniform.
The deprecated parameters will be removed in the next release.
}
\details{
To see all the current deprecated and new parameters, check the \code{xgboost:::depr_par_lut} table.
A deprecation warning is shown when any of the deprecated parameters is used in a call.
An additional warning is shown when there was a partial match to a deprecated parameter
(as R is able to partially match parameter names).
}


@@ -1,83 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgboost.R
\name{xgboost}
\alias{xgboost}
\title{eXtreme Gradient Boosting (Tree) library}
\usage{
xgboost(data = NULL, label = NULL, missing = NA, weight = NULL,
params = list(), nrounds, verbose = 1, print.every.n = 1L,
early.stop.round = NULL, maximize = NULL, save_period = 0,
save_name = "xgboost.model", ...)
}
\arguments{
\item{data}{takes \code{matrix}, \code{dgCMatrix}, local data file or
\code{xgb.DMatrix}.}
\item{label}{the response variable. Users should not set this field
if data is a local data file or \code{xgb.DMatrix}.}
\item{missing}{Missing is only used when the input is a dense matrix: pick a float
value that represents the missing value. Sometimes data uses 0 or another extreme value to represent missing values.}
\item{weight}{a vector indicating the weight for each row of the input.}
\item{params}{the list of parameters.
Commonly used ones are:
\itemize{
\item \code{objective} objective function, common ones are
\itemize{
\item \code{reg:linear} linear regression
\item \code{binary:logistic} logistic regression for classification
}
\item \code{eta} step size of each boosting step
\item \code{max.depth} maximum depth of the tree
\item \code{nthread} number of threads used in training; if not set, all threads are used
}
Look at \code{\link{xgb.train}} for a more complete list of parameters or \url{https://github.com/dmlc/xgboost/wiki/Parameters} for the full list.
See also \code{demo/} for a walkthrough example in R.}
\item{nrounds}{the max number of iterations}
\item{verbose}{If 0, xgboost will stay silent. If 1, xgboost will print
information on performance. If 2, xgboost will print information on both
performance and construction progress}
\item{print.every.n}{Print every N progress messages when \code{verbose>0}. Default is 1 which means all messages are printed.}
\item{early.stop.round}{If \code{NULL}, the early stopping function is not triggered.
If set to an integer \code{k}, training with a validation set will stop if the performance
keeps getting worse consecutively for \code{k} rounds.}
\item{maximize}{If \code{feval} and \code{early.stop.round} are set, then \code{maximize} must be set as well.
\code{maximize=TRUE} means the larger the evaluation score the better.}
\item{save_period}{save the model to the disk in every \code{save_period} rounds, 0 means no such action.}
\item{save_name}{the name or path for periodically saved model file.}
\item{...}{other parameters to pass to \code{params}.}
}
\description{
A simple interface for training an xgboost model. Look at the \code{\link{xgb.train}} function for a more advanced interface.
}
\details{
This is the modeling function for Xgboost.
Parallelization is automatically enabled if \code{OpenMP} is present.
Number of threads can also be manually specified via \code{nthread} parameter.
}
\examples{
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
pred <- predict(bst, test$data)
}


@@ -1,8 +1,18 @@
# package root
PKGROOT=../../
ENABLE_STD_THREAD=1
# _*_ mode: Makefile; _*_
CXX_STD = CXX11
XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\
-DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_
PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
OBJECTS= ./xgboost_R.o ./xgboost_custom.o ./xgboost_assert.o\
$(PKGROOT)/amalgamation/xgboost-all0.o $(PKGROOT)/amalgamation/dmlc-minimum0.o\
$(PKGROOT)/rabit/src/engine_empty.o $(PKGROOT)/rabit/src/c_api.o


@@ -1,5 +1,6 @@
# package root
PKGROOT=./
ENABLE_STD_THREAD=0
# _*_ mode: Makefile; _*_
# This file is only used for windows compilation from github
@@ -9,11 +10,23 @@ all: $(SHLIB)
$(SHLIB): xgblib
xgblib:
cp -r ../../src .
cp -r ../../rabit .
cp -r ../../dmlc-core .
cp -r ../../include .
cp -r ../../amalgamation .
CXX_STD = CXX11
XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_ENABLE_STD_THREAD=$(ENABLE_STD_THREAD) -DDMLC_DISABLE_STDIN=1\
-DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_
PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
OBJECTS= ./xgboost_R.o ./xgboost_custom.o ./xgboost_assert.o\
$(PKGROOT)/amalgamation/xgboost-all0.o $(PKGROOT)/amalgamation/dmlc-minimum0.o\
$(PKGROOT)/rabit/src/engine_empty.o $(PKGROOT)/rabit/src/c_api.o
$(OBJECTS) : xgblib

R-package/src/xgboost_R.cc

@@ -0,0 +1,419 @@
// Copyright (c) 2014 by Contributors
#include <dmlc/logging.h>
#include <dmlc/omp.h>
#include <xgboost/c_api.h>
#include <vector>
#include <string>
#include <utility>
#include <cstring>
#include <cstdio>
#include <sstream>
#include "./xgboost_R.h"
/*!
* \brief macro to annotate begin of api
*/
#define R_API_BEGIN() \
GetRNGstate(); \
try {
/*!
* \brief macro to annotate end of api
*/
#define R_API_END() \
} catch(dmlc::Error& e) { \
PutRNGstate(); \
error(e.what()); \
} \
PutRNGstate();
/*!
* \brief macro to check the call.
*/
#define CHECK_CALL(x) \
if ((x) != 0) { \
error(XGBGetLastError()); \
}
using namespace dmlc;
SEXP XGCheckNullPtr_R(SEXP handle) {
return ScalarLogical(R_ExternalPtrAddr(handle) == NULL);
}
void _DMatrixFinalizer(SEXP ext) {
R_API_BEGIN();
if (R_ExternalPtrAddr(ext) == NULL) return;
CHECK_CALL(XGDMatrixFree(R_ExternalPtrAddr(ext)));
R_ClearExternalPtr(ext);
R_API_END();
}
SEXP XGDMatrixCreateFromFile_R(SEXP fname, SEXP silent) {
SEXP ret;
R_API_BEGIN();
DMatrixHandle handle;
CHECK_CALL(XGDMatrixCreateFromFile(CHAR(asChar(fname)), asInteger(silent), &handle));
ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
UNPROTECT(1);
R_API_END();
return ret;
}
SEXP XGDMatrixCreateFromMat_R(SEXP mat,
SEXP missing) {
SEXP ret;
R_API_BEGIN();
SEXP dim = getAttrib(mat, R_DimSymbol);
size_t nrow = static_cast<size_t>(INTEGER(dim)[0]);
size_t ncol = static_cast<size_t>(INTEGER(dim)[1]);
double *din = REAL(mat);
std::vector<float> data(nrow * ncol);
#pragma omp parallel for schedule(static)
for (omp_ulong i = 0; i < nrow; ++i) {
for (size_t j = 0; j < ncol; ++j) {
data[i * ncol +j] = din[i + nrow * j];
}
}
DMatrixHandle handle;
CHECK_CALL(XGDMatrixCreateFromMat(BeginPtr(data), nrow, ncol, asReal(missing), &handle));
ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
UNPROTECT(1);
R_API_END();
return ret;
}
SEXP XGDMatrixCreateFromCSC_R(SEXP indptr,
SEXP indices,
SEXP data) {
SEXP ret;
R_API_BEGIN();
const int *p_indptr = INTEGER(indptr);
const int *p_indices = INTEGER(indices);
const double *p_data = REAL(data);
int nindptr = length(indptr);
int ndata = length(data);
std::vector<bst_ulong> col_ptr_(nindptr);
std::vector<unsigned> indices_(ndata);
std::vector<float> data_(ndata);
for (int i = 0; i < nindptr; ++i) {
col_ptr_[i] = static_cast<bst_ulong>(p_indptr[i]);
}
#pragma omp parallel for schedule(static)
for (int i = 0; i < ndata; ++i) {
indices_[i] = static_cast<unsigned>(p_indices[i]);
data_[i] = static_cast<float>(p_data[i]);
}
DMatrixHandle handle;
CHECK_CALL(XGDMatrixCreateFromCSC(BeginPtr(col_ptr_), BeginPtr(indices_),
BeginPtr(data_), nindptr, ndata,
&handle));
ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
UNPROTECT(1);
R_API_END();
return ret;
}
SEXP XGDMatrixSliceDMatrix_R(SEXP handle, SEXP idxset) {
SEXP ret;
R_API_BEGIN();
int len = length(idxset);
std::vector<int> idxvec(len);
for (int i = 0; i < len; ++i) {
idxvec[i] = INTEGER(idxset)[i] - 1;
}
DMatrixHandle res;
CHECK_CALL(XGDMatrixSliceDMatrix(R_ExternalPtrAddr(handle),
BeginPtr(idxvec), len,
&res));
ret = PROTECT(R_MakeExternalPtr(res, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
UNPROTECT(1);
R_API_END();
return ret;
}
SEXP XGDMatrixSaveBinary_R(SEXP handle, SEXP fname, SEXP silent) {
R_API_BEGIN();
CHECK_CALL(XGDMatrixSaveBinary(R_ExternalPtrAddr(handle),
CHAR(asChar(fname)),
asInteger(silent)));
R_API_END();
return R_NilValue;
}
SEXP XGDMatrixSetInfo_R(SEXP handle, SEXP field, SEXP array) {
R_API_BEGIN();
int len = length(array);
const char *name = CHAR(asChar(field));
if (!strcmp("group", name)) {
std::vector<unsigned> vec(len);
#pragma omp parallel for schedule(static)
for (int i = 0; i < len; ++i) {
vec[i] = static_cast<unsigned>(INTEGER(array)[i]);
}
CHECK_CALL(XGDMatrixSetGroup(R_ExternalPtrAddr(handle), BeginPtr(vec), len));
} else {
std::vector<float> vec(len);
#pragma omp parallel for schedule(static)
for (int i = 0; i < len; ++i) {
vec[i] = REAL(array)[i];
}
CHECK_CALL(XGDMatrixSetFloatInfo(R_ExternalPtrAddr(handle),
CHAR(asChar(field)),
BeginPtr(vec), len));
}
R_API_END();
return R_NilValue;
}
SEXP XGDMatrixGetInfo_R(SEXP handle, SEXP field) {
SEXP ret;
R_API_BEGIN();
bst_ulong olen;
const float *res;
CHECK_CALL(XGDMatrixGetFloatInfo(R_ExternalPtrAddr(handle),
CHAR(asChar(field)),
&olen,
&res));
ret = PROTECT(allocVector(REALSXP, olen));
for (size_t i = 0; i < olen; ++i) {
REAL(ret)[i] = res[i];
}
UNPROTECT(1);
R_API_END();
return ret;
}
SEXP XGDMatrixNumRow_R(SEXP handle) {
bst_ulong nrow;
R_API_BEGIN();
CHECK_CALL(XGDMatrixNumRow(R_ExternalPtrAddr(handle), &nrow));
R_API_END();
return ScalarInteger(static_cast<int>(nrow));
}
SEXP XGDMatrixNumCol_R(SEXP handle) {
bst_ulong ncol;
R_API_BEGIN();
CHECK_CALL(XGDMatrixNumCol(R_ExternalPtrAddr(handle), &ncol));
R_API_END();
return ScalarInteger(static_cast<int>(ncol));
}
// functions related to booster
void _BoosterFinalizer(SEXP ext) {
if (R_ExternalPtrAddr(ext) == NULL) return;
CHECK_CALL(XGBoosterFree(R_ExternalPtrAddr(ext)));
R_ClearExternalPtr(ext);
}
SEXP XGBoosterCreate_R(SEXP dmats) {
SEXP ret;
R_API_BEGIN();
int len = length(dmats);
std::vector<void*> dvec;
for (int i = 0; i < len; ++i) {
dvec.push_back(R_ExternalPtrAddr(VECTOR_ELT(dmats, i)));
}
BoosterHandle handle;
CHECK_CALL(XGBoosterCreate(BeginPtr(dvec), dvec.size(), &handle));
ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _BoosterFinalizer, TRUE);
UNPROTECT(1);
R_API_END();
return ret;
}
SEXP XGBoosterSetParam_R(SEXP handle, SEXP name, SEXP val) {
R_API_BEGIN();
CHECK_CALL(XGBoosterSetParam(R_ExternalPtrAddr(handle),
CHAR(asChar(name)),
CHAR(asChar(val))));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterUpdateOneIter_R(SEXP handle, SEXP iter, SEXP dtrain) {
R_API_BEGIN();
CHECK_CALL(XGBoosterUpdateOneIter(R_ExternalPtrAddr(handle),
asInteger(iter),
R_ExternalPtrAddr(dtrain)));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterBoostOneIter_R(SEXP handle, SEXP dtrain, SEXP grad, SEXP hess) {
R_API_BEGIN();
CHECK_EQ(length(grad), length(hess))
<< "gradient and hess must have same length";
int len = length(grad);
std::vector<float> tgrad(len), thess(len);
#pragma omp parallel for schedule(static)
for (int j = 0; j < len; ++j) {
tgrad[j] = REAL(grad)[j];
thess[j] = REAL(hess)[j];
}
CHECK_CALL(XGBoosterBoostOneIter(R_ExternalPtrAddr(handle),
R_ExternalPtrAddr(dtrain),
BeginPtr(tgrad), BeginPtr(thess),
len));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames) {
const char *ret;
R_API_BEGIN();
CHECK_EQ(length(dmats), length(evnames))
<< "dmats and evnams must have same length";
int len = length(dmats);
std::vector<void*> vec_dmats;
std::vector<std::string> vec_names;
std::vector<const char*> vec_sptr;
for (int i = 0; i < len; ++i) {
vec_dmats.push_back(R_ExternalPtrAddr(VECTOR_ELT(dmats, i)));
vec_names.push_back(std::string(CHAR(asChar(VECTOR_ELT(evnames, i)))));
}
for (int i = 0; i < len; ++i) {
vec_sptr.push_back(vec_names[i].c_str());
}
CHECK_CALL(XGBoosterEvalOneIter(R_ExternalPtrAddr(handle),
asInteger(iter),
BeginPtr(vec_dmats),
BeginPtr(vec_sptr),
len, &ret));
R_API_END();
return mkString(ret);
}
SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask, SEXP ntree_limit) {
SEXP ret;
R_API_BEGIN();
bst_ulong olen;
const float *res;
CHECK_CALL(XGBoosterPredict(R_ExternalPtrAddr(handle),
R_ExternalPtrAddr(dmat),
asInteger(option_mask),
asInteger(ntree_limit),
&olen, &res));
ret = PROTECT(allocVector(REALSXP, olen));
for (size_t i = 0; i < olen; ++i) {
REAL(ret)[i] = res[i];
}
UNPROTECT(1);
R_API_END();
return ret;
}
SEXP XGBoosterLoadModel_R(SEXP handle, SEXP fname) {
R_API_BEGIN();
CHECK_CALL(XGBoosterLoadModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname))));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterSaveModel_R(SEXP handle, SEXP fname) {
R_API_BEGIN();
CHECK_CALL(XGBoosterSaveModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname))));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw) {
R_API_BEGIN();
CHECK_CALL(XGBoosterLoadModelFromBuffer(R_ExternalPtrAddr(handle),
RAW(raw),
length(raw)));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterModelToRaw_R(SEXP handle) {
SEXP ret;
R_API_BEGIN();
bst_ulong olen;
const char *raw;
CHECK_CALL(XGBoosterGetModelRaw(R_ExternalPtrAddr(handle), &olen, &raw));
ret = PROTECT(allocVector(RAWSXP, olen));
if (olen != 0) {
memcpy(RAW(ret), raw, olen);
}
UNPROTECT(1);
R_API_END();
return ret;
}
SEXP XGBoosterDumpModel_R(SEXP handle, SEXP fmap, SEXP with_stats) {
SEXP out;
R_API_BEGIN();
bst_ulong olen;
const char **res;
CHECK_CALL(XGBoosterDumpModel(R_ExternalPtrAddr(handle),
CHAR(asChar(fmap)),
asInteger(with_stats),
&olen, &res));
out = PROTECT(allocVector(STRSXP, olen));
for (size_t i = 0; i < olen; ++i) {
std::stringstream stream;
stream << "booster[" << i <<"]\n" << res[i];
SET_STRING_ELT(out, i, mkChar(stream.str().c_str()));
}
UNPROTECT(1);
R_API_END();
return out;
}
SEXP XGBoosterGetAttr_R(SEXP handle, SEXP name) {
SEXP out;
R_API_BEGIN();
int success;
const char *val;
CHECK_CALL(XGBoosterGetAttr(R_ExternalPtrAddr(handle),
CHAR(asChar(name)),
&val,
&success));
if (success) {
out = PROTECT(allocVector(STRSXP, 1));
SET_STRING_ELT(out, 0, mkChar(val));
} else {
out = PROTECT(R_NilValue);
}
UNPROTECT(1);
R_API_END();
return out;
}
SEXP XGBoosterSetAttr_R(SEXP handle, SEXP name, SEXP val) {
R_API_BEGIN();
const char *v = isNull(val) ? nullptr : CHAR(asChar(val));
CHECK_CALL(XGBoosterSetAttr(R_ExternalPtrAddr(handle),
CHAR(asChar(name)), v));
R_API_END();
return R_NilValue;
}
SEXP XGBoosterGetAttrNames_R(SEXP handle) {
SEXP out;
R_API_BEGIN();
bst_ulong len;
const char **res;
CHECK_CALL(XGBoosterGetAttrNames(R_ExternalPtrAddr(handle),
&len, &res));
if (len > 0) {
out = PROTECT(allocVector(STRSXP, len));
for (size_t i = 0; i < len; ++i) {
SET_STRING_ELT(out, i, mkChar(res[i]));
}
} else {
out = PROTECT(R_NilValue);
}
UNPROTECT(1);
R_API_END();
return out;
}


@@ -1,344 +0,0 @@
// Copyright (c) 2014 by Contributors
#include <vector>
#include <string>
#include <utility>
#include <cstring>
#include <cstdio>
#include <sstream>
#include "wrapper/xgboost_wrapper.h"
#include "src/utils/utils.h"
#include "src/utils/omp.h"
#include "xgboost_R.h"
using namespace std;
using namespace xgboost;
extern "C" {
void XGBoostAssert_R(int exp, const char *fmt, ...);
void XGBoostCheck_R(int exp, const char *fmt, ...);
int XGBoostSPrintf_R(char *buf, size_t size, const char *fmt, ...);
}
// implements error handling
namespace xgboost {
namespace utils {
extern "C" {
void (*Printf)(const char *fmt, ...) = Rprintf;
int (*SPrintf)(char *buf, size_t size, const char *fmt, ...) = XGBoostSPrintf_R;
void (*Assert)(int exp, const char *fmt, ...) = XGBoostAssert_R;
void (*Check)(int exp, const char *fmt, ...) = XGBoostCheck_R;
void (*Error)(const char *fmt, ...) = error;
}
bool CheckNAN(double v) {
return ISNAN(v);
}
double LogGamma(double v) {
return lgammafn(v);
}
} // namespace utils
namespace random {
void Seed(unsigned seed) {
// warning("parameter seed is ignored, please set random seed using set.seed");
}
double Uniform(void) {
return unif_rand();
}
double Normal(void) {
return norm_rand();
}
} // namespace random
} // namespace xgboost
// call before wrapper starts
inline void _WrapperBegin(void) {
GetRNGstate();
}
// call after wrapper starts
inline void _WrapperEnd(void) {
PutRNGstate();
}
// do nothing, check error
inline void CheckErr(int ret) {
}
extern "C" {
SEXP XGCheckNullPtr_R(SEXP handle) {
return ScalarLogical(R_ExternalPtrAddr(handle) == NULL);
}
void _DMatrixFinalizer(SEXP ext) {
if (R_ExternalPtrAddr(ext) == NULL) return;
XGDMatrixFree(R_ExternalPtrAddr(ext));
R_ClearExternalPtr(ext);
}
SEXP XGDMatrixCreateFromFile_R(SEXP fname, SEXP silent) {
_WrapperBegin();
DMatrixHandle handle;
CheckErr(XGDMatrixCreateFromFile(CHAR(asChar(fname)), asInteger(silent), &handle));
_WrapperEnd();
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
UNPROTECT(1);
return ret;
}
SEXP XGDMatrixCreateFromMat_R(SEXP mat,
SEXP missing) {
_WrapperBegin();
SEXP dim = getAttrib(mat, R_DimSymbol);
size_t nrow = static_cast<size_t>(INTEGER(dim)[0]);
size_t ncol = static_cast<size_t>(INTEGER(dim)[1]);
double *din = REAL(mat);
std::vector<float> data(nrow * ncol);
#pragma omp parallel for schedule(static)
for (bst_omp_uint i = 0; i < nrow; ++i) {
for (size_t j = 0; j < ncol; ++j) {
data[i * ncol +j] = din[i + nrow * j];
}
}
DMatrixHandle handle;
CheckErr(XGDMatrixCreateFromMat(BeginPtr(data), nrow, ncol, asReal(missing), &handle));
_WrapperEnd();
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
UNPROTECT(1);
return ret;
}
SEXP XGDMatrixCreateFromCSC_R(SEXP indptr,
SEXP indices,
SEXP data) {
_WrapperBegin();
const int *p_indptr = INTEGER(indptr);
const int *p_indices = INTEGER(indices);
const double *p_data = REAL(data);
int nindptr = length(indptr);
int ndata = length(data);
std::vector<bst_ulong> col_ptr_(nindptr);
std::vector<unsigned> indices_(ndata);
std::vector<float> data_(ndata);
for (int i = 0; i < nindptr; ++i) {
col_ptr_[i] = static_cast<bst_ulong>(p_indptr[i]);
}
#pragma omp parallel for schedule(static)
for (int i = 0; i < ndata; ++i) {
indices_[i] = static_cast<unsigned>(p_indices[i]);
data_[i] = static_cast<float>(p_data[i]);
}
DMatrixHandle handle;
CheckErr(XGDMatrixCreateFromCSC(BeginPtr(col_ptr_), BeginPtr(indices_),
BeginPtr(data_), nindptr, ndata,
&handle));
_WrapperEnd();
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
UNPROTECT(1);
return ret;
}
SEXP XGDMatrixSliceDMatrix_R(SEXP handle, SEXP idxset) {
_WrapperBegin();
int len = length(idxset);
std::vector<int> idxvec(len);
for (int i = 0; i < len; ++i) {
idxvec[i] = INTEGER(idxset)[i] - 1;
}
DMatrixHandle res;
CheckErr(XGDMatrixSliceDMatrix(R_ExternalPtrAddr(handle),
BeginPtr(idxvec), len,
&res));
_WrapperEnd();
SEXP ret = PROTECT(R_MakeExternalPtr(res, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
UNPROTECT(1);
return ret;
}
void XGDMatrixSaveBinary_R(SEXP handle, SEXP fname, SEXP silent) {
_WrapperBegin();
CheckErr(XGDMatrixSaveBinary(R_ExternalPtrAddr(handle),
CHAR(asChar(fname)), asInteger(silent)));
_WrapperEnd();
}
void XGDMatrixSetInfo_R(SEXP handle, SEXP field, SEXP array) {
_WrapperBegin();
int len = length(array);
const char *name = CHAR(asChar(field));
if (!strcmp("group", name)) {
std::vector<unsigned> vec(len);
#pragma omp parallel for schedule(static)
for (int i = 0; i < len; ++i) {
vec[i] = static_cast<unsigned>(INTEGER(array)[i]);
}
CheckErr(XGDMatrixSetGroup(R_ExternalPtrAddr(handle), BeginPtr(vec), len));
} else {
std::vector<float> vec(len);
#pragma omp parallel for schedule(static)
for (int i = 0; i < len; ++i) {
vec[i] = REAL(array)[i];
}
CheckErr(XGDMatrixSetFloatInfo(R_ExternalPtrAddr(handle),
CHAR(asChar(field)),
BeginPtr(vec), len));
}
_WrapperEnd();
}
SEXP XGDMatrixGetInfo_R(SEXP handle, SEXP field) {
_WrapperBegin();
bst_ulong olen;
const float *res;
CheckErr(XGDMatrixGetFloatInfo(R_ExternalPtrAddr(handle),
CHAR(asChar(field)),
&olen,
&res));
_WrapperEnd();
SEXP ret = PROTECT(allocVector(REALSXP, olen));
for (size_t i = 0; i < olen; ++i) {
REAL(ret)[i] = res[i];
}
UNPROTECT(1);
return ret;
}
SEXP XGDMatrixNumRow_R(SEXP handle) {
bst_ulong nrow;
CheckErr(XGDMatrixNumRow(R_ExternalPtrAddr(handle), &nrow));
return ScalarInteger(static_cast<int>(nrow));
}
// functions related to booster
void _BoosterFinalizer(SEXP ext) {
if (R_ExternalPtrAddr(ext) == NULL) return;
CheckErr(XGBoosterFree(R_ExternalPtrAddr(ext)));
R_ClearExternalPtr(ext);
}
SEXP XGBoosterCreate_R(SEXP dmats) {
_WrapperBegin();
int len = length(dmats);
std::vector<void*> dvec;
for (int i = 0; i < len; ++i) {
dvec.push_back(R_ExternalPtrAddr(VECTOR_ELT(dmats, i)));
}
BoosterHandle handle;
CheckErr(XGBoosterCreate(BeginPtr(dvec), dvec.size(), &handle));
_WrapperEnd();
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
R_RegisterCFinalizerEx(ret, _BoosterFinalizer, TRUE);
UNPROTECT(1);
return ret;
}
void XGBoosterSetParam_R(SEXP handle, SEXP name, SEXP val) {
_WrapperBegin();
CheckErr(XGBoosterSetParam(R_ExternalPtrAddr(handle),
CHAR(asChar(name)),
CHAR(asChar(val))));
_WrapperEnd();
}
void XGBoosterUpdateOneIter_R(SEXP handle, SEXP iter, SEXP dtrain) {
_WrapperBegin();
CheckErr(XGBoosterUpdateOneIter(R_ExternalPtrAddr(handle),
asInteger(iter),
R_ExternalPtrAddr(dtrain)));
_WrapperEnd();
}
void XGBoosterBoostOneIter_R(SEXP handle, SEXP dtrain, SEXP grad, SEXP hess) {
_WrapperBegin();
utils::Check(length(grad) == length(hess), "gradient and hess must have same length");
int len = length(grad);
std::vector<float> tgrad(len), thess(len);
#pragma omp parallel for schedule(static)
for (int j = 0; j < len; ++j) {
tgrad[j] = REAL(grad)[j];
thess[j] = REAL(hess)[j];
}
CheckErr(XGBoosterBoostOneIter(R_ExternalPtrAddr(handle),
R_ExternalPtrAddr(dtrain),
BeginPtr(tgrad), BeginPtr(thess),
len));
_WrapperEnd();
}
SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames) {
_WrapperBegin();
utils::Check(length(dmats) == length(evnames), "dmats and evnames must have same length");
int len = length(dmats);
std::vector<void*> vec_dmats;
std::vector<std::string> vec_names;
std::vector<const char*> vec_sptr;
for (int i = 0; i < len; ++i) {
vec_dmats.push_back(R_ExternalPtrAddr(VECTOR_ELT(dmats, i)));
vec_names.push_back(std::string(CHAR(asChar(VECTOR_ELT(evnames, i)))));
}
for (int i = 0; i < len; ++i) {
vec_sptr.push_back(vec_names[i].c_str());
}
const char *ret;
CheckErr(XGBoosterEvalOneIter(R_ExternalPtrAddr(handle),
asInteger(iter),
BeginPtr(vec_dmats),
BeginPtr(vec_sptr),
len, &ret));
_WrapperEnd();
return mkString(ret);
}
SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask, SEXP ntree_limit) {
_WrapperBegin();
bst_ulong olen;
const float *res;
CheckErr(XGBoosterPredict(R_ExternalPtrAddr(handle),
R_ExternalPtrAddr(dmat),
asInteger(option_mask),
asInteger(ntree_limit),
&olen, &res));
_WrapperEnd();
SEXP ret = PROTECT(allocVector(REALSXP, olen));
for (size_t i = 0; i < olen; ++i) {
REAL(ret)[i] = res[i];
}
UNPROTECT(1);
return ret;
}
void XGBoosterLoadModel_R(SEXP handle, SEXP fname) {
_WrapperBegin();
CheckErr(XGBoosterLoadModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname))));
_WrapperEnd();
}
void XGBoosterSaveModel_R(SEXP handle, SEXP fname) {
_WrapperBegin();
CheckErr(XGBoosterSaveModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname))));
_WrapperEnd();
}
void XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw) {
_WrapperBegin();
XGBoosterLoadModelFromBuffer(R_ExternalPtrAddr(handle),
RAW(raw),
length(raw));
_WrapperEnd();
}
SEXP XGBoosterModelToRaw_R(SEXP handle) {
bst_ulong olen;
_WrapperBegin();
const char *raw;
CheckErr(XGBoosterGetModelRaw(R_ExternalPtrAddr(handle), &olen, &raw));
_WrapperEnd();
SEXP ret = PROTECT(allocVector(RAWSXP, olen));
if (olen != 0) {
memcpy(RAW(ret), raw, olen);
}
UNPROTECT(1);
return ret;
}
SEXP XGBoosterDumpModel_R(SEXP handle, SEXP fmap, SEXP with_stats) {
_WrapperBegin();
bst_ulong olen;
const char **res;
CheckErr(XGBoosterDumpModel(R_ExternalPtrAddr(handle),
CHAR(asChar(fmap)),
asInteger(with_stats),
&olen, &res));
_WrapperEnd();
SEXP out = PROTECT(allocVector(STRSXP, olen));
for (size_t i = 0; i < olen; ++i) {
stringstream stream;
stream << "booster[" << i <<"]\n" << res[i];
SET_STRING_ELT(out, i, mkChar(stream.str().c_str()));
}
UNPROTECT(1);
return out;
}
}


@@ -4,155 +4,209 @@
* \author Tianqi Chen
* \brief R wrapper of xgboost
*/
#ifndef XGBOOST_R_H_ // NOLINT(*)
#define XGBOOST_R_H_ // NOLINT(*)
extern "C" {
#include <Rinternals.h>
#include <R_ext/Random.h>
#include <Rmath.h>
}
extern "C" {
/*!
* \brief check whether a handle is NULL
* \param handle
* \return whether it is null ptr
#include <xgboost/c_api.h>
/*!
* \brief check whether a handle is NULL
* \param handle
* \return whether it is null ptr
*/
XGB_DLL SEXP XGCheckNullPtr_R(SEXP handle);
/*!
* \brief load a data matrix
* \param fname name of the content
* \param silent whether print messages
* \return a loaded data matrix
*/
XGB_DLL SEXP XGDMatrixCreateFromFile_R(SEXP fname, SEXP silent);
/*!
* \brief create matrix content from dense matrix
* This assumes the matrix is stored in column major format
* \param data R Matrix object
* \param missing which value to represent missing value
* \return created dmatrix
*/
XGB_DLL SEXP XGDMatrixCreateFromMat_R(SEXP mat,
SEXP missing);
/*!
* \brief create a matrix content from CSC format
* \param indptr pointer to column headers
* \param indices row indices
* \param data content of the data
* \return created dmatrix
*/
XGB_DLL SEXP XGDMatrixCreateFromCSC_R(SEXP indptr,
SEXP indices,
SEXP data);
/*!
* \brief create a new dmatrix from sliced content of existing matrix
* \param handle instance of data matrix to be sliced
* \param idxset index set
* \return a sliced new matrix
*/
XGB_DLL SEXP XGDMatrixSliceDMatrix_R(SEXP handle, SEXP idxset);
/*!
* \brief load a data matrix into binary file
* \param handle an instance of data matrix
* \param fname file name
* \param silent print statistics when saving
* \return R_NilValue
*/
XGB_DLL SEXP XGDMatrixSaveBinary_R(SEXP handle, SEXP fname, SEXP silent);
/*!
* \brief set information to dmatrix
* \param handle an instance of data matrix
* \param field field name, can be label, weight
* \param array pointer to float vector
* \return R_NilValue
*/
XGB_DLL SEXP XGDMatrixSetInfo_R(SEXP handle, SEXP field, SEXP array);
/*!
* \brief get info vector from matrix
* \param handle an instance of data matrix
* \param field field name
* \return info vector
*/
XGB_DLL SEXP XGDMatrixGetInfo_R(SEXP handle, SEXP field);
/*!
* \brief return number of rows
* \param handle an instance of data matrix
*/
XGB_DLL SEXP XGDMatrixNumRow_R(SEXP handle);
/*!
* \brief return number of columns
* \param handle an instance of data matrix
*/
XGB_DLL SEXP XGDMatrixNumCol_R(SEXP handle);
/*!
* \brief create xgboost learner
* \param dmats a list of dmatrix handles that will be cached
*/
XGB_DLL SEXP XGBoosterCreate_R(SEXP dmats);
/*!
* \brief set parameters
* \param handle handle
* \param name parameter name
* \param val value of parameter
* \return R_NilValue
*/
XGB_DLL SEXP XGBoosterSetParam_R(SEXP handle, SEXP name, SEXP val);
/*!
* \brief update the model in one round using dtrain
* \param handle handle
* \param iter current iteration rounds
* \param dtrain training data
* \return R_NilValue
*/
XGB_DLL SEXP XGBoosterUpdateOneIter_R(SEXP ext, SEXP iter, SEXP dtrain);
/*!
* \brief update the model, by directly specify gradient and second order gradient,
* this can be used to replace UpdateOneIter, to support customized loss function
* \param handle handle
* \param dtrain training data
* \param grad gradient statistics
* \param hess second order gradient statistics
* \return R_NilValue
*/
XGB_DLL SEXP XGBoosterBoostOneIter_R(SEXP handle, SEXP dtrain, SEXP grad, SEXP hess);
/*!
* \brief get evaluation statistics for xgboost
* \param handle handle
* \param iter current iteration rounds
* \param dmats list of handles to dmatrices
* \param evname name of evaluation
* \return the string containing evaluation stats
*/
XGB_DLL SEXP XGBoosterEvalOneIter_R(SEXP handle, SEXP iter, SEXP dmats, SEXP evnames);
/*!
* \brief make prediction based on dmat
* \param handle handle
* \param dmat data matrix
* \param option_mask output_margin:1 predict_leaf:2
* \param ntree_limit limit number of trees used in prediction
*/
XGB_DLL SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask, SEXP ntree_limit);
/*!
* \brief load model from existing file
* \param handle handle
* \param fname file name
* \return R_NilValue
*/
XGB_DLL SEXP XGBoosterLoadModel_R(SEXP handle, SEXP fname);
/*!
* \brief save model into existing file
* \param handle handle
* \param fname file name
* \return R_NilValue
*/
XGB_DLL SEXP XGBoosterSaveModel_R(SEXP handle, SEXP fname);
/*!
* \brief load model from raw array
* \param handle handle
* \return R_NilValue
*/
XGB_DLL SEXP XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw);
/*!
* \brief save model into R's raw array
* \param handle handle
* \return raw array
*/
XGB_DLL SEXP XGBoosterModelToRaw_R(SEXP handle);
/*!
* \brief dump model into a string
* \param handle handle
* \param fmap name to fmap can be empty string
* \param with_stats whether dump statistics of splits
*/
XGB_DLL SEXP XGBoosterDumpModel_R(SEXP handle, SEXP fmap, SEXP with_stats);
/*!
* \brief get learner attribute value
* \param handle handle
* \param name attribute name
* \return character containing attribute value
*/
XGB_DLL SEXP XGBoosterGetAttr_R(SEXP handle, SEXP name);
/*!
* \brief set learner attribute value
* \param handle handle
* \param name attribute name
* \param val attribute value; NULL value would delete an attribute
* \return R_NilValue
*/
XGB_DLL SEXP XGBoosterSetAttr_R(SEXP handle, SEXP name, SEXP val);
/*!
* \brief get the names of learner attributes
* \return string vector containing attribute names
*/
XGB_DLL SEXP XGBoosterGetAttrNames_R(SEXP handle);
#endif // XGBOOST_R_H_ // NOLINT(*)


@@ -24,11 +24,3 @@ void XGBoostCheck_R(int exp, const char *fmt, ...) {
error("%s\n", buf);
}
}
int XGBoostSPrintf_R(char *buf, size_t size, const char *fmt, ...) {
int ret;
va_list args;
va_start(args, fmt);
ret = vsnprintf(buf, size, fmt, args);
va_end(args);
return ret;
}


@@ -0,0 +1,65 @@
// Copyright (c) 2015 by Contributors
// This file contains the customization implementations of R module
// to change behavior of libxgboost
#include <xgboost/logging.h>
#include "src/common/random.h"
#include "./xgboost_R.h"
// redirect the messages to R's console.
namespace dmlc {
void CustomLogMessage::Log(const std::string& msg) {
Rprintf("%s\n", msg.c_str());
}
} // namespace dmlc
// implements rabit error handling.
extern "C" {
void XGBoostAssert_R(int exp, const char *fmt, ...);
void XGBoostCheck_R(int exp, const char *fmt, ...);
}
namespace rabit {
namespace utils {
extern "C" {
void (*Printf)(const char *fmt, ...) = Rprintf;
void (*Assert)(int exp, const char *fmt, ...) = XGBoostAssert_R;
void (*Check)(int exp, const char *fmt, ...) = XGBoostCheck_R;
void (*Error)(const char *fmt, ...) = error;
}
}
}
namespace xgboost {
ConsoleLogger::~ConsoleLogger() {
dmlc::CustomLogMessage::Log(log_stream_.str());
}
TrackerLogger::~TrackerLogger() {
dmlc::CustomLogMessage::Log(log_stream_.str());
}
} // namespace xgboost
namespace xgboost {
namespace common {
// redirect the math functions.
bool CheckNAN(double v) {
return ISNAN(v);
}
double LogGamma(double v) {
return lgammafn(v);
}
// customize random engine.
void CustomGlobalRandomEngine::seed(CustomGlobalRandomEngine::result_type val) {
// ignore the seed
}
// use R's PRNG to replace the default engine.
CustomGlobalRandomEngine::result_type
CustomGlobalRandomEngine::operator()() {
return static_cast<result_type>(
std::floor(unif_rand() * CustomGlobalRandomEngine::max()));
}
} // namespace common
} // namespace xgboost


@@ -8,29 +8,175 @@ train <- agaricus.train
test <- agaricus.test
set.seed(1994)
test_that("train and predict", {
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
test_that("train and predict binary classification", {
nrounds = 2
expect_output(
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = nrounds, objective = "binary:logistic")
, "train-error")
expect_equal(class(bst), "xgb.Booster")
expect_equal(bst$niter, nrounds)
expect_false(is.null(bst$evaluation_log))
expect_equal(nrow(bst$evaluation_log), nrounds)
expect_lt(bst$evaluation_log[, min(train_error)], 0.03)
pred <- predict(bst, test$data)
expect_length(pred, 1611)
pred1 <- predict(bst, train$data, ntreelimit = 1)
expect_length(pred1, 6513)
err_pred1 <- sum((pred1 > 0.5) != train$label)/length(train$label)
err_log <- bst$evaluation_log[1, train_error]
expect_lt(abs(err_pred1 - err_log), 10e-6)
})
test_that("early stopping", {
res <- xgb.cv(data = train$data, label = train$label, max.depth = 2, nfold = 5,
eta = 0.3, nthread = 2, nround = 20, objective = "binary:logistic",
early.stop.round = 3, maximize = FALSE)
expect_true(nrow(res) < 20)
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 0.3, nthread = 2, nround = 20, objective = "binary:logistic",
early.stop.round = 3, maximize = FALSE)
pred <- predict(bst, test$data)
expect_equal(length(pred), 1611)
test_that("train and predict softprob", {
lb <- as.numeric(iris$Species) - 1
set.seed(11)
expect_output(
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 3, eta = 0.5, nthread = 2, nrounds = 5,
objective = "multi:softprob", num_class=3)
, "train-merror")
expect_false(is.null(bst$evaluation_log))
expect_lt(bst$evaluation_log[, min(train_merror)], 0.025)
expect_equal(bst$niter * 3, xgb.ntree(bst))
pred <- predict(bst, as.matrix(iris[, -5]))
expect_length(pred, nrow(iris) * 3)
# row sums add up to total probability of 1:
expect_equal(rowSums(matrix(pred, ncol=3, byrow=TRUE)), rep(1, nrow(iris)), tolerance = 1e-7)
# manually calculate error at the last iteration:
mpred <- predict(bst, as.matrix(iris[, -5]), reshape = TRUE)
expect_equal(as.numeric(t(mpred)), pred)
pred_labels <- max.col(mpred) - 1
err <- sum(pred_labels != lb)/length(lb)
expect_equal(bst$evaluation_log[5, train_merror], err, tolerance = 5e-6)
# manually calculate error at the 1st iteration:
mpred <- predict(bst, as.matrix(iris[, -5]), reshape = TRUE, ntreelimit = 1)
pred_labels <- max.col(mpred) - 1
err <- sum(pred_labels != lb)/length(lb)
expect_equal(bst$evaluation_log[1, train_merror], err, tolerance = 5e-6)
})
test_that("save_period", {
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 0.3, nthread = 2, nround = 20, objective = "binary:logistic",
save_period = 10, save_name = "xgb.model")
pred <- predict(bst, test$data)
expect_equal(length(pred), 1611)
test_that("train and predict softmax", {
lb <- as.numeric(iris$Species) - 1
set.seed(11)
expect_output(
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 3, eta = 0.5, nthread = 2, nrounds = 5,
objective = "multi:softmax", num_class=3)
, "train-merror")
expect_false(is.null(bst$evaluation_log))
expect_lt(bst$evaluation_log[, min(train_merror)], 0.025)
expect_equal(bst$niter * 3, xgb.ntree(bst))
pred <- predict(bst, as.matrix(iris[, -5]))
expect_length(pred, nrow(iris))
err <- sum(pred != lb)/length(lb)
expect_equal(bst$evaluation_log[5, train_merror], err, tolerance = 5e-6)
})
test_that("train and predict RF", {
set.seed(11)
lb <- train$label
# single iteration
bst <- xgboost(data = train$data, label = lb, max_depth = 5,
nthread = 2, nrounds = 1, objective = "binary:logistic",
num_parallel_tree = 20, subsample = 0.6, colsample_bytree = 0.1)
expect_equal(bst$niter, 1)
expect_equal(xgb.ntree(bst), 20)
pred <- predict(bst, train$data)
pred_err <- sum((pred > 0.5) != lb)/length(lb)
expect_lt(abs(bst$evaluation_log[1, train_error] - pred_err), 10e-6)
#expect_lt(pred_err, 0.03)
pred <- predict(bst, train$data, ntreelimit = 20)
pred_err_20 <- sum((pred > 0.5) != lb)/length(lb)
expect_equal(pred_err_20, pred_err)
#pred <- predict(bst, train$data, ntreelimit = 1)
#pred_err_1 <- sum((pred > 0.5) != lb)/length(lb)
#expect_lt(pred_err, pred_err_1)
#expect_lt(pred_err, 0.08)
})
test_that("train and predict RF with softprob", {
lb <- as.numeric(iris$Species) - 1
nrounds <- 15
set.seed(11)
bst <- xgboost(data = as.matrix(iris[, -5]), label = lb,
max_depth = 3, eta = 0.9, nthread = 2, nrounds = nrounds,
objective = "multi:softprob", num_class=3,
num_parallel_tree = 4, subsample = 0.5, colsample_bytree = 0.5)
expect_equal(bst$niter, 15)
expect_equal(xgb.ntree(bst), 15*3*4)
# predict for all iterations:
pred <- predict(bst, as.matrix(iris[, -5]), reshape=TRUE)
expect_equal(dim(pred), c(nrow(iris), 3))
pred_labels <- max.col(pred) - 1
err <- sum(pred_labels != lb)/length(lb)
expect_equal(bst$evaluation_log[nrounds, train_merror], err, tolerance = 5e-6)
# predict for 7 iterations and adjust for 4 parallel trees per iteration
pred <- predict(bst, as.matrix(iris[, -5]), reshape=TRUE, ntreelimit = 7 * 4)
err <- sum((max.col(pred) - 1) != lb)/length(lb)
expect_equal(bst$evaluation_log[7, train_merror], err, tolerance = 5e-6)
})
test_that("use of multiple eval metrics works", {
expect_output(
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic",
eval_metric = 'error', eval_metric = 'auc', eval_metric = "logloss")
, "train-error.*train-auc.*train-logloss")
expect_false(is.null(bst$evaluation_log))
expect_equal(dim(bst$evaluation_log), c(2, 4))
expect_equal(colnames(bst$evaluation_log), c("iter", "train_error", "train_auc", "train_logloss"))
})
test_that("training continuation works", {
dtrain <- xgb.DMatrix(train$data, label = train$label)
watchlist = list(train=dtrain)
param <- list(objective = "binary:logistic", max_depth = 2, eta = 1, nthread = 2)
# for the reference, use 4 iterations at once:
set.seed(11)
bst <- xgb.train(param, dtrain, nrounds = 4, watchlist)
# first two iterations:
set.seed(11)
bst1 <- xgb.train(param, dtrain, nrounds = 2, watchlist)
# continue for two more:
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, xgb_model = bst1)
expect_equal(bst$raw, bst2$raw)
expect_false(is.null(bst2$evaluation_log))
expect_equal(dim(bst2$evaluation_log), c(4, 2))
expect_equal(bst2$evaluation_log, bst$evaluation_log)
# test continuing from raw model data
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, xgb_model = bst1$raw)
expect_equal(bst$raw, bst2$raw)
expect_equal(dim(bst2$evaluation_log), c(2, 2))
# test continuing from a model in file
xgb.save(bst1, "xgboost.model")
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, xgb_model = "xgboost.model")
expect_equal(bst$raw, bst2$raw)
expect_equal(dim(bst2$evaluation_log), c(2, 2))
})
test_that("xgb.cv works", {
set.seed(11)
cv <- xgb.cv(data = train$data, label = train$label, max_depth = 2, nfold = 5,
eta = 1., nthread = 2, nrounds = 2, objective = "binary:logistic",
verbose=TRUE)
expect_is(cv, 'xgb.cv.synchronous')
expect_false(is.null(cv$evaluation_log))
expect_lt(cv$evaluation_log[, min(test_error_mean)], 0.03)
expect_lt(cv$evaluation_log[, min(test_error_std)], 0.004)
expect_equal(cv$niter, 2)
expect_false(is.null(cv$folds) && is.list(cv$folds))
expect_length(cv$folds, 5)
expect_false(is.null(cv$params) && is.list(cv$params))
expect_false(is.null(cv$callbacks))
expect_false(is.null(cv$call))
})


@@ -0,0 +1,295 @@
# More specific testing of callbacks
require(xgboost)
require(data.table)
context("callbacks")
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
# add some label noise for early stopping tests
add.noise <- function(label, frac) {
inoise <- sample(length(label), length(label) * frac)
label[inoise] <- !label[inoise]
label
}
set.seed(11)
ltrain <- add.noise(train$label, 0.2)
ltest <- add.noise(test$label, 0.2)
dtrain <- xgb.DMatrix(train$data, label = ltrain)
dtest <- xgb.DMatrix(test$data, label = ltest)
watchlist = list(train=dtrain, test=dtest)
err <- function(label, pr) sum((pr > 0.5) != label)/length(label)
param <- list(objective = "binary:logistic", max_depth = 2, nthread = 2)
test_that("cb.print.evaluation works as expected", {
bst_evaluation <- c('train-auc'=0.9, 'test-auc'=0.8)
bst_evaluation_err <- NULL
begin_iteration <- 1
end_iteration <- 7
f0 <- cb.print.evaluation(period=0)
f1 <- cb.print.evaluation(period=1)
f5 <- cb.print.evaluation(period=5)
expect_false(is.null(attr(f1, 'call')))
expect_equal(attr(f1, 'name'), 'cb.print.evaluation')
iteration <- 1
expect_silent(f0())
expect_output(f1(), "\\[1\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
expect_output(f5(), "\\[1\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
expect_null(f1())
iteration <- 2
expect_output(f1(), "\\[2\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
expect_silent(f5())
iteration <- 7
expect_output(f1(), "\\[7\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
expect_output(f5(), "\\[7\\]\ttrain-auc:0.900000\ttest-auc:0.800000")
bst_evaluation_err <- c('train-auc'=0.1, 'test-auc'=0.2)
expect_output(f1(), "\\[7\\]\ttrain-auc:0.900000\\+0.100000\ttest-auc:0.800000\\+0.200000")
})
test_that("cb.evaluation.log works as expected", {
bst_evaluation <- c('train-auc'=0.9, 'test-auc'=0.8)
bst_evaluation_err <- NULL
evaluation_log <- list()
f <- cb.evaluation.log()
expect_false(is.null(attr(f, 'call')))
expect_equal(attr(f, 'name'), 'cb.evaluation.log')
iteration <- 1
expect_silent(f())
expect_equal(evaluation_log,
list(c(iter=1, bst_evaluation)))
iteration <- 2
expect_silent(f())
expect_equal(evaluation_log,
list(c(iter=1, bst_evaluation), c(iter=2, bst_evaluation)))
expect_silent(f(finalize = TRUE))
expect_equal(evaluation_log,
data.table(iter=1:2, train_auc=c(0.9,0.9), test_auc=c(0.8,0.8)))
bst_evaluation_err <- c('train-auc'=0.1, 'test-auc'=0.2)
evaluation_log <- list()
f <- cb.evaluation.log()
iteration <- 1
expect_silent(f())
expect_equal(evaluation_log,
list(c(iter=1, c(bst_evaluation, bst_evaluation_err))))
iteration <- 2
expect_silent(f())
expect_equal(evaluation_log,
list(c(iter=1, c(bst_evaluation, bst_evaluation_err)),
c(iter=2, c(bst_evaluation, bst_evaluation_err))))
expect_silent(f(finalize = TRUE))
expect_equal(evaluation_log,
data.table(iter=1:2,
train_auc_mean=c(0.9,0.9), train_auc_std=c(0.1,0.1),
test_auc_mean=c(0.8,0.8), test_auc_std=c(0.2,0.2)))
})
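# --- Editor's sketch (not part of the diff): once finalized, the log is a
# data.table, so it can be queried directly with data.table syntax ---
bst <- xgb.train(param, dtrain, nrounds = 10, watchlist, eval_metric = "auc")
bst$evaluation_log[, max(test_auc)]  # best test AUC over all iterations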
param <- list(objective = "binary:logistic", max_depth = 4, nthread = 2)
test_that("cb.reset.parameters works as expected", {
# fixed eta
set.seed(111)
bst0 <- xgb.train(param, dtrain, nrounds = 2, watchlist, eta = 0.9)
expect_false(is.null(bst0$evaluation_log))
expect_false(is.null(bst0$evaluation_log$train_error))
# same eta but re-set as a vector parameter in the callback
set.seed(111)
my_par <- list(eta = c(0.9, 0.9))
bst1 <- xgb.train(param, dtrain, nrounds = 2, watchlist,
callbacks = list(cb.reset.parameters(my_par)))
expect_false(is.null(bst1$evaluation_log$train_error))
expect_equal(bst0$evaluation_log$train_error,
bst1$evaluation_log$train_error)
# same eta but re-set via a function in the callback
set.seed(111)
my_par <- list(eta = function(itr, itr_end) 0.9)
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist,
callbacks = list(cb.reset.parameters(my_par)))
expect_false(is.null(bst2$evaluation_log$train_error))
expect_equal(bst0$evaluation_log$train_error,
bst2$evaluation_log$train_error)
# different eta re-set as a vector parameter in the callback
set.seed(111)
my_par <- list(eta = c(0.6, 0.5))
bst3 <- xgb.train(param, dtrain, nrounds = 2, watchlist,
callbacks = list(cb.reset.parameters(my_par)))
expect_false(is.null(bst3$evaluation_log$train_error))
expect_false(all(bst0$evaluation_log$train_error == bst3$evaluation_log$train_error))
# resetting multiple parameters at the same time runs with no error
my_par <- list(eta = c(1., 0.5), gamma = c(1, 2), max_depth = c(4, 8))
expect_error(
bst4 <- xgb.train(param, dtrain, nrounds = 2, watchlist,
callbacks = list(cb.reset.parameters(my_par)))
, NA) # NA = no error
# expect no learning with 0 learning rate
my_par <- list(eta = c(0., 0.))
bstX <- xgb.train(param, dtrain, nrounds = 2, watchlist,
callbacks = list(cb.reset.parameters(my_par)))
expect_false(is.null(bstX$evaluation_log$train_error))
er <- unique(bstX$evaluation_log$train_error)
expect_length(er, 1)
expect_gt(er, 0.4)
})
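# --- Editor's sketch (not part of the diff): the function form accepted by
# cb.reset.parameters supports schedules, e.g. exponential eta decay ---
my_etas <- list(eta = function(itr, itr_end) 0.5 * 0.9^(itr - 1))
bst <- xgb.train(param, dtrain, nrounds = 10, watchlist,
                 callbacks = list(cb.reset.parameters(my_etas)))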
test_that("cb.save.model works as expected", {
files <- c('xgboost_01.model', 'xgboost_02.model', 'xgboost.model')
for (f in files) if (file.exists(f)) file.remove(f)
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist, eta = 1,
save_period = 1, save_name = "xgboost_%02d.model")
expect_true(file.exists('xgboost_01.model'))
expect_true(file.exists('xgboost_02.model'))
b1 <- xgb.load('xgboost_01.model')
expect_equal(xgb.ntree(b1), 1)
b2 <- xgb.load('xgboost_02.model')
expect_equal(xgb.ntree(b2), 2)
expect_equal(bst$raw, b2$raw)
# save_period = 0 saves the last iteration's model
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist, eta = 1, save_period = 0)
expect_true(file.exists('xgboost.model'))
b2 <- xgb.load('xgboost.model')
expect_equal(bst$raw, b2$raw)
for (f in files) if (file.exists(f)) file.remove(f)
})
test_that("can store evaluation_log without printing", {
expect_silent(
bst <- xgb.train(param, dtrain, nrounds = 10, watchlist, eta = 1,
verbose = 0, callbacks = list(cb.evaluation.log()))
)
expect_false(is.null(bst$evaluation_log))
expect_false(is.null(bst$evaluation_log$train_error))
expect_lt(bst$evaluation_log[, min(train_error)], 0.2)
})
test_that("early stopping xgb.train works", {
set.seed(11)
expect_output(
bst <- xgb.train(param, dtrain, nrounds = 20, watchlist, eta = 0.3,
early_stopping_rounds = 3, maximize = FALSE)
, "Stopping. Best iteration")
expect_false(is.null(bst$best_iteration))
expect_lt(bst$best_iteration, 19)
expect_equal(bst$best_iteration, bst$best_ntreelimit)
pred <- predict(bst, dtest)
expect_equal(length(pred), 1611)
err_pred <- err(ltest, pred)
err_log <- bst$evaluation_log[bst$best_iteration, test_error]
expect_equal(err_log, err_pred, tolerance = 5e-6)
})
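# --- Editor's sketch (not part of the diff): the user-facing early-stopping
# pattern verified above; predict with only the best iteration's trees ---
bst <- xgb.train(param, dtrain, nrounds = 100, watchlist, eta = 0.3,
                 early_stopping_rounds = 3, maximize = FALSE)
pred <- predict(bst, dtest, ntreelimit = bst$best_ntreelimit)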
test_that("early stopping using a specific metric works", {
set.seed(11)
expect_output(
bst <- xgb.train(param, dtrain, nrounds = 20, watchlist, eta = 0.6,
eval_metric="logloss", eval_metric="auc",
callbacks = list(cb.early.stop(stopping_rounds = 3, maximize = FALSE,
metric_name = 'test_logloss')))
, "Stopping. Best iteration")
expect_false(is.null(bst$best_iteration))
expect_lt(bst$best_iteration, 19)
expect_equal(bst$best_iteration, bst$best_ntreelimit)
pred <- predict(bst, dtest, ntreelimit = bst$best_ntreelimit)
expect_equal(length(pred), 1611)
logloss_pred <- sum(-ltest * log(pred) - (1 - ltest) * log(1 - pred)) / length(ltest)
logloss_log <- bst$evaluation_log[bst$best_iteration, test_logloss]
expect_equal(logloss_log, logloss_pred, tolerance = 5e-6)
})
test_that("early stopping xgb.cv works", {
set.seed(11)
expect_output(
cv <- xgb.cv(param, dtrain, nfold = 5, eta = 0.3, nrounds = 20,
early_stopping_rounds = 3, maximize = FALSE)
, "Stopping. Best iteration")
expect_false(is.null(cv$best_iteration))
expect_lt(cv$best_iteration, 19)
expect_equal(cv$best_iteration, cv$best_ntreelimit)
# the best error is min error:
expect_true(cv$evaluation_log[, test_error_mean[cv$best_iteration] == min(test_error_mean)])
})
test_that("prediction in xgb.cv works", {
set.seed(11)
nrounds <- 4
cv <- xgb.cv(param, dtrain, nfold = 5, eta = 0.5, nrounds = nrounds, prediction = TRUE)
expect_false(is.null(cv$evaluation_log))
expect_false(is.null(cv$pred))
expect_length(cv$pred, nrow(train$data))
err_pred <- mean( sapply(cv$folds, function(f) mean(err(ltrain[f], cv$pred[f]))) )
err_log <- cv$evaluation_log[nrounds, test_error_mean]
expect_equal(err_pred, err_log, tolerance = 1e-6)
# save CV models
set.seed(11)
cvx <- xgb.cv(param, dtrain, nfold = 5, eta = 0.5, nrounds = nrounds, prediction = TRUE,
callbacks = list(cb.cv.predict(save_models = TRUE)))
expect_equal(cv$evaluation_log, cvx$evaluation_log)
expect_length(cvx$models, 5)
expect_true(all(sapply(cvx$models, class) == 'xgb.Booster'))
})
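# --- Editor's sketch (not part of the diff): prediction = TRUE collects
# out-of-fold predictions for every training row, and cb.cv.predict with
# save_models = TRUE additionally keeps the per-fold boosters ---
cv <- xgb.cv(param, dtrain, nfold = 5, nrounds = 4, prediction = TRUE,
             callbacks = list(cb.cv.predict(save_models = TRUE)))
head(cv$pred)      # one out-of-fold prediction per row of dtrain
length(cv$models)  # one xgb.Booster per fold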
test_that("prediction in early-stopping xgb.cv works", {
set.seed(1)
expect_output(
cv <- xgb.cv(param, dtrain, nfold = 5, eta = 0.1, nrounds = 20,
early_stopping_rounds = 5, maximize = FALSE, prediction = TRUE)
, "Stopping. Best iteration")
expect_false(is.null(cv$best_iteration))
expect_lt(cv$best_iteration, 19)
expect_false(is.null(cv$evaluation_log))
expect_false(is.null(cv$pred))
expect_length(cv$pred, nrow(train$data))
err_pred <- mean( sapply(cv$folds, function(f) mean(err(ltrain[f], cv$pred[f]))) )
err_log <- cv$evaluation_log[cv$best_iteration, test_error_mean]
expect_equal(err_pred, err_log, tolerance = 1e-6)
err_log_last <- cv$evaluation_log[cv$niter, test_error_mean]
expect_gt(abs(err_pred - err_log_last), 1e-4)
})
test_that("prediction in xgb.cv for softprob works", {
lb <- as.numeric(iris$Species) - 1
set.seed(11)
expect_warning(
cv <- xgb.cv(data = as.matrix(iris[, -5]), label = lb, nfold = 4,
eta = 0.5, nrounds = 5, max_depth = 3, nthread = 2,
subsample = 0.8, gamma = 2,
prediction = TRUE, objective = "multi:softprob", num_class = 3)
, NA)
expect_false(is.null(cv$pred))
expect_equal(dim(cv$pred), c(nrow(iris), 3))
expect_lt(diff(range(rowSums(cv$pred))), 1e-6)
})


@@ -2,35 +2,50 @@ context('Test models with custom objective')
require(xgboost)
set.seed(1994)
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
+watchlist <- list(eval = dtest, train = dtrain)
+logregobj <- function(preds, dtrain) {
+labels <- getinfo(dtrain, "label")
+preds <- 1 / (1 + exp(-preds))
+grad <- preds - labels
+hess <- preds * (1 - preds)
+return(list(grad = grad, hess = hess))
+}
+evalerror <- function(preds, dtrain) {
+labels <- getinfo(dtrain, "label")
+err <- as.numeric(sum(labels != (preds > 0))) / length(labels)
+return(list(metric = "error", value = err))
+}
+param <- list(max_depth=2, eta=1, nthread = 2,
+objective=logregobj, eval_metric=evalerror)
+num_round <- 2
test_that("custom objective works", {
-watchlist <- list(eval = dtest, train = dtrain)
-num_round <- 2
-logregobj <- function(preds, dtrain) {
-labels <- getinfo(dtrain, "label")
-preds <- 1 / (1 + exp(-preds))
-grad <- preds - labels
-hess <- preds * (1 - preds)
-return(list(grad = grad, hess = hess))
-}
-evalerror <- function(preds, dtrain) {
-labels <- getinfo(dtrain, "label")
-err <- as.numeric(sum(labels != (preds > 0))) / length(labels)
-return(list(metric = "error", value = err))
-}
-param <- list(max.depth=2, eta=1, nthread = 2, silent=1,
-objective=logregobj, eval_metric=evalerror)
bst <- xgb.train(param, dtrain, num_round, watchlist)
expect_equal(class(bst), "xgb.Booster")
-expect_equal(length(bst$raw), 1064)
+expect_equal(length(bst$raw), 1094)
expect_false(is.null(bst$evaluation_log))
expect_false(is.null(bst$evaluation_log$eval_error))
expect_lt(bst$evaluation_log[num_round, eval_error], 0.03)
})
test_that("custom objective in CV works", {
cv <- xgb.cv(param, dtrain, num_round, nfold=10, verbose=FALSE)
expect_false(is.null(cv$evaluation_log))
expect_equal(dim(cv$evaluation_log), c(2, 5))
expect_lt(cv$evaluation_log[num_round, test_error_mean], 0.03)
})
test_that("custom objective using DMatrix attr works", {
attr(dtrain, 'label') <- getinfo(dtrain, 'label')
logregobjattr <- function(preds, dtrain) {
@@ -40,9 +55,8 @@ test_that("custom objective works", {
hess <- preds * (1 - preds)
return(list(grad = grad, hess = hess))
}
-param <- list(max.depth=2, eta=1, nthread = 2, silent = 1,
-objective = logregobjattr, eval_metric = evalerror)
+param$objective = logregobjattr
bst <- xgb.train(param, dtrain, num_round, watchlist)
expect_equal(class(bst), "xgb.Booster")
-expect_equal(length(bst$raw), 1064)
+expect_equal(length(bst$raw), 1094)
})
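# --- Editor's note (not part of the diff): a custom objective receives raw
# margin scores, which is why logregobj applies the sigmoid itself; a hedged
# sketch of recovering both scales from a model trained with it ---
bst <- xgb.train(param, dtrain, num_round, watchlist)
margin <- predict(bst, dtest, outputmargin = TRUE)  # raw scores, as the objective sees them
prob <- 1 / (1 + exp(-margin))                      # manual sigmoid to get probabilities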


@@ -0,0 +1,67 @@
require(xgboost)
context("testing xgb.DMatrix functionality")
data(agaricus.test, package='xgboost')
test_data <- agaricus.test$data[1:100,]
test_label <- agaricus.test$label[1:100]
test_that("xgb.DMatrix: basic construction, saving, loading", {
# from sparse matrix
dtest1 <- xgb.DMatrix(test_data, label=test_label)
# from dense matrix
dtest2 <- xgb.DMatrix(as.matrix(test_data), label=test_label)
expect_equal(getinfo(dtest1, 'label'), getinfo(dtest2, 'label'))
# save to a local file
tmp_file <- tempfile('xgb.DMatrix_')
expect_true(xgb.DMatrix.save(dtest1, tmp_file))
# read from a local file
dtest3 <- xgb.DMatrix(tmp_file)
unlink(tmp_file)
expect_equal(getinfo(dtest1, 'label'), getinfo(dtest3, 'label'))
})
test_that("xgb.DMatrix: getinfo & setinfo", {
dtest <- xgb.DMatrix(test_data)
expect_true(setinfo(dtest, 'label', test_label))
labels <- getinfo(dtest, 'label')
expect_equal(test_label, getinfo(dtest, 'label'))
expect_true(length(getinfo(dtest, 'weight')) == 0)
expect_true(length(getinfo(dtest, 'base_margin')) == 0)
expect_true(setinfo(dtest, 'weight', test_label))
expect_true(setinfo(dtest, 'base_margin', test_label))
expect_true(setinfo(dtest, 'group', c(50,50)))
expect_error(setinfo(dtest, 'group', test_label))
# providing character values will give a warning
expect_warning( setinfo(dtest, 'weight', rep('a', nrow(test_data))) )
# any other label should error
expect_error(setinfo(dtest, 'asdf', test_label))
})
test_that("xgb.DMatrix: slice, dim", {
dtest <- xgb.DMatrix(test_data, label=test_label)
expect_equal(dim(dtest), dim(test_data))
dsub1 <- slice(dtest, 1:42)
expect_equal(nrow(dsub1), 42)
expect_equal(ncol(dsub1), ncol(test_data))
dsub2 <- dtest[1:42,]
expect_equal(dim(dtest), dim(test_data))
expect_equal(getinfo(dsub1, 'label'), getinfo(dsub2, 'label'))
})
test_that("xgb.DMatrix: colnames", {
dtest <- xgb.DMatrix(test_data, label=test_label)
expect_equal(colnames(dtest), colnames(test_data))
expect_error( colnames(dtest) <- 'asdf')
new_names <- make.names(1:ncol(test_data))
expect_silent( colnames(dtest) <- new_names)
expect_equal(colnames(dtest), new_names)
expect_silent(colnames(dtest) <- NULL)
expect_null(colnames(dtest))
})
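# --- Editor's sketch (not part of the diff): quick reference for the
# xgb.DMatrix operations exercised above ---
dm <- xgb.DMatrix(test_data, label = test_label)  # construct from a matrix
setinfo(dm, 'weight', rep(1, nrow(dm)))           # attach per-row weights
dm10 <- slice(dm, 1:10)                           # keep the first 10 rows
dim(dm10)                                         # 10 rows, same columns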


@@ -7,50 +7,98 @@ require(vcd)
set.seed(1982)
data(Arthritis)
data(agaricus.train, package='xgboost')
df <- data.table(Arthritis, keep.rownames = F)
df[,AgeDiscret := as.factor(round(Age / 10,0))]
df[,AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]
df[,ID := NULL]
sparse_matrix <- sparse.model.matrix(Improved~.-1, data = df)
-output_vector <- df[,Y := 0][Improved == "Marked",Y := 1][,Y]
-bst.Tree <- xgboost(data = sparse_matrix, label = output_vector, max.depth = 9,
-eta = 1, nthread = 2, nround = 10, objective = "binary:logistic", booster = "gbtree")
-bst.GLM <- xgboost(data = sparse_matrix, label = output_vector,
-eta = 1, nthread = 2, nround = 10, objective = "binary:logistic", booster = "gblinear")
-feature.names <- agaricus.train$data@Dimnames[[2]]
+label <- df[, ifelse(Improved == "Marked", 1, 0)]
+bst.Tree <- xgboost(data = sparse_matrix, label = label, max_depth = 9,
+eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic", booster = "gbtree")
+bst.GLM <- xgboost(data = sparse_matrix, label = label,
+eta = 1, nthread = 2, nrounds = 10, objective = "binary:logistic", booster = "gblinear")
+feature.names <- colnames(sparse_matrix)
test_that("xgb.dump works", {
capture.output(print(xgb.dump(bst.Tree)))
capture.output(print(xgb.dump(bst.GLM)))
-expect_true(xgb.dump(bst.Tree, 'xgb.model.dump', with.stats = T))
+expect_length(xgb.dump(bst.Tree), 172)
+expect_true(xgb.dump(bst.Tree, 'xgb.model.dump', with_stats = T))
expect_true(file.exists('xgb.model.dump'))
expect_gt(file.size('xgb.model.dump'), 8000)
})
test_that("xgb.dump works for gblinear", {
expect_length(xgb.dump(bst.GLM), 14)
# also make sure that it works properly for a sparse model where some coefficients
# are 0 from setting large L1 regularization:
bst.GLM.sp <- xgboost(data = sparse_matrix, label = label, eta = 1, nthread = 2, nrounds = 1,
alpha=2, objective = "binary:logistic", booster = "gblinear")
d.sp <- xgb.dump(bst.GLM.sp)
expect_length(d.sp, 14)
expect_gt(sum(d.sp == "0"), 0)
})
test_that("xgb-attribute functionality", {
val <- "my attribute value"
list.val <- list(my_attr=val, a=123, b='ok')
list.ch <- list.val[order(names(list.val))]
list.ch <- lapply(list.ch, as.character)
# note: iter is 0-index in xgb attributes
list.default <- list(niter = "9")
list.ch <- c(list.ch, list.default)
# proper input:
expect_error(xgb.attr(bst.Tree, NULL))
expect_error(xgb.attr(val, val))
# set & get:
expect_null(xgb.attr(bst.Tree, "asdf"))
expect_equal(xgb.attributes(bst.Tree), list.default)
xgb.attr(bst.Tree, "my_attr") <- val
expect_equal(xgb.attr(bst.Tree, "my_attr"), val)
xgb.attributes(bst.Tree) <- list.val
expect_equal(xgb.attributes(bst.Tree), list.ch)
# serializing:
xgb.save(bst.Tree, 'xgb.model')
bst <- xgb.load('xgb.model')
expect_equal(xgb.attr(bst, "my_attr"), val)
expect_equal(xgb.attributes(bst), list.ch)
# deletion:
xgb.attr(bst, "my_attr") <- NULL
expect_null(xgb.attr(bst, "my_attr"))
expect_equal(xgb.attributes(bst), list.ch[c("a", "b", "niter")])
xgb.attributes(bst) <- list(a=NULL, b=NULL)
expect_equal(xgb.attributes(bst), list.default)
xgb.attributes(bst) <- list(niter=NULL)
expect_null(xgb.attributes(bst))
})
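# --- Editor's sketch (not part of the diff): xgb.attr stores string metadata
# inside the model, so it survives save/load round trips, as asserted above;
# "data_hash" is an illustrative attribute name ---
xgb.attr(bst.Tree, "data_hash") <- "abc123"
xgb.save(bst.Tree, 'xgb.model')
restored <- xgb.load('xgb.model')
xgb.attr(restored, "data_hash")  # "abc123"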
test_that("xgb.model.dt.tree works with and without feature names", {
names.dt.trees <- c("ID", "Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover",
"Tree", "Yes.Feature", "Yes.Cover", "Yes.Quality", "No.Feature", "No.Cover", "No.Quality")
names.dt.trees <- c("Tree", "Node", "ID", "Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover")
dt.tree <- xgb.model.dt.tree(feature_names = feature.names, model = bst.Tree)
expect_equal(names.dt.trees, names(dt.tree))
-expect_equal(dim(dt.tree), c(162, 15))
-xgb.model.dt.tree(model = bst.Tree)
+expect_equal(dim(dt.tree), c(162, 10))
+expect_output(str(xgb.model.dt.tree(model = bst.Tree)), 'Feature.*\\"3\\"')
})
test_that("xgb.importance works with and without feature names", {
-importance.Tree <- xgb.importance(feature_names = sparse_matrix@Dimnames[[2]], model = bst.Tree)
+importance.Tree <- xgb.importance(feature_names = feature.names, model = bst.Tree)
expect_equal(dim(importance.Tree), c(7, 4))
expect_equal(colnames(importance.Tree), c("Feature", "Gain", "Cover", "Frequency"))
-xgb.importance(model = bst.Tree)
-xgb.plot.importance(importance_matrix = importance.Tree)
+expect_output(str(xgb.importance(model = bst.Tree)), 'Feature.*\\"3\\"')
+imp2plot <- xgb.plot.importance(importance_matrix = importance.Tree)
+expect_equal(colnames(imp2plot), c("Feature", "Gain", "Cover", "Frequency", "Importance"))
+xgb.ggplot.importance(importance_matrix = importance.Tree)
})
test_that("xgb.importance works with GLM model", {
-importance.GLM <- xgb.importance(feature_names = sparse_matrix@Dimnames[[2]], model = bst.GLM)
+importance.GLM <- xgb.importance(feature_names = feature.names, model = bst.GLM)
expect_equal(dim(importance.GLM), c(10, 2))
expect_equal(colnames(importance.GLM), c("Feature", "Weight"))
xgb.importance(model = bst.GLM)
-xgb.plot.importance(importance.GLM)
+imp2plot <- xgb.plot.importance(importance.GLM)
+expect_equal(colnames(imp2plot), c("Feature", "Weight", "Importance"))
+xgb.ggplot.importance(importance.GLM)
})
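# --- Editor's sketch (not part of the diff): the importance workflow these
# tests cover, for either the base-graphics or the ggplot2 backend ---
imp <- xgb.importance(feature_names = feature.names, model = bst.Tree)
xgb.plot.importance(importance_matrix = imp)    # base R barplot
xgb.ggplot.importance(importance_matrix = imp)  # ggplot2 variant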
test_that("xgb.plot.tree works with and without feature names", {
@@ -59,10 +107,30 @@ test_that("xgb.plot.tree works with and without feature names", {
})
test_that("xgb.plot.multi.trees works with and without feature names", {
-xgb.plot.multi.trees(model = bst.Tree, feature_names = feature.names, features.keep = 3)
-xgb.plot.multi.trees(model = bst.Tree, features.keep = 3)
+xgb.plot.multi.trees(model = bst.Tree, feature_names = feature.names, features_keep = 3)
+xgb.plot.multi.trees(model = bst.Tree, features_keep = 3)
})
test_that("xgb.plot.deepness works", {
-xgb.plot.deepness(model = bst.Tree)
+d2p <- xgb.plot.deepness(model = bst.Tree)
+expect_equal(colnames(d2p), c("ID", "Tree", "Depth", "Cover", "Weight"))
+xgb.plot.deepness(model = bst.Tree, which = "med.depth")
+xgb.ggplot.deepness(model = bst.Tree)
})
test_that("check.deprecation works", {
ttt <- function(a = NULL, DUMMY=NULL, ...) {
check.deprecation(...)
as.list(environment())
}
res <- ttt(a = 1, DUMMY = 2, z = 3)
expect_equal(res, list(a = 1, DUMMY = 2))
expect_warning(
res <- ttt(a = 1, dummy = 22, z = 3)
, "\'dummy\' is deprecated")
expect_equal(res, list(a = 1, DUMMY = 22))
expect_warning(
res <- ttt(a = 1, dumm = 22, z = 3)
, "\'dumm\' was partially matched to \'dummy\'")
expect_equal(res, list(a = 1, DUMMY = 22))
})


@@ -9,24 +9,22 @@ dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
bst <- xgboost(data = dtrain,
-max.depth = 2,
+max_depth = 2,
eta = 1,
-nround = 10,
+nrounds = 10,
nthread = 1,
verbose = 0,
objective = "binary:logistic")
test_that("call is exposed to R", {
-model_call <- attr(bst, "call")
-expect_is(model_call, "call")
+expect_false(is.null(bst$call))
+expect_is(bst$call, "call")
})
test_that("params is exposed to R", {
-model_params <- attr(bst, "params")
+model_params <- bst$params
expect_is(model_params, "list")
expect_equal(model_params$eta, 1)
-expect_equal(model_params$max.depth, 2)
+expect_equal(model_params$max_depth, 2)
expect_equal(model_params$objective, "binary:logistic")
})
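# --- Editor's note (not part of the diff): in 0.6 the call and the training
# parameters ride along on the booster as list elements rather than as R
# attributes, which is what the updated assertions check ---
bst$call        # the original xgboost() call
bst$params$eta  # individual training parameters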

Some files were not shown because too many files have changed in this diff.