Compare commits
887 Commits
.gitignore (vendored): 16 changes

@@ -2,7 +2,7 @@
 *.slo
 *.lo
 *.o
+*.page
 # Compiled Dynamic libraries
 *.so
 *.dylib
@@ -44,3 +44,17 @@ Debug
 *dump
 *save
 *csv
+.Rproj.user
+*.cpage.col
+*.cpage
+*.Rproj
+xgboost
+xgboost.mpi
+xgboost.mock
+train*
+rabit
+#.Rbuildignore
+R-package.Rproj
+*.cache*
+R-package/inst
+R-package/src
CHANGES.md: 14 changes

@@ -20,3 +20,17 @@ xgboost-0.3
 * Linear booster is now parallelized, using parallel coordinated descent.
 * Add [Code Guide](src/README.md) for customizing objective function and evaluation
 * Add R module
+
+xgboost-0.4
+=====
+* Distributed version of xgboost that runs on YARN, scales to billions of examples
+* Direct save/load data and model from/to S3 and HDFS
+* Feature importance visualization in R module, by Michael Benesty
+* Predict leaf index
+* Poisson regression for counts data
+* Early stopping option in training
+* Native save load support in R and python
+  - xgboost models now can be saved using save/load in R
+  - xgboost python model is now pickable
+* sklearn wrapper is supported in python module
+* Experimental External memory version
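The R-side items in the 0.4 notes can be exercised directly. A minimal sketch on the demo data that ships with the package, using only calls exported by this release (the file name is illustrative):

```r
library(xgboost)

# Train a tiny model on the bundled demo data.
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

# "Native save load support in R": the model now survives R's own
# save()/load(), because the raw model bytes travel inside the R object
# (see the xgb.Booster.handle changes later in this diff).
save(bst, file = "bst.RData")   # illustrative file name
load("bst.RData")
pred <- predict(bst, train$data)
```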
Makefile: 102 changes

@@ -1,8 +1,13 @@
 export CC = gcc
 export CXX = g++
+export MPICXX = mpicxx
 export LDFLAGS= -pthread -lm
-export CFLAGS = -Wall -O3 -msse2 -Wno-unknown-pragmas -fPIC -pedantic
+export CFLAGS = -Wall -O3 -msse2 -Wno-unknown-pragmas -fPIC
+
+ifeq ($(OS), Windows_NT)
+	export CXX = g++ -m64
+	export CC = gcc -m64
+endif
 
 ifeq ($(no_omp),1)
 	CFLAGS += -DDISABLE_OPENMP
@@ -10,56 +15,117 @@ else
 	CFLAGS += -fopenmp
 endif
 
+# by default use c++11
+ifeq ($(cxx11),1)
+	CFLAGS += -std=c++11
+else
+endif
+
+# handling dmlc
+ifdef dmlc
+	ifndef config
+		ifneq ("$(wildcard $(dmlc)/config.mk)","")
+			config = $(dmlc)/config.mk
+		else
+			config = $(dmlc)/make/config.mk
+		endif
+	endif
+	include $(config)
+	include $(dmlc)/make/dmlc.mk
+	LDFLAGS+= $(DMLC_LDFLAGS)
+	LIBDMLC=$(dmlc)/libdmlc.a
+else
+	LIBDMLC=dmlc_simple.o
+endif
+
+ifeq ($(OS), Windows_NT)
+	LIBRABIT = subtree/rabit/lib/librabit_empty.a
+	SLIB = wrapper/xgboost_wrapper.dll
+else
+	LIBRABIT = subtree/rabit/lib/librabit.a
+	SLIB = wrapper/libxgboostwrapper.so
+endif
+
 # specify tensor path
 BIN = xgboost
-OBJ = updater.o gbm.o io.o
-SLIB = wrapper/libxgboostwrapper.so
+MOCKBIN = xgboost.mock
+OBJ = updater.o gbm.o io.o main.o dmlc_simple.o
+MPIBIN =
+TARGET = $(BIN) $(OBJ) $(SLIB)
 
-.PHONY: clean all python Rpack
+.PHONY: clean all mpi python Rpack
 
 all: $(BIN) $(OBJ) $(SLIB)
+mpi: $(MPIBIN)
 
 python: wrapper/libxgboostwrapper.so
 # now the wrapper takes in two files. io and wrapper part
-wrapper/libxgboostwrapper.so: wrapper/xgboost_wrapper.cpp $(OBJ)
-updater.o: src/tree/updater.cpp src/tree/*.hpp src/*.h src/tree/*.h
+updater.o: src/tree/updater.cpp src/tree/*.hpp src/*.h src/tree/*.h src/utils/*.h
+dmlc_simple.o: src/io/dmlc_simple.cpp src/utils/*.h
 gbm.o: src/gbm/gbm.cpp src/gbm/*.hpp src/gbm/*.h
 io.o: src/io/io.cpp src/io/*.hpp src/utils/*.h src/learner/dmatrix.h src/*.h
-xgboost: src/xgboost_main.cpp src/utils/*.h src/*.h src/learner/*.hpp src/learner/*.h $(OBJ)
-wrapper/libxgboostwrapper.so: wrapper/xgboost_wrapper.cpp src/utils/*.h src/*.h src/learner/*.hpp src/learner/*.h $(OBJ)
+main.o: src/xgboost_main.cpp src/utils/*.h src/*.h src/learner/*.hpp src/learner/*.h
+xgboost: updater.o gbm.o io.o main.o $(LIBRABIT) $(LIBDMLC)
+wrapper/xgboost_wrapper.dll wrapper/libxgboostwrapper.so: wrapper/xgboost_wrapper.cpp src/utils/*.h src/*.h src/learner/*.hpp src/learner/*.h updater.o gbm.o io.o $(LIBRABIT) $(LIBDMLC)
+
+# dependency on rabit
+subtree/rabit/lib/librabit.a: subtree/rabit/src/engine.cc
+	+ cd subtree/rabit;make lib/librabit.a; cd ../..
+subtree/rabit/lib/librabit_empty.a: subtree/rabit/src/engine_empty.cc
+	+ cd subtree/rabit;make lib/librabit_empty.a; cd ../..
+subtree/rabit/lib/librabit_mock.a: subtree/rabit/src/engine_mock.cc
+	+ cd subtree/rabit;make lib/librabit_mock.a; cd ../..
+subtree/rabit/lib/librabit_mpi.a: subtree/rabit/src/engine_mpi.cc
+	+ cd subtree/rabit;make lib/librabit_mpi.a; cd ../..
 
 $(BIN) :
-	$(CXX) $(CFLAGS) $(LDFLAGS) -o $@ $(filter %.cpp %.o %.c, $^)
+	$(CXX) $(CFLAGS) -o $@ $(filter %.cpp %.o %.c %.cc %.a, $^) $(LDFLAGS)
+
+$(MOCKBIN) :
+	$(CXX) $(CFLAGS) -o $@ $(filter %.cpp %.o %.c %.cc %.a, $^) $(LDFLAGS)
 
 $(SLIB) :
-	$(CXX) $(CFLAGS) -fPIC $(LDFLAGS) -shared -o $@ $(filter %.cpp %.o %.c, $^)
+	$(CXX) $(CFLAGS) -fPIC -shared -o $@ $(filter %.cpp %.o %.c %.a %.cc, $^) $(LDFLAGS) $(DLLFLAGS)
 
 $(OBJ) :
-	$(CXX) -c $(CFLAGS) -o $@ $(firstword $(filter %.cpp %.c, $^) )
+	$(CXX) -c $(CFLAGS) -o $@ $(firstword $(filter %.cpp %.c %.cc, $^) )
+
+$(MPIOBJ) :
+	$(MPICXX) -c $(CFLAGS) -o $@ $(firstword $(filter %.cpp %.c, $^) )
+
+$(MPIBIN) :
+	$(MPICXX) $(CFLAGS) -o $@ $(filter %.cpp %.o %.c %.cc %.a, $^) $(LDFLAGS)
 
 install:
 	cp -f -r $(BIN) $(INSTALL_PATH)
 
 Rpack:
 	make clean
+	cd subtree/rabit;make clean;cd ..
 	rm -rf xgboost xgboost*.tar.gz
 	cp -r R-package xgboost
-	rm -rf xgboost/inst/examples/*.buffer
-	rm -rf xgboost/inst/examples/*.model
-	rm -rf xgboost/inst/examples/dump*
 	rm -rf xgboost/src/*.o xgboost/src/*.so xgboost/src/*.dll
+	rm -rf xgboost/src/*/*.o
+	rm -rf subtree/rabit/src/*.o
 	rm -rf xgboost/demo/*.model xgboost/demo/*.buffer xgboost/demo/*.txt
 	rm -rf xgboost/demo/runall.R
 	cp -r src xgboost/src/src
+	mkdir xgboost/src/subtree
+	mkdir xgboost/src/subtree/rabit
+	cp -r subtree/rabit/include xgboost/src/subtree/rabit/include
+	cp -r subtree/rabit/src xgboost/src/subtree/rabit/src
+	rm -rf xgboost/src/subtree/rabit/src/*.o
 	mkdir xgboost/src/wrapper
 	cp wrapper/xgboost_wrapper.h xgboost/src/wrapper
 	cp wrapper/xgboost_wrapper.cpp xgboost/src/wrapper
 	cp ./LICENSE xgboost
 	cat R-package/src/Makevars|sed '2s/.*/PKGROOT=./' > xgboost/src/Makevars
-	cat R-package/src/Makevars.win|sed '2s/.*/PKGROOT=./' > xgboost/src/Makevars.win
+	cp xgboost/src/Makevars xgboost/src/Makevars.win
+	# R CMD build --no-build-vignettes xgboost
 	R CMD build xgboost
 	rm -rf xgboost
 	R CMD check --as-cran xgboost*.tar.gz
 
 clean:
-	$(RM) $(OBJ) $(BIN) $(SLIB) *.o */*.o */*/*.o *~ */*~ */*/*~
+	$(RM) -rf $(OBJ) $(BIN) $(MPIBIN) $(MPIOBJ) $(SLIB) *.o */*.o */*/*.o *~ */*~ */*/*~
+	cd subtree/rabit; make clean; cd ..
R-package/.Rbuildignore (new file): 5 changes

@@ -0,0 +1,5 @@
+\.o$
+\.so$
+\.dll$
+^.*\.Rproj$
+^\.Rproj\.user$
R-package/DESCRIPTION

@@ -1,24 +1,34 @@
 Package: xgboost
 Type: Package
 Title: eXtreme Gradient Boosting
-Version: 0.3-2
-Date: 2014-08-23
-Author: Tianqi Chen <tianqi.tchen@gmail.com>, Tong He <hetong007@gmail.com>
+Version: 0.4-0
+Date: 2015-05-11
+Author: Tianqi Chen <tianqi.tchen@gmail.com>, Tong He <hetong007@gmail.com>, Michael Benesty <michael@benesty.fr>
 Maintainer: Tong He <hetong007@gmail.com>
-Description: This package is a R wrapper of xgboost, which is short for eXtreme
-    Gradient Boosting. It is an efficient and scalable implementation of
-    gradient boosting framework. The package includes efficient linear model
-    solver and tree learning algorithms. The package can automatically do
-    parallel computation with OpenMP, and it can be more than 10 times faster
+Description: Xgboost is short for eXtreme Gradient Boosting, which is an
+    efficient and scalable implementation of gradient boosting framework.
+    This package is an R wrapper of xgboost. The package includes efficient
+    linear model solver and tree learning algorithms. The package can automatically
+    do parallel computation with OpenMP, and it can be more than 10 times faster
     than existing gradient boosting packages such as gbm. It supports various
     objective functions, including regression, classification and ranking. The
     package is made to be extensible, so that users are also allowed to define
     their own objectives easily.
 License: Apache License (== 2.0) | file LICENSE
-URL: https://github.com/tqchen/xgboost
-BugReports: https://github.com/tqchen/xgboost/issues
+URL: https://github.com/dmlc/xgboost
+BugReports: https://github.com/dmlc/xgboost/issues
+VignetteBuilder: knitr
+Suggests:
+    knitr,
+    ggplot2 (>= 1.0.0),
+    DiagrammeR (>= 0.6),
+    Ckmeans.1d.dp (>= 3.3.1),
+    vcd (>= 1.3)
 Depends:
     R (>= 2.10)
 Imports:
     Matrix (>= 1.1-0),
-    methods
+    methods,
+    data.table (>= 1.9.4),
+    magrittr (>= 1.5),
+    stringr (>= 0.6.2)
R-package/NAMESPACE

@@ -1,4 +1,4 @@
-# Generated by roxygen2 (4.0.1): do not edit by hand
+# Generated by roxygen2 (4.1.1): do not edit by hand
 
 export(getinfo)
 export(setinfo)
@@ -7,11 +7,37 @@ export(xgb.DMatrix)
 export(xgb.DMatrix.save)
 export(xgb.cv)
 export(xgb.dump)
+export(xgb.importance)
 export(xgb.load)
+export(xgb.model.dt.tree)
+export(xgb.plot.importance)
+export(xgb.plot.tree)
 export(xgb.save)
+export(xgb.save.raw)
 export(xgb.train)
 export(xgboost)
+exportMethods(nrow)
 exportMethods(predict)
 import(methods)
 importClassesFrom(Matrix,dgCMatrix)
 importClassesFrom(Matrix,dgeMatrix)
+importFrom(Matrix,cBind)
+importFrom(Matrix,colSums)
+importFrom(Matrix,sparseVector)
+importFrom(data.table,":=")
+importFrom(data.table,as.data.table)
+importFrom(data.table,copy)
+importFrom(data.table,data.table)
+importFrom(data.table,fread)
+importFrom(data.table,rbindlist)
+importFrom(data.table,set)
+importFrom(data.table,setnames)
+importFrom(magrittr,"%>%")
+importFrom(magrittr,add)
+importFrom(magrittr,not)
+importFrom(stringr,str_extract)
+importFrom(stringr,str_extract_all)
+importFrom(stringr,str_match)
+importFrom(stringr,str_replace)
+importFrom(stringr,str_split)
+importFrom(stringr,str_trim)
@@ -4,6 +4,15 @@ setClass('xgb.DMatrix')
 #'
 #' Get information of an xgb.DMatrix object
 #'
+#' The information can be one of the following:
+#'
+#' \itemize{
+#'     \item \code{label}: label Xgboost learn from ;
+#'     \item \code{weight}: to do a weight rescale ;
+#'     \item \code{base_margin}: base margin is the base prediction Xgboost will boost from ;
+#'     \item \code{nrow}: number of rows of the \code{xgb.DMatrix}.
+#' }
+#'
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' train <- agaricus.train
@@ -19,7 +28,9 @@ getinfo <- function(object, ...){
   UseMethod("getinfo")
 }
 
-#' @param object Object of class "xgb.DMatrix"
+
+#' @param object Object of class \code{xgb.DMatrix}
 #' @param name the name of the field to get
 #' @param ... other parameters
 #' @rdname getinfo
@@ -32,10 +43,15 @@ setMethod("getinfo", signature = "xgb.DMatrix",
             if (class(object) != "xgb.DMatrix") {
               stop("xgb.setinfo: first argument dtrain must be xgb.DMatrix")
             }
-            if (name != "label" && name != "weight" && name != "base_margin") {
+            if (name != "label" && name != "weight" &&
+                name != "base_margin" && name != "nrow") {
               stop(paste("xgb.getinfo: unknown info name", name))
             }
-            ret <- .Call("XGDMatrixGetInfo_R", object, name, PACKAGE = "xgboost")
+            if (name != "nrow"){
+              ret <- .Call("XGDMatrixGetInfo_R", object, name, PACKAGE = "xgboost")
+            } else {
+              ret <- xgb.numrow(object)
+            }
             return(ret)
           })
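A short sketch of the two row-count entry points this hunk and the new file below wire up, on the demo data:

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

# "nrow" is now a valid info name; it is routed to xgb.numrow() instead
# of the C-level XGDMatrixGetInfo_R getter.
getinfo(dtrain, "nrow")
nrow(dtrain)   # equivalent, via the S4 method added in nrow.xgb.DMatrix.R
```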
R-package/R/nrow.xgb.DMatrix.R (new file): 19 changes

@@ -0,0 +1,19 @@
+setGeneric("nrow")
+
+#' @title Number of xgb.DMatrix rows
+#' @description \code{nrow} return the number of rows present in the \code{xgb.DMatrix}.
+#' @param x Object of class \code{xgb.DMatrix}
+#'
+#' @examples
+#' data(agaricus.train, package='xgboost')
+#' train <- agaricus.train
+#' dtrain <- xgb.DMatrix(train$data, label=train$label)
+#' stopifnot(nrow(dtrain) == nrow(train$data))
+#'
+#' @export
+setMethod("nrow",
+          signature = "xgb.DMatrix",
+          definition = function(x) {
+            xgb.numrow(x)
+          }
+)
@@ -1,4 +1,7 @@
-setClass("xgb.Booster")
+setClass("xgb.Booster.handle")
+setClass("xgb.Booster",
+         slots = c(handle = "xgb.Booster.handle",
+                   raw = "raw"))
 
 #' Predict method for eXtreme Gradient Boosting model
 #'
@@ -7,6 +10,8 @@ setClass("xgb.Booster")
 #' @param object Object of class "xgb.Boost"
 #' @param newdata takes \code{matrix}, \code{dgCMatrix}, local data file or
 #'   \code{xgb.DMatrix}.
+#' @param missing Missing is only used when input is dense matrix, pick a float
+#'     value that represents missing value. Sometime a data use 0 or other extreme value to represents missing values.
 #' @param outputmargin whether the prediction should be shown in the original
 #'   value of sum of functions, when outputmargin=TRUE, the prediction is
 #'   untransformed margin value. In logistic regression, outputmargin=T will
@@ -14,20 +19,31 @@ setClass("xgb.Booster")
 #' @param ntreelimit limit number of trees used in prediction, this parameter is
 #'   only valid for gbtree, but not for gblinear. set it to be value bigger
 #'   than 0. It will use all trees by default.
+#' @param predleaf whether predict leaf index instead. If set to TRUE, the output will be a matrix object.
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' data(agaricus.test, package='xgboost')
 #' train <- agaricus.train
 #' test <- agaricus.test
 #' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
-#'                eta = 1, nround = 2,objective = "binary:logistic")
+#'                eta = 1, nthread = 2, nround = 2,objective = "binary:logistic")
 #' pred <- predict(bst, test$data)
 #' @export
 #'
 setMethod("predict", signature = "xgb.Booster",
-          definition = function(object, newdata, outputmargin = FALSE, ntreelimit = NULL) {
+          definition = function(object, newdata, missing = NULL,
+                                outputmargin = FALSE, ntreelimit = NULL, predleaf = FALSE) {
+            if (class(object) != "xgb.Booster"){
+              stop("predict: model in prediction must be of class xgb.Booster")
+            } else {
+              object <- xgb.Booster.check(object, saveraw = FALSE)
+            }
             if (class(newdata) != "xgb.DMatrix") {
-              newdata <- xgb.DMatrix(newdata)
+              if (is.null(missing)) {
+                newdata <- xgb.DMatrix(newdata)
+              } else {
+                newdata <- xgb.DMatrix(newdata, missing = missing)
+              }
             }
             if (is.null(ntreelimit)) {
               ntreelimit <- 0
@@ -36,7 +52,24 @@ setMethod("predict", signature = "xgb.Booster",
                 stop("predict: ntreelimit must be equal to or greater than 1")
               }
             }
-            ret <- .Call("XGBoosterPredict_R", object, newdata, as.integer(outputmargin), as.integer(ntreelimit), PACKAGE = "xgboost")
+            option = 0
+            if (outputmargin) {
+              option <- option + 1
+            }
+            if (predleaf) {
+              option <- option + 2
+            }
+            ret <- .Call("XGBoosterPredict_R", object$handle, newdata, as.integer(option),
+                         as.integer(ntreelimit), PACKAGE = "xgboost")
+            if (predleaf){
+              len <- getinfo(newdata, "nrow")
+              if (length(ret) == len){
+                ret <- matrix(ret,ncol = 1)
+              } else {
+                ret <- matrix(ret, ncol = len)
+                ret <- t(ret)
+              }
+            }
             return(ret)
           })
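The reshaping at the end of the new method means `predleaf = TRUE` returns one row per observation and one column per tree, each entry being a leaf index. A minimal sketch, assuming the demo data:

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               max.depth = 2, eta = 1, nround = 2,
               objective = "binary:logistic")

# Matrix of leaf indices: one row per test case, one column per tree.
leaves <- predict(bst, agaricus.test$data, predleaf = TRUE)
dim(leaves)
```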
R-package/R/predict.xgb.Booster.handle.R (new file): 19 changes

@@ -0,0 +1,19 @@
+#' Predict method for eXtreme Gradient Boosting model handle
+#'
+#' Predicted values based on xgb.Booster.handle object.
+#'
+#' @param object Object of class "xgb.Boost.handle"
+#' @param ... Parameters pass to \code{predict.xgb.Booster}
+#'
+setMethod("predict", signature = "xgb.Booster.handle",
+          definition = function(object, ...) {
+            if (class(object) != "xgb.Booster.handle"){
+              stop("predict: model in prediction must be of class xgb.Booster.handle")
+            }
+
+            bst <- xgb.handleToBooster(object)
+
+            ret = predict(bst, ...)
+            return(ret)
+          })
+
@@ -2,6 +2,15 @@
 #'
 #' Set information of an xgb.DMatrix object
 #'
+#' It can be one of the following:
+#'
+#' \itemize{
+#'     \item \code{label}: label Xgboost learn from ;
+#'     \item \code{weight}: to do a weight rescale ;
+#'     \item \code{base_margin}: base margin is the base prediction Xgboost will boost from ;
+#'     \item \code{group}.
+#' }
+#'
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' train <- agaricus.train
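The fields documented above are set through `setinfo()`; a sketch on the demo data (the all-ones weight vector is illustrative):

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

# One weight per row; the length checks added to xgb.setinfo() later in
# this diff reject vectors whose length does not match the row count.
w <- rep(1, nrow(dtrain))
setinfo(dtrain, "weight", w)
head(getinfo(dtrain, "weight"))
```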
@@ -28,6 +28,18 @@ setMethod("slice", signature = "xgb.DMatrix",
             if (class(object) != "xgb.DMatrix") {
               stop("slice: first argument dtrain must be xgb.DMatrix")
             }
-            ret <- .Call("XGDMatrixSliceDMatrix_R", object, idxset, PACKAGE = "xgboost")
+            ret <- .Call("XGDMatrixSliceDMatrix_R", object, idxset,
+                         PACKAGE = "xgboost")
+
+            attr_list <- attributes(object)
+            nr <- xgb.numrow(object)
+            len <- sapply(attr_list,length)
+            ind <- which(len==nr)
+            if (length(ind)>0) {
+              nms <- names(attr_list)[ind]
+              for (i in 1:length(ind)) {
+                attr(ret,nms[i]) <- attr(object,nms[i])[idxset]
+              }
+            }
             return(structure(ret, class = "xgb.DMatrix"))
           })
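The new loop copies any attribute whose length equals the row count onto the sliced result. A sketch with an illustrative attribute name (not part of the diff):

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

attr(dtrain, "rowid") <- seq_len(nrow(dtrain))   # "rowid" is illustrative
dsub <- slice(dtrain, 1:100)
attr(dsub, "rowid")   # subset along with the rows: 1, 2, ..., 100
```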
@@ -15,21 +15,29 @@ xgb.setinfo <- function(dmat, name, info) {
     stop("xgb.setinfo: first argument dtrain must be xgb.DMatrix")
   }
   if (name == "label") {
+    if (length(info)!=xgb.numrow(dmat))
+      stop("The length of labels must equal to the number of rows in the input data")
     .Call("XGDMatrixSetInfo_R", dmat, name, as.numeric(info),
           PACKAGE = "xgboost")
     return(TRUE)
   }
   if (name == "weight") {
+    if (length(info)!=xgb.numrow(dmat))
+      stop("The length of weights must equal to the number of rows in the input data")
     .Call("XGDMatrixSetInfo_R", dmat, name, as.numeric(info),
           PACKAGE = "xgboost")
     return(TRUE)
   }
   if (name == "base_margin") {
+    # if (length(info)!=xgb.numrow(dmat))
+    #   stop("The length of base margin must equal to the number of rows in the input data")
     .Call("XGDMatrixSetInfo_R", dmat, name, as.numeric(info),
           PACKAGE = "xgboost")
     return(TRUE)
   }
   if (name == "group") {
+    if (sum(info)!=xgb.numrow(dmat))
+      stop("The sum of groups must equal to the number of rows in the input data")
     .Call("XGDMatrixSetInfo_R", dmat, name, as.integer(info),
           PACKAGE = "xgboost")
     return(TRUE)
@@ -57,24 +65,55 @@ xgb.Booster <- function(params = list(), cachelist = list(), modelfile = NULL) {
     }
   }
   if (!is.null(modelfile)) {
-    if (typeof(modelfile) != "character") {
-      stop("xgb.Booster: modelfile must be character")
+    if (typeof(modelfile) == "character") {
+      .Call("XGBoosterLoadModel_R", handle, modelfile, PACKAGE = "xgboost")
+    } else if (typeof(modelfile) == "raw") {
+      .Call("XGBoosterLoadModelFromRaw_R", handle, modelfile, PACKAGE = "xgboost")
+    } else {
+      stop("xgb.Booster: modelfile must be character or raw vector")
     }
-    .Call("XGBoosterLoadModel_R", handle, modelfile, PACKAGE = "xgboost")
   }
-  return(structure(handle, class = "xgb.Booster"))
+  return(structure(handle, class = "xgb.Booster.handle"))
+}
+
+# convert xgb.Booster.handle to xgb.Booster
+xgb.handleToBooster <- function(handle, raw = NULL)
+{
+  bst <- list(handle = handle, raw = raw)
+  class(bst) <- "xgb.Booster"
+  return(bst)
+}
+
+# Check whether an xgb.Booster object is complete
+xgb.Booster.check <- function(bst, saveraw = TRUE)
+{
+  isnull <- is.null(bst$handle)
+  if (!isnull) {
+    isnull <- .Call("XGCheckNullPtr_R", bst$handle, PACKAGE="xgboost")
+  }
+  if (isnull) {
+    bst$handle <- xgb.Booster(modelfile = bst$raw)
+  } else {
+    if (is.null(bst$raw) && saveraw)
+      bst$raw <- xgb.save.raw(bst$handle)
+  }
+  return(bst)
 }
 
 ## ----the following are low level iteratively function, not needed if
 ## you do not want to use them ---------------------------------------
 # get dmatrix from data, label
-xgb.get.DMatrix <- function(data, label = NULL) {
+xgb.get.DMatrix <- function(data, label = NULL, missing = NULL) {
   inClass <- class(data)
   if (inClass == "dgCMatrix" || inClass == "matrix") {
     if (is.null(label)) {
       stop("xgboost: need label when data is a matrix")
     }
-    dtrain <- xgb.DMatrix(data, label = label)
+    if (is.null(missing)){
+      dtrain <- xgb.DMatrix(data, label = label)
+    } else {
+      dtrain <- xgb.DMatrix(data, label = label, missing = missing)
+    }
   } else {
     if (!is.null(label)) {
       warning("xgboost: label will be ignored.")
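The handle/raw split is what makes boosters robust to R serialization: `xgb.Booster.check()` rebuilds a stale handle from `bst$raw` on demand, so a loaded object works again on first use. A sketch of the resulting object shape (assuming training populates the raw slot via `xgb.Booster.check()`):

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               max.depth = 2, eta = 1, nround = 2,
               objective = "binary:logistic")

class(bst$handle)   # "xgb.Booster.handle", the live C++ pointer
class(bst$raw)      # "raw", the serialized model kept alongside it
# After save()/load() in a fresh session the pointer is dead; the first
# predict() call notices via XGCheckNullPtr_R and reloads from bst$raw.
```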
@@ -95,8 +134,8 @@ xgb.numrow <- function(dmat) {
 }
 # iteratively update booster with customized statistics
 xgb.iter.boost <- function(booster, dtrain, gpair) {
-  if (class(booster) != "xgb.Booster") {
-    stop("xgb.iter.update: first argument must be type xgb.Booster")
+  if (class(booster) != "xgb.Booster.handle") {
+    stop("xgb.iter.update: first argument must be type xgb.Booster.handle")
   }
   if (class(dtrain) != "xgb.DMatrix") {
     stop("xgb.iter.update: second argument must be type xgb.DMatrix")
@@ -108,8 +147,8 @@ xgb.iter.boost <- function(booster, dtrain, gpair) {
 
 # iteratively update booster with dtrain
 xgb.iter.update <- function(booster, dtrain, iter, obj = NULL) {
-  if (class(booster) != "xgb.Booster") {
-    stop("xgb.iter.update: first argument must be type xgb.Booster")
+  if (class(booster) != "xgb.Booster.handle") {
+    stop("xgb.iter.update: first argument must be type xgb.Booster.handle")
   }
   if (class(dtrain) != "xgb.DMatrix") {
     stop("xgb.iter.update: second argument must be type xgb.DMatrix")
@@ -127,8 +166,8 @@ xgb.iter.update <- function(booster, dtrain, iter, obj = NULL) {
 }
 
 # iteratively evaluate one iteration
-xgb.iter.eval <- function(booster, watchlist, iter, feval = NULL) {
-  if (class(booster) != "xgb.Booster") {
+xgb.iter.eval <- function(booster, watchlist, iter, feval = NULL, prediction = FALSE) {
+  if (class(booster) != "xgb.Booster.handle") {
     stop("xgb.eval: first argument must be type xgb.Booster")
   }
   if (typeof(watchlist) != "list") {
@@ -158,41 +197,82 @@ xgb.iter.eval <- function(booster, watchlist, iter, feval = NULL) {
         if (length(names(w)) == 0) {
           stop("xgb.eval: name tag must be presented for every elements in watchlist")
         }
-        ret <- feval(predict(booster, w[[1]]), w[[1]])
+        preds <- predict(booster, w[[1]])
+        ret <- feval(preds, w[[1]])
         msg <- paste(msg, "\t", names(w), "-", ret$metric, ":", ret$value, sep="")
       }
     }
   } else {
     msg <- ""
   }
+  if (prediction){
+    preds <- predict(booster,watchlist[[2]])
+    return(list(msg,preds))
+  }
   return(msg)
 }
 
 #------------------------------------------
 # helper functions for cross validation
 #
-xgb.cv.mknfold <- function(dall, nfold, param) {
-  randidx <- sample(1 : xgb.numrow(dall))
-  kstep <- length(randidx) / nfold
-  idset <- list()
-  for (i in 1:nfold) {
-    idset[[i]] <- randidx[ ((i-1) * kstep + 1) : min(i * kstep, length(randidx)) ]
+xgb.cv.mknfold <- function(dall, nfold, param, stratified, folds) {
+  if (nfold <= 1) {
+    stop("nfold must be bigger than 1")
+  }
+  if(is.null(folds)) {
+    if (exists('objective', where=param) && strtrim(param[['objective']], 5) == 'rank:') {
+      stop("\tAutomatic creation of CV-folds is not implemented for ranking!\n",
+           "\tConsider providing pre-computed CV-folds through the folds parameter.")
+    }
+    y <- getinfo(dall, 'label')
+    randidx <- sample(1 : xgb.numrow(dall))
+    if (stratified & length(y) == length(randidx)) {
+      y <- y[randidx]
+      #
+      # WARNING: some heuristic logic is employed to identify classification setting!
+      #
+      # For classification, need to convert y labels to factor before making the folds,
+      # and then do stratification by factor levels.
+      # For regression, leave y numeric and do stratification by quantiles.
+      if (exists('objective', where=param)) {
+        # If 'objective' provided in params, assume that y is a classification label
+        # unless objective is reg:linear
+        if (param[['objective']] != 'reg:linear') y <- factor(y)
+      } else {
+        # If no 'objective' given in params, it means that user either wants to use
+        # the default 'reg:linear' objective or has provided a custom obj function.
+        # Here, assume classification setting when y has 5 or less unique values:
+        if (length(unique(y)) <= 5) y <- factor(y)
+      }
+      folds <- xgb.createFolds(y, nfold)
+    } else {
+      # make simple non-stratified folds
+      kstep <- length(randidx) %/% nfold
+      folds <- list()
+      for (i in 1:(nfold-1)) {
+        folds[[i]] = randidx[1:kstep]
+        randidx = setdiff(randidx, folds[[i]])
+      }
+      folds[[nfold]] = randidx
+    }
   }
   ret <- list()
   for (k in 1:nfold) {
-    dtest <- slice(dall, idset[[k]])
+    dtest <- slice(dall, folds[[k]])
     didx = c()
     for (i in 1:nfold) {
       if (i != k) {
-        didx <- append(didx, idset[[i]])
+        didx <- append(didx, folds[[i]])
       }
     }
     dtrain <- slice(dall, didx)
     bst <- xgb.Booster(param, list(dtrain, dtest))
     watchlist = list(train=dtrain, test=dtest)
-    ret[[k]] <- list(dtrain=dtrain, booster=bst, watchlist=watchlist)
+    ret[[k]] <- list(dtrain=dtrain, booster=bst, watchlist=watchlist, index=folds[[k]])
   }
   return (ret)
 }
 
 xgb.cv.aggcv <- function(res, showsd = TRUE) {
   header <- res[[1]]
   ret <- header[1]
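With the per-fold `index` slot recorded above, cross-validation can hand back out-of-fold predictions. A sketch assuming the updated `xgb.cv()` surface documented later in this diff:

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

param <- list(objective = "binary:logistic", max.depth = 2, eta = 1)
# prediction = TRUE additionally returns the out-of-fold prediction
# vector, stitched together from each fold's test indices.
res <- xgb.cv(params = param, data = dtrain, nrounds = 5, nfold = 5,
              prediction = TRUE)
```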
@@ -212,3 +292,53 @@ xgb.cv.aggcv <- function(res, showsd = TRUE) {
   }
   return (ret)
 }
+
+# Shamelessly copied from caret::createFolds
+# and simplified by always returning an unnamed list of test indices
+xgb.createFolds <- function(y, k = 10)
+{
+  if(is.numeric(y)) {
+    ## Group the numeric data based on their magnitudes
+    ## and sample within those groups.
+
+    ## When the number of samples is low, we may have
+    ## issues further slicing the numeric data into
+    ## groups. The number of groups will depend on the
+    ## ratio of the number of folds to the sample size.
+    ## At most, we will use quantiles. If the sample
+    ## is too small, we just do regular unstratified
+    ## CV
+    cuts <- floor(length(y)/k)
+    if(cuts < 2) cuts <- 2
+    if(cuts > 5) cuts <- 5
+    y <- cut(y,
+             unique(quantile(y, probs = seq(0, 1, length = cuts))),
+             include.lowest = TRUE)
+  }
+
+  if(k < length(y)) {
+    ## reset levels so that the possible levels and
+    ## the levels in the vector are the same
+    y <- factor(as.character(y))
+    numInClass <- table(y)
+    foldVector <- vector(mode = "integer", length(y))
+
+    ## For each class, balance the fold allocation as far
+    ## as possible, then resample the remainder.
+    ## The final assignment of folds is also randomized.
+    for(i in 1:length(numInClass)) {
+      ## create a vector of integers from 1:k as many times as possible without
+      ## going over the number of samples in the class. Note that if the number
+      ## of samples in a class is less than k, nothing is producd here.
+      seqVector <- rep(1:k, numInClass[i] %/% k)
+      ## add enough random integers to get length(seqVector) == numInClass[i]
+      if(numInClass[i] %% k > 0) seqVector <- c(seqVector, sample(1:k, numInClass[i] %% k))
+      ## shuffle the integers for fold assignment and assign to this classes's data
+      foldVector[which(y == dimnames(numInClass)$y[i])] <- sample(seqVector)
+    }
+  } else foldVector <- seq(along = y)
+
+  out <- split(seq(along = y), foldVector)
+  names(out) <- NULL
+  out
+}
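Since automatic fold creation now refuses ranking objectives, folds can be pre-computed and passed in: an unnamed list of disjoint test-row index vectors, the same shape `xgb.createFolds()` returns. A sketch (the 5-way split is illustrative):

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

n <- nrow(dtrain)
myfolds <- split(sample(n), rep(1:5, length.out = n))
names(myfolds) <- NULL   # unnamed list of test indices, one entry per fold

param <- list(objective = "binary:logistic", max.depth = 2, eta = 1)
res <- xgb.cv(params = param, data = dtrain, nrounds = 5, nfold = 5,
              folds = myfolds)
```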
@@ -6,7 +6,7 @@
|
|||||||
#' indicating the data file.
|
#' indicating the data file.
|
||||||
#' @param info a list of information of the xgb.DMatrix object
|
#' @param info a list of information of the xgb.DMatrix object
|
||||||
#' @param missing Missing is only used when input is dense matrix, pick a float
|
#' @param missing Missing is only used when input is dense matrix, pick a float
|
||||||
# value that represents missing value. Sometime a data use 0 or other extreme value to represents missing values.
|
#' value that represents missing value. Sometime a data use 0 or other extreme value to represents missing values.
|
||||||
#
|
#
|
||||||
#' @param ... other information to pass to \code{info}.
|
#' @param ... other information to pass to \code{info}.
|
||||||
#'
|
#'
|
||||||
|
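For illustration, a minimal sketch of the `missing` parameter documented above; the `-999` sentinel is hypothetical:

```r
library(xgboost)

# Dense matrix where -999 was used as a "no measurement" sentinel
# (the sentinel value is made up for this example).
x <- matrix(c(1.2, -999, 0.5,
              0.3, 2.1, -999), nrow = 2, byrow = TRUE)
y <- c(1, 0)

# Tell xgboost to treat -999 cells as missing rather than as real values
dtrain <- xgb.DMatrix(x, label = y, missing = -999)
```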
R-package/R/xgb.cv.R:

```diff
@@ -1,7 +1,18 @@
 #' Cross Validation
 #'
 #' The cross validation function of xgboost
 #'
+#' @importFrom data.table data.table
+#' @importFrom data.table as.data.table
+#' @importFrom magrittr %>%
+#' @importFrom data.table :=
+#' @importFrom data.table rbindlist
+#' @importFrom stringr str_extract_all
+#' @importFrom stringr str_extract
+#' @importFrom stringr str_split
+#' @importFrom stringr str_replace
+#' @importFrom stringr str_match
+#'
 #' @param params the list of parameters. Commonly used ones are:
 #' \itemize{
 #'   \item \code{objective} objective function, common ones are
@@ -14,13 +25,16 @@
 #'   \item \code{nthread} number of thread used in training, if not set, all threads are used
 #' }
 #'
-#' See \url{https://github.com/tqchen/xgboost/wiki/Parameters} for
-#' further details. See also demo/ for walkthrough example in R.
+#' See \link{xgb.train} for further details.
+#' See also demo/ for walkthrough example in R.
-#' @param data takes an \code{xgb.DMatrix} as the input.
+#' @param data takes an \code{xgb.DMatrix} or \code{Matrix} as the input.
 #' @param nrounds the max number of iterations
-#' @param nfold number of folds used
+#' @param nfold the original dataset is randomly partitioned into \code{nfold} equal size subsamples.
-#' @param label option field, when data is Matrix
+#' @param label optional field, when data is \code{Matrix}
-#' @param showsd boolean, whether show standard deviation of cross validation
+#' @param missing Missing is only used when input is dense matrix, pick a float
+#'   value that represents missing value. Sometimes a dataset uses 0 or another extreme value to represent missing values.
+#' @param prediction A logical value indicating whether to return the prediction vector.
+#' @param showsd \code{boolean}, whether show standard deviation of cross validation
 #' @param metrics, list of evaluation metrics to be used in cross validation,
 #'   when it is not specified, the evaluation metric is chosen according to objective function.
 #'   Possible options are:
@@ -32,55 +46,187 @@
 #'   \item \code{merror} Exact matching error, used to evaluate multi-class classification
 #' }
 #' @param obj customized objective function. Returns gradient and second order
-#'   gradient with given prediction and dtrain,
+#'   gradient with given prediction and dtrain.
 #' @param feval customized evaluation function. Returns
 #'   \code{list(metric='metric-name', value='metric-value')} with given
-#'   prediction and dtrain,
+#'   prediction and dtrain.
+#' @param stratified \code{boolean} whether sampling of folds should be stratified by the values of labels in \code{data}
+#' @param folds \code{list} provides a possibility of using a list of pre-defined CV folds (each element must be a vector of fold's indices).
+#'   If folds are supplied, the nfold and stratified parameters would be ignored.
+#' @param verbose \code{boolean}, print the statistics during the process
+#' @param early_stop_round If \code{NULL}, the early stopping function is not triggered.
+#'   If set to an integer \code{k}, training with a validation set will stop if the performance
+#'   keeps getting worse consecutively for \code{k} rounds.
+#' @param early.stop.round An alternative of \code{early_stop_round}.
+#' @param maximize If \code{feval} and \code{early_stop_round} are set, then \code{maximize} must be set as well.
+#'   \code{maximize=TRUE} means the larger the evaluation score the better.
+#'
 #' @param ... other parameters to pass to \code{params}.
 #'
-#' @details
-#' This is the cross validation function for xgboost
+#' @return
+#' If \code{prediction = TRUE}, a list with the following elements is returned:
+#' \itemize{
+#'   \item \code{dt} a \code{data.table} with each mean and standard deviation stat for training set and test set
+#'   \item \code{pred} an array or matrix (for multiclass classification) with predictions for each CV-fold for the model having been trained on the data in all other folds.
+#' }
 #'
-#' Parallelization is automatically enabled if OpenMP is present.
-#' Number of threads can also be manually specified via "nthread" parameter.
+#' If \code{prediction = FALSE}, just a \code{data.table} with each mean and standard deviation stat for training set and test set is returned.
+#'
+#' @details
+#' The original sample is randomly partitioned into \code{nfold} equal size subsamples.
 #'
-#' This function only accepts an \code{xgb.DMatrix} object as the input.
+#' Of the \code{nfold} subsamples, a single subsample is retained as the validation data for testing the model, and the remaining \code{nfold - 1} subsamples are used as training data.
+#'
+#' The cross-validation process is then repeated \code{nrounds} times, with each of the \code{nfold} subsamples used exactly once as the validation data.
+#'
+#' All observations are used for both training and validation.
+#'
+#' Adapted from \url{http://en.wikipedia.org/wiki/Cross-validation_\%28statistics\%29#k-fold_cross-validation}
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
 #' dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
-#' history <- xgb.cv(data = dtrain, nround=3, nfold = 5, metrics=list("rmse","auc"),
-#'                   "max.depth"=3, "eta"=1, "objective"="binary:logistic")
+#' history <- xgb.cv(data = dtrain, nround=3, nthread = 2, nfold = 5, metrics=list("rmse","auc"),
+#'                   max.depth = 3, eta = 1, objective = "binary:logistic")
+#' print(history)
 #' @export
 #'
-xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL,
-                   showsd = TRUE, metrics=list(), obj = NULL, feval = NULL, ...) {
+xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing = NULL,
+                   prediction = FALSE, showsd = TRUE, metrics=list(),
+                   obj = NULL, feval = NULL, stratified = TRUE, folds = NULL, verbose = T,
+                   early_stop_round = NULL, early.stop.round = NULL, maximize = NULL, ...) {
   if (typeof(params) != "list") {
     stop("xgb.cv: first argument params must be list")
   }
+  if (!is.null(folds)) {
+    if (class(folds) != "list" | length(folds) < 2) {
+      stop("folds must be a list with 2 or more elements that are vectors of indices for each CV-fold")
+    }
+    nfold <- length(folds)
+  }
   if (nfold <= 1) {
     stop("nfold must be bigger than 1")
   }
-  dtrain <- xgb.get.DMatrix(data, label)
+  if (is.null(missing)) {
+    dtrain <- xgb.get.DMatrix(data, label)
+  } else {
+    dtrain <- xgb.get.DMatrix(data, label, missing)
+  }
   params <- append(params, list(...))
   params <- append(params, list(silent=1))
   for (mc in metrics) {
     params <- append(params, list("eval_metric"=mc))
   }

-  folds <- xgb.cv.mknfold(dtrain, nfold, params)
-  history <- list()
+  # Early Stopping
+  if (is.null(early_stop_round) && !is.null(early.stop.round))
+    early_stop_round = early.stop.round
+  if (!is.null(early_stop_round)) {
+    if (!is.null(feval) && is.null(maximize))
+      stop('Please set maximize to note whether the model is maximizing the evaluation or not.')
+    if (is.null(maximize) && is.null(params$eval_metric))
+      stop('Please set maximize to note whether the model is maximizing the evaluation or not.')
+    if (is.null(maximize)) {
+      if (params$eval_metric %in% c('rmse', 'logloss', 'error', 'merror', 'mlogloss')) {
+        maximize = FALSE
+      } else {
+        maximize = TRUE
+      }
+    }
+
+    if (maximize) {
+      bestScore = 0
+    } else {
+      bestScore = Inf
+    }
+    bestInd = 0
+    earlyStopflag = FALSE
+
+    if (length(metrics) > 1)
+      warning('Only the first metric is used for the early stopping process.')
+  }
+
+  xgb_folds <- xgb.cv.mknfold(dtrain, nfold, params, stratified, folds)
+  obj_type = params[['objective']]
+  mat_pred = FALSE
+  if (!is.null(obj_type) && obj_type == 'multi:softprob') {
+    num_class = params[['num_class']]
+    if (is.null(num_class))
+      stop('must set num_class to use softmax')
+    predictValues <- matrix(0, xgb.numrow(dtrain), num_class)
+    mat_pred = TRUE
+  }
+  else
+    predictValues <- rep(0, xgb.numrow(dtrain))
+  history <- c()
   for (i in 1:nrounds) {
     msg <- list()
     for (k in 1:nfold) {
-      fd <- folds[[k]]
+      fd <- xgb_folds[[k]]
       succ <- xgb.iter.update(fd$booster, fd$dtrain, i - 1, obj)
-      msg[[k]] <- strsplit(xgb.iter.eval(fd$booster, fd$watchlist, i - 1, feval),
-                           "\t")[[1]]
+      if (i < nrounds) {
+        msg[[k]] <- xgb.iter.eval(fd$booster, fd$watchlist, i - 1, feval) %>% str_split("\t") %>% .[[1]]
+      } else {
+        if (!prediction) {
+          msg[[k]] <- xgb.iter.eval(fd$booster, fd$watchlist, i - 1, feval) %>% str_split("\t") %>% .[[1]]
+        } else {
+          res <- xgb.iter.eval(fd$booster, fd$watchlist, i - 1, feval, prediction)
+          if (mat_pred) {
+            pred_mat = matrix(res[[2]], num_class, length(fd$index))
+            predictValues[fd$index, ] <- t(pred_mat)
+          } else {
+            predictValues[fd$index] <- res[[2]]
+          }
+          msg[[k]] <- res[[1]] %>% str_split("\t") %>% .[[1]]
+        }
+      }
     }
     ret <- xgb.cv.aggcv(msg, showsd)
-    history <- append(history, ret)
-    cat(paste(ret, "\n", sep=""))
+    history <- c(history, ret)
+    if (verbose) paste(ret, "\n", sep = "") %>% cat
+
+    # early stopping
+    if (!is.null(early_stop_round)) {
+      score = strsplit(ret, '\\s+')[[1]][1 + length(metrics) + 1]
+      score = strsplit(score, '\\+|:')[[1]][[2]]
+      score = as.numeric(score)
+      if ((maximize && score > bestScore) || (!maximize && score < bestScore)) {
+        bestScore = score
+        bestInd = i
+      } else {
+        if (i - bestInd >= early_stop_round) {
+          earlyStopflag = TRUE
+          cat('Stopping. Best iteration:', bestInd)
+          break
+        }
+      }
+    }
   }
-  return (TRUE)
+
+  colnames <- str_split(string = history[1], pattern = "\t")[[1]] %>% .[2:length(.)] %>% str_extract(".*:") %>% str_replace(":", "") %>% str_replace("-", ".")
+  colnamesMean <- paste(colnames, "mean")
+  if (showsd) colnamesStd <- paste(colnames, "std")
+
+  colnames <- c()
+  if (showsd) for (i in 1:length(colnamesMean)) colnames <- c(colnames, colnamesMean[i], colnamesStd[i])
+  else colnames <- colnamesMean
+
+  type <- rep(x = "numeric", times = length(colnames))
+  dt <- read.table(text = "", colClasses = type, col.names = colnames) %>% as.data.table
+  split <- str_split(string = history, pattern = "\t")
+
+  for (line in split) dt <- line[2:length(line)] %>% str_extract_all(pattern = "\\d*\\.+\\d*") %>% unlist %>% as.numeric %>% as.list %>% {rbindlist(list(dt, .), use.names = F, fill = F)}
+
+  if (prediction) {
+    return(list(dt = dt, pred = predictValues))
+  }
+  return(dt)
 }
+
+# Avoid error messages during CRAN check.
+# The reason is that these variables are never declared
+# They are mainly column names inferred by Data.table...
+globalVariables(".")
```
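A sketch of how the options introduced by this diff combine, assuming the `xgb.cv` signature shown above; parameter values are illustrative:

```r
library(xgboost)

data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

# 1. Out-of-fold predictions for every training row
res <- xgb.cv(data = dtrain, nrounds = 10, nfold = 5,
              prediction = TRUE, metrics = list("error"),
              max.depth = 2, eta = 1, objective = "binary:logistic")
head(res$dt)      # per-round mean/std statistics
length(res$pred)  # one out-of-fold prediction per row of dtrain

# 2. Early stopping on the test error (lower is better, so maximize = FALSE)
dt <- xgb.cv(data = dtrain, nrounds = 100, nfold = 5,
             metrics = list("error"), maximize = FALSE,
             early_stop_round = 3,
             max.depth = 2, eta = 1, objective = "binary:logistic")
```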
R-package/R/xgb.dump.R:

```diff
@@ -2,14 +2,26 @@
 #'
 #' Save a xgboost model to text file. Could be parsed later.
 #'
+#' @importFrom magrittr %>%
+#' @importFrom stringr str_replace
+#' @importFrom data.table fread
+#' @importFrom data.table :=
+#' @importFrom data.table setnames
 #' @param model the model object.
-#' @param fname the name of the binary file.
+#' @param fname the name of the text file where to save the model text dump. If not provided or set to \code{NULL} the function will return the model as a \code{character} vector.
 #' @param fmap feature map file representing the type of feature.
 #'   Detailed description could be found at
-#'   \url{https://github.com/tqchen/xgboost/wiki/Binary-Classification#dump-model}.
+#'   \url{https://github.com/dmlc/xgboost/wiki/Binary-Classification#dump-model}.
 #'   See demo/ for walkthrough example in R, and
-#'   \url{https://github.com/tqchen/xgboost/blob/master/demo/data/featmap.txt}
+#'   \url{https://github.com/dmlc/xgboost/blob/master/demo/data/featmap.txt}
 #'   for example Format.
+#' @param with.stats whether dump statistics of splits.
+#'   When this option is on, the model dump comes with two additional statistics:
+#'   gain is the approximate loss function gain we get in each split;
+#'   cover is the sum of second order gradient in each node.
+#'
+#' @return
+#' if fname is not provided or set to \code{NULL} the function will return the model as a \code{character} vector. Otherwise it will return \code{TRUE}.
 #'
 #' @examples
 #' data(agaricus.train, package='xgboost')
@@ -17,17 +29,43 @@
 #' train <- agaricus.train
 #' test <- agaricus.test
 #' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
-#'                eta = 1, nround = 2,objective = "binary:logistic")
+#'                eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
-#' xgb.dump(bst, 'xgb.model.dump')
+#' # save the model in file 'xgb.model.dump'
+#' xgb.dump(bst, 'xgb.model.dump', with.stats = TRUE)
+#'
+#' # print the model without saving it to a file
+#' print(xgb.dump(bst))
 #' @export
 #'
-xgb.dump <- function(model, fname, fmap = "") {
+xgb.dump <- function(model = NULL, fname = NULL, fmap = "", with.stats = FALSE) {
   if (class(model) != "xgb.Booster") {
-    stop("xgb.dump: first argument must be type xgb.Booster")
+    stop("model: argument must be type xgb.Booster")
+  } else {
+    model <- xgb.Booster.check(model)
   }
-  if (typeof(fname) != "character") {
-    stop("xgb.dump: second argument must be type character")
+  if (!(class(fname) %in% c("character", "NULL") && length(fname) <= 1)) {
+    stop("fname: argument must be type character (when provided)")
   }
-  .Call("XGBoosterDumpModel_R", model, fname, fmap, PACKAGE = "xgboost")
-  return(TRUE)
+  if (!(class(fmap) %in% c("character", "NULL") && length(fmap) <= 1)) {
+    stop("fmap: argument must be type character (when provided)")
+  }
+
+  longString <- .Call("XGBoosterDumpModel_R", model$handle, fmap, as.integer(with.stats), PACKAGE = "xgboost")
+
+  dt <- fread(paste(longString, collapse = ""), sep = "\n", header = F)
+
+  setnames(dt, "Lines")
+
+  if (is.null(fname)) {
+    result <- dt[Lines != "0"][, Lines := str_replace(Lines, "^\t+", "")][Lines != ""][, paste(Lines)]
+    return(result)
+  } else {
+    result <- dt[Lines != "0"][Lines != ""][, paste(Lines)] %>% writeLines(fname)
+    return(TRUE)
+  }
+}
+
+# Avoid error messages during CRAN check.
+# The reason is that these variables are never declared
+# They are mainly column names inferred by Data.table...
+globalVariables(c("Lines", "."))
```
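With `fname = NULL` the dump now stays in memory, which makes it easy to post-process. A small sketch; the `^booster` header convention comes from the dump format handled above:

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

# In-memory dump (fname = NULL) with split statistics
dump <- xgb.dump(bst, with.stats = TRUE)

# Each tree starts with a "booster[i]" header line
sum(grepl("^booster", dump))  # number of trees, 2 here
```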
R-package/R/xgb.importance.R (new file, 134 lines):

```r
#' Show importance of features in a model
#'
#' Read a xgboost model text dump.
#' Can be tree or linear model (text dumps of linear models are only supported in the dev version of \code{Xgboost} for now).
#'
#' @importFrom data.table data.table
#' @importFrom data.table setnames
#' @importFrom data.table :=
#' @importFrom magrittr %>%
#' @importFrom Matrix colSums
#' @importFrom Matrix cBind
#' @importFrom Matrix sparseVector
#'
#' @param feature_names names of each feature as a character vector. Can be extracted from a sparse matrix (see example). If the model dump already contains feature names, this argument should be \code{NULL}.
#'
#' @param filename_dump the path to the text file storing the model. Model dump must include the gain per feature and per tree (\code{with.stats = T} in function \code{xgb.dump}).
#'
#' @param model generated by the \code{xgb.train} function. Avoids the creation of a dump file.
#'
#' @param data the dataset used for the training step. Will be used with the \code{label} parameter for co-occurrence computation. More information in the \code{Details} part. This parameter is optional.
#'
#' @param label the label vector used for the training step. Will be used with the \code{data} parameter for co-occurrence computation. More information in the \code{Details} part. This parameter is optional.
#'
#' @param target a function which returns \code{TRUE} or \code{1} when an observation should be counted as a co-occurrence and \code{FALSE} or \code{0} otherwise. A default function is provided for computing co-occurrences in a binary classification. The \code{target} function should have only one parameter. This parameter will be used to provide each important feature vector after having applied the split condition, therefore these vectors will be made of 0 and 1 only, whatever the information was before. More information in the \code{Details} part. This parameter is optional.
#'
#' @return A \code{data.table} of the features used in the model with their average gain (and their weight for boosted tree model) in the model.
#'
#' @details
#' This is the function to understand the model trained (and through your model, your data).
#'
#' Results are returned for both linear and tree models.
#'
#' A \code{data.table} is returned by the function, with the following columns:
#' \itemize{
#'   \item \code{Features} name of the features as provided in \code{feature_names} or already present in the model dump ;
#'   \item \code{Gain} contribution of each feature to the model. For boosted tree models, the gain of each feature in each tree is taken into account, then averaged per feature to give a vision of the entire model. The highest percentage means the most important feature to predict the \code{label} used for the training ;
#'   \item \code{Cover} metric of the number of observations related to this feature (only available for tree models) ;
#'   \item \code{Weight} percentage representing the relative number of times a feature has been taken into trees. \code{Gain} should be preferred to search the most important feature. For boosted linear models, this column has no meaning.
#' }
#'
#' Co-occurrence count
#' ------------------
#'
#' The gain gives you an indication of how informative a feature is for making a branch of a decision tree purer. However, with this information only, you can't know if this feature has to be present or not to get a specific classification. In the example code, you may wonder if odor=none should be \code{TRUE} to not eat a mushroom.
#'
#' Co-occurrence computation is here to help in understanding this relation between a predictor and a specific class. It will count how many observations are returned as \code{TRUE} by the \code{target} function (see parameters). When you execute the example below, there are only 92 times over the 3140 observations of the train dataset where a mushroom has no odor and can be eaten safely.
#'
#' If you need to remember one thing only: unless you want to leave us early, don't eat a mushroom which has no odor :-)
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#'
#' # Both datasets are lists with two items, a sparse matrix and labels
#' # (labels = outcome column which will be learned).
#' # Each column of the sparse Matrix is a feature in one hot encoding format.
#' train <- agaricus.train
#'
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#'                eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
#'
#' # train$data@@Dimnames[[2]] represents the column names of the sparse matrix.
#' xgb.importance(train$data@@Dimnames[[2]], model = bst)
#'
#' # Same thing with co-occurrence computation this time
#' xgb.importance(train$data@@Dimnames[[2]], model = bst, data = train$data, label = train$label)
#'
#' @export
xgb.importance <- function(feature_names = NULL, filename_dump = NULL, model = NULL, data = NULL, label = NULL, target = function(x) ((x + label) == 2)){
  if (!class(feature_names) %in% c("character", "NULL")) {
    stop("feature_names: Has to be a vector of character or NULL if the model dump already contains feature names. Look at this function documentation to see where to get feature names.")
  }

  if (!(class(filename_dump) %in% c("character", "NULL") && length(filename_dump) <= 1)) {
    stop("filename_dump: Has to be a path to the model dump file.")
  }

  if (!class(model) %in% c("xgb.Booster", "NULL")) {
    stop("model: Has to be an object of class xgb.Booster generated by the xgb.train function.")
  }

  if ((is.null(data) & !is.null(label)) | (!is.null(data) & is.null(label))) {
    stop("data/label: Provide both arguments if you want co-occurrence computation, or neither of them if you are not interested, but not only one of them.")
  }

  if (class(label) == "numeric") {
    if (sum(label == 0) / length(label) > 0.5) label <- as(label, "sparseVector")
  }

  if (is.null(model)) {
    text <- readLines(filename_dump)
  } else {
    text <- xgb.dump(model = model, with.stats = T)
  }

  if (text[2] == "bias:") {
    result <- readLines(filename_dump) %>% linearDump(feature_names, .)
    if (!is.null(data) | !is.null(label)) warning("data/label: these parameters should only be provided with decision tree based models.")
  } else {
    result <- treeDump(feature_names, text = text, keepDetail = !is.null(data))

    # Co-occurrence computation
    if (!is.null(data) & !is.null(label) & nrow(result) > 0) {
      # Take care of missing column
      a <- data[, result[MissingNo == T, Feature], drop = FALSE] != 0
      # Bind the two Matrix and reorder columns
      c <- data[, result[MissingNo == F, Feature], drop = FALSE] %>% cBind(a, .) %>% .[, result[, Feature]]
      rm(a)
      # Apply split
      d <- data[, result[, Feature], drop = FALSE] < as.numeric(result[, Split])
      apply(c & d, 2, . %>% target %>% sum) -> vec

      result <- result[, "RealCover" := as.numeric(vec), with = F][, "RealCover %" := RealCover / sum(label)][, MissingNo := NULL]
    }
  }
  result
}

treeDump <- function(feature_names, text, keepDetail){
  if (keepDetail) groupBy <- c("Feature", "Split", "MissingNo") else groupBy <- "Feature"

  result <- xgb.model.dt.tree(feature_names = feature_names, text = text)[, "MissingNo" := Missing == No][Feature != "Leaf", .(Gain = sum(Quality), Cover = sum(Cover), Frequence = .N), by = groupBy, with = T][, `:=`(Gain = Gain / sum(Gain), Cover = Cover / sum(Cover), Frequence = Frequence / sum(Frequence))][order(Gain, decreasing = T)]

  result
}

linearDump <- function(feature_names, text){
  which(text == "weight:") %>% {a = . + 1; text[a:length(text)]} %>% as.numeric %>% data.table(Feature = feature_names, Weight = .)
}

# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by Data.table...
globalVariables(c(".", "Feature", "Split", "No", "Missing", "MissingNo", "RealCover"))
```
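A quick sketch of reading the resulting table, assuming the `Gain`-sorted `data.table` described above:

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

imp <- xgb.importance(train$data@Dimnames[[2]], model = bst)

imp[1:5, ]     # five most informative features (table is sorted by Gain)
sum(imp$Gain)  # Gain is normalised, so this sums to ~1
```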
R-package/R/xgb.load.R:

```diff
@@ -10,7 +10,7 @@
 #' train <- agaricus.train
 #' test <- agaricus.test
 #' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
-#'                eta = 1, nround = 2,objective = "binary:logistic")
+#'                eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
 #' xgb.save(bst, 'xgb.model')
 #' bst <- xgb.load('xgb.model')
 #' pred <- predict(bst, test$data)
@@ -19,5 +19,14 @@
 xgb.load <- function(modelfile) {
   if (is.null(modelfile))
     stop("xgb.load: modelfile cannot be NULL")
-  xgb.Booster(modelfile = modelfile)
+
+  handle <- xgb.Booster(modelfile = modelfile)
+  # re-use modelfile if it is raw so we do not need to serialize
+  if (typeof(modelfile) == "raw") {
+    bst <- xgb.handleToBooster(handle, modelfile)
+  } else {
+    bst <- xgb.handleToBooster(handle, NULL)
+  }
+  bst <- xgb.Booster.check(bst)
+  return(bst)
 }
```
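Both load paths the new code distinguishes, sketched side by side; the file name is illustrative:

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

# Path 1: from a file on disk
xgb.save(bst, 'xgb.model')
bst_file <- xgb.load('xgb.model')

# Path 2: from a raw vector, which xgb.load now keeps attached to the
# booster so it does not need to be serialized again later
raw <- xgb.save.raw(bst)
bst_raw <- xgb.load(raw)
```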
R-package/R/xgb.model.dt.tree.R (new file, 170 lines):

```r
#' Convert tree model dump to data.table
#'
#' Read a tree model text dump and return a data.table.
#'
#' @importFrom data.table data.table
#' @importFrom data.table set
#' @importFrom data.table rbindlist
#' @importFrom data.table copy
#' @importFrom data.table :=
#' @importFrom magrittr %>%
#' @importFrom magrittr not
#' @importFrom magrittr add
#' @importFrom stringr str_extract
#' @importFrom stringr str_split
#' @importFrom stringr str_match
#' @importFrom stringr str_replace
#' @importFrom stringr str_trim
#' @param feature_names names of each feature as a character vector. Can be extracted from a sparse matrix (see example). If the model dump already contains feature names, this argument should be \code{NULL}.
#' @param filename_dump the path to the text file storing the model. Model dump must include the gain per feature and per tree (parameter \code{with.stats = T} in function \code{xgb.dump}).
#' @param model generated by the \code{xgb.train} function. Avoids the creation of a dump file.
#' @param text dump generated by the \code{xgb.dump} function. Avoids the creation of a dump file. Model dump must include the gain per feature and per tree (parameter \code{with.stats = T} in function \code{xgb.dump}).
#' @param n_first_tree limit the parsing to the n first trees. If \code{NULL}, all trees of the model are parsed. Performance can be low for huge models.
#'
#' @return A \code{data.table} of the features used in the model with their gain, cover and a few other details.
#'
#' @details
#' General function to convert a text dump of a tree model to a \code{data.table}. The purpose is to help the user explore the model and get a better understanding of it.
#'
#' The content of the \code{data.table} is organised this way:
#'
#' \itemize{
#'   \item \code{ID}: unique identifier of a node ;
#'   \item \code{Feature}: feature used in the tree to operate a split. When Leaf is indicated, it is the end of a branch ;
#'   \item \code{Split}: value of the chosen feature at which the split is operated ;
#'   \item \code{Yes}: ID of the next node in the branch when the split condition is met ;
#'   \item \code{No}: ID of the next node in the branch when the split condition is not met ;
#'   \item \code{Missing}: ID of the next node in the branch for observations where the feature used for the split is not provided ;
#'   \item \code{Quality}: the gain related to the split in this specific node ;
#'   \item \code{Cover}: metric to measure the number of observations affected by the split ;
#'   \item \code{Tree}: ID of the tree. It is included in the main ID ;
#'   \item \code{Yes.X} or \code{No.X}: data related to the pointer in the \code{Yes} or \code{No} column.
#' }
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#'
#' # Both datasets are lists with two items, a sparse matrix and labels
#' # (labels = outcome column which will be learned).
#' # Each column of the sparse Matrix is a feature in one hot encoding format.
#' train <- agaricus.train
#'
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#'                eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
#'
#' # agaricus.train$data@@Dimnames[[2]] represents the column names of the sparse matrix.
#' xgb.model.dt.tree(agaricus.train$data@@Dimnames[[2]], model = bst)
#'
#' @export
xgb.model.dt.tree <- function(feature_names = NULL, filename_dump = NULL, model = NULL, text = NULL, n_first_tree = NULL){

  if (!class(feature_names) %in% c("character", "NULL")) {
    stop("feature_names: Has to be a vector of character or NULL if the model dump already contains feature names. Look at this function documentation to see where to get feature names.")
  }
  if (!(class(filename_dump) %in% c("character", "NULL") && length(filename_dump) <= 1)) {
    stop("filename_dump: Has to be a character vector of size 1 representing the path to the model dump file.")
  } else if (!is.null(filename_dump) && !file.exists(filename_dump)) {
    stop("filename_dump: path to the model doesn't exist.")
  } else if (is.null(filename_dump) && is.null(model) && is.null(text)) {
    stop("filename_dump & model & text: no path to dump model, no model, no text dump have been provided.")
  }

  if (!class(model) %in% c("xgb.Booster", "NULL")) {
    stop("model: Has to be an object of class xgb.Booster generated by the xgb.train function.")
  }

  if (!class(text) %in% c("character", "NULL")) {
    stop("text: Has to be a vector of character or NULL if a path to the model dump has already been provided.")
  }

  if (!class(n_first_tree) %in% c("numeric", "NULL") | length(n_first_tree) > 1) {
    stop("n_first_tree: Has to be a numeric vector of size 1.")
  }

  if (!is.null(model)) {
    text = xgb.dump(model = model, with.stats = T)
  } else if (!is.null(filename_dump)) {
    text <- readLines(filename_dump) %>% str_trim(side = "both")
  }

  position <- str_match(text, "booster") %>% is.na %>% not %>% which %>% c(length(text) + 1)

  extract <- function(x, pattern) str_extract(x, pattern) %>% str_split("=") %>% lapply(function(x) x[2] %>% as.numeric) %>% unlist

  n_round <- min(length(position) - 1, n_first_tree)

  addTreeId <- function(x, i) paste(i, x, sep = "-")

  allTrees <- data.table()

  anynumber_regex <- "[-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?"
  for (i in 1:n_round) {

    tree <- text[(position[i] + 1):(position[i + 1] - 1)]

    # avoid tree made of a leaf only (no split)
    if (length(tree) < 2) next

    treeID <- i - 1

    notLeaf <- str_match(tree, "leaf") %>% is.na
    leaf <- notLeaf %>% not %>% tree[.]
    branch <- notLeaf %>% tree[.]
    idBranch <- str_extract(branch, "\\d*:") %>% str_replace(":", "") %>% addTreeId(treeID)
    idLeaf <- str_extract(leaf, "\\d*:") %>% str_replace(":", "") %>% addTreeId(treeID)
    featureBranch <- str_extract(branch, "f\\d*<") %>% str_replace("<", "") %>% str_replace("f", "") %>% as.numeric
    if (!is.null(feature_names)) {
      featureBranch <- feature_names[featureBranch + 1]
    }
    featureLeaf <- rep("Leaf", length(leaf))
    splitBranch <- str_extract(branch, paste0("<", anynumber_regex, "\\]")) %>% str_replace("<", "") %>% str_replace("\\]", "")
    splitLeaf <- rep(NA, length(leaf))
    yesBranch <- extract(branch, "yes=\\d*") %>% addTreeId(treeID)
    yesLeaf <- rep(NA, length(leaf))
    noBranch <- extract(branch, "no=\\d*") %>% addTreeId(treeID)
    noLeaf <- rep(NA, length(leaf))
    missingBranch <- extract(branch, "missing=\\d+") %>% addTreeId(treeID)
    missingLeaf <- rep(NA, length(leaf))
    qualityBranch <- extract(branch, paste0("gain=", anynumber_regex))
    qualityLeaf <- extract(leaf, paste0("leaf=", anynumber_regex))
    coverBranch <- extract(branch, "cover=\\d*\\.*\\d*")
    coverLeaf <- extract(leaf, "cover=\\d*\\.*\\d*")
    dt <- data.table(ID = c(idBranch, idLeaf), Feature = c(featureBranch, featureLeaf), Split = c(splitBranch, splitLeaf), Yes = c(yesBranch, yesLeaf), No = c(noBranch, noLeaf), Missing = c(missingBranch, missingLeaf), Quality = c(qualityBranch, qualityLeaf), Cover = c(coverBranch, coverLeaf))[order(ID)][, Tree := treeID]

    allTrees <- rbindlist(list(allTrees, dt), use.names = T, fill = F)
  }

  yes <- allTrees[!is.na(Yes), Yes]

  set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
      j = "Yes.Feature",
      value = allTrees[ID == yes, Feature])

  set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
      j = "Yes.Cover",
      value = allTrees[ID == yes, Cover])

  set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
      j = "Yes.Quality",
      value = allTrees[ID == yes, Quality])

  no <- allTrees[!is.na(No), No]

  set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
      j = "No.Feature",
      value = allTrees[ID == no, Feature])

  set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
      j = "No.Cover",
      value = allTrees[ID == no, Cover])

  set(allTrees, i = which(allTrees[, Feature] != "Leaf"),
      j = "No.Quality",
      value = allTrees[ID == no, Quality])

  allTrees
}

# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by Data.table...
globalVariables(c("ID", "Tree", "Yes", ".", ".N", "Feature", "Cover", "Quality", "No", "Gain", "Frequence"))
```
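A sketch of querying the returned table with `data.table` syntax; column names are those documented above:

```r
library(xgboost)
library(data.table)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

dt <- xgb.model.dt.tree(train$data@Dimnames[[2]], model = bst)

# Total gain of the split nodes, per tree
dt[Feature != "Leaf", sum(Quality), by = Tree]

# Number of leaves per tree
dt[Feature == "Leaf", .N, by = Tree]
```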
R-package/R/xgb.plot.importance.R (new file, 57 lines):

```r
#' Plot feature importance bar graph
#'
#' Read a data.table containing feature importance details and plot it.
#'
#' @importFrom magrittr %>%
#' @param importance_matrix a \code{data.table} returned by the \code{xgb.importance} function.
#' @param numberOfClusters a \code{numeric} vector containing the min and the max range of the possible number of clusters of bars.
#'
#' @return A \code{ggplot2} bar graph representing each feature by a horizontal bar. The longer the bar, the more important the feature. Features are sorted by importance and clustered by importance. The group is represented through the colour of the bar.
#'
#' @details
#' The purpose of this function is to easily represent the importance of each feature of a model.
#' The function returns a ggplot graph, therefore each of its characteristics can be overridden (to customize it).
#' In particular you may want to override the title of the graph. To do so, add \code{+ ggtitle("A GRAPH NAME")} next to the value returned by this function.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#'
#' # Both datasets are lists with two items, a sparse matrix and labels
#' # (labels = outcome column which will be learned).
#' # Each column of the sparse Matrix is a feature in one hot encoding format.
#' train <- agaricus.train
#'
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#'                eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
#'
#' # train$data@@Dimnames[[2]] represents the column names of the sparse matrix.
#' importance_matrix <- xgb.importance(train$data@@Dimnames[[2]], model = bst)
#' xgb.plot.importance(importance_matrix)
#'
#' @export
xgb.plot.importance <- function(importance_matrix = NULL, numberOfClusters = c(1:10)){
  if (!"data.table" %in% class(importance_matrix)) {
    stop("importance_matrix: Should be a data.table.")
  }
  if (!require(ggplot2, quietly = TRUE)) {
    stop("ggplot2 package is required for plotting the importance", call. = FALSE)
  }
  if (!requireNamespace("Ckmeans.1d.dp", quietly = TRUE)) {
    stop("Ckmeans.1d.dp package is required for plotting the importance", call. = FALSE)
  }

  # To avoid issues in clustering when co-occurrences are used
  importance_matrix <- importance_matrix[, .(Gain = sum(Gain)), by = Feature]

  clusters <- suppressWarnings(Ckmeans.1d.dp::Ckmeans.1d.dp(importance_matrix[, Gain], numberOfClusters))
  importance_matrix[, "Cluster" := clusters$cluster %>% as.character]

  plot <- ggplot(importance_matrix,
                 aes(x = reorder(Feature, Gain), y = Gain, width = 0.05),
                 environment = environment()) +
    geom_bar(aes(fill = Cluster), stat = "identity", position = "identity") +
    coord_flip() +
    xlab("Features") +
    ylab("Gain") +
    ggtitle("Feature importance") +
    theme(plot.title = element_text(lineheight = .9, face = "bold"),
          panel.grid.major.y = element_blank())

  return(plot)
}

# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by Data.table...
globalVariables(c("Feature", "Gain", "Cluster", "ggplot", "aes", "geom_bar", "coord_flip", "xlab", "ylab", "ggtitle", "theme", "element_blank", "element_text"))
```
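Since the return value is a plain ggplot object, restyling works exactly as the details section suggests; a short sketch (requires the Ckmeans.1d.dp package):

```r
library(xgboost)
library(ggplot2)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

importance_matrix <- xgb.importance(train$data@Dimnames[[2]], model = bst)

# Override the default title and axis label of the returned ggplot graph
xgb.plot.importance(importance_matrix) +
  ggtitle("Which mushroom features matter?") +
  ylab("Share of total gain")
```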
R-package/R/xgb.plot.tree.R (new file, 97 lines):

```r
#' Plot a boosted tree model
#'
#' Read a tree model text dump.
#' Plotting only works for boosted tree models (not linear models).
#'
#' @importFrom data.table data.table
#' @importFrom data.table set
#' @importFrom data.table rbindlist
#' @importFrom data.table :=
#' @importFrom data.table copy
#' @importFrom magrittr %>%
#' @importFrom magrittr not
#' @importFrom magrittr add
#' @importFrom stringr str_extract
#' @importFrom stringr str_split
#' @importFrom stringr str_trim
#' @param feature_names names of each feature as a character vector. Can be extracted from a sparse matrix (see example). If the model dump already contains feature names, this argument should be \code{NULL}.
#' @param filename_dump the path to the text file storing the model. Model dump must include the gain per feature and per tree (parameter \code{with.stats = T} in function \code{xgb.dump}). Possible to provide a model directly (see \code{model} argument).
#' @param model generated by the \code{xgb.train} function. Avoids the creation of a dump file.
#' @param n_first_tree limit the plot to the n first trees. If \code{NULL}, all trees of the model are plotted. Performance can be low for huge models.
#' @param CSSstyle a \code{character} vector storing a css style to customize the appearance of nodes. Look at the \href{https://github.com/knsv/mermaid/wiki}{Mermaid wiki} for more information.
#' @param width the width of the diagram in pixels.
#' @param height the height of the diagram in pixels.
#'
#' @return A \code{DiagrammeR} of the model.
#'
#' @details
#'
#' The content of each node is organised this way:
#'
#' \itemize{
#'   \item \code{feature} value ;
#'   \item \code{cover}: the sum of second order gradient of training data classified to the leaf. If it is square loss, this simply corresponds to the number of instances in that branch. The deeper in the tree a node is, the lower this metric will be ;
#'   \item \code{gain}: metric measuring the importance of the node in the model.
#' }
#'
#' Each branch finishes with a leaf. For each leaf, only the \code{cover} is indicated.
#' It uses the \href{https://github.com/knsv/mermaid/}{Mermaid} library for that purpose.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#'
#' # Both datasets are lists with two items, a sparse matrix and labels
#' # (labels = outcome column which will be learned).
#' # Each column of the sparse Matrix is a feature in one hot encoding format.
#' train <- agaricus.train
#'
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#'                eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
#'
#' # agaricus.train$data@@Dimnames[[2]] represents the column names of the sparse matrix.
#' xgb.plot.tree(agaricus.train$data@@Dimnames[[2]], model = bst)
#'
#' @export
#'
xgb.plot.tree <- function(feature_names = NULL, filename_dump = NULL, model = NULL, n_first_tree = NULL, CSSstyle = NULL, width = NULL, height = NULL){

  if (!(class(CSSstyle) %in% c("character", "NULL") && length(CSSstyle) <= 1)) {
    stop("style: Has to be a character vector of size 1.")
  }

  if (!class(model) %in% c("xgb.Booster", "NULL")) {
    stop("model: Has to be an object of class xgb.Booster generated by the xgb.train function.")
  }

  if (!requireNamespace("DiagrammeR", quietly = TRUE)) {
    stop("DiagrammeR package is required for xgb.plot.tree", call. = FALSE)
  }

  if (is.null(model)) {
    allTrees <- xgb.model.dt.tree(feature_names = feature_names, filename_dump = filename_dump, n_first_tree = n_first_tree)
  } else {
    allTrees <- xgb.model.dt.tree(feature_names = feature_names, model = model, n_first_tree = n_first_tree)
  }

  allTrees[Feature != "Leaf", yesPath := paste(ID, "(", Feature, "<br/>Cover: ", Cover, "<br/>Gain: ", Quality, ")-->|< ", Split, "|", Yes, ">", Yes.Feature, "]", sep = "")]

  allTrees[Feature != "Leaf", noPath := paste(ID, "(", Feature, ")-->|>= ", Split, "|", No, ">", No.Feature, "]", sep = "")]

  if (is.null(CSSstyle)) {
    CSSstyle <- "classDef greenNode fill:#A2EB86, stroke:#04C4AB, stroke-width:2px;classDef redNode fill:#FFA070, stroke:#FF5E5E, stroke-width:2px"
  }

  yes <- allTrees[Feature != "Leaf", c(Yes)] %>% paste(collapse = ",") %>% paste("class ", ., " greenNode", sep = "")

  no <- allTrees[Feature != "Leaf", c(No)] %>% paste(collapse = ",") %>% paste("class ", ., " redNode", sep = "")

  path <- allTrees[Feature != "Leaf", c(yesPath, noPath)] %>% .[order(.)] %>% paste(sep = "", collapse = ";") %>% paste("graph LR", ., collapse = "", sep = ";") %>% paste(CSSstyle, yes, no, sep = ";")
  DiagrammeR::mermaid(path, width, height)
}

# Avoid error messages during CRAN check.
# The reason is that these variables are never declared
# They are mainly column names inferred by Data.table...
globalVariables(c("Feature", "yesPath", "ID", "Cover", "Quality", "Split", "Yes", "Yes.Feature", "noPath", "No", "No.Feature", "."))
```
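A sketch of overriding the node styling via `CSSstyle`; the Mermaid `classDef` string below is illustrative and must define the `greenNode` and `redNode` classes that the function assigns (requires the DiagrammeR package):

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

# Default styling
xgb.plot.tree(train$data@Dimnames[[2]], model = bst)

# Custom node colours; any valid Mermaid classDef style should work here
style <- "classDef greenNode fill:#E6F5E6;classDef redNode fill:#F5E6E6"
xgb.plot.tree(train$data@Dimnames[[2]], model = bst, CSSstyle = style)
```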
R-package/R/xgb.save.R:

```diff
@@ -11,7 +11,7 @@
 #' train <- agaricus.train
 #' test <- agaricus.test
 #' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
-#'                eta = 1, nround = 2,objective = "binary:logistic")
+#'                eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
 #' xgb.save(bst, 'xgb.model')
 #' bst <- xgb.load('xgb.model')
 #' pred <- predict(bst, test$data)
@@ -22,7 +22,8 @@ xgb.save <- function(model, fname) {
     stop("xgb.save: fname must be character")
   }
   if (class(model) == "xgb.Booster") {
-    .Call("XGBoosterSaveModel_R", model, fname, PACKAGE = "xgboost")
+    model <- xgb.Booster.check(model)
+    .Call("XGBoosterSaveModel_R", model$handle, fname, PACKAGE = "xgboost")
     return(TRUE)
   }
   stop("xgb.save: the input must be xgb.Booster. Use xgb.DMatrix.save to save
```
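A round-trip sanity check for the save/load pair, assuming the behaviour shown in this diff; the file name is illustrative:

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

# A saved-then-reloaded booster should score identically
xgb.save(bst, 'xgb.model')
bst2 <- xgb.load('xgb.model')
all.equal(predict(bst, test$data), predict(bst2, test$data))  # TRUE
```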
R-package/R/xgb.save.raw.R (new file, 30 lines):

```r
#' Save xgboost model to R's raw vector;
#' the user can call xgb.load to load the model back from the raw vector
#'
#' Save xgboost model from xgboost or xgb.train
#'
#' @param model the model object.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
#' data(agaricus.test, package='xgboost')
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#'                eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
#' raw <- xgb.save.raw(bst)
#' bst <- xgb.load(raw)
#' pred <- predict(bst, test$data)
#' @export
#'
xgb.save.raw <- function(model) {
  if (class(model) == "xgb.Booster"){
    model <- model$handle
  }
  if (class(model) == "xgb.Booster.handle") {
    raw <- .Call("XGBoosterModelToRaw_R", model, PACKAGE = "xgboost")
    return(raw)
  }
  stop("xgb.raw: the input must be xgb.Booster.handle. Use xgb.DMatrix.save to save
    xgb.DMatrix object.")
}
```
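Because the raw dump is ordinary R data, it can be embedded in any R serialization; a sketch (the file name and metadata fields are illustrative):

```r
library(xgboost)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

# Ship the model alongside arbitrary metadata in a single .rds file
obj <- list(model = xgb.save.raw(bst), trained_on = Sys.Date())
saveRDS(obj, 'bst.rds')

restored <- readRDS('bst.rds')
bst2 <- xgb.load(restored$model)
```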
@@ -1,21 +1,56 @@
|
|||||||
#' eXtreme Gradient Boosting Training
|
#' eXtreme Gradient Boosting Training
|
||||||
#'
|
#'
|
||||||
#' The training function of xgboost
|
#' An advanced interface for training xgboost model. Look at \code{\link{xgboost}} function for a simpler interface.
|
||||||
#'
|
#'
|
||||||
#' @param params the list of parameters. Commonly used ones are:
|
#' @param params the list of parameters.
|
||||||
|
#'
|
||||||
|
#' 1. General Parameters
|
||||||
|
#'
|
||||||
#' \itemize{
|
#' \itemize{
|
||||||
#' \item \code{objective} objective function, common ones are
|
#' \item \code{booster} which booster to use, can be \code{gbtree} or \code{gblinear}. Default: \code{gbtree}
|
||||||
#' \itemize{
|
#' \item \code{silent} 0 means printing running messages, 1 means silent mode. Default: 0
|
||||||
#' \item \code{reg:linear} linear regression
|
|
||||||
#' \item \code{binary:logistic} logistic regression for classification
|
|
||||||
#' }
|
|
||||||
#' \item \code{eta} step size of each boosting step
|
|
||||||
#' \item \code{max.depth} maximum depth of the tree
|
|
||||||
#' \item \code{nthread} number of threads used in training; if not set, all threads are used
#' }
#'
#' 2. Booster Parameters
#'
#' 2.1. Parameters for Tree Booster
#'
#' \itemize{
#' \item \code{eta} control the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1} when it is added to the current approximation. Used to prevent overfitting by making the boosting process more conservative. A lower \code{eta} value implies a larger value for \code{nrounds}: low \code{eta} makes the model more robust to overfitting but slower to compute. Default: 0.3
#' \item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. The larger, the more conservative the algorithm will be.
#' \item \code{max_depth} maximum depth of a tree. Default: 6
#' \item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with a sum of instance weight less than \code{min_child_weight}, the building process will give up further partitioning. In linear regression mode, this simply corresponds to the minimum number of instances needed in each node. The larger, the more conservative the algorithm will be. Default: 1
#' \item \code{subsample} subsample ratio of the training instances. Setting it to 0.5 means that xgboost randomly collects half of the data instances to grow trees, which prevents overfitting. It also makes computation shorter (because there is less data to analyse). It is advised to use this parameter together with \code{eta} while increasing \code{nrounds}. Default: 1
#' \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
#' \item \code{num_parallel_tree} Experimental parameter. Number of trees to grow per round. Useful to test Random Forest through Xgboost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
#' }
#'
#' 2.2. Parameters for Linear Booster
#'
#' \itemize{
#' \item \code{lambda} L2 regularization term on weights. Default: 0
#' \item \code{lambda_bias} L2 regularization term on bias. Default: 0
#' \item \code{alpha} L1 regularization term on weights. (There is no L1 regularization on bias because it is not important.) Default: 0
#' }
#'
#' 3. Task Parameters
#'
#' \itemize{
#' \item \code{objective} specify the learning task and the corresponding learning objective; the objective options are:
#' \itemize{
#' \item \code{reg:linear} linear regression (Default).
#' \item \code{reg:logistic} logistic regression.
#' \item \code{binary:logistic} logistic regression for binary classification. Output probability.
#' \item \code{binary:logitraw} logistic regression for binary classification, output score before logistic transformation.
#' \item \code{num_class} set the number of classes. To use only with multiclass objectives.
#' \item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Classes are represented by a number and should be from 0 to \code{num_class - 1}.
#' \item \code{multi:softprob} same as softmax, but output a vector of ndata * nclass, which can be further reshaped to an ndata by nclass matrix. The result contains the predicted probability of each data point belonging to each class.
#' \item \code{rank:pairwise} set xgboost to do ranking task by minimizing the pairwise loss.
#' }
#' \item \code{base_score} the initial prediction score of all instances, global bias. Default: 0.5
#' \item \code{eval_metric} evaluation metrics for validation data. Default: metric will be assigned according to objective (rmse for regression, error for classification, mean average precision for ranking). The list is provided in the details section.
#' }
#'
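The three parameter groups above all end up in a single `params` list at call time. A minimal sketch of such a list; the values are illustrative defaults, not tuned recommendations:

```r
# Illustrative only: one tree-booster setup combining the groups above
param <- list(objective = "binary:logistic",   # task parameter
              eta = 0.3, max_depth = 6,        # tree booster parameters
              min_child_weight = 1, subsample = 1,
              nthread = 2)                     # general parameter
```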
#' @param data takes an \code{xgb.DMatrix} as the input.
#' @param nrounds the max number of iterations
#' @param watchlist what information should be printed when \code{verbose=1} or
@@ -31,19 +66,37 @@
#' prediction and dtrain,
#' @param verbose If 0, xgboost will stay silent. If 1, xgboost will print
#' information of performance. If 2, xgboost will print information of both
#' @param printEveryN Print every N progress messages when \code{verbose > 0}. Default is 1, which means all messages are printed.
#' @param early_stop_round If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' keeps getting worse consecutively for \code{k} rounds.
#' @param early.stop.round An alternative to \code{early_stop_round}.
#' @param maximize If \code{feval} and \code{early_stop_round} are set, then \code{maximize} must be set as well.
#' \code{maximize=TRUE} means the larger the evaluation score the better.
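Putting the three early-stopping parameters together; a minimal sketch, assuming `param`, `dtrain` and `dtest` are already defined as in the demos further down:

```r
# Stop if the eval-set error has not improved for 3 consecutive rounds
bst <- xgb.train(param, dtrain, nrounds = 50, watchlist = list(eval = dtest),
                 early_stop_round = 3, maximize = FALSE)
```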
#' @param ... other parameters to pass to \code{params}.
#'
#' @details
#' This is the training function for \code{xgboost}.
#'
#' It supports advanced features such as \code{watchlist} and customized objective function (\code{feval}),
#' therefore it is more flexible than the \code{\link{xgboost}} function.
#'
#' Parallelization is automatically enabled if \code{OpenMP} is present.
#' The number of threads can also be manually specified via the \code{nthread} parameter.
#'
#' The \code{eval_metric} parameter (not listed above) is set automatically by Xgboost but can be overridden through this parameter. Below is the list of the different metrics optimized by Xgboost, to help you understand how it works inside, or to use them with the \code{watchlist} parameter.
#' \itemize{
#' \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
#' \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
#' \item \code{error} binary classification error rate. It is calculated as \code{(wrong cases) / (all cases)}. For the predictions, the evaluation will regard instances with a prediction value larger than 0.5 as positive instances, and the others as negative instances.
#' \item \code{merror} multiclass classification error rate. It is calculated as \code{(wrong cases) / (all cases)}.
#' \item \code{auc} area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve}, used for ranking evaluation.
#' \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
#' }
#'
#' Full list of parameters is available in the Wiki: \url{https://github.com/dmlc/xgboost/wiki/Parameters}.
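Several of these metrics can be watched at once by repeating `eval.metric`, as the walkthrough demo further down does; a short sketch, assuming `dtrain` and `dtest` exist:

```r
# Monitor two built-in metrics on both sets each round
watchlist <- list(train = dtrain, test = dtest)
bst <- xgb.train(data = dtrain, nround = 2, watchlist = watchlist,
                 eval.metric = "error", eval.metric = "logloss",
                 max.depth = 2, eta = 1, objective = "binary:logistic")
```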
#'
#' This function only accepts an \code{\link{xgb.DMatrix}} object as the input.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
@@ -63,11 +116,13 @@
#' err <- as.numeric(sum(labels != (preds > 0)))/length(labels)
#' return(list(metric = "error", value = err))
#' }
#' bst <- xgb.train(param, dtrain, nthread = 2, nround = 2, watchlist, logregobj, evalerror)
#' @export
#'
xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
                      obj = NULL, feval = NULL, verbose = 1, printEveryN = 1L,
                      early_stop_round = NULL, early.stop.round = NULL,
                      maximize = NULL, ...) {
  dtrain <- data
  if (typeof(params) != "list") {
    stop("xgb.train: first argument params must be list")
@@ -86,13 +141,68 @@ xgb.train <- function(params=list(), data, nrounds, watchlist = list(),
  }
  params <- append(params, list(...))

  # Early stopping: resolve the dotted alias and validate the setup
  if (is.null(early_stop_round) && !is.null(early.stop.round))
    early_stop_round <- early.stop.round
  if (!is.null(early_stop_round)) {
    if (!is.null(feval) && is.null(maximize))
      stop('Please set maximize to note whether the model is maximizing the evaluation or not.')
    if (length(watchlist) == 0)
      stop('For early stopping you need at least one set in watchlist.')
    if (is.null(maximize) && is.null(params$eval_metric))
      stop('Please set maximize to note whether the model is maximizing the evaluation or not.')
    if (is.null(maximize)) {
      # infer the direction from the metric: error-like metrics are minimized
      if (params$eval_metric %in% c('rmse', 'logloss', 'error', 'merror', 'mlogloss')) {
        maximize <- FALSE
      } else {
        maximize <- TRUE
      }
    }

    if (maximize) {
      bestScore <- 0
    } else {
      bestScore <- Inf
    }
    bestInd <- 0
    earlyStopflag <- FALSE

    if (length(watchlist) > 1)
      warning('Only the first data set in watchlist is used for early stopping process.')
  }

  handle <- xgb.Booster(params, append(watchlist, dtrain))
  bst <- xgb.handleToBooster(handle)
  printEveryN <- max(as.integer(printEveryN), 1L)
  for (i in 1:nrounds) {
    succ <- xgb.iter.update(bst$handle, dtrain, i - 1, obj)
    if (length(watchlist) != 0) {
      msg <- xgb.iter.eval(bst$handle, watchlist, i - 1, feval)
      if (0 == ((i - 1) %% printEveryN))
        cat(paste(msg, "\n", sep = ""))
      if (!is.null(early_stop_round)) {
        # the score of the first watchlist set is the third token of the message
        score <- strsplit(msg, ':|\\s+')[[1]][3]
        score <- as.numeric(score)
        if ((maximize && score > bestScore) || (!maximize && score < bestScore)) {
          bestScore <- score
          bestInd <- i
        } else {
          if (i - bestInd >= early_stop_round) {
            earlyStopflag <- TRUE
            cat('Stopping. Best iteration:', bestInd)
            break
          }
        }
      }
    }
  }
  bst <- xgb.Booster.check(bst)
  if (!is.null(early_stop_round)) {
    bst$bestScore <- bestScore
    bst$bestInd <- bestInd
  }
  return(bst)
}
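A usage sketch of the function above; when early stopping is enabled, `bestScore` and `bestInd` are attached to the returned booster. It assumes `dtrain` and `dtest` were built from the agaricus data as in the demos:

```r
param <- list(max.depth = 2, eta = 1, nthread = 2, objective = "binary:logistic")
bst <- xgb.train(param, dtrain, nrounds = 20, watchlist = list(eval = dtest),
                 early_stop_round = 3, maximize = FALSE)
bst$bestInd    # iteration that achieved the best eval score
bst$bestScore  # the score itself
```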
@@ -1,12 +1,14 @@
#' eXtreme Gradient Boosting (Tree) library
#'
#' A simple interface for training an xgboost model. Look at the \code{\link{xgb.train}} function for a more advanced interface.
#'
#' @param data takes \code{matrix}, \code{dgCMatrix}, local data file or
#' \code{xgb.DMatrix}.
#' @param label the response variable. User should not set this field,
#' if data is a local data file or \code{xgb.DMatrix}.
#' @param params the list of parameters.
#'
#' Commonly used ones are:
#' \itemize{
#' \item \code{objective} objective function, common ones are
#' \itemize{
@@ -17,20 +19,32 @@
#' \item \code{max.depth} maximum depth of the tree
#' \item \code{nthread} number of threads used in training; if not set, all threads are used
#' }
#'
#' Look at \code{\link{xgb.train}} for a more complete list of parameters or \url{https://github.com/dmlc/xgboost/wiki/Parameters} for the full list.
#'
#' See also \code{demo/} for a walkthrough example in R.
#'
#' @param nrounds the max number of iterations
#' @param verbose If 0, xgboost will stay silent. If 1, xgboost will print
#' information of performance. If 2, xgboost will print information of both
#' performance and construction progress information
#' @param printEveryN Print every N progress messages when \code{verbose > 0}. Default is 1, which means all messages are printed.
#' @param missing Missing is only used when input is a dense matrix; pick a float
#' value that represents the missing value. Sometimes a dataset uses 0 or another extreme value to represent missing values.
#' @param early_stop_round If \code{NULL}, the early stopping function is not triggered.
#' If set to an integer \code{k}, training with a validation set will stop if the performance
#' keeps getting worse consecutively for \code{k} rounds.
#' @param early.stop.round An alternative to \code{early_stop_round}.
#' @param maximize If \code{feval} and \code{early_stop_round} are set, then \code{maximize} must be set as well.
#' \code{maximize=TRUE} means the larger the evaluation score the better.
#' @param ... other parameters to pass to \code{params}.
#'
#' @details
#' This is the modeling function for Xgboost.
#'
#' Parallelization is automatically enabled if \code{OpenMP} is present.
#'
#' The number of threads can also be manually specified via the \code{nthread} parameter.
#'
#' @examples
#' data(agaricus.train, package='xgboost')
@@ -38,14 +52,20 @@
#' train <- agaricus.train
#' test <- agaricus.test
#' bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
#'                eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
#' pred <- predict(bst, test$data)
#'
#' @export
#'
xgboost <- function(data = NULL, label = NULL, missing = NULL, params = list(), nrounds,
                    verbose = 1, printEveryN = 1L, early_stop_round = NULL, early.stop.round = NULL,
                    maximize = NULL, ...) {
  if (is.null(missing)) {
    dtrain <- xgb.get.DMatrix(data, label)
  } else {
    dtrain <- xgb.get.DMatrix(data, label, missing)
  }

  params <- append(params, list(...))

  if (verbose > 0) {
@@ -54,7 +74,9 @@ xgboost <- function(data = NULL, label = NULL, params = list(), nrounds,
    watchlist <- list()
  }

  bst <- xgb.train(params, dtrain, nrounds, watchlist, verbose = verbose, printEveryN = printEveryN,
                   early_stop_round = early_stop_round,
                   early.stop.round = early.stop.round,
                   maximize = maximize)  # forward maximize as well, so early stopping knows the direction

  return(bst)
}
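A sketch of the new `missing` argument of this wrapper; the sentinel value here is illustrative, reusing `train` from the example above:

```r
# Dense input where -999 (illustrative) encodes missing cells
bst <- xgboost(data = as.matrix(train$data), label = train$label, missing = -999,
               max.depth = 2, eta = 1, nthread = 2, nround = 2,
               objective = "binary:logistic")
```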
@@ -69,7 +91,7 @@ xgboost <- function(data = NULL, label = NULL, params = list(), nrounds,
#'
#' \itemize{
#' \item \code{label} the label for each record
#' \item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
#' }
#'
#' @references
@@ -96,7 +118,7 @@ NULL
#'
#' \itemize{
#' \item \code{label} the label for each record
#' \item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
#' }
#'
#' @references
@@ -111,5 +133,5 @@ NULL
#' @name agaricus.test
#' @usage data(agaricus.test)
#' @format A list containing a label vector, and a dgCMatrix object with 1611
#' rows and 126 variables
NULL
@@ -2,11 +2,10 @@

## Installation

For the up-to-date version (which is recommended), please install from GitHub. Windows users will need to install [RTools](http://cran.r-project.org/bin/windows/Rtools/) first.

```r
devtools::install_github('dmlc/xgboost', subdir = 'R-package')
```

For the stable version on CRAN, please run

@@ -17,5 +16,5 @@ install.packages('xgboost')

## Examples

* Please visit the [walk through example](demo).
* See also the [example scripts](../demo/kaggle-higgs) for the Kaggle Higgs Challenge, including a [speedtest script](../demo/kaggle-higgs/speedtest.R) on this dataset, and the ones related to the [Otto challenge](../demo/kaggle-otto), including an [RMarkdown documentation](../demo/kaggle-otto/understandingXGBoostModel.Rmd).
Binary file not shown.
@@ -4,3 +4,7 @@ boost_from_prediction Boosting from existing prediction
predict_first_ntree Predicting using first n trees
generalized_linear_model Generalized Linear Model
cross_validation Cross validation
create_sparse_matrix Create Sparse Matrix
predict_leaf_indices Predicting the corresponding leaves
early_stopping Early Stop in training
poisson_regression Poisson Regression on count data
@@ -6,6 +6,7 @@ XGBoost R Feature Walkthrough
* [Predicting using first n trees](predict_first_ntree.R)
* [Generalized Linear Model](generalized_linear_model.R)
* [Cross validation](cross_validation.R)
* [Create a sparse matrix from a dense one](create_sparse_matrix.R)

Benchmarks
====
@@ -13,5 +14,5 @@ Benchmarks

Notes
====
* Contributions of examples and benchmarks are more than welcome!
* If you would like to share how you use xgboost to solve your problem, send a pull request :)
@@ -16,27 +16,28 @@ class(train$data)
# use a sparse matrix when your features are sparse (e.g. when you are using one-hot encoded vectors)
print("training xgboost with sparseMatrix")
bst <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nround = 2,
               nthread = 2, objective = "binary:logistic")
# alternatively, you can put in a dense matrix, i.e. a basic R matrix
print("training xgboost with Matrix")
bst <- xgboost(data = as.matrix(train$data), label = train$label, max.depth = 2, eta = 1, nround = 2,
               nthread = 2, objective = "binary:logistic")

# you can also put in an xgb.DMatrix object, which stores the label, data and other metadata needed for advanced features
print("training xgboost with xgb.DMatrix")
dtrain <- xgb.DMatrix(data = train$data, label = train$label)
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nround = 2, nthread = 2,
               objective = "binary:logistic")

# Verbose = 0,1,2
print('train xgboost with verbose 0, no message')
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nround = 2,
               nthread = 2, objective = "binary:logistic", verbose = 0)
print('train xgboost with verbose 1, print evaluation metric')
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nround = 2,
               nthread = 2, objective = "binary:logistic", verbose = 1)
print('train xgboost with verbose 2, also print information about tree')
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nround = 2,
               nthread = 2, objective = "binary:logistic", verbose = 2)

# you can also specify data as a file path to a LibSVM format input
# since we do not have this file with us, the following line is just for illustration
@@ -58,6 +59,14 @@ pred2 <- predict(bst2, test$data)
# pred2 should be identical to pred
print(paste("sum(abs(pred2-pred))=", sum(abs(pred2-pred))))

# save the model to R's raw vector
raw <- xgb.save.raw(bst)
# load the binary model back into R
bst3 <- xgb.load(raw)
pred3 <- predict(bst3, test$data)
# pred3 should be identical to pred
print(paste("sum(abs(pred3-pred))=", sum(abs(pred3-pred))))

#----------------Advanced features --------------
# to use advanced features, we need to put data in xgb.DMatrix
dtrain <- xgb.DMatrix(data = train$data, label = train$label)
@@ -69,25 +78,28 @@ watchlist <- list(train=dtrain, test=dtest)
# watchlist allows us to monitor the evaluation result on all data in the list
print('train xgboost using xgb.train with watchlist')
bst <- xgb.train(data = dtrain, max.depth = 2, eta = 1, nround = 2, watchlist = watchlist,
                 nthread = 2, objective = "binary:logistic")
# we can change evaluation metrics, or use multiple evaluation metrics
print('train xgboost using xgb.train with watchlist, watch logloss and error')
bst <- xgb.train(data = dtrain, max.depth = 2, eta = 1, nround = 2, watchlist = watchlist,
                 eval.metric = "error", eval.metric = "logloss",
                 nthread = 2, objective = "binary:logistic")

# xgb.DMatrix can also be saved using xgb.DMatrix.save
xgb.DMatrix.save(dtrain, "dtrain.buffer")
# to load it in, simply call xgb.DMatrix
dtrain2 <- xgb.DMatrix("dtrain.buffer")
bst <- xgb.train(data = dtrain2, max.depth = 2, eta = 1, nround = 2, watchlist = watchlist,
                 nthread = 2, objective = "binary:logistic")
# information can be extracted from xgb.DMatrix using getinfo
label <- getinfo(dtest, "label")
pred <- predict(bst, dtest)
err <- as.numeric(sum(as.integer(pred > 0.5) != label))/length(label)
print(paste("test-error=", err))

# You can dump the tree you learned using xgb.dump into a text file
xgb.dump(bst, "dump.raw.txt", with.stats = T)

# Finally, you can check which features are the most important.
print("Most important features (look at column Gain):")
print(xgb.importance(feature_names = train$data@Dimnames[[2]], filename_dump = "dump.raw.txt"))
@@ -11,7 +11,7 @@ watchlist <- list(eval = dtest, train = dtrain)
#
print('start running example to start from an initial prediction')
# train xgboost for 1 round
param <- list(max.depth = 2, eta = 1, nthread = 2, silent = 1, objective = 'binary:logistic')
bst <- xgb.train(param, dtrain, 1, watchlist)
# Note: we need the margin value instead of the transformed prediction in set_base_margin
# predicting with output_margin=TRUE will always give you margin values before the logistic transformation
R-package/demo/create_sparse_matrix.R (new file, 89 lines)
@@ -0,0 +1,89 @@
require(xgboost)
require(Matrix)
require(data.table)
if (!require(vcd)) install.packages('vcd') # Available on CRAN; used for its dataset with categorical values.

# According to its documentation, Xgboost works only on numbers.
# Sometimes the dataset we have to work on has categorical data.
# A categorical variable is one which has a fixed number of values. For example, if for each observation a variable called "Colour" can have only "red", "blue" or "green" as its value, it is a categorical variable.
#
# In R, a categorical variable is called a factor.
# Type ?factor in the console for more information.
#
# In this demo we will see how to transform a dense dataframe with categorical variables into a sparse matrix before analyzing it with Xgboost.
# The method we are going to use is usually called "one-hot encoding".

# load the Arthritis dataset in memory.
data(Arthritis)

# create a copy of the dataset with the data.table package (data.table is 100% compliant with R dataframes, but its syntax is a lot more consistent and its performance is really good).
df <- data.table(Arthritis, keep.rownames = F)

# Let's have a look at the data.table
cat("Print the dataset\n")
print(df)

# 2 columns have factor type, one has ordinal type (an ordinal variable is a categorical variable whose values can be ordered, here: None > Some > Marked).
cat("Structure of the dataset\n")
str(df)

# Let's add some new categorical features to see if it helps. Of course these features are highly correlated to the Age feature. Usually that's not a good thing in ML, but tree algorithms (including boosted trees) are able to select the best features, even in the case of highly correlated features.

# For the first feature we create groups of age by rounding the real age. Note that we transform it to a factor (categorical data) so the algorithm treats the groups as independent values.
df[, AgeDiscret := as.factor(round(Age / 10, 0))]

# Here is an even stronger simplification of the real age, with an arbitrary split at 30 years old. I chose this value based on nothing. We will see later if simplifying the information based on arbitrary values is a good strategy (I am sure you already have an idea of how well it will work!).
df[, AgeCat := as.factor(ifelse(Age > 30, "Old", "Young"))]

# We remove ID as there is nothing to learn from this feature (it would just add some noise, as the dataset is small).
df[, ID := NULL]

# List the different values for the column Treatment: Placebo, Treated.
cat("Values of the categorical feature Treatment\n")
print(levels(df[, Treatment]))

# Next step, we will transform the categorical data into dummy variables.
# This method is also called one-hot encoding.
# The purpose is to transform each value of each categorical feature into one binary feature.
#
# For example, the column Treatment will be replaced by two columns, Placebo and Treated. Each of them will be binary. An observation which had the value Placebo in column Treatment before the transformation will have, after the transformation, the value 1 in the new column Placebo and the value 0 in the new column Treated.
#
# The formula Improved~.-1 used below means: transform all categorical features except column Improved to binary values.
# Column Improved is excluded because it will be our output column, the one we want to predict.
sparse_matrix <- sparse.model.matrix(Improved ~ . - 1, data = df)

cat("Encoding of the sparse Matrix\n")
print(sparse_matrix)

# Create the output vector (not sparse)
# 1. Set, for all rows, the field in the Y column to 0;
# 2. set Y to 1 when Improved == Marked;
# 3. return the Y column.
output_vector <- df[, Y := 0][Improved == "Marked", Y := 1][, Y]

# What follows is the same process as in the other demos
cat("Learning...\n")
bst <- xgboost(data = sparse_matrix, label = output_vector, max.depth = 9,
               eta = 1, nthread = 2, nround = 10, objective = "binary:logistic")
xgb.dump(bst, 'xgb.model.dump', with.stats = T)

# sparse_matrix@Dimnames[[2]] represents the column names of the sparse matrix.
importance <- xgb.importance(sparse_matrix@Dimnames[[2]], 'xgb.model.dump')
print(importance)
# According to this matrix, the most important feature in this dataset for predicting whether the treatment will work is Age. The second most important feature is having received a placebo or not. Sex comes third. Then come our generated features (AgeDiscret). We can see that their contribution is very low (Gain column).

# Do these results make sense?
# Let's check the Chi-squared statistic between each of these features and the outcome.

print(chisq.test(df$Age, df$Y))
# The Pearson chi-squared statistic between Age and the illness disappearing is about 35.

print(chisq.test(df$AgeDiscret, df$Y))
# Our first simplification of Age gives a chi-squared statistic of about 8.

print(chisq.test(df$AgeCat, df$Y))
# The arbitrary split I made between young and old at 30 years old gives a low chi-squared statistic of about 2. It's a result we may expect: maybe in my mind being over 30 is being old (I am 32 and starting to feel old, this may explain it), but for the illness we are studying, the age of vulnerability is not the same. Don't let your "gut" lower the quality of your model. In "data science", there is science :-)

# As you can see, in general, destroying information by simplifying it won't improve your model. The Chi-squared tests just demonstrated that. But in more complex cases, creating a new feature based on an existing one which makes the link with the outcome more obvious may help the algorithm and improve the model. The case studied here is not complex enough to show that. Check the Kaggle forums for some challenging datasets.
# However, it's almost always worse when you add some arbitrary rules.
# Moreover, notice that even though we added some new features which are not useful and are highly correlated with other features, the boosted tree algorithm was still able to choose the best one, which in this case is Age. A linear model may not be that robust in these scenarios.
|
|||||||
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
|
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
|
||||||
|
|
||||||
nround <- 2
|
nround <- 2
|
||||||
param <- list(max.depth=2,eta=1,silent=1,objective='binary:logistic')
|
param <- list(max.depth=2,eta=1,silent=1,nthread = 2, objective='binary:logistic')
|
||||||
|
|
||||||
cat('running cross validation\n')
|
cat('running cross validation\n')
|
||||||
# do cross validation, this will print result out as
|
# do cross validation, this will print result out as
|
||||||
@@ -19,7 +19,7 @@ cat('running cross validation, disable standard deviation display\n')
|
|||||||
# [iteration] metric_name:mean_value+std_value
|
# [iteration] metric_name:mean_value+std_value
|
||||||
# std_value is standard deviation of the metric
|
# std_value is standard deviation of the metric
|
||||||
xgb.cv(param, dtrain, nround, nfold=5,
|
xgb.cv(param, dtrain, nround, nfold=5,
|
||||||
metrics={'error'}, , showsd = FALSE)
|
metrics={'error'}, showsd = FALSE)
|
||||||
|
|
||||||
###
|
###
|
||||||
# you can also do cross validation with cutomized loss function
|
# you can also do cross validation with cutomized loss function
|
||||||
@@ -45,3 +45,7 @@ param <- list(max.depth=2,eta=1,silent=1)
|
|||||||
xgb.cv(param, dtrain, nround, nfold = 5,
|
xgb.cv(param, dtrain, nround, nfold = 5,
|
||||||
obj = logregobj, feval=evalerror)
|
obj = logregobj, feval=evalerror)
|
||||||
|
|
||||||
|
# do cross validation with prediction values for each fold
|
||||||
|
res <- xgb.cv(param, dtrain, nround, nfold=5, prediction = TRUE)
|
||||||
|
res$dt
|
||||||
|
length(res$pred)
|
||||||
|
|||||||
@@ -8,7 +8,7 @@ dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
# note: for a customized objective function, we leave objective as default
# note: what we are getting is the margin value in prediction
# you must know what you are doing
param <- list(max.depth = 2, eta = 1, nthread = 2, silent = 1)
watchlist <- list(eval = dtest, train = dtrain)
num_round <- 2

@@ -37,3 +37,26 @@ print ('start training with user customized objective')
# training with a customized objective; we can also do step-by-step training
# simply look at xgboost.py's implementation of train
bst <- xgb.train(param, dtrain, num_round, watchlist, logregobj, evalerror)

#
# there can be cases where you want additional information
# to be considered besides the properties of the DMatrix you can get by getinfo
# you can set additional information as attributes of the DMatrix

# set the label attribute of dtrain to be its label; we use label as an example, it can be anything
attr(dtrain, 'label') <- getinfo(dtrain, 'label')
# this is a new customized objective, where you can access the things you set
# the same thing applies to a customized evaluation function
logregobjattr <- function(preds, dtrain) {
  # now you can access the attribute in the customized function
  labels <- attr(dtrain, 'label')
  preds <- 1/(1 + exp(-preds))
  grad <- preds - labels
  hess <- preds * (1 - preds)
  return(list(grad = grad, hess = hess))
}

print('start training with user customized objective, with additional attributes in DMatrix')
# training with a customized objective; we can also do step-by-step training
# simply look at xgboost.py's implementation of train
bst <- xgb.train(param, dtrain, num_round, watchlist, logregobjattr, evalerror)
R-package/demo/early_stopping.R (new file, 39 lines)
@@ -0,0 +1,39 @@
require(xgboost)
# load in the agaricus dataset
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
# note: for a customized objective function, we leave objective as default
# note: what we are getting is the margin value in prediction
# you must know what you are doing
param <- list(max.depth = 2, eta = 1, nthread = 2, silent = 1)
watchlist <- list(eval = dtest)
num_round <- 20
# user-defined objective function: given the prediction, return gradient and second-order gradient
# this is log-likelihood loss
logregobj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  preds <- 1/(1 + exp(-preds))
  grad <- preds - labels
  hess <- preds * (1 - preds)
  return(list(grad = grad, hess = hess))
}
# user-defined evaluation function: return a pair (metric_name, result)
# NOTE: when you use a customized loss function, the default prediction value is the margin,
# which may make the built-in evaluation metrics not function properly.
# For example, we are doing logistic loss here: the prediction is the score before the logistic transformation,
# while the built-in evaluation error assumes the input is after the logistic transformation.
# Keep this in mind when you use the customization; you may need to write a customized evaluation function.
evalerror <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  err <- as.numeric(sum(labels != (preds > 0)))/length(labels)
  return(list(metric = "error", value = err))
}
print('start training with early stopping setting')
# training with a customized objective; we can also do step-by-step training
# simply look at xgboost.py's implementation of train
bst <- xgb.train(param, dtrain, num_round, watchlist, logregobj, evalerror, maximize = FALSE,
                 early.stop.round = 3)
bst <- xgb.cv(param, dtrain, num_round, nfold = 5, obj = logregobj, feval = evalerror,
              maximize = FALSE, early.stop.round = 3)
@@ -15,7 +15,7 @@ dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
# lambda is the L2 regularizer
# you can also set lambda_bias, which is the L2 regularizer on the bias term
param <- list(objective = "binary:logistic", booster = "gblinear",
              nthread = 2, alpha = 0.0001, lambda = 1)

# normally, you do not need to set eta (step_size)
# XGBoost uses a parallel coordinate descent algorithm (shotgun),
R-package/demo/poisson_regression.R (new file, 7 lines)
@@ -0,0 +1,7 @@
require(xgboost)  # added: the demo needs xgboost loaded
data(mtcars)
head(mtcars)
# column 11 of mtcars is carb, a count outcome, so we fit a Poisson objective
bst <- xgboost(data = as.matrix(mtcars[, -11]), label = mtcars[, 11],
               objective = 'count:poisson', nrounds = 5)
pred <- predict(bst, as.matrix(mtcars[, -11]))
# RMSE of the in-sample predictions
sqrt(mean((pred - mtcars[, 11])^2))
@@ -10,7 +10,7 @@ watchlist <- list(eval = dtest, train = dtrain)
nround = 2

# training the model for two rounds
bst = xgb.train(param, dtrain, nround, nthread = 2, watchlist)
cat('start testing prediction from first n trees\n')
labels <- getinfo(dtest, 'label')
R-package/demo/predict_leaf_indices.R (new file, 21 lines)
@@ -0,0 +1,21 @@
require(xgboost)
# load in the agaricus dataset
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)

param <- list(max.depth = 2, eta = 1, silent = 1, objective = 'binary:logistic')
watchlist <- list(eval = dtest, train = dtrain)
nround = 5

# training the model for five rounds
bst = xgb.train(param, dtrain, nround, nthread = 2, watchlist)
cat('start testing prediction from first n trees\n')

### predict the leaf indices using the first 2 trees
pred_with_leaf = predict(bst, dtest, ntreelimit = 2, predleaf = TRUE)
head(pred_with_leaf)
# by default, we predict using all the trees
pred_with_leaf = predict(bst, dtest, predleaf = TRUE)
head(pred_with_leaf)
@@ -5,4 +5,7 @@ demo(boost_from_prediction)
demo(predict_first_ntree)
demo(generalized_linear_model)
demo(cross_validation)
demo(create_sparse_matrix)
demo(predict_leaf_indices)
demo(early_stopping)
demo(poisson_regression)
@@ -1,10 +1,11 @@
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/xgboost.R
\docType{data}
\name{agaricus.test}
\alias{agaricus.test}
\title{Test part from Mushroom Data Set}
\format{A list containing a label vector, and a dgCMatrix object with 1611
rows and 126 variables}
\usage{
data(agaricus.test)
}
@@ -17,7 +18,7 @@ This data set includes the following fields:

\itemize{
\item \code{label} the label for each record
\item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
}
}
\references{
@@ -1,4 +1,5 @@
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/xgboost.R
\docType{data}
\name{agaricus.train}
\alias{agaricus.train}
@@ -17,7 +18,7 @@ This data set includes the following fields:

\itemize{
\item \code{label} the label for each record
\item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
}
}
\references{
@@ -1,4 +1,5 @@
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/getinfo.xgb.DMatrix.R
\docType{methods}
\name{getinfo}
\alias{getinfo}
@@ -10,15 +11,25 @@ getinfo(object, ...)
\S4method{getinfo}{xgb.DMatrix}(object, name)
}
\arguments{
\item{object}{Object of class \code{xgb.DMatrix}}

\item{...}{other parameters}

\item{name}{the name of the field to get}
}
\description{
Get information of an xgb.DMatrix object
}
\details{
The information can be one of the following:

\itemize{
\item \code{label}: the label Xgboost learns from;
\item \code{weight}: to do a weight rescale;
\item \code{base_margin}: the base prediction Xgboost will boost from;
\item \code{nrow}: number of rows of the \code{xgb.DMatrix}.
}
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
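The example block is cut off by the diff context here; a plausible continuation, matching how getinfo is used in the demos above:

```r
dtrain <- xgb.DMatrix(train$data, label = train$label)
labels <- getinfo(dtrain, 'label')  # read back the label field set above
```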
R-package/man/nrow-xgb.DMatrix-method.Rd (new file, 22 lines)
@@ -0,0 +1,22 @@
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/nrow.xgb.DMatrix.R
\docType{methods}
\name{nrow,xgb.DMatrix-method}
\alias{nrow,xgb.DMatrix-method}
\title{Number of xgb.DMatrix rows}
\usage{
\S4method{nrow}{xgb.DMatrix}(x)
}
\arguments{
\item{x}{Object of class \code{xgb.DMatrix}}
}
\description{
\code{nrow} returns the number of rows present in the \code{xgb.DMatrix}.
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(train$data, label=train$label)
stopifnot(nrow(dtrain) == nrow(train$data))
}
@@ -1,11 +1,12 @@
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/predict.xgb.Booster.R
\docType{methods}
\name{predict,xgb.Booster-method}
\alias{predict,xgb.Booster-method}
\title{Predict method for eXtreme Gradient Boosting model}
\usage{
\S4method{predict}{xgb.Booster}(object, newdata, missing = NULL,
  outputmargin = FALSE, ntreelimit = NULL, predleaf = FALSE)
}
\arguments{
\item{object}{Object of class \code{xgb.Booster}}
@@ -13,6 +14,9 @@
\item{newdata}{takes \code{matrix}, \code{dgCMatrix}, local data file or
\code{xgb.DMatrix}.}

\item{missing}{Missing is only used when input is a dense matrix; pick a float
value that represents the missing value. Sometimes a dataset uses 0 or another extreme value to represent missing values.}

\item{outputmargin}{whether the prediction should be shown in the original
value of the sum of functions; when outputmargin=TRUE, the prediction is
the untransformed margin value. In logistic regression, outputmargin=T will
@@ -21,6 +25,8 @@ output value before logistic transformation.}
\item{ntreelimit}{limit the number of trees used in prediction; this parameter is
only valid for gbtree, but not for gblinear. Set it to a value bigger
than 0. It will use all trees by default.}

\item{predleaf}{whether to predict leaf indices instead. If set to TRUE, the output will be a matrix object.}
}
\description{
Predicted values based on an xgboost model object.
@@ -31,7 +37,7 @@ data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
pred <- predict(bst, test$data)
}
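A short sketch of the two arguments documented above, reusing `bst` and `test` from the example:

```r
pred_margin <- predict(bst, test$data, outputmargin = TRUE)  # untransformed scores
pred_leaf   <- predict(bst, test$data, predleaf = TRUE)      # matrix of leaf indices, one column per tree
```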
R-package/man/predict-xgb.Booster.handle-method.Rd (new file, 18 lines)
@@ -0,0 +1,18 @@
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/predict.xgb.Booster.handle.R
\docType{methods}
\name{predict,xgb.Booster.handle-method}
\alias{predict,xgb.Booster.handle-method}
\title{Predict method for eXtreme Gradient Boosting model handle}
\usage{
\S4method{predict}{xgb.Booster.handle}(object, ...)
}
\arguments{
\item{object}{Object of class \code{xgb.Booster.handle}}

\item{...}{Parameters passed to \code{predict.xgb.Booster}}
}
\description{
Predicted values based on an xgb.Booster.handle object.
}
@@ -1,4 +1,5 @@
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/setinfo.xgb.DMatrix.R
\docType{methods}
\name{setinfo}
\alias{setinfo}
@@ -12,15 +13,25 @@ setinfo(object, ...)
\arguments{
\item{object}{Object of class \code{xgb.DMatrix}}

\item{...}{other parameters}

\item{name}{the name of the field to set}

\item{info}{the specific field of information to set}
}
\description{
Set information of an xgb.DMatrix object
}
\details{
It can be one of the following:

\itemize{
\item \code{label}: the label Xgboost learns from;
\item \code{weight}: to do a weight rescale;
\item \code{base_margin}: the base prediction Xgboost will boost from;
\item \code{group}.
}
}
\examples{
data(agaricus.train, package='xgboost')
train <- agaricus.train
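The example is truncated by the diff context; a plausible continuation showing one settable field from the list above (the weight values are illustrative):

```r
dtrain <- xgb.DMatrix(train$data, label = train$label)
setinfo(dtrain, 'weight', rep(1, nrow(dtrain)))  # uniform instance weights (illustrative)
```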
@@ -1,4 +1,5 @@
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/slice.xgb.DMatrix.R
\docType{methods}
\name{slice}
\alias{slice}
@@ -13,9 +14,9 @@ slice(object, ...)
\arguments{
\item{object}{Object of class \code{xgb.DMatrix}}

\item{...}{other parameters}

\item{idxset}{an integer vector of indices of rows needed}
}
\description{
Get a new DMatrix containing the specified rows of
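A minimal sketch of the call, assuming a `dtrain` built as in the other examples; the index set is illustrative:

```r
# Keep only the first 100 rows of the DMatrix
dsub <- slice(dtrain, 1:100)
```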
@@ -1,4 +1,5 @@
-% Generated by roxygen2 (4.0.1): do not edit by hand
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.DMatrix.R
 \name{xgb.DMatrix}
 \alias{xgb.DMatrix}
 \title{Construct xgb.DMatrix object}
@@ -11,7 +12,8 @@ indicating the data file.}
 
 \item{info}{a list of information of the xgb.DMatrix object}
 
-\item{missing}{Missing is only used when input is dense matrix, pick a float}
+\item{missing}{Missing is only used when the input is a dense matrix; pick a float
+value that represents a missing value. Sometimes a dataset uses 0 or another extreme value to represent missing values.}
 
 \item{...}{other information to pass to \code{info}.}
 }
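A small hedged sketch of the \code{missing} argument described above (the 0-as-missing coding is purely illustrative):

# Dense matrix where 0 encodes a missing value, per the dataset's convention
dense <- matrix(c(1, 0, 2,
                  0, 3, 0), nrow = 2, byrow = TRUE)
dmat <- xgb.DMatrix(dense, label = c(0, 1), missing = 0)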
@@ -1,4 +1,5 @@
-% Generated by roxygen2 (4.0.1): do not edit by hand
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.DMatrix.save.R
 \name{xgb.DMatrix.save}
 \alias{xgb.DMatrix.save}
 \title{Save xgb.DMatrix object to binary file}
@@ -1,10 +1,14 @@
-% Generated by roxygen2 (4.0.1): do not edit by hand
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.cv.R
 \name{xgb.cv}
 \alias{xgb.cv}
 \title{Cross Validation}
 \usage{
-xgb.cv(params = list(), data, nrounds, nfold, label = NULL, showsd = TRUE,
-  metrics = list(), obj = NULL, feval = NULL, ...)
+xgb.cv(params = list(), data, nrounds, nfold, label = NULL,
+  missing = NULL, prediction = FALSE, showsd = TRUE, metrics = list(),
+  obj = NULL, feval = NULL, stratified = TRUE, folds = NULL,
+  verbose = T, early_stop_round = NULL, early.stop.round = NULL,
+  maximize = NULL, ...)
 }
 \arguments{
 \item{params}{the list of parameters. Commonly used ones are:
@@ -19,18 +23,23 @@ xgb.cv(params = list(), data, nrounds, nfold, label = NULL, showsd = TRUE,
 \item \code{nthread} number of threads used in training; if not set, all threads are used
 }
 
-See \url{https://github.com/tqchen/xgboost/wiki/Parameters} for
-further details. See also demo/ for walkthrough example in R.}
+See \link{xgb.train} for further details.
+See also demo/ for a walkthrough example in R.}
 
-\item{data}{takes an \code{xgb.DMatrix} as the input.}
+\item{data}{takes an \code{xgb.DMatrix} or \code{Matrix} as the input.}
 
 \item{nrounds}{the max number of iterations}
 
-\item{nfold}{number of folds used}
+\item{nfold}{the original dataset is randomly partitioned into \code{nfold} equal-size subsamples.}
 
-\item{label}{option field, when data is Matrix}
+\item{label}{optional field, when data is \code{Matrix}}
 
-\item{showsd}{boolean, whether show standard deviation of cross validation}
+\item{missing}{Missing is only used when the input is a dense matrix; pick a float
+value that represents a missing value. Sometimes a dataset uses 0 or another extreme value to represent missing values.}
+
+\item{prediction}{A logical value indicating whether to return the prediction vector.}
+
+\item{showsd}{\code{boolean}, whether to show the standard deviation of cross validation}
 
 \item{metrics}{list of evaluation metrics to be used in cross validation,
 when it is not specified, the evaluation metric is chosen according to objective function.
@@ -44,29 +53,58 @@ xgb.cv(params = list(), data, nrounds, nfold, label = NULL, showsd = TRUE,
 }}
 
 \item{obj}{customized objective function. Returns gradient and second order
-gradient with given prediction and dtrain,}
+gradient with given prediction and dtrain.}
 
 \item{feval}{customized evaluation function. Returns
 \code{list(metric='metric-name', value='metric-value')} with given
-prediction and dtrain,}
+prediction and dtrain.}
+
+\item{stratified}{\code{boolean}, whether sampling of folds should be stratified by the values of labels in \code{data}}
+
+\item{folds}{\code{list} provides a possibility of using a list of pre-defined CV folds (each element must be a vector of fold indices).
+If folds are supplied, the nfold and stratified parameters will be ignored.}
+
+\item{verbose}{\code{boolean}, print the statistics during the process}
+
+\item{early_stop_round}{If \code{NULL}, the early stopping function is not triggered.
+If set to an integer \code{k}, training with a validation set will stop if the performance
+keeps getting worse consecutively for \code{k} rounds.}
+
+\item{early.stop.round}{An alternative to \code{early_stop_round}.}
+
+\item{maximize}{If \code{feval} and \code{early_stop_round} are set, then \code{maximize} must be set as well.
+\code{maximize=TRUE} means the larger the evaluation score the better.}
 
 \item{...}{other parameters to pass to \code{params}.}
 }
+\value{
+If \code{prediction = TRUE}, a list with the following elements is returned:
+\itemize{
+  \item \code{dt} a \code{data.table} with each mean and standard deviation stat for the training set and test set
+  \item \code{pred} an array or matrix (for multiclass classification) with predictions for each CV-fold for the model having been trained on the data in all other folds.
+}
+
+If \code{prediction = FALSE}, just a \code{data.table} with each mean and standard deviation stat for the training set and test set is returned.
+}
 \description{
 The cross validation function of xgboost
 }
 \details{
-This is the cross validation function for xgboost
+The original sample is randomly partitioned into \code{nfold} equal-size subsamples.
 
-Parallelization is automatically enabled if OpenMP is present.
-Number of threads can also be manually specified via "nthread" parameter.
+Of the \code{nfold} subsamples, a single subsample is retained as the validation data for testing the model, and the remaining \code{nfold - 1} subsamples are used as training data.
 
-This function only accepts an \code{xgb.DMatrix} object as the input.
+The cross-validation process is then repeated \code{nfold} times, with each of the \code{nfold} subsamples used exactly once as the validation data.
+
+All observations are used for both training and validation.
+
+Adapted from \url{http://en.wikipedia.org/wiki/Cross-validation_\%28statistics\%29#k-fold_cross-validation}
 }
 \examples{
 data(agaricus.train, package='xgboost')
 dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
-history <- xgb.cv(data = dtrain, nround = 3, nfold = 5, metrics = list("rmse","auc"),
-                  "max.depth" = 3, "eta" = 1, "objective" = "binary:logistic")
+history <- xgb.cv(data = dtrain, nround = 3, nthread = 2, nfold = 5, metrics = list("rmse","auc"),
+                  max.depth = 3, eta = 1, objective = "binary:logistic")
+print(history)
 }
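A hedged sketch combining the new arguments (stratified folds, out-of-fold predictions, early stopping); return fields follow the \value section above, though exact behavior may vary across versions:

data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
res <- xgb.cv(data = dtrain, nround = 20, nfold = 5,
              stratified = TRUE,      # stratify folds by label
              prediction = TRUE,      # also return out-of-fold predictions
              early_stop_round = 3,   # stop after 3 non-improving rounds
              maximize = FALSE,
              metrics = list("error"),
              max.depth = 3, eta = 1, objective = "binary:logistic")
res$dt    # per-round mean/sd for train and test folds
res$pred  # prediction for each observation from its held-out fold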
@@ -1,21 +1,30 @@
-% Generated by roxygen2 (4.0.1): do not edit by hand
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.dump.R
 \name{xgb.dump}
 \alias{xgb.dump}
 \title{Save xgboost model to text file}
 \usage{
-xgb.dump(model, fname, fmap = "")
+xgb.dump(model = NULL, fname = NULL, fmap = "", with.stats = FALSE)
 }
 \arguments{
 \item{model}{the model object.}
 
-\item{fname}{the name of the binary file.}
+\item{fname}{the name of the text file where to save the model text dump. If not provided or set to \code{NULL}, the function will return the model as a \code{character} vector.}
 
 \item{fmap}{feature map file representing the type of feature.
 Detailed description could be found at
-\url{https://github.com/tqchen/xgboost/wiki/Binary-Classification#dump-model}.
+\url{https://github.com/dmlc/xgboost/wiki/Binary-Classification#dump-model}.
 See demo/ for walkthrough example in R, and
-\url{https://github.com/tqchen/xgboost/blob/master/demo/data/featmap.txt}
+\url{https://github.com/dmlc/xgboost/blob/master/demo/data/featmap.txt}
 for example format.}
+
+\item{with.stats}{whether to dump statistics of splits.
+When this option is on, the model dump comes with two additional statistics:
+gain is the approximate loss function gain we get in each split;
+cover is the sum of second order gradient in each node.}
+}
+\value{
+If fname is not provided or set to \code{NULL}, the function will return the model as a \code{character} vector. Otherwise it will return \code{TRUE}.
 }
 \description{
 Save an xgboost model to text file. Could be parsed later.
@@ -26,7 +35,11 @@ data(agaricus.test, package='xgboost')
 train <- agaricus.train
 test <- agaricus.test
 bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
-               eta = 1, nround = 2, objective = "binary:logistic")
-xgb.dump(bst, 'xgb.model.dump')
+               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+# save the model in file 'xgb.model.dump'
+xgb.dump(bst, 'xgb.model.dump', with.stats = TRUE)
+
+# print the model without saving it to a file
+print(xgb.dump(bst))
 }
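To see the effect of \code{with.stats} directly (a hedged sketch; the gain/cover numbers shown in the comment are only illustrative of the dump format):

data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")
dump <- xgb.dump(bst, with.stats = TRUE)   # returned as a character vector
# Split lines now carry the extra fields, e.g. "... gain=4000.5,cover=1628.2"
grep("gain=", dump, value = TRUE)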
R-package/man/xgb.importance.Rd (new file)
@@ -0,0 +1,70 @@
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.importance.R
+\name{xgb.importance}
+\alias{xgb.importance}
+\title{Show importance of features in a model}
+\usage{
+xgb.importance(feature_names = NULL, filename_dump = NULL, model = NULL,
+  data = NULL, label = NULL, target = function(x) ((x + label) == 2))
+}
+\arguments{
+\item{feature_names}{names of each feature as a character vector. Can be extracted from a sparse matrix (see example). If the model dump already contains feature names, this argument should be \code{NULL}.}
+
+\item{filename_dump}{the path to the text file storing the model. The model dump must include the gain per feature and per tree (\code{with.stats = T} in function \code{xgb.dump}).}
+
+\item{model}{generated by the \code{xgb.train} function. Avoids the creation of a dump file.}
+
+\item{data}{the dataset used for the training step. Will be used with the \code{label} parameter for co-occurrence computation. More information in the \code{Details} part. This parameter is optional.}
+
+\item{label}{the label vector used for the training step. Will be used with the \code{data} parameter for co-occurrence computation. More information in the \code{Details} part. This parameter is optional.}
+
+\item{target}{a function which returns \code{TRUE} or \code{1} when an observation should be counted as a co-occurrence and \code{FALSE} or \code{0} otherwise. A default function is provided for computing co-occurrences in a binary classification. The \code{target} function should have only one parameter. This parameter will be used to provide each important feature vector after the split condition has been applied, therefore these vectors will be made of 0 and 1 only, whatever the information was before. More information in the \code{Details} part. This parameter is optional.}
+}
+\value{
+A \code{data.table} of the features used in the model with their average gain (and their weight for boosted tree models) in the model.
+}
+\description{
+Read an xgboost model text dump.
+Can be a tree or a linear model (text dumps of linear models are only supported in the dev version of \code{Xgboost} for now).
+}
+\details{
+This is the function to understand the model trained (and through your model, your data).
+
+Results are returned for both linear and tree models.
+
+A \code{data.table} is returned by the function.
+There are 4 columns:
+\itemize{
+  \item \code{Features} name of the features as provided in \code{feature_names} or already present in the model dump ;
+  \item \code{Gain} contribution of each feature to the model. For boosted tree models, each gain of each feature of each tree is taken into account, then averaged per feature to give a vision of the entire model. The highest percentage means the most important feature to predict the \code{label} used for the training ;
+  \item \code{Cover} metric of the number of observations related to this feature (only available for tree models) ;
+  \item \code{Weight} percentage representing the relative number of times a feature has been used in trees. \code{Gain} should be preferred to search the most important feature. For boosted linear models, this column has no meaning.
+}
+
+Co-occurrence count
+------------------
+
+The gain gives an indication of how important a feature is in making a branch of a decision tree purer. However, with this information only, you can't know if this feature has to be present or not to get a specific classification. In the example code, you may wonder if odor=none should be \code{TRUE} to not eat a mushroom.
+
+Co-occurrence computation is here to help in understanding this relation between a predictor and a specific class. It will count how many observations are returned as \code{TRUE} by the \code{target} function (see parameters). When you execute the example below, there are only 92 cases over the 3140 observations of the train dataset where a mushroom has no odor and can be eaten safely.
+
+If you need to remember one thing only: unless you want to leave us early, don't eat a mushroom which has no odor :-)
+}
+\examples{
+data(agaricus.train, package='xgboost')
+
+# Both datasets are lists with two items, a sparse matrix and labels
+# (labels = outcome column which will be learned).
+# Each column of the sparse Matrix is a feature in one hot encoding format.
+train <- agaricus.train
+
+bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
+               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+
+# train$data@Dimnames[[2]] represents the column names of the sparse matrix.
+xgb.importance(train$data@Dimnames[[2]], model = bst)
+
+# Same thing with co-occurrence computation this time
+xgb.importance(train$data@Dimnames[[2]], model = bst, data = train$data, label = train$label)
+}
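A hedged sketch of a custom \code{target} function, mirroring the default \code{function(x) ((x + label) == 2)}: here the opposite co-occurrence is counted (split condition met while the label is 0). As with the default, \code{label} is assumed to be visible to the function when it is evaluated.

data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

# Count observations where the split condition holds but the label is 0
inverse_target <- function(x) (x == 1) & (label == 0)
xgb.importance(train$data@Dimnames[[2]], model = bst,
               data = train$data, label = train$label,
               target = inverse_target)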
@@ -1,4 +1,5 @@
-% Generated by roxygen2 (4.0.1): do not edit by hand
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.load.R
 \name{xgb.load}
 \alias{xgb.load}
 \title{Load xgboost model from binary file}
@@ -17,7 +18,7 @@ data(agaricus.test, package='xgboost')
 train <- agaricus.train
 test <- agaricus.test
 bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
-               eta = 1, nround = 2, objective = "binary:logistic")
+               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
 xgb.save(bst, 'xgb.model')
 bst <- xgb.load('xgb.model')
 pred <- predict(bst, test$data)
R-package/man/xgb.model.dt.tree.Rd (new file)
@@ -0,0 +1,59 @@
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.model.dt.tree.R
+\name{xgb.model.dt.tree}
+\alias{xgb.model.dt.tree}
+\title{Convert tree model dump to data.table}
+\usage{
+xgb.model.dt.tree(feature_names = NULL, filename_dump = NULL,
+  model = NULL, text = NULL, n_first_tree = NULL)
+}
+\arguments{
+\item{feature_names}{names of each feature as a character vector. Can be extracted from a sparse matrix (see example). If the model dump already contains feature names, this argument should be \code{NULL}.}
+
+\item{filename_dump}{the path to the text file storing the model. The model dump must include the gain per feature and per tree (parameter \code{with.stats = T} in function \code{xgb.dump}).}
+
+\item{model}{dump generated by the \code{xgb.train} function. Avoids the creation of a dump file.}
+
+\item{text}{dump generated by the \code{xgb.dump} function. Avoids the creation of a dump file. The model dump must include the gain per feature and per tree (parameter \code{with.stats = T} in function \code{xgb.dump}).}
+
+\item{n_first_tree}{limit the parsing to the first n trees. If \code{NULL}, all trees of the model are parsed. Performance can be low for huge models.}
+}
+\value{
+A \code{data.table} of the features used in the model with their gain, cover and a few other things.
+}
+\description{
+Read a tree model text dump and return a data.table.
+}
+\details{
+General function to convert a text dump of a tree model to a \code{data.table}. The purpose is to help the user explore the model and get a better understanding of it.
+
+The content of the \code{data.table} is organised this way:
+
+\itemize{
+  \item \code{ID}: unique identifier of a node ;
+  \item \code{Feature}: feature used in the tree to operate a split. When Leaf is indicated, it is the end of a branch ;
+  \item \code{Split}: value of the chosen feature at which the split is operated ;
+  \item \code{Yes}: ID of the next node in the branch when the split condition is met ;
+  \item \code{No}: ID of the next node in the branch when the split condition is not met ;
+  \item \code{Missing}: ID of the next node in the branch for observations where the feature used for the split is not provided ;
+  \item \code{Quality}: the gain related to the split in this specific node ;
+  \item \code{Cover}: metric to measure the number of observations affected by the split ;
+  \item \code{Tree}: ID of the tree. It is included in the main ID ;
+  \item \code{Yes.X} or \code{No.X}: data related to the pointer in the \code{Yes} or \code{No} column.
+}
+}
+\examples{
+data(agaricus.train, package='xgboost')
+
+# Both datasets are lists with two items, a sparse matrix and labels
+# (labels = outcome column which will be learned).
+# Each column of the sparse Matrix is a feature in one hot encoding format.
+train <- agaricus.train
+
+bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
+               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+
+# agaricus.train$data@Dimnames[[2]] represents the column names of the sparse matrix.
+xgb.model.dt.tree(agaricus.train$data@Dimnames[[2]], model = bst)
+}
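Since a \code{data.table} is returned, the columns described above can be queried directly. A hedged sketch (the coercion guards against \code{Quality} being stored as character in some versions):

library(data.table)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")
dt <- xgb.model.dt.tree(train$data@Dimnames[[2]], model = bst)

# Drop leaves and rank the remaining splits by gain (the Quality column)
splits <- dt[Feature != "Leaf"]
head(splits[order(-as.numeric(Quality))], 5)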
R-package/man/xgb.plot.importance.Rd (new file)
@@ -0,0 +1,40 @@
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.plot.importance.R
+\name{xgb.plot.importance}
+\alias{xgb.plot.importance}
+\title{Plot feature importance bar graph}
+\usage{
+xgb.plot.importance(importance_matrix = NULL, numberOfClusters = c(1:10))
+}
+\arguments{
+\item{importance_matrix}{a \code{data.table} returned by the \code{xgb.importance} function.}
+
+\item{numberOfClusters}{a \code{numeric} vector containing the min and the max range of the possible number of clusters of bars.}
+}
+\value{
+A \code{ggplot2} bar graph representing each feature by a horizontal bar. The longer the bar, the more important the feature. Features are sorted and clustered by importance; the cluster is represented through the color of the bar.
+}
+\description{
+Read a data.table containing feature importance details and plot it.
+}
+\details{
+The purpose of this function is to easily represent the importance of each feature of a model.
+The function returns a ggplot graph, therefore each of its characteristics can be overridden (to customize it).
+In particular you may want to override the title of the graph. To do so, add \code{+ ggtitle("A GRAPH NAME")} next to the value returned by this function.
+}
+\examples{
+data(agaricus.train, package='xgboost')
+
+# Both datasets are lists with two items, a sparse matrix and labels
+# (labels = outcome column which will be learned).
+# Each column of the sparse Matrix is a feature in one hot encoding format.
+train <- agaricus.train
+
+bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
+               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+
+# train$data@Dimnames[[2]] represents the column names of the sparse matrix.
+importance_matrix <- xgb.importance(train$data@Dimnames[[2]], model = bst)
+xgb.plot.importance(importance_matrix)
+}
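Following the note above about overriding the title, a hedged sketch:

library(ggplot2)
data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")
importance_matrix <- xgb.importance(train$data@Dimnames[[2]], model = bst)

# The returned ggplot object can be customized like any other
xgb.plot.importance(importance_matrix) + ggtitle("Agaricus feature importance")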
R-package/man/xgb.plot.tree.Rd (new file)
@@ -0,0 +1,58 @@
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.plot.tree.R
+\name{xgb.plot.tree}
+\alias{xgb.plot.tree}
+\title{Plot a boosted tree model}
+\usage{
+xgb.plot.tree(feature_names = NULL, filename_dump = NULL, model = NULL,
+  n_first_tree = NULL, CSSstyle = NULL, width = NULL, height = NULL)
+}
+\arguments{
+\item{feature_names}{names of each feature as a character vector. Can be extracted from a sparse matrix (see example). If the model dump already contains feature names, this argument should be \code{NULL}.}
+
+\item{filename_dump}{the path to the text file storing the model. The model dump must include the gain per feature and per tree (parameter \code{with.stats = T} in function \code{xgb.dump}). It is also possible to provide a model directly (see the \code{model} argument).}
+
+\item{model}{generated by the \code{xgb.train} function. Avoids the creation of a dump file.}
+
+\item{n_first_tree}{limit the plot to the first n trees. If \code{NULL}, all trees of the model are plotted. Performance can be low for huge models.}
+
+\item{CSSstyle}{a \code{character} vector storing a CSS style to customize the appearance of nodes. Look at the \href{https://github.com/knsv/mermaid/wiki}{Mermaid wiki} for more information.}
+
+\item{width}{the width of the diagram in pixels.}
+
+\item{height}{the height of the diagram in pixels.}
+}
+\value{
+A \code{DiagrammeR} of the model.
+}
+\description{
+Read a tree model text dump.
+Plotting only works for boosted tree models (not linear models).
+}
+\details{
+The content of each node is organised this way:
+
+\itemize{
+  \item \code{feature} value ;
+  \item \code{cover}: the sum of second order gradient of training data classified to the leaf; if it is square loss, this simply corresponds to the number of instances in that branch. The deeper in the tree a node is, the lower this metric will be ;
+  \item \code{gain}: metric measuring the importance of the node in the model.
+}
+
+Each branch finishes with a leaf. For each leaf, only the \code{cover} is indicated.
+It uses the \href{https://github.com/knsv/mermaid/}{Mermaid} library for that purpose.
+}
+\examples{
+data(agaricus.train, package='xgboost')
+
+# Both datasets are lists with two items, a sparse matrix and labels
+# (labels = outcome column which will be learned).
+# Each column of the sparse Matrix is a feature in one hot encoding format.
+train <- agaricus.train
+
+bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
+               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+
+# agaricus.train$data@Dimnames[[2]] represents the column names of the sparse matrix.
+xgb.plot.tree(agaricus.train$data@Dimnames[[2]], model = bst)
+}
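A hedged sketch of the sizing arguments (the pixel values are arbitrary; \code{CSSstyle} would be passed through to Mermaid in the format described in the linked wiki):

data(agaricus.train, package = 'xgboost')
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nround = 2, objective = "binary:logistic")

# Constrain the rendered diagram to a fixed size
xgb.plot.tree(train$data@Dimnames[[2]], model = bst, width = 750, height = 500)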
@@ -1,4 +1,5 @@
-% Generated by roxygen2 (4.0.1): do not edit by hand
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.save.R
 \name{xgb.save}
 \alias{xgb.save}
 \title{Save xgboost model to binary file}
@@ -19,7 +20,7 @@ data(agaricus.test, package='xgboost')
 train <- agaricus.train
 test <- agaricus.test
 bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
-               eta = 1, nround = 2, objective = "binary:logistic")
+               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
 xgb.save(bst, 'xgb.model')
 bst <- xgb.load('xgb.model')
 pred <- predict(bst, test$data)
R-package/man/xgb.save.raw.Rd (new file)
@@ -0,0 +1,27 @@
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.save.raw.R
+\name{xgb.save.raw}
+\alias{xgb.save.raw}
+\title{Save xgboost model to R's raw vector,
+user can call xgb.load to load the model back from raw vector}
+\usage{
+xgb.save.raw(model)
+}
+\arguments{
+\item{model}{the model object.}
+}
+\description{
+Save an xgboost model from xgboost or xgb.train
+}
+\examples{
+data(agaricus.train, package='xgboost')
+data(agaricus.test, package='xgboost')
+train <- agaricus.train
+test <- agaricus.test
+bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
+               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+raw <- xgb.save.raw(bst)
+bst <- xgb.load(raw)
+pred <- predict(bst, test$data)
+}
@@ -1,26 +1,62 @@
-% Generated by roxygen2 (4.0.1): do not edit by hand
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgb.train.R
 \name{xgb.train}
 \alias{xgb.train}
 \title{eXtreme Gradient Boosting Training}
 \usage{
 xgb.train(params = list(), data, nrounds, watchlist = list(), obj = NULL,
-  feval = NULL, verbose = 1, ...)
+  feval = NULL, verbose = 1, printEveryN = 1L, early_stop_round = NULL,
+  early.stop.round = NULL, maximize = NULL, ...)
 }
 \arguments{
-\item{params}{the list of parameters. Commonly used ones are:
-\itemize{
-\item \code{objective} objective function, common ones are
-\itemize{
-\item \code{reg:linear} linear regression
-\item \code{binary:logistic} logistic regression for classification
-}
-\item \code{eta} step size of each boosting step
-\item \code{max.depth} maximum depth of the tree
-\item \code{nthread} number of thread used in training, if not set, all threads are used
-}
-
-See \url{https://github.com/tqchen/xgboost/wiki/Parameters} for
-further details. See also demo/ for walkthrough example in R.}
+\item{params}{the list of parameters.
+
+1. General Parameters
+
+\itemize{
+  \item \code{booster} which booster to use, can be \code{gbtree} or \code{gblinear}. Default: \code{gbtree}
+  \item \code{silent} 0 means printing running messages, 1 means silent mode. Default: 0
+}
+
+2. Booster Parameters
+
+2.1. Parameters for Tree Booster
+
+\itemize{
+  \item \code{eta} controls the learning rate: scale the contribution of each tree by a factor of \code{0 < eta < 1} when it is added to the current approximation. Used to prevent overfitting by making the boosting process more conservative. A lower value for \code{eta} implies a larger value for \code{nrounds}: a low \code{eta} value means a model more robust to overfitting but slower to compute. Default: 0.3
+  \item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. The larger, the more conservative the algorithm will be.
+  \item \code{max_depth} maximum depth of a tree. Default: 6
+  \item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to the minimum number of instances needed in each node. The larger, the more conservative the algorithm will be. Default: 1
+  \item \code{subsample} subsample ratio of the training instances. Setting it to 0.5 means that xgboost randomly collects half of the data instances to grow trees, which prevents overfitting. It also makes computation shorter (because there is less data to analyse). It is advised to use this parameter with \code{eta} and to increase \code{nround}. Default: 1
+  \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
+  \item \code{num_parallel_tree} Experimental parameter. Number of trees to grow per round. Useful to test Random Forest through Xgboost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
+}
+
+2.2. Parameters for Linear Booster
+
+\itemize{
+  \item \code{lambda} L2 regularization term on weights. Default: 0
+  \item \code{lambda_bias} L2 regularization term on bias. Default: 0
+  \item \code{alpha} L1 regularization term on weights. (There is no L1 regularization on bias because it is not important.) Default: 0
+}
+
+3. Task Parameters
+
+\itemize{
+  \item \code{objective} specify the learning task and the corresponding learning objective; the objective options are below:
+  \itemize{
+    \item \code{reg:linear} linear regression (Default).
+    \item \code{reg:logistic} logistic regression.
+    \item \code{binary:logistic} logistic regression for binary classification. Outputs probability.
+    \item \code{binary:logitraw} logistic regression for binary classification, outputs score before logistic transformation.
+    \item \code{num_class} set the number of classes. To use only with multiclass objectives.
+    \item \code{multi:softmax} set xgboost to do multiclass classification using the softmax objective. Classes are represented by a number and should be from 0 to \code{num_class - 1}.
+    \item \code{multi:softprob} same as softmax, but outputs a vector of ndata * nclass, which can be further reshaped to an ndata, nclass matrix. The result contains the predicted probabilities of each data point belonging to each class.
+    \item \code{rank:pairwise} set xgboost to do a ranking task by minimizing the pairwise loss.
+  }
+  \item \code{base_score} the initial prediction score of all instances, global bias. Default: 0.5
+  \item \code{eval_metric} evaluation metrics for validation data. Default: metric will be assigned according to objective (rmse for regression, error for classification, mean average precision for ranking). The list is provided in the details section.
+}}
 
 \item{data}{takes an \code{xgb.DMatrix} as the input.}
 
@@ -40,22 +76,46 @@ gradient with given prediction and dtrain,}
 prediction and dtrain,}
 
 \item{verbose}{If 0, xgboost will stay silent. If 1, xgboost will print
 information of performance. If 2, xgboost will print information of both}
+
+\item{printEveryN}{Print every N progress messages when \code{verbose > 0}. Default is 1, which means all messages are printed.}
+
+\item{early_stop_round}{If \code{NULL}, the early stopping function is not triggered.
+If set to an integer \code{k}, training with a validation set will stop if the performance
+keeps getting worse consecutively for \code{k} rounds.}
+
+\item{early.stop.round}{An alternative to \code{early_stop_round}.}
+
+\item{maximize}{If \code{feval} and \code{early_stop_round} are set, then \code{maximize} must be set as well.
+\code{maximize=TRUE} means the larger the evaluation score the better.}
 
 \item{...}{other parameters to pass to \code{params}.}
 }
 \description{
-The training function of xgboost
+An advanced interface for training an xgboost model. Look at the \code{\link{xgboost}} function for a simpler interface.
 }
 \details{
-This is the training function for xgboost.
-
-Parallelization is automatically enabled if OpenMP is present.
-Number of threads can also be manually specified via "nthread" parameter.
-
-This function only accepts an \code{xgb.DMatrix} object as the input.
-It supports advanced features such as watchlist, customized objective function,
-therefore it is more flexible than \code{\link{xgboost}}.
+This is the training function for \code{xgboost}.
+
+It supports advanced features such as \code{watchlist} and customized objective functions (\code{feval}),
+therefore it is more flexible than the \code{\link{xgboost}} function.
+
+Parallelization is automatically enabled if \code{OpenMP} is present.
+The number of threads can also be manually specified via the \code{nthread} parameter.
+
+The \code{eval_metric} parameter (not listed above) is set automatically by Xgboost but can be overridden by parameter. Below is the list of the different metrics optimized by Xgboost, to help you understand how it works inside or to use them with the \code{watchlist} parameter.
+\itemize{
+   \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
+   \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
+   \item \code{error} Binary classification error rate. It is calculated as \code{(wrong cases) / (all cases)}. For the predictions, the evaluation will regard the instances with prediction value larger than 0.5 as positive instances, and the others as negative instances.
+   \item \code{merror} Multiclass classification error rate. It is calculated as \code{(wrong cases) / (all cases)}.
+   \item \code{auc} Area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#'Area_under_curve} for ranking evaluation.
+   \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
+}
+
+The full list of parameters is available in the Wiki: \url{https://github.com/dmlc/xgboost/wiki/Parameters}.
+
+This function only accepts an \code{\link{xgb.DMatrix}} object as the input.
 }
 \examples{
 data(agaricus.train, package='xgboost')
@@ -75,6 +135,6 @@ evalerror <- function(preds, dtrain) {
   err <- as.numeric(sum(labels != (preds > 0)))/length(labels)
   return(list(metric = "error", value = err))
 }
-bst <- xgb.train(param, dtrain, nround = 2, watchlist, logregobj, evalerror)
+bst <- xgb.train(param, dtrain, nthread = 2, nround = 2, watchlist, logregobj, evalerror)
 }
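A hedged sketch of the new early-stopping and logging arguments with a watchlist, per the argument descriptions above (\code{maximize = FALSE} suits error-like metrics):

data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest  <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
watchlist <- list(eval = dtest, train = dtrain)
param <- list(max.depth = 2, eta = 1, objective = "binary:logistic")

bst <- xgb.train(param, dtrain, nrounds = 50, watchlist,
                 early_stop_round = 3,  # stop after 3 non-improving eval rounds
                 maximize = FALSE,
                 printEveryN = 5)       # print every 5th progress message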
@@ -1,18 +1,26 @@
-% Generated by roxygen2 (4.0.1): do not edit by hand
+% Generated by roxygen2 (4.1.1): do not edit by hand
+% Please edit documentation in R/xgboost.R
 \name{xgboost}
 \alias{xgboost}
 \title{eXtreme Gradient Boosting (Tree) library}
 \usage{
-xgboost(data = NULL, label = NULL, params = list(), nrounds,
-  verbose = 1, ...)
+xgboost(data = NULL, label = NULL, missing = NULL, params = list(),
+  nrounds, verbose = 1, printEveryN = 1L, early_stop_round = NULL,
+  early.stop.round = NULL, maximize = NULL, ...)
 }
 \arguments{
 \item{data}{takes \code{matrix}, \code{dgCMatrix}, local data file or
 \code{xgb.DMatrix}.}
 
-\item{label}{the response variable. User should not set this field,}
+\item{label}{the response variable. User should not set this field
+if data is a local data file or \code{xgb.DMatrix}.}
 
-\item{params}{the list of parameters. Commonly used ones are:
+\item{missing}{Missing is only used when the input is a dense matrix; pick a float
+value that represents a missing value. Sometimes a dataset uses 0 or another extreme value to represent missing values.}
+
+\item{params}{the list of parameters.
+
+Commonly used ones are:
 \itemize{
 \item \code{objective} objective function, common ones are
 \itemize{
@@ -24,8 +32,9 @@ xgboost(data = NULL, label = NULL, params = list(), nrounds,
 \item \code{nthread} number of threads used in training; if not set, all threads are used
 }
 
-See \url{https://github.com/tqchen/xgboost/wiki/Parameters} for
-further details. See also demo/ for walkthrough example in R.}
+Look at \code{\link{xgb.train}} for a more complete list of parameters, or \url{https://github.com/dmlc/xgboost/wiki/Parameters} for the full list.
+
+See also \code{demo/} for a walkthrough example in R.}
 
 \item{nrounds}{the max number of iterations}
 
@@ -33,16 +42,28 @@ xgboost(data = NULL, label = NULL, params = list(), nrounds,
 information of performance. If 2, xgboost will print information of both
 performance and construction progress information}
+
+\item{printEveryN}{Print every N progress messages when \code{verbose > 0}. Default is 1, which means all messages are printed.}
+
+\item{early_stop_round}{If \code{NULL}, the early stopping function is not triggered.
+If set to an integer \code{k}, training with a validation set will stop if the performance
+keeps getting worse consecutively for \code{k} rounds.}
+
+\item{early.stop.round}{An alternative to \code{early_stop_round}.}
+
+\item{maximize}{If \code{feval} and \code{early_stop_round} are set, then \code{maximize} must be set as well.
+\code{maximize=TRUE} means the larger the evaluation score the better.}
 
 \item{...}{other parameters to pass to \code{params}.}
 }
 \description{
-A simple interface for xgboost in R
+A simple interface for training an xgboost model. Look at the \code{\link{xgb.train}} function for a more advanced interface.
 }
 \details{
-This is the modeling function for xgboost.
+This is the modeling function for Xgboost.
 
-Parallelization is automatically enabled if OpenMP is present.
-Number of threads can also be manually specified via "nthread" parameter
+Parallelization is automatically enabled if \code{OpenMP} is present.
+
+The number of threads can also be manually specified via the \code{nthread} parameter.
 }
 \examples{
 data(agaricus.train, package='xgboost')
@@ -50,7 +71,7 @@ data(agaricus.test, package='xgboost')
 train <- agaricus.train
 test <- agaricus.test
 bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
-               eta = 1, nround = 2, objective = "binary:logistic")
+               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
 pred <- predict(bst, test$data)
 }
@@ -1,9 +1,8 @@
 # package root
 PKGROOT=../../
 # _*_ mode: Makefile; _*_
-PKG_CPPFLAGS= -DXGBOOST_CUSTOMIZE_MSG_ -DXGBOOST_CUSTOMIZE_PRNG_ -DXGBOOST_STRICT_CXX98_ -I$(PKGROOT)
-PKG_CXXFLAGS= $(SHLIB_OPENMP_CFLAGS)
-PKG_LIBS = $(SHLIB_OPENMP_CFLAGS)
-OBJECTS= xgboost_R.o xgboost_assert.o $(PKGROOT)/wrapper/xgboost_wrapper.o $(PKGROOT)/src/io/io.o $(PKGROOT)/src/gbm/gbm.o $(PKGROOT)/src/tree/updater.o
+PKG_CPPFLAGS= -DXGBOOST_CUSTOMIZE_MSG_ -DXGBOOST_CUSTOMIZE_PRNG_ -DXGBOOST_STRICT_CXX98_ -DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_ -I$(PKGROOT)
+PKG_CXXFLAGS= $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
+PKG_LIBS = $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
+OBJECTS= xgboost_R.o xgboost_assert.o $(PKGROOT)/wrapper/xgboost_wrapper.o $(PKGROOT)/src/io/io.o $(PKGROOT)/src/gbm/gbm.o $(PKGROOT)/src/tree/updater.o $(PKGROOT)/subtree/rabit/src/engine_empty.o $(PKGROOT)/src/io/dmlc_simple.o
@@ -1,7 +1,19 @@
 # package root
-PKGROOT=../../
+PKGROOT=./
 # _*_ mode: Makefile; _*_
-PKG_CPPFLAGS= -DXGBOOST_CUSTOMIZE_MSG_ -DXGBOOST_CUSTOMIZE_PRNG_ -DXGBOOST_STRICT_CXX98_ -I$(PKGROOT)
-PKG_CXXFLAGS= $(SHLIB_OPENMP_CFLAGS)
-PKG_LIBS = $(SHLIB_OPENMP_CFLAGS)
-OBJECTS= xgboost_R.o xgboost_assert.o $(PKGROOT)/wrapper/xgboost_wrapper.o $(PKGROOT)/src/io/io.o $(PKGROOT)/src/gbm/gbm.o $(PKGROOT)/src/tree/updater.o
+
+# This file is only used for windows compilation from github
+# It will be replaced by Makevars in CRAN version
+.PHONY: all xgblib
+all: $(SHLIB)
+$(SHLIB): xgblib
+xgblib:
+	cp -r ../../src .
+	cp -r ../../wrapper .
+	cp -r ../../subtree .
+
+PKG_CPPFLAGS= -DXGBOOST_CUSTOMIZE_MSG_ -DXGBOOST_CUSTOMIZE_PRNG_ -DXGBOOST_STRICT_CXX98_ -DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_ -I$(PKGROOT) -I../..
+PKG_CXXFLAGS= $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
+PKG_LIBS = $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
+OBJECTS= xgboost_R.o xgboost_assert.o $(PKGROOT)/wrapper/xgboost_wrapper.o $(PKGROOT)/src/io/io.o $(PKGROOT)/src/gbm/gbm.o $(PKGROOT)/src/tree/updater.o $(PKGROOT)/subtree/rabit/src/engine_empty.o $(PKGROOT)/src/io/dmlc_simple.o
+$(OBJECTS) : xgblib
|
|||||||
@@ -3,10 +3,12 @@
|
|||||||
#include <utility>
|
#include <utility>
|
||||||
#include <cstring>
|
#include <cstring>
|
||||||
#include <cstdio>
|
#include <cstdio>
|
||||||
#include "xgboost_R.h"
|
#include <sstream>
|
||||||
#include "wrapper/xgboost_wrapper.h"
|
#include "wrapper/xgboost_wrapper.h"
|
||||||
#include "src/utils/utils.h"
|
#include "src/utils/utils.h"
|
||||||
#include "src/utils/omp.h"
|
#include "src/utils/omp.h"
|
||||||
|
#include "xgboost_R.h"
|
||||||
|
|
||||||
using namespace std;
|
using namespace std;
|
||||||
using namespace xgboost;
|
using namespace xgboost;
|
||||||
|
|
||||||
@@ -26,7 +28,13 @@ extern "C" {
|
|||||||
void (*Check)(int exp, const char *fmt, ...) = XGBoostCheck_R;
|
void (*Check)(int exp, const char *fmt, ...) = XGBoostCheck_R;
|
||||||
void (*Error)(const char *fmt, ...) = error;
|
void (*Error)(const char *fmt, ...) = error;
|
||||||
}
|
}
|
||||||
} // namespace utils
|
bool CheckNAN(double v) {
|
||||||
|
return ISNAN(v);
|
||||||
|
}
|
||||||
|
bool LogGamma(double v) {
|
||||||
|
return lgammafn(v);
|
||||||
|
}
|
||||||
|
} // namespace utils
|
||||||
|
|
||||||
namespace random {
|
namespace random {
|
||||||
void Seed(unsigned seed) {
|
void Seed(unsigned seed) {
|
||||||
@@ -51,6 +59,9 @@ inline void _WrapperEnd(void) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
extern "C" {
|
extern "C" {
|
||||||
|
SEXP XGCheckNullPtr_R(SEXP handle) {
|
||||||
|
return ScalarLogical(R_ExternalPtrAddr(handle) == NULL);
|
||||||
|
}
|
||||||
void _DMatrixFinalizer(SEXP ext) {
|
void _DMatrixFinalizer(SEXP ext) {
|
||||||
if (R_ExternalPtrAddr(ext) == NULL) return;
|
if (R_ExternalPtrAddr(ext) == NULL) return;
|
||||||
XGDMatrixFree(R_ExternalPtrAddr(ext));
|
XGDMatrixFree(R_ExternalPtrAddr(ext));
|
||||||
@@ -59,31 +70,31 @@ extern "C" {
|
|||||||
SEXP XGDMatrixCreateFromFile_R(SEXP fname, SEXP silent) {
|
SEXP XGDMatrixCreateFromFile_R(SEXP fname, SEXP silent) {
|
||||||
_WrapperBegin();
|
_WrapperBegin();
|
||||||
void *handle = XGDMatrixCreateFromFile(CHAR(asChar(fname)), asInteger(silent));
|
void *handle = XGDMatrixCreateFromFile(CHAR(asChar(fname)), asInteger(silent));
|
||||||
|
_WrapperEnd();
|
||||||
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
|
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
|
||||||
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
|
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
|
||||||
UNPROTECT(1);
|
UNPROTECT(1);
|
||||||
_WrapperEnd();
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
SEXP XGDMatrixCreateFromMat_R(SEXP mat,
|
SEXP XGDMatrixCreateFromMat_R(SEXP mat,
|
||||||
SEXP missing) {
|
SEXP missing) {
|
||||||
_WrapperBegin();
|
_WrapperBegin();
|
||||||
SEXP dim = getAttrib(mat, R_DimSymbol);
|
SEXP dim = getAttrib(mat, R_DimSymbol);
|
||||||
int nrow = INTEGER(dim)[0];
|
size_t nrow = static_cast<size_t>(INTEGER(dim)[0]);
|
||||||
int ncol = INTEGER(dim)[1];
|
size_t ncol = static_cast<size_t>(INTEGER(dim)[1]);
|
||||||
double *din = REAL(mat);
|
double *din = REAL(mat);
|
||||||
std::vector<float> data(nrow * ncol);
|
std::vector<float> data(nrow * ncol);
|
||||||
#pragma omp parallel for schedule(static)
|
#pragma omp parallel for schedule(static)
|
||||||
for (int i = 0; i < nrow; ++i) {
|
for (bst_omp_uint i = 0; i < nrow; ++i) {
|
||||||
for (int j = 0; j < ncol; ++j) {
|
for (size_t j = 0; j < ncol; ++j) {
|
||||||
data[i * ncol +j] = din[i + nrow * j];
|
data[i * ncol +j] = din[i + nrow * j];
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
void *handle = XGDMatrixCreateFromMat(BeginPtr(data), nrow, ncol, asReal(missing));
|
void *handle = XGDMatrixCreateFromMat(BeginPtr(data), nrow, ncol, asReal(missing));
|
||||||
|
_WrapperEnd();
|
||||||
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
|
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
|
||||||
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
|
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
|
||||||
UNPROTECT(1);
|
UNPROTECT(1);
|
||||||
_WrapperEnd();
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
SEXP XGDMatrixCreateFromCSC_R(SEXP indptr,
|
SEXP XGDMatrixCreateFromCSC_R(SEXP indptr,
|
||||||
@@ -109,10 +120,10 @@ extern "C" {
|
|||||||
}
|
}
|
||||||
void *handle = XGDMatrixCreateFromCSC(BeginPtr(col_ptr_), BeginPtr(indices_),
|
void *handle = XGDMatrixCreateFromCSC(BeginPtr(col_ptr_), BeginPtr(indices_),
|
||||||
BeginPtr(data_), nindptr, ndata);
|
BeginPtr(data_), nindptr, ndata);
|
||||||
|
_WrapperEnd();
|
||||||
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
|
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
|
||||||
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
|
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
|
||||||
UNPROTECT(1);
|
UNPROTECT(1);
|
||||||
_WrapperEnd();
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
SEXP XGDMatrixSliceDMatrix_R(SEXP handle, SEXP idxset) {
|
SEXP XGDMatrixSliceDMatrix_R(SEXP handle, SEXP idxset) {
|
||||||
@@ -123,10 +134,10 @@ extern "C" {
|
|||||||
idxvec[i] = INTEGER(idxset)[i] - 1;
|
idxvec[i] = INTEGER(idxset)[i] - 1;
|
||||||
}
|
}
|
||||||
void *res = XGDMatrixSliceDMatrix(R_ExternalPtrAddr(handle), BeginPtr(idxvec), len);
|
void *res = XGDMatrixSliceDMatrix(R_ExternalPtrAddr(handle), BeginPtr(idxvec), len);
|
||||||
|
_WrapperEnd();
|
||||||
SEXP ret = PROTECT(R_MakeExternalPtr(res, R_NilValue, R_NilValue));
|
SEXP ret = PROTECT(R_MakeExternalPtr(res, R_NilValue, R_NilValue));
|
||||||
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
|
R_RegisterCFinalizerEx(ret, _DMatrixFinalizer, TRUE);
|
||||||
UNPROTECT(1);
|
UNPROTECT(1);
|
||||||
_WrapperEnd();
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
void XGDMatrixSaveBinary_R(SEXP handle, SEXP fname, SEXP silent) {
|
void XGDMatrixSaveBinary_R(SEXP handle, SEXP fname, SEXP silent) {
|
||||||
@@ -146,10 +157,7 @@ extern "C" {
|
|||||||
vec[i] = static_cast<unsigned>(INTEGER(array)[i]);
|
vec[i] = static_cast<unsigned>(INTEGER(array)[i]);
|
||||||
}
|
}
|
||||||
XGDMatrixSetGroup(R_ExternalPtrAddr(handle), BeginPtr(vec), len);
|
XGDMatrixSetGroup(R_ExternalPtrAddr(handle), BeginPtr(vec), len);
|
||||||
_WrapperEnd();
|
} else {
|
||||||
return;
|
|
||||||
}
|
|
||||||
{
|
|
||||||
std::vector<float> vec(len);
|
std::vector<float> vec(len);
|
||||||
#pragma omp parallel for schedule(static)
|
#pragma omp parallel for schedule(static)
|
||||||
for (int i = 0; i < len; ++i) {
|
for (int i = 0; i < len; ++i) {
|
||||||
@@ -166,12 +174,12 @@ extern "C" {
|
|||||||
bst_ulong olen;
|
bst_ulong olen;
|
||||||
const float *res = XGDMatrixGetFloatInfo(R_ExternalPtrAddr(handle),
|
const float *res = XGDMatrixGetFloatInfo(R_ExternalPtrAddr(handle),
|
||||||
CHAR(asChar(field)), &olen);
|
CHAR(asChar(field)), &olen);
|
||||||
|
_WrapperEnd();
|
||||||
SEXP ret = PROTECT(allocVector(REALSXP, olen));
|
SEXP ret = PROTECT(allocVector(REALSXP, olen));
|
||||||
for (size_t i = 0; i < olen; ++i) {
|
for (size_t i = 0; i < olen; ++i) {
|
||||||
REAL(ret)[i] = res[i];
|
REAL(ret)[i] = res[i];
|
||||||
}
|
}
|
||||||
UNPROTECT(1);
|
UNPROTECT(1);
|
||||||
_WrapperEnd();
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
SEXP XGDMatrixNumRow_R(SEXP handle) {
|
SEXP XGDMatrixNumRow_R(SEXP handle) {
|
||||||
@@ -192,10 +200,10 @@ extern "C" {
|
|||||||
dvec.push_back(R_ExternalPtrAddr(VECTOR_ELT(dmats, i)));
|
dvec.push_back(R_ExternalPtrAddr(VECTOR_ELT(dmats, i)));
|
||||||
}
|
}
|
||||||
void *handle = XGBoosterCreate(BeginPtr(dvec), dvec.size());
|
void *handle = XGBoosterCreate(BeginPtr(dvec), dvec.size());
|
||||||
|
_WrapperEnd();
|
||||||
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
|
SEXP ret = PROTECT(R_MakeExternalPtr(handle, R_NilValue, R_NilValue));
|
||||||
R_RegisterCFinalizerEx(ret, _BoosterFinalizer, TRUE);
|
R_RegisterCFinalizerEx(ret, _BoosterFinalizer, TRUE);
|
||||||
UNPROTECT(1);
|
UNPROTECT(1);
|
||||||
_WrapperEnd();
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
void XGBoosterSetParam_R(SEXP handle, SEXP name, SEXP val) {
|
void XGBoosterSetParam_R(SEXP handle, SEXP name, SEXP val) {
|
||||||
@@ -241,25 +249,27 @@ extern "C" {
|
|||||||
for (int i = 0; i < len; ++i) {
|
for (int i = 0; i < len; ++i) {
|
||||||
vec_sptr.push_back(vec_names[i].c_str());
|
vec_sptr.push_back(vec_names[i].c_str());
|
||||||
}
|
}
|
||||||
return mkString(XGBoosterEvalOneIter(R_ExternalPtrAddr(handle),
|
const char *ret =
|
||||||
asInteger(iter),
|
XGBoosterEvalOneIter(R_ExternalPtrAddr(handle),
|
||||||
BeginPtr(vec_dmats), BeginPtr(vec_sptr), len));
|
asInteger(iter),
|
||||||
|
BeginPtr(vec_dmats), BeginPtr(vec_sptr), len);
|
||||||
_WrapperEnd();
|
_WrapperEnd();
|
||||||
|
return mkString(ret);
|
||||||
}
|
}
|
||||||
SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP output_margin, SEXP ntree_limit) {
|
SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask, SEXP ntree_limit) {
|
||||||
_WrapperBegin();
|
_WrapperBegin();
|
||||||
bst_ulong olen;
|
bst_ulong olen;
|
||||||
const float *res = XGBoosterPredict(R_ExternalPtrAddr(handle),
|
const float *res = XGBoosterPredict(R_ExternalPtrAddr(handle),
|
||||||
R_ExternalPtrAddr(dmat),
|
R_ExternalPtrAddr(dmat),
|
||||||
asInteger(output_margin),
|
asInteger(option_mask),
|
||||||
asInteger(ntree_limit),
|
asInteger(ntree_limit),
|
||||||
&olen);
|
&olen);
|
||||||
|
_WrapperEnd();
|
||||||
SEXP ret = PROTECT(allocVector(REALSXP, olen));
|
SEXP ret = PROTECT(allocVector(REALSXP, olen));
|
||||||
for (size_t i = 0; i < olen; ++i) {
|
for (size_t i = 0; i < olen; ++i) {
|
||||||
REAL(ret)[i] = res[i];
|
REAL(ret)[i] = res[i];
|
||||||
}
|
}
|
||||||
UNPROTECT(1);
|
UNPROTECT(1);
|
||||||
_WrapperEnd();
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
void XGBoosterLoadModel_R(SEXP handle, SEXP fname) {
|
void XGBoosterLoadModel_R(SEXP handle, SEXP fname) {
|
||||||
@@ -272,18 +282,41 @@ extern "C" {
|
|||||||
XGBoosterSaveModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname)));
|
XGBoosterSaveModel(R_ExternalPtrAddr(handle), CHAR(asChar(fname)));
|
||||||
_WrapperEnd();
|
_WrapperEnd();
|
||||||
}
|
}
|
||||||
void XGBoosterDumpModel_R(SEXP handle, SEXP fname, SEXP fmap) {
|
void XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw) {
|
||||||
_WrapperBegin();
|
_WrapperBegin();
|
||||||
bst_ulong olen;
|
XGBoosterLoadModelFromBuffer(R_ExternalPtrAddr(handle),
|
||||||
const char **res = XGBoosterDumpModel(R_ExternalPtrAddr(handle),
|
RAW(raw),
|
||||||
CHAR(asChar(fmap)),
|
length(raw));
|
||||||
&olen);
|
|
||||||
FILE *fo = utils::FopenCheck(CHAR(asChar(fname)), "w");
|
|
||||||
for (size_t i = 0; i < olen; ++i) {
|
|
||||||
fprintf(fo, "booster[%u]:\n", static_cast<unsigned>(i));
|
|
||||||
fprintf(fo, "%s", res[i]);
|
|
||||||
}
|
|
||||||
fclose(fo);
|
|
||||||
_WrapperEnd();
|
_WrapperEnd();
|
||||||
}
|
}
|
||||||
|
SEXP XGBoosterModelToRaw_R(SEXP handle) {
|
||||||
|
bst_ulong olen;
|
||||||
|
_WrapperBegin();
|
||||||
|
const char *raw = XGBoosterGetModelRaw(R_ExternalPtrAddr(handle), &olen);
|
||||||
|
_WrapperEnd();
|
||||||
|
SEXP ret = PROTECT(allocVector(RAWSXP, olen));
|
||||||
|
if (olen != 0) {
|
||||||
|
memcpy(RAW(ret), raw, olen);
|
||||||
|
}
|
||||||
|
UNPROTECT(1);
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
SEXP XGBoosterDumpModel_R(SEXP handle, SEXP fmap, SEXP with_stats) {
|
||||||
|
_WrapperBegin();
|
||||||
|
bst_ulong olen;
|
||||||
|
const char **res =
|
||||||
|
XGBoosterDumpModel(R_ExternalPtrAddr(handle),
|
||||||
|
CHAR(asChar(fmap)),
|
||||||
|
asInteger(with_stats),
|
||||||
|
&olen);
|
||||||
|
_WrapperEnd();
|
||||||
|
SEXP out = PROTECT(allocVector(STRSXP, olen));
|
||||||
|
for (size_t i = 0; i < olen; ++i) {
|
||||||
|
stringstream stream;
|
||||||
|
stream << "booster["<<i<<"]\n" << res[i];
|
||||||
|
SET_STRING_ELT(out, i, mkChar(stream.str().c_str()));
|
||||||
|
}
|
||||||
|
UNPROTECT(1);
|
||||||
|
return out;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -8,9 +8,16 @@
|
|||||||
extern "C" {
|
extern "C" {
|
||||||
#include <Rinternals.h>
|
#include <Rinternals.h>
|
||||||
#include <R_ext/Random.h>
|
#include <R_ext/Random.h>
|
||||||
|
#include <Rmath.h>
|
||||||
}
|
}
|
||||||
|
|
||||||
extern "C" {
|
extern "C" {
|
||||||
|
/*!
|
||||||
|
* \brief check whether a handle is NULL
|
||||||
|
* \param handle
|
||||||
|
* \return whether it is null ptr
|
||||||
|
*/
|
||||||
|
SEXP XGCheckNullPtr_R(SEXP handle);
|
||||||
/*!
|
/*!
|
||||||
* \brief load a data matrix
|
* \brief load a data matrix
|
||||||
* \param fname name of the content
|
* \param fname name of the content
|
||||||
@@ -111,10 +118,10 @@ extern "C" {
|
|||||||
* \brief make prediction based on dmat
|
* \brief make prediction based on dmat
|
||||||
* \param handle handle
|
* \param handle handle
|
||||||
* \param dmat data matrix
|
* \param dmat data matrix
|
||||||
* \param output_margin whether only output raw margin value
|
* \param option_mask bit mask of prediction options: output_margin:1, predict_leaf:2
|
||||||
* \param ntree_limit limit number of trees used in prediction
|
* \param ntree_limit limit number of trees used in prediction
|
||||||
*/
|
*/
|
||||||
SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP output_margin, SEXP ntree_limit);
|
SEXP XGBoosterPredict_R(SEXP handle, SEXP dmat, SEXP option_mask, SEXP ntree_limit);
|
||||||
/*!
|
/*!
|
||||||
* \brief load model from existing file
|
* \brief load model from existing file
|
||||||
* \param handle handle
|
* \param handle handle
|
||||||
@@ -128,11 +135,22 @@ extern "C" {
|
|||||||
*/
|
*/
|
||||||
void XGBoosterSaveModel_R(SEXP handle, SEXP fname);
|
void XGBoosterSaveModel_R(SEXP handle, SEXP fname);
|
||||||
/*!
|
/*!
|
||||||
* \brief dump model into text file
|
* \brief load model from raw array
|
||||||
* \param handle handle
|
* \param handle handle
|
||||||
* \param fname file name of model that can be dumped into
|
*/
|
||||||
* \param fmap name to fmap can be empty string
|
void XGBoosterLoadModelFromRaw_R(SEXP handle, SEXP raw);
|
||||||
|
/*!
|
||||||
|
* \brief save model into R's raw array
|
||||||
|
* \param handle handle
|
||||||
|
* \return raw array
|
||||||
*/
|
*/
|
||||||
void XGBoosterDumpModel_R(SEXP handle, SEXP fname, SEXP fmap);
|
SEXP XGBoosterModelToRaw_R(SEXP handle);
|
||||||
|
/*!
|
||||||
|
* \brief dump model into a string
|
||||||
|
* \param handle handle
|
||||||
|
* \param fmap path to the feature map file; can be an empty string
|
||||||
|
* \param with_stats whether dump statistics of splits
|
||||||
|
*/
|
||||||
|
SEXP XGBoosterDumpModel_R(SEXP handle, SEXP fmap, SEXP with_stats);
|
||||||
}
|
}
|
||||||
#endif // XGBOOST_WRAPPER_R_H_
|
#endif // XGBOOST_WRAPPER_R_H_
|
||||||
|
|||||||
337
R-package/vignettes/discoverYourData.Rmd
Normal file
337
R-package/vignettes/discoverYourData.Rmd
Normal file
@@ -0,0 +1,337 @@
|
|||||||
|
---
|
||||||
|
title: "Understand your dataset with Xgboost"
|
||||||
|
output:
|
||||||
|
rmarkdown::html_vignette:
|
||||||
|
css: vignette.css
|
||||||
|
number_sections: yes
|
||||||
|
toc: yes
|
||||||
|
author: Tianqi Chen, Tong He, Michaël Benesty
|
||||||
|
vignette: >
|
||||||
|
%\VignetteIndexEntry{Discover your data}
|
||||||
|
%\VignetteEngine{knitr::rmarkdown}
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
---
|
||||||
|
|
||||||
|
Introduction
|
||||||
|
============
|
||||||
|
|
||||||
|
The purpose of this Vignette is to show you how to use **Xgboost** to discover and understand your own dataset better.
|
||||||
|
|
||||||
|
This Vignette is not about predicting anything (see [Xgboost presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)). We will explain how to use **Xgboost** to highlight the *link* between the *features* of your data and the *outcome*.
|
||||||
|
|
||||||
|
Package loading:
|
||||||
|
|
||||||
|
```{r libLoading, results='hold', message=F, warning=F}
|
||||||
|
require(xgboost)
|
||||||
|
require(Matrix)
|
||||||
|
require(data.table)
|
||||||
|
if (!require('vcd')) install.packages('vcd')
|
||||||
|
```
|
||||||
|
|
||||||
|
> The **vcd** package is used only for one of its embedded datasets.
|
||||||
|
|
||||||
|
Preparation of the dataset
|
||||||
|
==========================
|
||||||
|
|
||||||
|
Numeric VS categorical variables
|
||||||
|
--------------------------------
|
||||||
|
|
||||||
|
**Xgboost** manages only `numeric` vectors.
|
||||||
|
|
||||||
|
What to do when you have *categorical* data?
|
||||||
|
|
||||||
|
A *categorical* variable has a fixed number of different values. For instance, if a variable called *Colour* can have only one of these three values, *red*, *blue* or *green*, then *Colour* is a *categorical* variable.
|
||||||
|
|
||||||
|
> In **R**, a *categorical* variable is called `factor`.
|
||||||
|
>
|
||||||
|
> Type `?factor` in the console for more information.
|
||||||
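A minimal sketch of such a variable in **R** (the colour values here are hypothetical, for illustration only):

```{r factorSketch, eval=FALSE}
# A categorical variable with three possible values
colour <- factor(c("red", "blue", "green", "red"))
levels(colour)  # "blue" "green" "red"
```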
|
|
||||||
|
To answer the question above we will convert *categorical* variables to `numeric` ones.
|
||||||
|
|
||||||
|
Conversion from categorical to numeric variables
|
||||||
|
------------------------------------------------
|
||||||
|
|
||||||
|
### Looking at the raw data
|
||||||
|
|
||||||
|
In this Vignette we will see how to transform a *dense* `data.frame` (*dense* = few zeroes in the matrix) with *categorical* variables to a very *sparse* matrix (*sparse* = lots of zeroes in the matrix) of `numeric` features.
|
||||||
|
|
||||||
|
The method we are going to see is usually called [one-hot encoding](http://en.wikipedia.org/wiki/One-hot).
|
||||||
|
|
||||||
|
The first step is to load the `Arthritis` dataset into memory and wrap it with the `data.table` package.
|
||||||
|
|
||||||
|
```{r, results='hide'}
|
||||||
|
data(Arthritis)
|
||||||
|
df <- data.table(Arthritis, keep.rownames = F)
|
||||||
|
```
|
||||||
|
|
||||||
|
> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance for large datasets is [best in class](http://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) (`dplyr` from **R** and `pandas` from **Python** [included](https://github.com/Rdatatable/data.table/wiki/Benchmarks-%3A-Grouping)). Some parts of the **Xgboost** **R** package use `data.table`.
|
||||||
|
|
||||||
|
The first thing we want to do is to have a look at the first lines of the `data.table`:
|
||||||
|
|
||||||
|
```{r}
|
||||||
|
head(df)
|
||||||
|
```
|
||||||
|
|
||||||
|
Now we will check the format of each column.
|
||||||
|
|
||||||
|
```{r}
|
||||||
|
str(df)
|
||||||
|
```
|
||||||
|
|
||||||
|
2 columns have `factor` type, one has `ordinal` type.
|
||||||
|
|
||||||
|
> `ordinal` variable:
|
||||||
|
>
|
||||||
|
> * can take a limited number of values (like `factor`) ;
|
||||||
|
> * these values are ordered (unlike `factor`). Here these ordered values are: `Marked > Some > None` (a minimal sketch follows after this list)
|
||||||
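As promised above, a minimal sketch of an ordered factor, using the levels named in this vignette:

```{r orderedSketch, eval=FALSE}
# An ordered factor: the levels are comparable
improved <- factor(c("None", "Some", "Marked"),
                   levels = c("None", "Some", "Marked"), ordered = TRUE)
improved[1] < improved[3]  # TRUE: None < Marked
```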
|
|
||||||
|
### Creation of new features based on old ones
|
||||||
|
|
||||||
|
We will add some new *categorical* features to see if they help.
|
||||||
|
|
||||||
|
#### Grouping per 10 years
|
||||||
|
|
||||||
|
For the first feature we create groups of age by rounding the real age.
|
||||||
|
|
||||||
|
Note that we transform it to `factor` so the algorithm treats these age groups as independent values.
|
||||||
|
|
||||||
|
Therefore, 20 is not closer to 30 than to 60. In short, the distance between ages is lost in this transformation.
|
||||||
|
|
||||||
|
```{r}
|
||||||
|
head(df[,AgeDiscret := as.factor(round(Age/10,0))])
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Random split in two groups
|
||||||
|
|
||||||
|
Following is an even stronger simplification of the real age with an arbitrary split at 30 years old. I chose this value **based on nothing**. We will see later if simplifying the information based on arbitrary values is a good strategy (you may already have an idea of how well it will work...).
|
||||||
|
|
||||||
|
```{r}
|
||||||
|
head(df[,AgeCat:= as.factor(ifelse(Age > 30, "Old", "Young"))])
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Risks in adding correlated features
|
||||||
|
|
||||||
|
These new features are highly correlated with the `Age` feature because they are simple transformations of it. A quick check of that correlation follows below.
|
||||||
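A hedged check of that claim (assuming the columns created above; `AgeDiscret` is a `factor`, so we compare its underlying level codes):

```{r corSketch, eval=FALSE}
# Correlation between Age and its discretised version
cor(df$Age, as.numeric(df$AgeDiscret))
```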
|
|
||||||
|
For many machine learning algorithms, using correlated features is not a good idea. It may sometimes make prediction less accurate, and most of the time make interpretation of the model almost impossible. GLM, for instance, assumes that the features are uncorrelated.
|
||||||
|
|
||||||
|
Fortunately, decision tree algorithms (including boosted trees) are very robust to such features. Therefore we don't need to do anything to manage this situation.
|
||||||
|
|
||||||
|
#### Cleaning data
|
||||||
|
|
||||||
|
We remove the `ID` column as there is nothing to learn from this feature (it would just add some noise).
|
||||||
|
|
||||||
|
```{r, results='hide'}
|
||||||
|
df[,ID:=NULL]
|
||||||
|
```
|
||||||
|
|
||||||
|
We will list the different values for the column `Treatment`:
|
||||||
|
|
||||||
|
```{r}
|
||||||
|
levels(df[,Treatment])
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
### One-hot encoding
|
||||||
|
|
||||||
|
Next step, we will transform the categorical data to dummy variables.
|
||||||
|
This is the [one-hot encoding](http://en.wikipedia.org/wiki/One-hot) step.
|
||||||
|
|
||||||
|
The purpose is to transform each value of each *categorical* feature into a *binary* feature `{0, 1}`.
|
||||||
|
|
||||||
|
For example, the column `Treatment` will be replaced by two columns, `Placebo` and `Treated`. Each of them will be *binary*. Therefore, an observation which had the value `Placebo` in column `Treatment` before the transformation will, after the transformation, have the value `1` in the new column `Placebo` and the value `0` in the new column `Treated`. The column `Treatment` will disappear during the one-hot encoding.
|
||||||
|
|
||||||
|
Column `Improved` is excluded because it will be our `label` column, the one we want to predict.
|
||||||
|
|
||||||
|
```{r, warning=FALSE,message=FALSE}
|
||||||
|
sparse_matrix <- sparse.model.matrix(Improved~.-1, data = df)
|
||||||
|
head(sparse_matrix)
|
||||||
|
```
|
||||||
|
|
||||||
|
> The formula `Improved~.-1` used above means: transform all *categorical* features except column `Improved` to binary values. The `-1` removes the intercept column, which is full of `1` (this column is generated by the conversion). For more information, you can type `?sparse.model.matrix` in the console.
|
||||||
|
|
||||||
|
Create the output `numeric` vector (not as a sparse `Matrix`):
|
||||||
|
|
||||||
|
```{r}
|
||||||
|
output_vector = df[,Improved] == "Marked"
|
||||||
|
```
|
||||||
|
|
||||||
|
1. set `Y` vector to `0`;
|
||||||
|
2. set `Y` to `1` for rows where `Improved == Marked` is `TRUE` ;
|
||||||
|
3. return the `Y` vector (a minimal equivalent sketch follows after this list).
|
||||||
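As mentioned in the list above, a minimal equivalent sketch of this construction (illustration only):

```{r outputVectorSketch, eval=FALSE}
# Explicit version of: output_vector = df[,Improved] == "Marked"
Y <- rep(0, nrow(df))               # 1. set Y vector to 0
Y[df[,Improved] == "Marked"] <- 1   # 2. set Y to 1 where Improved == Marked
Y                                   # 3. return Y vector
```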
|
|
||||||
|
Build the model
|
||||||
|
===============
|
||||||
|
|
||||||
|
The code below is very usual. For more information, you can look at the documentation of `xgboost` function (or at the vignette [Xgboost presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)).
|
||||||
|
|
||||||
|
```{r}
|
||||||
|
bst <- xgboost(data = sparse_matrix, label = output_vector, max.depth = 4,
|
||||||
|
               eta = 1, nthread = 2, nround = 10, objective = "binary:logistic")
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
You can see some `train-error: 0.XXXXX` lines in the output. The error decreases. Each line shows how well the model explains your data. Lower is better.
|
||||||
|
|
||||||
|
A model which fits too well may [overfit](http://en.wikipedia.org/wiki/Overfitting) (meaning it copies the past too closely and won't be that good at predicting the future).
|
||||||
|
|
||||||
|
> Here you can see the numbers decrease until line 7 and then increase.
|
||||||
|
>
|
||||||
|
> It probably means we are overfitting. To fix that I should reduce the number of rounds to `nround = 4`. I will leave things as they are because I don't really care for the purpose of this example :-)
|
||||||
|
|
||||||
|
Feature importance
|
||||||
|
==================
|
||||||
|
|
||||||
|
Measure feature importance
|
||||||
|
--------------------------
|
||||||
|
|
||||||
|
### Build the feature importance data.table
|
||||||
|
|
||||||
|
In the code below, `sparse_matrix@Dimnames[[2]]` represents the column names of the sparse matrix. These names are the original values of the features (remember, each binary column == one value of one *categorical* feature).
|
||||||
|
|
||||||
|
```{r}
|
||||||
|
importance <- xgb.importance(sparse_matrix@Dimnames[[2]], model = bst)
|
||||||
|
head(importance)
|
||||||
|
```
|
||||||
|
|
||||||
|
> The column `Gain` provides the information we are looking for.
|
||||||
|
>
|
||||||
|
> As you can see, features are ranked by `Gain`.
|
||||||
|
|
||||||
|
`Gain` is the improvement in accuracy brought by a feature to the branches it is on. The idea is that before adding a new split on feature X to the branch, there were some wrongly classified elements; after adding the split on this feature, there are two new branches, and each of these branches is more accurate (one branch saying that if your observation is on this branch then it should be classified as `1`, and the other branch saying the exact opposite).
|
||||||
|
|
||||||
|
`Cover` measures the relative quantity of observations concerned by a feature.
|
||||||
|
|
||||||
|
`Frequence` is a simpler measure than `Gain`. It just counts the number of times a feature is used in all generated trees. You should not use it (unless you know why you want to use it).
|
||||||
|
|
||||||
|
### Improvement in the interpretability of feature importance data.table
|
||||||
|
|
||||||
|
We can go deeper in the analysis of the model. In the `data.table` above, we have discovered which features count when predicting whether the illness will go away or not. But we don't yet know the role of these features. For instance, one of the questions we may want to answer would be: does receiving a placebo treatment help to recover from the illness?
|
||||||
|
|
||||||
|
One simple solution is to count the co-occurrences of a feature and a class of the classification.
|
||||||
|
|
||||||
|
For that purpose we will execute the same function as above but using two more parameters, `data` and `label`.
|
||||||
|
|
||||||
|
```{r}
|
||||||
|
importanceRaw <- xgb.importance(sparse_matrix@Dimnames[[2]], model = bst, data = sparse_matrix, label = output_vector)
|
||||||
|
|
||||||
|
# Cleaning for better display
|
||||||
|
importanceClean <- importanceRaw[,`:=`(Cover=NULL, Frequence=NULL)]
|
||||||
|
|
||||||
|
head(importanceClean)
|
||||||
|
```
|
||||||
|
|
||||||
|
> In the table above we have removed two unneeded columns and selected only the first lines.
|
||||||
|
|
||||||
|
The first thing you notice is the new column `Split`. It is the split applied to the feature on a branch of one of the trees. Each split is present, therefore a feature can appear several times in this table. Here we can see the feature `Age` is used several times with different splits.
|
||||||
|
|
||||||
|
How is the split applied to count the co-occurrences? It is always `<`. For instance, in the second line, we measure the number of persons under 61.5 years whose illness went away after the treatment; a hedged manual version of that count follows below.
|
||||||
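As announced above, a manual version of that count (assuming `df` and `output_vector` from earlier in this vignette):

```{r realCoverSketch, eval=FALSE}
# Number of observations where the split holds and the label is 1
sum(df$Age < 61.5 & output_vector)
```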
|
|
||||||
|
The two other new columns are `RealCover` and `RealCover %`. The first one measures the number of observations in the dataset where the split is respected and the label is marked as `1`. The second one is the percentage of the whole population that `RealCover` represents.
|
||||||
|
|
||||||
|
Therefore, according to our findings, getting a placebo doesn't seem to help, but being younger than 61 years may help (which seems logical).
|
||||||
|
|
||||||
|
> You may wonder how to interpret the `< 1.00001` on the first line. Basically, in a sparse `Matrix`, there are no stored `0` values. Therefore, for one-hot encoded categorical observations, looking for values validating the rule `< 1.00001` is just like looking for `1` on this feature; a hedged check follows below.
|
||||||
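And the check mentioned above (the column name `TreatmentPlacebo` is assumed from the one-hot encoding output earlier):

```{r placeboSketch, eval=FALSE}
# Co-occurrences of the Placebo dummy and the positive label
sum(sparse_matrix[, "TreatmentPlacebo"] == 1 & output_vector)
```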
|
|
||||||
|
Plotting the feature importance
|
||||||
|
-------------------------------
|
||||||
|
|
||||||
|
All these things are nice, but it would be even better to plot the results.
|
||||||
|
|
||||||
|
```{r, fig.width=8, fig.height=5, fig.align='center'}
|
||||||
|
xgb.plot.importance(importance_matrix = importanceRaw)
|
||||||
|
```
|
||||||
|
|
||||||
|
Features have automatically been divided into 2 clusters: the interesting features... and the others.
|
||||||
|
|
||||||
|
> Depending on the dataset and the learning parameters, you may have more than two clusters. The default is to limit them to `10`, but you can increase this limit. Look at the function documentation for more information.
|
||||||
|
|
||||||
|
According to the plot above, the most important features in this dataset to predict if the treatment will work are :
|
||||||
|
|
||||||
|
* the Age ;
|
||||||
|
* having received a placebo or not ;
|
||||||
|
* the sex, which comes third but is already in the not-interesting features group ;
|
||||||
|
* then we see our generated features (AgeDiscret). We can see that their contribution is very low.
|
||||||
|
|
||||||
|
Do these results make sense?
|
||||||
|
------------------------------
|
||||||
|
|
||||||
|
Let's check the **Chi2** statistic between each of these features and the label.
|
||||||
|
|
||||||
|
A higher **Chi2** statistic means a stronger association.
|
||||||
|
|
||||||
|
```{r, warning=FALSE, message=FALSE}
|
||||||
|
c2 <- chisq.test(df$Age, output_vector)
|
||||||
|
print(c2)
|
||||||
|
```
|
||||||
|
|
||||||
|
The Pearson chi-squared statistic between Age and the illness disappearing is **`r round(c2$statistic, 2 )`**.
|
||||||
|
|
||||||
|
```{r, warning=FALSE, message=FALSE}
|
||||||
|
c2 <- chisq.test(df$AgeDiscret, output_vector)
|
||||||
|
print(c2)
|
||||||
|
```
|
||||||
|
|
||||||
|
Our first simplification of Age gives a Pearson chi-squared statistic of **`r round(c2$statistic, 2)`**.
|
||||||
|
|
||||||
|
```{r, warning=FALSE, message=FALSE}
|
||||||
|
c2 <- chisq.test(df$AgeCat, output_vector)
|
||||||
|
print(c2)
|
||||||
|
```
|
||||||
|
|
||||||
|
The perfectly random split I did between young and old at 30 years old has a low chi-squared statistic of **`r round(c2$statistic, 2)`**. It's a result we may expect: maybe in my mind being over 30 years old means being old (I am 32 and starting to feel old, which may explain it), but for the illness we are studying, the age of vulnerability is not the same.
|
||||||
|
|
||||||
|
Moral: don't let your *gut feeling* lower the quality of your model.
|
||||||
|
|
||||||
|
The expression *data science* contains the word *science* :-)
|
||||||
|
|
||||||
|
Conclusion
|
||||||
|
==========
|
||||||
|
|
||||||
|
As you can see, in general *destroying information by simplifying it won't improve your model*. **Chi2** just demonstrates that.
|
||||||
|
|
||||||
|
But in more complex cases, creating a new feature based on an existing one which makes the link with the outcome more obvious may help the algorithm and improve the model.
|
||||||
|
|
||||||
|
The case studied here is not complex enough to show that. Check [Kaggle website](http://www.kaggle.com/) for some challenging datasets. However, adding arbitrary rules almost always makes things worse.
|
||||||
|
|
||||||
|
Moreover, you can notice that even though we have added some useless new features highly correlated with other features, the boosted tree algorithm has been able to choose the best one, which in this case is the Age.
|
||||||
|
|
||||||
|
A linear model may not be that smart in this scenario.
|
||||||
|
|
||||||
|
Special Note: What about Random Forests™?
|
||||||
|
==========================================
|
||||||
|
|
||||||
|
As you may know, the [Random Forests™](http://en.wikipedia.org/wiki/Random_forest) algorithm is a cousin of boosting and both are part of the [ensemble learning](http://en.wikipedia.org/wiki/Ensemble_learning) family.
|
||||||
|
|
||||||
|
Both train several decision trees for one dataset. The *main* difference is that in Random Forests™ the trees are independent, while in boosting the tree `N+1` focuses its learning on the loss (<=> what has not been well modeled by tree `N`).
|
||||||
|
|
||||||
|
This difference has an impact on a corner case of feature importance analysis: the *correlated features*.
|
||||||
|
|
||||||
|
Imagine two perfectly correlated features, feature `A` and feature `B`. For one specific tree, if the algorithm needs one of them, it will choose randomly (this is true in both boosting and Random Forests™).
|
||||||
|
|
||||||
|
However, in Random Forests™ this random choice will be made for each tree, because each tree is independent from the others. Therefore, approximately (depending on your parameters), 50% of the trees will choose feature `A` and the other 50% will choose feature `B`. So the *importance* of the information contained in `A` and `B` (which is the same, because they are perfectly correlated) is diluted between `A` and `B`. You won't easily know that this information is important for predicting what you want to predict! It is even worse when you have 10 correlated features...
|
||||||
|
|
||||||
|
In boosting, when a specific link between a feature and the outcome has been learned by the algorithm, it will try not to refocus on it (in theory that is what happens; reality is not always that simple). Therefore, all the importance will be on feature `A` or on feature `B` (but not both). You will know that one feature has an important role in the link between the observations and the label. It is still up to you to search for features correlated with the one detected as important, if you need to know all of them.
|
||||||
|
|
||||||
|
If you want to try Random Forests™ algorithm, you can tweak Xgboost parameters!
|
||||||
|
|
||||||
|
**Warning**: this is still an experimental parameter.
|
||||||
|
|
||||||
|
For instance, to compute a model with 1000 trees, with a 0.5 factor on sampling rows and columns:
|
||||||
|
|
||||||
|
```{r, warning=FALSE, message=FALSE}
|
||||||
|
data(agaricus.train, package='xgboost')
|
||||||
|
data(agaricus.test, package='xgboost')
|
||||||
|
train <- agaricus.train
|
||||||
|
test <- agaricus.test
|
||||||
|
|
||||||
|
#Random Forest™ - 1000 trees
|
||||||
|
bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree = 0.5, nround = 1, objective = "binary:logistic")
|
||||||
|
|
||||||
|
#Boosting - 3 rounds
|
||||||
|
bst <- xgboost(data = train$data, label = train$label, max.depth = 4, nround = 3, objective = "binary:logistic")
|
||||||
|
```
|
||||||
|
|
||||||
|
> Note that the parameter `nround` is set to `1`.
|
||||||
|
|
||||||
|
> [**Random Forests™**](https://www.stat.berkeley.edu/~breiman/RandomForests/cc_papers.htm) is a trademark of Leo Breiman and Adele Cutler and is licensed exclusively to Salford Systems for the commercial release of the software.
|
||||||
225
R-package/vignettes/vignette.css
Normal file
225
R-package/vignettes/vignette.css
Normal file
@@ -0,0 +1,225 @@
|
|||||||
|
body {
|
||||||
|
margin: 0 auto;
|
||||||
|
background-color: white;
|
||||||
|
|
||||||
|
/* --------- FONT FAMILY --------
|
||||||
|
following are some optional font families. Usually a family
|
||||||
|
is safer to choose than a specific font,
|
||||||
|
which may not be on the users computer */
|
||||||
|
/* font-family: Georgia, Palatino, serif; */
|
||||||
|
font-family: "Open Sans", "Book Antiqua", Palatino, serif;
|
||||||
|
/* font-family: Arial, Helvetica, sans-serif; */
|
||||||
|
/* font-family: Tahoma, Verdana, Geneva, sans-serif; */
|
||||||
|
/* font-family: Courier, monospace; */
|
||||||
|
/* font-family: "Times New Roman", Times, serif; */
|
||||||
|
|
||||||
|
/* -------------- COLOR OPTIONS ------------
|
||||||
|
following are additional color options for base font
|
||||||
|
you could uncomment another one to easily change the base color
|
||||||
|
or add one to a specific element style below */
|
||||||
|
color: #333333; /* dark gray not black */
|
||||||
|
/* color: #000000; black */
|
||||||
|
/* color: #666666; medium gray black */
|
||||||
|
/* color: #E3E3E3; very light gray */
|
||||||
|
/* color: white; */
|
||||||
|
|
||||||
|
line-height: 100%;
|
||||||
|
max-width: 800px;
|
||||||
|
padding: 10px;
|
||||||
|
font-size: 17px;
|
||||||
|
text-align: justify;
|
||||||
|
text-justify: inter-word;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
p {
|
||||||
|
line-height: 150%;
|
||||||
|
/* max-width: 540px; */
|
||||||
|
max-width: 960px;
|
||||||
|
margin-bottom: 5px;
|
||||||
|
font-weight: 400;
|
||||||
|
/* color: #333333 */
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
h1, h2, h3, h4, h5, h6 {
|
||||||
|
font-weight: 400;
|
||||||
|
margin-top: 35px;
|
||||||
|
margin-bottom: 15px;
|
||||||
|
padding-top: 10px;
|
||||||
|
}
|
||||||
|
|
||||||
|
h1 {
|
||||||
|
margin-top: 70px;
|
||||||
|
color: #606AAA;
|
||||||
|
font-size:230%;
|
||||||
|
font-variant:small-caps;
|
||||||
|
padding-bottom:20px;
|
||||||
|
width:100%;
|
||||||
|
border-bottom:1px solid #606AAA;
|
||||||
|
}
|
||||||
|
|
||||||
|
h2 {
|
||||||
|
font-size:160%;
|
||||||
|
}
|
||||||
|
|
||||||
|
h3 {
|
||||||
|
font-size:130%;
|
||||||
|
}
|
||||||
|
|
||||||
|
h4 {
|
||||||
|
font-size:120%;
|
||||||
|
font-variant:small-caps;
|
||||||
|
}
|
||||||
|
|
||||||
|
h5 {
|
||||||
|
font-size:120%;
|
||||||
|
}
|
||||||
|
|
||||||
|
h6 {
|
||||||
|
font-size:120%;
|
||||||
|
font-variant:small-caps;
|
||||||
|
}
|
||||||
|
|
||||||
|
a {
|
||||||
|
color: #606AAA;
|
||||||
|
margin: 0;
|
||||||
|
padding: 0;
|
||||||
|
vertical-align: baseline;
|
||||||
|
}
|
||||||
|
|
||||||
|
a:hover {
|
||||||
|
text-decoration: blink;
|
||||||
|
color: green;
|
||||||
|
}
|
||||||
|
|
||||||
|
a:visited {
|
||||||
|
color: gray;
|
||||||
|
}
|
||||||
|
|
||||||
|
ul, ol {
|
||||||
|
padding: 0;
|
||||||
|
margin: 0px 0px 0px 50px;
|
||||||
|
}
|
||||||
|
ul {
|
||||||
|
list-style-type: square;
|
||||||
|
list-style-position: inside;
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
li {
|
||||||
|
line-height: 150%;
|
||||||
|
}
|
||||||
|
|
||||||
|
li ul, li ol {
|
||||||
|
margin-left: 24px;
|
||||||
|
}
|
||||||
|
|
||||||
|
pre {
|
||||||
|
padding: 0px 10px;
|
||||||
|
max-width: 800px;
|
||||||
|
white-space: pre-wrap;
|
||||||
|
}
|
||||||
|
|
||||||
|
code {
|
||||||
|
font-family: Consolas, Monaco, "Andale Mono", "Courier New", monospace;
|
||||||
|
line-height: 1.5;
|
||||||
|
font-size: 15px;
|
||||||
|
background: #F8F8F8;
|
||||||
|
border-radius: 4px;
|
||||||
|
padding: 5px;
|
||||||
|
display: inline-block;
|
||||||
|
max-width: 800px;
|
||||||
|
white-space: pre-wrap;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
li code, p code {
|
||||||
|
background: #CDCDCD;
|
||||||
|
color: #606AAA;
|
||||||
|
padding: 0px 5px 0px 5px;
|
||||||
|
}
|
||||||
|
|
||||||
|
code.r, code.cpp {
|
||||||
|
display: block;
|
||||||
|
word-wrap: break-word;
|
||||||
|
border: 1px solid #606AAA;
|
||||||
|
}
|
||||||
|
|
||||||
|
aside {
|
||||||
|
display: block;
|
||||||
|
float: right;
|
||||||
|
width: 390px;
|
||||||
|
}
|
||||||
|
|
||||||
|
blockquote {
|
||||||
|
border-left:.5em solid #606AAA;
|
||||||
|
background: #F8F8F8;
|
||||||
|
padding: 0em 1em 0em 1em;
|
||||||
|
margin-left:10px;
|
||||||
|
max-width: 500px;
|
||||||
|
}
|
||||||
|
|
||||||
|
blockquote cite {
|
||||||
|
line-height:10px;
|
||||||
|
color:#bfbfbf;
|
||||||
|
}
|
||||||
|
|
||||||
|
blockquote cite:before {
|
||||||
|
/* content: '\2014 \00A0'; */
|
||||||
|
}
|
||||||
|
|
||||||
|
blockquote p, blockquote li {
|
||||||
|
color: #666;
|
||||||
|
}
|
||||||
|
hr {
|
||||||
|
/* width: 540px; */
|
||||||
|
text-align: left;
|
||||||
|
margin: 0 auto 0 0;
|
||||||
|
color: #999;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
/* table */
|
||||||
|
|
||||||
|
table {
|
||||||
|
width: 100%;
|
||||||
|
border-top: 1px solid #919699;
|
||||||
|
border-left: 1px solid #919699;
|
||||||
|
border-spacing: 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
table th {
|
||||||
|
padding: 4px 8px 4px 8px;
|
||||||
|
text-align: center;
|
||||||
|
color: white;
|
||||||
|
background: #606AAA;
|
||||||
|
border-bottom: 1px solid #919699;
|
||||||
|
border-right: 1px solid #919699;
|
||||||
|
}
|
||||||
|
table th p {
|
||||||
|
font-weight: bold;
|
||||||
|
margin-bottom: 0px;
|
||||||
|
}
|
||||||
|
|
||||||
|
table td {
|
||||||
|
padding: 8px;
|
||||||
|
vertical-align: top;
|
||||||
|
border-bottom: 1px solid #919699;
|
||||||
|
border-right: 1px solid #919699;
|
||||||
|
}
|
||||||
|
|
||||||
|
table td:last-child {
|
||||||
|
/* background: lightgray; */
|
||||||
|
text-align: right;
|
||||||
|
}
|
||||||
|
|
||||||
|
table td p {
|
||||||
|
margin-bottom: 0px;
|
||||||
|
}
|
||||||
|
table td p + p {
|
||||||
|
margin-top: 5px;
|
||||||
|
}
|
||||||
|
table td p + p + p {
|
||||||
|
margin-top: 5px;
|
||||||
|
}
|
||||||
@@ -49,7 +49,7 @@ xgboost.version = '0.3-0'
|
|||||||
This is an introductory document of using the \verb@xgboost@ package in R.
|
This is an introductory document of using the \verb@xgboost@ package in R.
|
||||||
|
|
||||||
\verb@xgboost@ is short for eXtreme Gradient Boosting package. It is an efficient
|
\verb@xgboost@ is short for eXtreme Gradient Boosting package. It is an efficient
|
||||||
and scalable implementation of gradient boosting framework by \citep{friedman2001greedy}.
|
and scalable implementation of gradient boosting framework by \citep{friedman2001greedy} \citep{friedman2000additive}.
|
||||||
The package includes efficient linear model solver and tree learning algorithm.
|
The package includes efficient linear model solver and tree learning algorithm.
|
||||||
It supports various objective functions, including regression, classification
|
It supports various objective functions, including regression, classification
|
||||||
and ranking. The package is made to be extendible, so that users are also allowed to define their own objectives easily. It has several features:
|
and ranking. The package is made to be extendible, so that users are also allowed to define their own objectives easily. It has several features:
|
||||||
@@ -214,3 +214,8 @@ competition.
|
|||||||
|
|
||||||
\end{document}
|
\end{document}
|
||||||
|
|
||||||
|
<<Temp file cleaning, include=FALSE>>=
|
||||||
|
file.remove("xgb.DMatrix")
|
||||||
|
file.remove("model.dump")
|
||||||
|
file.remove("model.save")
|
||||||
|
@
|
||||||
|
|||||||
405
R-package/vignettes/xgboostPresentation.Rmd
Normal file
405
R-package/vignettes/xgboostPresentation.Rmd
Normal file
@@ -0,0 +1,405 @@
|
|||||||
|
---
|
||||||
|
title: "Xgboost presentation"
|
||||||
|
output:
|
||||||
|
rmarkdown::html_vignette:
|
||||||
|
css: vignette.css
|
||||||
|
number_sections: yes
|
||||||
|
toc: yes
|
||||||
|
bibliography: xgboost.bib
|
||||||
|
author: Tianqi Chen, Tong He, Michaël Benesty
|
||||||
|
vignette: >
|
||||||
|
%\VignetteIndexEntry{Xgboost presentation}
|
||||||
|
%\VignetteEngine{knitr::rmarkdown}
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
---
|
||||||
|
|
||||||
|
Introduction
|
||||||
|
============
|
||||||
|
|
||||||
|
**Xgboost** is short for e**X**treme **G**radient **Boost**ing package.
|
||||||
|
|
||||||
|
The purpose of this Vignette is to show you how to use **Xgboost** to build a model and make predictions.
|
||||||
|
|
||||||
|
It is an efficient and scalable implementation of gradient boosting framework by @friedman2000additive and @friedman2001greedy. Two solvers are included:
|
||||||
|
|
||||||
|
- *linear* model ;
|
||||||
|
- *tree learning* algorithm.
|
||||||
|
|
||||||
|
It supports various objective functions, including *regression*, *classification* and *ranking*. The package is made to be extendible, so that users are also allowed to define their own objective functions easily.
|
||||||
|
|
||||||
|
It has been [used](https://github.com/dmlc/xgboost) to win several [Kaggle](http://www.kaggle.com) competitions.
|
||||||
|
|
||||||
|
It has several features:
|
||||||
|
|
||||||
|
* Speed: it can automatically do parallel computation on *Windows* and *Linux*, with *OpenMP*. It is generally over 10 times faster than the classical `gbm`.
|
||||||
|
* Input Type: it takes several types of input data:
|
||||||
|
* *Dense* Matrix: *R*'s *dense* matrix, i.e. `matrix` ;
|
||||||
|
* *Sparse* Matrix: *R*'s *sparse* matrix, i.e. `Matrix::dgCMatrix` ;
|
||||||
|
* Data File: local data files ;
|
||||||
|
* `xgb.DMatrix`: its own class (recommended).
|
||||||
|
* Sparsity: it accepts *sparse* input for both *tree booster* and *linear booster*, and is optimized for *sparse* input ;
|
||||||
|
* Customization: it supports customized objective functions and evaluation functions.
|
||||||
|
|
||||||
|
Installation
|
||||||
|
============
|
||||||
|
|
||||||
|
Github version
|
||||||
|
--------------
|
||||||
|
|
||||||
|
For up-to-date version (highly recommended), install from *Github*:
|
||||||
|
|
||||||
|
```{r installGithub, eval=FALSE}
|
||||||
|
devtools::install_github('dmlc/xgboost', subdir='R-package')
|
||||||
|
```
|
||||||
|
|
||||||
|
> *Windows* users will need to install [RTools](http://cran.r-project.org/bin/windows/Rtools/) first.
|
||||||
|
|
||||||
|
Cran version
|
||||||
|
------------
|
||||||
|
|
||||||
|
For stable version on *CRAN*, run:
|
||||||
|
|
||||||
|
```{r installCran, eval=FALSE}
|
||||||
|
install.packages('xgboost')
|
||||||
|
```
|
||||||
|
|
||||||
|
Learning
|
||||||
|
========
|
||||||
|
|
||||||
|
For the purpose of this tutorial we will load the **Xgboost** package.
|
||||||
|
|
||||||
|
```{r libLoading, results='hold', message=F, warning=F}
|
||||||
|
require(xgboost)
|
||||||
|
```
|
||||||
|
|
||||||
|
Dataset presentation
|
||||||
|
--------------------
|
||||||
|
|
||||||
|
In this example, we are aiming to predict whether a mushroom can be eaten or not (like in many tutorials, the example data are the same as the ones you will use in your everyday life :-).
|
||||||
|
|
||||||
|
Mushroom data is cited from UCI Machine Learning Repository. @Bache+Lichman:2013.
|
||||||
|
|
||||||
|
Dataset loading
|
||||||
|
---------------
|
||||||
|
|
||||||
|
We will load the `agaricus` datasets embedded with the package and will link them to variables.
|
||||||
|
|
||||||
|
The datasets are already split into:
|
||||||
|
|
||||||
|
* `train`: will be used to build the model ;
|
||||||
|
* `test`: will be used to assess the quality of our model.
|
||||||
|
|
||||||
|
Why *split* the dataset in two parts?
|
||||||
|
|
||||||
|
In the first part we will build our model. In the second part we will want to test it and assess its quality. Without dividing the dataset we would test the model on data which the algorithm has already seen.
|
||||||
|
|
||||||
|
```{r datasetLoading, results='hold', message=F, warning=F}
|
||||||
|
data(agaricus.train, package='xgboost')
|
||||||
|
data(agaricus.test, package='xgboost')
|
||||||
|
train <- agaricus.train
|
||||||
|
test <- agaricus.test
|
||||||
|
```
|
||||||
|
|
||||||
|
> In the real world, it would be up to you to make this division between `train` and `test` data. The way to do it is beyond the scope of this article, however the `caret` package may [help](http://topepo.github.io/caret/splitting.html); a minimal base-R sketch follows below.
|
||||||
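As referenced above, a minimal base-R sketch of such a split (`fullData` is a hypothetical data set, not part of this package):

```{r splitSketch, eval=FALSE}
# Random 80/20 train/test split (illustration only)
set.seed(1)
idx   <- sample(nrow(fullData), size = floor(0.8 * nrow(fullData)))
train <- fullData[idx, ]
test  <- fullData[-idx, ]
```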
|
|
||||||
|
Each variable is a `list` containing two things, `label` and `data`:
|
||||||
|
|
||||||
|
```{r dataList, message=F, warning=F}
|
||||||
|
str(train)
|
||||||
|
```
|
||||||
|
|
||||||
|
`label` is the outcome of our dataset, meaning it is the binary *classification* target we will try to predict.
|
||||||
|
|
||||||
|
Let's discover the dimensionality of our datasets.
|
||||||
|
|
||||||
|
```{r dataSize, message=F, warning=F}
|
||||||
|
dim(train$data)
|
||||||
|
dim(test$data)
|
||||||
|
```
|
||||||
|
|
||||||
|
This dataset is kept very small so the **R** package is not too heavy, however **Xgboost** is built to manage huge datasets very efficiently.
|
||||||
|
|
||||||
|
As seen below, the `data` are stored in a `dgCMatrix` which is a *sparse* matrix and `label` vector is a `numeric` vector (`{0,1}`):
|
||||||
|
|
||||||
|
```{r dataClass, message=F, warning=F}
|
||||||
|
class(train$data)[1]
|
||||||
|
class(train$label)
|
||||||
|
```
|
||||||
|
|
||||||
|
Basic Training using Xgboost
|
||||||
|
----------------------------
|
||||||
|
|
||||||
|
This step is the most critical part of the process for the quality of our model.
|
||||||
|
|
||||||
|
### Basic training
|
||||||
|
|
||||||
|
We are using the `train` data. As explained above, both `data` and `label` are stored in a `list`.
|
||||||
|
|
||||||
|
In a *sparse* matrix, cells containing `0` are not stored in memory. Therefore, in a dataset mainly made of `0`, memory size is reduced. Such datasets are very common.
|
||||||
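As a rough illustration of that memory saving (using the `train` object loaded above; `object.size` is from base **R**'s `utils`):

```{r sparseSizeSketch, eval=FALSE}
# Compare the footprint of the sparse matrix with its dense equivalent
print(object.size(train$data), units = "Mb")
print(object.size(as.matrix(train$data)), units = "Mb")
```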
|
|
||||||
|
We will train a decision tree model using the following parameters:
|
||||||
|
|
||||||
|
* `objective = "binary:logistic"`: we will train a binary classification model ;
|
||||||
|
* `max.depth = 2`: the trees won't be deep, because our case is very simple ;
|
||||||
|
* `nthread = 2`: the number of CPU threads we are going to use;
|
||||||
|
* `nround = 2`: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction.
|
||||||
|
|
||||||
|
```{r trainingSparse, message=F, warning=F}
|
||||||
|
bstSparse <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
|
||||||
|
```
|
||||||
|
|
||||||
|
> The more complex the relationship between your features and your `label` is, the more passes you need.
|
||||||
|
|
||||||
|
### Parameter variations
|
||||||
|
|
||||||
|
#### Dense matrix
|
||||||
|
|
||||||
|
Alternatively, you can put your dataset in a *dense* matrix, i.e. a basic **R** matrix.
|
||||||
|
|
||||||
|
```{r trainingDense, message=F, warning=F}
|
||||||
|
bstDense <- xgboost(data = as.matrix(train$data), label = train$label, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
|
||||||
|
```
|
||||||
|
|
||||||
|
#### xgb.DMatrix
|
||||||
|
|
||||||
|
**Xgboost** offers a way to group them in an `xgb.DMatrix`. You can even add other metadata to it. This will be useful for the more advanced features we will discover later.
|
||||||
|
|
||||||
|
```{r trainingDmatrix, message=F, warning=F}
|
||||||
|
dtrain <- xgb.DMatrix(data = train$data, label = train$label)
|
||||||
|
bstDMatrix <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Verbose option
|
||||||
|
|
||||||
|
**Xgboost** has several features to help you view how the learning progresses internally. The purpose is to help you set the best parameters, which is the key to your model's quality.
|
||||||
|
|
||||||
|
One of the simplest ways to see the training progress is to set the `verbose` option (see below for more advanced techniques).
|
||||||
|
|
||||||
|
```{r trainingVerbose0, message=T, warning=F}
|
||||||
|
# verbose = 0, no message
|
||||||
|
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic", verbose = 0)
|
||||||
|
```
|
||||||
|
|
||||||
|
```{r trainingVerbose1, message=T, warning=F}
|
||||||
|
# verbose = 1, print evaluation metric
|
||||||
|
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic", verbose = 1)
|
||||||
|
```
|
||||||
|
|
||||||
|
```{r trainingVerbose2, message=T, warning=F}
|
||||||
|
# verbose = 2, also print information about tree
|
||||||
|
bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic", verbose = 2)
|
||||||
|
```
|
||||||
|
|
||||||
|
Basic prediction using Xgboost
|
||||||
|
==============================
|
||||||
|
|
||||||
|
Perform the prediction
|
||||||
|
----------------------
|
||||||
|
|
||||||
|
The purpose of the model we have built is to classify new data. As explained before, we will use the `test` dataset for this step.
|
||||||
|
|
||||||
|
```{r predicting, message=F, warning=F}
|
||||||
|
pred <- predict(bst, test$data)
|
||||||
|
|
||||||
|
# size of the prediction vector
|
||||||
|
print(length(pred))
|
||||||
|
|
||||||
|
# limit display of predictions to the first 10
|
||||||
|
print(head(pred))
|
||||||
|
```
|
||||||
|
|
||||||
|
These numbers don't look like *binary classification* `{0,1}`. We need to perform a simple transformation before being able to use these results.
|
||||||
|
|
||||||
|
Transform the regression in a binary classification
|
||||||
|
---------------------------------------------------
|
||||||
|
|
||||||
|
The only thing that **Xgboost** does is a *regression*. **Xgboost** is using the `label` vector to build its *regression* model.
|
||||||
|
|
||||||
|
How can we use a *regression* model to perform a binary classification?
|
||||||
|
|
||||||
|
If we think about the meaning of a regression applied to our data, the numbers we get are probabilities that a datum will be classified as `1`. Therefore, we will set the rule that if this probability for a specific datum is `> 0.5` then the observation is classified as `1` (or `0` otherwise).
|
||||||
|
|
||||||
|
```{r predictingTest, message=F, warning=F}
|
||||||
|
prediction <- as.numeric(pred > 0.5)
|
||||||
|
print(head(prediction))
|
||||||
|
```
|
||||||
|
|
||||||
|
Measuring model performance
|
||||||
|
---------------------------
|
||||||
|
|
||||||
|
To measure the model performance, we will compute a simple metric, the *average error*.
|
||||||
|
|
||||||
|
```{r predictingAverageError, message=F, warning=F}
|
||||||
|
err <- mean(as.numeric(pred > 0.5) != test$label)
|
||||||
|
print(paste("test-error=", err))
|
||||||
|
```
|
||||||
|
|
||||||
|
> Note that the algorithm has not seen the `test` data during the model construction.
|
||||||
|
|
||||||
|
Steps explanation:
|
||||||
|
|
||||||
|
1. `as.numeric(pred > 0.5)` applies our rule that when the probability (<=> regression <=> prediction) is `> 0.5` the observation is classified as `1` and `0` otherwise ;
|
||||||
|
2. `probabilityVectorPreviouslyComputed != test$label` computes the vector of error between true data and computed probabilities ;
|
||||||
|
3. `mean(vectorOfErrors)` computes the *average error* itself.
|
||||||
|
|
||||||
|
The most important thing to remember is that **to do a classification, you just do a regression to the** `label` **and then apply a threshold**.
|
||||||
|
|
||||||
|
*Multiclass* classification works in a similar way; a hedged sketch follows below.
|
||||||
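As noted above, a hedged multiclass sketch (not run; `multiclass_label` is hypothetical and must contain values in `{0, ..., num_class - 1}`):

```{r multiclassSketch, eval=FALSE}
# Multiclass classification: swap the objective and give the class count
bstMulti <- xgboost(data = train$data, label = multiclass_label,
                    max.depth = 2, eta = 1, nround = 2,
                    objective = "multi:softmax", num_class = 3)
```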
|
|
||||||
|
This metric is **`r round(err, 2)`** and is pretty low: our yummy mushroom model works well!
|
||||||
|
|
||||||
|
Advanced features
|
||||||
|
=================
|
||||||
|
|
||||||
|
Most of the features below have been implemented to help you to improve your model by offering a better understanding of its content.
|
||||||
|
|
||||||
|
|
||||||
|
Dataset preparation
|
||||||
|
-------------------
|
||||||
|
|
||||||
|
For the following advanced features, we need to put data in `xgb.DMatrix` as explained above.
|
||||||
|
|
||||||
|
```{r DMatrix, message=F, warning=F}
|
||||||
|
dtrain <- xgb.DMatrix(data = train$data, label=train$label)
|
||||||
|
dtest <- xgb.DMatrix(data = test$data, label=test$label)
|
||||||
|
```
|
||||||
|
|
||||||
|
Measure learning progress with xgb.train
|
||||||
|
----------------------------------------
|
||||||
|
|
||||||
|
Both `xgboost` (simple) and `xgb.train` (advanced) functions train models.
|
||||||
|
|
||||||
|
One of the special features of `xgb.train` is the capacity to follow the progress of the learning after each round. Because of the way boosting works, there is a point where having too many rounds leads to overfitting. You can see this feature as a cousin of the cross-validation method (a hedged `xgb.cv` sketch follows below). The following techniques will help you avoid overfitting and optimize the learning time by stopping it as soon as possible.
|
||||||
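As mentioned above, a sketch of that cross-validation cousin using the package's `xgb.cv` function (parameters as used elsewhere in this vignette; `nfold` is the number of folds):

```{r cvSketch, eval=FALSE}
# 5-fold cross-validation: error is measured on held-out folds each round
xgb.cv(data = dtrain, nfold = 5, max.depth = 2, eta = 1,
       nround = 2, objective = "binary:logistic")
```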
|
|
||||||
|
One way to measure progress in learning of a model is to provide to **Xgboost** a second dataset already classified. Therefore it can learn on the first dataset and test its model on the second one. Some metrics are measured after each round during the learning.
|
||||||
|
|
||||||
|
> In some way it is similar to what we have done above with the average error. The main difference is that above we measured the error after building the model, whereas here we measure errors during its construction.
|
||||||
|
|
||||||
|
For the purpose of this example, we use `watchlist` parameter. It is a list of `xgb.DMatrix`, each of them tagged with a name.
|
||||||
|
|
||||||
|
```{r watchlist, message=F, warning=F}
|
||||||
|
watchlist <- list(train=dtrain, test=dtest)
|
||||||
|
|
||||||
|
bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nround=2, watchlist=watchlist, objective = "binary:logistic")
|
||||||
|
```
|
||||||
|
|
||||||
|
**Xgboost** has computed at each round the same average error metric as seen above (we set `nround` to 2, that is why we have two lines). Obviously, the `train-error` number is related to the training dataset (the one the algorithm learns from) and the `test-error` number to the test dataset.
|
||||||
|
|
||||||
|
Both training and test error related metrics are very similar, and in some way, it makes sense: what we have learned from the training dataset matches the observations from the test dataset.
|
||||||
|
|
||||||
|
If you don't get such results with your own dataset, you should think about how you divided it into training and test sets. Maybe there is something to fix. Again, the `caret` package may [help](http://topepo.github.io/caret/splitting.html).
|
||||||
|
|
||||||
|
For a better understanding of the learning progression, you may want to have some specific metric or even use multiple evaluation metrics.
|
||||||
|
|
||||||
|
```{r watchlist2, message=F, warning=F}
|
||||||
|
bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nround=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
|
||||||
|
```
|
||||||
|
|
||||||
|
> `eval.metric` allows us to monitor two new metrics for each round, `logloss` and `error`.
|
||||||
|
|
||||||
|
Linear boosting
|
||||||
|
---------------
|
||||||
|
|
||||||
|
Until now, all the learning we have performed was based on boosted trees. **Xgboost** implements a second algorithm, based on linear boosting. The only difference with the previous command is the `booster = "gblinear"` parameter (and removing the `eta` parameter).
|
||||||
|
|
||||||
|
```{r linearBoosting, message=F, warning=F}
|
||||||
|
bst <- xgb.train(data=dtrain, booster = "gblinear", max.depth=2, nthread = 2, nround=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
|
||||||
|
```
|
||||||
|
|
||||||
|
In this specific case, *linear boosting* gets slightly better performance metrics than the decision-tree-based algorithm.
|
||||||
|
|
||||||
|
In simple cases, this will happen because there is nothing better than a linear algorithm to catch a linear link. However, decision trees are much better at catching a non-linear link between predictors and outcome. Because there is no silver bullet, we advise you to check both algorithms with your own datasets to get an idea of what to use.
|
||||||
|
|
||||||
|
Manipulating xgb.DMatrix
|
||||||
|
------------------------
|
||||||
|
|
||||||
|
### Save / Load
|
||||||
|
|
||||||
|
Like models, an `xgb.DMatrix` object (which groups both dataset and outcome) can also be saved, using the `xgb.DMatrix.save` function.
|
||||||
|
|
||||||
|
```{r DMatrixSave, message=F, warning=F}
|
||||||
|
xgb.DMatrix.save(dtrain, "dtrain.buffer")
|
||||||
|
# to load it in, simply call xgb.DMatrix
|
||||||
|
dtrain2 <- xgb.DMatrix("dtrain.buffer")
|
||||||
|
bst <- xgb.train(data=dtrain2, max.depth=2, eta=1, nthread = 2, nround=2, watchlist=watchlist, objective = "binary:logistic")
|
||||||
|
```
|
||||||
|
|
||||||
|
```{r DMatrixDel, include=FALSE}
|
||||||
|
file.remove("dtrain.buffer")
|
||||||
|
```
|
||||||
|
|
||||||
|
### Information extraction
|
||||||
|
|
||||||
|
Information can be extracted from `xgb.DMatrix` using `getinfo` function. Hereafter we will extract `label` data.
|
||||||
|
|
||||||
|
```{r getinfo, message=F, warning=F}
|
||||||
|
label = getinfo(dtest, "label")
|
||||||
|
pred <- predict(bst, dtest)
|
||||||
|
err <- as.numeric(sum(as.integer(pred > 0.5) != label))/length(label)
|
||||||
|
print(paste("test-error=", err))
|
||||||
|
```
|
||||||
|
|
||||||
|
View the trees from a model
|
||||||
|
---------------------------
|
||||||
|
|
||||||
|
You can dump the tree you learned using `xgb.dump` into a text file.
|
||||||
|
|
||||||
|
```{r dump, message=T, warning=F}
|
||||||
|
xgb.dump(bst, with.stats = T)
|
||||||
|
```
|
||||||
|
|
||||||
|
> If you provide a path to the `fname` parameter, you can save the trees to your hard drive.
|
||||||
|
|
||||||
|
Save and load models
|
||||||
|
--------------------
|
||||||
|
|
||||||
|
Maybe your dataset is big and it takes time to train a model on it? Maybe you are not a big fan of losing time redoing the same task again and again? In these cases, you will want to save your model and load it when required.
|
||||||
|
|
||||||
|
Fortunately, **Xgboost** implements such functions.
|
||||||
|
|
||||||
|
```{r saveModel, message=F, warning=F}
|
||||||
|
# save model to binary local file
|
||||||
|
xgb.save(bst, "xgboost.model")
|
||||||
|
```
|
||||||
|
|
||||||
|
> The `xgb.save` function should return `r TRUE` if everything goes well, and it crashes otherwise.
|
||||||
|
|
||||||
|
An interesting test to see how identical our saved model is to the original one would be to compare the two predictions.
|
||||||
|
|
||||||
|
```{r loadModel, message=F, warning=F}
|
||||||
|
# load binary model to R
|
||||||
|
bst2 <- xgb.load("xgboost.model")
|
||||||
|
pred2 <- predict(bst2, test$data)
|
||||||
|
|
||||||
|
# And now the test
|
||||||
|
print(paste("sum(abs(pred2-pred))=", sum(abs(pred2-pred))))
|
||||||
|
```
|
||||||
|
|
||||||
|
```{r clean, include=FALSE}
|
||||||
|
# delete the created model
|
||||||
|
file.remove("./xgboost.model")
|
||||||
|
```
|
||||||
|
|
||||||
|
> result is `0`? We are good!
|
||||||
|
|
||||||
|
In some very specific cases, like when you want to pilot **Xgboost** from the `caret` package, you will want to save the model as an *R* raw vector. See below how to do it.
|
||||||
|
|
||||||
|
```{r saveLoadRBinVectorModel, message=F, warning=F}
|
||||||
|
# save model to R's raw vector
|
||||||
|
rawVec <- xgb.save.raw(bst)
|
||||||
|
|
||||||
|
# print class
|
||||||
|
print(class(rawVec))
|
||||||
|
|
||||||
|
# load binary model to R
|
||||||
|
bst3 <- xgb.load(rawVec)
|
||||||
|
pred3 <- predict(bst3, test$data)
|
||||||
|
|
||||||
|
# pred3 should be identical to pred
|
||||||
|
print(paste("sum(abs(pred3-pred))=", sum(abs(pred2-pred))))
|
||||||
|
```
|
||||||
|
|
||||||
|
> Again `0`? It seems that `Xgboost` works pretty well!
|
||||||
|
|
||||||
|
References
|
||||||
|
==========
|
||||||
73
README.md
73
README.md
@@ -1,52 +1,57 @@
|
|||||||
xgboost: eXtreme Gradient Boosting
|
XGBoost: eXtreme Gradient Boosting
|
||||||
======
|
==================================
|
||||||
An optimized general purpose gradient boosting library. The library is parallelized using OpenMP. It implements machine learning algorithm under gradient boosting framework, including generalized linear model and gradient boosted regression tree.
|
|
||||||
|
|
||||||
Contributors: https://github.com/tqchen/xgboost/graphs/contributors
|
An optimized general purpose gradient boosting library. The library is parallelized, and also provides an optimized distributed version.
|
||||||
|
It implements machine learning algorithms under the gradient boosting framework, including generalized linear models and gradient boosted regression trees (GBDT). XGBoost can also be distributed and scale to terascale data
|
||||||
|
|
||||||
Turorial and Documentation: https://github.com/tqchen/xgboost/wiki
|
Contributors: https://github.com/dmlc/xgboost/graphs/contributors
|
||||||
|
|
||||||
Questions and Issues: [https://github.com/tqchen/xgboost/issues](https://github.com/tqchen/xgboost/issues?q=is%3Aissue+label%3Aquestion)
|
Documentations: [Documentation of xgboost](doc/README.md)
|
||||||
|
|
||||||
Examples Code: [Learning to use xgboost by examples](demo)
|
Issues Tracker: [https://github.com/dmlc/xgboost/issues](https://github.com/dmlc/xgboost/issues?q=is%3Aissue+label%3Aquestion)
|
||||||
|
|
||||||
Notes on the Code: [Code Guide](src)
|
Please join [XGBoost User Group](https://groups.google.com/forum/#!forum/xgboost-user/) to ask questions and share your experience on xgboost.
|
||||||
|
- Use the issue tracker for bug reports, feature requests, etc.
|
||||||
|
- Use the user group to post your experience and ask questions about general usage.
|
||||||
|
|
||||||
|
Gitter for developers [](https://gitter.im/dmlc/xgboost?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
|
||||||
|
|
||||||
|
Distributed Version: [Distributed XGBoost](multi-node)
|
||||||
|
|
||||||
|
Highlights of Usecases: [Highlight Links](doc/README.md#highlight-links)
|
||||||
|
|
||||||
What's New
|
What's New
|
||||||
=====
|
==========
|
||||||
* See the updated [demo folder](demo) for feature walkthrough
|
* XGBoost-0.4 release, see [CHANGES.md](CHANGES.md#xgboost-04)
|
||||||
* Thanks to Tong He, the new [R package](R-package) is available
|
* XGBoost wins [WWW2015 Microsoft Malware Classification Challenge (BIG 2015)](http://www.kaggle.com/c/malware-classification/forums/t/13490/say-no-to-overfitting-approaches-sharing)
|
||||||
|
- Checkout the winning solution at [Highlight links](doc/README.md#highlight-links)
|
||||||
|
* [External Memory Version](doc/external_memory.md)
|
||||||
|
|
||||||
Features
|
Features
|
||||||
======
|
========
|
||||||
* Sparse feature format:
|
* Easily accessible in python, R, Julia, CLI
|
||||||
- Sparse feature format allows easy handling of missing values, and improve computation efficiency.
|
* Fast speed and memory efficient
|
||||||
* Push the limit on single machine:
|
- Can be more than 10 times faster than GBM in sklearn and R
|
||||||
- Efficient implementation that optimizes memory and computation.
|
- Handles sparse matrices, support external memory
|
||||||
* Speed: XGBoost is very fast
|
* Accurate prediction, and used extensively by data scientists and kagglers
|
||||||
- IN [demo/higgs/speedtest.py](demo/kaggle-higgs/speedtest.py), kaggle higgs data it is faster(on our machine 20 times faster using 4 threads) than sklearn.ensemble.GradientBoostingClassifier
|
- See [highlight links](https://github.com/dmlc/xgboost/blob/master/doc/README.md#highlight-links)
|
||||||
* Layout of gradient boosting algorithm to support user defined objective
|
* Distributed and Portable
|
||||||
* Python interface, works with numpy and scipy.sparse matrix
|
- The distributed version runs on Hadoop (YARN), MPI, SGE etc.
|
||||||
|
- Scales to billions of examples and beyond
|
||||||
|
|
||||||
Build
|
Build
|
||||||
=====
|
=======
|
||||||
* Run ```bash build.sh``` (you can also type make)
|
* Run ```bash build.sh``` (you can also type make)
|
||||||
* If your compiler does not come with OpenMP support, it will fire an warning telling you that the code will compile into single thread mode, and you will get single thread xgboost
|
- Normally it gives what you want
|
||||||
* You may get a error: -lgomp is not found
|
- See [Build Instruction](doc/build.md) for more information
|
||||||
- You can type ```make no_omp=1```, this will get you single thread xgboost
|
|
||||||
- Alternatively, you can upgrade your compiler to compile multi-thread version
|
|
||||||
* Windows(VS 2010): see [windows](windows) folder
|
|
||||||
- In principle, you put all the cpp files in the Makefile to the project, and build
|
|
||||||
|
|
||||||
Version
|
Version
|
||||||
======
|
=======
|
||||||
* This version xgboost-0.3, the code has been refactored from 0.2x to be cleaner and more flexibility
|
* Current version xgboost-0.4, a lot improvment has been made since 0.3
|
||||||
* This version of xgboost is not compatible with 0.2x, due to huge amount of changes in code structure
|
- Change log in [CHANGES.md](CHANGES.md)
|
||||||
- This means the model and buffer file of previous version can not be loaded in xgboost-3.0
|
- This version is compatible with 0.3x versions
|
||||||
* For legacy 0.2x code, refer to [Here](https://github.com/tqchen/xgboost/releases/tag/v0.22)
|
|
||||||
* Change log in [CHANGES.md](CHANGES.md)
|
|
||||||
|
|
||||||
XGBoost in Graphlab Create
|
XGBoost in Graphlab Create
|
||||||
======
|
==========================
|
||||||
* XGBoost is adopted as part of boosted tree toolkit in Graphlab Create (GLC). Graphlab Create is a powerful python toolkit that allows you to data manipulation, graph processing, hyper-parameter search, and visualization of TeraBytes scale data in one framework. Try the Graphlab Create in http://graphlab.com/products/create/quick-start-guide.html
|
* XGBoost is adopted as part of boosted tree toolkit in Graphlab Create (GLC). Graphlab Create is a powerful python toolkit that allows you to data manipulation, graph processing, hyper-parameter search, and visualization of TeraBytes scale data in one framework. Try the Graphlab Create in http://graphlab.com/products/create/quick-start-guide.html
|
||||||
* Nice blogpost by Jay Gu using GLC boosted tree to solve kaggle bike sharing challenge: http://blog.graphlab.com/using-gradient-boosted-trees-to-predict-bike-sharing-demand
|
* Nice blogpost by Jay Gu using GLC boosted tree to solve kaggle bike sharing challenge: http://blog.graphlab.com/using-gradient-boosted-trees-to-predict-bike-sharing-demand
|
||||||

build.sh
@@ -1,8 +1,12 @@
 #!/bin/bash
-# this is a simple script to make xgboost in MAC nad Linux
-# basically, it first try to make with OpenMP, if fails, disable OpenMP and make again
-# This will automatically make xgboost for MAC users who do not have openmp support
-# In most cases, type make will give what you want
+# This is a simple script to make xgboost in MAC and Linux
+# Basically, it first try to make with OpenMP, if fails, disable OpenMP and make it again.
+# This will automatically make xgboost for MAC users who don't have OpenMP support.
+# In most cases, type make will give what you want.
+
+# See additional instruction in doc/build.md

 if make; then
     echo "Successfully build multi-thread xgboost"
 else
@@ -12,4 +16,6 @@ else
     make clean
     make no_omp=1
     echo "Successfully build single-thread xgboost"
+    echo "If you want multi-threaded version"
+    echo "See additional instructions in doc/build.md"
 fi

demo/.gitignore (new file)
@@ -0,0 +1 @@
+*.libsvm
@@ -1,22 +1,45 @@
 XGBoost Examples
 ====
-This folder contains the all example codes using xgboost.
+This folder contains all the code examples using xgboost.

-* Contribution of exampls, benchmarks is more than welcomed!
+* Contribution of examples, benchmarks is more than welcome!
 * If you like to share how you use xgboost to solve your problem, send a pull request:)

 Features Walkthrough
 ====
 This is a list of short codes introducing different functionalities of xgboost and its wrapper.
-* Basic walkthrough of wrappers [python](guide-python/basic_walkthrough.py)
-* Cutomize loss function, and evaluation metric [python](guide-python/custom_objective.py)
-* Boosting from existing prediction [python](guide-python/boost_from_prediction.py)
-* Predicting using first n trees [python](guide-python/predict_first_ntree.py)
-* Generalized Linear Model [python](guide-python/generalized_linear_model.py)
-* Cross validation [python](guide-python/cross_validation.py)
+* Basic walkthrough of wrappers
+  [python](guide-python/basic_walkthrough.py)
+  [R](../R-package/demo/basic_walkthrough.R)
+  [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/basic_walkthrough.jl)
+* Customize loss function, and evaluation metric
+  [python](guide-python/custom_objective.py)
+  [R](../R-package/demo/custom_objective.R)
+  [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/custom_objective.jl)
+* Boosting from existing prediction
+  [python](guide-python/boost_from_prediction.py)
+  [R](../R-package/demo/boost_from_prediction.R)
+  [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/boost_from_prediction.jl)
+* Predicting using first n trees
+  [python](guide-python/predict_first_ntree.py)
+  [R](../R-package/demo/boost_from_prediction.R)
+  [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/boost_from_prediction.jl)
+* Generalized Linear Model
+  [python](guide-python/generalized_linear_model.py)
+  [R](../R-package/demo/generalized_linear_model.R)
+  [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/generalized_linear_model.jl)
+* Cross validation
+  [python](guide-python/cross_validation.py)
+  [R](../R-package/demo/cross_validation.R)
+  [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/cross_validation.jl)
+* Predicting leaf indices
+  [python](guide-python/predict_leaf_indices.py)
+  [R](../R-package/demo/predict_leaf_indices.R)

 Basic Examples by Tasks
 ====
+Most of examples in this section are based on CLI or python version.
+However, the parameter settings can be applied to all versions
 * [Binary classification](binary_classification)
 * [Multiclass classification](multiclass_classification)
 * [Regression](regression)
@@ -25,3 +48,5 @@ Basic Examples by Tasks
 Benchmarks
 ====
 * [Starter script for Kaggle Higgs Boson](kaggle-higgs)
+* [Kaggle Tradeshift winning solution by daxiongshu](https://github.com/daxiongshu/kaggle-tradeshift-winning-solution)
@@ -1,14 +0,0 @@
-Demonstrating how to use XGBoost accomplish binary classification tasks on UCI mushroom dataset http://archive.ics.uci.edu/ml/datasets/Mushroom
-
-Run: ./runexp.sh
-
-Format of input: LIBSVM format
-
-Format of ```featmap.txt: <featureid> <featurename> <q or i or int>\n ```:
-  - Feature id must be from 0 to number of features, in sorted order.
-  - i means this feature is binary indicator feature
-  - q means this feature is a quantitative value, such as age, time, can be missing
-  - int means this feature is integer value (when int is hinted, the decision boundary will be integer)
-
-Explainations: https://github.com/tqchen/xgboost/wiki/Binary-Classification

demo/binary_classification/README.md (new file)
@@ -0,0 +1,172 @@
Binary Classification
====
This is the quick start tutorial for xgboost CLI version. You can also checkout [../../doc/README.md](../../doc/README.md) for links to tutorials in python or R.
Here we demonstrate how to use XGBoost for a binary classification task. Before getting started, make sure you compile xgboost in the root directory of the project by typing ```make```
The script runexp.sh can be used to run the demo. Here we use the [mushroom dataset](https://archive.ics.uci.edu/ml/datasets/Mushroom) from the UCI machine learning repository.

### Tutorial
#### Generate Input Data
XGBoost takes LibSVM format. An example of faked input data is below:
```
1 101:1.2 102:0.03
0 1:2.1 10001:300 10002:400
...
```
Each line represents a single instance; in the first line, '1' is the instance label, '101' and '102' are feature indices, and '1.2' and '0.03' are feature values. In the binary classification case, '1' is used to indicate positive samples, and '0' is used to indicate negative samples. We also support probability values in [0,1] as label, to indicate the probability of the instance being positive.

First we will transform the dataset into classic LibSVM format and split the data into training set and test set by running:
```
python mapfeat.py
python mknfold.py agaricus.txt 1
```
The two files, 'agaricus.txt.train' and 'agaricus.txt.test', will be used as training set and test set.

#### Training
Then we can run the training process:
```
../../xgboost mushroom.conf
```

mushroom.conf is the configuration for both training and testing. Each line contains an [attribute]=[value] configuration:

```conf
# General Parameters, see comment for each definition
# can be gbtree or gblinear
booster = gbtree
# choose logistic regression loss function for binary classification
objective = binary:logistic

# Tree Booster Parameters
# step size shrinkage
eta = 1.0
# minimum loss reduction required to make a further partition
gamma = 1.0
# minimum sum of instance weight(hessian) needed in a child
min_child_weight = 1
# maximum depth of a tree
max_depth = 3

# Task Parameters
# the number of round to do boosting
num_round = 2
# 0 means do not save any model except the final round model
save_period = 0
# The path of training data
data = "agaricus.txt.train"
# The path of validation data, used to monitor training process, here [test] sets name of the validation set
eval[test] = "agaricus.txt.test"
# The path of test data
test:data = "agaricus.txt.test"
```
We use the tree booster and logistic regression objective in our setting. This indicates that we accomplish our task using classic gradient boosting regression tree (GBRT), which is a promising method for binary classification.

The parameters shown in the example give the most common ones that are needed to use xgboost.
If you are interested in more parameter settings, the complete parameter settings and detailed descriptions are [here](../../doc/parameter.md). Besides putting the parameters in the configuration file, we can set them by passing them as arguments as below:

```
../../xgboost mushroom.conf max_depth=6
```
This means that the parameter max_depth will be set as 6 rather than 3 as in the conf file. When you use the command line, make sure max_depth=6 is passed in as a single argument, i.e. do not contain a space in the argument. When a parameter setting is provided in both the command line input and the config file, the command line setting will override the setting in the config file.

In this example, we use the tree booster for gradient boosting. If you would like to use the linear booster for regression, you can keep all the parameters except booster and the tree booster parameters as below:
```conf
# General Parameters
# choose the linear booster
booster = gblinear
...

# Change Tree Booster Parameters into Linear Booster Parameters
# L2 regularization term on weights, default 0
lambda = 0.01
# L1 regularization term on weights, default 0
alpha = 0.01
# L2 regularization term on bias, default 0
lambda_bias = 0.01

# Regression Parameters
...
```

#### Get Predictions
After training, we can use the output model to get the prediction of the test data:
```
../../xgboost mushroom.conf task=pred model_in=0003.model
```
For binary classification, the output predictions are probability confidence scores in [0,1], corresponding to the probability of the label being positive.

#### Dump Model
This is a preliminary feature, so far only the tree model supports text dump. XGBoost can display the tree models in text files and we can scan the model in an easy way:
```
../../xgboost mushroom.conf task=dump model_in=0003.model name_dump=dump.raw.txt
../../xgboost mushroom.conf task=dump model_in=0003.model fmap=featmap.txt name_dump=dump.nice.txt
```

In this demo, the tree boosters obtained will be printed in dump.raw.txt and dump.nice.txt, and the latter one is easier to understand because of the usage of the feature mapping featmap.txt

Format of ```featmap.txt: <featureid> <featurename> <q or i or int>\n ``` (a made-up sample follows this list):
  - Feature id must be from 0 to number of features, in sorted order.
  - i means this feature is binary indicator feature
  - q means this feature is a quantitative value, such as age, time, can be missing
  - int means this feature is integer value (when int is hinted, the decision boundary will be integer)
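For illustration, a few sample lines in that format (the feature names are invented for this sketch; real entries are generated by mapfeat.py, and only the three-column layout and the type codes matter):
```
0	cap-shape=bell	i
1	odor=none	i
2	stalk-height	q
```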
#### Monitoring Progress
When you run training you can find messages displayed on screen
```
tree train end, 1 roots, 12 extra nodes, 0 pruned nodes ,max_depth=3
[0] test-error:0.016139
boosting round 1, 0 sec elapsed

tree train end, 1 roots, 10 extra nodes, 0 pruned nodes ,max_depth=3
[1] test-error:0.000000
```
The messages for evaluation are printed into stderr, so if you want only to log the evaluation progress, simply type
```
../../xgboost mushroom.conf 2>log.txt
```
Then you can find the following content in log.txt
```
[0] test-error:0.016139
[1] test-error:0.000000
```
We can also monitor both training and test statistics, by adding the following lines to the configuration
```conf
eval[test] = "agaricus.txt.test"
eval[trainname] = "agaricus.txt.train"
```
Run the command again, and we can find the log file becomes
```
[0] test-error:0.016139 trainname-error:0.014433
[1] test-error:0.000000 trainname-error:0.001228
```
The rule is eval[name-printed-in-log] = filename; the file will then be added to the monitoring process, and evaluated each round.

xgboost also supports monitoring multiple metrics. Suppose we also want to monitor the average log-likelihood of each prediction during training, simply add ```eval_metric=logloss``` to the configuration. Run again, and we can find the log file becomes
```
[0] test-error:0.016139 test-negllik:0.029795 trainname-error:0.014433 trainname-negllik:0.027023
[1] test-error:0.000000 test-negllik:0.000000 trainname-error:0.001228 trainname-negllik:0.002457
```

#### Saving Progress Models
If you want to save the model every two rounds, simply set save_period=2. You will find 0002.model in the current folder. If you want to change the output folder of models, add model_dir=foldername. By default xgboost saves the model of the last round.
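A minimal sketch combining the settings just described, passed on the command line in the same override style as earlier (the `models` folder name here is only an illustrative choice):
```
../../xgboost mushroom.conf save_period=2 model_dir=models
```
After such a run you would find snapshot files like 0002.model in the chosen folder.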
#### Continue from Existing Model
If you want to continue boosting from an existing model, say 0002.model, use
```
../../xgboost mushroom.conf model_in=0002.model num_round=2 model_out=continue.model
```
xgboost will load from 0002.model, continue boosting for 2 rounds, and save the output to continue.model. However, beware that the training and evaluation data specified in mushroom.conf should not change when you use this function.

#### Use Multi-Threading
When you are working with a large dataset, you may want to take advantage of parallelism. If your compiler supports OpenMP, xgboost is naturally multi-threaded; to set the number of parallel running threads to 10, add ```nthread=10``` to your configuration.
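For example, reusing the command-line override style from above (10 threads, as in the text; any core count works):
```
../../xgboost mushroom.conf nthread=10
```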
#### Additional Notes
* What are ```agaricus.txt.test.buffer``` and ```agaricus.txt.train.buffer``` generated during runexp.sh?
  - By default xgboost will automatically generate a binary format buffer of input data, with suffix ```buffer```. When you next run xgboost, it detects if ```agaricus.txt.test.buffer``` exists, and automatically loads from the binary buffer if possible; this can speed up the training process when you do training many times. You can disable it by setting ```use_buffer=0```.
  - Buffer file can also be used as standalone input, i.e. if the buffer file exists but the original agaricus.txt.test was removed, xgboost will still run
* Deviation from LibSVM input format: xgboost is compatible with LibSVM format, with the following minor differences:
  - xgboost allows the feature index to start from 0
  - for binary classification, the label is 1 for positive, 0 for negative, instead of +1,-1
  - the feature indices in each line *do not* need to be sorted

@@ -1,17 +1,16 @@
 #!/usr/bin/python
-import sys

 def loadfmap( fname ):
     fmap = {}
     nmap = {}

     for l in open( fname ):
         arr = l.split()
         if arr[0].find('.') != -1:
             idx = int( arr[0].strip('.') )
             assert idx not in fmap
             fmap[ idx ] = {}
             ftype = arr[1].strip(':')
             content = arr[2]
         else:
             content = arr[0]
@@ -23,7 +22,7 @@ def loadfmap( fname ):
             nmap[ len(nmap) ] = ftype+'='+k
     return fmap, nmap

 def write_nmap( fo, nmap ):
     for i in range( len(nmap) ):
         fo.write('%d\t%s\ti\n' % (i, nmap[i]) )

@@ -33,7 +32,7 @@ fo = open( 'featmap.txt', 'w' )
 write_nmap( fo, nmap )
 fo.close()

 fo = open( 'agaricus.txt', 'w' )
 for l in open( 'agaricus-lepiota.data' ):
     arr = l.split(',')
     if arr[0] == 'p':
@@ -47,4 +46,4 @@ for l in open( 'agaricus-lepiota.data' ):

 fo.close()

@@ -6,3 +6,6 @@ XGBoost Python Feature Walkthrough
 * [Predicting using first n trees](predict_first_ntree.py)
 * [Generalized Linear Model](generalized_linear_model.py)
 * [Cross validation](cross_validation.py)
+* [Predicting leaf indices](predict_leaf_indices.py)
+* [Sklearn Wrapper](sklearn_example.py)
+* [External Memory](external_memory.py)
@@ -1,10 +1,6 @@
 #!/usr/bin/python
-import sys
 import numpy as np
 import scipy.sparse
-# append the path to xgboost, you may need to change the following line
-# alternatively, you can add the path to PYTHONPATH environment variable
-sys.path.append('../../wrapper')
 import xgboost as xgb

 ### simple example
@@ -33,7 +29,7 @@ bst.dump_model('dump.nice.txt','../data/featmap.txt')
 # save dmatrix into binary buffer
 dtest.save_binary('dtest.buffer')
 bst.save_model('xgb.model')
 # load model and data in
 bst2 = xgb.Booster(model_file='xgb.model')
 dtest2 = xgb.DMatrix('dtest.buffer')
 preds2 = bst2.predict(dtest2)
@@ -1,7 +1,5 @@
 #!/usr/bin/python
-import sys
 import numpy as np
-sys.path.append('../../wrapper')
 import xgboost as xgb

 dtrain = xgb.DMatrix('../data/agaricus.txt.train')
@@ -1,7 +1,5 @@
 #!/usr/bin/python
-import sys
 import numpy as np
-sys.path.append('../../wrapper')
 import xgboost as xgb

 ### load data in do training
@@ -56,7 +54,7 @@ def evalerror(preds, dtrain):
     labels = dtrain.get_label()
     return 'error', float(sum(labels != (preds > 0.0))) / len(labels)

 param = {'max_depth':2, 'eta':1, 'silent':1}
 # train with customized objective
 xgb.cv(param, dtrain, num_round, nfold = 5, seed = 0,
        obj = logregobj, feval=evalerror)
@@ -1,11 +1,9 @@
 #!/usr/bin/python
-import sys
 import numpy as np
-sys.path.append('../../wrapper')
 import xgboost as xgb
 ###
 # advanced: cutomsized loss function
 #
 print ('start running example to used cutomized objective function')

 dtrain = xgb.DMatrix('../data/agaricus.txt.train')


demo/guide-python/external_memory.py (new executable file)
@@ -0,0 +1,25 @@
#!/usr/bin/python
import numpy as np
import scipy.sparse
import xgboost as xgb

### simple example for using external memory version

# this is the only difference, add a # followed by a cache prefix name
# several cache file with the prefix will be generated
# currently only support convert from libsvm file
dtrain = xgb.DMatrix('../data/agaricus.txt.train#dtrain.cache')
dtest = xgb.DMatrix('../data/agaricus.txt.test#dtest.cache')

# specify validations set to watch performance
param = {'max_depth':2, 'eta':1, 'silent':1, 'objective':'binary:logistic' }

# performance notice: set nthread to be the number of your real cpu
# some cpu offer two threads per core, for example, a 4 core cpu with 8 threads, in such case set nthread=4
#param['nthread']=num_real_cpu

watchlist = [(dtest,'eval'), (dtrain,'train')]
num_round = 2
bst = xgb.train(param, dtrain, num_round, watchlist)

@@ -1,6 +1,4 @@
 #!/usr/bin/python
-import sys
-sys.path.append('../../wrapper')
 import xgboost as xgb
 ##
 # this script demonstrate how to fit generalized linear model in xgboost
@@ -9,17 +7,17 @@ import xgboost as xgb
 dtrain = xgb.DMatrix('../data/agaricus.txt.train')
 dtest = xgb.DMatrix('../data/agaricus.txt.test')
 # change booster to gblinear, so that we are fitting a linear model
 # alpha is the L1 regularizer
 # lambda is the L2 regularizer
 # you can also set lambda_bias which is L2 regularizer on the bias term
 param = {'silent':1, 'objective':'binary:logistic', 'booster':'gblinear',
          'alpha': 0.0001, 'lambda': 1 }

 # normally, you do not need to set eta (step_size)
 # XGBoost uses a parallel coordinate descent algorithm (shotgun),
 # there could be affection on convergence with parallelization on certain cases
 # setting eta to be smaller value, e.g 0.5 can make the optimization more stable
 # param['eta'] = 1

 ##
 # the rest of settings are the same
@@ -1,7 +1,5 @@
 #!/usr/bin/python
-import sys
 import numpy as np
-sys.path.append('../../wrapper')
 import xgboost as xgb

 ### load data in do training

demo/guide-python/predict_leaf_indices.py (new executable file)
@@ -0,0 +1,20 @@
#!/usr/bin/python
import numpy as np
import xgboost as xgb

### load data in do training
dtrain = xgb.DMatrix('../data/agaricus.txt.train')
dtest = xgb.DMatrix('../data/agaricus.txt.test')
param = {'max_depth':2, 'eta':1, 'silent':1, 'objective':'binary:logistic' }
watchlist = [(dtest,'eval'), (dtrain,'train')]
num_round = 3
bst = xgb.train(param, dtrain, num_round, watchlist)

print ('start testing predict the leaf indices')
### predict using first 2 tree
leafindex = bst.predict(dtest, ntree_limit=2, pred_leaf = True)
print leafindex.shape
print leafindex
### predict all trees
leafindex = bst.predict(dtest, pred_leaf = True)
print leafindex.shape
@@ -4,4 +4,5 @@ python custom_objective.py
 python boost_from_prediction.py
 python generalized_linear_model.py
 python cross_validation.py
+python predict_leaf_indices.py
 rm -rf *~ *.model *.buffer

demo/guide-python/sklearn_examples.py (new executable file)
@@ -0,0 +1,67 @@
#!/usr/bin/python
'''
Created on 1 Apr 2015

@author: Jamie Hall
'''
import pickle
import xgboost as xgb

import numpy as np
from sklearn.cross_validation import KFold
from sklearn.metrics import confusion_matrix, mean_squared_error
from sklearn.grid_search import GridSearchCV
from sklearn.datasets import load_iris, load_digits, load_boston

rng = np.random.RandomState(31337)

print("Zeros and Ones from the Digits dataset: binary classification")
digits = load_digits(2)
y = digits['target']
X = digits['data']
kf = KFold(y.shape[0], n_folds=2, shuffle=True, random_state=rng)
for train_index, test_index in kf:
    xgb_model = xgb.XGBClassifier().fit(X[train_index],y[train_index])
    predictions = xgb_model.predict(X[test_index])
    actuals = y[test_index]
    print(confusion_matrix(actuals, predictions))

print("Iris: multiclass classification")
iris = load_iris()
y = iris['target']
X = iris['data']
kf = KFold(y.shape[0], n_folds=2, shuffle=True, random_state=rng)
for train_index, test_index in kf:
    xgb_model = xgb.XGBClassifier().fit(X[train_index],y[train_index])
    predictions = xgb_model.predict(X[test_index])
    actuals = y[test_index]
    print(confusion_matrix(actuals, predictions))

print("Boston Housing: regression")
boston = load_boston()
y = boston['target']
X = boston['data']
kf = KFold(y.shape[0], n_folds=2, shuffle=True, random_state=rng)
for train_index, test_index in kf:
    xgb_model = xgb.XGBRegressor().fit(X[train_index],y[train_index])
    predictions = xgb_model.predict(X[test_index])
    actuals = y[test_index]
    print(mean_squared_error(actuals, predictions))

print("Parameter optimization")
y = boston['target']
X = boston['data']
xgb_model = xgb.XGBRegressor()
clf = GridSearchCV(xgb_model,
                   {'max_depth': [2,4,6],
                    'n_estimators': [50,100,200]}, verbose=1)
clf.fit(X,y)
print(clf.best_score_)
print(clf.best_params_)

# The sklearn API models are picklable
print("Pickling sklearn API models")
# must open in binary format to pickle
pickle.dump(clf, open("best_boston.pkl", "wb"))
clf2 = pickle.load(open("best_boston.pkl", "rb"))
print(np.allclose(clf.predict(X), clf2.predict(X)))

demo/guide-python/sklearn_parallel.py (new file)
@@ -0,0 +1,35 @@
import os

if __name__ == "__main__":
    # NOTE: on posix systems, this *has* to be here and in the
    # `__name__ == "__main__"` clause to run XGBoost in parallel processes
    # using fork, if XGBoost was built with OpenMP support. Otherwise, if you
    # build XGBoost without OpenMP support, you can use fork, which is the
    # default backend for joblib, and omit this.
    try:
        from multiprocessing import set_start_method
    except ImportError:
        raise ImportError("Unable to import multiprocessing.set_start_method."
                          " This example only runs on Python 3.4")
    set_start_method("forkserver")

    import numpy as np
    from sklearn.grid_search import GridSearchCV
    from sklearn.datasets import load_boston
    import xgboost as xgb

    rng = np.random.RandomState(31337)

    print("Parallel Parameter optimization")
    boston = load_boston()

    os.environ["OMP_NUM_THREADS"] = "2"  # or to whatever you want
    y = boston['target']
    X = boston['data']
    xgb_model = xgb.XGBRegressor()
    clf = GridSearchCV(xgb_model, {'max_depth': [2, 4, 6],
                                   'n_estimators': [50, 100, 200]}, verbose=1,
                       n_jobs=2)
    clf.fit(X, y)
    print(clf.best_score_)
    print(clf.best_params_)
@@ -1,3 +1,9 @@
+Highlights
+=====
+Higgs challenge ends recently, xgboost is being used by many users. This list highlights the xgboost solutions of players
+* Blogpost by phunther: [Winning solution of Kaggle Higgs competition: what a single model can do](http://no2147483647.wordpress.com/2014/09/17/winning-solution-of-kaggle-higgs-competition-what-a-single-model-can-do/)
+* The solution by Tianqi Chen and Tong He [Link](https://github.com/hetong007/higgsml)
+
 Guide for Kaggle Higgs Challenge
 =====

@@ -1,7 +1,5 @@
 #!/usr/bin/python
-import sys
 import numpy as np
-sys.path.append('../../wrapper')
 import xgboost as xgb

 ### load data in do training
@@ -1,14 +1,6 @@
 #!/usr/bin/python
 # this is the example script to use xgboost to train
-import inspect
-import os
-import sys
 import numpy as np
-# add path of xgboost python module
-code_path = os.path.join(
-    os.path.split(inspect.getfile(inspect.currentframe()))[0], "../../wrapper")
-
-sys.path.append(code_path)
-
 import xgboost as xgb

@@ -29,7 +21,7 @@ weight = dtrain[:,31] * float(test_size) / len(label)
 sum_wpos = sum( weight[i] for i in range(len(label)) if label[i] == 1.0 )
 sum_wneg = sum( weight[i] for i in range(len(label)) if label[i] == 0.0 )

 # print weight statistics
 print ('weight statistics: wpos=%g, wneg=%g, ratio=%g' % ( sum_wpos, sum_wneg, sum_wneg/sum_wpos ))

 # construct xgboost.DMatrix from numpy array, treat -999.0 as missing value
@@ -42,13 +34,13 @@ param = {}
 param['objective'] = 'binary:logitraw'
 # scale weight of positive examples
 param['scale_pos_weight'] = sum_wneg/sum_wpos
 param['eta'] = 0.1
 param['max_depth'] = 6
 param['eval_metric'] = 'auc'
 param['silent'] = 1
 param['nthread'] = 16

 # you can directly throw param in, though we want to watch multiple metrics here
 plst = list(param.items())+[('eval_metric', 'ams@0.15')]

 watchlist = [ (xgmat,'train') ]
@@ -1,9 +1,6 @@
 #!/usr/bin/python
 # make prediction
-import sys
 import numpy as np
-# add path of xgboost python module
-sys.path.append('../../wrapper/')
 import xgboost as xgb

 # path to where the data lies
@@ -11,7 +8,7 @@ dpath = 'data'

 modelfile = 'higgs.model'
 outfile = 'higgs.pred.csv'
 # make top 15% as positive
 threshold_ratio = 0.15

 # load in training data, directly use numpy
@@ -24,7 +21,7 @@ xgmat = xgb.DMatrix( data, missing = -999.0 )
 bst = xgb.Booster({'nthread':16}, model_file = modelfile)
 ypred = bst.predict( xgmat )

 res = [ ( int(idx[i]), ypred[i] ) for i in range(len(ypred)) ]

 rorder = {}
 for k, v in sorted( res, key = lambda x:-x[1] ):
@@ -36,12 +33,12 @@ fo = open(outfile, 'w')
 nhit = 0
 ntot = 0
 fo.write('EventId,RankOrder,Class\n')
 for k, v in res:
     if rorder[k] <= ntop:
         lb = 's'
         nhit += 1
     else:
         lb = 'b'
     # change output rank order to follow Kaggle convention
     fo.write('%s,%d,%s\n' % ( k, len(rorder)+1-rorder[k], lb ) )
     ntot += 1
@@ -6,7 +6,7 @@ require(methods)
 testsize <- 550000

 dtrain <- read.csv("data/training.csv", header=TRUE, nrows=350001)
+dtrain$Label = as.numeric(dtrain$Label=='s')
 # gbm.time = system.time({
 #   gbm.model <- gbm(Label ~ ., data = dtrain[, -c(1,32)], n.trees = 120,
 #                    interaction.depth = 6, shrinkage = 0.1, bag.fraction = 1,
@@ -15,8 +15,8 @@ dtrain <- read.csv("data/training.csv", header=TRUE, nrows=350001)
 # print(gbm.time)
 # Test result: 761.48 secs

-dtrain[33] <- dtrain[33] == "s"
-label <- as.numeric(dtrain[[33]])
+# dtrain[33] <- dtrain[33] == "s"
+# label <- as.numeric(dtrain[[33]])
 data <- as.matrix(dtrain[2:31])
 weight <- as.numeric(dtrain[[32]]) * testsize / length(label)

@@ -51,21 +51,21 @@ for (i in 1:length(threads)){
 xgboost.time
 # [[1]]
 #    user  system elapsed
-#  444.98    1.96  450.22
+#  99.015   0.051  98.982
 #
 # [[2]]
 #    user  system elapsed
-#  188.15    0.82  102.41
+# 100.268   0.317  55.473
 #
 # [[3]]
 #    user  system elapsed
-#  143.29    0.79   44.18
+# 111.682   0.777  35.963
 #
 # [[4]]
 #    user  system elapsed
-#  176.60    1.45   34.04
+# 149.396   1.851  32.661
 #
 # [[5]]
 #    user  system elapsed
-#  180.15    2.85   35.26
+# 157.390   5.988  40.949

@@ -1,9 +1,6 @@
 #!/usr/bin/python
 # this is the example script to use xgboost to train
-import sys
 import numpy as np
-# add path of xgboost python module
-sys.path.append('../../wrapper/')
 import xgboost as xgb
 from sklearn.ensemble import GradientBoostingClassifier
 import time

demo/kaggle-otto/README.MD (new file)
@@ -0,0 +1,24 @@
Benchmark for Otto Group Competition
=========

This is a folder containing the benchmark for the [Otto Group Competition on Kaggle](http://www.kaggle.com/c/otto-group-product-classification-challenge).

## Getting started

1. Put `train.csv` and `test.csv` under the `data` folder
2. Run the script
3. Submit the `submission.csv`

The parameter `nthread` controls the number of cores to run on, please set it to suit your machine. A sketch of where it is set follows.
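This is the parameter list used by otto_train_pred.R in this folder (8 is only an example value):

```r
param <- list("objective" = "multi:softprob",
              "eval_metric" = "mlogloss",
              "num_class" = 9,
              "nthread" = 8) # set to the number of cores you want to use
```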
## R-package

To install the R-package of xgboost, please run

```r
devtools::install_github('tqchen/xgboost',subdir='R-package')
```

Windows users may need to install [RTools](http://cran.r-project.org/bin/windows/Rtools/) first.


demo/kaggle-otto/otto_train_pred.R (new file)
@@ -0,0 +1,43 @@
require(xgboost)
require(methods)

train = read.csv('data/train.csv',header=TRUE,stringsAsFactors = F)
test = read.csv('data/test.csv',header=TRUE,stringsAsFactors = F)
train = train[,-1]
test = test[,-1]

y = train[,ncol(train)]
y = gsub('Class_','',y)
y = as.integer(y)-1 # xgboost take features in [0,numOfClass)

x = rbind(train[,-ncol(train)],test)
x = as.matrix(x)
x = matrix(as.numeric(x),nrow(x),ncol(x))
trind = 1:length(y)
teind = (nrow(train)+1):nrow(x)

# Set necessary parameter
param <- list("objective" = "multi:softprob",
              "eval_metric" = "mlogloss",
              "num_class" = 9,
              "nthread" = 8)

# Run Cross Validation
cv.nround = 50
bst.cv = xgb.cv(param=param, data = x[trind,], label = y,
                nfold = 3, nrounds=cv.nround)

# Train the model
nround = 50
bst = xgboost(param=param, data = x[trind,], label = y, nrounds=nround)

# Make prediction
pred = predict(bst,x[teind,])
pred = matrix(pred,9,length(pred)/9)
pred = t(pred)

# Output submission
pred = format(pred, digits=2,scientific=F) # shrink the size of submission
pred = data.frame(1:nrow(pred),pred)
names(pred) = c('id', paste0('Class_',1:9))
write.csv(pred,file='submission.csv', quote=FALSE,row.names=FALSE)

demo/kaggle-otto/understandingXGBoostModel.Rmd (new file)
@@ -0,0 +1,231 @@
|
|||||||
|
---
|
||||||
|
title: "Understanding XGBoost Model on Otto Dataset"
|
||||||
|
author: "Michaël Benesty"
|
||||||
|
output:
|
||||||
|
rmarkdown::html_vignette:
|
||||||
|
css: ../../R-package/vignettes/vignette.css
|
||||||
|
number_sections: yes
|
||||||
|
toc: yes
|
||||||
|
---
|
||||||
|
|
||||||
|
Introduction
|
||||||
|
============
|
||||||
|
|
||||||
|
**XGBoost** is an implementation of the famous gradient boosting algorithm. This model is often described as a *blackbox*, meaning it works well but it is not trivial to understand how. Indeed, the model is made of hundreds (thousands?) of decision trees. You may wonder how possible a human would be able to have a general view of the model?
|
||||||
|
|
||||||
|
While XGBoost is known for its fast speed and accurate predictive power, it also comes with various functions to help you understand the model.
|
||||||
|
The purpose of this RMarkdown document is to demonstrate how easily we can leverage the functions already implemented in **XGBoost R** package. Of course, everything showed below can be applied to the dataset you may have to manipulate at work or wherever!
|
||||||
|
|
||||||
|
First we will prepare the **Otto** dataset and train a model, then we will generate two vizualisations to get a clue of what is important to the model, finally, we will see how we can leverage these information.
|
||||||
|
|
||||||
|
Preparation of the data
|
||||||
|
=======================
|
||||||
|
|
||||||
|
This part is based on the **R** tutorial example by [Tong He](https://github.com/dmlc/xgboost/blob/master/demo/kaggle-otto/otto_train_pred.R)
|
||||||
|
|
||||||
|
First, let's load the packages and the dataset.
|
||||||
|
|
||||||
|
```{r loading}
|
||||||
|
require(xgboost)
|
||||||
|
require(methods)
|
||||||
|
require(data.table)
|
||||||
|
require(magrittr)
|
||||||
|
train <- fread('data/train.csv', header = T, stringsAsFactors = F)
|
||||||
|
test <- fread('data/test.csv', header=TRUE, stringsAsFactors = F)
|
||||||
|
```
|
||||||
|
> `magrittr` and `data.table` are here to make the code cleaner and much more rapid.
|
||||||
|
|
||||||
|
Let's explore the dataset.
|
||||||
|
|
||||||
|
```{r explore}
|
||||||
|
# Train dataset dimensions
|
||||||
|
dim(train)
|
||||||
|
|
||||||
|
# Training content
|
||||||
|
train[1:6,1:5, with =F]
|
||||||
|
|
||||||
|
# Test dataset dimensions
|
||||||
|
dim(train)
|
||||||
|
|
||||||
|
# Test content
|
||||||
|
test[1:6,1:5, with =F]
|
||||||
|
```
|
||||||
|
> We only display the 6 first rows and 5 first columns for convenience
|
||||||
|
|
||||||
|
Each *column* represents a feature measured by an `integer`. Each *row* is an **Otto** product.
|
||||||
|
|
||||||
|
Obviously the first column (`ID`) doesn't contain any useful information.
|
||||||
|
|
||||||
|
To let the algorithm focus on real stuff, we will delete it.
|
||||||
|
|
||||||
|
```{r clean, results='hide'}
|
||||||
|
# Delete ID column in training dataset
|
||||||
|
train[, id := NULL]
|
||||||
|
|
||||||
|
# Delete ID column in testing dataset
|
||||||
|
test[, id := NULL]
|
||||||
|
```
|
||||||
|
|
||||||
|
According to its description, the **Otto** challenge is a multi class classification challenge. We need to extract the labels (here the name of the different classes) from the dataset. We only have two files (test and training), it seems logical that the training file contains the class we are looking for. Usually the labels is in the first or the last column. We already know what is in the first column, let's check the content of the last one.
|
||||||
|
|
||||||
|
```{r searchLabel}
|
||||||
|
# Check the content of the last column
|
||||||
|
train[1:6, ncol(train), with = F]
|
||||||
|
# Save the name of the last column
|
||||||
|
nameLastCol <- names(train)[ncol(train)]
|
||||||
|
```
|
||||||
|
|
||||||
|
The classes are provided as character string in the `r ncol(train)`th column called `r nameLastCol`. As you may know, **XGBoost** doesn't support anything else than numbers. So we will convert classes to `integer`. Moreover, according to the documentation, it should start at `0`.
|
||||||
|
|
||||||
|
For that purpose, we will:
|
||||||
|
|
||||||
|
* extract the target column
|
||||||
|
* remove `Class_` from each class name
|
||||||
|
* convert to `integer`
|
||||||
|
* remove `1` to the new value
|
||||||
|
|
||||||
|
```{r classToIntegers}
|
||||||
|
# Convert from classes to numbers
|
||||||
|
y <- train[, nameLastCol, with = F][[1]] %>% gsub('Class_','',.) %>% {as.integer(.) -1}
|
||||||
|
|
||||||
|
# Display the first 5 levels
|
||||||
|
y[1:5]
|
||||||
|
```
|
||||||
|
|
||||||
|
We remove label column from training dataset, otherwise **XGBoost** would use it to guess the labels!
|
||||||
|
|
||||||
|
```{r deleteCols, results='hide'}
|
||||||
|
train[, nameLastCol:=NULL, with = F]
|
||||||
|
```
|
||||||
|
|
||||||
|
`data.table` is an awesome implementation of data.frame, unfortunately it is not a format supported natively by **XGBoost**. We need to convert both datasets (training and test) in `numeric` Matrix format.
|
||||||
|
|
||||||
|
```{r convertToNumericMatrix}
|
||||||
|
trainMatrix <- train[,lapply(.SD,as.numeric)] %>% as.matrix
|
||||||
|
testMatrix <- test[,lapply(.SD,as.numeric)] %>% as.matrix
|
||||||
|
```
|
||||||
|
|
||||||
|
Model training
|
||||||
|
==============
|
||||||
|
|
||||||
|
Before the learning we will use the cross validation to evaluate the our error rate.
|
||||||
|
|
||||||
|
Basically **XGBoost** will divide the training data in `nfold` parts, then **XGBoost** will retain the first part to use it as the test data and perform a training. Then it will reintegrate the first part and retain the second part, do a training and so on...
|
||||||
|
|
||||||
|
You can look at the function documentation for more information.
|
||||||
|
|
||||||
|
```{r crossValidation}
|
||||||
|
numberOfClasses <- max(y) + 1
|
||||||
|
|
||||||
|
param <- list("objective" = "multi:softprob",
|
||||||
|
"eval_metric" = "mlogloss",
|
||||||
|
"num_class" = numberOfClasses)
|
||||||
|
|
||||||
|
cv.nround <- 5
|
||||||
|
cv.nfold <- 3
|
||||||
|
|
||||||
|
bst.cv = xgb.cv(param=param, data = trainMatrix, label = y,
|
||||||
|
nfold = cv.nfold, nrounds = cv.nround)
|
||||||
|
```
|
||||||
|
> As we can see the error rate is low on the test dataset (for a 5mn trained model).
|
||||||
|
|
||||||
|
Finally, we are ready to train the real model!!!
|
||||||
|
|
||||||
|
```{r modelTraining}
|
||||||
|
nround = 50
|
||||||
|
bst = xgboost(param=param, data = trainMatrix, label = y, nrounds=nround)
|
||||||
|
```
|
||||||
|
|
||||||
|
Model understanding
|
||||||
|
===================
|
||||||
|
|
||||||
|
Feature importance
|
||||||
|
------------------
|
||||||
|
|
||||||
|
So far, we have built a model made of **`r nround`** trees.
|
||||||
|
|
||||||
|
To build a tree, the dataset is divided recursively several times. At the end of the process, you get groups of observations (here, these observations are properties regarding **Otto** products).
|
||||||
|
|
||||||
|
Each division operation is called a *split*.
|
||||||
|
|
||||||
|
Each group at each division level is called a branch and the deepest level is called a *leaf*.
|
||||||
|
|
||||||
|
In the final model, these *leafs* are supposed to be as pure as possible for each tree, meaning in our case that each *leaf* should be made of one class of **Otto** product only (of course it is not true, but that's what we try to achieve in a minimum of splits).
|
||||||
|
|
||||||
|
**Not all *splits* are equally important**. Basically the first *split* of a tree will have more impact on the purity that, for instance, the deepest *split*. Intuitively, we understand that the first *split* makes most of the work, and the following *splits* focus on smaller parts of the dataset which have been missclassified by the first *tree*.
|
||||||
|
|
||||||
|
In the same way, in Boosting we try to optimize the missclassification at each round (it is called the *loss*). So the first *tree* will do the big work and the following trees will focus on the remaining, on the parts not correctly learned by the previous *trees*.
|
||||||
|
|
||||||
|
The improvement brought by each *split* can be measured, it is the *gain*.
|
||||||
|
|
||||||
|
Each *split* is done on one feature only at one value.
|
||||||
|
|
||||||
|
Let's see what the model looks like.
|
||||||
|
|
||||||
|
```{r modelDump}
|
||||||
|
model <- xgb.dump(bst, with.stats = T)
|
||||||
|
model[1:10]
|
||||||
|
```
|
||||||
|
> For convenience, we are displaying the first 10 lines of the model only.
|
||||||
|
|
||||||
|
Clearly, it is not easy to understand what it means.
|
||||||
|
|
||||||
|
Basically, each line represents a *branch*: there is the *tree* ID, the feature ID, the point where it *splits*, and information regarding the next *branches* (left, right, and which one to follow when the value for this feature is N/A).

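As an illustration, a dumped tree looks roughly like this (the numbers below are invented; only the layout follows what `xgb.dump` produces with `with.stats = TRUE`):

```
booster[0]
0:[f16<1.5] yes=1,no=2,missing=1,gain=812.5,cover=1528.75
	1:leaf=-0.19,cover=830.5
	2:[f29<2.5] yes=3,no=4,missing=3,gain=241.1,cover=698.25
...
```

Each internal node shows the feature and threshold used for the *split*, the IDs of the child nodes, and the *gain* and *cover* statistics; each *leaf* carries a prediction weight.
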
Fortunately, **XGBoost** offers a better representation: **feature importance**.

Feature importance is computed by averaging the *gain* of each feature over all the *splits* where it is used, across all *trees*.

Then we can use the function `xgb.plot.importance`.

```{r importanceFeature, fig.align='center', fig.height=5, fig.width=10}
# Get the real names of the features
names <- dimnames(trainMatrix)[[2]]

# Compute the feature importance matrix
importance_matrix <- xgb.importance(names, model = bst)

# Plot the 10 most important features
xgb.plot.importance(importance_matrix[1:10,])
```

> To make the plot understandable, we first extract the real column names from the `Matrix`.

Interpretation
--------------

In the feature importance chart above, we can see the 10 most important features of the model.

This function gives a color to each bar; these colors represent groups of features. Basically, a K-means clustering is applied to group the features by importance.

From here you can take several actions. For instance, you can remove the least important features (feature selection), or dig deeper into the interactions between the most important features and the labels.

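As a minimal sketch of the feature selection idea (not part of the original analysis; it assumes `importance_matrix` is sorted by decreasing `Gain` and that `Gain` is normalised to sum to 1, which holds in recent **XGBoost** versions):

```{r featureSelectionSketch, eval=FALSE}
# Keep the features that together account for 99% of the total gain,
# then subset the training matrix to those columns only
keep <- importance_matrix$Feature[cumsum(importance_matrix$Gain) <= 0.99]
trainMatrix.small <- trainMatrix[, keep]
```
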
Or you can just reason about why these features are so important (in the **Otto** challenge we can't go this way, because there is not enough information about the features).

Tree graph
----------

Feature importance gives you information about the weight of each feature, but not about the interactions between features.

The **XGBoost R** package has another useful function for that: `xgb.plot.tree`.

Please scroll to the right to see the trees.

```{r treeGraph, dpi=1500, fig.align='left'}
# Plot the first two trees of the model
xgb.plot.tree(feature_names = names, model = bst, n_first_tree = 2)
```

We are just displaying the first two trees here.

On simple models the first two trees may be enough. Here, that is probably not the case: we can see from the size of the trees that the interactions between features are complicated.

Besides, **XGBoost** generates `k` trees at each round for a `k`-class classification problem. Therefore, the two trees illustrated here are each trying to separate a different class from the others.

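A quick way to convince yourself of this (a sketch, not part of the original vignette): with `multi:softprob` the dump contains one booster per class and per round, so the number of trees should equal `nround * numberOfClasses`.

```{r treeCountSketch, eval=FALSE}
# Count the boosters in the text dump; each line starting with "booster"
# opens one tree, and we expect nround * numberOfClasses of them
length(grep("^booster", xgb.dump(bst)))
```
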
Going deeper
============

There are 4 documents you may also be interested in:

* [xgboostPresentation.Rmd](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd): general presentation
* [discoverYourData.Rmd](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/discoverYourData.Rmd): explaining feature analysis
* [Feature Importance Analysis with XGBoost in Tax audit](http://fr.slideshare.net/MichaelBENESTY/feature-importance-analysis-with-xgboost-in-tax-audit): use case
* [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/): a very good book for a deeper understanding of the model