Compare commits: v0.7...release_0. (196 commits)
Commit list (short SHA1s; the Author and Date columns of the original table did not survive the capture):

6852d0afd5, c0bd296354, 09142c94f5, ba4244ef51, a46b0ac2d2, 4bc7e94603, a899e8f4cd, f9a833f525,
1afd2f1b2d, b1d76d533d, 9d70655c42, dd1fda449c, 324f3b5259, 24e08c2638, 96826a3515, 06ef4db4cc,
645996b12f, 0b607fb884, 4202332783, 7300002516, 9c647d8130, 2e7c3a0ed5, aa4ee6a0e4, bad76048d1,
bbb771f32e, 3c72654e3b, e3e776bd58, 1c08b3b2ea, 246ec92163, 55caad6e49, 69454d9487, 44811f2330,
109473dae2, 8c633d1ca3, 4a429a7c4f, 7fefd6865d, 31d1baba3d, 34dc9155ab, 70026655b0, 437b368b1f,
6cf97b4eae, 860263f814, b546321c83, 3b62e75f2e, dd07c25d12, 2bb9b9d3db, b5178d3d99, 5850a2558a,
8973f2cb0e, 3363b9142e, 07ff52d54c, b5fad42da2, 8a5209c55e, cc6a5a3666, e2f09db77a, a725272e19,
e9a97e0d88, a1505de631, a393d44c5d, 8e90b60c4d, 05b089405d, c004cea788, b6dcbf0e07, 0f145a0365,
1b59316444, a13e29ece1, 2f8764955c, 2200939416, a6331925d2, b40959042c, 6bed54ac39, cb017d0c9a,
aa90e5c6ce, 66e74d2223, 48d6e68690, 45bf4fbffb, 01aff45f26, e62639c59b, aec6299c49, 295252249e,
0cf88d036f, 18813a26ab, 594bcea83e, 24fde92660, 30d10ab035, 8bec8d5e9a, 12e34f32e2, 64b8cffde3,
cafc621914, e2743548ed, a0a1df1aba, 0988fb191f, 5cd851ccef, d062c6f61b, 9ac163d0bb, eecf341ea7,
0e78034607, 2c4359e914, e6696337e4, 578a0c7ddb, 34e3edfb1a, 902ecbade8, a96039141a, 286dccb8e8,
3f7696ff53, bd01acdfbc, f66731181f, 1214081f99, b7cbec4d4b, a510e68dda, b018ef104f, 34aeee2961,
8efbadcde4, 480e3fd764, 71e226120a, d367e4fc6b, 8f6aadd4b7, 3ee725e3bb, f8b7686719, 098075b81b,
49b9f39818, 9a8211f668, 039dbe6aec, 0c0a78c255, 747381b520, cc79a65ab9, d13f1a0f16, 088bb4b27c,
b8a0d66fe6, 90a5c4db9d, c80d51ccb3, e1f57b4417, 4850f67b85, c2b647f26e, 25b2919c44, d9dd485313,
a185ddfe03, ccf80703ef, 3242b0a378, 842e28fdcd, 230cb9b787, 4109818b32, 443ff746e9, a1ec7b1716,
017acf54d9, ace4016c36, b087620661, 92782a8406, 04221a7469, 8fb3388af2, 00d9728e4b, c85995952f,
9fa45d3a9c, cdc036b752, 7a81c87dfa, 706be4e5d4, a1b48afa41, d5f1b74ef5, 8937134015, 32ea70c1c9,
d5992dd881, 11bfa8584d, cf89fa7139, 5d4cc49080, 3d7aff5697, eb9e30bb30, 20b733e1a0, 8153ba6fe7,
dd82b28e20, 10eb05a63a, 9ffe8596f2, cf19caa46a, 375d75304d, 81d1b17f9c, b99f56e386, 874525c152,
d878c36c84, 077abb35cd, 94e655329f, 7c99e90ecd, 86bf930497, 24c2e41287, 98be9aef9a, c88bae112e,
5ef684641b, f87802f00c, 8b2f4e2d39, 3f3f54bcad, 84ab74f3a5, a187ed6c8f, 740eba42f7, 65fb4e3f5c,
9747ea2acb, bf43671841, 14c6392381, 526801cdb3
.clang-tidy (new file, 21 lines)

```diff
@@ -0,0 +1,21 @@
+Checks: 'modernize-*,-modernize-make-*,-modernize-raw-string-literal,google-*,-google-default-arguments,-clang-diagnostic-#pragma-messages,readability-identifier-naming'
+CheckOptions:
+  - { key: readability-identifier-naming.ClassCase, value: CamelCase }
+  - { key: readability-identifier-naming.StructCase, value: CamelCase }
+  - { key: readability-identifier-naming.TypeAliasCase, value: CamelCase }
+  - { key: readability-identifier-naming.TypedefCase, value: CamelCase }
+  - { key: readability-identifier-naming.TypeTemplateParameterCase, value: CamelCase }
+  - { key: readability-identifier-naming.MemberCase, value: lower_case }
+  - { key: readability-identifier-naming.PrivateMemberSuffix, value: '_' }
+  - { key: readability-identifier-naming.ProtectedMemberSuffix, value: '_' }
+  - { key: readability-identifier-naming.EnumCase, value: CamelCase }
+  - { key: readability-identifier-naming.EnumConstant, value: CamelCase }
+  - { key: readability-identifier-naming.EnumConstantPrefix, value: k }
+  - { key: readability-identifier-naming.GlobalConstantCase, value: CamelCase }
+  - { key: readability-identifier-naming.GlobalConstantPrefix, value: k }
+  - { key: readability-identifier-naming.StaticConstantCase, value: CamelCase }
+  - { key: readability-identifier-naming.StaticConstantPrefix, value: k }
+  - { key: readability-identifier-naming.ConstexprVariableCase, value: CamelCase }
+  - { key: readability-identifier-naming.ConstexprVariablePrefix, value: k }
+  - { key: readability-identifier-naming.FunctionCase, value: CamelCase }
+  - { key: readability-identifier-naming.NamespaceCase, value: lower_case }
```
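The `readability-identifier-naming` options in the new .clang-tidy file encode a Google-style naming scheme. A minimal illustration of code that satisfies those checks; the names (`tree_util`, `TreeNode`, `kMaxDepth`) are hypothetical and not taken from the XGBoost codebase:

```cpp
#include <cassert>

namespace tree_util {  // NamespaceCase: lower_case

constexpr int kMaxDepth = 8;  // ConstexprVariablePrefix 'k' + CamelCase

enum class SplitKind { kNumeric, kCategorical };  // EnumConstantPrefix 'k'

class TreeNode {  // ClassCase: CamelCase
 public:
  explicit TreeNode(int depth) : depth_(depth) {}
  int Depth() const { return depth_; }  // FunctionCase: CamelCase
 private:
  int depth_;  // MemberCase lower_case + PrivateMemberSuffix '_'
};

}  // namespace tree_util
```

Running `clang-tidy` with this configuration flags any identifier that deviates from these casing and prefix/suffix rules.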
.editorconfig (new file, 11 lines)

```diff
@@ -0,0 +1,11 @@
+root = true
+
+[*]
+charset=utf-8
+indent_style = space
+indent_size = 2
+insert_final_newline = true
+
+[*.py]
+indent_style = space
+indent_size = 4
```
.github/ISSUE_TEMPLATE.md (vendored, new file, 7 lines)

```diff
@@ -0,0 +1,7 @@
+Thanks for participating in the XGBoost community! We use https://discuss.xgboost.ai for any general usage questions and discussions. The issue tracker is used for actionable items such as feature proposals discussion, roadmaps, and bug tracking. You are always welcomed to post on the forum first :)
+
+Issues that are inactive for a period of time may get closed. We adopt this policy so that we won't lose track of actionable issues that may fall at the bottom of the pile. Feel free to reopen a new one if you feel there is an additional problem that needs attention when an old one gets closed.
+
+For bug reports, to help the developer act on the issues, please include a description of your environment, preferably a minimum script to reproduce the problem.
+
+For feature proposals, list clear, small actionable items so we can track the progress of the change.
```
.gitignore (vendored, 7 lines changed; added/removed markers were lost in the capture)

```diff
@@ -15,7 +15,6 @@
 *.Rcheck
 *.rds
 *.tar.gz
 #*txt*
 *conf
 *buffer
 *model
@@ -47,13 +46,12 @@ Debug
 *.cpage.col
 *.cpage
 *.Rproj
 ./xgboost
 ./xgboost.mpi
 ./xgboost.mock
 #.Rbuildignore
 R-package.Rproj
 *.cache*
 #java
 # java
 java/xgboost4j/target
 java/xgboost4j/tmp
 java/xgboost4j-demo/target
@@ -68,10 +66,9 @@ nb-configuration*
 .settings/
 build
 config.mk
 xgboost
 /xgboost
 *.data
 build_plugin
 dmlc-core
 .idea
 recommonmark/
 tags
```
.gitmodules (vendored, 3 lines changed; the hunk shrinks from 9 to 6 lines, matching removal of the nccl submodule, which the CMake changes replace with `find_package(Nccl)`)

```diff
@@ -4,9 +4,6 @@
 [submodule "rabit"]
 	path = rabit
 	url = https://github.com/dmlc/rabit
-[submodule "nccl"]
-	path = nccl
-	url = https://github.com/dmlc/nccl
 [submodule "cub"]
 	path = cub
 	url = https://github.com/NVlabs/cub
```
.travis.yml (likely; the file-name header was lost in the capture, but `env`/`matrix`/`addons` are Travis CI keys)

```diff
@@ -26,6 +26,8 @@ env:
     - TASK=cmake_test
+    # c++ test
+    - TASK=cpp_test
     # distributed test
     - TASK=distributed_test

 matrix:
   exclude:
@@ -39,15 +41,19 @@ matrix:
       env: TASK=python_lightweight_test
+    - os: osx
+      env: TASK=cpp_test
+    - os: osx
+      env: TASK=distributed_test

 # dependent apt packages
 addons:
   apt:
     sources:
       - llvm-toolchain-trusty-5.0
       - ubuntu-toolchain-r-test
       - george-edison55-precise-backports
     packages:
       - cmake
       - clang
       - clang-tidy-5.0
       - cmake-data
       - doxygen
       - wget
```
CMakeLists.txt (likely; the file-name header was lost in the capture. Side-by-side column residue duplicated several unchanged lines; they are shown once below, and markers are restored only where the hunk counts make the old/new pairing clear.)

```diff
@@ -8,14 +8,18 @@ set_default_configuration_release()
 msvc_use_static_runtime()

 # Options
 option(USE_CUDA "Build with GPU acceleration")
 option(USE_AVX "Build with AVX instructions. May not produce identical results due to approximate math." OFF)
 option(USE_NCCL "Build using NCCL for multi-GPU. Also requires USE_CUDA")
 option(JVM_BINDINGS "Build JVM bindings" OFF)
 option(GOOGLE_TEST "Build google tests" OFF)
 option(R_LIB "Build shared library for R package" OFF)
-set(GPU_COMPUTE_VER 35;50;52;60;61 CACHE STRING
-  "Space separated list of compute versions to be built against")
+option(USE_SANITIZER "Use santizer flags" OFF)
+set(GPU_COMPUTE_VER "" CACHE STRING
+  "Space separated list of compute versions to be built against, e.g. '35 61'")
+set(ENABLED_SANITIZERS "address" "leak" CACHE STRING
+  "Semicolon separated list of sanitizer names. E.g 'address;leak'. Supported sanitizers are
+address, leak and thread.")

 # Deprecation warning
 if(PLUGIN_UPDATER_GPU)
@@ -39,6 +43,15 @@ else()
   # Performance
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -funroll-loops")
 endif()
+if(WIN32 AND MINGW)
+  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -static-libstdc++")
+endif()
+
+# Sanitizer
+if(USE_SANITIZER)
+  include(cmake/Sanitizer.cmake)
+  enable_sanitizers("${ENABLED_SANITIZERS}")
+endif(USE_SANITIZER)

 # AVX
 if(USE_AVX)
@@ -50,6 +63,12 @@ if(USE_AVX)
   add_definitions(-DXGBOOST_USE_AVX)
 endif()

+# dmlc-core
+add_subdirectory(dmlc-core)
+set(LINK_LIBRARIES dmlc rabit)
+
+# enable custom logging
+add_definitions(-DDMLC_LOG_CUSTOMIZE=1)

 # compiled code customizations for R package
 if(R_LIB)
@@ -70,7 +89,7 @@ include_directories (
   ${PROJECT_SOURCE_DIR}/rabit/include
 )

 file(GLOB_RECURSE SOURCES
   src/*.cc
   src/*.h
   include/*.h
@@ -103,47 +122,36 @@ else()
   add_library(rabit STATIC ${RABIT_SOURCES})
 endif()

-# dmlc-core
-add_subdirectory(dmlc-core)
-set(LINK_LIBRARIES dmlccore rabit)

 if(USE_CUDA)
   find_package(CUDA 8.0 REQUIRED)
   cmake_minimum_required(VERSION 3.5)

   add_definitions(-DXGBOOST_USE_CUDA)

   include_directories(cub)

   if(USE_NCCL)
-    include_directories(nccl/src)
+    find_package(Nccl REQUIRED)
+    include_directories(${NCCL_INCLUDE_DIR})
     add_definitions(-DXGBOOST_USE_NCCL)
   endif()

   if((CUDA_VERSION_MAJOR EQUAL 9) OR (CUDA_VERSION_MAJOR GREATER 9))
     message("CUDA 9.0 detected, adding Volta compute capability (7.0).")
     set(GPU_COMPUTE_VER "${GPU_COMPUTE_VER};70")
   endif()

   set(GENCODE_FLAGS "")
   format_gencode_flags("${GPU_COMPUTE_VER}" GENCODE_FLAGS)
   message("cuda architecture flags: ${GENCODE_FLAGS}")

   set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS};--expt-extended-lambda;--expt-relaxed-constexpr;${GENCODE_FLAGS};-lineinfo;")
   if(NOT MSVC)
-    set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS};-Xcompiler -fPIC; -std=c++11")
-  endif()
-
-  if(USE_NCCL)
-    add_subdirectory(nccl)
+    set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS};-Xcompiler -fPIC; -Xcompiler -Werror; -std=c++11")
   endif()

   cuda_add_library(gpuxgboost ${CUDA_SOURCES} STATIC)

   if(USE_NCCL)
-    target_link_libraries(gpuxgboost nccl)
+    link_directories(${NCCL_LIBRARY})
+    target_link_libraries(gpuxgboost ${NCCL_LIB_NAME})
   endif()
   list(APPEND LINK_LIBRARIES gpuxgboost)
 endif()
@@ -224,12 +232,12 @@ endif()

 # Test
 if(GOOGLE_TEST)
-  find_package(GTest REQUIRED)
   enable_testing()
+  find_package(GTest REQUIRED)

   file(GLOB_RECURSE TEST_SOURCES "tests/cpp/*.cc")
   auto_source_group("${TEST_SOURCES}")
-  include_directories(${GTEST_INCLUDE_DIR})
+  include_directories(${GTEST_INCLUDE_DIRS})

   if(USE_CUDA)
     file(GLOB_RECURSE CUDA_TEST_SOURCES "tests/cpp/*.cu")
```
CONTRIBUTORS.md (likely; the file-name header was lost in the capture)

```diff
@@ -7,8 +7,8 @@ Committers
 Committers are people who have made substantial contribution to the project and granted write access to the project.
 * [Tianqi Chen](https://github.com/tqchen), University of Washington
   - Tianqi is a PhD working on large-scale machine learning, he is the creator of the project.
-* [Tong He](https://github.com/hetong007), Simon Fraser University
-  - Tong is a master student working on data mining, he is the maintainer of xgboost R package.
+* [Tong He](https://github.com/hetong007), Amazon AI
+  - Tong is an applied scientist in Amazon AI, he is the maintainer of xgboost R package.
 * [Vadim Khotilovich](https://github.com/khotilov)
   - Vadim contributes many improvements in R and core packages.
 * [Bing Xu](https://github.com/antinucleon)
@@ -54,7 +54,8 @@ List of Contributors
 * [Masaaki Horikoshi](https://github.com/sinhrks)
   - Masaaki is the initial creator of xgboost python plotting module.
-* [Hongliang Liu](https://github.com/phunterlau)
-  - Hongliang is the maintainer of xgboost python PyPI package for pip installation.
+* [Hyunsu Cho](http://hyunsu-cho.io/)
+  - Hyunsu is the maintainer of the XGBoost Python package. He is in charge of submitting the Python package to Python Package Index (PyPI). He is also the initial author of the CPU 'hist' updater.
 * [daiyl0320](https://github.com/daiyl0320)
   - daiyl0320 contributed patch to xgboost distributed version more robust, and scales stably on TB scale datasets.
 * [Huayi Zhang](https://github.com/irachex)
@@ -72,3 +73,8 @@ List of Contributors
 * [Gideon Whitehead](https://github.com/gaw89)
 * [Yi-Lin Juang](https://github.com/frankyjuang)
 * [Andrew Hannigan](https://github.com/andrewhannigan)
+* [Andy Adinets](https://github.com/canonizer)
+* [Henry Gouk](https://github.com/henrygouk)
+* [Pierre de Sahb](https://github.com/pdesahb)
+* [liuliang01](https://github.com/liuliang01)
+  - liuliang01 added support for the qid column for LibSVM input format. This makes ranking task easier in distributed setting.
```
Deleted file, 44 lines (likely the old root-level ISSUE_TEMPLATE.md, superseded by the new .github/ISSUE_TEMPLATE.md added above; the file-name header was lost in the capture)

```diff
@@ -1,44 +0,0 @@
-For bugs or installation issues, please provide the following information.
-The more information you provide, the more easily we will be able to offer
-help and advice.
-
-## Environment info
-Operating System:
-
-Compiler:
-
-Package used (python/R/jvm/C++):
-
-`xgboost` version used:
-
-If installing from source, please provide
-
-1. The commit hash (`git rev-parse HEAD`)
-2. Logs will be helpful (If logs are large, please upload as attachment).
-
-If you are using jvm package, please
-
-1. add [jvm-packages] in the title to make it quickly be identified
-2. the gcc version and distribution
-
-If you are using python package, please provide
-
-1. The python version and distribution
-2. The command to install `xgboost` if you are not installing from source
-
-If you are using R package, please provide
-
-1. The R `sessionInfo()`
-2. The command to install `xgboost` if you are not installing from source
-
-## Steps to reproduce
-
-1.
-2.
-3.
-
-## What have you tried?
-
-1.
-2.
-3.
```
Jenkinsfile (vendored, 121 lines changed; markers restored where the refactor — moving helpers into `tests/ci_build/jenkins_tools.Groovy` — makes the old/new pairing clear)

```diff
@@ -3,13 +3,20 @@
 // Jenkins pipeline
 // See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/

+import groovy.transform.Field
+
+/* Unrestricted tasks: tasks that do NOT generate artifacts */
+
 // Command to run command inside a docker container
-dockerRun = 'tests/ci_build/ci_build.sh'
+def dockerRun = 'tests/ci_build/ci_build.sh'
+// Utility functions
+@Field
+def utils

 def buildMatrix = [
     [ "enabled": true, "os" : "linux", "withGpu": true, "withOmp": true, "pythonVersion": "2.7" ],
     [ "enabled": true, "os" : "linux", "withGpu": false, "withOmp": true, "pythonVersion": "2.7" ],
     [ "enabled": false, "os" : "osx", "withGpu": false, "withOmp": false, "pythonVersion": "2.7" ],
+    [ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "9.2" ],
+    [ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "8.0" ],
+    [ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": false, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "8.0" ],
 ]

 pipeline {
@@ -26,20 +33,25 @@ pipeline {

   // Build stages
   stages {
-    stage('Get sources') {
-      agent any
+    stage('Jenkins: Get sources') {
+      agent {
+        label 'unrestricted'
+      }
       steps {
-        checkoutSrcs()
+        script {
+          utils = load('tests/ci_build/jenkins_tools.Groovy')
+          utils.checkoutSrcs()
+        }
         stash name: 'srcs', excludes: '.git/'
         milestone label: 'Sources ready', ordinal: 1
       }
     }
-    stage('Build & Test') {
+    stage('Jenkins: Build & Test') {
       steps {
         script {
           parallel (buildMatrix.findAll{it['enabled']}.collectEntries{ c ->
-            def buildName = getBuildName(c)
-            buildFactory(buildName, c)
+            def buildName = utils.getBuildName(c)
+            utils.buildFactory(buildName, c, false, this.&buildPlatformCmake)
           })
         }
       }
@@ -47,40 +59,17 @@ pipeline {
   }
 }

-// initialize source codes
-def checkoutSrcs() {
-  retry(5) {
-    try {
-      timeout(time: 2, unit: 'MINUTES') {
-        checkout scm
-        sh 'git submodule update --init'
-      }
-    } catch (exc) {
-      deleteDir()
-      error "Failed to fetch source codes"
-    }
-  }
-}
-
-/**
- * Creates cmake and make builds
- */
-def buildFactory(buildName, conf) {
-  def os = conf["os"]
-  def nodeReq = conf["withGpu"] ? "${os} && gpu" : "${os}"
-  def dockerTarget = conf["withGpu"] ? "gpu" : "cpu"
-  [ ("cmake_${buildName}") : { buildPlatformCmake("cmake_${buildName}", conf, nodeReq, dockerTarget) },
-    ("make_${buildName}") : { buildPlatformMake("make_${buildName}", conf, nodeReq, dockerTarget) }
-  ]
-}
-
 /**
  * Build platform and test it via cmake.
  */
 def buildPlatformCmake(buildName, conf, nodeReq, dockerTarget) {
-  def opts = cmakeOptions(conf)
+  def opts = utils.cmakeOptions(conf)
   // Destination dir for artifacts
   def distDir = "dist/${buildName}"
+  def dockerArgs = ""
+  if(conf["withGpu"]){
+    dockerArgs = "--build-arg CUDA_VERSION=" + conf["cudaVersion"]
+  }
   // Build node - this is returned result
   node(nodeReq) {
     unstash name: 'srcs'
@@ -92,60 +81,8 @@ def buildPlatformCmake(buildName, conf, nodeReq, dockerTarget) {
     """.stripMargin('|')
     // Invoke command inside docker
     sh """
-    ${dockerRun} ${dockerTarget} tests/ci_build/build_via_cmake.sh ${opts}
-    ${dockerRun} ${dockerTarget} tests/ci_build/test_${dockerTarget}.sh
-    ${dockerRun} ${dockerTarget} bash -c "cd python-package; python setup.py bdist_wheel"
-    rm -rf "${distDir}"; mkdir -p "${distDir}/py"
-    cp xgboost "${distDir}"
-    cp -r lib "${distDir}"
-    cp -r python-package/dist "${distDir}/py"
-    """
-    archiveArtifacts artifacts: "${distDir}/**/*.*", allowEmptyArchive: true
-  }
-}
-
-/**
- * Build platform via make
- */
-def buildPlatformMake(buildName, conf, nodeReq, dockerTarget) {
-  def opts = makeOptions(conf)
-  // Destination dir for artifacts
-  def distDir = "dist/${buildName}"
-  // Build node
-  node(nodeReq) {
-    unstash name: 'srcs'
-    echo """
-    |===== XGBoost Make build =====
-    | dockerTarget: ${dockerTarget}
-    | makeOpts : ${opts}
-    |=========================
-    """.stripMargin('|')
-    // Invoke command inside docker
-    sh """
-    ${dockerRun} ${dockerTarget} tests/ci_build/build_via_make.sh ${opts}
+    ${dockerRun} ${dockerTarget} ${dockerArgs} tests/ci_build/build_via_cmake.sh ${opts}
+    ${dockerRun} ${dockerTarget} ${dockerArgs} tests/ci_build/test_${dockerTarget}.sh
     """
   }
 }
-
-def makeOptions(conf) {
-  return ([
-    conf["withGpu"] ? 'PLUGIN_UPDATER_GPU=ON' : 'PLUGIN_UPDATER_GPU=OFF',
-    conf["withOmp"] ? 'USE_OPENMP=1' : 'USE_OPENMP=0']
-  ).join(" ")
-}
-
-def cmakeOptions(conf) {
-  return ([
-    conf["withGpu"] ? '-DPLUGIN_UPDATER_GPU:BOOL=ON' : '',
-    conf["withOmp"] ? '-DOPEN_MP:BOOL=ON' : '']
-  ).join(" ")
-}
-
-def getBuildName(conf) {
-  def gpuLabel = conf['withGpu'] ? "_gpu" : "_cpu"
-  def ompLabel = conf['withOmp'] ? "_omp" : ""
-  def pyLabel = "_py${conf['pythonVersion']}"
-  return "${conf['os']}${gpuLabel}${ompLabel}${pyLabel}"
-}
```
Jenkinsfile-restricted (new file, 121 lines)

```diff
@@ -0,0 +1,121 @@
+#!/usr/bin/groovy
+// -*- mode: groovy -*-
+// Jenkins pipeline
+// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/
+
+import groovy.transform.Field
+
+/* Restricted tasks: tasks generating artifacts, such as binary wheels and
+   documentation */
+
+// Command to run command inside a docker container
+def dockerRun = 'tests/ci_build/ci_build.sh'
+// Utility functions
+@Field
+def utils
+
+def buildMatrix = [
+    [ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "9.2" ],
+    [ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": true, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "8.0" ],
+    [ "enabled": true, "os" : "linux", "withGpu": true, "withNccl": false, "withOmp": true, "pythonVersion": "2.7", "cudaVersion": "8.0" ],
+]
+
+pipeline {
+    // Each stage specify its own agent
+    agent none
+
+    // Setup common job properties
+    options {
+        ansiColor('xterm')
+        timestamps()
+        timeout(time: 120, unit: 'MINUTES')
+        buildDiscarder(logRotator(numToKeepStr: '10'))
+    }
+
+    // Build stages
+    stages {
+        stage('Jenkins: Get sources') {
+            agent {
+                label 'restricted'
+            }
+            steps {
+                script {
+                    utils = load('tests/ci_build/jenkins_tools.Groovy')
+                    utils.checkoutSrcs()
+                }
+                stash name: 'srcs', excludes: '.git/'
+                milestone label: 'Sources ready', ordinal: 1
+            }
+        }
+        stage('Jenkins: Build doc') {
+            agent {
+                label 'linux && cpu && restricted'
+            }
+            steps {
+                unstash name: 'srcs'
+                script {
+                    def commit_id = "${GIT_COMMIT}"
+                    def branch_name = "${GIT_LOCAL_BRANCH}"
+                    echo 'Building doc...'
+                    dir ('jvm-packages') {
+                        sh "bash ./build_doc.sh ${commit_id}"
+                        archiveArtifacts artifacts: "${commit_id}.tar.bz2", allowEmptyArchive: true
+                        echo 'Deploying doc...'
+                        withAWS(credentials:'xgboost-doc-bucket') {
+                            s3Upload file: "${commit_id}.tar.bz2", bucket: 'xgboost-docs', acl: 'PublicRead', path: "${branch_name}.tar.bz2"
+                        }
+                    }
+                }
+            }
+        }
+        stage('Jenkins: Build artifacts') {
+            steps {
+                script {
+                    parallel (buildMatrix.findAll{it['enabled']}.collectEntries{ c ->
+                        def buildName = utils.getBuildName(c)
+                        utils.buildFactory(buildName, c, true, this.&buildPlatformCmake)
+                    })
+                }
+            }
+        }
+    }
+}
+
+/**
+ * Build platform and test it via cmake.
+ */
+def buildPlatformCmake(buildName, conf, nodeReq, dockerTarget) {
+    def opts = utils.cmakeOptions(conf)
+    // Destination dir for artifacts
+    def distDir = "dist/${buildName}"
+    def dockerArgs = ""
+    if(conf["withGpu"]){
+        dockerArgs = "--build-arg CUDA_VERSION=" + conf["cudaVersion"]
+    }
+    // Build node - this is returned result
+    node(nodeReq) {
+        unstash name: 'srcs'
+        echo """
+        |===== XGBoost CMake build =====
+        | dockerTarget: ${dockerTarget}
+        | cmakeOpts : ${opts}
+        |=========================
+        """.stripMargin('|')
+        // Invoke command inside docker
+        sh """
+        ${dockerRun} ${dockerTarget} ${dockerArgs} tests/ci_build/build_via_cmake.sh ${opts}
+        ${dockerRun} ${dockerTarget} ${dockerArgs} bash -c "cd python-package; rm -f dist/*; python setup.py bdist_wheel --universal"
+        rm -rf "${distDir}"; mkdir -p "${distDir}/py"
+        cp xgboost "${distDir}"
+        cp -r lib "${distDir}"
+        cp -r python-package/dist "${distDir}/py"
+        # Test the wheel for compatibility on a barebones CPU container
+        ${dockerRun} release ${dockerArgs} bash -c " \
+        auditwheel show xgboost-*-py2-none-any.whl
+        pip install --user python-package/dist/xgboost-*-none-any.whl && \
+        python -m nose tests/python"
+        """
+        archiveArtifacts artifacts: "${distDir}/**/*.*", allowEmptyArchive: true
+    }
+}
```
Makefile (26 lines changed; markers restored where the hunk counts make the old/new pairing clear)

```diff
@@ -68,7 +68,7 @@ endif
 endif

 export LDFLAGS= -pthread -lm $(ADD_LDFLAGS) $(DMLC_LDFLAGS) $(PLUGIN_LDFLAGS)
-export CFLAGS= -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude $(ADD_CFLAGS) $(PLUGIN_CFLAGS)
+export CFLAGS= -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude $(ADD_CFLAGS) $(PLUGIN_CFLAGS)
 CFLAGS += -I$(DMLC_CORE)/include -I$(RABIT)/include -I$(GTEST_PATH)/include
 #java include path
 export JAVAINCFLAGS = -I${JAVA_HOME}/include -I./java
@@ -198,7 +198,11 @@ endif
 clean:
 	$(RM) -rf build build_plugin lib bin *~ */*~ */*/*~ */*/*/*~ */*.o */*/*.o */*/*/*.o #xgboost
 	$(RM) -rf build_tests *.gcov tests/cpp/xgboost_test
-	cd R-package/src; $(RM) -rf rabit src include dmlc-core amalgamation *.so *.dll; cd $(ROOTDIR)
+	if [ -d "R-package/src" ]; then \
+		cd R-package/src; \
+		$(RM) -rf rabit src include dmlc-core amalgamation *.so *.dll; \
+		cd $(ROOTDIR); \
+	fi

 clean_all: clean
 	cd $(DMLC_CORE); "$(MAKE)" clean; cd $(ROOTDIR)
@@ -212,16 +216,28 @@ pypack: ${XGBOOST_DYLIB}
 	cp ${XGBOOST_DYLIB} python-package/xgboost
 	cd python-package; tar cf xgboost.tar xgboost; cd ..

-# create pip installation pack for PyPI
+# create pip source dist (sdist) pack for PyPI
 pippack: clean_all
 	rm -rf xgboost-python
+	# remove symlinked directories in python-package/xgboost
+	rm -rf python-package/xgboost/lib
+	rm -rf python-package/xgboost/dmlc-core
+	rm -rf python-package/xgboost/include
+	rm -rf python-package/xgboost/make
+	rm -rf python-package/xgboost/rabit
+	rm -rf python-package/xgboost/src
 	cp -r python-package xgboost-python
 	cp -r Makefile xgboost-python/xgboost/
 	cp -r make xgboost-python/xgboost/
 	cp -r src xgboost-python/xgboost/
 	cp -r tests xgboost-python/xgboost/
 	cp -r include xgboost-python/xgboost/
 	cp -r dmlc-core xgboost-python/xgboost/
 	cp -r rabit xgboost-python/xgboost/
 	# Use setup_pip.py instead of setup.py
 	mv xgboost-python/setup_pip.py xgboost-python/setup.py
 	# Build sdist tarball
 	cd xgboost-python; python setup.py sdist; mv dist/*.tar.gz ..; cd ..

 # Script to make a clean installable R package.
 Rpack: clean_all
@@ -245,13 +261,15 @@ Rpack: clean_all
 	cat R-package/src/Makevars.in|sed '2s/.*/PKGROOT=./' | sed '3s/.*/ENABLE_STD_THREAD=0/' > xgboost/src/Makevars.in
 	cp xgboost/src/Makevars.in xgboost/src/Makevars.win
 	sed -i -e 's/@OPENMP_CXXFLAGS@/$$\(SHLIB_OPENMP_CFLAGS\)/g' xgboost/src/Makevars.win
+	bash R-package/remove_warning_suppression_pragma.sh
+	rm xgboost/remove_warning_suppression_pragma.sh

 Rbuild: Rpack
 	R CMD build --no-build-vignettes xgboost
 	rm -rf xgboost

 Rcheck: Rbuild
 	R CMD check xgboost*.tar.gz

 -include build/*.d
 -include build/*/*.d
```
111
NEWS.md
111
NEWS.md
@@ -3,6 +3,117 @@ XGBoost Change Log
|
||||
|
||||
This file records the changes in xgboost library in reverse chronological order.
|
||||
|
||||
## v0.80 (2018.08.13)
* **JVM packages received a major upgrade**: To consolidate the APIs and improve the user experience, we refactored the design of XGBoost4J-Spark in a significant manner. (#3387)
  - Consolidated APIs: It is now much easier to integrate XGBoost models into a Spark ML pipeline. Users can control behaviors like output leaf prediction results by setting corresponding column names. Training is now more consistent with other Estimators in Spark MLLIB: there is now one single method `fit()` to train decision trees.
  - Better user experience: we refactored the parameter-related modules in XGBoost4J-Spark to provide both camel-case (Spark ML style) and underscore (XGBoost style) parameters.
  - A brand-new tutorial is [available](https://xgboost.readthedocs.io/en/release_0.80/jvm/xgboost4j_spark_tutorial.html) for XGBoost4J-Spark.
  - Latest API documentation is now hosted at https://xgboost.readthedocs.io/.
* XGBoost documentation now keeps track of multiple versions:
  - Latest master: https://xgboost.readthedocs.io/en/latest
  - 0.80 stable: https://xgboost.readthedocs.io/en/release_0.80
  - 0.72 stable: https://xgboost.readthedocs.io/en/release_0.72
* Ranking task now uses instance weights (#3379)
* Fix inaccurate decimal parsing (#3546)
* New functionality
  - Query ID column support in LIBSVM data files (#2749). This is convenient for performing ranking tasks in a distributed setting.
  - Hinge loss for binary classification (`binary:hinge`) (#3477)
  - Ability to specify delimiter and instance weight column for CSV files (#3546)
  - Ability to use 1-based indexing instead of 0-based (#3546)
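For a rough feel of what the new `binary:hinge` objective optimizes, here is a sketch of the per-instance hinge gradient/Hessian pair that a custom objective would return. This is an illustration only, not the library's C++ implementation; the function name is hypothetical, and using a constant Hessian surrogate of 1 is an assumption.

```python
def hinge_grad_hess(margin, label):
    """Hinge-loss gradient/Hessian for one instance (sketch, not
    xgboost's code). margin: raw prediction; label: 0 or 1."""
    y = 2.0 * label - 1.0          # map {0, 1} -> {-1, +1}
    if y * margin < 1.0:           # inside the margin: pull the prediction
        return -y, 1.0             # (gradient, constant Hessian surrogate)
    return 0.0, 1.0                # outside the margin: no pull

assert hinge_grad_hess(0.2, 1) == (-1.0, 1.0)   # under-confident positive
assert hinge_grad_hess(2.0, 1) == (0.0, 1.0)    # confidently correct
```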
* GPU support
  - Quantile sketch, binning, and index compression are now performed on GPU, eliminating PCIe transfer for 'gpu_hist' algorithm (#3319, #3393)
  - Upgrade to NCCL2 for multi-GPU training (#3404)
  - Use shared memory atomics for faster training (#3384)
  - Dynamically allocate GPU memory, to prevent large allocations for deep trees (#3519)
  - Fix memory copy bug for large files (#3472)
* Python package
  - Importing data from Python datatable (#3272)
  - Pre-built binary wheels available for 64-bit Linux and Windows (#3424, #3443)
  - Add new importance measures 'total_gain', 'total_cover' (#3498)
  - Sklearn API now supports saving and loading models (#3192)
  - Arbitrary cross validation fold indices (#3353)
  - `predict()` function in Sklearn API uses `best_ntree_limit` if available, to make early stopping easier to use (#3445)
  - Informational messages are now directed to Python's `print()` rather than standard output (#3438). This way, messages appear inside Jupyter notebooks.
* R package
  - Oracle Solaris support, per CRAN policy (#3372)
* JVM packages
  - Single-instance prediction (#3464)
  - Pre-built JARs are now available from Maven Central (#3401)
  - Add NULL pointer check (#3021)
  - Consider `spark.task.cpus` when controlling parallelism (#3530)
  - Handle missing values in prediction (#3529)
  - Eliminate outputs of `System.out` (#3572)
* Refactored C++ DMatrix class for simplicity and de-duplication (#3301)
* Refactored C++ histogram facilities (#3564)
* Refactored constraints / regularization mechanism for split finding (#3335, #3429). Users may specify an elastic net (L2 + L1 regularization) on leaf weights, as well as monotonic constraints on test nodes. The refactor will be useful for a future addition of feature interaction constraints.
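With an elastic-net penalty on leaf weights, the optimal leaf weight soft-thresholds the leaf's gradient sum by the L1 term before shrinking by the L2 term. The following is a minimal sketch of that standard second-order formula (the function name is hypothetical; this is not xgboost's actual code path):

```python
def leaf_weight(grad_sum, hess_sum, reg_alpha, reg_lambda):
    """Optimal leaf weight under an elastic-net penalty (sketch):
    soft-threshold the gradient sum G by alpha (L1), then shrink by
    lambda (L2): w* = -sign(G) * max(|G| - alpha, 0) / (H + lambda)."""
    if grad_sum > reg_alpha:
        num = grad_sum - reg_alpha
    elif grad_sum < -reg_alpha:
        num = grad_sum + reg_alpha
    else:
        return 0.0                     # L1 zeroes out small gradient sums
    return -num / (hess_sum + reg_lambda)

assert leaf_weight(5.0, 10.0, 0.0, 0.0) == -0.5   # no regularization
assert leaf_weight(0.5, 10.0, 1.0, 0.0) == 0.0    # thresholded to zero
```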
* Statically link `libstdc++` for MinGW32 (#3430)
* Enable loading from `group`, `base_margin` and `weight` (see [here](http://xgboost.readthedocs.io/en/latest/tutorials/input_format.html#auxiliary-files-for-additional-information)) for Python, R, and JVM packages (#3431)
* Fix model saving for `count:poisson` so that `max_delta_step` doesn't get truncated (#3515)
* Fix loading of sparse CSC matrix (#3553)
* Fix incorrect handling of `base_score` parameter for Tweedie regression (#3295)
## v0.72.1 (2018.07.08)
This version is only applicable for the Python package. The content is identical to that of v0.72.

## v0.72 (2018.06.01)
* Starting with this release, we plan to make a new release every two months. See #3252 for more details.
* Fix a pathological behavior (near-zero second-order gradients) in multiclass objective (#3304)
* Tree dumps now use high precision in storing floating-point values (#3298)
* Submodules `rabit` and `dmlc-core` have been brought up to date, bringing bug fixes (#3330, #3221)
* GPU support
  - Continuous integration tests for GPU code (#3294, #3309)
  - GPU accelerated coordinate descent algorithm (#3178)
  - Abstract 1D vector class now works with multiple GPUs (#3287)
  - Generate PTX code for most recent architecture (#3316)
  - Fix a memory bug on NVIDIA K80 cards (#3293)
  - Address performance instability for single-GPU, multi-core machines (#3324)
* Python package
  - FreeBSD support (#3247)
  - Validation of feature names in `Booster.predict()` is now optional (#3323)
* Updated Sklearn API
  - Validation sets now support instance weights (#2354)
  - `XGBClassifier.predict_proba()` should not support `output_margin` option (#3343). See BREAKING CHANGES below.
* R package
  - Better handling of NULL in `print.xgb.Booster()` (#3338)
  - Comply with CRAN policy by removing compiler warning suppression (#3329)
  - Updated CRAN submission
* JVM packages
  - JVM packages will now use the same versioning scheme as other packages (#3253)
  - Update Spark to 2.3 (#3254)
  - Add scripts to cross-build and deploy artifacts (#3276, #3307)
  - Fix a compilation error for Scala 2.10 (#3332)
* BREAKING CHANGES
  - `XGBClassifier.predict_proba()` no longer accepts the parameter `output_margin`. The parameter makes no sense for `predict_proba()` because the method is meant to predict class probabilities, not raw margin scores.
## v0.71 (2018.04.11)
* This is a minor release, mainly motivated by issues concerning `pip install`, e.g. #2426, #3189, #3118, and #3194. With this release, users of Linux and MacOS will be able to run `pip install` for the most part.
* Refactored linear booster class (`gblinear`), so as to support multiple coordinate descent updaters (#3103, #3134). See BREAKING CHANGES below.
* Fix slow training for multiclass classification with high number of classes (#3109)
* Fix a corner case in approximate quantile sketch (#3167). Applicable for 'hist' and 'gpu_hist' algorithms
* Fix memory leak in DMatrix (#3182)
* New functionality
  - Better linear booster class (#3103, #3134)
  - Pairwise SHAP interaction effects (#3043)
  - Cox loss (#3043)
  - AUC-PR metric for ranking task (#3172)
  - Monotonic constraints for 'hist' algorithm (#3085)
* GPU support
  - Create an abstract 1D vector class that moves data seamlessly between the main and GPU memory (#2935, #3116, #3068). This eliminates unnecessary PCIe data transfer during training time.
  - Fix minor bugs (#3051, #3217)
  - Fix compatibility error for CUDA 9.1 (#3218)
* Python package:
  - Correctly handle parameter `verbose_eval=0` (#3115)
* R package:
  - Eliminate segmentation fault on 32-bit Windows platform (#2994)
* JVM packages
  - Fix a memory bug involving double-freeing Booster objects (#3005, #3011)
  - Handle empty partition in predict (#3014)
  - Update docs and unify terminology (#3024)
  - Delete cache files after job finishes (#3022)
  - Compatibility fixes for latest Spark versions (#3062, #3093)
* BREAKING CHANGES: Updated linear modelling algorithms. In particular, L1/L2 regularisation penalties are now normalised to the number of training examples. This makes the implementation consistent with sklearn/glmnet. L2 regularisation has also been removed from the intercept. To produce linear models with the old regularisation behaviour, the alpha/lambda regularisation parameters can be manually scaled by dividing them by the number of training examples.
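The migration advice in that breaking change can be sketched as a one-liner. The helper name here is hypothetical, purely to illustrate the scaling described above:

```python
def rescale_gblinear_penalties(alpha, lam, n_train):
    """Recover the pre-0.71 gblinear regularisation behaviour: since the
    new implementation normalises L1/L2 penalties by the number of
    training examples, dividing the old alpha/lambda by n gives
    equivalent settings (a sketch of the migration note, not library code)."""
    return alpha / n_train, lam / n_train

assert rescale_gblinear_penalties(100.0, 50.0, 1000) == (0.1, 0.05)
```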
## v0.7 (2017.12.30)
* **This version represents a major change from the last release (v0.6), which was released one and a half years ago.**
* Updated Sklearn API
R-package/DESCRIPTION
@@ -1,12 +1,34 @@
 Package: xgboost
 Type: Package
 Title: Extreme Gradient Boosting
-Version: 0.6.4.8
-Date: 2017-12-05
-Author: Tianqi Chen <tianqi.tchen@gmail.com>, Tong He <hetong007@gmail.com>,
-    Michael Benesty <michael@benesty.fr>, Vadim Khotilovich <khotilovich@gmail.com>,
-    Yuan Tang <terrytangyuan@gmail.com>
-Maintainer: Tong He <hetong007@gmail.com>
+Version: 0.80.1
+Date: 2018-08-13
+Authors@R: c(
+  person("Tianqi", "Chen", role = c("aut"),
+         email = "tianqi.tchen@gmail.com"),
+  person("Tong", "He", role = c("aut", "cre"),
+         email = "hetong007@gmail.com"),
+  person("Michael", "Benesty", role = c("aut"),
+         email = "michael@benesty.fr"),
+  person("Vadim", "Khotilovich", role = c("aut"),
+         email = "khotilovich@gmail.com"),
+  person("Yuan", "Tang", role = c("aut"),
+         email = "terrytangyuan@gmail.com",
+         comment = c(ORCID = "0000-0001-5243-233X")),
+  person("Hyunsu", "Cho", role = c("aut"),
+         email = "chohyu01@cs.washington.edu"),
+  person("Kailong", "Chen", role = c("aut")),
+  person("Rory", "Mitchell", role = c("aut")),
+  person("Ignacio", "Cano", role = c("aut")),
+  person("Tianyi", "Zhou", role = c("aut")),
+  person("Mu", "Li", role = c("aut")),
+  person("Junyuan", "Xie", role = c("aut")),
+  person("Min", "Lin", role = c("aut")),
+  person("Yifeng", "Geng", role = c("aut")),
+  person("Yutian", "Li", role = c("aut")),
+  person("XGBoost contributors", role = c("cph"),
+         comment = "base XGBoost implementation")
+  )
 Description: Extreme Gradient Boosting, which is an efficient implementation
   of the gradient boosting framework from Chen & Guestrin (2016) <doi:10.1145/2939672.2939785>.
   This package is its R interface. The package includes efficient linear
@@ -19,6 +41,7 @@ Description: Extreme Gradient Boosting, which is an efficient implementation
 License: Apache License (== 2.0) | file LICENSE
 URL: https://github.com/dmlc/xgboost
 BugReports: https://github.com/dmlc/xgboost/issues
 NeedsCompilation: yes
 VignetteBuilder: knitr
 Suggests:
   knitr,
@@ -28,6 +51,7 @@ Suggests:
   Ckmeans.1d.dp (>= 3.3.1),
   vcd (>= 1.3),
   testthat,
   lintr,
   igraph (>= 1.0.1)
 Depends:
   R (>= 3.3.0)
@@ -38,3 +62,4 @@ Imports:
   magrittr (>= 1.5),
   stringi (>= 0.5.2)
 RoxygenNote: 6.0.1
 SystemRequirements: GNU make, C++11
R-package/NAMESPACE
@@ -18,6 +18,7 @@ export("xgb.parameters<-")
 export(cb.cv.predict)
 export(cb.early.stop)
 export(cb.evaluation.log)
+export(cb.gblinear.history)
 export(cb.print.evaluation)
 export(cb.reset.parameters)
 export(cb.save.model)
@@ -32,6 +33,7 @@ export(xgb.attributes)
 export(xgb.create.features)
 export(xgb.cv)
 export(xgb.dump)
+export(xgb.gblinear.history)
 export(xgb.ggplot.deepness)
 export(xgb.ggplot.importance)
 export(xgb.importance)
@@ -49,10 +51,11 @@ export(xgboost)
 import(methods)
 importClassesFrom(Matrix,dgCMatrix)
 importClassesFrom(Matrix,dgeMatrix)
 importFrom(Matrix,cBind)
 importFrom(Matrix,colSums)
 importFrom(Matrix,sparse.model.matrix)
 importFrom(Matrix,sparseMatrix)
+importFrom(Matrix,sparseVector)
 importFrom(Matrix,t)
 importFrom(data.table,":=")
 importFrom(data.table,as.data.table)
 importFrom(data.table,data.table)
R-package/R/callbacks.R
@@ -168,7 +168,7 @@ cb.evaluation.log <- function() {
 #' at the beginning of each iteration.
 #'
 #' Note that when training is resumed from some previous model, and a function is used to
-#' reset a parameter value, the \code{nround} argument in this function would be the
+#' reset a parameter value, the \code{nrounds} argument in this function would be the
 #' number of boosting rounds in the current training.
 #'
 #' Callback function expects the following values to be set in its calling frame:
@@ -524,6 +524,223 @@ cb.cv.predict <- function(save_models = FALSE) {
 }
#' Callback closure for collecting the model coefficients history of a gblinear booster
#' during its training.
#'
#' @param sparse when set to FALSE/TRUE, a dense/sparse matrix is used to store the result.
#'        Sparse format is useful when one expects only a subset of coefficients to be non-zero,
#'        when using the "thrifty" feature selector with a fairly small number of top features
#'        selected per iteration.
#'
#' @details
#' To keep things fast and simple, gblinear booster does not internally store the history of linear
#' model coefficients at each boosting iteration. This callback provides a workaround for storing
#' the coefficients' path, by extracting them after each training iteration.
#'
#' Callback function expects the following values to be set in its calling frame:
#' \code{bst} (or \code{bst_folds}).
#'
#' @return
#' Results are stored in the \code{coefs} element of the closure.
#' The \code{\link{xgb.gblinear.history}} convenience function provides an easy way to access it.
#' With \code{xgb.train}, it is either a dense or a sparse matrix.
#' With \code{xgb.cv}, it is a list (one element per fold) of such matrices.
#'
#' @seealso
#' \code{\link{callbacks}}, \code{\link{xgb.gblinear.history}}.
#'
#' @examples
#' #### Binary classification:
#' #
#' # In the iris dataset, it is hard to linearly separate Versicolor class from the rest
#' # without considering the 2nd order interactions:
#' require(magrittr)
#' x <- model.matrix(Species ~ .^2, iris)[,-1]
#' colnames(x)
#' dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"))
#' param <- list(booster = "gblinear", objective = "reg:logistic", eval_metric = "auc",
#'               lambda = 0.0003, alpha = 0.0003, nthread = 2)
#' # For 'shotgun', which is a default linear updater, using high eta values may result in
#' # unstable behaviour in some datasets. With this simple dataset, however, the high learning
#' # rate does not break the convergence, but allows us to illustrate the typical pattern of
#' # "stochastic explosion" behaviour of this lock-free algorithm at early boosting iterations.
#' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 1.,
#'                  callbacks = list(cb.gblinear.history()))
#' # Extract the coefficients' path and plot them vs boosting iteration number:
#' coef_path <- xgb.gblinear.history(bst)
#' matplot(coef_path, type = 'l')
#'
#' # With the deterministic coordinate descent updater, it is safer to use higher learning rates.
#' # Will try the classical componentwise boosting which selects a single best feature per round:
#' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 0.8,
#'                  updater = 'coord_descent', feature_selector = 'thrifty', top_k = 1,
#'                  callbacks = list(cb.gblinear.history()))
#' xgb.gblinear.history(bst) %>% matplot(type = 'l')
#' # Componentwise boosting is known to have a similar effect to Lasso regularization.
#' # Try experimenting with various values of top_k, eta, nrounds,
#' # as well as different feature_selectors.
#'
#' # For xgb.cv:
#' bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 100, eta = 0.8,
#'               callbacks = list(cb.gblinear.history()))
#' # coefficients in the CV fold #3
#' xgb.gblinear.history(bst)[[3]] %>% matplot(type = 'l')
#'
#'
#' #### Multiclass classification:
#' #
#' dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1)
#' param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
#'               lambda = 0.0003, alpha = 0.0003, nthread = 2)
#' # For the default linear updater 'shotgun' it sometimes is helpful
#' # to use smaller eta to reduce instability
#' bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 70, eta = 0.5,
#'                  callbacks = list(cb.gblinear.history()))
#' # Will plot the coefficient paths separately for each class:
#' xgb.gblinear.history(bst, class_index = 0) %>% matplot(type = 'l')
#' xgb.gblinear.history(bst, class_index = 1) %>% matplot(type = 'l')
#' xgb.gblinear.history(bst, class_index = 2) %>% matplot(type = 'l')
#'
#' # CV:
#' bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 70, eta = 0.5,
#'               callbacks = list(cb.gblinear.history(FALSE)))
#' # 1st fold of 1st class
#' xgb.gblinear.history(bst, class_index = 0)[[1]] %>% matplot(type = 'l')
#'
#' @export
cb.gblinear.history <- function(sparse=FALSE) {
  coefs <- NULL

  init <- function(env) {
    if (!is.null(env$bst)) { # xgb.train:
      coef_path <- list()
    } else if (!is.null(env$bst_folds)) { # xgb.cv:
      coef_path <- rep(list(), length(env$bst_folds))
    } else stop("Parent frame has neither 'bst' nor 'bst_folds'")
  }

  # convert from list to (sparse) matrix
  list2mat <- function(coef_list) {
    if (sparse) {
      coef_mat <- sparseMatrix(x = unlist(lapply(coef_list, slot, "x")),
                               i = unlist(lapply(coef_list, slot, "i")),
                               p = c(0, cumsum(sapply(coef_list, function(x) length(x@x)))),
                               dims = c(length(coef_list[[1]]), length(coef_list)))
      return(t(coef_mat))
    } else {
      return(do.call(rbind, coef_list))
    }
  }
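`list2mat()` assembles a compressed sparse matrix by concatenating each per-iteration vector's `x` (values) and `i` (indices) slots and deriving the pointer array `p` from cumulative lengths, then transposing so iterations become rows. The same pointer construction in a stdlib-Python sketch, treating each iteration as a row directly (illustrative only; R's `sparseMatrix` does this internally):

```python
def build_csr(rows):
    """Assemble CSR arrays from per-iteration sparse vectors.
    rows: list of (indices, values) pairs, one pair per boosting
    iteration. Returns (data, indices, indptr), where indptr is the
    cumulative-length row pointer, mirroring list2mat()'s 'p' slot."""
    data, indices, indptr = [], [], [0]
    for idx, vals in rows:
        indices.extend(idx)
        data.extend(vals)
        indptr.append(indptr[-1] + len(vals))
    return data, indices, indptr

data, indices, indptr = build_csr([([0, 3], [1.5, -2.0]), ([1], [0.7])])
assert indptr == [0, 2, 3]          # row i spans data[indptr[i]:indptr[i+1]]
assert indices == [0, 3, 1]
assert data == [1.5, -2.0, 0.7]
```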

  finalizer <- function(env) {
    if (length(coefs) == 0)
      return()
    if (!is.null(env$bst)) { # xgb.train:
      coefs <<- list2mat(coefs)
    } else { # xgb.cv:
      # first lapply transposes the list
      coefs <<- lapply(seq_along(coefs[[1]]), function(i) lapply(coefs, "[[", i)) %>%
        lapply(function(x) list2mat(x))
    }
  }

  extract.coef <- function(env) {
    if (!is.null(env$bst)) { # xgb.train:
      cf <- as.numeric(grep('(booster|bias|weigh)', xgb.dump(env$bst), invert = TRUE, value = TRUE))
      if (sparse) cf <- as(cf, "sparseVector")
    } else { # xgb.cv:
      cf <- vector("list", length(env$bst_folds))
      for (i in seq_along(env$bst_folds)) {
        dmp <- xgb.dump(xgb.handleToBooster(env$bst_folds[[i]]$bst))
        cf[[i]] <- as.numeric(grep('(booster|bias|weigh)', dmp, invert = TRUE, value = TRUE))
        if (sparse) cf[[i]] <- as(cf[[i]], "sparseVector")
      }
    }
    cf
  }

  callback <- function(env = parent.frame(), finalize = FALSE) {
    if (is.null(coefs)) init(env)
    if (finalize) return(finalizer(env))
    cf <- extract.coef(env)
    coefs <<- c(coefs, list(cf))
  }

  attr(callback, 'call') <- match.call()
  attr(callback, 'name') <- 'cb.gblinear.history'
  callback
}

#' Extract gblinear coefficients history.
#'
#' A helper function to extract the matrix of linear coefficients' history
#' from a gblinear model created while using the \code{cb.gblinear.history()}
#' callback.
#'
#' @param model either an \code{xgb.Booster} or a result of \code{xgb.cv()}, trained
#'        using the \code{cb.gblinear.history()} callback.
#' @param class_index zero-based class index to extract the coefficients for only that
#'        specific class in a multinomial multiclass model. When it is NULL, all the
#'        coefficients are returned. Has no effect in non-multiclass models.
#'
#' @return
#' For an \code{xgb.train} result, a matrix (either dense or sparse) with the columns
#' corresponding to the model's coefficients (in the order \code{xgb.dump()} would
#' return them) and the rows corresponding to boosting iterations.
#'
#' For an \code{xgb.cv} result, a list of such matrices is returned with the elements
#' corresponding to CV folds.
#'
#' @export
xgb.gblinear.history <- function(model, class_index = NULL) {

  if (!(inherits(model, "xgb.Booster") ||
        inherits(model, "xgb.cv.synchronous")))
    stop("model must be an object of either xgb.Booster or xgb.cv.synchronous class")
  is_cv <- inherits(model, "xgb.cv.synchronous")

  if (is.null(model[["callbacks"]]) || is.null(model$callbacks[["cb.gblinear.history"]]))
    stop("model must be trained while using the cb.gblinear.history() callback")

  if (!is_cv) {
    # extract num_class & num_feat from the internal model
    dmp <- xgb.dump(model)
    if (length(dmp) < 2 || dmp[2] != "bias:")
      stop("It does not appear to be a gblinear model")
    dmp <- dmp[-c(1, 2)]
    n <- which(dmp == 'weight:')
    if (length(n) != 1)
      stop("It does not appear to be a gblinear model")
    num_class <- n - 1
    num_feat <- (length(dmp) - 4) / num_class
  } else {
    # in case of CV, the object is expected to have this info
    if (model$params$booster != "gblinear")
      stop("It does not appear to be a gblinear model")
    num_class <- NVL(model$params$num_class, 1)
    num_feat <- model$nfeatures
    if (is.null(num_feat))
      stop("This xgb.cv result does not have nfeatures info")
  }

  if (!is.null(class_index) &&
      num_class > 1 &&
      (class_index[1] < 0 || class_index[1] >= num_class))
    stop("class_index has to be within [0,", num_class - 1, "]")

  coef_path <- environment(model$callbacks$cb.gblinear.history)[["coefs"]]
  if (!is.null(class_index) && num_class > 1) {
    coef_path <- if (is.list(coef_path)) {
      lapply(coef_path,
             function(x) x[, seq(1 + class_index, by = num_class, length.out = num_feat)])
    } else {
      coef_path[, seq(1 + class_index, by = num_class, length.out = num_feat)]
    }
  }
  coef_path
}
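The `seq(1 + class_index, by = num_class, length.out = num_feat)` slice above assumes multiclass coefficients are laid out interleaved by class (for each feature, all class coefficients are adjacent). A zero-based Python sketch of the same column selection, purely to illustrate the indexing:

```python
def class_columns(class_index, num_class, num_feat):
    """Zero-based equivalent of R's
    seq(1 + class_index, by = num_class, length.out = num_feat):
    with coefficients stored interleaved by class, pick the columns
    belonging to one class. Illustrative sketch of the indexing only."""
    return [class_index + k * num_class for k in range(num_feat)]

# 3 classes, 2 features; layout: [c0f0, c1f0, c2f0, c0f1, c1f1, c2f1]
assert class_columns(0, 3, 2) == [0, 3]
assert class_columns(2, 3, 2) == [2, 5]
```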


#
# Internal utility functions for callbacks ------------------------------------
#
R-package/R/xgb.Booster.R
@@ -37,11 +37,14 @@ xgb.handleToBooster <- function(handle, raw = NULL) {
 # Check whether xgb.Booster.handle is null
 # internal utility function
 is.null.handle <- function(handle) {
   if (is.null(handle)) return(TRUE)

   if (!identical(class(handle), "xgb.Booster.handle"))
     stop("argument type must be xgb.Booster.handle")

-  if (is.null(handle) || .Call(XGCheckNullPtr_R, handle))
+  if (.Call(XGCheckNullPtr_R, handle))
     return(TRUE)

   return(FALSE)
 }
@@ -537,7 +540,7 @@ xgb.ntree <- function(bst) {
 print.xgb.Booster <- function(x, verbose = FALSE, ...) {
   cat('##### xgb.Booster\n')

-  valid_handle <- is.null.handle(x$handle)
+  valid_handle <- !is.null.handle(x$handle)
   if (!valid_handle)
     cat("Handle is invalid! Suggest using xgb.Booster.complete\n")
R-package/R/xgb.create.features.R
@@ -52,9 +52,9 @@
 #' dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
 #'
 #' param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
-#' nround = 4
+#' nrounds = 4
 #'
-#' bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
+#' bst = xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)
 #'
 #' # Model accuracy without new features
 #' accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) /
@@ -68,7 +68,7 @@
 #' new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
 #' new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
 #' watchlist <- list(train = new.dtrain)
-#' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nround, nthread = 2)
+#' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
 #'
 #' # Model accuracy with new features
 #' accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) /
@@ -83,5 +83,5 @@ xgb.create.features <- function(model, data, ...){
   check.deprecation(...)
   pred_with_leaf <- predict(model, data, predleaf = TRUE)
   cols <- lapply(as.data.frame(pred_with_leaf), factor)
-  cBind(data, sparse.model.matrix( ~ . -1, cols))
+  cbind(data, sparse.model.matrix( ~ . -1, cols))
 }
R-package/R/xgb.cv.R
@@ -34,6 +34,7 @@
 #' \item \code{rmse} Rooted mean square error
 #' \item \code{logloss} negative log-likelihood function
 #' \item \code{auc} Area under curve
+#' \item \code{aucpr} Area under PR curve
 #' \item \code{merror} Exact matching error, used to evaluate multi-class classification
 #' }
 #' @param obj customized objective function. Returns gradient and second order
@@ -82,12 +83,13 @@
 #' \item \code{params} parameters that were passed to the xgboost library. Note that it does not
 #'       capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
 #' \item \code{callbacks} callback functions that were either automatically assigned or
-#'       explicitely passed.
+#'       explicitly passed.
 #' \item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
 #'       first column corresponding to iteration number and the rest corresponding to the
 #'       CV-based evaluation means and standard deviations for the training and test CV-sets.
 #'       It is created by the \code{\link{cb.evaluation.log}} callback.
 #' \item \code{niter} number of boosting iterations.
+#' \item \code{nfeatures} number of features in training data.
 #' \item \code{folds} the list of CV folds' indices - either those passed through the \code{folds}
 #'       parameter or randomly generated.
 #' \item \code{best_iteration} iteration number with the best evaluation metric value
@@ -184,6 +186,7 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
     handle <- xgb.Booster.handle(params, list(dtrain, dtest))
     list(dtrain = dtrain, bst = handle, watchlist = list(train = dtrain, test = dtest), index = folds[[k]])
   })
+  rm(dall)
   # a "basket" to collect some results from callbacks
   basket <- list()
@@ -221,6 +224,7 @@ xgb.cv <- function(params=list(), data, nrounds, nfold, label = NULL, missing =
     callbacks = callbacks,
     evaluation_log = evaluation_log,
     niter = end_iteration,
+    nfeatures = ncol(data),
     folds = folds
   )
   ret <- c(ret, basket)
R-package/R/xgb.dump.R
@@ -30,7 +30,8 @@
 #' bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
 #'                eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
 #' # save the model in file 'xgb.model.dump'
-#' xgb.dump(bst, 'xgb.model.dump', with_stats = TRUE)
+#' dump_path = file.path(tempdir(), 'model.dump')
+#' xgb.dump(bst, dump_path, with_stats = TRUE)
 #'
 #' # print the model without saving it to a file
 #' print(xgb.dump(bst, with_stats = TRUE))
R-package/R/xgb.plot.shap.R
@@ -212,6 +212,7 @@ xgb.plot.shap <- function(data, shap_contrib = NULL, features = NULL, top_n = 1,
   }
   if (plot && which == "2d") {
     # TODO
+    warning("Bivariate plotting is currently not available.")
   }
   invisible(list(data = data, shap_contrib = shap_contrib))
 }
R-package/R/xgb.train.R
@@ -22,7 +22,7 @@
 #' \item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. The larger, the more conservative the algorithm will be.
 #' \item \code{max_depth} maximum depth of a tree. Default: 6
 #' \item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
-#' \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nround}. Default: 1
+#' \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
 #' \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
 #' \item \code{num_parallel_tree} Experimental parameter. Number of trees to grow per round. Useful for testing Random Forest through XGBoost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
 #' \item \code{monotone_constraints} A numerical vector consisting of \code{1}, \code{0} and \code{-1} with its length equal to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.
@@ -121,12 +121,13 @@
 #' \itemize{
 #' \item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
 #' \item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
-#' \item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss/}
+#' \item \code{mlogloss} multiclass logloss. \url{http://wiki.fast.ai/index.php/Log_Loss}
 #' \item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
 #'       By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
 #'       A different threshold (e.g., 0.) could be specified as "error@0."
 #' \item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
 #' \item \code{auc} Area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve} for ranking evaluation.
+#' \item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
 #' \item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
 #' }
 #'
|
||||
#' (only available with early stopping).
|
||||
#' \item \code{feature_names} names of the training dataset features
|
||||
#' (only when comun names were defined in training data).
|
||||
#' \item \code{nfeatures} number of features in training data.
|
||||
#' }
|
||||
#'
|
||||
#' @seealso
|
||||
@@ -351,8 +353,8 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
   if (inherits(xgb_model, 'xgb.Booster') &&
       !is_update &&
       !is.null(xgb_model$evaluation_log) &&
-      all.equal(colnames(evaluation_log),
-                colnames(xgb_model$evaluation_log))) {
+      isTRUE(all.equal(colnames(evaluation_log),
+                       colnames(xgb_model$evaluation_log)))) {
     evaluation_log <- rbindlist(list(xgb_model$evaluation_log, evaluation_log))
   }
   bst$evaluation_log <- evaluation_log
@@ -363,6 +365,7 @@ xgb.train <- function(params = list(), data, nrounds, watchlist = list(),
   bst$callbacks <- callbacks
   if (!is.null(colnames(dtrain)))
     bst$feature_names <- colnames(dtrain)
+  bst$nfeatures <- ncol(dtrain)

   return(bst)
 }
@@ -77,10 +77,11 @@ NULL

 # Various imports
 #' @importClassesFrom Matrix dgCMatrix dgeMatrix
 #' @importFrom Matrix cBind
 #' @importFrom Matrix colSums
 #' @importFrom Matrix sparse.model.matrix
+#' @importFrom Matrix sparseVector
 #' @importFrom Matrix sparseMatrix
 #' @importFrom Matrix t
 #' @importFrom data.table data.table
 #' @importFrom data.table is.data.table
 #' @importFrom data.table as.data.table
@@ -30,4 +30,4 @@ Examples
|
||||
Development
|
||||
-----------
|
||||
|
||||
* See the [R Package section](https://xgboost.readthedocs.io/en/latest/how_to/contribute.html#r-package) of the contributiors guide.
|
||||
* See the [R Package section](https://xgboost.readthedocs.io/en/latest/how_to/contribute.html#r-package) of the contributors guide.
|
||||
|
||||
0
R-package/configure.win
Normal file
0
R-package/configure.win
Normal file
@@ -99,7 +99,8 @@ err <- as.numeric(sum(as.integer(pred > 0.5) != label))/length(label)
|
||||
print(paste("test-error=", err))
|
||||
|
||||
# You can dump the tree you learned using xgb.dump into a text file
|
||||
xgb.dump(bst, "dump.raw.txt", with_stats = T)
|
||||
dump_path = file.path(tempdir(), 'dump.raw.txt')
|
||||
xgb.dump(bst, dump_path, with_stats = T)
|
||||
|
||||
# Finally, you can check which features are the most important.
|
||||
print("Most important features (look at column Gain):")
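The hunk above routes the example's dump file through `tempdir()`, the CRAN-safe location for files written by examples. A minimal sketch of the idiom (the file name is just illustrative):

```r
# Write example artifacts under tempdir() so examples never touch
# the user's working directory (a CRAN policy requirement).
dump_path <- file.path(tempdir(), "dump.raw.txt")
writeLines("booster[0]:", dump_path)
file.exists(dump_path)
```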
@@ -5,20 +5,20 @@ data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)

nround <- 2
nrounds <- 2
param <- list(max_depth=2, eta=1, silent=1, nthread=2, objective='binary:logistic')

cat('running cross validation\n')
# do cross validation, this will print result out as
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
xgb.cv(param, dtrain, nround, nfold=5, metrics={'error'})
xgb.cv(param, dtrain, nrounds, nfold=5, metrics={'error'})

cat('running cross validation, disable standard deviation display\n')
# do cross validation, this will print result out as
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
xgb.cv(param, dtrain, nround, nfold=5,
xgb.cv(param, dtrain, nrounds, nfold=5,
metrics='error', showsd = FALSE)
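As an aside, the `metrics={'error'}` spelling in the demo above only works because braces in R form an expression that evaluates to its last value, so `{'error'}` is simply `'error'`:

```r
# In R, `{ ... }` is itself an expression returning its last value,
# so metrics={'error'} is equivalent to metrics='error'.
identical({'error'}, 'error')
```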
###
@@ -43,9 +43,9 @@ evalerror <- function(preds, dtrain) {
param <- list(max_depth=2, eta=1, silent=1,
objective = logregobj, eval_metric = evalerror)
# train with customized objective
xgb.cv(params = param, data = dtrain, nrounds = nround, nfold = 5)
xgb.cv(params = param, data = dtrain, nrounds = nrounds, nfold = 5)

# do cross validation with prediction values for each fold
res <- xgb.cv(params = param, data = dtrain, nrounds = nround, nfold = 5, prediction = TRUE)
res <- xgb.cv(params = param, data = dtrain, nrounds = nrounds, nfold = 5, prediction = TRUE)
res$evaluation_log
length(res$pred)

@@ -7,10 +7,10 @@ dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)

param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
watchlist <- list(eval = dtest, train = dtrain)
nround = 2
nrounds = 2

# training the model for two rounds
bst = xgb.train(param, dtrain, nround, nthread = 2, watchlist)
bst = xgb.train(param, dtrain, nrounds, nthread = 2, watchlist)
cat('start testing prediction from first n trees\n')
labels <- getinfo(dtest,'label')

@@ -11,10 +11,10 @@ dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)

param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
nround = 4
nrounds = 4

# training the model for two rounds
bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
bst = xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)

# Model accuracy without new features
accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) / length(agaricus.test$label)
@@ -32,7 +32,7 @@ create.new.tree.features <- function(model, original.features){
leaf.id <- sort(unique(pred_with_leaf[,i]))
cols[[i]] <- factor(x = pred_with_leaf[,i], level = leaf.id)
}
cBind(original.features, sparse.model.matrix( ~ . -1, as.data.frame(cols)))
cbind(original.features, sparse.model.matrix( ~ . -1, as.data.frame(cols)))
}

# Convert previous features to one hot encoding
@@ -43,7 +43,7 @@ new.features.test <- create.new.tree.features(bst, agaricus.test$data)
new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
watchlist <- list(train = new.dtrain)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nround, nthread = 2)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)

# Model accuracy with new features
accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) / length(agaricus.test$label)

95	R-package/man/cb.gblinear.history.Rd	Normal file
@@ -0,0 +1,95 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{cb.gblinear.history}
\alias{cb.gblinear.history}
\title{Callback closure for collecting the model coefficients history of a gblinear booster
during its training.}
\usage{
cb.gblinear.history(sparse = FALSE)
}
\arguments{
\item{sparse}{when set to FALSE/TRUE, a dense/sparse matrix is used to store the result.
Sparse format is useful when one expects only a subset of coefficients to be non-zero,
when using the "thrifty" feature selector with a fairly small number of top features
selected per iteration.}
}
\value{
Results are stored in the \code{coefs} element of the closure.
The \code{\link{xgb.gblinear.history}} convenience function provides an easy way to access it.
With \code{xgb.train}, it is either a dense or a sparse matrix.
While with \code{xgb.cv}, it is a list (an element per each fold) of such matrices.
}
\description{
Callback closure for collecting the model coefficients history of a gblinear booster
during its training.
}
\details{
To keep things fast and simple, gblinear booster does not internally store the history of linear
model coefficients at each boosting iteration. This callback provides a workaround for storing
the coefficients' path, by extracting them after each training iteration.

Callback function expects the following values to be set in its calling frame:
\code{bst} (or \code{bst_folds}).
}
\examples{
#### Binary classification:
#
# In the iris dataset, it is hard to linearly separate Versicolor class from the rest
# without considering the 2nd order interactions:
require(magrittr)
x <- model.matrix(Species ~ .^2, iris)[,-1]
colnames(x)
dtrain <- xgb.DMatrix(scale(x), label = 1*(iris$Species == "versicolor"))
param <- list(booster = "gblinear", objective = "reg:logistic", eval_metric = "auc",
lambda = 0.0003, alpha = 0.0003, nthread = 2)
# For 'shotgun', which is a default linear updater, using high eta values may result in
# unstable behaviour in some datasets. With this simple dataset, however, the high learning
# rate does not break the convergence, but allows us to illustrate the typical pattern of
# "stochastic explosion" behaviour of this lock-free algorithm at early boosting iterations.
bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 1.,
callbacks = list(cb.gblinear.history()))
# Extract the coefficients' path and plot them vs boosting iteration number:
coef_path <- xgb.gblinear.history(bst)
matplot(coef_path, type = 'l')

# With the deterministic coordinate descent updater, it is safer to use higher learning rates.
# Will try the classical componentwise boosting which selects a single best feature per round:
bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 200, eta = 0.8,
updater = 'coord_descent', feature_selector = 'thrifty', top_k = 1,
callbacks = list(cb.gblinear.history()))
xgb.gblinear.history(bst) \%>\% matplot(type = 'l')
# Componentwise boosting is known to have similar effect to Lasso regularization.
# Try experimenting with various values of top_k, eta, nrounds,
# as well as different feature_selectors.

# For xgb.cv:
bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 100, eta = 0.8,
callbacks = list(cb.gblinear.history()))
# coefficients in the CV fold #3
xgb.gblinear.history(bst)[[3]] \%>\% matplot(type = 'l')

#### Multiclass classification:
#
dtrain <- xgb.DMatrix(scale(x), label = as.numeric(iris$Species) - 1)
param <- list(booster = "gblinear", objective = "multi:softprob", num_class = 3,
lambda = 0.0003, alpha = 0.0003, nthread = 2)
# For the default linear updater 'shotgun' it sometimes is helpful
# to use smaller eta to reduce instability
bst <- xgb.train(param, dtrain, list(tr=dtrain), nrounds = 70, eta = 0.5,
callbacks = list(cb.gblinear.history()))
# Will plot the coefficient paths separately for each class:
xgb.gblinear.history(bst, class_index = 0) \%>\% matplot(type = 'l')
xgb.gblinear.history(bst, class_index = 1) \%>\% matplot(type = 'l')
xgb.gblinear.history(bst, class_index = 2) \%>\% matplot(type = 'l')

# CV:
bst <- xgb.cv(param, dtrain, nfold = 5, nrounds = 70, eta = 0.5,
callbacks = list(cb.gblinear.history(FALSE)))
# 1st fold of 1st class
xgb.gblinear.history(bst, class_index = 0)[[1]] \%>\% matplot(type = 'l')

}
\seealso{
\code{\link{callbacks}}, \code{\link{xgb.gblinear.history}}.
}
@@ -22,7 +22,7 @@ This is a "pre-iteration" callback function used to reset booster's parameters
at the beginning of each iteration.

Note that when training is resumed from some previous model, and a function is used to
reset a parameter value, the \code{nround} argument in this function would be the
reset a parameter value, the \code{nrounds} argument in this function would be the
number of boosting rounds in the current training.

Callback function expects the following values to be set in its calling frame:

@@ -63,9 +63,9 @@ dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)

param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
nround = 4
nrounds = 4

bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
bst = xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)

# Model accuracy without new features
accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) /
@@ -79,7 +79,7 @@ new.features.test <- xgb.create.features(model = bst, agaricus.test$data)
new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
watchlist <- list(train = new.dtrain)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nround, nthread = 2)
bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)

# Model accuracy with new features
accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) /

@@ -51,6 +51,7 @@ from each CV model. This parameter engages the \code{\link{cb.cv.predict}} callb
\item \code{rmse} Root mean square error
\item \code{logloss} negative log-likelihood function
\item \code{auc} Area under curve
\item \code{aucpr} Area under PR curve
\item \code{merror} Exact matching error, used to evaluate multi-class classification
}}

@@ -98,12 +99,13 @@ An object of class \code{xgb.cv.synchronous} with the following elements:
\item \code{params} parameters that were passed to the xgboost library. Note that it does not
capture parameters changed by the \code{\link{cb.reset.parameters}} callback.
\item \code{callbacks} callback functions that were either automatically assigned or
explicitely passed.
explicitly passed.
\item \code{evaluation_log} evaluation history stored as a \code{data.table} with the
first column corresponding to iteration number and the rest corresponding to the
CV-based evaluation means and standard deviations for the training and test CV-sets.
It is created by the \code{\link{cb.evaluation.log}} callback.
\item \code{niter} number of boosting iterations.
\item \code{nfeatures} number of features in training data.
\item \code{folds} the list of CV folds' indices - either those passed through the \code{folds}
parameter or randomly generated.
\item \code{best_iteration} iteration number with the best evaluation metric value

@@ -44,7 +44,8 @@ test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
# save the model in file 'xgb.model.dump'
xgb.dump(bst, 'xgb.model.dump', with_stats = TRUE)
dump.path = file.path(tempdir(), 'model.dump')
xgb.dump(bst, dump.path, with_stats = TRUE)

# print the model without saving it to a file
print(xgb.dump(bst, with_stats = TRUE))
29	R-package/man/xgb.gblinear.history.Rd	Normal file
@@ -0,0 +1,29 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/callbacks.R
\name{xgb.gblinear.history}
\alias{xgb.gblinear.history}
\title{Extract gblinear coefficients history.}
\usage{
xgb.gblinear.history(model, class_index = NULL)
}
\arguments{
\item{model}{either an \code{xgb.Booster} or a result of \code{xgb.cv()}, trained
using the \code{cb.gblinear.history()} callback.}

\item{class_index}{zero-based class index to extract the coefficients for only that
specific class in a multinomial multiclass model. When it is NULL, all the
coefficients are returned. Has no effect in non-multiclass models.}
}
\value{
For an \code{xgb.train} result, a matrix (either dense or sparse) with the columns
corresponding to iteration's coefficients (in the order as \code{xgb.dump()} would
return) and the rows corresponding to boosting iterations.

For an \code{xgb.cv} result, a list of such matrices is returned with the elements
corresponding to CV folds.
}
\description{
A helper function to extract the matrix of linear coefficients' history
from a gblinear model created while using the \code{cb.gblinear.history()}
callback.
}
@@ -35,7 +35,7 @@ xgboost(data = NULL, label = NULL, missing = NA, weight = NULL,
\item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. the larger, the more conservative the algorithm will be.
\item \code{max_depth} maximum depth of a tree. Default: 6
\item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
\item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nround}. Default: 1
\item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
\item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
\item \code{num_parallel_tree} Experimental parameter. number of trees to grow per round. Useful to test Random Forest through Xgboost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{round = 1}) accordingly. Default: 1
\item \code{monotone_constraints} A numerical vector consists of \code{1}, \code{0} and \code{-1} with its length equals to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.
@@ -155,6 +155,7 @@ An object of class \code{xgb.Booster} with the following elements:
(only available with early stopping).
\item \code{feature_names} names of the training dataset features
(only when column names were defined in training data).
\item \code{nfeatures} number of features in training data.
}
}
\description{
@@ -179,12 +180,13 @@ The following is the list of built-in metrics for which Xgboost provides optimi
\itemize{
\item \code{rmse} root mean square error. \url{http://en.wikipedia.org/wiki/Root_mean_square_error}
\item \code{logloss} negative log-likelihood. \url{http://en.wikipedia.org/wiki/Log-likelihood}
\item \code{mlogloss} multiclass logloss. \url{https://www.kaggle.com/wiki/MultiClassLogLoss/}
\item \code{mlogloss} multiclass logloss. \url{http://wiki.fast.ai/index.php/Log_Loss}
\item \code{error} Binary classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
By default, it uses the 0.5 threshold for predicted values to define negative and positive instances.
Different threshold (e.g., 0.) could be specified as "error@0."
\item \code{merror} Multiclass classification error rate. It is calculated as \code{(# wrong cases) / (# all cases)}.
\item \code{auc} Area under the curve. \url{http://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_curve} for ranking evaluation.
\item \code{aucpr} Area under the PR curve. \url{https://en.wikipedia.org/wiki/Precision_and_recall} for ranking evaluation.
\item \code{ndcg} Normalized Discounted Cumulative Gain (for ranking task). \url{http://en.wikipedia.org/wiki/NDCG}
}

14	R-package/remove_warning_suppression_pragma.sh	Executable file
@@ -0,0 +1,14 @@
#!/bin/bash
# remove all #pragma's that suppress compiler warnings
set -e
set -x
for file in xgboost/src/dmlc-core/include/dmlc/*.h
do
sed -i.bak -e 's/^.*#pragma GCC diagnostic.*$//' -e 's/^.*#pragma clang diagnostic.*$//' -e 's/^.*#pragma warning.*$//' "${file}"
done
for file in xgboost/src/dmlc-core/include/dmlc/*.h.bak
do
rm "${file}"
done
set +x
set +e
@@ -10,6 +10,12 @@ XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_

# disable the use of thread_local for 32 bit windows:
ifeq ($(R_OSTYPE)$(WIN),windows)
XGB_RFLAGS += -DDMLC_CXX11_THREAD_LOCAL=0
endif
$(foreach v, $(XGB_RFLAGS), $(warning $(v)))

PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= @OPENMP_CXXFLAGS@ $(SHLIB_PTHREAD_FLAGS)
PKG_LIBS = @OPENMP_CXXFLAGS@ $(SHLIB_PTHREAD_FLAGS)

@@ -4,7 +4,7 @@ ENABLE_STD_THREAD=0
# _*_ mode: Makefile; _*_

# This file is only used for windows compilation from github
# It will be replaced by Makevars in CRAN version
# It will be replaced with Makevars.in for the CRAN version
.PHONY: all xgblib
all: $(SHLIB)
$(SHLIB): xgblib
@@ -22,6 +22,12 @@ XGB_RFLAGS = -DXGBOOST_STRICT_R_MODE=1 -DDMLC_LOG_BEFORE_THROW=0\
-DDMLC_LOG_CUSTOMIZE=1 -DXGBOOST_CUSTOMIZE_LOGGER=1\
-DRABIT_CUSTOMIZE_MSG_ -DRABIT_STRICT_CXX98_

# disable the use of thread_local for 32 bit windows:
ifeq ($(R_OSTYPE)$(WIN),windows)
XGB_RFLAGS += -DDMLC_CXX11_THREAD_LOCAL=0
endif
$(foreach v, $(XGB_RFLAGS), $(warning $(v)))

PKG_CPPFLAGS= -I$(PKGROOT)/include -I$(PKGROOT)/dmlc-core/include -I$(PKGROOT)/rabit/include -I$(PKGROOT) $(XGB_RFLAGS)
PKG_CXXFLAGS= $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CFLAGS) $(SHLIB_PTHREAD_FLAGS)
@@ -19,10 +19,10 @@ extern SEXP XGBoosterBoostOneIter_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterCreate_R(SEXP);
extern SEXP XGBoosterDumpModel_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterEvalOneIter_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterGetAttr_R(SEXP, SEXP);
extern SEXP XGBoosterGetAttrNames_R(SEXP);
extern SEXP XGBoosterLoadModel_R(SEXP, SEXP);
extern SEXP XGBoosterGetAttr_R(SEXP, SEXP);
extern SEXP XGBoosterLoadModelFromRaw_R(SEXP, SEXP);
extern SEXP XGBoosterLoadModel_R(SEXP, SEXP);
extern SEXP XGBoosterModelToRaw_R(SEXP);
extern SEXP XGBoosterPredict_R(SEXP, SEXP, SEXP, SEXP);
extern SEXP XGBoosterSaveModel_R(SEXP, SEXP);
@@ -45,10 +45,10 @@ static const R_CallMethodDef CallEntries[] = {
{"XGBoosterCreate_R", (DL_FUNC) &XGBoosterCreate_R, 1},
{"XGBoosterDumpModel_R", (DL_FUNC) &XGBoosterDumpModel_R, 4},
{"XGBoosterEvalOneIter_R", (DL_FUNC) &XGBoosterEvalOneIter_R, 4},
{"XGBoosterGetAttr_R", (DL_FUNC) &XGBoosterGetAttr_R, 2},
{"XGBoosterGetAttrNames_R", (DL_FUNC) &XGBoosterGetAttrNames_R, 1},
{"XGBoosterLoadModel_R", (DL_FUNC) &XGBoosterLoadModel_R, 2},
{"XGBoosterGetAttr_R", (DL_FUNC) &XGBoosterGetAttr_R, 2},
{"XGBoosterLoadModelFromRaw_R", (DL_FUNC) &XGBoosterLoadModelFromRaw_R, 2},
{"XGBoosterLoadModel_R", (DL_FUNC) &XGBoosterLoadModel_R, 2},
{"XGBoosterModelToRaw_R", (DL_FUNC) &XGBoosterModelToRaw_R, 1},
{"XGBoosterPredict_R", (DL_FUNC) &XGBoosterPredict_R, 4},
{"XGBoosterSaveModel_R", (DL_FUNC) &XGBoosterSaveModel_R, 2},
@@ -11,6 +11,7 @@ set.seed(1994)
# disable some tests for Win32
windows_flag = .Platform$OS.type == "windows" &&
.Machine$sizeof.pointer != 8
solaris_flag = (Sys.info()['sysname'] == "SunOS")

test_that("train and predict binary classification", {
nrounds = 2
@@ -152,20 +153,20 @@ test_that("training continuation works", {
bst1 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0)
# continue for two more:
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0, xgb_model = bst1)
if (!windows_flag)
if (!windows_flag && !solaris_flag)
expect_equal(bst$raw, bst2$raw)
expect_false(is.null(bst2$evaluation_log))
expect_equal(dim(bst2$evaluation_log), c(4, 2))
expect_equal(bst2$evaluation_log, bst$evaluation_log)
# test continuing from raw model data
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0, xgb_model = bst1$raw)
if (!windows_flag)
if (!windows_flag && !solaris_flag)
expect_equal(bst$raw, bst2$raw)
expect_equal(dim(bst2$evaluation_log), c(2, 2))
# test continuing from a model in file
xgb.save(bst1, "xgboost.model")
bst2 <- xgb.train(param, dtrain, nrounds = 2, watchlist, verbose = 0, xgb_model = "xgboost.model")
if (!windows_flag)
if (!windows_flag && !solaris_flag)
expect_equal(bst$raw, bst2$raw)
expect_equal(dim(bst2$evaluation_log), c(2, 2))
})

@@ -77,6 +77,18 @@ test_that("xgb.DMatrix: slice, dim", {
expect_equal(getinfo(dsub1, 'label'), getinfo(dsub2, 'label'))
})

test_that("xgb.DMatrix: slice, trailing empty rows", {
data(agaricus.train, package='xgboost')
train_data <- agaricus.train$data
train_label <- agaricus.train$label
dtrain <- xgb.DMatrix(data=train_data, label=train_label)
slice(dtrain, 6513L)
train_data[6513, ] <- 0
dtrain <- xgb.DMatrix(data=train_data, label=train_label)
slice(dtrain, 6513L)
expect_equal(nrow(dtrain), 6513)
})

test_that("xgb.DMatrix: colnames", {
dtest <- xgb.DMatrix(test_data, label=test_label)
expect_equal(colnames(dtest), colnames(test_data))

@@ -9,7 +9,7 @@ test_that("train and prediction when gctorture is on", {
test <- agaricus.test
gctorture(TRUE)
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
pred <- predict(bst, test$data)
gctorture(FALSE)
})
@@ -2,18 +2,47 @@ context('Test generalized linear models')

require(xgboost)

test_that("glm works", {
test_that("gblinear works", {
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
expect_equal(class(dtrain), "xgb.DMatrix")
expect_equal(class(dtest), "xgb.DMatrix")

param <- list(objective = "binary:logistic", booster = "gblinear",
nthread = 2, alpha = 0.0001, lambda = 1)
nthread = 2, eta = 0.8, alpha = 0.0001, lambda = 0.0001)
watchlist <- list(eval = dtest, train = dtrain)
num_round <- 2
bst <- xgb.train(param, dtrain, num_round, watchlist)

n <- 5 # iterations
ERR_UL <- 0.005 # upper limit for the test set error
VERB <- 0 # chatterbox switch

param$updater = 'shotgun'
bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'shuffle')
ypred <- predict(bst, dtest)
expect_equal(length(getinfo(dtest, 'label')), 1611)
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)

bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'cyclic',
callbacks = list(cb.gblinear.history()))
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)
h <- xgb.gblinear.history(bst)
expect_equal(dim(h), c(n, ncol(dtrain) + 1))
expect_is(h, "matrix")

param$updater = 'coord_descent'
bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'cyclic')
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)

bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'shuffle')
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)

bst <- xgb.train(param, dtrain, 2, watchlist, verbose = VERB, feature_selector = 'greedy')
expect_lt(bst$evaluation_log$eval_error[2], ERR_UL)

bst <- xgb.train(param, dtrain, n, watchlist, verbose = VERB, feature_selector = 'thrifty',
top_n = 50, callbacks = list(cb.gblinear.history(sparse = TRUE)))
expect_lt(bst$evaluation_log$eval_error[n], ERR_UL)
h <- xgb.gblinear.history(bst)
expect_equal(dim(h), c(n, ncol(dtrain) + 1))
expect_s4_class(h, "dgCMatrix")
})
@@ -5,6 +5,11 @@ require(data.table)
|
||||
require(Matrix)
|
||||
require(vcd, quietly = TRUE)
|
||||
|
||||
float_tolerance = 5e-6
|
||||
|
||||
# disable some tests for Win32
|
||||
win32_flag = .Platform$OS.type == "windows" && .Machine$sizeof.pointer != 8
|
||||
|
||||
set.seed(1982)
|
||||
data(Arthritis)
|
||||
df <- data.table(Arthritis, keep.rownames = F)
|
||||
@@ -39,15 +44,18 @@ mbst.GLM <- xgboost(data = as.matrix(iris[, -5]), label = mlabel, verbose = 0,
|
||||
|
||||
|
||||
test_that("xgb.dump works", {
|
||||
expect_length(xgb.dump(bst.Tree), 200)
|
||||
expect_true(xgb.dump(bst.Tree, 'xgb.model.dump', with_stats = T))
|
||||
expect_true(file.exists('xgb.model.dump'))
|
||||
expect_gt(file.size('xgb.model.dump'), 8000)
|
||||
if (!win32_flag)
|
||||
expect_length(xgb.dump(bst.Tree), 200)
|
||||
dump_file = file.path(tempdir(), 'xgb.model.dump')
expect_true(xgb.dump(bst.Tree, dump_file, with_stats = T))
expect_true(file.exists(dump_file))
expect_gt(file.size(dump_file), 8000)

# JSON format
dmp <- xgb.dump(bst.Tree, dump_format = "json")
expect_length(dmp, 1)
expect_length(grep('nodeid', strsplit(dmp, '\n')[[1]]), 188)
if (!win32_flag)
  expect_length(grep('nodeid', strsplit(dmp, '\n')[[1]]), 188)
})

test_that("xgb.dump works for gblinear", {
@@ -85,7 +93,8 @@ test_that("predict feature contributions works", {
X <- sparse_matrix
colnames(X) <- NULL
expect_error(pred_contr_ <- predict(bst.Tree, X, predcontrib = TRUE), regexp = NA)
expect_equal(pred_contr, pred_contr_, check.attributes = FALSE)
expect_equal(pred_contr, pred_contr_, check.attributes = FALSE,
             tolerance = float_tolerance)

# gbtree binary classifier (approximate method)
expect_error(pred_contr <- predict(bst.Tree, sparse_matrix, predcontrib = TRUE, approxcontrib = TRUE), regexp = NA)
@@ -104,7 +113,8 @@ test_that("predict feature contributions works", {
coefs <- xgb.dump(bst.GLM)[-c(1,2,4)] %>% as.numeric
coefs <- c(coefs[-1], coefs[1]) # intercept must be the last
pred_contr_manual <- sweep(cbind(sparse_matrix, 1), 2, coefs, FUN="*")
expect_equal(as.numeric(pred_contr), as.numeric(pred_contr_manual), 1e-5)
expect_equal(as.numeric(pred_contr), as.numeric(pred_contr_manual),
             tolerance = float_tolerance)

# gbtree multiclass
pred <- predict(mbst.Tree, as.matrix(iris[, -5]), outputmargin = TRUE, reshape = TRUE)
@@ -123,11 +133,12 @@ test_that("predict feature contributions works", {
coefs_all <- xgb.dump(mbst.GLM)[-c(1,2,6)] %>% as.numeric %>% matrix(ncol = 3, byrow = TRUE)
for (g in seq_along(pred_contr)) {
  expect_equal(colnames(pred_contr[[g]]), c(colnames(iris[, -5]), "BIAS"))
  expect_lt(max(abs(rowSums(pred_contr[[g]]) - pred[, g])), 2e-6)
  expect_lt(max(abs(rowSums(pred_contr[[g]]) - pred[, g])), float_tolerance)
  # manual calculation of linear terms
  coefs <- c(coefs_all[-1, g], coefs_all[1, g]) # intercept needs to be the last
  pred_contr_manual <- sweep(as.matrix(cbind(iris[,-5], 1)), 2, coefs, FUN="*")
  expect_equal(as.numeric(pred_contr[[g]]), as.numeric(pred_contr_manual), 2e-6)
  expect_equal(as.numeric(pred_contr[[g]]), as.numeric(pred_contr_manual),
               tolerance = float_tolerance)
}
})

@@ -171,14 +182,16 @@ if (grepl('Windows', Sys.info()[['sysname']]) ||
# check that lossless conversion works with 17 digits
# numeric -> character -> numeric
X <- 10^runif(100, -20, 20)
X2X <- as.numeric(format(X, digits = 17))
expect_identical(X, X2X)
if (capabilities('long.double')) {
  X2X <- as.numeric(format(X, digits = 17))
  expect_identical(X, X2X)
}
# retrieved attributes to be the same as written
for (x in X) {
  xgb.attr(bst.Tree, "x") <- x
  expect_identical(as.numeric(xgb.attr(bst.Tree, "x")), x)
  expect_equal(as.numeric(xgb.attr(bst.Tree, "x")), x, tolerance = float_tolerance)
  xgb.attributes(bst.Tree) <- list(a = "A", b = x)
  expect_identical(as.numeric(xgb.attr(bst.Tree, "b")), x)
  expect_equal(as.numeric(xgb.attr(bst.Tree, "b")), x, tolerance = float_tolerance)
}
})
}
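The R check above depends on 17 significant digits being enough to round-trip a double through text losslessly. The same property can be sketched outside R (a plain-Python illustration, not part of the test suite):

```python
# 17 significant decimal digits are enough to round-trip any IEEE-754
# double through a decimal string without loss.
x = 10 ** 3.1415927
s = format(x, ".17g")   # numeric -> character, 17 significant digits
assert float(s) == x    # character -> numeric, exact round-trip
```

Fewer digits (e.g. `.15g`) would not be guaranteed lossless, which is why both the R test and the model attribute serialization use 17.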
@@ -187,7 +200,7 @@ test_that("xgb.Booster serializing as R object works", {
saveRDS(bst.Tree, 'xgb.model.rds')
bst <- readRDS('xgb.model.rds')
dtrain <- xgb.DMatrix(sparse_matrix, label = label)
expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain))
expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain), tolerance = float_tolerance)
expect_equal(xgb.dump(bst.Tree), xgb.dump(bst))
xgb.save(bst, 'xgb.model')
nil_ptr <- new("externalptr")
@@ -195,14 +208,15 @@ test_that("xgb.Booster serializing as R object works", {
expect_true(identical(bst$handle, nil_ptr))
bst <- xgb.Booster.complete(bst)
expect_true(!identical(bst$handle, nil_ptr))
expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain))
expect_equal(predict(bst.Tree, dtrain), predict(bst, dtrain), tolerance = float_tolerance)
})

test_that("xgb.model.dt.tree works with and without feature names", {
names.dt.trees <- c("Tree", "Node", "ID", "Feature", "Split", "Yes", "No", "Missing", "Quality", "Cover")
dt.tree <- xgb.model.dt.tree(feature_names = feature.names, model = bst.Tree)
expect_equal(names.dt.trees, names(dt.tree))
expect_equal(dim(dt.tree), c(188, 10))
if (!win32_flag)
  expect_equal(dim(dt.tree), c(188, 10))
expect_output(str(dt.tree), 'Feature.*\\"Age\\"')

dt.tree.0 <- xgb.model.dt.tree(model = bst.Tree)
@@ -228,18 +242,20 @@ test_that("xgb.model.dt.tree throws error for gblinear", {

test_that("xgb.importance works with and without feature names", {
importance.Tree <- xgb.importance(feature_names = feature.names, model = bst.Tree)
expect_equal(dim(importance.Tree), c(7, 4))
if (!win32_flag)
  expect_equal(dim(importance.Tree), c(7, 4))
expect_equal(colnames(importance.Tree), c("Feature", "Gain", "Cover", "Frequency"))
expect_output(str(importance.Tree), 'Feature.*\\"Age\\"')

importance.Tree.0 <- xgb.importance(model = bst.Tree)
expect_equal(importance.Tree, importance.Tree.0)
expect_equal(importance.Tree, importance.Tree.0, tolerance = float_tolerance)

# when model contains no feature names:
bst.Tree.x <- bst.Tree
bst.Tree.x$feature_names <- NULL
importance.Tree.x <- xgb.importance(model = bst.Tree)
expect_equal(importance.Tree[, -1, with=FALSE], importance.Tree.x[, -1, with=FALSE])
expect_equal(importance.Tree[, -1, with=FALSE], importance.Tree.x[, -1, with=FALSE],
             tolerance = float_tolerance)

imp2plot <- xgb.plot.importance(importance_matrix = importance.Tree)
expect_equal(colnames(imp2plot), c("Feature", "Gain", "Cover", "Frequency", "Importance"))

@@ -7,6 +7,10 @@ data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)

# Disable flaky tests for 32-bit Windows.
# See https://github.com/dmlc/xgboost/issues/3720
win32_flag = .Platform$OS.type == "windows" && .Machine$sizeof.pointer != 8

test_that("updating the model works", {
watchlist = list(train = dtrain, test = dtest)

@@ -29,7 +33,9 @@ test_that("updating the model works", {
tr1r <- xgb.model.dt.tree(model = bst1r)
# all should be the same when no subsampling
expect_equal(bst1$evaluation_log, bst1r$evaluation_log)
expect_equal(tr1, tr1r, tolerance = 0.00001, check.attributes = FALSE)
if (!win32_flag) {
  expect_equal(tr1, tr1r, tolerance = 0.00001, check.attributes = FALSE)
}

# the same boosting with subsampling with an extra 'refresh' updater:
p2r <- modifyList(p2, list(updater = 'grow_colmaker,prune,refresh', refresh_leaf = FALSE))
@@ -38,7 +44,9 @@ test_that("updating the model works", {
tr2r <- xgb.model.dt.tree(model = bst2r)
# should be the same evaluation but different gains and larger cover
expect_equal(bst2$evaluation_log, bst2r$evaluation_log)
expect_equal(tr2[Feature == 'Leaf']$Quality, tr2r[Feature == 'Leaf']$Quality)
if (!win32_flag) {
  expect_equal(tr2[Feature == 'Leaf']$Quality, tr2r[Feature == 'Leaf']$Quality)
}
expect_gt(sum(abs(tr2[Feature != 'Leaf']$Quality - tr2r[Feature != 'Leaf']$Quality)), 100)
expect_gt(sum(tr2r$Cover) / sum(tr2$Cover), 1.5)

@@ -61,7 +69,9 @@ test_that("updating the model works", {
expect_gt(sum(tr2u$Cover) / sum(tr2$Cover), 1.5)
# the results should be the same as for the model with an extra 'refresh' updater
expect_equal(bst2r$evaluation_log, bst2u$evaluation_log)
expect_equal(tr2r, tr2u, tolerance = 0.00001, check.attributes = FALSE)
if (!win32_flag) {
  expect_equal(tr2r, tr2u, tolerance = 0.00001, check.attributes = FALSE)
}

# process type 'update' for no-subsampling model, refreshing only the tree stats from TEST data:
p1ut <- modifyList(p1, list(process_type = 'update', updater = 'refresh', refresh_leaf = FALSE))
README.md (38 changed lines)
@@ -6,46 +6,28 @@
[License](./LICENSE)
[CRAN](http://cran.r-project.org/web/packages/xgboost)
[PyPI](https://pypi.python.org/pypi/xgboost/)
[Gitter](https://gitter.im/dmlc/xgboost?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

[Community](https://xgboost.ai/community) |
[Documentation](https://xgboost.readthedocs.org) |
[Resources](demo/README.md) |
[Installation](https://xgboost.readthedocs.org/en/latest/build.html) |
[Release Notes](NEWS.md) |
[RoadMap](https://github.com/dmlc/xgboost/issues/873)
[Contributors](CONTRIBUTORS.md) |
[Release Notes](NEWS.md)

XGBoost is an optimized distributed gradient boosting library designed to be highly ***efficient***, ***flexible*** and ***portable***.
It implements machine learning algorithms under the [Gradient Boosting](https://en.wikipedia.org/wiki/Gradient_boosting) framework.
XGBoost provides parallel tree boosting (also known as GBDT or GBM) that solves many data science problems in a fast and accurate way.
The same code runs on major distributed environments (Hadoop, SGE, MPI) and can scale to problems with billions of examples.

What's New
----------
* [XGBoost GPU support with fast histogram algorithm](https://github.com/dmlc/xgboost/tree/master/plugin/updater_gpu)
* [XGBoost4J: Portable Distributed XGBoost in Spark, Flink and Dataflow](http://dmlc.ml/2016/03/14/xgboost4j-portable-distributed-xgboost-in-spark-flink-and-dataflow.html), see [JVM-Package](https://github.com/dmlc/xgboost/tree/master/jvm-packages)
* [Story and Lessons Behind the Evolution of XGBoost](http://homes.cs.washington.edu/~tqchen/2016/03/10/story-and-lessons-behind-the-evolution-of-xgboost.html)
* [Tutorial: Distributed XGBoost on AWS with YARN](https://xgboost.readthedocs.io/en/latest/tutorials/aws_yarn.html)
* [XGBoost brick](NEWS.md) Release

Ask a Question
--------------
* For reporting bugs, please use the [xgboost/issues](https://github.com/dmlc/xgboost/issues) page.
* For generic questions or to share your experience using XGBoost, please use the [XGBoost User Group](https://groups.google.com/forum/#!forum/xgboost-user/)

Help to Make XGBoost Better
---------------------------
XGBoost has been developed and used by a group of active community members. Your help is very valuable in making the package better for everyone.
- Check out the [call for contributions](https://github.com/dmlc/xgboost/issues?q=is%3Aissue+label%3Acall-for-contribution+is%3Aopen) and the [Roadmap](https://github.com/dmlc/xgboost/issues/873) to see what can be improved, or open an issue if you want something.
- Contribute to the [documents and examples](https://github.com/dmlc/xgboost/blob/master/doc/) to share your experience with other users.
- Add your stories and experience to [Awesome XGBoost](demo/README.md).
- Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md) after your patch has been merged.
- Please also update [NEWS.md](NEWS.md) on changes and improvements in API and docs.

License
-------
© Contributors, 2016. Licensed under an [Apache-2](https://github.com/dmlc/xgboost/blob/master/LICENSE) license.

Contribute to XGBoost
---------------------
XGBoost has been developed and used by a group of active community members. Your help is very valuable in making the package better for everyone.
Check out the [Community Page](https://xgboost.ai/community)

Reference
---------
- Tianqi Chen and Carlos Guestrin. [XGBoost: A Scalable Tree Boosting System](http://arxiv.org/abs/1603.02754). In 22nd SIGKDD Conference on Knowledge Discovery and Data Mining, 2016
- XGBoost originates from a research project at the University of Washington; see also the [Project Page at UW](http://dmlc.cs.washington.edu/xgboost.html).
- Tianqi Chen and Carlos Guestrin. [XGBoost: A Scalable Tree Boosting System](http://arxiv.org/abs/1603.02754). In 22nd SIGKDD Conference on Knowledge Discovery and Data Mining, 2016
- XGBoost originates from a research project at the University of Washington.
@@ -7,6 +7,8 @@
#include "../dmlc-core/src/io/recordio_split.cc"
#include "../dmlc-core/src/io/input_split_base.cc"
#include "../dmlc-core/src/io/local_filesys.cc"
#include "../dmlc-core/src/io/filesys.cc"
#include "../dmlc-core/src/io/indexed_recordio_split.cc"
#include "../dmlc-core/src/data.cc"
#include "../dmlc-core/src/io.cc"
#include "../dmlc-core/src/recordio.cc"

@@ -20,6 +20,7 @@
#include "../src/objective/regression_obj.cc"
#include "../src/objective/multiclass_obj.cc"
#include "../src/objective/rank_obj.cc"
#include "../src/objective/hinge.cc"

// gbms
#include "../src/gbm/gbm.cc"
@@ -43,6 +44,7 @@
#endif

// trees
#include "../src/tree/split_evaluator.cc"
#include "../src/tree/tree_model.cc"
#include "../src/tree/tree_updater.cc"
#include "../src/tree/updater_colmaker.cc"
@@ -53,10 +55,16 @@
#include "../src/tree/updater_histmaker.cc"
#include "../src/tree/updater_skmaker.cc"

// linear
#include "../src/linear/linear_updater.cc"
#include "../src/linear/updater_coordinate.cc"
#include "../src/linear/updater_shotgun.cc"

// global
#include "../src/learner.cc"
#include "../src/logging.cc"
#include "../src/common/common.cc"
#include "../src/common/host_device_vector.cc"
#include "../src/common/hist_util.cc"

// c_api
appveyor.yml (11 changed lines)
@@ -52,8 +52,10 @@ install:
|
||||
Invoke-WebRequest http://raw.github.com/krlmlr/r-appveyor/master/scripts/appveyor-tool.ps1 -OutFile "$Env:TEMP\appveyor-tool.ps1"
|
||||
Import-Module "$Env:TEMP\appveyor-tool.ps1"
|
||||
Bootstrap
|
||||
$DEPS = "c('data.table','magrittr','stringi','ggplot2','DiagrammeR','Ckmeans.1d.dp','vcd','testthat','igraph','knitr','rmarkdown')"
|
||||
cmd /c "R.exe -q -e ""install.packages($DEPS, repos='$CRAN', type='win.binary')"" 2>&1"
|
||||
$DEPS = "c('data.table','magrittr','stringi','ggplot2','DiagrammeR','Ckmeans.1d.dp','vcd','testthat','lintr','knitr','rmarkdown')"
|
||||
cmd.exe /c "R.exe -q -e ""install.packages($DEPS, repos='$CRAN', type='both')"" 2>&1"
|
||||
$BINARY_DEPS = "c('XML','igraph')"
|
||||
cmd.exe /c "R.exe -q -e ""install.packages($BINARY_DEPS, repos='$CRAN', type='win.binary')"" 2>&1"
|
||||
}
|
||||
|
||||
build_script:
|
||||
@@ -81,7 +83,7 @@ build_script:
|
||||
- if /i "%target%" == "rmingw" (
|
||||
make Rbuild &&
|
||||
ls -l &&
|
||||
R.exe CMD INSTALL --no-multiarch xgboost*.tar.gz
|
||||
R.exe CMD INSTALL xgboost*.tar.gz
|
||||
)
|
||||
# R package: cmake + VC2015
|
||||
- if /i "%target%" == "rmsvc" (
|
||||
@@ -98,10 +100,9 @@ test_script:
|
||||
# mingw R package: run the R check (which includes unit tests), and also keep the built binary package
|
||||
- if /i "%target%" == "rmingw" (
|
||||
set _R_CHECK_CRAN_INCOMING_=FALSE&&
|
||||
R.exe CMD check xgboost*.tar.gz --no-manual --no-build-vignettes --as-cran --install-args=--build --no-multiarch
|
||||
R.exe CMD check xgboost*.tar.gz --no-manual --no-build-vignettes --as-cran --install-args=--build
|
||||
)
|
||||
# MSVC R package: run only the unit tests
|
||||
# TODO: create a binary msvc-built package to keep as an artifact
|
||||
- if /i "%target%" == "rmsvc" (
|
||||
cd build_rmsvc%ver%\R-package &&
|
||||
R.exe -q -e "library(testthat); setwd('tests'); source('testthat.R')"
|
||||
|
||||
build.sh (14 changed lines)
@@ -15,25 +15,21 @@ else

if [[ ! -e ./rabit/Makefile ]]; then
    echo ""
    echo "Please clone the rabit repository into this directory."
    echo "Here are the commands:"
    echo "rm -rf rabit"
    echo "git clone https://github.com/dmlc/rabit.git rabit"
    echo "Please init the rabit submodule:"
    echo "git submodule update --init --recursive -- rabit"
    not_ready=1
fi

if [[ ! -e ./dmlc-core/Makefile ]]; then
    echo ""
    echo "Please clone the dmlc-core repository into this directory."
    echo "Here are the commands:"
    echo "rm -rf dmlc-core"
    echo "git clone https://github.com/dmlc/dmlc-core.git dmlc-core"
    echo "Please init the dmlc-core submodule:"
    echo "git submodule update --init --recursive -- dmlc-core"
    not_ready=1
fi

if [[ "${not_ready}" == "1" ]]; then
    echo ""
    echo "Please fix the errors above and retry the build or reclone the repository with:"
    echo "Please fix the errors above and retry the build, or reclone the repository with:"
    echo "git clone --recursive https://github.com/dmlc/xgboost.git"
    echo ""
    exit 1
cmake/Sanitizer.cmake (new file, 58 lines)
@@ -0,0 +1,58 @@
# Set appropriate compiler and linker flags for sanitizers.
#
# Usage of this module:
#   enable_sanitizers("address;leak")

# Add flags
macro(enable_sanitizer sanitizer)
  if(${sanitizer} MATCHES "address")
    find_package(ASan REQUIRED)
    set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=address")
    link_libraries(${ASan_LIBRARY})

  elseif(${sanitizer} MATCHES "thread")
    find_package(TSan REQUIRED)
    set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=thread")
    link_libraries(${TSan_LIBRARY})

  elseif(${sanitizer} MATCHES "leak")
    find_package(LSan REQUIRED)
    set(SAN_COMPILE_FLAGS "${SAN_COMPILE_FLAGS} -fsanitize=leak")
    link_libraries(${LSan_LIBRARY})

  else()
    message(FATAL_ERROR "Sanitizer ${sanitizer} not supported.")
  endif()
endmacro()

macro(enable_sanitizers SANITIZERS)
  # Check sanitizer compatibility.
  # Ideally, we would use if(san IN_LIST SANITIZERS) ... endif(),
  # but I haven't figured out how to make it work.
  foreach ( _san ${SANITIZERS} )
    string(TOLOWER ${_san} _san)
    if (_san MATCHES "thread")
      if (${_use_other_sanitizers})
        message(FATAL_ERROR
          "thread sanitizer is not compatible with ${_san} sanitizer.")
      endif()
      set(_use_thread_sanitizer 1)
    else ()
      if (${_use_thread_sanitizer})
        message(FATAL_ERROR
          "${_san} sanitizer is not compatible with thread sanitizer.")
      endif()
      set(_use_other_sanitizers 1)
    endif()
  endforeach()

  message("Sanitizers: ${SANITIZERS}")

  foreach( _san ${SANITIZERS} )
    string(TOLOWER ${_san} _san)
    enable_sanitizer(${_san})
  endforeach()
  message("Sanitizers compile flags: ${SAN_COMPILE_FLAGS}")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_COMPILE_FLAGS}")
  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_COMPILE_FLAGS}")
endmacro()
@@ -54,10 +54,25 @@ function(set_default_configuration_release)
  endif()
endfunction(set_default_configuration_release)

# Generate nvcc compiler flags given a list of architectures
# Also generates PTX for the most recent architecture for forwards compatibility
function(format_gencode_flags flags out)
  # Set up architecture flags
  if(NOT flags)
    if((CUDA_VERSION_MAJOR EQUAL 9) OR (CUDA_VERSION_MAJOR GREATER 9))
      set(flags "35;50;52;60;61;70")
    else()
      set(flags "35;50;52;60;61")
    endif()
  endif()
  # Generate SASS
  foreach(ver ${flags})
    set(${out} "${${out}}-gencode arch=compute_${ver},code=sm_${ver};")
  endforeach()
  # Generate PTX for last architecture
  list(GET flags -1 ver)
  set(${out} "${${out}}-gencode arch=compute_${ver},code=compute_${ver};")

  set(${out} "${${out}}" PARENT_SCOPE)
endfunction(format_gencode_flags flags)
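The CMake function above emits one SASS `-gencode` entry per requested architecture, then appends a PTX entry for the newest architecture so future GPUs can JIT-compile the kernels. The same expansion can be sketched in plain Python (a hypothetical helper, for illustration only, not part of the build):

```python
def format_gencode_flags(archs):
    """Mirror of the CMake logic: one SASS entry per architecture,
    plus PTX for the last (most recent) one for forward compatibility."""
    flags = [f"-gencode arch=compute_{v},code=sm_{v}" for v in archs]
    last = archs[-1]
    flags.append(f"-gencode arch=compute_{last},code=compute_{last}")
    return flags

print(format_gencode_flags(["35", "70"])[-1])
# -gencode arch=compute_70,code=compute_70
```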
cmake/modules/FindASan.cmake (new file, 13 lines)
@@ -0,0 +1,13 @@
set(ASan_LIB_NAME ASan)

find_library(ASan_LIBRARY
  NAMES libasan.so libasan.so.4
  PATHS /usr/lib64 /usr/lib /usr/local/lib64 /usr/local/lib)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(ASan DEFAULT_MSG
  ASan_LIBRARY)

mark_as_advanced(
  ASan_LIBRARY
  ASan_LIB_NAME)
@@ -1,79 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Tries to find GTest headers and libraries.
#
# Usage of this module as follows:
#
#   find_package(GTest)
#
# Variables used by this module, they can change the default behaviour and need
# to be set before calling find_package:
#
#   GTest_HOME - When set, this path is inspected instead of standard library
#                locations as the root of the GTest installation.
#                The environment variable GTEST_HOME overrides this variable.
#
# This module defines
#   GTEST_INCLUDE_DIR, directory containing headers
#   GTEST_LIBS, directory containing gtest libraries
#   GTEST_STATIC_LIB, path to libgtest.a
#   GTEST_SHARED_LIB, path to libgtest's shared library
#   GTEST_FOUND, whether gtest has been found

find_path(GTEST_INCLUDE_DIR NAMES gtest/gtest.h gtest.h PATHS ${CMAKE_SOURCE_DIR}/gtest/include NO_DEFAULT_PATH)
find_library(GTEST_LIBRARIES NAMES gtest PATHS ${CMAKE_SOURCE_DIR}/gtest/lib NO_DEFAULT_PATH)

if (GTEST_INCLUDE_DIR)
  message(STATUS "Found the GTest includes: ${GTEST_INCLUDE_DIR}")
endif ()

if (GTEST_INCLUDE_DIR AND GTEST_LIBRARIES)
  set(GTEST_FOUND TRUE)
  get_filename_component(GTEST_LIBS ${GTEST_LIBRARIES} PATH)
  set(GTEST_LIB_NAME gtest)
  set(GTEST_STATIC_LIB ${GTEST_LIBS}/${CMAKE_STATIC_LIBRARY_PREFIX}${GTEST_LIB_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX})
  set(GTEST_MAIN_STATIC_LIB ${GTEST_LIBS}/${CMAKE_STATIC_LIBRARY_PREFIX}${GTEST_LIB_NAME}_main${CMAKE_STATIC_LIBRARY_SUFFIX})
  set(GTEST_SHARED_LIB ${GTEST_LIBS}/${CMAKE_SHARED_LIBRARY_PREFIX}${GTEST_LIB_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX})
else ()
  set(GTEST_FOUND FALSE)
endif ()

if (GTEST_FOUND)
  if (NOT GTest_FIND_QUIETLY)
    message(STATUS "Found the GTest library: ${GTEST_LIBRARIES}")
  endif ()
else ()
  if (NOT GTest_FIND_QUIETLY)
    set(GTEST_ERR_MSG "Could not find the GTest library. Looked in ")
    if ( _gtest_roots )
      set(GTEST_ERR_MSG "${GTEST_ERR_MSG} in ${_gtest_roots}.")
    else ()
      set(GTEST_ERR_MSG "${GTEST_ERR_MSG} system search paths.")
    endif ()
    if (GTest_FIND_REQUIRED)
      message(FATAL_ERROR "${GTEST_ERR_MSG}")
    else (GTest_FIND_REQUIRED)
      message(STATUS "${GTEST_ERR_MSG}")
    endif (GTest_FIND_REQUIRED)
  endif ()
endif ()

mark_as_advanced(
  GTEST_INCLUDE_DIR
  GTEST_LIBS
  GTEST_LIBRARIES
  GTEST_STATIC_LIB
  GTEST_SHARED_LIB
)
cmake/modules/FindLSan.cmake (new file, 13 lines)
@@ -0,0 +1,13 @@
set(LSan_LIB_NAME lsan)

find_library(LSan_LIBRARY
  NAMES liblsan.so liblsan.so.0 liblsan.so.0.0.0
  PATHS /usr/lib64 /usr/lib /usr/local/lib64 /usr/local/lib)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(LSan DEFAULT_MSG
  LSan_LIBRARY)

mark_as_advanced(
  LSan_LIBRARY
  LSan_LIB_NAME)
@@ -117,7 +117,7 @@ else()
# ask R for R_HOME
if(LIBR_EXECUTABLE)
  execute_process(
    COMMAND ${LIBR_EXECUTABLE} "--slave" "--no-save" "-e" "cat(normalizePath(R.home(), winslash='/'))"
    COMMAND ${LIBR_EXECUTABLE} "--slave" "--no-save" "-e" "cat(normalizePath(R.home(),winslash='/'))"
    OUTPUT_VARIABLE LIBR_HOME)
endif()
# if R executable not available, query R_HOME path from registry
cmake/modules/FindNccl.cmake (new file, 58 lines)
@@ -0,0 +1,58 @@
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Tries to find NCCL headers and libraries.
#
# Usage of this module as follows:
#
#   find_package(NCCL)
#
# Variables used by this module, they can change the default behaviour and need
# to be set before calling find_package:
#
#   NCCL_ROOT - When set, this path is inspected instead of standard library
#               locations as the root of the NCCL installation.
#               The environment variable NCCL_ROOT overrides this variable.
#
# This module defines
#   Nccl_FOUND, whether nccl has been found
#   NCCL_INCLUDE_DIR, directory containing header
#   NCCL_LIBRARY, directory containing nccl library
#   NCCL_LIB_NAME, nccl library name
#
# This module assumes that the user has already called find_package(CUDA)

set(NCCL_LIB_NAME nccl_static)

find_path(NCCL_INCLUDE_DIR
  NAMES nccl.h
  PATHS $ENV{NCCL_ROOT}/include ${NCCL_ROOT}/include ${CUDA_INCLUDE_DIRS} /usr/include)

find_library(NCCL_LIBRARY
  NAMES ${NCCL_LIB_NAME}
  PATHS $ENV{NCCL_ROOT}/lib ${NCCL_ROOT}/lib ${CUDA_INCLUDE_DIRS}/../lib /usr/lib)

if (NCCL_INCLUDE_DIR AND NCCL_LIBRARY)
  get_filename_component(NCCL_LIBRARY ${NCCL_LIBRARY} PATH)
endif ()

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Nccl DEFAULT_MSG
  NCCL_INCLUDE_DIR NCCL_LIBRARY)

mark_as_advanced(
  NCCL_INCLUDE_DIR
  NCCL_LIBRARY
  NCCL_LIB_NAME
)
cmake/modules/FindTSan.cmake (new file, 13 lines)
@@ -0,0 +1,13 @@
set(TSan_LIB_NAME tsan)

find_library(TSan_LIBRARY
  NAMES libtsan.so libtsan.so.0 libtsan.so.0.0.0
  PATHS /usr/lib64 /usr/lib /usr/local/lib64 /usr/local/lib)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(TSan DEFAULT_MSG
  TSan_LIBRARY)

mark_as_advanced(
  TSan_LIBRARY
  TSan_LIB_NAME)
@@ -80,12 +80,6 @@ booster = gblinear
# L2 regularization term on weights, default 0
lambda = 0.01
# L1 regularization term on weights, default 0
If ```agaricus.txt.test.buffer``` exists, xgboost automatically loads from the binary buffer when possible; this can speed up training when you run training many times. You can disable it by setting ```use_buffer=0```.
- The buffer file can also be used as standalone input, i.e. if the buffer file exists but the original agaricus.txt.test was removed, xgboost will still run
* Deviation from LibSVM input format: xgboost is compatible with the LibSVM format, with the following minor differences:
  - xgboost allows feature indices to start from 0
  - for binary classification, the label is 1 for positive and 0 for negative, instead of +1/-1
  - the feature indices in each line *do not* need to be sorted
alpha = 0.01
# L2 regularization term on bias, default 0
lambda_bias = 0.01
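The deviations listed above still follow the basic LibSVM line shape of `label index:value` pairs. As a minimal, hypothetical sketch (not part of xgboost) of how such a line is interpreted, with 0-based and unsorted indices allowed:

```python
def parse_libsvm_line(line):
    # "label idx:val idx:val ..." -> (label, {index: value})
    # Indices may start at 0 and need not appear in sorted order.
    label, *feats = line.split()
    features = {int(i): float(v) for i, v in (f.split(":") for f in feats)}
    return int(label), features

print(parse_libsvm_line("1 4:0.5 0:1 2:3.2"))
# (1, {4: 0.5, 0: 1.0, 2: 3.2})
```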
@@ -102,7 +96,7 @@ After training, we can use the output model to get the prediction of the test data
For binary classification, the output predictions are probability confidence scores in [0,1], corresponding to the probability of the label being positive.

#### Dump Model
This is a preliminary feature, so far only tree model support text dump. XGBoost can display the tree models in text files and we can scan the model in an easy way:
This is a preliminary feature, so only tree models support text dump. XGBoost can display the tree models in text or JSON files, and we can scan the model in an easy way:
```
../../xgboost mushroom.conf task=dump model_in=0002.model name_dump=dump.raw.txt
../../xgboost mushroom.conf task=dump model_in=0002.model fmap=featmap.txt name_dump=dump.nice.txt
@@ -2,8 +2,6 @@

This demo shows how to train a model on the [forest cover type](https://archive.ics.uci.edu/ml/datasets/covertype) dataset using GPU acceleration. The forest cover type dataset has 581,012 rows and 54 features, making it time consuming to process. We compare the run-time and accuracy of the GPU and CPU histogram algorithms.

This demo requires the [GPU plug-in](https://github.com/dmlc/xgboost/tree/master/plugin/updater_gpu) to be built and installed.
This demo requires the [GPU plug-in](https://xgboost.readthedocs.io/en/latest/gpu/index.html) to be built and installed.

The dataset is automatically loaded via the sklearn script.
@@ -1,7 +1,7 @@
XGBoost Python Feature Walkthrough
==================================
* [Basic walkthrough of wrappers](basic_walkthrough.py)
* [Cutomize loss function, and evaluation metric](custom_objective.py)
* [Customize loss function, and evaluation metric](custom_objective.py)
* [Boosting from existing prediction](boost_from_prediction.py)
* [Predicting using first n trees](predict_first_ntree.py)
* [Generalized Linear Model](generalized_linear_model.py)
@@ -42,7 +42,7 @@ xgb.cv(param, dtrain, num_round, nfold=5,
       metrics={'auc'}, seed=0, fpreproc=fpreproc)

###
# you can also do cross validation with cutomized loss function
# you can also do cross validation with customized loss function
# See custom_objective.py
##
print('running cross validation, with customized loss function')
@@ -33,10 +33,10 @@ def logregobj(preds, dtrain):
# Keep this in mind when you use the customization; you may need to write a customized evaluation function.
def evalerror(preds, dtrain):
    labels = dtrain.get_label()
    # return a pair metric_name, result
    # return a pair metric_name, result. The metric name must not contain a colon (:)
    # since preds are margin (before logistic transformation, cutoff at 0)
    return 'error', float(sum(labels != (preds > 0.0))) / len(labels)

# training with customized objective; we can also do step-by-step training
# simply look at xgboost.py's implementation of train
bst = xgb.train(param, dtrain, num_round, watchlist, logregobj, evalerror)
bst = xgb.train(param, dtrain, num_round, watchlist, obj=logregobj, feval=evalerror)
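The corrected call passes the objective and the metric by keyword. As a hedged, self-contained sketch of what such a pair computes (plain Python lists stand in for xgboost's DMatrix here, so `labels` is passed directly; the real callbacks receive `(preds, dtrain)` and read labels via `dtrain.get_label()`):

```python
import math

def logregobj(preds, labels):
    # preds are raw margins; return gradient/hessian of the logistic loss
    probs = [1.0 / (1.0 + math.exp(-p)) for p in preds]
    grad = [p - y for p, y in zip(probs, labels)]   # first-order derivative
    hess = [p * (1.0 - p) for p in probs]           # second-order derivative
    return grad, hess

def evalerror(preds, labels):
    # margins are cut off at 0: a positive margin predicts label 1
    wrong = sum(1 for p, y in zip(preds, labels) if (p > 0.0) != (y == 1))
    return 'error', wrong / len(labels)

print(evalerror([2.0, -1.0], [1, 0]))
# ('error', 0.0)
```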
@@ -24,9 +24,9 @@ param <- list("objective" = "binary:logitraw",
              "silent" = 1,
              "nthread" = 16)
watchlist <- list("train" = xgmat)
nround = 120
nrounds = 120
print("loading data end, start to boost trees")
bst = xgb.train(param, xgmat, nround, watchlist);
bst = xgb.train(param, xgmat, nrounds, watchlist);
# save out model
xgb.save(bst, "higgs.model")
print('finish training')

@@ -39,9 +39,9 @@ for (i in 1:length(threads)){
              "silent" = 1,
              "nthread" = thread)
watchlist <- list("train" = xgmat)
nround = 120
nrounds = 120
print("loading data end, start to boost trees")
bst = xgb.train(param, xgmat, nround, watchlist);
bst = xgb.train(param, xgmat, nrounds, watchlist);
# save out model
xgb.save(bst, "higgs.model")
print('finish training')
@@ -23,13 +23,13 @@ param <- list("objective" = "multi:softprob",
               "nthread" = 8)

 # Run Cross Validation
-cv.nround = 50
+cv.nrounds = 50
 bst.cv = xgb.cv(param=param, data = x[trind,], label = y,
-                nfold = 3, nrounds=cv.nround)
+                nfold = 3, nrounds=cv.nrounds)

 # Train the model
-nround = 50
-bst = xgboost(param=param, data = x[trind,], label = y, nrounds=nround)
+nrounds = 50
+bst = xgboost(param=param, data = x[trind,], label = y, nrounds=nrounds)

 # Make prediction
 pred = predict(bst,x[teind,])

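The hunk above trains a `multi:softprob` model; that objective returns one probability per class as a flat, row-major vector, and the demo's next step is to turn it into class predictions via a per-row argmax. A minimal sketch of that reshaping in plain Python (`softprob_to_labels` is a hypothetical helper, not part of the demo):

```python
def softprob_to_labels(flat_probs, num_class):
    """Reshape a flat multi:softprob output (row-major, ndata * num_class)
    and take the argmax of each row to get predicted class indices."""
    assert len(flat_probs) % num_class == 0
    labels = []
    for i in range(0, len(flat_probs), num_class):
        row = flat_probs[i:i + num_class]
        labels.append(max(range(num_class), key=lambda c: row[c]))
    return labels
```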
@@ -121,19 +121,19 @@ param <- list("objective" = "multi:softprob",
               "eval_metric" = "mlogloss",
               "num_class" = numberOfClasses)

-cv.nround <- 5
+cv.nrounds <- 5
 cv.nfold <- 3

 bst.cv = xgb.cv(param=param, data = trainMatrix, label = y,
-                nfold = cv.nfold, nrounds = cv.nround)
+                nfold = cv.nfold, nrounds = cv.nrounds)
 ```
 > As we can see the error rate is low on the test dataset (for a 5mn trained model).

 Finally, we are ready to train the real model!!!

 ```{r modelTraining}
-nround = 50
-bst = xgboost(param=param, data = trainMatrix, label = y, nrounds=nround)
+nrounds = 50
+bst = xgboost(param=param, data = trainMatrix, label = y, nrounds=nrounds)
 ```

 Model understanding
@@ -142,7 +142,7 @@ Model understanding
 Feature importance
 ------------------

-So far, we have built a model made of **`r nround`** trees.
+So far, we have built a model made of **`r nrounds`** trees.

 To build a tree, the dataset is divided recursively several times. At the end of the process, you get groups of observations (here, these observations are properties regarding **Otto** products).


@@ -14,8 +14,15 @@ For more usage details please refer to the [binary classification demo](../binar

 Instructions
 ====
-The dataset for ranking demo is from LETOR04 MQ2008 fold1,
-You can use the following command to run the example
+The dataset for ranking demo is from LETOR04 MQ2008 fold1.
+You can use the following command to run the example:

-Get the data: ./wgetdata.sh
-Run the example: ./runexp.sh
+Get the data:
+```
+./wgetdata.sh
+```
+
+Run the example:
+```
+./runexp.sh
+```

@@ -1,4 +1,4 @@
 #!/bin/bash
-wget http://research.microsoft.com/en-us/um/beijing/projects/letor/LETOR4.0/Data/MQ2008.rar
+wget https://s3-us-west-2.amazonaws.com/xgboost-examples/MQ2008.rar
 unrar x MQ2008.rar
 mv -f MQ2008/Fold1/*.txt .

Submodule dmlc-core updated: b5bec5481d...f2afdc7788
@@ -222,7 +222,7 @@ The code below is very usual. For more information, you can look at the document

 ```r
 bst <- xgboost(data = sparse_matrix, label = output_vector, max.depth = 4,
-               eta = 1, nthread = 2, nround = 10,objective = "binary:logistic")
+               eta = 1, nthread = 2, nrounds = 10,objective = "binary:logistic")
 ```

 ```
@@ -244,7 +244,7 @@ A model which fits too well may [overfit](http://en.wikipedia.org/wiki/Overfitti

 > Here you can see the numbers decrease until line 7 and then increase.
 >
-> It probably means we are overfitting. To fix that I should reduce the number of rounds to `nround = 4`. I will let things like that because I don't really care for the purpose of this example :-)
+> It probably means we are overfitting. To fix that I should reduce the number of rounds to `nrounds = 4`. I will let things like that because I don't really care for the purpose of this example :-)

 Feature importance
 ------------------
@@ -448,7 +448,7 @@ train <- agaricus.train
 test <- agaricus.test

 #Random Forest™ - 1000 trees
-bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree =0.5, nround = 1, objective = "binary:logistic")
+bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree =0.5, nrounds = 1, objective = "binary:logistic")
 ```

 ```
@@ -457,7 +457,7 @@ bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parall

 ```r
 #Boosting - 3 rounds
-bst <- xgboost(data = train$data, label = train$label, max.depth = 4, nround = 3, objective = "binary:logistic")
+bst <- xgboost(data = train$data, label = train$label, max.depth = 4, nrounds = 3, objective = "binary:logistic")
 ```

 ```

@@ -1,17 +0,0 @@
-XGBoost R Package
-=================
-[](http://cran.r-project.org/web/packages/xgboost)
-[](http://cran.rstudio.com/web/packages/xgboost/index.html)
-
-
-You have found the XGBoost R Package!
-
-Get Started
------------
-* Checkout the [Installation Guide](../build.md) contains instructions to install xgboost, and [Tutorials](#tutorials) for examples on how to use xgboost for various tasks.
-* Please visit [walk through example](../../R-package/demo).
-
-Tutorials
----------
-- [Introduction to XGBoost in R](xgboostPresentation.md)
-- [Discover your data with XGBoost in R](discoverYourData.md)

28 doc/R-package/index.rst Normal file
@@ -0,0 +1,28 @@
+#################
+XGBoost R Package
+#################
+
+.. raw:: html
+
+  <a href="http://cran.r-project.org/web/packages/xgboost"><img alt="CRAN Status Badge" src="http://www.r-pkg.org/badges/version/xgboost"></a>
+  <a href="http://cran.rstudio.com/web/packages/xgboost/index.html"><img alt="CRAN Downloads" src="http://cranlogs.r-pkg.org/badges/xgboost"></a>
+
+You have found the XGBoost R Package!
+
+***********
+Get Started
+***********
+* Checkout the :doc:`Installation Guide </build>` contains instructions to install xgboost, and :doc:`Tutorials </tutorials/index>` for examples on how to use XGBoost for various tasks.
+* Read the `API documentation <https://cran.r-project.org/web/packages/xgboost/xgboost.pdf>`_.
+* Please visit `Walk-through Examples <https://github.com/dmlc/xgboost/tree/master/R-package/demo>`_.
+
+*********
+Tutorials
+*********
+
+.. toctree::
+  :maxdepth: 2
+  :titlesonly:
+
+  Introduction to XGBoost in R <xgboostPresentation>
+  Understanding your dataset with XGBoost <discoverYourData>
@@ -178,11 +178,11 @@ We will train decision tree model using the following parameters:

 * `objective = "binary:logistic"`: we will train a binary classification model ;
 * `max.deph = 2`: the trees won't be deep, because our case is very simple ;
 * `nthread = 2`: the number of cpu threads we are going to use;
-* `nround = 2`: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction.
+* `nrounds = 2`: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction.

 ```r
-bstSparse <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+bstSparse <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
 ```

 ```
@@ -200,7 +200,7 @@ Alternatively, you can put your dataset in a *dense* matrix, i.e. a basic **R**

 ```r
-bstDense <- xgboost(data = as.matrix(train$data), label = train$label, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+bstDense <- xgboost(data = as.matrix(train$data), label = train$label, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
 ```

 ```
@@ -215,7 +215,7 @@ bstDense <- xgboost(data = as.matrix(train$data), label = train$label, max.depth

 ```r
 dtrain <- xgb.DMatrix(data = train$data, label = train$label)
-bstDMatrix <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+bstDMatrix <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
 ```

 ```
@@ -232,13 +232,13 @@ One of the simplest way to see the training progress is to set the `verbose` opt

 ```r
 # verbose = 0, no message
-bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic", verbose = 0)
+bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 0)
 ```

 ```r
 # verbose = 1, print evaluation metric
-bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic", verbose = 1)
+bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 1)
 ```

 ```
@@ -249,7 +249,7 @@ bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, o

 ```r
 # verbose = 2, also print information about tree
-bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nround = 2, objective = "binary:logistic", verbose = 2)
+bst <- xgboost(data = dtrain, max.depth = 2, eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic", verbose = 2)
 ```

 ```
@@ -372,7 +372,7 @@ For the purpose of this example, we use `watchlist` parameter. It is a list of `

 ```r
 watchlist <- list(train=dtrain, test=dtest)

-bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nround=2, watchlist=watchlist, objective = "binary:logistic")
+bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nrounds=2, watchlist=watchlist, objective = "binary:logistic")
 ```

 ```
@@ -380,7 +380,7 @@ bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nround=2, watchli
 ## [1] train-error:0.022263 test-error:0.021726
 ```

-**XGBoost** has computed at each round the same average error metric than seen above (we set `nround` to 2, that is why we have two lines). Obviously, the `train-error` number is related to the training dataset (the one the algorithm learns from) and the `test-error` number to the test dataset.
+**XGBoost** has computed at each round the same average error metric than seen above (we set `nrounds` to 2, that is why we have two lines). Obviously, the `train-error` number is related to the training dataset (the one the algorithm learns from) and the `test-error` number to the test dataset.

 Both training and test error related metrics are very similar, and in some way, it makes sense: what we have learned from the training dataset matches the observations from the test dataset.

@@ -390,7 +390,7 @@ For a better understanding of the learning progression, you may want to have som

 ```r
-bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nround=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
+bst <- xgb.train(data=dtrain, max.depth=2, eta=1, nthread = 2, nrounds=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
 ```

 ```
@@ -407,7 +407,7 @@ Until now, all the learnings we have performed were based on boosting trees. **X

 ```r
-bst <- xgb.train(data=dtrain, booster = "gblinear", max.depth=2, nthread = 2, nround=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
+bst <- xgb.train(data=dtrain, booster = "gblinear", max.depth=2, nthread = 2, nrounds=2, watchlist=watchlist, eval.metric = "error", eval.metric = "logloss", objective = "binary:logistic")
 ```

 ```
@@ -445,7 +445,7 @@ dtrain2 <- xgb.DMatrix("dtrain.buffer")
 ```

 ```r
-bst <- xgb.train(data=dtrain2, max.depth=2, eta=1, nthread = 2, nround=2, watchlist=watchlist, objective = "binary:logistic")
+bst <- xgb.train(data=dtrain2, max.depth=2, eta=1, nthread = 2, nrounds=2, watchlist=watchlist, objective = "binary:logistic")
 ```

 ```

@@ -1,7 +1,5 @@
 The documentation of xgboost is generated with recommonmark and sphinx.

+You can build it locally by typing "make html" in this folder.
-- clone https://github.com/tqchen/recommonmark to root
-- type make html

 Checkout https://recommonmark.readthedocs.org for guide on how to write markdown with extensions used in this doc, such as math formulas and table of content.

23 doc/_static/custom.css vendored Normal file
@@ -0,0 +1,23 @@
+div.breathe-sectiondef.container {
+  width: 100%;
+}
+
+div.literal-block-wrapper.container {
+  width: 100%;
+}
+
+.red {
+  color: red;
+}
+
+table {
+  border: 0;
+}
+
+td, th {
+  padding: 1px 8px 1px 5px;
+  border-top: 0;
+  border-bottom: 1px solid #aaa;
+  border-left: 0;
+  border-right: 0;
+}

5 doc/_static/xgboost-theme/footer.html vendored
@@ -1,5 +0,0 @@
-<div class="container">
-  <div class="footer">
-    <p> © 2015-2016 DMLC. All rights reserved. </p>
-  </div>
-</div>

58 doc/_static/xgboost-theme/index.html vendored
@@ -1,58 +0,0 @@
-<div class="splash">
-  <div class="container">
-    <div class="row">
-      <div class="col-lg-12">
-        <h1>Scalable and Flexible Gradient Boosting</h1>
-        <div id="social">
-          <span>
-            <iframe src="https://ghbtns.com/github-btn.html?user=dmlc&repo=xgboost&type=star&count=true&v=2"
-                    frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
-            <iframe src="https://ghbtns.com/github-btn.html?user=dmlc&repo=xgboost&type=fork&count=true&v=2"
-                    frameborder="0" scrolling="0" width="100px" height="20px"></iframe>
-          </span>
-        </div> <!-- end of social -->
-        <div class="get_start">
-          <a href="get_started/" class="get_start_btn">Get Started</a>
-        </div> <!-- end of get started button -->
-      </div>
-    </div>
-  </div>
-</div>
-
-<div class="section-tout">
-  <div class="container">
-    <div class="row">
-      <div class="col-lg-4 col-sm-6">
-        <h3><i class="fa fa-flag"></i> Flexible</h3>
-        <p>Supports regression, classification, ranking and user defined objectives.
-        </p>
-      </div>
-      <div class="col-lg-4 col-sm-6">
-        <h3><i class="fa fa-cube"></i> Portable</h3>
-        <p>Runs on Windows, Linux and OS X, as well as various cloud Platforms</p>
-      </div>
-      <div class="col-lg-4 col-sm-6">
-        <h3><i class="fa fa-wrench"></i>Multiple Languages</h3>
-        <p>Supports multiple languages including C++, Python, R, Java, Scala, Julia.</p>
-      </div>
-      <div class="col-lg-4 col-sm-6">
-        <h3><i class="fa fa-cogs"></i> Battle-tested</h3>
-        <p>Wins many data science and machine learning challenges.
-        Used in production by multiple companies.
-        </p>
-      </div>
-      <div class="col-lg-4 col-sm-6">
-        <h3><i class="fa fa-cloud"></i>Distributed on Cloud</h3>
-        <p>Supports distributed training on multiple machines, including AWS,
-        GCE, Azure, and Yarn clusters. Can be integrated with Flink, Spark and other cloud dataflow systems.</p>
-      </div>
-      <div class="col-lg-4 col-sm-6">
-        <h3><i class="fa fa-rocket"></i> Performance</h3>
-        <p>The well-optimized backend system for the best performance with limited resources.
-        The distributed version solves problems beyond billions of examples with same code.
-        </p>
-      </div>
-    </div>
-  </div>
-</div>

156 doc/_static/xgboost-theme/layout.html vendored
@@ -1,156 +0,0 @@
-{%- block doctype -%}
-<!DOCTYPE html>
-{%- endblock %}
-{%- set reldelim1 = reldelim1 is not defined and ' »' or reldelim1 %}
-{%- set reldelim2 = reldelim2 is not defined and ' |' or reldelim2 %}
-{%- set render_sidebar = (not embedded) and (not theme_nosidebar|tobool) and
-                         (sidebars != []) %}
-{%- set url_root = pathto('', 1) %}
-{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
-{%- if not embedded and docstitle %}
-  {%- set titlesuffix = " — "|safe + docstitle|e %}
-{%- else %}
-  {%- set titlesuffix = "" %}
-{%- endif %}
-
-{%- macro searchform(classes, button) %}
-<form class="{{classes}}" role="search" action="{{ pathto('search') }}" method="get">
-  <div class="form-group">
-    <input type="text" name="q" class="form-control" {{ 'placeholder="Search"' if not button }} >
-  </div>
-  <input type="hidden" name="check_keywords" value="yes" />
-  <input type="hidden" name="area" value="default" />
-  {% if button %}
-  <input type="submit" class="btn btn-default" value="search">
-  {% endif %}
-</form>
-{%- endmacro %}
-
-{%- macro sidebarglobal() %}
-<ul class="globaltoc">
-  {{ toctree(maxdepth=2|toint, collapse=False,includehidden=theme_globaltoc_includehidden|tobool) }}
-</ul>
-{%- endmacro %}
-
-{%- macro sidebar() %}
-{%- if render_sidebar %}
-<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
-  <div class="sphinxsidebarwrapper">
-    {%- block sidebartoc %}
-    {%- include "localtoc.html" %}
-    {%- endblock %}
-  </div>
-</div>
-{%- endif %}
-{%- endmacro %}
-
-
-{%- macro script() %}
-<script type="text/javascript">
-  var DOCUMENTATION_OPTIONS = {
-    URL_ROOT: '{{ url_root }}',
-    VERSION: '{{ release|e }}',
-    COLLAPSE_INDEX: false,
-    FILE_SUFFIX: '{{ '' if no_search_suffix else file_suffix }}',
-    HAS_SOURCE: {{ has_source|lower }}
-  };
-</script>
-
-{% for name in ['jquery.js', 'underscore.js', 'doctools.js', 'searchtools.js'] %}
-<script type="text/javascript" src="{{ pathto('_static/' + name, 1) }}"></script>
-{% endfor %}
-
-<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
-
-<!-- {%- for scriptfile in script_files %} -->
-<!-- <script type="text/javascript" src="{{ pathto(scriptfile, 1) }}"></script> -->
-<!-- {%- endfor %} -->
-{%- endmacro %}
-
-{%- macro css() %}
-<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" crossorigin="anonymous">
-{% if pagename == 'index' %}
-<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css">
-{%- else %}
-<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
-<link rel="stylesheet" href="{{ pathto('_static/pygments.css', 1) }}" type="text/css" />
-{%- endif %}
-
-<link rel="stylesheet" href="{{ pathto('_static/xgboost.css', 1) }}">
-{%- endmacro %}
-
-<html lang="en">
-<head>
-  <meta charset="{{ encoding }}">
-  <meta http-equiv="X-UA-Compatible" content="IE=edge">
-  <meta name="viewport" content="width=device-width, initial-scale=1">
-  {# The above 3 meta tags *must* come first in the head; any other head content
-  must come *after* these tags. #}
-  {{ metatags }}
-  {%- block htmltitle %}
-  {%- if pagename != 'index' %}
-  <title>{{ title|striptags|e }}{{ titlesuffix }}</title>
-  {%- else %}
-  <title>XGBoost Documents</title>
-  {%- endif %}
-  {%- endblock %}
-  {{ css() }}
-  {%- if not embedded %}
-  {{ script() }}
-  {%- if use_opensearch %}
-  <link rel="search" type="application/opensearchdescription+xml"
-        title="{% trans docstitle=docstitle|e %}Search within {{ docstitle }}{% endtrans %}"
-        href="{{ pathto('_static/opensearch.xml', 1) }}"/>
-  {%- endif %}
-  {%- if favicon %}
-  <link rel="shortcut icon" href="{{ pathto('_static/' + favicon, 1) }}"/>
-  {%- endif %}
-  {%- endif %}
-  {%- block linktags %}
-  {%- if hasdoc('about') %}
-  <link rel="author" title="{{ _('About these documents') }}" href="{{ pathto('about') }}" />
-  {%- endif %}
-  {%- if hasdoc('genindex') %}
-  <link rel="index" title="{{ _('Index') }}" href="{{ pathto('genindex') }}" />
-  {%- endif %}
-  {%- if hasdoc('search') %}
-  <link rel="search" title="{{ _('Search') }}" href="{{ pathto('search') }}" />
-  {%- endif %}
-  {%- if hasdoc('copyright') %}
-  <link rel="copyright" title="{{ _('Copyright') }}" href="{{ pathto('copyright') }}" />
-  {%- endif %}
-  {%- if parents %}
-  <link rel="up" title="{{ parents[-1].title|striptags|e }}" href="{{ parents[-1].link|e }}" />
-  {%- endif %}
-  {%- if next %}
-  <link rel="next" title="{{ next.title|striptags|e }}" href="{{ next.link|e }}" />
-  {%- endif %}
-  {%- if prev %}
-  <link rel="prev" title="{{ prev.title|striptags|e }}" href="{{ prev.link|e }}" />
-  {%- endif %}
-  {%- endblock %}
-  {%- block extrahead %} {% endblock %}
-
-  <link rel="icon" type="image/png" href="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-icon.png">
-</head>
-<body role="document">
-  {%- include "navbar.html" %}
-
-  {% if pagename != 'index' %}
-  <div class="container">
-    <div class="row">
-      {{ sidebar() }}
-      <div class="content">
-        {% block body %} {% endblock %}
-        {%- include "footer.html" %}
-      </div>
-    </div>
-  </div>
-  {%- else %}
-  {%- include "index.html" %}
-  {%- include "footer.html" %}
-  {%- endif %} <!-- pagename != index -->
-
-  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js" integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" crossorigin="anonymous"></script>
-</body>
-</html>

41 doc/_static/xgboost-theme/navbar.html vendored
@@ -1,41 +0,0 @@
-<div class="navbar navbar-default navbar-fixed-top">
-  <div class="container">
-    <div class="navbar-header">
-      <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
-        <span class="sr-only">Toggle navigation</span>
-        <span class="icon-bar"></span>
-        <span class="icon-bar"></span>
-        <span class="icon-bar"></span>
-      </button>
-    </div>
-    <div id="navbar" class="navbar-collapse collapse">
-      <ul id="navbar" class="navbar navbar-left">
-        <li> <a href="{{url_root}}">XGBoost</a> </li>
-        {% for name in ['Get Started', 'Tutorials', 'How To'] %}
-        <li> <a href="{{url_root}}{{name.lower()|replace(" ", "_")}}/index.html">{{name}}</a> </li>
-        {% endfor %}
-        {% for name in ['Packages'] %}
-        <li class="dropdown">
-          <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="true">{{name}} <span class="caret"></span></a>
-          <ul class="dropdown-menu">
-            <li><a href="{{url_root}}python/index.html">Python</a></li>
-            <li><a href="{{url_root}}R-package/index.html">R</a></li>
-            <li><a href="{{url_root}}jvm/index.html">JVM</a></li>
-            <li><a href="{{url_root}}julia/index.html">Julia</a></li>
-            <li><a href="{{url_root}}cli/index.html">CLI</a></li>
-            <li><a href="{{url_root}}gpu/index.html">GPU</a></li>
-          </ul>
-        </li>
-        {% endfor %}
-        <li> <a href="{{url_root}}/parameter.html"> Knobs </a> </li>
-        <li> {{searchform('', False)}} </li>
-      </ul>
-      <!--
-      <ul id="navbar" class="navbar navbar-right">
-        <li> <a href="{{url_root}}index.html"><span class="flag-icon flag-icon-us"></span></a> </li>
-        <li> <a href="{{url_root}}/zh/index.html"><span class="flag-icon flag-icon-cn"></span></a> </li>
-      </ul>
-      navbar -->
-    </div>
-  </div>
-</div>

752 doc/_static/xgboost-theme/static/searchtools.js vendored
@@ -1,752 +0,0 @@
|
||||
/*
|
||||
* searchtools.js_t
|
||||
* ~~~~~~~~~~~~~~~~
|
||||
*
|
||||
* Sphinx JavaScript utilities for the full-text search.
|
||||
*
|
||||
* :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
|
||||
* :license: BSD, see LICENSE for details.
|
||||
*
|
||||
*/
|
||||
|
||||
|
||||
/* Non-minified version JS is _stemmer.js if file is provided */
|
||||
/**
|
||||
* Porter Stemmer
|
||||
*/
|
||||
var Stemmer = function() {
|
||||
|
||||
var step2list = {
|
||||
ational: 'ate',
|
||||
tional: 'tion',
|
||||
enci: 'ence',
|
||||
anci: 'ance',
|
||||
izer: 'ize',
|
||||
bli: 'ble',
|
||||
alli: 'al',
|
||||
entli: 'ent',
|
||||
eli: 'e',
|
||||
ousli: 'ous',
|
||||
ization: 'ize',
|
||||
ation: 'ate',
|
||||
ator: 'ate',
|
||||
alism: 'al',
|
||||
iveness: 'ive',
|
||||
fulness: 'ful',
|
||||
ousness: 'ous',
|
||||
aliti: 'al',
|
||||
iviti: 'ive',
|
||||
biliti: 'ble',
|
||||
logi: 'log'
|
||||
};
|
||||
|
||||
var step3list = {
|
||||
icate: 'ic',
|
||||
ative: '',
|
||||
alize: 'al',
|
||||
iciti: 'ic',
|
||||
ical: 'ic',
|
||||
ful: '',
|
||||
ness: ''
|
||||
};
|
||||
|
||||
var c = "[^aeiou]"; // consonant
|
||||
var v = "[aeiouy]"; // vowel
|
||||
var C = c + "[^aeiouy]*"; // consonant sequence
|
||||
var V = v + "[aeiou]*"; // vowel sequence
|
||||
|
||||
var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0
|
||||
var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1
|
||||
var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1
|
||||
var s_v = "^(" + C + ")?" + v; // vowel in stem
|
||||
|
||||
this.stemWord = function (w) {
|
||||
var stem;
|
||||
var suffix;
|
||||
var firstch;
|
||||
var origword = w;
|
||||
|
||||
if (w.length < 3)
|
||||
return w;
|
||||
|
||||
var re;
|
||||
var re2;
|
||||
var re3;
|
||||
var re4;
|
||||
|
||||
firstch = w.substr(0,1);
|
||||
if (firstch == "y")
|
||||
w = firstch.toUpperCase() + w.substr(1);
|
||||
|
||||
// Step 1a
|
||||
re = /^(.+?)(ss|i)es$/;
|
||||
re2 = /^(.+?)([^s])s$/;
|
||||
|
||||
if (re.test(w))
|
||||
w = w.replace(re,"$1$2");
|
||||
else if (re2.test(w))
|
||||
w = w.replace(re2,"$1$2");
|
||||
|
||||
// Step 1b
|
||||
re = /^(.+?)eed$/;
|
||||
re2 = /^(.+?)(ed|ing)$/;
|
||||
if (re.test(w)) {
|
||||
var fp = re.exec(w);
|
||||
re = new RegExp(mgr0);
|
||||
if (re.test(fp[1])) {
|
||||
re = /.$/;
|
||||
w = w.replace(re,"");
|
||||
}
|
||||
}
|
||||
else if (re2.test(w)) {
|
||||
var fp = re2.exec(w);
|
||||
stem = fp[1];
|
||||
re2 = new RegExp(s_v);
|
||||
if (re2.test(stem)) {
|
||||
w = stem;
|
||||
re2 = /(at|bl|iz)$/;
|
||||
re3 = new RegExp("([^aeiouylsz])\\1$");
|
||||
re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
|
||||
if (re2.test(w))
|
||||
w = w + "e";
|
||||
else if (re3.test(w)) {
|
||||
re = /.$/;
|
||||
w = w.replace(re,"");
|
||||
}
|
||||
else if (re4.test(w))
|
||||
w = w + "e";
|
||||
}
|
||||
}
|
||||
|
||||
// Step 1c
|
||||
re = /^(.+?)y$/;
|
||||
if (re.test(w)) {
|
||||
var fp = re.exec(w);
|
||||
stem = fp[1];
|
||||
re = new RegExp(s_v);
|
||||
if (re.test(stem))
|
||||
w = stem + "i";
|
||||
}
|
||||
|
||||
// Step 2
|
||||
re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
|
||||
if (re.test(w)) {
|
||||
var fp = re.exec(w);
|
||||
stem = fp[1];
|
||||
suffix = fp[2];
|
||||
re = new RegExp(mgr0);
|
||||
if (re.test(stem))
|
||||
w = stem + step2list[suffix];
|
||||
}
|
||||
|
||||
// Step 3
|
||||
re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
|
||||
if (re.test(w)) {
|
||||
var fp = re.exec(w);
|
||||
stem = fp[1];
|
||||
suffix = fp[2];
|
||||
re = new RegExp(mgr0);
|
||||
if (re.test(stem))
|
||||
w = stem + step3list[suffix];
|
||||
}
|
||||
|
||||
// Step 4
|
||||
re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
|
||||
re2 = /^(.+?)(s|t)(ion)$/;
|
||||
if (re.test(w)) {
|
||||
var fp = re.exec(w);
|
||||
stem = fp[1];
|
||||
re = new RegExp(mgr1);
|
||||
if (re.test(stem))
|
||||
w = stem;
|
||||
}
|
||||
else if (re2.test(w)) {
|
||||
var fp = re2.exec(w);
|
||||
stem = fp[1] + fp[2];
|
||||
re2 = new RegExp(mgr1);
|
||||
if (re2.test(stem))
|
||||
w = stem;
|
||||
}
|
||||
|
||||
// Step 5
|
||||
re = /^(.+?)e$/;
|
||||
if (re.test(w)) {
|
||||
var fp = re.exec(w);
|
||||
stem = fp[1];
|
||||
re = new RegExp(mgr1);
|
||||
re2 = new RegExp(meq1);
|
||||
re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
|
||||
if (re.test(stem) || (re2.test(stem) && !(re3.test(stem))))
|
||||
w = stem;
|
||||
}
|
||||
re = /ll$/;
|
||||
re2 = new RegExp(mgr1);
|
||||
if (re.test(w) && re2.test(w)) {
|
||||
re = /.$/;
|
||||
w = w.replace(re,"");
|
||||
}
|
||||
|
||||
// and turn initial Y back to y
|
||||
if (firstch == "y")
|
||||
w = firstch.toLowerCase() + w.substr(1);
|
||||
return w;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
||||
/**
|
||||
* Simple result scoring code.
|
||||
*/
|
||||
var Scorer = {
|
||||
// Implement the following function to further tweak the score for each result
|
||||
// The function takes a result array [filename, title, anchor, descr, score]
|
||||
// and returns the new score.
|
||||
/*
|
||||
score: function(result) {
|
||||
return result[4];
|
||||
},
|
||||
*/
|
||||
|
||||
// query matches the full name of an object
|
||||
objNameMatch: 11,
|
||||
// or matches in the last dotted part of the object name
|
||||
objPartialMatch: 6,
|
||||
// Additive scores depending on the priority of the object
|
||||
objPrio: {0: 15, // used to be importantResults
|
||||
1: 5, // used to be objectResults
|
||||
2: -5}, // used to be unimportantResults
|
||||
// Used when the priority is not in the mapping.
|
||||
objPrioDefault: 0,
|
||||
|
||||
// query found in title
|
||||
title: 15,
|
||||
// query found in terms
|
||||
term: 5
|
||||
};


var splitChars = (function() {
    var result = {};
    var singles = [96, 180, 187, 191, 215, 247, 749, 885, 903, 907, 909, 930, 1014, 1648,
         1748, 1809, 2416, 2473, 2481, 2526, 2601, 2609, 2612, 2615, 2653, 2702,
         2706, 2729, 2737, 2740, 2857, 2865, 2868, 2910, 2928, 2948, 2961, 2971,
         2973, 3085, 3089, 3113, 3124, 3213, 3217, 3241, 3252, 3295, 3341, 3345,
         3369, 3506, 3516, 3633, 3715, 3721, 3736, 3744, 3748, 3750, 3756, 3761,
         3781, 3912, 4239, 4347, 4681, 4695, 4697, 4745, 4785, 4799, 4801, 4823,
         4881, 5760, 5901, 5997, 6313, 7405, 8024, 8026, 8028, 8030, 8117, 8125,
         8133, 8181, 8468, 8485, 8487, 8489, 8494, 8527, 11311, 11359, 11687, 11695,
         11703, 11711, 11719, 11727, 11735, 12448, 12539, 43010, 43014, 43019, 43587,
         43696, 43713, 64286, 64297, 64311, 64317, 64319, 64322, 64325, 65141];
    var i, j, start, end;
    for (i = 0; i < singles.length; i++) {
        result[singles[i]] = true;
    }
    var ranges = [[0, 47], [58, 64], [91, 94], [123, 169], [171, 177], [182, 184], [706, 709],
         [722, 735], [741, 747], [751, 879], [888, 889], [894, 901], [1154, 1161],
         [1318, 1328], [1367, 1368], [1370, 1376], [1416, 1487], [1515, 1519], [1523, 1568],
         [1611, 1631], [1642, 1645], [1750, 1764], [1767, 1773], [1789, 1790], [1792, 1807],
         [1840, 1868], [1958, 1968], [1970, 1983], [2027, 2035], [2038, 2041], [2043, 2047],
         [2070, 2073], [2075, 2083], [2085, 2087], [2089, 2307], [2362, 2364], [2366, 2383],
         [2385, 2391], [2402, 2405], [2419, 2424], [2432, 2436], [2445, 2446], [2449, 2450],
         [2483, 2485], [2490, 2492], [2494, 2509], [2511, 2523], [2530, 2533], [2546, 2547],
         [2554, 2564], [2571, 2574], [2577, 2578], [2618, 2648], [2655, 2661], [2672, 2673],
         [2677, 2692], [2746, 2748], [2750, 2767], [2769, 2783], [2786, 2789], [2800, 2820],
         [2829, 2830], [2833, 2834], [2874, 2876], [2878, 2907], [2914, 2917], [2930, 2946],
         [2955, 2957], [2966, 2968], [2976, 2978], [2981, 2983], [2987, 2989], [3002, 3023],
         [3025, 3045], [3059, 3076], [3130, 3132], [3134, 3159], [3162, 3167], [3170, 3173],
         [3184, 3191], [3199, 3204], [3258, 3260], [3262, 3293], [3298, 3301], [3312, 3332],
         [3386, 3388], [3390, 3423], [3426, 3429], [3446, 3449], [3456, 3460], [3479, 3481],
         [3518, 3519], [3527, 3584], [3636, 3647], [3655, 3663], [3674, 3712], [3717, 3718],
         [3723, 3724], [3726, 3731], [3752, 3753], [3764, 3772], [3774, 3775], [3783, 3791],
         [3802, 3803], [3806, 3839], [3841, 3871], [3892, 3903], [3949, 3975], [3980, 4095],
         [4139, 4158], [4170, 4175], [4182, 4185], [4190, 4192], [4194, 4196], [4199, 4205],
         [4209, 4212], [4226, 4237], [4250, 4255], [4294, 4303], [4349, 4351], [4686, 4687],
         [4702, 4703], [4750, 4751], [4790, 4791], [4806, 4807], [4886, 4887], [4955, 4968],
         [4989, 4991], [5008, 5023], [5109, 5120], [5741, 5742], [5787, 5791], [5867, 5869],
         [5873, 5887], [5906, 5919], [5938, 5951], [5970, 5983], [6001, 6015], [6068, 6102],
         [6104, 6107], [6109, 6111], [6122, 6127], [6138, 6159], [6170, 6175], [6264, 6271],
         [6315, 6319], [6390, 6399], [6429, 6469], [6510, 6511], [6517, 6527], [6572, 6592],
         [6600, 6607], [6619, 6655], [6679, 6687], [6741, 6783], [6794, 6799], [6810, 6822],
         [6824, 6916], [6964, 6980], [6988, 6991], [7002, 7042], [7073, 7085], [7098, 7167],
         [7204, 7231], [7242, 7244], [7294, 7400], [7410, 7423], [7616, 7679], [7958, 7959],
         [7966, 7967], [8006, 8007], [8014, 8015], [8062, 8063], [8127, 8129], [8141, 8143],
         [8148, 8149], [8156, 8159], [8173, 8177], [8189, 8303], [8306, 8307], [8314, 8318],
         [8330, 8335], [8341, 8449], [8451, 8454], [8456, 8457], [8470, 8472], [8478, 8483],
         [8506, 8507], [8512, 8516], [8522, 8525], [8586, 9311], [9372, 9449], [9472, 10101],
         [10132, 11263], [11493, 11498], [11503, 11516], [11518, 11519], [11558, 11567],
         [11622, 11630], [11632, 11647], [11671, 11679], [11743, 11822], [11824, 12292],
         [12296, 12320], [12330, 12336], [12342, 12343], [12349, 12352], [12439, 12444],
         [12544, 12548], [12590, 12592], [12687, 12689], [12694, 12703], [12728, 12783],
         [12800, 12831], [12842, 12880], [12896, 12927], [12938, 12976], [12992, 13311],
         [19894, 19967], [40908, 40959], [42125, 42191], [42238, 42239], [42509, 42511],
         [42540, 42559], [42592, 42593], [42607, 42622], [42648, 42655], [42736, 42774],
         [42784, 42785], [42889, 42890], [42893, 43002], [43043, 43055], [43062, 43071],
         [43124, 43137], [43188, 43215], [43226, 43249], [43256, 43258], [43260, 43263],
         [43302, 43311], [43335, 43359], [43389, 43395], [43443, 43470], [43482, 43519],
         [43561, 43583], [43596, 43599], [43610, 43615], [43639, 43641], [43643, 43647],
         [43698, 43700], [43703, 43704], [43710, 43711], [43715, 43738], [43742, 43967],
         [44003, 44015], [44026, 44031], [55204, 55215], [55239, 55242], [55292, 55295],
         [57344, 63743], [64046, 64047], [64110, 64111], [64218, 64255], [64263, 64274],
         [64280, 64284], [64434, 64466], [64830, 64847], [64912, 64913], [64968, 65007],
         [65020, 65135], [65277, 65295], [65306, 65312], [65339, 65344], [65371, 65381],
         [65471, 65473], [65480, 65481], [65488, 65489], [65496, 65497]];
    for (i = 0; i < ranges.length; i++) {
        start = ranges[i][0];
        end = ranges[i][1];
        for (j = start; j <= end; j++) {
            result[j] = true;
        }
    }
    return result;
})();

function splitQuery(query) {
    var result = [];
    var start = -1;
    for (var i = 0; i < query.length; i++) {
        if (splitChars[query.charCodeAt(i)]) {
            if (start !== -1) {
                result.push(query.slice(start, i));
                start = -1;
            }
        } else if (start === -1) {
            start = i;
        }
    }
    if (start !== -1) {
        result.push(query.slice(start));
    }
    return result;
}
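To see how `splitQuery` behaves, here is a standalone sketch of the same scanning logic: any character whose code point appears in `splitChars` acts as a delimiter. The stand-in table below covers only the ASCII ranges `[0, 47]`, `[58, 64]`, `[91, 94]`, `[123, 169]` from the full table above — it is a simplified assumption for illustration, not the complete Unicode mapping:

```javascript
// Simplified stand-in for the full splitChars table: ASCII ranges only.
var asciiSplitChars = (function() {
  var result = {};
  var ranges = [[0, 47], [58, 64], [91, 94], [123, 169]];
  for (var i = 0; i < ranges.length; i++) {
    for (var j = ranges[i][0]; j <= ranges[i][1]; j++) {
      result[j] = true;
    }
  }
  return result;
})();

// Same scanning logic as splitQuery above, run against the reduced table.
function splitQueryAscii(query) {
  var result = [];
  var start = -1;
  for (var i = 0; i < query.length; i++) {
    if (asciiSplitChars[query.charCodeAt(i)]) {
      if (start !== -1) {
        result.push(query.slice(start, i));
        start = -1;
      }
    } else if (start === -1) {
      start = i;
    }
  }
  if (start !== -1) {
    result.push(query.slice(start));
  }
  return result;
}
```

With this table, spaces and hyphens (code points 32 and 45) split, while letters, digits, and the underscore (95) are kept, so `splitQueryAscii("foo-bar baz")` yields three words.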


/**
 * Search Module
 */
var Search = {

  _index : null,
  _queued_query : null,
  _pulse_status : -1,

  init : function() {
    var params = $.getQueryParameters();
    if (params.q) {
      var query = params.q[0];
      $('input[name="q"]')[0].value = query;
      this.performSearch(query);
    }
  },

  loadIndex : function(url) {
    $.ajax({type: "GET", url: url, data: null,
            dataType: "script", cache: true,
            complete: function(jqxhr, textstatus) {
              if (textstatus != "success") {
                document.getElementById("searchindexloader").src = url;
              }
            }});
  },

  setIndex : function(index) {
    var q;
    this._index = index;
    if ((q = this._queued_query) !== null) {
      this._queued_query = null;
      Search.query(q);
    }
  },

  hasIndex : function() {
    return this._index !== null;
  },

  deferQuery : function(query) {
    this._queued_query = query;
  },

  stopPulse : function() {
    this._pulse_status = 0;
  },

  startPulse : function() {
    if (this._pulse_status >= 0)
      return;
    function pulse() {
      var i;
      Search._pulse_status = (Search._pulse_status + 1) % 4;
      var dotString = '';
      for (i = 0; i < Search._pulse_status; i++)
        dotString += '.';
      Search.dots.text(dotString);
      if (Search._pulse_status > -1)
        window.setTimeout(pulse, 500);
    }
    pulse();
  },

  /**
   * perform a search for something (or wait until index is loaded)
   */
  performSearch : function(query) {
    // create the required interface elements
    this.out = $('#search-results');
    this.title = $('<h2>' + _('Searching') + '</h2>').appendTo(this.out);
    this.dots = $('<span></span>').appendTo(this.title);
    this.status = $('<p style="display: none"></p>').appendTo(this.out);
    this.output = $('<ul class="search"/>').appendTo(this.out);

    $('#search-progress').text(_('Preparing search...'));
    this.startPulse();

    // index already loaded, the browser was quick!
    if (this.hasIndex())
      this.query(query);
    else
      this.deferQuery(query);
  },

  /**
   * execute search (requires search index to be loaded)
   */
  query : function(query) {
    var i;
    var stopwords = ["a","and","are","as","at","be","but","by","for","if","in","into","is","it","near","no","not","of","on","or","such","that","the","their","then","there","these","they","this","to","was","will","with"];

    // stem the searchterms and add them to the correct list
    var stemmer = new Stemmer();
    var searchterms = [];
    var excluded = [];
    var hlterms = [];
    var tmp = splitQuery(query);
    var objectterms = [];
    for (i = 0; i < tmp.length; i++) {
      if (tmp[i] !== "") {
        objectterms.push(tmp[i].toLowerCase());
      }

      if ($u.indexOf(stopwords, tmp[i].toLowerCase()) != -1 || tmp[i].match(/^\d+$/) ||
          tmp[i] === "") {
        // skip this "word"
        continue;
      }
      // stem the word
      var word = stemmer.stemWord(tmp[i].toLowerCase());
      var toAppend;
      // select the correct list
      if (word[0] == '-') {
        toAppend = excluded;
        word = word.substr(1);
      }
      else {
        toAppend = searchterms;
        hlterms.push(tmp[i].toLowerCase());
      }
      // only add if not already in the list
      if (!$u.contains(toAppend, word))
        toAppend.push(word);
    }
    var highlightstring = '?highlight=' + $.urlencode(hlterms.join(" "));

    // console.debug('SEARCH: searching for:');
    // console.info('required: ', searchterms);
    // console.info('excluded: ', excluded);

    // prepare search
    var terms = this._index.terms;
    var titleterms = this._index.titleterms;

    // array of [filename, title, anchor, descr, score]
    var results = [];
    $('#search-progress').empty();

    // lookup as object
    for (i = 0; i < objectterms.length; i++) {
      var others = [].concat(objectterms.slice(0, i),
                             objectterms.slice(i+1, objectterms.length));
      results = results.concat(this.performObjectSearch(objectterms[i], others));
    }

    // lookup as search terms in fulltext
    results = results.concat(this.performTermsSearch(searchterms, excluded, terms, titleterms));

    // let the scorer override scores with a custom scoring function
    if (Scorer.score) {
      for (i = 0; i < results.length; i++)
        results[i][4] = Scorer.score(results[i]);
    }

    // now sort the results by score (in opposite order of appearance, since the
    // display function below uses pop() to retrieve items) and then
    // alphabetically
    results.sort(function(a, b) {
      var left = a[4];
      var right = b[4];
      if (left > right) {
        return 1;
      } else if (left < right) {
        return -1;
      } else {
        // same score: sort alphabetically
        left = a[1].toLowerCase();
        right = b[1].toLowerCase();
        return (left > right) ? -1 : ((left < right) ? 1 : 0);
      }
    });

    // for debugging
    //Search.lastresults = results.slice();  // a copy
    //console.info('search results:', Search.lastresults);

    // print the results
    var resultCount = results.length;
    function displayNextItem() {
      // results left, load the summary and display it
      if (results.length) {
        var item = results.pop();
        var listItem = $('<li style="display:none"></li>');
        if (DOCUMENTATION_OPTIONS.FILE_SUFFIX === '') {
          // dirhtml builder
          var dirname = item[0] + '/';
          if (dirname.match(/\/index\/$/)) {
            dirname = dirname.substring(0, dirname.length-6);
          } else if (dirname == 'index/') {
            dirname = '';
          }
          listItem.append($('<a/>').attr('href',
            DOCUMENTATION_OPTIONS.URL_ROOT + dirname +
            highlightstring + item[2]).html(item[1]));
        } else {
          // normal html builders
          listItem.append($('<a/>').attr('href',
            item[0] + DOCUMENTATION_OPTIONS.FILE_SUFFIX +
            highlightstring + item[2]).html(item[1]));
        }
        if (item[3]) {
          listItem.append($('<span> (' + item[3] + ')</span>'));
          Search.output.append(listItem);
          listItem.slideDown(5, function() {
            displayNextItem();
          });
        } else if (DOCUMENTATION_OPTIONS.HAS_SOURCE) {
          $.ajax({url: DOCUMENTATION_OPTIONS.URL_ROOT + '_sources/' + item[0] + '.md.txt',
                  dataType: "text",
                  complete: function(jqxhr, textstatus) {
                    var data = jqxhr.responseText;
                    if (data !== '' && data !== undefined) {
                      listItem.append(Search.makeSearchSummary(data, searchterms, hlterms));
                    }
                    Search.output.append(listItem);
                    listItem.slideDown(5, function() {
                      displayNextItem();
                    });
                  }});
        } else {
          // no source available, just display title
          Search.output.append(listItem);
          listItem.slideDown(5, function() {
            displayNextItem();
          });
        }
      }
      // search finished, update title and status message
      else {
        Search.stopPulse();
        Search.title.text(_('Search Results'));
        if (!resultCount)
          Search.status.text(_('Your search did not match any documents. Please make sure that all words are spelled correctly and that you\'ve selected enough categories.'));
        else
          Search.status.text(_('Search finished, found %s page(s) matching the search query.').replace('%s', resultCount));
        Search.status.fadeIn(500);
      }
    }
    displayNextItem();
  },

  /**
   * search for object names
   */
  performObjectSearch : function(object, otherterms) {
    var filenames = this._index.docnames;
    var objects = this._index.objects;
    var objnames = this._index.objnames;
    var titles = this._index.titles;

    var i;
    var results = [];

    for (var prefix in objects) {
      for (var name in objects[prefix]) {
        var fullname = (prefix ? prefix + '.' : '') + name;
        if (fullname.toLowerCase().indexOf(object) > -1) {
          var score = 0;
          var parts = fullname.split('.');
          // check for different match types: exact matches of full name or
          // "last name" (i.e. last dotted part)
          if (fullname == object || parts[parts.length - 1] == object) {
            score += Scorer.objNameMatch;
          // matches in last name
          } else if (parts[parts.length - 1].indexOf(object) > -1) {
            score += Scorer.objPartialMatch;
          }
          var match = objects[prefix][name];
          var objname = objnames[match[1]][2];
          var title = titles[match[0]];
          // If more than one term searched for, we require other words to be
          // found in the name/title/description
          if (otherterms.length > 0) {
            var haystack = (prefix + ' ' + name + ' ' +
                            objname + ' ' + title).toLowerCase();
            var allfound = true;
            for (i = 0; i < otherterms.length; i++) {
              if (haystack.indexOf(otherterms[i]) == -1) {
                allfound = false;
                break;
              }
            }
            if (!allfound) {
              continue;
            }
          }
          var descr = objname + _(', in ') + title;

          var anchor = match[3];
          if (anchor === '')
            anchor = fullname;
          else if (anchor == '-')
            anchor = objnames[match[1]][1] + '-' + fullname;
          // add custom score for some objects according to scorer
          if (Scorer.objPrio.hasOwnProperty(match[2])) {
            score += Scorer.objPrio[match[2]];
          } else {
            score += Scorer.objPrioDefault;
          }
          results.push([filenames[match[0]], fullname, '#'+anchor, descr, score]);
        }
      }
    }

    return results;
  },

  /**
   * search for full-text terms in the index
   */
  performTermsSearch : function(searchterms, excluded, terms, titleterms) {
    var filenames = this._index.docnames;
    var titles = this._index.titles;

    var i, j, file;
    var fileMap = {};
    var scoreMap = {};
    var results = [];

    // perform the search on the required terms
    for (i = 0; i < searchterms.length; i++) {
      var word = searchterms[i];
      var files = [];
      var _o = [
        {files: terms[word], score: Scorer.term},
        {files: titleterms[word], score: Scorer.title}
      ];

      // no match but word was a required one
      if ($u.every(_o, function(o){return o.files === undefined;})) {
        break;
      }
      // found search word in contents
      $u.each(_o, function(o) {
        var _files = o.files;
        if (_files === undefined)
          return;

        if (_files.length === undefined)
          _files = [_files];
        files = files.concat(_files);

        // set score for the word in each file to Scorer.term
        for (j = 0; j < _files.length; j++) {
          file = _files[j];
          if (!(file in scoreMap))
            scoreMap[file] = {};
          scoreMap[file][word] = o.score;
        }
      });

      // create the mapping
      for (j = 0; j < files.length; j++) {
        file = files[j];
        if (file in fileMap)
          fileMap[file].push(word);
        else
          fileMap[file] = [word];
      }
    }

    // now check if the files don't contain excluded terms
    for (file in fileMap) {
      var valid = true;

      // check if all requirements are matched
      if (fileMap[file].length != searchterms.length)
        continue;

      // ensure that none of the excluded terms is in the search result
      for (i = 0; i < excluded.length; i++) {
        if (terms[excluded[i]] == file ||
            titleterms[excluded[i]] == file ||
            $u.contains(terms[excluded[i]] || [], file) ||
            $u.contains(titleterms[excluded[i]] || [], file)) {
          valid = false;
          break;
        }
      }

      // if we have still a valid result we can add it to the result list
      if (valid) {
        // select one (max) score for the file.
        // for better ranking, we should calculate ranking by using words statistics like basic tf-idf...
        var score = $u.max($u.map(fileMap[file], function(w){return scoreMap[file][w];}));
        results.push([filenames[file], titles[file], '', null, score]);
      }
    }
    return results;
  },

  /**
   * helper function to return a node containing the
   * search summary for a given text. keywords is a list
   * of stemmed words, hlwords is the list of normal, unstemmed
   * words. the first one is used to find the occurrence, the
   * latter for highlighting it.
   */
  makeSearchSummary : function(text, keywords, hlwords) {
    var textLower = text.toLowerCase();
    var start = 0;
    $.each(keywords, function() {
      var i = textLower.indexOf(this.toLowerCase());
      if (i > -1)
        start = i;
    });
    start = Math.max(start - 120, 0);
    var excerpt = ((start > 0) ? '...' : '') +
      $.trim(text.substr(start, 240)) +
      ((start + 240 - text.length) ? '...' : '');
    var rv = $('<div class="context"></div>').text(excerpt);
    $.each(hlwords, function() {
      rv = rv.highlightText(this, 'highlighted');
    });
    return rv;
  }
};

/* Search initialization removed for Read the Docs */
$(document).ready(function() {
  Search.init();
});
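The comparator inside `Search.query` sorts results ascending by score and, on ties, reverse-alphabetically by title, precisely so that `displayNextItem`'s `pop()` calls yield the best-scoring result first and break ties alphabetically. A standalone sketch of that same comparator over hypothetical sample data (the filenames and titles are made up for illustration):

```javascript
// Same ordering rule as the sort callback in Search.query: ascending by
// score (index 4), reverse-alphabetical by title (index 1) on ties, so
// that pop() returns the highest score first, alphabetical on ties.
function compareResults(a, b) {
  var left = a[4];
  var right = b[4];
  if (left > right) {
    return 1;
  } else if (left < right) {
    return -1;
  }
  // same score: sort alphabetically (reversed, because of pop())
  left = a[1].toLowerCase();
  right = b[1].toLowerCase();
  return (left > right) ? -1 : ((left < right) ? 1 : 0);
}

var sample = [
  ['b.html', 'Beta',  '#b', '', 5],
  ['a.html', 'Alpha', '#a', '', 5],
  ['c.html', 'Gamma', '#c', '', 15]
];
sample.sort(compareResults);
// Popping now yields Gamma (score 15) first, then Alpha, then Beta.
```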
doc/_static/xgboost-theme/theme.conf (vendored): 2 deletions
@@ -1,2 +0,0 @@
[theme]
inherit = basic
doc/_static/xgboost.css (vendored): 232 deletions
@@ -1,232 +0,0 @@
/* header section */
.splash{
  padding:5em 0 1em 0;
  background-color:#0079b2;
  /* background-image:url(../img/bg.jpg); */
  background-size:cover;
  background-attachment:fixed;
  color:#fff;
  text-align:center
}

.splash h1{
  font-size: 40px;
  margin-bottom: 20px;
}
.splash .social{
  margin:2em 0
}

.splash .get_start {
  margin:2em 0
}

.splash .get_start_btn {
  border: 2px solid #FFFFFF;
  border-radius: 5px;
  color: #FFFFFF;
  display: inline-block;
  font-size: 26px;
  padding: 9px 20px;
}

.section-tout{
  padding:3em 0 3em;
  border-bottom:1px solid rgba(0,0,0,.05);
  background-color:#eaf1f1
}
.section-tout .fa{
  margin-right:.5em
}

.section-tout h3{
  font-size:20px;
}

.section-tout p {
  margin-bottom:2em
}

.section-inst{
  padding:3em 0 3em;
  border-bottom:1px solid rgba(0,0,0,.05);
  text-align:center
}

.section-inst p {
  margin-bottom:2em
}
.section-inst img {
  -webkit-filter: grayscale(90%); /* Chrome, Safari, Opera */
  filter: grayscale(90%);
  margin-bottom:2em
}
.section-inst img:hover {
  -webkit-filter: grayscale(0%); /* Chrome, Safari, Opera */
  filter: grayscale(0%);
}

.footer{
  padding-top: 40px;
}
.footer li{
  float:right;
  margin-right:1.5em;
  margin-bottom:1.5em
}
.footer p{
  font-size: 15px;
  color: #888;
  clear:right;
  margin-bottom:0
}


/* sidebar */
div.sphinxsidebar {
  margin-top: 20px;
  margin-left: 0;
  position: fixed;
  overflow-y: scroll;
  width: 250px;
  top: 52px;
  bottom: 0;
  display: none
}
div.sphinxsidebar ul { padding: 0 }
div.sphinxsidebar ul ul { margin-left: 15px }

@media (min-width:1200px) {
  .content { float: right; width: 66.66666667%; margin-right: 5% }
  div.sphinxsidebar {display: block}
}


.github-btn { border: 0; overflow: hidden }

.container {
  margin-right: auto;
  margin-left: auto;
  padding-left: 15px;
  padding-right: 15px
}

body>.container {
  padding-top: 80px
}

body {
  font-size: 16px;
}

pre {
  font-size: 14px;
}

/* navbar */
.navbar {
  background-color:#0079b2;
  border: 0px;
  height: 65px;
}
.navbar-right li {
  display:inline-block;
  vertical-align:top;
  padding: 22px 4px;
}

.navbar-left li {
  display:inline-block;
  vertical-align:top;
  padding: 17px 10px;
  /* margin: 0 5px; */
}

.navbar-left li a {
  font-size: 22px;
  color: #fff;
}

.navbar-left > li > a:hover{
  color:#fff;
}
.flag-icon {
  background-size: contain;
  background-position: 50%;
  background-repeat: no-repeat;
  position: relative;
  display: inline-block;
  width: 1.33333333em;
  line-height: 1em;
}

.flag-icon:before {
  content: "\00a0";
}


.flag-icon-cn {
  background-image: url(./cn.svg);
}

.flag-icon-us {
  background-image: url(./us.svg);
}


/* .flags { */
/*   padding: 10px; */
/* } */

.navbar-brand >img {
  width: 110px;
}

.dropdown-menu li {
  padding: 0px 0px;
  width: 120px;
}
.dropdown-menu li a {
  color: #0079b2;
  font-size: 20px;
}

.section h1 {
  padding-top: 90px;
  margin-top: -60px;
  padding-bottom: 10px;
  font-size: 28px;
}

.section h2 {
  padding-top: 80px;
  margin-top: -60px;
  padding-bottom: 10px;
  font-size: 22px;
}

.section h3 {
  padding-top: 80px;
  margin-top: -64px;
  padding-bottom: 8px;
}

.section h4 {
  padding-top: 80px;
  margin-top: -64px;
  padding-bottom: 8px;
}

dt {
  margin-top: -76px;
  padding-top: 76px;
}

dt:target, .highlighted {
  background-color: #fff;
}

.section code.descname {
  font-size: 1em;
}
doc/build.md: 353 deletions
@@ -1,353 +0,0 @@
|
||||
Installation Guide
|
||||
==================
|
||||
|
||||
This page gives instructions on how to build and install the xgboost package from
|
||||
scratch on various systems. It consists of two steps:
|
||||
|
||||
1. First build the shared library from the C++ codes (`libxgboost.so` for linux/osx and `libxgboost.dll` for windows).
|
||||
- Exception: for R-package installation please directly refer to the R package section.
|
||||
2. Then install the language packages (e.g. Python Package).
|
||||
|
||||
***Important*** the newest version of xgboost uses submodule to maintain packages. So when you clone the repo, remember to use the recursive option as follows.
|
||||
```bash
|
||||
git clone --recursive https://github.com/dmlc/xgboost
|
||||
```
|
||||
For windows users who use github tools, you can open the git shell, and type the following command.
|
||||
```bash
|
||||
git submodule init
|
||||
git submodule update
|
||||
```
|
||||
|
||||
Please refer to [Trouble Shooting Section](#trouble-shooting) first if you had any problem
|
||||
during installation. If the instructions do not work for you, please feel free
|
||||
to ask questions at [xgboost/issues](https://github.com/dmlc/xgboost/issues), or
|
||||
even better to send pull request if you can fix the problem.
|
||||
|
||||
## Contents
|
||||
- [Build the Shared Library](#build-the-shared-library)
|
||||
- [Building on Ubuntu/Debian](#building-on-ubuntu-debian)
|
||||
- [Building on macOS](#building-on-macos)
|
||||
- [Building on Windows](#building-on-windows)
|
||||
- [Building with GPU support](#building-with-gpu-support)
|
||||
- [Windows Binaries](#windows-binaries)
|
||||
- [Customized Building](#customized-building)
|
||||
- [Python Package Installation](#python-package-installation)
|
||||
- [R Package Installation](#r-package-installation)
|
||||
- [Trouble Shooting](#trouble-shooting)
|
||||
|
||||
## Build the Shared Library
|
||||
|
||||
Our goal is to build the shared library:
|
||||
- On Linux/OSX the target library is `libxgboost.so`
|
||||
- On Windows the target library is `libxgboost.dll`
|
||||
|
||||
The minimal building requirement is
|
||||
|
||||
- A recent c++ compiler supporting C++ 11 (g++-4.8 or higher)
|
||||
|
||||
We can edit `make/config.mk` to change the compile options, and then build by
|
||||
`make`. If everything goes well, we can go to the specific language installation section.
|
||||
|
||||
### Building on Ubuntu/Debian
|
||||
|
||||
On Ubuntu, one builds xgboost by
|
||||
|
||||
```bash
|
||||
git clone --recursive https://github.com/dmlc/xgboost
|
||||
cd xgboost; make -j4
|
||||
```
|
||||
|
||||
### Building on macOS
|
||||
|
||||
**Install with pip - simple method**
|
||||
|
||||
First, make sure you obtained *gcc-5* (newer version does not work with this method yet). Note: installation of `gcc` can take a while (~ 30 minutes)
|
||||
|
||||
```bash
|
||||
brew install gcc5
|
||||
```
|
||||
|
||||
You might need to run the following command with `sudo` if you run into some permission errors:
|
||||
|
||||
```bash
|
||||
pip install xgboost
|
||||
```
|
||||
|
||||
**Build from the source code - advanced method**
|
||||
|
||||
First, obtain gcc-7.x.x with brew (https://brew.sh/) if you want multi-threaded version, otherwise, Clang is ok if OpenMP / multi-threaded is not required. Note: installation of `gcc` can take a while (~ 30 minutes)
|
||||
|
||||
```bash
|
||||
brew install gcc
|
||||
```
|
||||
|
||||
Now, clone the repository
|
||||
|
||||
```bash
|
||||
git clone --recursive https://github.com/dmlc/xgboost
|
||||
```
|
||||
|
||||
and build using the following commands
|
||||
|
||||
```bash
|
||||
cd xgboost; cp make/config.mk ./config.mk; make -j4
|
||||
```
|
||||
head over to `Python Package Installation` for the next steps
|
||||
|
||||
### Building on Windows
|
||||
You need to first clone the xgboost repo with recursive option clone the submodules.
|
||||
If you are using github tools, you can open the git-shell, and type the following command.
|
||||
We recommend using [Git for Windows](https://git-for-windows.github.io/)
|
||||
because it brings a standard bash shell. This will highly ease the installation process.
|
||||
|
||||
```bash
|
||||
git submodule init
|
||||
git submodule update
|
||||
```
|
||||
|
||||
XGBoost support both build by MSVC or MinGW. Here is how you can build xgboost library using MinGW.
|
||||
|
||||
After installing [Git for Windows](https://git-for-windows.github.io/), you should have a shortcut `Git Bash`.
|
||||
All the following steps are in the `Git Bash`.
|
||||
|
||||
In MinGW, `make` command comes with the name `mingw32-make`. You can add the following line into the `.bashrc` file.
|
||||
|
||||
```bash
|
||||
alias make='mingw32-make'
|
||||
```
|
||||
|
||||
To build with MinGW
|
||||
|
||||
```bash
|
||||
cp make/mingw64.mk config.mk; make -j4
|
||||
```
|
||||
|
||||
To build with Visual Studio 2013 use cmake. Make sure you have a recent version of cmake added to your path and then from the xgboost directory:
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
cmake .. -G"Visual Studio 12 2013 Win64"
|
||||
```
|
||||
|
||||
This specifies an out of source build using the MSVC 12 64 bit generator. Open the .sln file in the build directory and build with Visual Studio. To use the Python module you can copy libxgboost.dll into python-package\xgboost.
|
||||
|
||||
Other versions of Visual Studio may work but are untested.
|
||||
|
||||
### Building with GPU support
|
||||
|
||||
XGBoost can be built with GPU support for both Linux and Windows using cmake. GPU support works with the Python package as well as the CLI version. See [Installing R package with GPU support](#installing-r-package-with-gpu-support) for special instructions for R.
|
||||
|
||||
An up-to-date version of the CUDA toolkit is required.
|
||||
|
||||
From the command line on Linux starting from the xgboost directory:
|
||||
|
||||
```bash
|
||||
$ mkdir build
|
||||
$ cd build
|
||||
$ cmake .. -DUSE_CUDA=ON
|
||||
$ make -j
|
||||
```
|
||||
**Windows requirements** for GPU build: only Visual C++ 2015 or 2013 with CUDA v8.0 were fully tested. Either install Visual C++ 2015 Build Tools separately, or as a part of Visual Studio 2015. If you already have Visual Studio 2017, the Visual C++ 2015 Toolchain componenet has to be installed using the VS 2017 Installer. Likely, you would need to use the VS2015 x64 Native Tools command prompt to run the cmake commands given below. In some situations, however, things run just fine from MSYS2 bash command line.
|
||||
|
||||
On Windows, using cmake, see what options for Generators you have for cmake, and choose one with [arch] replaced by Win64:
|
||||
```bash
|
||||
cmake -help
|
||||
```
|
||||
Then run cmake as:
|
||||
```bash
|
||||
$ mkdir build
|
||||
$ cd build
|
||||
$ cmake .. -G"Visual Studio 14 2015 Win64" -DUSE_CUDA=ON
|
||||
```
|
||||
To speed up compilation, compute version specific to your GPU could be passed to cmake as, e.g., `-DGPU_COMPUTE_VER=50`.
|
||||
The above cmake configuration run will create an xgboost.sln solution file in the build directory. Build this solution in release mode as a x64 build, either from Visual studio or from command line:
|
||||
```
|
||||
cmake --build . --target xgboost --config Release
|
||||
```
|
||||
If the build seems to use only a single process, try appending an option like ` -- /m:6` to the above command.
|
||||
|
||||
### Windows Binaries
|
||||
|
||||
Unofficial windows binaries and instructions on how to use them are hosted on [Guido Tapia's blog](http://www.picnet.com.au/blogs/guido/post/2016/09/22/xgboost-windows-x64-binaries-for-download/)
|
||||
|
||||
### Customized Building
|
||||
|
||||
The build configuration of xgboost can be modified via `config.mk`, e.g. to enable support for various distributed filesystems such as HDFS and Amazon S3. First copy [make/config.mk](../make/config.mk) to the project root, where any local modification will be ignored by git, then modify the relevant flags.
|
||||
|
||||
|
||||
|
||||
## Python Package Installation
|
||||
|
||||
The python package is located at [python-package](../python-package).
|
||||
There are several ways to install the package:
|
||||
|
||||
1. Install system-wide, which requires root permission:
|
||||
|
||||
```bash
|
||||
cd python-package; sudo python setup.py install
|
||||
```
|
||||
|
||||
You will, however, need the Python `distutils` module for this to work. It is often part of the core Python distribution, or it can be installed using your package manager; e.g. on Debian use
|
||||
|
||||
```bash
|
||||
sudo apt-get install python-setuptools
|
||||
```
|
||||
|
||||
*NOTE: If you recompiled xgboost, you need to reinstall it for the new library to take effect.*
|
||||
|
||||
2. Only set the environment variable `PYTHONPATH` to tell Python where to find the library. For example, assume we cloned `xgboost` in the home directory `~`; then we can add the following line to `~/.bashrc`. This is ***recommended for developers*** who may change the code: the changes are reflected immediately once you pull the code and rebuild the project (no need to run `setup` again).
|
||||
|
||||
```bash
|
||||
export PYTHONPATH=~/xgboost/python-package
|
||||
```
|
||||
|
||||
3. Install only for the current user.
|
||||
|
||||
```bash
|
||||
cd python-package; python setup.py develop --user
|
||||
```
|
||||
|
||||
4. If you are installing the latest xgboost version which requires compilation, add MinGW to the system PATH:
|
||||
|
||||
```python
|
||||
import os
|
||||
os.environ['PATH'] = os.environ['PATH'] + ';C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'
|
||||
```
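As an aside on option 2 above: the `PYTHONPATH` entry works by putting the package directory on the interpreter's module search path, so `import xgboost` resolves to the checked-out sources rather than an installed copy. The same effect can be sketched in pure Python (the clone location `~/xgboost` is an assumption):

```python
import os
import sys

# Per-process equivalent of `export PYTHONPATH=~/xgboost/python-package`:
pkg_dir = os.path.expanduser('~/xgboost/python-package')  # hypothetical clone location
if pkg_dir not in sys.path:
    sys.path.insert(0, pkg_dir)
print(sys.path[0] == pkg_dir)  # -> True
```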
|
||||
|
||||
## R Package Installation
|
||||
|
||||
### Installing pre-packaged version
|
||||
|
||||
You can install xgboost from CRAN just like any other R package:
|
||||
|
||||
```r
|
||||
install.packages("xgboost")
|
||||
```
|
||||
|
||||
Or you can install it from our weekly updated drat repo:
|
||||
|
||||
```r
|
||||
install.packages("drat", repos="https://cran.rstudio.com")
|
||||
drat:::addRepo("dmlc")
|
||||
install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source")
|
||||
```
|
||||
|
||||
For OSX users, the single-threaded version will be installed. To install the multi-threaded version, first follow [Building on OSX](#building-on-osx) to get an OpenMP-enabled compiler, then:
|
||||
|
||||
- Set the `Makevars` file with the highest priority for R.
|
||||
|
||||
The point is, there are three `Makevars` files: `~/.R/Makevars`, `xgboost/R-package/src/Makevars`, and `/usr/local/Cellar/r/3.2.0/R.framework/Resources/etc/Makeconf` (the last one obtained by running `file.path(R.home("etc"), "Makeconf")` in R), and `SHLIB_OPENMP_CXXFLAGS` is not set by default! After trying, it seems that the first one has the highest priority (surprise!).
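Concretely, a `~/.R/Makevars` along these lines is usually enough for the OpenMP flags to be picked up (a sketch; the compiler names assume the Homebrew gcc from [Building on OSX](#building-on-osx)):

```make
# ~/.R/Makevars -- hypothetical example; adjust the gcc version to what brew installed
CC = gcc-7
CXX = g++-7
SHLIB_OPENMP_CFLAGS = -fopenmp
SHLIB_OPENMP_CXXFLAGS = -fopenmp
```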
|
||||
|
||||
Then inside R, run
|
||||
|
||||
```R
|
||||
install.packages("drat", repos="https://cran.rstudio.com")
|
||||
drat:::addRepo("dmlc")
|
||||
install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source")
|
||||
```
|
||||
|
||||
### Installing the development version
|
||||
|
||||
Make sure you have installed git and a recent C++ compiler supporting C++11 (e.g., g++-4.8 or higher).
|
||||
On Windows, Rtools must be installed, and its bin directory has to be added to PATH during the installation.
|
||||
And see the previous subsection for an OSX tip.
|
||||
|
||||
Due to the use of git-submodules, `devtools::install_github` can no longer be used to install the latest version of R package.
|
||||
Thus, one has to run git to check out the code first:
|
||||
|
||||
```bash
|
||||
git clone --recursive https://github.com/dmlc/xgboost
|
||||
cd xgboost
|
||||
git submodule init
|
||||
git submodule update
|
||||
cd R-package
|
||||
R CMD INSTALL .
|
||||
```
|
||||
|
||||
If the last line fails because of "R: command not found", it means that R was not set up to run from command line.
|
||||
In this case, just start R as you would normally do and run the following:
|
||||
|
||||
```r
|
||||
setwd('wherever/you/cloned/it/xgboost/R-package/')
|
||||
install.packages('.', repos = NULL, type="source")
|
||||
```
|
||||
|
||||
The package could also be built and installed with cmake (and Visual C++ 2015 on Windows) using instructions from the next section, but without GPU support (omit the `-DUSE_CUDA=ON` cmake parameter).
|
||||
|
||||
If all fails, try [building the shared library](#build-the-shared-library) to see whether a problem is specific to R package or not.
|
||||
|
||||
### Installing R package with GPU support
|
||||
|
||||
The procedure and requirements are similar as in [Building with GPU support](#building-with-gpu-support), so make sure to read it first.
|
||||
|
||||
On Linux, starting from the xgboost directory:
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
cmake .. -DUSE_CUDA=ON -DR_LIB=ON
|
||||
make install -j
|
||||
```
|
||||
When default target is used, an R package shared library would be built in the `build` area.
|
||||
The `install` target, in addition, assembles the package files with this shared library under `build/R-package`, and runs `R CMD INSTALL`.
|
||||
|
||||
On Windows, cmake with Visual C++ Build Tools (or Visual Studio) has to be used to build an R package with GPU support. Rtools must also be installed (perhaps, some other MinGW distributions with `gendef.exe` and `dlltool.exe` would work, but that was not tested).
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
cmake .. -G"Visual Studio 14 2015 Win64" -DUSE_CUDA=ON -DR_LIB=ON
|
||||
cmake --build . --target install --config Release
|
||||
```
|
||||
When `--target xgboost` is used, an R package dll would be built under `build/Release`.
|
||||
The `--target install`, in addition, assembles the package files with this dll under `build/R-package`, and runs `R CMD INSTALL`.
|
||||
|
||||
If cmake can't find your R during the configuration step, you might provide the location of its executable to cmake like this: `-DLIBR_EXECUTABLE="C:/Program Files/R/R-3.4.1/bin/x64/R.exe"`.
|
||||
|
||||
If on Windows you get a "permission denied" error when trying to write to ...Program Files/R/... during the package installation, create a `.Rprofile` file in your personal home directory (if you don't already have one in there), and add a line to it which specifies the location of your R packages user library, like the following:
|
||||
```r
|
||||
.libPaths( unique(c("C:/Users/USERNAME/Documents/R/win-library/3.4", .libPaths())))
|
||||
```
|
||||
You might find the exact location by running `.libPaths()` in R GUI or RStudio.
|
||||
|
||||
## Trouble Shooting
|
||||
|
||||
1. **Compile failed after `git pull`**
|
||||
|
||||
Please first update the submodules, clean all and recompile:
|
||||
|
||||
```bash
|
||||
git submodule update && make clean_all && make -j4
|
||||
```
|
||||
|
||||
2. **Compile failed after `config.mk` is modified**
|
||||
|
||||
Need to clean all first:
|
||||
|
||||
```bash
|
||||
make clean_all && make -j4
|
||||
```
|
||||
|
||||
|
||||
3. **Makefile: dmlc-core/make/dmlc.mk: No such file or directory**
|
||||
|
||||
We need to clone the submodules recursively; you can do:
|
||||
|
||||
```bash
|
||||
git submodule init
|
||||
git submodule update
|
||||
```
|
||||
Alternatively, do another clone
|
||||
```bash
|
||||
git clone https://github.com/dmlc/xgboost --recursive
|
||||
```
|
||||
doc/build.rst (new file, 445 lines)
@@ -0,0 +1,445 @@
|
||||
##################
|
||||
Installation Guide
|
||||
##################
|
||||
|
||||
.. note:: Pre-built binary wheel for Python
|
||||
|
||||
If you are planning to use Python, consider installing XGBoost from a pre-built binary wheel, available from the Python Package Index (PyPI). You may download and install it by running
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Ensure that you are downloading one of the following:
|
||||
# * xgboost-{version}-py2.py3-none-manylinux1_x86_64.whl
|
||||
# * xgboost-{version}-py2.py3-none-win_amd64.whl
|
||||
pip3 install xgboost
|
||||
|
||||
* The binary wheel will support GPU algorithms (`gpu_exact`, `gpu_hist`) on machines with NVIDIA GPUs. **However, it will not support multi-GPU training; only single GPU will be used.** To enable multi-GPU training, download and install the binary wheel from `this page <https://s3-us-west-2.amazonaws.com/xgboost-wheels/list.html>`_.
|
||||
* Currently, we provide binary wheels for 64-bit Linux and Windows.
|
||||
|
||||
****************************
|
||||
Building XGBoost from source
|
||||
****************************
|
||||
This page gives instructions on how to build and install XGBoost from scratch on various systems. It consists of two steps:
|
||||
|
||||
1. First build the shared library from the C++ code (``libxgboost.so`` for Linux/OSX and ``xgboost.dll`` for Windows).
|
||||
(For R-package installation, please directly refer to `R Package Installation`_.)
|
||||
2. Then install the language packages (e.g. Python Package).
|
||||
|
||||
.. note:: Use of Git submodules
|
||||
|
||||
XGBoost uses Git submodules to manage dependencies. So when you clone the repo, remember to specify ``--recursive`` option:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git clone --recursive https://github.com/dmlc/xgboost
|
||||
|
||||
For Windows users who use the GitHub tools, you can open the Git Shell and type the following commands:
|
||||
|
||||
.. code-block:: batch
|
||||
|
||||
git submodule init
|
||||
git submodule update
|
||||
|
||||
Please refer to the `Trouble Shooting`_ section first if you have any problems
|
||||
during installation. If the instructions do not work for you, please feel free
|
||||
to ask questions at `the user forum <https://discuss.xgboost.ai>`_.
|
||||
|
||||
**Contents**
|
||||
|
||||
* `Building the Shared Library`_
|
||||
|
||||
- `Building on Ubuntu/Debian`_
|
||||
- `Building on OSX`_
|
||||
- `Building on Windows`_
|
||||
- `Building with GPU support`_
|
||||
- `Customized Building`_
|
||||
|
||||
* `Python Package Installation`_
|
||||
* `R Package Installation`_
|
||||
* `Trouble Shooting`_
|
||||
|
||||
***************************
|
||||
Building the Shared Library
|
||||
***************************
|
||||
|
||||
Our goal is to build the shared library:
|
||||
|
||||
- On Linux/OSX the target library is ``libxgboost.so``
|
||||
- On Windows the target library is ``xgboost.dll``
|
||||
|
||||
The minimal building requirement is
|
||||
|
||||
- A recent C++ compiler supporting C++11 (g++-4.8 or higher)
|
||||
|
||||
We can edit ``make/config.mk`` to change the compile options, and then build by
|
||||
``make``. If everything goes well, we can go to the specific language installation section.
|
||||
|
||||
Building on Ubuntu/Debian
|
||||
=========================
|
||||
|
||||
On Ubuntu, one builds XGBoost by running
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git clone --recursive https://github.com/dmlc/xgboost
|
||||
cd xgboost; make -j4
|
||||
|
||||
Building on OSX
|
||||
===============
|
||||
|
||||
Install with pip: simple method
|
||||
--------------------------------
|
||||
|
||||
First, make sure you have obtained ``gcc-5`` (newer versions do not work with this method yet). Note: installation of ``gcc`` can take a while (~30 minutes).
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
brew install gcc@5
|
||||
|
||||
Then install XGBoost with ``pip``:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip3 install xgboost
|
||||
|
||||
You might need to run the command with ``sudo`` if you run into permission errors.
|
||||
|
||||
Build from the source code - advanced method
|
||||
--------------------------------------------
|
||||
|
||||
First, obtain ``gcc-7`` with homebrew (https://brew.sh/) if you want multi-threaded version. Clang is okay if multithreading is not required. Note: installation of ``gcc`` can take a while (~ 30 minutes).
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
brew install gcc@7
|
||||
|
||||
Now, clone the repository:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git clone --recursive https://github.com/dmlc/xgboost
|
||||
cd xgboost; cp make/config.mk ./config.mk
|
||||
|
||||
Open ``config.mk`` and uncomment these two lines:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export CC = gcc
|
||||
export CXX = g++
|
||||
|
||||
and replace these two lines as follows: (specify the GCC version)
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export CC = gcc-7
|
||||
export CXX = g++-7
|
||||
|
||||
Now, you may build XGBoost using the following command:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
make -j4
|
||||
|
||||
You may now continue to `Python Package Installation`_.
|
||||
|
||||
Building on Windows
|
||||
===================
|
||||
You need to first clone the XGBoost repo with the ``--recursive`` option to clone the submodules.
We recommend you use `Git for Windows <https://git-for-windows.github.io/>`_, as it comes with a standard Bash shell, which will greatly ease the installation process.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git submodule init
|
||||
git submodule update
|
||||
|
||||
XGBoost supports compilation with Microsoft Visual Studio and MinGW.
|
||||
|
||||
Compile XGBoost using MinGW
|
||||
---------------------------
|
||||
After installing `Git for Windows <https://git-for-windows.github.io/>`_, you should have a shortcut named ``Git Bash``. You should run all subsequent steps in ``Git Bash``.
|
||||
|
||||
In MinGW, the ``make`` command comes with the name ``mingw32-make``. You can add the following line to your ``.bashrc`` file:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
alias make='mingw32-make'
|
||||
|
||||
(On 64-bit Windows, you should get `MinGW64 <https://sourceforge.net/projects/mingw-w64/>`_ instead.) Make sure
|
||||
that the path to MinGW is in the system PATH.
|
||||
|
||||
To build with MinGW, type:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cp make/mingw64.mk config.mk; make -j4
|
||||
|
||||
Compile XGBoost with Microsoft Visual Studio
|
||||
--------------------------------------------
|
||||
To build with Visual Studio, we will need CMake. Make sure to install a recent version of CMake. Then run the following from the root of the XGBoost directory:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
cmake .. -G"Visual Studio 12 2013 Win64"
|
||||
|
||||
This specifies an out of source build using the MSVC 12 64 bit generator. Open the ``.sln`` file in the build directory and build with Visual Studio. To use the Python module you can copy ``xgboost.dll`` into ``python-package/xgboost``.
|
||||
|
||||
After the build process successfully ends, you will find an ``xgboost.dll`` library file inside the ``./lib/`` folder. Copy this file into the API package folder, e.g. ``python-package/xgboost``, if you are using the Python API.
|
||||
|
||||
Unofficial windows binaries and instructions on how to use them are hosted on `Guido Tapia's blog <http://www.picnet.com.au/blogs/guido/post/2016/09/22/xgboost-windows-x64-binaries-for-download/>`_.
|
||||
|
||||
.. _build_gpu_support:
|
||||
|
||||
Building with GPU support
|
||||
=========================
|
||||
XGBoost can be built with GPU support for both Linux and Windows using CMake. GPU support works with the Python package as well as the CLI version. See `Installing R package with GPU support`_ for special instructions for R.
|
||||
|
||||
An up-to-date version of the CUDA toolkit is required.
|
||||
|
||||
From the command line on Linux starting from the XGBoost directory:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
cmake .. -DUSE_CUDA=ON
|
||||
make -j
|
||||
|
||||
.. note:: Enabling multi-GPU training
|
||||
|
||||
By default, multi-GPU training is disabled and only a single GPU will be used. To enable multi-GPU training, set the option ``USE_NCCL=ON``. Multi-GPU training depends on NCCL2, available at `this link <https://developer.nvidia.com/nccl>`_. Since NCCL2 is only available for Linux machines, **multi-GPU training is available only for Linux**.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON
|
||||
make -j
|
||||
|
||||
On Windows, see what options for generators you have for CMake, and choose one with ``[arch]`` replaced with Win64:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cmake -help
|
||||
|
||||
Then run CMake as follows:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
cmake .. -G"Visual Studio 14 2015 Win64" -DUSE_CUDA=ON
|
||||
|
||||
.. note:: Visual Studio 2017 Win64 Generator may not work
|
||||
|
||||
Choosing the Visual Studio 2017 generator may cause compilation failure. When it happens, specify the 2015 compiler by adding the ``-T`` option:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cmake .. -G"Visual Studio 15 2017 Win64" -T v140,cuda=8.0 -DR_LIB=ON -DUSE_CUDA=ON
|
||||
|
||||
To speed up compilation, the compute capability specific to your GPU could be passed to CMake as, e.g., ``-DGPU_COMPUTE_VER=50``.
The above CMake configuration run will create an ``xgboost.sln`` solution file in the build directory. Build this solution in Release mode as an x64 build, either from Visual Studio or from the command line:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cmake --build . --target xgboost --config Release
|
||||
|
||||
To speed up compilation, run multiple jobs in parallel by appending option ``-- /MP``.
|
||||
|
||||
Customized Building
|
||||
===================
|
||||
|
||||
The configuration file ``config.mk`` modifies several compilation flags:
|
||||
- Whether to enable support for various distributed filesystems such as HDFS and Amazon S3
|
||||
- Which compiler to use
|
||||
- And some more
|
||||
|
||||
To customize, first copy ``make/config.mk`` to the project root and then modify the copy.
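For example, a copied ``config.mk`` might be edited along these lines (an illustrative sketch; the flag names follow the comments shipped in ``make/config.mk``, so check your copy for the authoritative list):

.. code-block:: make

   # config.mk (copied to the project root) -- hypothetical excerpt
   # enable support for distributed filesystems
   USE_HDFS = 1
   USE_S3 = 1
   # choose the compiler
   export CC = gcc
   export CXX = g++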
|
||||
|
||||
Python Package Installation
|
||||
===========================
|
||||
|
||||
The python package is located at ``python-package/``.
|
||||
There are several ways to install the package:
|
||||
|
||||
1. Install system-wide, which requires root permission:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cd python-package; sudo python setup.py install
|
||||
|
||||
You will, however, need the Python ``distutils`` module for this to work. It is often part of the core Python distribution, or it can be installed using your package manager; e.g. on Debian use
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
sudo apt-get install python-setuptools
|
||||
|
||||
.. note:: Re-compiling XGBoost
|
||||
|
||||
If you recompiled XGBoost, then you need to reinstall it again to make the new library take effect.
|
||||
|
||||
2. Only set the environment variable ``PYTHONPATH`` to tell Python where to find the library. For example, assume we cloned ``xgboost`` in the home directory ``~``; then we can add the following line to ``~/.bashrc``. This option is **recommended for developers** who change the code frequently: the changes are reflected immediately once you pull the code and rebuild the project (no need to run ``setup`` again).
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export PYTHONPATH=~/xgboost/python-package
|
||||
|
||||
3. Install only for the current user.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cd python-package; python setup.py develop --user
|
||||
|
||||
4. If you are installing the latest XGBoost version which requires compilation, add MinGW to the system PATH:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
import os
|
||||
os.environ['PATH'] = os.environ['PATH'] + ';C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'
|
||||
|
||||
R Package Installation
|
||||
======================
|
||||
|
||||
Installing pre-packaged version
|
||||
-------------------------------
|
||||
|
||||
You can install xgboost from CRAN just like any other R package:
|
||||
|
||||
.. code-block:: R
|
||||
|
||||
install.packages("xgboost")
|
||||
|
||||
Or you can install it from our weekly updated drat repo:
|
||||
|
||||
.. code-block:: R
|
||||
|
||||
install.packages("drat", repos="https://cran.rstudio.com")
|
||||
drat:::addRepo("dmlc")
|
||||
install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source")
|
||||
|
||||
For OSX users, the single-threaded version will be installed. To install the multi-threaded version,
first follow `Building on OSX`_ to get an OpenMP-enabled compiler. Then
|
||||
|
||||
- Set the ``Makevars`` file with the highest priority for R.
|
||||
|
||||
The point is, there are three ``Makevars`` files: ``~/.R/Makevars``, ``xgboost/R-package/src/Makevars``, and ``/usr/local/Cellar/r/3.2.0/R.framework/Resources/etc/Makeconf`` (the last one obtained by running ``file.path(R.home("etc"), "Makeconf")`` in R), and ``SHLIB_OPENMP_CXXFLAGS`` is not set by default! After trying, it seems that the first one has the highest priority (surprise!).
|
||||
|
||||
Then inside R, run
|
||||
|
||||
.. code-block:: R
|
||||
|
||||
install.packages("drat", repos="https://cran.rstudio.com")
|
||||
drat:::addRepo("dmlc")
|
||||
install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source")
|
||||
|
||||
Installing the development version
|
||||
----------------------------------
|
||||
|
||||
Make sure you have installed git and a recent C++ compiler supporting C++11 (e.g., g++-4.8 or higher).
|
||||
On Windows, Rtools must be installed, and its bin directory has to be added to PATH during the installation.
|
||||
And see the previous subsection for an OSX tip.
|
||||
|
||||
Due to the use of git-submodules, ``devtools::install_github`` can no longer be used to install the latest version of R package.
|
||||
Thus, one has to run git to check out the code first:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git clone --recursive https://github.com/dmlc/xgboost
|
||||
cd xgboost
|
||||
git submodule init
|
||||
git submodule update
|
||||
cd R-package
|
||||
R CMD INSTALL .
|
||||
|
||||
If the last line fails because of the error ``R: command not found``, it means that R was not set up to run from command line.
|
||||
In this case, just start R as you would normally do and run the following:
|
||||
|
||||
.. code-block:: R
|
||||
|
||||
setwd('wherever/you/cloned/it/xgboost/R-package/')
|
||||
install.packages('.', repos = NULL, type="source")
|
||||
|
||||
The package could also be built and installed with cmake (and Visual C++ 2015 on Windows) using instructions from the next section, but without GPU support (omit the ``-DUSE_CUDA=ON`` cmake parameter).
|
||||
|
||||
If all fails, try `Building the shared library`_ to see whether a problem is specific to R package or not.
|
||||
|
||||
Installing R package with GPU support
|
||||
-------------------------------------
|
||||
|
||||
The procedure and requirements are similar as in `Building with GPU support`_, so make sure to read it first.
|
||||
|
||||
On Linux, starting from the XGBoost directory type:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
cmake .. -DUSE_CUDA=ON -DR_LIB=ON
|
||||
make install -j
|
||||
|
||||
When default target is used, an R package shared library would be built in the ``build`` area.
|
||||
The ``install`` target, in addition, assembles the package files with this shared library under ``build/R-package``, and runs ``R CMD INSTALL``.
|
||||
|
||||
On Windows, cmake with Visual C++ Build Tools (or Visual Studio) has to be used to build an R package with GPU support. Rtools must also be installed (perhaps, some other MinGW distributions with ``gendef.exe`` and ``dlltool.exe`` would work, but that was not tested).
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
cmake .. -G"Visual Studio 14 2015 Win64" -DUSE_CUDA=ON -DR_LIB=ON
|
||||
cmake --build . --target install --config Release
|
||||
|
||||
When ``--target xgboost`` is used, an R package dll would be built under ``build/Release``.
|
||||
The ``--target install``, in addition, assembles the package files with this dll under ``build/R-package``, and runs ``R CMD INSTALL``.
|
||||
|
||||
If cmake can't find your R during the configuration step, you might provide the location of its executable to cmake like this: ``-DLIBR_EXECUTABLE="C:/Program Files/R/R-3.4.1/bin/x64/R.exe"``.
|
||||
|
||||
If on Windows you get a "permission denied" error when trying to write to ...Program Files/R/... during the package installation, create a ``.Rprofile`` file in your personal home directory (if you don't already have one in there), and add a line to it which specifies the location of your R packages user library, like the following:
|
||||
|
||||
.. code-block:: R
|
||||
|
||||
.libPaths( unique(c("C:/Users/USERNAME/Documents/R/win-library/3.4", .libPaths())))
|
||||
|
||||
You might find the exact location by running ``.libPaths()`` in R GUI or RStudio.
|
||||
|
||||
Trouble Shooting
|
||||
================
|
||||
|
||||
1. Compile failed after ``git pull``
|
||||
|
||||
Please first update the submodules, clean all and recompile:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git submodule update && make clean_all && make -j4
|
||||
|
||||
2. Compile failed after ``config.mk`` is modified
|
||||
|
||||
Need to clean all first:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
make clean_all && make -j4
|
||||
|
||||
3. ``Makefile: dmlc-core/make/dmlc.mk: No such file or directory``
|
||||
|
||||
We need to clone the submodules recursively:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git submodule init
|
||||
git submodule update
|
||||
|
||||
Alternatively, do another clone
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git clone https://github.com/dmlc/xgboost --recursive
|
||||
|
||||
doc/cli.rst (new file, 5 lines)
@@ -0,0 +1,5 @@
|
||||
############################
|
||||
XGBoost Command Line version
|
||||
############################
|
||||
|
||||
See `XGBoost Command Line walkthrough <https://github.com/dmlc/xgboost/blob/master/demo/binary_classification/README.md>`_.
|
||||
@@ -1,3 +0,0 @@
|
||||
# XGBoost Command Line version
|
||||
|
||||
See [XGBoost Command Line walkthrough](https://github.com/dmlc/xgboost/blob/master/demo/binary_classification/README.md)
|
||||
doc/conf.py (100 lines)
@@ -11,10 +11,26 @@
|
||||
#
|
||||
# All configuration values have a default; values that are commented out
|
||||
# serve to show the default.
|
||||
from subprocess import call
|
||||
from sh.contrib import git
|
||||
import urllib.request
|
||||
from urllib.error import HTTPError
|
||||
from recommonmark.parser import CommonMarkParser
|
||||
import sys
|
||||
import re
|
||||
import os, subprocess
|
||||
import shlex
|
||||
import urllib
|
||||
import guzzle_sphinx_theme
|
||||
|
||||
git_branch = [re.sub(r'origin/', '', x.lstrip(' ')) for x in str(git.branch('-r', '--contains', 'HEAD')).rstrip('\n').split('\n')]
|
||||
git_branch = [x for x in git_branch if 'HEAD' not in x]
|
||||
print('git_branch = {}'.format(git_branch[0]))
|
||||
try:
|
||||
filename, _ = urllib.request.urlretrieve('https://s3-us-west-2.amazonaws.com/xgboost-docs/{}.tar.bz2'.format(git_branch[0]))
|
||||
call('if [ -d tmp ]; then rm -rf tmp; fi; mkdir -p tmp/jvm; cd tmp/jvm; tar xvf {}'.format(filename), shell=True)
|
||||
except HTTPError:
|
||||
print('JVM doc not found. Skipping...')
|
||||
|
||||
# If extensions (or modules to document with autodoc) are in another directory,
|
||||
# add these directories to sys.path here. If the directory is relative to the
|
||||
# documentation root, use os.path.abspath to make it absolute, like shown here.
|
||||
@@ -23,13 +39,11 @@ libpath = os.path.join(curr_path, '../python-package/')
|
||||
sys.path.insert(0, libpath)
|
||||
sys.path.insert(0, curr_path)
|
||||
|
||||
from sphinx_util import MarkdownParser, AutoStructify
|
||||
|
||||
# -- mock out modules
|
||||
import mock
|
||||
MOCK_MODULES = ['numpy', 'scipy', 'scipy.sparse', 'sklearn', 'matplotlib', 'pandas', 'graphviz']
|
||||
for mod_name in MOCK_MODULES:
|
||||
sys.modules[mod_name] = mock.Mock()
|
||||
sys.modules[mod_name] = mock.Mock()
|
||||
|
||||
# -- General configuration ------------------------------------------------
|
||||
|
||||
@@ -39,11 +53,6 @@ author = u'%s developers' % project
|
||||
copyright = u'2016, %s' % author
|
||||
github_doc_root = 'https://github.com/dmlc/xgboost/tree/master/doc/'
|
||||
|
||||
# add markdown parser
|
||||
MarkdownParser.github_doc_root = github_doc_root
|
||||
source_parsers = {
|
||||
'.md': MarkdownParser,
|
||||
}
|
||||
os.environ['XGBOOST_BUILD_DOC'] = '1'
|
||||
# Version information.
|
||||
import xgboost
|
||||
@@ -56,14 +65,23 @@ extensions = [
|
||||
'sphinx.ext.autodoc',
|
||||
'sphinx.ext.napoleon',
|
||||
'sphinx.ext.mathjax',
|
||||
'sphinx.ext.intersphinx',
|
||||
'breathe'
|
||||
]
|
||||
|
||||
# Breathe extension variables
|
||||
breathe_projects = {"xgboost": "doxyxml/"}
|
||||
breathe_default_project = "xgboost"
|
||||
|
||||
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

source_parsers = {
    '.md': CommonMarkParser,
}

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = ['.rst', '.md']

# The encoding of source files.
@@ -79,6 +97,8 @@ master_doc = 'index'
# Usually you set "language" from the command line for these cases.
language = None

autoclass_content = 'both'

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
@@ -88,6 +108,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
html_extra_path = ['./tmp']

# The reST default role (used for this markup: `text`) to use for all
# documents.
@@ -118,11 +139,23 @@ todo_include_todos = False

# -- Options for HTML output ----------------------------------------------

html_theme_path = ['_static']
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# html_theme = 'alabaster'
html_theme = 'xgboost-theme'
html_theme_path = guzzle_sphinx_theme.html_theme_path()
html_theme = 'guzzle_sphinx_theme'

# Register the theme as an extension to generate a sitemap.xml
extensions.append("guzzle_sphinx_theme")

# Guzzle theme options (see theme.conf for more information)
html_theme_options = {
    # Set the name of the project to appear in the sidebar
    "project_nav_name": "XGBoost (0.80)"
}

html_sidebars = {
    '**': ['logo-text.html', 'globaltoc.html', 'searchbox.html']
}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
@@ -144,32 +177,27 @@ latex_documents = [
                   author, 'manual'),
]

intersphinx_mapping = {'python': ('https://docs.python.org/3.6', None),
                       'numpy': ('http://docs.scipy.org/doc/numpy/', None),
                       'scipy': ('http://docs.scipy.org/doc/scipy/reference/', None),
                       'pandas': ('http://pandas-docs.github.io/pandas-docs-travis/', None),
                       'sklearn': ('http://scikit-learn.org/stable', None)}

# hook for doxygen
def run_doxygen(folder):
    """Run the doxygen make command in the designated folder."""
    try:
        retcode = subprocess.call("cd %s; make doxygen" % folder, shell=True)
        if retcode < 0:
            sys.stderr.write("doxygen terminated by signal %s" % (-retcode))
    except OSError as e:
        sys.stderr.write("doxygen execution failed: %s" % e)


def generate_doxygen_xml(app):
    """Run the doxygen make commands if we're on the ReadTheDocs server"""
    read_the_docs_build = os.environ.get('READTHEDOCS', None) == 'True'
    if read_the_docs_build:
        run_doxygen('..')


def setup(app):
    # Add hook for building doxygen xml when needed
    # no c++ API for now
    # app.connect("builder-inited", generate_doxygen_xml)
    urllib.urlretrieve('https://code.jquery.com/jquery-2.2.4.min.js',
                       '_static/jquery.js')
    app.add_config_value('recommonmark_config', {
        'url_resolver': lambda url: github_doc_root + url,
        'enable_eval_rst': True,
    }, True)
    app.add_transform(AutoStructify)
    app.add_javascript('jquery.js')
    app.add_stylesheet('custom.css')
doc/contribute.rst | 256 (new file)
@@ -0,0 +1,256 @@
#####################
Contribute to XGBoost
#####################
XGBoost has been developed and used by a group of active community members.
Everyone is more than welcome to contribute. It is a way to make the project better and more accessible to more users.

- Please add your name to `CONTRIBUTORS.md <https://github.com/dmlc/xgboost/blob/master/CONTRIBUTORS.md>`_ after your patch has been merged.
- Please also update `NEWS.md <https://github.com/dmlc/xgboost/blob/master/NEWS.md>`_ to add a note on your changes to the API or XGBoost documentation.

**Guidelines**

* `Submit Pull Request`_
* `Git Workflow Howtos`_

  - `How to resolve conflict with master`_
  - `How to combine multiple commits into one`_
  - `What is the consequence of force push`_

* `Documents`_
* `Testcases`_
* `Sanitizers`_
* `Examples`_
* `Core Library`_
* `Python Package`_
* `R Package`_

*******************
Submit Pull Request
*******************

* Before submitting, please rebase your code on the most recent version of master. You can do it by

  .. code-block:: bash

    git remote add upstream https://github.com/dmlc/xgboost
    git fetch upstream
    git rebase upstream/master

* If you have multiple small commits,
  it might be good to merge them together (use git rebase then squash) into more meaningful groups.
* Send the pull request!

  - Fix the problems reported by automatic checks.
  - If you are contributing a new module, consider adding a testcase in `tests <https://github.com/dmlc/xgboost/tree/master/tests>`_.
*******************
Git Workflow Howtos
*******************

How to resolve conflict with master
===================================
- First rebase to the most recent master:

  .. code-block:: bash

    # The first two steps can be skipped after you do it once.
    git remote add upstream https://github.com/dmlc/xgboost
    git fetch upstream
    git rebase upstream/master

- Git may show some conflicts it cannot merge, say ``conflicted.py``.

  - Manually modify the file to resolve the conflict.
  - After you have resolved the conflict, mark it as resolved by

    .. code-block:: bash

      git add conflicted.py

- Then you can continue the rebase by

  .. code-block:: bash

    git rebase --continue

- Finally push to your fork; you may need to force push here.

  .. code-block:: bash

    git push --force

How to combine multiple commits into one
========================================
Sometimes we want to combine multiple commits, especially when later commits are only fixes to previous ones,
to create a PR with a set of meaningful commits. You can do it by the following steps.

- Before doing so, configure the default editor of git if you haven't done so before.

  .. code-block:: bash

    git config core.editor the-editor-you-like

- Assume we want to merge the last 3 commits; type the following commands:

  .. code-block:: bash

    git rebase -i HEAD~3

- It will pop up a text editor. Set the first commit as ``pick``, and change later ones to ``squash``.
- After you save the file, another text editor will pop up asking you to modify the combined commit message.
- Push the changes to your fork; you need to force push.

  .. code-block:: bash

    git push --force
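As a concrete illustration of the squash step above, the interactive rebase buffer opened by ``git rebase -i HEAD~3`` might look like the following (the hashes and messages are illustrative, not from any real history): keep the first line as ``pick`` and change the later ones to ``squash``.

```text
pick   a1b2c3d Add new feature
squash d4e5f6a Fix typo in feature
squash 0f9e8d7 Address review comments
```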
What is the consequence of force push
=====================================
The previous two tips require a force push because we altered the history of the commits.
It is fine to force push to your own fork, as long as the changed commits are only yours.
*********
Documents
*********
* Documentation is built using Sphinx.
* Each document is written in `reStructuredText <http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_.
* You can build the documents locally to see the effect.

*********
Testcases
*********
* All the testcases are in `tests <https://github.com/dmlc/xgboost/tree/master/tests>`_.
* We use python nose for python test cases.
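Nose discovers plain functions named ``test_*`` in files named ``test_*.py``, so a minimal testcase needs no boilerplate. A hedged sketch (the file name and helper below are hypothetical, not part of the XGBoost test suite):

```python
# Hypothetical file tests/python/test_metric_sketch.py -- nose collects and
# runs every function whose name starts with test_.

def toy_accuracy(preds, labels):
    """Toy helper standing in for a real prediction check."""
    hits = sum(int(p > 0.5) == l for p, l in zip(preds, labels))
    return hits / len(labels)

def test_toy_accuracy():
    # a bare assert is all nose needs to mark the case as pass/fail
    assert toy_accuracy([0.9, 0.2, 0.7], [1, 0, 1]) == 1.0
```

Running ``nosetests tests/python`` would then pick up this function automatically.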
**********
Sanitizers
**********

By default, sanitizers are bundled in GCC and Clang/LLVM. One can enable
sanitizers with GCC >= 4.8 or LLVM >= 3.1, but some distributions might package
sanitizers separately. Here is a list of supported sanitizers with
corresponding library names:

- Address sanitizer: libasan
- Leak sanitizer: liblsan
- Thread sanitizer: libtsan

Memory sanitizer is exclusive to LLVM, hence not supported in XGBoost.

How to build XGBoost with sanitizers
====================================
One can build XGBoost with sanitizer support by specifying -DUSE_SANITIZER=ON.
By default, address sanitizer and leak sanitizer are used when you turn the
USE_SANITIZER flag on. You can always change the default by providing a
semicolon-separated list of sanitizers to ENABLED_SANITIZERS. Note that thread
sanitizer is not compatible with the other two sanitizers.

.. code-block:: bash

  cmake -DUSE_SANITIZER=ON -DENABLED_SANITIZERS="address;leak" /path/to/xgboost

How to use sanitizers with CUDA support
=======================================
Running XGBoost on CUDA with address sanitizer (asan) will raise a memory error.
To use asan with CUDA correctly, you need to configure asan via the ASAN_OPTIONS
environment variable:

.. code-block:: bash

  ASAN_OPTIONS=protect_shadow_gap=0 ../testxgboost

For details, please consult the `official documentation <https://github.com/google/sanitizers/wiki>`_ for sanitizers.
********
Examples
********
* Usecases and examples are in `demo <https://github.com/dmlc/xgboost/tree/master/demo>`_.
* We are super excited to hear about your story. If you have blogposts,
  tutorials, or code solutions using XGBoost, please tell us and we will add
  a link in the example pages.

************
Core Library
************
- Follow `Google style for C++ <https://google.github.io/styleguide/cppguide.html>`_.
- Use C++11 features such as smart pointers, braced initializers, lambda functions, and ``std::thread``.
- We use Doxygen to document all the interface code.
- You can reproduce the linter checks by running ``make lint``.

**************
Python Package
**************
- Always add docstrings to new functions in numpydoc format.
- You can reproduce the linter checks by typing ``make lint``.

*********
R Package
*********

Code Style
==========
- We follow Google's C++ Style Guide for C++ code.

  - This is mainly to be consistent with the rest of the project.
  - Another reason is we will be able to check style automatically with a linter.

- You can check the style of the code by typing the following command at the root folder.

  .. code-block:: bash

    make rcpplint

- When needed, you can disable the linter warning on a certain line with ``// NOLINT(*)`` comments.
- We use `roxygen <https://cran.r-project.org/web/packages/roxygen2/vignettes/roxygen2.html>`_ for documenting the R package.

Rmarkdown Vignettes
===================
Rmarkdown vignettes are placed in `R-package/vignettes <https://github.com/dmlc/xgboost/tree/master/R-package/vignettes>`_.
These Rmarkdown files are not compiled. We host the compiled version on `doc/R-package <https://github.com/dmlc/xgboost/tree/master/doc/R-package>`_.

The following steps are followed to add a new Rmarkdown vignette:

- Add the original rmarkdown to ``R-package/vignettes``.
- Modify ``doc/R-package/Makefile`` to add the markdown files to be built.
- Clone the `dmlc/web-data <https://github.com/dmlc/web-data>`_ repo to the folder ``doc``.
- Now type the following command in ``doc/R-package``:

  .. code-block:: bash

    make the-markdown-to-make.md

- This will generate the markdown, as well as the figures, in ``doc/web-data/xgboost/knitr``.
- Modify ``doc/R-package/index.md`` to point to the generated markdown.
- Add the generated figures to the ``dmlc/web-data`` repo.

  - If you already cloned the repo to ``doc``, this means ``git add``.

- Create a PR for both the markdown and ``dmlc/web-data``.
- You can also build the document locally by typing the following command in the ``doc`` directory:

  .. code-block:: bash

    make html

The reason we do this is to avoid an exploded repo size due to generated images.

R package versioning
====================
Since version 0.6.4.3, we have adopted a versioning system that uses an x.y.z (or ``core_major.core_minor.cran_release``)
format for CRAN releases and an x.y.z.p (or ``core_major.core_minor.cran_release.patch``) format for development patch versions.
This approach is similar to the one described in Yihui Xie's
`blog post on R Package Versioning <https://yihui.name/en/2013/06/r-package-versioning/>`_,
except we need an additional field to accommodate the x.y core library version.

Each new CRAN release bumps up the 3rd field, while developments in between CRAN releases
would be marked by an additional 4th field on top of an existing CRAN release version.
Some additional consideration is needed when the core library version changes.
E.g., after the core changes from 0.6 to 0.7, the R package development version would become 0.7.0.1, working towards
a 0.7.1 CRAN release. The 0.7.0 would not be released to CRAN, unless it would require almost no additional development.
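The bumping rule above can be sketched in a few lines of Python (an illustration only, not part of the package): a three-field CRAN release starts a new development series by appending a 4th field, and an existing development version bumps that 4th field.

```python
# Sketch of the versioning scheme described above:
# 'core_major.core_minor.cran_release' for CRAN releases, plus a 4th
# patch field for development versions in between CRAN releases.
def next_dev_version(version):
    """'0.7.1' -> '0.7.1.1'; '0.7.1.1' -> '0.7.1.2'."""
    parts = version.split('.')
    if len(parts) == 3:                   # a CRAN release: start a new dev series
        return version + '.1'
    parts[3] = str(int(parts[3]) + 1)     # bump the existing patch field
    return '.'.join(parts)

print(next_dev_version('0.7.1'))    # 0.7.1.1
print(next_dev_version('0.7.1.1'))  # 0.7.1.2
```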
Registering native routines in R
================================
According to the `R extension manual <https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Registering-native-routines>`_,
it is good practice to register native routines and to disable symbol search. When any changes or additions are made to the
C++ interface of the R package, please make corresponding changes in ``src/init.c`` as well.
@@ -1,46 +1,50 @@
##########################
Frequently Asked Questions
##########################

This document contains frequently asked questions about XGBoost.

**********************
How to tune parameters
**********************
See the :doc:`Parameter Tuning Guide </tutorials/param_tuning>`.

************************
Description on the model
************************
See :doc:`Introduction to Boosted Trees </tutorials/model>`.

********************
I have a big dataset
********************
XGBoost is designed to be memory efficient. Usually it can handle problems as long as the data fit into your memory
(this usually means millions of instances).
If you are running out of memory, check out the :doc:`external memory version </tutorials/external_memory>` or
:doc:`distributed version </tutorials/aws_yarn>` of XGBoost.

**************************************************
Running XGBoost on Platform X (Hadoop/Yarn, Mesos)
**************************************************
The distributed version of XGBoost is designed to be portable to various environments.
Distributed XGBoost can be ported to any platform that supports `rabit <https://github.com/dmlc/rabit>`_.
You can directly run XGBoost on Yarn. In theory Mesos and other resource allocation engines can be easily supported as well.

*****************************************************************
Why not implement distributed XGBoost on top of X (Spark, Hadoop)
*****************************************************************
The first fact we need to know is that going distributed does not necessarily solve all the problems.
Instead, it creates more problems, such as more communication overhead and fault tolerance.
The ultimate question will still come back to how to push the limit of each computation node
and use fewer resources to complete the task (thus with less communication and chance of failure).

To achieve this, we decided to reuse the optimizations in the single-node XGBoost and build the distributed version on top of it.
The demand for communication in machine learning is rather simple, in the sense that we can depend on a limited set of APIs (in our case rabit).
Such a design allows us to reuse most of the code, while being portable to major platforms such as Hadoop/Yarn, MPI, SGE.
Most importantly, it pushes the limit of the computation resources we can use.

*****************************************
How can I port the model to my own system
*****************************************
The model and data format of XGBoost are exchangeable,
which means a model trained in one language can be loaded in another.
This means you can train the model using R, while running prediction using
@@ -48,26 +52,26 @@ Java or C++, which are more common in production systems.
You can also train the model using distributed versions,
and load them in from Python to do some interactive analysis.

*************************
Do you support LambdaMART
*************************
Yes, XGBoost implements LambdaMART. Check out the objective section in :doc:`parameters </parameter>`.

******************************
How to deal with Missing Value
******************************
XGBoost supports missing values by default.
In tree algorithms, branch directions for missing values are learned during training.
Note that the gblinear booster treats missing values as zeros.

**************************************
Slightly different result between runs
**************************************
This could happen, due to non-determinism in floating point summation order and multi-threading,
though the general accuracy will usually remain the same.

**********************************************************
Why do I see different results with sparse and dense data?
**********************************************************
"Sparse" elements are treated as if they were "missing" by the tree booster, and as zeros by the linear booster.
For tree models, it is important to use consistent data formats during training and scoring.
doc/get_started.rst | 94 (new file)
@@ -0,0 +1,94 @@
########################
Get Started with XGBoost
########################

This is a quick start tutorial showing snippets for you to quickly try out XGBoost
on the demo dataset on a binary classification task.

********************************
Links to Other Helpful Resources
********************************
- See :doc:`Installation Guide </build>` on how to install XGBoost.
- See :doc:`Text Input Format </tutorials/input_format>` on using text format for specifying training/testing data.
- See :doc:`Tutorials </tutorials/index>` for tips and tutorials.
- See `Learning to use XGBoost by Examples <https://github.com/dmlc/xgboost/tree/master/demo>`_ for more code examples.

******
Python
******

.. code-block:: python

  import xgboost as xgb
  # read in data
  dtrain = xgb.DMatrix('demo/data/agaricus.txt.train')
  dtest = xgb.DMatrix('demo/data/agaricus.txt.test')
  # specify parameters via map
  param = {'max_depth': 2, 'eta': 1, 'silent': 1, 'objective': 'binary:logistic'}
  num_round = 2
  bst = xgb.train(param, dtrain, num_round)
  # make prediction
  preds = bst.predict(dtest)
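The agaricus demo files loaded above are plain-text files in LIBSVM format: each line starts with the label, followed by sparse ``feature_index:value`` pairs. The exact indices below are illustrative, not copied from the demo data:

```text
1 3:1 10:1 11:1 21:1 30:1
0 3:1 10:1 20:1 21:1 23:1
```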
***
R
***

.. code-block:: R

  # load data
  data(agaricus.train, package='xgboost')
  data(agaricus.test, package='xgboost')
  train <- agaricus.train
  test <- agaricus.test
  # fit model
  bst <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nrounds = 2,
                 nthread = 2, objective = "binary:logistic")
  # predict
  pred <- predict(bst, test$data)

*****
Julia
*****

.. code-block:: julia

  using XGBoost
  # read data
  train_X, train_Y = readlibsvm("demo/data/agaricus.txt.train", (6513, 126))
  test_X, test_Y = readlibsvm("demo/data/agaricus.txt.test", (1611, 126))
  # fit model
  num_round = 2
  bst = xgboost(train_X, num_round, label=train_Y, eta=1, max_depth=2)
  # predict
  pred = predict(bst, test_X)

*****
Scala
*****

.. code-block:: scala

  import ml.dmlc.xgboost4j.scala.DMatrix
  import ml.dmlc.xgboost4j.scala.XGBoost

  object XGBoostScalaExample {
    def main(args: Array[String]) {
      // read training data, available at xgboost/demo/data
      val trainData =
        new DMatrix("/path/to/agaricus.txt.train")
      // define parameters
      val paramMap = List(
        "eta" -> 0.1,
        "max_depth" -> 2,
        "objective" -> "binary:logistic").toMap
      // number of iterations
      val round = 2
      // train the model
      val model = XGBoost.train(trainData, paramMap, round)
      // run prediction
      val predTrain = model.predict(trainData)
      // save model to the file.
      model.saveModel("/local/path/to/model")
    }
  }
@@ -1,79 +0,0 @@
# Get Started with XGBoost

This is a quick start tutorial showing snippets for you to quickly try out xgboost
on the demo dataset on a binary classification task.

## Links to Other Helpful Resources
- See [Installation Guide](../build.md) on how to install xgboost.
- See [How to pages](../how_to/index.md) for various tips on using xgboost.
- See [Tutorials](../tutorials/index.md) for tutorials on specific tasks.
- See [Learning to use XGBoost by Examples](../../demo) for more code examples.

## Python
```python
import xgboost as xgb
# read in data
dtrain = xgb.DMatrix('demo/data/agaricus.txt.train')
dtest = xgb.DMatrix('demo/data/agaricus.txt.test')
# specify parameters via map
param = {'max_depth': 2, 'eta': 1, 'silent': 1, 'objective': 'binary:logistic'}
num_round = 2
bst = xgb.train(param, dtrain, num_round)
# make prediction
preds = bst.predict(dtest)
```

## R

```r
# load data
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
# fit model
bst <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, nround = 2,
               nthread = 2, objective = "binary:logistic")
# predict
pred <- predict(bst, test$data)
```

## Julia
```julia
using XGBoost
# read data
train_X, train_Y = readlibsvm("demo/data/agaricus.txt.train", (6513, 126))
test_X, test_Y = readlibsvm("demo/data/agaricus.txt.test", (1611, 126))
# fit model
num_round = 2
bst = xgboost(train_X, num_round, label=train_Y, eta=1, max_depth=2)
# predict
pred = predict(bst, test_X)
```

## Scala
```scala
import ml.dmlc.xgboost4j.scala.DMatrix
import ml.dmlc.xgboost4j.scala.XGBoost

object XGBoostScalaExample {
  def main(args: Array[String]) {
    // read training data, available at xgboost/demo/data
    val trainData =
      new DMatrix("/path/to/agaricus.txt.train")
    // define parameters
    val paramMap = List(
      "eta" -> 0.1,
      "max_depth" -> 2,
      "objective" -> "binary:logistic").toMap
    // number of iterations
    val round = 2
    // train the model
    val model = XGBoost.train(trainData, paramMap, round)
    // run prediction
    val predTrain = model.predict(trainData)
    // save model to the file.
    model.saveModel("/local/path/to/model")
  }
}
```
doc/gpu/index.md | 105 (deleted file)
@@ -1,105 +0,0 @@
XGBoost GPU Support
===================

This page contains information about GPU algorithms supported in XGBoost.
To install GPU support, checkout the [build and installation instructions](../build.md).

# CUDA Accelerated Tree Construction Algorithms
This plugin adds GPU accelerated tree construction and prediction algorithms to XGBoost.

## Usage
Specify the 'tree_method' parameter as one of the following algorithms.

### Algorithms

```eval_rst
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tree_method  | Description                                                                                                                                                           |
+==============+=======================================================================================================================================================================+
| gpu_exact    | The standard XGBoost tree construction algorithm. Performs exact search for splits. Slower and uses considerably more memory than 'gpu_hist'.                         |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| gpu_hist     | Equivalent to the XGBoost fast histogram algorithm. Much faster and uses considerably less memory. NOTE: Will run very slowly on GPUs older than Pascal architecture. |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```

### Supported parameters

```eval_rst
.. |tick| unicode:: U+2714
.. |cross| unicode:: U+2718

+----------------------+------------+-----------+
| parameter            | gpu_exact  | gpu_hist  |
+======================+============+===========+
| subsample            |  |cross|   |  |tick|   |
+----------------------+------------+-----------+
| colsample_bytree     |  |cross|   |  |tick|   |
+----------------------+------------+-----------+
| colsample_bylevel    |  |cross|   |  |tick|   |
+----------------------+------------+-----------+
| max_bin              |  |cross|   |  |tick|   |
+----------------------+------------+-----------+
| gpu_id               |  |tick|    |  |tick|   |
+----------------------+------------+-----------+
| n_gpus               |  |cross|   |  |tick|   |
+----------------------+------------+-----------+
| predictor            |  |tick|    |  |tick|   |
+----------------------+------------+-----------+
| grow_policy          |  |cross|   |  |tick|   |
+----------------------+------------+-----------+
| monotone_constraints |  |cross|   |  |tick|   |
+----------------------+------------+-----------+
```

GPU accelerated prediction is enabled by default for the above mentioned 'tree_method' parameters, but it can be switched to CPU prediction by setting 'predictor':'cpu_predictor'. This could be useful if you want to conserve GPU memory. Likewise, when using CPU algorithms, GPU accelerated prediction can be enabled by setting 'predictor':'gpu_predictor'.

The device ordinal can be selected using the 'gpu_id' parameter, which defaults to 0.

Multiple GPUs can be used with the 'gpu_hist' algorithm via the 'n_gpus' parameter, which defaults to 1. If this is set to -1, all available GPUs will be used. If 'gpu_id' is specified as non-zero, the GPU device order is (gpu_id + i) % n_visible_devices for i = 0 to n_gpus - 1. As with GPU vs. CPU, multi-GPU will not always be faster than a single GPU due to PCI bus bandwidth that can limit performance.
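The device-ordering rule above can be made concrete with a small Python sketch (for illustration only, not part of the library): booster device i maps to (gpu_id + i) % n_visible_devices.

```python
# Sketch of the multi-GPU device-ordering rule described above.
def device_order(gpu_id, n_gpus, n_visible_devices):
    if n_gpus == -1:                  # -1 means "use all available GPUs"
        n_gpus = n_visible_devices
    return [(gpu_id + i) % n_visible_devices for i in range(n_gpus)]

print(device_order(1, 3, 4))   # [1, 2, 3]
print(device_order(3, 2, 4))   # [3, 0]  -- wraps around past the last device
```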
This plugin currently works with the CLI, Python, and R - see the installation guide for details.

Python example:
```python
param['gpu_id'] = 0
param['max_bin'] = 16
param['tree_method'] = 'gpu_hist'
```

## Benchmarks
To run benchmarks on synthetic data for binary classification:
```bash
$ python tests/benchmark/benchmark.py
```

Training time on 1,000,000 rows x 50 columns with 500 boosting iterations and a 0.25/0.75 test/train split on an i7-6700K CPU @ 4.00GHz and a Pascal Titan X:

```eval_rst
+--------------+----------+
| tree_method  | Time (s) |
+==============+==========+
| gpu_hist     | 13.87    |
+--------------+----------+
| hist         | 63.55    |
+--------------+----------+
| gpu_exact    | 161.08   |
+--------------+----------+
| exact        | 1082.20  |
+--------------+----------+
```

[See here](http://dmlc.ml/2016/12/14/GPU-accelerated-xgboost.html) for additional performance benchmarks of the 'gpu_exact' tree_method.

## References
[Mitchell R, Frank E. (2017) Accelerating the XGBoost algorithm using GPU computing. PeerJ Computer Science 3:e127 https://doi.org/10.7717/peerj-cs.127](https://peerj.com/articles/cs-127/)

[Nvidia Parallel Forall: Gradient Boosting, Decision Trees and XGBoost with CUDA](https://devblogs.nvidia.com/parallelforall/gradient-boosting-decision-trees-xgboost-cuda/)

## Authors
Rory Mitchell,
Jonathan C. McKinney,
Shankara Rao Thejaswi Nanditale,
Vinay Deshpande,
... and the rest of the H2O.ai and NVIDIA team.

Please report bugs to the xgboost/issues page.
doc/gpu/index.rst | 121 (new file)
@@ -0,0 +1,121 @@
###################
XGBoost GPU Support
###################

This page contains information about GPU algorithms supported in XGBoost.
To install GPU support, checkout the :doc:`/build`.

.. note:: CUDA 8.0, Compute Capability 3.5 required

  The GPU algorithms in XGBoost require a graphics card with compute capability 3.5 or higher, with
  CUDA toolkits 8.0 or later.
  (See `this list <https://en.wikipedia.org/wiki/CUDA#GPUs_supported>`_ to look up the compute capability of your GPU card.)

*********************************************
CUDA Accelerated Tree Construction Algorithms
*********************************************
Tree construction (training) and prediction can be accelerated with CUDA-capable GPUs.

Usage
=====
Specify the ``tree_method`` parameter as one of the following algorithms.

Algorithms
----------

+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tree_method  | Description                                                                                                                                                           |
+==============+=======================================================================================================================================================================+
| gpu_exact    | The standard XGBoost tree construction algorithm. Performs exact search for splits. Slower and uses considerably more memory than ``gpu_hist``.                       |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| gpu_hist     | Equivalent to the XGBoost fast histogram algorithm. Much faster and uses considerably less memory. NOTE: Will run very slowly on GPUs older than Pascal architecture. |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Supported parameters
--------------------

.. |tick| unicode:: U+2714
.. |cross| unicode:: U+2718

+--------------------------+---------------+--------------+
| parameter                | ``gpu_exact`` | ``gpu_hist`` |
|
||||
+==========================+===============+==============+
|
||||
| ``subsample`` | |cross| | |tick| |
|
||||
+--------------------------+---------------+--------------+
|
||||
| ``colsample_bytree`` | |cross| | |tick| |
|
||||
+--------------------------+---------------+--------------+
|
||||
| ``colsample_bylevel`` | |cross| | |tick| |
|
||||
+--------------------------+---------------+--------------+
|
||||
| ``max_bin`` | |cross| | |tick| |
|
||||
+--------------------------+---------------+--------------+
|
||||
| ``gpu_id`` | |tick| | |tick| |
|
||||
+--------------------------+---------------+--------------+
|
||||
| ``n_gpus`` | |cross| | |tick| |
|
||||
+--------------------------+---------------+--------------+
|
||||
| ``predictor`` | |tick| | |tick| |
|
||||
+--------------------------+---------------+--------------+
|
||||
| ``grow_policy`` | |cross| | |tick| |
|
||||
+--------------------------+---------------+--------------+
|
||||
| ``monotone_constraints`` | |cross| | |tick| |
|
||||
+--------------------------+---------------+--------------+
|
||||
|
||||
GPU accelerated prediction is enabled by default for the above mentioned ``tree_method`` parameters but can be switched to CPU prediction by setting ``predictor`` to ``cpu_predictor``. This could be useful if you want to conserve GPU memory. Likewise when using CPU algorithms, GPU accelerated prediction can be enabled by setting ``predictor`` to ``gpu_predictor``.

The device ordinal can be selected using the ``gpu_id`` parameter, which defaults to 0.

Multiple GPUs can be used with the ``gpu_hist`` tree method using the ``n_gpus`` parameter, which defaults to 1. If this is set to -1, all available GPUs will be used. If ``gpu_id`` is specified as non-zero, the GPU device order is ``(gpu_id + i) % n_visible_devices`` for ``i = 0`` to ``n_gpus - 1``. As with GPU vs. CPU, multi-GPU training will not always be faster than a single GPU, since PCIe bus bandwidth can limit performance.

.. note:: Enabling multi-GPU training

   The default installation may not enable multi-GPU training. To use multiple GPUs, make sure to read :ref:`build_gpu_support`.

The GPU algorithms currently work with the CLI, Python and R packages. See :doc:`/build` for details.

.. code-block:: python
   :caption: Python example

   param['gpu_id'] = 0
   param['max_bin'] = 16
   param['tree_method'] = 'gpu_hist'

Benchmarks
==========
You can run benchmarks on synthetic data for binary classification:

.. code-block:: bash

   python tests/benchmark/benchmark.py

Training time on 1,000,000 rows x 50 columns with 500 boosting iterations and a 0.25/0.75 test/train split, on an i7-6700K CPU @ 4.00GHz and a Pascal Titan X, yields the following results:

+--------------+----------+
| tree_method  | Time (s) |
+==============+==========+
| gpu_hist     | 13.87    |
+--------------+----------+
| hist         | 63.55    |
+--------------+----------+
| gpu_exact    | 161.08   |
+--------------+----------+
| exact        | 1082.20  |
+--------------+----------+

See `GPU Accelerated XGBoost <https://xgboost.ai/2016/12/14/GPU-accelerated-xgboost.html>`_ and `Updates to the XGBoost GPU algorithms <https://xgboost.ai/2018/07/04/gpu-xgboost-update.html>`_ for additional performance benchmarks of the ``gpu_exact`` and ``gpu_hist`` tree methods.

**********
References
**********
`Mitchell R, Frank E. (2017) Accelerating the XGBoost algorithm using GPU computing. PeerJ Computer Science 3:e127 https://doi.org/10.7717/peerj-cs.127 <https://peerj.com/articles/cs-127/>`_

`Nvidia Parallel Forall: Gradient Boosting, Decision Trees and XGBoost with CUDA <https://devblogs.nvidia.com/parallelforall/gradient-boosting-decision-trees-xgboost-cuda/>`_

Authors
=======
* Rory Mitchell
* Jonathan C. McKinney
* Shankara Rao Thejaswi Nanditale
* Vinay Deshpande
* ... and the rest of the H2O.ai and NVIDIA team.

Please report bugs to the user forum https://discuss.xgboost.ai/.

@@ -1,164 +0,0 @@
Contribute to XGBoost
=====================
XGBoost has been developed and used by a group of active community members.
Everyone is more than welcome to contribute. It is a way to make the project better and more accessible to more users.

- Please add your name to [CONTRIBUTORS.md](../../CONTRIBUTORS.md) after your patch has been merged.
- Please also update [NEWS.md](../../NEWS.md) to note your changes to the API or any new documents you added.

Guidelines
----------
* [Submit Pull Request](#submit-pull-request)
* [Git Workflow Howtos](#git-workflow-howtos)
  - [How to resolve conflict with master](#how-to-resolve-conflict-with-master)
  - [How to combine multiple commits into one](#how-to-combine-multiple-commits-into-one)
  - [What is the consequence of force push](#what-is-the-consequence-of-force-push)
* [Documents](#documents)
* [Testcases](#testcases)
* [Examples](#examples)
* [Core Library](#core-library)
* [Python Package](#python-package)
* [R Package](#r-package)

Submit Pull Request
-------------------
* Before submitting, please rebase your code on the most recent version of master. You can do it by running

```bash
git remote add upstream https://github.com/dmlc/xgboost
git fetch upstream
git rebase upstream/master
```

* If you have multiple small commits,
  it might be good to merge them together (use `git rebase` then `squash`) into more meaningful groups.
* Send the pull request!
  - Fix the problems reported by automatic checks.
  - If you are contributing a new module, consider adding a test case in [tests](../tests).

Git Workflow Howtos
-------------------
### How to resolve conflict with master
- First rebase to the most recent master:

```bash
# The first two steps can be skipped after you do it once.
git remote add upstream https://github.com/dmlc/xgboost
git fetch upstream
git rebase upstream/master
```

- Git may show some conflicts it cannot merge, say `conflicted.py`.
  - Manually modify the file to resolve the conflict.
  - After you have resolved the conflict, mark it as resolved by

```bash
git add conflicted.py
```

- Then you can continue the rebase by

```bash
git rebase --continue
```

- Finally push to your fork; you may need to force push here:

```bash
git push --force
```

### How to combine multiple commits into one
Sometimes we want to combine multiple commits, especially when later commits are only fixes to previous ones,
to create a PR with a set of meaningful commits. You can do it by the following steps:
- Before doing so, configure the default editor of git if you haven't done so:

```bash
git config core.editor the-editor-you-like
```

- Assume we want to merge the last 3 commits; type the following commands:

```bash
git rebase -i HEAD~3
```

- It will pop up a text editor. Set the first commit as `pick`, and change later ones to `squash`.
- After you save the file, it will pop up another text editor asking you to modify the combined commit message.
- Push the changes to your fork; you need to force push:

```bash
git push --force
```

### What is the consequence of force push
The previous two tips require a force push because we altered the path of the commits.
It is fine to force push to your own fork, as long as the changed commits are only yours.

Documents
---------
* The documents are created using sphinx and [recommonmark](http://recommonmark.readthedocs.org/en/latest/).
* You can build the documents locally to see the effect.

Testcases
---------
* All the test cases are in [tests](../tests).
* We use python nose for python test cases.

Examples
--------
* Use cases and examples are in [demo](../demo).
* We are super excited to hear about your story. If you have blog posts,
  tutorials, or code solutions using XGBoost, please tell us and we will add
  a link in the example pages.

Core Library
------------
- Follow the Google C++ Style Guide for C++ code.
- We use doxygen to document all the interface code.
- You can reproduce the linter checks by typing `make lint`.

Python Package
--------------
- Always add a docstring to new functions in numpydoc format.
- You can reproduce the linter checks by typing `make lint`.

R Package
---------
### Code Style
- We follow Google's C++ Style Guide for C++ code.
  - This is mainly to be consistent with the rest of the project.
  - Another reason is that we can then check style automatically with a linter.
- You can check the style of the code by typing the following command at the root folder:

```bash
make rcpplint
```

- When needed, you can disable the linter warning on a certain line with a `// NOLINT(*)` comment.
- We use [roxygen](https://cran.r-project.org/web/packages/roxygen2/vignettes/roxygen2.html) for documenting the R package.

### Rmarkdown Vignettes
Rmarkdown vignettes are placed in [R-package/vignettes](../R-package/vignettes).
These Rmarkdown files are not compiled. We host the compiled version on [doc/R-package](R-package).

The following steps are followed to add a new Rmarkdown vignette:
- Add the original rmarkdown to `R-package/vignettes`.
- Modify `doc/R-package/Makefile` to add the markdown files to be built.
- Clone the [dmlc/web-data](https://github.com/dmlc/web-data) repo to the folder `doc`.
- Now type the following command in `doc/R-package`:

```bash
make the-markdown-to-make.md
```

- This will generate the markdown, as well as the figures, in `doc/web-data/xgboost/knitr`.
- Modify `doc/R-package/index.md` to point to the generated markdown.
- Add the generated figures to the `dmlc/web-data` repo.
  - If you already cloned the repo to `doc`, this means a `git add`.
- Create PRs for both the markdown and `dmlc/web-data`.
- You can also build the documents locally by typing the following command at `doc`:

```bash
make html
```

The reason we do this is to avoid an exploded repo size due to generated image sizes.

### R package versioning
Since version 0.6.4.3, we have adopted a versioning system that uses an `x.y.z` (or `core_major.core_minor.cran_release`)
format for CRAN releases and an `x.y.z.p` (or `core_major.core_minor.cran_release.patch`) format for development patch versions.
This approach is similar to the one described in Yihui Xie's
[blog post on R Package Versioning](https://yihui.name/en/2013/06/r-package-versioning/),
except we need an additional field to accommodate the `x.y` core library version.

Each new CRAN release bumps up the 3rd field, while developments in between CRAN releases
are marked by an additional 4th field on top of an existing CRAN release version.
Some additional consideration is needed when the core library version changes.
E.g., after the core changes from 0.6 to 0.7, the R package development version would become 0.7.0.1, working towards
a 0.7.1 CRAN release. 0.7.0 would not be released to CRAN, unless it would require almost no additional development.
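
The scheme can be sketched in plain Python (`parse_version` is a hypothetical helper for illustration only):

```python
def parse_version(v):
    """Split an R package version under the scheme above:
    x.y.z is a CRAN release; x.y.z.p is a development patch version."""
    fields = [int(f) for f in v.split('.')]
    core = (fields[0], fields[1])           # core_major, core_minor
    cran_release = fields[2]
    patch = fields[3] if len(fields) == 4 else None
    return core, cran_release, patch

print(parse_version('0.7.0.1'))  # -> ((0, 7), 0, 1): dev version after the core bump to 0.7
print(parse_version('0.7.1'))    # -> ((0, 7), 1, None): the following CRAN release
```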

### Registering native routines in R
According to the [R extension manual](https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Registering-native-routines),
it is good practice to register native routines and to disable the symbol search. When any changes or additions are made to the
C++ interface of the R package, please make corresponding changes in `src/init.c` as well.
@@ -1,42 +0,0 @@
Using XGBoost External Memory Version (beta)
============================================
There is no big difference between using the external memory version and the in-memory version.
The only difference is the filename format.

The external memory version takes the following filename format:

```
filename#cacheprefix
```

The `filename` is the normal path to the libsvm file you want to load in, and `cacheprefix` is a
path to a cache file that xgboost will use for the external memory cache.

The following code was extracted from [../demo/guide-python/external_memory.py](../demo/guide-python/external_memory.py):

```python
dtrain = xgb.DMatrix('../data/agaricus.txt.train#dtrain.cache')
```

You can see that there is an additional `#dtrain.cache` following the libsvm file; this is the name of the cache file.
For the CLI version, simply use `"../data/agaricus.txt.train#dtrain.cache"` as the filename.
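
The `filename#cacheprefix` convention can be sketched in plain Python (`split_cache_uri` is a hypothetical helper, not part of xgboost):

```python
def split_cache_uri(uri):
    """Split 'filename#cacheprefix'; without '#', no external-memory cache is used."""
    filename, sep, cache_prefix = uri.partition('#')
    return filename, (cache_prefix if sep else None)

print(split_cache_uri('../data/agaricus.txt.train#dtrain.cache'))
# -> ('../data/agaricus.txt.train', 'dtrain.cache')
```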

Performance Note
----------------
* The parameter `nthread` should be set to the number of ***real*** cores.
  - Most modern CPUs offer hyperthreading, which means a 4-core CPU can present 8 threads.
  - Set `nthread` to 4 for maximum performance in such a case.
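
A hedged sketch of picking `nthread` on a hyperthreaded machine (it assumes logical CPUs are twice the physical cores, which is common but not universal; `suggested_nthread` is a hypothetical helper):

```python
import os

def suggested_nthread(logical_cpus=None, smt_per_core=2):
    """Estimate the physical core count by dividing logical CPUs by the SMT factor."""
    if logical_cpus is None:
        logical_cpus = os.cpu_count() or 1
    return max(1, logical_cpus // smt_per_core)

print(suggested_nthread(8))  # -> 4: a 4-core/8-thread CPU should use nthread=4
```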

Distributed Version
-------------------
The external memory mode naturally works with the distributed version; you can simply set the path like

```
data = "hdfs:///path-to-data/#dtrain.cache"
```

xgboost will cache the data to the local disk. When you run on YARN, the current folder is temporary,
so you can directly use `dtrain.cache` to cache to the current folder.

Usage Note
----------
* This is an experimental version.
  - If you would like to try and test it, report results to https://github.com/dmlc/xgboost/issues/244.
* Currently only importing from the libsvm format is supported.
  - Contributions of ingestion from other common external memory data sources are welcome.

@@ -1,17 +0,0 @@
# XGBoost How To

This page contains guidelines to use and develop XGBoost.

## Installation
- [How to Install XGBoost](../build.md)

## Use XGBoost in Specific Ways
- [Parameter tuning guide](param_tuning.md)
- [Use out-of-core computation for large datasets](external_memory.md)
- [Use XGBoost GPU algorithms](../gpu/index.md)

## Develop and Hack XGBoost
- [Contribute to XGBoost](contribute.md)

## Frequently Asked Questions
- [FAQ](../faq.md)

16
doc/index.md
@@ -1,16 +0,0 @@

XGBoost Documentation
=====================
This document is hosted at http://xgboost.readthedocs.org/. You can also browse most of the documents on GitHub directly.

The following documents are used to generate the index used in search.

* [Python Package Document](python/index.md)
* [R Package Document](R-package/index.md)
* [Java/Scala Package Document](jvm/index.md)
* [Julia Package Document](julia/index.md)
* [CLI Package Document](cli/index.md)
* [GPU Support Document](gpu/index.md)
* [Howto Documents](how_to/index.md)
* [Get Started Documents](get_started/index.md)
* [Tutorials](tutorials/index.md)

30
doc/index.rst
Normal file
@@ -0,0 +1,30 @@

#####################
XGBoost Documentation
#####################

**XGBoost** is an optimized distributed gradient boosting library designed to be highly **efficient**, **flexible** and **portable**.
It implements machine learning algorithms under the `Gradient Boosting <https://en.wikipedia.org/wiki/Gradient_boosting>`_ framework.
XGBoost provides parallel tree boosting (also known as GBDT, GBM) that solves many data science problems in a fast and accurate way.
The same code runs on major distributed environments (Hadoop, SGE, MPI) and can solve problems beyond billions of examples.

********
Contents
********

.. toctree::
  :maxdepth: 2
  :titlesonly:

  build
  get_started
  tutorials/index
  faq
  XGBoost User Forum <https://discuss.xgboost.ai>
  GPU support <gpu/index>
  parameter
  Python package <python/index>
  R package <R-package/index>
  JVM package <jvm/index>
  Julia package <julia>
  CLI interface <cli>
  contribute