* [jvm-packages] Fixed JNI_OnLoad overload
It does not compile on Windows without proper export flags.
* [jvm-packages] Use JNI types directly where appropriate
* Removed lib hack from CMake build
Prior to this commit, the CMake build used a hardcoded 'lib' prefix for
libxgboost and libxgboost4j. Unfortunately, this did not play well with
Windows, which does not use the lib- prefix.
* [jvm-packages] Replaced create_jni.{bat,sh} with a Python version
This allows a single script to be used for all platforms.
* [jvm-packages] Added all configuration options to create_jni.py
Use int32_t explicitly when serializing the version field of DMatrix in binary
format. On ILP64 architectures, although rare, the size of int is 64 bits.
* Integrating a faster version of grow_gpu plugin
1. Removed the older files to reduce duplication
2. Moved all of the grow_gpu files under 'exact' folder
3. All of them are inside 'exact' namespace to avoid any conflicts
4. Fixed a bug in benchmark.py while running only 'grow_gpu' plugin
5. Added cub and googletest submodules to ease integration and unit-testing
6. Updates to CMakeLists.txt to directly build cuda objects into libxgboost
* Added support for building gpu plugins through make flow
1. updated Makefile and config.mk to add the right targets
2. added unit-tests for gpu exact plugin code
* 1. Added support for building gpu plugin using 'make' flow as well
2. Updated instructions for building and testing gpu plugin
* Fix travis-ci errors for PR#2360
1. lint errors on unit-tests
2. removed googletest; instead depend on the gtest cache provided by dmlc-core
* Some more fixes to travis-ci lint failures PR#2360
* Added Rory's copyright to the files containing code from both authors.
* updated copyright statement as per Rory's request
* moved the static datasets into a script to generate them at runtime
* 1. memory usage print when silent=0
2. tests/ and test/ folder organization
3. removal of the dependency of googletest for just building xgboost
4. coding style updates for .cuh as well
* Fixes for compilation warnings
* add cuda object files as well when JVM_BINDINGS=ON
* [jvm-packages] Added libxgboost4j to CMake build
* [jvm-packages] Wired CMake build into create_jni.sh
* Use newer CMake version on Travis
* Lowered CMake version constraints
* Fixed various quirks in the new CMake build
Don't use implicit conversions to c_int, which incidentally happen to work
on (some) 64-bit platforms, but:
* may lead to truncation of the input value to a 32-bit signed int,
* cause segfaults on some 32-bit architectures (tested on Ubuntu ARM,
but is also the likely cause of issue #1707).
Also, when passing references use explicit 64-bit integers, where needed,
instead of c_ulong, which is not guaranteed to be this large.
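A minimal Python sketch of the pattern described above, using hypothetical variable names rather than quoting the actual wrapper code:

    import ctypes

    # Hypothetical illustration: convert explicitly to fixed-width ctypes
    # integers instead of relying on implicit conversion to c_int, which can
    # truncate large values, and use c_uint64 where a 64-bit width is needed
    # (c_ulong is only 32 bits wide on some platforms).
    num_rows = 3000000000                   # too large for a 32-bit signed int
    length_arg = ctypes.c_uint64(num_rows)  # explicit 64-bit integer
    option_arg = ctypes.c_int(1)            # explicit conversion where C expects int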
* Specified 'exec-maven-plugin' version
* Changed 'create_jni.sh' to fail on error
and also report each of the executed commands, which makes it easier
to debug.
The for loop in create.new.tree.features was using length(trees) as its upper bound. trees is a base R dataset, not the model the code is operating on. Changed the loop boundary to model$niter, which should be the number of trees.
* Added kwargs support for Sklearn API
* Updated NEWS and CONTRIBUTORS
* Fixed CONTRIBUTORS.md
* Added clarification of **kwargs and test for proper usage
* Fixed lint error
* Fixed more lint errors and clf assigned but never used
* Fixed more lint errors
* Fixed more lint errors
* Fixed issue with changes from different branch bleeding over
* Fixed issue with changes from other branch bleeding over
* Added note that kwargs may not be compatible with Sklearn
* Fixed linting on kwargs note
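A hedged usage sketch of the **kwargs pass-through described above; the parameter choice here is illustrative only:

    from xgboost import XGBClassifier

    # Keyword arguments not covered by the wrapper's named parameters are
    # forwarded to the underlying booster. Per the note above, such parameters
    # are not guaranteed to interoperate with scikit-learn utilities such as
    # GridSearchCV.
    clf = XGBClassifier(max_depth=4, tree_method="exact")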
* Added n_jobs and random_state to keep up to date with sklearn API.
Deprecated nthread and seed. Added tests for new params and
deprecations.
* Fixed docstring to reflect updates to n_jobs and random_state.
* Fixed whitespace issues and removed nose import.
* Added deprecation note for nthread and seed in docstring.
* Attempted fix of deprecation tests.
* Second attempted fix to tests.
* Set n_jobs to 1.
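A short sketch of the renamed parameters, assuming the deprecation shims behave as described above:

    from xgboost import XGBClassifier

    # New-style names: n_jobs replaces nthread, random_state replaces seed.
    clf = XGBClassifier(n_jobs=1, random_state=42)

    # The old names still work for now but are expected to emit a
    # deprecation warning, per the commits above.
    clf_old = XGBClassifier(nthread=1, seed=42)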
* [gblinear] add features contribution prediction; fix DumpModel bug
* [gbtree] minor changes to PredContrib
* [R] add feature contribution prediction to R
* [R] bump up version; update NEWS
* [gblinear] fix the base_margin issue; fixes #1969
* [R] list of matrices as output of multiclass feature contributions
* [gblinear] make order of DumpModel coefficients consistent: group index changes the fastest
* Fix compilation on OS X with GCC 7
Compilation failed with
In file included from src/tree/tree_updater.cc:6:0:
include/xgboost/tree_updater.h:75:46: error: 'function' is not a member of 'std'
std::function<TreeUpdater* ()> > {
caused by a missing <functional> include.
* Fixed another occurrence of that issue spotted by @ClimberPG
* Add option to choose booster in scikit interface (gbtree by default)
* Add option to choose booster in scikit interface: complete docstring.
* Fix XGBClassifier to work with booster option
* Added test case for gblinear booster
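A hedged usage sketch of the new booster option in the scikit-learn interface:

    from xgboost import XGBClassifier, XGBRegressor

    # booster defaults to 'gbtree'; 'gblinear' selects the linear booster,
    # matching the gblinear test case added above.
    clf = XGBClassifier()                   # gbtree by default
    reg = XGBRegressor(booster="gblinear")  # linear booster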
* [R] add native routines registration
* c_api.h needs to include <cstdint> since it uses fixed width integer types
* [R] use registered native routines from R code
* [R] bump version; add info on native routine registration to the contributors guide
* make lint happy
* Add prediction of feature contributions
This implements the idea described at http://blog.datadive.net/interpreting-random-forests/
which tries to give insight into how a prediction is composed of its feature contributions
and a bias.
* Support multi-class models
* Calculate learning_rate per-tree instead of using the one from the first tree
* Do not rely on node.base_weight * learning_rate having the same value as the node mean value (aka leaf value, if it were a leaf); instead calculate them (lazily) on-the-fly
* Add simple test for contributions feature
* Check against param.num_nodes instead of checking for non-zero length
* Loop over all roots instead of only the first
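A hedged Python usage sketch of the feature-contribution output, assuming it is exposed through predict(pred_contribs=True) in the Python package:

    import numpy as np
    import xgboost as xgb

    X = np.random.rand(100, 5)
    y = np.random.randint(2, size=100)
    dtrain = xgb.DMatrix(X, label=y)
    bst = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)

    # One contribution per feature plus a trailing bias column; each row
    # should sum to that row's margin prediction.
    contribs = bst.predict(dtrain, pred_contribs=True)  # shape (100, 6)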
* add back train method but mark as deprecated
* fix scalastyle error
* fix the persistence of XGBoostEstimator
* test persistence of a complete pipeline
* fix compilation issue
* do not allow persist custom_eval and custom_obj
* fix the failed test
* [R] make sure things work for a single split model; fixes #2191
* [R] add option use_int_id to xgb.model.dt.tree
* [R] add example of exporting tree plot to a file
* [R] set save_period = NULL as default in xgboost() to be the same as in xgb.train; fixes #2182
* [R] it's a good practice after CRAN releases to bump up package version in dev
* [R] allow xgb.DMatrix construction from integer dense matrices
* [R] xgb.DMatrix: silent parameter; improve documentation
* [R] xgb.model.dt.tree code style changes
* [R] update NEWS with parameter changes
* [R] code safety & style; handle non-strict matrix and inherited classes of input and model; fixes #2242
* [R] change to x.y.z.p R-package versioning scheme and set version to 0.6.4.3
* [R] add an R package versioning section to the contributors guide
* [R] R-package/README.md: clean up the redundant old installation instructions, link the contributors guide
Reported in issue #2165. Dynamic scheduling of OpenMP loops involves
implicit synchronization. To implement synchronization, libgomp uses futex
(fast userspace mutex), whereas MinGW uses a kernel-space mutex, which is more
costly. With a chunk size of 1, the synchronization overhead may become
prohibitive on Windows machines.
Solution: use the 'guided' schedule to minimize the number of synchronizations.
Storing and then loading a model loses any eval_metric that was
provided. This causes implementations that always store/load, like
xgboost4j-spark, to be unable to eval with the desired metric.
This log appears to fire every time I ask the Python package to make a prediction. It's the only log that fires from XGBoost. When we're getting predictions on millions of items a day in production, this log seems out of place.
* add back train method but mark as deprecated
* fix scalastyle error
* change class to object in examples
* fix compilation error
* small fix for cleanExternalCache
* add back train method but mark as deprecated
* fix scalastyle error
* change class to object in examples
* fix compilation error
* fix several issues in tests
* Bugfix 1: Fix segfault in multithreaded ApplySplitSparseData()
When there are more threads than rows in the row set, some threads end up
with empty ranges, causing them to crash (iend - 1 needs to be
accessible as part of the algorithm).
Fix: run only those threads with nonempty ranges.
* Add regression test for Bugfix 1
* Moving python_omp_test to existing python test group
It turns out you don't need to set "OMP_NUM_THREADS" to enable
multithreading; just add the nthread parameter (see the sketch below).
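A minimal sketch of enabling multithreading via the nthread parameter; the fast-histogram updater shown here is just one example setting:

    import numpy as np
    import xgboost as xgb

    dtrain = xgb.DMatrix(np.random.rand(1000, 10), label=np.random.rand(1000))

    # nthread controls the number of OpenMP threads used for training;
    # no OMP_NUM_THREADS environment variable is needed.
    params = {"nthread": 4, "tree_method": "hist"}
    bst = xgb.train(params, dtrain, num_boost_round=5)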
* Bugfix 2: Fix corner case of ApplySplitSparseData() for a categorical feature
When the split value is less than all cut points, split_cond is set
incorrectly.
Fix: set split_cond = -1 to indicate this scenario
* Bugfix 3: Initialize data layout indicator before using it
data_layout_ is accessed before being set; this variable determines
whether feature 0 is included in feat_set.
Fix: re-order code in InitData() to initialize data_layout_ first
* Adding regression test for Bugfix 2
Unfortunately, there is no regression test for Bugfix 3, as there is no
way to deterministically control the value of an uninitialized variable.