* Now `make pippack` works without any manual steps: it produces
`xgboost-[version].tar.gz`, which can then be installed with
`pip3 install xgboost-[version].tar.gz`.
* Detect OpenMP-capable compilers (clang, gcc-5, gcc-7) on macOS
* Support CSV file in DMatrix
We'd just need to expose the CSV parser in dmlc-core to the Python wrapper
* Revert extra code; document existing CSV support
CSV support is already there but undocumented
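For context, a minimal sketch of how the documented CSV support can be used from the Python package, assuming a local file `train.csv` whose first column holds the label (the file name is just a placeholder):

```python
import xgboost as xgb

# Load a CSV file directly into a DMatrix; the URI suffix selects the
# dmlc-core CSV parser and the column that holds the label.
dtrain = xgb.DMatrix('train.csv?format=csv&label_column=0')
```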
* Add notice about categorical features
* Add interaction effects and cox loss (usage sketch below)
* Minimize whitespace changes
* Cox loss no longer needs a pre-sorted dataset.
* Address code review comments
* Remove mem check, rename to pred_interactions, include bias
* Make lint happy
* More lint fixes
* Fix cox loss indexing
* Fix main effects and tests
* Fix lint
* Use half interaction values on the off-diagonals
* Fix lint again
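A hedged sketch of how the Cox loss and the renamed `pred_interactions` output above can be exercised, assuming the usual convention that negative labels mark right-censored observations; the data here is synthetic:

```python
import numpy as np
import xgboost as xgb

# Synthetic survival data: the magnitude is the event/censoring time,
# a negative sign marks a right-censored observation.
X = np.random.rand(100, 5)
times = np.random.randint(1, 50, size=100)
censored = np.random.choice([1, -1], size=100)
dtrain = xgb.DMatrix(X, label=times * censored)

bst = xgb.train({'objective': 'survival:cox'}, dtrain, num_boost_round=10)

# Per-pair interaction effects (plus a bias row/column) for each sample.
interactions = bst.predict(dtrain, pred_interactions=True)
print(interactions.shape)  # (100, 6, 6)
```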
After installing ``gcc@5``, ``CMAKE_C_COMPILER`` is not automatically set to gcc-5 in some macOS environments, so the installation of xgboost still fails. Manually setting the compiler solves the problem.
I found installing the Python XGBoost package problematic because the documentation around compiler requirements was unclear, as discussed in #1501, so I decided to improve the README.
* SHAP values for feature contributions (usage sketch below)
* Fix commenting error
* New polynomial time SHAP value estimation algorithm
* Update API to support SHAP values
* Fix merge conflicts with updates in master
* Correct submodule hashes
* Fix variable sized stack allocation
* Make lint happy
* Add docs
* Fix typo
* Adjust tolerances
* Remove unneeded def
* Fixed cpp test setup
* Updated R API and cleaned up
* Fixed test typo
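A brief sketch of the SHAP-value API mentioned above, on synthetic data; the contributions come back with one column per feature plus a final bias column:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 4)
y = np.random.rand(200)
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({'objective': 'reg:linear'}, dtrain, num_boost_round=20)

# SHAP feature contributions: shape (n_samples, n_features + 1),
# where the last column is the bias term.
shap_values = bst.predict(dtrain, pred_contribs=True)
print(shap_values.shape)  # (200, 5)
```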
* Repaired serialization after the update process; fixes #2545
* Fixed non-stratified folds in Python, which could omit some data instances
* Makefile: fixes for older versions of make on Windows; clean R-package too
* Make cub a shallow submodule
* Improve $(MAKE) recovery
* Fixed DLL name on Windows in ``xgboost.libpath``
* Added support for OS X to ``xgboost.libpath``
* Use .dylib for shared library on OS X
This does not affect the JNI library, because it is not truly
cross-platform in the Makefile build anyway.
Don't use implicit conversions to c_int, which incidentally happen to work
on (some) 64-bit platforms, but:
* may truncate the input value to a 32-bit signed int,
* may cause segfaults on some 32-bit architectures (observed on Ubuntu ARM;
this is also the likely cause of issue #1707).
Also, when passing references, use explicit 64-bit integers where needed
instead of c_ulong, which is not guaranteed to be that large.
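A small, hypothetical ctypes sketch of the pattern described above (the C-API call is a made-up placeholder, not an actual XGBoost function):

```python
import ctypes

# Use a fixed 64-bit unsigned type for lengths instead of the
# platform-dependent c_ulong, and convert integers explicitly instead of
# relying on the implicit (and potentially truncating) default conversion to c_int.
c_bst_ulong = ctypes.c_uint64

length = c_bst_ulong()              # explicit 64-bit out-parameter
row_index = ctypes.c_uint64(2**40)  # would overflow a 32-bit c_int if passed implicitly
# lib.HypotheticalCApiCall(handle, row_index, ctypes.byref(length))  # placeholder call
```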
* Added kwargs support for Sklearn API (usage sketch below)
* Updated NEWS and CONTRIBUTORS
* Fixed CONTRIBUTORS.md
* Added clarification of **kwargs and test for proper usage
* Fixed lint error
* Fixed more lint errors and a "clf assigned but never used" warning
* Fixed more lint errors
* Fixed more lint errors
* Fixed issue with changes from different branch bleeding over
* Fixed issue with changes from other branch bleeding over
* Added note that kwargs may not be compatible with Sklearn
* Fixed linting on kwargs note
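A short sketch of the kwargs pass-through on the sklearn wrapper, on synthetic data; as the note above says, extra keywords are not guaranteed to play well with all scikit-learn tooling:

```python
import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(100, 3)
y = np.random.randint(2, size=100)

# Keyword arguments that are not explicit constructor parameters are
# forwarded to the underlying booster as training parameters.
clf = XGBClassifier(max_depth=3, **{'tree_method': 'exact'})
clf.fit(X, y)
```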
* Added n_jobs and random_state to keep up to date with the sklearn API
(usage sketch below). Deprecated nthread and seed. Added tests for the new
params and deprecations.
* Fixed docstring to reflect updates to n_jobs and random_state.
* Fixed whitespace issues and removed nose import.
* Added deprecation note for nthread and seed in docstring.
* Attempted fix of deprecation tests.
* Second attempted fix to tests.
* Set n_jobs to 1.
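A one-line illustration of the new aliases; nthread and seed still work but now emit deprecation warnings:

```python
from xgboost import XGBRegressor

# sklearn-style names replace the deprecated nthread and seed arguments.
model = XGBRegressor(n_jobs=4, random_state=42)
```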
* Add option to choose booster in scikit interface (gbtree by default; usage sketch below)
* Add option to choose booster in scikit interface: complete docstring.
* Fix XGBClassifier to work with booster option
* Added test case for gblinear booster
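A minimal sketch of selecting the booster through the scikit-learn interface, on synthetic data:

```python
import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(100, 3)
y = np.random.randint(2, size=100)

# booster defaults to 'gbtree'; a linear booster can be selected instead.
clf = XGBClassifier(booster='gblinear')
clf.fit(X, y)
```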
* Add prediction of feature contributions (usage sketch below)
This implements the idea described at http://blog.datadive.net/interpreting-random-forests/,
which tries to give insight into how a prediction is composed of its feature contributions
and a bias.
* Support multi-class models
* Calculate learning_rate per-tree instead of using the one from the first tree
* Do not rely on node.base_weight * learning_rate having the same value as the node mean value (aka leaf value, if it were a leaf); instead, calculate them lazily on the fly
* Add simple test for contributions feature
* Check against param.num_nodes instead of checking for non-zero length
* Loop over all roots instead of only the first
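A small check, on synthetic data, of the decomposition described above: the per-feature contributions plus the bias column add back up to the raw margin prediction:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 4)
y = np.random.rand(100)
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({'objective': 'reg:linear'}, dtrain, num_boost_round=10)

contribs = bst.predict(dtrain, pred_contribs=True)  # per-feature terms + bias column
margin = bst.predict(dtrain, output_margin=True)
print(np.allclose(contribs.sum(axis=1), margin))    # True (up to float tolerance)
```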
The verbose_eval docs claim it will log the last iteration (http://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.train), and this is also consistent with the behavior in 0.4. Not a huge deal, but I found it handy to see the last iteration's result because my logging period is usually large.
This doesn't address logging the last stage found by early_stopping (as noted in the docs), as I'm not sure how to do that.
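For reference, a sketch of the kind of call affected: with an integer verbose_eval, results are printed every that-many rounds, and the change above is meant to ensure the final iteration is printed even when it falls off-period:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 4)
y = np.random.rand(200)
dtrain = xgb.DMatrix(X, label=y)

# Prints the evaluation result every 50 rounds; the final iteration
# (round 94 here) should now be logged too, even though it is off-period.
bst = xgb.train({'objective': 'reg:linear'}, dtrain, num_boost_round=95,
                evals=[(dtrain, 'train')], verbose_eval=50)
```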
* A fix for compatibility with Python 2.6
The syntax `{n: self.attr(n) for n in attr_names}` is not valid in Python 2.6
* Update core.py
Add a space after the comma
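A tiny illustration of the rewrite this implies (generic names, not the actual core.py code): the dict comprehension is replaced by `dict()` over a generator of pairs, which Python 2.6 accepts:

```python
attr_names = ['alpha', 'beta']
values = {'alpha': 1, 'beta': 2}

# Python 2.7+ only (rejected by the 2.6 parser):
#   picked = {n: values[n] for n in attr_names}

# Python 2.6-compatible equivalent:
picked = dict((n, values[n]) for n in attr_names)
print(picked)  # {'alpha': 1, 'beta': 2}
```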
* Added the max_features parameter to the plot_importance function
* Renamed the max_features parameter to max_num_features for clarity
* Removed an unwanted character in the docstring
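A minimal usage sketch of the renamed parameter (requires matplotlib):

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 20)
y = np.random.rand(200)
bst = xgb.train({'objective': 'reg:linear'}, xgb.DMatrix(X, label=y),
                num_boost_round=10)

# Plot only the ten most important features.
ax = xgb.plot_importance(bst, max_num_features=10)
```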
* Option to shuffle data in mknfolds
* Removed the possibility to run as a standalone test
* Split the function definition over two lines for lint
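A hedged sketch of the new option, assuming it is exposed as a `shuffle` argument on `xgb.cv`:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 4)
y = np.random.rand(200)
dtrain = xgb.DMatrix(X, label=y)

# Shuffle the rows before the folds are built; useful when the data is ordered.
results = xgb.cv({'objective': 'reg:linear'}, dtrain, num_boost_round=10,
                 nfold=5, shuffle=True, seed=0)
```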