* use gain for sklearn feature_importances_
`gain` is a better feature importance criterion than the currently used `weight`
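A minimal sketch of the intended usage; `importance_type` is the keyword this change adds to the wrapper (see the next commit):

```python
from sklearn.datasets import load_iris
from xgboost import XGBClassifier

X, y = load_iris(return_X_y=True)
# importance_type="gain" makes feature_importances_ report average gain
# per feature instead of raw split counts ("weight").
clf = XGBClassifier(importance_type="gain").fit(X, y)
print(clf.feature_importances_)
```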
* added importance_type to class
* fixed test
* white space
* fix variable name
* fix deprecation warning
* fix exp array
* white spaces
* Fix #3747: Add coef_ and intercept_ as properties of sklearn wrapper
Scikit-learn expects linear learners to expose `coef_` and `intercept_`
as properties.
Closes #3747.
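A minimal sketch of how the two properties can be derived from a fitted `gblinear` booster; the JSON-dump parsing is one plausible reading of the change, not necessarily the merged code verbatim:

```python
import json
import numpy as np

class LinearWrapperSketch:
    """Hypothetical host class; only the two properties matter here."""

    def get_booster(self):
        raise NotImplementedError  # returns the underlying Booster

    @property
    def coef_(self):
        # gblinear's JSON dump exposes the feature weights directly.
        dump = json.loads(self.get_booster().get_dump(dump_format="json")[0])
        return np.array(dump["weight"])

    @property
    def intercept_(self):
        # The same dump carries the bias term.
        dump = json.loads(self.get_booster().get_dump(dump_format="json")[0])
        return np.array(dump["bias"])
```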
* Fix lint
The `save_model()` and `load_model()` methods only save the part of the model
that is common to all language interfaces and do not preserve Python-specific
attributes, such as `feature_names`. More crucially, the label encoder is not
preserved either; the scikit-learn wrapper needs it, since labels may be
strings.
Fix: Explicitly recommend pickling as the way to save scikit-learn model
objects.
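A short example of the recommended approach (the file name is arbitrary):

```python
import pickle

from sklearn.datasets import load_iris
from xgboost import XGBClassifier

X, y = load_iris(return_X_y=True)
clf = XGBClassifier().fit(X, y)

# Pickling keeps Python-side state (feature names, the label encoder
# for string targets) that save_model()/load_model() would drop.
with open("clf.pkl", "wb") as fh:
    pickle.dump(clf, fh)
with open("clf.pkl", "rb") as fh:
    restored = pickle.load(fh)
```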
* Fix #3397: early_stop callback does not maximize metrics of the form NDCG@n-
The early stopping callback splits the metric name on the '-' character,
which interferes with metrics of the form NDCG@n-. As a result, XGBoost
tries to minimize NDCG@n- when it should be maximized instead.
Fix: specify maxsplit=1.
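For illustration, how the split behaves on such a metric name (the name itself is just an example):

```python
metric = "test-ndcg@2-"
metric.split("-")     # ['test', 'ndcg@2', ''] -- trailing '-' is lost
metric.split("-", 1)  # ['test', 'ndcg@2-']    -- maximize marker survives
```

The positional form also works on Python 2, where `str.split` rejects keyword arguments.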
* Python 2.x compatibility fix
* Add scikit-learn tests
The goal is to pass scikit-learn's check_estimator() for XGBClassifier,
XGBRegressor, and XGBRanker. Passing it entirely is actually not possible,
since check_estimator() assumes that NaN is disallowed, whereas XGBoost
accepts NaN as a missing value. However, it is always a good idea to add
some checks inspired by check_estimator().
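A sketch of the kind of borrowed check, using only standard scikit-learn utilities (not the test suite verbatim):

```python
from sklearn.base import clone
from xgboost import XGBClassifier, XGBRanker, XGBRegressor

# check_estimator() itself rejects NaN inputs, which XGBoost accepts as
# missing values, so individual conventions are asserted instead: here,
# that each estimator is fully reconstructible from get_params().
for est in (XGBClassifier(), XGBRegressor(), XGBRanker()):
    assert est.get_params() == clone(est).get_params()
```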
* Fix lint
* Fix lint
* Fix #3648: XGBClassifier.predict() should return margin scores when output_margin=True
* Fix tests to reflect correct implementation of XGBClassifier.predict(output_margin=True)
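A sketch of the fixed behavior:

```python
from sklearn.datasets import load_breast_cancer
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = XGBClassifier(n_estimators=10).fit(X, y)

labels = clf.predict(X)                       # class labels, as before
margins = clf.predict(X, output_margin=True)  # raw untransformed scores
```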
* Fix flaky test test_with_sklearn.test_sklearn_api_gblinear
* Add XGBRanker to Python API doc
* Show inherited members of XGBRegressor in API doc, since XGBRegressor uses default methods from XGBModel
* Add table of contents to Python API doc
* Skip JVM doc download if not available
* Show inherited members for XGBRegressor and XGBRanker
* Expose XGBRanker to Python XGBoost module directory
* Add docstring to XGBRegressor.predict() and XGBRanker.predict()
* Fix rendering errors in Python docstrings
* Fix lint
* added XGBRanker
* fixed predict method and ranking test
* reformatted code in accordance with pep8
* fixed lint error
* fixed docstring and added checks on objective
* added ranking demo for python
* fixed suffix in rank.py
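A toy version of the ranking demo, assuming the `fit(X, y, group)` signature introduced here (synthetic data):

```python
import numpy as np
from xgboost import XGBRanker

rng = np.random.RandomState(0)
X = rng.rand(7, 5)
y = np.array([3, 2, 1, 0, 2, 1, 0])  # relevance label per document
group = [4, 3]                       # documents per query, in order

ranker = XGBRanker(objective="rank:pairwise", n_estimators=10)
ranker.fit(X, y, group)
scores = ranker.predict(X)           # higher score = ranked earlier
```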
* Revert "Fix #3485, #3540: Don't use dropout for predicting test sets (#3556)"
This reverts commit 44811f233071c5805d70c287abd22b155b732727.
* Document behavior of predict() for DART booster
* Add notice to parameter.rst
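The documented behavior, as a sketch on synthetic data:

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
dtrain = xgb.DMatrix(rng.rand(100, 5), label=rng.randint(2, size=100))

num_round = 50
bst = xgb.train({"booster": "dart", "rate_drop": 0.1}, dtrain, num_round)

# With the revert, predict() performs dropout again, which is only
# meaningful on training data; pass a nonzero ntree_limit to evaluate
# all trees deterministically on new data.
dtest = xgb.DMatrix(rng.rand(10, 5))
preds = bst.predict(dtest, ntree_limit=num_round)
```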
* Fix bug of applying list(x) when x is a string
list('abcdcba') yields ['a', 'b', 'c', 'd', 'c', 'b', 'a'], exploding the string into characters
* Allow feature_names/feature_types to be of any type
If feature_names/feature_types is an iterable other than a string (e.g. a tuple or list), convert the value to a list; otherwise, construct a single-element list from the value.
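A sketch of that conversion rule (not the merged code verbatim):

```python
try:                                      # Python 3
    from collections.abc import Iterable
except ImportError:                       # Python 2
    from collections import Iterable

def as_list(value):
    # Strings are iterable but must not be exploded into characters.
    if isinstance(value, Iterable) and not isinstance(value, str):
        return list(value)
    return [value]

as_list(("f0", "f1"))  # ['f0', 'f1']
as_list("abcdcba")     # ['abcdcba'], not ['a', 'b', 'c', 'd', 'c', 'b', 'a']
```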
* Delete excess whitespace
* Fix whitespace to pass lint
* Add `'total_gain'` and `'total_cover'` as possible `importance_type`
arguments to `Booster.get_score` in the Python package.
`get_score` already accepts a `'gain'` argument, which returns each
feature's average gain over all of its splits. `'total_gain'` does the
same, but returns a total rather than an average. This seems more
intuitively meaningful, and also matches the behavior of the R package's
`xgb.importance` function.
I also added an analogous `'total_cover'` option for consistency.
This should resolve #3484.
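Usage sketch on a toy booster:

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
dtrain = xgb.DMatrix(rng.rand(100, 4), label=rng.randint(2, size=100))
bst = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)

bst.get_score(importance_type="gain")         # average gain per split
bst.get_score(importance_type="total_gain")   # gain summed over all splits
bst.get_score(importance_type="total_cover")  # cover summed over all splits
```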
* Improved library loading a bit
* Fixed indentation.
* Fixes according to the discussion
* Moved the comment to a separate line.
* specified exception type
* allow arbitrary cross validation fold indices
- use the train/test indices passed to the `folds` parameter in `training.cv`
- update docstring
* add tests for arbitrary fold indices
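A sketch of the extended `folds` argument, using hand-built (train, test) index pairs:

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
dtrain = xgb.DMatrix(rng.rand(100, 4), label=rng.randint(2, size=100))

# Arbitrary index pairs now work, not only sklearn splitter objects.
folds = [
    (np.arange(0, 80), np.arange(80, 100)),
    (np.arange(20, 100), np.arange(0, 20)),
]
res = xgb.cv({"objective": "binary:logistic"}, dtrain,
             num_boost_round=10, folds=folds)
```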
* Add option to use weights when evaluating metrics in validation sets
* Add test for validation-set weights functionality
* simplify case with no weights for test sets
* fix lint issues
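A sketch of the new option, assuming it is exposed as a `sample_weight_eval_set` keyword on the sklearn wrapper's `fit()` (the exact name is an assumption):

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.RandomState(0)
X, y = rng.rand(100, 4), rng.randint(2, size=100)
w = rng.rand(100)

clf = XGBClassifier(n_estimators=10)
# One weight array per entry in eval_set (assumed keyword, see above),
# so validation metrics are weighted the same way training is.
clf.fit(X, y, sample_weight=w,
        eval_set=[(X, y)], sample_weight_eval_set=[w],
        eval_metric="logloss", verbose=False)
```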