* Implement tree model dump with a code generator.
* Split up generators.
* Implement graphviz generator.
* Use pattern matching.
* [Breaking] Return a Source in `to_graphviz` instead of Digraph in Python package.
Co-Authored-By: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
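A minimal sketch of the breaking `to_graphviz` change above: the function now returns a `graphviz.Source` instead of a `Digraph`. The training data and parameters here are purely illustrative.

```python
import numpy as np
import xgboost as xgb

# Toy data, for illustration only.
X = np.random.rand(100, 5)
y = np.random.randint(2, size=100)
dtrain = xgb.DMatrix(X, label=y)

bst = xgb.train({"max_depth": 2, "objective": "binary:logistic"},
                dtrain, num_boost_round=3)

# to_graphviz now returns a graphviz.Source rather than a Digraph.
src = xgb.to_graphviz(bst, num_trees=0)
src.render("tree_0")  # renders the first tree to disk (PDF by default)
```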
* Add XGBRanker to Python API doc
* Show inherited members of XGBRegressor in API doc, since XGBRegressor uses default methods from XGBModel
* Add table of contents to Python API doc
* Skip JVM doc download if not available
* Show inherited members for XGBRegressor and XGBRanker
* Expose XGBRanker to Python XGBoost module directory
* Add docstring to XGBRegressor.predict() and XGBRanker.predict()
* Fix rendering errors in Python docstrings
* Fix lint
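A hedged sketch of using the newly exposed XGBRanker from the scikit-learn style interface; the features, relevance labels, group sizes, and hyperparameters are made up for illustration.

```python
import numpy as np
from xgboost import XGBRanker  # now importable directly from the package

# Three queries with 4, 3, and 3 documents respectively (illustrative data).
X = np.random.rand(10, 5)
y = np.random.randint(3, size=10)   # relevance labels per document
group = [4, 3, 3]                   # number of documents in each query

ranker = XGBRanker(n_estimators=10, max_depth=3)
ranker.fit(X, y, group=group)

scores = ranker.predict(X)          # per-document ranking scores
```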
* Add option to choose booster in scikit-learn interface (gbtree by default)
* Add option to choose booster in scikit-learn interface: complete docstring.
* Fix XGBClassifier to work with booster option
* Added test case for gblinear booster
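A short, hedged sketch of the new booster option on the scikit-learn interface: gbtree remains the default, and gblinear selects the linear booster. Data and hyperparameters are illustrative.

```python
import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(200, 10)
y = np.random.randint(2, size=200)

clf_tree = XGBClassifier()                       # booster='gbtree' by default
clf_linear = XGBClassifier(booster='gblinear')   # linear booster instead

clf_tree.fit(X, y)
clf_linear.fit(X, y)
print(clf_linear.predict_proba(X[:5]))
```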
* Added the max_features parameter to the plot_importance function.
* Renamed the max_features parameter to max_num_features for clarity.
* Removed an unwanted character from a docstring.
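A hedged sketch of the renamed max_num_features parameter on plot_importance, which limits how many features are drawn; the model and data below are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
import xgboost as xgb

X = np.random.rand(300, 20)
y = np.random.randint(2, size=300)
bst = xgb.train({"objective": "binary:logistic"},
                xgb.DMatrix(X, label=y), num_boost_round=10)

# Show only the ten highest-ranked features.
xgb.plot_importance(bst, max_num_features=10)
plt.show()
```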
* Fix various typos
* Add override to functions that are overridden
gcc warns about functions that override a virtual function but are not
marked with `override`. This fixes those warnings.
* Use bst_float consistently
Use bst_float for all variables that hold weights, leaf values,
gradients, hessians, gain, loss_chg, predictions, base_margin,
and feature values.
Where accumulation (e.g., summing many terms) can produce larger
values, double is used instead.
This keeps type conversions to a minimum and reduces loss of
precision.
* Added a new function to calculate additional feature importance measures
* Added the ability to plot these other feature importance measures
* Changed the plotting default to fscore
* Added information on importance_type to the boilerplate comment
* Updated the text of the error message
* Qualified a call with the module name to fix it
* Added a unit test for feature importances
* Style fixes
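A hedged sketch of the additional feature importance measures mentioned above: get_score and plot_importance accept an importance_type argument, with the fscore-based 'weight' measure remaining the plotting default. Model and data are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
import xgboost as xgb

X = np.random.rand(300, 8)
y = np.random.randint(2, size=300)
bst = xgb.train({"objective": "binary:logistic"},
                xgb.DMatrix(X, label=y), num_boost_round=10)

# 'weight' (the default fscore count), 'gain', and 'cover' importances.
print(bst.get_score(importance_type='weight'))
print(bst.get_score(importance_type='gain'))
print(bst.get_score(importance_type='cover'))

# Plot a non-default measure explicitly.
xgb.plot_importance(bst, importance_type='gain')
plt.show()
```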