A new parameter `custom_metric` is added to `train` and `cv` to distinguish its behaviour from the old `feval`, which is now deprecated. Unlike `feval`, `custom_metric` receives transformed predictions when a built-in objective is used. This allows cost functions from other libraries such as scikit-learn to be used directly, without re-implementing the link function.
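A minimal sketch of the new behaviour, assuming a binary classification problem with the built-in `binary:logistic` objective (the dataset and the choice of scikit-learn's `log_loss` are illustrative): because the predictions arrive already transformed into probabilities, the metric can be used as-is.

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=512, random_state=0)
dtrain = xgb.DMatrix(X[:400], label=y[:400])
dvalid = xgb.DMatrix(X[400:], label=y[400:])

def sklearn_logloss(predt: np.ndarray, dmat: xgb.DMatrix):
    # With a built-in objective, `predt` arrives already transformed
    # (probabilities here), so no sigmoid needs to be applied by hand.
    return "sk-logloss", log_loss(dmat.get_label(), predt)

booster = xgb.train(
    {"objective": "binary:logistic"},
    dtrain,
    num_boost_round=10,
    evals=[(dvalid, "valid")],
    custom_metric=sklearn_logloss,
)
```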
`eval_metric` and `early_stopping_rounds` in the scikit-learn interface are moved from `fit` to `__init__` and are now saved as part of the scikit-learn model. The old arguments in `fit` are deprecated. The new `eval_metric` in `__init__` has the same new behaviour as `custom_metric`.
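A short sketch of the updated scikit-learn interface (dataset and parameter values are illustrative); both settings now go to the constructor and are saved with the model:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import xgboost as xgb

X, y = make_classification(n_samples=512, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

clf = xgb.XGBClassifier(
    n_estimators=100,
    eval_metric="logloss",         # previously passed to fit()
    early_stopping_rounds=5,       # previously passed to fit()
)
# The settings are stored on the estimator rather than supplied per fit call.
clf.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])
```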
Added more detailed documentation on the behaviour of custom objectives and metrics.
* Implement early stopping with training continuation (see the sketch after this block).
* Add new C API for obtaining boosted rounds.
* Fix an off-by-one error in `save_best`.
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
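A rough sketch of early stopping combined with training continuation, using `num_boosted_rounds()` as the Python-level counterpart of the new C API for obtaining boosted rounds (dataset and parameters are illustrative assumptions):

```python
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=512, random_state=0)
dtrain = xgb.DMatrix(X[:400], label=y[:400])
dvalid = xgb.DMatrix(X[400:], label=y[400:])
params = {"objective": "binary:logistic", "eval_metric": "logloss"}

# Initial training with early stopping.
booster = xgb.train(params, dtrain, num_boost_round=100,
                    evals=[(dvalid, "valid")], early_stopping_rounds=5)
print(booster.num_boosted_rounds())

# Continue training from the existing model; early stopping applies again.
booster = xgb.train(params, dtrain, num_boost_round=100,
                    evals=[(dvalid, "valid")], early_stopping_rounds=5,
                    xgb_model=booster)
print(booster.num_boosted_rounds())
```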
* Do not derive from unittest.TestCase (not needed for pytest)
* assertRaises -> pytest.raises (see the sketch after this list)
* Simplify test_empty_dmatrix with test parametrization
* setUpClass -> setup_class, tearDownClass -> teardown_class
* Don't import unittest; import pytest
* Use plain assert
* Use parametrized tests in more places
* Fix test_gpu_with_sklearn.py
* Put back run_empty_dmatrix_reg / run_empty_dmatrix_cls
* Fix test_eta_decay_gpu_hist
* Add parametrized tests for monotone constraints
* Fix test names
* Remove test parametrization
* Revise test_slice so it is not flaky
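A small before/after sketch of the unittest-to-pytest conversion described above (test names and values are illustrative):

```python
import pytest

# Before (unittest style):
#     class TestFoo(unittest.TestCase):
#         def test_raises(self):
#             with self.assertRaises(ValueError):
#                 int("not a number")

# After: plain functions, pytest.raises, and plain asserts.
def test_raises():
    with pytest.raises(ValueError):
        int("not a number")

# Parametrization replaces hand-written loops over configurations.
@pytest.mark.parametrize("tree_method", ["hist", "approx"])
def test_tree_method_accepted(tree_method):
    assert tree_method in ("hist", "approx")
```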
This PR is meant to end the confusion around `best_ntree_limit` and to unify model slicing. With multi-class models and random forests, asking users to understand how to set `ntree_limit` is difficult and error prone.
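A minimal sketch of slicing a trained booster instead of passing `ntree_limit` (dataset and parameters are illustrative assumptions):

```python
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=256, random_state=0)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)

# Slice out the first five boosting rounds instead of passing ntree_limit.
first_five = booster[:5]
preds = first_five.predict(dtrain)
```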
* Implement the save_best option in early stopping (a sketch follows below).
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
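A brief sketch of `save_best` through the `EarlyStopping` callback (dataset and parameter values are illustrative assumptions); with `save_best=True` the returned model is trimmed to the best iteration:

```python
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=512, random_state=0)
dtrain = xgb.DMatrix(X[:400], label=y[:400])
dvalid = xgb.DMatrix(X[400:], label=y[400:])

early_stop = xgb.callback.EarlyStopping(rounds=5, save_best=True)
booster = xgb.train(
    {"objective": "binary:logistic", "eval_metric": "logloss"},
    dtrain,
    num_boost_round=100,
    evals=[(dvalid, "valid")],
    callbacks=[early_stop],
)
# With save_best=True the returned booster only contains trees up to the
# best iteration, so no ntree_limit bookkeeping is required at prediction time.
print(booster.num_boosted_rounds())
```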