* Configuration for init estimation.
* Check whether the model needs configuration based on the const attribute `ModelFitted` instead of a mutable state.
* Add parameter `boost_from_average` to tell whether the user has specified a base score (see the sketch below).
* Add tests.
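A minimal sketch of the user-facing effect, assuming a regression objective; the data is synthetic and `boost_from_average` itself remains an internal flag:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(100, 4), np.random.rand(100)

# Without base_score, the initial estimation is computed from the
# training labels; specifying base_score turns that estimation off.
auto_init = xgb.train({"objective": "reg:squarederror"}, xgb.DMatrix(X, label=y))
fixed_init = xgb.train(
    {"objective": "reg:squarederror", "base_score": 0.5}, xgb.DMatrix(X, label=y)
)
```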
Support adaptive tree, a feature supported by both sklearn and lightgbm. The tree leaves are recomputed based on the residuals between labels and predictions after construction.
For l1 error, the optimal leaf value is the median (50th percentile).
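A minimal CPU sketch of the recomputation with NumPy; `leaf_ids` (which leaf each row lands in) is a hypothetical input assumed to come from the built tree:

```python
import numpy as np

def recompute_leaves_l1(y, pred, leaf_ids):
    """Replace each leaf's value with the median residual, optimal for l1 error."""
    new_leaf_values = {}
    for leaf in np.unique(leaf_ids):
        rows = leaf_ids == leaf
        new_leaf_values[leaf] = np.median(y[rows] - pred[rows])
    return new_leaf_values
```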
This is marked as experimental support for the following reasons:
- The value is not well defined for distributed training, where local workers might have empty leaves. Right now I just use the original leaf value when computing the average with other workers, which might cause significant errors.
- Some follow-ups are required: the exact tree method, the pruner, and optimization of the quantile function. We also need to calculate the initial estimation.
This is the last PR for removing the OpenMP global variable; a brief user-facing sketch follows the list below.
* Add a context object to `DMatrix`. This bridges `DMatrix` with https://github.com/dmlc/xgboost/issues/7308.
* Require the context to be available at booster construction time.
* Add `n_threads` support for the R CSC DMatrix constructor.
* Remove `omp_get_max_threads` from the R glue code.
* Remove threading utilities that rely on the OpenMP global variable.
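On the Python side the net effect is that thread counts flow from explicit configuration rather than from `omp_get_max_threads`; a minimal sketch:

```python
import numpy as np
import xgboost as xgb

X, y = np.random.rand(64, 8), np.random.rand(64)

# The thread count travels with the DMatrix/booster context instead
# of being read from the OpenMP global state.
dtrain = xgb.DMatrix(X, label=y, nthread=4)
bst = xgb.train({"nthread": 4, "objective": "reg:squarederror"}, dtrain)
```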
* Add a num-target model parameter, which is configured from the input labels (see the sketch after this list).
* Change elementwise metric and indexing for weights.
* Add demo.
* Add tests.
* Add a new ctor to tensor for `initializer_list`.
* Change labels from a host-device vector to a tensor.
* Rename the field from `labels_` to `labels` since it's a public member.
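A sketch of the intended user-facing shape, assuming a build where multi-target is wired through for `reg:squarederror`:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(128, 10)
y = np.random.rand(128, 3)  # three targets: labels are now a 2D tensor

dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=10)
preds = bst.predict(dtrain)  # shape (128, 3), one column per target
```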
* Re-implement ROC-AUC.
* Binary
* MultiClass
* LTR
* Add documentation.
This PR resolves a few issues (an illustrative sketch follows the list):
- Define a value for when the dataset is invalid, which can happen with an empty dataset or one that contains only positive or only negative samples.
- Define ROC-AUC for multi-class classification.
- Define weighted average value for distributed setting.
- A correct implementation for the learning-to-rank task. The previous
implementation was just binary classification AUC averaged across groups,
which doesn't measure ordered learning to rank.
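An illustrative NumPy sketch of the binary weighted case; the value returned for an invalid dataset is a placeholder here, not necessarily what the implementation uses:

```python
import numpy as np

def binary_auc(labels, scores, weights=None):
    """Weighted ROC-AUC: probability that a positive outranks a negative."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    w = np.ones(len(labels)) if weights is None else np.asarray(weights)
    pos, neg = labels == 1, labels == 0
    if not pos.any() or not neg.any():
        return 0.5  # placeholder for the degenerate/invalid case
    # Pairwise comparison with ties counted as half; O(n_pos * n_neg),
    # written for clarity rather than speed.
    diff = scores[pos][:, None] - scores[neg][None, :]
    pair_w = w[pos][:, None] * w[neg][None, :]
    wins = ((diff > 0) + 0.5 * (diff == 0)) * pair_w
    return wins.sum() / pair_w.sum()
```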
* Enable loading models from releases before 1.0.0 trained with `objective='binary:logitraw'`
* Add `binary:logitraw` to the model compatibility testing suite
* Feedback from @trivialfis: Override `ProbToMargin()` for `LogisticRaw` (sketched below)
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
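For context, `base_score` is stored as a probability, so a raw-margin objective has to map it back through the logit; roughly what the override does:

```python
import math

def prob_to_margin(base_score):
    # Invert the sigmoid so the stored probability becomes a raw margin.
    return math.log(base_score / (1.0 - base_score))
```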
* Change the `DefaultEvalMetric` of classification from error to logloss (see the example after this list)
* Change default binary metric in plugin/example/custom_obj.cc
* Set old error metric in Python tests
* Set old error metric in R tests
* Fix missed eval metrics and typos in R tests
* Fix setting eval_metric twice in R tests
* Add a warning for empty `eval_metric` for classification
* Fix Dask tests
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
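Users who want the previous behaviour can request the old metric explicitly:

```python
import xgboost as xgb

# Setting eval_metric restores the old default; leaving it empty now
# selects logloss and emits a warning.
params = {"objective": "binary:logistic", "eval_metric": "error"}
```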
* Add interval accuracy
* De-virtualize AFT functions
* Lint
* Refactor AFT metric using GPU-CPU reducer
* Fix R build
* Fix build on Windows
* Fix copyright header
* Clang-tidy
* Fix crashing demo
* Fix typos in comment; explain GPU ID
* Remove unnecessary #include
* Add C++ test for interval accuracy
* Fix a bug in accuracy metric: use log pred
* Refactor AFT objective using GPU-CPU Transform
* Lint
* Fix lint
* Use Ninja to speed up build
* Use time, not /usr/bin/time
* Add cpu_build worker class, with concurrency = 1
* Use concurrency = 1 only for CUDA build
* concurrency = 1 for clang-tidy
* Address reviewer's feedback
* Update link to AFT paper
* [WIP] Add lower and upper bounds on the label for survival analysis (see the usage sketch after this commit list)
* Update test MetaInfo.SaveLoadBinary to account for extra two fields
* Don't clear qids_ for version 2 of MetaInfo
* Add SetInfo() and GetInfo() method for lower and upper bounds
* Changes to AFT
* Add parameter class for AFT; use enum's to represent distribution and event type
* Add AFT metric
* Change from negative gradient to gradient
* Changes to the binomial loss
* Changes to overflow handling
* Changes to eps
* Code refactoring
* Re-factor survival analysis
* Remove aft namespace
* Move function bodies out of AFTNormal and AFTLogistic, to reduce clutter
* Move function bodies out of AFTLoss, to reduce clutter
* Use smart pointer to store AFTDistribution and AFTLoss
* Rename AFTNoiseDistribution enum to AFTDistributionType for clarity
The enum class was not a distribution itself but a distribution type
* Add AFTDistribution::Create() method for convenience
* Changes to extreme distribution
* Changes to the left-censored case
* Remove debugging cout
* Changes to x, mu, and sd; code refactoring
* Changes to printing
* Changes to the Hessian formula for the censored and uncensored cases
* Changes to variable names and pow
* Changes to the logistic PDF
* Changes to parameters
* Expose lower and upper bound labels to R package
* Use example weights; normalize log likelihood metric
* Changes to CHECK statements
* Change the logistic Hessian to the standard formula
* Changes to the logistic formula
* Comply with coding style guideline
* Revert Rabit submodule
* Revert dmlc-core submodule
* Comply with coding style guideline (clang-tidy)
* Fix an error in AFTLoss::Gradient()
* Add missing files to amalgamation
* Address @RAMitchell's comment: minimize future change in MetaInfo interface
* Fix lint
* Fix compilation error on 32-bit target, when size_t == bst_uint
* Allocate sufficient memory to hold extra label info
* Use OpenMP to speed up
* Fix compilation on Windows
* Address reviewer's feedback
* Add unit tests for probability distributions
* Make Metric subclass of Configurable
* Address reviewer's feedback: Configure() AFT metric
* Add a dummy test for AFT metric configuration
* Complete AFT configuration test; remove debugging print
* Rename AFT parameters
* Clarify test comment
* Add a dummy test for AFT loss for uncensored case
* Fix a bug in AFT loss for uncensored labels
* Complete unit test for AFT loss metric
* Simplify unit tests for AFT metric
* Add unit test to verify aggregate output from AFT metric
* Use EXPECT_* instead of ASSERT_*, so that we run all unit tests
* Use aft_loss_param when serializing AFTObj
This is to be consistent with the AFT metric
* Add unit tests for AFT Objective
* Fix OpenMP bug; clarify semantics for shared variables used in OpenMP loops
* Add comments
* Remove AFT prefix from probability distribution; put probability distribution in separate source file
* Add comments
* Define kPI and kEulerMascheroni in probability_distribution.h
* Add probability_distribution.cc to amalgamation
* Remove unnecessary diff
* Address reviewer's feedback: define variables where they're used
* Eliminate all INFs and NANs from AFT loss and gradient
* Add demo
* Add tutorial
* Fix lint
* Use 'survival:aft' to be consistent with 'survival:cox'
* Move sample data to demo/data
* Add visual demo with 1D toy data
* Add Python tests
Co-authored-by: Philip Cho <chohyu01@cs.washington.edu>
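A minimal usage sketch with synthetic interval-censored data, following the demo and tutorial added here:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y_lower = np.random.rand(100)
y_upper = y_lower + np.random.rand(100)  # interval-censored labels
y_upper[::10] = np.inf                   # right-censored rows

dtrain = xgb.DMatrix(X)
dtrain.set_float_info("label_lower_bound", y_lower)
dtrain.set_float_info("label_upper_bound", y_upper)

params = {
    "objective": "survival:aft",
    "eval_metric": "aft-nloglik",
    "aft_loss_distribution": "normal",  # also: "logistic", "extreme"
    "aft_loss_distribution_scale": 1.0,
}
bst = xgb.train(params, dtrain, num_boost_round=50, evals=[(dtrain, "train")])
```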
- Move the segment sorter to common.
- This is the first of a handful of PRs that split the larger PR #5326.
- It moves this facility to common (from the ranking objective class) so that it can be used for metric computation; an illustrative sketch follows below.
- It also wraps all the bare device pointers in spans.
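An illustrative CPU analogue of the facility (the real sorter runs on device over CSR-style segment boundaries wrapped in spans):

```python
import numpy as np

def segmented_argsort(values, segment_ptr):
    """Sort each segment independently (descending), CSR-style boundaries."""
    idx = np.arange(len(values))
    for beg, end in zip(segment_ptr[:-1], segment_ptr[1:]):
        idx[beg:end] = beg + np.argsort(-values[beg:end])
    return idx

# Two groups: rows [0, 3) and [3, 6)
scores = np.array([0.1, 0.9, 0.5, 0.3, 0.7, 0.2])
print(segmented_argsort(scores, [0, 3, 6]))  # [1 2 0 4 3 5]
```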
* Implementation of the MAP ranking algorithm (sketch below).
- Also applied the necessary suggestions from the earlier ranking PRs.
- Made some performance improvements to the NDCG algorithm as well.
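For reference, MAP over a query group reduces to the average precision of the ranked list; an illustrative sketch:

```python
def average_precision(rels):
    """rels: binary relevance of documents in ranked order, best first."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

print(average_precision([1, 0, 1, 0]))  # (1/1 + 2/3) / 2 = 0.833...
```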
This makes GPU Hist robust in distributed environments, where some workers might not
be associated with any data in either training or evaluation.
* Disable the rabit mock test for now; see #5012.
* Disable the dask-cudf prediction test for now; see #5003.
* Launch the dask job on all workers even though some might not have any data (see the sketch after this list).
* Check for 0 rows in element-wise evaluation metrics.
Using AUC and AUC-PR still throws an error; see #4663 for a robust fix.
* Add tests for edge cases.
* Add a `LaunchKernel` wrapper that handles zero-sized grids.
* Move some parts of the allreducer into a .cu file.
* Don't validate feature names when the booster is empty.
* Sync the number of columns in DMatrix, as `num_feature` is required to be the same across all workers in data-split mode.
* Filtering in the dask interface now by default syncs every booster that's not empty, instead of using rank 0.
* Fix Jenkins' GPU tests.
* Install dask-cuda from source in Jenkins' test.
Now all tests are actually running.
* Restore GPU Hist tree synchronization test.
* Check the UUID of running devices.
The check is only performed on CUDA >= 10.x, as 9.x doesn't have the UUID field.
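A sketch of the dask scenario this hardens, where a worker can end up holding no partition (the scheduler address is a placeholder):

```python
from dask import array as da
from dask.distributed import Client
import xgboost as xgb

client = Client("tcp://scheduler:8786")  # placeholder address

# With fewer chunks than workers, some workers receive no data but
# must still join training (and the rabit allreduce).
X = da.random.random((1000, 10), chunks=(1000, 10))
y = da.random.random(1000, chunks=1000)

dtrain = xgb.dask.DaskDMatrix(client, X, y)
output = xgb.dask.train(client, {"tree_method": "gpu_hist"}, dtrain,
                        num_boost_round=10)
```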
* Fix CMake policy and project variables.
Use `xgboost_SOURCE_DIR` uniformly and add a policy for CMake >= 3.13.
* Fix copying data to CPU
* Fix race condition in cpu predictor.
* Fix duplicated DMatrix construction.
* Don't download an extra NCCL in the CI script.
* Use `UpdateAllowUnknown` for non-model-related parameters.
Model parameters cannot pack an additional boolean value due to the binary IO
format. This commit deals only with non-model-related parameter configuration.
* Add a tidy command-line arg for `use-dmlc-gtest`.