- Reduce dependency on dmlc parsers and provide an interface for users to load data by themselves (see the iterator sketch after this list).
- Remove use of threaded iterator and IO queue.
- Remove `page_size`.
- Make sure the number of pages in memory is bounded.
- Make sure the cache cannot be violated.
- Provide an interface for internal algorithms to process data asynchronously.
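A minimal sketch of the shape such a user-driven loading interface can take; the names here (`Page`, `BatchIterator`) are illustrative assumptions, not the actual XGBoost callback API:

```cpp
#include <cstddef>
#include <vector>

// One in-memory page of input data.
struct Page {
  std::vector<float> values;  // dense batch values, row-major
  std::size_t n_rows{0};
  std::size_t n_cols{0};
};

// User-implemented iterator; the DMatrix drives it, pulling one page at
// a time, so the number of pages resident in memory stays bounded.
class BatchIterator {
 public:
  virtual ~BatchIterator() = default;
  virtual bool Next(Page* out) = 0;  // fill the next page; false when exhausted
  virtual void Reset() = 0;          // rewind for the next iteration/epoch
};
```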
* Re-implement ROC-AUC.
* Binary
* MultiClass
* LTR
* Add documentation.
This PR resolves a few issues:
- Define a value for when the dataset is invalid, which can happen with an
empty dataset, or when the dataset contains only positive or only negative
samples (see the sketch after this list).
- Define ROC-AUC for multi-class classification.
- Define the weighted average value for the distributed setting.
- A correct implementation for the learning-to-rank task. The previous
implementation was just binary classification with averaging across groups,
which doesn't measure ordered learning to rank.
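A minimal sketch of the binary case, assuming the usual conventions: trapezoidal integration over the weighted (FP, TP) curve with ties grouped per threshold, and 0.5 as the defined fallback for an invalid dataset (the exact fallback value is an assumption here):

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Weighted binary ROC-AUC; returns 0.5 when the data are degenerate
// (empty, or containing only positive / only negative samples).
double BinaryAUC(std::vector<float> const& predt, std::vector<float> const& label,
                 std::vector<float> const& weight) {
  std::vector<std::size_t> idx(predt.size());
  for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
  // Sweep thresholds from the highest score to the lowest.
  std::sort(idx.begin(), idx.end(),
            [&](std::size_t a, std::size_t b) { return predt[a] > predt[b]; });

  double tp = 0, fp = 0, prev_tp = 0, prev_fp = 0, area = 0;
  float prev_score = std::numeric_limits<float>::quiet_NaN();
  for (std::size_t i : idx) {
    if (predt[i] != prev_score) {  // close the trapezoid at each new threshold
      area += (fp - prev_fp) * (tp + prev_tp) / 2.0;
      prev_fp = fp;
      prev_tp = tp;
      prev_score = predt[i];
    }
    double w = weight.empty() ? 1.0 : weight[i];
    (label[i] > 0.5f ? tp : fp) += w;
  }
  area += (fp - prev_fp) * (tp + prev_tp) / 2.0;
  if (tp == 0 || fp == 0) return 0.5;  // invalid dataset: defined fallback
  return area / (tp * fp);             // normalize by total pos * total neg weight
}
```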
* Initial support for distributed LTR using dask.
* Support `qid` in libxgboost.
* Refactor `predict`, `n_features_in_`, and `best_[score/iteration/ntree_limit]`
to avoid duplicated code.
* Define `DaskXGBRanker`.
The dask ranker doesn't support the group structure; instead it uses query IDs
and converts them to the group pointer internally (see the sketch after this list).
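A sketch of that conversion, assuming rows belonging to the same query arrive contiguously (the helper name is illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Build CSR-style group offsets from sorted query IDs.
// e.g. qid = {7, 7, 7, 9, 9, 11} -> group_ptr = {0, 3, 5, 6}
std::vector<std::size_t> QidToGroupPtr(std::vector<std::uint32_t> const& qid) {
  std::vector<std::size_t> group_ptr{0};
  for (std::size_t i = 1; i < qid.size(); ++i) {
    if (qid[i] != qid[i - 1]) {
      group_ptr.push_back(i);  // a new query starts at row i
    }
  }
  if (!qid.empty()) {
    group_ptr.push_back(qid.size());  // closing offset
  }
  return group_ptr;
}
```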
* Make external memory data partitioning deterministic.
* Change the meaning of `page_size` from bytes to number of rows.
* Design a data pool.
* Note for external memory.
* Enable unity build on Windows CI.
* Force garbage collection in tests.
* Fix some endianness issues
* Use dmlc::ByteSwap() to simplify code
* Fix lint check
* [CI] Add test for s390x
* Download latest CMake on s390x
* Fix a bug in my code
* Save magic number in dmatrix with byteswap on big-endian machine
* Save version in binary with byteswap on big-endian machine
* Load scalars with byteswap in MetaInfo (see the sketch after this list)
* Add a debugging message
* Handle arrays correctly when byteswapping
* EOF can also be 255
* Handle magic number in MetaInfo carefully
* Skip Tree.Load test for big-endian, since the test manually builds little-endian binary model
* Handle missing packages in Python tests
* Don't use boto3 in model compatibility tests
* Add s390 Docker file for local testing
* Add model compatibility tests
* Add R compatibility test
* Revert "Add R compatibility test"
This reverts commit c2d2bdcb7dbae133cbb927fcd20f7e83ee2b18a8.
Co-authored-by: Qi Zhang <q.zhang@ibm.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
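The recurring pattern behind these byteswap commits, sketched under the assumption that binaries are always written little-endian; `dmlc::ByteSwap()` and `DMLC_IO_NO_ENDIAN_SWAP` come from `dmlc/endian.h`, while the helper name is illustrative:

```cpp
#include <dmlc/endian.h>
#include <dmlc/io.h>
#include <cstdint>

// Read one little-endian scalar, swapping on big-endian hosts (e.g. s390x).
bool LoadScalar(dmlc::Stream* fi, std::uint64_t* out) {
  if (fi->Read(out, sizeof(*out)) != sizeof(*out)) {
    return false;  // truncated stream
  }
#if !DMLC_IO_NO_ENDIAN_SWAP
  // Host byte order differs from the on-disk order: swap one element.
  dmlc::ByteSwap(out, sizeof(*out), 1);
#endif
  return true;
}
```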
* Add thread local return entry for DMatrix.
* Save feature name and feature type in binary file.
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
* Group aware GPU weighted sketching.
* Distribute group weights to each data point (see the sketch after this list).
* Relax the test.
* Validate input meta info.
* Fix metainfo copy ctor.
* Define a default dtor for SimpleDMatrix to initialize the default copy ctor,
which is deleted due to the unique_ptr member.
* Remove commented code.
* Remove warning for calling host function (std::max).
* Remove warning for initialization order.
* Remove warning for unused variables.
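Distributing group weights amounts to expanding one weight per group into one weight per row through the group pointer; a sketch (helper name illustrative):

```cpp
#include <cstddef>
#include <vector>

// Expand per-group weights into per-row weights so the (GPU) sketching
// code can treat every sample uniformly.
std::vector<float> GroupWeightsToRows(std::vector<float> const& group_weights,
                                      std::vector<std::size_t> const& group_ptr) {
  if (group_ptr.empty()) return {};
  std::vector<float> row_weights(group_ptr.back());
  for (std::size_t g = 0; g + 1 < group_ptr.size(); ++g) {
    for (std::size_t i = group_ptr[g]; i < group_ptr[g + 1]; ++i) {
      row_weights[i] = group_weights[g];  // every row inherits its group's weight
    }
  }
  return row_weights;
}
```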
* [WIP] Add lower and upper bounds on the label for survival analysis
* Update test MetaInfo.SaveLoadBinary to account for extra two fields
* Don't clear qids_ for version 2 of MetaInfo
* Add SetInfo() and GetInfo() method for lower and upper bounds
* Changes to AFT
* Add parameter class for AFT; use enums to represent distribution and event type
* Add AFT metric (see the sketch after this list)
* Change negative gradient to gradient
* Changes to binomial loss
* Changes to overflow handling
* Changes to eps
* Code refactoring
* Code refactoring
* Code refactoring
* Re-factor survival analysis
* Remove aft namespace
* Move function bodies out of AFTNormal and AFTLogistic, to reduce clutter
* Move function bodies out of AFTLoss, to reduce clutter
* Use smart pointer to store AFTDistribution and AFTLoss
* Rename AFTNoiseDistribution enum to AFTDistributionType for clarity
The enum class was not a distribution itself but a distribution type
* Add AFTDistribution::Create() method for convenience
* Changes to the extreme distribution
* Changes to the extreme distribution
* Changes to the extreme distribution
* Changes to the extreme distribution
* Changes to the left-censored case
* Delete cout debugging output
* Changes to x, mu, and sd; code refactoring
* Changes to print statements
* Changes to the Hessian formula for the censored and uncensored cases
* Changes to variable names and pow
* Changes to the logistic PDF
* Changes to parameters
* Expose lower and upper bound labels to R package
* Use example weights; normalize log likelihood metric
* Changes to CHECK macros
* Change the logistic Hessian to the standard formula
* Changes to the logistic formula
* Comply with coding style guideline
* Revert Rabit submodule
* Revert dmlc-core submodule
* Comply with coding style guideline (clang-tidy)
* Fix an error in AFTLoss::Gradient()
* Add missing files to amalgamation
* Address @RAMitchell's comment: minimize future change in MetaInfo interface
* Fix lint
* Fix compilation error on 32-bit target, when size_t == bst_uint
* Allocate sufficient memory to hold extra label info
* Use OpenMP to speed up
* Fix compilation on Windows
* Address reviewer's feedback
* Add unit tests for probability distributions
* Make Metric subclass of Configurable
* Address reviewer's feedback: Configure() AFT metric
* Add a dummy test for AFT metric configuration
* Complete AFT configuration test; remove debugging print
* Rename AFT parameters
* Clarify test comment
* Add a dummy test for AFT loss for uncensored case
* Fix a bug in AFT loss for uncensored labels
* Complete unit test for AFT loss metric
* Simplify unit tests for AFT metric
* Add unit test to verify aggregate output from AFT metric
* Use EXPECT_* instead of ASSERT_*, so that we run all unit tests
* Use aft_loss_param when serializing AFTObj
This is to be consistent with AFT metric
* Add unit tests for AFT Objective
* Fix OpenMP bug; clarify semantics for shared variables used in OpenMP loops
* Add comments
* Remove AFT prefix from probability distribution; put probability distribution in separate source file
* Add comments
* Define kPI and kEulerMascheroni in probability_distribution.h
* Add probability_distribution.cc to amalgamation
* Remove unnecessary diff
* Address reviewer's feedback: define variables where they're used
* Eliminate all INFs and NANs from AFT loss and gradient
* Add demo
* Add tutorial
* Fix lint
* Use 'survival:aft' to be consistent with 'survival:cox'
* Move sample data to demo/data
* Add visual demo with 1D toy data
* Add Python tests
Co-authored-by: Philip Cho <chohyu01@cs.washington.edu>
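For orientation, the AFT metric is a negative log likelihood over possibly censored labels. A simplified sketch for the normal distribution, following the standard AFT formulation rather than the exact XGBoost kernel (the clamping of INFs/NANs mentioned above is omitted):

```cpp
#include <cmath>

// Negative log likelihood of one observation with label bounds
// [lower, upper]: uncensored when lower == upper, right-censored when
// upper is +inf, left/interval-censored otherwise. pred is the model
// output on the log scale; sigma is the distribution scale.
double AftNegLogLik(double pred, double lower, double upper, double sigma) {
  constexpr double kPI = 3.14159265358979323846;
  auto pdf = [&](double z) {  // standard normal density
    return std::exp(-0.5 * z * z) / std::sqrt(2.0 * kPI);
  };
  auto cdf = [&](double z) {  // standard normal CDF
    return 0.5 * (1.0 + std::erf(z / std::sqrt(2.0)));
  };
  if (lower == upper) {  // uncensored: use the density of log(y)
    double z = (std::log(lower) - pred) / sigma;
    return -std::log(pdf(z) / (sigma * lower));
  }
  // Censored: probability mass between the transformed bounds.
  double cdf_u =
      std::isinf(upper) ? 1.0 : cdf((std::log(upper) - pred) / sigma);
  double cdf_l = lower <= 0.0 ? 0.0 : cdf((std::log(lower) - pred) / sigma);
  return -std::log(cdf_u - cdf_l);
}
```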
* Turn xgboost::DataType into C++11 enum class
* New binary serialization format for DMatrix::MetaInfo
* Fix clang-tidy
* Fix c++ test
* Implement the new format proposal (see the sketch after this list)
* Move helper functions to anonymous namespace; remove unneeded field
* Fix lint
* Add shape.
* Keep only roundtrip test.
* Fix test.
* Various fixes
* Update data.cc
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
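The gist of the proposal, sketched as a self-describing field record of name, dtype, and shape followed by the raw values; the exact on-disk layout and the helper name are assumptions:

```cpp
#include <dmlc/io.h>
#include <cstdint>
#include <string>
#include <vector>

// Write one metadata field so a reader can validate the name and type
// and recover the shape before touching the payload.
void SaveField(dmlc::Stream* fo, std::string const& name,
               std::uint8_t dtype, std::uint64_t n_elems,
               std::vector<float> const& values) {
  fo->Write(name);                       // length-prefixed field name
  fo->Write(&dtype, sizeof(dtype));      // data type tag as a byte
  fo->Write(&n_elems, sizeof(n_elems));  // shape of the field
  fo->Write(values.data(), values.size() * sizeof(float));
}
```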
* Fix syncing DMatrix columns.
* notes for tree method.
* Enable feature validation for all interfaces except for JVM.
* Better tests for boosting from predictions.
* Disable validation on JVM.
* Extract interaction constraints from the split evaluator.
The reason for doing so is mostly model IO, where num_feature and interaction_constraints are copied inside the split evaluator. Also, the interaction constraint is by itself a feature selector, acting like the column sampler, so it's inefficient to bury it deep in the evaluator chain. Lastly, removing yet another copied parameter is a win.
* Enable interaction constraints for the approx tree method.
Now that the implementation is split out from the evaluator class, it can also be enabled for the approx method.
* Remove obsolete code in colmaker.
These code paths were never documented nor actually used in the real world, and there isn't a single test covering them.
* Unify the types used for row and column indices.
As input datasets march toward billions of rows, incorrect use of int is subject to overflow, and signed integer overflow is undefined behaviour. This PR starts the process of unifying the index types to unsigned integers. There are optimizations that can exploit this undefined behaviour, but after some testing I don't see them being beneficial to XGBoost. (See the illustration after this paragraph.)
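A small illustration of the hazard being removed; the types here are generic, not XGBoost's index aliases:

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
  // A 32-bit signed row counter that has already reached 2^31 - 1.
  std::int32_t narrow = std::numeric_limits<std::int32_t>::max();
  std::uint64_t wide = std::numeric_limits<std::int32_t>::max();
  // ++narrow;  // signed overflow: undefined behaviour
  ++wide;       // well-defined: 2147483648
  std::cout << wide << '\n';
  (void)narrow;
  return 0;
}
```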
This makes GPU Hist robust in distributed environments, as some workers might not
be associated with any data in either training or evaluation.
* Disable rabit mock test for now: See #5012 .
* Disable dask-cudf test at prediction for now: See #5003
* Launch the dask job on all workers even though some might not have any data.
* Check 0 rows in elementwise evaluation metrics.
Using AUC and AUC-PR still throws an error. See #4663 for a robust fix.
* Add tests for edge cases.
* Add a `LaunchKernel` wrapper handling zero-sized grids (see the sketch after this list).
* Move some parts of allreducer into a cu file.
* Don't validate feature names when the booster is empty.
* Sync number of columns in DMatrix.
num_feature is required to be the same across all workers in data-split mode.
* Filtering in the dask interface now by default syncs every booster that's not
empty, instead of using rank 0.
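The wrapper referenced above only needs an early return for empty grids, since launching a zero-sized CUDA grid is an error and a worker without data produces exactly that. A host-side sketch with the actual kernel launch abstracted behind a callable (names illustrative):

```cpp
#include <cstddef>
#include <utility>

// Skip the launch entirely when the grid would be empty; otherwise
// forward the arguments to the real (CUDA) launcher.
template <typename Launcher, typename... Args>
void LaunchKernel(std::size_t n_blocks, Launcher&& launch, Args&&... args) {
  if (n_blocks == 0) {
    return;  // this worker has no data for the current phase
  }
  std::forward<Launcher>(launch)(std::forward<Args>(args)...);
}
```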
* Fix Jenkins' GPU tests.
* Install dask-cuda from source in Jenkins' test.
Now all tests are actually running.
* Restore GPU Hist tree synchronization test.
* Check UUID of running devices.
The check is only performed on CUDA versions >= 10.x, as 9.x doesn't have the UUID field.
* Fix CMake policy and project variables.
Use xgboost_SOURCE_DIR uniformly, add policy for CMake >= 3.13.
* Fix copying data to CPU
* Fix race condition in cpu predictor.
* Fix duplicated DMatrix construction.
* Don't download extra nccl in CI script.
* Use CMake config file for representing version.
* Generate c and Python version file with CMake.
The generated file is written into the source tree, but unless XGBoost upgrades
its version there will be no actual modification. This retains compatibility
with the Makefiles for R.
* Add the XGBoost version to the DMatrix binaries.
* Simplify prefetch detection in CMakeLists.txt