This PR replaces the original RABIT implementation with a new one that has already been partially merged into XGBoost. The new implementation features:
- Federated learning for both CPU and GPU.
- NCCL.
- More data types.
- A unified interface for all the underlying implementations.
- Improved timeout handling for both tracker and workers.
- Exhaustive tests with metrics (fixed a couple of bugs along the way).
- A reusable tracker for Python and JVM packages.
- Add `numBoostedRound` to the JVM packages.
- Remove the RABIT checkpoint version.
- Change the starting version of training continuation in the JVM. [breaking]
- Redefine the checkpoint version policy in the JVM package. [breaking]
- Rename the Python checkpoint callback parameter. [breaking]
- Unify the checkpoint policy between Python and JVM.
---------
Co-authored-by: Stephan T. Lavavej <stl@nuwen.net>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
Co-authored-by: Joe <25804777+ByteSizedJoe@users.noreply.github.com>
- Use the `linalg::Matrix` for storing gradients.
- New API for the custom objective.
- Custom objective for multi-class/multi-target is now required to return the correct shape.
- Custom objective for Python can accept arrays with any strides (row-major or column-major).
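As a rough illustration of what a custom objective computes, here is a sketch against the xgboost4j `IObjective` interface; the interface and `getGradient` signature are taken from the existing JVM binding, not from this change, and the objective itself (squared error) is only an example:

```java
import java.util.ArrayList;
import java.util.List;
import ml.dmlc.xgboost4j.java.DMatrix;
import ml.dmlc.xgboost4j.java.IObjective;
import ml.dmlc.xgboost4j.java.XGBoostError;

// Example squared-error objective. For multi-class/multi-target models the
// new requirement is that the gradient carries one entry per (row, output).
public class SquaredErrorObj implements IObjective {
  @Override
  public List<float[]> getGradient(float[][] predicts, DMatrix dtrain) {
    try {
      float[] labels = dtrain.getLabel();
      float[] grad = new float[predicts.length];
      float[] hess = new float[predicts.length];
      for (int i = 0; i < predicts.length; ++i) {
        grad[i] = predicts[i][0] - labels[i];  // d loss / d pred
        hess[i] = 1.0f;                        // d^2 loss / d pred^2
      }
      List<float[]> out = new ArrayList<>();
      out.add(grad);
      out.add(hess);
      return out;
    } catch (XGBoostError e) {
      throw new RuntimeException(e);
    }
  }
}
```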
* Add parameters to set feature names and feature types.
* Save feature names and feature types to the native JSON model.
* Change the serialization and deserialization format to UBJ.
* Use the array interface for the CSC matrix and align it with the CSR and dense interfaces.
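A hedged sketch of how the feature-info setters might be used from Java; the `setFeatureNames`/`setFeatureTypes` names follow the xgboost4j `DMatrix` API, and the dense constructor with an explicit `missing` value is assumed to be available:

```java
import ml.dmlc.xgboost4j.java.DMatrix;
import ml.dmlc.xgboost4j.java.XGBoostError;

public class FeatureInfoExample {
  public static void main(String[] args) throws XGBoostError {
    // Dense 2x2 matrix; NaN marks missing values.
    DMatrix dtrain = new DMatrix(new float[]{1f, 2f, 3f, 4f}, 2, 2, Float.NaN);
    // Assumed setters; the names/types are persisted into the native
    // JSON/UBJ model when the booster is saved.
    dtrain.setFeatureNames(new String[]{"age", "income"});
    dtrain.setFeatureTypes(new String[]{"int", "float"});
  }
}
```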
- Fix nthread issue in the R package DMatrix.
- Unify the behavior of handling `missing` across the R, Python, Java, and Scala DMatrix and other inputs.
- Expose `num_non_missing` to the JVM interface.
- Deprecate old CSR and CSC constructors.
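A sketch of the unified `missing` handling for a CSR input; the exact constructor signature (explicit `missing` value plus thread count) is an assumption based on the deprecation note above:

```java
import ml.dmlc.xgboost4j.java.DMatrix;
import ml.dmlc.xgboost4j.java.XGBoostError;

public class CsrMissingExample {
  public static void main(String[] args) throws XGBoostError {
    // 2x3 CSR matrix: row 0 -> {0: 1.0, 2: 2.0}, row 1 -> {1: 3.0}
    long[] rowHeaders = {0, 2, 3};
    int[] colIndices = {0, 2, 1};
    float[] data = {1.0f, 2.0f, 3.0f};
    // Assumed new-style constructor taking an explicit `missing` value and a
    // thread count, replacing the deprecated CSR constructor.
    DMatrix dtrain = new DMatrix(rowHeaders, colIndices, data,
        DMatrix.SparseType.CSR, 3, Float.NaN, 1);
  }
}
```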
This PR removes auto-detection of MUSL-based Linux systems in favor of system properties the user can set to configure a specific path for a native library.
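For example (the property name below is hypothetical, for illustration only; check the xgboost4j documentation for the actual key introduced by this change):

```java
// Point the loader at a specific native library before any XGBoost class
// is touched, instead of relying on MUSL auto-detection.
public class NativeLibConfig {
  public static void main(String[] args) {
    // Hypothetical property name.
    System.setProperty("xgboost4j.native.library.path",
        "/opt/native/libxgboost4j.so");
    // ... load XGBoost classes afterwards so the loader sees the property.
  }
}
```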
The following classes are added to support dataframes in the Java binding:
- `Column` is an abstract type for a single column in tabular data.
- `ColumnBatch` is an abstract type for dataframe.
- `CuDFColumn` is an implementation of `Column` that consumes a cuDF column.
- `CudfColumnBatch` is an implementation of `ColumnBatch` that consumes cuDF dataframe.
- `DeviceQuantileDMatrix` is the interface for quantized data.
The Java implementation mimics the Python interface and uses the `__cuda_array_interface__` protocol for memory indexing. One difference is that in the JVM package the data batch is staged on the host, since Java iterators cannot be reset.
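A minimal, generic sketch of the staging idea (not the actual `ColumnBatch` code): because a `java.util.Iterator` is single-pass, batches are copied into a host-side list that supports repeated passes.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Stage batches from a one-shot iterator so they can be consumed more
// than once, which the native side requires.
public final class HostStagedBatches<T> {
  private final List<T> staged = new ArrayList<>();

  public HostStagedBatches(Iterator<T> source) {
    source.forEachRemaining(staged::add);  // drain the one-shot iterator
  }

  public Iterator<T> iterator() {
    return staged.iterator();  // a fresh iterator for each pass
  }
}
```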
Co-authored-by: jiamingy <jm.yuan@outlook.com>
Fix a bug introduced in 17913713b554d820a8ce94226d854b4a5f1d8bbc (allow loading from byte array).
When loading a model from a stream, only the last buffer read from the input stream was used to construct the model.
This may work for models smaller than 1 MiB (if you are lucky enough to read the whole model in one call), but it will always fail for larger models.
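The fix amounts to accumulating every chunk instead of keeping only the last read, along these lines:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public final class StreamUtil {
  // Accumulate every chunk into one buffer; keeping only the last read
  // was the root cause of the >1 MiB failure described above.
  static byte[] readAll(InputStream in) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[1 << 20];  // 1 MiB chunks
    int n;
    while ((n = in.read(buffer)) != -1) {
      out.write(buffer, 0, n);
    }
    return out.toByteArray();
  }
}
```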
* Add `XGBOOST_RABIT_TRACKER_IP_FOR_TEST` to set the RABIT tracker IP
* Change the Spark and RABIT tracker IP to 127.0.0.1 on GitHub Actions
Co-authored-by: fis <jm.yuan@outlook.com>
* Add the ability to load a booster directly from a byte array
* Fix compiler error
* Move the InputStream-to-byte-buffer conversion from Booster to the XGBoost facade class
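Hypothetical usage, assuming the facade ends up exposing a byte-array overload of `loadModel` (the overload itself is an assumption, not a confirmed signature):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import ml.dmlc.xgboost4j.java.Booster;
import ml.dmlc.xgboost4j.java.XGBoost;

public class LoadFromBytes {
  public static void main(String[] args) throws Exception {
    byte[] raw = Files.readAllBytes(Paths.get("model.bin"));
    // Assumed byte-array overload added by this change.
    Booster booster = XGBoost.loadModel(raw);
  }
}
```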
* [java] Extend the library loader to use both the OS and the CPU architecture.
* Simplify the architecture detection in create_jni.py.
* Tidy up the architecture detection in create_jni.py.
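An illustrative sketch of OS-plus-architecture detection using standard system properties; the actual loader's naming scheme may differ:

```java
import java.util.Locale;

public final class PlatformDetector {
  // Combine OS and CPU architecture so the loader can pick e.g. the
  // "linux/x86_64" versus "linux/aarch64" native build.
  static String platform() {
    String os = System.getProperty("os.name").toLowerCase(Locale.ROOT);
    String arch = System.getProperty("os.arch").toLowerCase(Locale.ROOT);
    if (os.contains("win")) os = "windows";
    else if (os.contains("mac")) os = "macos";
    else if (os.contains("linux")) os = "linux";
    return os + "/" + arch;
  }
}
```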
* Add getNumFeature to the Java API
* Add getNumFeature to the Scala API
* Add unit tests for getNumFeature
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
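Usage is straightforward; this sketch assumes the standard `XGBoost.loadModel` entry point:

```java
import ml.dmlc.xgboost4j.java.Booster;
import ml.dmlc.xgboost4j.java.XGBoost;

public class NumFeatureExample {
  public static void main(String[] args) throws Exception {
    Booster booster = XGBoost.loadModel("model.bin");
    // getNumFeature reports the number of features the model was trained on.
    long numFeature = booster.getNumFeature();
    System.out.println("features: " + numFeature);
  }
}
```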
* [jvm-packages] Add the gpu_hist tree method
* Change the hist updater to grow_quantile_histmaker
* Add GPU scheduling
* Pass the correct parameters to the XGBoost library
* Remove debug info
* Add use.cuda to the POM
* Add CI for gpu_hist for the JVM packages
* Add GPU unit tests
* Use a GPU node to build the JVM packages
* Use nvidia-docker
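A sketch of selecting the new tree method through the plain xgboost4j API (the parameter names follow mainline XGBoost; the data path is a placeholder):

```java
import java.util.HashMap;
import ml.dmlc.xgboost4j.java.Booster;
import ml.dmlc.xgboost4j.java.DMatrix;
import ml.dmlc.xgboost4j.java.XGBoost;

public class GpuHistExample {
  public static void main(String[] args) throws Exception {
    DMatrix dtrain = new DMatrix("train.libsvm");  // path to training data
    HashMap<String, Object> params = new HashMap<>();
    params.put("tree_method", "gpu_hist");  // the tree method added here
    params.put("max_depth", 6);
    params.put("objective", "binary:logistic");
    Booster booster = XGBoost.train(dtrain, params,
        10, new HashMap<String, DMatrix>(), null, null);
  }
}
```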
* Add CLI interface to create_jni.py using argparse
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
* Fix type error
* Validate the number of features.
* Resolve comments
* Add feature size to LabeledPoint and DataBatch
* Pass the feature size to native code
* Move the feature-size validation tests into a separate suite
* Resolve comments
Co-authored-by: fis <jm.yuan@outlook.com>
* Fix syncing DMatrix columns.
* Notes for the tree method.
* Enable feature validation for all interfaces except the JVM.
* Better tests for boosting from predictions.
* Disable validation on JVM.
* Add BigDenseMatrix
* Ability to create a DMatrix with arrays larger than Integer.MAX_VALUE
* Uses sun.misc.Unsafe
* Make the DMatrix test work from a JAR as well
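A sketch of the underlying trick: `sun.misc.Unsafe` provides off-heap memory addressable by `long`, sidestepping Java's `Integer.MAX_VALUE` array-length limit. This is illustrative, not the actual `BigDenseMatrix` code.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// A float buffer indexed by long, backed by off-heap memory.
public final class BigFloatBuffer {
  private static final Unsafe UNSAFE = getUnsafe();
  private final long address;

  BigFloatBuffer(long length) {
    address = UNSAFE.allocateMemory(length * 4L);  // 4 bytes per float
  }

  void set(long index, float value) {
    UNSAFE.putFloat(address + index * 4L, value);
  }

  float get(long index) {
    return UNSAFE.getFloat(address + index * 4L);
  }

  void free() {
    UNSAFE.freeMemory(address);  // no GC for off-heap memory
  }

  private static Unsafe getUnsafe() {
    try {
      // Unsafe.getUnsafe() is restricted, so grab the singleton by reflection.
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      return (Unsafe) f.get(null);
    } catch (ReflectiveOperationException e) {
      throw new AssertionError(e);
    }
  }
}
```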
* Add a public group getter for Java and Scala
* Remove unnecessary param from javadoc
* Fix typo
* Fix another typo
* Add semicolon
* Fix javadoc return statement
* Fix missing return statement
* Add a unit test
* Bump Scala to 2.12, which requires Java 8 and also newer Flink and Akka
* Put the Scala version in the artifactId
* Fix AppVeyor
* Fix a Scaladoc issue that looks like https://github.com/scala/bug/issues/10509
* Fix ci_build
* Update versions in generate_pom.py
* Fix generate_pom.py
* Apache does not yet provide a Spark 2.4.3 distribution built with Scala 2.12, so for now I use a tgz I put on S3
* Upload spark-2.4.3-bin-scala2.12-hadoop2.7.tgz to our own S3
* Update Dockerfile.jvm_cross
* Update Dockerfile.jvm_cross