[backport] jvm-packages 1.6.1 (#7849)
* [jvm-packages] move the dmatrix building into rabit context (#7823)
  This fixes the QuantileDeviceDMatrix in a distributed environment.
* [doc] update the jvm tutorial to 1.6.1 [skip ci] (#7834)
* [Breaking][jvm-packages] Use barrier execution mode (#7836)
  With the introduction of the barrier execution mode, we don't need to kill the SparkContext when some XGBoost tasks fail. Instead, Spark will handle the errors for us. So in this PR, the `killSparkContextOnWorkerFailure` parameter is deleted.
* [doc] remove the doc about killing SparkContext [skip ci] (#7840)
* [jvm-package] remove the coalesce in barrier mode (#7846)
* [jvm-packages] Fix model compatibility (#7845)
* Ignore all Java exceptions when looking for Linux musl support (#7844)

Co-authored-by: Bobby Wang <wbo4958@gmail.com>
Co-authored-by: Michael Allman <msa@allman.ms>
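With barrier execution mode (#7836), a failed training task now surfaces as an ordinary exception instead of stopping the SparkContext, so callers can catch it and keep the session alive. A minimal sketch of that pattern, assuming an already-configured `classifier` (an `XGBoostClassifier`) and a training DataFrame `trainDF` — both hypothetical names, not from this commit:

```scala
import scala.util.{Failure, Success, Try}

// With barrier execution mode (1.6.1+), a worker failure propagates as an
// exception from fit() rather than killing the SparkContext, so the caller
// can handle it and reuse the same session:
Try(classifier.fit(trainDF)) match {
  case Success(model) =>
    // training succeeded; use the fitted model
    model
  case Failure(e) =>
    // the SparkContext is still alive here; log and decide whether to retry
    throw e
}
```

This replaces the pre-1.6 pattern where users had to set `killSparkContextOnWorkerFailure` to `false` to get the same behavior.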
@@ -1,5 +1,5 @@
 #############################################
-XGBoost4J-Spark-GPU Tutorial (version 1.6.0+)
+XGBoost4J-Spark-GPU Tutorial (version 1.6.1+)
 #############################################
 
 **XGBoost4J-Spark-GPU** is an open source library aiming to accelerate distributed XGBoost training on Apache Spark cluster from
@@ -220,7 +220,7 @@ application jar is iris-1.0.0.jar
 
 cudf_version=22.02.0
 rapids_version=22.02.0
-xgboost_version=1.6.0
+xgboost_version=1.6.1
 main_class=Iris
 app_jar=iris-1.0.0.jar
 
@@ -16,12 +16,6 @@ This tutorial is to cover the end-to-end process to build a machine learning pip
 * Building a Machine Learning Pipeline with XGBoost4J-Spark
 * Running XGBoost4J-Spark in Production
 
-.. note::
-
-  **SparkContext will be stopped by default when XGBoost training task fails**.
-
-  XGBoost4J-Spark 1.2.0+ exposes a parameter **kill_spark_context_on_worker_failure**. Set **kill_spark_context_on_worker_failure** to **false** so that the SparkContext will not be stopping on training failure. Instead of stopping the SparkContext, XGBoost4J-Spark will throw an exception instead. Users who want to re-use the SparkContext should wrap the training code in a try-catch block.
-
 .. contents::
   :backlinks: none
  :local:
@@ -129,7 +123,7 @@ labels. A DataFrame like this (containing vector-represented features and numeri
 
 .. note::
 
-  There is no need to assemble feature columns from version 1.6.0+. Instead, users can specify an array of
+  There is no need to assemble feature columns from version 1.6.1+. Instead, users can specify an array of
   feture column names by ``setFeaturesCol(value: Array[String])`` and XGBoost4j-Spark will do it.
 
 Dealing with missing values
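The note changed above refers to the 1.6.1+ overload ``setFeaturesCol(value: Array[String])``, which removes the need for a ``VectorAssembler`` step. A hedged sketch of how it might be used — the SparkSession, the `trainDF` DataFrame, its column names, and the parameter values are all assumptions for illustration, not part of this commit:

```scala
import ml.dmlc.xgboost4j.scala.spark.XGBoostClassifier

// Hypothetical parameters; tune for your own data and cluster.
val xgbParams = Map(
  "objective"   -> "multi:softprob",
  "num_class"   -> 3,
  "num_round"   -> 100,
  "num_workers" -> 2
)

val classifier = new XGBoostClassifier(xgbParams)
  // 1.6.1+: pass the raw numeric columns directly as an Array[String];
  // XGBoost4J-Spark assembles the feature vector internally.
  .setFeaturesCol(Array("sepal_length", "sepal_width",
                        "petal_length", "petal_width"))
  .setLabelCol("class")

// trainDF is an assumed DataFrame containing the columns named above.
val model = classifier.fit(trainDF)
```

Before 1.6.1, the same columns would first have been combined into a single vector column with Spark ML's ``VectorAssembler`` and passed via ``setFeaturesCol(value: String)``.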