[Doc] add doc for kill_spark_context_on_worker_failure parameter (#6097)
* [Doc] add doc for kill_spark_context_on_worker_failure parameter
* resolve comments
parent d0ccb13d09
commit 00b0ad1293
@@ -16,6 +16,12 @@ This tutorial is to cover the end-to-end process to build a machine learning pipeline
* Building a Machine Learning Pipeline with XGBoost4J-Spark
* Running XGBoost4J-Spark in Production

.. note::

   **SparkContext will be stopped by default when an XGBoost training task fails**.

   XGBoost4J-Spark 1.2.0+ exposes the parameter **kill_spark_context_on_worker_failure**. Set it to **false** so that the SparkContext is not stopped on training failure; instead of killing the SparkContext, XGBoost4J-Spark throws an exception. Users who want to re-use the SparkContext should wrap the training code in a try-catch block.

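   The snippet below is a minimal sketch of this pattern. The ``trainingData`` DataFrame (with ``features`` and ``label`` columns) and the other training parameters are illustrative assumptions, not part of this change; only **kill_spark_context_on_worker_failure** comes from the documented parameter.

   .. code-block:: scala

      import ml.dmlc.xgboost4j.scala.spark.XGBoostClassifier

      val xgbParams: Map[String, Any] = Map(
        "objective" -> "binary:logistic",
        "num_round" -> 100,
        "num_workers" -> 2,
        // keep the SparkContext alive if a training task fails (1.2.0+)
        "kill_spark_context_on_worker_failure" -> false
      )

      try {
        // trainingData is an assumed, already-prepared DataFrame
        val model = new XGBoostClassifier(xgbParams)
          .setFeaturesCol("features")
          .setLabelCol("label")
          .fit(trainingData)
      } catch {
        case e: Exception =>
          // Training failed, but the SparkContext is still alive here and
          // can be re-used, e.g. to retry training or run other Spark jobs.
          println(s"XGBoost training failed: ${e.getMessage}")
      }
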
.. contents::
  :backlinks: none
  :local: