From 00b0ad1293b4fa74d6aca5da4e9ab7a9d16777f0 Mon Sep 17 00:00:00 2001
From: Bobby Wang
Date: Thu, 10 Sep 2020 12:28:44 +0800
Subject: [PATCH] [Doc] add doc for kill_spark_context_on_worker_failure
 parameter (#6097)

* [Doc] add doc for kill_spark_context_on_worker_failure parameter

* resolve comments
---
 doc/jvm/xgboost4j_spark_tutorial.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/doc/jvm/xgboost4j_spark_tutorial.rst b/doc/jvm/xgboost4j_spark_tutorial.rst
index 67817613a..beda721ca 100644
--- a/doc/jvm/xgboost4j_spark_tutorial.rst
+++ b/doc/jvm/xgboost4j_spark_tutorial.rst
@@ -16,6 +16,12 @@ This tutorial is to cover the end-to-end process to build a machine learning pip
 * Building a Machine Learning Pipeline with XGBoost4J-Spark
 * Running XGBoost4J-Spark in Production
 
+.. note::
+
+  **The SparkContext is stopped by default when an XGBoost training task fails**.
+
+  XGBoost4J-Spark 1.2.0+ exposes a parameter, **kill_spark_context_on_worker_failure**. Set **kill_spark_context_on_worker_failure** to **false** so that the SparkContext is not stopped on training failure; XGBoost4J-Spark will throw an exception instead. Users who want to re-use the SparkContext should wrap the training code in a try-catch block.
+
 .. contents::
   :backlinks: none
   :local: