From bef1f939ceb964283f17f1f4cff3234ca18d3bc8 Mon Sep 17 00:00:00 2001
From: Bobby Wang
Date: Mon, 25 Apr 2022 19:29:16 +0800
Subject: [PATCH] [doc] remove the doc about killing SparkContext [skip ci]
 (#7840)

---
 doc/jvm/xgboost4j_spark_tutorial.rst | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/doc/jvm/xgboost4j_spark_tutorial.rst b/doc/jvm/xgboost4j_spark_tutorial.rst
index ce689cb95..60c1dd601 100644
--- a/doc/jvm/xgboost4j_spark_tutorial.rst
+++ b/doc/jvm/xgboost4j_spark_tutorial.rst
@@ -16,12 +16,6 @@ This tutorial is to cover the end-to-end process to build a machine learning pip
 * Building a Machine Learning Pipeline with XGBoost4J-Spark
 * Running XGBoost4J-Spark in Production
 
-.. note::
-
-  **SparkContext will be stopped by default when XGBoost training task fails**.
-
-  XGBoost4J-Spark 1.2.0+ exposes a parameter **kill_spark_context_on_worker_failure**. Set **kill_spark_context_on_worker_failure** to **false** so that the SparkContext will not be stopping on training failure. Instead of stopping the SparkContext, XGBoost4J-Spark will throw an exception instead. Users who want to re-use the SparkContext should wrap the training code in a try-catch block.
-
 .. contents::
   :backlinks: none
   :local:
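
For reference, the note removed by this patch describes wrapping training in a try-catch block after setting **kill_spark_context_on_worker_failure** to **false**. Below is a minimal sketch of that pattern, assuming XGBoost4J-Spark 1.2.0+ (where the parameter is still accepted); the DataFrame ``trainingDF`` and the column names are placeholders, not part of this patch.

.. code-block:: scala

  import ml.dmlc.xgboost4j.scala.spark.XGBoostClassifier

  // Sketch only: assumes XGBoost4J-Spark 1.2.0+ and an existing DataFrame
  // `trainingDF` with "features" and "label" columns.
  val xgbParams = Map(
    "objective" -> "binary:logistic",
    "num_round" -> 100,
    "num_workers" -> 2,
    // Keep the SparkContext alive if a training task fails (per the removed note).
    "kill_spark_context_on_worker_failure" -> false
  )

  val classifier = new XGBoostClassifier(xgbParams)
    .setFeaturesCol("features")
    .setLabelCol("label")

  // With the flag set to false, a training failure surfaces as an exception
  // instead of stopping the SparkContext, so the context can be reused.
  try {
    val model = classifier.fit(trainingDF)
  } catch {
    case e: Exception =>
      println(s"XGBoost training failed, SparkContext is still usable: ${e.getMessage}")
  }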