Mark next release as 1.7 instead of 2.0 (#8281)
@@ -347,7 +347,7 @@ and then loading the model in another session:

 .. note::

-  Besides dumping the model to raw format, users are able to dump the model to be json or ubj format from ``version 2.0.0+``.
+  Besides dumping the model to raw format, users can also dump the model to JSON or UBJ format from ``version 1.7.0+``.

 .. code-block:: scala
@@ -362,7 +362,7 @@ Interact with Other Bindings of XGBoost

 After we train a model with XGBoost4j-Spark on massive dataset, sometimes we want to do model serving
 in single machine or integrate it with other single node libraries for further processing.

-After saving the model, we can load this model with single node Python XGBoost directly from ``version 2.0.0+``.
+After saving the model, we can load this model with single node Python XGBoost directly from ``version 1.7.0+``.

 .. code-block:: scala
@@ -375,7 +375,7 @@ After saving the model, we can load this model with single node Python XGBoost d

   bst = xgb.Booster({'nthread': 4})
   bst.load_model("/tmp/xgbClassificationModel/data/XGBoostClassificationModel")

-Before ``version 2.0.0``, XGBoost4j-Spark needs to export model to local manually by:
+Before ``version 1.7.0``, XGBoost4j-Spark needs to export the model to the local filesystem manually:

 .. code-block:: scala
@@ -237,7 +237,7 @@ These parameters are only used for training with categorical data. See

 .. versionadded:: 1.6

-.. note:: This parameter is experimental. ``exact`` tree method is not supported yet.
+.. note:: This parameter is experimental. ``exact`` tree method is not yet supported.

 - A threshold for deciding whether XGBoost should use one-hot encoding based split for
   categorical data. When number of categories is lesser than the threshold then one-hot
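The threshold rule in the hunk above can be pictured as a simple decision. The following is an illustrative pure-Python sketch of the documented semantics of ``max_cat_to_onehot``, not XGBoost's actual implementation; the function name is hypothetical:

```python
def choose_split_kind(n_categories: int, max_cat_to_onehot: int) -> str:
    """Sketch of the documented rule: when the number of categories is
    below the threshold, use a one-hot encoding based split; otherwise
    fall back to a partition-based split.  Hypothetical helper, NOT
    XGBoost's real code."""
    if n_categories < max_cat_to_onehot:
        return "one-hot"
    return "partition"

# With the threshold set to 4, a 3-category feature gets one-hot splits,
# while an 8-category feature gets partition-based splits.
print(choose_split_kind(3, 4))  # one-hot
print(choose_split_kind(8, 4))  # partition
```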
@@ -247,10 +247,9 @@ These parameters are only used for training with categorical data. See

 * ``max_cat_threshold``

-  .. versionadded:: 2.0
+  .. versionadded:: 1.7.0

-  .. note:: This parameter is experimental. ``exact`` and ``gpu_hist`` tree methods are
-    not supported yet.
+  .. note:: This parameter is experimental. ``exact`` tree method is not yet supported.

 - Maximum number of categories considered for each split. Used only by partition-based
   splits for preventing over-fitting.
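``max_cat_threshold``, described above, acts as a cap on how many categories a partition-based split may consider. A minimal illustrative sketch of that cap follows; the function name is hypothetical and the selection of which categories survive the cap is simplified, not taken from the source:

```python
def categories_considered(categories: list, max_cat_threshold: int) -> list:
    """Sketch: a partition-based split considers at most
    ``max_cat_threshold`` categories, limiting split complexity to help
    prevent over-fitting.  NOT XGBoost's actual implementation."""
    return categories[:max_cat_threshold]

# A 5-category feature with the cap set to 3 considers only 3 categories.
print(categories_considered(["a", "b", "c", "d", "e"], 3))
```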
@@ -508,7 +508,7 @@ dask config is used:

 IPv6 Support
 ************

-.. versionadded:: 2.0.0
+.. versionadded:: 1.7.0

 XGBoost has initial IPv6 support for the dask interface on Linux. Due to most of the
 cluster support for IPv6 is partial (dual stack instead of IPv6 only), we require
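One practical detail behind the IPv6 note above: when an address literal is IPv6, it must be wrapped in brackets inside a ``host:port`` URL (RFC 3986). A stdlib-only sketch, where the ``tcp://`` scheduler-URL shape and default dask port 8786 are assumptions for illustration, not taken from the source:

```python
import ipaddress

def scheduler_url(host: str, port: int) -> str:
    """Bracket IPv6 literals as URL syntax requires; leave IPv4 literals
    and hostnames unchanged.  Hypothetical helper for illustration."""
    try:
        if isinstance(ipaddress.ip_address(host), ipaddress.IPv6Address):
            return f"tcp://[{host}]:{port}"
    except ValueError:
        pass  # not an IP literal, e.g. a hostname
    return f"tcp://{host}:{port}"

print(scheduler_url("::1", 8786))        # tcp://[::1]:8786
print(scheduler_url("127.0.0.1", 8786))  # tcp://127.0.0.1:8786
```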
@@ -1,8 +1,8 @@
 ###############################
 Using XGBoost PySpark Estimator
 ###############################
-Starting from version 2.0, xgboost supports pyspark estimator APIs.
-The feature is still experimental and not yet ready for production use.
+Starting from version 1.7.0, xgboost supports pyspark estimator APIs. The feature is
+still experimental and not yet ready for production use.

 *****************
 SparkXGBRegressor