python package refactor into python-package
demo/.gitignore (vendored)

@@ -1 +1,2 @@
 *.libsvm
+*.pkl
@@ -1,14 +1,14 @@
 XGBoost Examples
 ====
 This folder contains all the code examples using xgboost.

 * Contributions of examples and benchmarks are more than welcome!
 * If you would like to share how you use xgboost to solve your problem, send a pull request :)

 Features Walkthrough
 ====
-This is a list of short codes introducing different functionalities of xgboost and its wrapper.
-* Basic walkthrough of wrappers
+This is a list of short codes introducing different functionalities of xgboost packages.
+* Basic walkthrough of packages
 [python](guide-python/basic_walkthrough.py)
 [R](../R-package/demo/basic_walkthrough.R)
 [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/basic_walkthrough.jl)
@@ -20,18 +20,18 @@ This is a list of short codes introducing different functionalities of xgboost and its wrapper.
 [python](guide-python/boost_from_prediction.py)
 [R](../R-package/demo/boost_from_prediction.R)
 [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/boost_from_prediction.jl)
 * Predicting using first n trees
 [python](guide-python/predict_first_ntree.py)
 [R](../R-package/demo/boost_from_prediction.R)
 [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/boost_from_prediction.jl)
 * Generalized Linear Model
 [python](guide-python/generalized_linear_model.py)
 [R](../R-package/demo/generalized_linear_model.R)
 [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/generalized_linear_model.jl)
 * Cross validation
 [python](guide-python/cross_validation.py)
 [R](../R-package/demo/cross_validation.R)
 [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/cross_validation.jl)
 * Predicting leaf indices
 [python](guide-python/predict_leaf_indices.py)
 [R](../R-package/demo/predict_leaf_indices.R)
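The walkthrough list above mentions "boosting from prediction" and "predicting using first n trees". As a toy sketch of the idea (not code from these demos): a boosted model's prediction is a base score plus the sum of its trees' outputs, so predicting with only the first n trees simply truncates that sum. Here each "tree" is a plain function standing in for a fitted regression tree; `predict` and its `n_trees` parameter are hypothetical names, not the xgboost API.

```python
# A boosted ensemble as a base score plus additive "trees" (toy functions here).
base_score = 0.5
trees = [lambda x: 0.1 * x, lambda x: -0.05 * x, lambda x: 0.02 * x]


def predict(x, n_trees=None):
    """Sum the outputs of the first n_trees trees on top of the base score.

    With n_trees=None, all trees are used (the full model's prediction).
    """
    used = trees if n_trees is None else trees[:n_trees]
    return base_score + sum(t(x) for t in used)


full = predict(2.0)        # base + all three tree outputs
partial = predict(2.0, 1)  # base + only the first tree's output
print(full, partial)
```

This is the mechanism the demos exercise: early trees already give a usable (coarser) prediction, which is also why a model can be warm-started from an existing prediction.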
@@ -48,5 +48,5 @@ However, the parameter settings can be applied to all versions
 Benchmarks
 ====
 * [Starter script for Kaggle Higgs Boson](kaggle-higgs)
 * [Kaggle Tradeshift winning solution by daxiongshu](https://github.com/daxiongshu/kaggle-tradeshift-winning-solution)
@@ -75,13 +75,3 @@ clf = xgb.XGBClassifier()
-clf.fit(X_train, y_train, early_stopping_rounds=10, eval_metric="auc",
-        eval_set=[(X_test, y_test)])
-
-# Custom evaluation function
-from sklearn.metrics import log_loss
-
-
-def log_loss_eval(y_pred, y_true):
-    return "log-loss", log_loss(y_true.get_label(), y_pred)
-
-
-clf.fit(X_train, y_train, early_stopping_rounds=10, eval_metric=log_loss_eval,
-        eval_set=[(X_test, y_test)])
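The snippet in this hunk shows the custom-evaluation pattern: the callable receives the raw predictions plus an object exposing `get_label()` (a DMatrix in xgboost) and returns a `(name, value)` pair. A self-contained sketch of that contract, with no xgboost or sklearn dependency: `FakeDMatrix` is a stand-in for DMatrix, and `log_loss` below hand-computes binary cross-entropy rather than calling `sklearn.metrics.log_loss`.

```python
import math


class FakeDMatrix:
    """Minimal stand-in for xgboost's DMatrix: only exposes labels."""

    def __init__(self, labels):
        self._labels = labels

    def get_label(self):
        return self._labels


def log_loss(y_true, y_prob):
    """Binary cross-entropy averaged over samples."""
    eps = 1e-15
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip away exact 0/1 probabilities
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)


def log_loss_eval(y_pred, dtrain):
    # The shape the wrapper expects: (predictions, data) -> (metric name, value)
    return "log-loss", log_loss(dtrain.get_label(), y_pred)


name, value = log_loss_eval([0.9, 0.1], FakeDMatrix([1, 0]))
print(name, round(value, 4))
```

Returning a named pair lets the training loop log the metric per round and drive early stopping off it, exactly as the `early_stopping_rounds=10` calls above do.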