tqchen 2014-08-22 19:41:58 -07:00
parent 2ac8cdb873
commit 07ddf98718
2 changed files with 30 additions and 5 deletions

CHANGES.md (new file, 21 additions)

@@ -0,0 +1,21 @@
Change Log of Versions
=====
xgboost-0.1
=====
* Initial release
xgboost-0.2x
=====
* Python module
* Weighted sample instances
* Initial version of pairwise rank
xgboost-unity
=====
* Faster tree construction module
  - Also allows subsampling of columns during tree construction
* Support for boosting from initial predictions
* Experimental version of LambdaRank
* Linear booster is now parallelized, using parallel coordinate descent.
* Add [code guide](src/README.md)
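
The entry above on the linear booster refers to parallel coordinate descent. As a rough illustration of the per-coordinate update that gets parallelized, here is a minimal numpy sketch of plain (sequential) coordinate descent for an L2-regularized least-squares fit. The function name, objective, and data are invented for the example; this is not the booster's actual implementation, which handles general losses through gradient and hessian statistics and updates coordinates in parallel.

```python
import numpy as np

def coordinate_descent(X, y, reg_lambda=1.0, n_rounds=10):
    """Illustrative coordinate descent for 0.5*||y - Xw||^2 + 0.5*lambda*||w||^2.

    Each pass updates one weight at a time using the closed-form
    single-coordinate minimizer, keeping the residual up to date.
    """
    n, d = X.shape
    w = np.zeros(d)
    residual = y - X @ w                      # current residual y - Xw
    for _ in range(n_rounds):
        for j in range(d):
            xj = X[:, j]
            # solve for the j-th weight with all other weights held fixed
            numerator = xj @ (residual + xj * w[j])
            denominator = xj @ xj + reg_lambda
            w_new = numerator / denominator
            residual += xj * (w[j] - w_new)   # incremental residual update
            w[j] = w_new
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 5))
    w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
    y = X @ w_true + 0.1 * rng.standard_normal(100)
    print(coordinate_descent(X, y))
```

Maintaining the residual incrementally is what makes each coordinate update cheap, which is also why the update is a natural unit to run in parallel across features.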

README.md

@@ -1,5 +1,5 @@
xgboost: eXtreme Gradient Boosting
-=======
+======
An optimized general purpose gradient boosting (tree) library.
Contributors: https://github.com/tqchen/xgboost/graphs/contributors
@@ -8,8 +8,10 @@ Tutorial and Documentation: https://github.com/tqchen/xgboost/wiki
Questions and Issues: [https://github.com/tqchen/xgboost/issues](https://github.com/tqchen/xgboost/issues?q=is%3Aissue+label%3Aquestion)
+Notes on the Code: [src/README.md](src/README.md)
Features
-=======
+======
* Sparse feature format:
  - Sparse feature format allows easy handling of missing values, and improves computation efficiency.
* Push the limit on single machine:
@@ -19,11 +21,12 @@ Features
* Layout of gradient boosting algorithm to support user defined objective
* Python interface, works with numpy and scipy.sparse matrix
-xgboost-unity
-=======
-* Experimental branch (not usable yet): refactor xgboost, cleaner code, more flexibility
+Version
+======
+* This version is named xgboost-unity; the code has been refactored from 0.2x to be cleaner and more flexible
* This version of xgboost is not compatible with 0.2x, due to the huge amount of changes in code structure
  - This means the model and buffer file of the previous version cannot be loaded in xgboost-unity
+* For legacy 0.2x code, refer to
Build
======
@@ -35,3 +38,4 @@ Build
* Possible way to build using Visual Studio (not tested):
  - In principle, you can put src/xgboost.cpp and src/io/io.cpp into the project, and build xgboost.
  - For the python module, you need python/xgboost_wrapper.cpp and src/io/io.cpp to build a dll.
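
The Features hunk above mentions the sparse feature format with missing-value handling, the layout for user-defined objectives, and the Python interface built on numpy and scipy.sparse. Below is a hedged sketch of how those pieces are typically wired together from Python, loosely modeled on the demo scripts; the import path, the DMatrix constructor, the (preds, dtrain) objective signature, and the xgb.train arguments are assumptions that may differ in the xgboost-unity branch.

```python
import numpy as np
import scipy.sparse
import xgboost as xgb  # import path for the built python module is an assumption

# A small sparse design matrix: unset entries are treated as missing values,
# which the sparse feature format is designed to handle.
X = scipy.sparse.csr_matrix(np.array([[1.0, 0.0, 2.0],
                                      [0.0, 3.0, 0.0],
                                      [4.0, 0.0, 0.0],
                                      [0.0, 5.0, 6.0]]))
y = np.array([1, 0, 1, 0])
dtrain = xgb.DMatrix(X, label=y)

# User-defined objective: return the gradient and hessian of the loss with
# respect to the raw prediction, here for the logistic loss.
def logistic_obj(preds, dtrain):
    labels = dtrain.get_label()
    p = 1.0 / (1.0 + np.exp(-preds))
    grad = p - labels
    hess = p * (1.0 - p)
    return grad, hess

params = {"max_depth": 2, "eta": 0.1}
bst = xgb.train(params, dtrain, 10, obj=logistic_obj)
# With a custom objective, predictions are raw scores (margins), not probabilities.
print(bst.predict(dtrain))
```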