When I use the online prediction function (`inline void Predict(const SparseBatch::Inst &inst, ...) const;`), the results differ from those of the batch prediction function (`virtual void Predict(DMatrix* data, ...) const = 0`). Investigation showed that the online prediction function uses the `base_score_` parameter, while the batch prediction function does not. It also turns out that `base_score_` takes a different value each time the same model file is loaded:
```
1st time: base_score_: 6.69023e-21
2nd time: base_score_: -3.7668e+19
3rd time: base_score_: 5.40507e+07
```
Online prediction results are affected by the `base_score_` parameter. After deleting the if condition (`if (out_preds->size() == 1)`), the online predictions are consistent with the batch predictions, and the xgboost results match the Python version. It is therefore likely that the online prediction function has a bug.
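A minimal sketch of why the two paths diverge, using the reported `base_score_` values (the two helper functions are hypothetical stand-ins, not the actual xgboost C++ code): the online path adds `base_score_` to the accumulated leaf values while the batch path does not, so any garbage in `base_score_` shifts every online prediction.

```python
# Hypothetical illustration of the reported divergence between the two
# prediction paths. leaf_sum stands for the summed tree leaf values.
def batch_predict(leaf_sum):
    # Batch path: leaf values only; base_score_ is not applied.
    return leaf_sum

def online_predict(leaf_sum, base_score):
    # Online path: same sum, plus the (possibly uninitialized) base_score_.
    return leaf_sum + base_score

leaf_sum = 0.5
print(batch_predict(leaf_sum))                # 0.5
print(online_predict(leaf_sum, 6.69023e-21))  # still ~0.5 by luck
print(online_predict(leaf_sum, -3.7668e+19))  # wildly wrong
```

With a tiny garbage value the discrepancy is invisible, which is why the bug only surfaces on some model loads.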
* [jvm-packages] call setGroup for ranking task
* passing groupData through xgBoostConfMap
* fix original comment position
* make groupData param
* remove groupData variable, use xgBoostConfMap directly
* set default groupData value
* add use groupData tests
* reduce rank-demo size
* use TaskContext.getPartitionId() instead of mapPartitionsWithIndex
* add DF use groupData test
* remove unused variable
* add back train method but mark as deprecated
* fix scalastyle error
* first commit in scala binding for fast histo
* java test
* add missed scala tests
* spark training
* add back train method but mark as deprecated
* fix scalastyle error
* local change
* first commit in scala binding for fast histo
* local change
* fix df frame test
* add back train method but mark as deprecated
* fix scalastyle error
* change class to object in examples
* fix compilation error
* bump spark version to 2.1
* preserve num_class issues
* fix failed test cases
* revising
* add multi class test
The verbose_eval docs claim it will log the last iteration (http://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.train). This is also consistent with the behavior in 0.4. Not a huge deal, but I find it handy to see the last iteration's result because my period is usually large.
This doesn't address logging the last stage found by early_stopping (as noted in the docs), as I'm not sure how to do that.
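A sketch of the logging condition being described (`period` and `num_boost_round` match the names in the Python API; the helper itself is hypothetical): log every `period`-th iteration, and always log the final one.

```python
def should_log(i, num_boost_round, period):
    """Log every `period`-th iteration, plus the last one."""
    return i % period == 0 or i == num_boost_round - 1

logged = [i for i in range(10) if should_log(i, 10, 4)]
print(logged)  # [0, 4, 8, 9] -- iteration 9 is logged even though 9 % 4 != 0
```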
* [jvm-packages] Scala implementation of the Rabit tracker.
A Scala implementation of RabitTracker whose interface is interchangeable with the
Java implementation, ported from `tracker.py` in the
[dmlc-core project](https://github.com/dmlc/dmlc-core).
* [jvm-packages] Updated Akka dependency in pom.xml.
* Refactored the RabitTracker directory structure.
* Fixed premature stopping of connection handler.
Added a new finite state, "AwaitingPortNumber", to explicitly wait for the
worker to send its port before closing the connection. Stopping the actor
prematurely sends a TCP RST to the worker, causing the worker to crash
with an AssertionError.
* Added interface IRabitTracker so that user can switch implementations.
* Default timeout duration changes.
* Dependency for Akka tests.
* Removed the main function of RabitTracker.
* A skeleton for testing Akka-based Rabit tracker.
* waitFor() in RabitTracker no longer throws exceptions.
* Completed unit test for the 'start' command of Rabit tracker.
* Preliminary support for Rabit Allreduce via JNI (no prepare function support yet.)
* Fixed the default timeout duration.
* Use Java container to avoid serialization issues due to intermediate wrappers.
* Added tests for Allreduce/model training using Scala Rabit tracker.
* Added spill-over unit test for the Scala Rabit tracker.
* Fixed a typo.
* Overhaul of RabitTracker interface per code review.
- Removed the no-argument start() and waitFor() methods from IRabitTracker.
- The timeout in start(timeout) is now worker connection timeout, as tcp
socket binding timeout is less intuitive.
- Dropped time unit from start(...) and waitFor(...) methods; the default
time unit is millisecond.
- Moved random port number generation into the RabitTrackerHandler.
- Moved all Rabit-related classes to package ml.dmlc.xgboost4j.scala.rabit.
* More code refactoring and comments.
* Unified timeout constants. Readable tracker status code.
* Add comments to indicate that allReduce is for tests only. Removed all other variants.
* Removed unused imports.
* Simplified signatures of training methods.
- Moved TrackerConf into parameter map.
- Changed GeneralParams so that TrackerConf becomes a standalone parameter.
- Updated test cases accordingly.
* Changed monitoring strategies.
* Reverted monitoring changes.
* Update test case for Rabit AllReduce.
* Mix in UncaughtExceptionHandler into IRabitTracker to prevent tracker from hanging due to exceptions thrown by workers.
* More comprehensive test cases for exception handling and worker connection timeout.
* Handle executor loss due to unknown cause: the newly spawned executor will attempt to connect to the tracker. Interrupt tracker in such case.
* Per code-review, removed training timeout from TrackerConf. Timeout logic must be implemented explicitly and externally in the driver code.
* Reverted scalastyle-config changes.
* Visibility scope change. Interface tweaks.
* Use match pattern to handle tracker_conf parameter.
* Minor clarification in JNI code.
* Clearer intent in match pattern to suppress warnings.
* Removed Future from constructor. Block in start() and waitFor() instead.
* Revert inadvertent comment changes.
* Removed debugging information.
* Updated test cases that are a bit finicky.
* Added comments on the reasoning behind the unit tests for testing Rabit tracker robustness.
* Fixed BufferUnderFlow bug in decoding tracker 'print' command.
* Merge conflicts resolution.
* A fix for compatibility with Python 2.6:
the syntax `{n: self.attr(n) for n in attr_names}` is illegal in Python 2.6 (dict comprehensions require 2.7+).
* Update core.py
add a space after comma
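For reference, the 2.6-compatible rewrite passes a generator of `(key, value)` pairs to `dict()` instead of using a dict comprehension (the `attr_names` data here is illustrative):

```python
attr_names = ["a", "b"]
attr = lambda n: n.upper()  # stand-in for self.attr

# Python 2.7+/3 only: dict comprehension
new_style = {n: attr(n) for n in attr_names}

# Also valid on Python 2.6: dict() over (key, value) pairs
old_style = dict((n, attr(n)) for n in attr_names)

print(old_style == new_style)  # True
```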
As discussed in issue #1978, tree_method=hist ignores the parameter
param.num_roots; it simply assumes that the tree has only one root. In
particular, when the InitData() method initializes row_set_collection_, it
assigns all rows to node 0, a hard-coded value.
For now, the updater will simply fail when num_roots exceeds 1. I will revise
the updater soon to support multiple roots.
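In pseudocode terms (not the actual C++ in InitData()), the hard-coded behavior described above amounts to assigning every row index to root node 0 regardless of num_roots:

```python
def init_row_set(num_rows):
    # Every row starts at the single root, node 0 -- num_roots is ignored.
    row_indices = list(range(num_rows))
    node_of_row = {r: 0 for r in row_indices}
    return node_of_row

assignment = init_row_set(5)
print(set(assignment.values()))  # {0}: all rows belong to node 0
```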
* [R] xgb.save must work when handle in nil but raw exists
* [R] print.xgb.Booster should still print other info when handle is nil
* [R] rename internal function xgb.Booster to xgb.Booster.handle to make its intent clear
* [R] rename xgb.Booster.check to xgb.Booster.complete and make it visible; more docs
* [R] storing evaluation_log should depend only on watchlist, not on verbose
* [R] reduce the excessive chattiness of unit tests
* [R] only disable some tests in windows when it's not 64-bit
* [R] clean-up xgb.DMatrix
* [R] test xgb.DMatrix loading from libsvm text file
* [R] store feature_names in xgb.Booster, use them from utility functions
* [R] remove non-functional co-occurrence computation from xgb.importance
* [R] verbose=0 is enough without a callback
* [R] added forgotten xgb.Booster.complete.Rd; cran check fixes
* [R] update installation instructions
* added the max_features parameter to the plot_importance function.
* renamed max_features parameter to max_num_features for better understanding
* removed unwanted character in docstring
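The effect of `max_num_features` can be sketched as keeping only the top-N features by importance before plotting (the importance scores and the `top_features` helper below are made up for illustration):

```python
def top_features(importance, max_num_features=None):
    # Sort by importance descending, then keep only the top N if requested.
    ranked = sorted(importance.items(), key=lambda kv: kv[1], reverse=True)
    return ranked if max_num_features is None else ranked[:max_num_features]

importance = {"f0": 10, "f1": 30, "f2": 20}  # hypothetical gain values
print(top_features(importance, max_num_features=2))  # [('f1', 30), ('f2', 20)]
```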
* Support histogram-based algorithm + multiple tree growing strategy
* Add a brand-new updater to support the histogram-based algorithm, which buckets
continuous features into discrete bins to speed up training. To use it, set
`tree_method = fast_hist` in the configuration.
* Support multiple tree growing strategies. For now, two policies are supported:
* `grow_policy=depthwise` (default): favor splitting at nodes closest to the
root, i.e. grow depth-wise.
* `grow_policy=lossguide`: favor splitting at nodes with highest loss change
* Improve single-threaded performance
* Unroll critical loops
* Introduce specialized code for dense data (i.e. no missing values)
* Additional training parameters: `max_leaves`, `max_bin`, `grow_policy`, `verbose`
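The core idea of `fast_hist` — bucketing continuous feature values into discrete bins via precomputed cut points — can be sketched as follows (the cut values are made up; the real updater derives them from a quantile sketch):

```python
import bisect

def to_bin(value, cuts):
    # Index of the first cut point >= value; each value maps to one bin.
    return bisect.bisect_left(cuts, value)

cuts = [0.25, 0.5, 0.75, 1.0]          # hypothetical cut points
feature = [0.1, 0.3, 0.6, 0.9, 1.0]
print([to_bin(v, cuts) for v in feature])  # [0, 1, 2, 3, 3]
```

Histograms of gradient statistics are then accumulated per bin index instead of per raw value, which is what makes training faster.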
* Adding a small test for hist method
* Fix memory error in row_set.h
When std::vector is resized, a reference to one of its element may become
stale. Any such reference must be updated as well.
* Resolve cross-platform compilation issues
* Versions of g++ older than 4.8 lack support for a few C++11 features, e.g.
alignas(*) and the new initializer syntax. To support g++ 4.6, use pre-C++11
initializers and remove alignas(*).
* Versions of MSVC older than 2015 do not support alignas(*). To support
MSVC 2012, remove alignas(*).
* For g++ 4.8 and newer, alignas(*) is enabled for performance benefits.
* Some old compilers (MSVC 2012, g++ 4.6) do not support template aliases
(which use `using` to declare type aliases), so always use `typedef`.
* Fix a host of CI issues
* Remove dependency for libz on osx
* Fix heading for hist_util
* Fix minor style issues
* Add missing #include
* Remove extraneous logging
* Enable tree_method=hist in R
* Renaming HistMaker to GHistBuilder to avoid confusion
* Fix R integration
* Respond to style comments
* Consistent tie-breaking for priority queue using timestamps
* Last-minute style fixes
* Fix issuecomment-271977647
The way we quantize data is broken. The agaricus data consists entirely of
categorical values. When NAs are converted into 0's,
`HistCutMatrix::Init` assigns both 0's and 1's to the same single bin.
Why? gmat contains only the smallest value (0) and an upper bound (2), which is twice
the maximum value (1). Adding the maximum value itself to gmat fixes the issue.
* Fix issuecomment-272266358
* Remove padding from cut values for the continuous case
* For categorical/ordinal values, use midpoints as bin boundaries to be safe
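A sketch of the broken vs. fixed cut values for the agaricus case (binary features taking values 0 and 1): with cuts {0, 2}, both 0 and 1 land in the same bin; adding the maximum value 1 as its own cut separates them. `bisect_right` here is a stand-in for the actual bin-search logic, not the real implementation.

```python
import bisect

def assign_bin(value, cuts):
    # Bin index = number of cut points <= value.
    return bisect.bisect_right(cuts, value)

broken = [0, 2]     # only the min (0) and the padded upper bound (2)
fixed = [0, 1, 2]   # the max value (1) added as its own cut

print([assign_bin(v, broken) for v in (0, 1)])  # [1, 1] -- same bin
print([assign_bin(v, fixed) for v in (0, 1)])   # [1, 2] -- distinct bins
```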
* Fix CI issue -- do not use xrange(*)
* Fix corner case in quantile sketch
Signed-off-by: Philip Cho <chohyu01@cs.washington.edu>
* Adding a test for an edge case in quantile sketcher
max_bin=2 used to cause an exception.
* Fix fast_hist test
The test used to require a strictly increasing test AUC for all examples.
One of them exhibits a small blip in test AUC before reaching a test AUC
of 1 (see below).
Solution: do not require a monotonic increase for this particular example.
```
[0] train-auc:0.99989 test-auc:0.999497
[1] train-auc:1 test-auc:0.999749
[2] train-auc:1 test-auc:0.999749
[3] train-auc:1 test-auc:0.999749
[4] train-auc:1 test-auc:0.999749
[5] train-auc:1 test-auc:0.999497
[6] train-auc:1 test-auc:1
[7] train-auc:1 test-auc:1
[8] train-auc:1 test-auc:1
[9] train-auc:1 test-auc:1
```