Revert OMP guard. (#6987)
The guard protects global state from being changed by XGBoost: it saves the OpenMP thread count and the active CUDA device when a C API call begins and restores them when it returns. But this introduces a bug: the `n_threads` parameter is no longer honored after the first iteration, because `omp_set_num_threads` is called only once, in `Learner::Configure`, at the beginning of training, and the guard reverts that change as soon as the first call returns. The guard is still useful for `gpu_id`, since the device is set throughout our codebase no matter which iteration we are currently running.
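For context, here is a minimal sketch of the RAII pattern being reverted and why it breaks `n_threads`. The names (`ApiGuard`, `FirstCall`, `LaterIteration`) are hypothetical stand-ins for illustration, not XGBoost's actual entry points:

```cpp
#include <omp.h>

// Sketch of the guard's RAII pattern: the constructor captures the current
// OpenMP thread count when a C API call begins, and the destructor restores
// it when the call returns.
class ApiGuard {  // hypothetical name for illustration
  int n_threads_{omp_get_max_threads()};
 public:
  ~ApiGuard() { omp_set_num_threads(n_threads_); }
};

// Hypothetical API entry points showing the failure mode: configuration runs
// only inside the first call, and the guard then restores the old thread
// count, so every later call runs with the pre-configuration value.
void FirstCall(int user_threads) {
  ApiGuard guard;
  omp_set_num_threads(user_threads);  // stands in for Learner::Configure
}  // guard's destructor reverts the thread count here

void LaterIteration() {
  ApiGuard guard;
  // omp_get_max_threads() no longer reflects user_threads, because
  // Configure does not run again and FirstCall's guard undid its change.
}
```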
@@ -163,7 +163,6 @@ inline float GetMissing(Json const &config) {
 
 // Safe guard some global variables from being changed by XGBoost.
 class XGBoostAPIGuard {
-  int32_t n_threads_ {omp_get_max_threads()};
   int32_t device_id_ {0};
 
 #if defined(XGBOOST_USE_CUDA)
@@ -179,7 +178,6 @@ class XGBoostAPIGuard {
     SetGPUAttribute();
   }
   ~XGBoostAPIGuard() {
-    omp_set_num_threads(n_threads_);
     RestoreGPUAttribute();
   }
 };