24 Commits

Author SHA1 Message Date
sriramch
90f683b25b Set the appropriate device before freeing device memory... (#4566)
* Set the appropriate device before freeing device memory.
   - PR #4532 added a global memory tracker/logger to keep track of the number of (de)allocations
     and peak memory usage on a per-device basis.
   - This PR adds the appropriate check to make sure that the (de)allocation counts and memory usage
     make sense for each device; this matters primarily in debug/non-retail builds, where verbosity
     is typically increased.
* Pre-create CUB allocators and reuse them.
   - Create them once rather than resizing them dynamically. We need to ensure that these allocators
     are created and destroyed exactly once so that the appropriate device IDs are set.
2019-06-18 14:58:05 +12:00
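
The commit above (#4566) relies on a standard CUDA housekeeping rule: memory that belongs to a particular GPU should be freed, and accounted for, with that device active. Below is a minimal sketch of the pattern, assuming a hypothetical per-device byte counter; it is not the tracker added in #4532.

    #include <cuda_runtime.h>
    #include <cstddef>
    #include <cstdio>

    // Hypothetical per-device accounting, for illustration only.
    static std::size_t g_bytes_in_use[16] = {0};

    // Free a device allocation with the owning device active, so the
    // deallocation is charged to the right per-device counter.
    void DeviceFree(int device_id, void* ptr, std::size_t bytes) {
      int previous = 0;
      cudaGetDevice(&previous);    // remember the caller's device
      cudaSetDevice(device_id);    // switch to the device that owns ptr
      cudaFree(ptr);
      g_bytes_in_use[device_id] -= bytes;
      std::printf("device %d: %zu bytes still allocated\n",
                  device_id, g_bytes_in_use[device_id]);
      cudaSetDevice(previous);     // restore the caller's device
    }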
Jiaming Yuan
c5719cc457
Offload some configurations into GBM. (#4553)
This is part 1 of refactoring configuration.

* Move tree heuristic configurations.
* Split up declarations and definitions for GBTree.
* Implement UseGPU in gbm.
2019-06-14 09:18:51 +08:00
Rory Mitchell
9683fd433e
Overload device memory allocation (#4532)
* Group source files, include headers in source files

* Overload device memory allocation
2019-06-10 11:35:13 +12:00
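
For context on #4532, the usual way to overload device memory allocation with Thrust is to derive from one of its allocators and intercept allocate/deallocate. A minimal sketch with a plain printf logger, assuming nothing about the actual XGBoost allocator or its peak-memory tracking:

    #include <thrust/device_malloc_allocator.h>
    #include <cstddef>
    #include <cstdio>

    // Logs every allocation and deallocation made through a Thrust container.
    template <typename T>
    struct LoggingAllocator : thrust::device_malloc_allocator<T> {
      using Super = thrust::device_malloc_allocator<T>;
      using pointer = typename Super::pointer;
      using size_type = typename Super::size_type;

      pointer allocate(size_type n) {
        std::printf("allocating %zu elements\n", static_cast<std::size_t>(n));
        return Super::allocate(n);
      }
      void deallocate(pointer p, size_type n) {
        std::printf("freeing %zu elements\n", static_cast<std::size_t>(n));
        Super::deallocate(p, n);
      }
    };

    // Usage: thrust::device_vector<float, LoggingAllocator<float>> data(100);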
Rong Ou
eaab364a63 More explict sharding methods for device memory (#4396)
* Rename the Reshard method to Shard

* Add a new Reshard method for sharding a vector that's already sharded
2019-05-01 11:47:22 +12:00
sriramch
2f7087eba1 Improve HostDeviceVector exception safety (#4301)
* Make the assignments of HostDeviceVector exception-safe.
* Store a dummy GPUDistribution instance in HDV for CPU-based code.
* Change the testxgboost binary location to the build directory.
2019-03-31 22:48:58 +08:00
Rong Ou
5aa42b5f11 Jenkins build for CUDA 10.0 (#4281)
* Jenkins build for CUDA 10.0

* yum install nccl2 for CUDA 10.0
2019-03-22 22:35:18 -07:00
Jiaming Yuan
7b9043cf71
Fix clang-tidy warnings. (#4149)
* Upgrade gtest for clang-tidy.
* Use CMake to install GTest instead of mv.
* Don't enforce clang-tidy to return 0 due to errors in thrust.
* Add a small test for tidy itself.

* Reformat.
2019-03-13 02:25:51 +08:00
Rory Mitchell
4eeeded7d1
Remove various synchronisations from CUDA API calls, instrument monitor (#4205)
* Remove various synchronisations from CUDA API calls, instrument monitor
with NVTX profiler ranges.
2019-03-10 15:01:23 +13:00
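
The NVTX ranges mentioned in #4205 let profilers such as nvprof or Nsight Systems label spans of the timeline. A minimal sketch of an RAII wrapper; the class and function names are illustrative, not XGBoost's Monitor.

    #include <nvToolsExt.h>  // link with -lnvToolsExt

    // Pushes an NVTX range on construction and pops it on destruction, so the
    // enclosing scope shows up as a named region in the profiler timeline.
    class ScopedNvtxRange {
     public:
      explicit ScopedNvtxRange(const char* name) { nvtxRangePushA(name); }
      ~ScopedNvtxRange() { nvtxRangePop(); }
    };

    void BuildHistograms() {
      ScopedNvtxRange range("BuildHistograms");
      // ... kernel launches ...
    }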
Rong Ou
9837b09b20 Support CUDA 10.1 (#4223)
* Support CUDA 10.1

* Add CUDA 10.1 to the Jenkins build matrix
2019-03-08 12:22:12 +13:00
Jiaming Yuan
f8ca2960fc
Use NCCL group calls to prevent deadlock. (#4113)
* Launch all-reduce operations sequentially.
* Fix gpu_exact test memory leak.
2019-02-08 06:12:39 +08:00
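
The fix in #4113 follows the standard NCCL pattern: when a single thread issues collectives for several devices, the calls must be wrapped in a group so NCCL can schedule them together instead of blocking on the first one. A sketch under the assumption that communicators, streams, and buffers (one per device) are created elsewhere:

    #include <cuda_runtime.h>
    #include <nccl.h>
    #include <vector>

    // Sum-reduce one buffer per device in place; grouping the calls avoids the
    // deadlock that sequential, ungrouped launches from one thread can cause.
    void AllReduceSum(const std::vector<ncclComm_t>& comms,
                      const std::vector<cudaStream_t>& streams,
                      const std::vector<double*>& bufs, size_t count) {
      ncclGroupStart();
      for (size_t i = 0; i < comms.size(); ++i) {
        ncclAllReduce(bufs[i], bufs[i], count, ncclDouble, ncclSum,
                      comms[i], streams[i]);
      }
      ncclGroupEnd();  // all reductions are submitted as one group
    }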
Rory Mitchell
a9d684db18
GPU performance logging/improvements (#3945)
- Improved GPU performance logging

- Only use one execute shards function

- Revert performance regression on multi-GPU

- Use threads to launch NCCL AllReduce
2018-11-29 14:36:51 +13:00
Jiaming Yuan
f1275f52c1
Fix specifying gpu_id, add tests. (#3851)
* Rewrite gpu_id related code.

* Remove normalised/unnormalised operations.
* Address difference between `Index' and `Device ID'.
* Modify doc for `gpu_id'.
* Better LOG for GPUSet.
* Check specified n_gpus.
* Remove inappropriate `device_idx' term.
* Clarify GpuIdType and size_t.
2018-11-06 18:17:53 +13:00
trivialfis
d594b11f35 Implement transform to reduce CPU/GPU code duplication. (#3643)
* Implement Transform class.
* Add tests for softmax.
* Use Transform in regression, softmax and hinge objectives, except for Cox.
* Mark old gpu objective functions deprecated.
* static_assert for softmax.
* Split up multi-gpu tests.
2018-10-02 15:06:21 +13:00
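
The Transform idea in #3643 boils down to writing an element-wise operation once as a functor and dispatching it either to a CPU loop or to a CUDA kernel. A minimal sketch of that dispatch with hypothetical names (TransformN, LaunchNKernel), not the real Transform API; the functor must be marked __host__ __device__ so both paths can call it.

    #include <cstddef>
    #include <cuda_runtime.h>

    // Apply f(i) for every index i on the GPU.
    template <typename Functor>
    __global__ void LaunchNKernel(Functor f, std::size_t n) {
      std::size_t i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) f(i);
    }

    // Single entry point: the same functor runs on either device.
    template <typename Functor>
    void TransformN(Functor f, std::size_t n, bool use_gpu) {
      if (n == 0) return;
      if (use_gpu) {
        const int kBlockSize = 256;
        int n_blocks = static_cast<int>((n + kBlockSize - 1) / kBlockSize);
        LaunchNKernel<<<n_blocks, kBlockSize>>>(f, n);
        cudaDeviceSynchronize();
      } else {
        for (std::size_t i = 0; i < n; ++i) f(i);
      }
    }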
trivialfis
5a7f7e7d49 Implement device-to-device reshard. (#3721)
* Force clearing device memory before Reshard.
* Remove calculating row_segments for gpu_hist and gpu_sketch.
* Guard against changing device.
2018-09-28 17:40:23 +12:00
trivialfis
9119f9e369 Fix gpu devices. (#3693)
* Fix gpu_set normalized and unnormalized.
* Fix DeviceSpan.
2018-09-19 17:39:42 +12:00
Andy Adinets
dee0b69674 Fixed copy constructor for HostDeviceVectorImpl. (#3657)
- previously, vec_ in DeviceShard wasn't updated on copy; as a result,
  the shards continued to refer to the old HostDeviceVectorImpl object,
  which resulted in a dangling pointer once that object was deallocated
2018-09-01 11:38:09 +12:00
Andy Adinets
72cd1517d6 Replaced std::vector with HostDeviceVector in MetaInfo and SparsePage. (#3446)
* Replaced std::vector with HostDeviceVector in MetaInfo and SparsePage.

- added distributions to HostDeviceVector
- using HostDeviceVector for labels, weights and base margins in MetaInfo
- using HostDeviceVector for offset and data in SparsePage
- other necessary refactoring

* Added const version of HostDeviceVector API calls.

- const versions added to calls that can trigger data transfers, e.g. DevicePointer()
- updated the code that uses HostDeviceVector
- objective functions now accept const HostDeviceVector<bst_float>& for predictions

* Updated src/linear/updater_gpu_coordinate.cu.

* Added read-only state for HostDeviceVector sync.

- this means no copies are performed if both host and devices access
  the HostDeviceVector read-only

* Fixed linter and test errors.

- updated the lz4 plugin
- added ConstDeviceSpan to HostDeviceVector
- using device % dh::NVisibleDevices() for the physical device number,
  e.g. in calls to cudaSetDevice()

* Fixed explicit template instantiation errors for HostDeviceVector.

- replaced HostDeviceVector<unsigned int> with HostDeviceVector<int>

* Fixed HostDeviceVector tests that require multiple GPUs.

- added a mock set device handler; when set, it is called instead of cudaSetDevice()
2018-08-30 14:28:47 +12:00
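
The read-only sync state described above is easiest to picture as a vector that remembers where the up-to-date copy lives and only copies across PCIe when a stale side is touched; const access can leave the data valid on both sides at once. A much-simplified sketch with illustrative names, not the real HostDeviceVector interface:

    #include <thrust/copy.h>
    #include <thrust/device_vector.h>
    #include <cstddef>
    #include <vector>

    template <typename T>
    class SyncedVector {
      enum class Where { kHost, kDevice, kBoth };

     public:
      explicit SyncedVector(std::size_t n) : host_(n), device_(n), where_(Where::kBoth) {}

      // Read-only host view: copy from device only if the host copy is stale,
      // then mark the data as valid on both sides (no further copies needed).
      const std::vector<T>& ConstHostVector() {
        if (where_ == Where::kDevice) {
          thrust::copy(device_.begin(), device_.end(), host_.begin());
          where_ = Where::kBoth;
        }
        return host_;
      }

      // Mutable device view: sync if needed, then invalidate the host copy.
      T* DevicePointer() {
        if (where_ == Where::kHost) {
          thrust::copy(host_.begin(), host_.end(), device_.begin());
        }
        where_ = Where::kDevice;
        return thrust::raw_pointer_cast(device_.data());
      }

     private:
      std::vector<T> host_;
      thrust::device_vector<T> device_;
      Where where_;
    };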
Rory Mitchell
78bea0d204
Add Google Test for column sampling, restore MetaInfo tests (#3637)
* Add Google Test for column sampling, restore MetaInfo tests

* Update metainfo test for visual studio

* Fix multi-GPU bug introduced in #3635
2018-08-28 16:10:26 +12:00
Rory Mitchell
686e990ffc
GPU memory usage fixes + column sampling refactor (#3635)
* Remove thrust copy calls

* Fix histogram memory usage

* Cap extreme histogram memory usage

* More efficient column sampling

* Use column sampler across updaters

* More efficient split evaluation on GPU with column sampling
2018-08-27 16:26:46 +12:00
trivialfis
2c502784ff Span class. (#3548)
* Add basic Span class based on ISO C++20.

* Use Span<Entry const> instead of Inst in SparsePage.

* Add DeviceSpan in HostDeviceVector, use it in regression obj.
2018-08-14 17:58:11 +12:00
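
For orientation, a Span in the sense of #3548 is a non-owning view over contiguous memory in the spirit of C++20 std::span. A bare-bones sketch; the real class additionally annotates its members for use inside CUDA kernels and adds bounds checking, iterators, and subspans.

    #include <cstddef>

    // Non-owning view: holds a pointer and a length, copies cheaply, and
    // leaves ownership of the underlying memory to someone else.
    template <typename T>
    class Span {
     public:
      Span() = default;
      Span(T* data, std::size_t size) : data_(data), size_(size) {}

      T& operator[](std::size_t i) const { return data_[i]; }
      T* data() const { return data_; }
      std::size_t size() const { return size_; }

     private:
      T* data_{nullptr};
      std::size_t size_{0};
    };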
Andrew V. Adinetz
b8a0d66fe6 Multi-GPU HostDeviceVector. (#3287)
* Multi-GPU HostDeviceVector.

- HostDeviceVector instances can now span multiple devices, defined by GPUSet struct
- the interface of HostDeviceVector has been modified accordingly
- GPU objective functions are now multi-GPU
- GPU prediction from cache is now multi-GPU
- avoiding omp_set_num_threads() calls
- other minor changes
2018-05-05 08:00:05 +12:00
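
Sharding across a GPUSet, as in #3287, means splitting one logical vector into contiguous per-device segments. A minimal sketch of the arithmetic, assuming a toy GPUSet that is just a half-open range of device ordinals rather than XGBoost's actual struct:

    #include <cstddef>
    #include <vector>

    struct GPUSet {
      int begin{0};
      int end{0};  // half-open: devices [begin, end)
      int Size() const { return end - begin; }
    };

    struct Segment {
      std::size_t offset;
      std::size_t size;
    };

    // Divide n elements as evenly as possible into one segment per device;
    // the first (n % k) devices receive one extra element. Assumes k > 0.
    std::vector<Segment> ShardEvenly(std::size_t n, GPUSet devices) {
      std::vector<Segment> segments;
      std::size_t k = static_cast<std::size_t>(devices.Size());
      std::size_t offset = 0;
      for (std::size_t d = 0; d < k; ++d) {
        std::size_t size = n / k + (d < n % k ? 1 : 0);
        segments.push_back({offset, size});
        offset += size;
      }
      return segments;
    }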
Rory Mitchell
ccf80703ef
Clang-tidy static analysis (#3222)
* Clang-tidy static analysis

* Modernise checks

* Google coding standard checks

* Identifier renaming according to Google style
2018-04-19 18:57:13 +12:00
Andrew V. Adinetz
d5992dd881 Replaced std::vector-based interfaces with HostDeviceVector-based interfaces. (#3116)
* Replaced std::vector-based interfaces with HostDeviceVector-based interfaces.

- replacement was performed in the learner, boosters, predictors,
  updaters, and objective functions
- only interfaces used in training were replaced;
  interfaces like PredictInstance() still use std::vector
- refactoring necessary for replacement of interfaces was also performed,
  such as using HostDeviceVector in prediction cache

* HostDeviceVector-based interfaces for custom objective function example plugin.
2018-02-28 13:00:04 +13:00
Thejaswi
84ab74f3a5 Objective function evaluation on GPU with minimal PCIe transfers (#2935)
* Added GPU objective function and no-copy interface.

- xgboost::HostDeviceVector<T> syncs automatically between host and device
- no-copy interfaces have been added
- default implementations just sync the data to host
  and call the implementations with std::vector
- GPU objective function, predictor, histogram updater process data
  directly on GPU
2018-01-12 21:33:39 +13:00