* Initial performance optimizations for xgboost
* remove includes
* revert float->double
* fix for CI
* Check existence of _mm_prefetch and __builtin_prefetch
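A minimal C++ sketch of how such an existence check can be consumed; the feature macros below are illustrative placeholders for whatever the build system actually defines, not necessarily xgboost's names:
```cpp
// Hypothetical macros assumed to be set by a build-system existence check.
#if defined(HAS_MM_PREFETCH)
#include <xmmintrin.h>
#endif

inline void PrefetchRead(const void* ptr) {
#if defined(HAS_MM_PREFETCH)
  _mm_prefetch(reinterpret_cast<const char*>(ptr), _MM_HINT_T0);
#elif defined(HAS_BUILTIN_PREFETCH)
  __builtin_prefetch(ptr, /*rw=*/0, /*locality=*/3);
#else
  (void)ptr;  // neither intrinsic detected: prefetching becomes a no-op
#endif
}
```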
* Fix lint
* Upgrading to NCCL2
* Part II of the NCCL2 upgrade
- Doc updates to build with nccl2
- Dockerfile.gpu update for a correct CI build with nccl2
- Updated the FindNccl package so that the NCCL_ROOT env-var takes precedence
* Upgrade to v9.2 for the CI workflow, since it has the nccl2 binaries available
* Added the NCCL2 license and copied the nccl binaries into /usr so that the FindNccl module can find them
* Set the LD_LIBRARY_PATH variable to pick up the nccl2 binary at runtime
* Add the nccl2 library download instructions to Dockerfile.release as well
* Use NCCL2 as a static library
* Update dmlc-core submodule
* Fix dense_parser to work with the latest dmlc-core
* Specify location of Google Test
* Add more source files in dmlc-minimum to get latest dmlc-core working
* Update dmlc-core submodule
- Implement colsampling, subsampling for gpu_hist_experimental
- Optimised multi-GPU implementation for gpu_hist_experimental
- Make nccl optional
- Add Volta architecture flag
- Optimise RegLossObj
- Add timing utilities for debug verbose mode
- Bump required cuda version to 8.0
* Only set OpenMP_CXX_FLAGS when OpenMP is found
I found this while trying to get the Mac build working without OpenMP. Tips in
issue #2596 helped point me in the right direction.
* Revise check
* Trigger codecov
* [R] MSVC compatibility
* [GPU] allow seed in BernoulliRng up to size_t and scale to uint32_t
* R package build with cmake and CUDA
* R package CUDA build fixes and cleanups
* always export the R package native initialization routine on Windows
* update the install instructions doc
* fix lint
* use static_cast directly to set BernoulliRng seed
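A minimal sketch of the seed handling described in the two BernoulliRng items above; `SeedFromSizeT` is an illustrative helper name, not xgboost's actual API:
```cpp
#include <cstddef>
#include <cstdint>

// Accept a seed as wide as size_t and narrow it explicitly to the 32-bit
// value the RNG consumes, instead of relying on an implicit conversion.
inline std::uint32_t SeedFromSizeT(std::size_t seed) {
  return static_cast<std::uint32_t>(seed);  // keeps the low 32 bits
}
```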
* [R] demo for GPU accelerated algorithm
* tidy up the R package cmake stuff
* R package cmake: install the main dependency packages if needed
* [R] version bump in DESCRIPTION
* update NEWS
* added a short explanation of missing/sparse values to the FAQ
* Removal of redundant code/files.
* Removal of exact namespace in GPU plugin
* Revert double precision histograms to single precision for performance on Maxwell/Kepler
* for MinGW, drop the 'lib' prefix from shared library name
* fix defines for 'g++ 4.8 or higher' to include g++ >= 5
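To illustrate the kind of define being fixed: a check that only accepts `__GNUC_MINOR__ >= 8` within the 4.x series silently excludes g++ 5 and later, whereas a form like the sketch below covers "g++ 4.8 or higher" as intended (the macro name is a placeholder):
```cpp
#if defined(__GNUC__) && \
    (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8))
// Placeholder name; the point is the version comparison itself.
#define EXAMPLE_GXX_4_8_OR_HIGHER 1
#endif
```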
* fix compile warnings
* [Appveyor] add MinGW with python; remove redundant jobs
* [Appveyor] also do python build for one of msvc jobs
* [jvm-packages] Fixed compilation on Windows
* [jvm-packages] Build the JNI bindings on Appveyor
* [jvm-packages] Build & test on OS X
* [jvm-packages] Re-applied the CMake build changes reverted by #2395
* Fixed Appveyor JVM build
* Muted Maven on Travis
* Don't link with libawt
* "linux2"->"linux"
Python 2.x and 3.x use slightly different values for ``sys.platform``.
* Support for building gpu plugins for specific GPU architectures
1. Option GPU_COMPUTE_VER exposed from both Makefile and CMakeLists.txt
2. updater_gpu documentation updated accordingly
* Re-introduced GPU_COMPUTE_VER option in the cmake flow.
This seems to fix the compile-time, rdc=true and copy-constructor related
errors seen and discussed in PR #2390.
* [jvm-packages] Fixed JNI_OnLoad overload
It does not compile on Windows without proper export flags.
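A minimal sketch of a JNI_OnLoad definition carrying the export flags mentioned above via the JNIEXPORT/JNICALL macros from <jni.h>; the returned JNI version is only an example:
```cpp
#include <jni.h>

// JNIEXPORT/JNICALL provide the DLL-export and calling-convention
// decorations that the Windows build needs, per the note above.
extern "C" JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM* vm, void* reserved) {
  (void)vm;
  (void)reserved;
  return JNI_VERSION_1_6;  // illustrative; report whatever version the bindings require
}
```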
* [jvm-packages] Use JNI types directly where appropriate
* Removed lib hack from CMake build
Prior to this commit the CMake build used a hardcoded lib prefix for
libxgboost and libxgboost4j. Unfortunately this did not play well with
Windows, which does not use the lib- prefix.
* Integrating a faster version of grow_gpu plugin
1. Removed the older files to reduce duplication
2. Moved all of the grow_gpu files under 'exact' folder
3. All of them are inside 'exact' namespace to avoid any conflicts
4. Fixed a bug in benchmark.py when running only the 'grow_gpu' plugin
5. Added cub and googletest submodules to ease integration and unit-testing
6. Updates to CMakeLists.txt to directly build cuda objects into libxgboost
* Added support for building gpu plugins through make flow
1. updated the makefile and config.mk to add the right targets
2. added unit-tests for gpu exact plugin code
* 1. Added support for building gpu plugin using 'make' flow as well
2. Updated instructions for building and testing gpu plugin
* Fix travis-ci errors for PR#2360
1. lint errors on unit-tests
2. removed googletest and instead depended on the gtest cache provided by dmlc-core
* Some more fixes for travis-ci lint failures in PR#2360
* Added Rory's copyright to the files containing code from both of us
* updated copyright statement as per Rory's request
* moved the static datasets into a script to generate them at runtime
* 1. memory usage print when silent=0
2. tests/ and test/ folder organization
3. removal of the dependency of googletest for just building xgboost
4. coding style updates for .cuh as well
* Fixes for compilation warnings
* add cuda object files as well when JVM_BINDINGS=ON
* [jvm-packages] Added libxgboost4j to CMake build
* [jvm-packages] Wired CMake build into create_jni.sh
* Use newer CMake version on Travis
* Lowered CMake version constraints
* Fixed various quirks in the new CMake build
add_library(libxgboost SHARED ${SOURCES}) builds a library named
liblibxgboost.so; however, simply changing it to add_library(xgboost ...)
won't work, as add_executable(xgboost ...) and add_library(xgboost ...)
would then have the same target name. This patch correctly handles the
same-name situation through SET_TARGET_PROPERTIES.
Fixed to work with later versions of Visual Studio, e.g. 2015.
MSVC has its own section for setting compile parameters, so it should not need
to fall into the section below that checks for C++11 support; that support is
definitely already present. This is not an issue for Visual Studio 2012, but it
breaks for later versions such as 2015, where the default C++ standard is
C++14 (still backward compatible with C++11).