Thejaswi 85b2fb3eee [GPU-Plugin] Integration of a faster version of grow_gpu plugin into mainstream (#2360)
* Integrating a faster version of grow_gpu plugin
1. Removed the older files to reduce duplication
2. Moved all of the grow_gpu files under the 'exact' folder
3. Placed all of them inside the 'exact' namespace to avoid any conflicts
4. Fixed a bug in benchmark.py when running only the 'grow_gpu' plugin
5. Added cub and googletest submodules to ease integration and unit-testing
6. Updated CMakeLists.txt to build CUDA objects directly into libxgboost

* Added support for building GPU plugins through the make flow
1. Updated Makefile and config.mk to add the right targets
2. Added unit tests for the GPU exact plugin code

* 1. Added support for building the GPU plugin through the 'make' flow as well
2. Updated the instructions for building and testing the GPU plugin

* Fix travis-ci errors for PR#2360
1. Fixed lint errors in the unit tests
2. Removed googletest and instead depended on the gtest cache provided by dmlc-core

* Some more fixes for travis-ci lint failures in PR#2360

* Added Rory's copyright to the files containing code from both authors

* Updated the copyright statement as per Rory's request

* Moved the static datasets into a script that generates them at runtime

* 1. Print memory usage when silent=0
2. Reorganized the tests/ and test/ folders
3. Removed the googletest dependency for merely building xgboost
4. Applied coding-style updates to .cuh files as well

* Fixes for compilation warnings

* Add CUDA object files as well when JVM_BINDINGS=ON
2017-06-06 09:39:53 +12:00
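
With the plugin built into libxgboost, the GPU exact updater is selected from the Python API through the 'updater' parameter, exactly as the benchmark script below does. A minimal sketch, assuming such a build; the updater names are taken from the script, and the dataset size and round count are illustrative only:

import xgboost as xgb
from sklearn.datasets import make_classification

# Small synthetic dataset purely for illustration.
X, y = make_classification(10000, n_features=20, random_state=7)
dtrain = xgb.DMatrix(X, y)

# 'grow_gpu' is the GPU exact updater; swapping in 'grow_colmaker'
# gives the CPU exact baseline used for comparison in the benchmark.
param = {'objective': 'binary:logistic', 'silent': 1, 'updater': 'grow_gpu'}
xgb.train(param, dtrain, 10)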


# pylint: skip-file
import sys, argparse
import xgboost as xgb
import numpy as np
from sklearn.datasets import make_classification
import time
n = 1000000
num_rounds = 500

def run_benchmark(args, gpu_algorithm, cpu_algorithm):
    print("Generating dataset: {} rows * {} columns".format(args.rows, args.columns))
    X, y = make_classification(args.rows, n_features=args.columns, random_state=7)
    dtrain = xgb.DMatrix(X, y)

    param = {'objective': 'binary:logistic',
             'tree_method': 'exact',
             'max_depth': 6,
             'silent': 1,
             'eval_metric': 'auc'}

    param['updater'] = gpu_algorithm
    print("Training with '%s'" % param['updater'])
    tmp = time.time()
    xgb.train(param, dtrain, args.iterations)
    print("Time: %s seconds" % (time.time() - tmp))

    param['updater'] = cpu_algorithm
    print("Training with '%s'" % param['updater'])
    tmp = time.time()
    xgb.train(param, dtrain, args.iterations)
    print("Time: %s seconds" % (time.time() - tmp))

parser = argparse.ArgumentParser()
parser.add_argument('--algorithm', choices=['all', 'grow_gpu', 'grow_gpu_hist'], required=True)
parser.add_argument('--rows', type=int, default=1000000)
parser.add_argument('--columns', type=int, default=50)
parser.add_argument('--iterations', type=int, default=500)
args = parser.parse_args()

# Compare exactly: 'grow_gpu' is a prefix of 'grow_gpu_hist', so a substring
# check would also run the grow_gpu branch for --algorithm grow_gpu_hist.
if args.algorithm == 'grow_gpu_hist':
    run_benchmark(args, args.algorithm, 'grow_fast_histmaker')
elif args.algorithm == 'grow_gpu':
    run_benchmark(args, args.algorithm, 'grow_colmaker')
elif args.algorithm == 'all':
    run_benchmark(args, 'grow_gpu', 'grow_colmaker')
    run_benchmark(args, 'grow_gpu_hist', 'grow_fast_histmaker')
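
Usage note: --algorithm is required and must be one of 'all', 'grow_gpu', or 'grow_gpu_hist'; --rows, --columns, and --iterations default to 1000000, 50, and 500. For a quicker run, the dataset and round count can be reduced, for example (sizes illustrative):

python benchmark.py --algorithm grow_gpu --rows 100000 --iterations 10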