Add Accelerated Failure Time loss for survival analysis task (#4763)

* [WIP] Add lower and upper bounds on the label for survival analysis

* Update test MetaInfo.SaveLoadBinary to account for extra two fields

* Don't clear qids_ for version 2 of MetaInfo

* Add SetInfo() and GetInfo() method for lower and upper bounds

* changes to aft

* Add parameter class for AFT; use enums to represent distribution and event type

* Add AFT metric

* changes to neg grad to grad

* changes to binomial loss

* changes to overflow

* changes to eps

* changes to code refactoring

* changes to code refactoring

* changes to code refactoring

* Re-factor survival analysis

* Remove aft namespace

* Move function bodies out of AFTNormal and AFTLogistic, to reduce clutter

* Move function bodies out of AFTLoss, to reduce clutter

* Use smart pointer to store AFTDistribution and AFTLoss

* Rename AFTNoiseDistribution enum to AFTDistributionType for clarity

The enum class was not a distribution itself but a distribution type

* Add AFTDistribution::Create() method for convenience

* changes to extreme distribution

* changes to extreme distribution

* changes to extreme

* changes to extreme distribution

* changes to left censored

* deleted cout

* changes to x,mu and sd and code refactoring

* changes to print

* changes to hessian formula in censored and uncensored

* changes to variable names and pow

* changes to Logistic Pdf

* changes to parameter

* Expose lower and upper bound labels to R package

* Use example weights; normalize log likelihood metric

* changes to CHECK

* changes to logistic hessian to standard formula

* changes to logistic formula

* Comply with coding style guideline

* Revert back Rabit submodule

* Revert dmlc-core submodule

* Comply with coding style guideline (clang-tidy)

* Fix an error in AFTLoss::Gradient()

* Add missing files to amalgamation

* Address @RAMitchell's comment: minimize future change in MetaInfo interface

* Fix lint

* Fix compilation error on 32-bit target, when size_t == bst_uint

* Allocate sufficient memory to hold extra label info

* Use OpenMP to speed up

* Fix compilation on Windows

* Address reviewer's feedback

* Add unit tests for probability distributions

* Make Metric subclass of Configurable

* Address reviewer's feedback: Configure() AFT metric

* Add a dummy test for AFT metric configuration

* Complete AFT configuration test; remove debugging print

* Rename AFT parameters

* Clarify test comment

* Add a dummy test for AFT loss for uncensored case

* Fix a bug in AFT loss for uncensored labels

* Complete unit test for AFT loss metric

* Simplify unit tests for AFT metric

* Add unit test to verify aggregate output from AFT metric

* Use EXPECT_* instead of ASSERT_*, so that we run all unit tests

* Use aft_loss_param when serializing AFTObj

This is for consistency with the AFT metric

* Add unit tests for AFT Objective

* Fix OpenMP bug; clarify semantics for shared variables used in OpenMP loops

* Add comments

* Remove AFT prefix from probability distribution; put probability distribution in separate source file

* Add comments

* Define kPI and kEulerMascheroni in probability_distribution.h

* Add probability_distribution.cc to amalgamation

* Remove unnecessary diff

* Address reviewer's feedback: define variables where they're used

* Eliminate all INFs and NANs from AFT loss and gradient

* Add demo

* Add tutorial

* Fix lint

* Use 'survival:aft' to be consistent with 'survival:cox'

* Move sample data to demo/data

* Add visual demo with 1D toy data

* Add Python tests

Co-authored-by: Philip Cho <chohyu01@cs.washington.edu>
Author: Avinash Barnwal
Date: 2020-03-25 16:52:51 -04:00
Committed by: GitHub
Parent: 1de36cdf1e
Commit: dcf439932a
21 changed files with 1789 additions and 15 deletions


@@ -0,0 +1,54 @@
"""
Demo for survival analysis (regression) using Accelerated Failure Time (AFT) model
"""
from sklearn.model_selection import ShuffleSplit
import pandas as pd
import numpy as np
import xgboost as xgb
# The Veterans' Administration Lung Cancer Trial
# The Statistical Analysis of Failure Time Data by Kalbfleisch J. and Prentice R (1980)
df = pd.read_csv('../data/veterans_lung_cancer.csv')
print('Training data:')
print(df)
# Split features and labels
y_lower_bound = df['Survival_label_lower_bound']
y_upper_bound = df['Survival_label_upper_bound']
X = df.drop(['Survival_label_lower_bound', 'Survival_label_upper_bound'], axis=1)
# Split data into training and validation sets
rs = ShuffleSplit(n_splits=2, test_size=.7, random_state=0)
train_index, valid_index = next(rs.split(X))
dtrain = xgb.DMatrix(X.values[train_index, :])
dtrain.set_float_info('label_lower_bound', y_lower_bound[train_index])
dtrain.set_float_info('label_upper_bound', y_upper_bound[train_index])
dvalid = xgb.DMatrix(X.values[valid_index, :])
dvalid.set_float_info('label_lower_bound', y_lower_bound[valid_index])
dvalid.set_float_info('label_upper_bound', y_upper_bound[valid_index])
# Train gradient boosted trees using AFT loss and metric
params = {'verbosity': 0,
'objective': 'survival:aft',
'eval_metric': 'aft-nloglik',
'tree_method': 'hist',
'learning_rate': 0.05,
'aft_loss_distribution': 'normal',
'aft_loss_distribution_scale': 1.20,
'max_depth': 6,
'lambda': 0.01,
'alpha': 0.02}
bst = xgb.train(params, dtrain, num_boost_round=10000,
evals=[(dtrain, 'train'), (dvalid, 'valid')],
early_stopping_rounds=50)
# Run prediction on the validation set
df = pd.DataFrame({'Label (lower bound)': y_lower_bound[valid_index],
'Label (upper bound)': y_upper_bound[valid_index],
'Predicted label': bst.predict(dvalid)})
print(df)
# Show only data points with right-censored labels
print(df[np.isinf(df['Label (upper bound)'])])
# Save trained model
bst.save_model('aft_model.json')
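The demo above feeds censored labels through `label_lower_bound` and `label_upper_bound`. As a minimal standalone sketch (pure NumPy; the values and the `censoring_type` helper are illustrative, not part of XGBoost's API), here is how the two bounds encode the four censoring types:

```python
import numpy as np

INF = np.inf
# Illustrative label pairs, one per censoring type
y_lower = np.array([72.0, 100.0, -INF, 20.0])
y_upper = np.array([72.0,   INF, 15.0, 50.0])

def censoring_type(lo, hi):
    """Classify one (lower, upper) label pair; hypothetical helper for illustration."""
    if lo == hi:
        return 'uncensored'        # exact event time observed
    if np.isposinf(hi):
        return 'right-censored'    # event occurred some time after `lo`
    if np.isneginf(lo):
        return 'left-censored'     # event occurred some time before `hi`
    return 'interval-censored'     # event occurred between `lo` and `hi`

types = [censoring_type(lo, hi) for lo, hi in zip(y_lower, y_upper)]
```

In the Veterans' dataset above, only the uncensored (lower == upper) and right-censored (upper == inf) cases occur.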


@@ -0,0 +1,78 @@
"""
Demo for survival analysis (regression) using Accelerated Failure Time (AFT) model, using Optuna
to tune hyperparameters
"""
from sklearn.model_selection import ShuffleSplit
import pandas as pd
import numpy as np
import xgboost as xgb
import optuna
# The Veterans' Administration Lung Cancer Trial
# The Statistical Analysis of Failure Time Data by Kalbfleisch J. and Prentice R (1980)
df = pd.read_csv('../data/veterans_lung_cancer.csv')
print('Training data:')
print(df)
# Split features and labels
y_lower_bound = df['Survival_label_lower_bound']
y_upper_bound = df['Survival_label_upper_bound']
X = df.drop(['Survival_label_lower_bound', 'Survival_label_upper_bound'], axis=1)
# Split data into training and validation sets
rs = ShuffleSplit(n_splits=2, test_size=.7, random_state=0)
train_index, valid_index = next(rs.split(X))
dtrain = xgb.DMatrix(X.values[train_index, :])
dtrain.set_float_info('label_lower_bound', y_lower_bound[train_index])
dtrain.set_float_info('label_upper_bound', y_upper_bound[train_index])
dvalid = xgb.DMatrix(X.values[valid_index, :])
dvalid.set_float_info('label_lower_bound', y_lower_bound[valid_index])
dvalid.set_float_info('label_upper_bound', y_upper_bound[valid_index])
# Define hyperparameter search space
base_params = {'verbosity': 0,
'objective': 'survival:aft',
'eval_metric': 'aft-nloglik',
'tree_method': 'hist'} # Hyperparameters common to all trials
def objective(trial):
    params = {'learning_rate': trial.suggest_loguniform('learning_rate', 0.01, 1.0),
              'aft_loss_distribution': trial.suggest_categorical('aft_loss_distribution',
                                                                 ['normal', 'logistic', 'extreme']),
              'aft_loss_distribution_scale': trial.suggest_loguniform('aft_loss_distribution_scale', 0.1, 10.0),
              'max_depth': trial.suggest_int('max_depth', 3, 8),
              'lambda': trial.suggest_loguniform('lambda', 1e-8, 1.0),
              'alpha': trial.suggest_loguniform('alpha', 1e-8, 1.0)}  # Search space
    params.update(base_params)
    pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'valid-aft-nloglik')
    bst = xgb.train(params, dtrain, num_boost_round=10000,
                    evals=[(dtrain, 'train'), (dvalid, 'valid')],
                    early_stopping_rounds=50, verbose_eval=False, callbacks=[pruning_callback])
    if bst.best_iteration >= 25:
        return bst.best_score
    else:
        return np.inf  # Reject models with < 25 trees
# Run hyperparameter search
study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=200)
print('Completed hyperparameter tuning with best aft-nloglik = {}.'.format(study.best_trial.value))
params = {}
params.update(base_params)
params.update(study.best_trial.params)
# Re-run training with the best hyperparameter combination
print('Re-running the best trial... params = {}'.format(params))
bst = xgb.train(params, dtrain, num_boost_round=10000,
evals=[(dtrain, 'train'), (dvalid, 'valid')],
early_stopping_rounds=50)
# Run prediction on the validation set
df = pd.DataFrame({'Label (lower bound)': y_lower_bound[valid_index],
'Label (upper bound)': y_upper_bound[valid_index],
'Predicted label': bst.predict(dvalid)})
print(df)
# Show only data points with right-censored labels
print(df[np.isinf(df['Label (upper bound)'])])
# Save trained model
bst.save_model('aft_best_model.json')
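The `suggest_loguniform` calls above draw each hyperparameter uniformly in log space, so every order of magnitude in `[low, high]` is equally likely — appropriate for `lambda` and `alpha`, whose ranges span eight decades. A standalone sketch of that sampling rule (mimicking, not calling, Optuna's API; `sample_loguniform` is a hypothetical helper):

```python
import math
import random

def sample_loguniform(rng, low, high):
    """Draw uniformly in log space, then exponentiate back to [low, high]."""
    return math.exp(rng.uniform(math.log(low), math.log(high)))

rng = random.Random(0)
samples = [sample_loguniform(rng, 1e-8, 1.0) for _ in range(1000)]
# Roughly half of the draws fall below 1e-4, the midpoint in log space
```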


@@ -0,0 +1,97 @@
"""
Visual demo for survival analysis (regression) with Accelerated Failure Time (AFT) model.
This demo uses 1D toy data and visualizes how XGBoost fits a tree ensemble. The ensemble model
starts out as a flat line and evolves into a step function in order to account for all ranged
labels.
"""
import numpy as np
import xgboost as xgb
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 13})
# Function to visualize censored labels
def plot_censored_labels(X, y_lower, y_upper):
    def replace_inf(x, target_value):
        x = x.copy()  # copy so the caller's label arrays are not mutated in place
        x[np.isinf(x)] = target_value
        return x
    plt.plot(X, y_lower, 'o', label='y_lower', color='blue')
    plt.plot(X, y_upper, 'o', label='y_upper', color='fuchsia')
    plt.vlines(X, ymin=replace_inf(y_lower, 0.01), ymax=replace_inf(y_upper, 1000),
               label='Range for y', color='gray')
# Toy data
X = np.array([1, 2, 3, 4, 5]).reshape((-1, 1))
INF = np.inf
y_lower = np.array([ 10, 15, -INF, 30, 100])
y_upper = np.array([INF, INF, 20, 50, INF])
# Visualize toy data
plt.figure(figsize=(5, 4))
plot_censored_labels(X, y_lower, y_upper)
plt.ylim((6, 200))
plt.legend(loc='lower right')
plt.title('Toy data')
plt.xlabel('Input feature')
plt.ylabel('Label')
plt.yscale('log')
plt.tight_layout()
plt.show(block=True)
# Will be used to visualize XGBoost model
grid_pts = np.linspace(0.8, 5.2, 1000).reshape((-1, 1))
# Train AFT model using XGBoost
dmat = xgb.DMatrix(X)
dmat.set_float_info('label_lower_bound', y_lower)
dmat.set_float_info('label_upper_bound', y_upper)
params = {'max_depth': 3, 'objective':'survival:aft', 'min_child_weight': 0}
accuracy_history = []
def plot_intermediate_model_callback(env):
    """Custom callback to plot intermediate models"""
    # Compute y_pred = prediction using the intermediate model, at current boosting iteration
    y_pred = env.model.predict(dmat)
    # "Accuracy" = percentage of data points whose ranged label (y_lower, y_upper) includes
    # the corresponding predicted label (y_pred)
    acc = np.sum(np.logical_and(y_pred >= y_lower, y_pred <= y_upper)) / len(X) * 100
    accuracy_history.append(acc)
    # Plot ranged labels as well as predictions by the model
    plt.subplot(5, 3, env.iteration + 1)
    plot_censored_labels(X, y_lower, y_upper)
    y_pred_grid_pts = env.model.predict(xgb.DMatrix(grid_pts))
    plt.plot(grid_pts, y_pred_grid_pts, 'r-', label='XGBoost AFT model', linewidth=4)
    plt.title('Iteration {}'.format(env.iteration), x=0.5, y=0.8)
    plt.xlim((0.8, 5.2))
    plt.ylim((1 if np.min(y_pred) < 6 else 6, 200))
    plt.yscale('log')
res = {}
plt.figure(figsize=(12,13))
bst = xgb.train(params, dmat, 15, [(dmat, 'train')], evals_result=res,
callbacks=[plot_intermediate_model_callback])
plt.tight_layout()
plt.legend(loc='lower center', ncol=4,
bbox_to_anchor=(0.5, 0),
bbox_transform=plt.gcf().transFigure)
plt.tight_layout()
# Plot negative log likelihood over boosting iterations
plt.figure(figsize=(8,3))
plt.subplot(1, 2, 1)
plt.plot(res['train']['aft-nloglik'], 'b-o', label='aft-nloglik')
plt.xlabel('# Boosting Iterations')
plt.legend(loc='best')
# Plot "accuracy" over boosting iterations
# "Accuracy" = percentage of data points whose ranged label (y_lower, y_upper) includes
# the corresponding predicted label (y_pred)
plt.subplot(1, 2, 2)
plt.plot(accuracy_history, 'r-o', label='Accuracy (%)')
plt.xlabel('# Boosting Iterations')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
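The `aft-nloglik` curve plotted above is the negative log likelihood of each ranged label under the chosen error distribution: a density term for uncensored points, and the probability mass assigned to the interval for censored ones. A conceptual single-observation sketch for the normal distribution (pure Python, illustrative only; XGBoost's actual implementation adds clamping and other numerical safeguards):

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def aft_nloglik_normal(pred_log, t_lower, t_upper, sigma=1.0):
    """Negative log likelihood of one ranged label (normal error); illustrative only."""
    if t_lower == t_upper:
        # Uncensored: density of the observed time on the log scale
        z = (math.log(t_lower) - pred_log) / sigma
        return -math.log(norm_pdf(z) / (sigma * t_lower))
    # Censored: probability mass the model assigns to the interval
    z_hi = (math.log(t_upper) - pred_log) / sigma if math.isfinite(t_upper) else math.inf
    z_lo = (math.log(t_lower) - pred_log) / sigma if t_lower > 0 else -math.inf
    cdf_hi = 1.0 if math.isinf(z_hi) else norm_cdf(z_hi)
    cdf_lo = 0.0 if math.isinf(z_lo) else norm_cdf(z_lo)
    return -math.log(cdf_hi - cdf_lo)

# Right-censored at t=100 with prediction log(100): P(T > 100) = 0.5, so loss = log 2
loss = aft_nloglik_normal(math.log(100.0), 100.0, math.inf)
```

Widening a censored interval can only increase its probability mass, which is why the ensemble in the visual demo converges to a step function threading through every ranged label.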


@@ -0,0 +1,138 @@
Survival_label_lower_bound,Survival_label_upper_bound,Age_in_years,Karnofsky_score,Months_from_Diagnosis,Celltype=adeno,Celltype=large,Celltype=smallcell,Celltype=squamous,Prior_therapy=no,Prior_therapy=yes,Treatment=standard,Treatment=test
72.0,72.0,69.0,60.0,7.0,0,0,0,1,1,0,1,0
411.0,411.0,64.0,70.0,5.0,0,0,0,1,0,1,1,0
228.0,228.0,38.0,60.0,3.0,0,0,0,1,1,0,1,0
126.0,126.0,63.0,60.0,9.0,0,0,0,1,0,1,1,0
118.0,118.0,65.0,70.0,11.0,0,0,0,1,0,1,1,0
10.0,10.0,49.0,20.0,5.0,0,0,0,1,1,0,1,0
82.0,82.0,69.0,40.0,10.0,0,0,0,1,0,1,1,0
110.0,110.0,68.0,80.0,29.0,0,0,0,1,1,0,1,0
314.0,314.0,43.0,50.0,18.0,0,0,0,1,1,0,1,0
100.0,inf,70.0,70.0,6.0,0,0,0,1,1,0,1,0
42.0,42.0,81.0,60.0,4.0,0,0,0,1,1,0,1,0
8.0,8.0,63.0,40.0,58.0,0,0,0,1,0,1,1,0
144.0,144.0,63.0,30.0,4.0,0,0,0,1,1,0,1,0
25.0,inf,52.0,80.0,9.0,0,0,0,1,0,1,1,0
11.0,11.0,48.0,70.0,11.0,0,0,0,1,0,1,1,0
30.0,30.0,61.0,60.0,3.0,0,0,1,0,1,0,1,0
384.0,384.0,42.0,60.0,9.0,0,0,1,0,1,0,1,0
4.0,4.0,35.0,40.0,2.0,0,0,1,0,1,0,1,0
54.0,54.0,63.0,80.0,4.0,0,0,1,0,0,1,1,0
13.0,13.0,56.0,60.0,4.0,0,0,1,0,1,0,1,0
123.0,inf,55.0,40.0,3.0,0,0,1,0,1,0,1,0
97.0,inf,67.0,60.0,5.0,0,0,1,0,1,0,1,0
153.0,153.0,63.0,60.0,14.0,0,0,1,0,0,1,1,0
59.0,59.0,65.0,30.0,2.0,0,0,1,0,1,0,1,0
117.0,117.0,46.0,80.0,3.0,0,0,1,0,1,0,1,0
16.0,16.0,53.0,30.0,4.0,0,0,1,0,0,1,1,0
151.0,151.0,69.0,50.0,12.0,0,0,1,0,1,0,1,0
22.0,22.0,68.0,60.0,4.0,0,0,1,0,1,0,1,0
56.0,56.0,43.0,80.0,12.0,0,0,1,0,0,1,1,0
21.0,21.0,55.0,40.0,2.0,0,0,1,0,0,1,1,0
18.0,18.0,42.0,20.0,15.0,0,0,1,0,1,0,1,0
139.0,139.0,64.0,80.0,2.0,0,0,1,0,1,0,1,0
20.0,20.0,65.0,30.0,5.0,0,0,1,0,1,0,1,0
31.0,31.0,65.0,75.0,3.0,0,0,1,0,1,0,1,0
52.0,52.0,55.0,70.0,2.0,0,0,1,0,1,0,1,0
287.0,287.0,66.0,60.0,25.0,0,0,1,0,0,1,1,0
18.0,18.0,60.0,30.0,4.0,0,0,1,0,1,0,1,0
51.0,51.0,67.0,60.0,1.0,0,0,1,0,1,0,1,0
122.0,122.0,53.0,80.0,28.0,0,0,1,0,1,0,1,0
27.0,27.0,62.0,60.0,8.0,0,0,1,0,1,0,1,0
54.0,54.0,67.0,70.0,1.0,0,0,1,0,1,0,1,0
7.0,7.0,72.0,50.0,7.0,0,0,1,0,1,0,1,0
63.0,63.0,48.0,50.0,11.0,0,0,1,0,1,0,1,0
392.0,392.0,68.0,40.0,4.0,0,0,1,0,1,0,1,0
10.0,10.0,67.0,40.0,23.0,0,0,1,0,0,1,1,0
8.0,8.0,61.0,20.0,19.0,1,0,0,0,0,1,1,0
92.0,92.0,60.0,70.0,10.0,1,0,0,0,1,0,1,0
35.0,35.0,62.0,40.0,6.0,1,0,0,0,1,0,1,0
117.0,117.0,38.0,80.0,2.0,1,0,0,0,1,0,1,0
132.0,132.0,50.0,80.0,5.0,1,0,0,0,1,0,1,0
12.0,12.0,63.0,50.0,4.0,1,0,0,0,0,1,1,0
162.0,162.0,64.0,80.0,5.0,1,0,0,0,1,0,1,0
3.0,3.0,43.0,30.0,3.0,1,0,0,0,1,0,1,0
95.0,95.0,34.0,80.0,4.0,1,0,0,0,1,0,1,0
177.0,177.0,66.0,50.0,16.0,0,1,0,0,0,1,1,0
162.0,162.0,62.0,80.0,5.0,0,1,0,0,1,0,1,0
216.0,216.0,52.0,50.0,15.0,0,1,0,0,1,0,1,0
553.0,553.0,47.0,70.0,2.0,0,1,0,0,1,0,1,0
278.0,278.0,63.0,60.0,12.0,0,1,0,0,1,0,1,0
12.0,12.0,68.0,40.0,12.0,0,1,0,0,0,1,1,0
260.0,260.0,45.0,80.0,5.0,0,1,0,0,1,0,1,0
200.0,200.0,41.0,80.0,12.0,0,1,0,0,0,1,1,0
156.0,156.0,66.0,70.0,2.0,0,1,0,0,1,0,1,0
182.0,inf,62.0,90.0,2.0,0,1,0,0,1,0,1,0
143.0,143.0,60.0,90.0,8.0,0,1,0,0,1,0,1,0
105.0,105.0,66.0,80.0,11.0,0,1,0,0,1,0,1,0
103.0,103.0,38.0,80.0,5.0,0,1,0,0,1,0,1,0
250.0,250.0,53.0,70.0,8.0,0,1,0,0,0,1,1,0
100.0,100.0,37.0,60.0,13.0,0,1,0,0,0,1,1,0
999.0,999.0,54.0,90.0,12.0,0,0,0,1,0,1,0,1
112.0,112.0,60.0,80.0,6.0,0,0,0,1,1,0,0,1
87.0,inf,48.0,80.0,3.0,0,0,0,1,1,0,0,1
231.0,inf,52.0,50.0,8.0,0,0,0,1,0,1,0,1
242.0,242.0,70.0,50.0,1.0,0,0,0,1,1,0,0,1
991.0,991.0,50.0,70.0,7.0,0,0,0,1,0,1,0,1
111.0,111.0,62.0,70.0,3.0,0,0,0,1,1,0,0,1
1.0,1.0,65.0,20.0,21.0,0,0,0,1,0,1,0,1
587.0,587.0,58.0,60.0,3.0,0,0,0,1,1,0,0,1
389.0,389.0,62.0,90.0,2.0,0,0,0,1,1,0,0,1
33.0,33.0,64.0,30.0,6.0,0,0,0,1,1,0,0,1
25.0,25.0,63.0,20.0,36.0,0,0,0,1,1,0,0,1
357.0,357.0,58.0,70.0,13.0,0,0,0,1,1,0,0,1
467.0,467.0,64.0,90.0,2.0,0,0,0,1,1,0,0,1
201.0,201.0,52.0,80.0,28.0,0,0,0,1,0,1,0,1
1.0,1.0,35.0,50.0,7.0,0,0,0,1,1,0,0,1
30.0,30.0,63.0,70.0,11.0,0,0,0,1,1,0,0,1
44.0,44.0,70.0,60.0,13.0,0,0,0,1,0,1,0,1
283.0,283.0,51.0,90.0,2.0,0,0,0,1,1,0,0,1
15.0,15.0,40.0,50.0,13.0,0,0,0,1,0,1,0,1
25.0,25.0,69.0,30.0,2.0,0,0,1,0,1,0,0,1
103.0,inf,36.0,70.0,22.0,0,0,1,0,0,1,0,1
21.0,21.0,71.0,20.0,4.0,0,0,1,0,1,0,0,1
13.0,13.0,62.0,30.0,2.0,0,0,1,0,1,0,0,1
87.0,87.0,60.0,60.0,2.0,0,0,1,0,1,0,0,1
2.0,2.0,44.0,40.0,36.0,0,0,1,0,0,1,0,1
20.0,20.0,54.0,30.0,9.0,0,0,1,0,0,1,0,1
7.0,7.0,66.0,20.0,11.0,0,0,1,0,1,0,0,1
24.0,24.0,49.0,60.0,8.0,0,0,1,0,1,0,0,1
99.0,99.0,72.0,70.0,3.0,0,0,1,0,1,0,0,1
8.0,8.0,68.0,80.0,2.0,0,0,1,0,1,0,0,1
99.0,99.0,62.0,85.0,4.0,0,0,1,0,1,0,0,1
61.0,61.0,71.0,70.0,2.0,0,0,1,0,1,0,0,1
25.0,25.0,70.0,70.0,2.0,0,0,1,0,1,0,0,1
95.0,95.0,61.0,70.0,1.0,0,0,1,0,1,0,0,1
80.0,80.0,71.0,50.0,17.0,0,0,1,0,1,0,0,1
51.0,51.0,59.0,30.0,87.0,0,0,1,0,0,1,0,1
29.0,29.0,67.0,40.0,8.0,0,0,1,0,1,0,0,1
24.0,24.0,60.0,40.0,2.0,1,0,0,0,1,0,0,1
18.0,18.0,69.0,40.0,5.0,1,0,0,0,0,1,0,1
83.0,inf,57.0,99.0,3.0,1,0,0,0,1,0,0,1
31.0,31.0,39.0,80.0,3.0,1,0,0,0,1,0,0,1
51.0,51.0,62.0,60.0,5.0,1,0,0,0,1,0,0,1
90.0,90.0,50.0,60.0,22.0,1,0,0,0,0,1,0,1
52.0,52.0,43.0,60.0,3.0,1,0,0,0,1,0,0,1
73.0,73.0,70.0,60.0,3.0,1,0,0,0,1,0,0,1
8.0,8.0,66.0,50.0,5.0,1,0,0,0,1,0,0,1
36.0,36.0,61.0,70.0,8.0,1,0,0,0,1,0,0,1
48.0,48.0,81.0,10.0,4.0,1,0,0,0,1,0,0,1
7.0,7.0,58.0,40.0,4.0,1,0,0,0,1,0,0,1
140.0,140.0,63.0,70.0,3.0,1,0,0,0,1,0,0,1
186.0,186.0,60.0,90.0,3.0,1,0,0,0,1,0,0,1
84.0,84.0,62.0,80.0,4.0,1,0,0,0,0,1,0,1
19.0,19.0,42.0,50.0,10.0,1,0,0,0,1,0,0,1
45.0,45.0,69.0,40.0,3.0,1,0,0,0,1,0,0,1
80.0,80.0,63.0,40.0,4.0,1,0,0,0,1,0,0,1
52.0,52.0,45.0,60.0,4.0,0,1,0,0,1,0,0,1
164.0,164.0,68.0,70.0,15.0,0,1,0,0,0,1,0,1
19.0,19.0,39.0,30.0,4.0,0,1,0,0,0,1,0,1
53.0,53.0,66.0,60.0,12.0,0,1,0,0,1,0,0,1
15.0,15.0,63.0,30.0,5.0,0,1,0,0,1,0,0,1
43.0,43.0,49.0,60.0,11.0,0,1,0,0,0,1,0,1
340.0,340.0,64.0,80.0,10.0,0,1,0,0,0,1,0,1
133.0,133.0,65.0,75.0,1.0,0,1,0,0,1,0,0,1
111.0,111.0,64.0,60.0,5.0,0,1,0,0,1,0,0,1
231.0,231.0,67.0,70.0,18.0,0,1,0,0,0,1,0,1
378.0,378.0,65.0,80.0,4.0,0,1,0,0,1,0,0,1
49.0,49.0,37.0,30.0,3.0,0,1,0,0,1,0,0,1