* cancel job instead of killing SparkContext
This PR changes the default behavior of killing the SparkContext when a task fails. Instead, this PR cancels the job that the failed task belongs to, so the SparkContext stays alive even when such failures occur (a sketch of the idea follows this list).
* add a parameter to control whether to kill the SparkContext
* cancel the jobs that the failed task belongs to
* remove the jobId from the map when a job fails
* resolve comments
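
A minimal sketch of the intended behavior, written as a standalone `SparkListener` for illustration; the class name, the `killSparkContext` flag, and the stage-to-job map are assumptions for this sketch, not the PR's actual code:

```scala
import scala.collection.concurrent.TrieMap

import org.apache.spark.{SparkContext, TaskFailedReason}
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd,
  SparkListenerJobStart, SparkListenerTaskEnd}

// Illustrative listener: on task failure, cancel only the owning job
// instead of stopping the whole SparkContext.
class CancelJobOnFailureListener(
    sc: SparkContext,
    killSparkContext: Boolean) extends SparkListener {

  // stageId -> jobId, filled in when a job starts
  private val stageToJob = TrieMap.empty[Int, Int]

  override def onJobStart(jobStart: SparkListenerJobStart): Unit = {
    jobStart.stageIds.foreach(stageId => stageToJob.put(stageId, jobStart.jobId))
  }

  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    taskEnd.reason match {
      case _: TaskFailedReason =>
        if (killSparkContext) {
          // old default: tear down the whole SparkContext
          sc.stop()
        } else {
          // new default: cancel only the job the failed task belongs to
          stageToJob.get(taskEnd.stageId).foreach(jobId => sc.cancelJob(jobId))
        }
      case _ => // task succeeded, nothing to do
    }
  }

  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
    // drop the finished (or failed) job's stages so the map does not grow
    stageToJob.foreach { case (stageId, jobId) =>
      if (jobId == jobEnd.jobId) stageToJob.remove(stageId)
    }
  }
}
```

Wiring it up could then look like `sc.addSparkListener(new CancelJobOnFailureListener(sc, killSparkContext = sc.getConf.getBoolean("spark.example.killSparkContextOnTaskFailure", false)))`, where the config key stands in for whatever the new parameter is named.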