Job cancellation in Spark is limited because executor threads that are blocked are never interrupted. The cancellation itself succeeds and the job is no longer "running", but executor threads may remain tied up in the cancelled job's tasks and unable to do further work until the blocking call returns. This is particularly problematic in the case of deadlock or long/unbounded timeouts.
It would be useful if cancelling a job called Thread.interrupt() on the corresponding task threads, which would break out of blocking in most situations, such as waits on Object monitors or blocking IO. The one caveat is HDFS-1208: HDFS's DFSClient will not only swallow InterruptedException but may reinterpret it as an IOException, causing HDFS to mark a node as permanently failed. Thus, this feature must be optional and probably off by default.
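As a minimal sketch of the mechanism being proposed (not Spark code, just the underlying JVM behavior): a thread parked in Object.wait() sits there indefinitely, exactly like an executor thread stuck on a cancelled job, but Thread.interrupt() makes the wait throw InterruptedException so the thread can unwind and free itself for other work. The class and variable names here are illustrative only.

```java
// Demonstrates that Thread.interrupt() unblocks a thread parked in
// Object.wait() -- the same mechanism proposed for freeing executor
// threads when a job is cancelled.
public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        final Object lock = new Object();
        Thread worker = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait();  // blocks indefinitely, like a stuck task
                    System.out.println("woke normally");
                } catch (InterruptedException e) {
                    // the cancellation path: the blocked call aborts cleanly
                    System.out.println("interrupted while blocked");
                }
            }
        });
        worker.start();
        Thread.sleep(200);   // give the worker time to reach wait()
        worker.interrupt();  // what job cancellation would do per-task-thread
        worker.join();
        System.out.println("worker finished: " + !worker.isAlive());
    }
}
```

The HDFS caveat above is precisely about the catch block: code like DFSClient that swallows the InterruptedException, or rethrows it as IOException, defeats or misinterprets this signal, which is why the behavior needs to be opt-in.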