Users could submit their jobs from a host external to the cluster, which may not have the required keytab available locally (also discussed here).
Moreover, in cluster mode it does not make much sense to reference a local resource unless it is uploaded to, or stored somewhere in, the cluster. On YARN, HDFS is used; on Mesos, and certainly on DC/OS right now, the secret store is used for storing secrets and consequently keytabs. There is a check here that makes spark-submit difficult to use in such deployment scenarios.
On DC/OS the workaround is to submit directly to the Mesos dispatcher's REST API, passing the spark.yarn.keytab property pointing to a path within the driver's container where the keytab will be mounted after it is fetched from the secret store at container launch time. The goal is to make spark-submit flexible enough for Mesos in cluster mode, since DC/OS users often want to deploy that way.
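A minimal sketch of that workaround follows. It builds a CreateSubmissionRequest payload for the dispatcher's REST endpoint, with spark.yarn.keytab pointing at an in-container path rather than a local file. The dispatcher URL, mount path, jar location, principal, and Spark version below are illustrative placeholders, not values from this ticket.

```shell
#!/bin/sh
# Placeholder values -- substitute your own dispatcher address and paths.
DISPATCHER="http://spark-dispatcher.example.com:7077"
# Path inside the driver container where the secret store mounts the keytab.
KEYTAB_PATH="/mnt/mesos/sandbox/user.keytab"

# Assemble the submission request; spark.yarn.keytab references the
# container-local mount point, which does not exist on the submitting host.
PAYLOAD=$(cat <<EOF
{
  "action": "CreateSubmissionRequest",
  "appResource": "https://example.com/jobs/my-job.jar",
  "mainClass": "com.example.MyJob",
  "appArgs": [],
  "clientSparkVersion": "2.3.0",
  "environmentVariables": {},
  "sparkProperties": {
    "spark.app.name": "my-job",
    "spark.submit.deployMode": "cluster",
    "spark.yarn.keytab": "${KEYTAB_PATH}",
    "spark.yarn.principal": "user@EXAMPLE.COM"
  }
}
EOF
)

echo "$PAYLOAD"
# Actual submission (commented out; requires a reachable dispatcher):
# curl -X POST -H "Content-Type: application/json" \
#   -d "$PAYLOAD" "${DISPATCHER}/v1/submissions/create"
```

Because the payload is posted straight to the dispatcher, the spark-submit local-file check is bypassed entirely, which is exactly the gap this ticket asks spark-submit to close.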