It's currently not possible to provide a custom logging configuration for the Mesos executors.
When the executor JVM starts up, it loads a default config file from the Spark assembly, visible by this line in stderr:
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
That line comes from Logging.scala, where a default config is loaded if none is found on the classpath when the Spark Mesos executor starts up in the Mesos sandbox. At that point in time, none of the application-specific resources have been shipped yet, as the executor JVM is only just starting up.
To load a custom configuration file, it must already be in the sandbox before the executor JVM starts, and it must be added to the classpath by the startup command.
For the classpath customization, it looks like it should be possible to pass a -Dlog4j.configuration property by using 'spark.executor.extraClassPath', which would be picked up and added to the command that starts the executor JVM, but the resource must already be on the host before we can do that. Therefore we need some means of 'shipping' the log4j.configuration file to the allocated executor.
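If the file were already present in the sandbox, the wiring could in principle look like the following spark-defaults.conf fragment (a sketch; the paths and file name are hypothetical, and this does not solve the shipping problem itself):

```
# Hypothetical: assumes log4j-custom.properties already sits in the
# executor's Mesos sandbox working directory before the JVM starts.
spark.executor.extraClassPath    .
spark.executor.extraJavaOptions  -Dlog4j.configuration=log4j-custom.properties
```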
This all boils down to the need of shipping extra files to the sandbox.
There's a workaround: open up the Spark assembly, replace the log4j-defaults.properties, and pack it up again. That would work, although it is kind of rudimentary, as people may use the same assembly for many jobs. Probably, accessing the log4j API programmatically should also work (we didn't try that yet).
SPARK-8798 Allow additional uris to be fetched with mesos