The Spark scheduler allocates memory only when launching an executor, but CPUs only when launching that executor's tasks.
It can therefore happen that nearly all memory is allocated by Spark executors while all CPU resources sit idle.
In this case Mesos stops making offers, because less than MIN_MEM (= 32 MB) of memory is allocatable.
This effectively deadlocks the Spark job: it is never offered the CPU resources it needs to launch new tasks.
See HierarchicalAllocatorProcess::allocatable(const Resources&), called from HierarchicalAllocatorProcess::allocate(const hashset&lt;SlaveID&gt;&) — sorry, plain identifiers: called from HierarchicalAllocatorProcess::allocate(const hashset<SlaveID>&).
A possible solution may be to drop the condition on allocatable memory entirely.