Description
The Spark scheduler allocates memory only for the executor and CPU only for its tasks.
It can therefore happen that nearly all memory on an agent is held by Spark executors while all of its CPU resources sit idle.
In this case Mesos stops making offers for that agent, because less than MIN_MEM (= 32 MB) of memory is allocatable.
This effectively deadlocks the Spark job, as it is never offered the CPU resources it needs to launch new tasks.
See HierarchicalAllocatorProcess::allocatable(const Resources&), called from HierarchicalAllocatorProcess::allocate(const hashset<SlaveID>&):
template <class RoleSorter, class FrameworkSorter>
bool HierarchicalAllocatorProcess<RoleSorter, FrameworkSorter>::allocatable(
    const Resources& resources)
{
  ...
  Option<double> cpus = resources.cpus();
  Option<Bytes> mem = resources.mem();

  if (cpus.isSome() && mem.isSome()) {
    return cpus.get() >= MIN_CPUS && mem.get() > MIN_MEM;
  }

  return false;
}
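To make the effect concrete, below is a minimal standalone sketch (plain C++, not the Mesos codebase; the agent numbers are made up) that mirrors the predicate above, assuming the Mesos minimums of MIN_CPUS = 0.01 and MIN_MEM = 32 MB:

#include <cstdint>
#include <iostream>

// Same thresholds the Mesos allocator uses (MIN_MEM in MB).
const double MIN_CPUS = 0.01;
const uint64_t MIN_MEM = 32;

// Simplified stand-in for the allocatable() predicate above: both the
// CPU and the memory remainder must clear their minimums.
bool allocatable(double cpus, uint64_t memMB)
{
  return cpus >= MIN_CPUS && memMB > MIN_MEM;
}

int main()
{
  // Hypothetical agent: 8 CPUs are idle, but Spark executors hold all
  // but 8 MB of its memory.
  double idleCpus = 8.0;
  uint64_t freeMemMB = 8;

  // Prints "allocatable: no" -- the 8 idle CPUs are never offered, so
  // no new tasks can be launched: the deadlock described above.
  std::cout << "allocatable: "
            << (allocatable(idleCpus, freeMemMB) ? "yes" : "no")
            << std::endl;
  return 0;
}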
A possible solution may be to drop the condition on allocatable memory entirely.
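As a rough illustration of that idea (a sketch only, not a tested patch), the predicate above would then reduce to a CPU-only check:

template <class RoleSorter, class FrameworkSorter>
bool HierarchicalAllocatorProcess<RoleSorter, FrameworkSorter>::allocatable(
    const Resources& resources)
{
  Option<double> cpus = resources.cpus();

  // Memory is no longer gated: any remainder with a usable amount of
  // CPU is considered allocatable and will be offered.
  return cpus.isSome() && cpus.get() >= MIN_CPUS;
}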
Issue Links
- relates to MESOS-8626, "The 'allocatable' check in the allocator is problematic with multi-role frameworks" (Resolved)