Improvement request: anti-affinity block placement across datanodes, so that for a given data set the blocks are distributed evenly across all available datanodes, improving task scheduling while maintaining data locality.
Methods to be implemented:
- a dfs balancer command switch, plus a target path of files / directories containing the blocks to be rebalanced
- a DFS client-side write flag
Both options should proactively (re)distribute the given data set as evenly as possible across all datanodes in the cluster.
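As a rough illustration of what "as evenly as possible" means here, the sketch below (hypothetical, not the HDFS API; `spread_blocks` and its parameters are invented for this example) greedily assigns each block's replicas to the currently least-loaded datanodes, which keeps per-node block counts within one of each other:

```python
from collections import Counter

def spread_blocks(num_blocks, datanodes, replication=3):
    """Hypothetical anti-affinity placement: put each replica on the
    currently least-loaded datanode, with all replicas of a block on
    distinct nodes."""
    load = Counter({dn: 0 for dn in datanodes})
    placement = {}
    for block in range(num_blocks):
        # pick the `replication` least-loaded nodes (all distinct)
        targets = sorted(datanodes, key=lambda dn: load[dn])[:replication]
        placement[block] = targets
        for dn in targets:
            load[dn] += 1
    return placement, load

placement, load = spread_blocks(
    num_blocks=20, datanodes=[f"dn{i}" for i in range(8)], replication=3)
# 20 blocks * 3 replicas = 60 replicas over 8 nodes: each node ends up
# with 7 or 8 blocks, so every node can serve data-local tasks.
assert max(load.values()) - min(load.values()) <= 1
```

This is only a model of the desired end state; a real implementation would live in the balancer / block placement policy and also respect rack awareness.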
See the following Spark issue, which causes massive under-utilisation across jobs: only 30-50% of executor cores were being used for tasks, due to data-locality targeting. Many executors were doing literally nothing while holding significant cluster resources, because the data set, which in at least one job was large enough to produce 30,000 tasks, churned through slowly on only a subset of the available executors. The workaround in the end was to disable data-local tasks in Spark, but if everyone did that the bottleneck would move back to the network, undermining Hadoop's founding premise of not moving the data to the compute. For performance-critical jobs, returning containers to YARN because they cannot find any data to execute on locally isn't a good idea either: those jobs should use all the resources allocated to them, not just the resources on the subset of nodes holding a given dataset, and not have to disable data-local execution and pull half the blocks across the network just to make use of the other half of the nodes.
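For reference, the Spark workaround described above (disabling the data-locality preference) is commonly achieved by setting `spark.locality.wait` to 0, for example:

```shell
# Workaround sketch: spark.locality.wait=0 tells Spark not to wait for a
# data-local slot, so tasks are scheduled on any free executor immediately,
# trading locality for utilisation. Class and jar names are placeholders.
spark-submit \
  --conf spark.locality.wait=0 \
  --class com.example.MyJob \
  my-job.jar
```

This recovers executor utilisation but pushes block reads onto the network, which is exactly the trade-off the proposed anti-affinity placement is meant to avoid.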