Recently we have been noticing many cases in which all replicas of a block end up residing on the same rack.
The block placement policy was honored during block creation, but after certain sequences of node failures the block ends up in this state.
On investigating further, I found that BlockManager#blockHasEnoughRacks depends on the config net.topology.script.file.name.
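A minimal sketch of the suspected gating (not the actual Hadoop source; class and method names here are illustrative): if the rack check is enabled only when the topology *script* key is set, then clusters that configure topology purely through a mapping class never run the check, and under-replicated-across-racks blocks go unnoticed.

```java
import java.util.HashMap;
import java.util.Map;

public class RackCheckSketch {
    // Illustrative stand-in for the suspected logic: the "enough racks"
    // check is gated solely on the presence of the topology script key,
    // ignoring net.topology.node.switch.mapping.impl.
    static boolean shouldCheckForEnoughRacks(Map<String, String> conf) {
        return conf.get("net.topology.script.file.name") != null;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Topology configured via a custom mapping class, no script set
        // (com.example.MyMapping is a hypothetical class name):
        conf.put("net.topology.node.switch.mapping.impl",
                 "com.example.MyMapping");
        // The rack check is silently skipped in this setup.
        System.out.println(shouldCheckForEnoughRacks(conf));
    }
}
```

Under this sketch, a cluster that is genuinely rack-aware (via the mapping class) is treated as flat, so re-replication after node failures can legally place all replicas on one rack.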
We specify a custom DNSToSwitchMapping implementation via net.topology.node.switch.mapping.impl and no longer set the net.topology.script.file.name config.
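For concreteness, our topology configuration looks roughly like the following core-site.xml fragment (the mapping class name is a placeholder for our internal implementation):

```xml
<configuration>
  <!-- Custom DNSToSwitchMapping implementation; placeholder class name -->
  <property>
    <name>net.topology.node.switch.mapping.impl</name>
    <value>com.example.CustomTopologyMapping</value>
  </property>
  <!-- net.topology.script.file.name is intentionally NOT set,
       which appears to disable the blockHasEnoughRacks check -->
</configuration>
```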