It's hard to resume writing a block when a connection fails, since you don't know how much of the previous write succeeded. Currently the block is streamed over TCP connections. We could instead write it as a series of length-prefixed buffers, and on reconnect query the remote datanode about which buffers it had received, etc. But that seems like reinventing a lot of TCP.
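To make that tradeoff concrete, here is a minimal sketch of what such a buffer-level resume protocol might look like. All names here are hypothetical, not part of any actual datanode wire format, and it assumes the client keeps the whole block in memory as an array of buffers:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.Socket;

    // Hypothetical resumable transfer: the block is sent as
    // sequence-numbered, length-prefixed buffers, so a reconnect
    // only resends what the datanode never received.
    public class ResumableBlockWriter {

      // Send buffers[startSeq..] over a fresh connection.
      static void sendFrom(Socket s, byte[][] buffers, int startSeq)
          throws IOException {
        DataOutputStream out = new DataOutputStream(s.getOutputStream());
        for (int seq = startSeq; seq < buffers.length; seq++) {
          out.writeInt(seq);                  // sequence number
          out.writeInt(buffers[seq].length);  // length prefix
          out.write(buffers[seq]);            // payload
        }
        out.writeInt(-1);                     // end-of-block marker
        out.flush();
      }

      // On reconnect, ask the datanode which buffer it expects next,
      // then resume from there rather than resending the whole block.
      static void resume(Socket s, byte[][] buffers) throws IOException {
        DataInputStream in = new DataInputStream(s.getInputStream());
        int nextExpected = in.readInt();  // remote reports last received + 1
        sendFrom(s, buffers, nextExpected);
      }
    }

Sequence numbers, retransmission after reconnect, an end-of-stream marker: that is essentially TCP's job done again one layer up, which is the objection.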
If the datanode goes down, the entire block is currently kept in a temp file so that it can instead be written to a different datanode. Thus if datanodes die during, e.g., a reduce, the reduce task does not have to restart. But if reduce tasks are running on the same pool of machines as the datanodes, then when a node fails some reduce tasks will need to be restarted anyway. So I agree that this may not be helping us much. I think throwing an exception when the connection to the datanode fails would be fine.
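For contrast, the simpler failure path might look something like the following. Again just a sketch with hypothetical names: it wraps the datanode stream so that a connection failure surfaces to the caller instead of falling back to a local temp file:

    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical wrapper: no temp-file fallback. A failed write
    // simply propagates, and the framework restarts the task.
    public class FailFastBlockStream extends OutputStream {
      private final OutputStream datanode;

      FailFastBlockStream(OutputStream datanode) {
        this.datanode = datanode;
      }

      @Override
      public void write(byte[] b, int off, int len) throws IOException {
        try {
          datanode.write(b, off, len);
        } catch (IOException e) {
          // Surface the failure; the caller (e.g., the task runner)
          // decides whether to retry against another datanode.
          throw new IOException("datanode connection failed during block write", e);
        }
      }

      @Override
      public void write(int b) throws IOException {
        write(new byte[] { (byte) b }, 0, 1);
      }
    }

This pushes recovery up to whatever already has to handle task restarts, which is the point of the argument above.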