On a production cluster, I observed the following weird behavior: a block manager cached a block, the store then failed due to the task being killed / cancelled, and a subsequent task incorrectly attempted to read the cached block from the machine where the write had failed, eventually leading to a complete job failure.
Here's the executor log snippet from the machine performing the failed cache:
Here's the exception from the reader in the failed job:
I believe that there's a race condition in how we handle cleanup after failed cache stores. Here's an excerpt from BlockManager.doPut():
The only way that I think this "successfully stored followed by immediately failed" case could appear in our logs is if the local memory store write succeeds and an exception (perhaps an InterruptedException) is then thrown, causing us to enter the finally block's error-cleanup path. The problem is that this finally block only cleans up the block's local metadata rather than performing the full cleanup path, which would also notify the master that the block is no longer available on this host.
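To make that sequence concrete, here is a minimal self-contained sketch of the failure mode. It is not the actual BlockManager code: localBlocks, masterLocations, reportBlockAdded, and the other names are toy stand-ins for the executor's local block metadata and the master's view of block locations.

{code:scala}
import scala.collection.mutable

// Minimal sketch of the failure mode (illustrative only; every name here
// is a toy stand-in, not the real BlockManager internals).
object CacheRaceSketch {
  // Block metadata held locally on this executor.
  private val localBlocks = mutable.Set.empty[String]
  // The master's view of which hosts hold which blocks.
  private val masterLocations = mutable.Map.empty[String, Set[String]]

  private def reportBlockAdded(blockId: String, host: String): Unit =
    masterLocations(blockId) = masterLocations.getOrElse(blockId, Set.empty) + host

  def doPutSketch(blockId: String, host: String, failAfterReport: Boolean): Unit = {
    var committed = false
    try {
      localBlocks += blockId            // local memory store write succeeds
      reportBlockAdded(blockId, host)   // master is told this host has the block
      if (failAfterReport) {
        // e.g. task kill / cancellation arrives after the block was announced
        throw new InterruptedException("task killed")
      }
      committed = true
    } finally {
      if (!committed) {
        // The bug: only the local metadata entry is removed; the master is
        // never told that the block is gone from this host.
        localBlocks -= blockId
      }
    }
  }

  def main(args: Array[String]): Unit = {
    try doPutSketch("rdd_0_0", "hostA", failAfterReport = true)
    catch { case _: InterruptedException => () }
    // Master still lists hostA as a location, but hostA no longer has the block.
    println(s"master locations: ${masterLocations.getOrElse("rdd_0_0", Set.empty)}")
    println(s"hostA has block:  ${localBlocks.contains("rdd_0_0")}")
  }
}
{code}

Running the sketch leaves the master still advertising hostA as a location while hostA no longer holds the block, which is exactly the stale state that the later reads trip over.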
The fact that the Spark task was not resilient to failed remote block fetches is a separate issue, which I'll report and fix separately. The scope of this JIRA, however, is the fact that Spark still attempted reads from a machine that was missing the block.
In order to fix this problem, I think that the finally block should perform more thorough cleanup and should send a "block removed" status update to the master following any error during the write. This is necessary because the body of doPut() may have already notified the master of the block's availability. In addition, we can extend the block-serving code path to automatically update the master with "block deleted" statuses whenever the block manager receives a request for a block that it doesn't have.
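Against the same kind of toy model, here is a sketch of what the two proposed changes might look like (again, all names are illustrative stand-ins rather than the actual BlockManager / BlockManagerMaster API):

{code:scala}
import scala.collection.mutable

// Sketch of the proposed fix (illustrative stand-ins only).
object CacheCleanupFixSketch {
  private val localBlocks     = mutable.Set.empty[String]
  private val masterLocations = mutable.Map.empty[String, Set[String]]

  private def reportBlockAdded(blockId: String, host: String): Unit =
    masterLocations(blockId) = masterLocations.getOrElse(blockId, Set.empty) + host

  private def reportBlockRemoved(blockId: String, host: String): Unit =
    masterLocations(blockId) = masterLocations.getOrElse(blockId, Set.empty) - host

  // Change 1: on any failure after the block may have been announced, send a
  // "block removed" update to the master instead of only cleaning local metadata.
  def doPut(blockId: String, host: String, failAfterReport: Boolean): Unit = {
    var committed = false
    try {
      localBlocks += blockId
      reportBlockAdded(blockId, host)
      if (failAfterReport) throw new InterruptedException("task killed")
      committed = true
    } finally {
      if (!committed) {
        localBlocks -= blockId
        reportBlockRemoved(blockId, host) // keep the master's view consistent
      }
    }
  }

  // Change 2: if a reader asks for a block this host doesn't actually have,
  // report the stale location so the master stops advertising it.
  def serveBlock(blockId: String, host: String): Option[String] =
    if (localBlocks.contains(blockId)) Some(s"<data for $blockId>")
    else { reportBlockRemoved(blockId, host); None }

  def main(args: Array[String]): Unit = {
    try doPut("rdd_0_0", "hostA", failAfterReport = true)
    catch { case _: InterruptedException => () }
    // With the fix, the master no longer lists hostA as a location for the block.
    println(s"master locations: ${masterLocations.getOrElse("rdd_0_0", Set.empty)}")
  }
}
{code}

The second change is a belt-and-braces measure on top of the first: even if some other path leaves a stale location behind, the first invalid fetch request would correct the master's view instead of letting every subsequent task retry against the same missing block.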