We're encountering multiple cases of clients calling updateBlockForPipeline on completed blocks. Initial analysis suggests the client closes a file, completeFile succeeds, and then the client immediately attempts pipeline recovery. The resulting exception is swallowed on the client side and only logged on the NN by checkUCBlock.
The problem "appears" to be benign (no data loss), but it is unproven that the issue occurs only for successfully closed files. There appears to be very poor coordination between the DFSOutputStream's threads, leading to races that confuse the streamer thread, which probably should have been joined before returning from close.
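To illustrate the proposed remedy, here is a minimal sketch (not actual HDFS code; all class and method names are hypothetical) of a close() that signals its background streamer thread and joins it before returning, so the streamer cannot race ahead and attempt recovery after the file is already complete:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for an output stream with a background streamer
// thread, as in DFSOutputStream. The point is the ordering in close():
// signal shutdown, join the streamer, and only then finalize the file.
class SketchOutputStream {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    private final Thread streamer;

    SketchOutputStream() {
        // Stand-in for the DataStreamer: loops until close() signals stop.
        streamer = new Thread(() -> {
            while (!closed.get()) {
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    return;
                }
            }
            // A real streamer would flush its remaining packets here.
        });
        streamer.start();
    }

    void close() throws InterruptedException {
        closed.set(true);   // tell the streamer to shut down
        streamer.join();    // wait for it: no streamer activity after close()
        // Only after the join would completeFile be issued to the NN,
        // so a dead streamer cannot later call updateBlockForPipeline.
    }

    boolean streamerFinished() {
        return !streamer.isAlive();
    }
}
```

With this ordering, any recovery attempt the streamer might make is guaranteed to happen before completeFile, eliminating the post-close race described above.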