I've seen this happen, and it is worse than it looks. In the 3-repl/2-min-repl case, the last datanode in the pipeline does not report anything, and the pipeline is recreated with the remaining two nodes. The problem is that those two nodes may have already written the corrupt data to disk. The reconstructed pipeline is used and the block completes. Once the block is complete, the NN schedules replication, which fails against the two sources one by one, leaving the block "missing".
Looking at the code, the source DatanodeId used in corruption reporting is propagated from the client. But when DFSClient calls writeBlock(), it passes null as srcNode, so no node in the pipeline has a valid srcNode. Maybe the NN should check whether the block is under construction and the reporter was the last node in the pipeline; in that case, all copies of the block should be marked corrupt.
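Here is a minimal sketch of what that NameNode-side check could look like. This is not the actual HDFS code path: BlockInfo and DatanodeId below are simplified stand-ins for the real BlockManager types, and markReplicaCorrupt() is a placeholder for the real corrupt-replica bookkeeping.

```java
import java.util.List;

class CorruptionReportSketch {

    // Simplified stand-ins for the real BlockManager types.
    record DatanodeId(String uuid) {}

    static final class BlockInfo {
        boolean underConstruction;
        List<DatanodeId> pipeline; // expected locations, in pipeline order
    }

    void reportCorruptBlock(BlockInfo block, DatanodeId reporter) {
        if (block.underConstruction
                && !block.pipeline.isEmpty()
                && block.pipeline.get(block.pipeline.size() - 1).equals(reporter)) {
            // The tail of the pipeline saw the corruption, so every upstream
            // node has already received the same bytes: distrust all replicas
            // instead of trusting the two remaining copies.
            for (DatanodeId dn : block.pipeline) {
                markReplicaCorrupt(block, dn);
            }
        } else {
            // Existing behavior: only the reporter's own replica is suspect.
            markReplicaCorrupt(block, reporter);
        }
    }

    private void markReplicaCorrupt(BlockInfo block, DatanodeId dn) {
        // Placeholder for the real NameNode corrupt-replica bookkeeping.
    }
}
```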
In addition, the last node in the pipeline should synchronously return an appropriate failure to its upstream peer, instead of simply disconnecting.
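Roughly, the tail datanode could do something like the following on a checksum failure. This is only a sketch of the idea: the Status enum stands in for the real DataTransferProtocol ack status, and the wire format here is deliberately simplified.

```java
import java.io.DataOutputStream;
import java.io.IOException;

class TailNodeFailureSketch {

    // Stand-in for the real pipeline ack status codes.
    enum Status { SUCCESS, ERROR, ERROR_CHECKSUM }

    void onChecksumFailure(DataOutputStream ackOut) throws IOException {
        // Tell the upstream node (and ultimately the client) exactly why the
        // pipeline is failing, instead of silently dropping the connection.
        ackOut.writeShort(Status.ERROR_CHECKSUM.ordinal());
        ackOut.flush();
        // Only after the ack is on the wire should the receiver tear down
        // the connection and abort the block.
    }
}
```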