When a DataNode reports a block that the NameNode does not have, the NameNode throws an NPE. In one of these cases this turns into an infinite loop of the same error, because the DataNode keeps retrying the same RPC that triggered the NPE.
One way to reproduce:
- On a single DN cluster, start writing a large file (something like 'bin/hadoop fs -put 5Gb 5Gb')
- Now, from a different shell, delete this file (bin/hadoop fs -rm 5Gb)
- Most likely you will hit this.
- The cause is that by the time the DataNode invokes blockReceived() to report the last block it received, the file has already been deleted, which results in an NPE at the NameNode. The way the DataNode works, it then keeps invoking the same RPC with the same block and hits the same error every time.
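The retry behavior described above can be sketched as a minimal simulation (the names below are illustrative, not the actual Hadoop APIs): an unchecked blocksMap lookup is dereferenced, and because the failed report is never acknowledged, the DataNode sends the identical report again and again.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the failure loop. blocksMap stands in for the
// NameNode's block -> file mapping; blockReceivedUnchecked stands in for
// the NPE-prone handler.
public class BlockReceivedLoopSketch {
    static final Map<Long, String> blocksMap = new HashMap<>();

    // Buggy handler: assumes the block is always present.
    static String blockReceivedUnchecked(long blockId) {
        String file = blocksMap.get(blockId); // null once the file is deleted
        return file.toUpperCase();            // NPE here on a stale report
    }

    public static void main(String[] args) {
        blocksMap.put(42L, "5Gb");
        blocksMap.remove(42L); // the concurrent 'fs -rm 5Gb' wins the race

        // DataNode-style retry: same RPC, same block, same NPE each time.
        int failures = 0;
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                blockReceivedUnchecked(42L);
            } catch (NullPointerException e) {
                failures++; // report never acknowledged, so it is resent
            }
        }
        System.out.println("failures=" + failures);
    }
}
```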
When a block does not exist in the NameNode's blocksMap, it simply does not belong to the cluster. Let me know if you need the trace; the NPE is at FSNamesystem.java:2800 (on trunk).
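One possible shape of a fix, sketched with the same illustrative names (this is an assumption about the guard, not the actual patch): null-check the blocksMap lookup and drop the stale report, so the DataNode's retry loop can make progress instead of hitting the NPE forever.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical guard sketch: a null blocksMap lookup means the block no
// longer belongs to the cluster, so the report is ignored rather than
// dereferenced.
public class BlockReceivedGuardSketch {
    static final Map<Long, String> blocksMap = new HashMap<>();

    // Returns the owning file, or null for an unknown/stale block.
    static String blockReceivedGuarded(long blockId) {
        String file = blocksMap.get(blockId);
        if (file == null) {
            // Stale report: the file was deleted before the RPC arrived.
            // Ignore it (the replica can be scheduled for deletion on the
            // DataNode side) instead of throwing.
            return null;
        }
        return file;
    }

    public static void main(String[] args) {
        blocksMap.put(42L, "5Gb");
        blocksMap.remove(42L); // file deleted mid-write
        System.out.println("handled=" + (blockReceivedGuarded(42L) == null));
    }
}
```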