Details
Type: Bug
Status: Closed
Priority: Blocker
Resolution: Duplicate
Description
I was told that DFS recovers when datanodes go down and come back after a while, even when some blocks have gone missing.
As a test, I stopped the datanode on a single-node cluster and restarted it after 5 hours.
DFS did not recover; the namenode repeatedly logged the following exception:
2007-10-10 02:42:52,808 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call blockReport(127.0.0.1:50010, [Lorg.apache.hadoop.dfs.Block;@ecb8c4) from 127.0.0.1:49678: error: java.io.IOException: java.lang.AssertionError: Index is out of bound
java.io.IOException: java.lang.AssertionError: Index is out of bound
at org.apache.hadoop.dfs.BlocksMap$BlockInfo.getNext(BlocksMap.java:77)
at org.apache.hadoop.dfs.DatanodeDescriptor$BlockIterator.next(DatanodeDescriptor.java:185)
at org.apache.hadoop.dfs.DatanodeDescriptor$BlockIterator.next(DatanodeDescriptor.java:170)
at org.apache.hadoop.dfs.DatanodeDescriptor.reportDiff(DatanodeDescriptor.java:325)
at org.apache.hadoop.dfs.FSNamesystem.processReport(FSNamesystem.java:2111)
at org.apache.hadoop.dfs.NameNode.blockReport(NameNode.java:621)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:340)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:609)
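For illustration, the failure mode is consistent with a "triplets"-style per-block replica list, where each replica slot holds a datanode reference plus prev/next links, and getNext(index) asserts the index is in range. The sketch below is a simplified assumption of that layout (class and field names are hypothetical, not the actual BlocksMap code): a replica index that becomes stale across a datanode restart falls outside the triplets array and trips exactly this kind of "Index is out of bound" error during reportDiff.

```java
// Hypothetical, simplified sketch of a triplets-style block list.
// Not the real org.apache.hadoop.dfs.BlocksMap code; names and sizes
// here are illustrative assumptions only.
public class TripletsSketch {
    static class BlockInfo {
        // Per replica i: triplets[3*i]   = datanode descriptor,
        //                triplets[3*i+1] = previous block on that node,
        //                triplets[3*i+2] = next block on that node.
        final Object[] triplets;

        BlockInfo(int replication) {
            triplets = new Object[3 * replication];
        }

        BlockInfo getNext(int index) {
            // A stale index (e.g. kept across a datanode restart while the
            // replica list shrank) lands out of range and fails this check,
            // mirroring the "AssertionError: Index is out of bound" above.
            if (index < 0 || index * 3 + 2 >= triplets.length) {
                throw new AssertionError("Index is out of bound");
            }
            return (BlockInfo) triplets[index * 3 + 2];
        }
    }

    public static void main(String[] args) {
        BlockInfo b = new BlockInfo(1);   // room for exactly one replica
        System.out.println(b.getNext(0)); // index 0 is in range -> null link
        try {
            b.getNext(1);                 // stale/out-of-range replica index
        } catch (AssertionError e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

In the reported trace, the iterator in DatanodeDescriptor$BlockIterator is walking exactly such per-node links while processing the block report from the restarted datanode, which is why the error surfaces inside blockReport handling rather than in normal namespace operations.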