Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Affects Version/s: 0.14.3
Fix Version/s: None
Component/s: None
Description
We tried to decommission about 40 nodes at once, each containing 12k blocks (about 500k blocks total).
(This also happened when we first tried to decommission 2 million blocks.)
Clients started experiencing "java.lang.RuntimeException: java.net.SocketTimeoutException: timed out waiting for rpc response" errors, and the namenode was stuck at 100% CPU.
It was spending most of its time in one thread:
"org.apache.hadoop.dfs.FSNamesystem$ReplicationMonitor@7f401d28" daemon prio=10 tid=0x0000002e10702800 nid=0x6718
runnable [0x0000000041a42000..0x0000000041a42a30]
java.lang.Thread.State: RUNNABLE
at org.apache.hadoop.dfs.FSNamesystem.containingNodeList(FSNamesystem.java:2766)
at org.apache.hadoop.dfs.FSNamesystem.pendingTransfers(FSNamesystem.java:2870)
- locked <0x0000002aa3cef720> (a org.apache.hadoop.dfs.UnderReplicatedBlocks)
- locked <0x0000002aa3c42e28> (a org.apache.hadoop.dfs.FSNamesystem)
at org.apache.hadoop.dfs.FSNamesystem.computeDatanodeWork(FSNamesystem.java:1928)
at org.apache.hadoop.dfs.FSNamesystem$ReplicationMonitor.run(FSNamesystem.java:1868)
at java.lang.Thread.run(Thread.java:619)
We confirmed that the namenode was not in a full GC when this problem happened.
Also, dfsadmin -metasave showed that the number of "Blocks waiting for replication" was decreasing very slowly.
I believe this is not specific to decommissioning; the same problem would happen if we lost an entire rack.
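For readers unfamiliar with the code path in the stack trace above, here is a minimal, hypothetical sketch of the pattern it suggests: the replication monitor periodically takes the global FSNamesystem lock and, for every under-replicated block, builds the list of datanodes containing it. The class, methods, and data structures below are simplified stand-ins, not the actual 0.14.3 implementation; the point is only that a full scan of a ~500k-entry under-replicated list while holding the namesystem lock can pin one CPU and starve client RPC handlers, which would match the timeouts reported above.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/**
 * Hypothetical, simplified sketch of the hot path suggested by the stack trace.
 * NOT the actual FSNamesystem code; names and data structures are stand-ins.
 */
class ReplicationMonitorSketch {

    // Stand-in types for illustration only.
    record Block(long id) {}
    record DatanodeInfo(String name, Set<Long> blockIds) {}

    private final Object namesystemLock = new Object();
    private final List<Block> underReplicatedBlocks = new ArrayList<>();
    private final List<DatanodeInfo> datanodes = new ArrayList<>();

    /** Called periodically by the replication monitor thread. */
    void computeDatanodeWork() {
        synchronized (namesystemLock) {          // client RPCs wait on this lock
            // Assumption: scanning all ~500k under-replicated blocks in one pass,
            // rather than a bounded batch, is what keeps one CPU at 100%.
            for (Block b : underReplicatedBlocks) {
                List<DatanodeInfo> containing = containingNodeList(b);
                scheduleReplication(b, containing);
            }
        }
    }

    /** In this sketch: a linear scan over datanodes per block, repeated 500k times per pass. */
    private List<DatanodeInfo> containingNodeList(Block b) {
        List<DatanodeInfo> result = new ArrayList<>();
        for (DatanodeInfo dn : datanodes) {
            if (dn.blockIds().contains(b.id())) {
                result.add(dn);
            }
        }
        return result;
    }

    private void scheduleReplication(Block b, List<DatanodeInfo> sources) {
        // The real namenode would pick a source and target and queue the transfer;
        // omitted here because it is not the expensive part in this scenario.
    }
}

This also illustrates why the linked issues below are relevant: decoupling replication work from the heartbeat loop (HDFS-150) and making the ReplicationMonitor sleep period configurable (HADOOP-2649) both reduce how often and how long this lock-holding scan runs.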
Attachments
Issue Links
- duplicates: HDFS-150 Replication should be decoupled from heartbeat (Resolved)
- is related to: HDFS-373 Name node should notify administrator if when struggling with replication (Open)
- relates to:
  - HDFS-150 Replication should be decoupled from heartbeat (Resolved)
  - HADOOP-2649 The ReplicationMonitor sleep period should be configurable (Closed)
  - HADOOP-2755 dfs fsck extremely slow, dfs ls times out (Closed)