Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.10.1
Description
Recently we saw missing blocks in our production clusters, which run in dynamic environments such as AWS. We are running a version of the hadoop-2.10 code line.
Events that led to data loss:
- We have a pool of available IP addresses, and whenever a datanode restarts it picks any available IP address from that pool.
- We have seen that, during the lifetime of a namenode process, multiple datanodes were restarted and the same datanode ended up using different IP addresses over time.
- One case that I was debugging was very interesting:
DN with datanode UUID DN-UUID-1 moved from ip-address-1 --> ip-address-2 --> ip-address-3
DN with datanode UUID DN-UUID-2 moved from ip-address-4 --> ip-address-5 --> ip-address-1
Observe the last IP address change for DN-UUID-2: it is ip-address-1, which was the first IP address of DN-UUID-1.
- A bug in our operational scripts led to all datanodes getting restarted at the same time.
Just after the restart, we see the following log lines.
2023-08-26 04:04:41,964 INFO [on default port 9000] namenode.NameNode - BLOCK* registerDatanode: 10.x.x.1:50010
2023-08-26 04:04:45,720 INFO [on default port 9000] namenode.NameNode - BLOCK* registerDatanode: 10.x.x.2:50010
2023-08-26 04:04:45,720 INFO [on default port 9000] namenode.NameNode - BLOCK* registerDatanode: 10.x.x.2:50010
2023-08-26 04:04:51,680 INFO [on default port 9000] namenode.NameNode - BLOCK* registerDatanode: 10.x.x.3:50010
2023-08-26 04:04:55,328 INFO [on default port 9000] namenode.NameNode - BLOCK* registerDatanode: 10.x.x.4:50010
This line is logged from DatanodeManager#registerDatanode. Snippet below:
DatanodeDescriptor nodeS = getDatanode(nodeReg.getDatanodeUuid());
DatanodeDescriptor nodeN = host2DatanodeMap.getDatanodeByXferAddr(
    nodeReg.getIpAddr(), nodeReg.getXferPort());

if (nodeN != null && nodeN != nodeS) {
  NameNode.LOG.info("BLOCK* registerDatanode: " + nodeN);
  // nodeN previously served a different data storage,
  // which is not served by anybody anymore.
  removeDatanode(nodeN);
  // physically remove node from datanodeMap
  wipeDatanode(nodeN);
  nodeN = null;
}
This happens when the DatanodeDescriptor for the same datanode is not the same object in datanodeMap and host2DatanodeMap. HDFS-16540 fixed this inconsistency for the lost-data-locality case, but not for the data-loss case described here.
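To make the failure mode easier to see, below is a minimal, self-contained sketch (a hypothetical class, not the real DatanodeManager code) that uses plain Java HashMaps as stand-ins for datanodeMap (keyed by datanode UUID) and host2DatanodeMap (keyed by the transfer address). The register() method deliberately models a re-registration path in which the old address entry is not cleaned up when a datanode changes IP; replaying the DN-UUID-1 / DN-UUID-2 sequence above then hits the removeDatanode branch against a datanode that is still alive:

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only (hypothetical class, not the real DatanodeManager):
// plain HashMaps stand in for datanodeMap (keyed by datanode UUID) and
// host2DatanodeMap (keyed by ip:xferPort).
public class StaleAddressSketch {

  // Hypothetical stand-in for DatanodeDescriptor.
  static final class Descriptor {
    final String uuid;
    String xferAddr;
    Descriptor(String uuid, String xferAddr) {
      this.uuid = uuid;
      this.xferAddr = xferAddr;
    }
    @Override
    public String toString() {
      return uuid + "@" + xferAddr;
    }
  }

  static final Map<String, Descriptor> datanodeMap = new HashMap<>();
  static final Map<String, Descriptor> host2DatanodeMap = new HashMap<>();

  // Models a re-registration path where the old address entry in
  // host2DatanodeMap is NOT cleaned up when a datanode changes IP
  // (the kind of inconsistency HDFS-16540 dealt with).
  static void register(String uuid, String xferAddr) {
    Descriptor nodeS = datanodeMap.get(uuid);
    Descriptor nodeN = host2DatanodeMap.get(xferAddr);

    if (nodeN != null && nodeN != nodeS) {
      // The branch from the snippet above: the namenode decides the descriptor
      // currently holding this address is gone and removes it. In the real code
      // this also drops every block replica that descriptor was tracking.
      System.out.println("registerDatanode: removing " + nodeN
          + " even though it is still alive at " + nodeN.xferAddr);
      datanodeMap.remove(nodeN.uuid);
    }

    Descriptor d = (nodeS != null) ? nodeS : new Descriptor(uuid, xferAddr);
    d.xferAddr = xferAddr;
    datanodeMap.put(uuid, d);
    host2DatanodeMap.put(xferAddr, d);  // stale entry under the old address survives
  }

  public static void main(String[] args) {
    register("DN-UUID-1", "ip-address-1:50010");  // DN-1 first comes up on ip-address-1
    register("DN-UUID-2", "ip-address-4:50010");  // DN-2 comes up on ip-address-4
    register("DN-UUID-1", "ip-address-3:50010");  // DN-1 restarts on a new address
    register("DN-UUID-2", "ip-address-1:50010");  // DN-2 reuses DN-1's first address
  }
}

Running the sequence in main() prints the removal of DN-UUID-1, which is still serving blocks at ip-address-3, mirroring the missing-blocks outcome described above.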
By filing this jira, I want to discuss the following things:
- Do we really want the namenode to call removeDatanode whenever such a discrepancy between the maps is spotted, or can we rely on the first full block report (or the periodic full block reports) from the datanode to fix the metadata?
- Improve logging in the blockmanagement code so that these issues can be debugged faster.
- Add a test case that replays the exact sequence of events that occurred in our environment and verifies that datanodeMap and host2DatanodeMap stay consistent (a sketch of such a check follows below).
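As a starting point for that test, here is a JUnit-style sketch of the consistency check, written against the stand-in maps from the sketch above rather than the real DatanodeManager APIs (a real test would have to drive DatanodeManager registration directly). The bidirectional assertion is the interesting part: with the buggy re-registration path modelled above, the reverse check fails as soon as DN-UUID-1 moves to a new address while the stale ip-address-1 entry remains, i.e. before any data loss can happen.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertSame;

import java.util.Map;
import org.junit.Test;

// Sketch only: asserts the datanodeMap / host2DatanodeMap invariant against
// the stand-in maps from StaleAddressSketch above.
public class StaleAddressSketchTest {

  static void assertMapsConsistent() {
    // Forward: every live descriptor is reachable through its current address.
    for (StaleAddressSketch.Descriptor d : StaleAddressSketch.datanodeMap.values()) {
      assertSame(d, StaleAddressSketch.host2DatanodeMap.get(d.xferAddr));
    }
    // Reverse: every address entry points at a live descriptor that still owns
    // that address. This catches a stale entry left behind after an IP change,
    // before it can trigger removeDatanode on a live node.
    for (Map.Entry<String, StaleAddressSketch.Descriptor> e
        : StaleAddressSketch.host2DatanodeMap.entrySet()) {
      assertSame(e.getValue(), StaleAddressSketch.datanodeMap.get(e.getValue().uuid));
      assertEquals(e.getKey(), e.getValue().xferAddr);
    }
  }

  @Test
  public void ipReuseAcrossDatanodesKeepsMapsConsistent() {
    StaleAddressSketch.register("DN-UUID-1", "ip-address-1:50010");
    assertMapsConsistent();
    StaleAddressSketch.register("DN-UUID-2", "ip-address-4:50010");
    assertMapsConsistent();
    StaleAddressSketch.register("DN-UUID-1", "ip-address-3:50010");
    // With the buggy update path above, the reverse check fails here because
    // host2DatanodeMap still maps ip-address-1 to DN-UUID-1 -- exactly the
    // divergence the real test should catch in DatanodeManager.
    assertMapsConsistent();
    StaleAddressSketch.register("DN-UUID-2", "ip-address-1:50010");
    assertMapsConsistent();
  }
}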
Attachments
Issue Links
- relates to HDFS-16540: Data locality is lost when DataNode pod restarts in kubernetes (Resolved)