Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
Description
Suppose a datanode D has a block B that belongs to file F. Suppose datanode D dies and the namenode re-replicates its blocks to other datanodes. Now, suppose the user deletes file F. The namenode removes all the blocks that belonged to file F. Next, suppose a new file F1 is created and the namenode generates the same block id B for this new file F1.
Suppose the old datanode D comes back to life. Now datanode D holds a block whose id B is valid but whose contents are the stale data of the deleted file F, in effect a corrupt replica of F1's block B.
This case can possibly be detected by the client (using CRC), but does HDFS need to handle this scenario better?
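To make the collision concrete, here is a minimal, purely illustrative Java sketch (not Hadoop code; the Replica class, the block map, and the checksum values are hypothetical) of how a replica that matches on block id alone can still carry stale contents, which only a content check such as a CRC would catch:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy sketch of the block-id reuse scenario described above. */
public class BlockIdReuseSketch {

    /** A replica as a datanode might store it: a block id plus a content checksum. */
    static final class Replica {
        final long blockId;
        final long contentChecksum; // stands in for the per-block CRC the client verifies
        Replica(long blockId, long contentChecksum) {
            this.blockId = blockId;
            this.contentChecksum = contentChecksum;
        }
    }

    /** Minimal stand-in for the namenode's block map: block id -> expected checksum. */
    static final Map<Long, Long> expectedChecksumByBlockId = new HashMap<>();

    public static void main(String[] args) {
        long B = 42L;

        // File F is written; block B has checksum 0x1111 and a replica lives on datanode D.
        expectedChecksumByBlockId.put(B, 0x1111L);
        Replica onDeadDatanodeD = new Replica(B, 0x1111L);

        // Datanode D dies, F is deleted, and the namenode forgets block B...
        expectedChecksumByBlockId.remove(B);
        // ...then a new file F1 happens to be assigned the same block id B.
        expectedChecksumByBlockId.put(B, 0x2222L);

        // Datanode D comes back and reports its old replica of B. By id alone it
        // looks like a valid replica of F1's block, but its contents are stale.
        boolean idLooksValid = expectedChecksumByBlockId.containsKey(onDeadDatanodeD.blockId);
        boolean contentsMatch =
            expectedChecksumByBlockId.get(onDeadDatanodeD.blockId) == onDeadDatanodeD.contentChecksum;

        System.out.println("id accepted by block map: " + idLooksValid);   // true
        System.out.println("contents actually valid:  " + contentsMatch);  // false, a CRC check catches it
    }
}
```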
Issue Links
- duplicates HADOOP-158: dfs should allocate a random blockid range to a file, then assign ids sequentially to blocks in the file (Closed)
- is related to HADOOP-1700: Append to files in HDFS (Closed)