Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 1.0.3
- Fix Version/s: None
- Hadoop Flags: Reviewed
Description
When the file is opened for writing, the DFSClient calls one of the datanodes owning the last block to get its size. If this datanode is dead, the socket exception is swallowed and the size of this last block is reported as zero. This seems to be fixed on trunk, but I didn't find a related Jira. On 1.0.3, it's not fixed. It's in the same area as HDFS-1950 or HDFS-3222.
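The failure mode described above can be illustrated with a minimal sketch (hypothetical names, not the actual DFSClient code): when every replica location fails, swallowing the IOException leaves the length at a default of 0 instead of propagating the error.

```java
import java.io.IOException;
import java.util.List;

public class LastBlockLength {
    // Hypothetical stand-in for asking one datanode for the
    // visible length of the last, under-construction block.
    interface Datanode {
        long getReplicaVisibleLength() throws IOException;
    }

    // Buggy pattern: a dead datanode's socket exception is swallowed,
    // so if no replica answers, the caller silently sees length 0.
    static long lengthSwallowing(List<Datanode> replicas) {
        for (Datanode dn : replicas) {
            try {
                return dn.getReplicaVisibleLength();
            } catch (IOException e) {
                // swallowed: fall through and try the next replica
            }
        }
        return 0; // wrong if all replicas were unreachable
    }

    // Fixed pattern: if no replica answered, surface the failure.
    static long lengthStrict(List<Datanode> replicas) throws IOException {
        IOException last = null;
        for (Datanode dn : replicas) {
            try {
                return dn.getReplicaVisibleLength();
            } catch (IOException e) {
                last = e; // remember, keep trying other replicas
            }
        }
        throw new IOException("could not determine last block length", last);
    }

    public static void main(String[] args) throws IOException {
        Datanode dead = () -> { throw new IOException("connection refused"); };
        Datanode live = () -> 4096L;

        // With one dead and one live replica, both variants recover.
        assert lengthSwallowing(List.of(dead, live)) == 4096L;
        assert lengthStrict(List.of(dead, live)) == 4096L;

        // With only dead replicas, the buggy variant reports 0;
        // the strict variant throws instead.
        assert lengthSwallowing(List.of(dead)) == 0L;
        boolean threw = false;
        try {
            lengthStrict(List.of(dead));
        } catch (IOException e) {
            threw = true;
        }
        assert threw;
        System.out.println("ok");
    }
}
```

The strict variant mirrors what later fixes on trunk do in spirit: a zero length is only ever returned from an actual datanode answer, never from an exhausted retry loop.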
Attachments
Issue Links
- duplicates
  - HDFS-3965 DFSInputStream should not eat up exceptions if file is under construction (Resolved)
- is related to
  - HDFS-3222 DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block. (Closed)
  - HBASE-6751 Too many retries, leading to a delay to read the HLog after a datanode failure (Closed)
- relates to
  - HBASE-6401 HBase may lose edits after a crash if used with HDFS 1.0.3 or older (Closed)
  - HDFS-4590 Add more Unit Test Case for HDFS-3701 HDFS Miss Final Block Reading when File is Open for Write (Open)