Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 2.0.0-alpha
- Component/s: None
- Labels: None
- Hadoop Flags: Reviewed
Description
I saw the following logs on my test cluster:
2012-02-22 14:35:22,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: startFile: recover lease [Lease. Holder: DFSClient_attempt_1329943893604_0007_m_000376_0_453973131_1, pendingcreates: 1], src=/benchmarks/TestDFSIO/io_data/test_io_6 from client DFSClient_attempt_1329943893604_0007_m_000376_0_453973131_1
2012-02-22 14:35:22,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering lease=[Lease. Holder: DFSClient_attempt_1329943893604_0007_m_000376_0_453973131_1, pendingcreates: 1], src=/benchmarks/TestDFSIO/io_data/test_io_6
2012-02-22 14:35:22,888 WARN org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All existing blocks are COMPLETE, lease removed, file closed.
2012-02-22 14:35:22,888 WARN org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.replaceNode: failed to remove /benchmarks/TestDFSIO/io_data/test_io_6
2012-02-22 14:35:22,888 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: FSDirectory.replaceNode: failed to remove /benchmarks/TestDFSIO/io_data/test_io_6
It seems that if recoverLeaseInternal succeeds inside startFileInternal, internalReleaseLease closes the file and replaces its INode with a new one. The later replaceNode call still holds a reference to the old INode, so it fails to remove it, producing the warnings above.
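To make the suspected sequence concrete, here is a minimal toy sketch of the ordering problem. The classes below are simplified stand-ins I made up for illustration, not the real org.apache.hadoop.hdfs.server.namenode code: they only model a directory keyed by path whose replaceNode checks the identity of the old node, which is the property that makes a stale reference fail.

import java.util.HashMap;
import java.util.Map;

// Toy model of the suspected ordering bug (hypothetical classes).
public class StaleReplaceNodeDemo {

  // Minimal stand-in for an INode: object identity matters, path is the key.
  static class INode {
    final String path;
    INode(String path) { this.path = path; }
  }

  // Stand-in for FSDirectory: maps paths to their current INode.
  static class FsDir {
    private final Map<String, INode> nodes = new HashMap<>();

    void put(INode node) { nodes.put(node.path, node); }
    INode get(String path) { return nodes.get(path); }

    // Removal is keyed on the identity of the old node, so a stale
    // reference can no longer be removed once lease recovery has
    // already swapped the node out.
    boolean replaceNode(INode oldNode, INode newNode) {
      if (nodes.get(oldNode.path) != oldNode) {
        System.out.println("WARN replaceNode: failed to remove " + oldNode.path);
        return false;
      }
      nodes.put(newNode.path, newNode);
      return true;
    }
  }

  public static void main(String[] args) {
    FsDir dir = new FsDir();
    String src = "/benchmarks/TestDFSIO/io_data/test_io_6";
    dir.put(new INode(src)); // file under construction

    // startFileInternal captures the existing node up front...
    INode captured = dir.get(src);

    // ...then lease recovery finds all blocks COMPLETE, closes the
    // file, and installs a new INode for the same path.
    dir.put(new INode(src)); // 'captured' is now stale

    // The later replaceNode call uses the stale reference and fails,
    // matching the "failed to remove" warning seen in the logs.
    dir.replaceNode(captured, new INode(src));
  }
}

Running this prints the same "failed to remove" pattern: once recovery installs a fresh INode for the path, the reference captured earlier no longer matches the directory's current node, so the identity-based removal cannot succeed.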
Attachments
Issue Links
- relates to: HDFS-8531 Append failed due to unreleased lease from previous appender with quota exceeded exception (Resolved)