Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Not A Problem
Description
processDatanodeError -> setupPipelineForAppendOrRecovery sets streamerClosed to true and simply returns if every node in the pipeline has failed in the past.
Back in DataStreamer.run(), the condition
if (streamerClosed || hasError || dataQueue.size() == 0 || !clientRunning)
then lets the loop exit, and closeInternal() just sets closed=true.
DataOutputStream never gets a chance to clean up: subsequent write/close calls throw an exception or return null.
This leaves the file being written in an incomplete state.
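A minimal, self-contained model of this failure mode (illustrative only; MiniOutputStream and its methods are hypothetical stand-ins, not HDFS classes): the streamer flips a closed flag and exits without cleaning up the stream, so the client only discovers the failure on its next write, and no cleanup ever runs.

```java
import java.io.IOException;

// Hypothetical stand-in for the output stream plus its streamer thread.
class MiniOutputStream {
    private volatile boolean streamerClosed = false; // set by the streamer on fatal error
    private boolean cleanedUp = false;               // would be set by proper close handling

    // Simulates setupPipelineForAppendOrRecovery finding all pipeline nodes dead:
    // it marks the streamer closed and just returns, performing no cleanup.
    void streamerHitsFatalError() {
        streamerClosed = true;
    }

    void write(int b) throws IOException {
        if (streamerClosed) {
            // The stream was never properly closed, so the writer only
            // discovers the failure here, after the fact.
            throw new IOException("stream closed by streamer without cleanup");
        }
    }

    boolean isCleanedUp() { return cleanedUp; }
}

public class Demo {
    public static void main(String[] args) {
        MiniOutputStream out = new MiniOutputStream();
        out.streamerHitsFatalError();   // all datanodes in the pipeline have failed
        boolean threw = false;
        try {
            out.write(1);               // the client keeps writing, unaware
        } catch (IOException e) {
            threw = true;
        }
        System.out.println("write threw: " + threw + ", cleaned up: " + out.isCleanedUp());
    }
}
```

Running it prints `write threw: true, cleaned up: false`, mirroring the report: the failure surfaces only as a late exception, and the file under write is abandoned incomplete.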
Issue Links
- is related to: HDFS-278 Should DFS outputstream's close wait forever? (Closed)