Hadoop HDFS / HDFS-951

DFSClient should handle all nodes in a pipeline failed.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem

    Description

      processDatanodeError -> setupPipelineForAppendOrRecovery sets streamerClosed to true and simply returns if all nodes in the pipeline have failed.
      Back in DataStreamer.run(), the guard

          if (streamerClosed || hasError || dataQueue.size() == 0 || !clientRunning) {
            continue;
          }

      just keeps looping until closeInternal() sets closed = true.

      DFSOutputStream therefore never gets a chance to clean up: subsequent write/close calls will throw an exception or return null, leaving the file being written in an incomplete state.
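      The failure mode can be sketched with a minimal stand-alone simulation (this is not the real DFSClient code; the class, method, and field names below are illustrative stand-ins for the DataStreamer loop described above):

      ```java
      import java.util.ArrayDeque;
      import java.util.Deque;

      // Sketch of the reported failure mode: when pipeline recovery finds no
      // live datanodes, the streamer flags itself closed and exits its loop
      // without draining the data queue, so queued packets are never written
      // and the client-side stream is left holding an incomplete file.
      public class StreamerSketch {
          private final Deque<String> dataQueue = new ArrayDeque<>();
          private boolean streamerClosed = false;
          private boolean closed = false;

          void queuePacket(String p) { dataQueue.add(p); }

          // Stand-in for setupPipelineForAppendOrRecovery(): if every node in
          // the pipeline has failed, give up and mark the streamer closed.
          private boolean recoverPipeline(int liveNodes) {
              if (liveNodes == 0) {
                  streamerClosed = true;
                  return false;
              }
              return true;
          }

          // Stand-in for DataStreamer.run(): the early-out mirrors the
          // `if (streamerClosed || ...) { continue; }` guard in the report,
          // which spins until closeInternal() sets closed = true.
          void run(int liveNodes) {
              while (!closed) {
                  if (!recoverPipeline(liveNodes)) {
                      closeInternal();  // real loop: continue until closed
                      continue;
                  }
                  dataQueue.poll();  // normal path: ship one packet
                  if (dataQueue.isEmpty()) closeInternal();
              }
          }

          private void closeInternal() { closed = true; }

          // After the streamer dies, queued packets remain unflushed.
          int unflushedPackets() { return dataQueue.size(); }
          boolean isStreamerClosed() { return streamerClosed; }
      }
      ```

      Running the sketch with two queued packets and zero live nodes ends with the streamer closed while both packets are still sitting in the queue, which is exactly the "incomplete file, no cleanup" state the report describes.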

            People

              Assignee: Unassigned
              Reporter: He Yongqiang
              Votes: 0
              Watchers: 9
