Hadoop HDFS / HDFS-12933

Improve logging when DFSStripedOutputStream failed to write some blocks

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.1.0, 3.0.3
    • Component/s: erasure-coding
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      Currently, if there are fewer DataNodes than the erasure coding policy requires (# of data blocks + # of parity blocks), the client sees this:

      17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Cannot allocate parity block(index=13, policy=RS-10-4-1024k). Not enough datanodes? Exclude nodes=[]
      17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Block group <1> has 1 corrupt blocks.
      

      The 1st line is good. The 2nd line may be confusing to end users, since the missing blocks were never allocated or written, not corrupted. We should investigate the error and make the message more general / accurate, maybe something like 'failed to write x blocks'.
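
      For context, a minimal sketch (not from the issue itself; the helper class below is illustrative, though ErasureCodingPolicy and its accessors are the real HDFS client API): the "# of data blocks + # of parity blocks" figure comes straight from the policy, so RS-10-4-1024k needs at least 10 + 4 = 14 DataNodes to place every internal block of a block group on a distinct node.

        import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

        final class EcPolicyMath {
          // Minimum DataNodes needed so each internal block of a block group
          // lands on its own node; for RS-10-4-1024k this is 10 + 4 = 14.
          static int requiredDataNodes(ErasureCodingPolicy policy) {
            return policy.getNumDataUnits() + policy.getNumParityUnits();
          }
        }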

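      One possible shape for a clearer client-side message, purely as a sketch (the class, method, and wording below are made up for illustration and are not the committed patch): count the internal blocks that could not be written for the block group and say so directly, instead of calling them corrupt.

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        final class StripedWriteWarningSketch {
          private static final Logger LOG =
              LoggerFactory.getLogger(StripedWriteWarningSketch.class);

          // Hypothetical wording: "failed to write x blocks" is more accurate than
          // "has x corrupt blocks" when allocation fails for lack of DataNodes.
          static void warnFailedBlocks(long blockGroupId, int failed, int total) {
            LOG.warn("Block group <{}>: failed to write {} of {} internal blocks."
                + " Not enough datanodes?", blockGroupId, failed, total);
          }
        }
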

    People

    • Assignee: chencan
    • Reporter: Xiao Chen
    • Votes: 0
    • Watchers: 6
