Hadoop HDFS / HDFS-12933

Improve logging when DFSStripedOutputStream failed to write some blocks


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.1.0, 3.0.3
    • Component/s: erasure-coding
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Currently, if there are fewer DataNodes than the erasure coding policy's total block count (# of data blocks + # of parity blocks), the client sees this:

      17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Cannot allocate parity block(index=13, policy=RS-10-4-1024k). Not enough datanodes? Exclude nodes=[]
      17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Block group <1> has 1 corrupt blocks.
      

      The 1st line is good. The 2nd line may be confusing to end users: the stream is writing, not reading, and the missing blocks were never allocated rather than corrupted. We should investigate the error and make the message more general / accurate, maybe something like 'failed to write x blocks'.
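      As a minimal sketch (not part of the attached patch; the paths, class name, and cluster assumptions are illustrative), the condition can be reproduced by writing into a directory using the RS-10-4-1024k policy on an HDFS 3.x cluster that has fewer than 14 (10 data + 4 parity) DataNodes:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.DistributedFileSystem;

      public class StripedWriteExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Assumes fs.defaultFS points at the HDFS cluster.
          DistributedFileSystem dfs =
              (DistributedFileSystem) new Path("/").getFileSystem(conf);

          // Hypothetical directory; the policy must already be enabled
          // on the cluster (hdfs ec -enablePolicy -policy RS-10-4-1024k).
          Path ecDir = new Path("/ec");
          dfs.mkdirs(ecDir);
          dfs.setErasureCodingPolicy(ecDir, "RS-10-4-1024k");

          // The write goes through DFSStripedOutputStream; with fewer than
          // 14 DataNodes the client cannot allocate every parity block and
          // logs the warnings quoted above.
          try (FSDataOutputStream out = dfs.create(new Path(ecDir, "file"))) {
            out.write(new byte[1024 * 1024]);
          }
        }
      }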

      Attachments

        1. HDFS-12933.001.patch
          1 kB
          chencan

        Activity


          People

            Assignee: chencan
            Reporter: Xiao Chen
            Votes: 0
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved:
