Hadoop HDFS / HDFS-16430

Validate maximum blocks in EC group when adding an EC policy


Details

    • Reviewed

    Description

      HDFS erasure coding uses the last 4 bits of the block ID to store the block index within an EC block group. Therefore an EC block group can contain at most 2^4 = 16 blocks, a limit defined in HdfsServerConstants#MAX_BLOCKS_IN_GROUP.

      Currently there is no validation or warning when adding a bad EC policy with numDataUnits + numParityUnits > 16. The bad policy only surfaces later as read/write errors on EC files that use it, which is not straightforward for users to diagnose.
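      A minimal sketch of the kind of check this issue proposes, assuming it would be invoked wherever a new erasure coding policy is added (for example in ErasureCodingPolicyManager); the helper name and error message below are illustrative, not the committed patch:

        import org.apache.hadoop.HadoopIllegalArgumentException;
        import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
        import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;

        // Illustrative validation helper: reject policies whose total number of
        // blocks cannot be indexed by the 4-bit block index in the block ID.
        private static void checkMaxBlocksInGroup(ErasureCodingPolicy policy) {
          int totalUnits = policy.getNumDataUnits() + policy.getNumParityUnits();
          // The block index within an EC block group is packed into the low
          // 4 bits of the block ID, so a group can hold at most 2^4 = 16 blocks
          // (HdfsServerConstants.MAX_BLOCKS_IN_GROUP).
          if (totalUnits > HdfsServerConstants.MAX_BLOCKS_IN_GROUP) {
            throw new HadoopIllegalArgumentException(
                "Invalid erasure coding policy: numDataUnits + numParityUnits = "
                    + totalUnits + " exceeds the maximum blocks in an EC group ("
                    + HdfsServerConstants.MAX_BLOCKS_IN_GROUP + ")");
          }
        }

      With such a check in place, adding a policy like RS-14-4 would fail immediately at policy-addition time instead of producing read/write errors later.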

      Attachments

        Activity


          People

            Assignee: cndaimin daimin
            Reporter: cndaimin daimin
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:

              Time Tracking

                Original Estimate: Not Specified
                Remaining Estimate: 0h
                Time Spent: 50m
