Hadoop HDFS / HDFS-17085

Erasure coding: readTo is computed larger than actually needed during pread


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: 3.4.0
    • Fix Version/s: None
    • Component/s: erasure-coding
    • Labels: None

    Description

      In HDFS-16520, EC pread was improved by introducing a readTo field.

      But the way it is calculated still seems to have some room for improvement.

      Currently, it is calculated by the code below:

      for (AlignedStripe stripe : stripes) {
        readTo = Math.max(readTo, stripe.getOffsetInBlock() + stripe.getSpanInBlock());
      } 
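As a minimal standalone illustration (the Stripe record and the byte offsets here are hypothetical stand-ins for HDFS's AlignedStripe, not taken from the codebase), the loop reduces all stripes to a single maximum end offset:

```java
public class ReadToDemo {
    // Hypothetical stand-in for AlignedStripe: start offset and span within the block.
    record Stripe(long offsetInBlock, long spanInBlock) {}

    static long computeReadTo(Stripe[] stripes) {
        long readTo = -1;
        for (Stripe stripe : stripes) {
            // Same reduction as the snippet above: max over each stripe's end offset.
            readTo = Math.max(readTo, stripe.offsetInBlock() + stripe.spanInBlock());
        }
        return readTo;
    }

    public static void main(String[] args) {
        Stripe[] stripes = {
            new Stripe(0, 1024),     // this stripe ends at byte 1024
            new Stripe(1024, 2048),  // this stripe ends at byte 3072
        };
        System.out.println(computeReadTo(stripes)); // prints 3072
    }
}
```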

      But in the code that follows, the same maximum readTo is used when constructing a StripeReader for every AlignedStripe object. I think this still wastes resources, since earlier stripes are bounded by an end offset beyond their own data.

      for (AlignedStripe stripe : stripes) {
        // Parse group to get chosen DN location
        StripeReader preader = new PositionStripeReader(stripe, ecPolicy, blks,
            preaderInfos, corruptedBlocks, decoder, this);
        preader.setReadTo(readTo);
        try {
          preader.readStripe();
        } finally {
          preader.close();
        }
      } 
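If I read the suggestion correctly, each reader could be bounded by its own stripe's end offset instead of the shared maximum. A sketch of the difference (Stripe is a hypothetical stand-in for AlignedStripe, and note the issue was resolved as "Not A Problem", so the per-stripe bound may not save real reads in practice):

```java
public class PerStripeBoundDemo {
    // Hypothetical stand-in for AlignedStripe.
    record Stripe(long offsetInBlock, long spanInBlock) {}

    // Current behavior: every stripe is bounded by the maximum end offset.
    static long globalBound(Stripe[] stripes) {
        long readTo = -1;
        for (Stripe s : stripes) {
            readTo = Math.max(readTo, s.offsetInBlock() + s.spanInBlock());
        }
        return readTo;
    }

    // Suggested alternative: bound each stripe by its own end offset.
    static long perStripeBound(Stripe s) {
        return s.offsetInBlock() + s.spanInBlock();
    }

    public static void main(String[] args) {
        Stripe[] stripes = { new Stripe(0, 1024), new Stripe(1024, 2048) };
        long max = globalBound(stripes); // 3072
        for (Stripe s : stripes) {
            long own = perStripeBound(s);
            // For the first stripe, the global bound exceeds its own end by 2048 bytes.
            System.out.println("stripe end=" + own + ", over-read=" + (max - own));
        }
    }
}
```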

      People

        Assignee: zhanghaobo farmmamba
        Reporter: zhanghaobo farmmamba
