HBASE-7371: Blocksize in TestHFileBlock is unintentionally small
(Sub-task of HBASE-7347: Allow multiple readers per storefile)


    Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.94.4, 0.95.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Looking at TestHFileBlock.writeBlocks, I see this:

            for (int j = 0; j < rand.nextInt(500); ++j) {
              // This might compress well.
              dos.writeShort(i + 1);
              dos.writeInt(j + 1);
            }
      

      The result is probably not what the author intended: rand.nextInt(500) is re-evaluated on every iteration, so the loop exits as soon as a draw comes up less than or equal to j. That leads to very small block sizes, mostly between ~100 and ~300 bytes.

      The author probably intended this:

            int size = rand.nextInt(500);
            for (int j = 0; j < size; ++j) {
              // This might compress well.
              dos.writeShort(i + 1);
              dos.writeInt(j + 1);
            }
      

      This leads to more reasonable block sizes, between ~200 and ~3000 bytes.
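
      To see the effect, here is a minimal standalone sketch (not from TestHFileBlock; the class name and seed are invented for illustration) that contrasts the two loop forms. Each loop iteration in writeBlocks emits six bytes (a short plus an int), so the trip count translates directly into block size:

            import java.util.Random;

            public class LoopBoundDemo {
              public static void main(String[] args) {
                Random rand = new Random(42);

                // Buggy form: the bound is re-drawn on every iteration, so the
                // loop exits as soon as a draw comes up <= j. Trip counts stay small.
                int buggy = 0;
                for (int j = 0; j < rand.nextInt(500); ++j) {
                  buggy++;
                }

                // Intended form: the bound is drawn once, giving a trip count
                // uniformly distributed in [0, 500).
                int size = rand.nextInt(500);
                int fixed = 0;
                for (int j = 0; j < size; ++j) {
                  fixed++;
                }

                System.out.println("buggy: " + buggy + " iterations (~" + 6 * buggy + " bytes)");
                System.out.println("fixed: " + fixed + " iterations (~" + 6 * fixed + " bytes)");
              }
            }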


    People

    • Assignee: Lars Hofhansl (larsh)
    • Reporter: Lars Hofhansl (larsh)
    • Votes: 0
    • Watchers: 3
