HBASE-27013: Introduce read all bytes when using pread for prefetch


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.13
    • Affects Version/s: 2.5.0, 3.0.0-alpha-3
    • Component/s: HFile, Performance
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      Introduce an optional flag, hfile.pread.all.bytes.enabled, which makes pread always read the full requested bytes together with the next block header. This feature is especially helpful when running HBase on blob storage such as S3 or Azure Blob Storage. In particular, when using HBase with S3A and fs.s3a.experimental.input.fadvise=sequential, it keeps the input stream from seeking backward, which would otherwise waste time on storage connection resets.
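      As a rough illustration of how the two settings mentioned above fit together, the Java snippet below sets them programmatically; in a real deployment they would normally live in hbase-site.xml and core-site.xml, and the class name here is only for the example.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;

      public class PreadAllBytesConfigExample {
        public static void main(String[] args) {
          Configuration conf = HBaseConfiguration.create();
          // New flag from this issue: pread should also fetch the next block header.
          conf.setBoolean("hfile.pread.all.bytes.enabled", true);
          // S3A hint mentioned in the release note: sequential fadvise avoids aborting
          // the stream on backward seeks.
          conf.set("fs.s3a.experimental.input.fadvise", "sequential");
          System.out.println(conf.getBoolean("hfile.pread.all.bytes.enabled", false));
        }
      }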

    Description

      Problem statement

      When prefetching HFiles from blob storage such as S3 through a storage implementation like S3A, we found a logical issue in HBase pread that causes reads of the remote HFile to abort the input stream multiple times. Each abort and reopen slows down the reads, discards many already-fetched bytes, and wastes time re-creating the connection, especially when SSL is enabled.

      Root cause

      The root cause is that BlockIOUtils#preadWithExtra reads from an input stream that does not guarantee returning both the data block and the next block header (the optional extra data to be cached).

      When the input stream returns a short read that passes the end of the necessary data block by a few bytes, but by fewer bytes than the size of the next block header, BlockIOUtils#preadWithExtra returns to the caller without caching that header. As a result, before HBase reads the next block, HFileBlock#readBlockDataInternal re-reads the next block header from the input stream. By then the reusable input stream has already moved its position ahead of the offset of the last read data block; with the S3A implementation, the stream is therefore closed, all remaining bytes are aborted, and a new stream is reopened at the offset of the last read data block.
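      To make the failure mode concrete, below is a simplified, hypothetical sketch of such a pread loop (it is not the actual BlockIOUtils#preadWithExtra source): the loop only guarantees the necessary bytes, so a short read can leave the optional next-block header uncached.

      import java.io.IOException;
      import org.apache.hadoop.fs.FSDataInputStream;

      public final class PreadWithExtraSketch {
        /** Returns true only if the optional next block header was also read. */
        static boolean preadWithExtra(FSDataInputStream in, long offset, byte[] buf,
            int necessaryLen, int extraLen) throws IOException {
          int bytesRead = 0;
          // Loop only until the *necessary* bytes (the data block) have arrived.
          while (bytesRead < necessaryLen) {
            // Positional read: may legally return fewer bytes than requested (a short read).
            int ret = in.read(offset + bytesRead, buf, bytesRead, buf.length - bytesRead);
            if (ret < 0) {
              throw new IOException("Premature EOF at offset " + (offset + bytesRead));
            }
            bytesRead += ret;
          }
          // If bytesRead landed between necessaryLen and necessaryLen + extraLen, the next
          // block header was not cached; the caller later re-reads it at an offset behind
          // the stream's current position, which S3A services by aborting and reopening.
          return bytesRead >= necessaryLen + extraLen;
        }
      }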

      How do we fix it?

      S3A is doing the right thing here: HBase asks it to move the offset from position A back to A - N, so there is not much we can do about how S3A handles the input stream. In the HDFS case, by contrast, this operation is fast.

      Therefore we should fix this at the HBase level and always try to read the data block plus the next block header when running on blob storage, to avoid the expensive draining of the remaining bytes in the stream and the reopening of the socket to the remote storage.
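      A minimal sketch of that behavior, building on the hypothetical loop above and the new hfile.pread.all.bytes.enabled flag (this is not the committed patch): when the flag is on, keep reading until both the data block and the next block header are buffered, so no backward seek is needed.

      import java.io.IOException;
      import org.apache.hadoop.fs.FSDataInputStream;

      public final class PreadAllBytesSketch {
        /** Returns true only if the optional next block header ended up in the buffer. */
        static boolean preadWithExtra(FSDataInputStream in, long offset, byte[] buf,
            int necessaryLen, int extraLen, boolean readAllBytes) throws IOException {
          // With the flag on, the read target includes the next block header as well.
          int target = readAllBytes ? necessaryLen + extraLen : necessaryLen;
          int bytesRead = 0;
          while (bytesRead < target) {
            int ret = in.read(offset + bytesRead, buf, bytesRead, buf.length - bytesRead);
            if (ret < 0) {
              // EOF after the last data block simply means there is no next header to cache.
              if (bytesRead >= necessaryLen) break;
              throw new IOException("Premature EOF at offset " + (offset + bytesRead));
            }
            bytesRead += ret;
          }
          return bytesRead >= necessaryLen + extraLen;
        }
      }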

      Drawbacks and discussion

      • A known drawback: when reading the last block, the extra length we request is not actually a header, yet we still read it into the byte buffer. The extra size is always 33 bytes (one block header), and it is not a correctness issue because the trailer tells us where the last data block ends; we simply waste a 33-byte read whose data is never used.
      • We might be able to use HFileStreamReader instead, but that would change the prefetch logic significantly, so this minimal change seems to be the best option.

      Initial results

      We used a YCSB dataset of 1 billion records with prefetch enabled on the usertable, and collected the S3A metric stream_read_bytes_discarded_in_abort to compare the two runs; each region server had about 290 GB of data to prefetch into the bucket cache.

      • Before the change, a total of 4235973338472 bytes (~4235 GB) was aborted on a sample region server for about 290 GB of data.
        • The overall time was about 45~60 mins.
      % grep "stream_read_bytes_discarded_in_abort" ~/prefetch-result/prefetch-s3a-jmx-metrics.json | grep -wv "stream_read_bytes_discarded_in_abort\":0,"
               "stream_read_bytes_discarded_in_abort":3136854553,
               "stream_read_bytes_discarded_in_abort":19119241,
               "stream_read_bytes_discarded_in_abort":2131591701471,
               "stream_read_bytes_discarded_in_abort":150484654298,
               "stream_read_bytes_discarded_in_abort":106536641550,
               "stream_read_bytes_discarded_in_abort":1785264521717,
               "stream_read_bytes_discarded_in_abort":58939845642,
      
      • After the change, only 87100225454 bytes (~87 GB) of data were aborted.
        • These remaining aborts happen when the stream position is far behind the requested target position, so S3A reopens the stream at that offset; this is a different problem that we will need to look into later.
        • The overall time dropped to 30~38 mins, about 30% faster.
      % grep "stream_read_bytes_discarded_in_abort" ~/fixed-formatted-jmx2.json
            "stream_read_bytes_discarded_in_abort": 0,
            "stream_read_bytes_discarded_in_abort": 87100225454,
            "stream_read_bytes_discarded_in_abort": 67043088,
      

            People

              Assignee: Tak-Lon (Stephen) Wu (taklwu)
              Reporter: Tak-Lon (Stephen) Wu (taklwu)