
    Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.0.0-alpha-1, 2.3.0
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      When decompressing a compressed block, we also allocate a HeapByteBuffer for the unpacked block. We should allocate a ByteBuff from the global ByteBuffAllocator instead. Skimming the code, the key point is that we need a ByteBuff-based decompress interface, not the following:

      # Compression.java
        public static void decompress(byte[] dest, int destOffset,
            InputStream bufferedBoundedStream, int compressedSize,
            int uncompressedSize, Compression.Algorithm compressAlgo)
            throws IOException {
          //...
        }
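
      To illustrate the idea, here is a minimal, self-contained sketch of decompressing into an allocator-provided buffer instead of a fresh on-heap byte[]. The PooledAllocator class below is hypothetical, a stand-in for HBase's ByteBuffAllocator, and java.util.zip is used in place of HBase's Compression.Algorithm; this is not the actual HBase interface.

      ```java
      import java.nio.ByteBuffer;
      import java.util.ArrayDeque;
      import java.util.zip.Deflater;
      import java.util.zip.Inflater;

      public class DecompressSketch {

        // Hypothetical pooled allocator standing in for ByteBuffAllocator:
        // hands out (possibly recycled) direct buffers instead of new heap arrays.
        static class PooledAllocator {
          private final ArrayDeque<ByteBuffer> pool = new ArrayDeque<>();
          private final int bufSize;
          PooledAllocator(int bufSize) { this.bufSize = bufSize; }
          ByteBuffer allocate() {
            ByteBuffer b = pool.poll();
            return b != null ? b : ByteBuffer.allocateDirect(bufSize);
          }
          void release(ByteBuffer b) { b.clear(); pool.offer(b); }
        }

        // Buffer-based decompress: the destination comes from the allocator,
        // so the unpacked block can live off-heap and be recycled.
        static ByteBuffer decompress(PooledAllocator alloc, byte[] compressed,
                                     int uncompressedSize) throws Exception {
          ByteBuffer dest = alloc.allocate();
          Inflater inflater = new Inflater();
          inflater.setInput(compressed);
          byte[] chunk = new byte[uncompressedSize];
          int n = inflater.inflate(chunk);
          inflater.end();
          dest.put(chunk, 0, n);
          dest.flip();
          return dest;
        }

        public static void main(String[] args) throws Exception {
          // Compress a sample payload with zlib.
          byte[] raw = "hello hello hello".getBytes("UTF-8");
          Deflater deflater = new Deflater();
          deflater.setInput(raw);
          deflater.finish();
          byte[] out = new byte[64];
          int clen = deflater.deflate(out);
          deflater.end();

          // Round-trip through the allocator-backed decompress.
          PooledAllocator alloc = new PooledAllocator(64);
          ByteBuffer buf = decompress(alloc,
              java.util.Arrays.copyOf(out, clen), raw.length);
          byte[] round = new byte[buf.remaining()];
          buf.get(round);
          System.out.println(new String(round, "UTF-8").equals("hello hello hello"));
          alloc.release(buf); // buffer goes back to the pool for reuse
        }
      }
      ```

      The point of the sketch is only the shape of the API: decompress returns a buffer owned by the allocator, and the caller releases it, which is what lets the unpacked block be off-heap and pooled.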
      

      This is not very high priority, so let me make uncompressed blocks off-heap first.

      In HBASE-22005, I ignored these unit tests:
      1. TestLoadAndSwitchEncodeOnDisk;
      2. TestHFileBlock#testPreviousOffset;

      We need to resolve this issue and make those UTs pass.

        Attachments

        1. HBASE-21937.HBASE-21879.v3.patch
          35 kB
          Zheng Hu
        2. HBASE-21937.HBASE-21879.v2.patch
          35 kB
          Zheng Hu
        3. HBASE-21937.HBASE-21879.v1.patch
          33 kB
          Zheng Hu


            People

            • Assignee:
              openinx Zheng Hu
              Reporter:
              openinx Zheng Hu

