HBase / HBASE-10438

NPE from LRUDictionary when size reaches the max init value


Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.98.0
    • Fix Version/s: 0.98.0, 0.99.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      This happened while testing tags with COMPRESS_TAGS=true/false. I was trying to change this tag-compression attribute by altering the HCD. The DBE used is FAST_DIFF.
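      For reference, roughly how such an alter could be issued from a client (a minimal sketch against the 0.98-era API; the use of setValue("COMPRESS_TAGS", ...) and the surrounding calls are my assumptions, not taken from this report):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.HColumnDescriptor;
      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.client.HBaseAdmin;
      import org.apache.hadoop.hbase.util.Bytes;

      public class ToggleCompressTags {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HBaseAdmin admin = new HBaseAdmin(conf);
          try {
            TableName table = TableName.valueOf("usertable");
            // Fetch the current column family descriptor (HCD) for 'f1'.
            HColumnDescriptor hcd =
                admin.getTableDescriptor(table).getFamily(Bytes.toBytes("f1"));
            // Flip the tag-compression attribute; DATA_BLOCK_ENCODING stays FAST_DIFF.
            // Depending on configuration the table may need to be disabled first.
            hcd.setValue("COMPRESS_TAGS", "true");
            admin.modifyColumn(table, hcd);
          } finally {
            admin.close();
          }
        }
      }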
      In one particular case I got this

      2014-01-29 16:20:03,023 ERROR [regionserver60020-smallCompactions-1390983591688] regionserver.CompactSplitThread: Compaction failed Request = regionName=usertable,user5146961419203824653,1390979618897.2dd477d0aed888c615a29356c0bbb19d., storeName=f1, fileCount=4, fileSize=498.6 M (226.0 M, 163.7 M, 67.0 M, 41.8 M), priority=6, time=1994941280334574
      java.lang.NullPointerException
              at org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.put(LRUDictionary.java:109)
              at org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.access$200(LRUDictionary.java:76)
              at org.apache.hadoop.hbase.io.util.LRUDictionary.addEntry(LRUDictionary.java:62)
              at org.apache.hadoop.hbase.io.TagCompressionContext.uncompressTags(TagCompressionContext.java:147)
              at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.decodeTags(BufferedDataBlockEncoder.java:270)
              at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:522)
              at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeFirst(FastDiffDeltaEncoder.java:535)
              at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.setCurrentBuffer(BufferedDataBlockEncoder.java:188)
              at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1017)
              at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:1068)
              at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:137)
              at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
              at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:509)
              at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:217)
              at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:76)
              at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:109)
              at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1074)
              at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1382)
              at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:475)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
              at java.lang.Thread.run(Thread.java:744)
      

      I am not able to reproduce this repeatedly. One thing to note is that I altered the table to use COMPRESS_TAGS; before that it was false.
      My feeling is that this is not due to COMPRESS_TAGS, because we try to handle this per file by adding it to FILE_INFO.
      In the above stack trace the problem occurred during compaction, so the flushed file should have this property set. I think the problem could be with LRUDictionary.
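      To illustrate the per-file handling referred to above (a conceptual sketch only; the map, the TAGS_COMPRESSED key name and the reader logic are illustrative assumptions, not the actual HFile FILE_INFO code):

      import java.util.HashMap;
      import java.util.Map;

      // Idea: each flushed/compacted file carries its own "tags compressed" flag in
      // its file-info metadata, so a reader (e.g. during compaction) consults the
      // per-file flag rather than the table's current COMPRESS_TAGS attribute.
      public class PerFileTagFlag {
        // Hypothetical file-info key; the real key used by HBase may differ.
        static final String TAGS_COMPRESSED = "TAGS_COMPRESSED";

        public static void main(String[] args) {
          // Writer side: record the attribute that was in effect at write time.
          Map<String, String> fileInfo = new HashMap<String, String>();
          fileInfo.put(TAGS_COMPRESSED, "true");

          // Reader side: trust the flag stored in the file, not the live descriptor.
          boolean tagsCompressed = Boolean.parseBoolean(fileInfo.get(TAGS_COMPRESSED));
          System.out.println("decode tags with dictionary: " + tagsCompressed);
        }
      }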
      The reason for the NPE is:

      if (currSize < initSize) {
        // There is space to add without evicting.
        indexToNode[currSize].setContents(stored, 0, stored.length);
        setHead(indexToNode[currSize]);
        short ret = (short) currSize++;
        nodeToIndex.put(indexToNode[ret], ret);
        System.out.println(currSize);
        return ret;
      } else {
        short s = nodeToIndex.remove(tail);
        tail.setContents(stored, 0, stored.length);
        // we need to rehash this.
        nodeToIndex.put(tail, s);
        moveToHead(tail);
        return s;
      }
      

      Here

      short s = nodeToIndex.remove(tail);
      

      is returning null, and auto-unboxing that null into the short primitive is what throws the NPE. I am digging into this further to see if I am able to reproduce it.
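      A minimal standalone sketch of that failure mode (the names below are illustrative, not the actual LRUDictionary fields):

      import java.util.HashMap;
      import java.util.Map;

      // Map.remove() returns null when the key is absent; auto-unboxing that null
      // Short into a short primitive throws NullPointerException at the assignment,
      // which matches the NPE seen at LRUDictionary$BidirectionalLRUMap.put().
      public class UnboxingNpeDemo {
        public static void main(String[] args) {
          Map<String, Short> nodeToIndex = new HashMap<String, Short>();
          String tail = "tail-node-not-present-in-map";
          short s = nodeToIndex.remove(tail); // NPE: null cannot be unboxed to short
          System.out.println(s);              // never reached
        }
      }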

      Attachments

        1. HBASE-10438.patch
          0.7 kB
          Anoop Sam John


          People

            Assignee: anoop.hbase Anoop Sam John
            Reporter: ram_krish ramkrishna.s.vasudevan
            Votes: 0
            Watchers: 5
