HBASE-18526

FIFOCompactionPolicy pre-check uses wrong scope


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.3.1
    • Fix Version/s: 1.4.0, 2.0.0-alpha-2, 2.0.0
    • Component/s: master
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      See https://issues.apache.org/jira/browse/HBASE-14468

      That issue added the following checks to HMaster.checkCompactionPolicy():

      // 1. Check TTL
      if (hcd.getTimeToLive() == HColumnDescriptor.DEFAULT_TTL) {
        message = "Default TTL is not supported for FIFO compaction";
        throw new IOException(message);
      }

      // 2. Check min versions
      if (hcd.getMinVersions() > 0) {
        message = "MIN_VERSION > 0 is not supported for FIFO compaction";
        throw new IOException(message);
      }

      // 3. Check blocking file count (message and blockingFileCount are
      // declared earlier in the method; blockingFileCount starts at the
      // cluster default, 10 in the run below)
      String sbfc = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
      if (sbfc != null) {
        blockingFileCount = Integer.parseInt(sbfc);
      }
      if (blockingFileCount < 1000) {
        message = "blocking file count '" + HStore.BLOCKING_STOREFILES_KEY + "' "
            + blockingFileCount + " is below recommended minimum of 1000";
        throw new IOException(message);
      }
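
      For context: at runtime the region server resolves per-store settings by
      layering the column family's CONFIGURATION over the table-level one. A
      simplified sketch of that resolution, modeled on the HBase 1.x HStore
      constructor (exact calls may differ by version; clusterConf stands for
      the server's Configuration):

      // Per-store config resolution, simplified: CF-level entries override
      // table-level entries, which override the cluster configuration.
      Configuration storeConf = new CompoundConfiguration()
          .add(clusterConf)                      // hbase-site.xml etc.
          .addStringMap(htd.getConfiguration())  // table-level CONFIGURATION
          .addStringMap(hcd.getConfiguration()); // column-family CONFIGURATION
      // So a CF-level 'hbase.hstore.blockingStoreFiles' is honored by the
      // store, even though the pre-check above never looks at the CF scope.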
      

      Why is the blocking file count checked only at the HTD (table) level,
      while the other checks are performed at the HCD (column family) level?
      For example, the following fails because of it:

      hbase(main):008:0> create 'ttltable', { NAME => 'cf1', TTL => 300,
      CONFIGURATION => { 'hbase.hstore.defaultengine.compactionpolicy.class'
      => 'org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy',
      'hbase.hstore.blockingStoreFiles' => 2000 } }
      
      ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: blocking file
      count 'hbase.hstore.blockingStoreFiles' 10 is below recommended
      minimum of 1000 Set hbase.table.sanity.checks to false at conf or
      table descriptor if you want to bypass sanity checks
      at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1782)
      at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1663)
      at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1545)
      at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:469)
      at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58549)
      at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
      at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
      at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
      at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
      Caused by: java.io.IOException: blocking file count
      'hbase.hstore.blockingStoreFiles' 10 is below recommended minimum of
      1000
      at org.apache.hadoop.hbase.master.HMaster.checkCompactionPolicy(HMaster.java:1773)
      at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1661)
      ... 7 more
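
      For reference, the same table definition through the Java client (a
      sketch against the 1.x API, assuming a Connection conn is already open;
      hcd.setConfiguration mirrors the shell's CONFIGURATION map) trips the
      same sanity check:

      Admin admin = conn.getAdmin();
      HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("ttltable"));
      HColumnDescriptor hcd = new HColumnDescriptor("cf1");
      hcd.setTimeToLive(300);
      hcd.setConfiguration(
          "hbase.hstore.defaultengine.compactionpolicy.class",
          "org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy");
      hcd.setConfiguration("hbase.hstore.blockingStoreFiles", "2000");
      htd.addFamily(hcd);
      admin.createTable(htd); // throws DoNotRetryIOException as above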
      

      The check should be performed on the column family level instead.
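
      A minimal sketch of what the corrected check could look like (variable
      names follow the snippet above; the actual committed patch may differ):
      prefer the column family's CONFIGURATION and fall back to the table-level
      value, mirroring how the region server resolves it at runtime.

      // Hypothetical fix: consult the CF scope first, then the table scope.
      String sbfc = hcd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
      if (sbfc == null) {
        sbfc = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
      }
      if (sbfc != null) {
        blockingFileCount = Integer.parseInt(sbfc);
      }
      if (blockingFileCount < 1000) {
        message = "blocking file count '" + HStore.BLOCKING_STOREFILES_KEY + "' "
            + blockingFileCount + " is below recommended minimum of 1000";
        throw new IOException(message);
      }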

      Attachments

        1. 18526.branch-1.txt (0.8 kB, Ted Yu)
        2. HBASE-18526-v1.patch (3 kB, Vladimir Rodionov)


      People

        Assignee: Vladimir Rodionov (vrodionov)
        Reporter: Lars George (larsgeorge)
        Votes: 0
        Watchers: 4
