HBASE-11263

Share the open/close store file thread pool for all stores in a region

Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Abandoned
    • Affects Version/s: 0.99.0
    • Fix Version/s: None
    • Component/s: regionserver
    • Labels: None

    Description

      Currently, the open/close store file thread pool is divided equally among all stores of a region.

        protected ThreadPoolExecutor getStoreFileOpenAndCloseThreadPool(
            final String threadNamePrefix) {
          int numStores = Math.max(1, this.htableDescriptor.getFamilies().size());
          int maxThreads = Math.max(1,
              conf.getInt(HConstants.HSTORE_OPEN_AND_CLOSE_THREADS_MAX,
                  HConstants.DEFAULT_HSTORE_OPEN_AND_CLOSE_THREADS_MAX)
                  / numStores);
          return getOpenAndCloseThreadPool(maxThreads, threadNamePrefix);
        }
      

      This is suboptimal in the following scenarios:

      1. Some column families hold very large amounts of data, so their stores contain many hfiles, while other column families may be very small or in-memory.
      2. Often some column families are reserved for future needs; the threads allocated to those families are simply wasted.

      A simple fix is to share one big thread pool among all stores of a region for opening and closing hfiles.
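
      For illustration only, a minimal sketch of that shared-pool idea (not the attached patch): drop the division by the store count, so every store in the region draws from the full configured thread budget.

        // Hypothetical sketch: size the pool by the full configured maximum
        // instead of dividing it by the number of column families.
        protected ThreadPoolExecutor getStoreFileOpenAndCloseThreadPool(
            final String threadNamePrefix) {
          int maxThreads = Math.max(1,
              conf.getInt(HConstants.HSTORE_OPEN_AND_CLOSE_THREADS_MAX,
                  HConstants.DEFAULT_HSTORE_OPEN_AND_CLOSE_THREADS_MAX));
          return getOpenAndCloseThreadPool(maxThreads, threadNamePrefix);
        }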

      Suggestions are welcomed.
      Thanks.

      Attachments

        1. HBASE-11263-trunk-v1.diff
          6 kB
          Shaohui Liu

        Activity

          fenghh Honghua Feng added a comment -

          A small improvement, but it sounds reasonable. You can provide a patch accordingly for review.

          liushaohui Shaohui Liu added a comment -

          Patch for the trunk

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12647586/HBASE-11263-trunk-v1.diff
          against trunk revision .
          ATTACHMENT ID: 12647586

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9649//console

          This message is automatically generated.


          apurtell Andrew Kyle Purtell added a comment -

          Do we need storeFileThreadPoolCounter? It looks like this is effectively being used as a boolean flag, when you could achieve the same by testing whether storeFileThreadPool is null in an exclusive section. That would save some heap per HRegion.

          What happens if you use one thread pool for opening and closing store files per RegionServer? If we have thousands of HRegions and they all have thread pools, surely we are not able to fully schedule that degree of parallelism?
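
          For illustration, a hypothetical sketch of the null-check alternative from the first paragraph above (field and method names are illustrative, not from the patch): create the shared pool lazily inside a synchronized section and let the field itself serve as the "already created" flag.

            // Hypothetical sketch: lazy init under the instance lock; no counter field.
            private ThreadPoolExecutor storeFileThreadPool; // null until first use

            synchronized ThreadPoolExecutor getSharedStoreFilePool(
                String threadNamePrefix, int maxThreads) {
              if (storeFileThreadPool == null) {
                storeFileThreadPool =
                    getOpenAndCloseThreadPool(maxThreads, threadNamePrefix);
              }
              return storeFileThreadPool;
            }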

          stack Michael Stack added a comment -

          Naming the thread pool for the first region that passes by seems a bit odd:

          + String threadNamePrefix =
          +     "StoreFileOpenerThread-" + getRegionNameAsString();

          liushaohui Shaohui Liu added a comment -

          andrew.purtell@gmail.com

          Do we need storeFileThreadPoolCounter?

          Each store of the region will try to get and close the thread pool when opening and closing store files. StoreFileThreadPoolCounter represents the number of stores that are currently using this thread pool. When StoreFileThreadPoolCounter drops to zero, we can actually shut down the thread pool.

          If we only use a boolean flag, we must put the code that closes the thread pool at the region level. That is a little odd, because the thread pool would be used in the construction of a store but closed in the region.
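
          For illustration, a hypothetical sketch of that reference-counting scheme (names are illustrative, not the patch itself): each store acquires the shared pool before use and releases it when done, and the last release actually shuts the pool down.

            // Hypothetical sketch: the counter tracks how many stores hold the pool.
            private ThreadPoolExecutor storeFileThreadPool;
            private int storeFileThreadPoolCounter; // stores currently using the pool

            synchronized ThreadPoolExecutor acquireStoreFilePool(
                String threadNamePrefix, int maxThreads) {
              if (storeFileThreadPool == null) {
                storeFileThreadPool =
                    getOpenAndCloseThreadPool(maxThreads, threadNamePrefix);
              }
              storeFileThreadPoolCounter++;
              return storeFileThreadPool;
            }

            synchronized void releaseStoreFilePool() {
              if (--storeFileThreadPoolCounter == 0) {
                storeFileThreadPool.shutdown(); // last user shuts it down
                storeFileThreadPool = null;
              }
            }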

          What happens if you use one thread pool for opening and closing store files per RegionServer? If we have thousands of HRegions and they all have thread pools, surely we are not able to fully schedule that degree of parallelism?

          Agree! A per-regionserver thread pool for opening and closing store files would be better.

          stack

          Naming the thread pool for the first region that passes by seems a bit odd:

          In the patch, the thread pool for opening and closing store files is per region, so we name the thread pool with the region name.


          apurtell Andrew Kyle Purtell added a comment -

          Agree! A per-regionserver thread pool for opening and closing store files would be better.

          Great!

          stack Michael Stack added a comment -

          Ok. +1 then. Work for a thread pool at server level would be in a different JIRA I presume.


          apurtell Andrew Kyle Purtell added a comment -

          Ok. +1 then. Work for a thread pool at server level would be in a different JIRA I presume.

          I'd argue there's not much point to this change if a server-level thread pool is being worked on now or next.


          People

            Assignee: Unassigned
            Reporter: Shaohui Liu
            Votes: 0
            Watchers: 5
