HBase / HBASE-1556

[testing] optimize minicluster based testing in the test suite


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Critical
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: 0.90.0
    • Component/s: test
    • Labels: None

    Description

      It is possible to tell the Ant <junit> task to run all of the unit tests in a single forked JVM:

        <junit fork="yes" forkmode="once" ... >
        ...
        </junit>
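With forkmode="once", Ant forks a single JVM for the entire <batchtest> run, so JVM-wide statics survive across test classes. A fuller sketch of how such a target might look (the classpath reference and property names here are illustrative, not taken from the actual HBase build.xml):

```xml
<!-- Sketch only: test.classpath, test.log.dir and src.test are
     illustrative names, not the actual HBase build properties. -->
<junit fork="yes" forkmode="once" printsummary="yes" haltonfailure="no">
  <classpath refid="test.classpath"/>
  <batchtest todir="${test.log.dir}">
    <fileset dir="${src.test}" includes="**/Test*.java"/>
  </batchtest>
</junit>
```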
      

      Then, use statics to manage miniclusters in background threads:

        protected static final Log LOG =
            LogFactory.getLog(MiniClusterTestCase.class);
        protected static HBaseConfiguration conf = new HBaseConfiguration();
        protected static File testDir;
        protected static MiniZooKeeperCluster zooKeeperCluster;
        protected static MiniHBaseCluster hbaseCluster;
        protected static MiniDFSCluster dfsCluster;
      
        public static boolean isMiniClusterRunning() {
          return hbaseCluster != null;
        }
      
        private static void startDFS() throws Exception {
          if (dfsCluster != null) {
            LOG.error("MiniDFSCluster already running");
            return;
          }
          Path path = new Path(
              conf.get("test.build.data", "test/build/data"), "MiniClusterTestCase");
          FileSystem testFS = FileSystem.get(conf);
          if (testFS.exists(path)) {
            testFS.delete(path, true);
          }
          testDir = new File(path.toString());
          dfsCluster = new MiniDFSCluster(conf, 2, true, (String[])null);
          FileSystem filesystem = dfsCluster.getFileSystem();
          conf.set("fs.default.name", filesystem.getUri().toString());     
          Path parentdir = filesystem.getHomeDirectory();
          conf.set(HConstants.HBASE_DIR, parentdir.toString());
          filesystem.mkdirs(parentdir);
          FSUtils.setVersion(filesystem, parentdir);
          LOG.info("started MiniDFSCluster in " + testDir.toString());
        }
      
        private static void stopDFS() {
          if (dfsCluster != null) try {
            dfsCluster.shutdown();
            dfsCluster = null;
          } catch (Exception e) {
            LOG.warn(StringUtils.stringifyException(e));
          }
        }
      
        private static void startZooKeeper() throws Exception {
          if (zooKeeperCluster != null) {
            LOG.error("ZooKeeper already running");
            return;
          }
          zooKeeperCluster = new MiniZooKeeperCluster();
          zooKeeperCluster.startup(testDir);
          LOG.info("started " + zooKeeperCluster.getClass().getName());
        }
      
        private static void stopZooKeeper() {
          if (zooKeeperCluster != null) try {
            zooKeeperCluster.shutdown();
            zooKeeperCluster = null;
          } catch (Exception e) {
            LOG.warn(StringUtils.stringifyException(e));
          }
        }
       
        private static void startHBase() throws Exception {
          if (hbaseCluster != null) {
            LOG.error("MiniHBaseCluster already running");
            return;
          }
          hbaseCluster = new MiniHBaseCluster(conf, 1);
          // opening the META table ensures that cluster is running
          new HTable(conf, HConstants.META_TABLE_NAME);
          LOG.info("started MiniHBaseCluster");
        }
       
        private static void stopHBase() {
          if (hbaseCluster != null) try {
            HConnectionManager.deleteConnectionInfo(conf, true);
            hbaseCluster.shutdown();
            hbaseCluster = null;
          } catch (Exception e) {
            LOG.warn(StringUtils.stringifyException(e));
          }
        }
      
        public static void startMiniCluster() throws Exception {
          try {
            startDFS();
            startZooKeeper();
            startHBase();
          } catch (Exception e) {
            stopHBase();
            stopZooKeeper();
            stopDFS();
            throw e;
          }
        }
      
        public static void stopMiniCluster() {
          stopHBase();
          stopZooKeeper();
          stopDFS();
        }
      

      The base class for cluster testing can then do something like the following in its setUp method:

        protected void setUp() throws Exception {
          // start the mini cluster if it is not running yet
          if (!isMiniClusterRunning()) {
            startMiniCluster();
          }
        }
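Because the forked JVM is shared, the setUp guard above means only the first test class pays the startup cost; every later call finds the cluster already running. A minimal, self-contained sketch of that once-per-JVM pattern (the class name and the stand-in "cluster" object are illustrative, not HBase API):

```java
// Illustrative sketch only: a static guard ensures the expensive "cluster"
// is started at most once per JVM, however many test classes ask for it.
class MiniClusterGuardDemo {

    static Object cluster;   // stands in for the real miniclusters
    static int startCount;   // how many real startups actually happened

    static boolean isMiniClusterRunning() {
        return cluster != null;
    }

    // Mirrors the setUp() pattern: start only if not already running.
    static void ensureMiniCluster() {
        if (!isMiniClusterRunning()) {
            cluster = new Object();  // the expensive startup happens here, once
            startCount++;
        }
    }

    public static void main(String[] args) {
        // Simulate three test classes running in the same forked JVM.
        for (int i = 0; i < 3; i++) {
            ensureMiniCluster();
        }
        System.out.println("startups: " + startCount);  // prints "startups: 1"
    }
}
```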
      

      For example, when testing Stargate, the minicluster startup cost is clearly incurred only by the first unit test, which checks that the miniclusters are all running; subsequent tests do not pay it:

       
      test:
         [delete] Deleting directory /home/apurtell/src/stargate.git/build/test/logs
          [mkdir] Created dir: /home/apurtell/src/stargate.git/build/test/logs
          [junit] Running org.apache.hadoop.hbase.stargate.Test00MiniCluster
          [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 10.329 sec
          [junit] Running org.apache.hadoop.hbase.stargate.Test01VersionResource
          [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.243 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestCellModel
          [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.012 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestCellSetModel
          [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.018 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestRowModel
          [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.008 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestScannerModel
          [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.013 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestStorageClusterStatusModel
          [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.024 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestStorageClusterVersionModel
          [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.006 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestTableInfoModel
          [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.017 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestTableListModel
          [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.012 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestTableRegionModel
          [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.018 sec
          [junit] Running org.apache.hadoop.hbase.stargate.model.TestVersionModel
          [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.014 sec
      
      BUILD SUCCESSFUL
      Total time: 14 seconds
      

      This can obviously shave a lot of time off the current HBase test suite. However, the suite will need heavy modification: each test case was written with the expectation that it starts against a pristine minicluster, so some of its assumptions will be invalidated when the cluster is shared, and many cases duplicate table creations performed by others.

      People

        Assignee: Unassigned
        Reporter: Andrew Kyle Purtell (apurtell)
        Votes: 0
        Watchers: 2