IMPALA-3054

Runtime filters are not disabled when spilling in a rare case


Details

    Description

      A query in TestSpilling returned incorrect results (the actual count is lower than expected, consistent with a runtime filter from a spilled join incorrectly pruning probe rows):

      assert Comparing QueryTestResults (expected vs actual):   1846743 != 1526159
      
      -- executing against localhost:21000
      
      set max_block_mgr_memory=100m;
      
      -- executing against localhost:21000
      
      select count(l1.l_tax)
      from
      lineitem l1,
      lineitem l2,
      lineitem l3
      where
      l1.l_tax < 0.01 and
      l2.l_tax < 0.04 and
      l1.l_orderkey = l2.l_orderkey and
      l1.l_orderkey = l3.l_orderkey and
      l1.l_comment = l3.l_comment and
      l1.l_shipdate = l3.l_shipdate;
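
      Given the suspected cause (a runtime filter produced by a spilled join is not being disabled), one way to confirm it is to rerun the failing query under the same memory limit with runtime filters turned off and compare the counts. The sketch below is a hypothesis check, not a fix; it assumes the RUNTIME_FILTER_MODE query option is available in the build under test.

      -- Same spilling setup as the failing run, but with runtime filters disabled.
      set max_block_mgr_memory=100m;
      set runtime_filter_mode=off;

      select count(l1.l_tax)
      from
      lineitem l1,
      lineitem l2,
      lineitem l3
      where
      l1.l_tax < 0.01 and
      l2.l_tax < 0.04 and
      l1.l_orderkey = l2.l_orderkey and
      l1.l_orderkey = l3.l_orderkey and
      l1.l_comment = l3.l_comment and
      l1.l_shipdate = l3.l_shipdate;

      -- If this returns the expected 1846743 while the run with filters enabled
      -- returns 1526159, an over-restrictive filter from a spilled join is the
      -- likely cause of the missing rows.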
      

      http://sandbox.jenkins.cloudera.com/job/impala-master-cdh5-trunk-non-partitioned-hash-and-aggs/238/

      Regression
      
      Impala.tests.custom_cluster.test_spilling.TestSpilling.test_spilling[exec_option: {'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0, 'batch_size': 0, 'num_nodes': 0} | table_format: parquet/none] (from pytest)
      Failing for the past 1 build (Since Failed#238 )
      Took 1 min 9 sec.
      Error Message
      
      assert Comparing QueryTestResults (expected vs actual):   1846743 != 1526159
      
      Stacktrace
      
      self = <test_spilling.TestSpilling object at 0x2c09ed0>
      vector = <tests.common.test_vector.TestVector object at 0x1918690>
      
          @pytest.mark.execute_serially
          @CustomClusterTestSuite.with_args(
              impalad_args="--read_size=200000",
              catalogd_args="--load_catalog_in_background=false")
          def test_spilling(self, vector):
            new_vector = deepcopy(vector)
            # remove this. the test cases set this explicitly.
            del new_vector.get_value('exec_option')['num_nodes']
      >     self.run_test_case('QueryTest/spilling', new_vector)
      
      custom_cluster/test_spilling.py:92: 
      _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
      common/impala_test_suite.py:287: in run_test_case
          pytest.config.option.update_results)
      common/test_result_verifier.py:357: in verify_raw_results
          VERIFIER_MAP[verifier](expected, actual)
      _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
      
      expected_results = <tests.common.test_result_verifier.QueryTestResult object at 0x2c1e9d0>
      actual_results = <tests.common.test_result_verifier.QueryTestResult object at 0x2c1ef50>
      
          def verify_query_result_is_equal(expected_results, actual_results):
            assert_args_not_none(expected_results, actual_results)
      >     assert expected_results == actual_results
      E     assert Comparing QueryTestResults (expected vs actual):
      E       1846743 != 1526159
      
      common/test_result_verifier.py:203: AssertionError
      
      Standard Output
      
      Starting State Store logging to /data/jenkins/workspace/impala-master-cdh5-trunk-non-partitioned-hash-and-aggs/repos/Impala/cluster_logs/custom_cluster//statestored.INFO
      Starting Catalog Service logging to /data/jenkins/workspace/impala-master-cdh5-trunk-non-partitioned-hash-and-aggs/repos/Impala/cluster_logs/custom_cluster//catalogd.INFO
      Starting Impala Daemon logging to /data/jenkins/workspace/impala-master-cdh5-trunk-non-partitioned-hash-and-aggs/repos/Impala/cluster_logs/custom_cluster//impalad.INFO
      Starting Impala Daemon logging to /data/jenkins/workspace/impala-master-cdh5-trunk-non-partitioned-hash-and-aggs/repos/Impala/cluster_logs/custom_cluster//impalad_node1.INFO
      Starting Impala Daemon logging to /data/jenkins/workspace/impala-master-cdh5-trunk-non-partitioned-hash-and-aggs/repos/Impala/cluster_logs/custom_cluster//impalad_node2.INFO
      Waiting for Catalog... Status: 50 DBs / 1063 tables (ready=True)
      Waiting for Catalog... Status: 50 DBs / 1063 tables (ready=True)
      Waiting for Catalog... Status: 50 DBs / 1063 tables (ready=True)
      Impala Cluster Running with 3 nodes.
      
      Standard Error
      
      MainThread: Found 3 impalad/1 statestored/1 catalogd process(es)
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25000
      MainThread: Waiting for num_known_live_backends=3. Current value: 0
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25000
      MainThread: Waiting for num_known_live_backends=3. Current value: 0
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25000
      MainThread: Waiting for num_known_live_backends=3. Current value: 2
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25000
      MainThread: Waiting for num_known_live_backends=3. Current value: 2
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25000
      MainThread: num_known_live_backends has reached value: 3
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25001
      MainThread: num_known_live_backends has reached value: 3
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25002
      MainThread: num_known_live_backends has reached value: 3
      MainThread: Found 3 impalad/1 statestored/1 catalogd process(es)
      MainThread: Getting metric: statestore.live-backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25010
      MainThread: Metric 'statestore.live-backends' has reach desired value: 4
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25000
      MainThread: num_known_live_backends has reached value: 3
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25001
      MainThread: num_known_live_backends has reached value: 3
      MainThread: Getting num_known_live_backends from impala-boost-static-burst-slave-0d35.vpc.cloudera.com:25002
      MainThread: num_known_live_backends has reached value: 3
      -- connecting to: localhost:21000
      -- executing against localhost:21000
      use tpch_parquet;
      
      SET disable_codegen=False;
      SET abort_on_error=1;
      SET exec_single_node_rows_threshold=0;
      SET batch_size=0;
      -- executing against localhost:21000
      set num_nodes=1;
      
      -- executing against localhost:21000
      
      set max_block_mgr_memory=265m;
      
      -- executing against localhost:21000
      
      select l_orderkey, count(*)
      from lineitem
      group by 1
      order by 1 limit 10;
      
      -- executing against localhost:21000
      SET NUM_NODES=0;;
      
      -- executing against localhost:21000
      set num_nodes=1;
      
      -- executing against localhost:21000
      
      set max_block_mgr_memory=275m;
      
      -- executing against localhost:21000
      
      select l_returnflag, l_orderkey, avg(l_tax), min(l_shipmode)
      from lineitem
      group by 1,2
      order by 1,2 limit 3;
      
      -- executing against localhost:21000
      SET NUM_NODES=0;;
      
      -- executing against localhost:21000
      set max_block_mgr_memory=275m;
      
      -- executing against localhost:21000
      
      select l_orderkey, count(*)
      from lineitem
      group by 1
      order by 1 limit 10;
      
      -- executing against localhost:21000
      SET MAX_BLOCK_MGR_MEMORY=0;;
      
      -- executing against localhost:21000
      set num_nodes=0;
      
      -- executing against localhost:21000
      
      set max_block_mgr_memory=275m;
      
      -- executing against localhost:21000
      
      select l_comment, count(*)
      from lineitem
      group by 1
      order by count(*) desc limit 5;
      
      -- executing against localhost:21000
      SET NUM_NODES=0;;
      
      -- executing against localhost:21000
      set num_nodes=0;
      
      -- executing against localhost:21000
      
      set max_block_mgr_memory=80m;
      
      -- executing against localhost:21000
      
      select l_returnflag, l_orderkey, round(avg(l_tax),2), min(l_shipmode)
      from lineitem
      group by 1,2
      order by 1,2 limit 3;
      
      -- executing against localhost:21000
      SET NUM_NODES=0;;
      
      -- executing against localhost:21000
      set num_nodes=0;
      
      -- executing against localhost:21000
      
      set max_block_mgr_memory=275m;
      
      -- executing against localhost:21000
      
      select l_orderkey, avg(l_orderkey)
      from lineitem
      group by 1
      order by 1 limit 5;
      
      -- executing against localhost:21000
      SET NUM_NODES=0;;
      
      -- executing against localhost:21000
      set num_nodes=0;
      
      -- executing against localhost:21000
      
      set max_block_mgr_memory=100m;
      
      -- executing against localhost:21000
      
      select count(l1.l_tax)
      from
      lineitem l1,
      lineitem l2,
      lineitem l3
      where
      l1.l_tax < 0.01 and
      l2.l_tax < 0.04 and
      l1.l_orderkey = l2.l_orderkey and
      l1.l_orderkey = l3.l_orderkey and
      l1.l_comment = l3.l_comment and
      l1.l_shipdate = l3.l_shipdate;
      
      -- executing against localhost:21000
      SET NUM_NODES=0;;
      
      MainThread: Comparing QueryTestResults (expected vs actual):
      1846743 != 1526159
      


          People

            Assignee: Tim Armstrong (tarmstrong)
            Reporter: Tim Armstrong (tarmstrong)
            Votes: 0
            Watchers: 5

