IMPALA-7727

failed compute stats child query status no longer propagates to parent query




      bharathv since you have been dealing with stats, please take a look. Otherwise feel free to reassign. This bug prevents the stress test from running with compute stats statements. It also triggers under non-stressful conditions.

      $ impala-shell.sh -d tpch_parquet
      [localhost:21000] tpch_parquet> set mem_limit=24m;
      MEM_LIMIT set to 24m
      [localhost:21000] tpch_parquet> compute stats customer;
      Query: compute stats customer
      WARNINGS: Cancelled
      [localhost:21000] tpch_parquet>
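For reference, the 24m limit set above is 25165824 bytes, which matches the MEM_LIMIT value recorded in the child query's profile below and falls well short of the 32.12 MB minimum the admission controller asks for. A quick sanity check of the arithmetic (plain Python, not Impala code):

```python
# mem_limit=24m from the shell, expressed in bytes
mem_limit_bytes = 24 * 1024 * 1024
print(mem_limit_bytes)  # 25165824, the MEM_LIMIT seen in the profile

# Minimum the admission controller reports in the rejection message.
required_bytes = int(32.12 * 1024 * 1024)

# The child query cannot be admitted under this limit.
print(mem_limit_bytes < required_bytes)  # True
```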

      The problem is that the child query didn't have enough memory to run, but that error never propagated up: the parent compute stats statement reported only a generic "Cancelled" warning.

      Query (id=384d37fb2826a962:f4b1035700000000):
        DEBUG MODE WARNING: Query profile created while running a DEBUG build of Impala. Use RELEASE builds to measure query performance.
          Session ID: d343e1026d497bb0:7e87b342c73c108d
          Session Type: BEESWAX
          Start Time: 2018-10-18 15:16:34.036363000
          End Time: 2018-10-18 15:16:34.177711000
          Query Type: QUERY
          Query State: EXCEPTION
          Query Status: Rejected query from pool default-pool: minimum memory reservation is greater than memory available to the query for buffer reservations. Memory reservation needed given the current plan: 128.00 KB. Adjust either the mem_limit or the pool config (max-query-mem-limit, min-query-mem-limit) for the query to allow the query memory limit to be at least 32.12 MB. Note that changing the mem_limit may also change the plan. See the query profile for more information about the per-node memory requirements.
          Impala Version: impalad version 3.1.0-SNAPSHOT DEBUG (build 9f5c5e6df03824cba292fe5a619153462c11669c)
          User: mikeb
          Connected User: mikeb
          Delegated User: 
          Network Address: ::ffff:
          Default Db: tpch_parquet
          Sql Statement: SELECT COUNT(*) FROM customer
          Coordinator: mikeb-ub162:22000
          Query Options (set by configuration): MEM_LIMIT=25165824,MT_DOP=4
          Query Options (set by configuration and planner): MEM_LIMIT=25165824,NUM_SCANNER_THREADS=1,MT_DOP=4
      Max Per-Host Resource Reservation: Memory=512.00KB Threads=5
      Per-Host Resource Estimates: Memory=146MB
      F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
      |  Per-Host Resources: mem-estimate=10.00MB mem-reservation=0B thread-reservation=1
      |  mem-estimate=0B mem-reservation=0B thread-reservation=0
      |  output: count:merge(*)
      |  mem-estimate=10.00MB mem-reservation=0B spill-buffer=2.00MB thread-reservation=0
      |  tuple-ids=1 row-size=8B cardinality=1
      |  in pipelines: 03(GETNEXT), 01(OPEN)
      |  mem-estimate=0B mem-reservation=0B thread-reservation=0
      |  tuple-ids=1 row-size=8B cardinality=1
      |  in pipelines: 01(GETNEXT)
      F00:PLAN FRAGMENT [RANDOM] hosts=1 instances=4
      Per-Host Resources: mem-estimate=136.00MB mem-reservation=512.00KB thread-reservation=4
      |  output: sum_init_zero(tpch_parquet.customer.parquet-stats: num_rows)
      |  mem-estimate=10.00MB mem-reservation=0B spill-buffer=2.00MB thread-reservation=0
      |  tuple-ids=1 row-size=8B cardinality=1
      |  in pipelines: 01(GETNEXT), 00(OPEN)
      00:SCAN HDFS [tpch_parquet.customer, RANDOM]
         partitions=1/1 files=1 size=12.34MB
         stored statistics:
           table: rows=150000 size=12.34MB
           columns: all
         extrapolated-rows=disabled max-scan-range-rows=150000
         mem-estimate=24.00MB mem-reservation=128.00KB thread-reservation=0
         tuple-ids=0 row-size=8B cardinality=150000
         in pipelines: 00(GETNEXT)
          Estimated Per-Host Mem: 153092096
          Per Host Min Memory Reservation: mikeb-ub162:22000(0) mikeb-ub162:22001(128.00 KB)
          Request Pool: default-pool
          Admission result: Rejected
          Query Compilation: 126.903ms
             - Metadata of all 1 tables cached: 5.484ms (5.484ms)
             - Analysis finished: 16.104ms (10.619ms)
             - Value transfer graph computed: 32.646ms (16.542ms)
             - Single node plan created: 61.289ms (28.642ms)
             - Runtime filters computed: 66.148ms (4.859ms)
             - Distributed plan created: 66.428ms (280.057us)
             - Parallel plans created: 67.866ms (1.437ms)
             - Planning finished: 126.903ms (59.037ms)
          Query Timeline: 140.000ms
             - Query submitted: 0.000ns (0.000ns)
             - Planning finished: 140.000ms (140.000ms)
             - Submit for admission: 140.000ms (0.000ns)
             - Completed admission: 140.000ms (0.000ns)
             - Rows available: 140.000ms (0.000ns)
             - Unregister query: 140.000ms (0.000ns)
           - ComputeScanRangeAssignmentTimer: 0.000ns
           - ClientFetchWaitTimer: 0.000ns
           - RowMaterializationTimer: 0.000ns
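The profile above shows the child query ending in state EXCEPTION with a detailed rejection message, while the parent COMPUTE STATS statement surfaces only "Cancelled". A minimal sketch of the expected behavior, in plain Python rather than Impala's actual C++ coordinator code (all class and method names here are hypothetical):

```python
class ChildQuery:
    """Hypothetical stand-in for a COMPUTE STATS child query."""

    def __init__(self, sql):
        self.sql = sql
        self.state = "PENDING"
        self.status = None

    def run(self, mem_limit, required):
        # Mirror the admission-control rejection seen in the profile.
        if mem_limit < required:
            self.state = "EXCEPTION"
            self.status = ("Rejected query from pool default-pool: minimum memory "
                           "reservation is greater than memory available to the query")
        else:
            self.state = "FINISHED"


class ParentQuery:
    """Hypothetical parent COMPUTE STATS statement."""

    def run_child(self, child, mem_limit, required):
        child.run(mem_limit, required)
        if child.state == "EXCEPTION":
            # Expected: propagate the child's status to the client,
            # not a bare "Cancelled".
            return child.status
        return "OK"


parent = ParentQuery()
child = ChildQuery("SELECT COUNT(*) FROM customer")
result = parent.run_child(child, 24 * 1024 * 1024, int(32.12 * 1024 * 1024))
print(result)  # should surface the "Rejected query from pool ..." message
```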


        Attachments:
          1. 2.12-compute-stats-profile.txt (2 kB, Michael Brown)
          2. 3.1-compute-stats-profile.txt (1 kB, Michael Brown)
          3. 3.1-child-profile.txt (4 kB, Michael Brown)
          4. 2.12-child-profile.txt (17 kB, Michael Brown)

              bharathv Bharath Vissapragada
              mikeb Michael Brown