IMPALA-3836

Queries hang and don't make progress with num_nodes=1


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Bug
    • Affects Version/s: Impala 2.7.0
    • Fix Version/s: Impala 2.7.0
    • Component/s: Backend
    • Labels: None

    Description

      From the coordinator log for the affected query (select count(*) from store_sales, run with num_nodes=1):

      I0707 11:45:49.297648 55578 impala-beeswax-server.cc:545] TClientRequest.queryOptions: TQueryOptions {
        01: abort_on_error (bool) = false,
        02: max_errors (i32) = 0,
        03: disable_codegen (bool) = false,
        04: batch_size (i32) = 0,
        05: num_nodes (i32) = 1,
        06: max_scan_range_length (i64) = 0,
        07: num_scanner_threads (i32) = 0,
        08: max_io_buffers (i32) = 0,
        09: allow_unsupported_formats (bool) = false,
        10: default_order_by_limit (i64) = -1,
        11: debug_action (string) = "",
        12: mem_limit (i64) = 0,
        13: abort_on_default_limit_exceeded (bool) = false,
        15: hbase_caching (i32) = 0,
        16: hbase_cache_blocks (bool) = false,
        17: parquet_file_size (i64) = 0,
        18: explain_level (i32) = 1,
        19: sync_ddl (bool) = false,
        23: disable_cached_reads (bool) = false,
        24: disable_outermost_topn (bool) = false,
        25: rm_initial_mem (i64) = 0,
        26: query_timeout_s (i32) = 0,
        28: appx_count_distinct (bool) = false,
        29: disable_unsafe_spills (bool) = false,
        31: exec_single_node_rows_threshold (i32) = 100,
        32: optimize_partition_key_scans (bool) = false,
        34: schedule_random_replica (bool) = false,
        35: scan_node_codegen_threshold (i64) = 1800000,
        36: disable_streaming_preaggregations (bool) = false,
        37: runtime_filter_mode (i32) = 2,
        38: runtime_bloom_filter_size (i32) = 1048576,
        39: runtime_filter_wait_time_ms (i32) = 0,
        40: disable_row_runtime_filtering (bool) = false,
        41: max_num_runtime_filters (i32) = 10,
        42: parquet_annotate_strings_utf8 (bool) = false,
        43: parquet_fallback_schema_resolution (i32) = 0,
        44: mt_num_cores (i32) = 1,
        45: s3_skip_insert_staging (bool) = true,
        46: runtime_filter_min_size (i32) = 1048576,
        47: runtime_filter_max_size (i32) = 16777216,
        48: prefetch_mode (i32) = 1,
        49: strict_mode (bool) = false,
      }
      I0707 11:45:49.299737 55578 Frontend.java:872] analyze query select count(*) from store_sales
      I0707 11:45:49.301941 55578 AggregateInfo.java:181] agg info:
      AggregateInfo{grouping_exprs=, aggregate_exprs=FunctionCallExpr{name=count, isStar=true, isDistinct=false, }, intermediate_tuple=TupleDescriptor{id=1, name=agg-tuple-intermed, tbl=null, byte_size=0, is_materialized=true, slots=[SlotDescriptor{id=0, path=null, type=BIGINT, materialized=false, byteSize=0, byteOffset=-1, nullable=false, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0, stats=ColumnStats{avgSerializedSize_=8.0, maxSize_=8, numDistinct_=1, numNulls_=-1}}]}, output_tuple=TupleDescriptor{id=1, name=agg-tuple-intermed, tbl=null, byte_size=0, is_materialized=true, slots=[SlotDescriptor{id=0, path=null, type=BIGINT, materialized=false, byteSize=0, byteOffset=-1, nullable=false, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0, stats=ColumnStats{avgSerializedSize_=8.0, maxSize_=8, numDistinct_=1, numNulls_=-1}}]}}AggregateInfo{phase=FIRST, intermediate_smap=smap(count(*):count(*) (FunctionCallExpr{name=count, isStar=true, isDistinct=false, }:SlotRef{type=BIGINT, id=0})), output_smap=smap(count(*):count(*) (FunctionCallExpr{name=count, isStar=true, isDistinct=false, }:SlotRef{type=BIGINT, id=0}))}
      mergeAggInfo:
      AggregateInfo{grouping_exprs=, aggregate_exprs=FunctionCallExpr{name=count, isStar=false, isDistinct=false, SlotRef{type=BIGINT, id=0}}, intermediate_tuple=TupleDescriptor{id=1, name=agg-tuple-intermed, tbl=null, byte_size=0, is_materialized=true, slots=[SlotDescriptor{id=0, path=null, type=BIGINT, materialized=false, byteSize=0, byteOffset=-1, nullable=false, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0, stats=ColumnStats{avgSerializedSize_=8.0, maxSize_=8, numDistinct_=1, numNulls_=-1}}]}, output_tuple=TupleDescriptor{id=1, name=agg-tuple-intermed, tbl=null, byte_size=0, is_materialized=true, slots=[SlotDescriptor{id=0, path=null, type=BIGINT, materialized=false, byteSize=0, byteOffset=-1, nullable=false, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0, stats=ColumnStats{avgSerializedSize_=8.0, maxSize_=8, numDistinct_=1, numNulls_=-1}}]}}AggregateInfo{phase=FIRST_MERGE, intermediate_smap=smap(count(*):count(*) (FunctionCallExpr{name=count, isStar=true, isDistinct=false, }:SlotRef{type=BIGINT, id=0})), output_smap=smap(count(*):count(*) (FunctionCallExpr{name=count, isStar=true, isDistinct=false, }:SlotRef{type=BIGINT, id=0}))}
      I0707 11:45:49.303839 55578 SelectStmt.java:270] post-analysis AggregateInfo{grouping_exprs=, aggregate_exprs=FunctionCallExpr{name=count, isStar=true, isDistinct=false, }, intermediate_tuple=TupleDescriptor{id=1, name=agg-tuple-intermed, tbl=null, byte_size=0, is_materialized=true, slots=[SlotDescriptor{id=0, path=null, type=BIGINT, materialized=false, byteSize=0, byteOffset=-1, nullable=false, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0, stats=ColumnStats{avgSerializedSize_=8.0, maxSize_=8, numDistinct_=1, numNulls_=-1}}]}, output_tuple=TupleDescriptor{id=1, name=agg-tuple-intermed, tbl=null, byte_size=0, is_materialized=true, slots=[SlotDescriptor{id=0, path=null, type=BIGINT, materialized=false, byteSize=0, byteOffset=-1, nullable=false, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0, stats=ColumnStats{avgSerializedSize_=8.0, maxSize_=8, numDistinct_=1, numNulls_=-1}}]}}AggregateInfo{phase=FIRST, intermediate_smap=smap(count(*):count(*) (FunctionCallExpr{name=count, isStar=true, isDistinct=false, }:SlotRef{type=BIGINT, id=0})), output_smap=smap(count(*):count(*) (FunctionCallExpr{name=count, isStar=true, isDistinct=false, }:SlotRef{type=BIGINT, id=0}))}
      mergeAggInfo:
      AggregateInfo{grouping_exprs=, aggregate_exprs=FunctionCallExpr{name=count, isStar=false, isDistinct=false, SlotRef{type=BIGINT, id=0}}, intermediate_tuple=TupleDescriptor{id=1, name=agg-tuple-intermed, tbl=null, byte_size=0, is_materialized=true, slots=[SlotDescriptor{id=0, path=null, type=BIGINT, materialized=false, byteSize=0, byteOffset=-1, nullable=false, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0, stats=ColumnStats{avgSerializedSize_=8.0, maxSize_=8, numDistinct_=1, numNulls_=-1}}]}, output_tuple=TupleDescriptor{id=1, name=agg-tuple-intermed, tbl=null, byte_size=0, is_materialized=true, slots=[SlotDescriptor{id=0, path=null, type=BIGINT, materialized=false, byteSize=0, byteOffset=-1, nullable=false, nullIndicatorByte=0, nullIndicatorBit=0, slotIdx=0, stats=ColumnStats{avgSerializedSize_=8.0, maxSize_=8, numDistinct_=1, numNulls_=-1}}]}}AggregateInfo{phase=FIRST_MERGE, intermediate_smap=smap(count(*):count(*) (FunctionCallExpr{name=count, isStar=true, isDistinct=false, }:SlotRef{type=BIGINT, id=0})), output_smap=smap(count(*):count(*) (FunctionCallExpr{name=count, isStar=true, isDistinct=false, }:SlotRef{type=BIGINT, id=0}))}
      I0707 11:45:49.304040 55578 Frontend.java:950] create plan
      I0707 11:45:49.359992 55578 HdfsScanNode.java:363] collecting partitions for table store_sales
      I0707 11:45:49.362005 55578 HdfsScanNode.java:408] cardinality_=8639936081 sel=1.0
      I0707 11:45:49.362085 55578 HdfsScanNode.java:415] computeStats HdfsScan: cardinality_=8639936081
      I0707 11:45:49.373124 55578 HdfsScanNode.java:471] computeNumNodes totalRanges=2649 localRanges=2649 remoteRanges=0 localHostSet.size=20 clusterNodes=21
      I0707 11:45:49.373198 55578 HdfsScanNode.java:418] computeStats HdfsScan: #nodes=20
      I0707 11:45:49.373767 55578 Planner.java:127] desctbl: tuples:
      TupleDescriptor{id=0, name=BaseTableRef tpcds_3000_parquet.store_sales, tbl=tpcds_3000_parquet.store_sales, byte_size=0, is_materialized=true, slots=[]}
      TupleDescriptor{id=1, name=agg-tuple-intermed, tbl=null, byte_size=8, is_materialized=true, slots=[SlotDescriptor{id=0, path=null, type=BIGINT, materialized=true, byteSize=8, byteOffset=0, nullable=false, nullIndicatorByte=0, nullIndicatorBit=-1, slotIdx=0, stats=ColumnStats{avgSerializedSize_=8.0, maxSize_=8, numDistinct_=1, numNulls_=-1}}]}
      I0707 11:45:49.373859 55578 Planner.java:128] resultexprs: SlotRef{type=BIGINT, id=0}
      I0707 11:45:49.373906 55578 Planner.java:129] finalize plan fragments
      I0707 11:45:49.374631 55578 Frontend.java:993] get scan range locations
      I0707 11:45:49.374723 55578 Planner.java:328] Estimated per-host peak memory requirement: 0
      I0707 11:45:49.374819 55578 Planner.java:329] Estimated per-host virtual cores requirement: 0
      I0707 11:45:49.399799 55578 Frontend.java:1071] create result set metadata
      I0707 11:45:49.399878 55578 JniFrontend.java:147] Estimated Per-Host Requirements: Memory=0B VCores=0
      
      01:AGGREGATE [FINALIZE]
      |  output: count(*)
      |  hosts=20 per-host-mem=unavailable
      |  tuple-ids=1 row-size=8B cardinality=1
      |
      00:SCAN HDFS [tpcds_3000_parquet.store_sales]
         partitions=1824/1824 files=2649 size=376.50GB
         table stats: 8639936081 rows total
         column stats: all
         hosts=20 per-host-mem=unavailable
         tuple-ids=0 row-size=0B cardinality=8639936081
      I0707 11:45:49.512979 55578 RequestPoolService.java:274] resolveRequestPool(pool=, user=mmokhtar): resolved_pool=root.mmokhtar, has_access=true
      I0707 11:45:49.515537 55578 RequestPoolService.java:372] getPoolConfig(pool=root.mmokhtar): max_mem_resources=-1, max_requests=-1, max_queued=200,  queue_timeout_ms=0, default_query_options=
      I0707 11:45:49.515591 55578 admission-controller.cc:436] Schedule for id=d47662375dfe69a:894e5e9f4aeca6b4 in pool_name=root.mmokhtar cluster_mem_needed=4.00 GB PoolConfig: max_requests=-1 max_queued=200 max_mem=-1.00 B
      I0707 11:45:49.515609 55578 admission-controller.cc:441] Stats: agg_num_running=0, agg_num_queued=0, agg_mem_reserved=0,  local_host(local_mem_admitted=0, num_admitted_running=0, num_queued=0, backend_mem_reserved=0)
      I0707 11:45:49.515614 55578 admission-controller.cc:447] Admitted query id=d47662375dfe69a:894e5e9f4aeca6b4
      I0707 11:45:49.515626 55578 coordinator.cc:386] Exec() query_id=d47662375dfe69a:894e5e9f4aeca6b4
      I0707 11:45:49.527811 55578 plan-fragment-executor.cc:94] Prepare(): query_id=d47662375dfe69a:894e5e9f4aeca6b4 instance_id=d47662375dfe69a:894e5e9f4aeca6b5
      I0707 11:45:49.532526 55578 plan-fragment-executor.cc:211] descriptor table for fragment=d47662375dfe69a:894e5e9f4aeca6b5
      tuples:
      Tuple(id=0 size=0 slots=[] tuple_path=[])
      Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col_path=[] offset=0 null=(offset=0 mask=0) slot_idx=0 field_idx=-1)] tuple_path=[])
      I0707 11:45:49.724506 63282 plan-fragment-executor.cc:321] Open(): instance_id=d47662375dfe69a:894e5e9f4aeca6b5
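
      A minimal sketch of reproducing the same setup from impala-shell (the table name
      is taken from the plan above; using the TPC-DS 3000 parquet database as the test
      target is an assumption):

        -- Hypothetical reproduction: NUM_NODES=1 restricts execution to a single
        -- node (the coordinator) instead of distributing the scan across the cluster.
        SET NUM_NODES=1;
        SELECT COUNT(*) FROM tpcds_3000_parquet.store_sales;

        -- Restore the default distributed execution.
        SET NUM_NODES=0;

      With NUM_NODES=1 the 376.50GB scan above has to be executed by a single impalad,
      so the query can appear to hang while it is actually making very slow progress.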
      

      Attachments

        1. profile.txt
          5 kB
          Gustavo Varisco
        2. profile.txt
          5 kB
          Jim Apple


          People

            Assignee: kwho Michael Ho
            Reporter: mmokhtar Mostafa Mokhtar
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved: