IMPALA-9842

TestValidateMetrics.test_metrics_are_zero fails with num-fragments-in-flight not reaching zero


    Description

      A couple of test runs have failed in TestValidateMetrics.test_metrics_are_zero with the following assertion: 

      AssertionError: Metric value impala-server.num-fragments-in-flight did not reach value 0 in 60s Dumping impalad debug pages: ...
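
      For context, the check behind that assertion is basically a polling loop against the impalad debug web server. A minimal sketch of that kind of check, assuming a /jsonmetrics?json endpoint on port 25000 that returns a flat metric-name-to-value JSON map (endpoint, port, and response shape are assumptions here, not the actual test framework code):

      import json
      import time
      import urllib.request

      def wait_for_metric_zero(metric_name, timeout_s=60, host="localhost", port=25000):
          # Poll the impalad debug web server until the metric reaches 0 or we time out.
          deadline = time.time() + timeout_s
          while time.time() < deadline:
              url = "http://%s:%d/jsonmetrics?json" % (host, port)  # endpoint is an assumption
              with urllib.request.urlopen(url) as resp:
                  metrics = json.loads(resp.read())
              if metrics.get(metric_name) == 0:
                  return
              time.sleep(1)
          raise AssertionError("Metric value %s did not reach value 0 in %ds"
                               % (metric_name, timeout_s))

      wait_for_metric_zero("impala-server.num-fragments-in-flight")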

      The /memz debug page shows: 

      Process: Limit=12.00 GB Total=2.93 GB Peak=138.77 GB
        JVM: max heap size: Total=1.78 GB
        JVM: non-heap committed: Total=211.75 MB
        Buffer Pool: Free Buffers: Total=1.90 MB
        Buffer Pool: Clean Pages: Total=0  
        Buffer Pool: Unused Reservation: Total=-2.00 MB
        Control Service Queue: Limit=50.00 MB Total=0 Peak=16.47 MB
        Data Stream Service Queue: Limit=614.40 MB Total=0 Peak=636.91 MB
        Data Stream Manager Early RPCs: Total=0 Peak=684.53 KB
        TCMalloc Overhead: Total=347.15 MB
        Spill-to-disk temporary compression buffers: Limit=512.00 MB Total=0 Peak=110.43 MB
        RequestPool=root.default: Total=4.02 MB Peak=136.23 GB
          Query(074a86ef88e16768:1a7bf9f400000000): Limit=300.00 MB Reservation=4.00 MB Reserva$
            Fragment 074a86ef88e16768:1a7bf9f400000000: Reservation=4.00 MB OtherMemory=16.00 K$
              HDFS_SCAN_NODE (id=0): Reservation=0 OtherMemory=0 Total=0 Peak=830.00 KB
              PLAN_ROOT_SINK: Reservation=4.00 MB ReservationLimit=100.00 MB OtherMemory=8.00 K$
        RequestPool=fe-eval-exprs: Total=0 Peak=256.00 MB
        RequestPool=root.no-limits: Total=0 Peak=188.88 MB
        Untracked Memory: Total=616.14 MB
      

      Query 074a86ef88e16768:1a7bf9f4 is: 

      I0608 19:09:08.410117 25628 Frontend.java:1498] 074a86ef88e16768:1a7bf9f400000000] Analyzing query: select * from functional.alltypes limit 10 db: default

      The interesting query options are debug_action and fetch_rows_timeout_ms. These narrow it down to tests/hs2/test_fetch_timeout.py (https://github.com/apache/impala/blob/master/tests/hs2/test_fetch_timeout.py#L207-L215); a rough repro sketch follows the snippet below:

        11: debug_action (string) = "CRS_BEFORE_COORD_STARTS:SLEEP@5000",
      ...
        94: fetch_rows_timeout_ms (i64) = 0,
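
      For reference, reproducing this outside the test should just be a matter of setting those two options on the same statement. A rough sketch, assuming impyla as the HS2 client and localhost:21050 as the HS2 endpoint (both assumptions; this is not the test's code):

      from impala.dbapi import connect  # impyla; client choice is an assumption

      conn = connect(host="localhost", port=21050)  # impalad HS2 port (assumption)
      cur = conn.cursor()
      # The two query options from the dump above.
      cur.execute("SET debug_action='CRS_BEFORE_COORD_STARTS:SLEEP@5000'")
      cur.execute("SET fetch_rows_timeout_ms=0")
      cur.execute("select * from functional.alltypes limit 10")
      print(cur.fetchall())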

      The query executes only at the coordinator: 

      I0608 19:09:08.430388 25637 scheduler.cc:549] 074a86ef88e16768:1a7bf9f400000000] Exec at coord is true
      I0608 19:09:08.430755 25637 admission-controller.cc:1296] 074a86ef88e16768:1a7bf9f400000000] Trying to admit id=074a86ef88e16768:1a7bf9f400000000 in pool_name=root.default executor_group_name=default per_host_mem_estimate=20.00 MB dedicated_coord_mem_estimate=120.00 MB max_requests=-1 (configured statically) max_queued=200 (configured statically) max_mem=29.30 GB (configured statically)
      I0608 19:09:08.430793 25637 admission-controller.cc:1308] 074a86ef88e16768:1a7bf9f400000000] Stats: agg_num_running=10, agg_num_queued=0, agg_mem_reserved=3.79 GB,  local_host(local_mem_admitted=4.67 GB, num_admitted_running=10, num_queued=0, backend_mem_reserved=2.24 GB)
      I0608 19:09:08.430811 25637 admission-controller.cc:894] 074a86ef88e16768:1a7bf9f400000000] Admitting query id=074a86ef88e16768:1a7bf9f400000000
      I0608 19:09:08.430881 25637 debug-util.cc:427] 074a86ef88e16768:1a7bf9f400000000] Debug Action: CRS_BEFORE_COORD_STARTS:SLEEP@5000 sleeping for 5000 ms
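
      As an aside, the debug action string follows a LABEL:ACTION@PARAM shape, which is what makes startup sleep for 5000 ms at CRS_BEFORE_COORD_STARTS. A toy illustration of that format (not Impala's actual debug-util.cc logic):

      import time

      def run_debug_action(spec, label):
          # Toy parser for the LABEL:ACTION@PARAM shape seen in the log above.
          target, _, action = spec.partition(":")
          if target != label:
              return
          name, _, param = action.partition("@")
          if name == "SLEEP":
              print("Debug Action: %s sleeping for %s ms" % (spec, param))
              time.sleep(int(param) / 1000.0)

      run_debug_action("CRS_BEFORE_COORD_STARTS:SLEEP@5000", "CRS_BEFORE_COORD_STARTS")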

       The query starts execution: 

      I0608 19:09:13.431097 25637 coordinator.cc:143] 074a86ef88e16768:1a7bf9f400000000] Exec() query_id=074a86ef88e16768:1a7bf9f400000000 stmt=select * from functional.alltypes limit 10
      I0608 19:09:13.431520 25637 coordinator.cc:463] 074a86ef88e16768:1a7bf9f400000000] starting execution on 1 backends for query_id=074a86ef88e16768:1a7bf9f400000000
      I0608 19:09:13.432674   114 control-service.cc:153] 074a86ef88e16768:1a7bf9f400000000] ExecQueryFInstances(): query_id=074a86ef88e16768:1a7bf9f400000000 coord=f0873f7637c8:22000 #instances=1
      ...
      I0608 19:09:13.433480 25637 coordinator.cc:520] 074a86ef88e16768:1a7bf9f400000000] started execution on 1 backends for query_id=074a86ef88e16768:1a7bf9f400000000
      I0608 19:09:13.434790 26254 query-state.cc:756] 074a86ef88e16768:1a7bf9f400000000] Executing instance. instance_id=074a86ef88e16768:1a7bf9f400000000 fragment_idx=0 per_fragment_instance_idx=0 coord_state_idx=0 #in-flight=11
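
      The #in-flight count in that log line presumably tracks the same thing the impala-server.num-fragments-in-flight metric reports. The sketch below is not Impala's implementation; it just illustrates the increment/decrement bookkeeping implied by the log, and why a decrement skipped on some cancel/teardown path would leave the metric stuck above zero:

      import threading

      class InFlightMetric:
          """Minimal sketch of an increment/decrement gauge; not Impala's metrics code."""
          def __init__(self):
              self._lock = threading.Lock()
              self._value = 0

          def increment(self):
              with self._lock:
                  self._value += 1
                  return self._value

          def decrement(self):
              with self._lock:
                  self._value -= 1
                  return self._value

      num_fragments_in_flight = InFlightMetric()

      def execute_fragment_instance(do_work, cancelled):
          # Bumped when the instance starts executing (the "#in-flight=N" log above).
          num_fragments_in_flight.increment()
          try:
              while not cancelled.is_set():
                  do_work()
          finally:
              # If this decrement is ever skipped (e.g. on an unusual cancel/teardown
              # path), the metric never returns to 0 and test_metrics_are_zero times out.
              num_fragments_in_flight.decrement()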

      The query is unregistered: 

      I0608 19:09:13.448494 25628 impala-server.cc:1173] UnregisterQuery(): query_id=074a86ef88e16768:1a7bf9f400000000
      I0608 19:09:13.448557 25628 coordinator.cc:644] ExecState: query id=074a86ef88e16768:1a7bf9f400000000 execution cancelled
      I0608 19:09:13.448580 25628 coordinator-backend-state.cc:943] query_id=074a86ef88e16768:1a7bf9f400000000 target backend=172.18.0.4:27000: Sending CancelQueryFInstances rpc
      I0608 19:09:13.448971   114 control-service.cc:223] CancelQueryFInstances(): query_id=074a86ef88e16768:1a7bf9f400000000
      I0608 19:09:13.448995   114 query-exec-mgr.cc:105] QueryState: query_id=074a86ef88e16768:1a7bf9f400000000 refcnt=4
      I0608 19:09:13.449007   114 query-state.cc:782] Cancel: query_id=074a86ef88e16768:1a7bf9f400000000
      I0608 19:09:13.449023   114 krpc-data-stream-mgr.cc:337] cancelling active streams for fragment_instance_id=074a86ef88e16768:1a7bf9f400000000
      I0608 19:09:13.449226 25628 coordinator.cc:889] CancelBackends() query_id=074a86ef88e16768:1a7bf9f400000000, tried to cancel 1 backends
      I0608 19:09:13.449291 25628 coordinator.cc:1225] Release admission control resources for query_id=074a86ef88e16768:1a7bf9f400000000

      After that, periodic reports keep coming in for 074a86ef88e16768:1a7bf9f400000000, all of which get a response that it is an invalid query. 

      The original query starts at 19:09:08, while the TestValidateMetrics.test_metrics_are_zero check for remaining fragments happens over an hour later, at 20:14:08.

      Interestingly, this has happened on different runs with the exact same query.

People

    Assignee: Joe McDonnell
    Reporter: Joe McDonnell