IMPALA-6423

HDFS scanners do not get cancelled in a timely manner


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: Impala 2.11.0
    • Fix Version/s: Impala 2.12.0
    • Component/s: Backend
    • Labels: None

    Description

      The non-MT scanner (MT_DOP=0) does not check RuntimeState::is_cancelled() for cancellation. Instead, it checks ScannerContext::cancelled(), which only reflects HdfsScanNode::done(). As a result, the scanner cannot notice the cancellation in time, and its fragment instance keeps trying to send data until it hits the 2-minute datastream sender time-out.
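
      The gap can be sketched with a minimal model (the class and member names below are illustrative, not Impala's actual code): the scanner's cancellation check polls only the scan node's local done flag, so a query-wide cancellation set on the runtime state goes unnoticed until the node finishes on its own. The fix is to also observe the query-wide flag.

      ```cpp
      #include <atomic>
      #include <cassert>

      // Illustrative model of the two cancellation signals.
      struct RuntimeStateModel {
        std::atomic<bool> is_cancelled{false};  // query-wide cancellation
      };

      struct ScanNodeModel {
        std::atomic<bool> done{false};  // set only when the scan node itself finishes
        RuntimeStateModel* state = nullptr;

        // Pre-fix check: sees only the node-local flag, misses query cancellation.
        bool cancelled_old() const { return done.load(); }

        // Post-fix check: also observes the query-wide cancellation flag.
        bool cancelled_fixed() const {
          return done.load() || state->is_cancelled.load();
        }
      };

      int main() {
        RuntimeStateModel state;
        ScanNodeModel node;
        node.state = &state;

        // The query is cancelled, but the scan node is not "done" yet.
        state.is_cancelled.store(true);

        assert(!node.cancelled_old());   // old check: scanner would keep running
        assert(node.cancelled_fixed());  // fixed check: scanner stops promptly
        return 0;
      }
      ```

      With the old check, nothing in the scanner loop ever observes the cancelled query, which is why the sender runs all the way to the datastream time-out below.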

      This can be reproduced by running:

      bin/impala-shell.sh -q "set debug_action=1:PREPARE:FAIL; select * from functional_parquet.alltypes"
      

      Then tail -f the impalad log; you will find entries like:

      I0119 05:39:25.453212 14051 impala-server.cc:1165] ReportExecStatus(): Received report for unknown query ID (probably closed or cancelled): 714ff9fe107a7c25:a2344b1800000000
      I0119 05:39:25.456718 14054 impala-server.cc:1165] ReportExecStatus(): Received report for unknown query ID (probably closed or cancelled): 714ff9fe107a7c25:a2344b1800000000
      I0119 05:39:30.408521 13825 impala-server.cc:1165] ReportExecStatus(): Received report for unknown query ID (probably closed or cancelled): 714ff9fe107a7c25:a2344b1800000000
      I0119 05:39:30.408646 14022 query-state.cc:401] Cancel: query_id=714ff9fe107a7c25:a2344b1800000000
      I0119 05:39:35.409497 13825 impala-server.cc:1165] ReportExecStatus(): Received report for unknown query ID (probably closed or cancelled): 714ff9fe107a7c25:a2344b1800000000
      I0119 05:39:35.409601 14022 query-state.cc:401] Cancel: query_id=714ff9fe107a7c25:a2344b1800000000
      ......
      I0119 05:41:25.429842 13825 impala-server.cc:1165] ReportExecStatus(): Received report for unknown query ID (probably closed or cancelled): 714ff9fe107a7c25:a2344b1800000000
      I0119 05:41:25.429960 14022 query-state.cc:401] Cancel: query_id=714ff9fe107a7c25:a2344b1800000000
      

      and finally the time-out logs:

      I0119 05:41:25.445632 13811 data-stream-mgr.cc:130] Datastream sender timed-out waiting for recvr for fragment instance: 714ff9fe107a7c25:a2344b1800000000 (time-out was: 2m). Increase --datastream_sender_timeout_ms if you see this message frequently.
      I0119 05:41:25.445688 13811 data-stream-mgr.cc:191] DataStreamMgr::AddData(): Sender timed out waiting for receiver fragment instance: 714ff9fe107a7c25:a2344b1800000000, dest node: 1
      E0119 05:41:25.445838 14015 data-stream-sender.cc:278] channel send to iadhadoop-c39:22000 failed: Sender timed out waiting for receiver fragment instance: 714ff9fe107a7c25:a2344b1800000000, dest node: 1
      I0119 05:41:25.446475 13825 impala-server.cc:1165] ReportExecStatus(): Received report for unknown query ID (probably closed or cancelled): 714ff9fe107a7c25:a2344b1800000000
      I0119 05:41:25.446524 14015 query-state.cc:401] Cancel: query_id=714ff9fe107a7c25:a2344b1800000000
      I0119 05:41:25.447132 14015 query-state.cc:388] Instance completed. instance_id=714ff9fe107a7c25:a2344b1800000002 #in-flight=0 status=DATASTREAM_SENDER_TIMEOUT: Sender timed out waiting for receiver fragment instance: 714ff9fe107a7c25:a2344b1800000000, dest node: 1 
      I0119 05:41:25.447149 14015 query-state.cc:401] Cancel: query_id=714ff9fe107a7c25:a2344b1800000000
      I0119 05:41:25.447201 14015 query-exec-mgr.cc:149] ReleaseQueryState(): query_id=714ff9fe107a7c25:a2344b1800000000 refcnt=1
      I0119 05:41:25.448110 13843 data-stream-mgr.cc:130] Datastream sender timed-out waiting for recvr for fragment instance: 714ff9fe107a7c25:a2344b1800000000 (time-out was: 2m). Increase --datastream_sender_timeout_ms if you see this message frequently.
      I0119 05:41:25.448130 13843 data-stream-mgr.cc:191] DataStreamMgr::AddData(): Sender timed out waiting for receiver fragment instance: 714ff9fe107a7c25:a2344b1800000000, dest node: 1
      I0119 05:41:25.454107 13852 data-stream-mgr.cc:130] Datastream sender timed-out waiting for recvr for fragment instance: 714ff9fe107a7c25:a2344b1800000000 (time-out was: 2m). Increase --datastream_sender_timeout_ms if you see this message frequently.
      I0119 05:41:25.454149 13852 data-stream-mgr.cc:191] DataStreamMgr::AddData(): Sender timed out waiting for receiver fragment instance: 714ff9fe107a7c25:a2344b1800000000, dest node: 1
      

      This may not reproduce every time; a few attempts are usually enough.

      Attachments

        hdfs_scan_node_cancellation.log (2.45 MB, Gabor Kaszab)


      People

        Assignee: gaborkaszab Gabor Kaszab
        Reporter: stigahuang Quanlong Huang
