IMPALA / IMPALA-5234

Get rid of redundant LogError() messages


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Later
    • Affects Version/s: Impala 2.8.0
    • Fix Version/s: None
    • Component/s: Backend

    Description

      In a few places in the codebase, redundant LogError() calls both add an error status to the error_log and return that same status up the call stack. Because the caller also reports the returned status, the client receives the same error message twice. We need to find all such call sites and remove the redundant LogError() calls.
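      A minimal sketch of the anti-pattern, using simplified stand-ins for Impala's Status and RuntimeState (the names mirror the real classes, but this is illustrative code, not the actual implementation). The "bad" function logs the error and also returns it, so when the top-level handler logs the returned status, the message lands in the error_log twice:

      ```cpp
      #include <cassert>
      #include <string>
      #include <vector>

      // Simplified stand-in for impala::Status: empty message means OK.
      struct Status {
        std::string msg;
        bool ok() const { return msg.empty(); }
      };

      // Simplified stand-in for impala::RuntimeState with its error_log.
      struct RuntimeState {
        std::vector<std::string> error_log;
        void LogError(const Status& s) { error_log.push_back(s.msg); }
      };

      // Anti-pattern: logs the error AND returns it up the call stack.
      Status AllocateBufferBad(RuntimeState* state) {
        Status s{"Memory limit exceeded"};
        state->LogError(s);  // redundant: the same status is also returned
        return s;
      }

      // Fix: only propagate the status; leave logging to the top-level caller.
      Status AllocateBufferGood(RuntimeState*) {
        return Status{"Memory limit exceeded"};
      }

      int main() {
        RuntimeState bad;
        Status s = AllocateBufferBad(&bad);
        if (!s.ok()) bad.LogError(s);       // top-level handler logs the returned status
        assert(bad.error_log.size() == 2);  // client sees the message twice

        RuntimeState good;
        s = AllocateBufferGood(&good);
        if (!s.ok()) good.LogError(s);
        assert(good.error_log.size() == 1); // logged exactly once
        return 0;
      }
      ```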

      Repro:

      set mem_limit=1m;
      select * from tpch.lineitem;

      Output:

      [localhost:21000] > select * from tpch.lineitem;
      Query: select * from tpch.lineitem
      Query submitted at: 2017-04-20 12:04:22 (Coordinator: http://localhost:25000)
      Query progress can be monitored at: http://localhost:25000/query_plan?query_id=6048492f67282f78:ef0f2bd400000000
      WARNINGS: Memory limit exceeded: Failed to allocate tuple buffer
      HDFS_SCAN_NODE (id=0) could not allocate 190.00 KB without exceeding limit.
      Error occurred on backend localhost:22000 by fragment 6048492f67282f78:ef0f2bd400000003
      Memory left in process limit: 8.24 GB
      Memory left in query limit: -7369392.00 B
      Query(6048492f67282f78:ef0f2bd400000000): memory limit exceeded. Limit=1.00 MB Total=8.03 MB Peak=8.03 MB
        Fragment 6048492f67282f78:ef0f2bd400000000: Total=8.00 KB Peak=8.00 KB
          EXCHANGE_NODE (id=1): Total=0 Peak=0
          DataStreamRecvr: Total=0 Peak=0
          PLAN_ROOT_SINK: Total=0 Peak=0
          CodeGen: Total=0 Peak=0
        Block Manager: Total=0 Peak=0
        Fragment 6048492f67282f78:ef0f2bd400000003: Total=8.02 MB Peak=8.02 MB
          HDFS_SCAN_NODE (id=0): Total=8.01 MB Peak=8.01 MB
          DataStreamSender (dst_id=1): Total=688.00 B Peak=688.00 B
          CodeGen: Total=0 Peak=0
      
      
      
      Memory limit exceeded: Failed to allocate tuple buffer
      HDFS_SCAN_NODE (id=0) could not allocate 190.00 KB without exceeding limit.
      Error occurred on backend localhost:22000 by fragment 6048492f67282f78:ef0f2bd400000003
      Memory left in process limit: 8.24 GB
      Memory left in query limit: -7369392.00 B
      Query(6048492f67282f78:ef0f2bd400000000): memory limit exceeded. Limit=1.00 MB Total=8.03 MB Peak=8.03 MB
        Fragment 6048492f67282f78:ef0f2bd400000000: Total=8.00 KB Peak=8.00 KB
          EXCHANGE_NODE (id=1): Total=0 Peak=0
          DataStreamRecvr: Total=0 Peak=0
          PLAN_ROOT_SINK: Total=0 Peak=0
          CodeGen: Total=0 Peak=0
        Block Manager: Total=0 Peak=0
        Fragment 6048492f67282f78:ef0f2bd400000003: Total=8.02 MB Peak=8.02 MB
          HDFS_SCAN_NODE (id=0): Total=8.01 MB Peak=8.01 MB
          DataStreamSender (dst_id=1): Total=688.00 B Peak=688.00 B
          CodeGen: Total=0 Peak=0
      
      

      This can be traced back to:
      https://github.com/apache/incubator-impala/blob/a50c344077f6c9bbea3d3cbaa2e9146ba20ac9a9/be/src/runtime/row-batch.cc#L462
      https://github.com/apache/incubator-impala/blob/master/be/src/runtime/mem-tracker.cc#L319-L320

      There are more such examples that need to be taken care of too.
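      The fix at each call site is to propagate the status without logging it locally, as Impala's RETURN_IF_ERROR macro already does, and let a single top-level handler own the logging. A self-contained sketch with a simplified macro (the real one lives in be/src/common/status.h; the Status/RuntimeState types here are illustrative stand-ins):

      ```cpp
      #include <cassert>
      #include <string>
      #include <vector>

      // Simplified stand-ins for impala::Status and impala::RuntimeState.
      struct Status {
        std::string msg;
        bool ok() const { return msg.empty(); }
      };

      struct RuntimeState {
        std::vector<std::string> error_log;
        void LogError(const Status& s) { error_log.push_back(s.msg); }
      };

      // Propagate a non-OK status without logging it; logging happens once,
      // at the top of the call stack.
      #define RETURN_IF_ERROR(stmt)      \
        do {                             \
          Status _s = (stmt);            \
          if (!_s.ok()) return _s;       \
        } while (false)

      Status ScanRow() { return Status{"Memory limit exceeded"}; }

      Status ScanNodeGetNext(RuntimeState*) {
        RETURN_IF_ERROR(ScanRow());  // forwards the error, no local LogError()
        return Status{};
      }

      int main() {
        RuntimeState state;
        Status s = ScanNodeGetNext(&state);
        if (!s.ok()) state.LogError(s);       // the single place that logs
        assert(state.error_log.size() == 1);  // message reaches the client once
        assert(state.error_log[0] == "Memory limit exceeded");
        return 0;
      }
      ```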


          People

            Assignee: Unassigned
            Reporter: Sailesh Mukil
            Votes: 0
            Watchers: 2

