IMPALA-5295: "Process: memory limit exceeded" in shell tests during asf-master-core-asan build


Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: Impala 2.9.0
    • Component/s: Backend
    • Environment: None
    • Labels: ghx-label-5

    Description

      I'm guessing that this might be related to the fix for IMPALA-5246, so I'm assigning to Michael Ho. Please reassign if appropriate.

      From the test log:

      22:58:48 ___________________ TestImpalaShell.test_refresh_on_connect ____________________
      22:58:48 shell/test_shell_commandline.py:128: in test_refresh_on_connect
      22:58:48     result = run_impala_shell_cmd(args)
      22:58:48 shell/util.py:95: in run_impala_shell_cmd
      22:58:48     result.stderr)
      22:58:48 E   AssertionError: Cmd -r -q "select 1" was expected to succeed: Starting Impala Shell without Kerberos authentication
      22:58:48 E   Connected to localhost:21000
      22:58:48 E   Server version: impalad version 2.9.0-SNAPSHOT DEBUG (build 25ba76287e36181c3fce81239763532f98fc9420)
      22:58:48 E   Invalidating Metadata
      22:58:48 E   Query: invalidate metadata
      22:58:48 E   Query submitted at: 2017-05-08 22:57:53 (Coordinator: http://impala-boost-static-burst-slave-1591.vpc.cloudera.com:25000)
      22:58:48 E   Query progress can be monitored at: http://impala-boost-static-burst-slave-1591.vpc.cloudera.com:25000/query_plan?query_id=1048cc8cf2b5782f:1c39c7e200000000
      22:58:48 E   Fetched 0 row(s) in 4.26s
      22:58:48 E   Query: select 1
      22:58:48 E   Query submitted at: 2017-05-08 22:57:57 (Coordinator: http://impala-boost-static-burst-slave-1591.vpc.cloudera.com:25000)
      22:58:48 E   ERROR: ExecPlanRequest rpc query_id=c846f88375486aaa:14043ffb00000000 instance_id=c846f88375486aaa:14043ffb00000000 failed: Memory limit exceeded: Query c846f88375486aaa:14043ffb00000000 could not start because the backend Impala daemon is over its memory limit
      22:58:48 E   Error occurred on backend impala-boost-static-burst-slave-1591.vpc.cloudera.com:22000
      22:58:48 E   Memory left in process limit: -89321034284.00 B
      22:58:48 E   Process: memory limit exceeded. Limit=17.89 GB Total=101.07 GB Peak=131.84 GB
      22:58:48 E     RequestPool=fe-eval-exprs: Total=0 Peak=20.00 MB
      22:58:48 E     RequestPool=default-pool: Total=101.07 GB Peak=131.84 GB
      22:58:48 E       Query(f441e77999e36533:2dd37c6600000000): memory limit exceeded. Limit=1.00 MB Total=101.07 GB Peak=101.07 GB
      22:58:48 E         Block Manager: Total=0 Peak=0
      22:58:48 E         Fragment f441e77999e36533:2dd37c6600000003: Total=101.07 GB Peak=101.07 GB
      22:58:48 E           AGGREGATION_NODE (id=1): Total=101.07 GB Peak=101.07 GB
      22:58:48 E             Exprs: Total=101.07 GB Peak=101.07 GB
      22:58:48 E           HDFS_SCAN_NODE (id=0): Total=24.00 KB Peak=61.00 KB
      22:58:48 E           DataStreamSender (dst_id=2): Total=14.22 KB Peak=14.22 KB
      22:58:48 E           CodeGen: Total=3.99 KB Peak=297.50 KB
      22:58:48 E       Query(c846f88375486aaa:14043ffb00000000): Total=0 Peak=0
      22:58:48 E   
      22:58:48 E   
      22:58:48 E   Could not execute command: select 1
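
      Worth noting in the dump above: the query that actually fails to start (c846f88375486aaa:14043ffb00000000) shows Total=0 Peak=0; it is rejected purely because the process-level tracker is already far over its 17.89 GB limit, courtesy of the earlier query f441e77999e36533. A minimal standalone sketch of hierarchical limit checking (not Impala's MemTracker implementation; all class and method names below are illustrative) shows how an over-limit parent rejects a child that has consumed nothing itself:

      #include <cstdint>
      #include <iostream>
      #include <string>
      #include <utility>

      // Illustrative stand-in for a hierarchical memory tracker (not Impala's
      // MemTracker): each tracker has an optional limit and a parent, consumption
      // is charged up the chain, and a reservation fails if any tracker in the
      // chain is (or would go) over its limit.
      class ToyTracker {
       public:
        ToyTracker(std::string label, int64_t limit, ToyTracker* parent = nullptr)
            : label_(std::move(label)), limit_(limit), parent_(parent) {}

        // Charge bytes unconditionally; models allocations that are only reported
        // to the tracker after the fact (the TrackAllocation pattern).
        void ConsumeUnchecked(int64_t bytes) {
          for (ToyTracker* t = this; t != nullptr; t = t->parent_) t->consumed_ += bytes;
        }

        // Try to charge bytes; refuse if this tracker or any ancestor would exceed
        // its limit. A limit of -1 means "no limit at this level".
        bool TryConsume(int64_t bytes) {
          for (ToyTracker* t = this; t != nullptr; t = t->parent_) {
            if (t->limit_ >= 0 && t->consumed_ + bytes > t->limit_) {
              std::cout << t->label_ << ": memory limit exceeded. Limit=" << t->limit_
                        << " Consumed=" << t->consumed_ << "\n";
              return false;
            }
          }
          ConsumeUnchecked(bytes);
          return true;
        }

       private:
        std::string label_;
        int64_t limit_;  // -1 means unlimited
        int64_t consumed_ = 0;
        ToyTracker* parent_;
      };

      int main() {
        const int64_t kGiB = 1LL << 30;
        // Roughly the shape of the dump above: a ~17.89 GB process limit blown past
        // ~101 GB by one query whose own 1 MB limit was only enforced after the fact.
        ToyTracker process("Process", static_cast<int64_t>(17.89 * kGiB));
        ToyTracker runaway("Query(f441e...)", 1 << 20, &process);
        runaway.ConsumeUnchecked(101 * kGiB);

        // A brand-new query with zero consumption of its own is still rejected,
        // because the process-level check fails before it can reserve anything.
        ToyTracker newcomer("Query(c846f...)", /*limit=*/-1, &process);
        if (!newcomer.TryConsume(10 << 20)) {
          std::cout << "Query could not start: backend is over its process memory limit\n";
        }
        return 0;
      }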
      

      From the coordinator log:

      I0508 22:57:57.292851 14523 Frontend.java:892] Compiling query: select 1
      I0508 22:57:57.293179 14523 Frontend.java:929] Compiled query.
      I0508 22:57:57.295092 14523 admission-controller.cc:442] Schedule for id=c846f88375486aaa:14043ffb00000000 in pool_name=default-pool cluster_mem_needed=10.00 MB PoolConfig: max_requests=-1 max_queued=200 max_mem=-1.00 B
      I0508 22:57:57.295171 14523 admission-controller.cc:447] Stats: agg_num_running=0, agg_num_queued=0, agg_mem_reserved=4.00 MB,  local_host(local_mem_admitted=0, num_admitted_running=0, num_queued=0, backend_mem_reserved=1.00 MB)
      I0508 22:57:57.296596 14523 admission-controller.cc:453] Admitted query id=c846f88375486aaa:14043ffb00000000
      I0508 22:57:57.296725 14523 coordinator.cc:438] Exec() query_id=c846f88375486aaa:14043ffb00000000 stmt=select 1
      I0508 22:57:57.296988 14523 query-exec-mgr.cc:95] new QueryState: query_id=c846f88375486aaa:14043ffb00000000
      I0508 22:57:57.297026 14523 query-exec-mgr.cc:105] QueryState: query_id=c846f88375486aaa:14043ffb00000000 refcnt=1
      I0508 22:57:57.297200 14523 coordinator.cc:578] starting 1 fragment instances for query c846f88375486aaa:14043ffb00000000
      I0508 22:57:57.297942 18005 impala-internal-service.cc:44] ExecPlanFragment(): instance_id=c846f88375486aaa:14043ffb00000000
      I0508 22:57:57.298023 18005 query-exec-mgr.cc:46] StartFInstance() instance_id=c846f88375486aaa:14043ffb00000000 coord=impala-boost-static-burst-slave-1591.vpc.cloudera.com:22000
      I0508 22:57:57.298063 18005 query-exec-mgr.cc:105] QueryState: query_id=c846f88375486aaa:14043ffb00000000 refcnt=2
      I0508 22:57:57.071419 15541 status.cc:52] Memory limit exceeded: FunctionContext::TrackAllocation's allocations exceeded memory limits.
      Error occurred on backend impala-boost-static-burst-slave-1591.vpc.cloudera.com:22000 by fragment f441e77999e36533:2dd37c6600000003
      Memory left in process limit: -89321034284.00 B
      Memory left in query limit: -108526648534.00 B
      Query(f441e77999e36533:2dd37c6600000000): memory limit exceeded. Limit=1.00 MB Total=101.07 GB Peak=101.07 GB
        Block Manager: Total=0 Peak=0
        Fragment f441e77999e36533:2dd37c6600000003: Total=101.07 GB Peak=101.07 GB
          AGGREGATION_NODE (id=1): Total=101.07 GB Peak=101.07 GB
            Exprs: Total=101.07 GB Peak=101.07 GB
          HDFS_SCAN_NODE (id=0): Total=24.00 KB Peak=61.00 KB
          DataStreamSender (dst_id=2): Total=14.22 KB Peak=14.22 KB
          CodeGen: Total=3.99 KB Peak=297.50 KB
          @          0x1609d37  impala::GetStackTrace()
          @          0x10b599f  impala::Status::Status()
          @          0x10b54fb  impala::Status::MemLimitExceeded()
          @          0x131993d  impala::MemTracker::MemLimitExceeded()
          @          0x132b081  impala::RuntimeState::SetMemLimitExceeded()
          @          0x1d4f774  impala::FunctionContextImpl::CheckMemLimit()
          @     0x7f0066c99a03  MemTestUpdate()
          @     0x7f0069f61f00  (unknown)
          @          0x18d2985  impala::PartitionedAggregationNode::Open()
          @          0x1d1ad50  impala::PlanFragmentExecutor::OpenInternal()
          @          0x1d1a6c3  impala::PlanFragmentExecutor::Open()
          @          0x1d137b8  impala::FragmentInstanceState::Exec()
          @          0x1d21c80  impala::QueryExecMgr::ExecFInstance()
          @          0x1d23508  boost::_bi::bind_t<>::operator()()
          @          0x12f7b33  boost::function0<>::operator()()
          @          0x16cc3b6  impala::Thread::SuperviseThread()
          @          0x16d648b  boost::_bi::list4<>::operator()<>()
          @          0x16d6318  boost::_bi::bind_t<>::operator()()
          @          0x1dac4da  thread_proxy
          @       0x314e807851  (unknown)
          @       0x314e4e894d  (unknown)
      I0508 22:57:57.321614 15541 runtime-state.cc:197] Error from query f441e77999e36533:2dd37c6600000000: Memory limit exceeded: FunctionContext::TrackAllocation's allocations exceeded memory limits.
      Error occurred on backend impala-boost-static-burst-slave-1591.vpc.cloudera.com:22000 by fragment f441e77999e36533:2dd37c6600000003
      Memory left in process limit: -89321034284.00 B
      Memory left in query limit: -108526648534.00 B
      Query(f441e77999e36533:2dd37c6600000000): memory limit exceeded. Limit=1.00 MB Total=101.07 GB Peak=101.07 GB
        Block Manager: Total=0 Peak=0
        Fragment f441e77999e36533:2dd37c6600000003: Total=101.07 GB Peak=101.07 GB
          AGGREGATION_NODE (id=1): Total=101.07 GB Peak=101.07 GB
            Exprs: Total=101.07 GB Peak=101.07 GB
          HDFS_SCAN_NODE (id=0): Total=24.00 KB Peak=61.00 KB
          DataStreamSender (dst_id=2): Total=14.22 KB Peak=14.22 KB
          CodeGen: Total=3.99 KB Peak=297.50 KB
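
      The stack points at MemTestUpdate, evidently a test aggregate function that charges memory to the query via FunctionContext::TrackAllocation (per the error text above), erroring out in FunctionContextImpl::CheckMemLimit: the check only fires after the query tracker has already been pushed far past its 1 MB limit. A hedged sketch of that update-function pattern written against the public UDF SDK header; this is not the actual MemTest source, and MemStressUpdate is an illustrative name:

      // A sketch of an update function in the style of the MemTest aggregate
      // above: it holds no real buffers, it only reports "bytes" of usage to the
      // framework, which charges the query's memory tracker. MemStressUpdate is a
      // made-up name; TrackAllocation is the SDK accounting call named in the
      // error text above.
      #include <impala_udf/udf.h>

      using namespace impala_udf;

      void MemStressUpdate(FunctionContext* ctx, const BigIntVal& bytes, BigIntVal* total) {
        if (bytes.is_null) return;
        // Charge the query-level tracker without actually allocating. If this (or
        // an ancestor such as the process tracker) goes over its limit, the
        // framework raises "Memory limit exceeded" and fails the fragment, which
        // matches the path in the stack trace: MemTestUpdate ->
        // FunctionContextImpl::CheckMemLimit -> RuntimeState::SetMemLimitExceeded.
        ctx->TrackAllocation(bytes.val);
        if (total->is_null) {
          total->is_null = false;
          total->val = 0;
        }
        total->val += bytes.val;
      }

      // A matching cleanup step would release the same number of tracked bytes
      // via the SDK's corresponding Free(int64_t) call once the aggregation is done.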
      


            People

              Assignee: Michael Ho (kwho)
              Reporter: David Knupp (dknupp)
