IMPALA-2531

partitioned-hash-join-node.cc:233] Check failed: null_probe_rows_ != __null


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: Impala 2.3.0
    • Fix Version/s: Impala 2.3.0
    • Component/s: None
    • Environment: impalad version 2.3.0-cdh5-INTERNAL DEBUG (build 58309c22c1b7e4c33cf0744829a658c3123da037)
      Built on Fri, 09 Oct 2015 22:55:42 PST

    Description

      A stress test run crashed after ~3 hours and ~6k queries with:

      impala-stress-kerberized-3.vpc.cloudera.com crashed:
      F1010 20:17:12.567666  1405 partitioned-hash-join-node.cc:233] Check failed: null_probe_rows_ != __null
      [...skipped...]
      #6  0x000000000205f92d in google::LogMessageFatal::~LogMessageFatal (this=0x7fa08c29cdb0, __in_chrg=<value optimized out>) at src/logging.cc:1836
      #7  0x000000000168412c in impala::PartitionedHashJoinNode::ClosePartitions (this=0x2660b7b00) at /usr/src/debug/impala-2.3.0-cdh5.5.0-SNAPSHOT/be/src/exec/partitioned-hash-join-node.cc:233
      #8  0x0000000001684210 in impala::PartitionedHashJoinNode::Close (this=0x2660b7b00, state=0x8a525200) at /usr/src/debug/impala-2.3.0-cdh5.5.0-SNAPSHOT/be/src/exec/partitioned-hash-join-node.cc:247
      #9  0x0000000001598a39 in impala::ExecNode::Close (this=0x128ff6300, state=0x8a525200) at /usr/src/debug/impala-2.3.0-cdh5.5.0-SNAPSHOT/be/src/exec/exec-node.cc:179
      #10 0x00000000016700e3 in impala::PartitionedAggregationNode::Close (this=0x128ff6300, state=0x8a525200) at /usr/src/debug/impala-2.3.0-cdh5.5.0-SNAPSHOT/be/src/exec/partitioned-aggregation-node.cc:433
      #11 0x0000000001562005 in impala::PlanFragmentExecutor::Close (this=0x9b18d428) at /usr/src/debug/impala-2.3.0-cdh5.5.0-SNAPSHOT/be/src/runtime/plan-fragment-executor.cc:573
      #12 0x00000000013330d6 in impala::FragmentMgr::FragmentExecState::Exec (this=0x9b18d200) at /usr/src/debug/impala-2.3.0-cdh5.5.0-SNAPSHOT/be/src/service/fragment-exec-state.cc:51
      #13 0x000000000132b6d8 in impala::FragmentMgr::FragmentExecThread (this=0x4fc4900, exec_state=0x9b18d200) at /usr/src/debug/impala-2.3.0-cdh5.5.0-SNAPSHOT/be/src/service/fragment-mgr.cc:83
      #14 0x000000000132f0c8 in boost::_mfi::mf1<void, impala::FragmentMgr, impala::FragmentMgr::FragmentExecState*>::operator() (this=0x2c50750a0, p=0x4fc4900, a1=0x9b18d200) at /opt/toolchain/boost-pic-1.55.0/include/boost/bind/mem_fn_template.hpp:165
      #15 0x000000000132ee81 in boost::_bi::list2<boost::_bi::value<impala::FragmentMgr*>, boost::_bi::value<impala::FragmentMgr::FragmentExecState*> >::operator()<boost::_mfi::mf1<void, impala::FragmentMgr, impala::FragmentMgr::FragmentExecState*>, boost::_bi::list0> (this=0x2c50750b0, f=..., a=...) at /opt/toolchain/boost-pic-1.55.0/include/boost/bind/bind.hpp:313
      #16 0x000000000132e769 in boost::_bi::bind_t<void, boost::_mfi::mf1<void, impala::FragmentMgr, impala::FragmentMgr::FragmentExecState*>, boost::_bi::list2<boost::_bi::value<impala::FragmentMgr*>, boost::_bi::value<impala::FragmentMgr::FragmentExecState*> > >::operator() (this=0x2c50750a0) at /opt/toolchain/boost-pic-1.55.0/include/boost/bind/bind_template.hpp:20
      

      The cluster was using Kerberos; it is not clear whether that is related.
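      For context, a minimal sketch of one plausible failure mode (class and member names here are illustrative stand-ins, not Impala's actual implementation): ClosePartitions() asserts that null_probe_rows_ is non-NULL, but if that stream is only allocated once the node opens, a query that tears down the node before Open() completes would hit the check. A null-tolerant Close() is one way such a crash can be avoided:

      ```cpp
      #include <cassert>
      #include <memory>

      // Hypothetical stand-in for a buffered row stream; not the real Impala class.
      struct RowStream {
        int num_rows = 0;
      };

      class HashJoinNode {
       public:
        // In this sketch, null_probe_rows_ is only allocated when the node opens.
        void Open() { null_probe_rows_ = std::make_unique<RowStream>(); }

        // The failing DCHECK (partitioned-hash-join-node.cc:233) assumed the
        // stream always exists at close time. Guarding instead of asserting
        // makes Close() safe even when Open() never ran.
        void Close() {
          if (null_probe_rows_ != nullptr) {
            null_probe_rows_.reset();
          }
          closed_ = true;
        }

        bool closed() const { return closed_; }

       private:
        std::unique_ptr<RowStream> null_probe_rows_;
        bool closed_ = false;
      };
      ```

      With a plain DCHECK, the "Close before Open" ordering (e.g. when a fragment fails during startup) aborts a debug build; the guarded version closes cleanly in both orderings.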

People

    Assignee: tarmstrong (Tim Armstrong)
    Reporter: caseyc (casey)
