ActiveMQ C++ Client
AMQCPP-199

Segmentation fault at decaf/net/SocketInputStream.cpp (line 108)

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version: 2.2.1
    • Fix Version: 2.2.2
    • Components: Decaf, Openwire
    • Labels: None
    • Environment: RHEL 5.2 (32 bits), apr-1.3.3, apr-util-1.3.4, ActiveMQ 5.1, gcc 4.1.2-42 (20071124), Java(TM) SE Runtime Environment (build 1.6.0_06-b02), pyactivemq-0.1.0rc1

    Description

      We're getting occasional segmentation faults in our Python-based application.
      The easiest way to reproduce it is by running pyactivemq's stress tests (src/tests/stresstest.py).
      The offending line always seems to be the same.
      A full gdb postmortem backtrace is attached.
      Any help will be appreciated!

      Attachments

        1. ASF.LICENSE.NOT.GRANTED--stress.out
          7 kB
          Alexander Martens
        2. python.stress.log.gz
          4 kB
          Alexander Martens
        3. stress-again.tar.gz
          118 kB
          Alexander Martens
        4. stress-fc8.log.gz
          2 kB
          Alexander Martens
        5. stress-segfault-1.log.bz2
          4 kB
          Alexander Martens
        6. stress-waits.tar.gz
          15 kB
          Alexander Martens

        Activity

          Hello,

          I've also seen a segfault once while running the pyactivemq stress tests, which was caused by the broker (AMQ 5.1.0) running out of memory for some unknown reason.

          I'll see if I can reproduce this issue on my side, but it would be useful to know whether it always happens with a specific part of the stress test.

          Cheers,

          Albert

          fullung Albert Strasheim added a comment

          Hi,

          I suspect this is related to the client, not the broker, which is giving no errors or warnings at all. I'm not experiencing any memory leaks.

          Maybe this has something to do with the fact that I am unable to get decaf to pass its unit tests (take a look at AMQCPP-200).

          I'd better wait for AMQCPP-200 to be confirmed/solved before I continue debugging at the boost/pyactivemq level...

          Thanks for your help, Albert!

          alex.martens Alexander Martens added a comment

          Hello,

          Albert, Timothy, since I was getting unit test errors on RHEL5.2, I've switched to a FC8 to continue stress testing.

          This time libactivemq-cpp passes its unit tests.

          Stress testing still causes segmentation faults (this time when closing, in class destructors). You were right, it looks as if ActiveMQ is leaking handles (threads, maybe?). I have no easy way to know whether the problem is on the server side, the client side, the chair-keyboard interface (probably), or all of them.

          Concerning the new test environment:

          ActiveMQ 5.1.0 (default configuration) and jre1.6.0_03
          FC8, gcc 4.1.2, apr-1.3.3 (unit tests passed), apr-util-1.3.4 (unit tests passed), activemq-cpp-2.2.1, pyactivemq, boost 1_34

          See stress-fc8.log.gz for details.

          alex.martens Alexander Martens added a comment

          Attached: gdb output of "info threads", "bt" and "bt full".

          Once the stress test has run the server out of handles, it always crashes at the same point.

          alex.martens Alexander Martens added a comment

          Hello,

          I think there is a good chance that this is the same crash I've been seeing (I haven't been able to produce a core dump yet).

          I'm running FC9 x64 here with apr-1.2.12 and AMQCPP 2.2.1.

          Cheers,

          Albert

          fullung Albert Strasheim added a comment

          Hallo Albert!

          I suppose you're closely following AMQCPP-200 (if not, I'd encourage you to take a look at it). IMHO, these two issues are probably going to have the same solution - if they aren't in fact the same issue.

          If you need help getting a core dump on SuSE (I don't think you will, but just in case), don't hesitate to drop me a mail (in German, or even Platt, if you want to).

          Best,
          Alex

          alex.martens Alexander Martens added a comment

          I would be interested to know if anyone has tried this test again with the latest code in trunk, given that a number of fixes have been put in to address problems in AMQCPP-200 and its subtasks.

          tabish Timothy A. Bish added a comment

          I can do that!
          (Setting up another machine)

          alex.martens Alexander Martens added a comment

          Hi!

          As noted in AMQCPP-202, the run got locked on its 75th iteration.
          I see no thread duplication this time, so this could be a different story.

          I'll recompile APR in debug mode (I know I should have done this before... sorry, I was too tired last night) and try again, since this one seems easier to reproduce.

          alex.martens Alexander Martens added a comment

          This one looks more like a normal wait, as in the case where the broker has been exhausted of resources, like we were seeing before. I assume you aren't restarting the broker here, since the Python stress tests run themselves in a loop instead of having a script do it for you.

          I ran these last night and it ran fine each time until the broker ran out of memory.

          tabish Timothy A. Bish added a comment

          You got me!

          Yes, I was running a 5.3 snapshot and I forgot to disable persistence at the broker. Sorry again.
          I'll schedule another run for tonight with a dozen stress tests (each one with its own 100 tries) and a server restart in between.
          Following AMQCPP-202, I'll revert to APR 1.3.4 in debug mode, and 5.1 for the server.

          I find it pretty significant that it didn't crash, though.

          Will update this thread tomorrow.

          alex.martens Alexander Martens added a comment

          Hi.

          Something went wrong - I'm not sure what, though. It all started with a failing stomp test, and after that almost everything else failed, even closing the server. My previous test with apr (trunk) and amq (trunk) worked fine, though.
          I'm about to comment out all stomp tests, stick to apr 1.3, and rerun with the server's trunk version.

          As long as we stick to Openwire, it should work...

          alex.martens Alexander Martens added a comment

          Got this one with apr-1.3 (apr-util-1.3.4), no stomp and AMQ 5.3-SNAPSHOT:

          Core was generated by `python stresstests.py'.
          Program terminated with signal 11, Segmentation fault.
          #0 0x010a8f9a in activemq::connector::openwire::OpenWireConnector::closeResource (this=0x91f3110, resource=0x9223f58)
          at activemq/connector/openwire/OpenWireConnector.cpp:1271
          1271 dataStructure = producer->getProducerInfo()->getProducerId();

          For some reason producer->producerInfo is NULL (see the prints at the end of the file).
          A reference count problem makes no sense to me at this stage (it should have shown up before)... odd.

          Got the core file, so I can reload if you need more prints.

          alex.martens Alexander Martens added a comment

          For what it's worth, I think the probability of there being things like refcount bugs in the Python wrapper is quite small. It's all very standard stuff. I don't think there's any refcounting code that I wrote in there. So if there is a bug, it would probably be in Boost.Python, which is very widely used, so hopefully quite well exercised.

          fullung Albert Strasheim added a comment

          Nice to read from you again, Albert!

          In my current test, I've reduced the number of runs down to 50, taken out all stomp tests, and I'm restarting the server every time "stresstests.py" exits.
          With "top" I can see the virtual memory assigned to java grow (it doesn't shrink until server restart), and it looks like every new test speeds this memory consumption up.
          Is it possible that the way the client closes down causes a memory leak on the server?

          Specifically, the "fast" part of the test - between test_Connection (test_openwire_sync.test_openwire_sync) and test_version (test_types.test_pyactivemq) - seems to be the most memory hungry. I don't remember this behavior with apr (trunk), but I could simply have missed it.

          This might not be related to the segmentation fault, but it is still concerning.

          alex.martens Alexander Martens added a comment

          I've been staring at this stack trace for a bit, and I think I see what happened: it looks like the syncRequest failed when creating the producer, but we didn't put the producerInfo Command into the Resource Wrapper until after that call, so when we close the resource object it doesn't have an info object to destroy.

          tabish Timothy A. Bish added a comment
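
          For illustration, here is a minimal sketch of the hardening Timothy describes. This is not the actual AMQCPP patch, and all type and function names below are hypothetical stand-ins for the real OpenWireConnector classes. The two ideas are: hand the info object to the resource wrapper before the network round trip, and make the close path tolerate a resource whose create never completed.

            // Illustrative sketch only, pre-C++11 style to match the era.
            #include <memory>
            #include <cstddef>

            struct ProducerId {};
            struct ProducerInfo { ProducerId id; };

            class ProducerResource {
            public:
                // Take ownership of the info up front, so a failed syncRequest
                // still leaves a fully formed resource to tear down later.
                explicit ProducerResource(std::auto_ptr<ProducerInfo> info)
                    : info(info) {}

                const ProducerInfo* getProducerInfo() const { return info.get(); }

            private:
                std::auto_ptr<ProducerInfo> info;
            };

            void closeResource(ProducerResource* resource) {
                if (resource == NULL) {
                    return;
                }
                // The crash at OpenWireConnector.cpp:1271 dereferenced the info
                // object unconditionally; the defensive fix is a NULL guard.
                const ProducerInfo* info = resource->getProducerInfo();
                if (info == NULL) {
                    return; // nothing was ever registered with the broker
                }
                // ... otherwise send a remove command for info->id ...
            }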

          Fix applied.

          tabish Timothy A. Bish added a comment

          Nice!

          Updating, rebuilding, rerunning...

          Thanks again, Timothy!

          alex.martens Alexander Martens added a comment

          After successfully running the first loop with its 100 passes, stopping and restarting the server, it got locked during the second loop, at the 29th pass, inside test_sessions_with_message_listeners (test_openwire_async.test_openwire_async).

          I'm about to retry with 50 passes per loop tonight, as gdb shows nothing interesting (it just seems to be waiting for a message to come).

          No segmentation faults, though.

          alex.martens Alexander Martens added a comment
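
          When a run can hang like this, bounding the consumer wait turns a stuck process into a loud test failure. Below is a minimal CMS sketch of that idea - a sketch under stated assumptions, not anyone's actual test code: it assumes a broker at tcp://localhost:61616, a placeholder queue name TEST.STRESS, and the receive(timeout) overload of cms::MessageConsumer.

            #include <activemq/core/ActiveMQConnectionFactory.h>
            #include <cms/Connection.h>
            #include <cms/Session.h>
            #include <cms/Destination.h>
            #include <cms/MessageConsumer.h>
            #include <cms/Message.h>
            #include <cms/CMSException.h>
            #include <iostream>
            #include <memory>

            int main() {
                try {
                    activemq::core::ActiveMQConnectionFactory factory("tcp://localhost:61616");
                    std::auto_ptr<cms::Connection> connection(factory.createConnection());
                    connection->start();

                    std::auto_ptr<cms::Session> session(
                        connection->createSession(cms::Session::AUTO_ACKNOWLEDGE));
                    std::auto_ptr<cms::Destination> queue(session->createQueue("TEST.STRESS"));
                    std::auto_ptr<cms::MessageConsumer> consumer(
                        session->createConsumer(queue.get()));

                    // Wait at most five seconds instead of blocking forever; a
                    // NULL return means the run is stuck and should fail loudly.
                    std::auto_ptr<cms::Message> message(consumer->receive(5000));
                    if (message.get() == NULL) {
                        std::cerr << "no message within 5s - aborting this pass" << std::endl;
                    }

                    connection->close();
                } catch (cms::CMSException& e) {
                    std::cerr << e.getMessage() << std::endl;
                }
                return 0;
            }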

          Albert, Timothy

          It blocked again, even with a server restart every 50 loops - it completed the cycle twice and got stuck around loop 39.

          I'm looking forward to closing this issue, as its subject hasn't been reproduced after the changes made during the past weeks, but the point is that I cannot keep the stress test running for long (less than a couple of hours).

          What I've also noticed is that it seems to block at the same test (test_openwire_async), so I'm giving it another try without it, like we did with the CMSTemplate.

          Anyway, it would be OK with me to close this thread if you agree that the attached log files have little or nothing to do with the original segmentation fault that opened this issue.

          alex.martens Alexander Martens added a comment - edited

          I think it's safe to close this one, as it relates more to the segfault that we have fixed.

          It looks to me like there's a deadlock in pyactivemq or in the test: thread two here is trying to get a lock on something in Python to deliver a message, while thread one is closing a consumer via what looks like a boost pointer going out of scope. Note that the session executor attempting to deliver a message is the one owned by the session that is being closed.

          Thread 2 (Thread 1100897168 (LWP 14973)):
          #0  0x40000402 in __kernel_vsyscall ()
          #1  0x0062725e in sem_wait@GLIBC_2.0 () from /lib/libpthread.so.0
          #2  0x009a6f3b in PyThread_acquire_lock () from /usr/lib/libpython2.4.so.1.0
          #3  0x0097e755 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0
          #4  0x00983c76 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0
          #5  0x00935cba in PyClassMethod_New () from /usr/lib/libpython2.4.so.1.0
          #6  0x0091dd87 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0
          #7  0x00924388 in PyClass_IsSubclass () from /usr/lib/libpython2.4.so.1.0
          #8  0x0091dd87 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0
          #9  0x0097d49c in PyEval_CallObjectWithKeywords () from /usr/lib/libpython2.4.so.1.0
          #10 0x0099ebce in PyEval_CallFunction () from /usr/lib/libpython2.4.so.1.0
          #11 0x4047d430 in boost::python::call<void, boost::python::handle<_object> > (callable=0x402cf2fc, a0=@0x419e509c) at /usr/include/boost/python/call.hpp:63
          #12 0x404ba2e7 in MessageListenerWrap::call_onMessage<cms::TextMessage> (this=0x953b660, message=0x97a81d0) at src/main/MessageListener.cpp:76
          #13 0x404ba7dd in MessageListenerWrap::onMessage (this=0x953b660, message=0x97a81d0) at src/main/MessageListener.cpp:58
          #14 0x40734796 in activemq::core::ActiveMQConsumer::dispatch (this=0x9539878, data=@0x4214fd00) at activemq/core/ActiveMQConsumer.cpp:489
          #15 0x40759bb0 in activemq::core::ActiveMQSessionExecutor::dispatch (this=0x9466278, data=@0x4214fd00) at activemq/core/ActiveMQSessionExecutor.cpp:185
          #16 0x40759733 in activemq::core::ActiveMQSessionExecutor::dispatchAll (this=0x9466278) at activemq/core/ActiveMQSessionExecutor.cpp:266
          #17 0x40759809 in activemq::core::ActiveMQSessionExecutor::run (this=0x9466278) at activemq/core/ActiveMQSessionExecutor.cpp:208
          #18 0x4082a69f in decaf::lang::Thread::runCallback (self=0x92ba998, param=0x96ea288) at decaf/lang/Thread.cpp:125
          #19 0x40bb7e80 in dummy_worker (opaque=0x92ba998) at threadproc/unix/thread.c:142
          #20 0x0062145b in start_thread () from /lib/libpthread.so.0
          #21 0x00578c4e in clone () from /lib/libc.so.6
          
          Thread 1 (Thread 1073832016 (LWP 10038)):
          #0  0x40000402 in __kernel_vsyscall ()
          #1  0x00627a3e in __lll_mutex_lock_wait () from /lib/libpthread.so.0
          #2  0x006238b4 in _L_mutex_lock_760 () from /lib/libpthread.so.0
          #3  0x00623758 in pthread_mutex_lock () from /lib/libpthread.so.0
          #4  0x40ba7e44 in apr_thread_mutex_lock (mutex=0x96c9e20) at locks/unix/thread_mutex.c:92
          #5  0x408433d1 in decaf::util::concurrent::Mutex::lock (this=0x94662b0) at decaf/util/concurrent/Mutex.cpp:52
          #6  0x4070ecc5 in decaf::util::concurrent::Lock::lock (this=0xbf8f3514) at ./decaf/util/concurrent/Lock.h:94
          #7  0x4070ee4e in Lock (this=0xbf8f3514, object=0x94662b0, intiallyLocked=true) at ./decaf/util/concurrent/Lock.h:66
          #8  0x407594b4 in activemq::core::ActiveMQSessionExecutor::stop (this=0x9466278) at activemq/core/ActiveMQSessionExecutor.cpp:145
          #9  0x40741779 in activemq::core::ActiveMQSession::stop (this=0x94661d8) at activemq/core/ActiveMQSession.cpp:827
          #10 0x40745348 in activemq::core::ActiveMQSession::onConnectorResourceClosed (this=0x94661d8, resource=0x9539820) at activemq/core/ActiveMQSession.cpp:663
          #11 0x407d8bb1 in activemq::connector::BaseConnectorResource::close (this=0x9539820) at activemq/connector/BaseConnectorResource.cpp:66
          #12 0x407360bb in activemq::core::ActiveMQConsumer::close (this=0x9539878) at activemq/core/ActiveMQConsumer.cpp:92
          #13 0x40736618 in ~ActiveMQConsumer (this=0x9539878) at activemq/core/ActiveMQConsumer.cpp:68
          #14 0x4049294b in ~auto_ptr (this=0x9306ebc) at /usr/lib/gcc/i386-redhat-linux/4.1.2/../../../../include/c++/4.1.2/memory:259
          #15 0x4049dfb3 in ~pointer_holder (this=0x9306eb4) at /usr/include/boost/python/object/pointer_holder.hpp:55
          #16 0x40a8c57a in boost::python::objects::copy_class_object () from /usr/lib/libboost_python.so.2
          ...
          
          tabish Timothy A. Bish added a comment
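
          Reduced to its essentials, what the two traces suggest is a classic lock-order inversion: thread 1 holds the Python GIL and wants the executor's mutex, while thread 2 appears to hold the executor's mutex and wants the GIL. The following self-contained C++ sketch reproduces just that pattern - it is an illustration, not the AMQCPP or pyactivemq code, and it uses std::thread purely for brevity (the real code sits on decaf/APR mutexes):

            #include <chrono>
            #include <iostream>
            #include <mutex>
            #include <thread>

            std::mutex gil;           // stands in for the Python GIL
            std::mutex executorLock;  // stands in for the session executor's mutex

            // Like Thread 1 in the trace: ~ActiveMQConsumer runs under the GIL
            // and then tries to stop the session executor.
            void closingThread() {
                std::lock_guard<std::mutex> g(gil);
                std::this_thread::sleep_for(std::chrono::milliseconds(50));
                std::lock_guard<std::mutex> e(executorLock);  // blocks forever
                std::cout << "consumer closed\n";
            }

            // Like Thread 2 in the trace: the executor dispatches under its own
            // lock, and onMessage needs the GIL to call back into Python.
            void dispatchThread() {
                std::lock_guard<std::mutex> e(executorLock);
                std::this_thread::sleep_for(std::chrono::milliseconds(50));
                std::lock_guard<std::mutex> g(gil);           // blocks forever
                std::cout << "message dispatched\n";
            }

            int main() {
                std::thread t1(closingThread);
                std::thread t2(dispatchThread);
                t1.join();  // never returns: ABBA deadlock
                t2.join();
                return 0;
            }

          The usual cures for this shape of hang are to release the GIL before blocking on native locks, or to never call back into Python while holding the executor's lock - consistent with the deadlock living in the wrapper layer rather than in AMQCPP itself.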

          Good catch!

          alex.martens Alexander Martens added a comment

          Are there any objections to closing this issue?

          tabish Timothy A. Bish added a comment

          Not on my side!

          BTW: After commenting this particular test out of stresstest.py, it has been running for 5 hours and 6350 loops without crashes.

          alex.martens Alexander Martens added a comment

          Resolved in trunk; the deadlock seems to be in the Python library. A new issue can be created if this turns out not to be the case.

          tabish Timothy A. Bish added a comment

          Nice work guys. I'll take a look as soon as I can (a few days from now).

          fullung Albert Strasheim added a comment

          People

            Assignee: tabish Timothy A. Bish
            Reporter: alex.martens Alexander Martens
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved: