ActiveMQ
  AMQ-3214

"InactivityMonitor Async Task" threads leaking

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 5.6.0
    • Component/s: JMS client, Transport
    • Labels: None

      Description

      - Have multiple consumer threads running to consume messages
      - Configure the connection factory with these settings:
      ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);
      connectionFactory.setUseAsyncSend(false);
      connectionFactory.setDispatchAsync(false);
      connectionFactory.setAlwaysSessionAsync(false);
      connectionFactory.setAlwaysSyncSend(true);

      - Run the consumers for several hours and profile the process
      - You will see threads named "InactivityMonitor Async Task" being spawned continuously

      This will cause the entire consumer system to slow down eventually due to thread context switching.
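
      A minimal sketch of this kind of reproducer, assuming the ActiveMQ 5.x client libraries are on the classpath; the broker URL, queue name, and consumer count below are illustrative placeholders, not values taken from this report:

      import javax.jms.Connection;
      import javax.jms.MessageConsumer;
      import javax.jms.Session;
      import org.apache.activemq.ActiveMQConnectionFactory;

      public class InactivityMonitorLeakRepro {
          public static void main(String[] args) throws Exception {
              // Assumed broker location and queue name; adjust to your environment.
              final String brokerUrl = "tcp://localhost:61616";
              final ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);
              connectionFactory.setUseAsyncSend(false);
              connectionFactory.setDispatchAsync(false);
              connectionFactory.setAlwaysSessionAsync(false);
              connectionFactory.setAlwaysSyncSend(true);

              // Several consumers, each on its own connection; let them run for hours
              // and watch the "InactivityMonitor Async Task" thread count in a profiler.
              for (int i = 0; i < 10; i++) {
                  new Thread(new Runnable() {
                      public void run() {
                          try {
                              Connection connection = connectionFactory.createConnection();
                              connection.start();
                              Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                              MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));
                              while (true) {
                                  consumer.receive(1000); // poll; returns null on timeout
                              }
                          } catch (Exception e) {
                              e.printStackTrace();
                          }
                      }
                  }).start();
              }
          }
      }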

      Suggestion to fix: we should not set the limit on the number of "InactivityMonitor Async Task" threads to Integer.MAX_VALUE. There is a bug in the Java library where it will not stop a thread after the given idle time-to-live. We could fix this in InactivityMonitor.java.

        Activity

        Gary Tully added a comment -

        Do you have a reference to the JDK bug that will not time out idle threads?

        Minh Do added a comment -

        Hi Gary,

        I used java version "1.6.0_20". However, I do think this issue is in ThreadPoolExecutor, so it will happen on all 1.6.x versions.
        The problem is that if we construct a ThreadPoolExecutor the way InactivityMonitor.java does, all threads become core threads, and they will not get discarded as they become idle unless we set ThreadPoolExecutor.allowCoreThreadTimeOut to true (it is false by default).

        I think both the ActiveMQ client and server are using the InactivityMonitor class, and both will slow down to a complete stop after a while.

        I had a patch for this in our development environment and I could see the problem was gone. What I did was change the way InactivityMonitor constructs the async task thread pool.
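
        For illustration, a small standalone sketch (not ActiveMQ code) of the ThreadPoolExecutor behaviour described above: with a non-zero core pool size, idle workers survive the keep-alive time by default and are only reclaimed once allowCoreThreadTimeOut(true) is set. The pool size and timings here are arbitrary:

        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class CoreThreadTimeoutDemo {
            public static void main(String[] args) throws Exception {
                // 10 core threads, 30 second keep-alive.
                ThreadPoolExecutor pool = new ThreadPoolExecutor(
                        10, 10, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
                for (int i = 0; i < 10; i++) {
                    pool.execute(new Runnable() {
                        public void run() { /* short-lived task */ }
                    });
                }

                // Core threads ignore the keep-alive by default, so the pool stays at 10.
                Thread.sleep(60 * 1000);
                System.out.println("default:                " + pool.getPoolSize());

                // Once core threads may time out, the idle workers are reclaimed.
                pool.allowCoreThreadTimeOut(true);
                Thread.sleep(60 * 1000);
                System.out.println("allowCoreThreadTimeOut: " + pool.getPoolSize());
                pool.shutdown();
            }
        }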

        Minh Do added a comment -

        This image is of the ActiveMQ client library, taken after I used it to stress test the ActiveMQ server.

        Timothy Bish added a comment -

        I tried to replicate this sort of behaviour in AMQ using JDK 1.6.0_21 but can't, and I also couldn't find any open JDK bugs that sound like this. Perhaps you should try a JDK update, and if that fails to help, maybe provide a stress test example that demonstrates the problem.

        The way the executor is created now should prevent any threads from being treated as core threads, so they should time out according to the configured idle time.
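
        Roughly the construction style being referred to, sketched under the assumption (not verbatim ActiveMQ source) of a core size of 0, an unbounded maximum, a short keep-alive, and a SynchronousQueue; with a core size of 0 no worker counts as a core thread, so idle workers expire after the keep-alive even without allowCoreThreadTimeOut(true):

        import java.util.concurrent.SynchronousQueue;
        import java.util.concurrent.ThreadFactory;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class AsyncTaskPoolSketch {
            static ThreadPoolExecutor createExecutor() {
                // corePoolSize = 0: every worker is above the core size and is
                // discarded once it has been idle for the keep-alive (10 seconds here).
                return new ThreadPoolExecutor(
                        0, Integer.MAX_VALUE, 10, TimeUnit.SECONDS,
                        new SynchronousQueue<Runnable>(),
                        new ThreadFactory() {
                            public Thread newThread(Runnable runnable) {
                                Thread thread = new Thread(runnable, "InactivityMonitor Async Task");
                                thread.setDaemon(true);
                                return thread;
                            }
                        });
            }
        }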

        Minh Do added a comment -

        Hi Timothy,

        We actually had this issue in a few different environments here running the ActiveMQ server or the ActiveMQ client.

        At my previous company, they opted to purchase SonicMQ after deploying the ActiveMQ server to production and having to restart it almost every week, with a noticeable slow-down day after day.

        I would not say that it is a JDK bug. The way ThreadPoolExecutor is used in InactivityMonitor is the problem.
        I can reproduce this every time. Otherwise, could you please explain why there are so many InactivityMonitor threads in my attached image? I was using YourKit to do the profiling.

        Timothy Bish added a comment -

        Added the call to allowCoreThreadTimeOut(true) since it shouldn't affect the code regardless.
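
        A sketch of the kind of change described, under the same assumptions as the construction sketched above; the added call is harmless when the core size is 0 and guarantees that any thread counted as core still times out after the keep-alive:

        import java.util.concurrent.SynchronousQueue;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class AsyncTaskPoolWithCoreTimeout {
            static ThreadPoolExecutor createExecutor() {
                ThreadPoolExecutor executor = new ThreadPoolExecutor(
                        0, Integer.MAX_VALUE, 10, TimeUnit.SECONDS,
                        new SynchronousQueue<Runnable>());
                // The added call: idle core threads (if any) now expire after the
                // keep-alive instead of being kept alive indefinitely.
                executor.allowCoreThreadTimeOut(true);
                return executor;
            }
        }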


          People

          • Assignee: Unassigned
          • Reporter: Minh Do
          • Votes: 0
          • Watchers: 0


              Time Tracking

              Original Estimate: 1h
              Remaining Estimate: 1h
              Time Spent: Not Specified
