Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 5.2.0
    • Fix Version/s: 5.x
    • Component/s: Broker
    • Labels:
      None
    • Environment:
      Sun Solaris 10

Description

      I have an issue that greatly reduces the quality of service of a network of ActiveMQ brokers.

      Here is what I have:
      1. Four brokers (broker1, broker2, broker3, broker4) in a network formed via multicast discovery.

      2. Two consumers of QueueA on broker1 and two consumers of QueueA on broker2, with consumer queuePrefetch=1 and networkConnector prefetchSize=1. The queue uses RoundRobinDispatchPolicy.

      3. I publish 100 messages to QueueA on broker3. The two consumers on broker1 are fast and process their messages fine, but the two consumers on broker2 are stuck. Even so, the messages are still split 50/50: 50 go to broker1 and 50 go to broker2, and because broker2's consumers are stuck, those 50 messages sit stranded on broker2. It seems that prefetchSize=1 on the networkConnector has no effect at all.

      What I expect in this case is that 98 messages go to broker1 and only 2 messages remain stuck with broker2's consumers. I cannot lose a single message, so ConstantPendingMessageLimit will not help. (A configuration sketch of this setup follows below.)

      Please help. Thanks
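      For reference, a minimal sketch of the kind of broker configuration described above; the broker name, port, and discovery URI are illustrative assumptions, not taken from the report:

        <broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
          <destinationPolicy>
            <policyMap>
              <policyEntries>
                <!-- prefetch of 1 for consumers of QueueA, dispatched round-robin -->
                <policyEntry queue="QueueA" queuePrefetch="1">
                  <dispatchPolicy>
                    <roundRobinDispatchPolicy/>
                  </dispatchPolicy>
                </policyEntry>
              </policyEntries>
            </policyMap>
          </destinationPolicy>
          <networkConnectors>
            <!-- peers discovered via multicast; one message in flight per network subscription -->
            <networkConnector uri="multicast://default" prefetchSize="1"/>
          </networkConnectors>
          <transportConnectors>
            <transportConnector uri="tcp://0.0.0.0:61616" discoveryUri="multicast://default"/>
          </transportConnectors>
        </broker>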

Activity

        Gary Tully made changes -
        Fix Version/s 5.6.0 [ 12316331 ]
        Fix Version/s 5.5.0 [ 12315626 ]
        Dejan Bosanac made changes -
        Fix Version/s 5.5.0 [ 12315626 ]
        Fix Version/s 5.4.2 [ 12315625 ]
        Jeff Turner made changes -
        Project Import Fri Nov 26 22:32:02 EST 2010 [ 1290828722158 ]
        Bruce Snyder made changes -
        Fix Version/s 5.5.0 [ 12344 ]
        Fix Version/s 5.4.1 [ 12332 ]
        Rob Davies made changes -
        Fix Version/s 5.4.1 [ 12332 ]
        Gary Tully made changes -
        Priority: Blocker [ 1 ] → Major [ 3 ]
        Gary Tully added a comment -

        Changing priority to Major, as I don't think this issue should block a 5.3 release.

        The prefetch is in effect for a network consumer, but the network consumer dispatches immediately to its broker through a message producer, and that producer does not block unless the send is blocked by a memory-utilization limit. Hence my previous comments: constraining the queue's memory usage should block message producers for slow brokers.

        Have you had any success with memory usage or disk usage constraints that will cause a send to block pending space?
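        As a sketch, broker-wide limits of that sort would be configured via systemUsage; the limit values below are illustrative assumptions:

          <systemUsage>
            <systemUsage>
              <memoryUsage>
                <!-- memory available for in-flight messages -->
                <memoryUsage limit="64 mb"/>
              </memoryUsage>
              <storeUsage>
                <!-- persistent store limit -->
                <storeUsage limit="1 gb"/>
              </storeUsage>
              <tempUsage>
                <!-- temp store limit for non-persistent messages -->
                <tempUsage limit="100 mb"/>
              </tempUsage>
            </systemUsage>
          </systemUsage>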

        Gary Tully added a comment -

        OK, I think you need to reduce the memory available to the queues on all brokers and disable producerFlowControl. The prefetch value is applied to network subscriptions, but if the remote broker can accept a message, it will take it. Even with straight-through processing, not having a consumer available to consume is not a problem unless the queue is memory-constrained so that a send blocks when there is no space.

        So to ensure that 5000 messages don't build up in a slow broker, constrain the memory allocated to a queue such that it will only accept 10 or 20 messages, and disable producerFlowControl so that a send will block. When that queue fills up because the consumers are slow, this will eventually push back to the network consumer with its small prefetch.
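        A sketch of that per-queue constraint; the 1mb limit is an assumed value chosen so the queue holds only a handful of messages:

          <destinationPolicy>
            <policyMap>
              <policyEntries>
                <!-- small memory limit with producer flow control disabled so that,
                     as suggested above, a send blocks once the queue is full -->
                <policyEntry queue="QueueA" memoryLimit="1mb" producerFlowControl="false"/>
              </policyEntries>
            </policyMap>
          </destinationPolicy>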

        ying added a comment -

        Hi, I tried asyncDispatch=false and optimizeDispatch=true and it did not help.

        The real issue we face is that when we set up 4 brokers, we don't know in advance which broker will be the slow one.

        The reason the consumers on broker2 got stuck might be that a resource they need to access is unavailable at the moment. This is a decoupled environment: applications only know to talk to a broker and get their tasks, so none of this is predictable beforehand. That is why we expected prefetchSize on the networkConnector to come to the rescue, but so far it has had no effect at all.

        By the time we discover that broker2's consumers are stuck, it is already too late, because 5000 messages have already been dispatched to that broker.

        Gary Tully added a comment -

        In this scenario the broker (broker2) needs to go slow, not just the consumers.

        I wonder: if you enable straight-through processing (asyncDispatch=false and optimizeDispatch=true) for broker2, will the dispatch block the network consumer?
        An alternative solution may be to restrict the memory available to broker2 and have it use producer flow control, such that it can only have a small number of messages outstanding.
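        A sketch of that alternative for broker2; the memory limit is an assumed value:

          <!-- producer flow control throttles senders (including the network bridge)
               once the small per-queue limit is reached, so only a small number of
               messages can be outstanding on broker2 -->
          <policyEntry queue=">" memoryLimit="100kb" producerFlowControl="true"/>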

        ying created issue -

People

    • Assignee: Unassigned
    • Reporter: ying
    • Votes: 0
    • Watchers: 0

Dates

    • Created:
    • Updated:
