QPID-1769: 64 kilobyte messages not dequeued immediately when messageConsumer.receive is called

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Later
    • Affects Version/s: M4
    • Fix Version/s: JIRA Cleanup
    • Component/s: Java Client
    • Labels:
      None
    • Environment: Redhat

      Description

      I'm running into a bug where, when I send messages 64 kilobytes long via a JMS producer and retrieve them via a JMS consumer, they do not appear to be dequeued until much later (even though the consumer is still reading the messages). The dequeue probably only happens when connection.close() or ctx.close() is called. I've concluded this because:

      (A) The message number that overflows the queue is the same as the queue size divided by the message size (i.e., all the messages are still in the queue when the overflow happens).

      (B) The qpid-queue-stats program shows no dequeuing occurring.

      (C) When I run a simple consumer against the 64k message producer, it receives the messages, even though no actual dequeuing occurs on the queue. The last thing it does is hang on messageConsumer.receive(), and the read messages are never dequeued.

      (D) When I modify the simple consumer from (C) to time out after 30 seconds (messageConsumer.receive(30000)) and it reaches the end of the program by timing out, the dequeues all occur at once.

      (E) This occurs even when I slow the producer down to about 50 messages per second; no dequeuing occurs until after the timeout mentioned in (D).

      This has the effect of causing my queue to fill up. Note that I do not have this problem when sending messages 32 kilobytes long and smaller; messages dequeue normally at those sizes.

      I tried to replicate this behavior in the Python client, but the Python client seemed to handle 64k messages without any problems.

      Note that I am running against the C++ broker and my queue size limit is 100 megabytes.
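
      For context, a minimal consumer along the lines of the attached Consumer.java might look like the sketch below. This is only an approximation of the reported setup: the JNDI names ("qpidConnectionFactory", "message_queue") and the use of AUTO_ACKNOWLEDGE are assumptions, not taken from the actual attachment.

      // Minimal JMS consumer sketch (assumed setup: a jndi.properties file on the
      // classpath that defines "qpidConnectionFactory" and the queue "message_queue").
      import javax.jms.*;
      import javax.naming.Context;
      import javax.naming.InitialContext;

      public class SimpleConsumer {
          public static void main(String[] args) throws Exception {
              Context ctx = new InitialContext();
              ConnectionFactory factory =
                  (ConnectionFactory) ctx.lookup("qpidConnectionFactory"); // assumed JNDI name
              Connection connection = factory.createConnection();
              Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
              Destination queue = (Destination) ctx.lookup("message_queue"); // assumed JNDI name
              MessageConsumer consumer = session.createConsumer(queue);
              connection.start();

              Message msg;
              // With receive(30000) the loop exits 30 seconds after the last message,
              // which is the point at which the dequeues were observed to happen all at once.
              while ((msg = consumer.receive(30000)) != null) {
                  System.out.println("received " + msg.getJMSMessageID());
              }

              connection.close();
              ctx.close();
          }
      }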

      Attachments:
      1. Producer.java (5 kB, attached by Jeff Stein)
      2. Consumer.java (6 kB, attached by Jeff Stein)

        Activity

        Jeff Stein added a comment -

        Here's the qpid-queue-stats program showing that nothing is dequeueing:

        Queue Name      Sec       Depth   Enq Rate   Deq Rate
        ======================================================
        message_queue   3470.62       0       0.00       0.00
        message_queue     10.00     306      30.60       0.00
        message_queue     10.00    1599     129.29       0.00

        (at 1599 messages, it maxes out)

        Jeff Stein added a comment -

        Here are my hacked-up Producer and Consumer JMS examples so you don't have to create them yourself. There are also a couple of extra commented-out bits in them, which you can just ignore.

        Just declare the queue, run the consumer, and then run the producer. You should get the same problem I got.

        Rajith Attapattu added a comment -

        Messages are acked in batches (for performance reasons) if prefetch is enabled.
        The default prefetch is 5000. The client will therefore ack when one of the following conditions is satisfied:
        a) the number of consumed messages >= prefetch/2
        b) every x milliseconds, as defined by -Dqpid.session.max_ack_delay=x (the default is 1000 ms)

        If you would like the messages to be dequeued immediately, you could use -Dsync_ack=true.
        This ensures that each message is acked as soon as it is consumed.
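
        For example, the consumer could be launched with either of the properties above on the command line (the values here are only illustrative; the property names are the ones given in this comment):

          java -Dqpid.session.max_ack_delay=250 Consumer
          java -Dsync_ack=true Consumer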

        Rajith Attapattu added a comment -

        I forgot to mention that neither -Dqpid.session.max_ack_delay=x nor -Dsync_ack was present in M4.
        You could use the trunk or the upcoming M5 release, which will include these fixes.

        Jeff Stein added a comment -

        I adjusted the max_prefetch and everything works fine now with the large messages. Thank you for the help, and sorry for the non-bug bug.
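
        (For reference, a sketch of how the prefetch might be lowered, either as a JVM property or as a connection URL option; the value is illustrative and the option names are assumptions that should be checked against the client version in use:)

          java -Dmax_prefetch=10 Consumer

          amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'&maxprefetch='10'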

        Rajith Attapattu added a comment -

        Thank you for trying out Qpid, and no need to apologize.
        I found another issue while trying to test the two issues you reported.

        Jeff Stein added a comment -

        Hi again,

        Would it be possible to add to trunk a flag similar to max_prefetch, but based on cumulative message size rather than number of messages? I'm running into a situation where I may have some very small messages or very large messages on the same queue, and basically I'd like a lower max_prefetch when there are a lot of large messages and a higher max_prefetch when there are a lot of smaller messages. That way, I can (A) get good performance on the smaller messages, and (B) not run into the situation where my queue fills up with larger messages before the ack occurs. I think something like a max_prefetch based on the total size of the messages in the queue would accomplish this. Is this possible? Thank you!

        Rajith Attapattu added a comment -

        Hi Jeff,

        In AMQP you have the facility to set message credits both in terms of bytes and in terms of number of messages.
        The current solution of setting max prefetch is less than ideal, so I don't think adding another global or connection-specific option like that is the best solution.

        The ideal solution, IMO, is to allow these to be set at the destination level.
        I have some WIP on this front which I am hoping to post during the weekend for review.
        I don't think it's feasible to have it in for the M5 release, but at least you should be able to use it from the trunk.
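
        Purely as an illustration of the idea (not the WIP described above), a per-destination setting might look something like the hypothetical address-string options below; the option names and the byte-based variant are invented for illustration, not confirmed syntax:

          message_queue; {link: {capacity: 50}}               <-- hypothetical: credit as a message count
          message_queue; {link: {capacity: {bytes: 1048576}}} <-- hypothetical: credit as a byte limit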

        Jeff Stein added a comment -

        Sounds good!

        By the way, I'm running into a new, semi-related problem. After the first time a message gets rejected because the queue is full, all subsequent messages get rejected as well, even if the queue has shrunk considerably in the meantime. Is this a bug, or is there a simple workaround, as with some of the other issues I'm running into? Maybe there is a flag I need to reset or something?

        Thank you!

        Robbie Gemmell added a comment -

        Closing issue out as part of JIRA cleanup. Issue may already be resolved, may be invalid, or may never be fixed. See QPID-3469 for further details.


          People

          • Assignee: Unassigned
          • Reporter: Jeff Stein
          • Votes: 0
          • Watchers: 0
