Qpid
QPID-3640

When releasing messages, the JMS client should not mark messages in the prefetch buffer as "redelivered".

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.10, 0.12
    • Fix Version/s: 0.15
    • Component/s: Java Client
    • Labels:
      None

      Description

      When releasing messages, for example when the application calls recover() or Connection.stop(), the client marks all messages in its prefetch buffer as "redelivered". This is not correct, since messages in the prefetch buffer have not yet been seen (delivered) by the application.
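
      To make the scenario concrete, here is a minimal sketch of a client that exercises it. This is illustrative only: the connection URL, queue name, and maxprefetch value are assumptions, not taken from this issue.

          import javax.jms.*;
          import org.apache.qpid.client.AMQConnectionFactory;

          public class RedeliveryCheck {
              public static void main(String[] args) throws Exception {
                  // Illustrative Qpid connection URL; adjust host/port and
                  // maxprefetch (the client-side prefetch window) for your setup.
                  ConnectionFactory factory = new AMQConnectionFactory(
                      "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'&maxprefetch='10'");
                  Connection connection = factory.createConnection();
                  connection.start();
                  Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
                  MessageConsumer consumer = session.createConsumer(session.createQueue("myqueue"));

                  Message seen = consumer.receive(1000); // delivered to the application
                  session.recover();                     // releases unacknowledged messages

                  Message next = consumer.receive(1000);
                  // Per this issue, only messages previously delivered to the application
                  // should come back with getJMSRedelivered() == true; messages that were
                  // merely sitting in the prefetch buffer should not.
                  System.out.println("redelivered = " + next.getJMSRedelivered());
                  connection.close();
              }
          }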

      Attachments

      1. RecoveryTest.java (5 kB, Rajith Attapattu)

        Activity

        Rajith Attapattu added a comment -

        The test case basically does the following.
        It tries to receive 100 messages. For every 5th message, it checks whether the message is marked redelivered; if not, it calls recover(). (A sketch of this loop follows the list below.)

        Once the issues identified in QPID-3602 & QPID-3629 are fixed, the correct behaviour should be as follows.

        1. We should receive 100 messages.
        2. We should see 20 messages marked as redelivered.
        3. At no point should we see the number of messages in our prefetch exceed the value specified with max_prefetch (or capacity when using addressing).
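
        Reusing the consumer and session from the sketch in the description above, the receive loop might look like the following. This is a rough reconstruction, not the attached RecoveryTest.java itself:

            int received = 0;
            int redelivered = 0;
            while (received < 100) {
                Message msg = consumer.receive(5000);
                if (msg == null) break; // timed out waiting for a message
                received++;
                if (msg.getJMSRedelivered()) {
                    redelivered++;
                    msg.acknowledge(); // acknowledges this and all earlier messages
                } else if (received % 5 == 0) {
                    // Every 5th message that is not yet marked redelivered
                    // triggers recover(), releasing the unacknowledged messages.
                    session.recover();
                }
            }
            System.out.println("redelivered count: " + redelivered);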

        Currently with some of the patches for QPID-3602, QPID-3629 and a patch for this issue, I see the following.

        1. We receive 100 messages (this was never a problem).

        2. We now receive 24 messages marked as "redelivered" (consistently), no matter what the prefetch count is. Before the patches for these issues, the number of redelivered messages depended on the prefetch, partly due to the client prefetching more than it needed, the broker sending more than the client needed, and the client marking everything in its prefetch buffer as "redelivered".

        3. With the fix for QPID-3629 we now see that the messages in the client's prefetch buffer do not exceed the max_prefetch value at any time. (However, the prefetch=0 case does not work, and I attribute that to a client error at this point.) But there seems to be another issue with that patch; at this point I am not sure if it's client- or broker-side. Since the issue does not arise without the patch for QPID-3629, it's reasonable to assume that it's caused by that patch.

        Rajith Attapattu added a comment -

        Note this test depends on the internalQueueSize() method introduced in AMQSession.java for testing purposes. So unless you are running with a patch that contains it, please comment out that line when running the test.

        That part is there to verify #3 in the above description; a sketch of the check is below.
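
        The check might look roughly like this (assumptions: the cast target, the JUnit assertTrue, and the maxPrefetch variable; internalQueueSize() only exists with the testing patch applied):

            // Requires: import org.apache.qpid.client.AMQSession;
            //           import static org.junit.Assert.assertTrue;
            // Verifies #3 above: the client's local prefetch buffer must never
            // hold more messages than max_prefetch allows.
            AMQSession<?, ?> amqSession = (AMQSession<?, ?>) session;
            assertTrue("prefetch buffer exceeded max_prefetch",
                       amqSession.internalQueueSize() <= maxPrefetch);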

        Alex Rudyy added a comment -

        Hi Rajith,

        I fixed the setting of the redelivery flag on the 0-10 client as part of porting the Dead Letter Queue and Max Delivery Count functionality (QPID-3642) from a branch to trunk.

        The patch at https://issues.apache.org/jira/secure/attachment/12505027/0002-QPID-3642-QPID-3640-Add-Dead-Letter-Queue-functional.patch has the fix in it. However, it relies on changes made in the first patch at https://issues.apache.org/jira/secure/attachment/12505026/0001-QPID-3642-QPID-3643-Add-Dead-Letter-Queue-functional.patch.

        Initially I had no intention of fixing it, but the DLQ functionality depends on the redelivery flag being set correctly.

        Rajith Attapattu added a comment -

        Alex,

        I was about to attach my patch along with a few other things.
        However, you've pretty much done the same thing, so I have no objections to what you have in your patch.

        Could you go ahead and commit the portion of your patch that fixes this issue? That way I can pull in your changes and update the rest of my patch for the credit issue.

        Regards,

        Rajith

        Rajith Attapattu added a comment -

        I will be testing this along with the rest of my work, so if I do find any issues with your patch, I can fix them. At this point I think it's best if you go ahead and commit the parts that address this specific issue.

        Keith Wall added a comment -

        Fixed by way of QPID-3642.


          People

          • Assignee:
            Rajith Attapattu
          • Reporter:
            Rajith Attapattu
          • Votes:
            0
          • Watchers:
            0
