Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Unresolved
    • Affects Version/s: 0.10
    • Fix Version/s: None
    • Component/s: C++ Clustering
    • Labels:
    • Environment:

      Two node persistent cluster using openais. Both nodes are CentOS 5.5.

Description

      I have configured the qpid 0.10 C++ broker as a two-node persistent cluster. It worked without any issue for a few hours, or sometimes for one or two days, but then one node went down with the following error:
      ---------------------------------------
      2011-05-30 12:55:28 warning Journal "OPC_MESSAGE_QUEUE": Enqueue capacity threshold exceeded on queue "OPC_MESSAGE_QUEUE".
      2011-05-30 12:55:28 error Unexpected exception: Enqueue capacity threshold exceeded on queue "OPC_MESSAGE_QUEUE". (JournalImpl.cpp:587)
      2011-05-30 12:55:28 error Connection 192.168.1.138:5672-192.168.1.10:58839 closed by error: Enqueue capacity threshold exceeded on queue "OPC_MESSAGE_QUEUE". (JournalImpl.cpp:587)(501)
      2011-05-30 12:55:28 critical cluster(192.168.1.138:6321 READY/error) local error 11545 did not occur on member 192.168.1.139:25161: Enqueue capacity threshold exceeded on queue "OPC_MESSAGE_QUEUE". (JournalImpl.cpp:587)
      2011-05-30 12:55:28 critical Error delivering frames: local error did not occur on all cluster members : Enqueue capacity threshold exceeded on queue "OPC_MESSAGE_QUEUE". (JournalImpl.cpp:587) (qpid/cluster/ErrorCheck.cpp:89)
      2011-05-30 12:55:28 notice cluster(192.168.1.138:6321 LEFT/error) leaving cluster QCLUSTER
      2011-05-30 12:55:28 notice Shut down
      --------------------------------------

      But the remaining node is working without any issue.

Activity

        Alan Conway added a comment -

        The problem here is that you are overflowing your journal. The journal isn't exactly the same on different nodes in a cluster, so if one node overflows and the other doesn't, the one that overflowed will shut down. This is because it no longer has a faithful record of all the messages sent, so it is better to shut down and let clients fail over to the good broker.

        You should look at the throughput of your producers and consumers. If the consumers are not at least as fast (on average) as the producers, then queue depth will increase without limit. You might also increase the capacity of the journal to ensure it is enough to handle the peak message load.
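        As a rough sketch of how the journal capacity might be raised, assuming the 0.10-era legacy store and the qpid-config tool (the queue name is taken from the log above, the sizes are only illustrative, and the option names --file-count, --file-size, --num-jfiles and --jfile-size-pgs should be checked against your store version):

          # Per-queue journal geometry is fixed when the durable queue is created,
          # so the queue has to be deleted and re-declared with larger values.
          # --file-size is in 64 KiB pages: 8 files x 512 pages is roughly 256 MiB.
          qpid-config add queue OPC_MESSAGE_QUEUE --durable --file-count 8 --file-size 512

          # Broker-wide defaults can also be raised via the store module options, e.g.
          #   qpidd ... --num-jfiles 8 --jfile-size-pgs 512

        Whatever the sizes, this only buys headroom; it does not change the fact that a consistently slower consumer will eventually fill any journal.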

        sujith paily added a comment -

        Hi,

        Thanks for your update, Alan. As you said, qpid is going down due to journal overflow. But how can we automatically bring up the node that went down (so that we can guarantee high availability)? Is it possible to increase the journal size and restart the downed node automatically? When we bring it up, will it sync with the node that is running?

        Alan Conway added a comment -

        Presently you can't auto-expand a store while the broker is running.

        However, a good solution is to set a queue limit policy on your queues with a limit that is lower than the size of your store. Policy exceptions are synchronized across the cluster, so if you exceed the limit on a queue, the sender will receive an exception and the cluster will continue as normal.

        Any time you add a node to the cluster, it will synchronize with the other members when it joins.
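        For reference, a minimal sketch of the queue limit policy described above, using the qpid-config tool; the option names are as I recall them for the 0.10-era tools, and the 100 MB limit is only an example chosen to sit below the journal capacity:

          # Cap the durable queue below the store size; with the "reject" policy a
          # producer that pushes past the limit gets an exception on send, while the
          # broker and the rest of the cluster keep running.
          qpid-config add queue OPC_MESSAGE_QUEUE --durable \
              --max-queue-size 104857600 --limit-policy reject

        The producer application then needs to handle that exception (back off, retry, or drop), but the cluster itself stays up.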

        sujith paily added a comment - edited

        Hi,
        The cluster node went down due to journal overflow. I have a few more questions:

        1. Is it possible to monitor the journal file size growth and flush the journal files before they reach a certain limit, so that we can prevent the broker from going down?
        2. Is there any limit on the journal file size?

        Alan Conway added a comment -

        1. Is it possible to monitor the journal file size growth and flush the journal files before they reach a certain limit, so that we can prevent the broker from going down?

        No. If the senders are consistently sending messages faster than the receivers are accepting them, then you will inevitably hit the limit at some point.

        However, as in my previous comment, you can avoid broker shutdown: a good solution is to set a queue limit policy on your queues with a limit that is lower than the size of your store. Policy exceptions are synchronized across the cluster, so if you exceed the limit on a queue, the sender will receive an exception and the cluster will continue as normal.

        2. Is there any limit on the journal file size?

        No.
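        As a side note, while the journal cannot be flushed on demand, the queue depth that drives journal usage can be watched from outside the broker; a minimal sketch with the qpid-stat tool, assuming it is installed alongside the broker (output columns vary by version):

          # Show per-queue message and byte depth on the local broker.
          qpid-stat -q

          # A crude polling loop that could feed an alert before a limit is reached.
          watch -n 30 "qpid-stat -q | grep OPC_MESSAGE_QUEUE"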


People

    • Assignee: Alan Conway
    • Reporter: sujith paily
    • Votes: 0
    • Watchers: 1

Dates

    • Created:
    • Updated:
    • Resolved:

Time Tracking

    • Original Estimate: 24h
    • Remaining Estimate: 24h
    • Time Spent: Not Specified
