ActiveMQ Artemis / ARTEMIS-2485

Deleting SNF Queue should also delete associated remote bindings


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 2.10.0
    • Fix Version/s: 2.10.1
    • Component/s: Broker
    • Labels: None

    Description

      In https://issues.apache.org/jira/browse/ARTEMIS-2462 we added an option that automatically removes SNF (store-and-forward) queues on scale-down. However, the associated remote bindings are left behind and remain in the broker's memory.

      Those bindings are no longer used, and if a different node comes up, the stale bindings will prevent new bindings from being added.
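
      To make the gap concrete, here is a minimal sketch, assuming a simplified in-memory model of the post office. The class and method names (MiniPostOffice, onNodeScaledDown) are hypothetical illustrations, not Artemis's real API:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for a broker's post office; illustrative only.
class MiniPostOffice {
    // SNF queue name -> node ID it serves
    final Map<String, String> snfQueues = new HashMap<>();
    // remote binding unique name -> node ID it points to
    final Map<String, String> remoteBindings = new HashMap<>();

    // Models what the ARTEMIS-2462 option does on scale-down: the SNF queue
    // for the departed node is deleted...
    void onNodeScaledDown(String nodeId) {
        snfQueues.remove("$.artemis.internal.sf.my-cluster." + nodeId);
        // ...but nothing removes the remote bindings created for that node,
        // so they linger in memory. This issue asks for something like:
        // remoteBindings.values().removeIf(owner -> owner.equals(nodeId));
    }
}
{code}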

      For a common example, take a 2-broker cluster where both brokers deploy a jms.queue.DLQ queue, so each holds a remote binding for the other's queue. One broker scales down, removing its SNF queue on the other broker. Then another broker node (with a different node ID) comes up and forms a cluster with the existing broker. If the new broker also deploys jms.queue.DLQ, it causes the existing broker to try to create a remote binding. However, the existing broker still holds the old remote binding, and for that reason the new remote binding will not be added. You will see a warning like this:
      2019-09-12 01:30:51,427 WARN [org.apache.activemq.artemis.core.server] AMQ222139: MessageFlowRecordImpl [nodeID=a44b0e0a-d4fc-11e9-9e65-0a580a8201d0, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=ex-aao-ss-2, queueName=$.artemis.internal.sf.my-cluster.a44b0e0a-d4fc-11e9-9e65-0a580a8201d0, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.a44b0e0a-d4fc-11e9-9e65-0a580a8201d0, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=f04e96b2-d4fc-11e9-8b50-0a580a8201d2], temp=false]@522d336c, isClosed=false, reset=true]::Remote queue binding DLQf04e96b2-d4fc-11e9-8b50-0a580a8201d2 has already been bound in the post office. Most likely cause for this is you have a loop in your cluster due to cluster max-hops being too large or you have multiple cluster connections to the same nodes using overlapping addresses
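
      The warning appears because binding names must be unique in the post office. Below is a rough sketch of the collision, continuing the hypothetical model above; the duplicate check imitates the behavior described by AMQ222139, not Artemis's actual code, and the binding name "DLQ-binding" is invented so that the stale and new bindings collide:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class BindingCollision {
    // binding unique name -> node ID; each name may be bound only once
    private final Map<String, String> bindings = new HashMap<>();

    // Refuses a binding whose unique name is already taken, which is what
    // happens when a stale binding from the scaled-down node remains.
    boolean addRemoteBinding(String bindingName, String nodeId) {
        if (bindings.containsKey(bindingName)) {
            System.out.println("WARN: remote queue binding " + bindingName
                + " has already been bound in the post office");
            return false;
        }
        bindings.put(bindingName, nodeId);
        return true;
    }

    public static void main(String[] args) {
        BindingCollision po = new BindingCollision();
        // Stale binding left behind because scale-down removed only the SNF queue:
        po.addRemoteBinding("DLQ-binding", "old-node-id");   // added
        // The replacement node's binding resolves to the same unique name,
        // so it is rejected, producing the warning quoted above:
        po.addRemoteBinding("DLQ-binding", "new-node-id");   // rejected, warns
    }
}
{code}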

            People

              gaohoward Howard Gao
              gaohoward Howard Gao
              Votes:
              0 Vote for this issue
              Watchers:
              1 Start watching this issue

Dates

    Created:
    Updated:
    Resolved: