ActiveMQ / AMQ-3418

CLONE - Slave Exception Channel was inactive for too long

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 5.5.0
    • Fix Version/s: None
    • Component/s: Broker
    • Labels: None
    • Environment:

      CentOS 5.5
      Activemq 5.5.0

      Description

      I have a master/slave setup on the same machine. Clients (consumers) connect to the master, and everything is up and running for two days. No messages are produced or consumed during that period, and then I get the following:
      2009-03-08 03:16:43,032 [$Worker@1f4577d] ERROR TransportConnection - Slave has exception: Channel was inactive for too long: /127.0.0.1:32803 shutting down master now.
      org.apache.activemq.transport.InactivityIOException: Channel was inactive for too long: /127.0.0.1:32803
      at org.apache.activemq.transport.InactivityMonitor$5.run(InactivityMonitor.java:164)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      at java.lang.Thread.run(Thread.java:619)
      2009-03-08 03:16:43,032 [0.0.0.245:33220] ERROR MasterBroker - Slave Failed
      org.apache.activemq.transport.InactivityIOException: Channel was inactive for too long: /127.0.0.1:32803
      at org.apache.activemq.transport.InactivityMonitor$5.run(InactivityMonitor.java:164)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      at java.lang.Thread.run(Thread.java:619)
      2009-03-08 03:16:43,044 [0.0.0.245:33236] ERROR MasterBroker - Slave Failed
      org.apache.activemq.transport.InactivityIOException: Channel was inactive for too long: /127.0.0.1:32803
      at org.apache.activemq.transport.InactivityMonitor$5.run(InactivityMonitor.java:164)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      at java.lang.Thread.run(Thread.java:619)
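
      For context, a pure master/slave pair like the one described above is typically wired together through the slave broker's masterConnectorURI attribute. A minimal sketch of the slave side follows; the broker name, host, and ports are placeholder values, not taken from this report:

      ```xml
      <!-- Slave broker activemq.xml (sketch). The slave connects to the
           master via masterConnectorURI and replicates its operations;
           hosts and ports here are hypothetical. -->
      <broker xmlns="http://activemq.apache.org/schema/core"
              brokerName="slave"
              masterConnectorURI="tcp://localhost:61616"
              shutdownOnMasterFailure="false">
          <transportConnectors>
              <transportConnector name="openwire" uri="tcp://0.0.0.0:61617"/>
          </transportConnectors>
      </broker>
      ```

      The InactivityIOException in the logs above comes from the OpenWire inactivity monitor on this master/slave channel, which closes the connection when no traffic (including keep-alives) is seen within the configured window.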

      1. masterbroker.log.gz
        173 kB
        Aaron Phillips
      2. slavebroker.log.gz
        81 kB
        Aaron Phillips

        Issue Links

          Activity

          Transition: Open → Closed | Time in source status: 491d 7h 58m | Executions: 1 | Last executer: Timothy Bish | Last execution date: 29/Nov/12 21:26

          Timothy Bish made changes -
          Status: Open [ 1 ] → Closed [ 6 ]
          Resolution: Won't Fix [ 2 ]
          Timothy Bish added a comment -

          Pure master/slave removed in upcoming v5.8.0

          Aaron Phillips made changes -
          Affects Version/s: 5.2.0 [ 12315619 ] → 5.5.0 [ 12315626 ]
          Environment: Solaris 10 / Activemq 5.2.0 → CentOS 5.5 / Activemq 5.5.0
          Aaron Phillips made changes -
          Attachment slavebroker.log.gz [ 12487975 ]
          Aaron Phillips added a comment -

          slave broker full DEBUG log

          Aaron Phillips made changes -
          Attachment masterbroker.log.gz [ 12487974 ]
          Aaron Phillips added a comment -

          master broker full DEBUG log

          Aaron Phillips added a comment (edited) -

          I am seeing the same thing. I have a master-slave pair and almost as frequently as once a day I see a message like this in the master broker's log, then the master broker goes down, leaving the slave up.

          2011-07-26 22:41:10,810 | ERROR | Slave Failed | org.apache.activemq.broker.ft.MasterBroker | ActiveMQ NIO Worker
          org.apache.activemq.transport.InactivityIOException: Channel was inactive for too (>30000) long: /172.16.10.123:33644

          I have tried setting the timeout like this in the slave's activemq.xml config, but the error above, which references the 30,000 ms default timeout, still happens, as if the setting has no effect.

          tcp://masterbroker:61616?wireFormat.maxInactivityDuration=150000

          I am attaching both the master and slave logs with DEBUG turned on. I am actually running 5.5.0, not 5.2.0 as the original issue states. This is also running on CentOS 5.5.
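
          One thing worth checking here (a sketch, under the assumption that the effective inactivity window is negotiated from both ends of the OpenWire connection): setting wireFormat.maxInactivityDuration only on the slave's connection URI may not be enough if the master's listening transportConnector still advertises the 30,000 ms default. Raising it on both sides would look roughly like this; host names and ports are taken from the comment above, the rest is illustrative:

          ```xml
          <!-- Master broker activemq.xml (sketch): raise the inactivity
               window on the listening side as well. -->
          <transportConnector name="openwire"
              uri="tcp://0.0.0.0:61616?wireFormat.maxInactivityDuration=150000"/>

          <!-- Slave broker activemq.xml (sketch): matching value on the
               slave's connection to the master. -->
          <broker xmlns="http://activemq.apache.org/schema/core"
                  brokerName="slave"
                  masterConnectorURI="tcp://masterbroker:61616?wireFormat.maxInactivityDuration=150000">
          </broker>
          ```

          This is only a configuration hypothesis consistent with the symptom reported (the 30s default still appearing in the error), not a confirmed fix from this issue.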

          Aaron Phillips made changes -
          Link: This issue is a clone of AMQ-2152 [ AMQ-2152 ]
          Aaron Phillips created issue -

            People

            • Assignee: Unassigned
            • Reporter: Aaron Phillips
            • Votes: 3
            • Watchers: 4
