ActiveMQ / AMQ-1780

ActiveMQ broker does not automatically reconnect if the connection to the database is lost

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 5.0.0
    • Fix Version/s: 5.5.0
    • Component/s: Broker
    • Labels: None
    • Environment: Windows 2003 Server

      Description

      hi

      We are noticing that after any SQL Server restart or network blip between ActiveMQ and the database, the ActiveMQ broker needs to be restarted as well once the connection or the database comes back online; it does not automatically re-establish the connection to the database, so any message send fails because the broker is still using the stale connection.

      Is this designed behaviour or a bug? We are using ActiveMQ 5.0.0 and the latest version of the JSQLConnect database driver, version 5.7. The database we are using is MS SQL Server 2005.

      Right now, in our production environment, any time we have network maintenance or a database restart we also have to restart the ActiveMQ broker, which is not a good option for us.

      Also, we are using a single ActiveMQ broker and not the JDBC (Master/Slave) setup.

      Issue details in

      http://www.nabble.com/Database-connection-between-ActiveMQ-and-broker-td17321330s2354.html

      Please let me know if I need to give more information.

      thanks
      jaya

          Activity

          Jaya Srinivasan added a comment -

          hi

          can someone give an ETA on when this will be fixed?

          thanks!
          jaya

          Kaiqin Sun added a comment -

          This is an issue in our production environment as well, and we hope it can be fixed soon.

          Thanks,
          Kaiqin

          Gary Tully added a comment -

          Reused the pluggable IOExceptionHandler. It is now called when a call to JDBC getConnection fails. Extended the DefaultIOExceptionHandler to allow a stop/resume-of-connectors option rather than a simple broker stop. Added XBean support for DefaultIOExceptionHandler so it can easily be included in XML configuration.

          Relevant programmatic configuration from the test: use stopStartConnectors and do not ignore SQL exceptions. Disable the default DB lock so that a lock failure will not stop the broker.

                // Stop and restart the transport connectors on an I/O error
                // instead of stopping the broker, and do not ignore SQLExceptions.
                DefaultIOExceptionHandler handler = new DefaultIOExceptionHandler();
                handler.setIgnoreSQLExceptions(false);
                handler.setStopStartConnectors(true);
                broker.setIoExceptionHandler(handler);

                // Disable the exclusive database lock so a lock failure does not stop the broker.
                JDBCPersistenceAdapter persistenceAdapter = new JDBCPersistenceAdapter();
                persistenceAdapter.setDataSource(sharedDs);
                persistenceAdapter.setUseDatabaseLock(false);
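
          For reference, a self-contained sketch expanding the fragment above (the class name, data source settings, and connector URI below are illustrative placeholders, not taken from this issue):

              import org.apache.activemq.broker.BrokerService;
              import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;
              import org.apache.activemq.util.DefaultIOExceptionHandler;
              import org.apache.commons.dbcp.BasicDataSource;

              public class JdbcReconnectBrokerSketch {
                  public static void main(String[] args) throws Exception {
                      // Placeholder DBCP data source for SQL Server; adjust URL and credentials.
                      BasicDataSource ds = new BasicDataSource();
                      ds.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
                      ds.setUrl("jdbc:sqlserver://dbhost:1433;databaseName=activemq");
                      ds.setUsername("activemq");
                      ds.setPassword("secret");

                      BrokerService broker = new BrokerService();
                      broker.setBrokerName("testBroker");

                      // JDBC store with the exclusive database lock disabled, as in the snippet above.
                      JDBCPersistenceAdapter persistenceAdapter = new JDBCPersistenceAdapter();
                      persistenceAdapter.setDataSource(ds);
                      persistenceAdapter.setUseDatabaseLock(false);
                      broker.setPersistenceAdapter(persistenceAdapter);

                      // On JDBC failures, stop the transport connectors and resume them once
                      // the database is reachable again, instead of stopping the whole broker.
                      DefaultIOExceptionHandler handler = new DefaultIOExceptionHandler();
                      handler.setIgnoreSQLExceptions(false);
                      handler.setStopStartConnectors(true);
                      broker.setIoExceptionHandler(handler);

                      broker.addConnector("tcp://0.0.0.0:61616");
                      broker.start();
                      broker.waitUntilStopped();
                  }
              }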
          
          Oleg Kiorsak added a comment -

          Has this really been fixed in 5.5?

          I just tried out 5.5 and the same thing happens - as soon as I restart the SQL Server service, the ActiveMQ broker shuts down completely...

          (NOTE: I am using the same activemq.xml config files copied across from the 5.4.2 instance; do I need to modify something for the "fix" to take effect maybe?)

          Thank you!

          — LOG —

          ...
          INFO | Using Persistence Adapter: JDBCPersistenceAdapter(org.apache.commons.dbcp.BasicDataSource@1f758cd1)
          INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:2011/jmxrmi
          INFO | Database adapter driver override recognized for : [microsoft_sql_server_jdbc_driver_2_0] - adapter: class org.apache.activemq.store.jdbc.adapter.TransactJDBCAdapter
          INFO | Database lock driver override recognized for : [microsoft_sql_server_jdbc_driver_2_0] - adapter: class org.apache.activemq.store.jdbc.adapter.TransactDatabaseLocker
          INFO | Attempting to acquire the exclusive lock to become the Master broker
          INFO | Becoming the master on dataSource: org.apache.commons.dbcp.BasicDataSource@1f758cd1
          INFO | ActiveMQ 5.5.0 JMS Message Broker (MobTechTest1) is starting
          INFO | For help or more information please see: http://activemq.apache.org/
          INFO | Listening for connections at: nio://mdtapdot01:10001
          INFO | Connector openwire Started
          INFO | Listening for connections at: stomp://mdtapdot01:10002
          INFO | Connector STOMP Started
          INFO | Listening for connections at: xmpp://mdtapdot01:10003
          INFO | Connector xmpp Started
          INFO | ActiveMQ JMS Message Broker (MobTechTest1, ID:mdtapdot01-46867-1302071153455-0:1) started
          INFO | jetty-7.1.6.v20100715
          INFO | ActiveMQ WebConsole initialized.
          INFO | Initializing Spring FrameworkServlet 'dispatcher'
          INFO | ActiveMQ Console at http://0.0.0.0:8161/admin
          INFO | Initializing Spring root WebApplicationContext
          INFO | OSGi environment not detected.
          INFO | Apache Camel 2.7.0 (CamelContext: camel) is starting
          INFO | JMX enabled. Using ManagedManagementStrategy.
          INFO | Found 5 packages with 16 @Converter classes to load
          INFO | Loaded 152 type converters in 1.914 seconds
          INFO | Connector vm://MobTechTest1 Started
          INFO | Route: route1 started and consuming from: Endpoint[activemq://MDT.INBOUND]
          INFO | Total 1 routes, of which 1 is started.
          INFO | Apache Camel 2.7.0 (CamelContext: camel) started in 5.167 seconds
          INFO | Camel Console at http://0.0.0.0:8161/camel
          INFO | ActiveMQ Web Demos at http://0.0.0.0:8161/demo
          INFO | RESTful file access application at http://0.0.0.0:8161/fileserver
          INFO | Started SelectChannelConnector@0.0.0.0:8161
          ERROR | Failed to update database lock: com.microsoft.sqlserver.jdbc.SQLServerException: Broken pipe
          com.microsoft.sqlserver.jdbc.SQLServerException: Broken pipe
          at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:1368)
          at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:1355)
          at com.microsoft.sqlserver.jdbc.TDSChannel.write(IOBuffer.java:1548)
          at com.microsoft.sqlserver.jdbc.TDSWriter.flush(IOBuffer.java:2368)
          at com.microsoft.sqlserver.jdbc.TDSWriter.writePacket(IOBuffer.java:2270)
          at com.microsoft.sqlserver.jdbc.TDSWriter.endMessage(IOBuffer.java:1877)
          at com.microsoft.sqlserver.jdbc.TDSCommand.startResponse(IOBuffer.java:4403)
          at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:386)
          at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:338)
          at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4026)
          at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1416)
          at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:185)
          at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:160)
          at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeUpdate(SQLServerPreparedStatement.java:306)
          at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:102)
          at org.apache.activemq.store.jdbc.DefaultDatabaseLocker.keepAlive(DefaultDatabaseLocker.java:161)
          at org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.databaseLockKeepAlive(JDBCPersistenceAdapter.java:605)
          at org.apache.activemq.store.jdbc.JDBCPersistenceAdapter$3.run(JDBCPersistenceAdapter.java:291)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
          at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
          at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:619)
          INFO | No longer able to keep the exclusive lock so giving up being a master
          INFO | ActiveMQ Message Broker (MobTechTest1, ID:mdtapdot01-46867-1302071153455-0:1) is shutting down
          INFO | Connector openwire Stopped
          INFO | Connector STOMP Stopped
          INFO | Connector xmpp Stopped
          INFO | Connector vm://MobTechTest1 Stopped
          INFO | PListStore:/usr/local/tollmobile/activemq54data/data/MobTechTest1/tmp_storage stopped
          INFO | ActiveMQ JMS Message Broker (MobTechTest1, ID:mdtapdot01-46867-1302071153455-0:1) stopped

          Gary Tully added a comment -

          Oleg, you do need to make some configuration changes, as indicated in the previous comment, because this is a behavior change.
          You need to configure an IOExceptionHandler for the broker to override the defaults:

          <ioExceptionHandler>
            <defaultIOExceptionHandler ignoreSQLExceptions="false" stopStartConnectors="true"/>
          </ioExceptionHandler>

          Oleg Kiorsak added a comment -

          Thanks Gary!

          I think I "get it" now... adding that config tweak and also - very importantly - setting useDatabaseLock="false" on the jdbcPersistenceAdapter

          does the trick for a single standalone broker, but for a cluster there is still an outstanding issue (which you also mention in the comments there): https://issues.apache.org/jira/browse/AMQ-2497

          So if this issue (AMQ-1780) is only meant to cover the non-clustered setup, then I can confirm it does indeed appear to be fixed...

          Hope AMQ-2497 (as tricky as it sounds) gets fixed in 5.6 as planned too...

          Oleg Kiorsak added a comment -

          there is an issue, however...

          when the database is stopped for a very short period of time, everything works rather smoothly and as expected... but once the DB is not available for a longer time (5 secs), I see

          "INFO | waiting for broker persistence adapter checkpoint to succeed before restarting transports"

          and

          "INFO | Connector openwire Stopped"

          so far so good - that makes sense (but note that at this point it only stopped the "openwire" one),

          but when I restart the database, it then goes and stops (!!) the other transport connectors (in particular the STOMP one), but does not start them back...

          INFO | Connector STOMP Stopped
          INFO | Connector xmpp Stopped

          only when I go to jConsole and manually start the STOMP transport connector (the one we're using) do things get back to normal...

          INFO | Listening for connections at: stomp://mdtapdot01:10002
          INFO | Connector STOMP Started

          but that means it requires a manual procedure....

          it seems to me this is a bug and it was in fact meant to actually "restart transports", not just stop them...

          it might be that "openwire" and the other ("STOMP") connectors are handled differently... (based on the fact that "openwire" stops when the DB connection is detected as lost, but "STOMP" gets stopped when the DB is restored... and does not get restarted automatically)...

          ??
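
          A minimal sketch of performing that manual jConsole step over JMX instead (the JMX URL below is the one printed in the startup log above; the ObjectName pattern follows the WARN message in a later comment; the exact ConnectorName value and the no-argument start operation are assumptions):

              import javax.management.MBeanServerConnection;
              import javax.management.ObjectName;
              import javax.management.remote.JMXConnector;
              import javax.management.remote.JMXConnectorFactory;
              import javax.management.remote.JMXServiceURL;

              public class RestartStompConnectorSketch {
                  public static void main(String[] args) throws Exception {
                      // JMX service URL as printed at broker startup.
                      JMXServiceURL url = new JMXServiceURL(
                              "service:jmx:rmi:///jndi/rmi://localhost:2011/jmxrmi");
                      JMXConnector jmxc = JMXConnectorFactory.connect(url);
                      try {
                          MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                          // Connector MBean name; the ConnectorName ("stomp") may need adjusting.
                          ObjectName stompConnector = new ObjectName(
                                  "org.apache.activemq:BrokerName=MobTechTest1,Type=Connector,ConnectorName=stomp");
                          // Invoke the same no-argument start operation used from jConsole.
                          mbs.invoke(stompConnector, "start", null, null);
                      } finally {
                          jmxc.close();
                      }
                  }
              }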

          Gary Tully added a comment -

          I wonder what is causing the delay in the shutdown of all the transports. They should all be shutdown in order by the same thread. Is there anything odd in the logs?

          It looks like the stop thread is blocked for some time until the restart, and the restart completes before the stop does.

          Could you generate a thread dump of the broker after it has stopped openwire and before you restart... so the stop of stomp should be pending at that time?

          Oleg Kiorsak added a comment -

          Hi Gary

          interestingly, on some of the attempts I did today all transport connectors do stop at the same time (when the DB connection is lost)... but in either case they are not restarted automatically when the DB is restored...

          I tried to produce a thread dump - this is what could fit on the shell screen:


          "qtp768129156-32" prio=3 tid=0x0000000101885800 nid=0x23 waiting on condition [0xfffffffd6bafe000]
          java.lang.Thread.State: TIMED_WAITING (parking)
          at sun.misc.Unsafe.park(Native Method)

          • parking to wait for <0xfffffffedce04c38> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
            at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
            at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:320)
            at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:465)
            at java.lang.Thread.run(Thread.java:619)

          "qtp768129156-31" prio=3 tid=0x00000001010dc000 nid=0x22 waiting on condition [0xfffffffd6bcfe000]
          java.lang.Thread.State: TIMED_WAITING (parking)
          at sun.misc.Unsafe.park(Native Method)

          • parking to wait for <0xfffffffedce04c38> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
            at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
            at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:320)
            at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:465)
            at java.lang.Thread.run(Thread.java:619)

          "qtp768129156-30" prio=3 tid=0x00000001013fe000 nid=0x21 waiting on condition [0xfffffffd6befe000]
          java.lang.Thread.State: TIMED_WAITING (parking)
          at sun.misc.Unsafe.park(Native Method)

          • parking to wait for <0xfffffffedce04c38> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
            at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
            at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:320)
            at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:465)
            at java.lang.Thread.run(Thread.java:619)

          "qtp768129156-29" prio=3 tid=0x00000001013fd800 nid=0x20 waiting on condition [0xfffffffd6c0fe000]
          java.lang.Thread.State: TIMED_WAITING (parking)
          at sun.misc.Unsafe.park(Native Method)

          • parking to wait for <0xfffffffedce04c38> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
            at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
            at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:320)
            at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:465)
            at java.lang.Thread.run(Thread.java:619)

          "qtp768129156-28 - Acceptor0 SelectChannelConnector@0.0.0.0:8161" prio=3 tid=0x0000000100e89000 nid=0x1f runnable [0xfffffffd6c2fe000]
          java.lang.Thread.State: RUNNABLE
          at sun.nio.ch.DevPollArrayWrapper.poll0(Native Method)
          at sun.nio.ch.DevPollArrayWrapper.poll(DevPollArrayWrapper.java:170)
          at sun.nio.ch.DevPollSelectorImpl.doSelect(DevPollSelectorImpl.java:68)
          at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)

          • locked <0xffffffff03259268> (a sun.nio.ch.Util$1)
          • locked <0xffffffff03259250> (a java.util.Collections$UnmodifiableSet)
          • locked <0xffffffff03258ef0> (a sun.nio.ch.DevPollSelectorImpl)
            at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
            at org.eclipse.jetty.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:480)
            at org.eclipse.jetty.io.nio.SelectorManager.doSelect(SelectorManager.java:193)
            at org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:134)
            at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:793)
            at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
            at java.lang.Thread.run(Thread.java:619)

          "ActiveMQ Broker[MobTechTest1] Scheduler" daemon prio=3 tid=0x000000010109d000 nid=0x18 in Object.wait() [0xfffffffd6d6ff000]
          java.lang.Thread.State: TIMED_WAITING (on object monitor)
          at java.lang.Object.wait(Native Method)

          • waiting on <0xfffffffedc814190> (a java.util.TaskQueue)
            at java.util.TimerThread.mainLoop(Timer.java:509)
          • locked <0xfffffffedc814190> (a java.util.TaskQueue)
            at java.util.TimerThread.run(Timer.java:462)

          "ActiveMQ Cleanup Timer" daemon prio=3 tid=0x0000000100e35000 nid=0x17 waiting on condition [0xfffffffd6e8fe000]
          java.lang.Thread.State: TIMED_WAITING (parking)
          at sun.misc.Unsafe.park(Native Method)

          • parking to wait for <0xfffffffedc766668> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
            at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
            at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
            at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:583)
            at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:576)
            at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
            at java.lang.Thread.run(Thread.java:619)

          "RMI TCP Connection(5)-10.38.58.172" daemon prio=3 tid=0x0000000100f54800 nid=0x16 runnable [0xfffffffd6d8fe000]
          java.lang.Thread.State: RUNNABLE
          at java.net.SocketInputStream.socketRead0(Native Method)
          at java.net.SocketInputStream.read(SocketInputStream.java:129)
          at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
          at java.io.BufferedInputStream.read(BufferedInputStream.java:237)

          • locked <0xfffffffed9747f10> (a java.io.BufferedInputStream)
            at java.io.FilterInputStream.read(FilterInputStream.java:66)
            at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:517)
            at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
            at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)

          "RMI RenewClean-[10.66.179.1:50344]" daemon prio=3 tid=0x0000000100efd800 nid=0x15 in Object.wait() [0xfffffffd6dafe000]
          java.lang.Thread.State: TIMED_WAITING (on object monitor)
          at java.lang.Object.wait(Native Method)

          • waiting on <0xfffffffed7bf5598> (a java.lang.ref.ReferenceQueue$Lock)
            at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
          • locked <0xfffffffed7bf5598> (a java.lang.ref.ReferenceQueue$Lock)
            at sun.rmi.transport.DGCClient$EndpointEntry$RenewCleanThread.run(DGCClient.java:516)
            at java.lang.Thread.run(Thread.java:619)

          "RMI Scheduler(0)" daemon prio=3 tid=0x0000000100dc9800 nid=0x14 waiting on condition [0xfffffffd6defe000]
          java.lang.Thread.State: TIMED_WAITING (parking)
          at sun.misc.Unsafe.park(Native Method)

          • parking to wait for <0xfffffffd780eb878> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
            at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
            at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
            at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:583)
            at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:576)
            at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
            at java.lang.Thread.run(Thread.java:619)

          "GC Daemon" daemon prio=3 tid=0x000000010126f800 nid=0x12 in Object.wait() [0xfffffffd6e2ff000]
          java.lang.Thread.State: TIMED_WAITING (on object monitor)
          at java.lang.Object.wait(Native Method)

          • waiting on <0xfffffffd780001d8> (a sun.misc.GC$LatencyLock)
            at sun.misc.GC$Daemon.run(GC.java:100)
          • locked <0xfffffffd780001d8> (a sun.misc.GC$LatencyLock)

          "RMI Reaper" prio=3 tid=0x00000001010f9000 nid=0x11 in Object.wait() [0xfffffffd6e4ff000]
          java.lang.Thread.State: WAITING (on object monitor)
          at java.lang.Object.wait(Native Method)

          • waiting on <0xfffffffd78000290> (a java.lang.ref.ReferenceQueue$Lock)
            at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
          • locked <0xfffffffd78000290> (a java.lang.ref.ReferenceQueue$Lock)
            at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
            at sun.rmi.transport.ObjectTable$Reaper.run(ObjectTable.java:333)
            at java.lang.Thread.run(Thread.java:619)

          "RMI TCP Accept-0" daemon prio=3 tid=0x0000000101009800 nid=0x10 runnable [0xfffffffd6e6fe000]
          java.lang.Thread.State: RUNNABLE
          at java.net.PlainSocketImpl.socketAccept(Native Method)
          at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)

          • locked <0xfffffffd782d55d0> (a java.net.SocksSocketImpl)
            at java.net.ServerSocket.implAccept(ServerSocket.java:453)
            at java.net.ServerSocket.accept(ServerSocket.java:421)
            at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
            at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
            at java.lang.Thread.run(Thread.java:619)

          "RMI TCP Accept-2011" daemon prio=3 tid=0x0000000101139800 nid=0xe runnable [0xfffffffd6eafe000]
          java.lang.Thread.State: RUNNABLE
          at java.net.PlainSocketImpl.socketAccept(Native Method)
          at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)

          • locked <0xfffffffd780007b0> (a java.net.SocksSocketImpl)
            at java.net.ServerSocket.implAccept(ServerSocket.java:453)
            at java.net.ServerSocket.accept(ServerSocket.java:421)
            at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
            at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
            at java.lang.Thread.run(Thread.java:619)

          "RMI TCP Accept-0" daemon prio=3 tid=0x00000001007ba800 nid=0xc runnable [0xfffffffd6eefe000]
          java.lang.Thread.State: RUNNABLE
          at java.net.PlainSocketImpl.socketAccept(Native Method)
          at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)

          • locked <0xfffffffd7807dcc8> (a java.net.SocksSocketImpl)
            at java.net.ServerSocket.implAccept(ServerSocket.java:453)
            at java.net.ServerSocket.accept(ServerSocket.java:421)
            at sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:34)
            at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
            at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
            at java.lang.Thread.run(Thread.java:619)

          "Low Memory Detector" daemon prio=3 tid=0x00000001004fe000 nid=0xb runnable [0x0000000000000000]
          java.lang.Thread.State: RUNNABLE

          "CompilerThread1" daemon prio=3 tid=0x00000001004f7800 nid=0xa waiting on condition [0x0000000000000000]
          java.lang.Thread.State: RUNNABLE

          "CompilerThread0" daemon prio=3 tid=0x00000001004f4800 nid=0x9 waiting on condition [0x0000000000000000]
          java.lang.Thread.State: RUNNABLE

          "Signal Dispatcher" daemon prio=3 tid=0x00000001004f2800 nid=0x8 waiting on condition [0x0000000000000000]
          java.lang.Thread.State: RUNNABLE

          "Finalizer" daemon prio=3 tid=0x00000001004d2800 nid=0x7 in Object.wait() [0xfffffffd704ff000]
          java.lang.Thread.State: WAITING (on object monitor)
          at java.lang.Object.wait(Native Method)

          • waiting on <0xfffffffd7807e7d0> (a java.lang.ref.ReferenceQueue$Lock)
            at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
          • locked <0xfffffffd7807e7d0> (a java.lang.ref.ReferenceQueue$Lock)
            at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
            at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

          "Reference Handler" daemon prio=3 tid=0x00000001004cb800 nid=0x6 in Object.wait() [0xfffffffd706ff000]
          java.lang.Thread.State: WAITING (on object monitor)
          at java.lang.Object.wait(Native Method)

          • waiting on <0xfffffffd780001b8> (a java.lang.ref.Reference$Lock)
            at java.lang.Object.wait(Object.java:485)
            at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
          • locked <0xfffffffd780001b8> (a java.lang.ref.Reference$Lock)

          "main" prio=3 tid=0x0000000100114800 nid=0x2 in Object.wait() [0xffffffff7c4fd000]
          java.lang.Thread.State: WAITING (on object monitor)
          at java.lang.Object.wait(Native Method)

          • waiting on <0xffffffff03266308> (a [Z)
            at java.lang.Object.wait(Object.java:485)
            at org.apache.activemq.console.command.StartCommand.waitForShutdown(StartCommand.java:161)
          • locked <0xffffffff03266308> (a [Z)
            at org.apache.activemq.console.command.StartCommand.runTask(StartCommand.java:104)
            at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:57)
            at org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:143)
            at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:57)
            at org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:85)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.activemq.console.Main.runTaskClass(Main.java:251)
            at org.apache.activemq.console.Main.main(Main.java:107)

          "VM Thread" prio=3 tid=0x00000001004c7800 nid=0x5 runnable

          "GC task thread#0 (ParallelGC)" prio=3 tid=0x000000010011c000 nid=0x3 runnable

          "GC task thread#1 (ParallelGC)" prio=3 tid=0x0000000100128000 nid=0x4 runnable

          "VM Periodic Task Thread" prio=3 tid=0x000000010066c000 nid=0xd waiting on condition

          JNI global references: 767

          Heap
          PSYoungGen total 2447872K, used 1608619K [0xfffffffecd800000, 0xffffffff78400000, 0xffffffff78400000)
          eden space 2098176K, 76% used [0xfffffffecd800000,0xffffffff2faeaca0,0xffffffff4d900000)
          from space 349696K, 0% used [0xffffffff4d900000,0xffffffff4d900000,0xffffffff62e80000)
          to space 349696K, 0% used [0xffffffff62e80000,0xffffffff62e80000,0xffffffff78400000)
          PSOldGen total 5595136K, used 4074K [0xfffffffd78000000, 0xfffffffecd800000, 0xfffffffecd800000)
          object space 5595136K, 0% used [0xfffffffd78000000,0xfffffffd783fa8a8,0xfffffffecd800000)
          PSPermGen total 45056K, used 44664K [0xfffffffd72c00000, 0xfffffffd75800000, 0xfffffffd78000000)
          object space 45056K, 99% used [0xfffffffd72c00000,0xfffffffd7579e1b8,0xfffffffd75800000)

          [mdtapdot01]#
          [mdtapdot01]# INFO | waiting for broker persistence adapter checkpoint to succeed before restarting transports
          INFO | waiting for broker persistence adapter checkpoint to succeed before restarting transports
          INFO | waiting for broker persistence adapter checkpoint to succeed before restarting transports

          Oleg Kiorsak added a comment -

          I also see a message in the log that might explain why the transport connectors don't get restarted:

          [mdtapdot01]# WARN | Failure occurred while restarting broker connectors
          java.io.IOException: Transport Connector could not be registered in JMX: org.apache.activemq:BrokerName=MobTechTest1,Type=Connector,ConnectorName=openwire
          at org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:27)
          at org.apache.activemq.broker.BrokerService.registerConnectorMBean(BrokerService.java:1670)
          at org.apache.activemq.broker.BrokerService.startTransportConnector(BrokerService.java:2159)
          at org.apache.activemq.broker.BrokerService.startAllConnectors(BrokerService.java:2073)
          at org.apache.activemq.util.DefaultIOExceptionHandler$2.run(DefaultIOExceptionHandler.java:99)
          Caused by: javax.management.InstanceAlreadyExistsException: org.apache.activemq:BrokerName=MobTechTest1,Type=Connector,ConnectorName=openwire
          at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
          at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
          at org.apache.activemq.broker.jmx.ManagementContext.registerMBean(ManagementContext.java:299)
          at org.apache.activemq.broker.jmx.AnnotatedMBean.registerMBean(AnnotatedMBean.java:65)
          at org.apache.activemq.broker.BrokerService.registerConnectorMBean(BrokerService.java:1667)
          ... 3 more

          [mdtapdot01]#

          Jan Lievens added a comment -

          When not using useDatabaseLock="true" on the jdbcPersistenceAdapter, I still see the broker go down on a stopped database.
          With useDatabaseLock="false", the broker keeps going on a stopped database, but accepted messages disappear completely (tested with the web console). One should be warned before using this setting.

          <bean id="ioExceptionHandler" class="org.apache.activemq.util.DefaultIOExceptionHandler">
            <property name="ignoreAllErrors"><value>false</value></property>
            <property name="stopStartConnectors"><value>true</value></property>
          </bean>

          Gary Tully added a comment -

          https://issues.apache.org/jira/browse/AMQ-4575 is relevant.

          kishan added a comment -

          Please have a look at https://issues.apache.org/jira/browse/AMQ-4911 - we would appreciate it if you could try to solve it.


            People

            • Assignee: Gary Tully
            • Reporter: Jaya Srinivasan
            • Votes: 3
            • Watchers: 12
