Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 5.8.0
Description
Our system unexpectedly ran out of free disk space. The ActiveMQ broker shut down, but after the disk was cleaned up it failed to start, throwing an exception:
2014-01-31 07:59:46,297 | ERROR | Failed to page in more queue messages | org.apache.activemq.broker.region.Queue | ActiveMQ BrokerService[CORE_FuseMQ] Task-149
java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Could not locate data file /opt/broker/data/kahadb/db-97203.log
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:113)
at org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
at org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1775)
at org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2003)
at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1491)
at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.RuntimeException: java.io.IOException: Could not locate data file /opt/broker/data/kahadb/db-97203.log
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:277)
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:110)
... 9 more
Caused by: java.io.IOException: Could not locate data file /opt/broker/data/kahadb/db-97203.log
at org.apache.activemq.store.kahadb.disk.journal.Journal.getDataFile(Journal.java:353)
at org.apache.activemq.store.kahadb.disk.journal.Journal.read(Journal.java:600)
at org.apache.activemq.store.kahadb.MessageDatabase.load(MessageDatabase.java:939)
at org.apache.activemq.store.kahadb.KahaDBStore.loadMessage(KahaDBStore.java:1029)
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$4.execute(KahaDBStore.java:557)
at org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:779)
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recoverNextMessages(KahaDBStore.java:546)
at org.apache.activemq.store.ProxyMessageStore.recoverNextMessages(ProxyMessageStore.java:106)
at org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:97)
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:274)
... 10 more
2014-01-31 07:59:47,305 | ERROR | Looking for key 97203 but not found in fileMap: {55354=db-55354.log number = 55354 , length = 33563112, 93728=db-93728.log number = 93728 , length = 33554796, 9373
2014-01-31 07:59:47,307 | ERROR | org.apache.activemq.broker.region.cursors.QueueStorePrefetch@3a252a17:mdm.kis,batchResetNeeded=false,storeHasMessages=true,size=157,cacheEnabled=false,maxBatchSize
java.io.IOException: Could not locate data file /opt/broker/data/kahadb/db-97203.log
at org.apache.activemq.store.kahadb.disk.journal.Journal.getDataFile(Journal.java:353)
at org.apache.activemq.store.kahadb.disk.journal.Journal.read(Journal.java:600)
at org.apache.activemq.store.kahadb.MessageDatabase.load(MessageDatabase.java:939)
at org.apache.activemq.store.kahadb.KahaDBStore.loadMessage(KahaDBStore.java:1029)
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$4.execute(KahaDBStore.java:557)
at org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:779)
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recoverNextMessages(KahaDBStore.java:546)
at org.apache.activemq.store.ProxyMessageStore.recoverNextMessages(ProxyMessageStore.java:106)
at org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:97)
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:274)
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:110)
at org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
at org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1775)
at org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2003)
at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1491)
at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
[...]
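The failing lookup ("Looking for key 97203 but not found in fileMap") shows the KahaDB index still referencing journal file db-97203.log after that file had disappeared from disk. For reference, KahaDB can be told to tolerate such missing journal files at start-up instead of failing the queue cursor; below is a minimal sketch using the programmatic broker API (the option names exist on KahaDBPersistenceAdapter; whether silently skipping lost records is acceptable is a per-deployment trade-off):

import java.io.File;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class TolerantBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("CORE_FuseMQ");

        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("/opt/broker/data/kahadb"));
        // Do not fail start-up when the index references a journal
        // data file that no longer exists on disk.
        kahaDB.setIgnoreMissingJournalfiles(true);
        // Check the journal for corrupt entries while replaying it.
        kahaDB.setCheckForCorruptJournalFiles(true);
        kahaDB.setChecksumJournalFiles(true);

        broker.setPersistenceAdapter(kahaDB);
        broker.start();
        broker.waitUntilStopped();
    }
}

The same options can also be set as attributes on the kahaDB element in activemq.xml.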
We rebuilt the metadata as described here: https://access.redhat.com/site/documentation/en-US/Fuse_MQ_Enterprise/7.1/html/Configuring_Broker_Persistence/files/KahaDB-Recovery.html
Steps performed to rebuild (a code sketch follows the list):
1. Stop broker
2. Delete db.data
3. Start broker
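A minimal sketch of the same procedure (the paths are from this installation; db.data is KahaDB's index file, which the broker rebuilds from the db-*.log journal on the next start):

import java.io.File;

public class RebuildKahaDbIndex {
    public static void main(String[] args) {
        // Step 1: the broker must already be stopped before touching the store.
        File index = new File("/opt/broker/data/kahadb/db.data");

        // Step 2: delete only the index; the journal files (db-*.log) stay.
        if (index.exists() && !index.delete()) {
            throw new IllegalStateException("could not delete " + index);
        }

        // Step 3: restart the broker; KahaDB replays the journal and
        // rebuilds db.data, and with it the per-queue indexes.
        System.out.println("db.data removed; restart the broker to rebuild the index");
    }
}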
Afterwards the broker started correctly, without exceptions. The problem is that "new" messages appeared in some queues. Those queues had held zero messages before the metadata recovery. The messages that appeared were quite outdated and had already been delivered several months earlier. This is dangerous behaviour, because unexpected redelivery may in some cases harm consumers.
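Until the broker-side bug is fixed, consumers can defend against such replays by deduplicating on JMSMessageID. A hypothetical sketch (the class name and cache size are invented; the JMS calls are standard):

import java.util.LinkedHashMap;
import java.util.Map;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical consumer-side guard: remember recently seen JMSMessageIDs
// and silently drop any message that arrives a second time.
public class DeduplicatingListener implements MessageListener {

    // Bounded, insertion-ordered set of message ids; the oldest entry is
    // evicted past 100000 entries (an arbitrary cap).
    private final Map<String, Boolean> seen = new LinkedHashMap<String, Boolean>() {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
            return size() > 100000;
        }
    };

    public void onMessage(Message message) {
        try {
            String id = message.getJMSMessageID();
            synchronized (seen) {
                if (seen.containsKey(id)) {
                    return; // replayed copy of an already-processed message
                }
                seen.put(id, Boolean.TRUE);
            }
            process(message);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private void process(Message message) {
        // application logic goes here
    }
}

An in-memory cache like this only covers a single consumer lifetime; for months-old replays like the ones above, the seen-id store would need to be persistent.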
Issue Links
- duplicates AMQ-2832: Possible replay of old messages post index recovery from journal - data files containing acks reclaimed/cleaned up in error (Resolved)