ActiveMQ Classic
AMQ-732

Infinite recovery loop.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 4.0
    • Fix Version/s: 5.1.0, 4.0.3
    • Component/s: Broker
    • Labels: None
    • Environment: Linux RHEL 3

    Description

      The simplest way to reproduce the problem:
      1. Remove storage directory.
      2. Start broker using the following code:

      public static void main(String[] args) throws Exception {
          BrokerService broker = new BrokerService();
          broker.setPersistent(true);
          DefaultPersistenceAdapterFactory pFactory = new DefaultPersistenceAdapterFactory();
          pFactory.setJournalLogFiles(1);
          pFactory.setJournalLogFileSize(2048);
          broker.setPersistenceFactory(pFactory);
          broker.setUseJmx(false);
          broker.addConnector("tcp://localhost:61616");
          broker.start();
          Thread.sleep(1000000000000L);
      }

      3. Shutdown the broker.
      4. Start broker.
      It enters an infinite loop:

      [ main] BrokerService INFO ActiveMQ null JMS Message Broker (localhost) is starting
      [ main] BrokerService INFO For help or more information please see: http://incubator.apache.org/activemq/
      [ main] JDBCPersistenceAdapter INFO Database driver recognized: [apache_derby_embedded_jdbc_driver]
      [ main] DefaultJDBCAdapter DEBUG Executing SQL: CREATE TABLE ACTIVEMQ_MSGS(ID INTEGER NOT NULL, CONTAINER VARCHAR(250), MSGID_PROD VARCHAR(250), MSGID_SEQ INTEGER, EXPIRATION BIGINT, MSG BLOB, PRIMARY KEY ( ID ) )
      [ main] DefaultJDBCAdapter DEBUG Could not create JDBC tables; The message table already existed. Failure was: CREATE TABLE ACTIVEMQ_MSGS(ID INTEGER NOT NULL, CONTAINER VARCHAR(250), MSGID_PROD VARCHAR(250), MSGID_SEQ INTEGER, EXPIRATION BIGINT, MSG BLOB, PRIMARY KEY ( ID ) ) Message: Table/View 'ACTIVEMQ_MSGS' already exists in Schema 'APP'. SQLState: X0Y32 Vendor code: 20000
      [ main] DefaultJDBCAdapter DEBUG Executing SQL: CREATE INDEX ACTIVEMQ_MSGS_MIDX ON ACTIVEMQ_MSGS (MSGID_PROD,MSGID_SEQ)
      [ main] DefaultJDBCAdapter DEBUG Executing SQL: CREATE INDEX ACTIVEMQ_MSGS_CIDX ON ACTIVEMQ_MSGS (CONTAINER)
      [ main] DefaultJDBCAdapter DEBUG Executing SQL: CREATE INDEX ACTIVEMQ_MSGS_EIDX ON ACTIVEMQ_MSGS (EXPIRATION)
      [ main] DefaultJDBCAdapter DEBUG Executing SQL: CREATE TABLE ACTIVEMQ_ACKS(CONTAINER VARCHAR(250) NOT NULL, CLIENT_ID VARCHAR(250) NOT NULL, SUB_NAME VARCHAR(250) NOT NULL, SELECTOR VARCHAR(250), LAST_ACKED_ID INTEGER, PRIMARY KEY ( CONTAINER, CLIENT_ID, SUB_NAME))
      [ main] DefaultJDBCAdapter DEBUG Could not create JDBC tables; The message table already existed. Failure was: CREATE TABLE ACTIVEMQ_ACKS(CONTAINER VARCHAR(250) NOT NULL, CLIENT_ID VARCHAR(250) NOT NULL, SUB_NAME VARCHAR(250) NOT NULL, SELECTOR VARCHAR(250), LAST_ACKED_ID INTEGER, PRIMARY KEY ( CONTAINER, CLIENT_ID, SUB_NAME)) Message: Table/View 'ACTIVEMQ_ACKS' already exists in Schema 'APP'. SQLState: X0Y32 Vendor code: 20000
      [ main] JDBCPersistenceAdapter DEBUG Cleaning up old messages.
      [ main] DefaultJDBCAdapter DEBUG Executing SQL: DELETE FROM ACTIVEMQ_MSGS WHERE ( EXPIRATION<>0 AND EXPIRATION<?) OR ID <= ( SELECT min(ACTIVEMQ_ACKS.LAST_ACKED_ID) FROM ACTIVEMQ_ACKS WHERE ACTIVEMQ_ACKS.CONTAINER=ACTIVEMQ_MSGS.CONTAINER)
      [ main] DefaultJDBCAdapter DEBUG Deleted 0 old message(s).
      [ main] JDBCPersistenceAdapter DEBUG Cleanup done.
      [ main] JournalPersistenceAdapter INFO Journal Recovery Started from: Active Journal: using 1 x 0.001953125 Megs at: /workplace/fateev/install/activemq-4.0-SNAPSHOT/activemq-core/activemq-data/journal
      [ main] JournalPersistenceAdapter DEBUG TRACE Entry: RECOVERED
      [ Journal Writer] LogFileManager DEBUG getNextDataRecordLocation offset=53
      [ Journal Writer] LogFileManager DEBUG getNextDataRecordLocation offset=97
      [ Journal Writer] LogFileManager DEBUG getNextDataRecordLocation overflowing into next logFile=0
      [ main] JournalPersistenceAdapter DEBUG TRACE Entry: RECOVERED
      [ Journal Writer] LogFileManager DEBUG getNextDataRecordLocation offset=53
      [ Journal Writer] LogFileManager DEBUG getNextDataRecordLocation offset=97
      [ Journal Writer] LogFileManager DEBUG getNextDataRecordLocation overflowing into next logFile=0
      [ main] JournalPersistenceAdapter DEBUG TRACE Entry: RECOVERED
      [ Journal Writer] LogFileManager DEBUG getNextDataRecordLocation offset=53
      [ Journal Writer] LogFileManager DEBUG getNextDataRecordLocation offset=97
      [ Journal Writer] LogFileManager DEBUG getNextDataRecordLocation overflowing into next logFile=0

      The getNextDataRecordLocation entries come from a log statement I added to LogFileManager.getNextDataRecordLocation:

      public Location getNextDataRecordLocation(Location lastLocation) throws IOException, InvalidRecordLocationException {
          RecordInfo ri = readRecordInfo(lastLocation);
          while (true) {
              int logFileId = ri.getLocation().getLogFileId();
              int offset = ri.getNextLocation();
              log.debug("getNextDataRecordLocation offset=" + offset);
              // Are we overflowing into next logFile?
              if (offset >= ri.getLogFileState().getAppendOffset()) {
                  LogFileNode nextActive = ri.getLogFileState().getNextActive();
                  log.debug("getNextDataRecordLocation overflowing into next logFile="
                          + (nextActive == null ? "null" : String.valueOf(nextActive.getId())));
                  if (nextActive == null) {
                      return null;
                  }
                  logFileId = nextActive.getId();
                  offset = 0;
              }
              try {
                  ri = readRecordInfo(new Location(logFileId, offset));
              } catch (InvalidRecordLocationException e) {
                  return null;
              }
              // Is the next record the right record type?
              if (ri.getHeader().getRecordType() == DATA_RECORD_TYPE) {
                  return ri.getLocation();
              }
              // No? Go on to the next record.
          }
      }

      It looks like recovery does not take into account that the journal storage files have to end at some point.

      A similar problem occurs if multiple, larger files are used. It happens every time one of the log files grows bigger than the size specified in the pFactory.setJournalLogFileSize(...) call.
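
      The suspected behaviour can be sketched as follows. This is hypothetical illustration code, not the actual ActiveMQ classes: it models active journal log files as a linked ring, so with journalLogFiles=1 the "next active" file is the same file and a naive forward scan never reaches the null that would stop it. Remembering where the recovery pass started (the stopOnWrap guard below, an assumed fix, not necessarily the one committed) gives the scan a termination condition.

      ```java
      public class JournalScanSketch {

          // Hypothetical stand-in for ActiveMQ's LogFileNode: active files form a ring.
          static final class LogFileNode {
              final int id;
              LogFileNode next;
              LogFileNode(int id) { this.id = id; }
          }

          /**
           * Walks records forward from offset 0 of the starting file, one record per
           * step. Returns the number of records visited, or -1 if maxSteps is hit,
           * i.e. the walk would never terminate on its own.
           */
          static int scan(LogFileNode start, int recordsPerFile, boolean stopOnWrap, int maxSteps) {
              LogFileNode current = start;
              int offset = 0;
              int visited = 0;
              while (visited < maxSteps) {
                  offset++;
                  if (offset >= recordsPerFile) {
                      LogFileNode next = current.next;
                      if (next == null) {
                          return visited; // the journal really ends here
                      }
                      if (stopOnWrap && next == start) {
                          return visited; // wrapped back to where recovery began
                      }
                      current = next;
                      offset = 0;
                  }
                  visited++;
              }
              return -1; // step budget exhausted: the scan loops forever
          }

          public static void main(String[] args) {
              // journalLogFiles=1: the single file's "next active" file is itself.
              LogFileNode only = new LogFileNode(0);
              only.next = only;

              System.out.println(scan(only, 3, false, 1000)); // -1: loops forever
              System.out.println(scan(only, 3, true, 1000));  //  2: stops at the wrap
          }
      }
      ```

      Without the guard the scan keeps re-reading the same two records and "overflowing into next logFile=0" indefinitely, which matches the repeating log output above.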

      Attachments

        1. activemq-data-2-journal.tar.gz
          706 kB
          Maxim Fateev
        2. activemq-data-1-journal.tar.gz
          82 kB
          Maxim Fateev


          People

            Assignee: Unassigned
            Reporter: mfateev (Maxim Fateev)
            Votes: 0
            Watchers: 0
