ActiveMQ Classic / AMQ-6644

Incorrect logging from KahaDB cleanup task when enableAckCompaction=true


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 5.14.0
    • Fix Version/s: 5.15.0, 5.14.5
    • Component/s: KahaDB
    • Labels: None

    Description

      When KahaDB is configured with enableAckCompaction=true, the compaction task moves acks into a new journal file. Such a journal file contains only the compacted acks; it is not used to hold messages.
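
      For context, here is a minimal sketch of turning the feature on for an embedded broker. It assumes the KahaDBPersistenceAdapter setters that back the corresponding <kahaDB/> XML attributes (enableAckCompaction, compactAcksAfterNoGC); treat it as illustrative rather than a recommended configuration.

      // Illustrative only: enable ack compaction on an embedded broker.
      import java.io.File;
      import org.apache.activemq.broker.BrokerService;
      import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

      public class AckCompactionConfig {
          public static void main(String[] args) throws Exception {
              KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
              kahaDB.setDirectory(new File("target/kahadb"));
              // Rewrite acks for messages in older journal files into a new journal
              // file so that the older files become eligible for cleanup.
              kahaDB.setEnableAckCompaction(true);
              // Trigger compaction after this many cleanup cycles without progress.
              kahaDB.setCompactAcksAfterNoGC(10);

              BrokerService broker = new BrokerService();
              broker.setPersistenceAdapter(kahaDB);
              broker.start();
          }
      }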

      If the current journal (the one new messages are written to) has a lower number than the journal files that were created during ack compaction, the periodic cleanup task will not delete any journal files numbered higher than the current journal. So multiple journal files may remain on disk although there is not a single unconsumed message on the broker.
      This in itself is okay. However, when trace logging for the cleanup task is enabled, it reports otherwise, namely that it is going to delete these journals, when in fact it does not delete them.
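
      The retention rule can be pictured with a small, purely hypothetical sketch (the names below are illustrative and not taken from the actual MessageDatabase code): of the cleanup candidates, only files numbered below the current write file are actually removable.

      // Hypothetical illustration of the retention rule, not the real cleanup code.
      import java.util.Arrays;
      import java.util.SortedSet;
      import java.util.TreeSet;

      public class CleanupRuleSketch {
          // Journal files numbered at or above the current write file are kept,
          // even if they show up in the GC candidate set.
          static SortedSet<Integer> removable(SortedSet<Integer> candidates, int currentWriteFile) {
              SortedSet<Integer> result = new TreeSet<>();
              for (Integer fileId : candidates) {
                  if (fileId < currentWriteFile) {
                      result.add(fileId);
                  }
              }
              return result;
          }

          public static void main(String[] args) {
              SortedSet<Integer> candidates = new TreeSet<>(Arrays.asList(66, 67, 68, 69));
              // With the current write file at 65, nothing can be removed yet.
              System.out.println(removable(candidates, 65)); // prints []
          }
      }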

      For example, take the following case.
      The KahaDB folder on disk contains:

      [kahadb]$ ls -alh
      total 54M
      drwxr-xr-x.  2 fuse fuse  128K Feb  1 15:50 .
      drwxr-xr-x. 13 fuse fuse  4.0K Nov  4 13:14 ..
      -rw-r--r--.  1 fuse fuse   32M Feb  1 16:26 db-65.log
      -rw-r--r--.  1 fuse fuse  4.6M Feb  1 15:24 db-66.log
      -rw-r--r--.  1 fuse fuse  4.5M Feb  1 15:29 db-67.log
      -rw-r--r--.  1 fuse fuse  4.6M Feb  1 15:34 db-68.log
      -rw-r--r--.  1 fuse fuse  4.5M Feb  1 15:39 db-69.log
      -rw-r--r--.  1 fuse fuse  2.5M Feb  1 16:26 db.data
      -rw-r--r--.  1 fuse fuse   32M Feb  1 14:51 db-log.template
      -rw-r--r--.  1 fuse fuse 1002K Feb  1 16:26 db.redo
      -rw-r--r--.  1 fuse fuse     8 Feb  1 14:51 lock
      

      and the logging says:

      Last update: 65:26636520, full gc candidates set: [65, 66, 67, 68, 69]
      gc candidates after producerSequenceIdTrackerLocation:65, [66, 67, 68, 69]
      gc candidates after ackMessageFileMapLocation:65, [66, 67, 68, 69]
      ...
      gc candidates: [66, 67, 68, 69]
          ackMessageFileMap: {65=[65]}
      Cleanup removing the data files: [66, 67, 68, 69]
      

      In this example the current journal file, to which messages are written, is 65. The journal files 66-69 were created during ack compaction and have higher numbers than 65.
      So KahaDB will not delete journals 66-69 until the current journal rolls over to 70, even though there are no unconsumed messages on the broker.
      However, the last log line suggests that it will remove journals 66-69, although they will not be removed because of the rule above.

      We should align the logging output with the logic used to determine which journals to delete.
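
      One hypothetical way to do that (again with illustrative names, not the change that was actually committed) is to split the candidate set into the files that really will be removed and the files retained because they are newer than the current journal, and to log each group separately:

      // Hypothetical sketch only: derive the trace output from the set that will
      // actually be deleted, instead of from the raw candidate set.
      import java.util.Arrays;
      import java.util.SortedSet;
      import java.util.TreeSet;

      public class CleanupLoggingSketch {
          public static void main(String[] args) {
              SortedSet<Integer> gcCandidates = new TreeSet<>(Arrays.asList(66, 67, 68, 69));
              int currentWriteFile = 65;

              SortedSet<Integer> removable = new TreeSet<>();
              SortedSet<Integer> retained = new TreeSet<>();
              for (Integer fileId : gcCandidates) {
                  (fileId < currentWriteFile ? removable : retained).add(fileId);
              }

              if (!removable.isEmpty()) {
                  System.out.println("Cleanup removing the data files: " + removable);
              }
              if (!retained.isEmpty()) {
                  System.out.println("Cleanup not removing data files newer than the current journal "
                          + currentWriteFile + ": " + retained);
              }
          }
      }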

          People

            Assignee: gtully (Gary Tully)
            Reporter: gtully (Gary Tully)