ActiveMQ Classic / AMQ-3076

spurious KahaDB warnings


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 5.4.0, 5.4.1, 5.4.2
    • Fix Version/s: 5.5.0
    • Component/s: Message Store
    • Labels: None

    Description

      Please reduce this to DEBUG or remove it altogether; see the discussion from the mailing list below.

      Thanks.

      2010-12-09 09:31:46,613 | WARN | KahaDB PageFile flush: 3 queued writes, latch wait took 142 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      2010-12-09 09:32:52,240 | WARN | KahaDB PageFile flush: 3 queued writes, latch wait took 117 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      2010-12-09 09:32:57,377 | WARN | KahaDB PageFile flush: 3 queued writes, latch wait took 116 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      2010-12-09 09:34:03,052 | WARN | KahaDB PageFile flush: 3 queued writes, latch wait took 111 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      2010-12-09 09:34:08,276 | WARN | KahaDB PageFile flush: 3 queued writes, latch wait took 202 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      2010-12-09 09:34:53,207 | WARN | KahaDB PageFile flush: 3 queued writes, latch wait took 208 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      2010-12-09 09:35:28,377 | WARN | KahaDB PageFile flush: 3 queued writes, latch wait took 283 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
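
      Until the logging level is changed in the code, the noise can be suppressed from the broker's logging configuration. A minimal sketch, assuming the broker logs through log4j and that its conf/log4j.properties is editable (both assumptions here); the category name is taken from the log excerpt above:

      # Sketch only: raise the PageFile category above WARN so the
      # "latch wait took" messages are dropped from the log.
      log4j.logger.org.apache.kahadb.page.PageFile=ERROR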

      From: Gary Tully <gary.tully-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
      Subject: Re: KahaDB latch wait warnings
      Newsgroups: gmane.comp.java.activemq.user
      Date: Wed, 8 Dec 2010 15:24:00 +0000

      In the main it is not important; that should be at debug or trace
      level logging, or removed altogether. It is just an indication of the
      pagefile sync-to-disk latency and the number of concurrent writes that
      are pending at the time, a remnant of some performance tuning work
      that was done for 5.4.0. The 100ms limit is arbitrary.
      Do you mind tracking this with a JIRA issue, as it will probably come up again?

      On 8 December 2010 13:54, Aleksandar Ivanisevic
      <aleksandar-9OxODCspnFtM+jpbqlvknA@public.gmane.org> wrote:
      >
      >
      > Just switched to kahadb on my amq 5.4.1 (fuse) and the log is filling
      > with this:
      >
      >
      > 2010-12-08 14:26:12,668 | WARN  | KahaDB PageFile flush: 3 queued writes, latch wait took 119 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      > 2010-12-08 14:28:03,769 | WARN  | KahaDB PageFile flush: 7 queued writes, latch wait took 140 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      > 2010-12-08 14:28:39,125 | WARN  | KahaDB PageFile flush: 3 queued writes, latch wait took 112 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      > 2010-12-08 14:30:04,928 | WARN  | KahaDB PageFile flush: 8 queued writes, latch wait took 109 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      > 2010-12-08 14:30:28,788 | WARN  | KahaDB PageFile flush: 8 queued writes, latch wait took 18839 | org.apache.kahadb.page.PageFile | ActiveMQ Journal Checkpoint Worker
      >
      > a quick code search shows that this warning threshold is fixed at 100ms
      >
      > http://bit.ly/gYH1Zu
      >
      > why 100ms and why is this important?
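
      For reference, a minimal self-contained sketch of the pattern described above: the checkpoint worker times its wait on the flush latch and, once this issue is applied, reports the latency at DEBUG rather than WARN. This is illustrative only, not the KahaDB source; the class, method, and field names, the threshold constant, and the use of slf4j are assumptions.

      import java.util.concurrent.CountDownLatch;

      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;

      class FlushLatchTimingSketch {
          private static final Logger LOG = LoggerFactory.getLogger(FlushLatchTimingSketch.class);

          // 100 ms threshold -- arbitrary, per the mailing list discussion above.
          private static final long SLOW_FLUSH_MS = 100;

          void waitForFlush(CountDownLatch flushDone, int queuedWrites) throws InterruptedException {
              long start = System.currentTimeMillis();
              flushDone.await();                      // block until the queued writes have hit disk
              long waited = System.currentTimeMillis() - start;
              if (waited > SLOW_FLUSH_MS && LOG.isDebugEnabled()) {
                  // Logged at DEBUG instead of WARN, as requested in this issue.
                  LOG.debug("PageFile flush: " + queuedWrites
                          + " queued writes, latch wait took " + waited);
              }
          }
      }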

      Attachments

        Activity


          People

            Assignee: Gary Tully (gtully)
            Reporter: Aleksandar Ivanisevic (aivanise)
            Votes: 0
            Watchers: 1

            Dates

              Created:
              Updated:
              Resolved:
