Flume / FLUME-926

Memory leak in 0.9.4


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Won't Fix
    • Affects Version/s: 0.9.4
    • Fix Version/s: 0.9.5
    • Component/s: Node
    • Environment: Debian Squeeze
      Sun Java 1.6.0_23 (32 or 64 bit depending on the collector)

    Description

      Without setting Xmx to a sane value for EC2 (between 250-500M depending on the server), a Flume collector in autoCollector mode with 8 logical streams consumes all available server RAM until it is gone. I have not been able to see any GC activity on the node/collector, which leads me to believe that Flume is leaving objects in an unreapable state. I restarted one Flume instance yesterday, and within 12 hours I have seen it take 75M of RAM for 6534 instances of the int[] object.
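
      For reference, a minimal sketch of one way to confirm GC activity and track the int[] counts, assuming the stock Sun JDK 6 tooling on the collector host (the PID is a placeholder):

        # Sample GC / heap utilization of the running collector every 5 seconds
        jstat -gcutil <flume-pid> 5000

        # Per-class histogram of live objects; int[] counts like the one above show up here
        jmap -histo:live <flume-pid> | head -n 20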

      We have it configured with 8 logical collectors per collector/node, separated via flows, with all source nodes in autoE2E mode. 4 flows write to HDFS and 4 flows write to S3.
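
      To make the flow layout concrete, here is a hedged sketch of the kind of flume shell configuration involved (node names, flow ids, and paths are illustrative placeholders, not the exact commands from this deployment):

        # collector side: one logical collector per flow on the physical collector node
        exec spawn collector01 hdfs-flow-1
        exec config hdfs-flow-1 flow1 'autoCollectorSource' 'collectorSink("hdfs://namenode/flume/flow1/", "log-")'

        exec spawn collector01 s3-flow-1
        exec config s3-flow-1 flow5 'autoCollectorSource' 'collectorSink("s3n://example-bucket/flume/flow1/", "log-")'

        # agent side: source nodes in the same flow use the automatic end-to-end chain
        exec config agent01 flow1 'tail("/var/log/app.log")' 'autoE2EChain'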

      I have been experiencing this every 2-3 days on the 32 bit machine (1.7 GB RAM) and every 10 or so days on the 64 bit machine (7 GB RAM).

      I have forced Xmx on both machines, but since GC does not appear to be reclaiming memory, I have a feeling they will OOM and crash (maybe the watchdog will be able to restart them, but that is unknown at this time).

      Attachments

        Activity


          People

            Assignee: Unassigned
            Reporter: Thomas Vachon (tvachon)
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved:
