Hadoop Common / HADOOP-5883

TaskMemoryMonitorThread might shoot down tasks even if their processes momentarily exceed the requested memory


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.20.1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Currently the TaskMemoryMonitorThread kills a task as soon as it detects that the task is consuming more memory than the configured maximum. There are valid cases (see HADOOP-5059) where a program spawned by the task can momentarily occupy about twice the requested memory for a short time, for instance between a fork() and the following exec(), when the child briefly duplicates the parent's virtual address space. Ideally the monitoring thread should tolerate such transient spikes instead of killing the task immediately.
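
      A minimal sketch of one way to do this: require the task's process tree to stay over its limit for more than one monitoring cycle before killing it, so that a short-lived spike is forgiven. The names below (TolerantMemoryCheck, shouldKill, GRACE_ITERATIONS) are hypothetical illustrations of the idea, not the actual Hadoop code or the committed patch.

          import java.util.HashMap;
          import java.util.Map;

          public class TolerantMemoryCheck {
              // Consecutive over-limit observations tolerated before a kill.
              private static final int GRACE_ITERATIONS = 2;

              // Task attempt id -> consecutive over-limit observation count.
              private final Map<String, Integer> overLimitCounts = new HashMap<>();

              // Returns true only when the task has stayed over its limit for
              // GRACE_ITERATIONS consecutive monitoring cycles, so a momentary
              // spike (e.g. a fork() briefly doubling the process tree's
              // virtual memory) does not get the task shot down.
              public boolean shouldKill(String taskId, long memUsed, long memLimit) {
                  if (memUsed <= memLimit) {
                      overLimitCounts.remove(taskId); // back under the limit: reset
                      return false;
                  }
                  int count = overLimitCounts.getOrDefault(taskId, 0) + 1;
                  overLimitCounts.put(taskId, count);
                  return count >= GRACE_ITERATIONS;
              }
          }

      The monitoring thread's periodic loop would consult such a check on each memory reading rather than killing on the first over-limit observation.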

      Attachments

        1. HADOOP-5883-20.patch (29 kB, Hemanth Yamijala)
        2. HADOOP-5883-20.patch (29 kB, Hemanth Yamijala)
        3. HADOOP-5883.patch (28 kB, Hemanth Yamijala)
        4. HADOOP-5883.patch (29 kB, Hemanth Yamijala)
        5. HADOOP-5883.patch (29 kB, Hemanth Yamijala)

        Issue Links

        Activity


          People

            Assignee: Hemanth Yamijala (yhemanth)
            Reporter: Hemanth Yamijala (yhemanth)
            Votes: 0
            Watchers: 5

            Dates

              Created:
              Updated:
              Resolved:
