
[YARN-5551] Ignore file backed pages from memory computation when smaps is enabled

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.0, 3.0.0-alpha2
    • Component/s: None
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      Currently, deleted file mappings are also included in the memory computation when smaps-based accounting is enabled. For example:

      7f612004a000-7f612004c000 rw-s 00000000 00:10 4201507513                 /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185 (deleted)
      Size:                  8 kB
      Rss:                   4 kB
      Pss:                   2 kB
      Shared_Clean:          0 kB
      Shared_Dirty:          4 kB
      Private_Clean:         0 kB
      Private_Dirty:         0 kB
      Referenced:            4 kB
      Anonymous:             0 kB
      AnonHugePages:         0 kB
      Swap:                  0 kB
      KernelPageSize:        4 kB
      MMUPageSize:           4 kB
      
      
      7fbf28000000-7fbf68000000 rw-s 00000000 08:02 11927571                   /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
      Size:            1048576 kB
      Rss:               17288 kB
      Pss:               17288 kB
      Shared_Clean:          0 kB
      Shared_Dirty:          0 kB
      Private_Clean:       232 kB
      Private_Dirty:     17056 kB
      Referenced:        17288 kB
      Anonymous:             0 kB
      AnonHugePages:         0 kB
      Swap:                  0 kB
      KernelPageSize:        4 kB
      MMUPageSize:           4 kB
      

      It would be good to exclude these from the getSmapBasedRssMemorySize() computation.
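
      A minimal sketch of the proposed exclusion, written as a standalone parser for illustration (the real change would live in ProcfsBasedProcessTree; this helper class is hypothetical):

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Paths;

      public class SmapsRssSketch {
        // Sums the Rss of every mapping in /proc/<pid>/smaps, skipping
        // mappings whose backing file has been deleted.
        public static long rssKbExcludingDeleted(String pid) throws IOException {
          long totalKb = 0;
          boolean skip = false;
          for (String line : Files.readAllLines(Paths.get("/proc", pid, "smaps"))) {
            if (line.matches("^[0-9a-f]+-[0-9a-f]+\\s.*")) {
              // Mapping header, e.g. "7fbf28000000-7fbf68000000 rw-s ... (deleted)"
              skip = line.endsWith("(deleted)");
            } else if (!skip && line.startsWith("Rss:")) {
              // Field line, e.g. "Rss:               17288 kB"
              totalKb += Long.parseLong(line.replaceAll("\\D+", ""));
            }
          }
          return totalKb;
        }
      }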

      1. YARN-5551.branch-2.001.patch
        9 kB
        Rajesh Balamohan
      2. YARN-5551.branch-2.002.patch
        8 kB
        Rajesh Balamohan
      3. YARN-5551.branch-2.003.patch
        9 kB
        Rajesh Balamohan

        Activity

        vvasudev Varun Vasudev added a comment -

        Nathan Roberts, Jason Lowe - do you mind reviewing the attached patch? It looks ok to me but you guys are more familiar with ProcfsBasedProcessTree.

        nroberts Nathan Roberts added a comment -

        Nathan Roberts, Jason Lowe - do you mind reviewing the attached patch? It looks ok to me but you guys are more familiar with ProcfsBasedProcessTree.

        I should have time to review this tomorrow. Hope that is ok.

        jlowe Jason Lowe added a comment -

        The "deleted" here refers to the fact that the file path no longer exists, but the mapping is still valid. Even though the file path no longer exists the process really is still using the memory described in that section of the smaps output. Therefore it is correct to account for that memory usage against the process. The storage behind that mapping will not be freed even though the path has been deleted because this process still has an active mapping against it.

        IMHO this should be closed as invalid.

        cnauroth Chris Nauroth added a comment -

        My understanding agrees with Jason's last comment. The mapping could last well past the deletion of the underlying file, maybe even for the whole lifetime of the process, so it's correct to include it in the accounting.

        gopalv Gopal V added a comment - - edited

        The storage behind that mapping will not be freed even though the path has been deleted because this process still has an active mapping against it.

        That's exactly the point - these are not really the process's own memory pages; they are pages borrowed from the buffer cache. Some of them are dirty and some are clean, which means they are not memory the process actually holds onto if there is any memory pressure.

        The ideal way for YARN to react would be to force a dirty flush for the specific process to reduce its memory footprint, instead of always killing the process when the observed footprint is too large - killing a process is not the only way to reclaim memory from it.

        Operating purely with kill signals is genuinely overkill.

        This implementation tries to be more forgiving of a process that has a large number of clean pages in memory backed by a disk cache file. Those pages are available to the process via read() or mmap(), yet YARN counts the OS's disk buffer pages differently when the process uses mmap().

        The underlying reality is the same even for dirty pages, since the writes are buffered through the buffer cache anyway; the write() syscall just moves the data out of the process's address space faster than mmap() + msync() does.
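
        As a standalone illustration of the two paths being contrasted (the file paths and sizes below are arbitrary): a plain write() hands the data to the kernel's buffer cache immediately, while an mmap'd write leaves dirty pages charged to the process until they are flushed.

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class WriteVsMmap {
          public static void main(String[] args) throws Exception {
            // Path 1: write() - the 1 MB lands in the kernel's buffer cache.
            Files.write(Paths.get("/tmp/write-path.dat"), new byte[1 << 20]);

            // Path 2: mmap - dirty pages stay in the process's mapping until
            // msync (force) pushes them out.
            try (RandomAccessFile raf = new RandomAccessFile("/tmp/mmap-path.dat", "rw")) {
              MappedByteBuffer buf =
                  raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20);
              for (int i = 0; i < buf.capacity(); i += 4096) {
                buf.put(i, (byte) 1);    // dirty one byte per page
              }
              buf.force();               // msync: flush dirty pages to disk
            }
          }
        }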

        cnauroth Chris Nauroth added a comment -

        OK, I get it now. Thanks, Gopal V. I'd be fine proceeding with the change. I'm not online until after Labor Day, so I can't do a full code review, test and commit. If anyone else wants to do it, please don't wait for me.

        jlowe Jason Lowe added a comment -

        Special-casing buffer cache pages is one thing, but I guess where I'm getting hung up is on the deleted part. Unless I'm mistaken, the OS isn't going to care whether the file is deleted or not when the process still has a mapping to it. Dirty pages will still be flushed to the store, and if the now-clean page is discarded to make room for something else and the process comes back to touch it again, we need that updated stored data to recreate the page. So in that sense I don't see why we're special-casing deleted files.

        gopalv Gopal V added a comment -

        I guess where I'm getting hung up is on the deleted part. Unless I'm mistaken, the OS isn't going to care whether the file is deleted or not when the process still has a mapping to it.

        Actually, that's just a safety rail to cut down IO here - when the process exits, the deleted file pages just disappear.

        So in that sense I don't see why we're special-casing deleted files.

        We can apply this patch to all file mappings actually - the special-casing was primarily to cut down the impact of the patch and reduce unintended consequences.

        For non-deleted files, I'd like IO isolation as well (i.e. the IO impact lasts past process death), but that's a harder problem to solve in the 2.7.x branch (definitely to be tackled in 3.x, specifically for a modern cgroups setup).

        jlowe Jason Lowe added a comment -

        Actually, that's just a safety rail to cut down IO here - when the process exits, the deleted file pages just disappear.

        True, but until that happens it acts just like an undeleted file unless I'm missing something. The process exit case isn't interesting for purposes of accounting for how much memory the process is using right now.

        nroberts Nathan Roberts added a comment -

        I think the two examples you provided in the description are actually two very different cases. Notice how the first has an anonymous size of 0 while the second has the entire dirty region marked as anonymous. I think (though I'm not certain) that this means in the first case the kernel actually has file-backed pages to write to if necessary. In the second case, I feel like anonymous means it does NOT have a place to put dirty pages (perhaps the file has been both truncated and unlinked). If that's a correct interpretation of "anonymous", then I feel we should be counting the second mapping in the process's memory usage.

        gopalv Gopal V added a comment - - edited

        the second has the entire dirty region marked as anonymous

        Nathan Roberts: good catch - the cache pages were supposed to be private_dirty only - not anon_dirty.

        Those allocations were supposed to look the same way, let me fix my cache code and re-run that on YARN.

        gopalv Gopal V added a comment - - edited

        purposes of accounting for how much memory the process is using right now.

        The crucial distinction is exactly there. YARN can account memory in two different ways - "how much memory is this process using?" vs "how much memory can I retrieve by killing this process?" [to run other containers in that capacity].

        The 2nd question is what should motivate a process kill (btw, in the non-smaps case, the kill is motivated by the first, with no concern for the 2nd).

        gopalv Gopal V added a comment - - edited

        Jason Lowe, Nathan Roberts, Rajesh Balamohan: I have edited the JIRA so the examples actually show private_dirty/private_clean (i.e. referenced/resident set size is non-zero) with zero anonymous pages.

        jlowe Jason Lowe added a comment -

        Sorry I'm confused, so apologies if this is obvious to everyone else. Was the original data posted to the JIRA not possible in practice? If it is possible then it seems critical to not skip deleted files or risk severely under-reporting the memory usage of a process in some cases. If it's only not possible because app-specific cache code was changed then that should not influence how YARN does accounting since ideally YARN should not be making app-specific assumptions.

        jlowe Jason Lowe added a comment -

        The more I think about this, the more I feel ignoring deleted files is the wrong thing to do. I think we all can agree that mappings to deleted files can still consume memory, and if we skip those mappings then we fail to account for that memory. For purposes of deciding how much memory will be freed when YARN kills a process, skipping those sections will make YARN think it can free up less memory than it really would.

        If we go back to the write() vs. mmap'd file which seems to be the origin behind this idea, the write() case is going to eventually be throttled by the OS because it will only allow so many dirty buffer cache pages in the system. I don't believe that's the case for the mmap'd file. If we create a process that mmap's a large file, deletes it, then spin-loops dirtying the pages, that significant memory use needs to be associated with that process in the accounting.
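
        That scenario is straightforward to reproduce; a minimal sketch (path and size below are arbitrary):

        import java.io.File;
        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class DeletedMappingDemo {
          public static void main(String[] args) throws Exception {
            File f = new File("/tmp/deleted-mapping.dat");
            try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
              // Mapping beyond the current length grows the file to 256 MB.
              MappedByteBuffer buf =
                  raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 256 << 20);
              f.delete();                // path unlinked; mapping still live
              for (int i = 0; i < buf.capacity(); i += 4096) {
                buf.put(i, (byte) 1);    // dirty every page of the mapping
              }
              // The smaps entry now shows "... (deleted)" with a large Rss;
              // sleep so /proc/<pid>/smaps can be inspected meanwhile.
              Thread.sleep(60_000);
            }
          }
        }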

        vinodkv Vinod Kumar Vavilapalli added a comment -

        2.7.3 is released and 2.8.0 is close to being done. Moving target-version to 2.9.0.

        gopalv Gopal V added a comment -

        The more I think about this, the more I feel ignoring deleted files is the wrong thing to do

        Yes, deleted files are a red herring (deletion happens to be how we secure the files away from other users).

        I think the original problem of YARN killing a process needs to be fixed (the original SMAPS fix was for HDFS Zero Copy read via mmap).

                      total +=
                          Math.min(info.sharedDirty, info.pss) + info.privateDirty
                              + info.privateClean;
        

        If, as Nathan Roberts suggests, YARN counted only the "anonymous" pages as the "will be freed by a kill" memory, that would give me a better way.
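
        A sketch of that alternative (illustrative only; info.anonymous is an assumed field mirroring the smaps "Anonymous:" value, not necessarily what the eventual patch does):

              // Count only pages that a kill would actually free: the shared-dirty
              // share plus anonymous pages, ignoring reclaimable file-backed pages.
              total += Math.min(info.sharedDirty, info.pss) + info.anonymous;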

        the write() case is going to eventually be throttled by the OS because it will only allow so many dirty buffer cache pages in the system. I don't believe that's the case for the mmap'd file.

        Once you exceed the dirty_ratio, the only way you can avoid a page-fault is by modifying an existing dirty page over & over again.

        If I understand page-writeback.c correctly, the blocking operation would be the page fault on a memory block which is missing in memory.

        that significant memory use needs to be associated with that process in the accounting.

        Accounting isn't the problem, killing processes is the problem.

        rajesh.balamohan Rajesh Balamohan added a comment - - edited

        This patch worked for the scenario we ran into.

        If a file's memory mapping has anon=0, should that cause the process to be killed?

        A more generic patch would be to figure out whether a memory mapping with anon=0 should be the deciding factor for killing the process.

        jlowe Jason Lowe added a comment -

        Yes, deleted files is a red-herring

        OK, cool. That's been my main issue with the JIRA's proposed change.

        If I understand page-writeback.c correctly, the blocking operation would be the page fault on a memory block which is missing in memory.

        Good, so it looks like users can't abuse an mmap'd region any more than they can abuse the buffer cache. Thanks for looking into this.

        If YARN counted only the "anonymous" pages as the "will be freed by a kill" memory, that would give me a better way.

        I'm torn on this proposal. My initial reaction to ignoring private dirty pages is that a user can hide a lot of memory by creating their own, personal swap file. For example, a user allocates 10G normally and touches it all. I think we all agree that should count as 10G of usage. If the user changes the app to back that memory with a private file on disk then it wouldn't count for anything. That seems wrong since the user is charged differently based on whether the memory is backed by public swap vs. private swap. Memory pressure could cause either of them to spill to disk, so it doesn't make sense to me why we would count them differently.

        On the other hand, it would be a different situation if there was no swap configured on the system. Memory pressure could push out the private dirty pages but dirty anonymous pages are pinned until the process exits. Then I think it makes sense to count them differently because they won't behave the same under memory pressure.
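
        For concreteness, the two allocation styles being compared, as a sketch (sizes and path are arbitrary; on Linux the direct buffer shows up as anonymous pages in smaps once touched, while the shared file mapping shows Anonymous: 0 with dirty pages the kernel can write back):

        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class AnonVsFileBacked {
          public static void main(String[] args) throws Exception {
            // Case 1: anonymous memory - pinned until exit if there is no swap.
            ByteBuffer anon = ByteBuffer.allocateDirect(64 << 20);
            for (int i = 0; i < anon.capacity(); i += 4096) {
              anon.put(i, (byte) 1);
            }
            // Case 2: the user's own "private swap file" - a shared mapping of a
            // file only this process uses; memory pressure can flush the dirty
            // pages to the file and reclaim them.
            try (RandomAccessFile raf = new RandomAccessFile("/tmp/private-swap.dat", "rw")) {
              MappedByteBuffer filed =
                  raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 64 << 20);
              for (int i = 0; i < filed.capacity(); i += 4096) {
                filed.put(i, (byte) 1);
              }
              Thread.sleep(30_000);      // time to inspect /proc/<pid>/smaps
            }
          }
        }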

        jlowe Jason Lowe added a comment -

        Memory pressure could cause either of them to spill to disk, so it doesn't make sense to me why we would count them differently.

        Hmm. I guess they will behave differently in practice because a file-backed region of memory could pause the process if it dirties too many pages and triggers an I/O flush, whereas that's not going to happen if the memory is simply backed by the system's swap device.

        So I suppose ignoring the private dirty field and only focusing on anonymous pages even with plenty of swap makes sense because we're equating private dirty with the buffer cache and we don't charge other processes for their use of the buffer cache.

        rajesh.balamohan Rajesh Balamohan added a comment - - edited

        Attaching .2 version which takes into account "anonymous" pages.

        gopalv Gopal V added a comment -

        Nathan Roberts/Jason Lowe: Edited the ticket title for clarity (smaps is still not the default mode for YARN).

        jlowe Jason Lowe added a comment -

        +1 lgtm. I'll commit this tomorrow if there are no objections.

        jlowe Jason Lowe added a comment -

        Kicking a Jenkins run.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 33s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 6m 40s branch-2 passed
        +1 compile 0m 24s branch-2 passed with JDK v1.8.0_101
        +1 compile 0m 27s branch-2 passed with JDK v1.7.0_111
        +1 checkstyle 0m 22s branch-2 passed
        +1 mvnsite 0m 33s branch-2 passed
        +1 mvneclipse 0m 14s branch-2 passed
        +1 findbugs 1m 9s branch-2 passed
        +1 javadoc 0m 27s branch-2 passed with JDK v1.8.0_101
        +1 javadoc 0m 31s branch-2 passed with JDK v1.7.0_111
        +1 mvninstall 0m 27s the patch passed
        +1 compile 0m 22s the patch passed with JDK v1.8.0_101
        +1 javac 0m 22s the patch passed
        +1 compile 0m 25s the patch passed with JDK v1.7.0_111
        +1 javac 0m 25s the patch passed
        -1 checkstyle 0m 19s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 14 new + 127 unchanged - 14 fixed = 141 total (was 141)
        +1 mvnsite 0m 29s the patch passed
        +1 mvneclipse 0m 11s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 22s the patch passed
        +1 javadoc 0m 24s the patch passed with JDK v1.8.0_101
        +1 javadoc 0m 28s the patch passed with JDK v1.7.0_111
        +1 unit 2m 5s hadoop-yarn-common in the patch passed with JDK v1.8.0_101.
        +1 unit 2m 23s hadoop-yarn-common in the patch passed with JDK v1.7.0_111.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        21m 34s



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:b59b8b7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12829994/YARN-5551.branch-2.002.patch
        JIRA Issue YARN-5551
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 143c250ddb62 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2 / ad06595
        Default Java 1.7.0_111
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_101 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/13334/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
        JDK v1.7.0_111 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13334/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/13334/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        rajesh.balamohan Rajesh Balamohan added a comment -

        Rebasing to address the checkstyle issues.

        hadoopqa Hadoop QA added a comment -
        +1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 41s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 7m 3s branch-2 passed
        +1 compile 0m 27s branch-2 passed with JDK v1.8.0_101
        +1 compile 0m 29s branch-2 passed with JDK v1.7.0_111
        +1 checkstyle 0m 23s branch-2 passed
        +1 mvnsite 0m 33s branch-2 passed
        +1 mvneclipse 0m 15s branch-2 passed
        +1 findbugs 1m 16s branch-2 passed
        +1 javadoc 0m 28s branch-2 passed with JDK v1.8.0_101
        +1 javadoc 0m 31s branch-2 passed with JDK v1.7.0_111
        +1 mvninstall 0m 27s the patch passed
        +1 compile 0m 21s the patch passed with JDK v1.8.0_101
        +1 javac 0m 21s the patch passed
        +1 compile 0m 25s the patch passed with JDK v1.7.0_111
        +1 javac 0m 25s the patch passed
        +1 checkstyle 0m 19s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 0 new + 121 unchanged - 20 fixed = 121 total (was 141)
        +1 mvnsite 0m 30s the patch passed
        +1 mvneclipse 0m 11s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 21s the patch passed
        +1 javadoc 0m 24s the patch passed with JDK v1.8.0_101
        +1 javadoc 0m 30s the patch passed with JDK v1.7.0_111
        +1 unit 2m 8s hadoop-yarn-common in the patch passed with JDK v1.8.0_101.
        +1 unit 2m 25s hadoop-yarn-common in the patch passed with JDK v1.7.0_111.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        22m 33s



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:b59b8b7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12832642/YARN-5551.branch-2.003.patch
        JIRA Issue YARN-5551
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux d0aae778fe3a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2 / dc5f7a9
        Default Java 1.7.0_111
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_101 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111
        findbugs v3.0.0
        JDK v1.7.0_111 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13349/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/13349/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        jlowe Jason Lowe added a comment -

        +1 for the latest patch, committing this.

        jlowe Jason Lowe added a comment -

        Thanks to Rajesh Balamohan for the contribution and to Gopal V, Nathan Roberts, Chris Nauroth, and Varun Vasudev for additional review! I committed this to trunk and branch-2.

        hudson Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10585 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10585/)
        YARN-5551. Ignore file backed pages from memory computation when smaps (jlowe: rev ecb51b857ac7faceff981b2b6f22ea1af0d42ab1)

        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java

          People

          • Assignee: rajesh.balamohan Rajesh Balamohan
          • Reporter: rajesh.balamohan Rajesh Balamohan
          • Votes: 0
          • Watchers: 13
