Hadoop YARN / YARN-4392

ApplicationCreatedEvent event time resets after RM restart/failover

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 2.8.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels:
      None

      Description

      2015-09-01 12:39:09,852 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437453994768 is ahead of started time 1440308399674 
      2015-09-01 12:39:09,852 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437454008244 is ahead of started time 1440308399676 
      2015-09-01 12:39:09,852 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444305171 is ahead of started time 1440308399653 
      2015-09-01 12:39:09,852 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444293115 is ahead of started time 1440308399647 
      2015-09-01 12:39:09,852 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444379645 is ahead of started time 1440308399656 
      2015-09-01 12:39:09,852 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444361234 is ahead of started time 1440308399655 
      2015-09-01 12:39:09,852 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444342029 is ahead of started time 1440308399654 
      2015-09-01 12:39:09,852 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444323447 is ahead of started time 1440308399654 
      2015-09-01 12:39:09,853 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444430006 is ahead of started time 1440308399660 
      2015-09-01 12:39:09,853 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444415698 is ahead of started time 1440308399659 
      2015-09-01 12:39:09,853 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444419060 is ahead of started time 1440308399658 
      2015-09-01 12:39:09,853 WARN util.Times (Times.java:elapsed(53)) - Finished time 1437444393931 is ahead of started time 1440308399657
      


      In the ATS logs, we see a large number of these 'stale alert' messages periodically.
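
      As an illustration, here is a minimal, self-contained sketch (not the actual org.apache.hadoop.yarn.util.Times source) of the check behind these WARN lines: after the CREATED event is resent with a fresh timestamp, the recorded start time ends up later than the already-stored finish time, so the elapsed-time check fails.

          public final class TimesSketch {
            // Same shape as the check that produces the WARN lines above: if the
            // "finished" time is earlier than the "started" time, the elapsed value
            // cannot be computed sensibly.
            static long elapsed(long started, long finished) {
              if (finished < started) {
                System.err.printf("Finished time %d is ahead of started time %d%n",
                    finished, started);
                return -1;
              }
              return finished - started;
            }

            public static void main(String[] args) {
              long recoveredStartTime = 1440308399674L;  // fresh timestamp taken on RM recovery
              long originalFinishTime = 1437453994768L;  // finish time recorded before the restart
              System.out.println(elapsed(recoveredStartTime, originalFinishTime));  // prints -1
            }
          }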

      Attachments

      1. YARN-4392-2015-11-24.patch
        3 kB
        Xuan Gong
      2. YARN-4392.1.patch
        14 kB
        Xuan Gong
      3. YARN-4392.2.patch
        2 kB
        Xuan Gong
      4. YARN-4392.3.patch
        9 kB
        Naganarasimha G R


          Activity

          xgong Xuan Gong added a comment - - edited

          We see these warnings only when the following two conditions are both satisfied:
          1) The app entity has been deleted by the EntityDeletionThread.
          2) The RM restarts or fails over.

          This happens because, when we recover the applications, we always send a new ApplicationCreatedEvent:

              this.startTime = this.systemClock.getTime();
              rmContext.getSystemMetricsPublisher().appCreated(this, startTime);
          

          which would give this event a new timestamp.

          And when generating the ApplicationReport from ATS, we do:

          if (event.getEventType().equals(
              ApplicationMetricsConstants.CREATED_EVENT_TYPE)) {
            createdTime = event.getTimestamp();
          }
          

          In that case, we end up using the new timestamp as the application start_time.

          xgong Xuan Gong added a comment -

          Created two patches to fix this issue:
          1) The patch with timestamp: when ATS generates the application create_time, it reads ApplicationMetricsConstants.SUBMITTED_TIME_ENTITY_INFO instead of the TimelineEvent timestamp.

          2) The patch without timestamp: when creating the RMAppImpl object, we take startTime as an input. For a new application, startTime is set to the current timestamp; for a recovered application, startTime is taken from the appState. This way we also get a consistent application start time in both the RM web UI and the ATS UI.

          Personally, I prefer option 2.
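
          A rough, self-contained sketch of what option 2 describes (hypothetical class and method names, not the actual RMAppImpl code): the caller supplies the start time, so a recovered application reuses the time persisted in the state store instead of taking a fresh clock reading on every restart/failover.

              final class RecoverableApp {
                private final long startTime;

                private RecoverableApp(long startTime) {
                  this.startTime = startTime;
                }

                long getStartTime() {
                  return startTime;
                }

                // appStateStartTime < 0 means there is no persisted state (a brand new submission).
                static RecoverableApp create(long appStateStartTime) {
                  long start = appStateStartTime >= 0
                      ? appStateStartTime            // recovered app: reuse the persisted start time
                      : System.currentTimeMillis();  // new app: take the current time
                  return new RecoverableApp(start);
                }
              }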

          Jason Lowe, Naganarasimha G R, Jonathan Eagles, what do you think?

          xgong Xuan Gong added a comment -

          + Jian He
          Naganarasimha Naganarasimha G R added a comment -

          Hi Xuan Gong,
          I would prefer not to resend the events on recovery, which is what I think I tried to achieve in YARN-3127. You had given some comments on it earlier, and I tried to cover them in the additional patches. Can you take a look at it once?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 7 new or modified test files.
          +1 mvninstall 13m 25s trunk passed
          +1 compile 20m 7s trunk passed with JDK v1.8.0_66
          +1 compile 16m 26s trunk passed with JDK v1.7.0_85
          +1 checkstyle 1m 43s trunk passed
          +1 mvnsite 1m 41s trunk passed
          +1 mvneclipse 0m 50s trunk passed
          +1 findbugs 2m 53s trunk passed
          +1 javadoc 1m 19s trunk passed with JDK v1.8.0_66
          +1 javadoc 1m 10s trunk passed with JDK v1.7.0_85
          +1 mvninstall 1m 30s the patch passed
          +1 compile 21m 4s the patch passed with JDK v1.8.0_66
          +1 javac 21m 4s the patch passed
          +1 compile 16m 18s the patch passed with JDK v1.7.0_85
          +1 javac 16m 18s the patch passed
          +1 checkstyle 1m 41s the patch passed
          +1 mvnsite 1m 44s the patch passed
          +1 mvneclipse 0m 50s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          findbugs 3m 24s the patch passed
          +1 javadoc 1m 22s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 14s the patch passed with JDK v1.7.0_85
          -1 unit 83m 24s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          +1 unit 0m 55s hadoop-archive-logs in the patch passed with JDK v1.8.0_66.
          -1 unit 74m 47s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_85.
          +1 unit 0m 43s hadoop-archive-logs in the patch passed with JDK v1.7.0_85.
          +1 asflicense 0m 29s Patch does not generate ASF License warnings.
          271m 17s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.scheduler.fair.TestSchedulingPolicy
          JDK v1.7.0_85 Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12774211/YARN-4392.1.patch
          JIRA Issue YARN-4392
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 09c5472ff3c5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 23c625e
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9788/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9788/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9788/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9788/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9788/testReport/
          modules C: hadoop-tools/hadoop-archive-logs hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: .
          Max memory used 78MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9788/console

          This message was automatically generated.

          jeagles Jonathan Eagles added a comment -

          Xuan Gong, Jason and I will be out until Monday and will take a look at it then.

          jlowe Jason Lowe added a comment -

          I agree that if we're going to resend the ATS events then the start time should be consistent. This is already done with the audit logs. There's still Naganarasimha G R's question of whether we should simply avoid sending the events at all upon recovery. If we take that approach I'm wondering if there may be cases where we are updating the app state before we know for certain that the ATS has received the event. Therefore re-sending the events is probably a safer approach, but it does send a flood of events from the RM to the ATS upon recovery.

          Anyway if we proceed with a resend event approach, I'm wondering if there's a simpler way to handle it. Rather than updating the RMAppImpl constructor, can't we simply wait until we recover to send the event? I find it odd that we are telling the ATS that the app has started in the RMAppImpl constructor rather than in the transition triggered by the START event. Moving the ATS app start notification out of the constructor and instead to that start transition allows us to construct an app and send it a recover event without triggering an ATS event. Then we can let the app recover and either send the event with the recovered startTime or avoid sending it during recovery. It would be our choice. Then we don't need to update the constructor, leak even more app state recovery logic into RMAppManager, etc.
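
          A rough sketch of the restructuring described above (hypothetical names, not the actual RM classes): the appCreated notification is published only from the transition that handles the START event, so constructing an app for recovery no longer emits a CREATED event as a side effect.

              final class AppStartTransition {
                interface MetricsPublisher {
                  void appCreated(String appId, long startTime);
                }

                private final MetricsPublisher publisher;

                AppStartTransition(MetricsPublisher publisher) {
                  this.publisher = publisher;
                }

                // Invoked only when the START event fires, not from the constructor,
                // so recovery can construct the app without re-publishing the event.
                void onStart(String appId, long startTime) {
                  publisher.appCreated(appId, startTime);
                }
              }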

          xgong Xuan Gong added a comment -

          Thanks for the suggestion, Jason Lowe. That makes sense.

          Uploaded a new patch to address the comments. Could you review it, please?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 1s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 9m 39s trunk passed
          +1 compile 0m 38s trunk passed with JDK v1.8.0_66
          +1 compile 0m 41s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 17s trunk passed
          +1 mvnsite 0m 46s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 33s trunk passed
          +1 javadoc 0m 31s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 35s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 43s the patch passed
          +1 compile 0m 38s the patch passed with JDK v1.8.0_66
          +1 javac 0m 38s the patch passed
          +1 compile 0m 38s the patch passed with JDK v1.7.0_85
          +1 javac 0m 38s the patch passed
          -1 checkstyle 0m 16s Patch generated 1 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 114, now 115).
          +1 mvnsite 0m 45s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 41s the patch passed
          +1 javadoc 0m 28s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 33s the patch passed with JDK v1.7.0_85
          -1 unit 62m 57s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 62m 23s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_85.
          +1 asflicense 0m 27s Patch does not generate ASF License warnings.
          148m 9s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions
          JDK v1.7.0_85 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
            hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12775110/YARN-4392.2.patch
          JIRA Issue YARN-4392
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 1d66ea76216a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 1cc7e61
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9828/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9828/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9828/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9828/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9828/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9828/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9828/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Thanks Jason Lowe,
          If we take that approach I'm wondering if there may be cases where we are updating the app state before we know for certain that the ATS has received the event.

          IIUC, I think you are pointing at the finish events. In my patch I followed the approach of pushing the finish events synchronously to ATS, so we are sure the AppFinish event has been sent out before we store the state of the app in the RM state store. But yes, this approach looks a little shaky; I just thought it might solve the issue.

          Moving the ATS app start notification out of the constructor and instead to that start transition allows us to construct an app and send it a recover event without triggering an ATS event.

          Yes, this is the same approach I had adopted in my YARN-3127 patch to avoid resending AppCreated events to ATS, and it was also required for YARN-4350.

          If we are handling this issue here, shall I close YARN-3127?

          Naganarasimha Naganarasimha G R added a comment -

          Hi Xuan Gong,
          Regarding YARN-4392.2.patch, is it required to send the app created event to ATS during restore? Even before we store the app information in the RM state store, we would already have pushed this app created event to ATS.

          xgong Xuan Gong added a comment -

          Thanks for the comments, Naganarasimha G R

          So actually in the patch i had followed the approach such that for finish events i had sent synchronous push in the ATS side, in this way we are sure that AppFinish event is sent out before we store the state of the app in the RM state store. But yes this approach looks little shaky but thought it might solve the issue.

          Let us not send the ATS events synchronously; otherwise we would depend on the ATS.
          It is always good to make sure that we send each ATS event "exactly once", but that would complicate things (such as requiring synchronous sends) and would add an additional, unnecessary dependency.
          Currently, we use an "at least once" approach. Since duplicate events carry exactly the same information (after applying the patch), I think that is fine.
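
          A tiny sketch of the "at least once" point (hypothetical types, not the actual ATS store): when a duplicate CREATED event carries exactly the same payload, reapplying it is a no-op, so re-sending events on recovery is harmless.

              import java.util.HashMap;
              import java.util.Map;

              final class CreatedTimeStore {
                private final Map<String, Long> createdTime = new HashMap<>();

                // Applying the same (appId, timestamp) pair twice leaves the store unchanged.
                void onAppCreated(String appId, long timestamp) {
                  createdTime.put(appId, timestamp);
                }

                public static void main(String[] args) {
                  CreatedTimeStore store = new CreatedTimeStore();
                  store.onAppCreated("application_1440308399600_0001", 1437453990000L);
                  store.onAppCreated("application_1440308399600_0001", 1437453990000L); // duplicate
                  System.out.println(store.createdTime);  // {application_1440308399600_0001=1437453990000}
                }
              }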

          What is your opinion?

          Naganarasimha Naganarasimha G R added a comment -

          Xuan Gong,
          Yes, you are right; it would not be good to create a dependency on the ATS by sending certain events synchronously.
          But IIUC there is no limit on the number of running apps in the state store, and only finished apps are restricted to a configurable number. In such cases, wouldn't there be many created events on recovery in a larger cluster? My two cents would be to at least avoid the app created event, but if it is not a great deal, I am fine with the current fix.
          Thanks for assigning it to me; I can get the test case failure corrected, as it was already handled in YARN-3127.

          xgong Xuan Gong added a comment -

          Naganarasimha G R

          there is no limit on number of running apps in state store and finished apps are restricted to a configurable number. In such cases would not there be many created events in a larger cluster on recovery?

          This is a good point, given that the performance of ATS v1 is not that scalable.

          Will it cause any issue if the APP_CREATED event is missing? If it only means missing related information in the ATS web UI/web service, I am OK with not re-sending the ATS events on recovery.

          Jason Lowe, what is your opinion?

          Naganarasimha Naganarasimha G R added a comment -

          Xuan Gong,
          Will it cause any issue if the APP_CREATED event is missing? If it only means missing related information in the ATS web UI/web service, I am OK with not re-sending the ATS events on recovery.

          IMO, even if it causes an issue we need to correct it, as there is another scenario: when the RM is started well before the ATS server, the ATS may miss the app start events but still receive the app finish events.

          jlowe Jason Lowe added a comment -

          +1 for the latest patch, if we go with re-sending of events upon recovery.

          I think re-sending the events is "safer", assuming the redundant events are handled properly. That way, if we missed an event, we will fill that gap upon recovery. There is the concern of the extra load it generates on the RM and ATS during recovery. Note that we probably will miss ATS events upon recovery in some scenarios if we don't re-send, since ATS event posting is async and state store updates are async. There's a race where we could update the state store and crash before the ATS event is sent.

          Naganarasimha Naganarasimha G R added a comment -

          Note that we probably will miss ATS events upon recovery in some scenarios if we don't re-send, since ATS event posting is async and state store updates are async. There's a race where we could update the state store and crash before the ATS event is sent.

          IMO it is a situation where we need to decide which is the greater of two evils and take care of it. I had an offline discussion with Sunil G and Varun Vasudev, and a few thoughts were:

          • Processing of the ATS events, even during recovery, happens in a separate thread, so it is not blocking.
          • Once we move off ATS 1.0, storage will also not be a problem.
          • The data does not change when events are resent during recovery.

          Considering all of these, I am fine with the approach in the patch. I have also uploaded a new patch with the test case correction and an additional test case to validate that created events are sent during both creation and recovery.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 10m 22s trunk passed
          +1 compile 0m 48s trunk passed with JDK v1.8.0_66
          +1 compile 0m 39s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 17s trunk passed
          +1 mvnsite 0m 47s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 33s trunk passed
          +1 javadoc 0m 35s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 35s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 44s the patch passed
          +1 compile 0m 44s the patch passed with JDK v1.8.0_66
          +1 javac 0m 44s the patch passed
          +1 compile 0m 39s the patch passed with JDK v1.7.0_91
          -1 javac 4m 51s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91 with JDK v1.7.0_91 generated 1 new issues (was 2, now 2).
          +1 javac 0m 39s the patch passed
          -1 checkstyle 0m 16s Patch generated 1 new checkstyle issues in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager (total was 148, now 148).
          +1 mvnsite 0m 46s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 41s the patch passed
          +1 javadoc 0m 35s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 34s the patch passed with JDK v1.7.0_91
          -1 unit 65m 19s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
          -1 unit 64m 8s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 28s Patch does not generate ASF License warnings.
          153m 29s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization
          JDK v1.7.0_91 Failed junit tests hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12775910/YARN-4392.3.patch
          JIRA Issue YARN-4392
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux ddd5760d8874 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4265a85
          findbugs v3.0.0
          javac hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91: https://builds.apache.org/job/PreCommit-YARN-Build/9870/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/9870/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9870/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/9870/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/9870/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-YARN-Build/9870/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/9870/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/9870/console

          This message was automatically generated.

          Naganarasimha Naganarasimha G R added a comment -

          Test case failures and checkstyle issues are not related to the changes in the patch.

          xgong Xuan Gong added a comment -

          +1, LGTM. Checking this in.

          xgong Xuan Gong added a comment -

          Committed into trunk/branch-2. Thanks, Naganarasimha. And thanks, Jason, for the review.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8933 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8933/)
          YARN-4392. ApplicationCreatedEvent event time resets after RM (xgong: rev 4546c7582b6762c18ba150d80a8976eb51a8290c)

          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #673 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/673/)
          YARN-4392. ApplicationCreatedEvent event time resets after RM (xgong: rev 4546c7582b6762c18ba150d80a8976eb51a8290c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
          Naganarasimha Naganarasimha G R added a comment -

          Thanks for the review and commit, Xuan Gong & Jason Lowe!

          leftnoteasy Wangda Tan added a comment -

          Committed to branch-2.8.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #10074 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10074/)
          YARN-4392. ApplicationCreatedEvent event time resets after RM (sjlee: rev 2e2dbf59d1ab39c06923103ccbd77c5e13e20b06)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java

            People

            • Assignee:
              Naganarasimha Naganarasimha G R
              Reporter:
              xgong Xuan Gong
            • Votes:
              0 Vote for this issue
              Watchers:
              13 Start watching this issue
