Hadoop YARN / YARN-5910

Support for multi-cluster delegation tokens

    Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.0, 3.0.0-alpha4
    • Component/s: security
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      As an administrator running many secure (kerberized) clusters, some of which have peer clusters managed by other teams, I am looking for a way to run jobs that may require services running on other clusters. A particular case where this arises is running something as basic as a distcp between two kerberized clusters (e.g. hadoop --config /home/user292/conf/ distcp hdfs://LOCALCLUSTER/user/user292/test.out hdfs://REMOTECLUSTER/user/user292/test.out.result).

      Thanks to YARN-3021, one can run for a while, but if the delegation token for the remote cluster needs renewal, the job will fail [1]. One can pre-configure the hdfs-site.xml loaded by the YARN RM to know about all possible HDFS clusters, but that requires coordination that is not always feasible, especially as a cluster's peers grow into the tens of clusters or span management teams. Ideally, core systems could be configured this way, while jobs could also specify their own token handling and renewal when needed.
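
      For concreteness, this is roughly the kind of per-nameservice pre-configuration the RM's local config would need for each remote HA cluster, shown here as a sketch of programmatic Configuration calls rather than hdfs-site.xml; the NameNode IDs and host names are placeholders, not values from this issue:
      ----------------
      import org.apache.hadoop.conf.Configuration;

      public class RemoteNameserviceConf {
        public static void main(String[] args) {
          Configuration conf = new Configuration();
          // The RM's local config must list the remote nameservice...
          conf.set("dfs.nameservices", "LOCALCLUSTER,REMOTECLUSTER");
          // ...and map the logical name to concrete NameNode addresses (placeholders).
          conf.set("dfs.ha.namenodes.REMOTECLUSTER", "nn1,nn2");
          conf.set("dfs.namenode.rpc-address.REMOTECLUSTER.nn1", "remote-nn1.example.com:8020");
          conf.set("dfs.namenode.rpc-address.REMOTECLUSTER.nn2", "remote-nn2.example.com:8020");
          // Without a failover proxy provider entry for the nameservice, token renewal
          // fails with the "failover proxy provider" error shown in the stack trace below.
          conf.set("dfs.client.failover.proxy.provider.REMOTECLUSTER",
              "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
          System.out.println(conf.get("dfs.namenode.rpc-address.REMOTECLUSTER.nn1"));
        }
      }
      ----------------
      Multiplying this by tens of peer clusters maintained by other teams is the coordination burden described above.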

      [1]: Example stack trace when the RM is unaware of a remote service:
      ----------------

      2016-03-23 14:59:50,528 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1458441356031_3317 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for user292)
      2016-03-23 14:59:50,557 WARN org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Unable to add the application to the delegation token renewer.
      java.io.IOException: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for user292)
      at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:427)
      at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
      at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:781)
      at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:762)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:744)
      Caused by: java.io.IOException: Unable to map logical nameservice URI 'hdfs://REMOTECLUSTER' to a NameNode. Local configuration does not have a failover proxy provider configured.
      at org.apache.hadoop.hdfs.DFSClient$Renewer.getNNProxy(DFSClient.java:1164)
      at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1128)
      at org.apache.hadoop.security.token.Token.renew(Token.java:377)
      at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:516)
      at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:513)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
      at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:511)
      at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:425)
      ... 6 more
      
      Attachments

      1. YARN-5910.7.patch (61 kB, Jian He)
      2. YARN-5910.6.patch (61 kB, Jian He)
      3. YARN-5910.5.patch (56 kB, Jian He)
      4. YARN-5910.4.patch (46 kB, Jian He)
      5. YARN-5910.3.patch (35 kB, Jian He)
      6. YARN-5910.2.patch (34 kB, Jian He)
      7. YARN-5910.01.patch (29 kB, Jian He)

        Activity

        aw Allen Wittenauer added a comment -

        Related: 3.0.0-alpha1 added 'hadoop dtutil' and the hadoop.token.files property. Between the two of them, it's very possible for end users to provide multiple DTs for multiple (and unrelated) clusters at job submission time.
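
        As a rough sketch (not from any patch here) of what that enables on the submitter's side, using the standard Credentials and UserGroupInformation APIs; the token-file path is a placeholder:
        ----------------
        import java.io.File;
        import java.io.IOException;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.security.Credentials;
        import org.apache.hadoop.security.UserGroupInformation;

        public class LoadBundledTokens {
          public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            // Read a token file bundled ahead of time, e.g. with 'hadoop dtutil'
            // (the path below is a placeholder).
            Credentials creds = Credentials.readTokenStorageFile(
                new File("/path/to/multi-cluster.dt"), conf);
            // Make the delegation tokens for the remote clusters visible to this
            // submission; setting hadoop.token.files achieves the same declaratively.
            UserGroupInformation.getCurrentUser().addCredentials(creds);
          }
        }
        ----------------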

        jianhe Jian He added a comment -

        Allen Wittenauer, I think the problem here is that the RM cannot renew the delegation token because it lacks the configuration for the remote HDFS HA cluster's nameservice-to-address mapping.

        Could you elaborate on how "hadoop dtutil" can solve this problem?

        aw Allen Wittenauer added a comment -

        dtutil allows you to fetch, bundle, and alias multiple tokens for multiple services into a single file. This eliminates the need for job submission to gather all required tokens itself. (Job setup will fail to do this under specific circumstances, such as side input from a third cluster's service. Now humans or automated processes can do the work that YARN itself would be unable to do.)

        jianhe Jian He added a comment -

        I see. But I think the problem here is not about gathering the tokens at job submission. The problem is whether the RM is able to renew them. In this case, the RM cannot renew this token because it does not have the necessary hdfs config.

        So I think the "hadoop dtutil" functionality is orthogonal to this problem?

        aw Allen Wittenauer added a comment -

        RM cannot renew this token because it does not have the necessary hdfs config.

        The RM should be able to effectively rebuild the necessary config for any cluster service that it knows about in order to attempt to renew it. The service field is effectively the URL to use for renewal, and the kind specifically tells what to ask that URL. Anything extra would need to be provided the same way that dtutil gets it (via a class definition).

        So, I think the "hadoop dtutil" functionality is orthogonal to this problem ?

        No, it's not. Given a submission with a file that contains multiple tokens, it eliminates the need to configure the RM to have multiple HDFS configurations set in the site.xml files. It allows jobs to provide tokens for unconfigured services along with the necessary info to renew them.

        jianhe Jian He added a comment -

        The service field is effectively the URL to use to renew and kind specifically tells what to ask that URL.

        This is not true if HDFS is configured in HA mode. In the case of HDFS HA, the token service is only the nameservice ID; the RM has to rely on the local hdfs config to map the nameservice ID to the real address, which I think is what this jira is about.
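
        A minimal sketch of why renewal depends on local config in the HA case: the token's service field is only the logical nameservice, so Token.renew() can only resolve it through whatever Configuration the RM passes in (the conf construction is elided here and the method is illustrative only):
        ----------------
        import java.io.IOException;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.security.token.Token;

        public class RenewalSketch {
          // Illustrative only: 'token' would be one of the submitted app's tokens.
          static long renewRemoteToken(Token<?> token, Configuration rmConf)
              throws IOException, InterruptedException {
            // For HA HDFS the service is the logical name, e.g. "ha-hdfs:REMOTECLUSTER",
            // not a resolvable host:port, so the service field alone is not enough.
            System.out.println("token service = " + token.getService());
            // Renewal resolves the logical name through rmConf; if rmConf has no
            // failover proxy provider for that nameservice, this throws the
            // IOException shown in the stack trace in the description.
            return token.renew(rmConf);
          }
        }
        ----------------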

        aw Allen Wittenauer added a comment -

        Well, that's just an extension of the already known design flaws in Hadoop's default HA implementations. It's only HA if you are "inside the bubble". Lots of other things are going to break too. Tokens are just one of them.

        jianhe Jian He added a comment -

        Hence, the "hadoop dtutil" cannot solve this problem then ? we need a different solution here.

        aw Allen Wittenauer added a comment -

        dtutil solves it quite well for the non-HA case. dtutil alias might even fix at least token renewal for the HA case.

        However: putting the renewer info in the token would also only get you so far, since that configuration information would need to get propagated into other configs. It also makes the assumption that the renewer is the same as the service provider, which isn't necessarily true (with the reverse case demonstrated by the HA situation).

        But really, without putting in DNS resolution for the service name, Hadoop's HA implementation is flawed and this is just a symptom.

        jianhe Jian He added a comment -

        dtutil alias might even fix at least token renewal for the HA case. However..

        I'm not fully getting your point.. after all, you are saying the "dtutil alias" functionality cannot solve the HA case, right?

        If so, we need a different approach for this problem.

        aw Allen Wittenauer added a comment -

        Not fully getting your point.

        Yup. I'm aware of that.

        jianhe Jian He added a comment -

        To summarize: ideally, the token should be self-sufficient for discovering the renewer address. But this is not the case when HDFS is in HA mode, which uses a logical URI for the token service name; the RM has to rely on the local hdfs config to discover the renewer address. To free the RM from depending on the local hdfs config, these are the possible approaches I can think of:

        • 1) Change the way the hdfs token is constructed in HA mode so that it is self-sufficient: instead of using the logical URI, probably use a comma-separated list of real addresses, and change the DFS client HA implementation all the way down to not rely on configuration. I guess this is too big a change for hdfs to accept.
        • 2) Push the token-renewal responsibility to the AM itself. That is, distribute the kerberos keytab along with the AM and let the AM renew the token periodically, instead of the RM doing the renewal. We would probably write a library for this so that each AM does not have to write its own.
        • 3) Have ApplicationSubmissionContext carry an app configuration object; the RM uses this configuration object for token renewal instead of the local config (a rough sketch follows this comment).

        Jason Lowe, would you mind sharing some thoughts on this?
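
        To make option 3 a bit more concrete, a client could serialize the renewal-relevant configuration into a byte payload for the submission roughly as below; the Writable serialization is standard, but where exactly the buffer is attached (and the setter's name) would be defined by the eventual patch and is deliberately not shown:
        ----------------
        import java.io.IOException;
        import java.nio.ByteBuffer;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.io.DataOutputBuffer;

        public class TokensConfPayload {
          // Serialize the entries needed for token renewal into a ByteBuffer that
          // the client could attach to its application submission for the RM to use.
          static ByteBuffer serialize(Configuration tokenRenewalConf) throws IOException {
            DataOutputBuffer out = new DataOutputBuffer();
            tokenRenewalConf.write(out); // Configuration is a Writable
            return ByteBuffer.wrap(out.getData(), 0, out.getLength());
          }
        }
        ----------------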

        jlowe Jason Lowe added a comment -

        Pinging Daryn Sharp since I'm sure he has an opinion on this.

        I'm not sure distributing the keytab is going to be considered a reasonable thing to do in some setups. Part of the point of getting a token is to avoid needing to ship a keytab everywhere. Once we have a keytab, is there a need to have a token? There's also the problem of needing to renew the token while the AM is waiting to get scheduled if the cluster is really busy. If the AM isn't running it can't renew the token.

        My preference is to have the token be as self-descriptive as we can possibly get. Doing the ApplicationSubmissionContext thing could work for the HA case, but I could see this being a potentially non-trivial payload the RM has to bear for each app (configs can get quite large). I'd rather avoid adding that to the context for this purpose if we can do so, but if the token cannot be self-descriptive in all cases then we may not have much other choice that I can see.

        jianhe Jian He added a comment -

        Thanks for your inputs, Jason

        Once we have a keytab, is there a need to have a token?

        The map/reduce tasks can continue to use the token.

        My preference is to have the token be as self-descriptive as we can possibly get.

        I agree this sounds like a better approach, but it requires a lot of work in HDFS.

        but I could see this being a potentially non-trivial payload the RM has to bear for each app

        In this case, we can set the conf object to null once the RM gets what it wants.

        I'll talk with some hdfs folks to see whether this is doable on their side. Otherwise, I think passing a conf object and then discarding it might be a straightforward approach at this point. Waiting for Daryn Sharp's input as well.

        jianhe Jian He added a comment -

        ideally, the token should be self-sufficient to discover the renewer address.

        After digging into the code more for this approach: even in non-HA mode, conf is also required for things like retry settings, and the principal name is required in secure setups. Basically, the Token would have to selectively carry all the conf necessary for connecting to the renewer in the HA, non-HA, and secure scenarios. How to maintain such an open-ended list is a non-trivial task in the first place. I now prefer the approach of passing the conf via the ApplicationSubmissionContext.
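
        For illustration only, a "self-sufficient" token would have to embed settings along these lines; the keys below are examples of the kind of entries involved, not an authoritative or exhaustive list, and the principal/realm values are placeholders:
        ----------------
        import org.apache.hadoop.conf.Configuration;

        public class SelfSufficientTokenConf {
          // A sample of what the token itself would have to carry: the renewer's
          // kerberos principal, client retry behaviour, and (for HA) the proxy
          // provider mapping. Maintaining this list is the open-ended problem above.
          static Configuration sampleEmbeddedConf() {
            Configuration conf = new Configuration(false); // no default resources
            conf.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@REMOTE.REALM");
            conf.set("ipc.client.connect.max.retries", "10");
            conf.set("dfs.client.failover.proxy.provider.REMOTECLUSTER",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
            return conf;
          }
        }
        ----------------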

        aw Allen Wittenauer added a comment -

        How to maintain such an unknown list is a non-trivial task in the first place.

        Yup... and you haven't even gotten to the part where you try to use the service for your application. This is why DNS support would be extremely useful here: ask it where uri://haservice is located, then query the host responding for that service for the details.

        In any case, this isn't a YARN problem. This is a HADOOP problem.

        clayb Clay B. added a comment -

        This conversation has been very educational for me; thank you! I am still concerned that if we do not use kerberos, the requesting user will have no way to renew tokens as themselves. If we cannot authenticate as the user, won't we be unable to work when the administrators of the two clusters are different (and thus do not have the same yarn user setup, e.g. two different kerberos principals)? Can we find a solution to that issue here as well (or at least ensure that this issue doesn't preclude it)?

        I really like the idea that the (human) client is responsible for specifying the resources needed; again, in a highly federated Hadoop environment one administration group may not even know of all the clusters, and this allows for more agile cross-cluster usage.

        I see there are two issues here I was hoping to solve:
        1. A remote cluster's services are needed (e.g. as a data source to this job)
        2. A remote cluster does not trust this cluster's YARN principal

        Jason Lowe brings up some good questions and points which hit this well:

        I'm not sure distributing the keytab is going to be considered a reasonable thing to do in some setups. Part of the point of getting a token is to avoid needing to ship a keytab everywhere. Once we have a keytab, is there a need to have a token?

        If the YARN principals of each cluster are different but the user is entitled to services on both clusters, is there another way around this issue? Further, while I think many shops may have the kerberos tooling to avoid shipping keytabs, some shops are heavily dependent on HBase (e.g. long-running query services) or are streaming-centric (jobs last longer than the maximal token refresh period) and thus have to use keytabs today.

        There's also the problem of needing to renew the token while the AM is waiting to get scheduled if the cluster is really busy. If the AM isn't running it can't renew the token.

        I would expect the remote-cluster resources not to be central to operating the job; e.g. we would use the local cluster for HDFS and YARN but might want to access a remote cluster's YARN. If the AM can request tokens (i.e. with a keytab or a proxy kerberos credential refreshed by the RM), then we can request new tokens when the job is scheduled, even if it was held up longer than the renewal time; further, we would not have to worry about exploits of custom configuration running in a privileged process, only in something running as the user.

        Regardless, are there many clusters today where the scheduling time is longer than the renewal time of a delegation token? (I.e. that would by default be one seventh of the total job's maximal runtime, longer than a day?)

        My preference is to have the token be as self-descriptive as we can possibly get. Doing the ApplicationSubmissionContext thing could work for the HA case, but I could see this being a potentially non-trivial payload the RM has to bear for each app (configs can get quite large). It'd rather avoid adding that to the context for this purpose if we can do so, but if the token cannot be self-descriptive in all cases then we may not have much other choice that I can see.

        I agree this seems to be the sanest idea for how to get the configuration in; we could also perhaps extend the various delegation token types to only optionally include this payload? Then the RM would only pay the price when needed for an off-cluster request?

        jianhe Jian He added a comment -

        Hi Clay, thanks for the feedback.

        we could also perhaps extend the various delegation token types to only optionally include this payload? Then the RM would only pay the price when needed for an off-cluster request?

        We realized that changing the existing token structure might cause compatibility issues.

        jianhe Jian He added a comment - edited

        Uploaded an in-progress patch which takes the approach of having the client send the jobConf to the RM; the RM DelegationTokenRenewer will then renew the token using the app conf.
        A flag is added in MR to indicate whether or not to send the conf.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 11s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
        0 mvndep 0m 20s Maven dependency ordering for branch
        +1 mvninstall 15m 20s trunk passed
        +1 compile 10m 7s trunk passed
        +1 checkstyle 1m 53s trunk passed
        +1 mvnsite 3m 31s trunk passed
        +1 mvneclipse 2m 7s trunk passed
        +1 findbugs 5m 26s trunk passed
        +1 javadoc 2m 52s trunk passed
        0 mvndep 0m 21s Maven dependency ordering for patch
        +1 mvninstall 2m 50s the patch passed
        +1 compile 9m 47s the patch passed
        +1 cc 9m 47s the patch passed
        +1 javac 9m 47s the patch passed
        -0 checkstyle 1m 50s root: The patch generated 14 new + 949 unchanged - 8 fixed = 963 total (was 957)
        +1 mvnsite 3m 50s the patch passed
        +1 mvneclipse 2m 25s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 7m 6s the patch passed
        -1 javadoc 0m 32s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 4 new + 913 unchanged - 0 fixed = 917 total (was 913)
        +1 unit 0m 36s hadoop-yarn-api in the patch passed.
        +1 unit 2m 29s hadoop-yarn-common in the patch passed.
        +1 unit 0m 44s hadoop-yarn-server-common in the patch passed.
        -1 unit 40m 33s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 unit 2m 58s hadoop-mapreduce-client-core in the patch passed.
        +1 unit 106m 19s hadoop-mapreduce-client-jobclient in the patch passed.
        +1 asflicense 0m 43s The patch does not generate ASF License warnings.
        252m 29s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestAppManager
          hadoop.yarn.server.resourcemanager.logaggregationstatus.TestRMAppLogAggregationStatus
          hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestAppRunnability



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5910
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12844113/YARN-5910.01.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
        uname Linux 2c3011714e9a 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 3bcfe3a
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14400/artifact/patchprocess/diff-checkstyle-root.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14400/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14400/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14400/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: .
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14400/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        jianhe Jian He added a comment -

        Updated the patch with minor changes.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 17s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
        0 mvndep 1m 53s Maven dependency ordering for branch
        +1 mvninstall 15m 6s trunk passed
        +1 compile 12m 26s trunk passed
        +1 checkstyle 2m 0s trunk passed
        +1 mvnsite 4m 20s trunk passed
        +1 mvneclipse 2m 20s trunk passed
        +1 findbugs 6m 45s trunk passed
        +1 javadoc 3m 6s trunk passed
        0 mvndep 0m 22s Maven dependency ordering for patch
        +1 mvninstall 3m 36s the patch passed
        +1 compile 12m 54s the patch passed
        +1 cc 12m 54s the patch passed
        +1 javac 12m 54s the patch passed
        -0 checkstyle 2m 23s root: The patch generated 16 new + 1022 unchanged - 8 fixed = 1038 total (was 1030)
        +1 mvnsite 4m 36s the patch passed
        +1 mvneclipse 2m 42s the patch passed
        +1 whitespace 0m 1s The patch has no whitespace issues.
        +1 xml 0m 2s The patch has no ill-formed XML file.
        +1 findbugs 8m 55s the patch passed
        -1 javadoc 0m 32s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 4 new + 913 unchanged - 0 fixed = 917 total (was 913)
        +1 unit 0m 43s hadoop-yarn-api in the patch passed.
        +1 unit 2m 54s hadoop-yarn-common in the patch passed.
        +1 unit 0m 42s hadoop-yarn-server-common in the patch passed.
        -1 unit 44m 45s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 unit 3m 33s hadoop-mapreduce-client-core in the patch passed.
        -1 unit 108m 23s hadoop-mapreduce-client-jobclient in the patch failed.
        +1 asflicense 0m 50s The patch does not generate ASF License warnings.
        274m 17s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestAppManager
          hadoop.yarn.server.resourcemanager.logaggregationstatus.TestRMAppLogAggregationStatus
          hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestAppRunnability
          hadoop.yarn.server.resourcemanager.TestRMAdminService
          hadoop.mapreduce.TestMRJobClient
          hadoop.mapred.TestYARNRunner



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5910
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847704/YARN-5910.2.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc
        uname Linux 5957d3596604 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / cf69557
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14665/artifact/patchprocess/diff-checkstyle-root.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14665/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14665/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14665/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14665/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: .
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14665/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        jianhe Jian He added a comment -

        Fixed jenkins issues

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 14s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
        0 mvndep 0m 25s Maven dependency ordering for branch
        +1 mvninstall 13m 32s trunk passed
        +1 compile 10m 5s trunk passed
        +1 checkstyle 1m 45s trunk passed
        +1 mvnsite 3m 20s trunk passed
        +1 mvneclipse 1m 59s trunk passed
        +1 findbugs 5m 38s trunk passed
        +1 javadoc 2m 40s trunk passed
        0 mvndep 0m 16s Maven dependency ordering for patch
        +1 mvninstall 2m 42s the patch passed
        +1 compile 9m 44s the patch passed
        +1 cc 9m 44s the patch passed
        +1 javac 9m 44s the patch passed
        -0 checkstyle 2m 19s root: The patch generated 18 new + 1022 unchanged - 8 fixed = 1040 total (was 1030)
        +1 mvnsite 3m 44s the patch passed
        +1 mvneclipse 2m 16s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 1s The patch has no ill-formed XML file.
        +1 findbugs 6m 46s the patch passed
        +1 javadoc 0m 27s hadoop-yarn-api in the patch passed.
        +1 javadoc 0m 40s hadoop-yarn-common in the patch passed.
        +1 javadoc 0m 26s hadoop-yarn-server-common in the patch passed.
        +1 javadoc 0m 32s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 908 unchanged - 5 fixed = 908 total (was 913)
        +1 javadoc 0m 32s hadoop-mapreduce-client-core in the patch passed.
        +1 javadoc 0m 23s hadoop-mapreduce-client-jobclient in the patch passed.
        +1 unit 0m 36s hadoop-yarn-api in the patch passed.
        +1 unit 2m 31s hadoop-yarn-common in the patch passed.
        +1 unit 0m 38s hadoop-yarn-server-common in the patch passed.
        -1 unit 39m 57s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 unit 2m 59s hadoop-mapreduce-client-core in the patch passed.
        -1 unit 101m 58s hadoop-mapreduce-client-jobclient in the patch failed.
        +1 asflicense 0m 46s The patch does not generate ASF License warnings.
        244m 54s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions
          hadoop.yarn.server.resourcemanager.logaggregationstatus.TestRMAppLogAggregationStatus
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
          hadoop.yarn.server.resourcemanager.TestAppManager
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestAppRunnability
          hadoop.mapred.TestYARNRunner



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5910
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847915/YARN-5910.3.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc
        uname Linux 74dc1194f92a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 4d1f3d9
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14680/artifact/patchprocess/diff-checkstyle-root.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14680/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14680/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14680/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: .
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14680/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        jianhe Jian He added a comment -

        Fixed the failed UT. testRMAppSubmitWithValidTokens is removed because it does not actually test what is expected (security is not enabled in the test scope), and the scenario should already be covered elsewhere.

        jlowe Jason Lowe added a comment -

        Thanks for updating the patch!

        It's confusing to see a MR_JOB_SEND_TOKEN_CONF_DEFAULT in MRJobConfig yet it clearly is not the default value.

        Should this feature be tied to UserGroupInformation.isSecurityEnabled? I'm wondering if this can cause issues where the current cluster isn't secure but the RM needs to renew the job's tokens for a remote secure cluster or some other secure service. Seems like if this conf is set then that's all we need to know.

        Similarly, the code explicitly fails in ClientRMService if the conf is present when security is disabled, which seems like we're taking a case that isn't optimal but should work benignly and explicitly making sure it fails. Not sure that's user-friendly behavior.

        Nit: For the ByteBuffer usage in parseCredentials and parseTokensConf, the rewind method calls seem unnecessary since we're throwing the buffers away immediately afterwards.

        Should the Configuration constructor call in parseTokensConf be using the version that does not load defaults? If not then I recommend we at least allow a conf to be passed in to use as a copy constructor. Loading a new Configuration from scratch is really expensive and we should avoid it if possible. See the discussion on HADOOP-11223 for details.
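        For illustration, a minimal sketch of the two cheaper alternatives being suggested here (the class and helper names are hypothetical, not the patch's code):

            import org.apache.hadoop.conf.Configuration;

            public class TokensConfConstruction {
              // Option 1: skip loading core-default.xml / core-site.xml entirely.
              static Configuration emptyConf() {
                return new Configuration(false);
              }

              // Option 2: start from an already-loaded conf via the copy constructor,
              // which reuses the parsed properties instead of re-reading the XML resources.
              static Configuration copyOf(Configuration base) {
                return new Configuration(base);
              }
            }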

        In DelegationTokenRenewer, why aren't we using the appConf as-is when renewing the tokens? Also it looks like we're polluting subsequent app-conf renewals with prior app configurations, as well as simply leaking appConf objects as renewerConf resources ad infinitum. I don't see where renewerConf gets reset in between.

        Arguably there should be a unit test that verifies a first app with token conf key A and a second app with token conf key B doesn't leave a situation where the renewals of the second app are polluted with conf key A. Speaking of unit tests, I see where we fixed up the YARN unit tests to pass the new conf but not a new test that verifies the specified conf is used appropriately when renewing for that app and not for other apps that didn't specify a conf.

        jianhe Jian He added a comment - - edited

        Hi Jason, thank you very much for the review!

        It's confusing to see a MR_JOB_SEND_TOKEN_CONF_DEFAULT in MRJobConfig yet it clearly is not the default value.

        removed it

        Should this feature be tied to UserGroupInformation.isSecurityEnabled? I'm wondering if this can cause issues where the current cluster isn't secure but the RM needs to renew the job's tokens for a remote secure cluster or some other secure service. Seems like if this conf is set then that's all we need to know.

        Currently, the RM DelegationTokenRenewer will only add the tokens if security is enabled (see the code in RMAppManager#submitApplication), so I think with this existing implementation we can assume this feature is for security-enabled clusters only?

        Similarly the code explicitly fails in ClientRMService if the conf is there when security is disabled which seems like we're taking a case that isn't optimal but should work benignly and explicitly making sure it fails. Not sure that's user friendly behavior.

        My intention was to prevent users from sending the conf in non-secure mode (which anyway is not needed if my reply above is true), in case the conf is huge, which may increase load on the RM. On the other hand, Varun mentioned offline that we could add a config in the RM to limit the size of the configs; what's your opinion?

        Nit: For the ByteBuffer usage in parseCredentials and parseTokensConf, the rewind method calls seem unnecessary since we're throwing the buffers away immediately afterwards.

        Actually, the ByteBuffer is a direct reference from the ContainerLaunchContext, not a copy. I think the rewind is also required because it was added specifically to solve the issues in YARN-2893.
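        For context, a rough sketch of why the rewind matters when the buffer is shared with the ContainerLaunchContext rather than copied (the class and method below are illustrative only, not the patch's actual parseCredentials):

            import java.io.IOException;
            import java.nio.ByteBuffer;
            import org.apache.hadoop.io.DataInputByteBuffer;
            import org.apache.hadoop.security.Credentials;

            public class ParseCredentialsSketch {
              // Reads tokens from a buffer shared with the ContainerLaunchContext.
              static Credentials parse(ByteBuffer tokens) throws IOException {
                DataInputByteBuffer dib = new DataInputByteBuffer();
                dib.reset(tokens);                       // advances the buffer's position
                Credentials credentials = new Credentials();
                credentials.readTokenStorageStream(dib); // consumes the buffer
                tokens.rewind();                         // reset position so later readers
                                                         // of the same shared buffer see it all
                return credentials;
              }
            }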

        Should the Configuration constructor call in parseTokensConf be using the version that does not load defaults? If not then I recommend we at least allow a conf to be passed in to use as a copy constructor. Loading a new Configuration from scratch is really expensive and we should avoid it if possible. See the discussion on HADOOP-11223 for details.

        Good point. I actually did the same in the YARNRunner#setAppConf method but missed this place.

        In DelegationTokenRenewer, why aren't we using the appConf as-is when renewing the tokens?

        I wasn't sure whether the appConf alone is enough for the connection (are any Kerberos-related configs for the RM itself required for authentication?). Let me do some experiments; if this works, I'll just use appConf.

        Also it looks like we're polluting subsequent app-conf renewals with prior app configurations, as well as simply leaking appConf objects as renewerConf resources ad infinitum. I don't see where renewerConf gets reset in between.

        My previous patch made a copy of each appConf, merged it with the RM's conf (because I wasn't sure whether the RM's own conf is required), and used that for the renewer. But I think this may be bad because every app would have its own copy of the configs, which may greatly increase memory usage if the number of apps is very large. So, in the latest patch I changed it to let all apps share the same renewerConf - this is based on the assumption that "dfs.nameservices" must have distinct keys for each distinct cluster, so we won't have a situation where two apps use different configs for the same cluster - it is true that unnecessary configs used by the first app will be shared by subsequent apps.
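        As an example of what the per-cluster keys look like (the nameservice id and host names below are placeholders; the property names are standard HDFS HA client settings), the configs an app would ship for a remote cluster are keyed by its own nameservice, so they do not collide with another cluster's keys:

            import org.apache.hadoop.conf.Configuration;

            public class RemoteClusterConfExample {
              // Minimal HA client config for a remote nameservice that an app could
              // ship in its tokens conf. Host names and the nameservice id are
              // placeholders for illustration only.
              static Configuration remoteHdfsConf() {
                Configuration conf = new Configuration(false);
                conf.set("dfs.nameservices", "REMOTECLUSTER");
                conf.set("dfs.ha.namenodes.REMOTECLUSTER", "nn1,nn2");
                conf.set("dfs.namenode.rpc-address.REMOTECLUSTER.nn1",
                    "remote-nn1.example.com:8020");
                conf.set("dfs.namenode.rpc-address.REMOTECLUSTER.nn2",
                    "remote-nn2.example.com:8020");
                conf.set("dfs.client.failover.proxy.provider.REMOTECLUSTER",
                    "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
                return conf;
              }
            }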

        Arguably there should be a unit test that verifies a first app with token conf key A and a second app with token conf key B doesn't leave a situation where the renewals of the second app are polluted with conf key A.

        If the appConf alone works, we should be fine.

        Speaking of unit tests, I see where we fixed up the YARN unit tests to pass the new conf but not a new test that verifies the specified conf is used appropriately when renewing for that app and not for other apps that didn't specify a conf.

        Yep, I'll add the UT.

        jlowe Jason Lowe added a comment -

        Currently, the RM DelegationTokenRenewer will only add the tokens if security is enabled (see the code in RMAppManager#submitApplication), so I think with this existing implementation we can assume this feature is for security-enabled clusters only?

        Yeah, I'm thinking it's unnecessary to check both. This new config has no value by default. A user or admin would have to go out of their way to set it. If they did, then they expect the confs to be in the submission context.

        On the other hand, Varun mentioned offline that we could add a config in the RM to limit the size of the configs; what's your opinion?

        An RM-side limit for configs may not be a bad idea to avoid a problematic client or user that sets ".*" as the conf filter. It won't solve the problem of the gigantic RPC trying to come in, but at least the RM can quickly discard it before trying to persist it.
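        A rough sketch of the kind of RM-side guard being discussed, assuming the limit is expressed in bytes of the serialized tokens conf (the constant, limit value, and method name are hypothetical, not the patch's actual config keys):

            import java.io.IOException;
            import java.nio.ByteBuffer;

            public class TokensConfSizeCheck {
              // Hypothetical limit; the real config key and default may differ.
              private static final int MAX_TOKENS_CONF_BYTES = 128 * 1024;

              // Rejects an oversized tokens conf before the RM persists it.
              static void checkTokensConfSize(ByteBuffer tokensConf) throws IOException {
                if (tokensConf != null && tokensConf.remaining() > MAX_TOKENS_CONF_BYTES) {
                  throw new IOException("Tokens conf is " + tokensConf.remaining()
                      + " bytes, which exceeds the limit of " + MAX_TOKENS_CONF_BYTES
                      + " bytes");
                }
              }
            }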

        So, in the latest patch I changed it to let all apps share the same renewerConf - this is based on the assumption that "dfs.nameservices" must have distinct keys for each distinct cluster, so we won't have a situation where two apps use different configs for the same cluster - it is true that unnecessary configs used by the first app will be shared by subsequent apps.

        This is bad for two reasons. One is the polluting problem – there could be cases where the presence of a config key from a previous app is problematic for renewal in a subsequent app. The other is a memory leak. Configuration.addResource will add a resource object to the list of resources for the config and never get rid of them. This will cause every app-specific conf to be tracked by renewerConf forever, resulting in a memory leak.

        One solution to this is to track the (partial) app configurations separately and then make a copy of the RM's conf and merge in the partial app conf "on-demand" when it's time to renew the token for the app. Then we're not storing a full copy of the RM's configs for every app, just the parts that need to be per-app. If doing the repetitive copy and merge of the conf is too expensive then we can derive a Configuration subclass that takes the app conf and RM conf in the constructor. When we try to do property lookups it tries to find it in the app conf and falls back to the RM conf if necessary. Then we don't have to make copies and merge each time.
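        A minimal sketch of the fallback-lookup idea (the class name and the overridden methods are assumptions; a complete implementation would have to cover more of Configuration's accessors and iteration):

            import org.apache.hadoop.conf.Configuration;

            // Looks up properties in the per-app conf first and falls back to the
            // RM's conf, avoiding a full copy-and-merge on every renewal.
            public class AppFallbackConfiguration extends Configuration {
              private final Configuration appConf;
              private final Configuration rmConf;

              public AppFallbackConfiguration(Configuration appConf, Configuration rmConf) {
                super(false);          // don't load defaults; we only delegate lookups
                this.appConf = appConf;
                this.rmConf = rmConf;
              }

              @Override
              public String get(String name) {
                String value = appConf.get(name);
                return value != null ? value : rmConf.get(name);
              }

              @Override
              public String get(String name, String defaultValue) {
                String value = get(name);
                return value != null ? value : defaultValue;
              }
            }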

        jianhe Jian He added a comment - - edited

        Yeah, I'm thinking it's unnecessary to check both.

        Sounds good, I'll remove the security-enabled check in YARNRunner. Regarding the security-enabled check in ClientRMService, do you also prefer removing it?

        Configuration.addResource will add a resource object to the list of resources for the config and never get rid of them. This will cause every app-specific conf to be tracked by renewerConf forever, resulting in a memory leak.

        Ah, I see. Good point. I didn't understand your previous comment about this.

        So I've done the experiment. Actually, we don't need the RM's own config for renewal. Additionally, we need to pass in dfs.namenode.kerberos.principal from the client to pass the check in SaslRpcClient#getServerPrincipal, where it checks whether the remote principal equals the locally configured one. I have one question about this design: dfs.namenode.kerberos.principal is not differentiated by cluster id, so it assumes all clusters will have the same value for 'dfs.namenode.kerberos.principal'? This applies to all other services, including the RM, as well.

        So I can just use appConfig in DelegationTokenRenewer.
        I'll also add the config limit in RM.

        jlowe Jason Lowe added a comment -

        Regarding the security-enabled check in ClientRMService, do you also prefer removing it?

        Yes, I'd rather not fail a job that would otherwise work without this check.

        I have one question about this design: dfs.namenode.kerberos.principal is not differentiated by cluster id, so it assumes all clusters will have the same value for 'dfs.namenode.kerberos.principal'? This applies to all other services, including the RM, as well.

        I'll have to defer to Daryn Sharp's expertise on whether we may need some RM-specific configs to be able to successfully connect with kerberos. There may be some remappings that the admins only bothered to configure on the RM or are RM specific? Not sure. It'd be nice if we didn't need the RM configs, but now I'm thinking there may be cases where we need them.

        jianhe Jian He added a comment -

        whether we may need some RM-specific configs to be able to successfully connect with kerberos. There may be some remappings that the admins only bothered to configure on the RM or are RM specific?

        Sorry, I didn't get you. 'dfs.namenode.kerberos.principal' is actually an HDFS config, not an RM config. If two clusters have different DFS principal names configured, then when the MR client asks for delegation tokens from both clusters, I guess this check will fail because it cannot differentiate between the clusters.

        jianhe Jian He added a comment -

        Uploaded a patch that addressed all the comments.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 14s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 8 new or modified test files.
        0 mvndep 1m 46s Maven dependency ordering for branch
        +1 mvninstall 15m 2s trunk passed
        +1 compile 12m 6s trunk passed
        +1 checkstyle 2m 12s trunk passed
        +1 mvnsite 4m 5s trunk passed
        +1 mvneclipse 2m 9s trunk passed
        +1 findbugs 7m 6s trunk passed
        +1 javadoc 3m 9s trunk passed
        0 mvndep 0m 19s Maven dependency ordering for patch
        +1 mvninstall 3m 3s the patch passed
        +1 compile 12m 38s the patch passed
        +1 cc 12m 38s the patch passed
        +1 javac 12m 38s the patch passed
        -0 checkstyle 2m 13s root: The patch generated 21 new + 1413 unchanged - 10 fixed = 1434 total (was 1423)
        +1 mvnsite 4m 26s the patch passed
        +1 mvneclipse 2m 36s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 3s The patch has no ill-formed XML file.
        +1 findbugs 8m 19s the patch passed
        +1 javadoc 0m 30s hadoop-yarn-api in the patch passed.
        +1 javadoc 0m 40s hadoop-yarn-common in the patch passed.
        +1 javadoc 0m 28s hadoop-yarn-server-common in the patch passed.
        +1 javadoc 0m 34s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 908 unchanged - 5 fixed = 908 total (was 913)
        +1 javadoc 0m 36s hadoop-mapreduce-client-core in the patch passed.
        +1 javadoc 0m 24s hadoop-mapreduce-client-jobclient in the patch passed.
        +1 unit 0m 40s hadoop-yarn-api in the patch passed.
        +1 unit 2m 45s hadoop-yarn-common in the patch passed.
        +1 unit 0m 50s hadoop-yarn-server-common in the patch passed.
        -1 unit 42m 5s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 unit 3m 20s hadoop-mapreduce-client-core in the patch passed.
        +1 unit 108m 42s hadoop-mapreduce-client-jobclient in the patch passed.
        +1 asflicense 0m 50s The patch does not generate ASF License warnings.
        269m 21s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestRMRestart



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5910
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12848172/YARN-5910.5.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc
        uname Linux 34e04872f8cf 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 2bf4c6e
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14695/artifact/patchprocess/diff-checkstyle-root.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14695/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14695/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: .
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14695/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        jlowe Jason Lowe added a comment -

        Thanks for updating the patch!

        Last I knew, the descriptions for properties in mapred-site.xml were programmatically scraped to generate documentation. Therefore the additional comment will be a bit confusing when taken out of context with the commented value. I'd either move the regex example into the description itself or move this added comment into a separate XML comment just above the commented value.

        I'm surprised the max conf size is a property count rather than an overall size limit on the conf buffer being passed/persisted. After all, I could just specify one property with a gigantic payload and pass this safety check, and I thought this check was more about preventing excessive memory usage than excessive property counts.

        I am wondering how users/admins are going to debug their settings for the new property. I don't see any way for them to know which properties are really getting picked up. For example, if they pick up too many properties and exceed the size limit, how can they know which extra ones they are hitting? Or similarly, when token renewal fails, how can they tell what the conf looks like that was used for renewal? Wondering if we need at least a debug- or trace-level log somewhere that dumps the app-specific conf.
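        Something along these lines on the renewer side would make the effective per-app conf visible at debug level (the class, logger, and method placement are assumptions, not the patch's exact code); a similar dump in the client would show which properties the regex actually matched:

            import java.util.Map;
            import org.apache.commons.logging.Log;
            import org.apache.commons.logging.LogFactory;
            import org.apache.hadoop.conf.Configuration;

            public class TokensConfDebugLog {
              private static final Log LOG = LogFactory.getLog(TokensConfDebugLog.class);

              // Dumps the app-specific conf used for token renewal at debug level.
              static void logAppTokensConf(String appId, Configuration appConf) {
                if (LOG.isDebugEnabled()) {
                  StringBuilder sb =
                      new StringBuilder("Token renewal conf for " + appId + ": ");
                  for (Map.Entry<String, String> e : appConf) {
                    sb.append(e.getKey()).append('=').append(e.getValue()).append("; ");
                  }
                  LOG.debug(sb.toString());
                }
              }
            }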

        jianhe Jian He added a comment -

        Thanks again for the reviews!

        I'd either move the regex example into the description itself

        done.

        I could just specify one property with a gigantic payload

        Good point. I thought the number of configs indirectly reflected the size and didn't bother computing the bytes, so I missed this scenario. I changed the check to be based on bytes.

        I am wondering how users/admins are going to debug their settings for the new property

        Good point. I had such logging while I was debugging this feature. I added debug-level logging in both YARNRunner and DelegationTokenRenewer.

        Uploaded a patch that addresses all comments.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 13s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 8 new or modified test files.
        0 mvndep 0m 23s Maven dependency ordering for branch
        +1 mvninstall 14m 17s trunk passed
        +1 compile 12m 12s trunk passed
        +1 checkstyle 1m 54s trunk passed
        +1 mvnsite 3m 36s trunk passed
        +1 mvneclipse 1m 58s trunk passed
        +1 findbugs 6m 16s trunk passed
        +1 javadoc 2m 36s trunk passed
        0 mvndep 0m 17s Maven dependency ordering for patch
        +1 mvninstall 2m 54s the patch passed
        +1 compile 11m 19s the patch passed
        +1 cc 11m 19s the patch passed
        +1 javac 11m 19s the patch passed
        -0 checkstyle 1m 57s root: The patch generated 29 new + 1445 unchanged - 10 fixed = 1474 total (was 1455)
        +1 mvnsite 3m 42s the patch passed
        +1 mvneclipse 2m 24s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 2s The patch has no ill-formed XML file.
        +1 findbugs 7m 30s the patch passed
        +1 javadoc 0m 29s hadoop-yarn-api in the patch passed.
        +1 javadoc 0m 40s hadoop-yarn-common in the patch passed.
        +1 javadoc 0m 30s hadoop-yarn-server-common in the patch passed.
        +1 javadoc 0m 30s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 908 unchanged - 5 fixed = 908 total (was 913)
        +1 javadoc 0m 35s hadoop-mapreduce-client-core in the patch passed.
        +1 javadoc 0m 24s hadoop-mapreduce-client-jobclient in the patch passed.
        +1 unit 0m 37s hadoop-yarn-api in the patch passed.
        +1 unit 2m 42s hadoop-yarn-common in the patch passed.
        +1 unit 0m 39s hadoop-yarn-server-common in the patch passed.
        -1 unit 39m 43s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 unit 3m 2s hadoop-mapreduce-client-core in the patch passed.
        -1 unit 109m 56s hadoop-mapreduce-client-jobclient in the patch failed.
        +1 asflicense 0m 45s The patch does not generate ASF License warnings.
        259m 19s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestAppManager
        Timed out junit tests org.apache.hadoop.mapred.TestMRIntermediateDataEncryption



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5910
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12848456/YARN-5910.6.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc
        uname Linux 89762522c666 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 60865c8
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14718/artifact/patchprocess/diff-checkstyle-root.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14718/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14718/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14718/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: .
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14718/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 12s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 8 new or modified test files.
        0 mvndep 0m 18s Maven dependency ordering for branch
        +1 mvninstall 13m 57s trunk passed
        +1 compile 11m 55s trunk passed
        +1 checkstyle 1m 48s trunk passed
        +1 mvnsite 3m 39s trunk passed
        +1 mvneclipse 1m 57s trunk passed
        +1 findbugs 6m 23s trunk passed
        +1 javadoc 2m 36s trunk passed
        0 mvndep 0m 17s Maven dependency ordering for patch
        +1 mvninstall 2m 54s the patch passed
        +1 compile 11m 2s the patch passed
        +1 cc 11m 2s the patch passed
        +1 javac 11m 2s the patch passed
        -0 checkstyle 1m 57s root: The patch generated 29 new + 1445 unchanged - 10 fixed = 1474 total (was 1455)
        +1 mvnsite 3m 38s the patch passed
        +1 mvneclipse 2m 24s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 2s The patch has no ill-formed XML file.
        +1 findbugs 7m 17s the patch passed
        +1 javadoc 0m 30s hadoop-yarn-api in the patch passed.
        +1 javadoc 0m 42s hadoop-yarn-common in the patch passed.
        +1 javadoc 0m 24s hadoop-yarn-server-common in the patch passed.
        +1 javadoc 0m 35s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 908 unchanged - 5 fixed = 908 total (was 913)
        +1 javadoc 0m 32s hadoop-mapreduce-client-core in the patch passed.
        +1 javadoc 0m 23s hadoop-mapreduce-client-jobclient in the patch passed.
        +1 unit 0m 43s hadoop-yarn-api in the patch passed.
        +1 unit 2m 33s hadoop-yarn-common in the patch passed.
        +1 unit 0m 42s hadoop-yarn-server-common in the patch passed.
        -1 unit 39m 42s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 unit 3m 0s hadoop-mapreduce-client-core in the patch passed.
        -1 unit 110m 11s hadoop-mapreduce-client-jobclient in the patch failed.
        +1 asflicense 0m 47s The patch does not generate ASF License warnings.
        258m 17s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestAppManager
          hadoop.yarn.server.resourcemanager.TestRMRestart
        Timed out junit tests org.apache.hadoop.mapred.TestMRIntermediateDataEncryption



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5910
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12848456/YARN-5910.6.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc
        uname Linux 6b4a4320c9c8 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 60865c8
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14719/artifact/patchprocess/diff-checkstyle-root.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14719/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14719/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14719/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: .
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14719/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        jlowe Jason Lowe added a comment -

        Thanks for updating the patch!

        Nit: I think it should be more clear that the regex in the documentation is just an example and not the default, e.g.: "This regex" s/b "For example the following regex".

        DEFAULT_RM_DELEGATION_TOKEN_MAX_SIZE doesn't match yarn-default.xml.

        It's confusing that the max size check is using capacity() but the error message uses position().

        I'm curious on the reasoning for removing the assert for NEW state?

        I was unable to reproduce the TestRMRestart and TestMRIntermediateDataEncryption failures with the patch, but TestAppManager fails consistently for me with the patch applied and passes consistently without. Please investigate.

        jianhe Jian He added a comment -

        It's confusing that the max size check is using capacity() but the error message uses position().

        Missed changing that.

        I'm curious on the reasoning for removing the assert for NEW state?

        Because I feel it's obvious and not needed.

        TestAppManager fails consistently for me with the patch applied and passes consistently without. Please investigate.

        It's because the AM ContainerLaunchContext is null in the UT, which fails with an NPE in the new code "submissionContext.getAMContainerSpec().getTokensConf()". I think it's OK to assume the AM ContainerLaunchContext is not null? Other code in this call path does the same, e.g. "submissionContext.getAMContainerSpec().getApplicationACLs()" in RMAppManager.
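        For reference, a null-safe variant of that lookup would look like the sketch below (getTokensConf is the new method under discussion; whether RMAppManager should guard like this or simply assume a non-null AM ContainerLaunchContext is the open question here):

            import java.nio.ByteBuffer;
            import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
            import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

            public class TokensConfLookup {
              // Null-safe variant of submissionContext.getAMContainerSpec().getTokensConf().
              static ByteBuffer getTokensConf(ApplicationSubmissionContext submissionContext) {
                ContainerLaunchContext amSpec = submissionContext.getAMContainerSpec();
                return amSpec == null ? null : amSpec.getTokensConf();
              }
            }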

        jianhe Jian He added a comment -

        New patch addressed all comments.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 16s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 8 new or modified test files.
        0 mvndep 0m 15s Maven dependency ordering for branch
        +1 mvninstall 13m 52s trunk passed
        +1 compile 10m 51s trunk passed
        +1 checkstyle 1m 56s trunk passed
        +1 mvnsite 3m 32s trunk passed
        +1 mvneclipse 2m 3s trunk passed
        +1 findbugs 6m 8s trunk passed
        +1 javadoc 2m 52s trunk passed
        0 mvndep 0m 17s Maven dependency ordering for patch
        +1 mvninstall 2m 55s the patch passed
        +1 compile 10m 28s the patch passed
        +1 cc 10m 28s the patch passed
        +1 javac 10m 28s the patch passed
        -0 checkstyle 2m 2s root: The patch generated 29 new + 1445 unchanged - 10 fixed = 1474 total (was 1455)
        +1 mvnsite 3m 52s the patch passed
        +1 mvneclipse 2m 24s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 3s The patch has no ill-formed XML file.
        +1 findbugs 7m 25s the patch passed
        +1 javadoc 0m 30s hadoop-yarn-api in the patch passed.
        +1 javadoc 0m 40s hadoop-yarn-common in the patch passed.
        +1 javadoc 0m 28s hadoop-yarn-server-common in the patch passed.
        +1 javadoc 0m 33s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 908 unchanged - 5 fixed = 908 total (was 913)
        +1 javadoc 0m 35s hadoop-mapreduce-client-core in the patch passed.
        +1 javadoc 0m 24s hadoop-mapreduce-client-jobclient in the patch passed.
        +1 unit 0m 40s hadoop-yarn-api in the patch passed.
        +1 unit 2m 41s hadoop-yarn-common in the patch passed.
        +1 unit 0m 41s hadoop-yarn-server-common in the patch passed.
        -1 unit 40m 0s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 unit 3m 10s hadoop-mapreduce-client-core in the patch passed.
        +1 unit 107m 36s hadoop-mapreduce-client-jobclient in the patch passed.
        +1 asflicense 1m 3s The patch does not generate ASF License warnings.
        256m 9s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestRMRestart



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5910
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12848609/YARN-5910.7.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc
        uname Linux 1557df27f244 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / d79c645
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14725/artifact/patchprocess/diff-checkstyle-root.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14725/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14725/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: .
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14725/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        jianhe Jian He added a comment -

        testFinishedAppRemovalAfterRMRestart passed locally for me.

        jlowe Jason Lowe added a comment -

        +1 lgtm. The test failure is unrelated and will be fixed by YARN-5548.

        Committing this.

        jlowe Jason Lowe added a comment -

        Thanks, Jian! I committed this to trunk and branch-2.

        hudson Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11160 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11160/)
        YARN-5910. Support for multi-cluster delegation tokens. Contributed by Jian He (jlowe: rev 69fa81679f59378fd19a2c65db8019393d7c05a2)

        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
        • (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
        • (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
        • (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
        • (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestYARNRunner.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
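        The mapred-default.xml and YARNRunner changes in the commit above suggest the feature is driven by a job-level property that ships selected HDFS client configuration keys to the RM so its DelegationTokenRenewer can renew tokens for a remote nameservice. A minimal sketch of how a submitter might use it follows; the property name (mapreduce.job.send-token-conf), its value format, and the key list are assumptions for illustration and should be verified against the committed patch.

        // Minimal usage sketch. Assumptions: the property name
        // "mapreduce.job.send-token-conf" and the pipe-separated regex list below
        // are illustrative and not confirmed by this thread.
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.mapreduce.Job;

        public class CrossClusterJobSubmit {
          public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Ship the HDFS HA client keys the RM would need to resolve and renew
            // tokens for hdfs://REMOTECLUSTER (standard HDFS HA settings; the exact
            // matching syntax accepted by the property is an assumption here).
            conf.set("mapreduce.job.send-token-conf",
                "dfs.nameservices|^dfs.namenode.rpc-address.*$"
                    + "|^dfs.ha.namenodes.*$"
                    + "|^dfs.client.failover.proxy.provider.*$");

            Job job = Job.getInstance(conf, "cross-cluster-job");
            // ... configure input/output paths, possibly on hdfs://REMOTECLUSTER, then:
            // job.submit();
          }
        }

        The intent, per the files touched above, is that these key/value pairs travel with the ContainerLaunchContext so the RM no longer has to carry every peer cluster's failover proxy configuration in its own hdfs-site.xml.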
        jianhe Jian He added a comment -

        Thanks, Jason, for the review and commit!


          People

          • Assignee: jianhe Jian He
          • Reporter: clayb Clay B.
          • Votes: 0
          • Watchers: 12
