Hadoop YARN / YARN-6093

Minor bugs with AMRMToken renewal and state store availability when using FederationRMFailoverProxyProvider during RM failover



    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: YARN-2915
    • Fix Version/s: 3.0.0-beta1
    • Component/s: amrmproxy, federation
    • Labels: None
    • Hadoop Flags: Reviewed


      AMRMProxy uses an expired AMRMToken to talk to the RM, leading to the "Invalid AMRMToken" exception. The bug is triggered only when both conditions are met:
      1. The RM rolls the master key and renews the AMRMToken for a running AM.
      2. The existing RPC connection between the AMRMProxy and the RM drops, and a reconnect is attempted via failover in FederationRMFailoverProxyProvider.

      Here's what happens:

      In DefaultRequestInterceptor.init(), we create a proxy ugi, load it with the initial AMRMToken issued by the RM, and use it to initiate rmClient. Execution then reaches FederationRMFailoverProxyProvider.init(), where a full copy of the ugi's tokens is saved locally, an actual RM proxy is created, and the RPC connection is set up.

      Later, when the RM rolls the master key and issues a new AMRMToken, DefaultRequestInterceptor.updateAMRMToken() stores it in the proxy ugi.

      However, the new token is never used until the existing RPC connection between the AMRMProxy and the RM drops for some other reason (say, the master RM crashes).

      When we try to reconnect, the RPC layer finds no valid AMRMToken for the new connection, because the service name of the new AMRMToken was never set in DefaultRequestInterceptor.updateAMRMToken(). We first hit a "Client cannot authenticate via:[TOKEN]" exception. This is expected.
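      The token-selection failure above can be illustrated with a toy model (plain Java maps, not the actual Hadoop RPC or token classes; the class, method, and address names here are illustrative only): the RPC layer picks a token out of the ugi by the connection's service name, so a renewed token whose service field was never set is invisible to it.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of RPC token selection: the client picks the ugi token whose
// service field matches the address of the connection being set up.
public class TokenSelectionDemo {
    // Returns the token registered for the given service, or null if none.
    static String selectToken(Map<String, String> tokensByService, String service) {
        return tokensByService.get(service);
    }

    public static void main(String[] args) {
        Map<String, String> ugiTokens = new HashMap<>();
        // The renewed AMRMToken was stored without its service name being set:
        ugiTokens.put("", "AMRMToken-v2");

        // Reconnecting to the RM: the lookup by the RM's address finds no token,
        // which surfaces as "Client cannot authenticate via:[TOKEN]".
        System.out.println(selectToken(ugiTokens, "rm1:8030")); // null

        // During failover the service name gets set (via ClientRMProxy.getRMAddress()
        // in the real code), after which the lookup succeeds.
        ugiTokens.put("rm1:8030", ugiTokens.remove(""));
        System.out.println(selectToken(ugiTokens, "rm1:8030")); // AMRMToken-v2
    }
}
```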

      Next, FederationRMFailoverProxyProvider fails over: we reset the token's service name via ClientRMProxy.getRMAddress() and reconnect. This should have worked.

      However, since DefaultRequestInterceptor does not use the proxy ugi for later calls to rmClient, we are not running as the proxy user when FederationRMFailoverProxyProvider performs the failover. Currently the code works around this by reloading the current ugi with all the tokens saved locally in originalTokens, in the method addOriginalTokens(). The problem is that the original AMRMToken reloaded this way is no longer accepted by the RM, so we keep hitting the "Invalid AMRMToken" exception until the AM fails.

      The correct fix is that rather than saving the original tokens from the proxy ugi, we save the original ugi itself. Every time we perform failover and create a new RM proxy, we use the original ugi, which is always loaded with the up-to-date AMRMToken.
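      The difference between the two approaches can be sketched with a minimal stand-in (plain Java lists instead of the real UserGroupInformation/credentials classes; all names are illustrative): a snapshot of the token list is frozen at init time, while a reference to the ugi itself always sees the renewed token.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in: a list plays the role of the proxy ugi's credentials.
public class UgiReferenceDemo {
    // Simulates init, a master-key roll, and a failover; returns what each
    // approach sees at failover time: {token from the saved snapshot,
    // token from the saved ugi reference}.
    static String[] simulate() {
        List<String> proxyUgiTokens = new ArrayList<>();
        proxyUgiTokens.add("AMRMToken-v1");                           // initial token from the RM

        List<String> savedSnapshot = new ArrayList<>(proxyUgiTokens); // buggy: copy the tokens
        List<String> savedUgi = proxyUgiTokens;                       // fix: keep the ugi itself

        proxyUgiTokens.set(0, "AMRMToken-v2");                        // RM rolls the master key

        return new String[] { savedSnapshot.get(0), savedUgi.get(0) };
    }

    public static void main(String[] args) {
        String[] seen = simulate();
        System.out.println("snapshot sees " + seen[0]); // AMRMToken-v1 (expired)
        System.out.println("ugi sees      " + seen[1]); // AMRMToken-v2 (current)
    }
}
```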


        1. YARN-6093-YARN-2915.v5.patch
          11 kB
          Botong Huang
        2. YARN-6093-YARN-2915.v4.patch
          10 kB
          Botong Huang
        3. YARN-6093-YARN-2915.v3.patch
          10 kB
          Botong Huang
        4. YARN-6093-YARN-2915.v2.patch
          10 kB
          Botong Huang
        5. YARN-6093-YARN-2915.v1.patch
          10 kB
          Botong Huang
        6. YARN-6093-git08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch
          10 kB
          Subramaniam Krishnan
        7. YARN-6093-08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch
          10 kB
          Subramaniam Krishnan
        8. YARN-6093.v1.patch
          10 kB
          Botong Huang




              Assignee: Botong Huang
              Reporter: Botong Huang