The YARN service check failure occurs during a rolling upgrade from HDP-2.4 to HDP-2.6 (with YARN HA enabled):
- After the "core master restart" step, the YARN client picks up the new (HDP-2.6) configuration and fails with Class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider not found. Forcing the YARN client to use the old (HDP-2.4) configuration until the client binary is updated works around this.
- After the "core slave restart" step, using the old YARN client configuration with the old YARN client binary no longer helps, because the NM/RM classpath now points to HDP-2.6. The app job gets scheduled, but then fails with the log:
- After the YARN client is updated to the new binary, the service check works fine.
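For context, the missing class is pulled in through the HA failover proxy setting in the new yarn-site.xml, which the HDP-2.4 client jar cannot load. A sketch of what the HDP-2.6 configuration carries (the property name is the standard YARN one; the exact surrounding defaults may differ):

```xml
<!-- yarn-site.xml (HDP-2.6 style, sketch) -->
<property>
  <name>yarn.client.failover-proxy-provider</name>
  <!-- This class first shipped in the 2.5 line; a 2.4 client binary cannot find it -->
  <value>org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider</value>
</property>
```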
Bottom line: this is a known problem with DistributedShell - it was never fixed to stop relying on the cluster's configuration. This means that client configuration changes like this one can break DistributedShell apps across upgrades.
Unfortunately, nothing we do now can fix this broken upgrade path for DistributedShell - fixing it properly would require the change to already exist in the older releases.
We have to do two things:
- Disable the DistributedShell-based service check when we go from 2.4 > 2.6. RequestHedgingRMFailoverProxyProvider was added in 2.5, so 2.5 > 2.6 is fine.
- Also fix yarn-site.xml starting with 2.6 to avoid this in the future: change from $HADOOP_CONF_DIR, which is inherited from the NodeManager, to /etc/hadoop/conf/, which is always tied to the client version.
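A sketch of the kind of yarn-site.xml change described, assuming the affected property is yarn.application.classpath (the classpath entries besides the conf directory are illustrative):

```xml
<!-- Before: $HADOOP_CONF_DIR is resolved from the NodeManager's environment -->
<property>
  <name>yarn.application.classpath</name>
  <value>$HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*</value>
</property>

<!-- After: /etc/hadoop/conf/ is always tied to the client version -->
<property>
  <name>yarn.application.classpath</name>
  <value>/etc/hadoop/conf/,/usr/hdp/current/hadoop-client/*</value>
</property>
```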
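The version gate for the first item could be sketched like this (a hypothetical illustration, not actual Ambari code; names are made up):

```python
# Hypothetical sketch: skip the DistributedShell service check when the
# source stack predates RequestHedgingRMFailoverProxyProvider (added in 2.5).

def parse_stack(version: str) -> tuple:
    """Parse a stack version like '2.4' or '2.6.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

# First stack line that ships RequestHedgingRMFailoverProxyProvider.
MIN_COMPATIBLE_STACK = (2, 5)

def skip_distributed_shell_check(source_stack: str) -> bool:
    """True when upgrading from a stack too old to load the new proxy class."""
    return parse_stack(source_stack) < MIN_COMPATIBLE_STACK
```

So a 2.4 > 2.6 upgrade skips the check, while 2.5 > 2.6 keeps it.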