Livy / LIVY-540

Livy always connects to ResourceManager at /0.0.0.0:8032 instead of using the ResourceManager address specified in yarn-site.xml


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Server
    • Labels:
      None
    • Environment:
      RHEL 7.5

      Description

      We are using the Cloudera distribution of Hadoop and installed Livy manually from the tarball.

      When we start Livy using the Kerberos launch principal and keytab, it runs successfully. However, it always connects to the local ResourceManager (the log shows RMProxy: Connecting to ResourceManager at /0.0.0.0:8032) even though we have specified the YARN configuration path in livy-env.sh, and it always tries to submit jobs to 0.0.0.0:8032.

       

      Has anyone run into this issue before? Why doesn't Livy read yarn.resourcemanager.address from yarn-site.xml? Am I missing any configuration params here?
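For reference, 0.0.0.0:8032 is Hadoop's compiled-in default for yarn.resourcemanager.address: a YARN client falls back to it whenever no yarn-site.xml is found on its classpath, so the log line quoted above usually means the Livy server JVM never loaded the file. A quick diagnostic sketch (the default path is an assumption based on this environment):

```shell
# Check whether the YARN client config is actually visible where the
# Livy server expects it. If yarn-site.xml is absent or unreadable,
# Hadoop's YarnConfiguration silently falls back to 0.0.0.0:8032.
conf_dir="${HADOOP_CONF_DIR:-/etc/hadoop/conf}"
if [ -r "$conf_dir/yarn-site.xml" ]; then
  # Crude extraction of the configured RM address, for diagnosis only
  grep -A 1 '<name>yarn.resourcemanager.address</name>' "$conf_dir/yarn-site.xml"
else
  echo "no readable yarn-site.xml in $conf_dir -> client defaults to 0.0.0.0:8032"
fi
```

If the grep prints the expected hostname:port but the server still dials 0.0.0.0:8032, the directory is set in the shell but not on the server JVM's classpath.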

       

      yarn-site.xml

      <?xml version="1.0" encoding="UTF-8"?>
      <!--Autogenerated by Cloudera Manager-->
      <configuration>
        <property><name>yarn.acl.enable</name><value>true</value></property>
        <property><name>yarn.admin.acl</name><value>*</value></property>
        <property><name>yarn.resourcemanager.address</name><value>hostname.myserver:8032</value></property>
        <property><name>yarn.resourcemanager.admin.address</name><value>hostname.myserver:8033</value></property>
        <property><name>yarn.resourcemanager.scheduler.address</name><value>hostname.myserver:8030</value></property>
        <property><name>yarn.resourcemanager.resource-tracker.address</name><value>hostname.myserver:8031</value></property>
        <property><name>yarn.resourcemanager.webapp.address</name><value>hostname.myserver:8088</value></property>
        <property><name>yarn.resourcemanager.webapp.https.address</name><value>hostname.myserver:8090</value></property>
        <property><name>yarn.resourcemanager.client.thread-count</name><value>50</value></property>
        <property><name>yarn.resourcemanager.scheduler.client.thread-count</name><value>50</value></property>
        <property><name>yarn.resourcemanager.admin.client.thread-count</name><value>1</value></property>
        <property><name>yarn.scheduler.minimum-allocation-mb</name><value>1024</value></property>
        <property><name>yarn.scheduler.increment-allocation-mb</name><value>512</value></property>
        <property><name>yarn.scheduler.maximum-allocation-mb</name><value>329591</value></property>
        <property><name>yarn.scheduler.minimum-allocation-vcores</name><value>1</value></property>
        <property><name>yarn.scheduler.increment-allocation-vcores</name><value>1</value></property>
        <property><name>yarn.scheduler.maximum-allocation-vcores</name><value>40</value></property>
        <property><name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name><value>1000</value></property>
        <property><name>yarn.am.liveness-monitor.expiry-interval-ms</name><value>600000</value></property>
        <property><name>yarn.resourcemanager.am.max-attempts</name><value>2</value></property>
        <property><name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name><value>600000</value></property>
        <property><name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name><value>1000</value></property>
        <property><name>yarn.nm.liveness-monitor.expiry-interval-ms</name><value>600000</value></property>
        <property><name>yarn.resourcemanager.resource-tracker.client.thread-count</name><value>50</value></property>
        <property><name>yarn.application.classpath</name><value>$HADOOP_CLIENT_CONF_DIR,$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*</value></property>
        <property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value></property>
        <property><name>yarn.resourcemanager.max-completed-applications</name><value>10000</value></property>
        <property><name>yarn.nodemanager.remote-app-log-dir</name><value>/tmp/logs</value></property>
        <property><name>yarn.nodemanager.remote-app-log-dir-suffix</name><value>logs</value></property>
        <property><name>yarn.resourcemanager.principal</name><value>yarn/_HOST@BDS.UBS.COM</value></property>
      </configuration>
      


      livy-env.sh

      export JAVA_HOME=/app/bds/java 
      export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2 
      export HADOOP_HOME=/opt/cloudera/parcels/CDH 
      export LIVY_PID_DIR=${LIVY_HOME} 
      export HADOOP_CONF_DIR=/etc/hadoop/conf
      export KRB5_CONFIG=/etc/krb5_bds.conf
      export JAVA_TOOL_OPTIONS='-Djava.security.krb5.conf=/etc/krb5_bds.conf'
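Since Livy's launcher is expected to pick up $HADOOP_CONF_DIR from this file when it builds the server classpath, it is worth confirming the variable actually survives sourcing. A hedged sketch (the LIVY_ENV path is an assumption; point it at your livy-env.sh):

```shell
# Source the env file the way the Livy launcher does, then confirm the
# variable the launcher needs is actually set afterwards.
# LIVY_ENV below is an assumed path; adjust to your install.
LIVY_ENV="${LIVY_ENV:-/opt/livy/conf/livy-env.sh}"
if [ -r "$LIVY_ENV" ]; then
  # shellcheck disable=SC1090
  . "$LIVY_ENV"
fi
echo "HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-<unset: YARN client will fall back to 0.0.0.0:8032>}"
```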
      

      livy.conf

      livy.spark.master = yarn
      livy.spark.deploy-mode = cluster
      livy.impersonation.enabled = true
      livy.repl.enable-hive-context = true
      livy.server.launch.kerberos.principal=livy@XXX.com
      livy.server.launch.kerberos.keytab=/app/bds/security/keytabs/livy.keytab
      livy.superusers = hdfs
      

      livy-log

      Picked up JAVA_TOOL_OPTIONS: -Djava.security.krb5.conf=/etc/krb5_bds.conf
      18/12/04 14:52:19 INFO AccessManager: AccessControlManager acls disabled;users with view permission: ;users with modify permission: ;users with super permission: hdfs;other allowed users: *
      18/12/04 14:52:19 INFO LineBufferedStream: stdout: /opt/cloudera/parcels/SPARK2/lib/spark2/conf/spark-env.sh: line 81: spark.yarn.appMasterEnv.PYSPARK3_PYTHON=/app/bds/apo/BDS_py36_ds_112018/bin/python: No such file or directory
      18/12/04 14:52:19 INFO LineBufferedStream: stdout: WARNING: User-defined SPARK_HOME (/app/bds/parcels/SPARK2-2.3.0.cloudera3-1.cdh5.13.3.p0.458809/lib/spark2) overrides detected (/opt/cloudera/parcels/SPARK2/lib/spark2).
      18/12/04 14:52:19 INFO LineBufferedStream: stdout: WARNING: Running spark-class from user-defined location.
      18/12/04 14:52:19 INFO LineBufferedStream: stdout: /opt/cloudera/parcels/SPARK2/lib/spark2/conf/spark-env.sh: line 81: spark.yarn.appMasterEnv.PYSPARK3_PYTHON=/app/bds/apo/BDS_py36_ds_112018/bin/python: No such file or directory
      18/12/04 14:52:20 INFO LineBufferedStream: stdout: Picked up JAVA_TOOL_OPTIONS: -Djava.security.krb5.conf=/etc/krb5_bds.conf
      18/12/04 14:52:20 INFO LineBufferedStream: stdout: Picked up JAVA_TOOL_OPTIONS: -Djava.security.krb5.conf=/etc/krb5_bds.conf
      18/12/04 14:52:20 INFO LineBufferedStream: stdout: Welcome to
      18/12/04 14:52:20 INFO LineBufferedStream: stdout:       ____              __
      18/12/04 14:52:20 INFO LineBufferedStream: stdout:      / __/__  ___ _____/ /__
      18/12/04 14:52:20 INFO LineBufferedStream: stdout:     _\ \/ _ \/ _ `/ __/  '_/
      18/12/04 14:52:20 INFO LineBufferedStream: stdout:    /___/ .__/\_,_/_/ /_/\_\   version 2.3.0.cloudera3
      18/12/04 14:52:20 INFO LineBufferedStream: stdout:       /_/
      18/12/04 14:52:20 INFO LineBufferedStream: stdout:
      18/12/04 14:52:20 INFO LineBufferedStream: stdout: Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_151
      18/12/04 14:52:20 INFO LineBufferedStream: stdout: Branch HEAD
      18/12/04 14:52:20 INFO LineBufferedStream: stdout: Compiled by user jenkins on 2018-07-05T03:50:33Z
      18/12/04 14:52:20 INFO LineBufferedStream: stdout: Revision 04c773e19117d158cf917e60a6e98488a643d49e
      18/12/04 14:52:20 INFO LineBufferedStream: stdout: Url git://github.mtv.cloudera.com/CDH/spark.git
      18/12/04 14:52:20 INFO LineBufferedStream: stdout: Type --help for more information.
      18/12/04 14:52:20 WARN LivySparkUtils$: Current Spark (2,3) is not verified in Livy, please use it carefully
      18/12/04 14:52:21 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
      18/12/04 14:52:21 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
      18/12/04 14:52:21 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
      18/12/04 14:52:21 DEBUG MetricsSystemImpl: UgiMetrics, User and group related metrics
      18/12/04 14:52:21 DEBUG Shell: setsid exited with exit code 0
      18/12/04 14:52:21 DEBUG Groups: Creating new Groups object
      18/12/04 14:52:21 DEBUG Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000; warningDeltaMs=5000
      18/12/04 14:52:21 DEBUG LivyServer: Ran kinit command successfully.
      18/12/04 14:52:21 DEBUG AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
      18/12/04 14:52:21 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
      18/12/04 14:52:21 DEBUG UserGroupInformation: hadoop login
      18/12/04 14:52:21 DEBUG UserGroupInformation: hadoop login commit
      18/12/04 14:52:21 DEBUG UserGroupInformation: using kerberos user:livy@BDS.UBS.COM
      18/12/04 14:52:21 DEBUG UserGroupInformation: Using user: "livy@BDS.UBS.COM" with name livy@BDS.UBS.COM
      18/12/04 14:52:21 DEBUG UserGroupInformation: User entry: "livy@BDS.UBS.COM"
      18/12/04 14:52:21 DEBUG UserGroupInformation: UGI loginUser:livy@BDS.UBS.COM (auth:KERBEROS)
      18/12/04 14:52:21 INFO StateStore$: Using BlackholeStateStore for recovery.
      18/12/04 14:52:21 INFO BatchSessionManager: Recovered 0 batch sessions. Next session id: 0
      18/12/04 14:52:21 DEBUG UserGroupInformation: Found tgt Ticket (hex) =
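The telling sequence in the log is YarnClientImpl entering state INITED and RMProxy immediately dialing /0.0.0.0:8032: the server-side YARN client initialized without the cluster's yarn-site.xml. One way to confirm is to inspect the environment of the running Livy JVM via /proc (Linux only; the pgrep pattern is an assumption, adjust to your process name):

```shell
# Inspect a running process's environment via /proc (Linux only) to see
# whether HADOOP_CONF_DIR was actually inherited by the Livy server JVM.
# The 'LivyServer' pgrep pattern below is an assumption.
pid=$(pgrep -f 'LivyServer' | head -n 1)
if [ -n "$pid" ] && [ -r "/proc/$pid/environ" ]; then
  tr '\0' '\n' < "/proc/$pid/environ" | grep '^HADOOP_CONF_DIR=' \
    || echo "HADOOP_CONF_DIR missing from Livy server environment"
else
  echo "Livy server process not found"
fi
```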
      


            People

            • Assignee: Unassigned
            • Reporter: sdandey (santosh dandey)
            • Votes: 0
            • Watchers: 1
