Zeppelin / ZEPPELIN-5072

Zeppelin on Kubernetes Hive connection bug (Zeppelin 0.9.0-preview2)


Details

    • Type: Bug
    • Status: Open
    • Priority: Blocker
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Interpreters
    • Labels: None
    • Environment:
      zeppelin0.9.0-preview2
      hadoop 2.6 (kerberos)
      hive 1.1.0 (kerberos)
    • Flagged: Important

    Description

      Hi, I'm having trouble connecting to Hive 1.1.0 with Zeppelin 0.9.0-preview2.

       

      Kerberos authentication is required to connect to my Hive.

       

      First of all, the code below caused a problem when connecting to Hive 1.1.0:
      jdbc/src/main/java/org/apache/zeppelin/jdbc/JDBCInterpreter.java
      lines (749 to 753)

      if (getJDBCConfiguration(user).getPropertyMap(dbPrefix).getProperty(URL_KEY)
          .startsWith("jdbc:hive2://")) {
        HiveUtils.startHiveMonitorThread(statement, context,
            Boolean.parseBoolean(getProperty("hive.log.display", "true")));
      }
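
      (For reference, a defensive variant of that block is sketched below. This only illustrates the workaround idea; the reporter simply commented the call out, and the "hive.monitor.enabled" property name is hypothetical, not an upstream option:)

      // Sketch only: null-check the URL property and make the Hive monitor
      // thread opt-out instead of starting it unconditionally for every
      // jdbc:hive2:// URL. "hive.monitor.enabled" is a hypothetical name.
      String url = getJDBCConfiguration(user).getPropertyMap(dbPrefix).getProperty(URL_KEY);
      if (url != null && url.startsWith("jdbc:hive2://")
          && Boolean.parseBoolean(getProperty("hive.monitor.enabled", "true"))) {
        HiveUtils.startHiveMonitorThread(statement, context,
            Boolean.parseBoolean(getProperty("hive.log.display", "true")));
      }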

       

      Because of this code there was a problem connecting to Hive 1.1.0, so I commented it out, rebuilt, and created a new Docker image. When I ran the server through bin/zeppelin.sh in that Docker image (not on Kubernetes, just a local Docker environment), it connected to the Kerberos-secured Hive without problems.

       

      However, there is a problem when running the same image in Kubernetes mode.

      The problem appeared on Kubernetes even though the image is identical to the Docker environment, so I tried to locate it through debug logging.

      The suspicious part is the following output that Kubernetes mode logs.

      I could confirm that 'sun.nio.ch.EPollSelectorImpl' receives no further updates.
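
      (The DEBUG output below comes from raising the log level. As a sketch, assuming the stock log4j 1.x setup that Zeppelin 0.9 ships with, this can be done in conf/log4j.properties; the conversion pattern matches the lines quoted here:)

      # conf/log4j.properties: raise logging to DEBUG (assumes the stock
      # log4j 1.x configuration shipped with Zeppelin 0.9)
      log4j.rootLogger = DEBUG, stdout
      log4j.appender.stdout = org.apache.log4j.ConsoleAppender
      log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
      log4j.appender.stdout.layout.ConversionPattern = %5p [%d] ({%t} %F[%M]:%L) - %m%n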

       

      Here is the zeppelin--*server.log:

      running in Kubernetes mode

      DEBUG [2020-09-26 10:42:30,150] ({SchedulerFactory2} RemoteInterpreterUtils.java[checkIfRemoteEndpointAccessible]:127) - Remote endpoint 'jdbc-kppvww.default.svc:12321' is not accessible (might be initializing): jdbc-kppvww.default.svc
      DEBUG [2020-09-26 10:42:31,151] ({SchedulerFactory2} RemoteInterpreterUtils.java[checkIfRemoteEndpointAccessible]:127) - Remote endpoint 'jdbc-kppvww.default.svc:12321' is not accessible (might be initializing): jdbc-kppvww.default.svc
      DEBUG [2020-09-26 10:42:32,151] ({SchedulerFactory2} RemoteInterpreterUtils.java[checkIfRemoteEndpointAccessible]:127) - Remote endpoint 'jdbc-kppvww.default.svc:12321' is not accessible (might be initializing): jdbc-kppvww.default.svc
      DEBUG [2020-09-26 10:42:33,152] ({SchedulerFactory2} RemoteInterpreterUtils.java[checkIfRemoteEndpointAccessible]:127) - Remote endpoint 'jdbc-kppvww.default.svc:12321' is not accessible (might be initializing): jdbc-kppvww.default.svc
      

       

      zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/remote/RemoteInterpreterUtils.java

      A problem occurs in the checkIfRemoteEndpointAccessible method of the Java file above, and 'sun.nio.ch.EPollSelectorImpl' does not appear to be updated any further.
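
      (For context, this check boils down to a plain TCP connect against the interpreter pod's Thrift port. Below is a minimal sketch of that kind of probe, not the exact upstream code; the host and port are taken from the log above:)

      import java.io.IOException;
      import java.net.InetSocketAddress;
      import java.net.Socket;

      // Minimal sketch of a TCP reachability probe in the spirit of
      // RemoteInterpreterUtils.checkIfRemoteEndpointAccessible; this is
      // not the upstream implementation.
      public final class EndpointProbe {
        public static boolean isAccessible(String host, int port) {
          try (Socket socket = new Socket()) {
            // Fail fast instead of hanging on an unreachable pod address.
            socket.connect(new InetSocketAddress(host, port), 1000);
            return true;
          } catch (IOException e) {
            return false;
          }
        }

        public static void main(String[] args) {
          // The interpreter service name from the Kubernetes-mode log.
          System.out.println(isAccessible("jdbc-kppvww.default.svc", 12321));
        }
      }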

       

      not running in Kubernetes mode (local mode, same Docker image)

      DEBUG [2020-09-26 10:30:57,418] ({qtp1412925683-13} QueuedThreadPool.java[run]:940) - ran CEP:SocketChannelEndPoint@2754cfd7{/172.17.0.1:60076<->/172.17.0.2:8080,OPEN,fill=FI,flush=-,to=0/300000}{io=0/1,kio=0,kro=1}->WebSocketServerConnection@69f97a84[s=ConnectionState@75a8b010[OPENED],f=org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection$Flusher@5756b226[IDLE][queueSize=0,aggregateSize=-1,terminated=null],g=Generator[SERVER,validating,+rsv1],p=Parser@311d31c8[ExtensionStack,s=START,c=0,len=8,f=null]]:runFillable:BLOCKING in QueuedThreadPool[qtp1412925683]@543788f3{STARTED,8<=8<=400,i=1,r=8,q=0}[ReservedThreadExecutor@53fe15ff{s=1/8,p=0}]
      DEBUG [2020-09-26 10:30:57,418] ({qtp1412925683-18} ManagedSelector.java[select]:476) - Selector sun.nio.ch.EPollSelectorImpl@615ea420 woken with none selected
      DEBUG [2020-09-26 10:30:57,418] ({qtp1412925683-18} ManagedSelector.java[select]:485) - Selector sun.nio.ch.EPollSelectorImpl@615ea420 woken up from select, 0/0/1 selected
      DEBUG [2020-09-26 10:30:57,418] ({qtp1412925683-18} ManagedSelector.java[select]:498) - Selector sun.nio.ch.EPollSelectorImpl@615ea420 processing 0 keys, 1 update

       

      Here, 'sun.nio.ch.EPollSelectorImpl' keeps receiving updates.

      As a result, there is a clear difference between the interpreter process logs of the plain-Docker environment and the Kubernetes environment.

       

      And here is the zeppelin-interpreter-*_process-jdbc.log:

      running in Kubernetes mode

      ERROR [2020-09-26 10:42:40,800] ({ParallelScheduler-Worker-1} JDBCInterpreter.java[open]:225) - zeppelin will be ignored. driver.zeppelin and zeppelin.url is mandatory.
      DEBUG [2020-09-26 10:42:40,800] ({ParallelScheduler-Worker-1} JDBCInterpreter.java[open]:235) - JDBC PropertiesMap: {default={url=jdbc:hive2://PERSONAL_INFO.io:10000/default;principal=hive/PERSONAL_INFO.io@*.HADOOP, completer.schemaFilters=, user=PERSONAL_INFO@PERSONAL_INFO.HADOOP, statementPrecode=, splitQueries=true, proxy.user.property=hive.server2.proxy.user, password=, driver=org.apache.hive.jdbc.HiveDriver, completer.ttlInSeconds=120, precode=}, common={max_count=1000}}
      DEBUG [2020-09-26 10:42:40,802] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:212) - key: zeppelin.jdbc.maxRows, value: 1000
      DEBUG [2020-09-26 10:42:40,805] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:205) - key: default.precode, value: 
      DEBUG [2020-09-26 10:42:40,807] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:205) - key: common.precode, value: null
      DEBUG [2020-09-26 10:42:40,813] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[jobRun]:775) - Script after hooks: show databases;
      DEBUG [2020-09-26 10:42:40,814] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:212) - key: zeppelin.jdbc.interpolation, value: false
      DEBUG [2020-09-26 10:42:40,816] ({ParallelScheduler-Worker-1} JDBCInterpreter.java[internalInterpret]:877) - Run SQL command 'show databases;'
      DEBUG [2020-09-26 10:42:40,816] ({ParallelScheduler-Worker-1} JDBCInterpreter.java[internalInterpret]:879) - DBPrefix: default, SQL command: 'show databases;'
      DEBUG [2020-09-26 10:42:40,819] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.auth.type, value: KERBEROS
      DEBUG [2020-09-26 10:42:41,124] ({ParallelScheduler-Worker-1} AbstractScheduler.java[runJob]:143) - Job Error, paragraph_1601116938240_457009787, %text java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.UserGroupInformation  at org.apache.zeppelin.jdbc.security.JDBCSecurityImpl.createSecureConfiguration(JDBCSecurityImpl.java:48)  at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:512)  at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:706)  at org.apache.zeppelin.jdbc.JDBCInterpreter.internalInterpret(JDBCInterpreter.java:881)  at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47)  at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)  at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776)  at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)  at org.apache.zeppelin.scheduler.Job.run(Job.java:172)  at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)  at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)  at java.lang.Thread.run(Thread.java:748)
       INFO [2020-09-26 10:42:41,126] ({ParallelScheduler-Worker-1} AbstractScheduler.java[runJob]:152) - Job paragraph_1601116938240_457009787 finished by scheduler org.apache.zeppelin.jdbc.JDBCInterpreter176877981
      DEBUG [2020-09-26 10:42:41,287] ({pool-2-thread-3} RemoteInterpreterServer.java[resourcePoolGetAll]:1112) - Request resourcePoolGetAll from ZeppelinServer
      

       

      not running in Kubernetes mode (local mode, same Docker image)

      ERROR [2020-09-26 10:30:58,754] ({ParallelScheduler-Worker-1} JDBCInterpreter.java[open]:225) - zeppelin will be ignored. driver.zeppelin and zeppelin.url is mandatory.
      ERROR [2020-09-26 10:30:58,754] ({ParallelScheduler-Worker-1} JDBCInterpreter.java[open]:225) - zeppelin will be ignored. driver.zeppelin and zeppelin.url is mandatory.
      DEBUG [2020-09-26 10:30:58,754] ({ParallelScheduler-Worker-1} JDBCInterpreter.java[open]:235) - JDBC PropertiesMap: {default={url=jdbc:hive2://PERSONAL_INFO.io:10000/default;principal=hive/PERSONAL_INFO.io@PERSONAL_INFO.HADOOP, completer.schemaFilters=, user=PERSONAL_INFO@PERSONAL_INFO.HADOOP, statementPrecode=, splitQueries=true, proxy.user.property=hive.server2.proxy.user, password=, driver=org.apache.hive.jdbc.HiveDriver, completer.ttlInSeconds=120, precode=}, common={max_count=1000}}
      DEBUG [2020-09-26 10:30:58,755] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:212) - key: zeppelin.jdbc.maxRows, value: 1000
      DEBUG [2020-09-26 10:30:58,757] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:205) - key: default.precode, value: 
      DEBUG [2020-09-26 10:30:58,758] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:205) - key: common.precode, value: null
      DEBUG [2020-09-26 10:30:58,760] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[jobRun]:775) - Script after hooks: show databases;
      DEBUG [2020-09-26 10:30:58,761] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:212) - key: zeppelin.jdbc.interpolation, value: false
      DEBUG [2020-09-26 10:30:58,761] ({ParallelScheduler-Worker-1} JDBCInterpreter.java[internalInterpret]:877) - Run SQL command 'show databases;'
      DEBUG [2020-09-26 10:30:58,761] ({ParallelScheduler-Worker-1} JDBCInterpreter.java[internalInterpret]:879) - DBPrefix: default, SQL command: 'show databases;'
      DEBUG [2020-09-26 10:30:58,763] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.auth.type, value: KERBEROS
      DEBUG [2020-09-26 10:30:58,764] ({pool-2-thread-2} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.concurrent.use, value: true
      DEBUG [2020-09-26 10:30:58,764] ({pool-2-thread-2} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.concurrent.max_connection, value: 10
      DEBUG [2020-09-26 10:30:58,781] ({pool-3-thread-1} MutableMetricsFactory.java[newForField]:42) - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
      DEBUG [2020-09-26 10:30:58,793] ({pool-3-thread-1} MutableMetricsFactory.java[newForField]:42) - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
      DEBUG [2020-09-26 10:30:58,793] ({pool-3-thread-1} MutableMetricsFactory.java[newForField]:42) - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
      DEBUG [2020-09-26 10:30:58,795] ({pool-3-thread-1} MetricsSystemImpl.java[register]:231) - UgiMetrics, User and group related metrics
      DEBUG [2020-09-26 10:30:58,984] ({pool-3-thread-1} Groups.java[getUserToGroupsMappingService]:278) -  Creating new Groups object
      DEBUG [2020-09-26 10:30:58,987] ({pool-3-thread-1} NativeCodeLoader.java[<clinit>]:46) - Trying to load the custom-built native-hadoop library...
      DEBUG [2020-09-26 10:30:58,987] ({pool-3-thread-1} NativeCodeLoader.java[<clinit>]:55) - Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
      DEBUG [2020-09-26 10:30:58,988] ({pool-3-thread-1} NativeCodeLoader.java[<clinit>]:56) - java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
       WARN [2020-09-26 10:30:58,988] ({pool-3-thread-1} NativeCodeLoader.java[<clinit>]:62) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      DEBUG [2020-09-26 10:30:58,988] ({pool-3-thread-1} JniBasedUnixGroupsMappingWithFallback.java[<init>]:41) - Falling back to shell based
      DEBUG [2020-09-26 10:30:58,989] ({pool-3-thread-1} JniBasedUnixGroupsMappingWithFallback.java[<init>]:45) - Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
      DEBUG [2020-09-26 10:30:59,052] ({pool-3-thread-1} Shell.java[checkHadoopHome]:320) - Failed to detect a valid hadoop home directory java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set. at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:302) at org.apache.hadoop.util.Shell.<clinit>(Shell.java:327) at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79) at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:104) at org.apache.hadoop.security.Groups.<init>(Groups.java:86) at org.apache.hadoop.security.Groups.<init>(Groups.java:66) at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271) at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:248) at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:763) at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748) at org.apache.hadoop.security.UserGroupInformation.isLoginKeytabBased(UserGroupInformation.java:1142) at org.apache.zeppelin.jdbc.JDBCInterpreter.runKerberosLogin(JDBCInterpreter.java:187) at org.apache.zeppelin.interpreter.KerberosInterpreter$1.call(KerberosInterpreter.java:135) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)
      DEBUG [2020-09-26 10:30:59,065] ({pool-3-thread-1} Shell.java[isSetsidSupported]:396) - setsid exited with exit code 0
      DEBUG [2020-09-26 10:30:59,065] ({pool-3-thread-1} Groups.java[<init>]:91) - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
      DEBUG [2020-09-26 10:30:59,070] ({pool-3-thread-1} UserGroupInformation.java[login]:209) - hadoop login
      DEBUG [2020-09-26 10:30:59,071] ({pool-3-thread-1} UserGroupInformation.java[commit]:144) - hadoop login commit
      DEBUG [2020-09-26 10:30:59,074] ({pool-3-thread-1} UserGroupInformation.java[commit]:174) - using local user:UnixPrincipal: root
      DEBUG [2020-09-26 10:30:59,074] ({pool-3-thread-1} UserGroupInformation.java[commit]:180) - Using user: "UnixPrincipal: root" with name root
      DEBUG [2020-09-26 10:30:59,074] ({pool-3-thread-1} UserGroupInformation.java[commit]:190) - User entry: "root"
      DEBUG [2020-09-26 10:30:59,075] ({pool-3-thread-1} UserGroupInformation.java[loginUserFromSubject]:799) - UGI loginUser:root (auth:SIMPLE)
       INFO [2020-09-26 10:30:59,075] ({pool-3-thread-1} KerberosInterpreter.java[call]:143) - runKerberosLogin failed for 1 time(s).
      DEBUG [2020-09-26 10:31:00,661] ({ParallelScheduler-Worker-1} UserGroupInformation.java[login]:209) - hadoop login
      DEBUG [2020-09-26 10:31:00,663] ({ParallelScheduler-Worker-1} UserGroupInformation.java[commit]:144) - hadoop login commit
      DEBUG [2020-09-26 10:31:00,663] ({ParallelScheduler-Worker-1} UserGroupInformation.java[commit]:158) - using kerberos user:PERSONAL_INFO@PERSONAL_INFO.HADOOP
      DEBUG [2020-09-26 10:31:00,663] ({ParallelScheduler-Worker-1} UserGroupInformation.java[commit]:180) - Using user: "PERSONAL_INFO@PERSONAL_INFO.HADOOP" with name PERSONAL_INFO@PERSONAL_INFO.HADOOP
      DEBUG [2020-09-26 10:31:00,663] ({ParallelScheduler-Worker-1} UserGroupInformation.java[commit]:190) - User entry: "PERSONAL_INFO@PERSONAL_INFO.HADOOP"
       INFO [2020-09-26 10:31:00,664] ({ParallelScheduler-Worker-1} UserGroupInformation.java[loginUserFromKeytab]:938) - Login successful for user PERSONAL_INFO@PERSONAL_INFO.HADOOP using keytab file /zeppelin/PERSONAL_INFO.PERSONAL_INFO.HADOOP.keytab
      DEBUG [2020-09-26 10:31:00,664] ({pool-3-thread-1} UserGroupInformation.java[reloginFromTicketCache]:1054) - Initiating logout for PERSONAL_INFO@PERSONAL_INFO.HADOOP
      DEBUG [2020-09-26 10:31:00,664] ({pool-3-thread-1} UserGroupInformation.java[logout]:217) - hadoop logout
      DEBUG [2020-09-26 10:31:00,664] ({pool-3-thread-1} UserGroupInformation.java[reloginFromTicketCache]:1066) - Initiating re-login for PERSONAL_INFO@PERSONAL_INFO.HADOOP
      DEBUG [2020-09-26 10:31:00,666] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.auth.kerberos.proxy.enable, value: null
      DEBUG [2020-09-26 10:31:00,666] ({pool-3-thread-1} UserGroupInformation.java[login]:209) - hadoop login
      DEBUG [2020-09-26 10:31:00,667] ({pool-3-thread-1} UserGroupInformation.java[commit]:144) - hadoop login commit
      DEBUG [2020-09-26 10:31:00,667] ({pool-3-thread-1} UserGroupInformation.java[commit]:149) - using existing subject:[PERSONAL_INFO@PERSONAL_INFO.HADOOP, UnixPrincipal: root, UnixNumericUserPrincipal: 0, UnixNumericGroupPrincipal [Primary Group]: 0]
       INFO [2020-09-26 10:31:00,667] ({pool-3-thread-1} KerberosInterpreter.java[call]:136) - Ran runKerberosLogin command successfully.
      DEBUG [2020-09-26 10:31:00,708] ({ParallelScheduler-Worker-1} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.maxConnLifetime, value: -1
       INFO [2020-09-26 10:31:00,740] ({ParallelScheduler-Worker-1} Utils.java[parseURL]:285) - Supplied authorities: PERSONAL_INFO.io:10000
       INFO [2020-09-26 10:31:00,740] ({ParallelScheduler-Worker-1} Utils.java[parseURL]:372) - Resolved authority: PERSONAL_INFO.io:10000
      DEBUG [2020-09-26 10:31:00,753] ({ParallelScheduler-Worker-1} HadoopThriftAuthBridge.java[loginUserHasCurrentAuthMethod]:155) - Current authMethod = KERBEROS
      DEBUG [2020-09-26 10:31:00,754] ({ParallelScheduler-Worker-1} HadoopThriftAuthBridge.java[createClientWithConf]:90) - Not setting UGI conf as passed-in authMethod of kerberos = current.
       INFO [2020-09-26 10:31:00,774] ({ParallelScheduler-Worker-1} HiveConnection.java[openTransport]:189) - Will try to open client transport with JDBC Uri: jdbc:hive2://PERSONAL_INFO.io:10000/default;principal=hive/PERSONAL_INFO.io@PERSONAL_INFO.HADOOP
      DEBUG [2020-09-26 10:31:00,775] ({ParallelScheduler-Worker-1} UserGroupInformation.java[logPrivilegedAction]:1652) - PrivilegedAction as:PERSONAL_INFO@PERSONAL_INFO.HADOOP (auth:KERBEROS) from:org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
      DEBUG [2020-09-26 10:31:00,776] ({ParallelScheduler-Worker-1} TSaslTransport.java[open]:261) - opening transport org.apache.thrift.transport.TSaslClientTransport@c2d98
      ERROR [2020-09-26 10:31:00,897] ({ParallelScheduler-Worker-1} TSaslTransport.java[open]:315) - SASL negotiation failure javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:190) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:163) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:208) at org.apache.commons.dbcp2.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:79) at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:205) at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:836) at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:434) at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:361) at org.apache.commons.dbcp2.PoolingDriver.connect(PoolingDriver.java:129) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:270) at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnectionFromPool(JDBCInterpreter.java:487) at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:520) at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:706) at org.apache.zeppelin.jdbc.JDBCInterpreter.internalInterpret(JDBCInterpreter.java:881) at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668) at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130) at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:162) at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122) at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:189) at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224) at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212) at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192) ... 36 more
      DEBUG [2020-09-26 10:31:00,900] ({ParallelScheduler-Worker-1} TSaslTransport.java[sendSaslMessage]:162) - CLIENT: Writing message with status BAD and payload length 19
       INFO [2020-09-26 10:31:00,901] ({ParallelScheduler-Worker-1} HiveConnection.java[openTransport]:194) - Could not open client transport with JDBC Uri: jdbc:hive2://PERSONAL_INFO.io:10000/default;principal=hive/PERSONAL_INFO.io@PERSONAL_INFO.HADOOP
      DEBUG [2020-09-26 10:31:00,905] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onUpdate]:957) - Output Update for index 0: 
      DEBUG [2020-09-26 10:31:00,930] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://PERSONAL_INFO.io:10000/default;principal=hive/PERSONAL_INFO.io@PERSONAL_INFO.HADOOP: GSS initiate failed
      DEBUG [2020-09-26 10:31:00,953] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:215)
      DEBUG [2020-09-26 10:31:00,953] ({pool-2-thread-2} RemoteInterpreterServer.java[resourcePoolGetAll]:1112) - Request resourcePoolGetAll from ZeppelinServerDEBUG [2020-09-26 10:31:00,955] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:163)
      DEBUG [2020-09-26 10:31:00,956] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
      DEBUG [2020-09-26 10:31:00,956] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at java.sql.DriverManager.getConnection(DriverManager.java:664)
      DEBUG [2020-09-26 10:31:00,957] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at java.sql.DriverManager.getConnection(DriverManager.java:208)
      DEBUG [2020-09-26 10:31:00,959] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.commons.dbcp2.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:79)
      DEBUG [2020-09-26 10:31:00,960] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:205)
      DEBUG [2020-09-26 10:31:00,960] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:836)
      DEBUG [2020-09-26 10:31:00,961] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:434)
      DEBUG [2020-09-26 10:31:00,961] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:361)
      DEBUG [2020-09-26 10:31:00,962] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.commons.dbcp2.PoolingDriver.connect(PoolingDriver.java:129)
      DEBUG [2020-09-26 10:31:00,963] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at java.sql.DriverManager.getConnection(DriverManager.java:664)
      DEBUG [2020-09-26 10:31:00,964] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at java.sql.DriverManager.getConnection(DriverManager.java:270)
      DEBUG [2020-09-26 10:31:00,964] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnectionFromPool(JDBCInterpreter.java:487)
      DEBUG [2020-09-26 10:31:00,965] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:520)
      DEBUG [2020-09-26 10:31:00,966] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:706)
      DEBUG [2020-09-26 10:31:00,967] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.jdbc.JDBCInterpreter.internalInterpret(JDBCInterpreter.java:881)
      DEBUG [2020-09-26 10:31:00,968] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47)
      DEBUG [2020-09-26 10:31:00,968] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
      DEBUG [2020-09-26 10:31:00,969] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776)
      DEBUG [2020-09-26 10:31:00,969] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
      DEBUG [2020-09-26 10:31:00,970] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
      DEBUG [2020-09-26 10:31:00,970] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
      DEBUG [2020-09-26 10:31:00,971] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
      DEBUG [2020-09-26 10:31:00,971] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      DEBUG [2020-09-26 10:31:00,971] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      DEBUG [2020-09-26 10:31:00,972] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at java.lang.Thread.run(Thread.java:748)
      DEBUG [2020-09-26 10:31:00,972] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append: Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
      DEBUG [2020-09-26 10:31:00,973] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
      DEBUG [2020-09-26 10:31:00,973] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
      DEBUG [2020-09-26 10:31:00,974] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
      DEBUG [2020-09-26 10:31:00,974] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
      DEBUG [2020-09-26 10:31:00,975] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
      DEBUG [2020-09-26 10:31:00,975] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at java.security.AccessController.doPrivileged(Native Method)
      DEBUG [2020-09-26 10:31:00,975] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at javax.security.auth.Subject.doAs(Subject.java:422)
      DEBUG [2020-09-26 10:31:00,976] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
      DEBUG [2020-09-26 10:31:00,976] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
      DEBUG [2020-09-26 10:31:00,977] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:190)
      DEBUG [2020-09-26 10:31:00,977] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[onAppend]:947) - Output Append:  ... 26 more
      DEBUG [2020-09-26 10:31:00,980] ({ParallelScheduler-Worker-1} RemoteInterpreterServer.java[jobRun]:795) - InterpreterResultMessage: %text java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://PERSONAL_INFO.io:10000/default;principal=hive/PERSONAL_INFO.io@PERSONAL_INFO.HADOOP: GSS initiate failed at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:215) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:163) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:208) at org.apache.commons.dbcp2.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:79) at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:205) at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:836) at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:434) at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:361) at org.apache.commons.dbcp2.PoolingDriver.connect(PoolingDriver.java:129) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:270) at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnectionFromPool(JDBCInterpreter.java:487) at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:520) at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:706) at org.apache.zeppelin.jdbc.JDBCInterpreter.internalInterpret(JDBCInterpreter.java:881) at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668) at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130) at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:190) ... 26 more
      DEBUG [2020-09-26 10:31:00,981] ({ParallelScheduler-Worker-1} AbstractScheduler.java[runJob]:143) - Job Error, paragraph_1601116251411_1220021722, %text java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://PERSONAL_INFO.io:10000/default;principal=hive/PERSONAL_INFO.io@PERSONAL_INFO.HADOOP: GSS initiate failed at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:215) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:163) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:208) at org.apache.commons.dbcp2.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:79) at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:205) at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:836) at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:434) at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:361) at org.apache.commons.dbcp2.PoolingDriver.connect(PoolingDriver.java:129) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:270) at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnectionFromPool(JDBCInterpreter.java:487) at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:520) at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:706) at org.apache.zeppelin.jdbc.JDBCInterpreter.internalInterpret(JDBCInterpreter.java:881) at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668) at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130) at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:190) ... 26 more
       INFO [2020-09-26 10:31:00,981] ({ParallelScheduler-Worker-1} AbstractScheduler.java[runJob]:152) - Job paragraph_1601116251411_1220021722 finished by scheduler org.apache.zeppelin.jdbc.JDBCInterpreter2082164389
      DEBUG [2020-09-26 10:31:01,027] ({pool-2-thread-1} RemoteInterpreterServer.java[resourcePoolGetAll]:1112) - Request resourcePoolGetAll from ZeppelinServer
      DEBUG [2020-09-26 10:31:02,483] ({pool-2-thread-1} RemoteInterpreterServer.java[interpret]:593) - st:show databases;
      DEBUG [2020-09-26 10:31:02,484] ({pool-2-thread-1} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.concurrent.use, value: true
      DEBUG [2020-09-26 10:31:02,485] ({pool-2-thread-1} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.concurrent.max_connection, value: 10
       INFO [2020-09-26 10:31:02,498] ({ParallelScheduler-Worker-2} AbstractScheduler.java[runJob]:125) - Job paragraph_1601116251411_1220021722 started by scheduler org.apache.zeppelin.jdbc.JDBCInterpreter2082164389
      DEBUG [2020-09-26 10:31:02,509] ({ParallelScheduler-Worker-2} RemoteInterpreterServer.java[jobRun]:775) - Script after hooks: show databases;
      DEBUG [2020-09-26 10:31:02,509] ({ParallelScheduler-Worker-2} Interpreter.java[getProperty]:212) - key: zeppelin.jdbc.interpolation, value: false
      DEBUG [2020-09-26 10:31:02,511] ({ParallelScheduler-Worker-2} JDBCInterpreter.java[internalInterpret]:877) - Run SQL command 'show databases;'
      DEBUG [2020-09-26 10:31:02,511] ({ParallelScheduler-Worker-2} JDBCInterpreter.java[internalInterpret]:879) - DBPrefix: default, SQL command: 'show databases;'
      DEBUG [2020-09-26 10:31:02,511] ({ParallelScheduler-Worker-2} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.auth.type, value: KERBEROS
      DEBUG [2020-09-26 10:31:02,583] ({pool-2-thread-2} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.concurrent.use, value: true
      DEBUG [2020-09-26 10:31:02,584] ({pool-2-thread-2} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.concurrent.max_connection, value: 10
      DEBUG [2020-09-26 10:31:02,724] ({ParallelScheduler-Worker-2} UserGroupInformation.java[login]:209) - hadoop login
      DEBUG [2020-09-26 10:31:02,725] ({ParallelScheduler-Worker-2} UserGroupInformation.java[commit]:144) - hadoop login commit
      DEBUG [2020-09-26 10:31:02,725] ({ParallelScheduler-Worker-2} UserGroupInformation.java[commit]:158) - using kerberos user:PERSONAL_INFO@PERSONAL_INFO.HADOOP
      DEBUG [2020-09-26 10:31:02,725] ({ParallelScheduler-Worker-2} UserGroupInformation.java[commit]:180) - Using user: "PERSONAL_INFO@PERSONAL_INFO.HADOOP" with name PERSONAL_INFO@PERSONAL_INFO.HADOOP
      DEBUG [2020-09-26 10:31:02,725] ({ParallelScheduler-Worker-2} UserGroupInformation.java[commit]:190) - User entry: "PERSONAL_INFO@PERSONAL_INFO.HADOOP"
       INFO [2020-09-26 10:31:02,726] ({ParallelScheduler-Worker-2} UserGroupInformation.java[loginUserFromKeytab]:938) - Login successful for user PERSONAL_INFO@PERSONAL_INFO.HADOOP using keytab file /zeppelin/PERSONAL_INFO.PERSONAL_INFO.HADOOP.keytab
      DEBUG [2020-09-26 10:31:02,726] ({ParallelScheduler-Worker-2} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.auth.kerberos.proxy.enable, value: null
      DEBUG [2020-09-26 10:31:02,727] ({ParallelScheduler-Worker-2} Interpreter.java[getProperty]:205) - key: zeppelin.jdbc.maxConnLifetime, value: -1
       INFO [2020-09-26 10:31:02,728] ({ParallelScheduler-Worker-2} Utils.java[parseURL]:285) - Supplied authorities: PERSONAL_INFO.io:10000
       INFO [2020-09-26 10:31:02,729] ({ParallelScheduler-Worker-2} Utils.java[parseURL]:372) - Resolved authority: PERSONAL_INFO.io:10000
      DEBUG [2020-09-26 10:31:02,729] ({ParallelScheduler-Worker-2} HadoopThriftAuthBridge.java[loginUserHasCurrentAuthMethod]:155) - Current authMethod = KERBEROS
      DEBUG [2020-09-26 10:31:02,729] ({ParallelScheduler-Worker-2} HadoopThriftAuthBridge.java[createClientWithConf]:90) - Not setting UGI conf as passed-in authMethod of kerberos = current.
       INFO [2020-09-26 10:31:02,730] ({ParallelScheduler-Worker-2} HiveConnection.java[openTransport]:189) - Will try to open client transport with JDBC Uri: jdbc:hive2://PERSONAL_INFO.io:10000/default;principal=hive/PERSONAL_INFO.io@PERSONAL_INFO.HADOOP
      DEBUG [2020-09-26 10:31:02,730] ({ParallelScheduler-Worker-2} UserGroupInformation.java[logPrivilegedAction]:1652) - PrivilegedAction as:PERSONAL_INFO@PERSONAL_INFO.HADOOP (auth:KERBEROS) from:org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
      DEBUG [2020-09-26 10:31:02,730] ({ParallelScheduler-Worker-2} TSaslTransport.java[open]:261) - opening transport org.apache.thrift.transport.TSaslClientTransport@11d7cb45
      DEBUG [2020-09-26 10:31:05,746] ({ParallelScheduler-Worker-2} TSaslClientTransport.java[handleSaslStartMessage]:96) - Sending mechanism name GSSAPI and initial response of length 585
      DEBUG [2020-09-26 10:31:05,746] ({ParallelScheduler-Worker-2} TSaslTransport.java[sendSaslMessage]:162) - CLIENT: Writing message with status START and payload length 6
      DEBUG [2020-09-26 10:31:05,746] ({ParallelScheduler-Worker-2} TSaslTransport.java[sendSaslMessage]:162) - CLIENT: Writing message with status OK and payload length 585
      DEBUG [2020-09-26 10:31:05,746] ({ParallelScheduler-Worker-2} TSaslTransport.java[open]:273) - CLIENT: Start message handled
      DEBUG [2020-09-26 10:31:05,931] ({ParallelScheduler-Worker-2} TSaslTransport.java[receiveSaslMessage]:206) - CLIENT: Received message with status OK and payload length 108
      DEBUG [2020-09-26 10:31:05,934] ({ParallelScheduler-Worker-2} TSaslTransport.java[sendSaslMessage]:162) - CLIENT: Writing message with status OK and payload length 0
      DEBUG [2020-09-26 10:31:06,112] ({ParallelScheduler-Worker-2} TSaslTransport.java[receiveSaslMessage]:206) - CLIENT: Received message with status OK and payload length 32
      DEBUG [2020-09-26 10:31:06,114] ({ParallelScheduler-Worker-2} TSaslTransport.java[sendSaslMessage]:162) - CLIENT: Writing message with status COMPLETE and payload length 32
      DEBUG [2020-09-26 10:31:06,115] ({ParallelScheduler-Worker-2} TSaslTransport.java[open]:296) - CLIENT: Main negotiation loop complete
      DEBUG [2020-09-26 10:31:06,115] ({ParallelScheduler-Worker-2} TSaslTransport.java[open]:306) - CLIENT: SASL Client receiving last message
      DEBUG [2020-09-26 10:31:06,225] ({ParallelScheduler-Worker-2} TSaslTransport.java[receiveSaslMessage]:206) - CLIENT: Received message with status COMPLETE and payload length 0
      DEBUG [2020-09-26 10:31:06,244] ({ParallelScheduler-Worker-2} TSaslTransport.java[flush]:498) - writing data length: 71
      DEBUG [2020-09-26 10:31:06,619] ({ParallelScheduler-Worker-2} TSaslTransport.java[readFrame]:459) - CLIENT: reading data length: 109
      DEBUG [2020-09-26 10:31:06,643] ({ParallelScheduler-Worker-2} Interpreter.java[getProperty]:205) - key: default.statementPrecode, value: 
      DEBUG [2020-09-26 10:31:06,649] ({ParallelScheduler-Worker-2} TSaslTransport.java[flush]:498) - writing data length: 121
      DEBUG [2020-09-26 10:31:06,719] ({ParallelScheduler-Worker-2} TSaslTransport.java[readFrame]:459) - CLIENT: reading data length: 109
      DEBUG [2020-09-26 10:31:06,733] ({ParallelScheduler-Worker-2} TSaslTransport.java[flush]:498) - writing data length: 100
      DEBUG [2020-09-26 10:31:06,776] ({ParallelScheduler-Worker-2} TSaslTransport.java[readFrame]:459) - CLIENT: reading data length: 321
      DEBUG [2020-09-26 10:31:06,790] ({ParallelScheduler-Worker-2} TSaslTransport.java[flush]:498) - writing data length: 102
      DEBUG [2020-09-26 10:31:06,925] ({ParallelScheduler-Worker-2} TSaslTransport.java[readFrame]:459) - CLIENT: reading data length: 136
      DEBUG [2020-09-26 10:31:06,963] ({ParallelScheduler-Worker-2} TSaslTransport.java[flush]:498) - writing data length: 112
      DEBUG [2020-09-26 10:31:07,121] ({ParallelScheduler-Worker-2} TSaslTransport.java[readFrame]:459) - CLIENT: reading data length: 3283
      DEBUG [2020-09-26 10:31:07,149] ({ParallelScheduler-Worker-2} HiveQueryResultSet.java[next]:381) - Fetched row string:
      

       

      In the non-Kubernetes mode, I found that the main SASL negotiation loop in TSaslTransport.java finally completed successfully and the query result set was received from Hive.
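
      (For context, here is a minimal sketch of the Kerberos JDBC handshake the interpreter performs, using the classes visible in the stack traces above; the principal, keytab path, and URL are placeholders, and this is not the Zeppelin source:)

      import java.sql.Connection;
      import java.sql.DriverManager;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.security.UserGroupInformation;

      // Sketch: obtain a Kerberos TGT from a keytab, then open a secured
      // HiveServer2 JDBC connection. This mirrors what the logs above show
      // (loginUserFromKeytab, then HiveConnection.openTransport). All
      // credentials and hosts below are placeholders.
      public class HiveKerberosSketch {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          conf.set("hadoop.security.authentication", "kerberos");
          UserGroupInformation.setConfiguration(conf);
          UserGroupInformation.loginUserFromKeytab(
              "user@EXAMPLE.HADOOP", "/zeppelin/user.keytab");
          String url = "jdbc:hive2://hive.example.io:10000/default;"
              + "principal=hive/hive.example.io@EXAMPLE.HADOOP";
          try (Connection conn = DriverManager.getConnection(url)) {
            // Without a valid TGT this fails with "GSS initiate failed".
            conn.createStatement().execute("show databases");
          }
        }
      }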

       

      I'd appreciate your help. Thank you for reading it.

       

People

    Assignee: Unassigned
    Reporter: lim se yoon (shaun.glass)
    Votes: 0
    Watchers: 2