HIVE-7439 (sub-task of HIVE-7292: Hive on Spark)

Spark job monitoring and error reporting [Spark Branch]

    Details

      Description

      After Hive submits a job to the Spark cluster, we need to report the job's progress, such as the percentage done, to the user. This is especially important for long-running queries. Moreover, if there is an error during job submission or execution, it's also crucial for Hive to fetch the error log and/or stack trace and feed it back to the user.

      Please refer to the design doc on the wiki for more information.
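      The monitoring described above boils down to periodically rendering per-stage task counters as a percentage line on the console. A minimal sketch of that rendering step (Python is used here purely for illustration; the actual implementation is Java inside Hive, and the `StageProgress` shape and `format_progress` name are invented for this example):

```python
from dataclasses import dataclass

@dataclass
class StageProgress:
    # Hypothetical per-stage counters, mirroring what a job monitor might poll.
    name: str
    total_tasks: int
    completed_tasks: int
    failed_tasks: int = 0

def format_progress(stages):
    """Render one console line summarizing percent done per stage."""
    parts = []
    for s in stages:
        pct = 100 * s.completed_tasks // s.total_tasks if s.total_tasks else 0
        parts.append(f"{s.name}: {pct}%")
    return "  ".join(parts)

# Example: a map stage half done, reduce not yet started.
line = format_progress([
    StageProgress("Stage-1 (map)", 10, 5),
    StageProgress("Stage-2 (reduce)", 4, 0),
])
print(line)  # Stage-1 (map): 50%  Stage-2 (reduce): 0%
```

      The real monitor would refresh such a line on an interval until the job finishes or fails, at which point it switches to reporting the error log or stack trace instead.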


      1. hive on spark job status.PNG
        34 kB
        Chengxiang Li
      2. HIVE-7439.3-spark.patch
        27 kB
        Chengxiang Li
      3. HIVE-7439.3-spark.patch
        27 kB
        Xuefu Zhang
      4. HIVE-7439.2-spark.patch
        27 kB
        Chengxiang Li
      5. HIVE-7439.2-spark.patch
        27 kB
        Xuefu Zhang
      6. HIVE-7439.1-spark.patch
        26 kB
        Chengxiang Li

        Issue Links

          Activity

          Lefty Leverenz added a comment -

          Adding TODOC15 (which means TODOC1.1.0).
          Lefty Leverenz added a comment -

          Hm, maybe we want some documentation for this after all. (See doc comments on HIVE-8834: https://issues.apache.org/jira/browse/HIVE-8834?focusedCommentId=14228268&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14228268 .)
          Chengxiang Li added a comment -

          Thank you, guys. I've created HIVE-8455 to track the Spark job progress format printing issue.
          Lefty Leverenz added a comment -

          Thanks Xuefu Zhang, I just wanted to be sure.
          Xuefu Zhang added a comment -

          Hi Lefty Leverenz, this is not required to get started, but it's useful for users to get an idea of the job progress. Hopefully, the information presented to the user is self-evident. Thus, I think we are okay for now.
          Lefty Leverenz added a comment -

          Does this need to be documented in the wiki? (Hive on Spark: Getting Started)
          Rui Li added a comment -

          Brock Noland - Yep, that'll be fine.
          Xuefu Zhang added a comment -

          I committed this to the Spark branch. Thanks to Chengxiang for this great contribution.

          Chengxiang Li, could you create a follow-up JIRA for what Rui mentioned? It doesn't have to be resolved right away though.
          Brock Noland added a comment -

          > Only a minor point: can we print the meaning of all these statistics at the top so that users can better understand what they see?

          Agreed. Perhaps we should do this as a follow-on? It'd be great to get this one in.
          Rui Li added a comment -

          +1, patch looks good to me.

          Only a minor point: can we print the meaning of all these statistics at the top so that users can better understand what they see?
          Xuefu Zhang added a comment -

          Thanks, Rui Li. We can make changes accordingly once SPARK-3902 settles in.

          +1 for the patch.
          Rui Li added a comment -

          The async APIs are stabilized in SPARK-3902.
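          SPARK-3902 concerns Spark's async action APIs (FutureAction and friends), which a monitor can poll instead of blocking on the result. The poll-until-done pattern can be sketched with a plain Python future as a stand-in (the `run_job` and `monitor` names are invented for illustration; the real code is Java against Spark's API):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_job(n):
    # Stand-in for an async Spark action; sleeps briefly per "task".
    done = 0
    for _ in range(n):
        time.sleep(0.01)
        done += 1
    return done

def monitor(future, poll_interval=0.02):
    """Poll a future until it completes; return its result and the poll count.

    A real monitor would report progress on each poll instead of just counting.
    """
    polls = 0
    while not future.done():
        polls += 1
        time.sleep(poll_interval)
    return future.result(), polls

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(run_job, 5)
    result, polls = monitor(fut)
print(result)  # 5
```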
          Xuefu Zhang added a comment -

          The patch looks good to me. Rui Li, could you also take a look, since you are familiar with the Spark-related code? Thanks.
          Hive QA added a comment -

          Overall: -1 at least one tests failed

          Here are the results of testing the latest attachment:
          https://issues.apache.org/jira/secure/attachment/12674525/HIVE-7439.3-spark.patch

          ERROR: -1 due to 2 failed/errored test(s), 4341 tests executed
          Failed tests:

          org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
          org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_parallel
          

          Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/209/testReport
          Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/209/console
          Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-209/

          Messages:

          Executing org.apache.hive.ptest.execution.PrepPhase
          Executing org.apache.hive.ptest.execution.ExecutionPhase
          Executing org.apache.hive.ptest.execution.ReportingPhase
          Tests exited with: TestsFailedException: 2 tests failed
          

          This message is automatically generated.

          ATTACHMENT ID: 12674525

          Xuefu Zhang added a comment -

          Thanks, Chengxiang Li. I will do that shortly. I will reload your patch after the update.
          Chengxiang Li added a comment -

          It seems we need to update the Spark jar in the Maven repository, as we depend on SPARK-3446 in this patch.
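          For context, SPARK-3446 is about asking a Spark FutureAction which job IDs it spawned, which is what the `jobIds()` call in the patch relies on, and why compilation fails against an older Spark jar (see the QA log below: "cannot find symbol ... method jobIds()"). A toy Python stand-in for that shape, with all names invented for illustration:

```python
import itertools

class TrackedFuture:
    """Toy stand-in for a future that also records the IDs of the jobs it
    spawned, so a monitor knows which jobs to query for progress."""

    _next_id = itertools.count(1)  # shared counter handing out job IDs

    def __init__(self):
        self._job_ids = []

    def start_job(self):
        # Pretend an action kicked off one more job; remember its ID.
        jid = next(self._next_id)
        self._job_ids.append(jid)
        return jid

    def job_ids(self):
        """Return the IDs of all jobs this future has spawned so far."""
        return list(self._job_ids)

fut = TrackedFuture()
fut.start_job()
fut.start_job()
print(fut.job_ids())  # [1, 2]
```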
          Hive QA added a comment -

          Overall: -1 no tests executed

          Here are the results of testing the latest attachment:
          https://issues.apache.org/jira/secure/attachment/12674471/HIVE-7439.3-spark.patch

          Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/208/testReport
          Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/208/console
          Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-208/

          Messages:

          **** This message was trimmed, see log for full details ****
          warning(200): IdentifiersParser.g:68:4: 
          Decision can match input such as "LPAREN KW_CASE KW_IF" using multiple alternatives: 1, 2
          
          As a result, alternative(s) 2 were disabled for that input
          warning(200): IdentifiersParser.g:68:4: 
          Decision can match input such as "LPAREN LPAREN KW_IF" using multiple alternatives: 1, 2
          
          As a result, alternative(s) 2 were disabled for that input
          warning(200): IdentifiersParser.g:115:5: 
          Decision can match input such as "KW_CLUSTER KW_BY LPAREN" using multiple alternatives: 1, 2
          
          As a result, alternative(s) 2 were disabled for that input
          warning(200): IdentifiersParser.g:127:5: 
          Decision can match input such as "KW_PARTITION KW_BY LPAREN" using multiple alternatives: 1, 2
          
          As a result, alternative(s) 2 were disabled for that input
          warning(200): IdentifiersParser.g:138:5: 
          Decision can match input such as "KW_DISTRIBUTE KW_BY LPAREN" using multiple alternatives: 1, 2
          
          As a result, alternative(s) 2 were disabled for that input
          warning(200): IdentifiersParser.g:149:5: 
          Decision can match input such as "KW_SORT KW_BY LPAREN" using multiple alternatives: 1, 2
          
          As a result, alternative(s) 2 were disabled for that input
          warning(200): IdentifiersParser.g:166:7: 
          Decision can match input such as "STAR" using multiple alternatives: 1, 2
          
          As a result, alternative(s) 2 were disabled for that input
          warning(200): IdentifiersParser.g:179:5: 
          Decision can match input such as "KW_STRUCT" using multiple alternatives: 4, 6
          
          As a result, alternative(s) 6 were disabled for that input
          warning(200): IdentifiersParser.g:179:5: 
          Decision can match input such as "KW_ARRAY" using multiple alternatives: 2, 6
          
          As a result, alternative(s) 6 were disabled for that input
          warning(200): IdentifiersParser.g:179:5: 
          Decision can match input such as "KW_UNIONTYPE" using multiple alternatives: 5, 6
          
          As a result, alternative(s) 6 were disabled for that input
          warning(200): IdentifiersParser.g:261:5: 
          Decision can match input such as "KW_TRUE" using multiple alternatives: 3, 8
          
          As a result, alternative(s) 8 were disabled for that input
          warning(200): IdentifiersParser.g:261:5: 
          Decision can match input such as "KW_NULL" using multiple alternatives: 1, 8
          
          As a result, alternative(s) 8 were disabled for that input
          warning(200): IdentifiersParser.g:261:5: 
          Decision can match input such as "KW_FALSE" using multiple alternatives: 3, 8
          
          As a result, alternative(s) 8 were disabled for that input
          warning(200): IdentifiersParser.g:261:5: 
          Decision can match input such as "KW_DATE StringLiteral" using multiple alternatives: 2, 3
          
          As a result, alternative(s) 3 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "KW_BETWEEN KW_MAP LPAREN" using multiple alternatives: 8, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT KW_INTO" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_UNION KW_ALL" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_SORT KW_BY" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_CLUSTER KW_BY" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_LATERAL KW_VIEW" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_MAP LPAREN" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_DISTRIBUTE KW_BY" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_ORDER KW_BY" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_GROUP KW_BY" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:393:5: 
          Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT KW_OVERWRITE" using multiple alternatives: 2, 9
          
          As a result, alternative(s) 9 were disabled for that input
          warning(200): IdentifiersParser.g:518:5: 
          Decision can match input such as "{AMPERSAND..BITWISEXOR, DIV..DIVIDE, EQUAL..EQUAL_NS, GREATERTHAN..GREATERTHANOREQUALTO, KW_AND, KW_ARRAY, KW_BETWEEN..KW_BOOLEAN, KW_CASE, KW_DOUBLE, KW_FLOAT, KW_IF, KW_IN, KW_INT, KW_LIKE, KW_MAP, KW_NOT, KW_OR, KW_REGEXP, KW_RLIKE, KW_SMALLINT, KW_STRING..KW_STRUCT, KW_TINYINT, KW_UNIONTYPE, KW_WHEN, LESSTHAN..LESSTHANOREQUALTO, MINUS..NOTEQUAL, PLUS, STAR, TILDE}" using multiple alternatives: 1, 3
          
          As a result, alternative(s) 3 were disabled for that input
          [INFO] 
          [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-exec ---
          Downloading: http://conjars.org/repo/org/apache/optiq/optiq/0.9.1-incubating-SNAPSHOT/maven-metadata.xml
          Downloading: https://repository.jboss.org/nexus/content/groups/public/net/hydromatic/linq4j/0.4/linq4j-0.4.pom
          Downloading: http://repo.maven.apache.org/maven2/net/hydromatic/linq4j/0.4/linq4j-0.4.pom
          Downloading: https://repository.jboss.org/nexus/content/groups/public/net/hydromatic/quidem/0.1.1/quidem-0.1.1.pom
          Downloading: http://repo.maven.apache.org/maven2/net/hydromatic/quidem/0.1.1/quidem-0.1.1.pom
          Downloading: https://repository.jboss.org/nexus/content/groups/public/eigenbase/eigenbase-properties/1.1.4/eigenbase-properties-1.1.4.pom
          Downloading: http://repo.maven.apache.org/maven2/eigenbase/eigenbase-properties/1.1.4/eigenbase-properties-1.1.4.pom
          [INFO] 
          [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-exec ---
          [INFO] Using 'UTF-8' encoding to copy filtered resources.
          [INFO] Copying 2 resources
          [INFO] Copying 3 resources
          [INFO] 
          [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-exec ---
          [INFO] Executing tasks
          
          main:
          [INFO] Executed tasks
          [INFO] 
          [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-exec ---
          [INFO] Compiling 1966 source files to /data/hive-ptest/working/apache-svn-spark-source/ql/target/classes
          [INFO] -------------------------------------------------------------
          [WARNING] COMPILATION WARNING : 
          [INFO] -------------------------------------------------------------
          [WARNING] /data/hive-ptest/working/apache-svn-spark-source/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java: Some input files use or override a deprecated API.
          [WARNING] /data/hive-ptest/working/apache-svn-spark-source/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java: Recompile with -Xlint:deprecation for details.
          [WARNING] /data/hive-ptest/working/apache-svn-spark-source/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java: Some input files use unchecked or unsafe operations.
          [WARNING] /data/hive-ptest/working/apache-svn-spark-source/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java: Recompile with -Xlint:unchecked for details.
          [INFO] 4 warnings 
          [INFO] -------------------------------------------------------------
          [INFO] -------------------------------------------------------------
          [ERROR] COMPILATION ERROR : 
          [INFO] -------------------------------------------------------------
          [ERROR] /data/hive-ptest/working/apache-svn-spark-source/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkClient.java:[188,25] cannot find symbol
            symbol:   method jobIds()
            location: variable future of type org.apache.spark.FutureAction
          [INFO] 1 error
          [INFO] -------------------------------------------------------------
          [INFO] ------------------------------------------------------------------------
          [INFO] Reactor Summary:
          [INFO] 
          [INFO] Hive .............................................. SUCCESS [2.586s]
          [INFO] Hive Shims Common ................................. SUCCESS [2.311s]
          [INFO] Hive Shims 0.20 ................................... SUCCESS [0.902s]
          [INFO] Hive Shims Secure Common .......................... SUCCESS [1.510s]
          [INFO] Hive Shims 0.20S .................................. SUCCESS [0.784s]
          [INFO] Hive Shims 0.23 ................................... SUCCESS [2.060s]
          [INFO] Hive Shims ........................................ SUCCESS [0.306s]
          [INFO] Hive Common ....................................... SUCCESS [4.577s]
          [INFO] Hive Serde ........................................ SUCCESS [4.103s]
          [INFO] Hive Metastore .................................... SUCCESS [11.257s]
          [INFO] Hive Ant Utilities ................................ SUCCESS [0.357s]
          [INFO] Hive Query Language ............................... FAILURE [24.465s]
          [INFO] Hive Service ...................................... SKIPPED
          [INFO] Hive Accumulo Handler ............................. SKIPPED
          [INFO] Hive JDBC ......................................... SKIPPED
          [INFO] Hive Beeline ...................................... SKIPPED
          [INFO] Hive CLI .......................................... SKIPPED
          [INFO] Hive Contrib ...................................... SKIPPED
          [INFO] Hive HBase Handler ................................ SKIPPED
          [INFO] Hive HCatalog ..................................... SKIPPED
          [INFO] Hive HCatalog Core ................................ SKIPPED
          [INFO] Hive HCatalog Pig Adapter ......................... SKIPPED
          [INFO] Hive HCatalog Server Extensions ................... SKIPPED
          [INFO] Hive HCatalog Webhcat Java Client ................. SKIPPED
          [INFO] Hive HCatalog Webhcat ............................. SKIPPED
          [INFO] Hive HCatalog Streaming ........................... SKIPPED
          [INFO] Hive HWI .......................................... SKIPPED
          [INFO] Hive ODBC ......................................... SKIPPED
          [INFO] Hive Shims Aggregator ............................. SKIPPED
          [INFO] Hive TestUtils .................................... SKIPPED
          [INFO] Hive Packaging .................................... SKIPPED
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD FAILURE
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 56.202s
          [INFO] Finished at: Mon Oct 13 04:05:28 EDT 2014
          [INFO] Final Memory: 86M/980M
          [INFO] ------------------------------------------------------------------------
          [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hive-exec: Compilation failure
          [ERROR] /data/hive-ptest/working/apache-svn-spark-source/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkClient.java:[188,25] cannot find symbol
          [ERROR] symbol:   method jobIds()
          [ERROR] location: variable future of type org.apache.spark.FutureAction
          [ERROR] -> [Help 1]
          [ERROR] 
          [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
          [ERROR] Re-run Maven using the -X switch to enable full debug logging.
          [ERROR] 
          [ERROR] For more information about the errors and possible solutions, please read the following articles:
          [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
          [ERROR] 
          [ERROR] After correcting the problems, you can resume the build with the command
          [ERROR]   mvn <goals> -rf :hive-exec

          This message is automatically generated.

          ATTACHMENT ID: 12674471

          Chengxiang Li added a comment -

          Updated the patch, as it conflicted with the last commit.

          Hive QA added a comment -

          Overall: -1 no tests executed

          Here are the results of testing the latest attachment:
          https://issues.apache.org/jira/secure/attachment/12674413/HIVE-7439.2-spark.patch

          Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/207/testReport
          Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/207/console
          Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-207/

          Messages:

          Executing org.apache.hive.ptest.execution.PrepPhase
          Tests exited with: NonZeroExitCodeException
          Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
          + export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
          + JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
          + export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
          + PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
          + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
          + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
          + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
          + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
          + cd /data/hive-ptest/working/
          + tee /data/hive-ptest/logs/PreCommit-HIVE-SPARK-Build-207/source-prep.txt
          + [[ false == \t\r\u\e ]]
          + mkdir -p maven ivy
          + [[ svn = \s\v\n ]]
          + [[ -n '' ]]
          + [[ -d apache-svn-spark-source ]]
          + [[ ! -d apache-svn-spark-source/.svn ]]
          + [[ ! -d apache-svn-spark-source ]]
          + cd apache-svn-spark-source
          + svn revert -R .
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/MapTran.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTran.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/IdentityTran.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/ReduceTran.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/MapInput.java'
          ++ svn status --no-ignore
          ++ egrep -v '^X|^Performing status on external'
          ++ awk '{print $2}'
          + rm -rf target datanucleus.log ant/target shims/target shims/0.20/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/common-secure/target packaging/target hbase-handler/target testutils/target jdbc/target metastore/target itests/target itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target itests/hive-minikdc/target itests/hive-unit/target itests/custom-serde/target itests/util/target itests/qtest-spark/target hcatalog/target hcatalog/core/target hcatalog/streaming/target hcatalog/server-extensions/target hcatalog/hcatalog-pig-adapter/target hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target accumulo-handler/target hwi/target common/target common/src/gen service/target contrib/target serde/target beeline/target cli/target odbc/target ql/dependency-reduced-pom.xml ql/target ql/src/java/org/apache/hadoop/hive/ql/exec/spark/ShuffleTran.java
          + svn update
          A    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/ShuffleTran.java
          U    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/ReduceTran.java
          U    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/MapInput.java
          U    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java
          U    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/MapTran.java
          U    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTran.java
          U    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/IdentityTran.java
          
          Fetching external item into 'hcatalog/src/test/e2e/harness'
          Updated external to revision 1631185.
          
          Updated to revision 1631185.
          + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
          + patchFilePath=/data/hive-ptest/working/scratch/build.patch
          + [[ -f /data/hive-ptest/working/scratch/build.patch ]]
          + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
          + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch
          The patch does not appear to apply with p0, p1, or p2
          + exit 1
          '
          

          This message is automatically generated.

          ATTACHMENT ID: 12674413

          Xuefu Zhang added a comment -

          Reattach the same patch #2 to trigger the test run.

          Chengxiang Li added a comment -

          Added support for printing job stage info ahead of the job state.

          Chengxiang Li added a comment -

          Yes, that makes sense; I will add the stage info in the next patch.

          Xuefu Zhang added a comment -

          Hi Chengxiang Li, this looks very nice. I'm just wondering whether we can also show the total number of stages in the progressor. Otherwise, a user sees 29/29, thinks the job is done, and then finds there are five more stages to go.

          Chengxiang Li added a comment -

          Hive on Spark query status format:

          A_B: C(+D-E)/F

          identifier   description
          A            stage id
          B            stage attempt id
          C            finished task count
          D            running task count
          E            failed task count
          F            total task count
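To make the format above concrete, here is a minimal, hypothetical Java sketch (the class and method names are illustrative, not Hive's actual code) that renders one stage's counters in the A_B: C(+D-E)/F shape:

```java
// Hypothetical illustration of the "A_B: C(+D-E)/F" progress format
// described above; this is not Hive's real implementation.
public class StageProgressFormat {

    // stageId_attemptId: finished(+running-failed)/total
    static String format(int stageId, int attemptId,
                         int finished, int running, int failed, int total) {
        return String.format("%d_%d: %d(+%d-%d)/%d",
                stageId, attemptId, finished, running, failed, total);
    }

    public static void main(String[] args) {
        // Stage 2, attempt 0: 29 tasks finished, 5 running, 0 failed, 34 total
        System.out.println(format(2, 0, 29, 5, 0, 34));
    }
}
```

A stage with 29 of 34 tasks finished and 5 still running would print as `2_0: 29(+5-0)/34`, which matches the 29/29-style console lines discussed above.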
          Chengxiang Li added a comment -

          I identified Hive's requirements for Spark job status and the console printing logic, and implemented status collection through SparkProgressListener here.
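As a rough, Spark-free sketch of the bookkeeping such a progress listener performs (all names below are illustrative assumptions, not the actual SparkProgressListener API): callbacks fired on stage and task events update per-stage counters, which the console printer then renders in the A_B: C(+D-E)/F format.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: mimics the kind of per-stage bookkeeping a
// Spark job-progress listener would do. Not the real SparkProgressListener.
public class ProgressTracker {

    // Mutable counters for one stage attempt.
    static class StageCounts {
        final int totalTasks;
        int finished, running, failed;
        StageCounts(int totalTasks) { this.totalTasks = totalTasks; }
    }

    private final Map<String, StageCounts> stages = new ConcurrentHashMap<>();

    // Called when a stage is submitted (analogous to onStageSubmitted).
    public void stageSubmitted(int stageId, int attemptId, int totalTasks) {
        stages.put(stageId + "_" + attemptId, new StageCounts(totalTasks));
    }

    // Called when a task starts (analogous to onTaskStart).
    public void taskStarted(int stageId, int attemptId) {
        stages.get(stageId + "_" + attemptId).running++;
    }

    // Called when a task ends (analogous to onTaskEnd).
    public void taskEnded(int stageId, int attemptId, boolean succeeded) {
        StageCounts c = stages.get(stageId + "_" + attemptId);
        c.running--;
        if (succeeded) c.finished++; else c.failed++;
    }

    // Renders one stage in the "A_B: C(+D-E)/F" console format.
    public String report(int stageId, int attemptId) {
        String key = stageId + "_" + attemptId;
        StageCounts c = stages.get(key);
        return String.format("%s: %d(+%d-%d)/%d",
                key, c.finished, c.running, c.failed, c.totalTasks);
    }
}
```

The console printer only ever reads these counters periodically, so a concurrent map with simple per-event updates is enough for display purposes.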

          Brock Noland added a comment -

          I think that we'll need the API from HIVE-7874 to do this work.


            People

            • Assignee: Chengxiang Li
            • Reporter: Xuefu Zhang
            • Votes: 0
            • Watchers: 6

              Dates

              • Created:
                Updated:
                Resolved:

                Development