Apache Hudi · HUDI-6983 (sub-task of HUDI-6982: Run LST benchmark and Collect performance stats)

Error Report: more than one row returned by a subquery used as an expression


Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    Description

      Exception in thread "main" java.lang.RuntimeException: Thread did not finish correctly
          at com.microsoft.lst_bench.common.LSTBenchmarkExecutor.checkResults(LSTBenchmarkExecutor.java:167)
          at com.microsoft.lst_bench.common.LSTBenchmarkExecutor.execute(LSTBenchmarkExecutor.java:121)
          at com.microsoft.lst_bench.Driver.main(Driver.java:147)
      Caused by: java.util.concurrent.ExecutionException: java.sql.SQLException: org.apache.hive.service.cli.HiveSQLException: Error running query: java.lang.IllegalStateException: more than one row returned by a subquery used as an expression:
      Subquery subquery#40371, [id=#49903]
      +- AdaptiveSparkPlan isFinalPlan=true
         +- == Final Plan ==
            *(5) Project [d_week_seq#40616]
            +- *(5) Filter (isnotnull(d_date#40614) AND (d_date#40614 = 2000-02-12))
               +- *(5) Scan MergeOnReadSnapshotRelation(org.apache.spark.sql.SQLContext@1407eedc,Map(path -> s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim, hoodie.write.lock.zookeeper.url -> ip-10-0-86-217.us-west-2.compute.internal, hoodie.write.lock.zookeeper.base_path -> /hudi, hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://ip-10-0-86-217.us-west-2.compute.internal:10000, hoodie.datasource.query.type -> snapshot, hoodie.cleaner.policy.failed.writes -> EAGER, hoodie.write.lock.zookeeper.port -> 2181, hoodie.write.lock.provider -> org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider, hoodie.write.concurrency.mode -> single_writer),HoodieTableMetaClient{basePath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim', metaPath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim/.hoodie', tableType=MERGE_ON_READ},List(),None,None) hudi_tpcds.date_dim[d_week_seq#40616,d_date#40614] PushedFilters: [IsNotNull(d_date), EqualTo(d_date,2000-02-12)], ReadSchema: struct<d_week_seq:int,d_date:date>
         +- == Initial Plan ==
            Project [d_week_seq#40616]
            +- Filter (isnotnull(d_date#40614) AND (d_date#40614 = 2000-02-12))
               +- Scan MergeOnReadSnapshotRelation(org.apache.spark.sql.SQLContext@1407eedc,Map(path -> s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim, hoodie.write.lock.zookeeper.url -> ip-10-0-86-217.us-west-2.compute.internal, hoodie.write.lock.zookeeper.base_path -> /hudi, hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://ip-10-0-86-217.us-west-2.compute.internal:10000, hoodie.datasource.query.type -> snapshot, hoodie.cleaner.policy.failed.writes -> EAGER, hoodie.write.lock.zookeeper.port -> 2181, hoodie.write.lock.provider -> org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider, hoodie.write.concurrency.mode -> single_writer),HoodieTableMetaClient{basePath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim', metaPath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim/.hoodie', tableType=MERGE_ON_READ},List(),None,None) hudi_tpcds.date_dim[d_week_seq#40616,d_date#40614] PushedFilters: [IsNotNull(d_date), EqualTo(d_date,2000-02-12)], ReadSchema: struct<d_week_seq:int,d_date:date>
          at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:44)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:230)
          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
          at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:79)
          at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:63)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:230)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:225)
          at java.base/java.security.AccessController.doPrivileged(Native Method)
          at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
          at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:239)
          at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
          at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
          at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
          at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
          at java.base/java.lang.Thread.run(Thread.java:829)
      Caused by: java.lang.IllegalStateException: more than one row returned by a subquery used as an expression:
      Subquery subquery#40371, [id=#49903]
      +- AdaptiveSparkPlan isFinalPlan=true
         +- == Final Plan ==
            *(5) Project [d_week_seq#40616]
            +- *(5) Filter (isnotnull(d_date#40614) AND (d_date#40614 = 2000-02-12))
               +- *(5) Scan MergeOnReadSnapshotRelation(org.apache.spark.sql.SQLContext@1407eedc,Map(path -> s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim, hoodie.write.lock.zookeeper.url -> ip-10-0-86-217.us-west-2.compute.internal, hoodie.write.lock.zookeeper.base_path -> /hudi, hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://ip-10-0-86-217.us-west-2.compute.internal:10000, hoodie.datasource.query.type -> snapshot, hoodie.cleaner.policy.failed.writes -> EAGER, hoodie.write.lock.zookeeper.port -> 2181, hoodie.write.lock.provider -> org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider, hoodie.write.concurrency.mode -> single_writer),HoodieTableMetaClient{basePath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim', metaPath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim/.hoodie', tableType=MERGE_ON_READ},List(),None,None) hudi_tpcds.date_dim[d_week_seq#40616,d_date#40614] PushedFilters: [IsNotNull(d_date), EqualTo(d_date,2000-02-12)], ReadSchema: struct<d_week_seq:int,d_date:date>
         +- == Initial Plan ==
            Project [d_week_seq#40616]
            +- Filter (isnotnull(d_date#40614) AND (d_date#40614 = 2000-02-12))
               +- Scan MergeOnReadSnapshotRelation(org.apache.spark.sql.SQLContext@1407eedc,Map(path -> s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim, hoodie.write.lock.zookeeper.url -> ip-10-0-86-217.us-west-2.compute.internal, hoodie.write.lock.zookeeper.base_path -> /hudi, hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://ip-10-0-86-217.us-west-2.compute.internal:10000, hoodie.datasource.query.type -> snapshot, hoodie.cleaner.policy.failed.writes -> EAGER, hoodie.write.lock.zookeeper.port -> 2181, hoodie.write.lock.provider -> org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider, hoodie.write.concurrency.mode -> single_writer),HoodieTableMetaClient{basePath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim', metaPath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim/.hoodie', tableType=MERGE_ON_READ},List(),None,None) hudi_tpcds.date_dim[d_week_seq#40616,d_date#40614] PushedFilters: [IsNotNull(d_date), EqualTo(d_date,2000-02-12)], ReadSchema: struct<d_week_seq:int,d_date:date>
          at org.apache.spark.sql.execution.ScalarSubquery.updateResult(subquery.scala:131)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$waitForSubqueries$1(SparkPlan.scala:281)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$waitForSubqueries$1$adapted(SparkPlan.scala:280)
          at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
          at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
          at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
          at org.apache.spark.sql.execution.SparkPlan.waitForSubqueries(SparkPlan.scala:280)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:250)
          at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
          at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:248)
          at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:96)
          at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:96)
          at org.apache.spark.sql.execution.FilterExec.produce(basicPhysicalOperators.scala:274)
          at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:57)
          at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:101)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:251)
          at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
          at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:248)
          at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:96)
          at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:96)
          at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:43)
          at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:816)
          at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:918)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:213)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:251)
          at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
          at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:248)
          at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:209)
          at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:359)
          at org.apache.spark.sql.execution.SparkPlan.executeCollectIterator(SparkPlan.scala:458)
          at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.org$apache$spark$sql$execution$exchange$BroadcastExchangeExec$$doComputeRelation(BroadcastExchangeExec.scala:179)
          at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anon$1.doCompute(BroadcastExchangeExec.scala:172)
          at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anon$1.doCompute(BroadcastExchangeExec.scala:168)
          at org.apache.spark.sql.execution.AsyncDriverOperation.$anonfun$compute$1(AsyncDriverOperation.scala:73)
          at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
          at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
          at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:216)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withExecutionId$1(SQLExecution.scala:199)
          at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:245)
          at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:196)
          at org.apache.spark.sql.execution.AsyncDriverOperation.compute(AsyncDriverOperation.scala:67)
          at org.apache.spark.sql.execution.AsyncDriverOperation.$anonfun$computeFuture$1(AsyncDriverOperation.scala:53)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$1(SQLExecution.scala:267)
          at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
          at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
          at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
          at java.base/java.lang.Thread.run(Thread.java:829)
          at org.apache.spark.sql.execution.adaptive.AdaptiveExecutor.checkNoFailures(AdaptiveExecutor.scala:154)
          at org.apache.spark.sql.execution.adaptive.AdaptiveExecutor.doRun(AdaptiveExecutor.scala:88)
          at org.apache.spark.sql.execution.adaptive.AdaptiveExecutor.tryRunningAndGetFuture(AdaptiveExecutor.scala:66)
          at org.apache.spark.sql.execution.adaptive.AdaptiveExecutor.execute(AdaptiveExecutor.scala:57)
          at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$getFinalPhysicalPlan$1(AdaptiveSparkPlanExec.scala:249)
          at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
          at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.getFinalPhysicalPlan(AdaptiveSparkPlanExec.scala:248)
          at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.withFinalPlanUpdate(AdaptiveSparkPlanExec.scala:521)
          at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.executeCollect(AdaptiveSparkPlanExec.scala:483)
          at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3932)
          at org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:3161)
          at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3922)
          at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:554)
          at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3920)
          at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
          at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
          at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:114)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$7(SQLExecution.scala:139)
          at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
          at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:139)
          at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:245)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:138)
          at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
          at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
          at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3920)
          at org.apache.spark.sql.Dataset.collect(Dataset.scala:3161)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:300)
          ... 16 more
          at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
          at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
          at com.microsoft.lst_bench.common.LSTBenchmarkExecutor.checkResults(LSTBenchmarkExecutor.java:165)
          ... 2 more
      Caused by: java.sql.SQLException: org.apache.hive.service.cli.HiveSQLException: Error running query: java.lang.IllegalStateException: more than one row returned by a subquery used as an expression:
      Subquery subquery#40371, [id=#49903]
      +- AdaptiveSparkPlan isFinalPlan=true
         +- == Final Plan ==
            *(5) Project [d_week_seq#40616]
            +- *(5) Filter (isnotnull(d_date#40614) AND (d_date#40614 = 2000-02-12))
               +- *(5) Scan MergeOnReadSnapshotRelation(org.apache.spark.sql.SQLContext@1407eedc,Map(path -> s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim, hoodie.write.lock.zookeeper.url -> ip-10-0-86-217.us-west-2.compute.internal, hoodie.write.lock.zookeeper.base_path -> /hudi, hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://ip-10-0-86-217.us-west-2.compute.internal:10000, hoodie.datasource.query.type -> snapshot, hoodie.cleaner.policy.failed.writes -> EAGER, hoodie.write.lock.zookeeper.port -> 2181, hoodie.write.lock.provider -> org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider, hoodie.write.concurrency.mode -> single_writer),HoodieTableMetaClient{basePath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim', metaPath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim/.hoodie', tableType=MERGE_ON_READ},List(),None,None) hudi_tpcds.date_dim[d_week_seq#40616,d_date#40614] PushedFilters: [IsNotNull(d_date), EqualTo(d_date,2000-02-12)], ReadSchema: struct<d_week_seq:int,d_date:date>
         +- == Initial Plan ==
            Project [d_week_seq#40616]
            +- Filter (isnotnull(d_date#40614) AND (d_date#40614 = 2000-02-12))
               +- Scan MergeOnReadSnapshotRelation(org.apache.spark.sql.SQLContext@1407eedc,Map(path -> s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim, hoodie.write.lock.zookeeper.url -> ip-10-0-86-217.us-west-2.compute.internal, hoodie.write.lock.zookeeper.base_path -> /hudi, hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://ip-10-0-86-217.us-west-2.compute.internal:10000, hoodie.datasource.query.type -> snapshot, hoodie.cleaner.policy.failed.writes -> EAGER, hoodie.write.lock.zookeeper.port -> 2181, hoodie.write.lock.provider -> org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider, hoodie.write.concurrency.mode -> single_writer),HoodieTableMetaClient{basePath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim', metaPath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim/.hoodie', tableType=MERGE_ON_READ},List(),None,None) hudi_tpcds.date_dim[d_week_seq#40616,d_date#40614] PushedFilters: [IsNotNull(d_date), EqualTo(d_date,2000-02-12)], ReadSchema: struct<d_week_seq:int,d_date:date>
          at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:44)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:230)
          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
          at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:79)
          at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:63)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:230)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:225)
          at java.base/java.security.AccessController.doPrivileged(Native Method)
          at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
          at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:239)
          at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
          at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
          at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
          at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
          at java.base/java.lang.Thread.run(Thread.java:829)
      Caused by: java.lang.IllegalStateException: more than one row returned by a subquery used as an expression:
      Subquery subquery#40371, [id=#49903]
      +- AdaptiveSparkPlan isFinalPlan=true
         +- == Final Plan ==
            *(5) Project [d_week_seq#40616]
            +- *(5) Filter (isnotnull(d_date#40614) AND (d_date#40614 = 2000-02-12))
               +- *(5) Scan MergeOnReadSnapshotRelation(org.apache.spark.sql.SQLContext@1407eedc,Map(path -> s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim, hoodie.write.lock.zookeeper.url -> ip-10-0-86-217.us-west-2.compute.internal, hoodie.write.lock.zookeeper.base_path -> /hudi, hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://ip-10-0-86-217.us-west-2.compute.internal:10000, hoodie.datasource.query.type -> snapshot, hoodie.cleaner.policy.failed.writes -> EAGER, hoodie.write.lock.zookeeper.port -> 2181, hoodie.write.lock.provider -> org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider, hoodie.write.concurrency.mode -> single_writer),HoodieTableMetaClient{basePath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim', metaPath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim/.hoodie', tableType=MERGE_ON_READ},List(),None,None) hudi_tpcds.date_dim[d_week_seq#40616,d_date#40614] PushedFilters: [IsNotNull(d_date), EqualTo(d_date,2000-02-12)], ReadSchema: struct<d_week_seq:int,d_date:date>
         +- == Initial Plan ==
            Project [d_week_seq#40616]
            +- Filter (isnotnull(d_date#40614) AND (d_date#40614 = 2000-02-12))
               +- Scan MergeOnReadSnapshotRelation(org.apache.spark.sql.SQLContext@1407eedc,Map(path -> s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim, hoodie.write.lock.zookeeper.url -> ip-10-0-86-217.us-west-2.compute.internal, hoodie.write.lock.zookeeper.base_path -> /hudi, hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://ip-10-0-86-217.us-west-2.compute.internal:10000, hoodie.datasource.query.type -> snapshot, hoodie.cleaner.policy.failed.writes -> EAGER, hoodie.write.lock.zookeeper.port -> 2181, hoodie.write.lock.provider -> org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider, hoodie.write.concurrency.mode -> single_writer),HoodieTableMetaClient{basePath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim', metaPath='s3a://rxusandbox-us-west-2/testcases/lstbench/hudi/sf_1/date_dim/.hoodie', tableType=MERGE_ON_READ},List(),None,None) hudi_tpcds.date_dim[d_week_seq#40616,d_date#40614] PushedFilters: [IsNotNull(d_date), EqualTo(d_date,2000-02-12)], ReadSchema: struct<d_week_seq:int,d_date:date>
          at org.apache.spark.sql.execution.ScalarSubquery.updateResult(subquery.scala:131)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$waitForSubqueries$1(SparkPlan.scala:281)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$waitForSubqueries$1$adapted(SparkPlan.scala:280)
          at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
          at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
          at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
          at org.apache.spark.sql.execution.SparkPlan.waitForSubqueries(SparkPlan.scala:280)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:250)
          at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
          at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:248)
          at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:96)
          at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:96)
          at org.apache.spark.sql.execution.FilterExec.produce(basicPhysicalOperators.scala:274)
          at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:57)
          at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:101)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:251)
          at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
          at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:248)
          at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:96)
          at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:96)
          at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:43)
          at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:816)
          at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:918)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:213)
          at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:251)
          at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
          at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:248)
          at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:209)
          at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:359)
          at org.apache.spark.sql.execution.SparkPlan.executeCollectIterator(SparkPlan.scala:458)
          at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.org$apache$spark$sql$execution$exchange$BroadcastExchangeExec$$doComputeRelation(BroadcastExchangeExec.scala:179)
          at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anon$1.doCompute(BroadcastExchangeExec.scala:172)
          at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anon$1.doCompute(BroadcastExchangeExec.scala:168)
          at org.apache.spark.sql.execution.AsyncDriverOperation.$anonfun$compute$1(AsyncDriverOperation.scala:73)
          at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
          at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
          at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:216)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withExecutionId$1(SQLExecution.scala:199)
          at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:245)
          at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:196)
          at org.apache.spark.sql.execution.AsyncDriverOperation.compute(AsyncDriverOperation.scala:67)
          at org.apache.spark.sql.execution.AsyncDriverOperation.$anonfun$computeFuture$1(AsyncDriverOperation.scala:53)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$1(SQLExecution.scala:267)
          at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
          at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
          at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
          at java.base/java.lang.Thread.run(Thread.java:829)
          at org.apache.spark.sql.execution.adaptive.AdaptiveExecutor.checkNoFailures(AdaptiveExecutor.scala:154)
          at org.apache.spark.sql.execution.adaptive.AdaptiveExecutor.doRun(AdaptiveExecutor.scala:88)
          at org.apache.spark.sql.execution.adaptive.AdaptiveExecutor.tryRunningAndGetFuture(AdaptiveExecutor.scala:66)
          at org.apache.spark.sql.execution.adaptive.AdaptiveExecutor.execute(AdaptiveExecutor.scala:57)
          at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$getFinalPhysicalPlan$1(AdaptiveSparkPlanExec.scala:249)
          at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
          at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.getFinalPhysicalPlan(AdaptiveSparkPlanExec.scala:248)
          at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.withFinalPlanUpdate(AdaptiveSparkPlanExec.scala:521)
          at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.executeCollect(AdaptiveSparkPlanExec.scala:483)
          at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3932)
          at org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:3161)
          at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3922)
          at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:554)
          at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3920)
          at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
          at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
          at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:114)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$7(SQLExecution.scala:139)
          at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
          at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:139)
          at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:245)
          at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:138)
          at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
          at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
          at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3920)
          at org.apache.spark.sql.Dataset.collect(Dataset.scala:3161)
          at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:300)
          ... 16 more
          at org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:401)
          at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:266)
          at com.microsoft.lst_bench.common.LSTBenchmarkExecutor$Worker.executeTask(LSTBenchmarkExecutor.java:274)
          at com.microsoft.lst_bench.common.LSTBenchmarkExecutor$Worker.call(LSTBenchmarkExecutor.java:248)
          at com.microsoft.lst_bench.common.LSTBenchmarkExecutor$Worker.call(LSTBenchmarkExecutor.java:222)
          at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
      2023-10-25T09:31:42,051  INFO [main] telemetry.JDBCTelemetryRegistry: Creating new logging tables...
      2023-10-25T09:31:45,007  INFO [main] telemetry.JDBCTelemetryRegistry: Logging tables created.
      2023-10-25T09:31:45,192  INFO [main] common.LSTBenchmarkExecutor: Running experiment: spark_hud_sf_1, run-id: d8a9deed-f145-4e19-a27c-8c7f9010cdb5
      2023-10-25T09:31:45,194  INFO [main] common.LSTBenchmarkExecutor: Experiment start time: 2023_10_25_09_31_45_192
      2023-10-25T09:31:45,194  INFO [main] common.LSTBenchmarkExecutor: Starting repetition: 0
      2023-10-25T09:31:45,195  INFO [main] common.LSTBenchmarkExecutor: Running setup phase...
      2023-10-25T09:31:52,929  INFO [main] telemetry.JDBCTelemetryRegistry: Flushing events to database...
      2023-10-25T09:32:00,776  INFO [main] telemetry.JDBCTelemetryRegistry: Events flushed to database.
      2023-10-25T09:32:00,786  INFO [main] common.LSTBenchmarkExecutor: Phase setup finished in 7 seconds.
      2023-10-25T09:32:00,786  INFO [main] common.LSTBenchmarkExecutor: Running setup_data_maintenance phase...
      2023-10-25T09:32:36,548  INFO [main] telemetry.JDBCTelemetryRegistry: Flushing events to database...
      2023-10-25T09:32:40,796  INFO [main] telemetry.JDBCTelemetryRegistry: Events flushed to database.
      2023-10-25T09:32:40,803  INFO [main] common.LSTBenchmarkExecutor: Phase setup_data_maintenance finished in 35 seconds.
      2023-10-25T09:32:40,803  INFO [main] common.LSTBenchmarkExecutor: Running init phase...
      2023-10-25T09:32:45,813  INFO [main] telemetry.JDBCTelemetryRegistry: Flushing events to database...
      2023-10-25T09:32:49,138  INFO [main] telemetry.JDBCTelemetryRegistry: Events flushed to database.
      2023-10-25T09:32:49,144  INFO [main] common.LSTBenchmarkExecutor: Phase init finished in 5 seconds.
      2023-10-25T09:32:49,144  INFO [main] common.LSTBenchmarkExecutor: Running build phase...
      2023-10-25T09:47:42,392  INFO [main] telemetry.JDBCTelemetryRegistry: Flushing events to database...
      2023-10-25T09:47:46,585  INFO [main] telemetry.JDBCTelemetryRegistry: Events flushed to database.
      2023-10-25T09:47:46,592  INFO [main] common.LSTBenchmarkExecutor: Phase build finished in 893 seconds.
      2023-10-25T09:47:46,592  INFO [main] common.LSTBenchmarkExecutor: Running data_maintenance_1 phase...
      2023-10-25T10:21:18,295  INFO [main] telemetry.JDBCTelemetryRegistry: Flushing events to database...
      2023-10-25T10:21:21,984  INFO [main] telemetry.JDBCTelemetryRegistry: Events flushed to database.
      2023-10-25T10:21:21,994  INFO [main] common.LSTBenchmarkExecutor: Phase data_maintenance_1 finished in 2011 seconds.
      2023-10-25T10:21:21,994  INFO [main] common.LSTBenchmarkExecutor: Running single_user_2_0 phase...
      2023-10-25T10:33:21,651 ERROR [pool-2-thread-1] common.LSTBenchmarkExecutor: Exception executing statement: query58.sql_0
      2023-10-25T10:33:21,652 ERROR [pool-2-thread-1] common.LSTBenchmarkExecutor: Exception executing file: query58.sql
      2023-10-25T10:33:21,652 ERROR [pool-2-thread-1] common.LSTBenchmarkExecutor: Exception executing task: single_user_0
      2023-10-25T10:33:21,657 ERROR [pool-2-thread-1] common.LSTBenchmarkExecutor: Exception executing session: 0
      2023-10-25T10:33:21,658 ERROR [main] common.LSTBenchmarkExecutor: Exception executing phase: single_user_2_0
      2023-10-25T10:33:21,658  INFO [main] telemetry.JDBCTelemetryRegistry: Flushing events to database...
      2023-10-25T10:33:25,448  INFO [main] telemetry.JDBCTelemetryRegistry: Events flushed to database.
      2023-10-25T10:33:25,460 ERROR [main] common.LSTBenchmarkExecutor: Exception executing experiment: spark_hud_sf_1
      2023-10-25T10:33:25,472  INFO [main] telemetry.JDBCTelemetryRegistry: Flushing events to database...
      2023-10-25T10:33:28,343  INFO [main] telemetry.JDBCTelemetryRegistry: Events flushed to database.
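      Analysis: the failing statement is TPC-DS query58.sql (per the log above), and the plan printed with the exception is its scalar subquery over date_dim: Project [d_week_seq] over Filter (d_date = 2000-02-12). Spark requires a scalar subquery to return at most one row, so the IllegalStateException means the MERGE_ON_READ snapshot read of hudi_tpcds.date_dim returned more than one row for d_date = 2000-02-12, i.e. the snapshot exposes duplicate records for that date. Below is a minimal sketch of the failing pattern plus a hypothetical duplicate check; the diagnostic query is illustrative and not taken from the benchmark scripts.

      -- Scalar-subquery shape from query58: the inner query must return
      -- at most one row, otherwise Spark raises IllegalStateException.
      SELECT d_date
      FROM   hudi_tpcds.date_dim
      WHERE  d_week_seq = (SELECT d_week_seq
                           FROM   hudi_tpcds.date_dim
                           WHERE  d_date = '2000-02-12');

      -- Hypothetical diagnostic: count snapshot rows for the key date;
      -- a count above 1 reproduces the "more than one row" condition.
      SELECT d_date, COUNT(*) AS row_cnt
      FROM   hudi_tpcds.date_dim
      WHERE  d_date = DATE '2000-02-12'
      GROUP  BY d_date;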

          People

            Assignee: Unassigned
            Reporter: Lin Liu (linliu)
