2021-10-15 01:43:15,659 INFO [pool-1-thread-1] application.SparkApplication : Executor task org.apache.kylin.engine.spark.job.CubeBuildJob with args : {"distMetaUrl":"kylin_metadata@hdfs,path=hdfs://master/kylin/kylin_metadata/kylin_test/job_tmp/7141e9e5-526d-4d0a-9c97-5c4f212ddc0c-01/meta","submitter":"ADMIN","dataRangeEnd":"9223372036854775807","targetModel":"13692406-a75e-90cb-c7a3-53084cd7749f","dataRangeStart":"0","project":"kylin_test","className":"org.apache.kylin.engine.spark.job.CubeBuildJob","segmentName":"FULL_BUILD","parentId":"7141e9e5-526d-4d0a-9c97-5c4f212ddc0c","jobId":"7141e9e5-526d-4d0a-9c97-5c4f212ddc0c","outputMetaUrl":"kylin_metadata@jdbc,url=jdbc:mysql://localhost:3306/kylin,username=root,password=******,maxActive=10,maxIdle=10","segmentId":"2c4154a5-0ea3-6ddb-98ff-849cba4de4e5","cuboidsNum":"7","cubeName":"testCube","jobType":"BUILD","cubeId":"4c05965c-c337-151d-84bf-49755f204794","segmentIds":"2c4154a5-0ea3-6ddb-98ff-849cba4de4e5"}
2021-10-15 01:43:15,664 INFO [pool-1-thread-1] utils.MetaDumpUtil : Ready to load KylinConfig from uri: kylin_metadata@hdfs,path=hdfs://master/kylin/kylin_metadata/kylin_test/job_tmp/7141e9e5-526d-4d0a-9c97-5c4f212ddc0c-01/meta
2021-10-15 01:43:15,793 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.metadata.url.identifier : kylin_metadata
2021-10-15 01:43:15,801 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.log.spark-executor-properties-file : /opt/kylin/conf/spark-executor-log4j.properties
2021-10-15 01:43:15,802 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.source.provider.0 : org.apache.kylin.engine.spark.source.HiveSource
2021-10-15 01:43:15,808 INFO [pool-1-thread-1] application.SparkApplication : Start set spark conf automatically.
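The distMetaUrl and outputMetaUrl arguments above follow Kylin's metadata URL syntax: an identifier, then the storage type, then comma-separated connection properties (identifier@hdfs,path=... for the job's temporary metadata dump, identifier@jdbc,url=... for the primary store). As a hedged sketch only, the same JDBC store would normally be declared in kylin.properties roughly as follows; the values are copied from the logged arguments, while the property names are the usual Kylin 4 ones and are not themselves shown in this log:

    kylin.metadata.url=kylin_metadata@jdbc,url=jdbc:mysql://localhost:3306/kylin,username=root,password=******,maxActive=10,maxIdle=10
    kylin.env.hdfs-working-dir=hdfs://master/kylin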
2021-10-15 01:43:17,115 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-10-15 01:43:17,181 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-10-15 01:43:17,262 INFO [pool-1-thread-1] job.CubeBuildJob : The maximum number of tasks required to run the job is 2.0 2021-10-15 01:43:17,262 INFO [pool-1-thread-1] job.CubeBuildJob : require cores: 0 2021-10-15 01:43:17,312 INFO [pool-1-thread-1] application.SparkApplication : Exist count distinct measure: false 2021-10-15 01:43:17,407 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stdout {"scheduler":{"schedulerInfo":{"type":"capacityScheduler","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"queueName":"root","queues":{"queue":[{"type":"capacitySchedulerLeafQueueInfo","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"absoluteCapacity":100.0,"absoluteMaxCapacity":100.0,"absoluteUsedCapacity":0.0,"numApplications":0,"queueName":"default","state":"RUNNING","resourcesUsed":{"memory":0,"vCores":0},"hideReservationQueues":false,"nodeLabels":["*"],"numActiveApplications":0,"numPendingApplications":0,"numContainers":0,"maxApplications":10000,"maxApplicationsPerUser":10000,"userLimit":100,"users":null,"userLimitFactor":1.0,"AMResourceLimit":{"memory":1024,"vCores":1},"usedAMResource":{"memory":0,"vCores":0},"userAMResourceLimit":{"memory":1024,"vCores":1},"preemptionDisabled":true}]}}}} 2021-10-15 01:43:17,409 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr % Total % Received % Xferd Average Speed Time Time Time Current 2021-10-15 01:43:17,409 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr Dload Upload Total Spent Left Speed 2021-10-15 01:43:17,409 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 2021-10-15 01:43:17,409 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2021-10-15 01:43:17,409 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 100 821 0 821 0 0 120k 0 --:--:-- --:--:-- --:--:-- 133k 2021-10-15 01:43:17,410 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : Thread wait for executing command curl -k --negotiate -u : "http://0.0.0.0:8088/ws/v1/cluster/scheduler" 2021-10-15 01:43:17,438 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Queue available capacity: 1.0. 2021-10-15 01:43:17,438 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Current queue used memory is 0, seem available resource as infinite. 2021-10-15 01:43:17,439 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Cluster available capacity: 1.0. 2021-10-15 01:43:17,441 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Capacity actual available resource: AvailableResource(ResourceInfo(2147483647,2147483647),ResourceInfo(2147483647,2147483647)). 
2021-10-15 01:43:17,491 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stdout {"scheduler":{"schedulerInfo":{"type":"capacityScheduler","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"queueName":"root","queues":{"queue":[{"type":"capacitySchedulerLeafQueueInfo","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"absoluteCapacity":100.0,"absoluteMaxCapacity":100.0,"absoluteUsedCapacity":0.0,"numApplications":0,"queueName":"default","state":"RUNNING","resourcesUsed":{"memory":0,"vCores":0},"hideReservationQueues":false,"nodeLabels":["*"],"numActiveApplications":0,"numPendingApplications":0,"numContainers":0,"maxApplications":10000,"maxApplicationsPerUser":10000,"userLimit":100,"users":null,"userLimitFactor":1.0,"AMResourceLimit":{"memory":1024,"vCores":1},"usedAMResource":{"memory":0,"vCores":0},"userAMResourceLimit":{"memory":1024,"vCores":1},"preemptionDisabled":true}]}}}} 2021-10-15 01:43:17,491 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr % Total % Received % Xferd Average Speed Time Time Time Current 2021-10-15 01:43:17,491 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr Dload Upload Total Spent Left Speed 2021-10-15 01:43:17,491 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 2021-10-15 01:43:17,491 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2021-10-15 01:43:17,491 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 100 821 0 821 0 0 144k 0 --:--:-- --:--:-- --:--:-- 160k 2021-10-15 01:43:17,491 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : Thread wait for executing command curl -k --negotiate -u : "http://0.0.0.0:8088/ws/v1/cluster/scheduler" 2021-10-15 01:43:17,495 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Queue available capacity: 1.0. 2021-10-15 01:43:17,495 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Current queue used memory is 0, seem available resource as infinite. 2021-10-15 01:43:17,495 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Cluster available capacity: 1.0. 2021-10-15 01:43:17,495 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Capacity actual available resource: AvailableResource(ResourceInfo(2147483647,2147483647),ResourceInfo(2147483647,2147483647)). 2021-10-15 01:43:17,498 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.executor.memory = 1GB. 2021-10-15 01:43:17,498 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: count_distinct = false. 2021-10-15 01:43:17,498 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.executor.cores = 1. 2021-10-15 01:43:17,498 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.executor.memoryOverhead = 512MB. 2021-10-15 01:43:17,498 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.executor.instances = 5. 2021-10-15 01:43:17,498 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.yarn.queue = default. 2021-10-15 01:43:17,498 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.sql.shuffle.partitions = 2. 2021-10-15 01:43:17,499 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.yarn.queue=default. 2021-10-15 01:43:17,499 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.history.fs.logDirectory=hdfs:///kylin/spark-history. 
2021-10-15 01:43:17,499 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.driver.extraJavaOptions=-XX:+CrashOnOutOfMemoryError.
2021-10-15 01:43:17,499 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.master=yarn.
2021-10-15 01:43:17,499 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.executor.extraJavaOptions=-Dfile.encoding=UTF-8 -Dhdp.version=current -Dlog4j.configuration=spark-executor-log4j.properties -Dlog4j.debug -Dkylin.hdfs.working.dir=hdfs://master/kylin/kylin_metadata/ -Dkylin.metadata.identifier=kylin_metadata -Dkylin.spark.category=job -Dkylin.spark.project=kylin_test -Dkylin.spark.identifier=7141e9e5-526d-4d0a-9c97-5c4f212ddc0c -Dkylin.spark.jobName=7141e9e5-526d-4d0a-9c97-5c4f212ddc0c-01 -Duser.timezone=America/New_York.
2021-10-15 01:43:17,499 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.hadoop.yarn.timeline-service.enabled=false.
2021-10-15 01:43:17,500 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.eventLog.enabled=true.
2021-10-15 01:43:17,500 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.eventLog.dir=hdfs:///kylin/spark-history.
2021-10-15 01:43:17,500 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.submit.deployMode=client.
2021-10-15 01:43:17,521 INFO [pool-1-thread-1] util.TimeZoneUtils : System timezone set to America/New_York, TimeZoneId: America/New_York.
2021-10-15 01:43:17,521 INFO [pool-1-thread-1] application.SparkApplication : Sleep for random seconds to avoid submitting too many spark job at the same time.
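SparkConfHelper has sized this build very small (1 GB executors with 512 MB overhead, 1 core, 5 instances, 2 shuffle partitions), and SparkApplication then forces yarn-client mode plus the event-log settings. If the automatic sizing is not suitable, the same knobs can be pinned per installation; a minimal sketch, assuming the standard Kylin 4 kylin.engine.spark-conf.* prefix in kylin.properties (the values below are simply the ones the helper chose in this run, not recommendations):

    kylin.engine.spark-conf.spark.executor.memory=1g
    kylin.engine.spark-conf.spark.executor.memoryOverhead=512m
    kylin.engine.spark-conf.spark.executor.cores=1
    kylin.engine.spark-conf.spark.executor.instances=5
    kylin.engine.spark-conf.spark.sql.shuffle.partitions=2
    kylin.engine.spark-conf.spark.yarn.queue=default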
2021-10-15 01:44:15,313 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stdout {"scheduler":{"schedulerInfo":{"type":"capacityScheduler","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"queueName":"root","queues":{"queue":[{"type":"capacitySchedulerLeafQueueInfo","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"absoluteCapacity":100.0,"absoluteMaxCapacity":100.0,"absoluteUsedCapacity":0.0,"numApplications":0,"queueName":"default","state":"RUNNING","resourcesUsed":{"memory":0,"vCores":0},"hideReservationQueues":false,"nodeLabels":["*"],"numActiveApplications":0,"numPendingApplications":0,"numContainers":0,"maxApplications":10000,"maxApplicationsPerUser":10000,"userLimit":100,"users":null,"userLimitFactor":1.0,"AMResourceLimit":{"memory":1024,"vCores":1},"usedAMResource":{"memory":0,"vCores":0},"userAMResourceLimit":{"memory":1024,"vCores":1},"preemptionDisabled":true}]}}}} 2021-10-15 01:44:15,313 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr % Total % Received % Xferd Average Speed Time Time Time Current 2021-10-15 01:44:15,313 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr Dload Upload Total Spent Left Speed 2021-10-15 01:44:15,313 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 2021-10-15 01:44:15,313 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2021-10-15 01:44:15,313 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 100 821 0 821 0 0 96474 0 --:--:-- --:--:-- --:--:-- 100k 2021-10-15 01:44:15,313 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : Thread wait for executing command curl -k --negotiate -u : "http://0.0.0.0:8088/ws/v1/cluster/scheduler" 2021-10-15 01:44:15,317 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Queue available capacity: 1.0. 2021-10-15 01:44:15,317 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Current queue used memory is 0, seem available resource as infinite. 2021-10-15 01:44:15,317 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Cluster available capacity: 1.0. 2021-10-15 01:44:15,317 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Capacity actual available resource: AvailableResource(ResourceInfo(2147483647,2147483647),ResourceInfo(2147483647,2147483647)). 
2021-10-15 01:44:16,203 INFO [pool-1-thread-1] util.log : Logging initialized @63149ms 2021-10-15 01:44:16,293 INFO [pool-1-thread-1] server.Server : jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown 2021-10-15 01:44:16,321 INFO [pool-1-thread-1] server.Server : Started @63267ms 2021-10-15 01:44:16,351 INFO [pool-1-thread-1] server.AbstractConnector : Started ServerConnector@11937e9a{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-10-15 01:44:16,387 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@575fa30f{/jobs,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,388 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@364a69f5{/jobs/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,388 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1b83df9d{/jobs/job,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,389 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@671c8a42{/jobs/job/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,389 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@27c5295{/stages,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,390 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7860a70{/stages/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,390 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6e2d43f9{/stages/stage,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,391 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3a0adb60{/stages/stage/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,392 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3e3c6553{/stages/pool,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,392 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@57e26fca{/stages/pool/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,392 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1e6a19ac{/storage,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,393 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@72d710da{/storage/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,393 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2f1cf52e{/storage/rdd,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,394 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6bc44338{/storage/rdd/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,394 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@252d8095{/environment,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,394 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6fb6052a{/environment/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,395 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@274b3877{/executors,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,395 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6e5f6225{/executors/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,395 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@27bf7e53{/executors/threadDump,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,396 INFO [pool-1-thread-1] handler.ContextHandler : Started 
o.s.j.s.ServletContextHandler@78ffb0db{/executors/threadDump/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,407 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@609a733{/static,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,408 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4cd75e89{/,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,410 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6d17e97b{/api,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,410 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@117624e2{/jobs/job/kill,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,411 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@5705ed61{/stages/stage/kill,null,AVAILABLE,@Spark} 2021-10-15 01:44:16,619 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at /0.0.0.0:8032 2021-10-15 01:44:16,834 WARN [pool-1-thread-1] yarn.Client : Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME. 2021-10-15 01:44:20,957 INFO [pool-1-thread-1] impl.YarnClientImpl : Submitted application application_1632384995057_0051 2021-10-15 01:44:26,415 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@25115973{/metrics/json,null,AVAILABLE,@Spark} 2021-10-15 01:44:29,971 ERROR [pool-1-thread-1] client.TransportClient : Failed to send RPC RPC 5040229096254295546 to /192.168.101.31:43852: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:44:29,977 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Attempted to get executor loss reason for executor id 1 at RPC address 192.168.101.31:43866, but got no response. Marking as slave lost. 
java.io.IOException: Failed to send RPC RPC 5040229096254295546 to /192.168.101.31:43852: java.nio.channels.ClosedChannelException at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362) at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ... 12 more 2021-10-15 01:44:29,988 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 1 on henghe-031: Slave lost 2021-10-15 01:44:35,039 ERROR [pool-1-thread-1] cluster.YarnClientSchedulerBackend : YARN application has exited unexpectedly with state UNDEFINED! Check the YARN application logs for more details. 2021-10-15 01:44:35,040 ERROR [pool-1-thread-1] cluster.YarnClientSchedulerBackend : Diagnostics message: Shutdown hook called before final status was reported. 
2021-10-15 01:44:35,064 INFO [pool-1-thread-1] server.AbstractConnector : Stopped Spark@11937e9a{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-10-15 01:44:35,086 ERROR [pool-1-thread-1] client.TransportClient : Failed to send RPC RPC 8364660475066970016 to /192.168.101.31:43874: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:44:35,090 ERROR [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful java.io.IOException: Failed to send RPC RPC 8364660475066970016 to /192.168.101.31:43874: java.nio.channels.ClosedChannelException at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362) at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ... 12 more 2021-10-15 01:44:35,094 ERROR [pool-1-thread-1] util.Utils : Uncaught exception in thread YARN application state monitor org.apache.spark.SparkException: Exception thrown in awaitResult: at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:574) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:98) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:164) at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:669) at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2078) at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340) at org.apache.spark.SparkContext.stop(SparkContext.scala:1948) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:121) Caused by: java.io.IOException: Failed to send RPC RPC 8364660475066970016 to /192.168.101.31:43874: java.nio.channels.ClosedChannelException at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362) at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.ClosedChannelException at 
io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ... 12 more 2021-10-15 01:44:35,096 ERROR [pool-1-thread-1] spark.SparkContext : Error initializing SparkContext. java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:44:35,102 ERROR [pool-1-thread-1] application.SparkApplication : The spark job execute failed! java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:44:35,107 ERROR [pool-1-thread-1] application.JobMonitor : Job failed the 1 times. 
java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeBuildJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:96) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) ... 4 more 2021-10-15 01:44:35,138 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at /0.0.0.0:8032 2021-10-15 01:44:35,168 INFO [pool-1-thread-1] cluster.YarnInfoFetcher : Cluster maximum resource allocation ResourceInfo(8192,8) 2021-10-15 01:44:35,203 INFO [pool-1-thread-1] application.SparkApplication : Executor task org.apache.kylin.engine.spark.job.CubeBuildJob with args : {"distMetaUrl":"kylin_metadata@hdfs,path=hdfs://master/kylin/kylin_metadata/kylin_test/job_tmp/7141e9e5-526d-4d0a-9c97-5c4f212ddc0c-01/meta","submitter":"ADMIN","dataRangeEnd":"9223372036854775807","targetModel":"13692406-a75e-90cb-c7a3-53084cd7749f","dataRangeStart":"0","project":"kylin_test","className":"org.apache.kylin.engine.spark.job.CubeBuildJob","segmentName":"FULL_BUILD","parentId":"7141e9e5-526d-4d0a-9c97-5c4f212ddc0c","jobId":"7141e9e5-526d-4d0a-9c97-5c4f212ddc0c","outputMetaUrl":"kylin_metadata@jdbc,url=jdbc:mysql://localhost:3306/kylin,username=root,password=******,maxActive=10,maxIdle=10","segmentId":"2c4154a5-0ea3-6ddb-98ff-849cba4de4e5","cuboidsNum":"7","cubeName":"testCube","jobType":"BUILD","cubeId":"4c05965c-c337-151d-84bf-49755f204794","segmentIds":"2c4154a5-0ea3-6ddb-98ff-849cba4de4e5"} 2021-10-15 01:44:35,203 INFO [pool-1-thread-1] utils.MetaDumpUtil : Ready to load KylinConfig from uri: kylin_metadata@hdfs,path=hdfs://master/kylin/kylin_metadata/kylin_test/job_tmp/7141e9e5-526d-4d0a-9c97-5c4f212ddc0c-01/meta 2021-10-15 01:44:35,243 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.metadata.url.identifier : kylin_metadata 2021-10-15 01:44:35,244 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.log.spark-executor-properties-file : /opt/kylin/conf/spark-executor-log4j.properties 2021-10-15 01:44:35,244 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.source.provider.0 : org.apache.kylin.engine.spark.source.HiveSource 2021-10-15 01:44:35,244 INFO [pool-1-thread-1] util.TimeZoneUtils : System timezone set to America/New_York, TimeZoneId: America/New_York. 
2021-10-15 01:44:35,244 INFO [pool-1-thread-1] application.SparkApplication : Sleep for random seconds to avoid submitting too many spark job at the same time. 2021-10-15 01:45:12,598 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stdout {"scheduler":{"schedulerInfo":{"type":"capacityScheduler","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"queueName":"root","queues":{"queue":[{"type":"capacitySchedulerLeafQueueInfo","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"absoluteCapacity":100.0,"absoluteMaxCapacity":100.0,"absoluteUsedCapacity":0.0,"numApplications":0,"queueName":"default","state":"RUNNING","resourcesUsed":{"memory":0,"vCores":0},"hideReservationQueues":false,"nodeLabels":["*"],"numActiveApplications":0,"numPendingApplications":0,"numContainers":0,"maxApplications":10000,"maxApplicationsPerUser":10000,"userLimit":100,"users":null,"userLimitFactor":1.0,"AMResourceLimit":{"memory":1024,"vCores":1},"usedAMResource":{"memory":0,"vCores":0},"userAMResourceLimit":{"memory":1024,"vCores":1},"preemptionDisabled":true}]}}}} 2021-10-15 01:45:12,598 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr % Total % Received % Xferd Average Speed Time Time Time Current 2021-10-15 01:45:12,598 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr Dload Upload Total Spent Left Speed 2021-10-15 01:45:12,598 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 2021-10-15 01:45:12,598 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2021-10-15 01:45:12,599 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 100 821 0 821 0 0 194k 0 --:--:-- --:--:-- --:--:-- 267k 2021-10-15 01:45:12,599 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : Thread wait for executing command curl -k --negotiate -u : "http://0.0.0.0:8088/ws/v1/cluster/scheduler" 2021-10-15 01:45:12,602 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Queue available capacity: 1.0. 2021-10-15 01:45:12,602 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Current queue used memory is 0, seem available resource as infinite. 2021-10-15 01:45:12,602 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Cluster available capacity: 1.0. 2021-10-15 01:45:12,602 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Capacity actual available resource: AvailableResource(ResourceInfo(2147483647,2147483647),ResourceInfo(2147483647,2147483647)). 2021-10-15 01:45:12,607 WARN [pool-1-thread-1] spark.SparkContext : Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). 
The other SparkContext was created at: org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) 2021-10-15 01:45:12,691 INFO [pool-1-thread-1] server.Server : jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown 2021-10-15 01:45:12,692 INFO [pool-1-thread-1] server.Server : Started @119638ms 2021-10-15 01:45:12,693 INFO [pool-1-thread-1] server.AbstractConnector : Started ServerConnector@1fa14745{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-10-15 01:45:12,693 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@30f72b0e{/jobs,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,694 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3dd16981{/jobs/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,694 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@437b05d8{/jobs/job,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,694 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@56f5f99d{/jobs/job/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,695 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@5fd30716{/stages,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,696 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@69f8e00c{/stages/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,696 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@72367d2e{/stages/stage,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,696 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@bf64be4{/stages/stage/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,697 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4c5a163b{/stages/pool,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,697 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@620c673c{/stages/pool/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,697 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2392a95e{/storage,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,697 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1a827fa3{/storage/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,698 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6f55906{/storage/rdd,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,698 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@741fb63{/storage/rdd/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,698 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@31af7a8c{/environment,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,698 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@38d9647d{/environment/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:12,699 INFO [pool-1-thread-1] handler.ContextHandler : Started 
o.s.j.s.ServletContextHandler@14c83709{/executors,null,AVAILABLE,@Spark}
2021-10-15 01:45:12,699 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1c5a4c10{/executors/json,null,AVAILABLE,@Spark}
2021-10-15 01:45:12,699 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@566b2e0d{/executors/threadDump,null,AVAILABLE,@Spark}
2021-10-15 01:45:12,700 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@25625231{/executors/threadDump/json,null,AVAILABLE,@Spark}
2021-10-15 01:45:12,700 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@704749eb{/static,null,AVAILABLE,@Spark}
2021-10-15 01:45:12,700 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3eb3a13f{/,null,AVAILABLE,@Spark}
2021-10-15 01:45:12,701 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@31a7fba2{/api,null,AVAILABLE,@Spark}
2021-10-15 01:45:12,701 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4c21dd5b{/jobs/job/kill,null,AVAILABLE,@Spark}
2021-10-15 01:45:12,701 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6f3837b9{/stages/stage/kill,null,AVAILABLE,@Spark}
2021-10-15 01:45:12,752 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at /0.0.0.0:8032
2021-10-15 01:45:12,768 WARN [pool-1-thread-1] yarn.Client : Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2021-10-15 01:45:16,120 INFO [pool-1-thread-1] impl.YarnClientImpl : Submitted application application_1632384995057_0053
2021-10-15 01:45:21,154 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@580c844e{/metrics/json,null,AVAILABLE,@Spark}
2021-10-15 01:45:29,183 ERROR [pool-1-thread-1] cluster.YarnClientSchedulerBackend : YARN application has exited unexpectedly with state FAILED! Check the YARN application logs for more details.
2021-10-15 01:45:29,184 ERROR [pool-1-thread-1] cluster.YarnClientSchedulerBackend : Diagnostics message: Application application_1632384995057_0053 failed 2 times due to AM Container for appattempt_1632384995057_0053_000002 exited with exitCode: -103
For more detailed output, check application tracking page: http://henghe-031:8088/cluster/app/application_1632384995057_0053 Then, click on links to logs of each attempt.
Diagnostics: Container [pid=8866,containerID=container_1632384995057_0053_02_000001] is running beyond virtual memory limits. Current usage: 227.7 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1632384995057_0053_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 8872 8866 8866 8866 (java) 461 25 2361798656 57986 /usr/java/jdk1.8/bin/java -server -Xmx512m -Djava.io.tmpdir=/hadoop_data/tmp/nm-local-dir/usercache/root/appcache/application_1632384995057_0053/container_1632384995057_0053_02_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/hadoop/logs/userlogs/application_1632384995057_0053/container_1632384995057_0053_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg henghe-031:33613 --properties-file /hadoop_data/tmp/nm-local-dir/usercache/root/appcache/application_1632384995057_0053/container_1632384995057_0053_02_000001/__spark_conf__/__spark_conf__.properties
|- 8866 8863 8866 8866 (bash) 0 0 116011008 302 /bin/bash -c /usr/java/jdk1.8/bin/java -server -Xmx512m -Djava.io.tmpdir=/hadoop_data/tmp/nm-local-dir/usercache/root/appcache/application_1632384995057_0053/container_1632384995057_0053_02_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/hadoop/logs/userlogs/application_1632384995057_0053/container_1632384995057_0053_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'henghe-031:33613' --properties-file /hadoop_data/tmp/nm-local-dir/usercache/root/appcache/application_1632384995057_0053/container_1632384995057_0053_02_000001/__spark_conf__/__spark_conf__.properties 1> /opt/hadoop/logs/userlogs/application_1632384995057_0053/container_1632384995057_0053_02_000001/stdout 2> /opt/hadoop/logs/userlogs/application_1632384995057_0053/container_1632384995057_0053_02_000001/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
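The killed container is the yarn-client application master (ExecutorLauncher, launched with -Xmx512m): it stayed well under its 1 GB physical limit (227.7 MB used) but crossed the 2.1 GB virtual-memory cap, which is the default yarn.nodemanager.vmem-pmem-ratio of 2.1 applied to a 1 GB container. Two common remedies, shown only as a hedged sketch (the property names are standard YARN and Spark-on-YARN settings; the values are illustrative and not taken from this log):

    In yarn-site.xml on the NodeManagers (restart them afterwards), disable the virtual-memory check:
    <property><name>yarn.nodemanager.vmem-check-enabled</name><value>false</value></property>
    or keep the check but allow more virtual memory per unit of physical memory:
    <property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>4</value></property>

    Alternatively, in kylin.properties, give the application master a larger container:
    kylin.engine.spark-conf.spark.yarn.am.memory=1g
    kylin.engine.spark-conf.spark.yarn.am.memoryOverhead=1024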
2021-10-15 01:45:29,187 INFO [pool-1-thread-1] server.AbstractConnector : Stopped Spark@1fa14745{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-10-15 01:45:29,192 ERROR [pool-1-thread-1] client.TransportClient : Failed to send RPC RPC 5785212861201371447 to /192.168.101.31:35966: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:45:29,193 ERROR [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful java.io.IOException: Failed to send RPC RPC 5785212861201371447 to /192.168.101.31:35966: java.nio.channels.ClosedChannelException at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362) at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ... 12 more 2021-10-15 01:45:29,193 ERROR [pool-1-thread-1] util.Utils : Uncaught exception in thread YARN application state monitor org.apache.spark.SparkException: Exception thrown in awaitResult: at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:574) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:98) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:164) at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:669) at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2078) at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340) at org.apache.spark.SparkContext.stop(SparkContext.scala:1948) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:121) Caused by: java.io.IOException: Failed to send RPC RPC 5785212861201371447 to /192.168.101.31:35966: java.nio.channels.ClosedChannelException at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362) at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.ClosedChannelException at 
io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ... 12 more 2021-10-15 01:45:29,253 ERROR [pool-1-thread-1] spark.SparkContext : Error initializing SparkContext. java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:45:29,254 ERROR [pool-1-thread-1] application.SparkApplication : The spark job execute failed! java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:45:29,254 ERROR [pool-1-thread-1] application.JobMonitor : Job failed the 2 times. 
java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeBuildJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:96) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) ... 4 more 2021-10-15 01:45:29,279 INFO [pool-1-thread-1] application.SparkApplication : Executor task org.apache.kylin.engine.spark.job.CubeBuildJob with args : {"distMetaUrl":"kylin_metadata@hdfs,path=hdfs://master/kylin/kylin_metadata/kylin_test/job_tmp/7141e9e5-526d-4d0a-9c97-5c4f212ddc0c-01/meta","submitter":"ADMIN","dataRangeEnd":"9223372036854775807","targetModel":"13692406-a75e-90cb-c7a3-53084cd7749f","dataRangeStart":"0","project":"kylin_test","className":"org.apache.kylin.engine.spark.job.CubeBuildJob","segmentName":"FULL_BUILD","parentId":"7141e9e5-526d-4d0a-9c97-5c4f212ddc0c","jobId":"7141e9e5-526d-4d0a-9c97-5c4f212ddc0c","outputMetaUrl":"kylin_metadata@jdbc,url=jdbc:mysql://localhost:3306/kylin,username=root,password=******,maxActive=10,maxIdle=10","segmentId":"2c4154a5-0ea3-6ddb-98ff-849cba4de4e5","cuboidsNum":"7","cubeName":"testCube","jobType":"BUILD","cubeId":"4c05965c-c337-151d-84bf-49755f204794","segmentIds":"2c4154a5-0ea3-6ddb-98ff-849cba4de4e5"} 2021-10-15 01:45:29,279 INFO [pool-1-thread-1] utils.MetaDumpUtil : Ready to load KylinConfig from uri: kylin_metadata@hdfs,path=hdfs://master/kylin/kylin_metadata/kylin_test/job_tmp/7141e9e5-526d-4d0a-9c97-5c4f212ddc0c-01/meta 2021-10-15 01:45:29,301 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.metadata.url.identifier : kylin_metadata 2021-10-15 01:45:29,302 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.log.spark-executor-properties-file : /opt/kylin/conf/spark-executor-log4j.properties 2021-10-15 01:45:29,302 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.source.provider.0 : org.apache.kylin.engine.spark.source.HiveSource 2021-10-15 01:45:29,302 INFO [pool-1-thread-1] util.TimeZoneUtils : System timezone set to America/New_York, TimeZoneId: America/New_York. 2021-10-15 01:45:29,302 INFO [pool-1-thread-1] application.SparkApplication : Sleep for random seconds to avoid submitting too many spark job at the same time. 
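At this point JobMonitor has retried twice with the same outcome, so the useful detail is on the YARN side rather than in this driver log. The per-attempt container logs referenced in the diagnostics above can be pulled with the standard YARN CLI once the application has finished (application id copied from the log; the output file name is arbitrary):

    yarn logs -applicationId application_1632384995057_0053 > application_1632384995057_0053.log

The tracking URL printed in the diagnostics (http://henghe-031:8088/cluster/app/application_1632384995057_0053) exposes the same per-attempt links in the ResourceManager UI.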
2021-10-15 01:45:49,279 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stdout {"scheduler":{"schedulerInfo":{"type":"capacityScheduler","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"queueName":"root","queues":{"queue":[{"type":"capacitySchedulerLeafQueueInfo","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"absoluteCapacity":100.0,"absoluteMaxCapacity":100.0,"absoluteUsedCapacity":0.0,"numApplications":0,"queueName":"default","state":"RUNNING","resourcesUsed":{"memory":0,"vCores":0},"hideReservationQueues":false,"nodeLabels":["*"],"numActiveApplications":0,"numPendingApplications":0,"numContainers":0,"maxApplications":10000,"maxApplicationsPerUser":10000,"userLimit":100,"users":null,"userLimitFactor":1.0,"AMResourceLimit":{"memory":1024,"vCores":1},"usedAMResource":{"memory":0,"vCores":0},"userAMResourceLimit":{"memory":1024,"vCores":1},"preemptionDisabled":true}]}}}} 2021-10-15 01:45:49,279 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr % Total % Received % Xferd Average Speed Time Time Time Current 2021-10-15 01:45:49,279 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr Dload Upload Total Spent Left Speed 2021-10-15 01:45:49,279 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 2021-10-15 01:45:49,279 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2021-10-15 01:45:49,279 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 100 821 0 821 0 0 148k 0 --:--:-- --:--:-- --:--:-- 160k 2021-10-15 01:45:49,279 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : Thread wait for executing command curl -k --negotiate -u : "http://0.0.0.0:8088/ws/v1/cluster/scheduler" 2021-10-15 01:45:49,283 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Queue available capacity: 1.0. 2021-10-15 01:45:49,283 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Current queue used memory is 0, seem available resource as infinite. 2021-10-15 01:45:49,283 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Cluster available capacity: 1.0. 2021-10-15 01:45:49,284 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Capacity actual available resource: AvailableResource(ResourceInfo(2147483647,2147483647),ResourceInfo(2147483647,2147483647)). 2021-10-15 01:45:49,286 WARN [pool-1-thread-1] spark.SparkContext : Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). 
The other SparkContext was created at: org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) 2021-10-15 01:45:49,376 INFO [pool-1-thread-1] server.Server : jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown 2021-10-15 01:45:49,378 INFO [pool-1-thread-1] server.Server : Started @156324ms 2021-10-15 01:45:49,379 INFO [pool-1-thread-1] server.AbstractConnector : Started ServerConnector@43d1b214{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-10-15 01:45:49,380 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3b49e609{/jobs,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,380 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@55d92e61{/jobs/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,381 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@525c9304{/jobs/job,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,381 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1c53da49{/jobs/job/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,381 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@685e1e64{/stages,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,381 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@74688f28{/stages/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,381 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@429af612{/stages/stage,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,382 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4019ed9a{/stages/stage/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,382 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@58ac76c3{/stages/pool,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,382 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@688da331{/stages/pool/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,382 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7d32e254{/storage,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,383 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@72c8b555{/storage/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,383 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@49e9cfa1{/storage/rdd,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,383 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4c787986{/storage/rdd/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,383 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2632121f{/environment,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,384 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@11676991{/environment/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,384 INFO [pool-1-thread-1] handler.ContextHandler : Started 
o.s.j.s.ServletContextHandler@16f438db{/executors,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,384 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@19cfc7ca{/executors/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,384 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2fdb9c06{/executors/threadDump,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,384 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@5da538d{/executors/threadDump/json,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,385 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@325f810d{/static,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,385 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@5549f11b{/,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,386 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@26816c18{/api,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,386 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3f09b5b8{/jobs/job/kill,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,386 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@25d25a6b{/stages/stage/kill,null,AVAILABLE,@Spark} 2021-10-15 01:45:49,462 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at /0.0.0.0:8032 2021-10-15 01:45:49,486 WARN [pool-1-thread-1] yarn.Client : Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME. 2021-10-15 01:45:52,646 INFO [pool-1-thread-1] impl.YarnClientImpl : Submitted application application_1632384995057_0054 2021-10-15 01:45:57,673 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@146f9d6f{/metrics/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:05,701 ERROR [pool-1-thread-1] cluster.YarnClientSchedulerBackend : YARN application has exited unexpectedly with state FAILED! Check the YARN application logs for more details. 2021-10-15 01:46:05,701 ERROR [pool-1-thread-1] cluster.YarnClientSchedulerBackend : Diagnostics message: Application application_1632384995057_0054 failed 2 times due to AM Container for appattempt_1632384995057_0054_000002 exited with exitCode: -103 For more detailed output, check application tracking page:http://henghe-031:8088/cluster/app/application_1632384995057_0054Then, click on links to logs of each attempt. Diagnostics: Container [pid=10581,containerID=container_1632384995057_0054_02_000001] is running beyond virtual memory limits. Current usage: 394.6 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1632384995057_0054_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 10581 10575 10581 10581 (bash) 0 0 116011008 302 /bin/bash -c /usr/java/jdk1.8/bin/java -server -Xmx512m -Djava.io.tmpdir=/hadoop_data/tmp/nm-local-dir/usercache/root/appcache/application_1632384995057_0054/container_1632384995057_0054_02_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/hadoop/logs/userlogs/application_1632384995057_0054/container_1632384995057_0054_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'henghe-031:34292' --properties-file /hadoop_data/tmp/nm-local-dir/usercache/root/appcache/application_1632384995057_0054/container_1632384995057_0054_02_000001/__spark_conf__/__spark_conf__.properties 1> /opt/hadoop/logs/userlogs/application_1632384995057_0054/container_1632384995057_0054_02_000001/stdout 2> /opt/hadoop/logs/userlogs/application_1632384995057_0054/container_1632384995057_0054_02_000001/stderr
    |- 10587 10581 10581 10581 (java) 1002 39 2449235968 100712 /usr/java/jdk1.8/bin/java -server -Xmx512m -Djava.io.tmpdir=/hadoop_data/tmp/nm-local-dir/usercache/root/appcache/application_1632384995057_0054/container_1632384995057_0054_02_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/hadoop/logs/userlogs/application_1632384995057_0054/container_1632384995057_0054_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg henghe-031:34292 --properties-file /hadoop_data/tmp/nm-local-dir/usercache/root/appcache/application_1632384995057_0054/container_1632384995057_0054_02_000001/__spark_conf__/__spark_conf__.properties
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
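The diagnostics above are the root cause of this attempt: the ApplicationMaster container (the ExecutorLauncher started with -Xmx512m inside a 1 GB container) is killed with exit code 143 because its 2.4 GB of virtual memory exceeds the 2.1 GB limit, i.e. 1 GB of physical memory multiplied by the NodeManager's default vmem-pmem ratio of 2.1. The usual remediations are to relax the NodeManager's virtual-memory check or to give the AM a roomier container; the property names below are standard Hadoop/Spark ones, but the values are illustrative and not taken from this cluster.

Either of these in yarn-site.xml on the NodeManagers:

  <property><name>yarn.nodemanager.vmem-check-enabled</name><value>false</value></property>
  <property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>4</value></property>

or, since the job runs in yarn-client mode and the killed container is the AM, a larger AM allocation on the Spark side (for example in spark-defaults.conf, or prefixed with kylin.engine.spark-conf. in kylin.properties):

  spark.yarn.am.memory=1g
  spark.yarn.am.memoryOverhead=512m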
2021-10-15 01:46:05,709 INFO [pool-1-thread-1] server.AbstractConnector : Stopped Spark@43d1b214{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-10-15 01:46:05,719 ERROR [pool-1-thread-1] client.TransportClient : Failed to send RPC RPC 6810359905211076411 to /192.168.101.31:53082: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:46:05,719 ERROR [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful java.io.IOException: Failed to send RPC RPC 6810359905211076411 to /192.168.101.31:53082: java.nio.channels.ClosedChannelException at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362) at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ... 12 more 2021-10-15 01:46:05,720 ERROR [pool-1-thread-1] util.Utils : Uncaught exception in thread YARN application state monitor org.apache.spark.SparkException: Exception thrown in awaitResult: at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:574) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:98) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:164) at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:669) at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2078) at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340) at org.apache.spark.SparkContext.stop(SparkContext.scala:1948) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:121) Caused by: java.io.IOException: Failed to send RPC RPC 6810359905211076411 to /192.168.101.31:53082: java.nio.channels.ClosedChannelException at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362) at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.ClosedChannelException at 
io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ... 12 more 2021-10-15 01:46:05,766 ERROR [pool-1-thread-1] spark.SparkContext : Error initializing SparkContext. java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:46:05,766 ERROR [pool-1-thread-1] application.SparkApplication : The spark job execute failed! java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:46:05,767 ERROR [pool-1-thread-1] application.JobMonitor : Job failed the 3 times. 
java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeBuildJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:96) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) ... 4 more 2021-10-15 01:46:05,813 INFO [pool-1-thread-1] application.SparkApplication : Executor task org.apache.kylin.engine.spark.job.CubeBuildJob with args : {"distMetaUrl":"kylin_metadata@hdfs,path=hdfs://master/kylin/kylin_metadata/kylin_test/job_tmp/7141e9e5-526d-4d0a-9c97-5c4f212ddc0c-01/meta","submitter":"ADMIN","dataRangeEnd":"9223372036854775807","targetModel":"13692406-a75e-90cb-c7a3-53084cd7749f","dataRangeStart":"0","project":"kylin_test","className":"org.apache.kylin.engine.spark.job.CubeBuildJob","segmentName":"FULL_BUILD","parentId":"7141e9e5-526d-4d0a-9c97-5c4f212ddc0c","jobId":"7141e9e5-526d-4d0a-9c97-5c4f212ddc0c","outputMetaUrl":"kylin_metadata@jdbc,url=jdbc:mysql://localhost:3306/kylin,username=root,password=******,maxActive=10,maxIdle=10","segmentId":"2c4154a5-0ea3-6ddb-98ff-849cba4de4e5","cuboidsNum":"7","cubeName":"testCube","jobType":"BUILD","cubeId":"4c05965c-c337-151d-84bf-49755f204794","segmentIds":"2c4154a5-0ea3-6ddb-98ff-849cba4de4e5"} 2021-10-15 01:46:05,813 INFO [pool-1-thread-1] utils.MetaDumpUtil : Ready to load KylinConfig from uri: kylin_metadata@hdfs,path=hdfs://master/kylin/kylin_metadata/kylin_test/job_tmp/7141e9e5-526d-4d0a-9c97-5c4f212ddc0c-01/meta 2021-10-15 01:46:05,838 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.metadata.url.identifier : kylin_metadata 2021-10-15 01:46:05,838 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.log.spark-executor-properties-file : /opt/kylin/conf/spark-executor-log4j.properties 2021-10-15 01:46:05,838 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.source.provider.0 : org.apache.kylin.engine.spark.source.HiveSource 2021-10-15 01:46:05,838 INFO [pool-1-thread-1] util.TimeZoneUtils : System timezone set to America/New_York, TimeZoneId: America/New_York. 2021-10-15 01:46:05,839 INFO [pool-1-thread-1] application.SparkApplication : Sleep for random seconds to avoid submitting too many spark job at the same time. 
2021-10-15 01:46:55,496 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stdout {"scheduler":{"schedulerInfo":{"type":"capacityScheduler","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"queueName":"root","queues":{"queue":[{"type":"capacitySchedulerLeafQueueInfo","capacity":100.0,"usedCapacity":0.0,"maxCapacity":100.0,"absoluteCapacity":100.0,"absoluteMaxCapacity":100.0,"absoluteUsedCapacity":0.0,"numApplications":0,"queueName":"default","state":"RUNNING","resourcesUsed":{"memory":0,"vCores":0},"hideReservationQueues":false,"nodeLabels":["*"],"numActiveApplications":0,"numPendingApplications":0,"numContainers":0,"maxApplications":10000,"maxApplicationsPerUser":10000,"userLimit":100,"users":null,"userLimitFactor":1.0,"AMResourceLimit":{"memory":1024,"vCores":1},"usedAMResource":{"memory":0,"vCores":0},"userAMResourceLimit":{"memory":1024,"vCores":1},"preemptionDisabled":true}]}}}} 2021-10-15 01:46:55,496 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr % Total % Received % Xferd Average Speed Time Time Time Current 2021-10-15 01:46:55,496 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr Dload Upload Total Spent Left Speed 2021-10-15 01:46:55,496 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 2021-10-15 01:46:55,496 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2021-10-15 01:46:55,496 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr 100 821 0 821 0 0 239k 0 --:--:-- --:--:-- --:--:-- 267k 2021-10-15 01:46:55,496 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : Thread wait for executing command curl -k --negotiate -u : "http://0.0.0.0:8088/ws/v1/cluster/scheduler" 2021-10-15 01:46:55,500 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Queue available capacity: 1.0. 2021-10-15 01:46:55,500 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Current queue used memory is 0, seem available resource as infinite. 2021-10-15 01:46:55,500 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Cluster available capacity: 1.0. 2021-10-15 01:46:55,500 INFO [pool-1-thread-1] parser.CapacitySchedulerParser : Capacity actual available resource: AvailableResource(ResourceInfo(2147483647,2147483647),ResourceInfo(2147483647,2147483647)). 2021-10-15 01:46:55,502 WARN [pool-1-thread-1] spark.SparkContext : Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). 
The other SparkContext was created at: org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) 2021-10-15 01:46:55,577 INFO [pool-1-thread-1] server.Server : jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown 2021-10-15 01:46:55,578 INFO [pool-1-thread-1] server.Server : Started @222525ms 2021-10-15 01:46:55,579 INFO [pool-1-thread-1] server.AbstractConnector : Started ServerConnector@2ee62ad7{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-10-15 01:46:55,580 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@24cad913{/jobs,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,580 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@53f08903{/jobs/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,580 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@714b084b{/jobs/job,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,581 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6e643427{/jobs/job/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,581 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1c147ab9{/stages,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,581 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4b7b50a9{/stages/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,581 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@62db10dd{/stages/stage,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,581 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2d51d5ad{/stages/stage/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,581 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@122fa30c{/stages/pool,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,582 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3c057f8{/stages/pool/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,582 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3927e2a5{/storage,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,582 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1a78a491{/storage/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,582 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@75a04311{/storage/rdd,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,582 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@745ea5ca{/storage/rdd/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,583 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@726a432e{/environment,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,583 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2d692dde{/environment/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,583 INFO [pool-1-thread-1] handler.ContextHandler : Started 
o.s.j.s.ServletContextHandler@5aa64ba9{/executors,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,583 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@5bd7d5c2{/executors/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,583 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7b048ea5{/executors/threadDump,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,584 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1fe75bc6{/executors/threadDump/json,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,584 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@77f4b090{/static,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,584 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@12e80269{/,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,585 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4dc6d737{/api,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,585 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3af8d189{/jobs/job/kill,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,585 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@685167f5{/stages/stage/kill,null,AVAILABLE,@Spark} 2021-10-15 01:46:55,630 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at /0.0.0.0:8032 2021-10-15 01:46:55,645 WARN [pool-1-thread-1] yarn.Client : Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME. 2021-10-15 01:46:59,038 INFO [pool-1-thread-1] impl.YarnClientImpl : Submitted application application_1632384995057_0055 2021-10-15 01:47:04,058 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@b85f96c{/metrics/json,null,AVAILABLE,@Spark} 2021-10-15 01:47:12,078 ERROR [Thread-1] cluster.YarnClientSchedulerBackend : YARN application has exited unexpectedly with state UNDEFINED! Check the YARN application logs for more details. 2021-10-15 01:47:12,078 ERROR [Thread-1] cluster.YarnClientSchedulerBackend : Diagnostics message: Shutdown hook called before final status was reported. 
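Each attempt also repeats the warning "Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME", which is why a few seconds elapse between "Connecting to ResourceManager" and "Submitted application" on every retry. A common way to avoid the repeated upload is to copy $SPARK_HOME/jars to an HDFS directory once and point spark.yarn.jars at it; the path below is illustrative (only the hdfs://master namenode comes from this log), and the kylin.engine.spark-conf. prefix is assumed to be how this deployment forwards Spark settings through kylin.properties:

  kylin.engine.spark-conf.spark.yarn.jars=hdfs://master/spark/jars/*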
2021-10-15 01:47:12,081 INFO [Thread-1] server.AbstractConnector : Stopped Spark@2ee62ad7{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-10-15 01:47:12,085 ERROR [Thread-1] client.TransportClient : Failed to send RPC RPC 5251815457999106930 to /192.168.101.31:49704: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:47:12,085 ERROR [Thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful java.io.IOException: Failed to send RPC RPC 5251815457999106930 to /192.168.101.31:49704: java.nio.channels.ClosedChannelException at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362) at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.ClosedChannelException at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ... 12 more 2021-10-15 01:47:12,086 ERROR [Thread-1] util.Utils : Uncaught exception in thread YARN application state monitor org.apache.spark.SparkException: Exception thrown in awaitResult: at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:574) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:98) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:164) at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:669) at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2078) at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340) at org.apache.spark.SparkContext.stop(SparkContext.scala:1948) at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:121) Caused by: java.io.IOException: Failed to send RPC RPC 5251815457999106930 to /192.168.101.31:49704: java.nio.channels.ClosedChannelException at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362) at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.ClosedChannelException at 
io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) ... 12 more 2021-10-15 01:47:12,117 ERROR [Thread-1] spark.SparkContext : Error initializing SparkContext. java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:47:12,117 ERROR [Thread-1] application.SparkApplication : The spark job execute failed! java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-10-15 01:47:12,119 ERROR [Thread-1] application.JobWorkSpace : Job failed eventually. Reason: Retry times exceed MaxRetry set in the KylinConfig. 
java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeBuildJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:96) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) ... 4 more 2021-10-15 01:47:12,130 INFO [Thread-1] application.SparkApplication : ==========================[BUILD CUBE]=============================== auto spark config :{spark.executor.memory=1GB, count_distinct=false, spark.executor.cores=1, spark.executor.memoryOverhead=512MB, spark.executor.instances=5, spark.yarn.queue=default, spark.sql.shuffle.partitions=2} wait time: 0 build time: 1634276832117 build from layouts : build from flat table : cuboids num per segment : {} abnormal layouts : {} retry times : 4 job retry infos : RetryInfo{ overrideConf : {spark.executor.memory=1536MB, spark.executor.memoryOverhead=308MB}, throwable : java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeBuildJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:96) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) ... 
4 more } RetryInfo{ overrideConf : {spark.executor.memory=2304MB, spark.executor.memoryOverhead=461MB}, throwable : java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeBuildJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:96) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) ... 4 more } RetryInfo{ overrideConf : {spark.executor.memory=3456MB, spark.executor.memoryOverhead=692MB}, throwable : java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeBuildJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:96) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) ... 
4 more } RetryInfo{ overrideConf : {}, throwable : java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeBuildJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:96) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalStateException: Spark context stopped while waiting for backend at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:834) at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:201) at org.apache.spark.SparkContext.(SparkContext.scala:560) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:289) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93) ... 4 more } ==========================[BUILD CUBE]===============================
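The closing [BUILD CUBE] summary records what the automatic retries actually changed: starting from the auto-derived spark.executor.memory=1GB / spark.executor.memoryOverhead=512MB, the RetryInfo entries raise executor memory by roughly 1.5x per retry (1024 -> 1536 -> 2304 -> 3456 MB) until "Retry times exceed MaxRetry set in the KylinConfig". The failure diagnosed above for application_1632384995057_0054 is in the YARN ApplicationMaster container rather than in an executor, so this escalation never touches the limit that is actually being exceeded. The snippet below merely restates the RetryInfo numbers; the 1.5 factor is inferred from them and is not a quoted Kylin setting.

# Executor-memory escalation as read from the RetryInfo entries above.
# The 1.5x growth factor is inferred from the logged values, not taken
# from a documented Kylin configuration key.
base_mb = 1024  # "spark.executor.memory=1GB" in the auto spark config
for attempt in range(1, 4):
    print(f"retry {attempt}: spark.executor.memory={int(base_mb * 1.5 ** attempt)}MB")
# retry 1: spark.executor.memory=1536MB
# retry 2: spark.executor.memory=2304MB
# retry 3: spark.executor.memory=3456MB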