2021-06-08 15:20:19,383 WARN [pool-1-thread-1] util.NativeCodeLoader : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2021-06-08 15:20:19,964 INFO [pool-1-thread-1] application.SparkApplication : Executor task org.apache.kylin.engine.spark.job.CubeMergeJob with args : {"distMetaUrl":"kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta","submitter":"SYSTEM","dataRangeEnd":"1622332800000","targetModel":"cee6d39a-b052-4351-ba8a-73ddd583836e","dataRangeStart":"1619827200000","project":"user_growth","className":"org.apache.kylin.engine.spark.job.CubeMergeJob","segmentName":"20210501000000_20210530000000","parentId":"4e85bb17-9201-441f-afcb-f17827cc0d18","jobId":"4e85bb17-9201-441f-afcb-f17827cc0d18","outputMetaUrl":"kylin_metadata@jdbc,url=jdbc:mysql://10.238.2.228:6033/kylin,username=kylin,password=******,maxActive=10,maxIdle=10","segmentId":"c0d90f42-34ef-d9d1-e06f-42eee385b290","cuboidsNum":"63","cubeName":"his_msg_push_event","jobType":"MERGE","cubeId":"77dfdfc0-44df-9963-0792-3b2fcca55734","segmentIds":"c0d90f42-34ef-d9d1-e06f-42eee385b290"}
2021-06-08 15:20:19,966 INFO [pool-1-thread-1] utils.MetaDumpUtil : Ready to load KylinConfig from uri: kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 15:20:20,453 WARN [pool-1-thread-1] shortcircuit.DomainSocketFactory : The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2021-06-08 15:20:20,654 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.metadata.url.identifier : kylin_metadata
2021-06-08 15:20:20,654 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.log.spark-executor-properties-file : /opt/appdata/disk01/app/kylin/conf/spark-executor-log4j.properties
2021-06-08 15:20:20,655 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.source.provider.0 : org.apache.kylin.engine.spark.source.HiveSource
2021-06-08 15:20:20,659 INFO [pool-1-thread-1] application.SparkApplication : Start set spark conf automatically.
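The distMetaUrl and outputMetaUrl values in the task args above use Kylin's metadata URL format: an identifier, then "@" and a storage scheme, then comma-separated key=value parameters. A minimal stand-alone sketch of pulling such a URL apart (an illustrative parser, not Kylin's own; the class and variable names are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative parser for "identifier@scheme,key=value,..." metadata URLs.
public class MetaUrlSketch {
    public static void main(String[] args) {
        String url = "kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata"
                + "/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta";
        int at = url.indexOf('@');
        String identifier = url.substring(0, at);              // "kylin_metadata"
        String[] parts = url.substring(at + 1).split(",");
        String scheme = parts[0];                              // "hdfs"
        Map<String, String> params = new HashMap<>();
        for (int i = 1; i < parts.length; i++) {
            String[] kv = parts[i].split("=", 2);              // split on the first '=' only
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        System.out.println(identifier + " @ " + scheme + " " + params);
    }
}
```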
2021-06-08 15:20:21,639 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
[... the same DEBUG record repeated, identical except for the timestamp advancing by roughly 15-25 ms, until 2021-06-08 15:20:26,850 ...]
2021-06-08 15:20:26,884 INFO [pool-1-thread-1] application.SparkApplication : Exist count distinct measure: true
2021-06-08 15:20:26,942 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr curl: option --negotiate: the installed libcurl version doesn't support this
2021-06-08 15:20:26,943 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : stderr curl: try 'curl --help' or 'curl --manual' for more information
2021-06-08 15:20:26,943 INFO [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : Thread wait for executing command curl -k --negotiate -u : "http://umetrip11-hdp2.6-111.travelsky.com:8088/ws/v1/cluster/scheduler"
2021-06-08 15:20:26,944 WARN [pool-1-thread-1] cluster.SchedulerInfoCmdHelper : Error occurred when get scheduler info from cmd List(curl -k --negotiate -u : "http://umetrip11-hdp2.6-111.travelsky.com:8088/ws/v1/cluster/scheduler")
2021-06-08 15:20:26,944 WARN [pool-1-thread-1] rule.ExecutorInstancesRule : Apply rule error for rule org.apache.spark.conf.rule.ExecutorInstancesRule
java.lang.RuntimeException:
    at org.apache.kylin.cluster.SchedulerInfoCmdHelper$.schedulerInfo(SchedulerInfoCmdHelper.scala:47)
    at org.apache.kylin.cluster.YarnInfoFetcher.fetchQueueAvailableResource(YarnInfoFetcher.scala:49)
    at org.apache.spark.conf.rule.ExecutorInstancesRule.doApply(SparkConfRule.scala:110)
    at org.apache.spark.conf.rule.SparkConfRule$class.apply(SparkConfRule.scala:30)
    at org.apache.spark.conf.rule.ExecutorInstancesRule.apply(SparkConfRule.scala:98)
    at org.apache.kylin.engine.spark.utils.SparkConfHelper.lambda$generateSparkConf$0(SparkConfHelper.java:72)
    at org.apache.kylin.shaded.com.google.common.collect.ImmutableList.forEach(ImmutableList.java:405)
    at org.apache.kylin.engine.spark.utils.SparkConfHelper.generateSparkConf(SparkConfHelper.java:72)
    at org.apache.kylin.engine.spark.application.SparkApplication.autoSetSparkConf(SparkApplication.java:334)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:234)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89)
    at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 15:20:26,945 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.executor.memory = 10GB.
2021-06-08 15:20:26,945 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: count_distinct = true.
2021-06-08 15:20:26,945 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.executor.cores = 5.
2021-06-08 15:20:26,945 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.executor.memoryOverhead = 2GB.
2021-06-08 15:20:26,945 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.executor.instances = 5.
2021-06-08 15:20:26,945 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.yarn.queue = kylin.
2021-06-08 15:20:26,945 INFO [pool-1-thread-1] utils.SparkConfHelper : Auto set spark conf: spark.sql.shuffle.partitions = 43.
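The ExecutorInstancesRule warning above is caused by the installed curl lacking --negotiate support, so SchedulerInfoCmdHelper cannot fetch queue capacity from the ResourceManager's scheduler REST endpoint; the error is swallowed and auto-configuration still sets spark.executor.instances (5 above). A rough sketch of the same request in plain Java, without the Kerberos/SPNEGO negotiation the real command asks curl for (the endpoint URL is copied from the log; everything else is illustrative):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// GET the YARN scheduler REST endpoint that the failed curl command targets.
public class SchedulerInfoSketch {
    public static void main(String[] args) throws Exception {
        URL endpoint = new URL("http://umetrip11-hdp2.6-111.travelsky.com:8088/ws/v1/cluster/scheduler");
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON describing queues and their available resources
            }
        }
    }
}
```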
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.yarn.queue=kylin.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.yarn.am.extraJavaOptions=-Dhdp.version=current.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.history.fs.logDirectory=hdfs:///kylin/spark-history.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.driver.extraJavaOptions=-Dhdp.version=current.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.dynamicAllocation.enabled=true.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.master=yarn.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.executor.extraJavaOptions=-Dfile.encoding=UTF-8 -Dhdp.version=current -Dlog4j.configuration=spark-executor-log4j.properties -Dlog4j.debug -Dkylin.hdfs.working.dir=hdfs://umecluster/kylin_new/kylin_metadata/ -Dkylin.metadata.identifier=kylin_metadata -Dkylin.spark.category=job -Dkylin.spark.project=user_growth -Dkylin.spark.identifier=4e85bb17-9201-441f-afcb-f17827cc0d18 -Dkylin.spark.jobName=4e85bb17-9201-441f-afcb-f17827cc0d18-01 -Duser.timezone=GMT+8.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.hadoop.yarn.timeline-service.enabled=false.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.driver.cores=1.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.executor.memory=20G.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.eventLog.enabled=true.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.eventLog.dir=hdfs:///kylin/spark-history.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.dynamicAllocation.minExecutors=1.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.executor.cores=5.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.sql.shuffle.partitions=120.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.dynamicAllocation.maxExecutors=24.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.executor.memoryOverhead=2048M.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.driver.memory=10G.
2021-06-08 15:20:26,946 INFO [pool-1-thread-1] application.SparkApplication : Override user-defined spark conf, set spark.shuffle.service.enabled=true.
2021-06-08 15:20:26,949 INFO [pool-1-thread-1] util.TimeZoneUtils : System timezone set to GMT+8, TimeZoneId: GMT+08:00.
2021-06-08 15:20:26,949 INFO [pool-1-thread-1] application.SparkApplication : Sleep for random seconds to avoid submitting too many spark job at the same time.
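The "Sleep for random seconds" record explains the roughly 47-second gap before the next timestamp (15:20:26 to 15:21:13): the driver adds random jitter before submission so that jobs launched together do not all hit YARN at once. A sketch of the idea, assuming a 60-second upper bound (the bound is an assumption, not Kylin's actual value):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Random jitter before job submission, to spread out concurrent submissions.
public class SubmitJitterSketch {
    public static void main(String[] args) throws InterruptedException {
        long jitterMs = ThreadLocalRandom.current().nextLong(TimeUnit.SECONDS.toMillis(60)); // assumed bound
        System.out.println("Sleeping " + jitterMs + " ms before submitting the Spark job");
        Thread.sleep(jitterMs);
        // ... submit the job here ...
    }
}
```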
2021-06-08 15:21:13,559 WARN [pool-1-thread-1] application.SparkApplication : Error occurred when check resource. Ignore it and try to submit this job.
java.util.NoSuchElementException: spark.driver.memoryOverhead
    at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:246)
    at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.SparkConf.get(SparkConf.scala:246)
    at org.apache.spark.utils.ResourceUtils$.checkResource(ResourceUtils.scala:70)
    at org.apache.spark.utils.ResourceUtils.checkResource(ResourceUtils.scala)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:259)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89)
    at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
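This warning is harmless here (the check is skipped and the job is submitted anyway), and its cause is visible in the conf dump above: spark.driver.memoryOverhead is never set, and the resource check reads it with the no-default form of SparkConf.get, which throws NoSuchElementException for unset keys. A minimal sketch of the two access forms (values are illustrative); explicitly setting the key, e.g. kylin.engine.spark-conf.spark.driver.memoryOverhead in kylin.properties, should also silence the warning:

```java
import org.apache.spark.SparkConf;

// Demonstrates why the resource check throws: get(key) has no default value.
public class ConfCheckSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .set("spark.driver.memory", "10G");                         // set, so get() succeeds
        System.out.println(conf.get("spark.driver.memory"));
        // conf.get("spark.driver.memoryOverhead");                         // unset -> NoSuchElementException
        String overhead = conf.get("spark.driver.memoryOverhead", "1024");  // safe form with a default
        System.out.println("driver memoryOverhead = " + overhead);
    }
}
```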
2021-06-08 15:21:14,233 INFO [pool-1-thread-1] util.log : Logging initialized @55988ms
2021-06-08 15:21:14,302 INFO [pool-1-thread-1] server.Server : jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown
2021-06-08 15:21:14,320 INFO [pool-1-thread-1] server.Server : Started @56075ms
2021-06-08 15:21:14,366 INFO [pool-1-thread-1] server.AbstractConnector : Started ServerConnector@6808649{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2021-06-08 15:21:14,393 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@5c43d4c5{/jobs,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,394 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@14219cfc{/jobs/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,394 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6b082d1c{/jobs/job,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,395 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@a04b86c{/jobs/job/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,395 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@12da03ac{/stages,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,395 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3810ac26{/stages/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,396 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@5a34ed32{/stages/stage,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,397 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@a95a8f5{/stages/stage/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,397 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@773599e8{/stages/pool,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,397 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@65b57c37{/stages/pool/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,398 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@188cd7c6{/storage,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,398 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3f791622{/storage/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,399 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@645822fc{/storage/rdd,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,399 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2a42be84{/storage/rdd/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,399 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@548e6eed{/environment,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,400 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2331ba58{/environment/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,400 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6db2300{/executors,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,401 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@39520bc1{/executors/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,401 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1e0cf300{/executors/threadDump,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,401 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@30c179e3{/executors/threadDump/json,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,410 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6fdd8fd8{/static,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,410 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@31cd56e8{/,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,411 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1a3bd28c{/api,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,412 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7ed2420d{/jobs/job/kill,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,412 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3b3fd1d5{/stages/stage/kill,null,AVAILABLE,@Spark}
2021-06-08 15:21:14,559 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at umetrip11-hdp2.6-111.travelsky.com/10.5.145.111:8050
2021-06-08 15:21:14,703 WARN [pool-1-thread-1] yarn.Client : Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2021-06-08 15:21:18,102 INFO [pool-1-thread-1] impl.YarnClientImpl : Submitted application application_1617093658603_222713 2021-06-08 15:21:25,185 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@12f17d{/metrics/json,null,AVAILABLE,@Spark} 2021-06-08 15:21:30,324 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at umetrip11-hdp2.6-111.travelsky.com/10.5.145.111:8050 2021-06-08 15:21:30,669 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@77f68831{/SQL,null,AVAILABLE,@Spark} 2021-06-08 15:21:30,670 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4d8b98c4{/SQL/json,null,AVAILABLE,@Spark} 2021-06-08 15:21:30,671 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@72791295{/SQL/execution,null,AVAILABLE,@Spark} 2021-06-08 15:21:30,671 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6bbbff3f{/SQL/execution/json,null,AVAILABLE,@Spark} 2021-06-08 15:21:30,673 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@46d38675{/static/sql,null,AVAILABLE,@Spark} 2021-06-08 15:21:31,083 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.cube.CubeManager 2021-06-08 15:21:31,100 INFO [pool-1-thread-1] cube.CubeManager : Initializing CubeManager with config kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 15:21:31,102 INFO [pool-1-thread-1] persistence.ResourceStore : Using metadata url kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta for resource store 2021-06-08 15:21:31,121 INFO [pool-1-thread-1] persistence.HDFSResourceStore : hdfs meta path : hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 15:21:31,123 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading CubeInstance from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/cube 2021-06-08 15:21:31,218 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.cube.CubeDescManager 2021-06-08 15:21:31,218 INFO [pool-1-thread-1] cube.CubeDescManager : Initializing CubeDescManager with config kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 15:21:31,219 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading CubeDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/cube_desc 2021-06-08 15:21:31,261 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.project.ProjectManager 2021-06-08 15:21:31,261 INFO [pool-1-thread-1] project.ProjectManager : Initializing ProjectManager with metadata url kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 15:21:31,262 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ProjectInstance from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/project 2021-06-08 15:21:31,272 DEBUG [pool-1-thread-1] 
cachesync.CachedCrudAssist : Loaded 1 ProjectInstance(s) out of 1 resource with 0 errors 2021-06-08 15:21:31,273 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.cachesync.Broadcaster 2021-06-08 15:21:31,274 DEBUG [pool-1-thread-1] cachesync.Broadcaster : 3 nodes in the cluster: [10.5.145.128:7070, 10.238.6.117:7070, 10.238.6.118:7070] 2021-06-08 15:21:31,277 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.model.DataModelManager 2021-06-08 15:21:31,282 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.TableMetadataManager 2021-06-08 15:21:31,282 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/table 2021-06-08 15:21:31,299 DEBUG [pool-1-thread-1] measure.MeasureTypeFactory : registering COUNT_DISTINCT(hllc), class org.apache.kylin.measure.hllc.HLLCMeasureType$Factory 2021-06-08 15:21:31,302 DEBUG [pool-1-thread-1] measure.MeasureTypeFactory : registering COUNT_DISTINCT(bitmap), class org.apache.kylin.measure.bitmap.BitmapMeasureType$Factory 2021-06-08 15:21:31,307 DEBUG [pool-1-thread-1] measure.MeasureTypeFactory : registering TOP_N(topn), class org.apache.kylin.measure.topn.TopNMeasureType$Factory 2021-06-08 15:21:31,309 DEBUG [pool-1-thread-1] measure.MeasureTypeFactory : registering RAW(raw), class org.apache.kylin.measure.raw.RawMeasureType$Factory 2021-06-08 15:21:31,310 DEBUG [pool-1-thread-1] measure.MeasureTypeFactory : registering EXTENDED_COLUMN(extendedcolumn), class org.apache.kylin.measure.extendedcolumn.ExtendedColumnMeasureType$Factory 2021-06-08 15:21:31,311 DEBUG [pool-1-thread-1] measure.MeasureTypeFactory : registering PERCENTILE_APPROX(percentile), class org.apache.kylin.measure.percentile.PercentileMeasureType$Factory 2021-06-08 15:21:31,312 DEBUG [pool-1-thread-1] measure.MeasureTypeFactory : registering COUNT_DISTINCT(dim_dc), class org.apache.kylin.measure.dim.DimCountDistinctMeasureType$Factory 2021-06-08 15:21:31,313 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 TableDesc(s) out of 1 resource with 0 errors 2021-06-08 15:21:31,314 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableExtDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/table_exd 2021-06-08 15:21:31,323 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 TableExtDesc(s) out of 1 resource with 0 errors 2021-06-08 15:21:31,323 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ExternalFilterDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/ext_filter 2021-06-08 15:21:31,324 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 0 ExternalFilterDesc(s) out of 0 resource with 0 errors 2021-06-08 15:21:31,324 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading DataModelDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/model_desc 2021-06-08 15:21:31,344 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 DataModelDesc(s) out of 1 resource with 0 errors 2021-06-08 15:21:31,357 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeDesc(s) out of 1 resource with 0 errors 2021-06-08 15:21:31,357 DEBUG 
[pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeInstance(s) out of 1 resource with 0 errors
2021-06-08 15:21:37,386 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:21:37,787 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:21:37,788 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:22:07,346 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:22:07,347 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:22:18,097 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:22:18,100 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[910] has 357294 row, 1191775380 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:22:18,115 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[910] has 357294 row, 1191775380 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:22:18,115 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:22:18,115 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[910] repartition to 5
2021-06-08 15:22:18,390 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:22:18,390 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:22:38,441 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 1 for reason Container killed by YARN for exceeding memory limits. 24.4 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:22:38,448 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 1 on umetrip29-hdp2.6-129.travelsky.com: Container killed by YARN for exceeding memory limits. 24.4 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:22:38,457 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 4.0 in stage 12.0 (TID 769, umetrip29-hdp2.6-129.travelsky.com, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 24.4 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:22:50,442 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 2 for reason Container killed by YARN for exceeding memory limits. 25.0 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:22:50,443 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 2 on r4200g2-app.travelsky.com: Container killed by YARN for exceeding memory limits. 25.0 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:22:50,444 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 4.1 in stage 12.0 (TID 770, r4200g2-app.travelsky.com, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 25.0 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:22:50,522 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.114:57333
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 15:23:19,031 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/910_temp.
2021-06-08 15:23:19,031 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 60916 ms.
2021-06-08 15:23:19,051 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:23:19,052 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:23:19,765 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:23:19,834 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:23:19,834 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:23:49,641 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:23:49,642 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:23:55,858 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:23:55,859 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[654] has 85045 row, 654683569 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
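
The Repartitioner lines above expose the partition-count arithmetic: a size-based estimate and a row-based estimate are computed, and the final count is their average rounded up (cuboid 910: (9+1)/2 = 5; cuboid 654: (5+1)/2 = 3; cuboid 782 later: (8+1)/2 rounded up = 5). A 128 MiB size threshold reproduces every size-based estimate in this log, e.g. ceil(1191775380 / 128 MiB) = 9. The sketch below is a hypothetical reconstruction inferred from these numbers, not Kylin's actual Repartitioner code; the thresholds in particular are assumptions:

    public class RepartitionEstimate {
        // Inferred from this log: 128 MiB per partition reproduces all of the
        // size-based estimates. The row threshold only needs to be large enough
        // that every row-based estimate here comes out as 1, so it is a guess.
        static final long SHARD_SIZE_BYTES = 128L << 20;
        static final long ROWS_PER_PARTITION = 2_500_000L;

        static int finalPartitions(long bytes, long rows) {
            int bySize = (int) Math.ceil((double) bytes / SHARD_SIZE_BYTES);
            int byRows = (int) Math.ceil((double) rows / ROWS_PER_PARTITION);
            // Average of the two estimates, rounded up: (9 + 1) / 2 -> 5.
            return (bySize + byRows + 1) / 2;
        }

        public static void main(String[] args) {
            // cuboid[910]: 1191775380 bytes, 357294 rows -> size 9, rows 1, final 5
            System.out.println(finalPartitions(1_191_775_380L, 357_294L)); // 5
            // cuboid[654]: 654683569 bytes, 85045 rows -> size 5, rows 1, final 3
            System.out.println(finalPartitions(654_683_569L, 85_045L));    // 3
        }
    }
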
2021-06-08 15:23:55,875 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[654] has 85045 row, 654683569 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 15:23:55,875 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:23:55,875 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[654] repartition to 3 2021-06-08 15:23:56,117 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:23:56,117 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:24:07,162 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/654_temp. 2021-06-08 15:24:07,162 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 11287 ms. 2021-06-08 15:24:07,180 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:24:07,181 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:24:07,695 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:24:07,766 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:24:07,766 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:24:09,032 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 24 for reason Container killed by YARN for exceeding memory limits. 24.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:24:09,032 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 24 on umetrip32-hdp2.6-132.travelsky.com: Container killed by YARN for exceeding memory limits. 24.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:24:09,032 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 216.0 in stage 31.0 (TID 1510, umetrip32-hdp2.6-132.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 24.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:24:09,032 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 213.0 in stage 31.0 (TID 1494, umetrip32-hdp2.6-132.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 24.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
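
Every executor loss in this log is the same failure mode: the container's total physical memory (executor heap plus off-heap overhead) crosses the 22 GB YARN cap, so YARN kills the container even though the running tasks are healthy; YARN-4714 is cited because the virtual-memory check can compound the problem. The log's own remedy is to enlarge the overhead allotment. A sketch under assumed sizes; the 18g/4096m split is illustrative, chosen only so that heap plus overhead stays at roughly the current 22 GB total:

    import org.apache.spark.SparkConf;

    public class ExecutorOverheadTuning {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    // Assumed split: shrink the heap and grow the off-heap
                    // overhead so their sum still fits the 22 GB container cap.
                    .set("spark.executor.memory", "18g")
                    // Spark 2.x key, as named in the log; Spark 3.x renames it
                    // to spark.executor.memoryOverhead.
                    .set("spark.yarn.executor.memoryOverhead", "4096"); // MB
            System.out.println(conf.toDebugString());
        }
    }

Disabling yarn.nodemanager.vmem-check-enabled, the alternative the message offers, is a yarn-site.xml change on every NodeManager and affects all applications, so raising the overhead is usually the narrower fix.
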
2021-06-08 15:24:09,032 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 219.0 in stage 31.0 (TID 1517, umetrip32-hdp2.6-132.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 24.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:24:09,032 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 210.0 in stage 31.0 (TID 1478, umetrip32-hdp2.6-132.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 24.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:24:09,032 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 207.0 in stage 31.0 (TID 1462, umetrip32-hdp2.6-132.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 24.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:24:09,386 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.132:24144 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:24:27,267 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:24:27,267 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:24:32,098 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:24:32,100 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[782] has 208663 row, 940413705 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 15:24:32,114 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[782] has 208663 row, 940413705 bytes and 120 files. 
Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 15:24:32,114 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:24:32,114 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[782] repartition to 5 2021-06-08 15:24:32,347 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:24:32,347 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:24:39,121 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/782_temp. 2021-06-08 15:24:39,121 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7007 ms. 2021-06-08 15:24:39,143 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:24:39,144 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:24:39,685 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:24:39,751 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:24:39,751 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:25:01,751 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 5 on umetrip08-hdp2.6-108.travelsky.com: Container killed by YARN for exceeding memory limits. 22.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:25:01,751 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 5 for reason Container killed by YARN for exceeding memory limits. 22.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:25:01,751 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 34.0 in stage 44.0 (TID 2337, umetrip08-hdp2.6-108.travelsky.com, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 22.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:25:16,761 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 41 for reason Container killed by YARN for exceeding memory limits. 22.9 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:25:16,761 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 41 on umetrip11-hdp2.6-111.travelsky.com: Container killed by YARN for exceeding memory limits. 22.9 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
2021-06-08 15:25:16,761 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 34.1 in stage 44.0 (TID 2447, umetrip11-hdp2.6-111.travelsky.com, executor 41): ExecutorLostFailure (executor 41 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 22.9 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:25:18,575 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.111:44375 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:25:46,888 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:25:46,888 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:25:53,559 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:25:53,560 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[655] has 101546 row, 656133811 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 15:25:53,579 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[655] has 101546 row, 656133811 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 15:25:53,579 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:25:53,579 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[655] repartition to 3 2021-06-08 15:25:56,865 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:25:56,865 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:26:07,955 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/655_temp. 2021-06-08 15:26:07,955 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 14376 ms. 
2021-06-08 15:26:07,975 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:26:07,976 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:26:08,452 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:26:08,516 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:26:08,516 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:26:39,179 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:26:39,179 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:26:44,519 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:26:44,520 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[911] has 401884 row, 1193267888 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5. 2021-06-08 15:26:44,536 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[911] has 401884 row, 1193267888 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5. 2021-06-08 15:26:44,536 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:26:44,536 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[911] repartition to 5 2021-06-08 15:26:44,730 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:26:44,730 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:26:51,055 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/911_temp. 2021-06-08 15:26:51,055 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6519 ms. 2021-06-08 15:26:51,075 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:26:51,076 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:26:51,764 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:26:51,831 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:26:51,831 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:27:08,357 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 40 for reason Container killed by YARN for exceeding memory limits. 22.8 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:27:08,357 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 40 on umetrip33-hdp2.6-133.travelsky.com: Container killed by YARN for exceeding memory limits. 22.8 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
2021-06-08 15:27:08,357 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 15.0 in stage 70.0 (TID 3975, umetrip33-hdp2.6-133.travelsky.com, executor 40): ExecutorLostFailure (executor 40 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 22.8 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:27:08,843 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.133:15733 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:27:23,365 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 39 for reason Container killed by YARN for exceeding memory limits. 23.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:27:23,365 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 39 on umetrip33-hdp2.6-133.travelsky.com: Container killed by YARN for exceeding memory limits. 23.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:27:23,365 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 15.1 in stage 70.0 (TID 4152, umetrip33-hdp2.6-133.travelsky.com, executor 39): ExecutorLostFailure (executor 39 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 23.1 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
2021-06-08 15:27:24,234 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.133:15725 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:27:35,371 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 30 for reason Container killed by YARN for exceeding memory limits. 22.3 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:27:35,371 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 30 on r4200g2-app.travelsky.com: Container killed by YARN for exceeding memory limits. 22.3 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:27:35,372 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 15.2 in stage 70.0 (TID 4153, r4200g2-app.travelsky.com, executor 30): ExecutorLostFailure (executor 30 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 22.3 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
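
At this point task 15 of stage 70 has died with its executor three times (attempts 15.0 through 15.2, each on a different host). Spark retries a task spark.task.maxFailures times, four by default, before giving up on the stage, which is exactly what happens next. The default is made explicit in the sketch below; raising it would only postpone the abort, since each retry re-triggers the same container kill:

    import org.apache.spark.SparkConf;

    public class TaskRetryBudget {
        public static void main(String[] args) {
            // Default value shown explicitly: the fourth failure of the same
            // task aborts the stage and, with it, the merge job.
            SparkConf conf = new SparkConf().set("spark.task.maxFailures", "4");
            System.out.println(conf.get("spark.task.maxFailures"));
        }
    }
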
2021-06-08 15:27:35,963 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.114:58451
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 15:27:50,372 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 29 for reason Container killed by YARN for exceeding memory limits. 23.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:27:50,372 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 29 on umetrip19-hdp2.6-119.travelsky.com: Container killed by YARN for exceeding memory limits. 23.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:27:50,373 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 15.3 in stage 70.0 (TID 4154, umetrip19-hdp2.6-119.travelsky.com, executor 29): ExecutorLostFailure (executor 29 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 23.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:27:50,374 ERROR [pool-1-thread-1] scheduler.TaskSetManager : Task 15 in stage 70.0 failed 4 times; aborting job
2021-06-08 15:27:50,389 ERROR [pool-1-thread-1] datasources.FileFormatWriter : Aborting job 5616aa5c-21a2-4847-bdb9-0e4944181f83.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 15 in stage 70.0 failed 4 times, most recent failure: Lost task 15.3 in stage 70.0 (TID 4154, umetrip19-hdp2.6-119.travelsky.com, executor 29): ExecutorLostFailure (executor 29 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 23.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567) at 
org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28) at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171) at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59) at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:27:50,393 ERROR [pool-1-thread-1] job.BuildLayoutWithUpdate : Error occurred when run merge-cuboid-783 org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567) at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28) at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171) at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59) at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118) at 
org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 15 in stage 70.0 failed 4 times, most recent failure: Lost task 15.3 in stage 70.0 (TID 4154, umetrip19-hdp2.6-119.travelsky.com, executor 29): ExecutorLostFailure (executor 29 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 23.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167) ... 34 more 2021-06-08 15:27:50,394 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:27:50,407 INFO [pool-1-thread-1] server.AbstractConnector : Stopped Spark@6808649{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-06-08 15:27:50,470 ERROR [pool-1-thread-1] application.SparkApplication : The spark job execute failed! java.lang.RuntimeException: org.apache.spark.SparkException: Job aborted. 
at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70) at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122) at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567) at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28) at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171) at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59) at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) ... 3 more Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 15 in stage 70.0 failed 4 times, most recent failure: Lost task 15.3 in stage 70.0 (TID 4154, umetrip19-hdp2.6-119.travelsky.com, executor 29): ExecutorLostFailure (executor 29 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 23.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167) ... 34 more 2021-06-08 15:27:50,471 ERROR [pool-1-thread-1] application.JobMonitor : Job failed the 1 times. java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeMergeJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:92) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: org.apache.spark.SparkException: Job aborted. at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70) at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122) at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89) ... 4 more Caused by: org.apache.spark.SparkException: Job aborted. 
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567) at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28) at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171) at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59) at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) ... 3 more Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 15 in stage 70.0 failed 4 times, most recent failure: Lost task 15.3 in stage 70.0 (TID 4154, umetrip19-hdp2.6-119.travelsky.com, executor 29): ExecutorLostFailure (executor 29 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 23.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167) ... 34 more 2021-06-08 15:27:50,489 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at umetrip11-hdp2.6-111.travelsky.com/10.5.145.111:8050 2021-06-08 15:27:50,492 INFO [pool-1-thread-1] cluster.YarnInfoFetcher : Cluster maximum resource allocation ResourceInfo(49152,25) 2021-06-08 15:27:50,494 INFO [pool-1-thread-1] application.SparkApplication : Executor task org.apache.kylin.engine.spark.job.CubeMergeJob with args : {"distMetaUrl":"kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta","submitter":"SYSTEM","dataRangeEnd":"1622332800000","targetModel":"cee6d39a-b052-4351-ba8a-73ddd583836e","dataRangeStart":"1619827200000","project":"user_growth","className":"org.apache.kylin.engine.spark.job.CubeMergeJob","segmentName":"20210501000000_20210530000000","parentId":"4e85bb17-9201-441f-afcb-f17827cc0d18","jobId":"4e85bb17-9201-441f-afcb-f17827cc0d18","outputMetaUrl":"kylin_metadata@jdbc,url=jdbc:mysql://10.238.2.228:6033/kylin,username=kylin,password=******,maxActive=10,maxIdle=10","segmentId":"c0d90f42-34ef-d9d1-e06f-42eee385b290","cuboidsNum":"63","cubeName":"his_msg_push_event","jobType":"MERGE","cubeId":"77dfdfc0-44df-9963-0792-3b2fcca55734","segmentIds":"c0d90f42-34ef-d9d1-e06f-42eee385b290"} 2021-06-08 15:27:50,494 INFO [pool-1-thread-1] utils.MetaDumpUtil : Ready to load KylinConfig from uri: kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 15:27:50,516 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.metadata.url.identifier : kylin_metadata 2021-06-08 15:27:50,516 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.log.spark-executor-properties-file : /opt/appdata/disk01/app/kylin/conf/spark-executor-log4j.properties 2021-06-08 15:27:50,516 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.source.provider.0 : 
org.apache.kylin.engine.spark.source.HiveSource 2021-06-08 15:27:50,517 INFO [pool-1-thread-1] util.TimeZoneUtils : System timezone set to GMT+8, TimeZoneId: GMT+08:00. 2021-06-08 15:27:50,517 INFO [pool-1-thread-1] application.SparkApplication : Sleep for random seconds to avoid submitting too many spark job at the same time. 2021-06-08 15:28:10,355 WARN [pool-1-thread-1] application.SparkApplication : Error occurred when check resource. Ignore it and try to submit this job. java.util.NoSuchElementException: spark.driver.memoryOverhead at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:246) at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:246) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.SparkConf.get(SparkConf.scala:246) at org.apache.spark.utils.ResourceUtils$.checkResource(ResourceUtils.scala:70) at org.apache.spark.utils.ResourceUtils.checkResource(ResourceUtils.scala) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:259) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:28:10,440 INFO [pool-1-thread-1] server.Server : jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown 2021-06-08 15:28:10,441 INFO [pool-1-thread-1] server.Server : Started @472196ms 2021-06-08 15:28:10,442 INFO [pool-1-thread-1] server.AbstractConnector : Started ServerConnector@27882681{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-06-08 15:28:10,443 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2b7b691b{/jobs,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,443 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2b9ec94b{/jobs/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,443 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6b16cb69{/jobs/job,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,443 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@d4c54a4{/jobs/job/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,443 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7d8fc7a7{/stages,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,444 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@20068dcc{/stages/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,444 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7d964726{/stages/stage,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,444 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1290c544{/stages/stage/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,444 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@11218a4e{/stages/pool,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,445 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@55d0d4b{/stages/pool/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,445 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4abdd5b1{/storage,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,445 INFO [pool-1-thread-1] handler.ContextHandler : Started 
o.s.j.s.ServletContextHandler@5e938347{/storage/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,445 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@43fd1121{/storage/rdd,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,446 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2e276b40{/storage/rdd/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,446 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@a932368{/environment,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,446 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@485800c6{/environment/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,446 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@630774f5{/executors,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,446 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@766f7ebe{/executors/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,447 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@65f48b78{/executors/threadDump,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,447 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6da2f930{/executors/threadDump/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,447 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@35da98ad{/static,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,448 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@38cd8a26{/,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,448 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7923790c{/api,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,448 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@624c9b4a{/jobs/job/kill,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,449 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3604a19f{/stages/stage/kill,null,AVAILABLE,@Spark} 2021-06-08 15:28:10,495 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at umetrip11-hdp2.6-111.travelsky.com/10.5.145.111:8050 2021-06-08 15:28:10,500 WARN [pool-1-thread-1] yarn.Client : Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME. 
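The first attempt above died once task 15 in stage 70.0 had failed 4 times, each time because YARN killed the container at 23.7 GB against a 22 GB physical limit, and the retry's pre-submit resource check was then skipped because spark.driver.memoryOverhead was never set (the NoSuchElementException above, logged as "Ignore it and try to submit this job"). Both symptoms point at the same knob: more off-heap headroom for the containers. A minimal sketch of the relevant overrides, assuming the Kylin 4.x convention that kylin.engine.spark-conf.* entries in kylin.properties are passed straight through to spark-submit; the values are illustrative, not read from this cluster:

    # kylin.properties (illustrative values, not this cluster's settings)
    kylin.engine.spark-conf.spark.executor.memoryOverhead=6g
    kylin.engine.spark-conf.spark.driver.memoryOverhead=2g

One caveat on the log's own hint: these are physical-memory (pmem) kills, so the suggested yarn.nodemanager.vmem-check-enabled=false, which only disables the virtual-memory check, would not help here.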
2021-06-08 15:28:13,709 INFO [pool-1-thread-1] impl.YarnClientImpl : Submitted application application_1617093658603_222717 2021-06-08 15:28:18,722 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@315d548a{/metrics/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:25,638 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at umetrip11-hdp2.6-111.travelsky.com/10.5.145.111:8050 2021-06-08 15:28:25,670 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3b011c3f{/SQL,null,AVAILABLE,@Spark} 2021-06-08 15:28:25,670 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@60758d1{/SQL/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:25,671 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1769b3e2{/SQL/execution,null,AVAILABLE,@Spark} 2021-06-08 15:28:25,671 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@df80af3{/SQL/execution/json,null,AVAILABLE,@Spark} 2021-06-08 15:28:25,672 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6bd910bb{/static/sql,null,AVAILABLE,@Spark} 2021-06-08 15:28:25,674 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.cube.CubeManager 2021-06-08 15:28:25,674 INFO [pool-1-thread-1] cube.CubeManager : Initializing CubeManager with config kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 15:28:25,674 INFO [pool-1-thread-1] persistence.ResourceStore : Using metadata url kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta for resource store 2021-06-08 15:28:25,691 INFO [pool-1-thread-1] persistence.HDFSResourceStore : hdfs meta path : hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 15:28:25,692 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading CubeInstance from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/cube 2021-06-08 15:28:25,700 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.cube.CubeDescManager 2021-06-08 15:28:25,700 INFO [pool-1-thread-1] cube.CubeDescManager : Initializing CubeDescManager with config kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 15:28:25,700 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading CubeDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/cube_desc 2021-06-08 15:28:25,704 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.project.ProjectManager 2021-06-08 15:28:25,704 INFO [pool-1-thread-1] project.ProjectManager : Initializing ProjectManager with metadata url kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 15:28:25,704 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ProjectInstance from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/project 2021-06-08 15:28:25,707 DEBUG [pool-1-thread-1] 
cachesync.CachedCrudAssist : Loaded 1 ProjectInstance(s) out of 1 resource with 0 errors 2021-06-08 15:28:25,707 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.cachesync.Broadcaster 2021-06-08 15:28:25,708 DEBUG [pool-1-thread-1] cachesync.Broadcaster : 3 nodes in the cluster: [10.5.145.128:7070, 10.238.6.117:7070, 10.238.6.118:7070] 2021-06-08 15:28:25,708 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.model.DataModelManager 2021-06-08 15:28:25,708 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.TableMetadataManager 2021-06-08 15:28:25,708 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/table 2021-06-08 15:28:25,712 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 TableDesc(s) out of 1 resource with 0 errors 2021-06-08 15:28:25,712 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableExtDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/table_exd 2021-06-08 15:28:25,716 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 TableExtDesc(s) out of 1 resource with 0 errors 2021-06-08 15:28:25,716 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ExternalFilterDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/ext_filter 2021-06-08 15:28:25,716 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 0 ExternalFilterDesc(s) out of 0 resource with 0 errors 2021-06-08 15:28:25,716 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading DataModelDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/model_desc 2021-06-08 15:28:25,720 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 DataModelDesc(s) out of 1 resource with 0 errors 2021-06-08 15:28:25,721 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeDesc(s) out of 1 resource with 0 errors 2021-06-08 15:28:25,721 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeInstance(s) out of 1 resource with 0 errors 2021-06-08 15:28:33,062 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:28:33,153 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:28:33,153 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:29:33,774 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:29:33,774 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:29:38,362 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:29:38,364 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[910] has 357294 row, 1191799870 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5. 2021-06-08 15:29:38,379 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[910] has 357294 row, 1191799870 bytes and 120 files. 
Partition count calculated by file size is 9, calculated by row count is 1, final is 5. 2021-06-08 15:29:38,379 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:29:38,379 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[910] repartition to 5 2021-06-08 15:29:38,657 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:29:38,657 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:30:07,968 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 3 for reason Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:30:07,968 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 3 on umetrip16-hdp2.6-116.travelsky.com: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:30:07,968 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 2.0 in stage 17.0 (TID 1170, umetrip16-hdp2.6-116.travelsky.com, executor 3): ExecutorLostFailure (executor 3 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:30:10,968 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.116:12358 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:30:42,220 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/910_temp. 2021-06-08 15:30:42,220 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 63841 ms. 
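The three partition counts the Repartitioner prints fit one rule for every cuboid in this log: ceil(bytes / 128 MiB) for the file-size estimate, ceil(rows / 2,500,000) for the row-count estimate, and the rounded-up average of the two as the final count. A worked sketch with cuboid[910]'s numbers, assuming the usual Kylin 4 defaults kylin.storage.columnar.shard-size-mb=128 and kylin.storage.columnar.shard-rowcount=2500000:

    public class RepartitionMath {
        public static void main(String[] args) {
            long bytes = 1191799870L;                                    // cuboid[910] bytes, from the log
            long rows  = 357294L;                                        // cuboid[910] rows, from the log
            int bySize = (int) Math.ceil(bytes / (128.0 * 1024 * 1024)); // 9, matches "by file size is 9"
            int byRows = (int) Math.ceil(rows / 2_500_000.0);            // 1, matches "by row count is 1"
            int fin    = (int) Math.ceil((bySize + byRows) / 2.0);       // 5, matches "final is 5"
            System.out.printf("bySize=%d byRows=%d final=%d%n", bySize, byRows, fin);
        }
    }

The same arithmetic reproduces cuboid[654] (5, 1, 3) and cuboid[782] (8, 1, 5) below, so compacting each merged cuboid from 120 small files down to a handful is behaving as designed.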
2021-06-08 15:30:42,237 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:30:42,238 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:30:43,259 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:30:43,329 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:30:43,329 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:31:28,680 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:31:28,680 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:31:41,406 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:31:41,407 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[654] has 85045 row, 654696321 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 15:31:41,423 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[654] has 85045 row, 654696321 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 15:31:41,423 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:31:41,423 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[654] repartition to 3 2021-06-08 15:31:41,673 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:31:41,673 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:32:10,126 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/654_temp. 2021-06-08 15:32:10,126 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 28703 ms. 2021-06-08 15:32:10,142 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:32:10,143 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:32:11,558 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:32:11,649 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:32:11,649 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:32:54,213 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:32:54,213 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:32:58,019 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:32:58,020 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[782] has 208663 row, 940433415 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 15:32:58,037 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[782] has 208663 row, 940433415 bytes and 120 files. 
Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 15:32:58,037 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:32:58,037 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[782] repartition to 5 2021-06-08 15:32:58,261 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:32:58,261 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:33:11,288 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 42 for reason Container killed by YARN for exceeding memory limits. 37.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:33:11,288 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 42 on umetrip19-hdp2.6-119.travelsky.com: Container killed by YARN for exceeding memory limits. 37.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:33:11,289 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 1.0 in stage 53.0 (TID 3192, umetrip19-hdp2.6-119.travelsky.com, executor 42): ExecutorLostFailure (executor 42 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 37.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:33:11,289 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 4.0 in stage 53.0 (TID 3195, umetrip19-hdp2.6-119.travelsky.com, executor 42): ExecutorLostFailure (executor 42 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 37.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:33:40,267 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/782_temp. 2021-06-08 15:33:40,267 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 42230 ms. 2021-06-08 15:33:40,286 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:33:40,287 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:33:41,915 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:33:41,987 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:33:41,987 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:34:07,186 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:34:07,186 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 
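"Collect output rows failed ... use count() to collect cuboid rows" recurs for every cuboid in this merge: the write-side metrics are unavailable, so Kylin re-reads what it just wrote and counts it, one extra Spark job per cuboid, which contributes to the half-minute gaps between "Wait to take job result" and the repartition lines. A minimal sketch of the shape of that fallback, with an illustrative path (the real logic lives in CubeMergeJob, which this log only names):

    import org.apache.spark.sql.SparkSession;

    public class CuboidRowCount {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("cuboid-row-count").getOrCreate();
            // Re-read the cuboid that was just written and count it; this is a
            // full scan, which is why the fallback is logged as a warning.
            long rows = spark.read().parquet("hdfs://umecluster/.../910_temp").count(); // illustrative path
            System.out.println("rows=" + rows);
            spark.stop();
        }
    }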
2021-06-08 15:34:13,436 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:34:13,437 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[655] has 101546 row, 656147355 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 15:34:13,452 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[655] has 101546 row, 656147355 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 15:34:13,452 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:34:13,452 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[655] repartition to 3 2021-06-08 15:34:13,669 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:34:13,669 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:34:27,799 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/655_temp. 2021-06-08 15:34:27,799 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 14347 ms. 2021-06-08 15:34:27,816 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:34:27,816 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:34:28,788 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:34:28,866 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:34:28,866 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:34:50,426 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:34:50,426 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:34:57,510 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:34:57,511 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[911] has 401884 row, 1193293014 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5. 2021-06-08 15:34:57,526 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[911] has 401884 row, 1193293014 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5. 2021-06-08 15:34:57,526 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:34:57,527 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[911] repartition to 5 2021-06-08 15:34:57,716 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:34:57,716 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:35:12,547 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/911_temp. 
2021-06-08 15:35:12,547 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 15021 ms. 2021-06-08 15:35:12,565 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:35:12,566 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:35:13,649 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:35:13,721 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:35:13,722 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:35:52,483 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:35:52,483 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:35:57,420 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:35:57,422 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[783] has 238128 row, 942126597 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 15:35:57,439 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[783] has 238128 row, 942126597 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 15:35:57,439 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:35:57,439 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[783] repartition to 5 2021-06-08 15:35:57,684 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:35:57,684 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:36:04,925 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/783_temp. 2021-06-08 15:36:04,925 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7486 ms. 2021-06-08 15:36:04,944 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:36:04,944 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:36:05,817 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:36:05,887 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:36:05,887 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:36:21,794 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:36:21,794 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:36:22,835 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:36:22,837 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[527] has 56804 row, 438754508 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3. 
2021-06-08 15:36:22,853 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[527] has 56804 row, 438754508 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3. 2021-06-08 15:36:22,853 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:36:22,853 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[527] repartition to 3 2021-06-08 15:36:23,053 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:36:23,054 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:36:28,652 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/527_temp. 2021-06-08 15:36:28,652 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 5799 ms. 2021-06-08 15:36:28,671 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:36:28,672 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:36:29,706 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:36:29,777 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:36:29,777 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:36:49,513 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:36:49,513 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:36:54,427 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:36:54,428 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[926] has 598417 row, 1408545743 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6. 2021-06-08 15:36:54,444 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[926] has 598417 row, 1408545743 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6. 2021-06-08 15:36:54,444 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:36:54,444 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[926] repartition to 6 2021-06-08 15:36:54,631 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:36:54,631 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:37:02,891 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/926_temp. 2021-06-08 15:37:02,891 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 8447 ms. 2021-06-08 15:37:02,911 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:37:02,912 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 
2021-06-08 15:37:04,717 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:37:04,782 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:37:04,782 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:37:26,879 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:37:26,879 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:37:32,390 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:37:32,391 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[670] has 151229 row, 864430822 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4. 2021-06-08 15:37:32,405 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[670] has 151229 row, 864430822 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4. 2021-06-08 15:37:32,405 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:37:32,405 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[670] repartition to 4 2021-06-08 15:37:32,584 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:37:32,584 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:37:41,706 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/670_temp. 2021-06-08 15:37:41,706 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9301 ms. 2021-06-08 15:37:41,724 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:37:41,725 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:37:42,645 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:37:42,711 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:37:42,711 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:38:00,404 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:38:00,404 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:38:05,382 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:38:05,383 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[542] has 84023 row, 611463118 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 15:38:05,398 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[542] has 84023 row, 611463118 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 
2021-06-08 15:38:05,398 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:38:05,398 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[542] repartition to 3 2021-06-08 15:38:05,597 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:38:05,598 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:38:25,654 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/542_temp. 2021-06-08 15:38:25,654 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 20256 ms. 2021-06-08 15:38:25,673 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:38:25,673 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:38:26,578 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:38:26,644 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:38:26,644 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:38:50,813 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 15:38:50,813 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 15:38:55,463 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:38:55,465 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[798] has 356137 row, 1160102399 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5. 2021-06-08 15:38:55,480 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[798] has 356137 row, 1160102399 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5. 2021-06-08 15:38:55,480 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 15:38:55,480 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[798] repartition to 5 2021-06-08 15:38:55,676 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:38:55,676 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:39:16,169 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 27 for reason Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:39:16,169 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 27 on umetrip28-hdp2.6-128.travelsky.com: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
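After the resubmit, Kylin read the cluster's maximum allocation (ResourceInfo(49152,25), i.e. 48 GB and 25 vcores) and the executors now run in 36 GB containers instead of 22 GB, yet peaks of 36.4 to 39.0 GB in this record and the ones that follow still breach the limit. The log never prints the heap/overhead split, but repeated pmem kills like these usually mean off-heap memory (netty transfer buffers, parquet I/O) is outgrowing the overhead allowance, so the extra room belongs in memoryOverhead rather than in a larger heap. An illustrative split that stays inside the 48 GB YARN ceiling; sketch values, not this job's actual settings:

    spark.executor.memory=30g
    spark.executor.memoryOverhead=10g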
2021-06-08 15:39:16,169 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 4.0 in stage 197.0 (TID 13219, umetrip28-hdp2.6-128.travelsky.com, executor 27): ExecutorLostFailure (executor 27 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:39:19,021 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.128:1373 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:39:43,809 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/798_temp. 2021-06-08 15:39:43,809 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 48329 ms. 2021-06-08 15:39:43,827 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 15:39:43,828 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 15:39:44,717 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 15:39:44,780 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:39:44,780 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 15:39:59,818 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 51 for reason Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:39:59,819 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 51 on umetrip30-hdp2.6-130.travelsky.com: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
2021-06-08 15:39:59,819 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 135.0 in stage 208.0 (TID 13376, umetrip30-hdp2.6-130.travelsky.com, executor 51): ExecutorLostFailure (executor 51 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:39:59,819 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 344.0 in stage 208.0 (TID 13536, umetrip30-hdp2.6-130.travelsky.com, executor 51): ExecutorLostFailure (executor 51 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:40:00,117 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.130:52050 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:40:17,826 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 36 for reason Container killed by YARN for exceeding memory limits. 36.4 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:40:17,826 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 36 on umetrip25-hdp2.6-125.travelsky.com: Container killed by YARN for exceeding memory limits. 36.4 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:40:17,827 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 135.1 in stage 208.0 (TID 13650, umetrip25-hdp2.6-125.travelsky.com, executor 36): ExecutorLostFailure (executor 36 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.4 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
2021-06-08 15:40:21,288 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.125:3215 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 15:40:38,836 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 34 for reason Container killed by YARN for exceeding memory limits. 37.6 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:40:38,836 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 34 on umetrip11-hdp2.6-111.travelsky.com: Container killed by YARN for exceeding memory limits. 37.6 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 15:40:38,836 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 135.2 in stage 208.0 (TID 13651, umetrip11-hdp2.6-111.travelsky.com, executor 34): ExecutorLostFailure (executor 34 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 37.6 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
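Task 135 of stage 208.0 has now been lost three times running (attempts 135.0, 135.1 and 135.2 on executors 51, 36 and 34); under Spark's default spark.task.maxFailures of 4, one more lost container aborts the whole job, which is exactly how the first attempt ended ("Task 15 in stage 70.0 failed 4 times"). Raising the retry budget only masks the memory shortfall, but as a stopgap it can keep a long merge alive while the overhead settings are tuned:

    spark.task.maxFailures=8   # illustrative stopgap; the real fix is more memoryOverhead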
2021-06-08 15:40:40,592 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.111:9746
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 15:41:10,620 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:41:10,620 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:41:34,115 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:41:34,116 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[671] has 175104 row, 866501554 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 15:41:34,131 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[671] has 175104 row, 866501554 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 15:41:34,131 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:41:34,131 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[671] repartition to 4
2021-06-08 15:41:34,321 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:41:34,321 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:41:53,344 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/671_temp.
2021-06-08 15:41:53,344 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 19213 ms.
2021-06-08 15:41:53,363 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:41:53,364 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:41:54,664 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
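Every "Partition count" line in this log fits one pattern: the size-based count matches ceil(bytes / 128 MB), the row-based count matches ceil(rows / 2,500,000), and the final count is the rounded-up mean of the two. The sketch below reproduces the reported numbers; it is inferred from the log itself, not taken from Kylin's source.

// Hedged sketch of the repartition heuristic inferred from this log's
// Repartitioner lines; the 128 MB and 2.5M-row thresholds are assumptions
// that happen to reproduce every count reported here.
def partitionCount(bytes: Long, rows: Long): Int = {
  val bySize = math.ceil(bytes / (128.0 * 1024 * 1024)).toInt // "calculated by file size"
  val byRows = math.ceil(rows / 2500000.0).toInt              // "calculated by row count"
  math.ceil((bySize + byRows) / 2.0).toInt                    // "final"
}

// cuboid[671] above: 866501554 bytes -> 7 by size, 175104 rows -> 1 by rows, final 4
assert(partitionCount(866501554L, 175104L) == 4)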
2021-06-08 15:41:54,732 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:41:54,732 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:42:36,574 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:42:36,574 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:42:44,860 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:42:44,861 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[799] has 395212 row, 1162009121 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:42:44,876 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[799] has 395212 row, 1162009121 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:42:44,876 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:42:44,876 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[799] repartition to 5
2021-06-08 15:42:45,107 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:42:45,107 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:43:07,191 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/799_temp.
2021-06-08 15:43:07,191 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 22315 ms.
2021-06-08 15:43:07,211 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:43:07,212 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:43:08,199 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:43:08,268 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:43:08,269 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:43:34,718 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:43:34,718 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:43:36,028 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:43:36,028 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[543] has 99258 row, 613384024 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 15:43:36,044 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[543] has 99258 row, 613384024 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
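Each "Collect output rows failed" / "use count() to collect cuboid rows" pair marks the merge job falling back from Spark's task metrics to an explicit count over the written cuboid, which is why a separate Spark stage runs between repartition rounds. A minimal sketch of that fallback pattern, using hypothetical names (metricsRowCount and cuboidDf are not Kylin identifiers):

// Hedged sketch of the fallback the log describes: prefer the metrics value
// when present, otherwise pay for a full count() job over the cuboid data.
import org.apache.spark.sql.DataFrame

def cuboidRowCount(metricsRowCount: Option[Long], cuboidDf: DataFrame): Long =
  metricsRowCount.getOrElse(cuboidDf.count())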
2021-06-08 15:43:36,044 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:43:36,044 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[543] repartition to 3
2021-06-08 15:43:36,225 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:43:36,225 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:43:44,932 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/543_temp.
2021-06-08 15:43:44,932 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 8888 ms.
2021-06-08 15:43:44,951 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:43:44,952 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:43:46,015 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:43:46,082 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:43:46,083 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:44:34,148 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:44:34,148 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:44:39,657 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:44:39,658 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[927] has 655663 row, 1409871398 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 15:44:39,672 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[927] has 655663 row, 1409871398 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 15:44:39,672 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:44:39,672 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[927] repartition to 6
2021-06-08 15:44:39,901 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:44:39,901 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:44:52,424 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/927_temp.
2021-06-08 15:44:52,424 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12752 ms.
2021-06-08 15:44:52,445 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:44:52,446 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:44:53,805 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
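"File Output Committer Algorithm version is 1" means each of these rewrites commits by renaming task output into place serially at job commit, which adds HDFS rename latency to every repartition round. Algorithm version 2 lets tasks rename directly into the destination at task commit; if the trade-off is acceptable (v2 can leave partial output visible after a mid-commit failure), it can be set through Spark's Hadoop conf passthrough. The sketch below is an optional tuning idea, not something this job did.

// Hedged sketch: opting into committer algorithm v2 via Spark's
// spark.hadoop.* passthrough for Hadoop configuration keys.
import org.apache.spark.SparkConf

val committerConf = new SparkConf()
  .set("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")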
2021-06-08 15:44:53,871 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:44:53,871 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:45:30,395 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:45:30,395 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:45:35,399 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:45:35,400 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[686] has 115549 row, 671167303 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 15:45:35,414 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[686] has 115549 row, 671167303 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 15:45:35,414 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:45:35,414 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[686] repartition to 4
2021-06-08 15:45:35,606 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:45:35,606 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:45:44,402 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/686_temp.
2021-06-08 15:45:44,402 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 8988 ms.
2021-06-08 15:45:44,421 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:45:44,422 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:45:45,270 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:45:45,333 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:45:45,333 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:46:02,874 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:46:02,874 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:46:07,160 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:46:07,161 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[558] has 65187 row, 457830086 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 15:46:07,177 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[558] has 65187 row, 457830086 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 15:46:07,177 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:46:07,177 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[558] repartition to 3
2021-06-08 15:46:07,364 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:46:07,364 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:46:13,787 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/558_temp.
2021-06-08 15:46:13,787 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6610 ms.
2021-06-08 15:46:13,806 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:46:13,807 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:46:14,875 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:46:14,944 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:46:14,945 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:46:29,161 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 70 for reason Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:46:29,161 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 70 on umetrip19-hdp2.6-119.travelsky.com: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:46:29,161 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 347.0 in stage 316.0 (TID 20957, umetrip19-hdp2.6-119.travelsky.com, executor 70): ExecutorLostFailure (executor 70 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:46:29,161 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 56.0 in stage 316.0 (TID 20914, umetrip19-hdp2.6-119.travelsky.com, executor 70): ExecutorLostFailure (executor 70 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:46:31,014 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.119:47595
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 15:47:01,124 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:47:01,124 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:47:06,453 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:47:06,455 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[942] has 444713 row, 1207549601 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:47:06,475 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[942] has 444713 row, 1207549601 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:47:06,475 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:47:06,475 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[942] repartition to 5
2021-06-08 15:47:06,670 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:47:06,670 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:47:55,422 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/942_temp.
2021-06-08 15:47:55,422 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 48947 ms.
2021-06-08 15:47:55,440 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:47:55,441 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:47:56,517 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:47:56,582 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:47:56,582 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:48:32,371 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:48:32,371 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:48:37,396 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:48:37,397 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[814] has 265469 row, 957316192 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 15:48:37,438 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[814] has 265469 row, 957316192 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 15:48:37,438 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:48:37,438 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[814] repartition to 5
2021-06-08 15:48:37,619 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:48:37,619 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:48:43,485 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/814_temp.
2021-06-08 15:48:43,485 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6047 ms.
2021-06-08 15:48:43,504 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:48:43,504 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:48:44,466 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:48:44,537 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:48:44,537 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:49:05,657 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:49:05,658 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:49:10,349 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:49:10,350 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[815] has 296248 row, 959664020 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 15:49:10,366 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[815] has 296248 row, 959664020 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 15:49:10,366 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:49:10,366 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[815] repartition to 5
2021-06-08 15:49:10,559 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:49:10,559 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:49:18,328 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/815_temp.
2021-06-08 15:49:18,328 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7962 ms.
2021-06-08 15:49:18,346 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:49:18,347 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:49:19,300 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:49:19,367 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:49:19,367 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:49:44,588 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:49:44,588 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:49:50,488 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:49:50,490 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[943] has 490637 row, 1209003808 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 15:49:50,505 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[943] has 490637 row, 1209003808 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 15:49:50,505 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:49:50,505 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[943] repartition to 6
2021-06-08 15:49:50,716 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:49:50,716 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:50:08,010 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 63 for reason Container killed by YARN for exceeding memory limits. 38.2 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:50:08,011 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 63 on umetrip30-hdp2.6-130.travelsky.com: Container killed by YARN for exceeding memory limits. 38.2 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:50:08,011 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 3.0 in stage 377.0 (TID 26154, umetrip30-hdp2.6-130.travelsky.com, executor 63): ExecutorLostFailure (executor 63 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 38.2 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:50:09,820 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.130:54142
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 15:50:17,504 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/943_temp.
2021-06-08 15:50:17,504 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 26999 ms.
2021-06-08 15:50:17,523 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:50:17,523 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:50:18,445 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:50:18,508 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:50:18,509 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:50:31,207 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:50:31,207 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:50:36,930 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:50:36,931 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[559] has 76496 row, 459317743 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 15:50:36,947 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[559] has 76496 row, 459317743 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 15:50:36,947 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:50:36,947 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[559] repartition to 3
2021-06-08 15:50:37,128 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:50:37,128 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:50:58,250 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/559_temp.
2021-06-08 15:50:58,250 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 21303 ms.
2021-06-08 15:50:58,266 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:50:58,266 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:50:59,954 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:51:00,017 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:51:00,017 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:51:05,113 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 75 for reason Container killed by YARN for exceeding memory limits. 36.5 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:51:05,113 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 75 on umetrip16-hdp2.6-116.travelsky.com: Container killed by YARN for exceeding memory limits. 36.5 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:51:05,113 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 24.0 in stage 406.0 (TID 27241, umetrip16-hdp2.6-116.travelsky.com, executor 75): ExecutorLostFailure (executor 75 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.5 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:51:05,113 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 11.0 in stage 406.0 (TID 27052, umetrip16-hdp2.6-116.travelsky.com, executor 75): ExecutorLostFailure (executor 75 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.5 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:51:05,113 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 88.0 in stage 406.0 (TID 27264, umetrip16-hdp2.6-116.travelsky.com, executor 75): ExecutorLostFailure (executor 75 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.5 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:51:05,114 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 172.0 in stage 406.0 (TID 27162, umetrip16-hdp2.6-116.travelsky.com, executor 75): ExecutorLostFailure (executor 75 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.5 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:51:08,570 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.116:41768
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 15:51:26,298 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:51:26,298 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:51:32,797 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:51:32,801 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[687] has 133257 row, 673308069 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 15:51:32,817 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[687] has 133257 row, 673308069 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 15:51:32,817 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:51:32,817 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[687] repartition to 4
2021-06-08 15:51:33,930 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:51:33,930 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:52:04,383 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/687_temp.
2021-06-08 15:52:04,383 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 31566 ms.
2021-06-08 15:52:04,401 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:52:04,402 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:52:05,374 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:52:05,448 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:52:05,448 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:52:42,715 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:52:42,715 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:52:49,491 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:52:49,492 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[958] has 703546 row, 1419921055 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 15:52:49,508 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[958] has 703546 row, 1419921055 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 15:52:49,508 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:52:49,508 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[958] repartition to 6
2021-06-08 15:52:49,694 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:52:49,694 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:52:59,353 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/958_temp.
2021-06-08 15:52:59,353 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9845 ms.
2021-06-08 15:52:59,371 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:52:59,372 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:53:00,230 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:53:00,296 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:53:00,296 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:53:15,526 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:53:15,526 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:53:20,817 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:53:20,818 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[574] has 110993 row, 630685059 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 15:53:20,834 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[574] has 110993 row, 630685059 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 15:53:20,834 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:53:20,834 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[574] repartition to 3
2021-06-08 15:53:21,017 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:53:21,018 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:53:26,955 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/574_temp.
2021-06-08 15:53:26,955 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6121 ms.
2021-06-08 15:53:26,973 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:53:26,974 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:53:27,875 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:53:27,978 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:53:27,978 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:53:54,736 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:53:54,736 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:53:59,362 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:53:59,363 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[830] has 427268 row, 1173133782 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:53:59,378 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[830] has 427268 row, 1173133782 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:53:59,379 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:53:59,379 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[830] repartition to 5
2021-06-08 15:53:59,571 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:53:59,572 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:54:05,842 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/830_temp.
2021-06-08 15:54:05,842 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6463 ms.
2021-06-08 15:54:05,861 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:54:05,861 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:54:06,809 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:54:06,875 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:54:06,876 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:54:37,792 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:54:37,792 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:54:42,648 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:54:42,650 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[702] has 193983 row, 880562318 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 15:54:42,665 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[702] has 193983 row, 880562318 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 15:54:42,665 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:54:42,665 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[702] repartition to 4
2021-06-08 15:54:42,853 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:54:42,853 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:54:50,575 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/702_temp.
2021-06-08 15:54:50,575 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7910 ms.
2021-06-08 15:54:50,593 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:54:50,594 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:54:51,459 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:54:51,524 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:54:51,525 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:55:09,080 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:55:09,080 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:55:14,258 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:55:14,259 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[575] has 127318 row, 633025827 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 15:55:14,273 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[575] has 127318 row, 633025827 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 15:55:14,273 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:55:14,273 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[575] repartition to 3
2021-06-08 15:55:14,450 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:55:14,450 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:55:33,059 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/575_temp.
2021-06-08 15:55:33,059 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 18786 ms.
2021-06-08 15:55:33,077 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:55:33,078 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:55:34,000 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:55:34,067 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:55:34,067 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:56:01,726 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:56:01,726 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:56:06,534 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:56:06,535 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[703] has 219088 row, 882924859 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 15:56:06,551 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[703] has 219088 row, 882924859 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 15:56:06,551 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:56:06,551 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[703] repartition to 4
2021-06-08 15:56:06,723 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:56:06,723 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:56:14,362 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/703_temp.
2021-06-08 15:56:14,362 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7811 ms.
2021-06-08 15:56:14,381 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:56:14,381 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:56:15,217 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:56:15,287 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:56:15,287 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:56:42,934 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:56:42,934 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:56:49,477 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:56:49,495 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[959] has 762117 row, 1422096846 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 15:56:49,515 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[959] has 762117 row, 1422096846 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 15:56:49,515 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:56:49,515 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[959] repartition to 6
2021-06-08 15:56:50,006 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:56:50,006 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:56:55,801 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/959_temp.
2021-06-08 15:56:55,801 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6286 ms.
2021-06-08 15:56:55,818 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:56:55,819 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:56:56,787 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:56:56,851 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:56:56,851 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:57:27,433 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:57:27,433 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:57:32,537 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:57:32,538 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[831] has 467671 row, 1175255830 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:57:32,552 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[831] has 467671 row, 1175255830 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 15:57:32,552 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:57:32,552 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[831] repartition to 5
2021-06-08 15:57:32,731 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:57:32,731 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:57:38,650 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/831_temp.
2021-06-08 15:57:38,650 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6098 ms.
2021-06-08 15:57:38,668 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:57:38,669 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:57:39,643 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:57:39,706 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:57:39,706 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:58:12,273 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:58:12,273 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:58:17,183 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:58:17,184 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[590] has 54998 row, 526792724 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 15:58:17,198 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[590] has 54998 row, 526792724 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 15:58:17,198 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:58:17,198 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[590] repartition to 3
2021-06-08 15:58:17,376 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:58:17,376 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:58:32,193 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/590_temp.
2021-06-08 15:58:32,193 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 14995 ms.
2021-06-08 15:58:32,208 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:58:32,209 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:58:33,119 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:58:33,186 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:58:47,415 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:58:47,415 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:58:52,209 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:58:52,211 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[974] has 370079 row, 1242525621 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 15:58:52,226 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[974] has 370079 row, 1242525621 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 15:58:52,226 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:58:52,226 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[974] repartition to 6
2021-06-08 15:58:52,397 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:59:00,758 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/974_temp.
2021-06-08 15:59:00,758 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 8532 ms.
2021-06-08 15:59:00,776 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:59:00,777 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:59:01,681 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:59:01,751 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:59:20,669 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 15:59:20,669 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 15:59:25,157 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:59:25,158 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[846] has 216822 row, 1001598549 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 15:59:25,172 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[846] has 216822 row, 1001598549 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
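[editor's note] Every cycle opens with the same pair: JobMetricsUtils fails to pull the output-row metric from the write stage, and CubeMergeJob falls back to counting the written cuboid directly, which costs an extra Spark job per cuboid and accounts for much of the 30-60 s between cycles. A minimal sketch of that fallback pattern; readOutputRowsMetric is a hypothetical stand-in for the metric lookup, not Kylin's actual API:

```scala
import org.apache.spark.sql.SparkSession

// Fallback matching "Collect output rows failed" followed by
// "use count() to collect cuboid rows": prefer the cheap stage metric,
// fall back to a full count() job when it is unavailable.
def cuboidRowCount(spark: SparkSession, cuboidPath: String,
                   readOutputRowsMetric: () => Option[Long]): Long =
  readOutputRowsMetric() match {
    case Some(rows) => rows // metric collected from the write stage listener
    case None       => spark.read.parquet(cuboidPath).count() // extra job per cuboid
  }
```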
2021-06-08 15:59:25,172 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 15:59:25,172 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[846] repartition to 5
2021-06-08 15:59:25,344 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:59:29,961 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/846_temp.
2021-06-08 15:59:29,961 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4789 ms.
2021-06-08 15:59:29,976 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 15:59:29,977 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 15:59:30,784 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 15:59:30,845 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 15:59:53,506 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 28 for reason Container killed by YARN for exceeding memory limits. 36.6 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:59:53,506 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 28 on umetrip39-hdp2.6-139.travelsky.com: Container killed by YARN for exceeding memory limits. 36.6 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 15:59:53,506 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 12.0 in stage 622.0 (TID 43049, umetrip39-hdp2.6-139.travelsky.com, executor 28): ExecutorLostFailure (executor 28 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.6 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
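[editor's note] The executor losses that begin here all carry the same reason: YARN kills containers whose physical footprint (heap plus off-heap buffers for Netty, Parquet, etc.) exceeds the 36 GB allocation. The log message itself names the remedies. A hedged example of applying the first one, assuming Kylin's kylin.engine.spark-conf.* passthrough to the job's Spark configuration; the 8g value is illustrative, not derived from this log:

```properties
# kylin.properties - forwarded to the merge job's Spark conf (illustrative values).
# Raise off-heap headroom so total container usage stays under the YARN limit:
kylin.engine.spark-conf.spark.executor.memoryOverhead=8g
# Legacy key on Spark 2.x, as named in the log message itself:
kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=8192
```

The other option the message mentions, disabling yarn.nodemanager.vmem-check-enabled (per YARN-4714), addresses virtual-memory kills; these kills are on physical memory, so raising the overhead is the more direct fix.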
2021-06-08 15:59:55,399 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.139:24326
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:00:10,975 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 61 for reason Container killed by YARN for exceeding memory limits. 37.2 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:00:10,975 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 61 on r4200g1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 37.2 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:00:10,975 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 12.1 in stage 622.0 (TID 43145, r4200g1-app.travelsky.com, executor 61): ExecutorLostFailure (executor 61 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 37.2 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:00:25,295 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:00:25,295 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:00:30,656 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:00:30,657 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[718] has 98289 row, 724832821 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:00:30,672 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[718] has 98289 row, 724832821 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:00:30,672 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:00:30,672 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[718] repartition to 4
2021-06-08 16:00:30,842 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:00:41,988 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/718_temp.
2021-06-08 16:00:41,988 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 11316 ms.
2021-06-08 16:00:42,005 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:00:42,006 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:00:42,976 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:00:43,047 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:01:03,865 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 77 for reason Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:01:03,865 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 77 on umetrip29-hdp2.6-129.travelsky.com: Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:01:03,865 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 210.0 in stage 640.0 (TID 44109, umetrip29-hdp2.6-129.travelsky.com, executor 77): ExecutorLostFailure (executor 77 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:01:03,866 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 37.0 in stage 640.0 (TID 44010, umetrip29-hdp2.6-129.travelsky.com, executor 77): ExecutorLostFailure (executor 77 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:01:06,285 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.129:8138
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:21,871 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 55 for reason Container killed by YARN for exceeding memory limits. 36.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:01:21,871 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 55 on umetrip24-hdp2.6-124.travelsky.com: Container killed by YARN for exceeding memory limits. 36.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:01:21,871 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 210.1 in stage 640.0 (TID 44218, umetrip24-hdp2.6-124.travelsky.com, executor 55): ExecutorLostFailure (executor 55 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:01:21,876 WARN [pool-1-thread-1] spark.ExecutorAllocationManager : Attempted to mark unknown executor 55 idle
2021-06-08 16:01:24,584 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.124:45726
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:49,120 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:01:49,120 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:01:50,710 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.115:15946
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:50,710 ERROR [pool-1-thread-1] client.TransportResponseHandler : Still have 1 requests outstanding when connection from /10.238.6.115:15946 is closed
2021-06-08 16:01:50,728 WARN [pool-1-thread-1] storage.BlockManagerMasterEndpoint : Error trying to remove broadcast 1353 from block manager BlockManagerId(60, r4200h1-app.travelsky.com, 12888, None)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:51,442 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.115:15948
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:51,442 ERROR [pool-1-thread-1] client.TransportResponseHandler : Still have 1 requests outstanding when connection from /10.238.6.115:15948 is closed
2021-06-08 16:01:51,450 WARN [pool-1-thread-1] storage.BlockManagerMasterEndpoint : Error trying to remove broadcast 1353 from block manager BlockManagerId(58, r4200h1-app.travelsky.com, 31359, None)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:53,210 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.106:41182
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:53,211 ERROR [pool-1-thread-1] client.TransportResponseHandler : Still have 1 requests outstanding when connection from /10.238.6.106:41182 is closed
2021-06-08 16:01:53,212 WARN [pool-1-thread-1] storage.BlockManagerMasterEndpoint : Error trying to remove broadcast 1353 from block manager BlockManagerId(31, r4200c2-app.travelsky.com, 5225, None)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:54,329 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.118:5686
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:54,329 ERROR [pool-1-thread-1] client.TransportResponseHandler : Still have 1 requests outstanding when connection from /10.5.145.118:5686 is closed
2021-06-08 16:01:54,738 WARN [pool-1-thread-1] storage.BlockManagerMasterEndpoint : Error trying to remove broadcast 1352 from block manager BlockManagerId(68, umetrip18-hdp2.6-118.travelsky.com, 25818, None)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:56,044 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.106:49392
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:01:56,044 ERROR [pool-1-thread-1] client.TransportResponseHandler : Still have 1 requests outstanding when connection from /10.238.6.106:49392 is closed
2021-06-08 16:01:56,056 WARN [pool-1-thread-1] storage.BlockManagerMasterEndpoint : Error trying to remove broadcast 1352 from block manager BlockManagerId(80, r4200c2-app.travelsky.com, 21536, None)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:02:14,004 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:02:14,830 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[719] has 117166 row, 726346591 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:02:14,845 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[719] has 117166 row, 726346591 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:02:14,845 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:02:14,845 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[719] repartition to 4
2021-06-08 16:02:19,071 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:02:34,366 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/719_temp.
2021-06-08 16:02:34,366 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 19521 ms.
2021-06-08 16:02:34,381 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:02:34,382 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:02:35,275 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:02:35,341 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:03:35,702 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:03:35,702 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:03:51,075 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:03:51,076 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[975] has 416918 row, 1244500037 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:03:51,090 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[975] has 416918 row, 1244500037 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:03:51,090 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:03:51,090 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[975] repartition to 6
2021-06-08 16:03:51,257 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:04:00,866 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/975_temp.
2021-06-08 16:04:00,866 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9776 ms.
2021-06-08 16:04:00,884 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:04:00,885 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:04:01,949 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:04:02,014 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:04:20,153 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:04:20,153 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:04:25,202 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:04:25,203 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[591] has 67473 row, 528609220 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 16:04:25,217 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[591] has 67473 row, 528609220 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 16:04:25,217 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:04:25,217 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[591] repartition to 3
2021-06-08 16:04:25,386 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:04:32,662 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/591_temp.
2021-06-08 16:04:32,662 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7445 ms.
2021-06-08 16:04:32,680 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:04:32,681 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:04:33,691 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:04:33,754 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:04:54,604 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:04:54,604 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:05:00,550 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:05:00,561 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[847] has 248441 row, 1003337403 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:05:00,589 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[847] has 248441 row, 1003337403 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:05:00,589 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:05:00,589 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[847] repartition to 5
2021-06-08 16:05:01,610 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:05:12,786 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/847_temp.
2021-06-08 16:05:12,786 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12197 ms.
2021-06-08 16:05:12,804 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:05:12,805 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:05:13,830 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:05:13,894 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:05:50,721 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:05:50,721 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:05:55,750 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:05:55,751 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[606] has 98387 row, 707200445 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:05:55,765 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[606] has 98387 row, 707200445 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:05:55,765 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:05:55,765 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[606] repartition to 4
2021-06-08 16:05:55,948 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:06:10,306 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/606_temp.
2021-06-08 16:06:10,306 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 14541 ms.
2021-06-08 16:06:10,324 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:06:10,325 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:06:11,369 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:06:11,431 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:06:33,618 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:06:33,618 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:06:38,234 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:06:38,235 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[862] has 370029 row, 1219783068 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:06:38,249 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[862] has 370029 row, 1219783068 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:06:38,249 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:06:38,249 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[862] repartition to 6
2021-06-08 16:06:38,431 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:06:44,126 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/862_temp.
2021-06-08 16:06:44,126 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 5877 ms.
2021-06-08 16:06:44,141 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:06:44,142 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:06:45,242 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:06:45,305 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:07:19,775 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:07:19,775 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:07:26,991 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:07:27,003 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[734] has 173143 row, 938863721 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:07:27,033 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[734] has 173143 row, 938863721 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:07:27,033 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:07:27,033 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[734] repartition to 4
2021-06-08 16:07:27,979 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:07:36,095 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/734_temp.
2021-06-08 16:07:36,095 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9062 ms.
2021-06-08 16:07:36,114 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:07:36,115 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:07:37,076 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:07:37,142 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:07:58,082 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:07:58,082 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:08:03,383 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:08:03,384 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[990] has 619664 row, 1449784559 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:08:03,398 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[990] has 619664 row, 1449784559 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:08:03,398 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:08:03,398 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[990] repartition to 6
2021-06-08 16:08:03,606 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:08:11,318 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/990_temp.
2021-06-08 16:08:11,318 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7920 ms.
2021-06-08 16:08:11,336 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:08:11,337 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:08:12,387 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:08:12,450 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:08:27,754 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:08:27,755 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:08:33,539 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:08:33,540 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[607] has 117301 row, 709319554 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:08:33,554 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[607] has 117301 row, 709319554 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:08:33,554 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:08:33,554 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[607] repartition to 4
2021-06-08 16:08:33,735 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:08:38,308 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/607_temp.
2021-06-08 16:08:38,308 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4754 ms.
2021-06-08 16:08:38,327 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:08:38,327 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:08:39,197 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:08:39,258 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:09:12,089 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:09:12,089 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:09:17,689 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:09:17,690 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[863] has 412634 row, 1221338892 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:09:17,704 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[863] has 412634 row, 1221338892 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:09:17,704 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:09:17,704 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[863] repartition to 6
2021-06-08 16:09:17,876 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:09:23,393 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/863_temp.
2021-06-08 16:09:23,393 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 5689 ms.
2021-06-08 16:09:23,413 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:09:23,414 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:09:25,507 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:09:25,599 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:09:42,899 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:09:42,899 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:09:48,353 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:09:48,354 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[991] has 680218 row, 1451269487 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:09:48,368 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[991] has 680218 row, 1451269487 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:09:48,368 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:09:48,368 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[991] repartition to 6
2021-06-08 16:09:48,537 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:09:52,675 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/991_temp.
2021-06-08 16:09:52,675 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4307 ms.
2021-06-08 16:09:52,691 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:09:52,692 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:09:53,489 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
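[editor's note] The alternating "Wait to take job result" / "Take job result successful" pairs throughout come from BuildLayoutWithUpdate, which hands each cuboid's repartition-and-rewrite to a worker pool and then blocks taking completed results one at a time. A minimal sketch of that producer/consumer shape, using a plain JDK completion service rather than Kylin's actual class:

```scala
import java.util.concurrent.{Callable, ExecutorCompletionService, Executors}

// Submit-then-take loop suggested by the BuildLayoutWithUpdate log pairs.
val pool = Executors.newFixedThreadPool(1) // log shows a single pool-1-thread-1
val completion = new ExecutorCompletionService[String](pool)

val cuboids = Seq(959L, 831L, 590L) // cuboid ids seen above, for illustration
cuboids.foreach { id =>
  completion.submit(new Callable[String] {
    // Stand-in for repartitioning and rewriting one cuboid layout.
    override def call(): String = s"cuboid $id"
  })
}
cuboids.foreach { _ =>
  val result = completion.take().get() // blocks: "Wait to take job result"
  println(s"Take job result successful: $result")
}
pool.shutdown()
```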
2021-06-08 16:09:53,551 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:10:13,644 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:10:13,644 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:10:18,163 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:10:18,164 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[735] has 200644 row, 940319665 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:10:18,178 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[735] has 200644 row, 940319665 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:10:18,178 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:10:18,178 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[735] repartition to 5
2021-06-08 16:10:18,349 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:10:23,185 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/735_temp.
2021-06-08 16:10:23,185 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 5007 ms.
2021-06-08 16:10:23,200 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:10:23,201 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:10:24,175 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:10:24,237 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:10:52,131 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:10:52,131 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:10:57,185 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:10:57,186 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[750] has 133363 row, 743698791 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:10:57,200 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[750] has 133363 row, 743698791 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:10:57,200 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:10:57,200 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[750] repartition to 4
2021-06-08 16:10:57,375 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:10:57,375 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:11:09,873 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/750_temp.
2021-06-08 16:11:09,873 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12673 ms.
2021-06-08 16:11:09,888 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:11:09,889 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:11:10,794 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:11:10,907 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:11:10,907 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:11:39,287 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:11:39,288 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:11:44,072 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:11:44,073 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[622] has 78063 row, 545939083 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:11:44,086 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[622] has 78063 row, 545939083 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:11:44,086 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:11:44,087 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[622] repartition to 3
2021-06-08 16:11:44,268 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:11:44,268 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:12:12,008 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/622_temp.
2021-06-08 16:12:12,008 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 27922 ms.
2021-06-08 16:12:12,025 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:12:12,026 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:12:12,945 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
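Every cycle above repeats the same pair: "Collect output rows failed" from JobMetricsUtils, followed by the CubeMergeJob warning that it will "use count() to collect cuboid rows". When the output-row metric cannot be read back from the Spark execution, the job falls back to a full Dataset.count() over the just-written cuboid, which launches an extra Spark job per cuboid and accounts for much of the 20-60 s gap between "Wait to take job result" and the next "Before repartition" line. A hedged sketch of that fallback shape (illustrative wiring, not Kylin's actual code):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    final class RowCountFallback {
        // metricRows <= 0 stands in for "Collect output rows failed".
        static long cuboidRows(Dataset<Row> cuboid, long metricRows) {
            return metricRows > 0 ? metricRows : cuboid.count(); // extra Spark job
        }
    }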
2021-06-08 16:12:13,005 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:12:13,005 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:13:09,154 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:13:09,154 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:13:40,159 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:13:40,171 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1006] has 461875 row, 1257730828 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:13:40,197 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1006] has 461875 row, 1257730828 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:13:40,197 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:13:40,197 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[1006] repartition to 6
2021-06-08 16:13:40,814 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:13:40,814 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:14:15,544 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 95 for reason Container killed by YARN for exceeding memory limits. 36.5 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:14:15,544 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 95 on r4200i1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 36.5 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:14:15,544 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 5.0 in stage 899.0 (TID 63271, r4200i1-app.travelsky.com, executor 95): ExecutorLostFailure (executor 95 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.5 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:14:16,997 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.117:25347
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:14:30,550 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 74 for reason Container killed by YARN for exceeding memory limits. 37.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:14:30,550 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 74 on umetrip04-hdp2.6-104.travelsky.com: Container killed by YARN for exceeding memory limits. 37.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:14:30,550 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 5.1 in stage 899.0 (TID 63272, umetrip04-hdp2.6-104.travelsky.com, executor 74): ExecutorLostFailure (executor 74 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 37.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:14:56,502 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/1006_temp.
2021-06-08 16:14:56,502 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 76305 ms.
2021-06-08 16:14:56,518 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:14:56,518 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:14:57,391 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
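These are the first of many identical kills in this run: YARN is enforcing the container's physical-memory cap (executor JVM heap plus off-heap overhead against a 36 GB container), and the "Connection reset by peer" warnings that follow each kill are simply the driver side of the dead executor's socket dropping. Because these are physical-memory overruns, the relevant lever from the log's own advice is the overhead, not the vmem check that YARN-4714 concerns. In Kylin 4, Spark settings pass through the kylin.engine.spark-conf. prefix in kylin.properties; a hedged example, with values that are illustrative only and must be sized against this cluster's 36 GB containers (possibly alongside a smaller spark.executor.memory):

    # kylin.properties -- illustrative values, not a drop-in fix
    kylin.engine.spark-conf.spark.executor.memoryOverhead=6g
    # legacy pre-Spark-2.3 spelling named in the log message:
    # kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=6144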
2021-06-08 16:14:57,451 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:14:57,451 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:15:45,042 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:15:45,042 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:15:55,536 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:15:55,537 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[878] has 277848 row, 1018107081 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:15:55,551 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[878] has 277848 row, 1018107081 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:15:55,551 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:15:55,551 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[878] repartition to 5
2021-06-08 16:15:55,774 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:15:55,774 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:16:20,569 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/878_temp.
2021-06-08 16:16:20,569 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 25018 ms.
2021-06-08 16:16:20,589 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:16:20,590 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:16:21,904 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:16:21,972 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:16:21,972 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:16:50,221 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:16:50,221 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:16:55,665 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:16:55,666 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[879] has 310852 row, 1019613740 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:16:55,680 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[879] has 310852 row, 1019613740 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:16:55,680 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:16:55,680 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[879] repartition to 5
2021-06-08 16:16:55,864 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:16:55,864 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:17:03,440 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/879_temp.
2021-06-08 16:17:03,440 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7760 ms.
2021-06-08 16:17:03,457 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:17:03,458 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:17:04,663 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:17:04,731 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:17:04,731 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:18:02,362 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:18:02,362 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:18:09,461 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:18:09,462 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[751] has 153537 row, 745625809 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:18:09,477 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[751] has 153537 row, 745625809 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:18:09,477 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:18:09,477 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[751] repartition to 4
2021-06-08 16:18:09,662 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:18:09,662 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:18:32,474 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/751_temp.
2021-06-08 16:18:32,474 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 22997 ms.
2021-06-08 16:18:32,511 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:18:32,521 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:18:34,346 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
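The paired "File Output Committer Algorithm version is 1" lines mark each Parquet write. Algorithm version 1 commits task output into a temporary location and then renames files sequentially during the job commit, which is the safe default but adds per-write rename latency on HDFS; version 2 moves task output into place at task commit, at the cost of weaker guarantees if a task fails mid-commit. If the rename phase ever dominated these writes, the version could be switched through the same Kylin passthrough (illustrative, with that caveat):

    kylin.engine.spark-conf.spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2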
2021-06-08 16:18:34,447 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:18:34,447 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:19:16,785 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:19:16,785 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:19:24,721 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:19:24,721 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[623] has 91694 row, 547767913 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:19:24,735 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[623] has 91694 row, 547767913 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:19:24,735 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:19:24,735 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[623] repartition to 3
2021-06-08 16:19:24,943 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:19:24,943 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:19:37,866 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 117 on r4200j-app.travelsky.com: Container killed by YARN for exceeding memory limits. 37.4 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:19:37,866 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 117 for reason Container killed by YARN for exceeding memory limits. 37.4 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:19:37,866 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 0.0 in stage 971.0 (TID 67536, r4200j-app.travelsky.com, executor 117): ExecutorLostFailure (executor 117 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 37.4 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:19:37,866 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 2.0 in stage 971.0 (TID 67538, r4200j-app.travelsky.com, executor 117): ExecutorLostFailure (executor 117 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 37.4 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:19:37,866 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 1.0 in stage 971.0 (TID 67537, r4200j-app.travelsky.com, executor 117): ExecutorLostFailure (executor 117 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 37.4 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:20:11,335 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/623_temp.
2021-06-08 16:20:11,336 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 46601 ms.
2021-06-08 16:20:11,352 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:20:11,353 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:20:12,462 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:20:12,526 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:20:12,526 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:20:53,338 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:20:53,338 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:20:58,706 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:20:58,707 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1007] has 510101 row, 1260766444 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:20:58,723 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1007] has 510101 row, 1260766444 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:20:58,723 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:20:58,723 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[1007] repartition to 6
2021-06-08 16:20:58,919 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:20:58,919 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:21:07,089 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/1007_temp.
2021-06-08 16:21:07,089 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 8366 ms.
2021-06-08 16:21:07,107 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:21:07,108 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:21:08,460 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:21:08,526 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:21:08,526 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:21:39,389 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:21:39,389 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:21:48,448 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:21:48,449 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[894] has 447648 row, 1229642205 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:21:48,464 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[894] has 447648 row, 1229642205 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:21:48,464 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:21:48,464 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[894] repartition to 6
2021-06-08 16:21:48,638 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:21:48,638 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:22:01,150 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/894_temp.
2021-06-08 16:22:01,150 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12686 ms.
2021-06-08 16:22:01,169 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:22:01,175 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:22:02,211 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:22:02,273 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:22:02,273 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:22:40,797 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:22:40,797 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:22:46,322 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:22:46,323 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[766] has 222884 row, 950962428 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:22:46,338 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[766] has 222884 row, 950962428 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:22:46,338 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:22:46,338 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[766] repartition to 5
2021-06-08 16:22:46,508 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:22:46,508 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:23:13,266 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/766_temp.
2021-06-08 16:23:13,266 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 26928 ms.
2021-06-08 16:23:13,284 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:23:13,284 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:23:14,468 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:23:14,528 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:23:14,529 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:23:54,185 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:23:54,185 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:23:59,469 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:23:59,470 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[638] has 132440 row, 720252660 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:23:59,486 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[638] has 132440 row, 720252660 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:23:59,486 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:23:59,486 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[638] repartition to 4
2021-06-08 16:23:59,660 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:23:59,660 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:24:23,123 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 103 for reason Container killed by YARN for exceeding memory limits. 38.6 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:24:23,123 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 103 on umetrip24-hdp2.6-124.travelsky.com: Container killed by YARN for exceeding memory limits. 38.6 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:24:23,124 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 1.0 in stage 1043.0 (TID 73076, umetrip24-hdp2.6-124.travelsky.com, executor 103): ExecutorLostFailure (executor 103 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 38.6 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:24:24,401 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.124:3133
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:24:39,179 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/638_temp.
2021-06-08 16:24:39,179 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 39693 ms.
2021-06-08 16:24:39,198 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:24:39,199 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:24:40,408 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:24:40,476 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:24:40,476 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:25:15,925 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:25:15,925 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:25:21,834 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:25:21,836 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1022] has 731258 row, 1459694052 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:25:21,850 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1022] has 731258 row, 1459694052 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:25:21,850 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:25:21,850 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[1022] repartition to 6
2021-06-08 16:25:22,041 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:25:22,041 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:25:30,633 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/1022_temp.
2021-06-08 16:25:30,633 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 8783 ms.
2021-06-08 16:25:30,669 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:25:30,669 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:25:31,606 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:25:31,674 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:25:31,674 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:26:14,698 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:26:14,698 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:26:20,889 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:26:20,890 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[895] has 491632 row, 1231656839 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:26:20,905 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[895] has 491632 row, 1231656839 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 16:26:20,905 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:26:20,905 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[895] repartition to 6
2021-06-08 16:26:21,076 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:26:21,076 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:26:34,463 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/895_temp.
2021-06-08 16:26:34,463 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 13558 ms.
2021-06-08 16:26:34,478 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:26:34,479 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:26:35,416 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:26:35,494 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:26:35,494 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:26:53,900 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:26:53,900 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:26:59,959 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:26:59,960 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1023] has 793169 row, 1461932635 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:26:59,976 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1023] has 793169 row, 1461932635 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:26:59,976 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:26:59,976 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[1023] repartition to 6
2021-06-08 16:27:00,162 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:27:00,162 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:27:11,018 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/1023_temp.
2021-06-08 16:27:11,018 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 11042 ms.
2021-06-08 16:27:11,035 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:27:11,036 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:27:12,810 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:27:12,887 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:27:12,887 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:27:34,230 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 98 for reason Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:34,230 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 98 on umetrip29-hdp2.6-129.travelsky.com: Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:34,231 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 340.0 in stage 1108.0 (TID 78541, umetrip29-hdp2.6-129.travelsky.com, executor 98): ExecutorLostFailure (executor 98 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:34,232 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 111.0 in stage 1108.0 (TID 78436, umetrip29-hdp2.6-129.travelsky.com, executor 98): ExecutorLostFailure (executor 98 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:38,444 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.129:22578
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:27:43,234 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 115 for reason Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:43,234 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 115 on r4200i1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:43,234 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 244.0 in stage 1108.0 (TID 78494, r4200i1-app.travelsky.com, executor 115): ExecutorLostFailure (executor 115 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:43,234 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 15.0 in stage 1108.0 (TID 78359, r4200i1-app.travelsky.com, executor 115): ExecutorLostFailure (executor 115 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 38.1 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:45,689 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.117:33183
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:27:58,240 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 116 for reason Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:58,240 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 116 on r4200i1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:58,240 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 244.1 in stage 1108.0 (TID 78729, r4200i1-app.travelsky.com, executor 116): ExecutorLostFailure (executor 116 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:27:58,240 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 15.1 in stage 1108.0 (TID 78728, r4200i1-app.travelsky.com, executor 116): ExecutorLostFailure (executor 116 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:01,242 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 99 for reason Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:01,242 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 99 on umetrip40-hdp2.6-140.travelsky.com: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:01,242 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 277.0 in stage 1108.0 (TID 78596, umetrip40-hdp2.6-140.travelsky.com, executor 99): ExecutorLostFailure (executor 99 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:01,242 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 312.0 in stage 1108.0 (TID 78627, umetrip40-hdp2.6-140.travelsky.com, executor 99): ExecutorLostFailure (executor 99 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:01,242 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 83.0 in stage 1108.0 (TID 78564, umetrip40-hdp2.6-140.travelsky.com, executor 99): ExecutorLostFailure (executor 99 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:01,243 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 48.0 in stage 1108.0 (TID 78459, umetrip40-hdp2.6-140.travelsky.com, executor 99): ExecutorLostFailure (executor 99 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 36.9 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:03,747 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.140:48876
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:28:22,455 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.130:15169
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:28:22,456 ERROR [pool-1-thread-1] client.TransportResponseHandler : Still have 1 requests outstanding when connection from /10.5.145.130:15169 is closed
2021-06-08 16:28:22,456 WARN [pool-1-thread-1] storage.BlockManagerMasterEndpoint : Error trying to remove broadcast 1245 from block manager BlockManagerId(113, umetrip30-hdp2.6-130.travelsky.com, 10536, None)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:28:25,169 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 108 for reason Container killed by YARN for exceeding memory limits. 38.7 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:25,170 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 108 on r4200h1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 38.7 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:25,170 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 244.2 in stage 1108.0 (TID 78731, r4200h1-app.travelsky.com, executor 108): ExecutorLostFailure (executor 108 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 38.7 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:25,181 WARN [pool-1-thread-1] spark.ExecutorAllocationManager : Attempted to mark unknown executor 108 idle
2021-06-08 16:28:26,014 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.115:50380
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:28:43,719 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 109 on r4200h1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:43,719 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 109 for reason Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:43,719 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 244.3 in stage 1108.0 (TID 78736, r4200h1-app.travelsky.com, executor 109): ExecutorLostFailure (executor 109 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:28:43,719 ERROR [pool-1-thread-1] scheduler.TaskSetManager : Task 244 in stage 1108.0 failed 4 times; aborting job
2021-06-08 16:28:43,722 ERROR [pool-1-thread-1] datasources.FileFormatWriter : Aborting job 4dc47d12-41cb-4ced-b977-738454ffc1d4.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 244 in stage 1108.0 failed 4 times, most recent failure: Lost task 244.3 in stage 1108.0 (TID 78736, r4200h1-app.travelsky.com, executor 109): ExecutorLostFailure (executor 109 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567) at 
org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28) at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171) at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59) at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-06-08 16:28:43,725 ERROR [pool-1-thread-1] job.BuildLayoutWithUpdate : Error occurred when run merge-cuboid-767 org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567) at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28) at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171) at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59) at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118) at 
org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 244 in stage 1108.0 failed 4 times, most recent failure: Lost task 244.3 in stage 1108.0 (TID 78736, r4200h1-app.travelsky.com, executor 109): ExecutorLostFailure (executor 109 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167) ... 34 more 2021-06-08 16:28:43,725 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 16:28:43,728 INFO [pool-1-thread-1] server.AbstractConnector : Stopped Spark@27882681{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2021-06-08 16:28:43,775 ERROR [pool-1-thread-1] application.SparkApplication : The spark job execute failed! java.lang.RuntimeException: org.apache.spark.SparkException: Job aborted. 
at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70) at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122) at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567) at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28) at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171) at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59) at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) ... 3 more Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 244 in stage 1108.0 failed 4 times, most recent failure: Lost task 244.3 in stage 1108.0 (TID 78736, r4200h1-app.travelsky.com, executor 109): ExecutorLostFailure (executor 109 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167) ... 34 more 2021-06-08 16:28:43,776 ERROR [pool-1-thread-1] application.JobMonitor : Job failed the 2 times. java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeMergeJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:92) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: org.apache.spark.SparkException: Job aborted. at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70) at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122) at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89) ... 4 more Caused by: org.apache.spark.SparkException: Job aborted. 
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567) at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28) at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171) at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59) at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) ... 3 more Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 244 in stage 1108.0 failed 4 times, most recent failure: Lost task 244.3 in stage 1108.0 (TID 78736, r4200h1-app.travelsky.com, executor 109): ExecutorLostFailure (executor 109 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167) ... 34 more 2021-06-08 16:28:43,778 INFO [pool-1-thread-1] application.SparkApplication : Executor task org.apache.kylin.engine.spark.job.CubeMergeJob with args : {"distMetaUrl":"kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta","submitter":"SYSTEM","dataRangeEnd":"1622332800000","targetModel":"cee6d39a-b052-4351-ba8a-73ddd583836e","dataRangeStart":"1619827200000","project":"user_growth","className":"org.apache.kylin.engine.spark.job.CubeMergeJob","segmentName":"20210501000000_20210530000000","parentId":"4e85bb17-9201-441f-afcb-f17827cc0d18","jobId":"4e85bb17-9201-441f-afcb-f17827cc0d18","outputMetaUrl":"kylin_metadata@jdbc,url=jdbc:mysql://10.238.2.228:6033/kylin,username=kylin,password=******,maxActive=10,maxIdle=10","segmentId":"c0d90f42-34ef-d9d1-e06f-42eee385b290","cuboidsNum":"63","cubeName":"his_msg_push_event","jobType":"MERGE","cubeId":"77dfdfc0-44df-9963-0792-3b2fcca55734","segmentIds":"c0d90f42-34ef-d9d1-e06f-42eee385b290"} 2021-06-08 16:28:43,778 INFO [pool-1-thread-1] utils.MetaDumpUtil : Ready to load KylinConfig from uri: kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta 2021-06-08 16:28:43,798 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.metadata.url.identifier : kylin_metadata 2021-06-08 16:28:43,798 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.log.spark-executor-properties-file : /opt/appdata/disk01/app/kylin/conf/spark-executor-log4j.properties 2021-06-08 16:28:43,798 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.source.provider.0 : org.apache.kylin.engine.spark.source.HiveSource 2021-06-08 16:28:43,798 INFO [pool-1-thread-1] util.TimeZoneUtils : System timezone set to GMT+8, TimeZoneId: GMT+08:00. 
2021-06-08 16:28:43,798 INFO [pool-1-thread-1] application.SparkApplication : Sleep for random seconds to avoid submitting too many spark job at the same time.
2021-06-08 16:29:38,510 WARN [pool-1-thread-1] application.SparkApplication : Error occurred when check resource. Ignore it and try to submit this job.
java.util.NoSuchElementException: spark.driver.memoryOverhead
    at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:246)
    at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.SparkConf.get(SparkConf.scala:246)
    at org.apache.spark.utils.ResourceUtils$.checkResource(ResourceUtils.scala:70)
    at org.apache.spark.utils.ResourceUtils.checkResource(ResourceUtils.scala)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:259)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89)
    at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:29:38,582 INFO [pool-1-thread-1] server.Server : jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown
2021-06-08 16:29:38,583 INFO [pool-1-thread-1] server.Server : Started @4160338ms
2021-06-08 16:29:38,583 INFO [pool-1-thread-1] server.AbstractConnector : Started ServerConnector@7e0bfb2b{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2021-06-08 16:29:38,584 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@490f8c82{/jobs,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,584 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@52fb1c5e{/jobs/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,584 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@19ae2990{/jobs/job,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,585 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@38dd4b2d{/jobs/job/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,585 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@d03bb28{/stages,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,585 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7905f3a{/stages/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,585 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6f9a14f1{/stages/stage,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,585 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3dccae0c{/stages/stage/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,586 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2d868866{/stages/pool,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,586 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@286a7b63{/stages/pool/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,586 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1768df03{/storage,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,586 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4f8caf2c{/storage/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,586 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@56d0db03{/storage/rdd,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,587 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2d1a13d9{/storage/rdd/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,587 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7cdb51be{/environment,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,587 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@560474c6{/environment/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,587 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@65ca5c6{/executors,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,588 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7ba5ea0f{/executors/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,588 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@5dd50264{/executors/threadDump,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,588 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2882ff5c{/executors/threadDump/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,589 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4b931d1f{/static,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,589 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1fb273f0{/,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,589 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6a018440{/api,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,589 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6ee7a11{/jobs/job/kill,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,590 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@acfa4cf{/stages/stage/kill,null,AVAILABLE,@Spark}
2021-06-08 16:29:38,633 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at umetrip11-hdp2.6-111.travelsky.com/10.5.145.111:8050
2021-06-08 16:29:38,640 WARN [pool-1-thread-1] yarn.Client : Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
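The NoSuchElementException above is noisy but non-fatal: SparkApplication logs "Ignore it and try to submit this job" and carries on. It arises because SparkConf.get(key) with no default throws when the key is absent, and the pre-submit resource check reads spark.driver.memoryOverhead without one. A minimal Scala sketch of the failure mode and the defensive read (SparkConf and its methods are real Spark API; the object name and fallback value are made up for illustration):

    import org.apache.spark.SparkConf

    object CheckResourceSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
        // SparkConf.get(key) without a default throws NoSuchElementException
        // when the key was never set -- the WARN in the log above.
        // conf.get("spark.driver.memoryOverhead") // would throw here

        // Defensive read: fall back instead of throwing.
        val overhead = conf.getOption("spark.driver.memoryOverhead").getOrElse("unset")
        println(s"driver memoryOverhead = $overhead")
      }
    }

Setting spark.driver.memoryOverhead explicitly in the job's Spark conf would also silence the warning at its source.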
2021-06-08 16:29:41,988 INFO [pool-1-thread-1] impl.YarnClientImpl : Submitted application application_1617093658603_222860
2021-06-08 16:29:46,997 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7f163e8d{/metrics/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:55,581 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at umetrip11-hdp2.6-111.travelsky.com/10.5.145.111:8050
2021-06-08 16:29:55,654 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@45655367{/SQL,null,AVAILABLE,@Spark}
2021-06-08 16:29:55,654 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@573f5109{/SQL/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:55,655 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6377afb4{/SQL/execution,null,AVAILABLE,@Spark}
2021-06-08 16:29:55,655 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@5dd3640e{/SQL/execution/json,null,AVAILABLE,@Spark}
2021-06-08 16:29:55,656 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@20316c0a{/static/sql,null,AVAILABLE,@Spark}
2021-06-08 16:29:55,659 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.cube.CubeManager
2021-06-08 16:29:55,659 INFO [pool-1-thread-1] cube.CubeManager : Initializing CubeManager with config kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 16:29:55,659 INFO [pool-1-thread-1] persistence.ResourceStore : Using metadata url kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta for resource store
2021-06-08 16:29:55,675 INFO [pool-1-thread-1] persistence.HDFSResourceStore : hdfs meta path : hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 16:29:55,676 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading CubeInstance from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/cube
2021-06-08 16:29:55,682 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.cube.CubeDescManager
2021-06-08 16:29:55,682 INFO [pool-1-thread-1] cube.CubeDescManager : Initializing CubeDescManager with config kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 16:29:55,683 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading CubeDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/cube_desc
2021-06-08 16:29:55,687 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.project.ProjectManager
2021-06-08 16:29:55,687 INFO [pool-1-thread-1] project.ProjectManager : Initializing ProjectManager with metadata url kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 16:29:55,687 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ProjectInstance from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/project
2021-06-08 16:29:55,694 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 ProjectInstance(s) out of 1 resource with 0 errors
2021-06-08 16:29:55,694 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.cachesync.Broadcaster
2021-06-08 16:29:55,694 DEBUG [pool-1-thread-1] cachesync.Broadcaster : 3 nodes in the cluster: [10.5.145.128:7070, 10.238.6.117:7070, 10.238.6.118:7070]
2021-06-08 16:29:55,694 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.model.DataModelManager
2021-06-08 16:29:55,694 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.TableMetadataManager
2021-06-08 16:29:55,694 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/table
2021-06-08 16:29:55,698 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 TableDesc(s) out of 1 resource with 0 errors
2021-06-08 16:29:55,698 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableExtDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/table_exd
2021-06-08 16:29:55,701 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 TableExtDesc(s) out of 1 resource with 0 errors
2021-06-08 16:29:55,701 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ExternalFilterDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/ext_filter
2021-06-08 16:29:55,701 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 0 ExternalFilterDesc(s) out of 0 resource with 0 errors
2021-06-08 16:29:55,702 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading DataModelDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/model_desc
2021-06-08 16:29:55,705 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 DataModelDesc(s) out of 1 resource with 0 errors
2021-06-08 16:29:55,706 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeDesc(s) out of 1 resource with 0 errors
2021-06-08 16:29:55,706 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeInstance(s) out of 1 resource with 0 errors
2021-06-08 16:30:03,590 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:30:03,687 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:30:03,687 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:30:39,041 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:30:39,041 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:30:45,793 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:30:45,794 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[910] has 357294 row, 1191815546 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:30:45,809 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[910] has 357294 row, 1191815546 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:30:45,809 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:30:45,809 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[910] repartition to 5
2021-06-08 16:30:46,017 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:30:46,017 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:30:53,697 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/910_temp.
2021-06-08 16:30:53,697 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7888 ms.
2021-06-08 16:30:53,716 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:30:53,717 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:30:55,233 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:30:55,324 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:30:55,324 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:31:25,371 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:31:25,372 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:31:30,615 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:31:30,616 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[654] has 85045 row, 654703931 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:31:30,630 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[654] has 85045 row, 654703931 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:31:30,630 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:31:30,630 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[654] repartition to 3
2021-06-08 16:31:30,815 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:31:30,815 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:31:40,448 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/654_temp.
2021-06-08 16:31:40,448 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9818 ms.
2021-06-08 16:31:40,464 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:31:40,464 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:31:41,908 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
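Each Repartitioner entry above reports two candidate partition counts and a final choice, and every entry in this log is consistent with: by-size = ceil(bytes / 128 MiB) (e.g. 1191815546 bytes gives 9), by-rows staying at 1 for these row counts, and final = ceil((by-size + by-rows) / 2) (9 and 1 give 5 for cuboid[910]; 5 and 1 give 3 for cuboid[654]). A small Scala sketch of that inferred rule; the 128 MiB size target is read off the numbers here and the row target is a pure placeholder, neither is taken from Kylin's source:

    object RepartitionSketch {
      // ceil(bytes / target); a 128 MiB target reproduces every
      // "calculated by file size" count in this log.
      def partitionsBySize(bytes: Long, targetBytes: Long = 128L << 20): Int =
        math.max(1, math.ceil(bytes.toDouble / targetBytes).toInt)

      // Placeholder threshold; every cuboid here lands at 1 by row count.
      def partitionsByRows(rows: Long, targetRows: Long = 2500000L): Int =
        math.max(1, math.ceil(rows.toDouble / targetRows).toInt)

      // Inferred from the log: average the two estimates, rounding up.
      def finalPartitions(bySize: Int, byRows: Int): Int =
        math.ceil((bySize + byRows) / 2.0).toInt

      def main(args: Array[String]): Unit = {
        val (rows, bytes) = (357294L, 1191815546L) // cuboid[910] above
        println(finalPartitions(partitionsBySize(bytes), partitionsByRows(rows))) // 5
      }
    }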
2021-06-08 16:31:42,037 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:31:42,037 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:32:31,049 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:32:31,049 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:32:44,084 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:32:44,085 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[782] has 208663 row, 940445037 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:32:44,099 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[782] has 208663 row, 940445037 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:32:44,099 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:32:44,099 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[782] repartition to 5
2021-06-08 16:32:44,302 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:32:44,302 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:33:13,180 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/782_temp.
2021-06-08 16:33:13,180 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 29081 ms.
2021-06-08 16:33:13,195 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:33:13,196 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:33:14,427 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:33:14,517 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:33:14,517 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:33:33,364 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:33:33,364 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:33:34,560 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:33:34,561 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[655] has 101546 row, 656155385 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:33:34,575 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[655] has 101546 row, 656155385 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:33:34,575 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:33:34,575 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[655] repartition to 3
2021-06-08 16:33:34,749 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:33:34,750 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:33:46,751 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/655_temp.
2021-06-08 16:33:46,751 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12176 ms.
2021-06-08 16:33:46,769 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:33:46,769 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:33:48,227 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:33:48,316 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:33:48,316 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:34:06,674 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:34:06,674 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:34:12,069 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:34:12,070 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[911] has 401884 row, 1193309110 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:34:12,084 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[911] has 401884 row, 1193309110 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:34:12,084 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:34:12,085 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[911] repartition to 5
2021-06-08 16:34:12,263 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:34:12,264 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:34:18,171 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/911_temp.
2021-06-08 16:34:18,171 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6087 ms.
2021-06-08 16:34:18,189 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:34:18,190 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:34:19,456 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:34:19,544 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:34:19,544 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:34:46,850 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:34:46,850 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:34:51,611 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:34:51,612 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[783] has 238128 row, 942138295 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:34:51,626 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[783] has 238128 row, 942138295 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:34:51,626 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:34:51,626 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[783] repartition to 5
2021-06-08 16:34:51,806 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:34:51,806 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:34:56,494 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/783_temp.
2021-06-08 16:34:56,494 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4868 ms.
2021-06-08 16:34:56,512 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:34:56,513 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:34:57,903 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:34:57,996 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:34:57,996 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:35:12,771 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:35:12,771 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:35:17,432 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:35:17,433 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[527] has 56804 row, 438759812 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 16:35:17,447 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[527] has 56804 row, 438759812 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 16:35:17,447 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:35:17,447 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[527] repartition to 3
2021-06-08 16:35:17,622 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:35:17,622 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:35:21,929 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/527_temp.
2021-06-08 16:35:21,929 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4482 ms.
2021-06-08 16:35:21,947 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:35:21,948 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:35:23,214 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:35:23,305 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:35:23,306 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:35:42,047 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:35:42,048 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:35:47,925 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:35:47,926 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[926] has 598417 row, 1408564205 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:35:47,941 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[926] has 598417 row, 1408564205 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:35:47,941 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:35:47,941 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[926] repartition to 6
2021-06-08 16:35:48,106 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:35:48,106 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:35:54,396 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/926_temp.
2021-06-08 16:35:54,396 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6455 ms.
2021-06-08 16:35:54,412 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:35:54,413 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:35:55,751 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:35:55,840 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:35:55,840 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:36:11,617 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:36:11,617 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:36:16,445 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:36:16,446 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[670] has 151229 row, 864442030 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:36:16,462 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[670] has 151229 row, 864442030 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:36:16,462 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:36:16,462 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[670] repartition to 4
2021-06-08 16:36:16,636 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:36:16,636 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:36:22,032 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/670_temp.
2021-06-08 16:36:22,032 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 5570 ms.
2021-06-08 16:36:22,052 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:36:22,053 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:36:23,258 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:36:23,348 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:36:23,348 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:36:36,806 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:36:36,806 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:36:41,564 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:36:41,565 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[542] has 84023 row, 611470940 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:36:41,579 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[542] has 84023 row, 611470940 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:36:41,579 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:36:41,579 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[542] repartition to 3
2021-06-08 16:36:41,756 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:36:41,756 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:36:50,386 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/542_temp.
2021-06-08 16:36:50,387 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 8808 ms.
2021-06-08 16:36:50,402 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:36:50,403 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:36:51,787 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:36:51,881 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:36:51,881 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:37:15,290 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:37:15,290 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:37:20,753 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:37:20,754 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[798] has 356137 row, 1160117451 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:37:20,767 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[798] has 356137 row, 1160117451 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:37:20,767 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:37:20,767 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[798] repartition to 5
2021-06-08 16:37:20,942 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:37:20,942 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:37:26,025 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/798_temp.
2021-06-08 16:37:26,025 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 5258 ms.
2021-06-08 16:37:26,044 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:37:26,045 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:37:27,321 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:37:27,417 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:37:27,417 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:37:41,849 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 4 for reason Container killed by YARN for exceeding memory limits. 46.9 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:37:41,849 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 4 on umetrip30-hdp2.6-130.travelsky.com: Container killed by YARN for exceeding memory limits. 46.9 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:37:41,849 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 135.0 in stage 268.0 (TID 18395, umetrip30-hdp2.6-130.travelsky.com, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.9 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:37:41,849 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 344.0 in stage 268.0 (TID 18607, umetrip30-hdp2.6-130.travelsky.com, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.9 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:37:41,850 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 553.0 in stage 268.0 (TID 18657, umetrip30-hdp2.6-130.travelsky.com, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.9 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
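All three task losses above share one root cause: the executor's YARN container is capped at 46 GB and the process peaked just past it (46.9 GB here, up to 53.1 GB later in this log), so YARN killed the container, exactly as the embedded advice says. The cap is executor heap plus off-heap overhead, so the usual remedy is to enlarge the overhead allowance (and the total) so the cap clears the observed peak. A hedged sketch of that tuning; the sizes are illustrative, and carrying them through kylin.properties with the kylin.engine.spark-conf.* prefix is an assumption about this deployment, not something shown in the log:

// Illustrative only: widen the off-heap overhead so heap + overhead clears
// the observed peak (53.1 GB at the worst point in this log).
// In a Kylin 4 deployment these would normally be set in kylin.properties as
//   kylin.engine.spark-conf.spark.executor.memory=40g            (assumption)
//   kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=14g
import org.apache.spark.SparkConf;

public class OverheadSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .set("spark.executor.memory", "40g")                 // JVM heap
            .set("spark.yarn.executor.memoryOverhead", "14336"); // MiB, off-heap
        // Container request becomes ~54 GB, above the 53.1 GB peak seen here.
        System.out.println(conf.get("spark.yarn.executor.memoryOverhead"));
    }
}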
2021-06-08 16:37:45,363 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.130:45842
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:38:09,123 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:38:09,123 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:38:16,430 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:38:16,435 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[671] has 175104 row, 866512662 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:38:16,454 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[671] has 175104 row, 866512662 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:38:16,454 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:38:16,454 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[671] repartition to 4
2021-06-08 16:38:17,450 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:38:17,450 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:38:34,375 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/671_temp.
2021-06-08 16:38:34,375 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 17921 ms.
2021-06-08 16:38:34,390 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:38:34,391 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:38:36,596 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:38:36,688 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:38:36,688 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:39:04,420 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:39:04,420 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:39:10,803 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:39:10,804 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[799] has 395212 row, 1162023867 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:39:10,818 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[799] has 395212 row, 1162023867 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:39:10,818 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:39:10,818 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[799] repartition to 5
2021-06-08 16:39:10,991 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:39:10,991 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:39:19,222 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/799_temp.
2021-06-08 16:39:19,222 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 8404 ms.
2021-06-08 16:39:19,240 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:39:19,240 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:39:20,507 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:39:20,597 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:39:20,597 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:39:38,328 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:39:38,328 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:39:43,446 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:39:43,447 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[543] has 99258 row, 613392180 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:39:43,461 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[543] has 99258 row, 613392180 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:39:43,461 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:39:43,461 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[543] repartition to 3
2021-06-08 16:39:43,633 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:39:43,633 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:39:48,067 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/543_temp.
2021-06-08 16:39:48,067 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4606 ms.
2021-06-08 16:39:48,085 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:39:48,086 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:39:49,447 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:39:49,540 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:39:49,540 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:40:17,328 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:40:17,328 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:40:23,830 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:40:23,831 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[927] has 655663 row, 1409889878 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:40:23,845 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[927] has 655663 row, 1409889878 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:40:23,845 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:40:23,845 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[927] repartition to 6
2021-06-08 16:40:24,063 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:40:24,063 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:40:34,606 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/927_temp.
2021-06-08 16:40:34,606 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 10761 ms.
2021-06-08 16:40:34,622 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:40:34,622 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:40:36,598 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:40:36,698 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:40:36,699 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:40:45,011 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 24 for reason Container killed by YARN for exceeding memory limits. 53.1 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:40:45,012 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 24 on r4200g2-app.travelsky.com: Container killed by YARN for exceeding memory limits. 53.1 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:40:45,012 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 104.0 in stage 360.0 (TID 25688, r4200g2-app.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 53.1 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:40:45,012 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 266.0 in stage 360.0 (TID 25738, r4200g2-app.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 53.1 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:40:45,012 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 428.0 in stage 360.0 (TID 25800, r4200g2-app.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 53.1 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:41:07,627 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 13 for reason Container killed by YARN for exceeding memory limits. 49.1 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:41:07,627 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 13 on r4200d1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 49.1 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:41:07,628 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 428.1 in stage 360.0 (TID 26126, r4200d1-app.travelsky.com, executor 13): ExecutorLostFailure (executor 13 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 49.1 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:41:31,647 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 3 for reason Container killed by YARN for exceeding memory limits. 46.2 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:41:31,647 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 3 on umetrip35-hdp2.6-135.travelsky.com: Container killed by YARN for exceeding memory limits. 46.2 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:41:31,648 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 428.2 in stage 360.0 (TID 26129, umetrip35-hdp2.6-135.travelsky.com, executor 3): ExecutorLostFailure (executor 3 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.2 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:41:35,729 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.135:37078
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:42:12,556 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 1 for reason Container killed by YARN for exceeding memory limits. 48.4 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:42:12,556 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 1 on umetrip40-hdp2.6-140.travelsky.com: Container killed by YARN for exceeding memory limits. 48.4 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:42:12,556 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 428.3 in stage 360.0 (TID 26130, umetrip40-hdp2.6-140.travelsky.com, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.4 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:42:12,556 ERROR [pool-1-thread-1] scheduler.TaskSetManager : Task 428 in stage 360.0 failed 4 times; aborting job
2021-06-08 16:42:12,558 ERROR [pool-1-thread-1] datasources.FileFormatWriter : Aborting job f02bebb6-1202-452e-98ff-789382630111.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 428 in stage 360.0 failed 4 times, most recent failure: Lost task 428.3 in stage 360.0 (TID 26130, umetrip40-hdp2.6-140.travelsky.com, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.4 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567)
    at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59)
    at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:42:12,561 ERROR [pool-1-thread-1] job.BuildLayoutWithUpdate : Error occurred when run merge-cuboid-686
org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567)
    at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59)
    at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 428 in stage 360.0 failed 4 times, most recent failure: Lost task 428.3 in stage 360.0 (TID 26130, umetrip40-hdp2.6-140.travelsky.com, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.4 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
    ... 34 more
2021-06-08 16:42:12,561 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:42:12,564 INFO [pool-1-thread-1] server.AbstractConnector : Stopped Spark@7e0bfb2b{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2021-06-08 16:42:12,647 ERROR [pool-1-thread-1] application.SparkApplication : The spark job execute failed!
java.lang.RuntimeException: org.apache.spark.SparkException: Job aborted.
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89)
    at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567)
    at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59)
    at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    ... 3 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 428 in stage 360.0 failed 4 times, most recent failure: Lost task 428.3 in stage 360.0 (TID 26130, umetrip40-hdp2.6-140.travelsky.com, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.4 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
    ... 34 more
2021-06-08 16:42:12,647 ERROR [pool-1-thread-1] application.JobMonitor : Job failed the 3 times.
java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeMergeJob
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:92)
    at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: org.apache.spark.SparkException: Job aborted.
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89)
    ... 4 more
Caused by: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567)
    at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59)
    at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    ... 3 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 428 in stage 360.0 failed 4 times, most recent failure: Lost task 428.3 in stage 360.0 (TID 26130, umetrip40-hdp2.6-140.travelsky.com, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.4 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
    ... 34 more
2021-06-08 16:42:12,648 INFO [pool-1-thread-1] application.SparkApplication : Executor task org.apache.kylin.engine.spark.job.CubeMergeJob with args : {"distMetaUrl":"kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta","submitter":"SYSTEM","dataRangeEnd":"1622332800000","targetModel":"cee6d39a-b052-4351-ba8a-73ddd583836e","dataRangeStart":"1619827200000","project":"user_growth","className":"org.apache.kylin.engine.spark.job.CubeMergeJob","segmentName":"20210501000000_20210530000000","parentId":"4e85bb17-9201-441f-afcb-f17827cc0d18","jobId":"4e85bb17-9201-441f-afcb-f17827cc0d18","outputMetaUrl":"kylin_metadata@jdbc,url=jdbc:mysql://10.238.2.228:6033/kylin,username=kylin,password=******,maxActive=10,maxIdle=10","segmentId":"c0d90f42-34ef-d9d1-e06f-42eee385b290","cuboidsNum":"63","cubeName":"his_msg_push_event","jobType":"MERGE","cubeId":"77dfdfc0-44df-9963-0792-3b2fcca55734","segmentIds":"c0d90f42-34ef-d9d1-e06f-42eee385b290"}
2021-06-08 16:42:12,648 INFO [pool-1-thread-1] utils.MetaDumpUtil : Ready to load KylinConfig from uri: kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 16:42:12,682 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.metadata.url.identifier : kylin_metadata
2021-06-08 16:42:12,682 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.log.spark-executor-properties-file : /opt/appdata/disk01/app/kylin/conf/spark-executor-log4j.properties
2021-06-08 16:42:12,682 INFO [pool-1-thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.source.provider.0 : org.apache.kylin.engine.spark.source.HiveSource
2021-06-08 16:42:12,682 INFO [pool-1-thread-1] util.TimeZoneUtils : System timezone set to GMT+8, TimeZoneId: GMT+08:00.
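The failure arithmetic above is Spark's default task retry budget: task 428 was rescheduled after each executor loss and aborted stage 360.0 on its fourth failure because spark.task.maxFailures defaults to 4; Kylin's JobMonitor then counted an application-level attempt ("Job failed the 3 times") and resubmits below. A small sketch of the knob involved, for orientation only; raising it would just buy more retries against the same memory kills:

import org.apache.spark.SparkConf;

public class RetryBudgetSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        // Default 4 is exactly why the log reads
        // "Task 428 in stage 360.0 failed 4 times; aborting job".
        System.out.println(conf.getInt("spark.task.maxFailures", 4));
        // Illustrative bump; it delays, but does not prevent, the abort:
        conf.set("spark.task.maxFailures", "8");
    }
}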
2021-06-08 16:42:12,682 INFO [pool-1-thread-1] application.SparkApplication : Sleep for random seconds to avoid submitting too many spark job at the same time.
2021-06-08 16:42:57,560 WARN [pool-1-thread-1] application.SparkApplication : Error occurred when check resource. Ignore it and try to submit this job.
java.util.NoSuchElementException: spark.driver.memoryOverhead
    at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:246)
    at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.SparkConf.get(SparkConf.scala:246)
    at org.apache.spark.utils.ResourceUtils$.checkResource(ResourceUtils.scala:70)
    at org.apache.spark.utils.ResourceUtils.checkResource(ResourceUtils.scala)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:259)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89)
    at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
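The resource check above fails in a telling way: SparkConf.get(key) without a default throws java.util.NoSuchElementException when the key was never set, so Kylin logs the warning and skips the check instead of validating driver memory. A minimal sketch of that behavior and the obvious workaround of setting spark.driver.memoryOverhead explicitly (for Kylin presumably via the kylin.engine.spark-conf.* prefix, an assumption not shown in this log):

import org.apache.spark.SparkConf;
import java.util.NoSuchElementException;

public class ResourceCheckSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        try {
            // Unset key, no default -> throws, which is exactly what
            // ResourceUtils.checkResource hit above.
            conf.get("spark.driver.memoryOverhead");
        } catch (NoSuchElementException e) {
            System.out.println("unset: " + e.getMessage());
        }
        conf.set("spark.driver.memoryOverhead", "1024"); // MiB, illustrative
        System.out.println(conf.get("spark.driver.memoryOverhead"));
    }
}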
o.s.j.s.ServletContextHandler@4f884d77{/storage/rdd,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,647 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@19e906e{/storage/rdd/json,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,647 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@47d474d0{/environment,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,647 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1fdefd96{/environment/json,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,647 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@1230f8ef{/executors,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,648 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@67d81727{/executors/json,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,648 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3ce9df8{/executors/threadDump,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,648 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6b94e25a{/executors/threadDump/json,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,649 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7e769847{/static,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,649 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@59b0d981{/,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,649 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@2ee6be12{/api,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,649 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@28d20714{/jobs/job/kill,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,650 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@77f1bf5e{/stages/stage/kill,null,AVAILABLE,@Spark} 2021-06-08 16:42:57,696 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at umetrip11-hdp2.6-111.travelsky.com/10.5.145.111:8050 2021-06-08 16:42:57,729 WARN [pool-1-thread-1] yarn.Client : Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME. 
2021-06-08 16:43:01,224 INFO [pool-1-thread-1] impl.YarnClientImpl : Submitted application application_1617093658603_222869
2021-06-08 16:43:06,233 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4a4fb90{/metrics/json,null,AVAILABLE,@Spark}
2021-06-08 16:43:12,993 INFO [pool-1-thread-1] client.RMProxy : Connecting to ResourceManager at umetrip11-hdp2.6-111.travelsky.com/10.5.145.111:8050
2021-06-08 16:43:13,060 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@654238f{/SQL,null,AVAILABLE,@Spark}
2021-06-08 16:43:13,060 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@17e309a{/SQL/json,null,AVAILABLE,@Spark}
2021-06-08 16:43:13,061 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@3892ac96{/SQL/execution,null,AVAILABLE,@Spark}
2021-06-08 16:43:13,061 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@64c93bf9{/SQL/execution/json,null,AVAILABLE,@Spark}
2021-06-08 16:43:13,062 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@76231037{/static/sql,null,AVAILABLE,@Spark}
2021-06-08 16:43:13,064 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.cube.CubeManager
2021-06-08 16:43:13,064 INFO [pool-1-thread-1] cube.CubeManager : Initializing CubeManager with config kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 16:43:13,064 INFO [pool-1-thread-1] persistence.ResourceStore : Using metadata url kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta for resource store
2021-06-08 16:43:13,081 INFO [pool-1-thread-1] persistence.HDFSResourceStore : hdfs meta path : hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 16:43:13,082 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading CubeInstance from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/cube
2021-06-08 16:43:13,087 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.cube.CubeDescManager
2021-06-08 16:43:13,087 INFO [pool-1-thread-1] cube.CubeDescManager : Initializing CubeDescManager with config kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 16:43:13,087 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading CubeDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/cube_desc
2021-06-08 16:43:13,091 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.project.ProjectManager
2021-06-08 16:43:13,091 INFO [pool-1-thread-1] project.ProjectManager : Initializing ProjectManager with metadata url kylin_metadata@hdfs,path=hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta
2021-06-08 16:43:13,091 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ProjectInstance from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/project
2021-06-08 16:43:13,093 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 ProjectInstance(s) out of 1 resource with 0 errors
2021-06-08 16:43:13,093 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.cachesync.Broadcaster
2021-06-08 16:43:13,094 DEBUG [pool-1-thread-1] cachesync.Broadcaster : 3 nodes in the cluster: [10.5.145.128:7070, 10.238.6.117:7070, 10.238.6.118:7070]
2021-06-08 16:43:13,094 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.model.DataModelManager
2021-06-08 16:43:13,094 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.TableMetadataManager
2021-06-08 16:43:13,094 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/table
2021-06-08 16:43:13,097 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 TableDesc(s) out of 1 resource with 0 errors
2021-06-08 16:43:13,097 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableExtDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/table_exd
2021-06-08 16:43:13,100 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 TableExtDesc(s) out of 1 resource with 0 errors
2021-06-08 16:43:13,100 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ExternalFilterDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/ext_filter
2021-06-08 16:43:13,100 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 0 ExternalFilterDesc(s) out of 0 resource with 0 errors
2021-06-08 16:43:13,100 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading DataModelDesc from hdfs://umecluster/kylin_new/kylin_metadata/user_growth/job_tmp/4e85bb17-9201-441f-afcb-f17827cc0d18-01/meta/model_desc
2021-06-08 16:43:13,103 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 DataModelDesc(s) out of 1 resource with 0 errors
2021-06-08 16:43:13,104 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeDesc(s) out of 1 resource with 0 errors
2021-06-08 16:43:13,104 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeInstance(s) out of 1 resource with 0 errors
2021-06-08 16:43:22,750 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:43:22,861 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:43:22,861 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:44:05,610 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:44:05,610 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:44:12,531 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:44:12,532 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[910] has 357294 row, 1191821266 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:44:12,546 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[910] has 357294 row, 1191821266 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:44:12,546 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:44:12,546 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[910] repartition to 5
2021-06-08 16:44:12,765 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:44:12,765 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:44:30,843 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/910_temp.
2021-06-08 16:44:30,843 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 18297 ms.
2021-06-08 16:44:30,859 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:44:30,860 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:44:33,105 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:44:33,213 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:44:33,213 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:45:20,115 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:45:20,115 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:45:27,263 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:45:27,293 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[654] has 85045 row, 654707221 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:45:27,310 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[654] has 85045 row, 654707221 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:45:27,310 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:45:27,310 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[654] repartition to 3
2021-06-08 16:45:28,209 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:45:28,209 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:45:47,548 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/654_temp.
2021-06-08 16:45:47,548 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 20238 ms.
2021-06-08 16:45:47,566 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:45:47,567 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:45:49,671 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
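The pair of partition counts in each Repartitioner record above is reproducible from the logged numbers: every size-based count in this section matches ceil(bytes / 128 MB), every row-based count matches ceil(rows / 2,500,000), and the final count is the ceiling of their mean. A minimal Scala sketch of that apparent rule, assuming this deployment keeps Kylin 4's defaults (kylin.storage.columnar.shard-size-mb=128, kylin.storage.columnar.shard-rowcount=2500000; both values are assumptions, not read from this log):

    // Hedged reconstruction of the partition-count rule implied by the log lines above.
    val shardSizeBytes = 128L * 1024 * 1024 // assumed 128 MB shard size
    val shardRowCount  = 2500000L           // assumed row threshold per partition

    def bySize(bytes: Long): Int = math.ceil(bytes.toDouble / shardSizeBytes).toInt
    def byRows(rows: Long): Int  = math.ceil(rows.toDouble / shardRowCount).toInt
    def finalPartitions(bytes: Long, rows: Long): Int =
      math.ceil((bySize(bytes) + byRows(rows)) / 2.0).toInt

    // cuboid[910] above: 1191821266 bytes -> 9, 357294 rows -> 1, final ceil((9+1)/2) = 5
    assert(finalPartitions(1191821266L, 357294L) == 5)
    // cuboid[654] above: 654707221 bytes -> 5, 85045 rows -> 1, final ceil((5+1)/2) = 3
    assert(finalPartitions(654707221L, 85045L) == 3)

The same rule also explains the half-partition roundups later in the log (e.g. a size count of 8 and a row count of 1 giving a final of 5).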
2021-06-08 16:45:49,778 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:45:49,778 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:46:46,247 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:46:46,247 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:46:51,997 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:46:51,998 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[782] has 208663 row, 940449977 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:46:52,011 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[782] has 208663 row, 940449977 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:46:52,011 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:46:52,011 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[782] repartition to 5
2021-06-08 16:46:52,204 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:46:52,204 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:47:23,319 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/782_temp.
2021-06-08 16:47:23,319 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 31308 ms.
2021-06-08 16:47:23,337 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:47:23,338 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:47:25,762 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:47:25,885 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:47:25,885 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:48:05,563 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:48:05,563 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:48:10,911 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:48:10,913 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[655] has 101546 row, 656158859 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:48:10,928 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[655] has 101546 row, 656158859 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:48:10,928 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:48:10,928 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[655] repartition to 3
2021-06-08 16:48:11,108 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:48:11,108 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:48:18,215 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/655_temp.
2021-06-08 16:48:18,215 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7287 ms.
2021-06-08 16:48:18,233 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:48:18,234 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:48:20,017 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:48:20,188 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:48:20,188 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:48:43,074 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:48:43,074 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:48:49,067 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:48:49,068 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[911] has 401884 row, 1193314464 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:48:49,082 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[911] has 401884 row, 1193314464 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:48:49,082 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:48:49,082 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[911] repartition to 5
2021-06-08 16:48:49,260 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:48:49,260 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:49:01,851 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/911_temp.
2021-06-08 16:49:01,851 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12769 ms.
2021-06-08 16:49:01,869 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:49:01,869 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:49:03,684 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
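Every write in this job reports "File Output Committer Algorithm version is 1", i.e. task output is staged and then moved into place by a sequential rename pass at job commit, which adds a fixed cost to each per-cuboid write above. Hadoop's v2 algorithm commits task output as each task finishes and is usually faster for jobs that write many files, at the price of leaving partial output behind on failure. A hedged sketch of how v2 would be selected for a Spark session (the property name mapreduce.fileoutputcommitter.algorithm.version is standard Hadoop; whether v2's weaker failure semantics are acceptable for these merge jobs is a separate question):

    import org.apache.spark.sql.SparkSession

    // Sketch only: route the Hadoop committer property through Spark's
    // spark.hadoop.* prefix so every Hadoop-backed write picks it up.
    val spark = SparkSession.builder()
      .appName("committer-v2-sketch")
      .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
      .getOrCreate()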
2021-06-08 16:49:03,798 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:49:03,798 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:49:16,717 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 19 for reason Container killed by YARN for exceeding memory limits. 44.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:16,717 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 19 on umetrip33-hdp2.6-133.travelsky.com: Container killed by YARN for exceeding memory limits. 44.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:16,717 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 241.0 in stage 160.0 (TID 10472, umetrip33-hdp2.6-133.travelsky.com, executor 19): ExecutorLostFailure (executor 19 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:16,717 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 467.0 in stage 160.0 (TID 10636, umetrip33-hdp2.6-133.travelsky.com, executor 19): ExecutorLostFailure (executor 19 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:16,717 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 693.0 in stage 160.0 (TID 10763, umetrip33-hdp2.6-133.travelsky.com, executor 19): ExecutorLostFailure (executor 19 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:19,720 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 22 for reason Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:19,720 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 22 on umetrip05-hdp2.6-105.travelsky.com: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:19,720 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 335.0 in stage 160.0 (TID 10649, umetrip05-hdp2.6-105.travelsky.com, executor 22): ExecutorLostFailure (executor 22 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:19,721 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 109.0 in stage 160.0 (TID 10367, umetrip05-hdp2.6-105.travelsky.com, executor 22): ExecutorLostFailure (executor 22 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:19,721 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 787.0 in stage 160.0 (TID 10953, umetrip05-hdp2.6-105.travelsky.com, executor 22): ExecutorLostFailure (executor 22 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:19,721 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 561.0 in stage 160.0 (TID 10875, umetrip05-hdp2.6-105.travelsky.com, executor 22): ExecutorLostFailure (executor 22 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:49:21,689 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.133:47041
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:49:23,818 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.105:36685
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:50:03,572 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:50:03,572 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:50:14,201 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:50:14,202 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[783] has 238128 row, 942143011 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:50:14,218 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[783] has 238128 row, 942143011 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 16:50:14,218 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:50:14,218 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[783] repartition to 5
2021-06-08 16:50:14,406 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:50:14,406 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:50:27,779 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/783_temp.
2021-06-08 16:50:27,779 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 13561 ms.
2021-06-08 16:50:27,797 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:50:27,798 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:50:30,081 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:50:30,192 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:50:30,192 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:51:02,050 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:51:02,050 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:51:08,224 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:51:08,225 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[527] has 56804 row, 438762222 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
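The executor losses above are resource kills, not application errors: each container reached YARN's 44 GB physical-memory limit, YARN killed it, and the "Connection reset by peer" stack traces that follow are simply the driver's Netty transport noticing the dead executors' sockets. The log's own advice is the standard remedy. A hedged sketch of the relevant settings (values are illustrative, not tuned for this cluster; spark.yarn.executor.memoryOverhead is the Spark 2.x spelling this log uses):

    import org.apache.spark.SparkConf

    // Sketch only: leave more off-heap headroom so heap + overhead stays
    // under the YARN container limit. These must be set before the
    // application starts (spark-submit / job config), not mid-job.
    val conf = new SparkConf()
      .set("spark.executor.memory", "36g")               // illustrative
      .set("spark.yarn.executor.memoryOverhead", "8192") // MB, illustrative

In a Kylin deployment these would normally be passed through kylin.properties with the kylin.engine.spark-conf. prefix (e.g. kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=8192), assuming this installation follows the usual convention.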
2021-06-08 16:51:08,241 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[527] has 56804 row, 438762222 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 16:51:08,241 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:51:08,241 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[527] repartition to 3
2021-06-08 16:51:08,433 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:51:08,434 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:51:13,651 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/527_temp.
2021-06-08 16:51:13,651 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 5410 ms.
2021-06-08 16:51:13,669 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:51:13,669 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:51:15,480 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:51:15,591 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:51:15,591 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:51:51,287 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:51:51,287 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:51:58,615 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:51:58,616 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[926] has 598417 row, 1408570665 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:51:58,631 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[926] has 598417 row, 1408570665 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:51:58,631 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:51:58,631 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[926] repartition to 6
2021-06-08 16:51:58,817 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:51:58,817 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:52:05,452 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/926_temp.
2021-06-08 16:52:05,452 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6821 ms.
2021-06-08 16:52:05,474 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:52:05,479 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:52:16,788 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:52:16,899 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:52:16,899 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:52:42,192 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:52:42,192 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:52:49,467 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:52:49,468 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[670] has 151229 row, 864446584 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:52:49,483 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[670] has 151229 row, 864446584 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:52:49,483 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:52:49,483 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[670] repartition to 4
2021-06-08 16:52:49,672 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:52:49,672 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:52:57,075 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/670_temp.
2021-06-08 16:52:57,075 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7592 ms.
2021-06-08 16:52:57,110 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:52:57,126 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:53:00,369 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:53:00,514 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:53:00,514 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:53:16,283 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:53:16,283 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:53:18,423 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:53:18,423 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[542] has 84023 row, 611474548 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:53:18,439 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[542] has 84023 row, 611474548 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:53:18,439 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:53:18,439 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[542] repartition to 3
2021-06-08 16:53:18,616 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:53:18,616 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:53:25,064 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/542_temp.
2021-06-08 16:53:25,064 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6625 ms.
2021-06-08 16:53:25,082 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:53:25,083 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:53:26,893 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:53:27,003 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:53:27,004 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:53:53,771 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:53:53,771 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:54:00,155 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:54:00,156 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[798] has 356137 row, 1160122829 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:54:00,170 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[798] has 356137 row, 1160122829 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:54:00,170 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:54:00,170 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[798] repartition to 5
2021-06-08 16:54:00,349 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:54:00,349 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:54:12,971 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/798_temp.
2021-06-08 16:54:12,971 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12801 ms.
2021-06-08 16:54:12,987 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:54:12,988 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:54:14,845 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
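Each "Collect output rows failed" / "Can not get cuboid row cnt" pair above marks the same fallback: the merge job cannot recover the output row count from Spark's execution metrics, so it issues an explicit count() per cuboid, paying one extra scan before every repartition decision. A minimal sketch of that fallback shape (names are illustrative, not Kylin's actual code):

    import org.apache.spark.sql.DataFrame
    import scala.util.Try

    // Sketch only: prefer a metrics-derived row count; if unavailable
    // (as throughout this log), fall back to a full count() scan.
    def cuboidRowCount(df: DataFrame, metricRows: => Long): Long =
      Try(metricRows).filter(_ > 0).getOrElse(df.count())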
2021-06-08 16:54:14,958 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:54:14,958 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:54:39,590 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 27 for reason Container killed by YARN for exceeding memory limits. 46.2 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:54:39,590 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 27 on umetrip03-hdp2.6-103.travelsky.com: Container killed by YARN for exceeding memory limits. 46.2 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:54:39,591 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 135.0 in stage 328.0 (TID 22991, umetrip03-hdp2.6-103.travelsky.com, executor 27): ExecutorLostFailure (executor 27 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.2 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:54:39,591 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 344.0 in stage 328.0 (TID 23335, umetrip03-hdp2.6-103.travelsky.com, executor 27): ExecutorLostFailure (executor 27 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.2 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:54:42,541 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.103:37243
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:55:10,936 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:55:10,936 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:55:14,153 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:55:14,154 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[671] has 175104 row, 866517078 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:55:14,169 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[671] has 175104 row, 866517078 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 16:55:14,169 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:55:14,169 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[671] repartition to 4
2021-06-08 16:55:14,367 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:55:14,367 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:55:26,613 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/671_temp.
2021-06-08 16:55:26,613 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12444 ms.
2021-06-08 16:55:26,630 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:55:26,632 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:55:29,017 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:55:29,132 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:55:29,132 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:55:56,033 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:55:56,034 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:56:02,158 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:56:02,158 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[799] has 395212 row, 1162029049 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:56:02,173 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[799] has 395212 row, 1162029049 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 16:56:02,173 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:56:02,173 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[799] repartition to 5
2021-06-08 16:56:02,354 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:56:02,354 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:56:08,981 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/799_temp.
2021-06-08 16:56:08,982 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6808 ms.
2021-06-08 16:56:08,998 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:56:08,998 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:56:10,872 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:56:10,987 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:56:10,987 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:56:37,932 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:56:37,932 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:56:44,677 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:56:44,688 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[543] has 99258 row, 613395512 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:56:44,703 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[543] has 99258 row, 613395512 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 16:56:44,704 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:56:44,704 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[543] repartition to 3
2021-06-08 16:56:45,591 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:56:45,591 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:57:02,705 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/543_temp.
2021-06-08 16:57:02,705 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 18001 ms.
2021-06-08 16:57:02,723 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:57:02,723 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:57:04,893 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:57:05,008 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:57:05,008 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:57:30,141 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:57:30,141 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:57:36,268 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:57:36,269 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[927] has 655663 row, 1409895520 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:57:36,283 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[927] has 655663 row, 1409895520 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 16:57:36,283 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:57:36,283 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[927] repartition to 6
2021-06-08 16:57:36,465 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:57:36,465 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:57:50,252 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/927_temp.
2021-06-08 16:57:50,252 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 13969 ms.
2021-06-08 16:57:50,267 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:57:50,268 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:57:52,085 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:57:52,201 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:57:52,201 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:58:17,426 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 17 for reason Container killed by YARN for exceeding memory limits. 45.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:58:17,427 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 17 on umetrip23-hdp2.6-123.travelsky.com: Container killed by YARN for exceeding memory limits. 45.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:58:17,427 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 590.0 in stage 440.0 (TID 32672, umetrip23-hdp2.6-123.travelsky.com, executor 17): ExecutorLostFailure (executor 17 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 45.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:58:20,187 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.123:39469
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 16:58:35,434 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 24 for reason Container killed by YARN for exceeding memory limits. 45.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:58:35,434 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 24 on umetrip36-hdp2.6-136.travelsky.com: Container killed by YARN for exceeding memory limits. 45.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:58:35,435 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 266.0 in stage 440.0 (TID 32486, umetrip36-hdp2.6-136.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 45.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:58:35,435 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 590.1 in stage 440.0 (TID 32928, umetrip36-hdp2.6-136.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 45.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:58:35,435 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 428.0 in stage 440.0 (TID 32592, umetrip36-hdp2.6-136.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 45.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:58:35,435 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 104.0 in stage 440.0 (TID 32325, umetrip36-hdp2.6-136.travelsky.com, executor 24): ExecutorLostFailure (executor 24 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 45.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 16:59:20,510 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 16:59:20,510 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 16:59:32,525 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:59:32,526 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[686] has 115549 row, 671179165 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:59:32,540 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[686] has 115549 row, 671179165 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 16:59:32,540 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 16:59:32,540 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[686] repartition to 4
2021-06-08 16:59:32,758 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:59:32,758 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:59:50,501 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/686_temp.
2021-06-08 16:59:50,501 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 17961 ms.
2021-06-08 16:59:50,519 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 16:59:50,520 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 16:59:54,330 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 16:59:54,441 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 16:59:54,442 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:00:20,167 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:00:20,167 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:00:27,431 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:00:27,432 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[558] has 65187 row, 457838878 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 17:00:27,445 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[558] has 65187 row, 457838878 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 17:00:27,445 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:00:27,445 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[558] repartition to 3
2021-06-08 17:00:27,629 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:00:27,629 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:00:33,492 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/558_temp.
2021-06-08 17:00:33,492 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6047 ms.
2021-06-08 17:00:33,507 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:00:33,508 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:00:35,733 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:00:35,843 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:00:35,843 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:01:18,642 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:01:18,642 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:01:25,802 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:01:25,803 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[942] has 444713 row, 1207571261 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 17:01:25,820 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[942] has 444713 row, 1207571261 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 17:01:25,820 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:01:25,820 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[942] repartition to 5
2021-06-08 17:01:26,378 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:01:26,378 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:01:39,476 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/942_temp.
2021-06-08 17:01:39,476 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 13656 ms.
2021-06-08 17:01:39,491 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:01:39,492 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:01:41,740 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:01:41,849 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:01:41,849 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:02:24,490 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:02:24,490 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:02:30,548 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:02:30,549 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[814] has 265469 row, 957333426 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:02:30,563 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[814] has 265469 row, 957333426 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:02:30,563 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:02:30,563 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[814] repartition to 5
2021-06-08 17:02:30,743 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:02:30,743 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:02:37,961 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/814_temp.
2021-06-08 17:02:37,961 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7398 ms.
2021-06-08 17:02:37,977 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:02:37,978 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:02:39,952 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:02:40,063 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:02:40,063 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:03:03,697 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:03:03,697 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:03:10,380 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:03:10,381 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[815] has 296248 row, 959680772 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:03:10,394 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[815] has 296248 row, 959680772 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:03:10,394 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:03:10,394 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[815] repartition to 5
2021-06-08 17:03:10,587 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:03:10,587 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:03:16,267 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/815_temp.
2021-06-08 17:03:16,267 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 5873 ms.
2021-06-08 17:03:16,284 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:03:16,285 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:03:18,251 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:03:18,430 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:03:18,430 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:03:40,443 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:03:40,443 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:03:46,549 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:03:46,549 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[943] has 490637 row, 1209025450 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 17:03:46,563 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[943] has 490637 row, 1209025450 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 17:03:46,563 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:03:46,563 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[943] repartition to 6
2021-06-08 17:03:46,753 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:03:46,753 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:03:52,867 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/943_temp.
2021-06-08 17:03:52,867 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6304 ms.
2021-06-08 17:03:52,890 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:03:52,891 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:03:54,740 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
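
Each cycle ends by dropping the <cuboid>_temp directory once the repartitioned copy is in place ("Delete temp cuboid path successful"). A minimal sketch of that cleanup step using the standard Hadoop FileSystem API, with the path copied from the cuboid[943] line above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Recursive delete of the temp cuboid directory after the rewrite lands.
    Path temp = new Path("hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/"
            + "his_msg_push_event/20210501000000_20210530000000_WP4/943_temp");
    FileSystem fs = FileSystem.get(temp.toUri(), new Configuration());
    fs.delete(temp, true);  // true = recursive, matching the directory-level cleanup logged above
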
2021-06-08 17:03:54,852 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:03:54,852 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:04:16,444 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:04:16,444 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:04:18,910 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:04:18,911 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[559] has 76496 row, 459326461 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 17:04:18,926 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[559] has 76496 row, 459326461 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 17:04:18,926 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:04:18,927 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[559] repartition to 3
2021-06-08 17:04:19,114 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:04:19,114 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:04:28,762 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/559_temp.
2021-06-08 17:04:28,762 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9836 ms.
2021-06-08 17:04:28,779 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:04:28,780 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:04:30,847 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:04:30,960 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:04:30,961 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:04:56,736 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:04:56,736 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:05:02,005 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:05:02,006 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[687] has 133257 row, 673319951 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:05:02,020 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[687] has 133257 row, 673319951 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:05:02,020 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:05:02,020 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[687] repartition to 4
2021-06-08 17:05:02,228 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:05:02,228 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:05:10,265 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/687_temp.
2021-06-08 17:05:10,265 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 8245 ms.
2021-06-08 17:05:10,285 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:05:10,286 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:05:12,355 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:05:12,470 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:05:12,470 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:05:36,401 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:05:36,401 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:05:43,233 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:05:43,234 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[958] has 703546 row, 1419946219 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 17:05:43,248 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[958] has 703546 row, 1419946219 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 17:05:43,248 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:05:43,248 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[958] repartition to 6
2021-06-08 17:05:43,434 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:05:43,434 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:05:49,323 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/958_temp.
2021-06-08 17:05:49,323 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6075 ms.
2021-06-08 17:05:49,340 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:05:49,341 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:05:51,277 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:05:51,386 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:05:51,386 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:06:09,648 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:06:09,648 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:06:14,624 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:06:14,625 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[574] has 110993 row, 630697311 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 17:06:14,639 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[574] has 110993 row, 630697311 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 17:06:14,639 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:06:14,639 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[574] repartition to 3
2021-06-08 17:06:14,825 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:06:14,825 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:06:31,579 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/574_temp.
2021-06-08 17:06:31,579 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 16940 ms.
2021-06-08 17:06:31,597 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:06:31,598 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:06:33,616 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:06:33,729 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:06:33,729 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:06:58,549 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:06:58,549 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:07:04,853 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:07:04,854 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[830] has 427268 row, 1173154580 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 17:07:04,867 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[830] has 427268 row, 1173154580 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 17:07:04,867 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:07:04,867 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[830] repartition to 5
2021-06-08 17:07:05,053 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:07:05,053 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:07:12,397 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/830_temp.
2021-06-08 17:07:12,397 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7530 ms.
2021-06-08 17:07:12,419 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:07:12,420 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:07:15,540 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:07:15,656 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:07:15,656 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:07:33,556 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:07:33,556 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:07:38,813 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:07:38,814 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[702] has 193983 row, 880578674 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 17:07:38,828 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[702] has 193983 row, 880578674 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 17:07:38,828 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:07:38,828 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[702] repartition to 4
2021-06-08 17:07:39,014 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:07:39,014 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:07:45,337 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/702_temp.
2021-06-08 17:07:45,337 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6509 ms.
2021-06-08 17:07:45,354 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:07:45,355 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:07:47,297 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:07:47,409 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:07:47,409 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:07:58,288 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:07:58,288 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:08:03,587 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:08:03,588 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[575] has 127318 row, 633038017 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 17:08:03,601 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[575] has 127318 row, 633038017 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 17:08:03,601 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:08:03,601 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[575] repartition to 3
2021-06-08 17:08:03,784 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:08:03,784 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:08:09,003 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/575_temp.
2021-06-08 17:08:09,003 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 5402 ms.
2021-06-08 17:08:09,021 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:08:09,021 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:08:11,080 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:08:11,193 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:08:11,194 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:08:29,809 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:08:29,809 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:08:34,937 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:08:34,938 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[703] has 219088 row, 882940841 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 17:08:34,953 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[703] has 219088 row, 882940841 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 17:08:34,953 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:08:34,953 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[703] repartition to 4
2021-06-08 17:08:35,133 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:08:35,133 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:08:42,422 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/703_temp.
2021-06-08 17:08:42,422 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7469 ms.
2021-06-08 17:08:42,440 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:08:42,440 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:08:44,506 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:08:44,615 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:08:44,615 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:09:03,665 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:09:03,666 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:09:10,084 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:09:10,084 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[959] has 762117 row, 1422121514 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 17:09:10,099 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[959] has 762117 row, 1422121514 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 17:09:10,099 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:09:10,099 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[959] repartition to 6
2021-06-08 17:09:10,283 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:09:10,283 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:09:15,024 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/959_temp.
2021-06-08 17:09:15,024 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4925 ms.
2021-06-08 17:09:15,039 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:09:15,040 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:09:16,957 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:09:17,063 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:09:17,063 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:09:46,567 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:09:46,567 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:09:53,131 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:09:53,131 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[831] has 467671 row, 1175275822 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 17:09:53,145 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[831] has 467671 row, 1175275822 bytes and 120 files. Partition count calculated by file size is 9, calculated by row count is 1, final is 5.
2021-06-08 17:09:53,145 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:09:53,145 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[831] repartition to 5
2021-06-08 17:09:53,328 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:09:53,328 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:10:03,180 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/831_temp.
2021-06-08 17:10:03,180 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 10035 ms.
2021-06-08 17:10:03,197 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:10:03,198 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:10:05,077 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:10:05,177 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:10:05,177 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:10:28,462 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 35 for reason Container killed by YARN for exceeding memory limits. 48.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:28,462 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 35 on umetrip22-hdp2.6-122.travelsky.com: Container killed by YARN for exceeding memory limits. 48.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:28,463 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 51.0 in stage 888.0 (TID 67212, umetrip22-hdp2.6-122.travelsky.com, executor 35): ExecutorLostFailure (executor 35 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:28,463 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 305.0 in stage 888.0 (TID 67499, umetrip22-hdp2.6-122.travelsky.com, executor 35): ExecutorLostFailure (executor 35 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:28,463 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 432.0 in stage 888.0 (TID 67594, umetrip22-hdp2.6-122.travelsky.com, executor 35): ExecutorLostFailure (executor 35 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:34,473 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 41 for reason Container killed by YARN for exceeding memory limits. 44.4 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:34,473 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 41 on umetrip21-hdp2.6-121.travelsky.com: Container killed by YARN for exceeding memory limits. 44.4 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:34,474 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 470.0 in stage 888.0 (TID 67548, umetrip21-hdp2.6-121.travelsky.com, executor 41): ExecutorLostFailure (executor 41 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.4 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:34,474 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 89.0 in stage 888.0 (TID 67257, umetrip21-hdp2.6-121.travelsky.com, executor 41): ExecutorLostFailure (executor 41 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.4 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:34,474 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 343.0 in stage 888.0 (TID 67496, umetrip21-hdp2.6-121.travelsky.com, executor 41): ExecutorLostFailure (executor 41 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.4 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:34,474 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 216.0 in stage 888.0 (TID 67387, umetrip21-hdp2.6-121.travelsky.com, executor 41): ExecutorLostFailure (executor 41 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.4 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:10:34,478 WARN [pool-1-thread-1] spark.ExecutorAllocationManager : Attempted to mark unknown executor 41 idle
2021-06-08 17:10:37,889 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.121:56672
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.buffer.WrappedByteBuf.writeBytes(WrappedByteBuf.java:821)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 17:11:10,619 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:11:10,619 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:11:15,948 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:11:15,949 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[590] has 54998 row, 526801660 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 17:11:15,963 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[590] has 54998 row, 526801660 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3.
2021-06-08 17:11:15,963 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:11:15,963 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[590] repartition to 3
2021-06-08 17:11:16,134 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:11:16,135 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:11:48,867 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/590_temp.
2021-06-08 17:11:48,867 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 32904 ms.
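
From 17:10:28 onward the merge starts losing executors: YARN kills containers whose physical memory exceeds the 44 GB limit (peaks of 48.1 and 44.4 GB above), the "Connection reset by peer" stack traces are the dead executors' channels closing, and tasks such as 51.0 and 305.0 are rescheduled elsewhere. The kill message itself names the remedy; with peaks roughly 4 GB over the container size, the overhead needs about that much more headroom. A hedged example of the override, assuming this deployment passes Spark settings through kylin.properties via the kylin.engine.spark-conf.* prefix (the exact value is illustrative, not taken from this cluster):

    # Raise executor memory overhead (MiB) so the container covers the observed ~48 GB peak.
    kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=8192
    # The log's alternative is a YARN-side switch (yarn-site.xml), suggested because of YARN-4714:
    # yarn.nodemanager.vmem-check-enabled=false

The retries visible below (task 534.0 reappearing as 534.1, task 12.0 as 12.1) show Spark recovering each time, so the merge survives, but every kill re-runs work and stretches the affected stages by minutes.
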
2021-06-08 17:11:48,886 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:11:48,887 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:11:52,658 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:11:52,761 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:11:52,761 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:12:25,432 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:12:25,432 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:12:31,311 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:12:31,312 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[974] has 370079 row, 1242548393 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 2021-06-08 17:12:31,327 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[974] has 370079 row, 1242548393 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 2021-06-08 17:12:31,327 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:12:31,327 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[974] repartition to 6 2021-06-08 17:12:31,527 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:12:31,528 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:12:37,727 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/974_temp. 2021-06-08 17:12:37,727 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6400 ms. 2021-06-08 17:12:37,745 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:12:37,746 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:12:39,891 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:12:39,994 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:12:39,994 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:13:23,292 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:13:23,293 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:13:29,693 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:13:29,695 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[846] has 216822 row, 1001616617 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 17:13:29,710 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[846] has 216822 row, 1001616617 bytes and 120 files. 
Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 17:13:29,710 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:13:29,710 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[846] repartition to 5 2021-06-08 17:13:29,897 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:13:29,897 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:13:41,534 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/846_temp. 2021-06-08 17:13:41,534 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 11824 ms. 2021-06-08 17:13:41,552 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:13:41,553 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:13:44,218 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:13:44,339 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:13:44,339 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:14:11,519 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 49 for reason Container killed by YARN for exceeding memory limits. 45.8 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:11,520 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 49 on umetrip35-hdp2.6-135.travelsky.com: Container killed by YARN for exceeding memory limits. 45.8 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:11,520 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 534.0 in stage 972.0 (TID 74219, umetrip35-hdp2.6-135.travelsky.com, executor 49): ExecutorLostFailure (executor 49 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 45.8 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:14,522 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 42 for reason Container killed by YARN for exceeding memory limits. 44.8 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:14,522 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 42 on umetrip28-hdp2.6-128.travelsky.com: Container killed by YARN for exceeding memory limits. 44.8 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
2021-06-08 17:14:14,522 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 360.0 in stage 972.0 (TID 74193, umetrip28-hdp2.6-128.travelsky.com, executor 42): ExecutorLostFailure (executor 42 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.8 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:14,622 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.135:50591 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 17:14:18,632 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.128:53637 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 17:14:20,525 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 30 for reason Container killed by YARN for exceeding memory limits. 44.1 GB of 44 GB physical memory used. 
Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:20,525 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 30 on r4200d1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 44.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:20,525 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 12.0 in stage 972.0 (TID 74141, r4200d1-app.travelsky.com, executor 30): ExecutorLostFailure (executor 30 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:23,291 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.238.6.107:48407 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 17:14:35,532 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 53 for reason Container killed by YARN for exceeding memory limits. 47.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:35,532 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 53 on umetrip15-hdp2.6-115.travelsky.com: Container killed by YARN for exceeding memory limits. 47.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:35,532 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 534.1 in stage 972.0 (TID 74244, umetrip15-hdp2.6-115.travelsky.com, executor 53): ExecutorLostFailure (executor 53 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 47.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
2021-06-08 17:14:38,446 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.115:63868 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 17:14:47,537 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 34 for reason Container killed by YARN for exceeding memory limits. 46.7 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:47,537 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 34 on umetrip04-hdp2.6-104.travelsky.com: Container killed by YARN for exceeding memory limits. 46.7 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 2021-06-08 17:14:47,537 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 12.1 in stage 972.0 (TID 74246, umetrip04-hdp2.6-104.travelsky.com, executor 34): ExecutorLostFailure (executor 34 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.7 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. 
2021-06-08 17:14:50,759 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.104:6663 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2021-06-08 17:16:01,538 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:16:01,538 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:16:12,701 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:16:12,702 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[718] has 98289 row, 724845213 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4. 2021-06-08 17:16:12,717 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[718] has 98289 row, 724845213 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4. 2021-06-08 17:16:12,717 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:16:12,717 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[718] repartition to 4 2021-06-08 17:16:12,920 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:16:12,920 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:16:33,033 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/718_temp. 2021-06-08 17:16:33,033 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 20316 ms. 2021-06-08 17:16:33,052 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:16:33,053 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:16:35,675 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 
2021-06-08 17:16:35,810 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:16:35,811 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:17:17,035 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:17:17,035 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:17:23,092 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:17:23,093 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[719] has 117166 row, 726359103 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4. 2021-06-08 17:17:23,106 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[719] has 117166 row, 726359103 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4. 2021-06-08 17:17:23,106 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:17:23,106 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[719] repartition to 4 2021-06-08 17:17:23,329 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:17:23,329 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:17:35,259 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/719_temp. 2021-06-08 17:17:35,259 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12153 ms. 2021-06-08 17:17:35,277 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:17:35,277 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:17:37,864 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:17:37,968 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:17:37,968 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:19:13,360 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:19:13,360 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:19:24,008 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:19:24,012 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[975] has 416918 row, 1244522185 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 2021-06-08 17:19:24,027 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[975] has 416918 row, 1244522185 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 
2021-06-08 17:19:24,027 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:19:24,027 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[975] repartition to 6 2021-06-08 17:19:24,520 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:19:24,520 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:19:37,049 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/975_temp. 2021-06-08 17:19:37,049 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 13022 ms. 2021-06-08 17:19:37,064 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:19:37,065 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:19:39,707 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:19:39,809 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:19:39,809 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:19:58,230 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:19:58,230 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:20:04,046 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:20:04,047 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[591] has 67473 row, 528618660 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3. 2021-06-08 17:20:04,063 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[591] has 67473 row, 528618660 bytes and 120 files. Partition count calculated by file size is 4, calculated by row count is 1, final is 3. 2021-06-08 17:20:04,063 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:20:04,063 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[591] repartition to 3 2021-06-08 17:20:04,252 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:20:04,252 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:20:08,459 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/591_temp. 2021-06-08 17:20:08,459 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4396 ms. 2021-06-08 17:20:08,478 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:20:08,478 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:20:10,744 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 
2021-06-08 17:20:10,848 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:20:10,848 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:20:35,379 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:20:35,379 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:20:42,902 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:20:42,916 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[847] has 248441 row, 1003355415 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:20:42,933 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[847] has 248441 row, 1003355415 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:20:42,933 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:20:42,933 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[847] repartition to 5
2021-06-08 17:20:43,112 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:20:43,112 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:20:49,516 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/847_temp.
2021-06-08 17:20:49,516 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6583 ms.
2021-06-08 17:20:49,532 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:20:49,533 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:20:51,723 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:20:51,938 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:20:51,938 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:21:38,883 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:21:38,883 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:21:45,086 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:21:45,086 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[606] has 98387 row, 707213449 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:21:45,100 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[606] has 98387 row, 707213449 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:21:45,100 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:21:45,100 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[606] repartition to 4
2021-06-08 17:21:45,275 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:21:45,275 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:21:51,843 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/606_temp.
2021-06-08 17:21:51,843 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6743 ms.
2021-06-08 17:21:51,859 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:21:51,859 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:21:53,832 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:21:53,950 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:21:53,950 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:22:22,384 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:22:22,385 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:22:27,808 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:22:27,809 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[862] has 370029 row, 1219805072 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 17:22:27,825 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[862] has 370029 row, 1219805072 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 17:22:27,825 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:22:27,825 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[862] repartition to 6
2021-06-08 17:22:28,019 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:22:28,019 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:22:36,959 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/862_temp.
2021-06-08 17:22:36,959 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9134 ms.
2021-06-08 17:22:36,978 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:22:36,979 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:22:38,892 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
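Note: each cycle above follows the same repartition-and-rewrite protocol: the merged cuboid sits under a <layoutId>_temp directory (120 small files here), is read back, repartitioned to the computed count, rewritten to the final layout path, and the temp directory is then removed ("Delete temp cuboid path successful"). A rough sketch of that flow, under assumed path semantics; the class, method, and path names are illustrative, not Kylin's code:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.spark.sql.*;

    public class RewriteSketch {
        static void repartitionAndRewrite(SparkSession spark, Configuration conf,
                String tempPath, String layoutPath, int partitions) throws Exception {
            // Read the un-optimized merge output, e.g. .../<segment>/606_temp
            Dataset<Row> ds = spark.read().parquet(tempPath);
            // Rewrite it with the partition count computed by the Repartitioner
            ds.repartition(partitions).write().mode(SaveMode.Overwrite).parquet(layoutPath);
            // Drop the temp dir, matching "Delete temp cuboid path successful."
            FileSystem fs = FileSystem.get(new URI(tempPath), conf);
            fs.delete(new Path(tempPath), true);
        }
    }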
2021-06-08 17:22:38,998 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:22:38,998 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:23:01,536 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:23:01,537 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:23:07,999 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:23:08,000 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[734] has 173143 row, 938881453 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 17:23:08,013 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[734] has 173143 row, 938881453 bytes and 120 files. Partition count calculated by file size is 7, calculated by row count is 1, final is 4.
2021-06-08 17:23:08,013 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:23:08,013 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[734] repartition to 4
2021-06-08 17:23:08,186 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:23:08,186 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:23:17,383 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/734_temp.
2021-06-08 17:23:17,383 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9370 ms.
2021-06-08 17:23:17,408 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:23:17,418 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:23:21,620 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:23:21,738 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:23:21,738 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:24:12,338 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:24:12,338 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:24:31,174 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:24:31,176 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[990] has 619664 row, 1449810229 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 17:24:31,190 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[990] has 619664 row, 1449810229 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 17:24:31,190 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:24:31,190 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[990] repartition to 6
2021-06-08 17:24:31,372 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:24:31,372 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:24:37,643 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/990_temp.
2021-06-08 17:24:37,643 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6453 ms.
2021-06-08 17:24:37,659 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:24:37,659 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:24:39,755 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:24:39,875 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:24:39,875 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:24:57,341 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:24:57,341 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:25:04,732 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:25:04,733 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[607] has 117301 row, 709333256 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:25:04,748 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[607] has 117301 row, 709333256 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:25:04,748 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:25:04,748 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[607] repartition to 4
2021-06-08 17:25:04,943 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:25:04,944 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:25:11,867 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/607_temp.
2021-06-08 17:25:11,867 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7119 ms.
2021-06-08 17:25:11,885 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:25:11,886 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:25:14,941 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
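Note: every cycle also begins with the same pair of messages: JobMetricsUtils cannot collect output-row metrics, so CubeMergeJob falls back to count(). That fallback schedules one extra Spark job per cuboid, which accounts for part of the one- to two-minute gaps between each "Wait to take job result" and the next "Collect output rows failed". A minimal sketch of the fallback shape, assuming the metrics come from a Spark listener and may be missing; the names are illustrative, not Kylin's API:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class RowCountFallback {
        // metricRows models output-row metrics gathered by a listener; null or a
        // non-positive value models the "Collect output rows failed." case above.
        static long cuboidRows(Dataset<Row> cuboid, Long metricRows) {
            if (metricRows != null && metricRows > 0) {
                return metricRows; // cheap path: no extra Spark job
            }
            return cuboid.count(); // fallback: runs a full count() job
        }
    }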
2021-06-08 17:25:15,106 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:25:15,106 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:25:48,538 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:25:48,538 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:25:57,296 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:25:57,297 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[863] has 412634 row, 1221360176 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 17:25:57,313 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[863] has 412634 row, 1221360176 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 17:25:57,313 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:25:57,314 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[863] repartition to 6
2021-06-08 17:25:57,582 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:25:57,582 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:26:01,501 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/863_temp.
2021-06-08 17:26:01,501 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4188 ms.
2021-06-08 17:26:01,517 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:26:01,517 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:26:03,337 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:26:03,447 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:26:03,447 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:26:29,981 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:26:29,981 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:26:36,031 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:26:36,032 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[991] has 680218 row, 1451294787 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 17:26:36,046 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[991] has 680218 row, 1451294787 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6.
2021-06-08 17:26:36,046 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:26:36,046 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[991] repartition to 6
2021-06-08 17:26:36,225 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:26:36,226 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:26:40,726 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/991_temp.
2021-06-08 17:26:40,726 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4680 ms.
2021-06-08 17:26:40,741 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:26:40,741 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:26:42,507 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:26:42,631 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:26:42,631 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:26:57,449 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:26:57,449 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:27:03,656 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:27:03,660 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[735] has 200644 row, 940337079 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:27:03,678 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[735] has 200644 row, 940337079 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:27:03,678 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:27:03,678 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[735] repartition to 5
2021-06-08 17:27:04,379 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:27:04,379 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:27:13,395 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/735_temp.
2021-06-08 17:27:13,395 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9717 ms.
2021-06-08 17:27:13,416 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:27:13,417 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:27:16,697 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:27:16,811 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:27:16,811 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:27:52,677 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:27:52,677 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:27:58,066 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:27:58,068 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[750] has 133363 row, 743712065 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:27:58,085 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[750] has 133363 row, 743712065 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:27:58,085 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:27:58,085 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[750] repartition to 4
2021-06-08 17:27:58,267 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:27:58,267 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:28:15,181 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 80 on r4200d1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 44.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:28:15,181 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 80 for reason Container killed by YARN for exceeding memory limits. 44.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:28:15,181 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 0.0 in stage 1343.0 (TID 105128, r4200d1-app.travelsky.com, executor 80): ExecutorLostFailure (executor 80 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.1 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:28:35,335 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/750_temp.
2021-06-08 17:28:35,335 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 37250 ms.
2021-06-08 17:28:35,351 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:28:35,351 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:28:39,431 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
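Note: the ERROR/WARN block above is the first real fault in this merge: YARN killed executor 80 for hitting the container's physical-memory cap (44.1 GB of 44 GB), and the cuboid[750] rewrite consequently cost 37250 ms instead of the usual 4-13 s. The kill is on physical memory, and the yarn.nodemanager.vmem-check-enabled option the message mentions governs the separate virtual-memory check, so raising the off-heap overhead is the more direct fix here. A hedged example; the kylin.engine.spark-conf.* passthrough prefix is an assumption, spark.yarn.executor.memoryOverhead is the legacy name the log quotes (superseded by spark.executor.memoryOverhead in Spark 2.3+), and the 8g value is illustrative:

    # kylin.properties (assumed passthrough prefix for executor settings)
    kylin.engine.spark-conf.spark.executor.memoryOverhead=8g

    # or directly on spark-submit
    spark-submit --conf spark.executor.memoryOverhead=8g ...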
2021-06-08 17:28:39,533 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:28:39,534 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:29:14,256 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:29:14,256 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:29:21,275 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:29:21,276 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[622] has 78063 row, 545949309 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 17:29:21,292 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[622] has 78063 row, 545949309 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3.
2021-06-08 17:29:21,292 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:29:21,292 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[622] repartition to 3
2021-06-08 17:29:21,479 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:29:21,479 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:29:38,807 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 91 for reason Container killed by YARN for exceeding memory limits. 48.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:29:38,807 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 91 on umetrip16-hdp2.6-116.travelsky.com: Container killed by YARN for exceeding memory limits. 48.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:29:38,808 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 0.0 in stage 1371.0 (TID 106563, umetrip16-hdp2.6-116.travelsky.com, executor 91): ExecutorLostFailure (executor 91 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:29:38,808 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 2.0 in stage 1371.0 (TID 106565, umetrip16-hdp2.6-116.travelsky.com, executor 91): ExecutorLostFailure (executor 91 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:29:45,460 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.116:47334
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 17:30:02,965 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/622_temp.
2021-06-08 17:30:02,965 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 41673 ms.
2021-06-08 17:30:02,981 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:30:02,982 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:30:05,198 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:30:05,337 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:30:05,337 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:30:31,493 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:30:31,493 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:30:38,240 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:30:38,241 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1006] has 461875 row, 1257753354 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 17:30:38,255 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1006] has 461875 row, 1257753354 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6.
2021-06-08 17:30:38,255 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:30:38,255 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[1006] repartition to 6
2021-06-08 17:30:38,430 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:30:38,430 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:30:42,727 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/1006_temp.
2021-06-08 17:30:42,727 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4472 ms.
2021-06-08 17:30:42,742 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:30:42,743 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:30:44,622 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:30:44,726 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:30:44,726 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:31:02,979 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:31:02,979 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:31:07,926 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:31:07,927 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[878] has 277848 row, 1018125167 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:31:07,941 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[878] has 277848 row, 1018125167 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:31:07,941 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:31:07,941 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[878] repartition to 5
2021-06-08 17:31:08,131 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:31:08,131 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:31:15,349 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/878_temp.
2021-06-08 17:31:15,349 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7408 ms.
2021-06-08 17:31:15,365 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:31:15,365 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:31:17,368 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:31:17,491 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:31:17,491 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:31:38,233 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:31:38,233 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:31:43,953 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:31:43,954 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[879] has 310852 row, 1019631902 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:31:43,967 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[879] has 310852 row, 1019631902 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:31:43,967 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:31:43,967 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[879] repartition to 5
2021-06-08 17:31:44,153 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:31:44,153 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:31:50,103 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/879_temp.
2021-06-08 17:31:50,103 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6136 ms.
2021-06-08 17:31:50,120 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:31:50,120 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:31:52,010 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:31:52,118 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:31:52,118 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:32:40,819 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 81 for reason Container killed by YARN for exceeding memory limits. 44.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:32:40,819 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 81 on umetrip14-hdp2.6-114.travelsky.com: Container killed by YARN for exceeding memory limits. 44.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:32:40,819 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 342.0 in stage 1476.0 (TID 114376, umetrip14-hdp2.6-114.travelsky.com, executor 81): ExecutorLostFailure (executor 81 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:32:40,820 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 163.0 in stage 1476.0 (TID 114220, umetrip14-hdp2.6-114.travelsky.com, executor 81): ExecutorLostFailure (executor 81 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 44.6 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:32:44,439 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.114:49865
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 17:33:01,769 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:33:01,770 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:33:09,757 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:33:09,758 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[751] has 153537 row, 745639013 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:33:09,776 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[751] has 153537 row, 745639013 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:33:09,776 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:33:09,776 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[751] repartition to 4
2021-06-08 17:33:09,954 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:33:09,954 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:33:22,534 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/751_temp.
2021-06-08 17:33:22,534 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 12758 ms. 2021-06-08 17:33:22,552 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:33:22,553 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:33:24,428 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:33:24,531 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:33:24,531 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:34:00,496 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:34:00,496 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:34:05,785 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:34:05,796 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[623] has 91694 row, 547778421 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 17:34:05,814 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[623] has 91694 row, 547778421 bytes and 120 files. Partition count calculated by file size is 5, calculated by row count is 1, final is 3. 2021-06-08 17:34:05,814 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:34:05,814 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[623] repartition to 3 2021-06-08 17:34:06,196 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:34:06,197 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:34:30,453 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/623_temp. 2021-06-08 17:34:30,453 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 24639 ms. 2021-06-08 17:34:30,471 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:34:30,472 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:34:33,805 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:34:33,916 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:34:33,916 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:35:09,037 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:35:09,037 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:35:15,311 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:35:15,312 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1007] has 510101 row, 1260788976 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 
2021-06-08 17:35:15,325 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1007] has 510101 row, 1260788976 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 2021-06-08 17:35:15,325 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:35:15,325 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[1007] repartition to 6 2021-06-08 17:35:15,490 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:35:15,491 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:35:30,471 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/1007_temp. 2021-06-08 17:35:30,471 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 15146 ms. 2021-06-08 17:35:30,488 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:35:30,489 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:35:36,640 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:35:36,759 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:35:36,759 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:36:00,264 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:36:00,264 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:36:07,847 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:36:07,848 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[894] has 447648 row, 1229664347 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 2021-06-08 17:36:07,862 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[894] has 447648 row, 1229664347 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 2021-06-08 17:36:07,862 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:36:07,862 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[894] repartition to 6 2021-06-08 17:36:08,054 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:36:08,054 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:36:14,512 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/894_temp. 2021-06-08 17:36:14,512 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 6650 ms. 2021-06-08 17:36:14,527 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:36:14,528 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 
2021-06-08 17:36:16,260 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:36:16,364 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:36:16,364 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:36:46,398 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:36:46,398 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:36:51,770 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:36:51,771 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[766] has 222884 row, 950980180 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 17:36:51,786 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[766] has 222884 row, 950980180 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5. 2021-06-08 17:36:51,786 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:36:51,786 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[766] repartition to 5 2021-06-08 17:36:51,979 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:36:51,979 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:37:09,669 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/766_temp. 2021-06-08 17:37:09,669 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 17883 ms. 2021-06-08 17:37:09,687 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:37:09,687 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:37:19,705 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:37:19,811 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:37:19,811 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:38:13,273 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:38:13,273 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:38:21,238 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:38:21,239 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[638] has 132440 row, 720266894 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4. 2021-06-08 17:38:21,257 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[638] has 132440 row, 720266894 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4. 
2021-06-08 17:38:21,257 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:38:21,257 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[638] repartition to 4 2021-06-08 17:38:21,455 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:38:21,455 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:38:47,083 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/638_temp. 2021-06-08 17:38:47,083 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 25826 ms. 2021-06-08 17:38:47,100 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:38:47,101 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:38:49,156 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:38:49,265 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:38:49,265 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:39:19,976 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:39:19,976 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:39:32,150 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:39:32,151 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1022] has 731258 row, 1459719796 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6. 2021-06-08 17:39:32,165 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1022] has 731258 row, 1459719796 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6. 2021-06-08 17:39:32,165 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:39:32,165 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[1022] repartition to 6 2021-06-08 17:39:32,348 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:39:32,348 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:39:36,359 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/1022_temp. 2021-06-08 17:39:36,359 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 4194 ms. 2021-06-08 17:39:36,374 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:39:36,375 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:39:38,192 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 
2021-06-08 17:39:38,303 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:39:38,303 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:40:12,869 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:40:12,869 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:40:18,938 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:40:18,939 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[895] has 491632 row, 1231678471 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 2021-06-08 17:40:18,952 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[895] has 491632 row, 1231678471 bytes and 120 files. Partition count calculated by file size is 10, calculated by row count is 1, final is 6. 2021-06-08 17:40:18,952 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite 2021-06-08 17:40:18,952 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[895] repartition to 6 2021-06-08 17:40:19,137 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:40:19,137 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:40:28,297 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/895_temp. 2021-06-08 17:40:28,297 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 9345 ms. 2021-06-08 17:40:28,315 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:40:28,316 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful. 2021-06-08 17:40:32,615 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result. 2021-06-08 17:40:32,721 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:40:32,721 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1 2021-06-08 17:41:09,128 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed. 2021-06-08 17:41:09,128 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows. 2021-06-08 17:41:18,622 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider 2021-06-08 17:41:18,623 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1023] has 793169 row, 1461957965 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6. 2021-06-08 17:41:18,640 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[1023] has 793169 row, 1461957965 bytes and 120 files. Partition count calculated by file size is 11, calculated by row count is 1, final is 6. 
2021-06-08 17:41:18,640 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:41:18,641 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[1023] repartition to 6
2021-06-08 17:41:18,963 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:41:26,052 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/1023_temp.
2021-06-08 17:41:26,052 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 7412 ms.
2021-06-08 17:41:26,074 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:41:26,074 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:41:29,601 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
2021-06-08 17:41:29,735 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:41:43,790 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 94 for reason Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:41:43,790 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 94 on r4200g1-app.travelsky.com: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:41:43,790 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 506.0 in stage 1728.0 (TID 136817, r4200g1-app.travelsky.com, executor 94): ExecutorLostFailure (executor 94 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:41:43,791 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 48.0 in stage 1728.0 (TID 136281, r4200g1-app.travelsky.com, executor 94): ExecutorLostFailure (executor 94 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:41:43,791 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 770.0 in stage 1728.0 (TID 137019, r4200g1-app.travelsky.com, executor 94): ExecutorLostFailure (executor 94 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:41:43,791 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 83.0 in stage 1728.0 (TID 136349, r4200g1-app.travelsky.com, executor 94): ExecutorLostFailure (executor 94 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:41:43,791 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 735.0 in stage 1728.0 (TID 136965, r4200g1-app.travelsky.com, executor 94): ExecutorLostFailure (executor 94 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 46.0 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:42:18,543 WARN [pool-1-thread-1] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint : Requesting driver to remove executor 96 for reason Container killed by YARN for exceeding memory limits. 45.4 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:42:18,543 ERROR [pool-1-thread-1] cluster.YarnScheduler : Lost executor 96 on umetrip24-hdp2.6-124.travelsky.com: Container killed by YARN for exceeding memory limits. 45.4 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
2021-06-08 17:42:18,544 WARN [pool-1-thread-1] scheduler.TaskSetManager : Lost task 473.0 in stage 1728.0 (TID 136601, umetrip24-hdp2.6-124.travelsky.com, executor 96): ExecutorLostFailure (executor 96 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 45.4 GB of 44 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
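Executors 94 and 96 die the same way: the process's physical memory (46.0 GB and 45.4 GB) overruns the 44 GB container YARN granted, so the NodeManager kills the container, and the log itself recommends boosting spark.yarn.executor.memoryOverhead. In Kylin 4 the usual place to pin such settings for submitted jobs is the kylin.engine.spark-conf.* passthrough in kylin.properties; the values below are hypothetical and need sizing against the cluster's container limits:

    # Hypothetical overrides in kylin.properties; the overhead covers off-heap
    # usage (Parquet and Netty buffers) that the executor heap setting does not.
    kylin.engine.spark-conf.spark.executor.memory=28g
    kylin.engine.spark-conf.spark.executor.memoryOverhead=12g

Raising the overhead enlarges the container request, so the same physical usage that killed executors 94 and 96 would fit inside the granted limit.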
2021-06-08 17:42:19,710 WARN [pool-1-thread-1] server.TransportChannelHandler : Exception in connection from /10.5.145.124:55668
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2021-06-08 17:42:38,080 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:42:38,081 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:42:41,820 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:42:41,821 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[767] has 251681 row, 953000219 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:42:41,834 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[767] has 251681 row, 953000219 bytes and 120 files. Partition count calculated by file size is 8, calculated by row count is 1, final is 5.
2021-06-08 17:42:41,834 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:42:41,834 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[767] repartition to 5
2021-06-08 17:42:42,007 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:43:00,704 INFO [pool-1-thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/767_temp.
2021-06-08 17:43:00,704 INFO [pool-1-thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 18870 ms.
2021-06-08 17:43:00,724 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:43:00,724 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Take job result successful.
2021-06-08 17:43:04,257 INFO [pool-1-thread-1] job.BuildLayoutWithUpdate : Wait to take job result.
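The recurring pair 'Collect output rows failed' / 'Can not get cuboid row cnt, use count() to collect cuboid rows' means the row count normally harvested from Spark's write metrics was unavailable, so the job re-reads each cuboid it just wrote and counts it, paying an extra full scan per cuboid. A minimal sketch of that fallback pattern, with hypothetical names (metricsRowCount, cuboidPath) standing in for Kylin's actual internals:

    import org.apache.spark.sql.SparkSession

    object CuboidRowCount {
      // Prefer the row count collected from job metrics; when it is missing,
      // fall back to an explicit count() over the parquet that was just written.
      def rowCount(spark: SparkSession, cuboidPath: String,
                   metricsRowCount: Option[Long]): Long =
        metricsRowCount.getOrElse(spark.read.parquet(cuboidPath).count())
    }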
2021-06-08 17:43:04,361 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:44:05,914 INFO [pool-1-thread-1] utils.JobMetricsUtils : Collect output rows failed.
2021-06-08 17:44:05,914 WARN [pool-1-thread-1] job.CubeMergeJob : Can not get cuboid row cnt, use count() to collect cuboid rows.
2021-06-08 17:44:12,541 DEBUG [pool-1-thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:44:12,541 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[639] has 152517 row, 721527554 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:44:12,555 INFO [pool-1-thread-1] utils.Repartitioner : Before repartition, cuboid[639] has 152517 row, 721527554 bytes and 120 files. Partition count calculated by file size is 6, calculated by row count is 1, final is 4.
2021-06-08 17:44:12,555 INFO [pool-1-thread-1] utils.Repartitioner : Start repartition and rewrite
2021-06-08 17:44:12,555 INFO [pool-1-thread-1] utils.Repartitioner : Cuboid[639] repartition to 4
2021-06-08 17:44:12,717 INFO [pool-1-thread-1] output.FileOutputCommitter : File Output Committer Algorithm version is 1
2021-06-08 17:44:31,975 INFO [Thread-1] utils.Repartitioner : Delete temp cuboid path successful. Temp path: hdfs://umecluster/kylin_new/kylin_metadata/user_growth/parquet/his_msg_push_event/20210501000000_20210530000000_WP4/639_temp.
2021-06-08 17:44:31,975 INFO [Thread-1] utils.Repartitioner : Repartition and rewrite ends. Cost: 19420 ms.
2021-06-08 17:44:31,992 DEBUG [Thread-1] util.HadoopUtil : Use provider:org.apache.kylin.common.storage.DefaultStorageProvider
2021-06-08 17:44:31,992 INFO [Thread-1] job.BuildLayoutWithUpdate : Take job result successful.
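Every write in this log reports 'File Output Committer Algorithm version is 1', the algorithm that renames each task's output one by one at job commit. Hadoop's version 2 moves files as tasks complete, which is usually faster for writes with many output files, at the cost of leaving partial output visible if a job dies mid-commit; whether that trade-off is acceptable for cuboid rewrites is a judgment call. A hypothetical override, again through the kylin.engine.spark-conf.* passthrough:

    # Hypothetical: switch the Hadoop output committer to the v2 algorithm.
    kylin.engine.spark-conf.spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2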
2021-06-08 17:44:32,018 INFO [Thread-1] cube.CubeManager : Updating cube instance 'his_msg_push_event'
2021-06-08 17:44:32,020 DEBUG [Thread-1] cachesync.CachedCrudAssist : Saving CubeInstance at /cube/his_msg_push_event.json
2021-06-08 17:44:32,049 DEBUG [Thread-1] cachesync.Broadcaster : Servers in the cluster: [10.5.145.128:7070, 10.238.6.117:7070, 10.238.6.118:7070]
2021-06-08 17:44:32,058 INFO [Thread-1] server.AbstractConnector : Stopped Spark@4fae798d{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2021-06-08 17:44:32,067 INFO [Thread-1] common.KylinConfig : Loading kylin-defaults.properties from file:/opt/appdata/disk01/app/kylin/lib/kylin-parquet-job-4.0.0-beta.jar!/kylin-defaults.properties
2021-06-08 17:44:32,071 DEBUG [Thread-1] common.KylinConfig : KYLIN_CONF property was not set, will seek KYLIN_HOME env variable
2021-06-08 17:44:32,071 INFO [Thread-1] common.KylinConfig : Use KYLIN_HOME=/opt/appdata/disk01/app/kylin
2021-06-08 17:44:32,071 WARN [Thread-1] common.BackwardCompatibilityConfig : Config 'kylin.query.metrics.enabled' is deprecated, use 'kylin.server.query-metrics-enabled' instead
2021-06-08 17:44:32,072 INFO [Thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.metadata.url.identifier : kylin_metadata
2021-06-08 17:44:32,073 INFO [Thread-1] common.KylinConfigBase : Kylin Config was updated with kylin.log.spark-executor-properties-file : /opt/appdata/disk01/app/kylin/conf/spark-executor-log4j.properties
2021-06-08 17:44:32,073 INFO [Thread-1] common.KylinConfig : Initialized a new KylinConfig from getInstanceFromEnv : 1493643283
2021-06-08 17:44:32,077 DEBUG [Thread-1] cachesync.Broadcaster : Announcing new broadcast to all: BroadcastEvent{entity=cube, event=update, cacheKey=his_msg_push_event}
2021-06-08 17:44:32,078 WARN [Thread-1] nio.NioEventLoop : Selector.select() returned prematurely 512 times in a row; rebuilding Selector io.netty.channel.nio.SelectedSelectionKeySetSelector@1080f73b.
2021-06-08 17:44:32,078 WARN [Thread-1] nio.NioEventLoop : Selector.select() returned prematurely 512 times in a row; rebuilding Selector io.netty.channel.nio.SelectedSelectionKeySetSelector@24385143.
2021-06-08 17:44:32,079 INFO [Thread-1] nio.NioEventLoop : Migrated 2 channel(s) to the new Selector.
2021-06-08 17:44:32,079 INFO [Thread-1] nio.NioEventLoop : Migrated 4 channel(s) to the new Selector.
2021-06-08 17:44:32,095 INFO [Thread-1] application.SparkApplication : ==========================[MERGE CUBE]===============================
auto spark config : {spark.executor.memory=10GB, count_distinct=true, spark.executor.cores=5, spark.executor.memoryOverhead=2GB, spark.executor.instances=5, spark.yarn.queue=kylin, spark.sql.shuffle.partitions=43}
wait time: 0
build time: 3678991
merging segments : []
abnormal layouts (every entry is the message 'Job metrics seems null, use count() to collect cuboid rows.', repeated as many times as shown):
  4 times: cuboids 910, 654, 782, 655, 911
  3 times: cuboids 783, 527, 926, 670, 542, 798, 671, 799, 543, 927
  2 times: cuboids 686, 558, 942, 814, 815, 943, 559, 687, 958, 574, 830, 702, 575, 703, 959, 831, 590, 974, 846, 718, 719, 975, 591, 847, 606, 862, 734, 990, 607, 863, 991, 735, 750, 622, 1006, 878, 879, 751, 623, 1007, 894, 766, 638, 1022, 895, 1023
  1 time: cuboids 767, 639
retry times : 3
job retry infos :
RetryInfo{ overrideConf : {spark.executor.memory=30720MB, spark.executor.memoryOverhead=6144MB}, throwable :
java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeMergeJob
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:92)
    at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: org.apache.spark.SparkException: Job aborted.
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89)
    ... 4 more
Caused by: org.apache.spark.SparkException: Job aborted.
at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70) at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122) at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89) ... 4 more Caused by: org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230) at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567) at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28) at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171) at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59) at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51) at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) ... 
3 more Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 15 in stage 70.0 failed 4 times, most recent failure: Lost task 15.3 in stage 70.0 (TID 4154, umetrip19-hdp2.6-119.travelsky.com, executor 29): ExecutorLostFailure (executor 29 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 23.7 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714. Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167) ... 34 more } RetryInfo{ overrideConf : {spark.executor.memory=38092MB, spark.executor.memoryOverhead=7618MB}, throwable : java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeMergeJob at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:92) at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: org.apache.spark.SparkException: Job aborted. at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70) at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122) at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298) at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89) ... 4 more Caused by: org.apache.spark.SparkException: Job aborted. 
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567)
    at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59)
    at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    ... 3 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 244 in stage 1108.0 failed 4 times, most recent failure: Lost task 244.3 in stage 1108.0 (TID 78736, r4200h1-app.travelsky.com, executor 109): ExecutorLostFailure (executor 109 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 39.0 GB of 36 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
    ... 34 more
}
RetryInfo{ overrideConf : {spark.executor.memory=36618MB, spark.executor.memoryOverhead=7323MB}, throwable :
java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeMergeJob
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:92)
    at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: org.apache.spark.SparkException: Job aborted.
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate.updateLayout(BuildLayoutWithUpdate.java:70)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.mergeSegments(CubeMergeJob.java:122)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.doExecute(CubeMergeJob.java:82)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89)
    ... 4 more
Caused by: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:567)
    at org.apache.kylin.engine.spark.storage.ParquetStorage.saveTo(ParquetStorage.scala:28)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.saveAndUpdateCuboid(CubeMergeJob.java:171)
    at org.apache.kylin.engine.spark.job.CubeMergeJob.access$000(CubeMergeJob.java:59)
    at org.apache.kylin.engine.spark.job.CubeMergeJob$1.build(CubeMergeJob.java:118)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:51)
    at org.apache.kylin.engine.spark.job.BuildLayoutWithUpdate$1.call(BuildLayoutWithUpdate.java:43)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    ... 3 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 428 in stage 360.0 failed 4 times, most recent failure: Lost task 428.3 in stage 360.0 (TID 26130, umetrip40-hdp2.6-140.travelsky.com, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 48.4 GB of 46 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
    ... 34 more
}
==========================[MERGE CUBE]===============================
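Read together, the three RetryInfo records show the failure mode: the job started from the auto-derived spark.executor.memory=10GB / spark.executor.memoryOverhead=2GB, and the retries raised the pair stepwise (30720/6144 MB, 38092/7618 MB, 36618/7323 MB), always keeping the overhead at 20% of the heap, yet every attempt died with the same YARN physical-memory kill (23.7 of 22 GB, 39.0 of 36 GB, 48.4 of 46 GB). Since the overshoot grows with the container, raising the heap alone does not help; a plausible baseline is a larger overhead share and fewer concurrent tasks per executor, set in kylin.properties before the merge rather than left to the retry loop. The values below are hypothetical:

    # Hypothetical starting point: fewer cores per executor means fewer
    # concurrent tasks sharing one executor's off-heap memory, and a roughly
    # 1:2 overhead-to-heap ratio replaces the 1:5 ratio the retries kept using.
    kylin.engine.spark-conf.spark.executor.cores=3
    kylin.engine.spark-conf.spark.executor.memory=20g
    kylin.engine.spark-conf.spark.executor.memoryOverhead=10g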