Details
- Type: Test
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version: v1.6.0
- Component/s: None
- Labels: None
- Environment: CDH 5.8 + Kylin 1.6 + Kafka 0.10 + JDK 1.8
- Flags: Important
Description
Hi engineers,
I suspect there is a bug here. It always raises the error below. I kept my Kafka server and producer running as in the tutorial. Please take a look, thanks!
Command:
curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '
{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://192.168.0.103:7070/kylin/api/cubes/StreamingCube25/build2
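For reference, the request body spans the whole Kafka offset range. A minimal Python sketch (illustrative only, not Kylin code) showing that the end offset in the payload is Java's Long.MAX_VALUE, which the streaming build effectively treats as "up to the latest available offset":

```python
# Illustrative sketch: the build2 payload covers the full offset range.
# 9223372036854775807 is Java's Long.MAX_VALUE (2**63 - 1).
JAVA_LONG_MAX = 2**63 - 1

payload = {
    "sourceOffsetStart": 0,
    "sourceOffsetEnd": JAVA_LONG_MAX,  # i.e. "consume to the latest offset"
    "buildType": "BUILD",
}

assert payload["sourceOffsetEnd"] == 9223372036854775807
```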
Log and error :
[root@quickstart logs]# tail -f kylin.log
2016-12-13 05:39:18,648 INFO [http-bio-7070-exec-2] kafka.KafkaConfigManager:211 : Reloading Kafka Metadata from folder kylin_metadata(key='/kafka')@kylin_metadata@hbase
2016-12-13 05:39:18,686 DEBUG [http-bio-7070-exec-2] kafka.KafkaConfigManager:236 : Loaded 2 KafkaConfig(s)
2016-12-13 05:39:20,886 DEBUG [http-bio-7070-exec-2] controller.UserController:64 : authentication.getPrincipal() is org.springframework.security.core.userdetails.User@3b40b2f: Username: ADMIN; Password: [PROTECTED]; Enabled: true; AccountNonExpired: true; credentialsNonExpired: true; AccountNonLocked: true; Granted Authorities: ROLE_ADMIN,ROLE_ANALYST,ROLE_MODELER
2016-12-13 05:39:20,948 DEBUG [http-bio-7070-exec-9] badquery.BadQueryHistoryManager:84 : Loaded 0 Bad Query(s)
2016-12-13 05:39:24,684 DEBUG [http-bio-7070-exec-6] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c-00
2016-12-13 05:39:24,787 DEBUG [http-bio-7070-exec-6] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c-00 from oldTs: 1481635421490, to newTs: 1481636364684, operation result: true
2016-12-13 05:39:24,787 INFO [http-bio-7070-exec-6] manager.ExecutableManager:292 : job id:093a2093-9d9d-4f61-805b-2cf640e2763c-00 from ERROR to READY
2016-12-13 05:39:24,790 DEBUG [http-bio-7070-exec-6] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c
2016-12-13 05:39:24,794 DEBUG [http-bio-7070-exec-6] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c from oldTs: 1481635421549, to newTs: 1481636364790, operation result: true
2016-12-13 05:39:24,794 INFO [http-bio-7070-exec-6] manager.ExecutableManager:292 : job id:093a2093-9d9d-4f61-805b-2cf640e2763c from ERROR to READY
2016-12-13 05:40:16,225 INFO [pool-8-thread-1] threadpool.DefaultScheduler:108 : CubingJob prepare to schedule
2016-12-13 05:40:16,227 INFO [pool-8-thread-1] threadpool.DefaultScheduler:112 : CubingJob scheduled
2016-12-13 05:40:16,228 INFO [pool-9-thread-1] execution.AbstractExecutable:99 : Executing AbstractExecutable (StreamingCube25 - 0_36517 - BUILD - GMT+08:00 2016-12-13 20:56:32)
2016-12-13 05:40:16,234 INFO [pool-8-thread-1] threadpool.DefaultScheduler:118 : Job Fetcher: 0 should running, 1 actual running, 1 ready, 0 already succeed, 3 error, 0 discarded, 0 others
2016-12-13 05:40:16,237 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c
2016-12-13 05:40:16,242 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c from oldTs: 1481636364790, to newTs: 1481636416237, operation result: true
2016-12-13 05:40:16,243 INFO [pool-9-thread-1] manager.ExecutableManager:292 : job id:093a2093-9d9d-4f61-805b-2cf640e2763c from READY to RUNNING
2016-12-13 05:40:16,250 INFO [pool-9-thread-1] execution.AbstractExecutable:99 : Executing AbstractExecutable (Save data from Kafka)
2016-12-13 05:40:16,520 INFO [pool-9-thread-1] client.RMProxy:98 : Connecting to ResourceManager at quickstart.cloudera/192.168.0.103:8032
2016-12-13 05:40:17,459 INFO [pool-9-thread-1] mapred.ClientServiceDelegate:168 : Could not get Job info from RM for job job_1481632591096_0002. Redirecting to job history server.
2016-12-13 05:40:18,610 INFO [pool-9-thread-1] mapred.ClientServiceDelegate:168 : Could not get Job info from RM for job job_1481632591096_0002. Redirecting to job history server.
2016-12-13 05:40:18,616 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c-00
2016-12-13 05:40:18,622 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c-00 from oldTs: 1481636364684, to newTs: 1481636418616, operation result: true
2016-12-13 05:40:18,623 INFO [pool-9-thread-1] manager.ExecutableManager:292 : job id:093a2093-9d9d-4f61-805b-2cf640e2763c-00 from READY to RUNNING
2016-12-13 05:40:18,786 INFO [pool-9-thread-1] common.MapReduceExecutable:112 : parameters of the MapReduceExecutable:
2016-12-13 05:40:18,789 INFO [pool-9-thread-1] common.MapReduceExecutable:113 : -conf /opt/kylin/conf/kylin_job_conf.xml -cubename StreamingCube25 -output /kylin/kylin_metadata/kylin-093a2093-9d9d-4f61-805b-2cf640e2763c/kylin_intermediate_StreamingCube25_26368213_e3fe_4eb4_a708_6a32ea0a980e -segmentid 26368213-e3fe-4eb4-a708-6a32ea0a980e -jobname Kylin_Save_Kafka_Data_StreamingCube25_Step
2016-12-13 05:40:19,132 INFO [pool-9-thread-1] hadoop.KafkaFlatTableJob:88 : Starting: Kylin_Save_Kafka_Data_StreamingCube25_Step
2016-12-13 05:40:19,133 INFO [pool-9-thread-1] common.AbstractHadoopJob:163 : append job jar: /opt/kylin/lib/kylin-job-1.6.0.jar
2016-12-13 05:40:19,133 INFO [pool-9-thread-1] common.AbstractHadoopJob:171 : append kylin.hbase.dependency: /usr/lib/hbase/bin/../lib/hbase-common-1.2.0-cdh5.8.0.jar to mapreduce.application.classpath
2016-12-13 05:40:19,133 INFO [pool-9-thread-1] common.AbstractHadoopJob:188 : Hadoop job classpath is: $HADOOP_MAPRED_HOME/,$HADOOP_MAPRED_HOME/lib/,$MR2_CLASSPATH,/usr/lib/hbase/bin/../lib/hbase-common-1.2.0-cdh5.8.0.jar
2016-12-13 05:40:19,133 INFO [pool-9-thread-1] common.AbstractHadoopJob:200 : Hive Dependencies Before Filtered: /usr/lib/hive/conf,/usr/lib/hive/lib/high-scale-lib-1.1.1.jar,/usr/lib/hive/lib/tempus-fugit-1.1.jar,/usr/lib/hive/lib/eigenbase-properties-1.1.4.jar,/usr/lib/hive/lib/asm-commons-3.1.jar,/usr/lib/hive/lib/findbugs-annotations-1.3.9-1.jar,/usr/lib/hive/lib/parquet-hadoop-bundle.jar,/usr/lib/hive/lib/commons-collections-3.2.2.jar,/usr/lib/hive/lib/hbase-annotations.jar,/usr/lib/hive/lib/datanucleus-rdbms-3.2.9.jar,/usr/lib/hive/lib/ant-1.9.1.jar,/usr/lib/hive/lib/hbase-hadoop-compat.jar,/usr/lib/hive/lib/commons-cli-1.2.jar,/usr/lib/hive/lib/activation-1.1.jar,/usr/lib/hive/lib/commons-pool-1.5.4.jar,/usr/lib/hive/lib/hive-hbase-handler.jar,/usr/lib/hive/lib/libfb303-0.9.3.jar,/usr/lib/hive/lib/commons-beanutils-core-1.8.0.jar,/usr/lib/hive/lib/hive-service.jar,/usr/lib/hive/lib/hive-metastore-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/oro-2.0.8.jar,/usr/lib/hive/lib/jackson-jaxrs-1.9.2.jar,/usr/lib/hive/lib/hive-jdbc.jar,/usr/lib/hive/lib/hive-shims-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/commons-lang-2.6.jar,/usr/lib/hive/lib/avro.jar,/usr/lib/hive/lib/jta-1.1.jar,/usr/lib/hive/lib/hive-shims-common-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/guava-14.0.1.jar,/usr/lib/hive/lib/jpam-1.1.jar,/usr/lib/hive/lib/maven-scm-api-1.4.jar,/usr/lib/hive/lib/geronimo-jta_1.1_spec-1.1.1.jar,/usr/lib/hive/lib/hive-common-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/datanucleus-api-jdo-3.2.6.jar,/usr/lib/hive/lib/commons-codec-1.4.jar,/usr/lib/hive/lib/mail-1.4.1.jar,/usr/lib/hive/lib/hive-hbase-handler-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/commons-compress-1.4.1.jar,/usr/lib/hive/lib/jackson-databind-2.2.2.jar,/usr/lib/hive/lib/accumulo-core-1.6.0.jar,/usr/lib/hive/lib/hive-shims-0.23.jar,/usr/lib/hive/lib/hive-shims-0.23-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/hive-testutils.jar,/usr/lib/hive/lib/hive-shims.jar,/usr/lib/hive/lib/commons-el-1.0.jar,/usr/lib/hive/lib/groovy-all-2.4.4.jar,/usr/lib
/hive/lib/junit-4.11.jar,/usr/lib/hive/lib/jackson-annotations-2.2.2.jar,/usr/lib/hive/lib/commons-beanutils-1.7.0.jar,/usr/lib/hive/lib/hbase-protocol.jar,/usr/lib/hive/lib/velocity-1.5.jar,/usr/lib/hive/lib/metrics-jvm-3.0.2.jar,/usr/lib/hive/lib/hive-exec.jar,/usr/lib/hive/lib/commons-digester-1.8.jar,/usr/lib/hive/lib/hive-hwi.jar,/usr/lib/hive/lib/metrics-core-3.0.2.jar,/usr/lib/hive/lib/hamcrest-core-1.1.jar,/usr/lib/hive/lib/jersey-server-1.14.jar,/usr/lib/hive/lib/hive-accumulo-handler-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/antlr-runtime-3.4.jar,/usr/lib/hive/lib/xz-1.0.jar,/usr/lib/hive/lib/opencsv-2.3.jar,/usr/lib/hive/lib/snappy-java-1.0.4.1.jar,/usr/lib/hive/lib/hive-shims-scheduler.jar,/usr/lib/hive/lib/hive-common.jar,/usr/lib/hive/lib/geronimo-jaspic_1.0_spec-1.0.jar,/usr/lib/hive/lib/commons-math-2.1.jar,/usr/lib/hive/lib/jackson-xc-1.9.2.jar,/usr/lib/hive/lib/hive-cli.jar,/usr/lib/hive/lib/hive-hwi-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/mysql-connector-java.jar,/usr/lib/hive/lib/commons-logging-1.1.3.jar,/usr/lib/hive/lib/paranamer-2.3.jar,/usr/lib/hive/lib/jsr305-3.0.0.jar,/usr/lib/hive/lib/apache-log4j-extras-1.2.17.jar,/usr/lib/hive/lib/accumulo-fate-1.6.0.jar,/usr/lib/hive/lib/hive-serde-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/stringtemplate-3.2.1.jar,/usr/lib/hive/lib/maven-scm-provider-svnexe-1.4.jar,/usr/lib/hive/lib/hive-metastore.jar,/usr/lib/hive/lib/curator-recipes-2.6.0.jar,/usr/lib/hive/lib/antlr-2.7.7.jar,/usr/lib/hive/lib/logredactor-1.0.3.jar,/usr/lib/hive/lib/hive-contrib-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/hbase-hadoop2-compat.jar,/usr/lib/hive/lib/jamon-runtime-2.3.1.jar,/usr/lib/hive/lib/libthrift-0.9.3.jar,/usr/lib/hive/lib/ant-launcher-1.9.1.jar,/usr/lib/hive/lib/janino-2.7.6.jar,/usr/lib/hive/lib/datanucleus-core-3.2.10.jar,/usr/lib/hive/lib/hive-beeline.jar,/usr/lib/hive/lib/joda-time-1.6.jar,/usr/lib/hive/lib/asm-3.2.jar,/usr/lib/hive/lib/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar,/usr/lib/hive/lib/curator-framework-2.6.0.jar,/
usr/lib/hive/lib/hive-jdbc-standalone.jar,/usr/lib/hive/lib/derby-10.11.1.1.jar,/usr/lib/hive/lib/httpcore-4.2.5.jar,/usr/lib/hive/lib/httpclient-4.2.5.jar,/usr/lib/hive/lib/accumulo-trace-1.6.0.jar,/usr/lib/hive/lib/hive-ant.jar,/usr/lib/hive/lib/mockito-all-1.9.5.jar,/usr/lib/hive/lib/asm-tree-3.1.jar,/usr/lib/hive/lib/log4j-1.2.16.jar,/usr/lib/hive/lib/jasper-compiler-5.5.23.jar,/usr/lib/hive/lib/plexus-utils-1.5.6.jar,/usr/lib/hive/lib/hbase-common.jar,/usr/lib/hive/lib/jetty-all-7.6.0.v20120127.jar,/usr/lib/hive/lib/jetty-all-server-7.6.0.v20120127.jar,/usr/lib/hive/lib/curator-client-2.6.0.jar,/usr/lib/hive/lib/htrace-core.jar,/usr/lib/hive/lib/accumulo-start-1.6.0.jar,/usr/lib/hive/lib/hbase-client.jar,/usr/lib/hive/lib/jsp-api-2.1.jar,/usr/lib/hive/lib/hive-shims-scheduler-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/hive-exec-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/hive-accumulo-handler.jar,/usr/lib/hive/lib/jdo-api-3.0.1.jar,/usr/lib/hive/lib/commons-vfs2-2.0.jar,/usr/lib/hive/lib/commons-io-2.4.jar,/usr/lib/hive/lib/commons-dbcp-1.4.jar,/usr/lib/hive/lib/hive-ant-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/servlet-api-2.5.jar,/usr/lib/hive/lib/jackson-core-2.2.2.jar,/usr/lib/hive/lib/gson-2.2.4.jar,/usr/lib/hive/lib/hive-jdbc-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/hive-beeline-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/commons-compiler-2.7.6.jar,/usr/lib/hive/lib/metrics-json-3.0.2.jar,/usr/lib/hive/lib/regexp-1.3.jar,/usr/lib/hive/lib/jasper-runtime-5.5.23.jar,/usr/lib/hive/lib/hive-serde.jar,/usr/lib/hive/lib/commons-configuration-1.6.jar,/usr/lib/hive/lib/zookeeper.jar,/usr/lib/hive/lib/jersey-servlet-1.14.jar,/usr/lib/hive/lib/bonecp-0.8.0.RELEASE.jar,/usr/lib/hive/lib/hive-cli-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/hive-jdbc-1.1.0-cdh5.8.0-standalone.jar,/usr/lib/hive/lib/stax-api-1.0.1.jar,/usr/lib/hive/lib/jcommander-1.32.jar,/usr/lib/hive/lib/maven-scm-provider-svn-commons-1.4.jar,/usr/lib/hive/lib/hive-shims-common.jar,/usr/lib/hive/lib/ST4-4.0.4.jar,/usr/lib/hive/lib/hive-
contrib.jar,/usr/lib/hive/lib/geronimo-annotation_1.0_spec-1.1.1.jar,/usr/lib/hive/lib/hive-service-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/hbase-server.jar,/usr/lib/hive/lib/super-csv-2.2.0.jar,/usr/lib/hive/lib/jline-2.12.jar,/usr/lib/hive/lib/commons-httpclient-3.0.1.jar,/usr/lib/hive/lib/hive-testutils-1.1.0-cdh5.8.0.jar,/usr/lib/hive-hcatalog/share/hcatalog/hive-hcatalog-core-1.1.0-cdh5.8.0.jar
2016-12-13 05:40:19,163 INFO [pool-9-thread-1] common.AbstractHadoopJob:202 : Hive Dependencies After Filtered: /usr/lib/hive/lib/hive-metastore-1.1.0-cdh5.8.0.jar,/usr/lib/hive/lib/hive-exec-1.1.0-cdh5.8.0.jar,/usr/lib/hive-hcatalog/share/hcatalog/hive-hcatalog-core-1.1.0-cdh5.8.0.jar
2016-12-13 05:40:19,163 INFO [pool-9-thread-1] common.AbstractHadoopJob:230 : Kafka Dependencies: /opt/kafka_2.11-0.10.1.0/libs/kafka-clients-0.10.1.0.jar
2016-12-13 05:40:19,258 INFO [pool-9-thread-1] common.AbstractHadoopJob:358 : Job 'tmpjars' updated – file:/usr/lib/hive/lib/hive-metastore-1.1.0-cdh5.8.0.jar,file:/usr/lib/hive/lib/hive-exec-1.1.0-cdh5.8.0.jar,file:/usr/lib/hive-hcatalog/share/hcatalog/hive-hcatalog-core-1.1.0-cdh5.8.0.jar,file:/opt/kafka_2.11-0.10.1.0/libs/kafka-clients-0.10.1.0.jar
2016-12-13 05:40:19,261 INFO [pool-9-thread-1] engine.JobEngineConfig:95 : Chosen job conf is : /opt/kylin/conf/kylin_job_conf.xml
2016-12-13 05:40:19,326 INFO [pool-9-thread-1] root:114 : Output hdfs location: /kylin/kylin_metadata/kylin-093a2093-9d9d-4f61-805b-2cf640e2763c/kylin_intermediate_StreamingCube25_26368213_e3fe_4eb4_a708_6a32ea0a980e
2016-12-13 05:40:19,326 INFO [pool-9-thread-1] root:115 : Output hdfs compression: true
2016-12-13 05:40:19,348 INFO [pool-9-thread-1] common.KylinConfig:120 : The URI /opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta is recognized as LOCAL_FOLDER
2016-12-13 05:40:19,349 INFO [pool-9-thread-1] common.KylinConfig:267 : New KylinConfig 706233382
2016-12-13 05:40:19,349 INFO [pool-9-thread-1] common.KylinConfigBase:130 : Kylin Config was updated with kylin.metadata.url : /opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta
2016-12-13 05:40:19,349 INFO [pool-9-thread-1] persistence.ResourceStore:80 : Using metadata url /opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta for resource store
2016-12-13 05:40:19,354 DEBUG [pool-9-thread-1] persistence.ResourceStore:207 : Directly saving resource /cube/StreamingCube25.json (Store /opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta)
2016-12-13 05:40:19,358 DEBUG [pool-9-thread-1] persistence.ResourceStore:207 : Directly saving resource /model_desc/StreamingModel25.json (Store /opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta)
2016-12-13 05:40:19,361 DEBUG [pool-9-thread-1] persistence.ResourceStore:207 : Directly saving resource /cube_desc/StreamingCube25.json (Store /opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta)
2016-12-13 05:40:19,363 DEBUG [pool-9-thread-1] persistence.ResourceStore:207 : Directly saving resource /table/DEFAULT.STREAMINGTB25.json (Store /opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta)
2016-12-13 05:40:19,368 DEBUG [pool-9-thread-1] persistence.ResourceStore:207 : Directly saving resource /kafka/DEFAULT.STREAMINGTB25.json (Store /opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta)
2016-12-13 05:40:19,374 DEBUG [pool-9-thread-1] persistence.ResourceStore:207 : Directly saving resource /streaming/DEFAULT.STREAMINGTB25.json (Store /opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta)
2016-12-13 05:40:19,375 INFO [pool-9-thread-1] common.AbstractHadoopJob:499 : HDFS meta dir is: file:///opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta
2016-12-13 05:40:19,375 INFO [pool-9-thread-1] common.AbstractHadoopJob:372 : Job 'tmpfiles' updated – file:///opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta
2016-12-13 05:40:19,438 INFO [pool-9-thread-1] client.RMProxy:98 : Connecting to ResourceManager at quickstart.cloudera/192.168.0.103:8032
2016-12-13 05:40:20,159 WARN [DataStreamer for file /user/root/.staging/job_1481636141061_0001/files/meta/cube_desc/StreamingCube25.json] hdfs.DFSClient:864 : Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1249)
at java.lang.Thread.join(Thread.java:1323)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:862)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:600)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:789)
2016-12-13 05:40:20,194 WARN [DataStreamer for file /user/root/.staging/job_1481636141061_0001/files/meta/table/DEFAULT.STREAMINGTB25.json] hdfs.DFSClient:864 : Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1249)
at java.lang.Thread.join(Thread.java:1323)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:862)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:600)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:789)
2016-12-13 05:40:20,925 INFO [pool-9-thread-1] consumer.ConsumerConfig:180 : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [192.168.0.103:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = StreamingCube25
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 30000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2016-12-13 05:40:20,941 INFO [pool-9-thread-1] consumer.ConsumerConfig:180 : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [192.168.0.103:9092]
check.crcs = true
client.id = consumer-1
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = StreamingCube25
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 30000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2016-12-13 05:40:21,037 INFO [pool-9-thread-1] utils.AppInfoParser:83 : Kafka version : 0.10.1.0
2016-12-13 05:40:21,038 INFO [pool-9-thread-1] utils.AppInfoParser:84 : Kafka commitId : 3402a74efb23d1d4
2016-12-13 05:40:21,703 INFO [pool-9-thread-1] mapreduce.JobSubmitter:202 : number of splits:3
2016-12-13 05:40:21,836 INFO [pool-9-thread-1] mapreduce.JobSubmitter:291 : Submitting tokens for job: job_1481636141061_0001
2016-12-13 05:40:22,454 INFO [pool-9-thread-1] impl.YarnClientImpl:260 : Submitted application application_1481636141061_0001
2016-12-13 05:40:22,468 INFO [pool-9-thread-1] mapreduce.Job:1311 : The url to track the job: http://quickstart.cloudera:8088/proxy/application_1481636141061_0001/
2016-12-13 05:40:22,468 INFO [pool-9-thread-1] common.AbstractHadoopJob:506 : tempMetaFileString is : file:///opt/kylin/bin/../tomcat/temp/kylin_job_meta8308167523200917443/meta
2016-12-13 05:40:22,491 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c-00
2016-12-13 05:40:22,496 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c-00 from oldTs: 1481636418616, to newTs: 1481636422491, operation result: true
2016-12-13 05:40:32,517 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c-00
2016-12-13 05:40:32,528 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c-00 from oldTs: 1481636422491, to newTs: 1481636432517, operation result: true
2016-12-13 05:40:42,544 INFO [pool-9-thread-1] mapred.ClientServiceDelegate:277 : Application state is completed. FinalApplicationStatus=FAILED. Redirecting to job history server
2016-12-13 05:40:42,684 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c-00
2016-12-13 05:40:42,690 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c-00 from oldTs: 1481636432517, to newTs: 1481636442684, operation result: true
2016-12-13 05:40:42,693 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c-00
2016-12-13 05:40:42,699 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c-00 from oldTs: 1481636442684, to newTs: 1481636442694, operation result: true
2016-12-13 05:40:42,798 WARN [pool-9-thread-1] common.HadoopCmdOutput:89 : no counters for job job_1481636141061_0001
2016-12-13 05:40:42,800 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c-00
2016-12-13 05:40:42,815 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c-00 from oldTs: 1481636442694, to newTs: 1481636442800, operation result: true
2016-12-13 05:40:42,819 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c-00
2016-12-13 05:40:42,825 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c-00 from oldTs: 1481636442800, to newTs: 1481636442820, operation result: true
2016-12-13 05:40:42,833 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c-00
2016-12-13 05:40:42,838 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c-00 from oldTs: 1481636442820, to newTs: 1481636442833, operation result: true
2016-12-13 05:40:42,838 INFO [pool-9-thread-1] manager.ExecutableManager:292 : job id:093a2093-9d9d-4f61-805b-2cf640e2763c-00 from RUNNING to ERROR
2016-12-13 05:40:42,846 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c
2016-12-13 05:40:42,852 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c from oldTs: 1481636416237, to newTs: 1481636442846, operation result: true
2016-12-13 05:40:42,860 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c
2016-12-13 05:40:42,867 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c from oldTs: 1481636442846, to newTs: 1481636442860, operation result: true
2016-12-13 05:40:42,870 DEBUG [pool-9-thread-1] dao.ExecutableDao:210 : updating job output, id: 093a2093-9d9d-4f61-805b-2cf640e2763c
2016-12-13 05:40:42,877 DEBUG [pool-9-thread-1] hbase.HBaseResourceStore:262 : Update row /execute_output/093a2093-9d9d-4f61-805b-2cf640e2763c from oldTs: 1481636442860, to newTs: 1481636442870, operation result: true
2016-12-13 05:40:42,877 INFO [pool-9-thread-1] manager.ExecutableManager:292 : job id:093a2093-9d9d-4f61-805b-2cf640e2763c from RUNNING to ERROR
2016-12-13 05:40:42,878 WARN [pool-9-thread-1] execution.AbstractExecutable:247 : no need to send email, user list is empty
2016-12-13 05:40:42,905 INFO [pool-8-thread-1] threadpool.DefaultScheduler:118 : Job Fetcher: 0 should running, 0 actual running, 0 ready, 0 already succeed, 4 error, 0 discarded, 0 others
2016-12-13 05:41:16,213 INFO [pool-8-thread-1] threadpool.DefaultScheduler:118 : Job Fetcher: 0 should running, 0 actual running, 0 ready, 0 already succeed, 4 error, 0 discarded, 0 others
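The scheduler log above only shows the job flipping from RUNNING to ERROR (FinalApplicationStatus=FAILED, "no counters for job"), so the actual root cause has to be pulled from the failed MapReduce application's container logs. A minimal sketch, assuming YARN log aggregation is enabled and the yarn CLI is on the sandbox PATH (the helper function name is hypothetical):

```python
import shlex

def yarn_logs_cmd(application_id: str) -> str:
    """Build the `yarn logs` invocation that dumps a finished
    application's aggregated container logs, where the real error
    from the failed mappers (e.g. a Kafka classpath problem) would appear."""
    return shlex.join(["yarn", "logs", "-applicationId", application_id])

cmd = yarn_logs_cmd("application_1481636141061_0001")
print(cmd)  # yarn logs -applicationId application_1481636141061_0001
```

Run the printed command on the sandbox (e.g. via `subprocess.run(shlex.split(cmd))`) to see the task-level stack trace.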