Task Logs: 'attempt_201405301214_170634_r_000000_0'

stdout logs

stderr logs

log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

syslog logs

2014-06-11 12:32:34,378 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2014-06-11 12:32:34,874 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mnt/volume5/mapred/local/taskTracker/distcache/-384571669260501482_1698352768_208322409/elsharpynn001.prod.hulu.com/hive/tmp/hive-muthu.nivas/hive_2014-06-11_12-32-20_289_5202875887629569617-2/-mr-10004/85d1e52f-a7dd-4c76-91cf-d6270fb347be/map.xml <- /mnt/volume2/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/attempt_201405301214_170634_r_000000_0/work/map.xml
2014-06-11 12:32:34,878 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mnt/volume6/mapred/local/taskTracker/distcache/-486233630769476382_-1591891312_208322449/elsharpynn001.prod.hulu.com/hive/tmp/hive-muthu.nivas/hive_2014-06-11_12-32-20_289_5202875887629569617-2/-mr-10004/85d1e52f-a7dd-4c76-91cf-d6270fb347be/reduce.xml <- /mnt/volume2/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/attempt_201405301214_170634_r_000000_0/work/reduce.xml
2014-06-11 12:32:34,883 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mnt/volume4/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/jars/job.jar <- /mnt/volume2/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/attempt_201405301214_170634_r_000000_0/work/job.jar
2014-06-11 12:32:34,886 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /mnt/volume4/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/jars/.job.jar.crc <-
/mnt/volume2/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/attempt_201405301214_170634_r_000000_0/work/.job.jar.crc
2014-06-11 12:32:34,928 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2014-06-11 12:32:34,929 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=SHUFFLE, sessionId=
2014-06-11 12:32:35,303 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2014-06-11 12:32:35,310 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7449cf9f
2014-06-11 12:32:35,414 INFO org.apache.hadoop.mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapred.ReduceTask$ReduceCopier
2014-06-11 12:32:35,423 INFO org.apache.hadoop.mapred.ReduceTask: ShuffleRamManager: MemoryLimit=835158016, MaxSingleShuffleLimit=208789504
2014-06-11 12:32:35,434 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,435 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,436 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,437 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,439 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,440 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,441 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,442 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,443 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,444 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
[.snappy]
2014-06-11 12:32:35,445 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,447 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,448 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,449 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,451 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,452 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
2014-06-11 12:32:35,457 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201405301214_170634_r_000000_0 Thread started: Thread for merging on-disk files
2014-06-11 12:32:35,457 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201405301214_170634_r_000000_0 Thread started: Thread for merging in memory files
2014-06-11 12:32:35,457 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201405301214_170634_r_000000_0 Thread waiting: Thread for merging on-disk files
2014-06-11 12:32:35,461 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201405301214_170634_r_000000_0 Need another 8 map output(s) where 0 is already in progress
2014-06-11 12:32:35,461 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201405301214_170634_r_000000_0 Thread started: Thread for polling Map Completion Events
2014-06-11 12:32:35,461 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201405301214_170634_r_000000_0 Scheduled 0 outputs (0 slow hosts and0 dup hosts)
2014-06-11 12:32:35,471 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201405301214_170634_r_000000_0 Scheduled 8 outputs (0 slow hosts and0 dup hosts)
2014-06-11 12:32:35,827 INFO org.apache.hadoop.mapred.ReduceTask: GetMapEventsThread exiting
2014-06-11 12:32:35,827 INFO org.apache.hadoop.mapred.ReduceTask: getMapsEventsThread joined.
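Once the eight scheduled map outputs have been fetched, the entries that follow show the shuffle combining the sorted segments into a single run ("Merging 8 sorted segments", "Down to the last merge-pass"). The underlying technique is a k-way merge of already-sorted sequences; as a rough illustrative sketch in Python (not Hadoop's actual Merger API):

```python
import heapq

# Stand-ins for sorted map-output segments: (key, value) pairs, each segment
# already sorted by key, as they would arrive from the shuffle.
segments = [
    [(1, "a"), (4, "d"), (7, "g")],
    [(2, "b"), (5, "e")],
    [(3, "c"), (6, "f")],
]

# heapq.merge streams one globally sorted run out of k sorted runs in a
# single pass -- the same technique behind the "last merge-pass" log line.
merged = list(heapq.merge(*segments))
```

This is why the reducer can consume one sorted key-grouped stream even though the map outputs arrive as separate files.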
2014-06-11 12:32:35,828 INFO org.apache.hadoop.mapred.ReduceTask: Closed ram manager
2014-06-11 12:32:35,828 INFO org.apache.hadoop.mapred.ReduceTask: Interleaved on-disk merge complete: 0 files left.
2014-06-11 12:32:35,828 INFO org.apache.hadoop.mapred.ReduceTask: In-memory merge complete: 8 files left.
2014-06-11 12:32:35,842 INFO org.apache.hadoop.mapred.Merger: Merging 8 sorted segments
2014-06-11 12:32:35,842 INFO org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 8 segments left of total size: 25338711 bytes
2014-06-11 12:32:35,845 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.snappy]
2014-06-11 12:32:36,791 INFO org.apache.hadoop.mapred.ReduceTask: Merged 8 segments, 25338711 bytes to disk to satisfy reduce memory limit
2014-06-11 12:32:36,791 INFO org.apache.hadoop.mapred.ReduceTask: Merging 1 files, 12399088 bytes from disk
2014-06-11 12:32:36,792 INFO org.apache.hadoop.mapred.ReduceTask: Merging 0 segments, 0 bytes from memory into reduce
2014-06-11 12:32:36,792 INFO org.apache.hadoop.mapred.Merger: Merging 1 sorted segments
2014-06-11 12:32:36,796 INFO org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 25338697 bytes
2014-06-11 12:32:36,829 INFO ExecReducer: maximum memory = 1193082880
2014-06-11 12:32:36,829 INFO ExecReducer: conf classpath = [file:/mnt/volume4/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/jars/classes, file:/mnt/volume4/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/jars/, file:/mnt/volume6/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/attempt_201405301214_170634_r_000000_0/]
2014-06-11 12:32:36,830 INFO ExecReducer: thread classpath = [file:/mnt/volume4/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/jars/classes, file:/mnt/volume4/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/jars/job.jar,
file:/mnt/volume2/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/attempt_201405301214_170634_r_000000_0/work/, file:/run/cloudera-scm-agent/process/9294-mapreduce-TASKTRACKER/, file:/usr/lib/jvm/j2sdk1.6-oracle/lib/tools.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/activation-1.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/ant-contrib-1.0b3.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/asm-3.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/avro-1.7.4.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/avro-compiler-1.7.4.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-1.7.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-core-1.8.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-cli-1.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-codec-1.4.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-collections-3.2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-compress-1.4.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-configuration-1.6.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-digester-1.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-el-1.0.jar, 
file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-httpclient-3.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-io-2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-lang-2.5.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-logging-1.1.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-math-2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/commons-net-3.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/guava-11.0.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/hadoop-fairscheduler-2.0.0-mr1-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/hsqldb-1.8.0.10.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jackson-core-asl-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jackson-jaxrs-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jackson-mapper-asl-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jackson-xc-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jasper-compiler-5.5.23.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jasper-runtime-5.5.23.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jaxb-api-2.2.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jaxb-impl-2.2.3-1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jersey-core-1.8.jar, 
file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jersey-json-1.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jersey-server-1.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jets3t-0.6.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jettison-1.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jetty-6.1.26.cloudera.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jetty-util-6.1.26.cloudera.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jline-0.9.94.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jsch-0.1.42.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jsp-api-2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jsr305-1.3.9.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/junit-4.8.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/kfs-0.2.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/kfs-0.3.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/log4j-1.2.17.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/mockito-all-1.8.5.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/paranamer-2.3.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/protobuf-java-2.4.0a.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/servlet-api-2.5.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/slf4j-api-1.6.1.jar, 
file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/snappy-java-1.0.4.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/stax-api-1.0.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/xmlenc-0.52.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/xz-1.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/zookeeper-3.4.5-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jsp-2.1/jsp-2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-0.20-mapreduce/lib/jsp-2.1/jsp-api-2.1.jar, file:/usr/share/cmf/lib/plugins/tt-instrumentation-4.8.0.jar, file:/usr/share/cmf/lib/plugins/event-publish-4.8.0-shaded.jar, file:/usr/share/cmf/lib/plugins/navigator-plugin-4.8.0-shaded.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-cli-1.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-codec-1.4.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-io-2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-lang-2.5.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/xmlenc-0.52.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jsp-api-2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.4.0.jar, 
file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-el-1.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/guava-11.0.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jersey-core-1.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jersey-server-1.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/asm-3.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/log4j-1.2.17.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/servlet-api-2.5.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jline-0.9.94.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/hadoop-hdfs-2.0.0-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/hadoop-hdfs-2.0.0-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/hadoop-hdfs-2.0.0-cdh4.4.0-tests.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-cli-1.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-codec-1.4.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-beanutils-1.7.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jets3t-0.6.1.jar, 
file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-io-2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/junit-4.8.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-httpclient-3.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/mockito-all-1.8.5.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/slf4j-api-1.6.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/hue-plugins-2.5.0-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-lang-2.5.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jsr305-1.3.9.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/snappy-java-1.0.4.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-compress-1.4.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/xmlenc-0.52.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-configuration-1.6.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/kfs-0.3.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jsch-0.1.42.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jasper-compiler-5.5.23.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/xz-1.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/activation-1.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/stax-api-1.0.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jasper-runtime-5.5.23.jar, 
file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jsp-api-2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jersey-json-1.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jettison-1.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/paranamer-2.3.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/zookeeper/zookeeper-3.4.5-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/protobuf-java-2.4.0a.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-el-1.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-digester-1.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jackson-xc-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/guava-11.0.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jersey-core-1.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/avro-1.7.4.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-collections-3.2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jaxb-api-2.2.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jersey-server-1.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-net-3.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/asm-3.2.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar, 
file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-math-2.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-logging-1.1.1.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jackson-core-asl-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/log4j-1.2.17.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/servlet-api-2.5.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jline-0.9.94.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/hadoop-auth-2.0.0-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/hadoop-common-2.0.0-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/hadoop-annotations-2.0.0-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/hadoop-common-2.0.0-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/hadoop-common-2.0.0-cdh4.4.0-tests.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/hadoop-annotations-2.0.0-cdh4.4.0.jar, file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/hadoop-auth-2.0.0-cdh4.4.0.jar, file:/mnt/volume2/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/attempt_201405301214_170634_r_000000_0/work/, file:/opt/cloudera/parcels/HADOOP_LZO-0.4.15-1.gplextras.p0.24/lib/hadoop/lib/hadoop-lzo-cdh4-0.4.15-gplextras.jar, file:/opt/cloudera/parcels/HADOOP_LZO-0.4.15-1.gplextras.p0.24/lib/hadoop/lib/hadoop-lzo-cdh4-0.4.15-gplextras.jar] 2014-06-11 12:32:36,851 INFO org.apache.hadoop.hive.ql.exec.mr.ObjectCache: Ignoring retrieval request: __REDUCE_PLAN__ 2014-06-11 12:32:36,855 INFO org.apache.hadoop.hive.ql.log.PerfLogger: 2014-06-11 12:32:36,855 INFO org.apache.hadoop.hive.ql.exec.Utilities: Deserializing ReduceWork 
via kryo
2014-06-11 12:32:36,985 INFO org.apache.hadoop.hive.ql.log.PerfLogger:
2014-06-11 12:32:36,985 INFO org.apache.hadoop.hive.ql.exec.mr.ObjectCache: Ignoring cache key: __REDUCE_PLAN__
2014-06-11 12:32:37,029 INFO ExecReducer: Id =0 Id =1 Id =2 <\Children> Id = 1 null<\Parent> <\FS> <\Children> Id = 0 null<\Parent> <\SEL> <\Children> <\Parent> <\JOIN>
2014-06-11 12:32:37,029 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: Initializing Self 0 JOIN
2014-06-11 12:32:37,036 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: JOIN struct<_col1:int,_col39:string> totalsz = 2
2014-06-11 12:32:37,036 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: Operator 0 JOIN initialized
2014-06-11 12:32:37,036 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: Initializing children of 0 JOIN
2014-06-11 12:32:37,036 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: Initializing child 1 SEL
2014-06-11 12:32:37,036 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: Initializing Self 1 SEL
2014-06-11 12:32:37,037 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: SELECT struct<_col1:int,_col39:string>
2014-06-11 12:32:37,037 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: Operator 1 SEL initialized
2014-06-11 12:32:37,037 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: Initializing children of 1 SEL
2014-06-11 12:32:37,037 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Initializing child 2 FS
2014-06-11 12:32:37,037 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Initializing Self 2 FS
2014-06-11 12:32:37,054 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Operator 2 FS initialized
2014-06-11 12:32:37,055 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Initialization Done 2 FS
2014-06-11 12:32:37,055 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: Initialization Done 1 SEL
2014-06-11 12:32:37,055 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: Initialization Done 0 JOIN
2014-06-11 12:32:37,058 INFO ExecReducer: ExecReducer:
processing 1 rows: used memory = 91723728
2014-06-11 12:32:37,060 INFO ExecReducer: ExecReducer: processing 10 rows: used memory = 91723728
2014-06-11 12:32:37,065 INFO ExecReducer: ExecReducer: processing 100 rows: used memory = 91723728
2014-06-11 12:32:37,107 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Final Path: FS hdfs://elsharpynn001.prod.hulu.com:8020/hive/tmp/hive-muthu.nivas/hive_2014-06-11_12-32-20_289_5202875887629569617-1/_tmp.-ext-10001/000000_0
2014-06-11 12:32:37,107 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Writing to temp file: FS hdfs://elsharpynn001.prod.hulu.com:8020/hive/tmp/hive-muthu.nivas/hive_2014-06-11_12-32-20_289_5202875887629569617-1/_task_tmp.-ext-10001/_tmp.000000_0
2014-06-11 12:32:37,107 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: New Final Path: FS hdfs://elsharpynn001.prod.hulu.com:8020/hive/tmp/hive-muthu.nivas/hive_2014-06-11_12-32-20_289_5202875887629569617-1/_tmp.-ext-10001/000000_0
2014-06-11 12:32:37,182 INFO ExecReducer: ExecReducer: processing 1000 rows: used memory = 94273224
2014-06-11 12:32:37,592 INFO ExecReducer: ExecReducer: processing 10000 rows: used memory = 98437952
2014-06-11 12:32:37,983 INFO ExecReducer: ExecReducer: processing 100000 rows: used memory = 15561096
2014-06-11 12:32:38,480 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [526442]
2014-06-11 12:32:38,481 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [526442]
2014-06-11 12:32:38,485 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 4000 rows for join key [526442]
2014-06-11 12:32:38,494 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [527608]
2014-06-11 12:32:38,496 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [527608]
2014-06-11 12:32:38,514 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for
join key [533476]
2014-06-11 12:32:38,849 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [647728]
2014-06-11 12:32:38,871 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [650930]
2014-06-11 12:32:38,896 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [654138]
2014-06-11 12:32:38,901 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [654398]
2014-06-11 12:32:38,902 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [654398]
2014-06-11 12:32:38,905 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [654442]
2014-06-11 12:32:38,909 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [654528]
2014-06-11 12:32:38,911 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [654534]
2014-06-11 12:32:38,913 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [654842]
2014-06-11 12:32:38,914 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [654842]
2014-06-11 12:32:38,932 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [656670]
2014-06-11 12:32:38,934 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [656670]
2014-06-11 12:32:38,939 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [657188]
2014-06-11 12:32:38,946 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [657788]
2014-06-11 12:32:38,947 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [657788]
2014-06-11 12:32:38,954 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key
[657938]
2014-06-11 12:32:38,965 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [659466]
2014-06-11 12:32:38,967 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [659466]
2014-06-11 12:32:38,973 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [659876]
2014-06-11 12:32:38,976 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [660198]
2014-06-11 12:32:38,978 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [660208]
2014-06-11 12:32:38,979 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [660208]
2014-06-11 12:32:38,991 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [661058]
2014-06-11 12:32:39,004 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [661920]
2014-06-11 12:32:39,005 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [661920]
2014-06-11 12:32:39,007 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 4000 rows for join key [661920]
2014-06-11 12:32:39,015 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [662048]
2014-06-11 12:32:39,016 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [662048]
2014-06-11 12:32:39,020 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [662500]
2014-06-11 12:32:39,023 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [662534]
2014-06-11 12:32:39,024 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [662534]
2014-06-11 12:32:39,032 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [663110]
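The counts above grow as 1000, 2000, 4000, ... because the join operator buffers every row that shares one join key and logs each time the buffer crosses a doubling threshold; the join then emits the cross product of the buffered sides, and oversized key groups spill to a RowContainer temp file. A hedged sketch of that per-key behavior (illustrative function and names only, not Hive's actual classes):

```python
def join_one_key(left_rows, right_rows, log_threshold=1000):
    """Buffer one side of a join key, logging as the buffer doubles,
    then emit the cross product against the other side."""
    buffered = []
    next_log = log_threshold
    for row in left_rows:
        buffered.append(row)
        if len(buffered) == next_log:
            print(f"table 0 has {next_log} rows for this join key")
            next_log *= 2  # matches the 1000 -> 2000 -> 4000 pattern above
    # Skewed keys make both this buffer and the cross product explode.
    return [(l, r) for l in buffered for r in right_rows]
```

Keys like [663184], which reach 16000 buffered rows, are exactly the skewed groups that force the spill seen in the next entry.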
2014-06-11 12:32:39,033 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [663110]
2014-06-11 12:32:39,036 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 1000 rows for join key [663184]
2014-06-11 12:32:39,037 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 2000 rows for join key [663184]
2014-06-11 12:32:39,039 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 4000 rows for join key [663184]
2014-06-11 12:32:39,043 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 8000 rows for join key [663184]
2014-06-11 12:32:39,051 INFO org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 16000 rows for join key [663184]
2014-06-11 12:32:39,061 INFO org.apache.hadoop.hive.ql.exec.persistence.RowContainer: RowContainer created temp file /mnt/volume2/mapred/local/taskTracker/muthu.nivas/jobcache/job_201405301214_170634/attempt_201405301214_170634_r_000000_0/work/tmp/hive-rowcontainer413460656723947992/RowContainer1053550561043043830.tmp
2014-06-11 12:32:39,237 INFO org.apache.hadoop.mapred.FileInputFormat: Total input paths to process : 2
2014-06-11 12:32:39,295 FATAL ExecReducer: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: hdfs://elsharpynn001.prod.hulu.com:8020/hive/warehouse/video/video_20140611071209 not a SequenceFile
	at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.first(RowContainer.java:237)
	at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.first(RowContainer.java:74)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:644)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:758)
	at org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
	at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:216)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.IOException: hdfs://elsharpynn001.prod.hulu.com:8020/hive/warehouse/video/video_20140611071209 not a SequenceFile
	at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1805)
	at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1765)
	at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1714)
	at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1728)
	at org.apache.hadoop.mapred.SequenceFileRecordReader.<init>(SequenceFileRecordReader.java:43)
	at org.apache.hadoop.mapred.SequenceFileInputFormat.getRecordReader(SequenceFileInputFormat.java:59)
	at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.first(RowContainer.java:226)
	... 12 more
2014-06-11 12:32:39,298 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2014-06-11 12:32:39,299 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: hdfs://elsharpynn001.prod.hulu.com:8020/hive/warehouse/video/video_20140611071209 not a SequenceFile
	at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:283)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: hdfs://elsharpynn001.prod.hulu.com:8020/hive/warehouse/video/video_20140611071209 not a SequenceFile
	at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.first(RowContainer.java:237)
	at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.first(RowContainer.java:74)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:644)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:758)
	at org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
	at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:216)
	...
7 more Caused by: java.io.IOException: hdfs://elsharpynn001.prod.hulu.com:8020/hive/warehouse/video/video_20140611071209 not a SequenceFile at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1805) at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1765) at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1714) at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1728) at org.apache.hadoop.mapred.SequenceFileRecordReader.(SequenceFileRecordReader.java:43) at org.apache.hadoop.mapred.SequenceFileInputFormat.getRecordReader(SequenceFileInputFormat.java:59) at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.first(RowContainer.java:226) ... 12 more 2014-06-11 12:32:39,302 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
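The "not a SequenceFile" IOException in the trace above is raised when a file's header does not begin with the SequenceFile magic bytes: a Hadoop SequenceFile starts with the ASCII bytes "SEQ" followed by a one-byte version number. A minimal sketch of a local sanity check, assuming the suspect partition file has first been copied out of HDFS (e.g. with `hdfs dfs -get`) — the path used here is a placeholder, not taken from the log:

```python
def looks_like_sequencefile(path):
    """Return True if the file starts with the SequenceFile magic 'SEQ'."""
    with open(path, "rb") as f:
        return f.read(3) == b"SEQ"

# Example (hypothetical local copy of the partition file):
# looks_like_sequencefile("video_20140611071209")
```

If this returns False, the file was written in some other format (plain text, RCFile, etc.) even though the table or the join's spill path expects SequenceFile input.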