14/09/12 12:50:08 INFO parse.ParseDriver: Parsing command: select count(*) from u_data
14/09/12 12:50:08 INFO parse.ParseDriver: Parse Completed
14/09/12 12:50:08 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
14/09/12 12:50:08 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis
14/09/12 12:50:08 INFO parse.SemanticAnalyzer: Get metadata for source tables
14/09/12 12:50:08 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=u_data
14/09/12 12:50:08 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=u_data
14/09/12 12:50:08 INFO metastore.HiveMetaStore: 2: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
14/09/12 12:50:08 INFO metastore.ObjectStore: ObjectStore, initialize called
14/09/12 12:50:08 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "".
14/09/12 12:50:08 INFO metastore.ObjectStore: Initialized ObjectStore
14/09/12 12:50:09 INFO parse.SemanticAnalyzer: Get metadata for subqueries
14/09/12 12:50:09 INFO parse.SemanticAnalyzer: Get metadata for destination tables
14/09/12 12:50:09 INFO ql.Context: New scratch dir is hdfs://localhost:8020/tmp/hive-root/ff8a2b47-ad12-4d3c-b6c0-8fdd7ea6226f/hive_2014-09-12_12-50-08_144_3462732450916059086-1
14/09/12 12:50:09 INFO parse.SemanticAnalyzer: Completed getting MetaData in Semantic Analysis
14/09/12 12:50:09 INFO parse.SemanticAnalyzer: Set stats collection dir : hdfs://localhost:8020/tmp/hive-root/ff8a2b47-ad12-4d3c-b6c0-8fdd7ea6226f/hive_2014-09-12_12-50-08_144_3462732450916059086-1/-ext-10002
14/09/12 12:50:09 INFO ppd.OpProcFactory: Processing for FS(7)
14/09/12 12:50:09 INFO ppd.OpProcFactory: Processing for SEL(6)
14/09/12 12:50:09 INFO ppd.OpProcFactory: Processing for GBY(5)
14/09/12 12:50:09 INFO ppd.OpProcFactory: Processing for RS(4)
14/09/12 12:50:09 INFO ppd.OpProcFactory: Processing for GBY(3)
14/09/12 12:50:09 INFO ppd.OpProcFactory: Processing for SEL(2)
14/09/12 12:50:09 INFO ppd.OpProcFactory: Processing for TS(1)
14/09/12 12:50:10 INFO optimizer.ColumnPrunerProcFactory: RS 4 oldColExprMap: {VALUE._col0=Column[_col0]}
14/09/12 12:50:10 INFO optimizer.ColumnPrunerProcFactory: RS 4 newColExprMap: {VALUE._col0=Column[_col0]}
14/09/12 12:50:10 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
14/09/12 12:50:10 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
14/09/12 12:50:10 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
14/09/12 12:50:10 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
14/09/12 12:50:10 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
14/09/12 12:50:10 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
14/09/12 12:50:10 INFO parse.SemanticAnalyzer: Completed plan generation
14/09/12 12:50:10 INFO ql.Driver: Semantic Analysis Completed
14/09/12 12:50:10 INFO exec.ListSinkOperator: Initializing Self OP[8]
14/09/12 12:50:10 INFO exec.ListSinkOperator: Operator 8 OP initialized
14/09/12 12:50:10 INFO exec.ListSinkOperator: Initialization Done 8 OP
14/09/12 12:50:10 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)], properties:null)
14/09/12 12:50:10 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
14/09/12 12:50:10 INFO ql.Driver: Starting command: select count(*) from u_data
14/09/12 12:50:10 INFO ql.Driver: Query ID = root_20140912125050_f1629a95-5344-44aa-be27-3a01d3f5ec2f
14/09/12 12:50:10 INFO ql.Driver: Total jobs = 1
14/09/12 12:50:10 INFO ql.Driver: Launching Job 1 out of 1
14/09/12 12:50:10 INFO exec.Task: Number of reduce tasks determined at compile time: 1
14/09/12 12:50:10 INFO exec.Task: In order to change the average load for a reducer (in bytes):
14/09/12 12:50:10 INFO exec.Task:   set hive.exec.reducers.bytes.per.reducer=&lt;number&gt;
14/09/12 12:50:10 INFO exec.Task: In order to limit the maximum number of reducers:
14/09/12 12:50:10 INFO exec.Task:   set hive.exec.reducers.max=&lt;number&gt;
14/09/12 12:50:10 INFO exec.Task: In order to set a constant number of reducers:
14/09/12 12:50:10 INFO exec.Task:   set mapreduce.job.reduces=&lt;number&gt;
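
The three reducer hints logged above are session-level Hive properties that can be set before re-running the query. A minimal sketch, with illustrative values (not recommendations tuned to this dataset):

```sql
-- Target average input per reducer, in bytes (here ~256 MB; illustrative)
set hive.exec.reducers.bytes.per.reducer=268435456;

-- Upper bound on the number of reducers Hive may allocate
set hive.exec.reducers.max=4;

-- Or pin an exact reducer count, which overrides the two hints above
set mapreduce.job.reduces=2;

select count(*) from u_data;
```

For this query the compile-time decision of one reducer (logged above) is expected regardless: a global `count(*)` aggregates into a single final reducer.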
14/09/12 12:50:10 INFO ql.Context: New scratch dir is hdfs://localhost:8020/tmp/hive-root/ff8a2b47-ad12-4d3c-b6c0-8fdd7ea6226f/hive_2014-09-12_12-50-08_144_3462732450916059086-2
14/09/12 12:50:10 INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
14/09/12 12:50:10 INFO exec.Utilities: Processing alias u_data
14/09/12 12:50:10 INFO exec.Utilities: Adding input file hdfs://localhost:8020/user/hive/warehouse/u_data
14/09/12 12:50:10 INFO exec.Utilities: Content Summary not cached for hdfs://localhost:8020/user/hive/warehouse/u_data
14/09/12 12:50:10 INFO ql.Context: New scratch dir is hdfs://localhost:8020/tmp/hive-root/ff8a2b47-ad12-4d3c-b6c0-8fdd7ea6226f/hive_2014-09-12_12-50-08_144_3462732450916059086-2
14/09/12 12:50:10 INFO exec.Utilities: Serializing MapWork via kryo
14/09/12 12:50:12 INFO Configuration.deprecation: mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
14/09/12 12:50:12 INFO exec.Utilities: Serializing ReduceWork via kryo
14/09/12 12:50:13 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
14/09/12 12:50:13 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
14/09/12 12:50:13 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/09/12 12:50:14 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://localhost:8020/user/hive/warehouse/u_data; using filter path hdfs://localhost:8020/user/hive/warehouse/u_data
14/09/12 12:50:14 INFO input.FileInputFormat: Total input paths to process : 1
14/09/12 12:50:14 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
14/09/12 12:50:14 INFO io.CombineHiveInputFormat: number of splits 1
14/09/12 12:50:14 INFO mapreduce.JobSubmitter: number of splits:1
14/09/12 12:50:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1410323553898_0022
14/09/12 12:50:15 INFO impl.YarnClientImpl: Submitted application application_1410323553898_0022
14/09/12 12:50:15 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1410323553898_0022/
14/09/12 12:50:15 INFO exec.Task: Starting Job = job_1410323553898_0022, Tracking URL = http://localhost:8088/proxy/application_1410323553898_0022/
14/09/12 12:50:15 INFO exec.Task: Kill Command = /dir/hadoop-2.3.0/bin/hadoop job -kill job_1410323553898_0022
14/09/12 12:50:30 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
14/09/12 12:50:30 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/09/12 12:50:30 INFO exec.Task: 2014-09-12 12:50:30,166 Stage-1 map = 0%, reduce = 0%
14/09/12 12:50:35 INFO exec.Task: 2014-09-12 12:50:35,340 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.25 sec
14/09/12 12:50:41 INFO exec.Task: 2014-09-12 12:50:41,545 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.45 sec
14/09/12 12:50:44 INFO exec.Task: MapReduce Total cumulative CPU time: 2 seconds 450 msec
14/09/12 12:50:44 INFO exec.Task: Ended Job = job_1410323553898_0022
14/09/12 12:50:44 INFO exec.FileSinkOperator: Moving tmp dir: hdfs://localhost:8020/tmp/hive-root/ff8a2b47-ad12-4d3c-b6c0-8fdd7ea6226f/hive_2014-09-12_12-50-08_144_3462732450916059086-1/_tmp.-ext-10001 to: hdfs://localhost:8020/tmp/hive-root/ff8a2b47-ad12-4d3c-b6c0-8fdd7ea6226f/hive_2014-09-12_12-50-08_144_3462732450916059086-1/-ext-10001
14/09/12 12:50:44 INFO ql.Driver: MapReduce Jobs Launched:
14/09/12 12:50:44 INFO ql.Driver: Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 2.45 sec HDFS Read: 1979382 HDFS Write: 7 SUCCESS
14/09/12 12:50:44 INFO ql.Driver: Total MapReduce CPU Time Spent: 2 seconds 450 msec
14/09/12 12:50:44 INFO ql.Driver: OK