0: jdbc:hive2://nc-h04:10000/casino> select distinct id from foo7;
14/09/30 22:16:20 DEBUG parse.VariableSubstitution: Substitution is on: select distinct id from foo7
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 DEBUG parse.VariableSubstitution: Substitution is on: select distinct id from foo7
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO parse.ParseDriver: Parsing command: select distinct id from foo7
14/09/30 22:16:20 INFO parse.ParseDriver: Parse Completed
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
14/09/30 22:16:20 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis
14/09/30 22:16:20 INFO parse.SemanticAnalyzer: Get metadata for source tables
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 INFO parse.SemanticAnalyzer: Get metadata for subqueries
14/09/30 22:16:20 INFO parse.SemanticAnalyzer: Get metadata for destination tables
14/09/30 22:16:20 DEBUG hdfs.DFSClient: /tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-8: masked=rwx------
14/09/30 22:16:20 DEBUG ipc.Client: The ping interval is 60000 ms.
14/09/30 22:16:20 DEBUG ipc.Client: Connecting to nc-h04/192.168.20.6:8020
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 5ms
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms
14/09/30 22:16:20 INFO ql.Context: New scratch dir is hdfs://nc-h04/tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-8
14/09/30 22:16:20 INFO parse.SemanticAnalyzer: Completed getting MetaData in Semantic Analysis
14/09/30 22:16:20 INFO parse.SemanticAnalyzer: Can not invoke CBO; query contains operators not supported for CBO.
14/09/30 22:16:20 DEBUG hive.log: DDL: struct foo7 { i32 id}
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: Created Table Plan for foo7 TS[50]
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: tree: (TOK_SELECTDI (TOK_SELEXPR (TOK_TABLE_OR_COL id)))
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: genSelectPlan: input = {((tok_table_or_col id),_col0: int)} foo7{(id,_col0: int)}
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: Created Select Plan row schema: null{(id,_col0: int)} foo7{(id,_col0: int)}
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: Created Select Plan for clause: insclause-0
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG lazy.LazySimpleSerDe: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with: columnNames=[_col0] columnTypes=[int] separator=[[B@2b9835cb] nullstring=\N lastColumnTakesRest=false
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG lazy.LazySimpleSerDe: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with: columnNames=[_col0] columnTypes=[int] separator=[[B@15f3b8c9] nullstring=\N lastColumnTakesRest=false
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: Created FileSink Plan for clause: insclause-0dest_path: hdfs://nc-h04/tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-8/-mr-10000 row schema: null{(id,_col0: int)} foo7{(id,_col0: int)}
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: Created Body Plan for Query Block null
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: Created Plan for Query Block null
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: Before logical optimization
TS[50]-SEL[51]-GBY[52]-RS[53]-GBY[54]-SEL[55]-FS[56]
14/09/30 22:16:20 INFO ppd.OpProcFactory: Processing for FS(56)
14/09/30 22:16:20 INFO ppd.OpProcFactory: Processing for SEL(55)
14/09/30 22:16:20 INFO ppd.OpProcFactory: Processing for GBY(54)
14/09/30 22:16:20 INFO ppd.OpProcFactory: Processing for RS(53)
14/09/30 22:16:20 INFO ppd.OpProcFactory: Processing for GBY(52)
14/09/30 22:16:20 INFO ppd.OpProcFactory: Processing for SEL(51)
14/09/30 22:16:20 INFO ppd.OpProcFactory: Processing for TS(50)
14/09/30 22:16:20 DEBUG ppd.PredicatePushDown: After PPD:
TS[50]-SEL[51]-GBY[52]-RS[53]-GBY[54]-SEL[55]-FS[56]
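The operator chain above shows how Hive plans `SELECT DISTINCT`: a table scan (TS) feeds a projection (SEL), a map-side partial group-by (GBY) that deduplicates locally, a reduce sink (RS) that shuffles on the key, a reduce-side group-by (GBY) that merges partials, and a final select (SEL) into a file sink (FS). A minimal Python sketch of that dataflow (toy code, not Hive internals; the function names are ours):

```python
from itertools import groupby

def map_side(rows):
    """TS -> SEL -> map-side GBY -> RS: project id, dedup locally, emit keys."""
    seen = set()
    for row in rows:
        key = row["id"]           # SEL projects the single column
        if key not in seen:       # map-side GROUP BY deduplicates within the task
            seen.add(key)
            yield key             # RS emits the key into the shuffle

def shuffle(emitted):
    """The shuffle sorts keys so equal keys arrive contiguously at the reducer."""
    return sorted(emitted)

def reduce_side(sorted_keys):
    """Reduce-side GBY merges partials: one output row per distinct key."""
    for key, _group in groupby(sorted_keys):
        yield {"_col0": key}      # final SEL renames to _col0; FS would write it

rows = [{"id": 3}, {"id": 1}, {"id": 3}, {"id": 2}]
result = list(reduce_side(shuffle(map_side(rows))))
# → [{'_col0': 1}, {'_col0': 2}, {'_col0': 3}]
```

The two-GBY shape is why the Vectorizer lines further down talk about a map-side GROUP BY and a reduce-side GROUP BY in MERGEPARTIAL mode separately.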
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:TS[50] with rr:foo7{(id,id: int)(block__offset__inside__file,BLOCK__OFFSET__INSIDE__FILE: bigint)(input__file__name,INPUT__FILE__NAME: string)(row__id,ROW__ID: struct)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator TS[50]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:SEL[51] with rr:foo7{(id,id: int)(block__offset__inside__file,BLOCK__OFFSET__INSIDE__FILE: bigint)(input__file__name,INPUT__FILE__NAME: string)(row__id,ROW__ID: struct)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator SEL[51]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcFactory: New column list:(Column[id] Column[BLOCK__OFFSET__INSIDE__FILE] Column[INPUT__FILE__NAME] Column[ROW__ID])
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:GBY[52] with rr:{((tok_table_or_col id),_col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator GBY[52]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:RS[53] with rr:{((tok_table_or_col id),KEY._col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator RS[53]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:GBY[54] with rr:{((tok_table_or_col id),_col0: int)} foo7{(id,_col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator GBY[54]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:SEL[55] with rr:null{(id,_col0: int)} foo7{(id,_col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator SEL[55]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcFactory: New column list:(Column[_col0])
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:FS[56] with rr:null{(id,_col0: int)} foo7{(id,_col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator FS[56]
14/09/30 22:16:20 DEBUG optimizer.ColumnPrunerProcFactory: Reduce Sink Operator 53 key:[Column[_col0]]
14/09/30 22:16:20 INFO optimizer.ColumnPrunerProcFactory: RS 53 oldColExprMap: {KEY._col0=Column[_col0]}
14/09/30 22:16:20 INFO optimizer.ColumnPrunerProcFactory: RS 53 newColExprMap: {KEY._col0=Column[_col0]}
14/09/30 22:16:20 DEBUG index.RewriteGBUsingIndex: No Valid Index Found to apply Rewrite, skipping RewriteGBUsingIndex optimization
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: After logical optimization
TS[50]-SEL[51]-GBY[52]-RS[53]-GBY[54]-SEL[55]-FS[56]
+-------------+--+
| id |
+-------------+--+
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:TS[50] with rr:foo7{(id,id: int)(block__offset__inside__file,BLOCK__OFFSET__INSIDE__FILE: bigint)(input__file__name,INPUT__FILE__NAME: string)(row__id,ROW__ID: struct)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator TS[50]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:SEL[51] with rr:foo7{(id,id: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator SEL[51]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcFactory: New column list:(Column[id])
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:GBY[52] with rr:{((tok_table_or_col id),_col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator GBY[52]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:RS[53] with rr:{((tok_table_or_col id),KEY._col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator RS[53]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:GBY[54] with rr:{((tok_table_or_col id),_col0: int)} foo7{(id,_col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator GBY[54]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:SEL[55] with rr:null{(id,_col0: int)} foo7{(id,_col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator SEL[55]
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcFactory: New column list:(Column[_col0])
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Getting constants of op:FS[56] with rr:null{(id,_col0: int)} foo7{(id,_col0: int)}
14/09/30 22:16:20 DEBUG optimizer.ConstantPropagateProcCtx: Offerring constants [] to operator FS[56]
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 DEBUG exec.TableScanOperator: Setting stats (Num rows: 1 Data size: 1214 Basic stats: COMPLETE Column stats: NONE) on TS[50]
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: [0] STATS-TS[50] (foo7): numRows: 1 dataSize: 1214 basicStatsState: COMPLETE colStatsState: NONE colStats: {}
14/09/30 22:16:20 DEBUG exec.SelectOperator: Setting stats (Num rows: 1 Data size: 1214 Basic stats: COMPLETE Column stats: NONE) on SEL[51]
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: STATS-GBY[52]: inputSize: 1214 maxSplitSize: 256000000 parallelism: 1 containsGroupingSet: false sizeOfGroupingSet: 1
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: [Case 1] STATS-GBY[52]: cardinality: 1
14/09/30 22:16:20 DEBUG exec.GroupByOperator: Setting stats (Num rows: 1 Data size: 1214 Basic stats: COMPLETE Column stats: NONE) on GBY[52]
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: [0] STATS-GBY[52]: numRows: 1 dataSize: 1214 basicStatsState: COMPLETE colStatsState: NONE colStats: {}
14/09/30 22:16:20 DEBUG exec.ReduceSinkOperator: Setting stats (Num rows: 1 Data size: 1214 Basic stats: COMPLETE Column stats: NONE) on RS[53]
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: [0] STATS-RS[53]: numRows: 1 dataSize: 1214 basicStatsState: COMPLETE colStatsState: NONE colStats: {}
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: STATS-GBY[54]: inputSize: 1 maxSplitSize: 256000000 parallelism: 1 containsGroupingSet: false sizeOfGroupingSet: 1
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: [Case 7] STATS-GBY[54]: cardinality: 0
+-------------+--+
No rows selected (2.037 seconds)
14/09/30 22:16:20 INFO annotation.StatsRulesProcFactory: STATS-GBY[54]: Overflow in number of rows.0 rows will be set to Long.MAX_VALUE
14/09/30 22:16:20 DEBUG exec.GroupByOperator: Setting stats (Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE) on GBY[54]
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: [0] STATS-GBY[54]: numRows: 0 dataSize: 0 basicStatsState: NONE colStatsState: NONE colStats: {}
14/09/30 22:16:20 DEBUG exec.SelectOperator: Setting stats (Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE) on SEL[55]
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: [1] STATS-SEL[55]: numRows: 0 dataSize: 0 basicStatsState: NONE colStatsState: NONE colStats: {}
14/09/30 22:16:20 DEBUG annotation.StatsRulesProcFactory: [0] STATS-FS[56]: numRows: 0 dataSize: 0 basicStatsState: NONE colStatsState: NONE colStats: {}
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: getListing took 1ms
14/09/30 22:16:20 DEBUG exec.TableScanOperator: Setting traits (org.apache.hadoop.hive.ql.plan.OpTraits@424402c7) on TS[50]
14/09/30 22:16:20 DEBUG exec.SelectOperator: Setting traits (org.apache.hadoop.hive.ql.plan.OpTraits@7f1278cd) on SEL[51]
14/09/30 22:16:20 DEBUG exec.GroupByOperator: Setting traits (org.apache.hadoop.hive.ql.plan.OpTraits@15b91be8) on GBY[52]
14/09/30 22:16:20 DEBUG exec.ReduceSinkOperator: Setting traits (org.apache.hadoop.hive.ql.plan.OpTraits@2bd7277c) on RS[53]
14/09/30 22:16:20 DEBUG exec.GroupByOperator: Setting traits (org.apache.hadoop.hive.ql.plan.OpTraits@3a615460) on GBY[54]
14/09/30 22:16:20 DEBUG exec.SelectOperator: Setting traits (org.apache.hadoop.hive.ql.plan.OpTraits@4bea8891) on SEL[55]
14/09/30 22:16:20 DEBUG exec.FileSinkOperator: Setting traits (org.apache.hadoop.hive.ql.plan.OpTraits@4bea8891) on FS[56]
14/09/30 22:16:20 INFO optimizer.SetReducerParallelism: Set parallelism for reduce sink RS[53] to: 1
14/09/30 22:16:20 DEBUG parse.TezCompiler: Component:
14/09/30 22:16:20 DEBUG parse.TezCompiler: Operator: TS, 50
14/09/30 22:16:20 DEBUG parse.TezCompiler: Component:
14/09/30 22:16:20 DEBUG parse.TezCompiler: Operator: SEL, 51
14/09/30 22:16:20 DEBUG parse.TezCompiler: Component:
14/09/30 22:16:20 DEBUG parse.TezCompiler: Operator: GBY, 52
14/09/30 22:16:20 DEBUG parse.TezCompiler: Component:
14/09/30 22:16:20 DEBUG parse.TezCompiler: Operator: RS, 53
14/09/30 22:16:20 DEBUG parse.TezCompiler: Component:
14/09/30 22:16:20 DEBUG parse.TezCompiler: Operator: GBY, 54
14/09/30 22:16:20 DEBUG parse.TezCompiler: Component:
14/09/30 22:16:20 DEBUG parse.TezCompiler: Operator: SEL, 55
14/09/30 22:16:20 DEBUG parse.TezCompiler: Component:
14/09/30 22:16:20 DEBUG parse.TezCompiler: Operator: FS, 56
14/09/30 22:16:20 INFO parse.TezCompiler: Cycle free: true
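Before generating Tez work, the compiler walks the operator components listed above and confirms the graph is acyclic ("Cycle free: true"). A hypothetical sketch of such a check — a standard three-color DFS, not the actual TezCompiler code — over this query's linear plan:

```python
def is_cycle_free(graph):
    """Three-color DFS cycle check over an operator adjacency list."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for child in graph.get(node, []):
            if color[child] == GRAY:          # back edge => cycle
                return False
            if color[child] == WHITE and not dfs(child):
                return False
        color[node] = BLACK
        return True

    return all(dfs(node) for node in graph if color[node] == WHITE)

# The plan from this query: a straight chain, so trivially cycle free.
plan = {
    "TS[50]": ["SEL[51]"], "SEL[51]": ["GBY[52]"], "GBY[52]": ["RS[53]"],
    "RS[53]": ["GBY[54]"], "GBY[54]": ["SEL[55]"], "SEL[55]": ["FS[56]"],
    "FS[56]": [],
}
```

With a cycle-free plan, GenTezWork can then cut the chain at the ReduceSink into Map 1 and Reducer 2, as the next lines show.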
14/09/30 22:16:20 DEBUG parse.GenTezWork: Root operator: TS[50]
14/09/30 22:16:20 DEBUG parse.GenTezWork: Leaf operator: RS[53]
14/09/30 22:16:20 DEBUG parse.GenTezUtils: Adding map work (Map 1) for TS[50]
14/09/30 22:16:20 DEBUG hive.log: DDL: struct foo7 { i32 id}
14/09/30 22:16:20 DEBUG hive.log: DDL: struct foo7 { i32 id}
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG hive.log: DDL: struct foo7 { i32 id}
14/09/30 22:16:20 DEBUG optimizer.GenMapRedUtils: Adding hdfs://nc-h04/user/hive/warehouse/casino.db/foo7 of tablefoo7
14/09/30 22:16:20 DEBUG optimizer.GenMapRedUtils: Information added for path hdfs://nc-h04/user/hive/warehouse/casino.db/foo7
14/09/30 22:16:20 DEBUG parse.GenTezWork: First pass. Leaf operator: RS[53]
14/09/30 22:16:20 DEBUG parse.GenTezWork: Root operator: GBY[54]
14/09/30 22:16:20 DEBUG parse.GenTezWork: Leaf operator: FS[56]
14/09/30 22:16:20 DEBUG parse.GenTezUtils: Adding reduce work (Reducer 2) for GBY[54]
14/09/30 22:16:20 DEBUG parse.GenTezUtils: Setting up reduce sink: RS[53] with following reduce work: Reducer 2
14/09/30 22:16:20 DEBUG parse.GenTezWork: Removing RS[53] as parent from GBY[54]
14/09/30 22:16:20 DEBUG parse.GenTezWork: First pass. Leaf operator: FS[56]
14/09/30 22:16:20 DEBUG parse.TezCompiler: There are 0 app master events.
14/09/30 22:16:20 DEBUG physical.NullScanTaskDispatcher: Looking at: Map 1
14/09/30 22:16:20 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
14/09/30 22:16:20 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
14/09/30 22:16:20 DEBUG physical.NullScanTaskDispatcher: Looking at: Map 1
14/09/30 22:16:20 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
14/09/30 22:16:20 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
14/09/30 22:16:20 DEBUG physical.NullScanTaskDispatcher: Looking at: Map 1
14/09/30 22:16:20 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
14/09/30 22:16:20 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
14/09/30 22:16:20 INFO physical.Vectorizer: Validating MapWork...
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 INFO physical.Vectorizer: Downstream operators of map-side GROUP BY will be vectorized: true
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 INFO physical.Vectorizer: Downstream operators of map-side GROUP BY will be vectorized: true
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 INFO physical.Vectorizer: Vectorizing MapWork...
14/09/30 22:16:20 INFO physical.Vectorizer: MapWorkVectorizationNodeProcessor processing Operator: TS...
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized MapWork operator TS with vectorization context key=hdfs://nc-h04/user/hive/warehouse/casino.db/foo7, vectorTypes: {}, columnMap: {id=0}
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized MapWork operator TS with vectorization context key=hdfs://nc-h04/user/hive/warehouse/casino.db/foo7, vectorTypes: {}, columnMap: {id=0}
14/09/30 22:16:20 INFO physical.Vectorizer: MapWorkVectorizationNodeProcessor processing Operator: SEL...
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized MapWork operator SEL with vectorization context key=hdfs://nc-h04/user/hive/warehouse/casino.db/foo7, vectorTypes: {}, columnMap: {id=0}
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized MapWork operator SEL added new vectorization context key=hdfs://nc-h04/user/hive/warehouse/casino.db/foo7/_SELECT_, vectorTypes: {}, columnMap: {id=0}
14/09/30 22:16:20 INFO physical.Vectorizer: MapWorkVectorizationNodeProcessor processing Operator: GBY...
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized MapWork operator GBY with vectorization context key=hdfs://nc-h04/user/hive/warehouse/casino.db/foo7/_SELECT_, vectorTypes: {}, columnMap: {id=0}
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized MapWork operator GBY added new vectorization context key=hdfs://nc-h04/user/hive/warehouse/casino.db/foo7/_SELECT_/_GROUPBY_, vectorTypes: {}, columnMap: {_col0=0}
14/09/30 22:16:20 INFO physical.Vectorizer: MapWorkVectorizationNodeProcessor processing Operator: RS...
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized MapWork operator RS with vectorization context key=hdfs://nc-h04/user/hive/warehouse/casino.db/foo7/_SELECT_/_GROUPBY_, vectorTypes: {}, columnMap: {_col0=0}
14/09/30 22:16:20 DEBUG physical.Vectorizer: vectorTypes: {hdfs://nc-h04/user/hive/warehouse/casino.db/foo7/_SELECT_={}, hdfs://nc-h04/user/hive/warehouse/casino.db/foo7/_SELECT_/_GROUPBY_={}, hdfs://nc-h04/user/hive/warehouse/casino.db/foo7={}}
14/09/30 22:16:20 DEBUG physical.Vectorizer: columnMap: {hdfs://nc-h04/user/hive/warehouse/casino.db/foo7/_SELECT_={id=0}, hdfs://nc-h04/user/hive/warehouse/casino.db/foo7/_SELECT_/_GROUPBY_={_col0=0}, hdfs://nc-h04/user/hive/warehouse/casino.db/foo7={id=0}}
14/09/30 22:16:20 INFO physical.Vectorizer: Validating ReduceWork...
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG lazybinary.LazyBinarySerDe: LazyBinarySerDe initialized with: columnNames=[] columnTypes=[]
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 INFO physical.Vectorizer: Reduce GROUP BY mode is MERGEPARTIAL
14/09/30 22:16:20 INFO physical.Vectorizer: Reduce-side GROUP BY will process key groups
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 INFO physical.Vectorizer: Reduce GROUP BY mode is MERGEPARTIAL
14/09/30 22:16:20 INFO physical.Vectorizer: Reduce-side GROUP BY will process key groups
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 INFO physical.Vectorizer: Reduce GROUP BY mode is MERGEPARTIAL
14/09/30 22:16:20 INFO physical.Vectorizer: Reduce-side GROUP BY will process key groups
14/09/30 22:16:20 INFO physical.Vectorizer: Vectorizing ReduceWork...
14/09/30 22:16:20 INFO physical.Vectorizer: vectorizeReduceWork reducer Operator: GBY...
14/09/30 22:16:20 INFO physical.Vectorizer: ReduceWorkVectorizationNodeProcessor processing Operator: GBY...
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized ReduceWork reduce shuffle vectorization context key=_REDUCE_SHUFFLE_, vectorTypes: {}, columnMap: {KEY._col0=0}
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized ReduceWork operator GBY with vectorization context key=_REDUCE_SHUFFLE_, vectorTypes: {}, columnMap: {KEY._col0=0}
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized ReduceWork operator GBY added new vectorization context key=_REDUCE_SHUFFLE_/_GROUPBY_, vectorTypes: {}, columnMap: {_col0=0}
14/09/30 22:16:20 INFO physical.Vectorizer: ReduceWorkVectorizationNodeProcessor processing Operator: SEL...
14/09/30 22:16:20 DEBUG vector.VectorizationContext: Input Expression = int, Vectorized Expression = IdentityExpression[0]
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized ReduceWork operator SEL with vectorization context key=_REDUCE_SHUFFLE_/_GROUPBY_, vectorTypes: {}, columnMap: {_col0=0}
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized ReduceWork operator SEL added new vectorization context key=_REDUCE_SHUFFLE_/_GROUPBY_/_SELECT_, vectorTypes: {}, columnMap: {_col0=0}
14/09/30 22:16:20 INFO physical.Vectorizer: ReduceWorkVectorizationNodeProcessor processing Operator: FS...
14/09/30 22:16:20 DEBUG physical.Vectorizer: Vectorized ReduceWork operator FS with vectorization context key=_REDUCE_SHUFFLE_/_GROUPBY_/_SELECT_, vectorTypes: {}, columnMap: {_col0=0}
14/09/30 22:16:20 DEBUG physical.Vectorizer: vectorTypes: {_REDUCE_SHUFFLE_/_GROUPBY_/_SELECT_={}, _REDUCE_SHUFFLE_={}, _REDUCE_SHUFFLE_/_GROUPBY_={}}
14/09/30 22:16:20 DEBUG physical.Vectorizer: columnMap: {_REDUCE_SHUFFLE_/_GROUPBY_/_SELECT_={_col0=0}, _REDUCE_SHUFFLE_={KEY._col0=0}, _REDUCE_SHUFFLE_/_GROUPBY_={_col0=0}}
14/09/30 22:16:20 DEBUG parse.TezCompiler: Skipping stage id rearranger
14/09/30 22:16:20 INFO parse.SemanticAnalyzer: Completed plan generation
14/09/30 22:16:20 INFO ql.Driver: Semantic Analysis Completed
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: validation start
14/09/30 22:16:20 DEBUG parse.SemanticAnalyzer: not validating writeEntity, because entity is neither table nor partition
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 DEBUG lazy.LazySimpleSerDe: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with: columnNames=[_col0] columnTypes=[int] separator=[[B@3947ea7f] nullstring=\N lastColumnTakesRest=false
14/09/30 22:16:20 INFO exec.ListSinkOperator: Initializing Self OP[63]
14/09/30 22:16:20 DEBUG exec.Utilities: Use session specified class loader
14/09/30 22:16:20 INFO exec.ListSinkOperator: Operator 63 OP initialized
14/09/30 22:16:20 INFO exec.ListSinkOperator: Initialization Done 63 OP
14/09/30 22:16:20 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:id, type:int, comment:null)], properties:null)
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 DEBUG lockmgr.DbTxnManager: Setting lock request transaction to 0
14/09/30 22:16:20 DEBUG lockmgr.DbTxnManager: Adding lock component to lock request LockComponent(type:SHARED_READ, level:TABLE, dbname:casino, tablename:foo7)
14/09/30 22:16:20 DEBUG lockmgr.DbLockManager: Requesting lock
14/09/30 22:16:20 DEBUG ql.Driver: Encoding valid txns info 557:396:399:402:410:411:412:414:418:445:462:468:469:470:471:472:483:503:543:555
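The "valid txns" string appears to encode a transaction high-water mark followed by colon-separated excluded (open or aborted) transaction ids; a reader sees only transactions at or below the high-water mark that are not in the exclusion list. A hedged sketch of parsing that encoding (our own helper names, and the format is inferred from the log line, not from Hive source):

```python
def parse_valid_txns(encoded):
    """Split 'hwm:txn:txn:...' into (high_water_mark, excluded txn ids)."""
    parts = [int(p) for p in encoded.split(":")]
    return parts[0], set(parts[1:])

def txn_is_readable(txn_id, encoded):
    """A txn is visible if committed: at/below HWM and not excluded."""
    hwm, excluded = parse_valid_txns(encoded)
    return txn_id <= hwm and txn_id not in excluded

valid_txns = "557:396:399:402:410:411:412:414:418:445:462:468:469:470:471:472:483:503:543:555"
```

Under this reading, transaction 400 would be visible to the query, while 396 (excluded) and 600 (above the high-water mark of 557) would not.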
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO ql.Driver: Starting command: select distinct id from foo7
14/09/30 22:16:20 INFO ql.Driver: Query ID = hduser_20140930221616_83c14441-902d-4694-8f75-76e437873d26
14/09/30 22:16:20 INFO ql.Driver: Total jobs = 1
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO ql.Driver: Launching Job 1 out of 1
14/09/30 22:16:20 INFO ql.Driver: Starting task [Stage-1:MAPRED] in serial mode
14/09/30 22:16:20 INFO tez.TezSessionPoolManager: The current user: hduser, session user: hduser
14/09/30 22:16:20 DEBUG hdfs.DFSClient: /tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-10: masked=rwx------
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 2ms
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
14/09/30 22:16:20 INFO ql.Context: New scratch dir is hdfs://nc-h04/tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-10
14/09/30 22:16:20 DEBUG tez.DagUtils: TezDir path set hdfs://nc-h04/tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-10/hduser/_tez_scratch_dir for user: hduser
14/09/30 22:16:20 DEBUG hdfs.DFSClient: /tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-10/hduser/_tez_scratch_dir: masked=rwxr-xr-x
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 2ms
14/09/30 22:16:20 INFO exec.Task: Session is already open
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
14/09/30 22:16:20 INFO tez.DagUtils: Resource modification time: 1412108091590
14/09/30 22:16:20 DEBUG exec.Task: Adding local resource: scheme: "hdfs" host: "nc-h04" port: -1 file: "/tmp/hive/hduser/_tez_session_dir/89421977-42b8-4467-a0c1-66dda3d1b6b8/postgresql-8.4-703.jdbc4.jar"
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 DEBUG hdfs.DFSClient: /tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-10/hduser/_tez_scratch_dir/613e4f08-34be-4034-a450-088bc3acf8d1: masked=rwxr-xr-x
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 2ms
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO exec.Utilities: Serializing ReduceWork via kryo
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO exec.Utilities: Setting plan: /tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-10/hduser/_tez_scratch_dir/613e4f08-34be-4034-a450-088bc3acf8d1/reduce.xml
14/09/30 22:16:20 DEBUG hdfs.DFSClient: /tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-8/_tmp.-ext-10001: masked=rwxr-xr-x
14/09/30 22:16:20 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 2ms
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO log.PerfLogger:
14/09/30 22:16:20 INFO ql.Context: New scratch dir is hdfs://nc-h04/tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-10
14/09/30 22:16:21 DEBUG hdfs.DFSClient: /tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-10/hduser/_tez_scratch_dir/41389c12-de43-4273-9c96-9f954c95c2e0: masked=rwxr-xr-x
14/09/30 22:16:21 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 2ms
14/09/30 22:16:21 INFO tez.DagUtils: Vertex has custom input? false
14/09/30 22:16:21 INFO log.PerfLogger:
14/09/30 22:16:21 INFO exec.Utilities: Serializing MapWork via kryo
14/09/30 22:16:21 INFO log.PerfLogger:
14/09/30 22:16:21 INFO exec.Utilities: Setting plan: /tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-10/hduser/_tez_scratch_dir/41389c12-de43-4273-9c96-9f954c95c2e0/map.xml
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Setting mapreduce.map.output.key.class for stage: unknown based on job level configuration. Value: org.apache.hadoop.hive.ql.io.HiveKey
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Setting mapreduce.map.output.value.class for stage: unknown based on job level configuration. Value: org.apache.hadoop.io.BytesWritable
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.reduce.shuffle.merge.percent, mr initial value=0.66, tez:tez.runtime.shuffle.merge.percent=0.66
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.reduce.shuffle.input.buffer.percent, mr initial value=0.70, tez:tez.runtime.shuffle.fetch.buffer.percent=0.70
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.task.io.sort.mb, mr initial value=100, tez:tez.runtime.io.sort.mb=100
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.reduce.shuffle.memory.limit.percent, mr initial value=0.25, tez:tez.runtime.shuffle.memory.limit.percent=0.25
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.task.io.sort.factor, mr initial value=10, tez:tez.runtime.io.sort.factor=10
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.reduce.shuffle.connect.timeout, mr initial value=180000, tez:tez.runtime.shuffle.connect.timeout=180000
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):map.sort.class, mr initial value=org.apache.hadoop.util.QuickSort, tez:tez.runtime.internal.sorter.class=org.apache.hadoop.util.QuickSort
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.task.merge.progress.records, mr initial value=10000, tez:tez.runtime.merge.progress.records=10000
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.map.output.compress, mr initial value=false, tez:tez.runtime.compress=false
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.map.output.key.class, mr initial value=org.apache.hadoop.hive.ql.io.HiveKey, tez:tez.runtime.key.class=org.apache.hadoop.hive.ql.io.HiveKey
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.map.sort.spill.percent, mr initial value=0.80, tez:tez.runtime.sort.spill.percent=0.80
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.shuffle.ssl.enabled, mr initial value=false, tez:tez.runtime.shuffle.ssl.enable=false
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.ifile.readahead, mr initial value=true, tez:tez.runtime.ifile.readahead=true
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.reduce.shuffle.parallelcopies, mr initial value=5, tez:tez.runtime.shuffle.parallel.copies=5
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.ifile.readahead.bytes, mr initial value=4194304, tez:tez.runtime.ifile.readahead.bytes=4194304
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.reduce.input.buffer.percent, mr initial value=0.0, tez:tez.runtime.task.input.post-merge.buffer.percent=0.0
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.reduce.shuffle.read.timeout, mr initial value=180000, tez:tez.runtime.shuffle.read.timeout=180000
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.map.output.value.class, mr initial value=org.apache.hadoop.io.BytesWritable, tez:tez.runtime.value.class=org.apache.hadoop.io.BytesWritable
14/09/30 22:16:21 DEBUG hadoop.MRHelpers: Config: mr(unset):mapreduce.map.output.compress.codec, mr initial value=org.apache.hadoop.io.compress.DefaultCodec, tez:tez.runtime.compress.codec=org.apache.hadoop.io.compress.DefaultCodec
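The MRHelpers lines above show Tez back-filling its runtime settings from legacy MapReduce configuration keys: each `mr(unset)` entry means the MR key was not explicitly set, so the MR default value is carried over to the corresponding `tez.runtime.*` key. A minimal Python sketch of that translation idea (the mapping table below covers only a few keys visible in this log and is an illustration, not Tez's actual internal key registry):

```python
# Hypothetical subset of the MR -> Tez runtime key mapping shown in the log.
MR_TO_TEZ = {
    "mapreduce.task.io.sort.mb": "tez.runtime.io.sort.mb",
    "mapreduce.task.io.sort.factor": "tez.runtime.io.sort.factor",
    "mapreduce.map.output.compress": "tez.runtime.compress",
    "mapreduce.map.output.compress.codec": "tez.runtime.compress.codec",
    "mapreduce.reduce.shuffle.parallelcopies": "tez.runtime.shuffle.parallel.copies",
}

# MR default values used when the MR key is unset (values taken from the log lines).
MR_DEFAULTS = {
    "mapreduce.task.io.sort.mb": "100",
    "mapreduce.task.io.sort.factor": "10",
    "mapreduce.map.output.compress": "false",
    "mapreduce.map.output.compress.codec": "org.apache.hadoop.io.compress.DefaultCodec",
    "mapreduce.reduce.shuffle.parallelcopies": "5",
}

def translate(mr_conf: dict) -> dict:
    """Produce Tez runtime settings, falling back to MR defaults for unset keys."""
    tez_conf = {}
    for mr_key, tez_key in MR_TO_TEZ.items():
        # "mr(unset)" in the log corresponds to mr_key being absent from mr_conf here.
        tez_conf[tez_key] = mr_conf.get(mr_key, MR_DEFAULTS[mr_key])
    return tez_conf
```

With an empty `mr_conf`, every Tez key takes the MR default, matching the `mr(unset)` lines above; an MR key set explicitly in the client configuration would be carried through instead.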
14/09/30 22:16:21 DEBUG tez.DagUtils: Marking URI as needing credentials: hdfs://nc-h04/user/hive/warehouse/casino.db/foo7
14/09/30 22:16:21 INFO client.TezClient: Submitting dag to TezSession, sessionName=HIVE-89421977-42b8-4467-a0c1-66dda3d1b6b8, applicationId=application_1412065018660_0041, dagName=hduser_20140930221616_83c14441-902d-4694-8f75-76e437873d26:8
14/09/30 22:16:21 DEBUG client.TezClientUtils: #sessionTokens=1, Services: application_1412065018660_0041,
14/09/30 22:16:21 DEBUG api.DAG: #dagTokens=1, Services: application_1412065018660_0041,
14/09/30 22:16:21 DEBUG ipc.Client: The ping interval is 60000 ms.
14/09/30 22:16:21 DEBUG ipc.Client: Connecting to nc-h04/192.168.20.6:8032
14/09/30 22:16:21 DEBUG ipc.ProtobufRpcEngine: Call: getApplicationReport took 3ms
14/09/30 22:16:21 DEBUG client.TezClientUtils: Connecting to Tez AM at nc-h04/192.168.20.6:33468
14/09/30 22:16:21 DEBUG security.UserGroupInformation: PrivilegedAction as:hduser (auth:SIMPLE) from:org.apache.tez.client.TezClientUtils.getAMProxy(TezClientUtils.java:829)
14/09/30 22:16:21 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@35b102dd
14/09/30 22:16:21 DEBUG ipc.Client: The ping interval is 60000 ms.
14/09/30 22:16:21 DEBUG ipc.Client: Connecting to nc-h04/192.168.20.6:33468
14/09/30 22:16:21 DEBUG ipc.ProtobufRpcEngine: Call: submitDAG took 132ms
14/09/30 22:16:21 INFO client.TezClient: Submitted dag to TezSession, sessionName=HIVE-89421977-42b8-4467-a0c1-66dda3d1b6b8, applicationId=application_1412065018660_0041, dagName=hduser_20140930221616_83c14441-902d-4694-8f75-76e437873d26:8
14/09/30 22:16:21 DEBUG service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
14/09/30 22:16:21 INFO client.RMProxy: Connecting to ResourceManager at nc-h04/192.168.20.6:8032
14/09/30 22:16:21 DEBUG security.UserGroupInformation: PrivilegedAction as:hduser (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:130)
14/09/30 22:16:21 DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
14/09/30 22:16:21 DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
14/09/30 22:16:21 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@35b102dd
14/09/30 22:16:21 DEBUG service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
14/09/30 22:16:21 DEBUG ipc.ProtobufRpcEngine: Call: getApplicationReport took 3ms
14/09/30 22:16:21 DEBUG rpc.DAGClientRPCImpl: App: application_1412065018660_0041 in state: RUNNING
14/09/30 22:16:21 DEBUG client.TezClientUtils: Connecting to Tez AM at nc-h04/192.168.20.6:33468
14/09/30 22:16:21 DEBUG security.UserGroupInformation: PrivilegedAction as:hduser (auth:SIMPLE) from:org.apache.tez.client.TezClientUtils.getAMProxy(TezClientUtils.java:829)
14/09/30 22:16:21 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@35b102dd
14/09/30 22:16:21 DEBUG rpc.DAGClientRPCImpl: GetDAGStatus via AM for app: application_1412065018660_0041 dag:dag_1412065018660_0041_3
14/09/30 22:16:21 DEBUG ipc.Client: The ping interval is 60000 ms.
14/09/30 22:16:21 DEBUG ipc.Client: Connecting to nc-h04/192.168.20.6:33468
14/09/30 22:16:21 DEBUG ipc.ProtobufRpcEngine: Call: getDAGStatus took 19ms
14/09/30 22:16:21 DEBUG exec.Heartbeater: heartbeating
14/09/30 22:16:21 DEBUG lockmgr.DbTxnManager: Heartbeating lock and transaction 0
14/09/30 22:16:21 INFO tez.TezJobMonitor: Status: Running (Executing on YARN cluster with App id application_1412065018660_0041)
14/09/30 22:16:21 INFO tez.TezJobMonitor: Map 1: -/- Reducer 2: 0/1
14/09/30 22:16:21 DEBUG rpc.DAGClientRPCImpl: GetDAGStatus via AM for app: application_1412065018660_0041 dag:dag_1412065018660_0041_3
14/09/30 22:16:21 DEBUG ipc.ProtobufRpcEngine: Call: getDAGStatus took 4ms
14/09/30 22:16:21 INFO tez.TezJobMonitor: Map 1: -/- Reducer 2: 0(+1)/1
14/09/30 22:16:21 DEBUG rpc.DAGClientRPCImpl: GetDAGStatus via AM for app: application_1412065018660_0041 dag:dag_1412065018660_0041_3
14/09/30 22:16:21 DEBUG ipc.ProtobufRpcEngine: Call: getDAGStatus took 3ms
14/09/30 22:16:21 DEBUG rpc.DAGClientRPCImpl: GetDAGStatus via AM for app: application_1412065018660_0041 dag:dag_1412065018660_0041_3
14/09/30 22:16:21 DEBUG ipc.ProtobufRpcEngine: Call: getDAGStatus took 3ms
14/09/30 22:16:22 DEBUG rpc.DAGClientRPCImpl: GetDAGStatus via AM for app: application_1412065018660_0041 dag:dag_1412065018660_0041_3
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: getDAGStatus took 3ms
14/09/30 22:16:22 DEBUG rpc.DAGClientRPCImpl: GetDAGStatus via AM for app: application_1412065018660_0041 dag:dag_1412065018660_0041_3
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: getDAGStatus took 2ms
14/09/30 22:16:22 DEBUG rpc.DAGClientRPCImpl: GetDAGStatus via AM for app: application_1412065018660_0041 dag:dag_1412065018660_0041_3
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: getDAGStatus took 3ms
14/09/30 22:16:22 DEBUG rpc.DAGClientRPCImpl: GetDAGStatus via AM for app: application_1412065018660_0041 dag:dag_1412065018660_0041_3
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: getDAGStatus took 3ms
14/09/30 22:16:22 INFO tez.TezJobMonitor: Map 1: -/- Reducer 2: 1/1
14/09/30 22:16:22 INFO tez.TezJobMonitor: Status: Finished successfully in 1.43 seconds
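The TezJobMonitor progress lines above read as `completed(+running)/total` tasks per vertex, and `-/-` indicates the vertex has not reported a task count. A hedged sketch of a parser for these progress strings (format assumed from the lines in this log; other Hive/Tez versions may differ):

```python
import re

# Matches vertex fragments like "Reducer 2: 0(+1)/1" or "Map 1: -/-".
VERTEX_RE = re.compile(
    r"(?P<name>\w+ \d+): (?:(?P<done>\d+)(?:\(\+(?P<running>\d+)\))?/(?P<total>\d+)|-/-)"
)

def parse_progress(line: str) -> dict:
    """Return {vertex: (done, running, total)}; None for '-/-' vertices."""
    out = {}
    for m in VERTEX_RE.finditer(line):
        if m.group("done") is None:
            out[m.group("name")] = None  # task count not yet reported
        else:
            out[m.group("name")] = (
                int(m.group("done")),
                int(m.group("running") or 0),
                int(m.group("total")),
            )
    return out
```

Applied to the second progress line above, this yields `Map 1` as unknown and `Reducer 2` with one task running out of one total.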
14/09/30 22:16:22 DEBUG rpc.DAGClientRPCImpl: GetDAGStatus via AM for app: application_1412065018660_0041 dag:dag_1412065018660_0041_3
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: getDAGStatus took 6ms
14/09/30 22:16:22 INFO exec.Task: org.apache.tez.common.counters.DAGCounter:
14/09/30 22:16:22 INFO exec.Task: TOTAL_LAUNCHED_TASKS: 1
14/09/30 22:16:22 INFO exec.Task: File System Counters:
14/09/30 22:16:22 INFO exec.Task: HDFS: BYTES_READ: 0
14/09/30 22:16:22 INFO exec.Task: HDFS: BYTES_WRITTEN: 0
14/09/30 22:16:22 INFO exec.Task: HDFS: READ_OPS: 2
14/09/30 22:16:22 INFO exec.Task: HDFS: LARGE_READ_OPS: 0
14/09/30 22:16:22 INFO exec.Task: HDFS: WRITE_OPS: 2
14/09/30 22:16:22 INFO exec.Task: org.apache.tez.common.counters.TaskCounter:
14/09/30 22:16:22 INFO exec.Task: GC_TIME_MILLIS: 14
14/09/30 22:16:22 INFO exec.Task: CPU_MILLISECONDS: 1160
14/09/30 22:16:22 INFO exec.Task: PHYSICAL_MEMORY_BYTES: 155549696
14/09/30 22:16:22 INFO exec.Task: VIRTUAL_MEMORY_BYTES: 881426432
14/09/30 22:16:22 INFO exec.Task: COMMITTED_HEAP_BYTES: 201326592
14/09/30 22:16:22 INFO exec.Task: OUTPUT_RECORDS: 0
14/09/30 22:16:22 INFO exec.Task: HIVE:
14/09/30 22:16:22 INFO exec.Task: CREATED_FILES: 1
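The counter dump above is emitted one `INFO exec.Task:` line per entry: group headers end in a bare `:` and counter entries have the form `NAME: value`. A hedged sketch of folding such lines into a nested dict (assuming this exact layout; the dump format varies across Hive versions):

```python
def parse_counters(lines):
    """Group 'NAME: value' counter lines under their preceding group header."""
    counters, group = {}, None
    for line in lines:
        # Strip the log prefix up to and including "exec.Task: ".
        _, _, body = line.partition("exec.Task: ")
        body = body.strip()
        if not body:
            continue
        name, sep, value = body.rpartition(": ")
        if not sep or not value.lstrip("-").isdigit():
            # No trailing numeric value -> treat as a group header (e.g. "HIVE:").
            group = body.rstrip(":")
            counters[group] = {}
        else:
            counters[group][name] = int(value)
    return counters
```

Note that `rpartition` keeps multi-part names like `HDFS: BYTES_READ` intact, since only the last `: value` segment is split off.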
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: getListing took 1ms
14/09/30 22:16:22 DEBUG exec.Utilities: TaskId for 000000_0 = 000000
14/09/30 22:16:22 INFO vector.VectorFileSinkOperator: Moving tmp dir: hdfs://nc-h04/tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-8/_tmp.-ext-10001 to: hdfs://nc-h04/tmp/hive/hduser/1dc4a291-8600-41e6-b9ba-5bb4c1d53662/hive_2014-09-30_22-16-20_750_2154185057436475060-8/-ext-10001
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: rename took 2ms
14/09/30 22:16:22 DEBUG ipc.ProtobufRpcEngine: Call: delete took 2ms
14/09/30 22:16:22 INFO ql.Driver: OK
14/09/30 22:16:22 DEBUG lockmgr.DbLockManager: Unlocking id:418761
14/09/30 22:16:22 DEBUG lockmgr.DbLockManager: Removed a lock true