Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1457504656024_0022, Tracking URL = http://namenode2:8088/proxy/application_1457504656024_0022/
Kill Command = /home/bigdata/software/hadoop/bin/hadoop job -kill job_1457504656024_0022
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2016-03-10 16:20:39,349 Stage-1 map = 0%,  reduce = 0%
2016-03-10 16:20:49,019 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 2.27 sec
2016-03-10 16:20:50,079 Stage-1 map = 75%,  reduce = 0%, Cumulative CPU 3.53 sec
2016-03-10 16:21:00,731 Stage-1 map = 95%,  reduce = 0%, Cumulative CPU 17.58 sec
2016-03-10 16:21:01,774 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 19.19 sec
2016-03-10 16:21:13,378 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 26.77 sec
MapReduce Total cumulative CPU time: 26 seconds 770 msec
Ended Job = job_1457504656024_0022
Launching Job 2 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1457504656024_0023, Tracking URL = http://namenode2:8088/proxy/application_1457504656024_0023/
Kill Command = /home/bigdata/software/hadoop/bin/hadoop job -kill job_1457504656024_0023
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2016-03-10 16:21:24,741 Stage-2 map = 0%,  reduce = 0%
2016-03-10 16:21:32,034 Stage-2 map = 100%,  reduce = 0%
2016-03-10 16:21:40,355 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 3.09 sec
MapReduce Total cumulative CPU time: 3 seconds 90 msec
Ended Job = job_1457504656024_0023
MapReduce Jobs Launched:
Job 0: Map: 4  Reduce: 1   Cumulative CPU: 26.77 sec   HDFS Read: 26671401 HDFS Write: 270 SUCCESS
Job 1: Map: 1  Reduce: 1   Cumulative CPU: 3.09 sec   HDFS Read: 679 HDFS Write: 46 SUCCESS
Total MapReduce CPU Time Spent: 29 seconds 860 msec
OK
12300	0
12452	0
23500	0
24000	1
24800	1
25600	0
98750	0
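The log above does not show the query that was submitted, but the two-job plan (Stage-1 with 4 mappers and 1 reducer, Stage-2 with 1 mapper and 1 reducer, returning pairs of a value and a small count) is the shape Hive typically produces for an aggregation followed by a global sort. A hypothetical query of that shape might look like the following; the table and column names here are illustrative only, not taken from the log:

```sql
-- Hypothetical example; the actual statement is not shown in the log above.
-- In classic MapReduce-mode Hive, GROUP BY plus ORDER BY compiles into
-- two chained jobs: the first performs the aggregation, and the second
-- funnels the grouped rows through a single reducer for total ordering.
SELECT some_col, COUNT(*) AS cnt
FROM some_table
GROUP BY some_col
ORDER BY some_col;
```

Because ORDER BY in Hive guarantees a total order, the second stage always runs with one reducer, which matches the "number of reducers: 1" reported for Stage-2.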