Hive / HIVE-4392

Illogical InvalidObjectException thrown when using multiple aggregate functions with star columns


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.12.0
    • Component/s: Query Processor
    • Labels:
      None
    • Environment:

      Apache Hadoop 0.20.1
      Apache Hive Trunk

      Description

      For Example:

      hive (default)> create table liza_1 as
      > select *, sum(key), sum(value)
      > from new_src;
      Total MapReduce jobs = 1
      Launching Job 1 out of 1
      Number of reduce tasks determined at compile time: 1
      In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
      In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
      In order to set a constant number of reducers:
      set mapred.reduce.tasks=<number>
      Starting Job = job_201304191025_0003, Tracking URL = http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0003
      Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job -kill job_201304191025_0003
      Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
      2013-04-22 11:09:28,017 Stage-1 map = 0%, reduce = 0%
      2013-04-22 11:09:34,054 Stage-1 map = 0%, reduce = 100%
      2013-04-22 11:09:37,074 Stage-1 map = 100%, reduce = 100%
      Ended Job = job_201304191025_0003
      Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
      FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid object name)
      FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
      MapReduce Jobs Launched:
      Job 0: Reduce: 1 HDFS Read: 0 HDFS Write: 12 SUCCESS
      Total MapReduce CPU Time Spent: 0 msec

      hive (default)> create table liza_1 as
      > select *, sum(key), sum(value)
      > from new_src
      > group by key, value;
      Total MapReduce jobs = 1
      Launching Job 1 out of 1
      Number of reduce tasks not specified. Estimated from input data size: 1
      In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
      In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
      In order to set a constant number of reducers:
      set mapred.reduce.tasks=<number>
      Starting Job = job_201304191025_0004, Tracking URL = http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0004
      Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job -kill job_201304191025_0004
      Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
      2013-04-22 11:11:58,945 Stage-1 map = 0%, reduce = 0%
      2013-04-22 11:12:01,964 Stage-1 map = 0%, reduce = 100%
      2013-04-22 11:12:04,982 Stage-1 map = 100%, reduce = 100%
      Ended Job = job_201304191025_0004
      Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
      FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid object name)
      FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
      MapReduce Jobs Launched:
      Job 0: Reduce: 1 HDFS Read: 0 HDFS Write: 0 SUCCESS
      Total MapReduce CPU Time Spent: 0 msec

      But the following two queries work:
      hive (default)> create table liza_1 as select * from new_src;
      Total MapReduce jobs = 3
      Launching Job 1 out of 3
      Number of reduce tasks is set to 0 since there's no reduce operator
      Starting Job = job_201304191025_0006, Tracking URL = http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0006
      Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job -kill job_201304191025_0006
      Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
      2013-04-22 11:15:00,681 Stage-1 map = 0%, reduce = 0%
      2013-04-22 11:15:03,697 Stage-1 map = 100%, reduce = 100%
      Ended Job = job_201304191025_0006
      Stage-4 is selected by condition resolver.
      Stage-3 is filtered out by condition resolver.
      Stage-5 is filtered out by condition resolver.
      Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive-scratchdir/hive_2013-04-22_11-14-54_632_6709035018023861094/-ext-10001
      Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
      Table default.liza_1 stats: [num_partitions: 0, num_files: 0, num_rows: 0, total_size: 0, raw_data_size: 0]
      MapReduce Jobs Launched:
      Job 0: HDFS Read: 0 HDFS Write: 0 SUCCESS
      Total MapReduce CPU Time Spent: 0 msec
      OK
      Time taken: 9.576 seconds

      hive (default)> create table liza_1 as
      > select sum (key), sum(value)
      > from new_test;
      Total MapReduce jobs = 1
      Launching Job 1 out of 1
      Number of reduce tasks determined at compile time: 1
      In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
      In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
      In order to set a constant number of reducers:
      set mapred.reduce.tasks=<number>
      Starting Job = job_201304191025_0008, Tracking URL = http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0008
      Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job -kill job_201304191025_0008
      Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
      2013-04-22 11:22:52,200 Stage-1 map = 0%, reduce = 0%
      2013-04-22 11:22:55,216 Stage-1 map = 0%, reduce = 100%
      2013-04-22 11:22:58,234 Stage-1 map = 100%, reduce = 100%
      Ended Job = job_201304191025_0008
      Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
      Table default.liza_1 stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 6, raw_data_size: 0]
      MapReduce Jobs Launched:
      Job 0: Reduce: 1 HDFS Read: 0 HDFS Write: 6 SUCCESS
      Total MapReduce CPU Time Spent: 0 msec
      OK
      Time taken: 11.115 seconds

      In MySQL:

      mysql> select * from example;
      +------+------+
      | id   | data |
      +------+------+
      |    1 |    2 |
      |    2 |    2 |
      |    3 |    3 |
      +------+------+
      3 rows in set (0.00 sec)

      mysql> select *, sum(id), count(data) from example;
      +------+------+---------+-------------+
      | id   | data | sum(id) | count(data) |
      +------+------+---------+-------------+
      |    1 |    2 |       6 |           3 |
      +------+------+---------+-------------+
      1 row in set (0.03 sec)
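      If the InvalidObjectException is caused by the column names that CTAS derives from the select list (a plausible reading of the metastore error, since unaliased aggregate expressions get compiler-generated column names), explicitly listing the columns and aliasing the aggregates may serve as a workaround. The following sketch is hypothetical and was not part of the original report; the table and column names are taken from the failing queries above:

      ```sql
      -- Hypothetical workaround: avoid '*' and give each aggregate an
      -- explicit alias, so CTAS can derive valid column names for liza_1.
      create table liza_1 as
      select key, value, sum(key) as sum_key, sum(value) as sum_value
      from new_src
      group by key, value;
      ```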

        Attachments

        1. HIVE-4392.D10431.5.patch
          49 kB
          Phabricator
        2. HIVE-4392.D10431.4.patch
          31 kB
          Phabricator
        3. HIVE-4392.D10431.3.patch
          26 kB
          Phabricator
        4. HIVE-4392.D10431.2.patch
          25 kB
          Phabricator
        5. HIVE-4392.D10431.1.patch
          13 kB
          Phabricator


              People

               • Assignee:
                 Navis Ryu
               • Reporter:
                 caofangkun
               • Votes:
                 0
               • Watchers:
                 7
