HIVE-4732

Reduce or eliminate the expensive Schema equals() check for AvroSerde

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.13.0
    • Labels: None

      Description

      The AvroSerde spends a significant amount of time checking schema equality. Changing to compare hashcodes (which can be computed once then reused) will improve performance.

      1. HIVE-4732.1.patch
        1 kB
        Mark Wagner
      2. HIVE-4732.4.patch
        17 kB
        Mohammad Kamrul Islam
      3. HIVE-4732.5.patch
        18 kB
        Mohammad Kamrul Islam
      4. HIVE-4732.6.patch
        18 kB
        Mohammad Kamrul Islam
      5. HIVE-4732.7.patch
        18 kB
        Mohammad Kamrul Islam
      6. HIVE-4732.v1.patch
        16 kB
        Mohammad Kamrul Islam
      7. HIVE-4732.v4.patch
        17 kB
        Mohammad Kamrul Islam

        Activity

        Mark Wagner added a comment -

        This patch improves per-record read and deserialization time by ~40% in my tests. Since this is a small and useful change, I'd propose that it go into branch-0.11 as well as trunk.

        Navis added a comment -

        Schema in avro-1.7.1, which is in hive-0.11.0, compares hashCodes first. This issue might be regarded as resolved.

        Ashutosh Chauhan added a comment -

        Mark Wagner, shall we close this as "Not a problem", per Navis' comments?

        Mark Wagner added a comment -

        I double checked and I am using the latest release (1.7.4) for my benchmarks. Although the equals() method uses hashcode to quickly discard unequal schemas, it still does a full recursive walk of the schema to make sure all the nodes are equal, so I think this patch is still a necessary change.

        Edward Capriolo added a comment - edited

        We cannot use hashCode() where equals() should be used. Just because two things have the same hashCode() does not mean that they are equal. Future versions may not work the same way.

        How about something like this:

        if (this.hashCode() == other.hashCode() && this.equals(other)) {
          // safe to treat the two as equal
        }

        In this way you still get the short-circuit optimized behavior you want for performance in most cases, but the logic is still correct if a hashCode collision happens.
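
        A minimal sketch of that pattern, with illustrative names (CachedSchema is not from the patch): the hashCode() is computed once per schema and reused as a cheap negative check, with the full equals() kept as the final confirmation.

        import org.apache.avro.Schema;

        /** Illustrative sketch only: cache the hashcode once, fall back to equals() when hashcodes match. */
        class CachedSchema {
          private final Schema schema;
          private final int hash;              // computed up front so every later comparison can reuse it

          CachedSchema(Schema schema) {
            this.schema = schema;
            this.hash = schema.hashCode();
          }

          boolean sameAs(CachedSchema other) {
            // Unequal hashcodes prove the schemas differ; equal hashcodes still require equals()
            // so the result stays correct even under a hashcode collision.
            return this.hash == other.hash && this.schema.equals(other.schema);
          }
        }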

        Mark Wagner added a comment -

        && wouldn't allow short-circuiting if hashCodes were equal, only if they're unequal. This change definitely assumes that we won't be unlucky and have a hashcode collision. An alternative is to give each RecordReader an ID and keep track of which readers have the correct schema in the deserializer. All the records from the same reader should have the same schema.

        Edward Capriolo added a comment -

        This change definitely assumes that we won't be unlucky and have a hashcode collision.

        ^ is a dangerous assumption I would not want to +1.

        && wouldn't allow short circuiting if hashCodes were equal, only if they're unequal.

        Ah, good point. I just meant to say 'use short-circuiting and a compound if clause' (it could use !, |, ||, && or &, or whatever you need). The point was that you can optimize the code without making it logically incorrect.

        Mohammad Kamrul Islam added a comment -

        Thanks Edward for the comments.
        We are now trying to take a different approach to address the same issue.
        A new patch is coming soon.

        Mohammad Kamrul Islam added a comment -

        New patch is uploaded in RB: https://reviews.apache.org/r/12480/

        Description copied from RB:
        From our performance analysis, we found that AvroSerde's schema.equals() call consumed a substantial amount (nearly 40%) of the time. This patch minimizes the number of schema.equals() calls by making the check as late, and as seldom, as possible.

        First, we add a unique ID to each record reader, which is then included in every AvroGenericRecordWritable. Then we introduce two new data structures (a HashSet and a HashMap) to store intermediate results and avoid duplicate checks. The HashSet contains the IDs of all record readers whose records don't need any re-encoding. The HashMap, on the other hand, contains the re-encoders already created; it works as a cache and allows re-encoders to be reused. With this change, our test shows a nearly 40% reduction in Avro record reading time.
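
        A minimal sketch of the decision caching described above, under illustrative names (the committed patch additionally keeps the re-encoders themselves in the HashMap): the expensive Schema.equals() is paid at most once per record reader rather than once per record.

        import java.util.HashSet;
        import java.util.Set;
        import java.util.UUID;
        import org.apache.avro.Schema;

        /** Illustrative sketch only: cache the per-reader schema check, keyed by the reader's UUID. */
        class SchemaCheckCache {
          // Readers whose file schema is known to equal the reader schema: skip the check entirely.
          private final Set<UUID> matching = new HashSet<UUID>();
          // Readers whose schemas are known to differ and therefore need re-encoding.
          private final Set<UUID> needsReEncoding = new HashSet<UUID>();

          /** Returns true if records from this reader must be re-encoded to the reader schema. */
          boolean needsReEncoding(UUID readerId, Schema fileSchema, Schema readerSchema) {
            if (matching.contains(readerId)) {
              return false;                 // fast path: no equals() call at all
            }
            if (needsReEncoding.contains(readerId)) {
              return true;                  // fast path: a cached re-encoder would be reused elsewhere
            }
            // First record seen from this reader: pay for the full Schema.equals() once.
            if (fileSchema.equals(readerSchema)) {
              matching.add(readerId);
              return false;
            }
            needsReEncoding.add(readerId);
            return true;
          }
        }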

        Edward Capriolo added a comment -
        this.recordReaderID = UUID.randomUUID();
        

        Random UUID does not guarantee no collisions. I believe using a TimeUUID is better.

        Edward Capriolo added a comment -

        See: http://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates

        Ashutosh Chauhan added a comment -

        Thanks, Mohammad Kamrul Islam, for addressing the concern. Overall looks good. One quick question: I see this line in AvroGenericRecordWritable::write(DataOutput out):
        out.writeUTF(recordReaderID.toString()); Doesn't this mean the ID is now persisted on disk? I thought the ID is generated by the reader at read time and then added to the record, but I don't get why we need to write it when writing the record. Seems like I am missing something obvious.

        Mark Wagner added a comment -

        The write and readFields methods are used when serializing the writable, but not when persisting to disk. We'll still want to maintain that id if the record is serialized and deserialized so we can do the equality comparison on the other side. I don't believe that those methods are ever actually used in Hive (the ORC equivalent of AvroGenericRecordWritable doesn't even implement it), but for completeness they are included.
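
        A minimal sketch of what that looks like (illustrative class name, with the Avro record payload omitted): the reader's UUID simply rides along when the writable itself is serialized and is restored on the other side.

        import java.io.DataInput;
        import java.io.DataOutput;
        import java.io.IOException;
        import java.util.UUID;
        import org.apache.hadoop.io.Writable;

        /** Illustrative sketch only; the real AvroGenericRecordWritable also carries the Avro record. */
        class RecordWritableSketch implements Writable {
          private UUID recordReaderID;

          @Override
          public void write(DataOutput out) throws IOException {
            // Matches the out.writeUTF(recordReaderID.toString()) line quoted above.
            out.writeUTF(recordReaderID.toString());
          }

          @Override
          public void readFields(DataInput in) throws IOException {
            // Restore the same ID after a round trip so the per-reader bookkeeping still works.
            recordReaderID = UUID.fromString(in.readUTF());
          }
        }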

        Ashutosh Chauhan added a comment -

        Thanks, Mark Wagner, for the explanation. +1

        Ashutosh Chauhan added a comment -

        Can one of you upload the patch to JIRA, so that Hive QA can have a run on it?

        Mohammad Kamrul Islam added a comment -

        Same patch as the one under review in RB.

        Mohammad Kamrul Islam added a comment -

        Renamed the file for the pre-commit build.

        Edward Capriolo added a comment -

        Patch is not ready for commit.

        + /**
        + * A unique ID for each record reader.
        + */
        + final private UUID recordReaderID;

        + this.recordReaderID = UUID.randomUUID();
        }

        Your comment conflicts with your code: randomUUID() is not unique. See the link above.

        Hive QA added a comment -

        Overall: -1 at least one tests failed

        Here are the results of testing the latest attachment:
        https://issues.apache.org/jira/secure/attachment/12603120/HIVE-4732.4.patch

        ERROR: -1 due to 1 failed/errored test(s), 3108 tests executed
        Failed tests:

        org.apache.hadoop.hive.serde2.avro.TestGenericAvroRecordWritable.writableContractIsImplementedCorrectly
        

        Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/747/testReport
        Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/747/console

        Messages:

        Executing org.apache.hive.ptest.execution.PrepPhase
        Executing org.apache.hive.ptest.execution.ExecutionPhase
        Executing org.apache.hive.ptest.execution.ReportingPhase
        Tests failed with: TestsFailedException: 1 tests failed
        

        This message is automatically generated.

        Mohammad Kamrul Islam added a comment -

        Edward Capriolo: I can see your point. Indeed a very informative link.
        As the link mentions, the probability of an ID collision is extremely low.
        Pasted from wikipedia:
        "To put these numbers into perspective, the annual risk of someone being hit by a meteorite is estimated to be one chance in 17 billion,[38] which means the probability is about 0.00000000006 (6 × 10−11), equivalent to the odds of creating a few tens of trillions of UUIDs in a year and having one duplicate. In other words, only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%. The probability of one duplicate would be about 50% if every person on earth owns 600 million UUIDs."

        Given that probability, is it really necessary to make things more complex? Moreover, there are usually only a few of these IDs in one Hive session.
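
        For reference, those figures follow from the usual birthday-bound approximation for version-4 UUIDs, which carry 122 random bits (background arithmetic, not text from the thread):

        P(\text{collision among } n \text{ UUIDs}) \approx 1 - e^{-n(n-1)/(2 \cdot 2^{122})},
        \qquad P = 0.5 \;\Rightarrow\; n \approx \sqrt{2 \ln 2}\cdot 2^{61} \approx 2.7 \times 10^{18}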

        Mohammad Kamrul Islam added a comment -

        Fixed the failed testcase.

        Hive QA added a comment -

        Overall: +1 all checks pass

        Here are the results of testing the latest attachment:
        https://issues.apache.org/jira/secure/attachment/12603500/HIVE-4732.5.patch

        SUCCESS: +1 3126 tests passed

        Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/774/testReport
        Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/774/console

        Messages:

        Executing org.apache.hive.ptest.execution.PrepPhase
        Executing org.apache.hive.ptest.execution.ExecutionPhase
        Executing org.apache.hive.ptest.execution.ReportingPhase
        

        This message is automatically generated.

        Ashutosh Chauhan added a comment -

        I agree with Mohammad Kamrul Islam's analysis. Let's not complicate the code for a highly obscure case. Edward Capriolo, let us know if you disagree.

        Edward Capriolo added a comment -

        I do disagree, because it is not complex to generate a GUID that will never collide.

        http://www.javapractices.com/topic/TopicAction.do?Id=56

        An implementation would likely replace 1 line of code with between 2 and 4. It is not a complex task, and there are probably hundreds of references on how to do this on the internet.

        import java.rmi.server.UID;

        public class UniqueId {

          /**
           * Build and display some UID objects.
           */
          public static void main(String... arguments) {
            for (int idx = 0; idx < 10; ++idx) {
              UID userId = new UID();
              System.out.println("User Id: " + userId);
            }
          }
        }

        Would you rather have:
        1) a parachute that very very rarely does not work
        2) a parachute that always works

        Edward Capriolo added a comment -

        If you do not want to do it, just file another JIRA issue and assign it to me, and I'll do it.

        Ashutosh Chauhan added a comment -

        It's up to Mohammad Kamrul Islam whether he wants to do it in the current JIRA or a new one. I am fine either way.

        Mohammad Kamrul Islam added a comment -

        Incorporating Edward Capriolo's comments.

        Hive QA added a comment -

        Overall: -1 at least one tests failed

        Here are the results of testing the latest attachment:
        https://issues.apache.org/jira/secure/attachment/12604084/HIVE-4732.6.patch

        ERROR: -1 due to 174 failed/errored test(s), 1242 tests executed
        Failed tests:

        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_external_table_ppd
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_external_table_queries
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_map_queries
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_map_queries_prefix
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_storage_queries
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_joins
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_ppd_key_range
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_pushdown
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_queries
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_scan_params
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_single_sourced_multi_insert
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_stats
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_stats2
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_stats3
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_stats_empty_partition
        org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_ppd_key_ranges
        org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver_cascade_dbdrop_hadoop20
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket4
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket5
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin7
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_disable_merge_for_bucketing
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_groupby2
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_bucketed_table
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_dyn_part
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_map_operators
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_merge
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_num_buckets
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_join1
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_load_fs2
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_parallel_orderby
        org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_reduce_deduplicate
        org.apache.hadoop.hive.hwi.TestHWISessionManager.testHiveDriver
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testConversionsBaseResultSet
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDatabaseMetaData
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDescribeTable
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDriverProperties
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testErrorMessages
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testExplainStmt
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testMetaDataGetCatalogs
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testMetaDataGetColumns
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testMetaDataGetColumnsMetaData
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testMetaDataGetSchemas
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testMetaDataGetTableTypes
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testMetaDataGetTables
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testNullType
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testPrepareStatement
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testResultSetMetaData
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testSelectAll
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testSelectAllFetchSize
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testSelectAllMaxRows
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testSelectAllPartioned
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testSetCommand
        org.apache.hadoop.hive.jdbc.TestJdbcDriver.testShowTables
        org.apache.hadoop.hive.ql.TestLocationQueries.testAlterTablePartitionLocation_alter5
        org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1
        org.apache.hadoop.hive.ql.exec.TestExecDriver.testMapPlan1
        org.apache.hadoop.hive.ql.exec.TestExecDriver.testMapPlan2
        org.apache.hadoop.hive.ql.exec.TestExecDriver.testMapRedPlan1
        org.apache.hadoop.hive.ql.exec.TestExecDriver.testMapRedPlan2
        org.apache.hadoop.hive.ql.exec.TestExecDriver.testMapRedPlan3
        org.apache.hadoop.hive.ql.exec.TestExecDriver.testMapRedPlan4
        org.apache.hadoop.hive.ql.exec.TestExecDriver.testMapRedPlan5
        org.apache.hadoop.hive.ql.exec.TestExecDriver.testMapRedPlan6
        org.apache.hadoop.hive.ql.history.TestHiveHistory.testSimpleQuery
        org.apache.hadoop.hive.ql.io.TestSymlinkTextInputFormat.testCombine
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_ambiguous_join_col
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_duplicate_alias
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_garbage
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_insert_wrong_number_columns
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_create_table
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_dot
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_function_param2
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_index
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_list_index
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_list_index2
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_map_index
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_map_index2
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_select
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_macro_reserved_word
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_missing_overwrite
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_nonkey_groupby
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_quoted_string
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_column1
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_column2
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_column3
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_column4
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_column5
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_column6
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_function1
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_function2
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_function3
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_function4
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_table1
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_unknown_table2
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_wrong_distinct1
        org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_wrong_distinct2
        org.apache.hadoop.hive.service.TestHiveServer.testDynamicSerde
        org.apache.hadoop.hive.service.TestHiveServer.testFetch
        org.apache.hadoop.hive.service.TestHiveServer.testNonHiveCommand
        org.apache.hcatalog.cli.TestSemanticAnalysis.testAddReplaceCols
        org.apache.hcatalog.cli.TestSemanticAnalysis.testDescDB
        org.apache.hcatalog.fileformats.TestOrcDynamicPartitioned.testHCatDynamicPartitionedTable
        org.apache.hcatalog.fileformats.TestOrcDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask
        org.apache.hcatalog.mapreduce.TestHCatDynamicPartitioned.testHCatDynamicPartitionedTable
        org.apache.hcatalog.mapreduce.TestHCatDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask
        org.apache.hcatalog.mapreduce.TestHCatInputFormat.testBadRecordHandlingFails
        org.apache.hcatalog.mapreduce.TestHCatInputFormat.testBadRecordHandlingPasses
        org.apache.hcatalog.pig.TestHCatLoaderStorer.testSmallTinyInt
        org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testDatabaseOps
        org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testShowDatabases
        org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testShowTables
        org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testTableOps
        org.apache.hive.beeline.src.test.TestBeeLineWithArgs.testPositiveScriptFile
        org.apache.hive.hcatalog.cli.TestSemanticAnalysis.testAddReplaceCols
        org.apache.hive.hcatalog.cli.TestSemanticAnalysis.testDescDB
        org.apache.hive.hcatalog.fileformats.TestOrcDynamicPartitioned.testHCatDynamicPartitionedTable
        org.apache.hive.hcatalog.fileformats.TestOrcDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask
        org.apache.hive.hcatalog.mapreduce.TestHCatDynamicPartitioned.testHCatDynamicPartitionedTable
        org.apache.hive.hcatalog.mapreduce.TestHCatDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask
        org.apache.hive.hcatalog.mapreduce.TestHCatExternalDynamicPartitioned.testHCatDynamicPartitionedTable
        org.apache.hive.hcatalog.mapreduce.TestHCatExternalDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask
        org.apache.hive.hcatalog.mapreduce.TestHCatInputFormat.testBadRecordHandlingFails
        org.apache.hive.hcatalog.mapreduce.TestHCatInputFormat.testBadRecordHandlingPasses
        org.apache.hive.hcatalog.pig.TestE2EScenarios.testReadOrcAndRCFromPig
        org.apache.hive.hcatalog.pig.TestHCatLoaderStorer.testSmallTinyInt
        org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testDatabaseOps
        org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testShowDatabases
        org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testShowTables
        org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testTableOps
        org.apache.hive.jdbc.TestJdbcDriver2.testBadURL
        org.apache.hive.jdbc.TestJdbcDriver2.testBuiltInUDFCol
        org.apache.hive.jdbc.TestJdbcDriver2.testDataTypes
        org.apache.hive.jdbc.TestJdbcDriver2.testDataTypes2
        org.apache.hive.jdbc.TestJdbcDriver2.testDatabaseMetaData
        org.apache.hive.jdbc.TestJdbcDriver2.testDescribeTable
        org.apache.hive.jdbc.TestJdbcDriver2.testDriverProperties
        org.apache.hive.jdbc.TestJdbcDriver2.testDuplicateColumnNameOrder
        org.apache.hive.jdbc.TestJdbcDriver2.testErrorDiag
        org.apache.hive.jdbc.TestJdbcDriver2.testErrorMessages
        org.apache.hive.jdbc.TestJdbcDriver2.testExecutePreparedStatement
        org.apache.hive.jdbc.TestJdbcDriver2.testExecuteQueryException
        org.apache.hive.jdbc.TestJdbcDriver2.testExplainStmt
        org.apache.hive.jdbc.TestJdbcDriver2.testExprCol
        org.apache.hive.jdbc.TestJdbcDriver2.testImportedKeys
        org.apache.hive.jdbc.TestJdbcDriver2.testInvalidURL
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetCatalogs
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetClassicTableTypes
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetColumns
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetColumnsMetaData
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetHiveTableTypes
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetSchemas
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetTableTypes
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetTables
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetTablesClassic
        org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetTablesHive
        org.apache.hive.jdbc.TestJdbcDriver2.testNullResultSet
        org.apache.hive.jdbc.TestJdbcDriver2.testNullType
        org.apache.hive.jdbc.TestJdbcDriver2.testOutOfBoundCols
        org.apache.hive.jdbc.TestJdbcDriver2.testPostClose
        org.apache.hive.jdbc.TestJdbcDriver2.testPrepareStatement
        org.apache.hive.jdbc.TestJdbcDriver2.testPrimaryKeys
        org.apache.hive.jdbc.TestJdbcDriver2.testProcCols
        org.apache.hive.jdbc.TestJdbcDriver2.testProccedures
        org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
        org.apache.hive.jdbc.TestJdbcDriver2.testSelectAll
        org.apache.hive.jdbc.TestJdbcDriver2.testSelectAllFetchSize
        org.apache.hive.jdbc.TestJdbcDriver2.testSelectAllMaxRows
        org.apache.hive.jdbc.TestJdbcDriver2.testSelectAllPartioned
        org.apache.hive.jdbc.TestJdbcDriver2.testSetCommand
        org.apache.hive.jdbc.TestJdbcDriver2.testShowTables
        org.apache.hive.service.cli.TestEmbeddedThriftCLIService.testExecuteStatement
        org.apache.hive.service.cli.TestEmbeddedThriftCLIService.testExecuteStatementAsync
        

        Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/828/testReport
        Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/828/console

        Messages:

        Executing org.apache.hive.ptest.execution.PrepPhase
        Executing org.apache.hive.ptest.execution.ExecutionPhase
        Executing org.apache.hive.ptest.execution.ReportingPhase
        Tests failed with: TestsFailedException: 174 tests failed
        

        This message is automatically generated.

        Mohammad Kamrul Islam added a comment -

        Uploading the previous patch. trunk had an issue in the last pre-commit build.

        Hive QA added a comment -

        Overall: +1 all checks pass

        Here are the results of testing the latest attachment:
        https://issues.apache.org/jira/secure/attachment/12604184/HIVE-4732.7.patch

        SUCCESS: +1 3128 tests passed

        Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/836/testReport
        Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/836/console

        Messages:

        Executing org.apache.hive.ptest.execution.PrepPhase
        Executing org.apache.hive.ptest.execution.ExecutionPhase
        Executing org.apache.hive.ptest.execution.ReportingPhase
        

        This message is automatically generated.

        Ashutosh Chauhan added a comment -

        Committed to trunk. Thanks, Mohammad!

        Hudson added a comment -

        FAILURE: Integrated in Hive-trunk-hadoop2-ptest #110 (See https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/110/)
        HIVE-4732 : Reduce or eliminate the expensive Schema equals() check for AvroSerde (Mohammad Kamrul Islam via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525290)

        • /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java
        • /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
        • /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroGenericRecordWritable.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroDeserializer.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestGenericAvroRecordWritable.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestSchemaReEncoder.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/Utils.java
        Hudson added a comment -

        FAILURE: Integrated in Hive-trunk-hadoop1-ptest #178 (See https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/178/)
        HIVE-4732 : Reduce or eliminate the expensive Schema equals() check for AvroSerde (Mohammad Kamrul Islam via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525290)

        • /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java
        • /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
        • /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroGenericRecordWritable.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroDeserializer.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestGenericAvroRecordWritable.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestSchemaReEncoder.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/Utils.java
        Hudson added a comment -

        FAILURE: Integrated in Hive-trunk-hadoop2 #449 (See https://builds.apache.org/job/Hive-trunk-hadoop2/449/)
        HIVE-4732 : Reduce or eliminate the expensive Schema equals() check for AvroSerde (Mohammad Kamrul Islam via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525290)

        • /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java
        • /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
        • /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroGenericRecordWritable.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroDeserializer.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestGenericAvroRecordWritable.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestSchemaReEncoder.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/Utils.java
        Hudson added a comment -

        FAILURE: Integrated in Hive-trunk-h0.21 #2350 (See https://builds.apache.org/job/Hive-trunk-h0.21/2350/)
        HIVE-4732 : Reduce or eliminate the expensive Schema equals() check for AvroSerde (Mohammad Kamrul Islam via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525290)

        • /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java
        • /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
        • /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroGenericRecordWritable.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroDeserializer.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestGenericAvroRecordWritable.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestSchemaReEncoder.java
        • /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/Utils.java

          People

          • Assignee:
            Mohammad Kamrul Islam
          • Reporter:
            Mark Wagner
          • Votes:
            0
          • Watchers:
            8
