
HIVE-5302: PartitionPruner logs warning on Avro non-partitioned data

    Details

    • Type: Bug
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.11.0
    • Fix Version/s: None
    • Labels:

      Description

      While updating HIVE-3585, I found a test case that triggers the MetaStoreUtils partition-retrieval failure from back in HIVE-4789.

      In this case, the failure is triggered when the partition pruner is handed a non-partitioned table and has to construct a pseudo-partition.

      e.g.

        INSERT OVERWRITE TABLE partitioned_table PARTITION(col) SELECT id, foo, col FROM non_partitioned_table WHERE col <= 9;
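
      For illustration, here is a minimal self-contained Java sketch of the failure mode (the names are hypothetical, not Hive's actual pruner code path): a SerDe-style initializer that requires an Avro schema property succeeds with the table's properties but fails when handed a pseudo-partition property set that lacks them.

        import java.util.Properties;

        public class PseudoPartitionSketch {
            // Hypothetical stand-in for a SerDe initializer that, like the
            // AvroSerDe, looks for its schema under avro.schema.literal or
            // avro.schema.url.
            static void initSerDe(Properties props) throws Exception {
                if (props.getProperty("avro.schema.literal") == null
                        && props.getProperty("avro.schema.url") == null) {
                    // The real AvroSerDe logs an AvroSerdeException here.
                    throw new Exception("neither avro.schema.literal nor avro.schema.url specified");
                }
            }

            public static void main(String[] args) throws Exception {
                // Table-level properties carry the Avro schema, so init succeeds.
                Properties tableProps = new Properties();
                tableProps.setProperty("avro.schema.url", "file:///tmp/episodes.avsc");
                initSerDe(tableProps);

                // A pseudo-partition built without the table's properties has no
                // schema entry, so the same initializer throws.
                Properties pseudoPartitionProps = new Properties();
                initSerDe(pseudoPartitionProps);
            }
        }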
      
      Attachments

      1. HIVE-5302.1-branch-0.12.patch.txt
        287 kB
        Sean Busbey
      2. HIVE-5302.1.patch.txt
        281 kB
        Sean Busbey
      3. HIVE-5302.1.patch.txt
        281 kB
        Brock Noland

        Issue Links

          • This issue blocks HIVE-3585
          • This issue is duplicated by HIVE-5456
          • This issue is related to HIVE-9056

          Activity

          Sean Busbey created issue -
          Sean Busbey made changes -
          Assignee: Sean Busbey [ busbey ]
          Sean Busbey made changes -
          Link This issue blocks HIVE-3585 [ HIVE-3585 ]
          Sean Busbey added a comment -

          Review board for patch to trunk.
          Sean Busbey added a comment -

          Patches for trunk and for branch-0.12.

          This touches lots of .out files, so it will probably go stale quickly.

          Sean Busbey made changes -
          Attachment HIVE-5302.1.patch.txt [ 12603828 ]
          Attachment HIVE-5302.1-branch-0.12.patch.txt [ 12603829 ]
          Sean Busbey made changes -
          Status: Open [ 1 ] → Patch Available [ 10002 ]
          Edward Capriolo added a comment -

          I am reviewing this now. Navis et al., can I get a second set of eyes on this?

          Brock Noland added a comment -

          Re-uploading the trunk patch so it gets tested. The script just takes the latest patch.

          Brock Noland made changes -
          Attachment HIVE-5302.1.patch.txt [ 12603850 ]
          Brock Noland added a comment -

          The change looks reasonable to me. About this change, Ashutosh said in HIVE-4789: "Your changes in MetaStoreUtils are indeed reasonable. I just wanted to make sure whether they are really needed. If you can come up with a testcase which shows the failure without changes in MetaStoreUtils, that will make it easier to concretize why these changes are useful."
          Sean Busbey added a comment -

          In case I didn't make this clear enough in the RB, the additional query added to avro_partitioned.q does fail without the changes to MetaStoreUtils.

          Brock Noland added a comment -

          Yep

          Ashutosh Chauhan added a comment -

          Thanks, Sean Busbey, for coming up with a testcase.
          However, the changes in the .q.out files indicate that this will make explain extended confusing for people, since partition properties will now list numPartitions, which should really be shown as a table property.

          Ashutosh Chauhan added a comment -

          As an extension of that, all table-level properties will now also automagically appear as partition properties, which doesn't feel right. Normally, it should never be a requirement that a partition needs to know table properties. The problem arises because of a quirk in how the AvroSerDe works: it stores its schema in the properties object instead of in the metastore columns table. I think this problem is too specific to Avro, so it should be handled in Avro-specific code, perhaps the AvroSerDe itself.

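          To make the concern concrete, here is a hedged sketch (the helper name is hypothetical, not the actual patch) of the kind of table-into-partition property merge under discussion. Every table-level entry, not just the Avro schema, becomes visible as a partition property:

            import java.util.Map;
            import java.util.Properties;

            public class PropertyMergeSketch {
                // Hypothetical merge: copy everything from the table onto the
                // partition, letting partition-level entries win on collision.
                static Properties mergeTableIntoPartition(Properties partProps,
                                                          Properties tableProps) {
                    Properties merged = new Properties();
                    merged.putAll(tableProps); // e.g. avro.schema.url, numPartitions
                    merged.putAll(partProps);
                    return merged;
                }

                public static void main(String[] args) {
                    Properties tableProps = new Properties();
                    tableProps.setProperty("avro.schema.url", "file:///tmp/episodes.avsc");
                    tableProps.setProperty("numPartitions", "0"); // leaks to the partition

                    Properties merged = mergeTableIntoPartition(new Properties(), tableProps);
                    for (Map.Entry<Object, Object> e : merged.entrySet()) {
                        System.out.println(e.getKey() + "=" + e.getValue());
                    }
                }
            }
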
          Ashutosh Chauhan added a comment -

          Secondly, the .xml file changes clearly show that this will bloat the plan with unnecessary info that is not required at execution time. I really think we should spend more time on getting your test case to work in a less intrusive fashion.

          Edward Capriolo added a comment -

          So is it the case that the Avro SerDe was working, but some other change in Hive 0.11 broke existing functionality?

          I do not see a huge problem with table properties showing up in partition properties as long as the two do not collide with each other. However, if there is a cleaner way to do this without bloating the plan, that seems like a reasonable endeavor. Does anyone have a concrete suggestion as to how this could be written instead?

          Hive QA added a comment -

          Overall: -1 at least one test failed

          Here are the results of testing the latest attachment:
          https://issues.apache.org/jira/secure/attachment/12603850/HIVE-5302.1.patch.txt

          ERROR: -1 due to 1 failed/errored test(s), 3126 tests executed
          Failed tests:

          org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1
          

          Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/804/testReport
          Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/804/console

          Messages:

          Executing org.apache.hive.ptest.execution.PrepPhase
          Executing org.apache.hive.ptest.execution.ExecutionPhase
          Executing org.apache.hive.ptest.execution.ReportingPhase
          Tests failed with: TestsFailedException: 1 tests failed
          

          This message is automatically generated.

          Mark Wagner added a comment -

          Yes, that's right Edward Capriolo. This ultimately traces back to the changes made in HIVE-3833.

          Sean Busbey, I'm having difficulty reproducing the failure. I've added your changes to avro_partitioned, as well as a select from the table. It goes through cleanly and the result looks correct:

          An Unearthly Child      23 November 1963        1
          The Power of the Daleks 5 November 1966 2
          Horror of Fang Rock     3 September 1977        4
          Castrolava      4 January 1982  5
          The Mysterious Planet   6 September 1986        6
          

          I ran this on trunk and pulled right before running. Any idea what might be different between us? How did the test case fail for you without the MetaStoreUtils changes?

          Edward Capriolo added a comment -

          Mark Wagner Sean Busbey Can the two of you come to a consensus as to whether the bug still exists?

          Ashutosh Chauhan I understand your concern about bloating the plan; however, the plan is fairly ephemeral and changes quite often. If we can confirm the issue, this is surely a 0.12 blocker. You have mentioned that you would like to see this issue resolved a different way. Without a concrete suggestion as to what the better way might be, we are at a standstill.

          I do not think we want to hold up 0.12 longer than we need to, and I do not think we want Avro broken. Does anyone want to add anything? If not, I am +1 on this patch.

          Edward Capriolo added a comment -

          Sean Busbey

          hive>  SELECT * FROM episodes_partitioned WHERE doctor_pt > 6 ORDER BY air_date;
          Total MapReduce jobs = 1
          Launching Job 1 out of 1
          Number of reduce tasks determined at compile time: 1
          In order to change the average load for a reducer (in bytes):
            set hive.exec.reducers.bytes.per.reducer=<number>
          In order to limit the maximum number of reducers:
            set hive.exec.reducers.max=<number>
          In order to set a constant number of reducers:
            set mapred.reduce.tasks=<number>
          Execution log at: /tmp/edward/.log
          Job running in-process (local Hadoop)
          Hadoop job information for null: number of mappers: 0; number of reducers: 0
          2013-09-27 22:52:03,525 null map = 100%,  reduce = 100%
          Ended Job = job_local_0001
          Execution completed successfully
          Mapred Local Task Succeeded . Convert the Join into MapJoin
          OK
          The Doctor's Wife	14 May 2011	11	11
          Rose	26 March 2005	9	9
          The Eleventh Hour	3 April 2010	11	11
          Time taken: 4.121 seconds, Fetched: 3 row(s)
          hive> show partitions episodes_partitioned;
          

          I also confirmed this is working on hive-trunk -> hadoop 0.20.2 outside the unit tests. Do you think this is still an issue? If it is not, we can just commit the .q file to ensure there are no regressions.

          Sean Busbey added a comment -

          Arg. Okay, tl;dr: I need to go back to the drawing board on finding a suitable test. Please lower priority or close as appropriate.

          Long version:

          In setting up my test case I was too quick to presume that an AvroSerdeException showing up in the logs was a hard failure. But there does appear to be a non-fatal problem when the partition pruner optimization is working with a non-partitioned Avro table. It attempts to make a shadow partition to represent the whole table. Creating this partition relies on an initializer that goes through a code path for instantiating the SerDe based on feedback just from MetaStoreUtils.

          So the AvroSerDe fails during initialization (and logs a WARN about it with an AvroSerdeException), but since this instance of the SerDe is never actually used, it doesn't result in a failure.

          You can see this by running even the basic sanity test:

            $> ant clean package
          …
            $> ant -Dmodule=ql -Dtestcase=TestCliDriver -Dqfile=avro_sanity_test.q test
          …
          BUILD SUCCESSFUL
          Total time: 1 minute 15 seconds
            $> less build/ql/tmp/hive.log
          

          In the log, grep for AvroSerdeException (for me it's at line 3198).

          So sad Sean will need to go back to finding a case where this explodes in a way that stops things.

          On the matter of query plan bloat, we could isolate the related changes to the Avro SerDe so long as there's a way to get at table properties during SerDe initialization. That way it could check partition-specific properties and then fall back to the table's on its own. I'll worry about that once I find a test case.

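          As a rough sketch of the fallback Sean describes (names are hypothetical; per the discussion, the real fix would live in Avro-specific code), the SerDe could consult partition properties first and fall back to the table's properties only for the schema keys it needs, leaving the plan untouched:

            import java.util.Properties;

            public class SchemaFallbackSketch {
                // Hypothetical Avro-specific lookup: prefer the partition's own
                // value, fall back to the table's, only for schema keys.
                static String schemaProperty(String key, Properties partProps,
                                             Properties tableProps) {
                    String value = partProps.getProperty(key);
                    return (value != null) ? value : tableProps.getProperty(key);
                }

                public static void main(String[] args) {
                    Properties tableProps = new Properties();
                    tableProps.setProperty("avro.schema.url", "file:///tmp/episodes.avsc");
                    Properties partProps = new Properties(); // pseudo-partition: no schema

                    // Resolves from the table without copying every table property
                    // onto the partition, so the query plan stays lean.
                    System.out.println(schemaProperty("avro.schema.url", partProps, tableProps));
                }
            }
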
          Edward Capriolo added a comment -

          We do not necessarily need a documented testable case to justify the change; seeing a non-fatal error in the logs is reason enough to apply the patch.

          In the matter of query plan bloat, we could isolate related changes to the Avro Serde so long as there's a way to get at table properties during SerDe initialization. That way it could check partition-specific and then fall back to table on its own. I'll worry about that once I find a test case.

          I would focus less on finding a test case. We can treat this as an optimization, and take your word that there are cases where the current system does not work. See if you can find this other way to solve it without affecting the plan; I think that is a big win for all parties. If it is not possible, there is nothing wrong with committing your original patch in my eyes.

          Thejas M Nair made changes -
          Priority: Blocker [ 1 ] → Critical [ 2 ]
          Thejas M Nair added a comment -

          Changing the priority to critical instead of blocker.

          Prasad Mujumdar made changes -
          Link This issue is duplicated by HIVE-5456 [ HIVE-5456 ]
          Prasad Mujumdar made changes -
          Link This issue is duplicated by HIVE-5456 [ HIVE-5456 ]
          Karen Clark made changes -
          Link This issue is duplicated by HIVE-5456 [ HIVE-5456 ]
          Hive QA added a comment -

          Overall: -1 no tests executed

          Here are the results of testing the latest attachment:
          https://issues.apache.org/jira/secure/attachment/12603850/HIVE-5302.1.patch.txt

          Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1249/testReport
          Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1249/console

          Messages:

          Executing org.apache.hive.ptest.execution.PrepPhase
          Tests exited with: NonZeroExitCodeException
          Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n '' ]]
          + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
          + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
          + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
          + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
          + cd /data/hive-ptest/working/
          + tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1249/source-prep.txt
          + [[ false == \t\r\u\e ]]
          + mkdir -p maven ivy
          + [[ svn = \s\v\n ]]
          + [[ -n '' ]]
          + [[ -d apache-svn-trunk-source ]]
          + [[ ! -d apache-svn-trunk-source/.svn ]]
          + [[ ! -d apache-svn-trunk-source ]]
          + cd apache-svn-trunk-source
          + svn revert -R .
          Reverted 'metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java'
          Reverted 'metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java'
          Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java'
          Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java'
          Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java'
          Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java'
          Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java'
          Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java'
          Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java'
          Reverted 'metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py'
          Reverted 'metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py'
          Reverted 'metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote'
          Reverted 'metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp'
          Reverted 'metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp'
          Reverted 'metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h'
          Reverted 'metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h'
          Reverted 'metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp'
          Reverted 'metastore/src/gen/thrift/gen-rb/thrift_hive_metastore.rb'
          Reverted 'metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb'
          Reverted 'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java'
          Reverted 'metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php'
          Reverted 'metastore/src/gen/thrift/gen-php/metastore/Types.php'
          Reverted 'metastore/if/hive_metastore.thrift'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/plan/DropTableDesc.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/ArchiveUtils.java'
          Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java'
          ++ egrep -v '^X|^Performing status on external'
          ++ awk '{print $2}'
          ++ svn status --no-ignore
          + rm -rf target datanucleus.log ant/target shims/target shims/0.20/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/common-secure/target packaging/target hbase-handler/target testutils/target jdbc/target metastore/target metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsExpr.java metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsRequest.java metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RequestPartsSpec.java metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsResult.java itests/target itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target itests/hive-unit/target itests/custom-serde/target itests/util/target hcatalog/target hcatalog/storage-handlers/hbase/target hcatalog/server-extensions/target hcatalog/core/target hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen service/target contrib/target serde/target beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target
          + svn update
          
          Fetching external item into 'hcatalog/src/test/e2e/harness'
          External at revision 1566024.
          
          At revision 1566024.
          + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
          + patchFilePath=/data/hive-ptest/working/scratch/build.patch
          + [[ -f /data/hive-ptest/working/scratch/build.patch ]]
          + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
          + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch
          The patch does not appear to apply with p0, p1, or p2
          + exit 1
          '
          

          This message is automatically generated.

          ATTACHMENT ID: 12603850

          Brock Noland added a comment -

          I believe this isn't fatal, so I have updated the title. Please correct me if I am wrong.

          Brock Noland made changes -
          Summary: PartitionPruner fails on Avro non-partitioned data → PartitionPruner logs warning on Avro non-partitioned data
          Cihad OGE made changes -
          Priority: Critical [ 2 ] → Major [ 3 ]
          Johndee Burks made changes -
          Link This issue is related to HIVE-9056 [ HIVE-9056 ]
          Transition: Open → Patch Available
          Time in source status: 1d 5h 34m; executions: 1; last executed by Sean Busbey on 18/Sep/13 15:15

            People

             • Assignee: Sean Busbey
             • Reporter: Sean Busbey
             • Votes: 1
             • Watchers: 8
