IMPALA-6455

test_partition_metadata_compatibility flaky failures


    Description

      This test has been observed to fail fairly frequently on GVO jobs.

      https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/1095/consoleFull
      https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/1101/consoleFull

      07:54:35 [gw5] FAILED metadata/test_partition_metadata.py::TestPartitionMetadata::test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: seq/snap/block]
      07:54:35 [gw2] FAILED metadata/test_partition_metadata.py::TestPartitionMetadata::test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: avro/snap/block]
      
      08:35:07 FAIL metadata/test_partition_metadata.py::TestPartitionMetadata::()::test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: seq/snap/block]
      08:35:07 FAIL metadata/test_partition_metadata.py::TestPartitionMetadata::()::test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: avro/snap/block]
      08:35:07 =================================== FAILURES ===================================
      08:35:07  TestPartitionMetadata.test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: seq/snap/block] 
      08:35:07 [gw5] linux2 -- Python 2.7.12 /home/ubuntu/Impala/bin/../infra/python/env/bin/python
      08:35:07 metadata/test_partition_metadata.py:127: in test_partition_metadata_compatibility
      08:35:07     "insert into %s partition (x) values(1,1)" % FQ_TBL_HIVE)
      08:35:07 common/impala_test_suite.py:665: in run_stmt_in_hive
      08:35:07     raise RuntimeError(stderr)
      08:35:07 E   RuntimeError: SLF4J: Class path contains multiple SLF4J bindings.
      08:35:07 E   SLF4J: Found binding in [jar:file:/home/ubuntu/Impala/toolchain/cdh_components/hbase-1.2.0-cdh5.15.0-SNAPSHOT/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      08:35:07 E   SLF4J: Found binding in [jar:file:/home/ubuntu/Impala/toolchain/cdh_components/hadoop-2.6.0-cdh5.15.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      08:35:07 E   SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
      08:35:07 E   SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
      08:35:07 E   scan complete in 2ms
      08:35:07 E   Connecting to jdbc:hive2://localhost:11050
      08:35:07 E   Connected to: Apache Hive (version 1.1.0-cdh5.15.0-SNAPSHOT)
      08:35:07 E   Driver: Hive JDBC (version 1.1.0-cdh5.15.0-SNAPSHOT)
      08:35:07 E   Transaction isolation: TRANSACTION_REPEATABLE_READ
      08:35:07 E   No rows affected (0.042 seconds)
      08:35:07 E   INFO  : Compiling command(queryId=ubuntu_20180130075454_4bf94127-1f71-4a2e-be7e-9086b1b1f34e): insert into test_partition_metadata_compatibility_cca30e65.part_parquet_tbl_hive partition (x) values(1,1)
      08:35:07 E   INFO  : Semantic Analysis Completed
      08:35:07 E   INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null)], properties:null)
      08:35:07 E   INFO  : Completed compiling command(queryId=ubuntu_20180130075454_4bf94127-1f71-4a2e-be7e-9086b1b1f34e); Time taken: 0.105 seconds
      08:35:07 E   INFO  : Executing command(queryId=ubuntu_20180130075454_4bf94127-1f71-4a2e-be7e-9086b1b1f34e): insert into test_partition_metadata_compatibility_cca30e65.part_parquet_tbl_hive partition (x) values(1,1)
      08:35:07 E   INFO  : Query ID = ubuntu_20180130075454_4bf94127-1f71-4a2e-be7e-9086b1b1f34e
      08:35:07 E   INFO  : Total jobs = 3
      08:35:07 E   INFO  : Launching Job 1 out of 3
      08:35:07 E   INFO  : Starting task [Stage-1:MAPRED] in serial mode
      08:35:07 E   INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
      08:35:07 E   INFO  : number of splits:1
      08:35:07 E   INFO  : Submitting tokens for job: job_local512703045_0002
      08:35:07 E   INFO  : Cleaning up the staging area file:/tmp/hadoop-ubuntu/mapred/staging/ubuntu512703045/.staging/job_local512703045_0002
      08:35:07 E   ERROR : Job Submission failed with exception 'java.io.IOException(java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File /tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp does not exist)'
      08:35:07 E   java.io.IOException: java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File /tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp does not exist
      08:35:07 E   	at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:143)
      08:35:07 E   	at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:171)
      08:35:07 E   	at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:758)
      08:35:07 E   	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:244)
      08:35:07 E   	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
      08:35:07 E   	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
      08:35:07 E   	at java.security.AccessController.doPrivileged(Native Method)
      08:35:07 E   	at javax.security.auth.Subject.doAs(Subject.java:422)
      08:35:07 E   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
      08:35:07 E   	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
      08:35:07 E   	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
      08:35:07 E   	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:573)
      08:35:07 E   	at java.security.AccessController.doPrivileged(Native Method)
      08:35:07 E   	at javax.security.auth.Subject.doAs(Subject.java:422)
      08:35:07 E   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
      08:35:07 E   	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:573)
      08:35:07 E   	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:564)
      08:35:07 E   	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:436)
      08:35:07 E   	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:142)
      08:35:07 E   	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
      08:35:07 E   	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280)
      08:35:07 E   	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236)
      08:35:07 E   	at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89)
      08:35:07 E   	at org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301)
      08:35:07 E   	at java.security.AccessController.doPrivileged(Native Method)
      08:35:07 E   	at javax.security.auth.Subject.doAs(Subject.java:422)
      08:35:07 E   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
      08:35:07 E   	at org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314)
      08:35:07 E   	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      08:35:07 E   	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      08:35:07 E   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      08:35:07 E   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      08:35:07 E   	at java.lang.Thread.run(Thread.java:748)
      08:35:07 E   Caused by: java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File /tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp does not exist
      08:35:07 E   	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
      08:35:07 E   	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
      08:35:07 E   	at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:139)
      08:35:07 E   	... 37 more
      08:35:07 E   Caused by: java.io.FileNotFoundException: File /tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp does not exist
      08:35:07 E   	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:598)
      08:35:07 E   	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:811)
      08:35:07 E   	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:588)
      08:35:07 E   	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileLinkStatusInternal(RawLocalFileSystem.java:827)
      08:35:07 E   	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:813)
      08:35:07 E   	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatus(RawLocalFileSystem.java:784)
      08:35:07 E   	at org.apache.hadoop.fs.DelegateToFileSystem.getFileLinkStatus(DelegateToFileSystem.java:132)
      08:35:07 E   	at org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:701)
      08:35:07 E   	at org.apache.hadoop.fs.FilterFs.renameInternal(FilterFs.java:236)
      08:35:07 E   	at org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:674)
      08:35:07 E   	at org.apache.hadoop.fs.FileContext.rename(FileContext.java:932)
      08:35:07 E   	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:369)
      08:35:07 E   	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
      08:35:07 E   	... 4 more
      08:35:07 E   
      08:35:07 E   ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
      08:35:07 E   INFO  : Completed executing command(queryId=ubuntu_20180130075454_4bf94127-1f71-4a2e-be7e-9086b1b1f34e); Time taken: 0.479 seconds
      08:35:07 E   Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=1)
      08:35:07 E   Closing: 0: jdbc:hive2://localhost:11050
      08:35:07 ---------------------------- Captured stderr setup -----------------------------
      08:35:07 -- connecting to: localhost:21000
      08:35:07 SET sync_ddl=False;
      08:35:07 -- executing against localhost:21000
      08:35:07 DROP DATABASE IF EXISTS `test_partition_metadata_compatibility_cca30e65` CASCADE;
      08:35:07 
      08:35:07 SET sync_ddl=False;
      08:35:07 -- executing against localhost:21000
      08:35:07 CREATE DATABASE `test_partition_metadata_compatibility_cca30e65`;
      08:35:07 
      08:35:07 MainThread: Created database "test_partition_metadata_compatibility_cca30e65" for test ID "metadata/test_partition_metadata.py::TestPartitionMetadata::()::test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: seq/snap/block]"
      08:35:07  TestPartitionMetadata.test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: avro/snap/block] 
      08:35:07 [gw2] linux2 -- Python 2.7.12 /home/ubuntu/Impala/bin/../infra/python/env/bin/python
      08:35:07 metadata/test_partition_metadata.py:127: in test_partition_metadata_compatibility
      08:35:07     "insert into %s partition (x) values(1,1)" % FQ_TBL_HIVE)
      08:35:07 common/impala_test_suite.py:665: in run_stmt_in_hive
      08:35:07     raise RuntimeError(stderr)
      08:35:07 E   RuntimeError: SLF4J: Class path contains multiple SLF4J bindings.
      08:35:07 E   SLF4J: Found binding in [jar:file:/home/ubuntu/Impala/toolchain/cdh_components/hbase-1.2.0-cdh5.15.0-SNAPSHOT/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      08:35:07 E   SLF4J: Found binding in [jar:file:/home/ubuntu/Impala/toolchain/cdh_components/hadoop-2.6.0-cdh5.15.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      08:35:07 E   SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
      08:35:07 E   SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
      08:35:07 E   scan complete in 2ms
      08:35:07 E   Connecting to jdbc:hive2://localhost:11050
      08:35:07 E   Connected to: Apache Hive (version 1.1.0-cdh5.15.0-SNAPSHOT)
      08:35:07 E   Driver: Hive JDBC (version 1.1.0-cdh5.15.0-SNAPSHOT)
      08:35:07 E   Transaction isolation: TRANSACTION_REPEATABLE_READ
      08:35:07 E   No rows affected (0.085 seconds)
      08:35:07 E   INFO  : Compiling command(queryId=ubuntu_20180130075454_9972bada-1343-4927-97f3-d12a2b550629): insert into test_partition_metadata_compatibility_b2ac5e.part_parquet_tbl_hive partition (x) values(1,1)
      08:35:07 E   INFO  : Semantic Analysis Completed
      08:35:07 E   INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null)], properties:null)
      08:35:07 E   INFO  : Completed compiling command(queryId=ubuntu_20180130075454_9972bada-1343-4927-97f3-d12a2b550629); Time taken: 0.292 seconds
      08:35:07 E   INFO  : Executing command(queryId=ubuntu_20180130075454_9972bada-1343-4927-97f3-d12a2b550629): insert into test_partition_metadata_compatibility_b2ac5e.part_parquet_tbl_hive partition (x) values(1,1)
      08:35:07 E   INFO  : Query ID = ubuntu_20180130075454_9972bada-1343-4927-97f3-d12a2b550629
      08:35:07 E   INFO  : Total jobs = 3
      08:35:07 E   INFO  : Launching Job 1 out of 3
      08:35:07 E   INFO  : Starting task [Stage-1:MAPRED] in serial mode
      08:35:07 E   INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
      08:35:07 E   INFO  : number of splits:1
      08:35:07 E   INFO  : Submitting tokens for job: job_local1043698527_0001
      08:35:07 E   INFO  : Cleaning up the staging area file:/tmp/hadoop-ubuntu/mapred/staging/ubuntu1043698527/.staging/job_local1043698527_0001
      08:35:07 E   ERROR : Job Submission failed with exception 'java.io.IOException(java.util.concurrent.ExecutionException: java.io.IOException: Unable to rename file: [/tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp/tmp_postgresql-9.0-801.jdbc4.jar] to [/tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp/postgresql-9.0-801.jdbc4.jar])'
      08:35:07 E   java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Unable to rename file: [/tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp/tmp_postgresql-9.0-801.jdbc4.jar] to [/tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp/postgresql-9.0-801.jdbc4.jar]
      08:35:07 E   	at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:143)
      08:35:07 E   	at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:171)
      08:35:07 E   	at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:758)
      08:35:07 E   	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:244)
      08:35:07 E   	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
      08:35:07 E   	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
      08:35:07 E   	at java.security.AccessController.doPrivileged(Native Method)
      08:35:07 E   	at javax.security.auth.Subject.doAs(Subject.java:422)
      08:35:07 E   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
      08:35:07 E   	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
      08:35:07 E   	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
      08:35:07 E   	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:573)
      08:35:07 E   	at java.security.AccessController.doPrivileged(Native Method)
      08:35:07 E   	at javax.security.auth.Subject.doAs(Subject.java:422)
      08:35:07 E   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
      08:35:07 E   	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:573)
      08:35:07 E   	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:564)
      08:35:07 E   	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:436)
      08:35:07 E   	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:142)
      08:35:07 E   	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
      08:35:07 E   	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285)
      08:35:07 E   	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280)
      08:35:07 E   	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236)
      08:35:07 E   	at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89)
      08:35:07 E   	at org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301)
      08:35:07 E   	at java.security.AccessController.doPrivileged(Native Method)
      08:35:07 E   	at javax.security.auth.Subject.doAs(Subject.java:422)
      08:35:07 E   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
      08:35:07 E   	at org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314)
      08:35:07 E   	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      08:35:07 E   	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      08:35:07 E   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      08:35:07 E   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      08:35:07 E   	at java.lang.Thread.run(Thread.java:748)
      08:35:07 E   Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Unable to rename file: [/tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp/tmp_postgresql-9.0-801.jdbc4.jar] to [/tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp/postgresql-9.0-801.jdbc4.jar]
      08:35:07 E   	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
      08:35:07 E   	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
      08:35:07 E   	at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:139)
      08:35:07 E   	... 37 more
      08:35:07 E   Caused by: java.io.IOException: Unable to rename file: [/tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp/tmp_postgresql-9.0-801.jdbc4.jar] to [/tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp/postgresql-9.0-801.jdbc4.jar]
      08:35:07 E   	at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:327)
      08:35:07 E   	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:367)
      08:35:07 E   	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
      08:35:07 E   	... 4 more
      08:35:07 E   
      08:35:07 E   ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
      08:35:07 E   INFO  : Completed executing command(queryId=ubuntu_20180130075454_9972bada-1343-4927-97f3-d12a2b550629); Time taken: 0.587 seconds
      08:35:07 E   Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=1)
      08:35:07 E   Closing: 0: jdbc:hive2://localhost:11050
      08:35:07 ---------------------------- Captured stderr setup -----------------------------
      08:35:07 -- connecting to: localhost:21000
      08:35:07 SET sync_ddl=False;
      08:35:07 -- executing against localhost:21000
      08:35:07 DROP DATABASE IF EXISTS `test_partition_metadata_compatibility_b2ac5e` CASCADE;
      08:35:07 
      08:35:07 SET sync_ddl=False;
      08:35:07 -- executing against localhost:21000
      08:35:07 CREATE DATABASE `test_partition_metadata_compatibility_b2ac5e`;
      08:35:07 
      08:35:07 MainThread: Created database "test_partition_metadata_compatibility_b2ac5e" for test ID "metadata/test_partition_metadata.py::TestPartitionMetadata::()::test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: avro/snap/block]"
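
      Both traces fail inside Hive's local-mode job submission (LocalDistributedCacheManager.setup) and reference the same scratch directory, /tmp/hadoop-ubuntu/mapred/local/1517298875469_tmp: the seq/snap/block run on gw5 hits a FileNotFoundException for that directory, while the avro/snap/block run on gw2 cannot rename a jar inside it. That pattern suggests the two concurrently running tests raced on Hive's local distributed-cache directory. The sketch below is a hypothetical mitigation only, written in the Python of the test framework: it assumes the existing run_stmt_in_hive helper from common/impala_test_suite.py, and the wrapper name, marker strings, and retry parameters are illustrative rather than an actual fix.

          # Hypothetical mitigation sketch: retry the Hive statement when the
          # transient local-MR distributed-cache failure captured above shows up.
          import time

          TRANSIENT_MARKERS = [
              "LocalDistributedCacheManager.setup",
              "/tmp/hadoop-ubuntu/mapred/local",
          ]

          def run_stmt_in_hive_with_retry(suite, stmt, attempts=3, delay_s=5):
              """Run a statement through the suite's Hive helper, retrying only on
              the transient scratch-directory failures seen in this ticket."""
              last_err = None
              for _ in range(attempts):
                  try:
                      return suite.run_stmt_in_hive(stmt)
                  except RuntimeError as e:
                      last_err = e
                      # Any other failure should still fail the test immediately.
                      if not any(m in str(e) for m in TRANSIENT_MARKERS):
                          raise
                      time.sleep(delay_s)
              raise last_err

      A test would call it the same way the failing line does, e.g. run_stmt_in_hive_with_retry(self, "insert into %s partition (x) values(1,1)" % FQ_TBL_HIVE).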
      

          People

            tarmstrong Tim Armstrong
            sailesh Sailesh Mukil