Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Not A Bug
- Affects Version: Impala 2.6.0
- Fix Version: None
Description
To reproduce, do the following:
- In Hive, "create table purge_test_s3 (x int) location 's3a://[bucket]/purge_test_s3';"
- Use the AWS CLI or the AWS web interface to copy files into the location above.
- In Hive, "drop table purge_test_s3 purge;"
The Metastore logs say:
2016-05-20 17:01:41,259 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-103]: Not moving s3a://[bucket]/purge_test_s3 to trash
2016-05-20 17:01:41,364 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-103]: Deleted the diretory s3a://[bucket]/purge_test_s3
However, the files are still there. Oddly, the Hadoop S3A connector reads the files correctly but is unable to delete them.
If the files are copied with the Hadoop CLI instead of the AWS CLI or the AWS web interface, "drop table ... purge" works just fine. Inserting the files through Hive works fine as well.
The root cause of the problem has been found and is mentioned below in Aaron's comment.
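The linked issue HADOOP-13230 ("S3A to optionally retain directory markers") points at directory-marker handling as the relevant mechanism. The sketch below is a deliberately simplified, hypothetical model of a flat S3-like object store and a marker-aware "directory" delete; it is not S3A's or Hive's actual code. It illustrates how a client that treats a zero-byte `path/` marker object as an empty directory can delete only the marker while files uploaded out-of-band (AWS CLI or web interface) survive.

```python
# Hypothetical, simplified model of an S3-like flat object store.
# NOT S3A's real implementation -- only an illustration of how
# directory-marker handling can interact badly with out-of-band uploads.

class FakeObjectStore:
    def __init__(self):
        self.objects = {}  # key -> bytes; a flat namespace, no real directories

    def put(self, key, data=b""):
        self.objects[key] = data

    def list_prefix(self, prefix):
        return [k for k in self.objects if k.startswith(prefix)]

    def delete(self, key):
        self.objects.pop(key, None)


def drop_directory(store, path):
    """Hypothetical client-side 'directory' delete: if a zero-byte
    marker object (path + '/') exists, the client assumes the
    directory is empty, removes just the marker, and reports success."""
    marker = path + "/"
    if marker in store.objects and store.objects[marker] == b"":
        store.delete(marker)          # only the marker goes away
        return True
    # Fallback: delete every object under the prefix.
    for key in store.list_prefix(marker):
        store.delete(key)
    return True


store = FakeObjectStore()
# Creating the table location leaves a zero-byte directory marker.
store.put("purge_test_s3/")
# A file copied in out-of-band never touches the marker.
store.put("purge_test_s3/data1.txt", b"1")

drop_directory(store, "purge_test_s3")
# The marker is gone, but the out-of-band file survives.
print(store.list_prefix("purge_test_s3/"))  # ['purge_test_s3/data1.txt']
```

This matches the observed behavior: the delete "succeeds" (the marker is removed and a deletion is logged), yet the files copied in via the AWS CLI remain.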
Issue Links
- is related to: HADOOP-13230 "S3A to optionally retain directory markers" (Resolved)