Description
Any INSERT INTO statement run against an S3-backed table deletes the table's existing rows when the scratch directory is also located on S3.
hive> set hive.blobstore.use.blobstore.as.scratchdir=true;
hive> create table t1 (id int, name string) location 's3a://spena-bucket/t1';
hive> insert into table t1 values (1,'name1');
hive> select * from t1;
1	name1
hive> insert into table t1 values (2,'name2');
hive> select * from t1;
2	name2
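Since the failing reproduction explicitly enables the blobstore scratch directory, a plausible workaround (an assumption based on this report, not a confirmed fix) is to leave that setting at false so intermediate data stays on HDFS while only the final table data lives on S3:

```
-- Assumed workaround: keep the scratch directory off the blobstore
hive> set hive.blobstore.use.blobstore.as.scratchdir=false;
hive> insert into table t1 values (3,'name3');
hive> select * from t1;
```

With the scratch directory on HDFS, the INSERT INTO path reported here is not exercised, so previously inserted rows should survive; this trades away whatever write-performance benefit the blobstore scratch directory provides.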
Attachments
Issue Links
- breaks
  - HIVE-15280 Hive.mvFile() misses the "." char when joining the filename + extension (Resolved)
- is related to
  - HIVE-16402 Upgrade to Hadoop 2.8.0 (Closed)
  - HIVE-16411 Revert HIVE-15199 (Closed)
- relates to
  - HADOOP-13823 s3a rename: fail if dest file exists (Resolved)
  - HIVE-12988 Improve dynamic partition loading IV (Closed)