HADOOP-882 improves S3FileSystem so that when certain communications problems with S3 occur the operation is retried. However, the retry mechanism cannot handle a block transfer failure, since blocks may be very large and we don't want to buffer them in memory. This improvement is to write a wrapper (using java.lang.reflect.Proxy if possible - see discussion in HADOOP-882) that can retry block transfers.
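A minimal sketch of such a wrapper, assuming a java.lang.reflect.Proxy over the store interface; the class and parameter names below (RetryProxySketch, maxRetries, sleepMillis) are illustrative rather than the committed Hadoop API:

{code:java}
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RetryProxySketch {

  /** Wraps impl so that every method call is retried when it fails with an IOException. */
  @SuppressWarnings("unchecked")
  public static <T> T create(Class<T> iface, final T impl,
                             final int maxRetries, final long sleepMillis) {
    return (T) Proxy.newProxyInstance(
        iface.getClassLoader(),
        new Class<?>[] { iface },
        new InvocationHandler() {
          public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            for (int attempt = 0; ; attempt++) {
              try {
                return method.invoke(impl, args);
              } catch (InvocationTargetException e) {
                Throwable cause = e.getCause();
                // Only transient I/O problems are retried; anything else propagates immediately.
                if (!(cause instanceof IOException) || attempt >= maxRetries) {
                  throw cause;
                }
                Thread.sleep(sleepMillis);
              }
            }
          }
        });
  }
}
{code}

The wrapped interface could then be, for example, the S3 store interface, so that every store call, including block transfers to and from local disk, goes through the retry path without the block ever being buffered in memory.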
||Field||Old Value||New Value||
|Status|Resolved [ 5 ]|Closed [ 6 ]|
|Resolution| |Fixed [ 1 ]|
|Status|Patch Available [ 10002 ]|Resolved [ 5 ]|
|Fix Version/s| |0.12.0 [ 12312293 ]|
|Status|Open [ 1 ]|Patch Available [ 10002 ]|
This was not intentional. I think initialization should probably not
be retried currently (in the future it might be worth thinking through
the initialization cases that need retrying).
It looks like this might be the cause of the exception you found.
S3 should only retry for IOExceptions - I'll fix this.
I did this for consistency with HDFS and also because of a recent
issue with creating files in /tmp. I agree that createTempFile with
the signature you suggest is the way to go. I'll change it.
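For illustration, switching from a hard-coded /tmp path to createTempFile with an explicit directory might look like the sketch below; the prefix, suffix and the idea of a caller-supplied local directory are assumptions, not the exact signature agreed in the review:

{code:java}
import java.io.File;
import java.io.IOException;

class TempBlockFiles {
  // Sketch only: pick a unique backing file in a caller-supplied directory
  // rather than a fixed /tmp path.
  static File newBackingFile(File localDir) throws IOException {
    return File.createTempFile("s3-block-", ".tmp", localDir);
  }
}
{code}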
Yes, this crossed my mind and I think we should have it. It also
occurred to me that with annotation-driven retries we wouldn't be able
to expose the parameters as configurables (as far as I can tell).
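To make the contrast concrete, a configuration-driven approach can read the retry parameters at runtime, which compile-time annotation values cannot. This is only a sketch; the property names are hypothetical, not established Hadoop keys:

{code:java}
import org.apache.hadoop.conf.Configuration;

class RetryConfSketch {
  // Hypothetical property names, shown only to illustrate runtime configurability.
  static int maxRetries(Configuration conf) {
    return conf.getInt("fs.s3.block.retries", 4);
  }

  static long sleepMillis(Configuration conf) {
    return conf.getLong("fs.s3.block.retry.sleep.ms", 1000L);
  }
}
{code}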
Possibly, although the files should be deleted after transfer anyway.
The deleteOnExit was a backup, but you are right that it doesn't
On failure I want to delete the file, but to do this I close the
stream first. I agree a better comment is needed.
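A sketch of that failure path (method and variable names are illustrative, not the actual patch):

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;

class BlockUploadCleanup {
  // On a failed transfer, close the stream first so the backing file can be deleted.
  static void abortBlockWrite(OutputStream out, File backingFile) {
    try {
      out.close();  // an open handle could otherwise block the delete on some platforms
    } catch (IOException ignored) {
      // best-effort: failing to close should not prevent the delete attempt
    }
    backingFile.delete();  // ignore the result; the file may already be gone
  }
}
{code}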
Thanks for the detailed feedback. I'll look at making improvements
over the coming days.
|Assignee| |Tom White [ tomwhite ]|