Description
The consequence is that MapReduce is probably not splitting s3a files in the expected way. This is similar to HADOOP-5861 (which was for s3n, though s3n was passing a 5 GB block size rather than 0).
FileInputFormat.getSplits() relies on the FileStatus block size being set:
if (isSplitable(job, path)) {
  long blockSize = file.getBlockSize();
  long splitSize = computeSplitSize(blockSize, minSize, maxSize);
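With a block size of 0, that computation collapses. Below is a minimal, self-contained sketch of the effect, assuming the default min/max split sizes; the SplitSizeDemo wrapper is illustrative, but the formula matches FileInputFormat.computeSplitSize():

// Illustrative standalone demo, not Hadoop source: shows how a zero
// block size collapses the computed MapReduce split size to minSize.
public class SplitSizeDemo {

  // Same formula as FileInputFormat.computeSplitSize().
  static long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
  }

  public static void main(String[] args) {
    long minSize = 1L;             // mapreduce.input.fileinputformat.split.minsize default
    long maxSize = Long.MAX_VALUE; // split.maxsize default
    // A sane HDFS-style block size (128 MB) yields 128 MB splits:
    System.out.println(computeSplitSize(128L << 20, minSize, maxSize)); // 134217728
    // A block size of 0 collapses the split size to minSize (here 1 byte):
    System.out.println(computeSplitSize(0L, minSize, maxSize)); // 1
  }
}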
However, S3AFileSystem does not set the FileStatus block size field. From S3AFileStatus.java:
// Files
public S3AFileStatus(long length, long modification_time, Path path) {
  super(length, false, 1, 0, modification_time, path);
  isEmptyDirectory = false;
}
I think it should pass S3AFileSystem.getDefaultBlockSize() as each file's block size, where it is currently passing a hard-coded 0.
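A minimal sketch of that change, assuming the block size is threaded through as an extra constructor parameter (the parameter and call site shown are illustrative, not an actual patch):

// Sketch: accept a block size instead of hard-coding 0.
public S3AFileStatus(long length, long modification_time, Path path,
                     long blockSize) {
  super(length, false, 1, blockSize, modification_time, path);
  isEmptyDirectory = false;
}

Call sites in S3AFileSystem (e.g. getFileStatus()) would then pass getDefaultBlockSize(path) when constructing the status for a non-empty file, so FileInputFormat.getSplits() sees a nonzero block size.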
Issue Links
- is related to:
  - HADOOP-11601 Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty files (Resolved)
  - HADOOP-11606 intermittent failure of TestS3AFileSystemContract.testRenameRootDirForbidden (Resolved)