Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.3.9
- Fix Version/s: None
- Component/s: None
Description
Testing branch-3.3 against a bucket with array buffering, I get a fairly useless error message.
The problem here is that AbstractCommitITest forces fs.s3a.fast.upload.buffer to "array", and somehow multipart upload is disabled.
- the error text needs fixing (move from %d to %s; see the sketch after this list)
- the S3A FS should fail in initialization if the threshold isn't allowed for the store type
- then determine why the test is failing, and fix it
The tests do work in trunk, so something in branch-3.3 is stopping this.
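A minimal sketch of the %d/%s point: the shaded Guava Preconditions used here only substitutes %s placeholders, so a %d in the message template is emitted verbatim and the argument is appended in brackets, which is exactly the "Invalid block size: %d [-1]" text in the stack trace below. The class name and the limit value are illustrative only, not S3A code.
{code:java}
import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;

/**
 * Illustration only: shows why the current error text prints "%d [-1]"
 * and how switching the placeholder to %s fixes it.
 */
public class BlockSizeCheckSketch {
  public static void main(String[] args) {
    int limit = -1;  // stand-in for the misconfigured block size

    // Broken form: Guava Preconditions only understands %s, so this throws
    // IllegalArgumentException("Invalid block size: %d [-1]").
    // Preconditions.checkArgument(limit > 0, "Invalid block size: %d", limit);

    // Fixed form: throws IllegalArgumentException("Invalid block size: -1").
    Preconditions.checkArgument(limit > 0, "Invalid block size: %s", limit);
  }
}
{code}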
Also:
- correct the name of the constant STORE_CAPABILITY_DIRECTORY_MARKER_MULTIPART_UPLOAD_ENABLED to match the actual string "fs.s3a.capability.multipart.uploads.enabled"; the current name is an accidental copy
- the capability FS_MULTIPART_UPLOADER must declare itself as disabled when multipart upload is off (see the probe sketch below)
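A minimal probe sketch for the two capabilities mentioned above, assuming a reachable bucket; s3a://example-bucket/ and the default Configuration are placeholders. Per the second bullet, the FS_MULTIPART_UPLOADER probe should come back false whenever multipart is disabled on the store.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonPathCapabilities;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustration only: probe the two multipart-related path capabilities. */
public class MultipartCapabilityProbe {
  public static void main(String[] args) throws Exception {
    Path bucket = new Path("s3a://example-bucket/");   // placeholder bucket
    FileSystem fs = FileSystem.get(bucket.toUri(), new Configuration());

    // Store-specific capability; this string is what the renamed constant
    // should resolve to.
    boolean multipartEnabled = fs.hasPathCapability(bucket,
        "fs.s3a.capability.multipart.uploads.enabled");

    // Generic uploader capability; per this issue it must report false
    // whenever multipart uploads are disabled on the store.
    boolean uploaderAvailable = fs.hasPathCapability(bucket,
        CommonPathCapabilities.FS_MULTIPART_UPLOADER);

    System.out.println("multipart uploads enabled: " + multipartEnabled
        + ", multipart uploader capability: " + uploaderAvailable);
  }
}
{code}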
[ERROR] testRevertCommit(org.apache.hadoop.fs.s3a.commit.ITestCommitOperations)  Time elapsed: 0.688 s  <<< ERROR!
java.lang.IllegalArgumentException: Invalid block size: %d [-1]
	at org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkArgument(Preconditions.java:205)
	at org.apache.hadoop.fs.s3a.S3ADataBlocks$ArrayBlockFactory.create(S3ADataBlocks.java:397)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:235)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:217)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerCreateFile(S3AFileSystem.java:1887)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$create$7(S3AFileSystem.java:1789)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2479)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2498)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:1788)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1233)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1210)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1091)
	at org.apache.hadoop.fs.contract.ContractTestUtils.createFile(ContractTestUtils.java:650)
	at org.apache.hadoop.fs.contract.ContractTestUtils.touch(ContractTestUtils.java:686)
	at org.apache.hadoop.fs.s3a.commit.ITestCommitOperations.testRevertCommit(ITestCommitOperations.java:543)
Attachments
Issue Links
- is caused by: HADOOP-18637 S3A to support upload of files greater than 2 GB using DiskBlocks (Resolved)