Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 1.7.1
- Fix Version/s: None
- Component/s: None
Description
We think the Flink S3 filesystem (FlinkS3FileSystem), when creating its local temp directory, does not split the configured value and therefore does not handle multiple local temp directories being available. As a result, we see the exception below any time we run on an EC2 instance type with more than one ephemeral drive or EBS volume (a sketch of the expected splitting behavior follows the stack trace).
https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-base/src/main/java/org/apache/flink/fs/s3/common/FlinkS3FileSystem.java#L101
Timestamp: 2019-01-29, 12:42:39
java.nio.file.NoSuchFileException: /mnt/yarn/usercache/hadoop/appcache/application_1548598173158_0004,/mnt1/yarn/usercache/hadoop/appcache/application_1548598173158_0004/.tmp_072167ee-6432-412c-809a-bd0599961cf0
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
at java.nio.file.Files.newOutputStream(Files.java:216)
at org.apache.flink.fs.s3.common.utils.RefCountedTmpFileCreator.apply(RefCountedTmpFileCreator.java:80)
at org.apache.flink.fs.s3.common.utils.RefCountedTmpFileCreator.apply(RefCountedTmpFileCreator.java:39)
at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.openNew(RefCountedBufferingFileStream.java:174)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.boundedBufferingFileStream(S3RecoverableFsDataOutputStream.java:271)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.newStream(S3RecoverableFsDataOutputStream.java:236)
at org.apache.flink.fs.s3.common.writer.S3RecoverableWriter.open(S3RecoverableWriter.java:78)
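For illustration only, the sketch below (hypothetical Java, not the actual Flink code; the class and method names are made up for this example) shows how a comma-separated temp-directory setting could be split into individual directories and one of them picked before creating a temp file. Passing the raw comma-joined string straight to Paths.get(...) instead yields the non-existent path seen in the exception above.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.ThreadLocalRandom;

public class TempDirExample {

    /**
     * Picks one existing directory out of a comma-separated list such as
     * "/mnt/yarn/...,/mnt1/yarn/..." and creates a temp file inside it.
     */
    static Path createTempFile(String localTmpDirs) throws IOException {
        String[] candidates = localTmpDirs.split(",");
        // Pick a random entry to spread load across the configured volumes.
        String chosen = candidates[ThreadLocalRandom.current().nextInt(candidates.length)];
        Path dir = Paths.get(chosen.trim());
        if (!Files.isDirectory(dir)) {
            throw new IOException("Configured temp directory does not exist: " + dir);
        }
        return Files.createTempFile(dir, ".tmp_", null);
    }

    public static void main(String[] args) throws IOException {
        // Treating the whole comma-separated string as a single path would produce
        // a non-existent directory like "/mnt/...,/mnt1/..." and fail with
        // NoSuchFileException, which is the behavior reported above.
        System.out.println(createTempFile(System.getProperty("java.io.tmpdir")));
    }
}

Splitting and validating each entry would also allow falling back to another configured volume if one directory is missing; the actual fix is tracked in the linked issue below.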
Issue Links
- duplicates FLINK-11302: FlinkS3FileSystem uses an incorrect path for temporary files (Closed)