Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Won't Fix
Affects Versions: 1.8.1, 1.9.1
Fix Versions: None
Components: None
Description
Unable to use StreamingFileSink to write to Swift file storage.

Environment:
Flink version: 1.9.1 (also tried 1.8.1; same exception)
Scala: 2.11
Build tool: Maven

Main part of the code:
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.core.fs.Path
import org.apache.flink.formats.parquet.ParquetWriterFactory
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
import org.apache.flink.streaming.api.scala._

// Single-field case class inferred from the usage below
case class EligibleItem(name: String)

val env = StreamExecutionEnvironment.getExecutionEnvironment
val capHadoopPath = "swift://<path>" // placeholder path from this report

val eligibleItems: DataStream[EligibleItem] = env.fromCollection(Seq(
  EligibleItem("pencil"),
  EligibleItem("rubber"),
  EligibleItem("beer")))(TypeInformation.of(classOf[EligibleItem]))
// Parquet bulk writer created via Avro reflection
val factory2: ParquetWriterFactory[EligibleItem] = ParquetAvroWriters.forReflectRecord(classOf[EligibleItem])
val sink: StreamingFileSink[EligibleItem] = StreamingFileSink
  .forBulkFormat(new Path(capHadoopPath), factory2)
  .build()
eligibleItems.addSink(sink)
  .setParallelism(1)
  .uid("TEST_1")
  .name("TEST")
Scenario: when the path points to Swift (capHadoopPath = "swift://<path>"), the job fails with the following exception:

java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are only supported for HDFS and for Hadoop version 2.7 or newer
    at org.apache.flink.fs.openstackhadoop.shaded.org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.<init>(HadoopRecoverableWriter.java:57)
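The stack trace points at the constructor of HadoopRecoverableWriter, which rejects any filesystem other than HDFS (and Hadoop builds older than 2.7), because StreamingFileSink's bulk format needs a recoverable writer for its exactly-once commit protocol. Below is a minimal, hypothetical pre-flight probe in Scala, assuming the public org.apache.flink.core.fs.FileSystem API; it only illustrates where the failure surfaces and is not a fix. The helper name supportsRecoverableWrites is invented for illustration.

import org.apache.flink.core.fs.Path

// Hypothetical probe: StreamingFileSink's bulk format requires the target
// filesystem to expose a RecoverableWriter. For swift:// the shaded Hadoop
// adapter cannot provide one, so createRecoverableWriter() throws
// UnsupportedOperationException (the same error as in the report).
def supportsRecoverableWrites(uri: String): Boolean =
  try {
    new Path(uri).getFileSystem.createRecoverableWriter()
    true
  } catch {
    case _: UnsupportedOperationException => false
  }

// supportsRecoverableWrites("swift://<path>") // expected: false (this report)
// supportsRecoverableWrites("hdfs:///tmp")    // expected: true on Hadoop >= 2.7

FLINK-14170 tracked relaxing this restriction for bulk formats, and the Swift filesystem was later removed from Flink entirely (FLINK-21819), which is why this issue was closed as Won't Fix.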
Attachments
Issue Links
- duplicates: FLINK-14170 Support hadoop < 2.7 with StreamingFileSink.BulkFormatBuilder (Closed)
- is superseded by: FLINK-21819 Remove swift FS filesystem (Closed)