Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- None
- None
- None
Description
We need a way to limit the number of streams that Flink FileSystems concurrently open.
For example, for very small HDFS clusters with few RPC handlers, a large Flink job trying to build up many connections during a checkpoint causes failures due to rejected connections.
I propose to add a file system that can wrap another existing file system and limit the number of concurrently open streams. The wrapping file system may also track the progress of its streams and close streams that have been inactive for too long, to prevent locked streams from taking up the complete pool.
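
The sketch below illustrates one way such a wrapper could work. It is a minimal illustration, not Flink's actual FileSystem API: it caps concurrently open input streams with a java.util.concurrent.Semaphore and records a last-activity timestamp per stream so an external watchdog could close inactive ones. The names StreamLimitingFileSystem, openLimited, and StreamOpener are hypothetical.

import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class StreamLimitingFileSystem {

    private final Semaphore openSlots;

    public StreamLimitingFileSystem(int maxConcurrentStreams) {
        // Fair semaphore: stream open requests are granted in arrival order.
        this.openSlots = new Semaphore(maxConcurrentStreams, true);
    }

    /** Blocks until a slot is free, then opens a stream via the wrapped file system. */
    public InputStream openLimited(StreamOpener opener) throws IOException, InterruptedException {
        openSlots.acquire();
        try {
            return new LimitedInputStream(opener.open());
        } catch (IOException | RuntimeException e) {
            openSlots.release(); // opening failed, give the slot back
            throw e;
        }
    }

    /** Abstracts the wrapped file system's open() call. */
    public interface StreamOpener {
        InputStream open() throws IOException;
    }

    /** Releases its slot on close and tracks progress for inactivity checks. */
    private final class LimitedInputStream extends InputStream {
        private final InputStream delegate;
        private volatile long lastActivityNanos = System.nanoTime();
        private boolean closed;

        LimitedInputStream(InputStream delegate) {
            this.delegate = delegate;
        }

        @Override
        public int read() throws IOException {
            // Every successful read refreshes the activity timestamp.
            lastActivityNanos = System.nanoTime();
            return delegate.read();
        }

        /** A watchdog could call this to find streams that made no progress for too long. */
        boolean isInactiveLongerThan(long timeout, TimeUnit unit) {
            return System.nanoTime() - lastActivityNanos > unit.toNanos(timeout);
        }

        @Override
        public void close() throws IOException {
            synchronized (this) {
                if (closed) {
                    return; // release the slot only once
                }
                closed = true;
            }
            try {
                delegate.close();
            } finally {
                openSlots.release();
            }
        }
    }
}

A periodic watchdog thread could iterate over the open LimitedInputStream instances and close any for which isInactiveLongerThan(...) returns true, so that stuck streams return their slots to the pool instead of blocking new opens indefinitely.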