Details
- Type: Bug
- Status: Resolved
- Priority: P2
- Resolution: Fixed
- Affects Version/s: 2.0.0, 2.1.0
- Fix Version/s: None
- Environment: Flink 1.2.1 and 1.3.0, Java HotSpot and OpenJDK 8, macOS 10.12.6 and unknown Linux
Description
I’ve been running a Beam pipeline on Flink. Depending on the dataset size and the heap memory configuration of the JobManager and TaskManager, the job may fail with an EOFException.
As discussed on Flink's mailing list (stack trace enclosed), Flink catches these EOFExceptions internally and uses them as the signal to activate spilling to disk. Because Beam wraps the exception before Flink can catch it, this mechanism fails: the exception travels up the stack and the job aborts.
Hopefully this is enough information and this is something that can be addressed in Beam. I'd be glad to provide more information where needed.
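The failure mode can be sketched in a few lines. This is a minimal, self-contained illustration, not Beam or Flink code: all class and method names below are hypothetical. It shows why a caller that keys its recovery path on the exception type never sees the EOFException once an intermediate layer rethrows it wrapped in a RuntimeException:

```java
import java.io.EOFException;

public class WrappedExceptionDemo {

    // Simulates a write that fails because an in-memory buffer is exhausted.
    static void rawWrite() throws EOFException {
        throw new EOFException("sort buffer exhausted");
    }

    // Hypothetical wrapper layer: rethrows the checked EOFException as an
    // unchecked RuntimeException, hiding its type from callers.
    static void wrappedWrite() {
        try {
            rawWrite();
        } catch (EOFException e) {
            throw new RuntimeException("write failed", e);
        }
    }

    // Caller in the style described above: a raw EOFException triggers the
    // recovery path (spill to disk); anything else aborts the job.
    static String runRecord(Runnable write) {
        try {
            write.run();
            return "ok";
        } catch (Exception e) {
            if (e instanceof EOFException) {
                return "spill to disk"; // recovery path, never reached here
            }
            return "job aborts: " + e.getCause().getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        // The EOFException arrives wrapped, so the instanceof check misses it.
        System.out.println(runRecord(WrappedExceptionDemo::wrappedWrite));
    }
}
```

Unwrapping the cause chain (or rethrowing the original EOFException) before it crosses the runner boundary would let the recovery path fire again.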