Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 1.13.0, 1.14.0
- Fix Version/s: None
- Component/s: None
Description
Submitting a detached per-job YARN cluster in Flink (for example: ./bin/flink run -m yarn-cluster -d ./examples/streaming/TopSpeedWindowing.jar) leads to the following exception:
2021-04-28 11:39:00,786 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Found Web Interface ip-172-31-27-232.eu-central-1.compute.internal:45689 of application 'application_1619607372651_0005'.
Job has been submitted with JobID 5543e81db9c2de78b646088891f23bfc
Exception in thread "Thread-4" java.lang.IllegalStateException: Trying to access closed classloader. Please check if you store classloaders directly or indirectly in static fields. If the stacktrace suggests that the leak occurs in a third party library and cannot be fixed immediately, you can disable this check with the configuration 'classloader.check-leaked-classloader'.
	at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:164)
	at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResource(FlinkUserCodeClassLoaders.java:183)
	at org.apache.hadoop.conf.Configuration.getResource(Configuration.java:2570)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2783)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2758)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2638)
	at org.apache.hadoop.conf.Configuration.get(Configuration.java:1100)
	at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1707)
	at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1688)
	at org.apache.hadoop.util.ShutdownHookManager.getShutdownTimeout(ShutdownHookManager.java:183)
	at org.apache.hadoop.util.ShutdownHookManager.shutdownExecutor(ShutdownHookManager.java:145)
	at org.apache.hadoop.util.ShutdownHookManager.access$300(ShutdownHookManager.java:65)
	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:102)
The job is still running as expected.
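The exception message itself points to a configuration option that can silence the check. A minimal flink-conf.yaml sketch of that workaround (note that this only suppresses the leak check, it does not fix the underlying access to the closed classloader during shutdown):

	# flink-conf.yaml
	# Disables Flink's check for access to already-closed user-code classloaders.
	classloader.check-leaked-classloader: false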
Detached submission with ./bin/flink run-application -t yarn-application -d works as expected. This is also the documented approach.
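For reference, a full application-mode submission of the same example job would look roughly like this (the jar path is carried over from the per-job example above and may differ in your environment):

	./bin/flink run-application -t yarn-application -d ./examples/streaming/TopSpeedWindowing.jar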
Issue Links
- is duplicated by:
  - FLINK-19916 Hadoop3 ShutdownHookManager visit closed ClassLoader (Open)