Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Fix Version: 1.3.0
Description
Whenever a process produces too much output on stderr, the current implementation runs into a deadlock between the JVM and the Unix process started by ExecuteStreamCommand.
This is a known issue that is fully described here: http://java-monitor.com/forum/showthread.php?t=4067
In short:
- If the process writes more to stderr than ExecuteStreamCommand consumes, its write blocks once the OS pipe buffer is full, until the data is read.
- The current processor implementation reads from stderr only after calling process.waitFor().
- Thus, the two processes wait on each other and deadlock.
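The usual fix for this pattern is to drain stderr on a background thread before calling waitFor(), so the child can never block on a full pipe buffer. A minimal Java sketch (not NiFi's actual implementation; the class name and the shell command producing 1 MB of stderr are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;

public class StderrDrainDemo {
    // Drain a stream on a background thread so the child process can never
    // block on a full stderr pipe buffer while we wait for it to exit.
    static Thread drain(InputStream in, StringBuilder sink) {
        Thread t = new Thread(() -> {
            byte[] buf = new byte[8192];
            int n;
            try {
                while ((n = in.read(buf)) != -1) {
                    sink.append(new String(buf, 0, n));
                }
            } catch (IOException ignored) {
                // Stream closed; nothing more to drain.
            }
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        // Illustrative command: writes 1 MB to stderr, far more than a
        // typical 64 KB pipe buffer, so an undrained child would block.
        Process p = new ProcessBuilder("sh", "-c",
                "head -c 1048576 /dev/zero | tr '\\0' 'x' >&2").start();

        StringBuilder err = new StringBuilder();
        Thread drainer = drain(p.getErrorStream(), err); // start BEFORE waitFor()
        int exit = p.waitFor(); // safe now: stderr is consumed concurrently
        drainer.join();

        System.out.println("exit=" + exit + " stderrBytes=" + err.length());
    }
}
```

With the drain started first, waitFor() returns normally; calling waitFor() before reading the stream reproduces the hang described above.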
The following setup will lead to a deadlock:
A jar containing the following Main application (note: scala.io.Source must be imported for this to compile):

import scala.io.Source

object Main extends App {
  val str = Source.fromInputStream(this.getClass.getResourceAsStream("/1mb.txt")).mkString
  System.err.println(str)
}
The following NiFi Flow:
Configuration for ExecuteStreamCommand:
The script simply contains a call to the jar:
java -jar stderr.jar
Once the processor runs the script, it appears to be "processing" indefinitely and can only be stopped by restarting NiFi.
I already have a working solution that I will publish as soon as possible.