Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version: 1.0.1
- Fix Version: None
- Environment: Linux
Description
I'm writing some hook scripts, and while debugging, at least, they write a lot of text to stderr. I noticed my scripts sometimes hang. I believe I have tracked it down to libsvn_repos/hooks.c.

The run_hook_cmd() code creates a pipe and passes the write end of the pipe to the hook script as standard error. It then runs the hook and, if the hook fails, reads error information from the read end of the pipe. But if the pipe fills, the hook blocks while writing to the write end. Since nobody is yet reading from the read end, it blocks indefinitely.

I don't know anything about the APR functions used in this file, but is there a way to run the command asynchronously? You could then continue execution in hooks.c and read everything from the read end of the pipe while the hook runs. When EOF is seen on the pipe, call some (hopefully available) APR function like waitpid() to wait for the hook to complete and return its exit status, then proceed as today by checking the exit status. This algorithm would ensure that the hook can't hang while writing to the pipe. If you want to limit the amount of text the hook can write, you can always start throwing away data after a certain number of lines, to prevent a bad hook from causing hooks.c to consume too much memory.

Even documenting that hooks shouldn't write too much to stderr seems inadequate to me. In unusual situations (e.g., a required network connection is down or a required filesystem isn't mounted), a hook could unintentionally produce a lot of error messages. You don't want the hook to hang in a situation like this.
Original issue reported by jph
Attachments
Issue Links
- is duplicated by:
  - SVN-2661: Should read hook output before reaping. (Closed)