I use a simple script to run a whole-web crawl (generate, fetch, updatedb, repeating until the target depth is reached). While it runs, I monitor progress via the jobtracker's browser-based UI. Sometimes there is a fairly long pause between one mapreduce job completing and the next one launching, so I mistakenly assume the target depth has been reached and run a segread -list or readdb -stats command to summarize the results. Doing so apparently kills any active jobs, with absolutely no warning in the logs, the console output, or the jobtracker's UI: the jobs simply stop writing to their logs and their child processes disappear. Usually, the jobtracker and tasktrackers remain up and respond to subsequent commands.
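For reference, the crawl loop I'm describing is roughly the following. This is a sketch, not my exact script: the segment path handling is a placeholder (a real run would pick the newest directory under segments), and NUTCH defaults to a dry-run echo here so the loop can be inspected without launching jobs; set NUTCH=bin/nutch to actually run it.

```shell
#!/bin/sh
# Dry-run by default: prefix every command with "echo" so the loop just
# prints what it would do. Override with NUTCH=bin/nutch to run for real.
NUTCH="${NUTCH:-echo bin/nutch}"

crawl() {
  crawl_dir="$1"
  depth="$2"
  i=1
  while [ "$i" -le "$depth" ]; do
    # One round: generate a fetch list from the crawldb, fetch it, then
    # fold the fetched segment back into the crawldb before the next round.
    $NUTCH generate "$crawl_dir/crawldb" "$crawl_dir/segments"
    # Placeholder segment name; a real script would select the newest
    # directory created under $crawl_dir/segments by the generate step.
    segment="$crawl_dir/segments/$i"
    $NUTCH fetch "$segment"
    $NUTCH updatedb "$crawl_dir/crawldb" "$segment"
    i=$((i + 1))
  done
}

# Example: three rounds against a crawl directory named "crawl".
crawl crawl 3
```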