Hadoop Common
HADOOP-1857

Ability to run a script when a task fails to capture stack traces

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.14.0
    • Fix Version/s: 0.16.0
    • Component/s: None
    • Labels: None

      Description

      This is basically about providing a better user interface for debugging
      failed jobs. Today we see stack traces for failed tasks on the job UI
      only if the job happens to be a Java MR job. For non-Java jobs such as
      Streaming or Pipes, the diagnostic info on the job UI is not helpful
      enough to figure out what went wrong; it usually shows framework traces
      rather than application traces.
      We want to provide a facility, via user-provided scripts, for doing
      post-processing on task logs, input, output, etc. There should be some
      default scripts, such as running core dumps under gdb to locate illegal
      instructions, or printing the last few lines of stderr. These outputs
      could be sent to the tasktracker and in turn to the jobtracker, which
      would then display them on the job UI on demand.

      1. patch-1857.txt
        39 kB
        Amareshwari Sriramadasu
      2. patch-1857.txt
        39 kB
        Amareshwari Sriramadasu
      3. patch-1857.txt
        36 kB
        Amareshwari Sriramadasu
      4. patch-1857.txt
        36 kB
        Amareshwari Sriramadasu
      5. patch-1857.txt
        36 kB
        Amareshwari Sriramadasu
      6. patch-1857.txt
        36 kB
        Amareshwari Sriramadasu
      7. patch-1857.txt
        37 kB
        Amareshwari Sriramadasu
      8. patch-1857.txt
        53 kB
        Amareshwari Sriramadasu
      9. patch-1857.txt
        53 kB
        Amareshwari Sriramadasu
      10. patch-1857.txt
        53 kB
        Amareshwari Sriramadasu
      11. patch-1857.txt
        53 kB
        Amareshwari Sriramadasu
      12. tt-no-warn.patch
        11 kB
        Owen O'Malley

        Activity

        Amareshwari Sriramadasu added a comment -

        The proposal is as follows:

        1. API for the script:

        The API is added through JobConf:

        JobConf.{set/get}DebugScript(file) will set or get the debug script the user wants to run when a task fails.
        JobConf.{set/get}DebugCommand(String cmd) will set or get the debug command used to run the script.

        For example, the command can look like the following:

        $> script_name -input $stdout / $stderr
        -core $core
        -output $output_file

        $stdout, $stderr are the task's stdout and stderr files respectively.
        $core is the core file to be processed.
        $output_file is the file to store the output of the script.

        Users can use the $stdout, $stderr, and $core parameters as needed.

        2. Distributed Cache:

        The script is copied to the nodes using DistributedCache, by adding methods addCacheExecutable() and getCacheExecutables() and a variable isExecutable, similar to addCacheArchive(), getCacheArchives() and isArchive.

        3. gdb:
        Default scripts to run core dumps under gdb will be provided. Users can specify gdb parameters in a .gdbinit file.

        4. When to call the script?

        The script can be called at two points:
        i) whenever a task fails, before releaseCache();
        ii) whenever a job fails, in which case we have to make sure the cache files still exist.

        5. Display output:

        The output of the script is saved in $output_file, sent to the JobTracker using TaskTracker.reportDiagnosticInfo(), and displayed on the job UI on demand.

        Please let me know your comments on the proposal, especially on when to call the script.
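
        As a rough illustration of the proposed API (the setter names below follow this proposal and are hypothetical; the committed API ended up different, as later comments show):

        import org.apache.hadoop.mapred.JobConf;

        public class ProposedDebugApiSketch {
          public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Hypothetical setters from this proposal, not a committed API.
            // The script itself is shipped to the task nodes via DistributedCache.
            conf.setDebugScript("myScript");
            // $stderr, $core and $output_file are placeholders the framework
            // would substitute for each failed task.
            conf.setDebugCommand("myScript -input $stderr -core $core -output $output_file");
          }
        }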

        Arun C Murthy added a comment -

        3. gdb:

        Default scripts to run core dumps under gdb will be provided. User can specify gdb parameters in .gdbinit file

        Maybe we should also provide out-of-the-box solutions for perl and python debuggers... what do others think? Maybe in a second pass?

        Sameer Paranjpye added a comment -

        A couple of comments:

        • Why have separate config variables for the script and the command line? Seems like you could just supply the command line. The DistributedCache should be used to send the script over if needed (otherwise we can assume that the specified program is installed)
        • Where does the .gdbinit file get picked up from? The user's home directory? That seems brittle. How does it get sent over to the executing node? The .gdbinit file should also be sent over via the DistributedCache.

        I'd vote for handling perl and python in a second pass.

        Raghu Angadi added a comment - - edited

        If it is a C/C++ program, why not print the stack trace to stderr from the signal handler? Is stderr shown on the web UI? With a multithreaded program, a stack trace from the core may not show the offending thread, but rather the thread that was killed last (at least on x86 Linux).

        Runping Qi added a comment -

        I think we need to handle the C++ Pipes case and the Streaming case differently.

        In all cases, it will be helpful to log the offending key/value pair and the progress stats (how many key/value pairs have been processed, etc.).

        In the C++ Pipes case, we know the executable is C++, so Raghu's suggestion is good.
        For Streaming, the executable can be anything, and it is not clear how you would get a stack trace.
        It is really up to the executable.

        Owen O'Malley added a comment -

        The stdout and stderr of all tasks are already sent to the user's console for failed tasks. (This can be turned off, or turned on for all tasks.) So python and perl stack traces will already reach the user's console. Adding diagnostic messages for the offending key and value would be useful, but that is a completely different patch.

        Owen O'Malley added a comment -

        A few more comments:
        1. I like Sameer's suggestion for having just a single config attribute with the command line to run.
        2. I think it should only be run when tasks fail. Jobs failing is a very different matter.
        3. The script should write its output to stdout and stderr.
        4. The framework should append the output of the script to the user log stdout and stderr.

        Owen O'Malley added a comment -

        Another reasonable place to put the output of the script would be to put them in the diagnostic message for the Task, but that would involve more complex changes to the framework.

        arkady borkovsky added a comment -

        For streaming, it would be good to have some default failure handling.
        It would be nice if the default task failure handling covers the following:

        First of all: the message in the UI should state that the streaming command has failed.

        Second: Runping's suggestion about the current record and task stats is most useful.
        Add to this
        ls -l for the current directory and
        getenv.

        Third: a few regexp patterns can capture most of the typical failures:

        • shell error message (command not found, wrong permissions, specific messages from awk, grep, sed, etc.)
        • Perl and Python stack traces from the stderr
        • if a core file is present – print its stack.
          This will cover most of the problems.

        Also: certain types of errors – command not found, syntax error in a script, etc. – should kill the job without retries.
        If an error of this kind happened in one task (i.e., the regexp pattern matched), it will happen in all tasks.

        Amareshwari Sriramadasu added a comment -

        I'm working on documentation and test cases for the same.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12366590/patch1857.txt
        against trunk revision r579410.

        @author +1. The patch does not contain any @author tags.

        patch -1. The patch command could not apply the patch.

        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/830/console

        This message is automatically generated.

        Amareshwari Sriramadasu added a comment -

        After incorporating the comments on this issue, the patch is now available.
        The patch has code for running the debug script and a test case to validate it.
        Detailed usage documentation is available on the wiki.

        The documentation is available at http://wiki.apache.org/lucene-hadoop/HowToDebugMapReducePrograms

        Amareshwari Sriramadasu added a comment -

        Usage Documentation :

        A facility is provided, via user-provided scripts, for doing post-processing on task logs, the task's stdout, stderr, syslog and core files. There is a default script which processes core dumps under gdb and prints the stack trace. The last five lines of the debug script's stdout and stderr are printed in the diagnostics. These outputs are displayed on the job UI on demand.

        How to submit a debug command:
        A quick way to set the debug command is to set the properties "mapred.map.task.debug.command" and "mapred.reduce.task.debug.command" for debugging map tasks and reduce tasks respectively. These properties can also be set via the APIs conf.setMapDebugCommand(String cmd) and conf.setReduceDebugCommand(String cmd). The debug command can contain @stdout@, @stderr@, @syslog@ and @core@ to access the task's stdout, stderr, syslog and core files respectively. For Streaming, the debug command can be submitted with the command-line options -mapdebug and -reducedebug for debugging the mapper and reducer respectively.
        For example, the debug command can be 'myScript @stderr@'. Here the executable is myScript, which processes the failed task's stderr.
        The debug command can also be a gdb command, where the user submits a command file to execute via the -x option. The debug command can then look like 'gdb <program-name> -c @core@ -x <gdb-cmd-file>'. This processes the core file of the failed task <program-name> and executes the commands in <gdb-cmd-file>. Please make sure the gdb command file has 'quit' as its last line.

        How to submit a debug script:
        To submit the debug script file, first put the file in DFS.
        The executable can be added by setting the property "mapred.cache.executables" with the value <path>#<executable-name>. More than one executable can be added as comma-separated executable paths. The executable property can also be set via the APIs DistributedCache.addCacheExecutable(URI, conf) and DistributedCache.setCacheExecutables(URI[], conf), where the URI is of the form "hdfs://host:port/<path>#<executable-name>". For Streaming, the executable can be added through -cacheExecutable URI.
        For gdb, the gdb command file need not be executable, but it needs to be in DFS. It can be added to the cache by setting the property "mapred.cache.files" with the value <path>#<cmd-file>, or through the API DistributedCache.addCacheFile(URI, conf). Please make sure the property "mapred.create.symlink" is set to "yes".

        All this documentation is incorporated in the Javadoc as well.
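
        For illustration, a job submission using the APIs documented above might look like this (the HDFS path and script name are placeholders, and note that addCacheExecutable was removed in a later revision of this patch):

        import java.net.URI;
        import org.apache.hadoop.filecache.DistributedCache;
        import org.apache.hadoop.mapred.JobConf;

        public class DebugCommandSetup {
          public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf();
            // Ship the script to the task nodes; the #fragment names the
            // symlink created in the task's working directory.
            DistributedCache.addCacheExecutable(
                new URI("hdfs://host:port/user/me/myScript#myScript"), conf);
            // Run the script on the failed map task's stderr; @stderr@ is
            // substituted by the framework as described above.
            conf.setMapDebugCommand("myScript @stderr@");
          }
        }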

        Amareshwari Sriramadasu added a comment -

        All this documentation will be part of the overall mapred documentation.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12366837/patch-1857.txt
        against trunk revision r581028.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests -1. The patch failed core unit tests.

        contrib tests -1. The patch failed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/857/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/857/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/857/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/857/console

        This message is automatically generated.

        Devaraj Das added a comment -

        Could you please check why core-tests are failing with this patch?

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12366976/patch-1857.txt
        against trunk revision r581492.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs -1. The patch appears to introduce 2 new Findbugs warnings.

        core tests -1. The patch failed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/872/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/872/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/872/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/872/console

        This message is automatically generated.

        Amareshwari Sriramadasu added a comment -

        The findbugs warnings are fixed, except for the synchronization issues in the {set/get}JobConf() functions.
        First of all, that is existing code. setJobConf() is called from only one method, which is already synchronized, and getJobConf() is not called anywhere. So this should not be a problem.

        Amareshwari Sriramadasu added a comment -

        This patch failed to run the script on a Solaris machine; it could be a path problem.
        I uploaded a patch which should fix this.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12366995/patch-1857.txt
        against trunk revision r581492.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs -1. The patch appears to introduce 1 new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests -1. The patch failed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/873/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/873/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/873/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/873/console

        This message is automatically generated.

        Amareshwari Sriramadasu added a comment -

        Submitting the patch again with 'info threads' added to the default gdb behavior.

        Amareshwari Sriramadasu added a comment -

        Submitting the patch again with "info threads" added to the default gdb behavior.

        Amareshwari Sriramadasu added a comment - - edited

        Default behavior:

        Java programs:
        Stdout and stderr are shown on the job UI. The stack trace is printed in the diagnostics.

        Pipes:
        Stdout and stderr are shown on the job UI.
        The default gdb script is run, which prints info about threads (each thread id and the function it was running when the task failed) and prints the stack trace at the point of failure.

        Streaming:
        Stdout and stderr are shown on the job UI.
        The exception details are shown in the task diagnostics.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12367051/patch-1857.txt
        against trunk revision r581745.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs -1. The patch appears to introduce 1 new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/883/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/883/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/883/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/883/console

        This message is automatically generated.

        Owen O'Malley added a comment -

        You need to fix the new findbugs warning.

        I can't see the need to have different debug scripts for mappers and reducers.

        I think all of the output (stdout and stderr) from the debug script should be put together when it is stored on the task tracker.

        I don't think adding the concept of executable to the file cache is appropriate. It is basically compensating for the lack of permissions in hdfs, which will be addressed more directly. In the mean time, I think that all files coming out of the cache should have the "x" permission set. Note that pipes and streaming already do this...

        Why were the config files for the pipes examples changed to add the "#" part of the url?

        Rather than letting the user specify a command line that has a bunch of (undocumented) @variables, I think it would be better to always use the same parameters:

        basically, something like:

        $script @stdout@ @stderr@ @jobconf@

        and let the script find the core file if it cares.

        By default the entire output of the script should be added to the diagnostics, and 5 lines is much too small.

        Amareshwari Sriramadasu added a comment - - edited

        You need to fix the new find bugs warning.

        The warning is harmless. Maybe we will suppress it.

        I can't see the need to have different debug scripts for mappers and reducers.

        We need two scripts, since the mapper and reducer code are entirely different. Many times we may need to debug only one of them. For example, Streaming will have two different scripts for the mapper and reducer, and users would like to debug them separately.

        I think all of the output (stdout and stderr) from the debug script should be put together when it is stored on the task tracker.

        This can be done by concatenating the files if we want. But redirection in the command is not possible, since we don't know the order.

        I don't think adding the concept of executable to the file cache is appropriate. It is basically compensating for the lack of permissions in hdfs, which will be addressed more directly. In the mean time, I think that all files coming out of the cache should have the "x" permission set. Note that pipes and streaming already do this...

        OK, this can be done. Then, should we create symlinks for all files?

        Why were the config files for the pipes examples changed to add the "#" part of the url?

        For running the gdb script by default, we need the C++ executable program to be present in the current working directory. So we need a symlink to the executable.

        Rather than let the user specify a command line that has a bunch (of undocumented) @varaibles, I think it would be better to always use the same parameters: basically, something like: $script @stdout@ @stderr@ @jobconf@ and let the script find the core file if it cares.

        Now we are using @stdout@, @stderr@, @syslog@ and @core@ for the command.
        Since Pipes has a default gdb script which needs the core file, we can keep that code. It is a convenience for the user; if you insist, we can remove it.

        By default the entire output of the script should be added to diagnostic and 5 is much much too small.

        OK, this will be done.

        Owen O'Malley added a comment -

        Here is a minor tweak to the TaskTracker.java from your patch that gets rid of all of the inconsistent synchronization warnings in the TaskTracker, including the ones that have been in there for a long time.

        Owen O'Malley added a comment -

        Ok, I can see having different scripts for map and reduce, since you can mix Streaming or Pipes with Java. (We should probably even support combinations of the two, eventually.) I don't think these scripts are things you want to turn on just to "debug", but rather hooks that you'll always leave on to give more details about problems when they occur.

        This can be done by concatenating the files if we want. But redirection in the command is not possible, since we dont know the order.

        I don't understand this. I was thinking that we run the script something like:

        bash -c "$script $stdout $stderr $jobconf > $debugout 2>&1"
        

        to tie the stdout and stderr streams together.

        Doesn't the "file" command, when run on a core file, give the executable name? Why does the executable need to be in the current working directory? That doesn't sound right.

        In terms of the parameters, it just seems like the script should have a single interface rather than supporting a bunch of variables that the user can put together. Especially since you are adding a fair amount of code to find core files that could be done just as well, if not better in the script itself.

        Amareshwari Sriramadasu added a comment -

        Both stdout and stderr of the debug script can be redirected to debugout.
        And we don't need $jobconf in the command; we should have $syslog instead.

        Doesn't the "file" command when run on a core file give the executable name? Why does the executable need to be in the current working directory? That doesn't sound right.

        Here, the executable has a symlink in the current working directory. We need a symlink in the current working directory, or else we would need to get the executable's path from the framework and substitute it where needed via an @program@ placeholder. I feel a symlink is better than finding and substituting the path. The '#' was added to the config files for the pipes examples to create the symlink.

        In terms of the parameters, it just seems like the script should have a single interface rather than supporting a bunch of variables that the user can put together.

        The interface we have now supports both submitting a command (without adding any cache file) and submitting a script file if the user wants.
        Now, for Pipes, we add the default debug command 'gdb <program> -c <core> -x <cmd-file>'. If we move this into a script, the script does not know <program>, which would have to be passed in as an argument. I feel the current interface gives the user more flexibility than a single fixed interface.

        Amareshwari Sriramadasu added a comment - - edited

        The attached patch incorporates the comments.

        Changes done in this patch:

        1. The command has a single interface: $script $stdout $stderr $syslog $jobconf
        2. Adding executables is removed, and all files coming out of the distributed cache have execute permission.
        3. Code for finding the core file is removed; the default script for pipes does it in the script itself.
        4. Both stdout and stderr of the debug script are redirected to debugout.
        5. Everything in debugout is added to the diagnostics.

        Usage documentation is updated at http://wiki.apache.org/lucene-hadoop/HowToDebugMapReducePrograms
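
        As a sketch of what this single interface amounts to at invocation time (the class name and file paths are illustrative; the actual TaskTracker/TaskRunner code differs):

        import java.io.IOException;

        public class DebugScriptInvocationSketch {
          // Builds the fixed-argument command line described above and
          // funnels both of the script's output streams into a single
          // debugout file, whose contents go to the task diagnostics.
          static int runDebugScript(String script, String stdout, String stderr,
                                    String syslog, String jobconf, String debugout)
              throws IOException, InterruptedException {
            String cmd = script + " " + stdout + " " + stderr + " " + syslog
                + " " + jobconf + " > " + debugout + " 2>&1";
            Process p = new ProcessBuilder("bash", "-c", cmd).start();
            return p.waitFor();
          }
        }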

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12367567/patch-1857.txt
        against trunk revision r584044.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs -1. The patch appears to introduce 2 new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests -1. The patch failed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/932/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/932/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/932/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/932/console

        This message is automatically generated.

        Amareshwari Sriramadasu added a comment -

        Fixed Findbugs warnings. Submitting the patch again.

        Amareshwari Sriramadasu added a comment -

        Patch with a fix for the findbugs warning.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12367730/patch-1857.txt
        against trunk revision r584336.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests -1. The patch failed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/944/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/944/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/944/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/944/console

        This message is automatically generated.

        Amareshwari Sriramadasu added a comment -

        Submitting the patch again with the findbugs warning fix.
        Here, pipes programs have a fifth argument to the debug script: the program name.
        The program name is now used by the default gdb script.
        We obtain the program name from the symlink fragment of the URI, so the executable URI for a pipes
        program is <path>#<program-name>.
        For pipes programs, the script interface is
        $script $stdout $stderr $syslog $jobconf $program
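
        For illustration, wiring up a pipes executable with the symlink fragment described here might look like the following (the HDFS path and program name are placeholders; addCacheFile and mapred.create.symlink are as documented in the usage comment above):

        import java.net.URI;
        import org.apache.hadoop.filecache.DistributedCache;
        import org.apache.hadoop.mapred.JobConf;

        public class PipesDebugSetup {
          public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf();
            // <path>#<program-name>: the fragment becomes a symlink in the
            // task's working directory and is passed to the debug script
            // as $program.
            DistributedCache.addCacheFile(
                new URI("hdfs://host:port/user/me/bin/wordcount#wordcount"), conf);
            conf.set("mapred.create.symlink", "yes");
          }
        }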

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12367851/patch-1857.txt
        against trunk revision r585366.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/960/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/960/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/960/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/960/console

        This message is automatically generated.

        Devaraj Das added a comment -

        Sorry, this patch doesn't apply anymore. Could you please regenerate the patch against the current trunk?

        Amareshwari Sriramadasu added a comment -

        Regenerated patch with the current trunk.

        Arun C Murthy added a comment -

        Can you please update the documentation to better reflect the feature, usage, etc.? In particular, there should be a one-line mention of the feature in the description section of the JobConf header and more structured documentation for JobConf.setDebugScript. Thanks!

        Amareshwari Sriramadasu added a comment -

        Updated documentation as suggested.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12368363/patch-1857.txt
        against trunk revision r588341.

        @author +1. The patch does not contain any @author tags.

        patch -1. The patch command could not apply the patch.

        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/994/console

        This message is automatically generated.

        Amareshwari Sriramadasu added a comment -

        Regenerated patch again with the trunk.

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12368451/patch-1857.txt
        against trunk revision r588341.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1001/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1001/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1001/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1001/console

        This message is automatically generated.

        Devaraj Das added a comment -

        +1

        Devaraj Das added a comment -

        I just committed this. Thanks, Amareshwari!

        Hudson added a comment -
        Integrated in Hadoop-Nightly #283 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/283/ )

          People

          • Assignee: Amareshwari Sriramadasu
          • Reporter: Amareshwari Sriramadasu
          • Votes: 0
          • Watchers: 4
