Hadoop Map/Reduce
MAPREDUCE-772

Changing the LineRecordReader algorithm so that it does not need to skip backwards in the stream

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.21.0
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Incompatible change, Reviewed

      Description

      The current LineRecordReader algorithm needs to move backwards in the stream (in its constructor) to position itself correctly. It moves back one byte from the start of its split, tries to read a record (i.e. a line), and throws that line away, because it is certain that the line will be handled by some other mapper. This algorithm is awkward and inefficient when used on a compressed stream, where data reaches the LineRecordReader through a codec. (In the current implementation Hadoop does not split a compressed file; it makes a single split from the start to the end of the file, so only one mapper handles it. We are currently working on a BZip2 codec that supports splitting in Hadoop, so this proposed change will make it possible to handle plain and compressed streams uniformly.)

      In the new algorithm, each mapper always skips its first line, because it is certain that that line has already been read by some other mapper. Consequently, each mapper must finish its reading at a record boundary, which is always beyond its upper split limit. With this change, the LineRecordReader no longer needs to move backwards in the stream.
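      A minimal sketch of the idea (an editorial illustration only, not the attached patch; the names start, end, pos, readLine and value are assumptions based on the discussion in the comments below):

          // Constructor / initialization: skip the partial first line unless this
          // is the very first split; the previous reader has already consumed it.
          if (start != 0) {
            pos = start + readLine(new Text());   // discard the straddling line
          } else {
            pos = start;
          }

          // next(key, value): read a line only if the record *starts* at or before
          // the upper split limit. The last record may run past 'end'; that is fine,
          // because the reader for the following split skips it. No backward seek
          // is ever needed.
          while (pos <= end) {
            int newSize = readLine(value);
            if (newSize == 0) {
              break;                              // end of stream
            }
            pos += newSize;
            return true;
          }
          return false;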

      1. 4010-mapreduce.patch
        2 kB
        Chris Douglas
      2. Hadoop-4010_version2.patch
        2 kB
        Abdul Qadeer
      3. Hadoop-4010_version3.patch
        4 kB
        Abdul Qadeer
      4. Hadoop-4010.patch
        2 kB
        Abdul Qadeer

          Activity

          Abdul Qadeer added a comment -

          Code to implement the suggested changes in LineRecordReader.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12388782/Hadoop-4010.patch
          against trunk revision 688936.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no tests are needed for this patch.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3103/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3103/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3103/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3103/console

          This message is automatically generated.

          Chris Douglas added a comment -

          Though TestMiniMRDFSSort is probably related to HADOOP-3950 and TestDatanodeDeath has been seen elsewhere (HADOOP-3628), the other unit tests should pass or be modified to reflect new semantics. In the latter case, this should be marked as an incompatible change.

          The comment in this patch explains the intent of the change more than the code it annotates. The reasoning is useful and appropriate to the JIRA, but the comment in the code should explain the algorithm.

          Abdul Qadeer added a comment -

          Bug fixes.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12389242/Hadoop-4010_version2.patch
          against trunk revision 690641.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3151/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3151/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3151/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3151/console

          This message is automatically generated.

          Chris Douglas added a comment -

          Canceling patch while unit test failures are resolved.

          It looks like cacheString and cacheString2 aren't getting broken up as they should be from xargs. Does this handle files with a single line?

          I also don't understand the change to TestLineInputFormat. NLineInputFormat is a special case, but it should still work. The last split is a special case because N may not evenly divide the number of input lines; unless it's also the last split, the first shouldn't be a special case.

          Abdul Qadeer added a comment -

          (1) In TestLineInputFormat, as you mentioned, an equal number of lines
          is placed in each split, except the last one. Due to the new LineRecordReader
          algorithm, the first split will process one more line than the other
          mappers. For this reason I am leaving out the first split as well.

          (2) About the caching test failure, I am not really sure what is happening.
          I tried the LineRecordReader in isolation for the same kind of test and it
          works. Something is going wrong in the symlink handling. I want to debug
          the test case, but doing so in Eclipse gives an error that the WebApps are
          not on the classpath, when in fact I have put them on the Eclipse classpath.
          Any suggestion on how to debug this test case?

          Thanks,
          Abdul Qadeer

          Chris Douglas added a comment -

          > Due to new LineRecordReader algorithm, the first split will process one more line as compared to other mappers

          That's probably not going to be acceptable to users of NLineInputFormat. Users employing N formatted lines to initialize and run a mapper may find their jobs no longer work if the input is offset or if a map receives N+1 lines. If this is necessary for the new algorithm, rewriting or somehow accommodating this case may be required.

          > Something is going wrong in symlink stuff. I want to debug the test case but doing so in Eclipse gives error[...]

          Sorry, I don't use eclipse. It looks like the symlink resolution is working; both cache files are picked up as arguments from the input file. At a glance, what appears to be going wrong is newline detection or propagation between invocations of cat from xargs, a bad interaction with streaming (it also uses LineRecordReader, IIRC), or input exercising an edge case for LineRecordReader. Since it sounds like you've ruled out the latter, have you tried running a streaming job like the one in the testcase? I suspect the cache isn't necessary to reproduce this.

          Abdul Qadeer added a comment -

          > Due to new LineRecordReader algorithm, the first split will process one more line as compared to other mappers
          >
          > That's probably not going to be acceptable to users of NLineInputFormat. Users employing N formatted lines to initialize and run a mapper may find their jobs no longer work if the input is offset or if a map receives N+1 lines. If this is necessary for the new algorithm, rewriting or somehow accommodating this case may be required.

          I have changed NLineInputFormat to work with the new LineRecordReader algorithm.
          The diff of the file follows. After this change I don't need to make any change
          in the TestLineInputFormat test case.

          --- src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java  (revision 687954)
          +++ src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java  (working copy)
          @@ -93,10 +93,19 @@
                 long begin = 0;
                 long length = 0;
                 int num = -1;
          -      while ((num = lr.readLine(line)) > 0) {
          +      while ((num = lr.readLine(line)) > 0) {
                   numLines++;
                   length += num;
                   if (numLines == N) {
          +          //NLineInputFormat uses LineRecordReader, which
          +          //always reads at least one character out of its
          +          //upper split boundary. So to use LineRecordReader
          +          //such that there are N lines in each split, we move
          +          //back the upper split limits of each split by one
          +          //character.
          +          if(begin == 0) {
          +            length--;
          +          }
                     splits.add(new FileSplit(fileName, begin, length, new String[]{}));
                     begin += length;
                     length = 0;

          Abdul Qadeer added a comment -

          (1) The code comments in LineRecordReader have been condensed.

          (2) NLineInputFormat is changed so that it works with the new LineRecordReader. It is guaranteed that each mapper will get N lines, except for the last split.

          (3) TestMultipleCachefiles.java is updated. The output of this test case depended on how a file is assigned to a mapper. Please see this JIRA (https://issues.apache.org/jira/browse/HADOOP-4182) for the details.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12390169/Hadoop-4010_version3.patch
          against trunk revision 696149.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3282/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3282/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3282/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3282/console

          This message is automatically generated.

          Owen O'Malley added a comment (edited) -

          The back skip was put in to handle a strange corner case:

          a b c \r \n d e f \r \n g h i \r \n

          Assume the split is between the first \r and \n. The right answer is:

          first: "abc", "def"
          second: "ghi"

          But what I believe your patch will do is:

          first: "abc", "def"
          second: "def", "ghi"

          because it will spot the \n and assume the second line should be handled.
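          For reference, assuming one byte per character, the byte offsets in this example stream are:

              offset:  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14
              byte:    a   b   c  \r  \n   d   e   f  \r  \n   g   h   i  \r  \n

          The split boundary described above falls between offset 3 (the first \r) and offset 4 (the \n); offset 5 is the first byte of "def".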

          Abdul Qadeer added a comment -

          Just to make sure I understand correctly, you mean
          that if there are two splits such that

          a b c \r is one split while
          \n d e f \r \n g h i \r \n is the second split.

          start = 0; end = 3 for the first split
          start = 3; end = 14 for the second split

          For Split 1:

          (1) Constructor will not throw away first line because
          start != 0 will fail.
          (2) In the next method, the first read line will return
          abc and current pos = 5 (i.e. points to d)
          So in the next iteration of next(), the check that
          while (pos <= end) will fail because pos = 5; end = 3

          For Split 2:
          (1) Constructor will try to throw away the first line. After that
          pos = 5 (i.e. points to d)
          (2) next() will read def and ghi

          So it looks okay to me? Have I missed something?

          Chris Douglas added a comment -

          It looks like the original commit of the back skip is ancient (soon after Nutch was moved out of the Incubator). After going over possible cases with Owen, it looks like removing the backup and changing LineRecordReader to have its end condition as pos <= end will work. After reading your explanation in HADOOP-4182, the TestMultipleCachefiles change looks OK.

          +1, assuming all unit tests still pass.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12390169/Hadoop-4010_version3.patch
          against trunk revision 734870.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

          +1 core tests. The patch passed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3755/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3755/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3755/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3755/console

          This message is automatically generated.

          Chris Douglas added a comment -

          I committed this. Thanks, Abdul

          Chris Douglas added a comment -

          The changes to mapred.LineRecordReader should have been propagated to mapreduce.lib.input.LineRecordReader.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12413963/4010-mapreduce.patch
          against trunk revision 795489.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/412/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/412/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/412/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/412/console

          This message is automatically generated.

          Abdul Qadeer added a comment -

          Looking at the patch queue, it seems everyone is failing those 7 contrib test cases.

          Amareshwari Sriramadasu added a comment -

          Changes look good to me.

          Chris Douglas added a comment -

          I committed the mapreduce changes.


            People

            • Assignee: Abdul Qadeer
            • Reporter: Abdul Qadeer
            • Votes: 0
            • Watchers: 7
