Hadoop Common
HADOOP-2657

Enhancements to DFSClient to support flushing data at any point in time

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.17.0
    • Component/s: None
    • Labels:
      None
    • Release Note:
      A new API DFSOutputStream.flush() flushes all outstanding data to the pipeline of datanodes.

      Description

      The HDFS Append design (HADOOP-1700) requires that there be a public API, invokable by an application, to flush data written to an HDFS file. This API (popularly referred to as fflush(OutputStream)) will ensure that data written to the DFSOutputStream is flushed to datanodes and that any required metadata is persisted on the Namenode.

      This API has to handle the case where the client decides to flush after writing data that is not an exact multiple of io.bytes.per.checksum.
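The partial-chunk case the description mentions can be sketched in plain Java (an illustrative model, not the actual DFSClient code; the class name and the 512-byte default for io.bytes.per.checksum are assumptions): the bookkeeping amounts to tracking how many bytes sit past the last full checksum chunk.

```java
// Illustrative sketch only -- not Hadoop code. Models the bookkeeping a
// flush() must do when the bytes written so far are not an exact multiple
// of io.bytes.per.checksum (512 bytes by default).
class PartialChunkDemo {
    static final int BYTES_PER_CHECKSUM = 512; // assumed default chunk size

    // Number of bytes in the last, partially filled checksum chunk; these
    // are the bytes a flush must send even though no full chunk exists yet.
    static int partialChunkBytes(long bytesWritten) {
        return (int) (bytesWritten % BYTES_PER_CHECKSUM);
    }

    public static void main(String[] args) {
        // 1300 bytes = 2 full chunks of 512, plus a 276-byte partial chunk.
        System.out.println(partialChunkBytes(1300)); // prints 276
    }
}
```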

      1. flush.patch
        23 kB
        dhruba borthakur
      2. flush2.patch
        21 kB
        dhruba borthakur
      3. flush3.patch
        22 kB
        dhruba borthakur
      4. flush4.patch
        21 kB
        dhruba borthakur
      5. flush5.patch
        23 kB
        dhruba borthakur
      6. flush6.patch
        23 kB
        dhruba borthakur
      7. flush7.patch
        23 kB
        dhruba borthakur
      8. flush8.patch
        24 kB
        dhruba borthakur
      9. flush9.patch
        25 kB
        dhruba borthakur

        Issue Links

          Activity

          dhruba borthakur created issue -
          dhruba borthakur made changes -
          Field Original Value New Value
          Link This issue is related to HADOOP-1700 [ HADOOP-1700 ]
          dhruba borthakur added a comment -

          This patch implements the "flush" API for a DFS OutputStream. All data is flushed to datanodes and the block list for this file is persisted on the namenode.
          dhruba borthakur made changes -
          Attachment flush.patch [ 12376072 ]
          dhruba borthakur made changes -
          Assignee dhruba borthakur [ dhruba ]
          dhruba borthakur added a comment -

          Merged patch with latest trunk.
          dhruba borthakur made changes -
          Attachment flush2.patch [ 12376536 ]
          Raghu Angadi added a comment -

          Could you point me to the specific portion of the documentation, or a description of what this is supposed to do? There are quite a few changes in DFSClient but none on the DataNode? I am not sure how that ensures that flushes not at checksum chunk boundaries work.

          Also, I think we should not forbid the user from flushing before close. What is inefficient about a flush() followed by close() instead of just close()?
          dhruba borthakur added a comment -

          This patch does not require changes in the datanode because the datanode already has code that deals with packet replays. Each packet has an "offset in the block". This patch ensures that flushed packets have the correct value set in "offset in the block".

          A user can "flush" before "close", no problem. In this case, it is likely that the flush will result in an RPC to the namenode (to persist block locations). The close will make another RPC to the namenode that closes the file. Thus, there will be two RPCs to the namenode. If the application does multiple flushes followed by a close (without writing any new data), it will result in at most two RPCs to the namenode.
          dhruba borthakur added a comment -

          The implementation of a flush API requires that a partial checksum chunk be written to the file. The application can write more data after the flush; this requires that the FileSystem have a way to rewind the current write position to the beginning of the last partial chunk. Since this feature is not readily available on most file-system implementations (other than HDFS), I propose that flush() throw an exception if it is not supported for a particular file system.
          dhruba borthakur made changes -
          Attachment flush3.patch [ 12376610 ]
          Doug Cutting added a comment -

          We should change FSOutputStream to implement Seekable, with the default implementation of seek throwing an IOException, then use this in ChecksumFileSystem to rewind and overwrite the checksum. Then folks will only fail if they attempt to write more data after they've flushed on a ChecksumFileSystem that doesn't support seek. I don't think we will have any filesystems that both extend ChecksumFileSystem and can't support seek. Only LocalFileSystem currently extends ChecksumFileSystem, and it does support seek. So flush() shouldn't ever fail for existing FileSystems, but seek() will fail for most output streams (probably all except local). Does that make sense?
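Doug's proposal above can be sketched in plain Java (the class names below are illustrative stand-ins, not the real org.apache.hadoop.fs types): the base output stream implements a Seekable-style interface but throws by default, and only streams that can rewind override seek().

```java
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch of the proposal; stand-in names, not Hadoop classes.
interface SeekableSketch {
    void seek(long pos) throws IOException;
}

abstract class SketchFSOutputStream extends OutputStream implements SeekableSketch {
    // Default: most output streams cannot rewind, so seek() fails.
    public void seek(long pos) throws IOException {
        throw new IOException("seek not supported by this stream");
    }
}

// Only the local filesystem's stream would override seek(), letting
// ChecksumFileSystem rewind and overwrite the last checksum after a flush.
class SketchLocalOutputStream extends SketchFSOutputStream {
    long pos = 0;
    @Override public void seek(long p) { pos = p; }
    @Override public void write(int b) { pos++; }
}
```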
          dhruba borthakur added a comment -

          In the current trunk, FSDataOutputStream.flush() actually results in a flush call to the underlying stream. It does not flush the last CRC chunk that might be buffered. To keep backward compatibility, it might be ok to keep exactly these semantics for all filesystems other than HDFS. For HDFS, it will flush the last CRC chunk too. Do you think that this is acceptable?
          Doug Cutting added a comment -

          This is a bug in ChecksumFileSystem: if you call flush, it should flush the checksum stream too. But we perhaps don't have to fix that bug in this issue.

          Changing flush() to throw an exception on all but HDFS (as proposed above) would not be good. This issue should improve flush() for HDFS, and not break it for all other filesystems.

          Filing a separate issue to improve flush for ChecksumFileSystem would be good. This could either be done as I suggested above, by having FSOutputStream implement Seekable but only implementing seek() in the local filesystem. Or instead, we could leave the FSOutputStream API alone, and ChecksumFileSystem could, when more output is written after a flush, throw an exception if the underlying FSOutputStream implementation doesn't implement Seekable. In either case, RawLocalFileSystem would implement Seekable for its FSOutputStream implementation, and ChecksumFileSystem could use this to rewind checksum output when data is appended after a flush().
          dhruba borthakur added a comment -

          I agree with you completely. I was suggesting that the flush API do exactly what it does today for all filesystems other than HDFS. I will file another JIRA that describes the bug (that the last chunk does not get flushed). Thanks for your comments.
          dhruba borthakur made changes -
          Link This issue relates to HADOOP-2913 [ HADOOP-2913 ]
          dhruba borthakur added a comment -

          Merged patch with latest trunk. Incorporated Doug's review comments. This patch does not change the flush semantics for non-HDFS filesystems.
          dhruba borthakur made changes -
          Attachment flush4.patch [ 12376791 ]
          Raghu Angadi added a comment -
          1. The patch needs to be updated for trunk.
          2. Why isn't FSOutputSummer.flushBuffer() just flushBuffer(false)?
          3. Not sure why only the latter does count = chunkLen;
          4. In DFSClient: flush() sets closed to true without the cleanup done in closeInternal(); should it invoke closeInternal() instead?
          5. I don't think I followed everything thoroughly. I will chat with you regarding specifics if required.

          General thought: the flush implemented here looks very much like fsync() to me; that's why we have the extra RPC cost if the user flushes data just before closing. This even invokes namenode.fsync(). Waiting for an ack from datanodes is another thing that makes it behave like fsync(). Obviously there is nothing wrong with the extra guarantees; they are just more than what users might want and expect when they invoke flush(). These extra guarantees usually tend to have extra costs and might limit (now or in the future) the primary advantages of HDFS: scalability, throughput, and reliability. I would just flush the data to the socket and not wait for anything else. In contrast, fsync() tends to be used much less frequently because users know it would be costly.
          dhruba borthakur added a comment -

          Incorporated Raghu's review comments.

          One question is "what are the semantics of flush?". My opinion is that the client should confirm that the data has reached the OS buffers on all datanodes in the pipeline before the flush call returns. This will enable applications like HBase to use this flush API on the HBase transaction log (which is an HDFS file) and rest easy that it is persisted.

          If DFSOutputStream.flush() does not guarantee that the data has reached the OS buffers on the datanode(s), then this API might not be useful for HBase.
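The semantics argued for here can be sketched with plain Java (a toy model, not DFSClient; every name below is illustrative): flush() returns only after the buffered bytes have been handed to every replica in the pipeline.

```java
import java.io.ByteArrayOutputStream;
import java.util.List;

// Toy model of the proposed flush() semantics -- not Hadoop code. Each
// ByteArrayOutputStream stands in for a datanode in the write pipeline.
class PipelineStreamSketch {
    private final ByteArrayOutputStream clientBuffer = new ByteArrayOutputStream();
    private final List<ByteArrayOutputStream> datanodes;

    PipelineStreamSketch(List<ByteArrayOutputStream> datanodes) {
        this.datanodes = datanodes;
    }

    void write(byte[] b) {
        clientBuffer.write(b, 0, b.length); // buffer locally, like the client
    }

    // flush(): push all outstanding bytes to every replica before returning,
    // so the caller (e.g. a transaction log writer) knows they left the client.
    void flush() {
        byte[] pending = clientBuffer.toByteArray();
        for (ByteArrayOutputStream dn : datanodes) {
            dn.write(pending, 0, pending.length); // the "ack" here is the synchronous return
        }
        clientBuffer.reset();
    }
}
```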
          dhruba borthakur made changes -
          Attachment flush5.patch [ 12377299 ]
          Raghu Angadi added a comment -

          With flush5.patch: I am not sure we need lines 2312 through 2334 in DFSClient.java, i.e. everything seems to be ok even if we just delete them.
          dhruba borthakur added a comment -

          The reason this portion of code is needed is that flushBuffer may invoke writeChunk() for the last cached partial chunk. This causes currentPacket to change. The portion of code that you mentioned reverts the changes to currentPacket.
          Raghu Angadi added a comment -

          The code right after that also does the same. In fact, the code I mentioned does not get executed if writeChunk() writes any data.
          dhruba borthakur added a comment -

          Maybe you missed the "return" at the end of that code block. This block avoids calling flushInternal if no new data has been written since the last flush call. Am I missing something?
          Raghu Angadi added a comment -

          I guess not. I was trying to figure out the difference between these two cases of reverting.
          +1.
          Raghu Angadi added a comment -

          ok. Now I see. I guess

            if (cond) {
              block;
              return;
            }
            flushInternal();
            block;
            return;

          could be replaced by

            if (!cond) {
              flushInternal();
            }
            block;
            return;
          dhruba borthakur added a comment -

          Incorporated Raghu's comments.
          dhruba borthakur made changes -
          Attachment flush6.patch [ 12377392 ]
          Raghu Angadi added a comment -

          +1.
          dhruba borthakur made changes -
          Fix Version/s 0.17.0 [ 12312913 ]
          Status Open [ 1 ] Patch Available [ 10002 ]
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12377392/flush6.patch
          against trunk revision 619744.

          @author +1. The patch does not contain any @author tags.

          tests included +1. The patch appears to include 3 new or modified tests.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new javac compiler warnings.

          release audit +1. The applied patch does not generate any new release audit warnings.

          findbugs -1. The patch appears to introduce 4 new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1920/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1920/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1920/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1920/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          Findbugs warnings
          dhruba borthakur made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          dhruba borthakur added a comment -

          Fixed two findbugs warnings. There is one more warning in writeChunk that is not introduced by this patch and is actually safe.
          dhruba borthakur made changes -
          Attachment flush7.patch [ 12377504 ]
          dhruba borthakur added a comment -

          Fixed two findbugs warnings.
          dhruba borthakur made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12377504/flush7.patch
          against trunk revision 619744.

          @author +1. The patch does not contain any @author tags.

          tests included +1. The patch appears to include 3 new or modified tests.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new javac compiler warnings.

          release audit +1. The applied patch does not generate any new release audit warnings.

          findbugs -1. The patch appears to introduce 4 new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1925/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1925/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1925/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1925/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          Another findbugs warning.
          dhruba borthakur made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          dhruba borthakur added a comment -

          Findbugs reported that packetSize was not synchronized correctly in the setPacketSize() method. However, this method is called only by the unit tests, and only at the beginning of a test, so there isn't a real problem. To get rid of the findbugs warning, I synchronized this method too.

          This still does not fix another findbugs warning that this patch generates, because it complains about accessing "streamer". This warning is harmless and can be safely ignored.
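The kind of fix described here can be sketched as a plain Java pattern (an illustrative class, not the real DFSClient code; the field name and default are assumptions): guard every read and write of the field with the same monitor so findbugs no longer flags inconsistent synchronization.

```java
// Illustrative pattern only -- not the actual DFSClient code. Findbugs'
// inconsistent-synchronization warning fires when a field is written under
// a lock in some places but accessed without it elsewhere; synchronizing
// both accessors silences it.
class PacketConfigSketch {
    private int packetSize = 64 * 1024; // assumed default, for illustration

    synchronized void setPacketSize(int size) { packetSize = size; }
    synchronized int getPacketSize() { return packetSize; }
}
```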
          dhruba borthakur made changes -
          Attachment flush8.patch [ 12377511 ]
          dhruba borthakur made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          dhruba borthakur added a comment -

          Merged patch with latest trunk.
          dhruba borthakur made changes -
          Attachment flush9.patch [ 12377553 ]
          dhruba borthakur made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          dhruba borthakur added a comment -

          The last patch submission did not trigger a Hudson QA test. Merged patch with latest trunk and resubmitting.
          dhruba borthakur made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12377511/flush8.patch
          against trunk revision 619744.

          @author +1. The patch does not contain any @author tags.

          tests included +1. The patch appears to include 3 new or modified tests.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new javac compiler warnings.

          release audit +1. The applied patch does not generate any new release audit warnings.

          findbugs -1. The patch appears to introduce 3 new Findbugs warnings.

          core tests -1. The patch failed core unit tests.

          contrib tests -1. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1929/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1929/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1929/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1929/console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12377553/flush9.patch
          against trunk revision 619744.

          @author +1. The patch does not contain any @author tags.

          tests included +1. The patch appears to include 3 new or modified tests.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new javac compiler warnings.

          release audit +1. The applied patch does not generate any new release audit warnings.

          findbugs -1. The patch appears to introduce 4 new Findbugs warnings.

          core tests -1. The patch failed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1935/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1935/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1935/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1935/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          The test failed on Solaris with TestBalancer timing out. There is an outstanding issue, HADOOP-2784, that documents this problem.
          dhruba borthakur added a comment -

          I just committed this.
          dhruba borthakur made changes -
          Status Patch Available [ 10002 ] Resolved [ 5 ]
          Resolution Fixed [ 1 ]
          Hudson added a comment -

          Integrated in Hadoop-trunk #426 (See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/426/ )
          dhruba borthakur made changes -
          Release Note A new API DFSOutputStream.flush() flushes all outstanding data to the pipeline of datanodes.
          Nigel Daley made changes -
          Status Resolved [ 5 ] Closed [ 6 ]
          Owen O'Malley made changes -
          Component/s dfs [ 12310710 ]
          Transition | Time In Source Status | Execution Times | Last Executer | Last Execution Date
          Patch Available → Open | 1d 13h 33m | 3 | dhruba borthakur | 10/Mar/08 21:27
          Open → Patch Available | 50d 13h 46m | 4 | dhruba borthakur | 10/Mar/08 21:29
          Patch Available → Resolved | 23h 42m | 1 | dhruba borthakur | 11/Mar/08 21:12
          Resolved → Closed | 70d 22h 53m | 1 | Nigel Daley | 21/May/08 21:05

            People

            • Assignee:
              dhruba borthakur
              Reporter:
              dhruba borthakur
            • Votes:
              0
            • Watchers:
              1
