Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha1
    • Fix Version/s: 2.7.0
    • Component/s: datanode, hdfs-client, namenode
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      1. HDFS can now choose to append data to a new block instead of the end of the last partial block. Users can pass {{CreateFlag.APPEND}} and {{CreateFlag.NEW_BLOCK}} to the {{append}} API to request this behavior.
      2. HDFS now allows users to pass {{SyncFlag.END_BLOCK}} to the {{hsync}} API to finish the current block and write the remaining data to a new block.
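      As a rough sketch of how these two flags fit together (illustrative only: it targets the 2.7 client API, needs a running HDFS cluster plus the hadoop-hdfs client libraries on the classpath, and the path /demo.txt is made up):

```java
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

public class VariableBlockAppend {
    public static void main(String[] args) throws Exception {
        Path path = new Path("/demo.txt"); // hypothetical existing file
        DistributedFileSystem fs =
            (DistributedFileSystem) path.getFileSystem(new Configuration());

        // 1. Append to a NEW block instead of the end of the last partial block.
        FSDataOutputStream out = fs.append(path,
            EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK), 4096, null);
        out.write("appended to a fresh block\n".getBytes("UTF-8"));

        // 2. Finish the current block; subsequent writes go to a new block.
        ((HdfsDataOutputStream) out).hsync(EnumSet.of(SyncFlag.END_BLOCK));
        out.write("written to the next block\n".getBytes("UTF-8"));
        out.close();
    }
}
```

Either path leaves the file with a block shorter than the preferred block size, i.e., a variable-length block.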

      Description

      Currently HDFS supports fixed length blocks. Supporting variable length block will allow new use cases and features to be built on top of HDFS.

      1. editsStored
        6 kB
        Jing Zhao
      2. HDFS-3689.000.patch
        74 kB
        Jing Zhao
      3. HDFS-3689.001.patch
        88 kB
        Jing Zhao
      4. HDFS-3689.002.patch
        100 kB
        Jing Zhao
      5. HDFS-3689.003.patch
        112 kB
        Jing Zhao
      6. HDFS-3689.003.patch
        112 kB
        Jing Zhao
      7. HDFS-3689.004.patch
        114 kB
        Jing Zhao
      8. HDFS-3689.005.patch
        113 kB
        Jing Zhao
      9. HDFS-3689.006.patch
        119 kB
        Jing Zhao
      10. HDFS-3689.007.patch
        121 kB
        Jing Zhao
      11. HDFS-3689.008.patch
        167 kB
        Jing Zhao
      12. HDFS-3689.008.patch
        167 kB
        Jing Zhao
      13. HDFS-3689.009.patch
        169 kB
        Jing Zhao
      14. HDFS-3689.009.patch
        170 kB
        Jing Zhao
      15. HDFS-3689.010.patch
        170 kB
        Jing Zhao
      16. HDFS-3689.branch-2.patch
        169 kB
        Jing Zhao

        Issue Links

          Activity

          sureshms Suresh Srinivas added a comment -

           Some of the use cases include: support for more flexible concat operations than the highly restricted concat currently supported in HDFS, which helps with data loading, merging small files into a large file, etc. Variable length blocks also enable the ability to stop writing to the current block and start writing to a new block at arbitrary boundaries instead of at fixed block size lengths, which could simplify error handling in some cases.

          tlipcon Todd Lipcon added a comment -

          Hey Suresh. This is an interesting idea which I've thought about a couple times, too.

          Whenever I think about it, though, I wonder: will the variable length blocks have to all be multiples of the checksum chunk size? Or do you think they can start and stop on arbitrary byte boundaries?

          sureshms Suresh Srinivas added a comment -

           My feeling is it should be able to start and stop on arbitrary boundaries. There are quite a few related issues that I have not thought through, including the impact on applications that possibly assume a fixed block size. This change would also require quite a bit of testing.

          jingzhao Jing Zhao added a comment -

           Looks like we do not have a lot of dependencies on the fixed block length. I've found the following places related to the preferred block size of a file:

           1. concat. Currently concat has very strict preconditions, including that all the corresponding files have the same preferred block size and that all the files (except the last one) consist of full blocks.
          2. writing data. DFSOutputstream/DataStreamer allocates a new block after writing a full block.
           3. append. New data is always appended to the end of the last block unless the file ends exactly on a block boundary.
          4. addBlock. In the addBlock RPC, the client specifies the previous block as null to indicate that this is the first new block for an append operation right at the block boundary. NameNode calls analyzeFileState to verify this logic.
          jingzhao Jing Zhao added a comment -

           Upload a preliminary patch that addresses #3 and #4 above. Specifically, a new append RPC is added which always appends new data to a new block, so the previous last block becomes a variable-length block. Some unit tests are added to make sure the appended file can still be read/pread by the current DFSInputStream.

          If this is a correct direction towards variable block length support, the remaining work can be:

          1. bug fix and code cleanup
           2. loosen the restrictions on concat
          3. add support in DFSOutputStream to let clients specify when to allocate a new block
          cutting Doug Cutting added a comment -

           Many applications assume that all blocks but the last are the same size, partitioning their work along block boundaries, e.g., FileInputFormat#getSplits. Performance will likely suffer when using such applications on files with mixed-length blocks.

          jingzhao Jing Zhao added a comment -

           Thanks for the comments, Doug Cutting! Do you think it is possible for us to locate and fix these applications? Or would a lower limit on the block length help from a performance perspective? Basically, I expect variable block lengths can also improve the performance and efficiency of many applications. For example, Hive may save up to 40% of the storage that is currently wasted on padding. Thus it would be great if we can have this feature while still keeping existing applications working fine.

          jingzhao Jing Zhao added a comment -

           Update the patch a bit to demonstrate #3: support in DFSOutputStream to let clients specify when to allocate a new block. Currently I simply add a new SyncFlag named END_BLOCK. Clients can call hsync(END_BLOCK) to force the completion of the current block. Some unit tests are also included to make sure the basic functionality works.

          kihwal Kihwal Lee added a comment -

           We also need to think about the file checksum. I think we can make it consistent even with variable block sizes. But when we move on to the next step of allowing concat to work in the same way, things can get complicated. E.g., when copying a file, some use cases may require duplicating the same block layout and per-block checksum settings.

          octo47 Andrey Stepachev added a comment -

           Maybe it is more convenient to implement sparse files (http://en.wikipedia.org/wiki/Sparse_file).
           Improved formats like ORC and Parquet can benefit from such files and use sparseness for efficient merging.
           To prevent old applications from reading sparse files, they would pass a flag (like ALLOW_SPARSE) to the open method; an error would be generated otherwise. Reading sparse regions would generate no IO and just return zeros.

          tlipcon Todd Lipcon added a comment -

           Nice idea, Andrey. Jing, can you explain more about the use case for padding blocks in Hive? If we had a new API like DFSOutputStream.writeZeros(int length), we could implement it with a special packet type and have the DN just call "truncate" to extend the size of the underlying block file. It would have to write the appropriate checksum data as well, but that's still a ~99% reduction in IO. As far as I'm aware, all commonly used local file systems support sparse files.

          cutting Doug Cutting added a comment -

          +1 for sparse files. They could provide the same advantages of variable-length blocks without creating performance surprises for existing apps. You'd be required to use formats that allow for zeros/sparsity, so the feature would be opt-in by apps. Not sure why a flag is required on open.

          octo47 Andrey Stepachev added a comment -

           Doug Cutting I mentioned the flag just as one possible solution to prevent accidental reads of merged files (text files, for example). With this flag an application declares that it is OK with padding and will handle it properly.

          sureshms Suresh Srinivas added a comment -

           Doug Cutting, the applications would still need to be able to deal with sparse files, right? Specifically, the issue you brought up earlier still applies, right?

          jingzhao Jing Zhao added a comment -

           The sparse file idea sounds good to me. Let me see if I can implement a demo for this. Also, let's hear from Owen O'Malley, Gunther Hagleitner, and Gopal V, who understand the Hive use cases, to see if this can meet their requirements.

          gopalv Gopal V added a comment -

          The sparse file idea was thrown around a while back, jokingly as a "faster way to write zeroes".

          That said, the zero padding issue has already been worked around in ORC - HIVE-7231

          sureshms Suresh Srinivas added a comment -

          I also like the idea of sparse files. But I do not think it is going to be any easier than supporting variable length blocks. I also think applications need changes to be aware of sparse files.

          jingzhao Jing Zhao added a comment -

           Another question with sparse files: when we copy a replica across DNs (e.g., for under-replication recovery, the Balancer, or distcp), will we lose the benefits of sparse files unless we let the DNs/tools understand the sparse semantics?

          cutting Doug Cutting added a comment -

           Suresh Srinivas, variable length blocks would permit applications to read data without modification, but with surprising performance and impact on cluster resources. One could, e.g., efficiently append a bunch of CSV files to generate a big CSV file that has variable length blocks, then run MapReduce jobs over that file. But the file reads would no longer be block aligned and the job would behave differently than one might expect.

          On the other hand, a sparse file would permit folks to append data as efficiently as variable-length blocks, but to unmodified applications their input would now have chunks of zeros inserted and would likely not be well-formatted data. So using sparse files forces applications to explicitly adopt the feature, rather than appearing to still work but with radically different performance.

          It might be better not to have a "transparent" feature that contains performance surprises, but instead have something that both writers and readers must knowingly adopt.

          octo47 Andrey Stepachev added a comment -

           Jing Zhao of course they should. The sparse property should be maintained in metadata (I think). That gives an opportunity to handle sparse files right from the client (generating zeros in the client code with no data transferred).

          jingzhao Jing Zhao added a comment -

          It might be better not to have a "transparent" feature that contains performance surprises, but instead have something that both writers and readers must knowingly adopt.

           Yes, if I understand correctly, both variable block lengths and sparse files require that writers and readers know about the changes. And the current implementation is not exactly a "transparent" feature, since 1) the writer needs to call either the new append API or hsync(END_BLOCK) to generate a variable-length block, and 2) the reader can identify the block length.

          of course they should. Sparse property should be maintained in metadata (i think)

           Thanks for the response, Andrey Stepachev! My concern is that if, to support sparse files, we still end up adding extra metadata to both the DN (and the data transfer protocol) and the NN (to support sparse block copies triggered by tools like distcp), we may lose the beauty and simplicity of the sparse file idea. And based on the discussion between Suresh Srinivas and Doug Cutting, we still require the assumption that both writers and readers knowingly adopt the feature.

          octo47 Andrey Stepachev added a comment -

           You are right, Jing Zhao, it brings a lot of work. But without metadata, I'm wondering how replication would work.

          tlipcon Todd Lipcon added a comment -

           The replication issue with sparse files isn't new; rsync, for example, handles it with the "--sparse" flag. I haven't looked at the implementation, but my guess is that it would be relatively easy to implement this on the DN side following whatever technique rsync uses. One thought is that we could identify runs of zeros fairly easily by looking at the checksums: an all-zero checksum chunk has a constant crc32 which we can compare against in a single instruction. The DN could relatively easily loop through the checksums of an incoming data packet, verify whether each chunk is all zeros, and if so, turn it into a sparse write.

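           The constant-CRC trick can be sketched like this (a standalone illustration, not DataNode code; the class and method names are made up, and HDFS's typical bytes-per-checksum of 512 is assumed):

```java
import java.util.zip.CRC32;

public class ZeroChunkDetector {
    // Bytes per checksum chunk; 512 is the usual HDFS default (assumption here).
    static final int CHUNK_SIZE = 512;
    // The CRC of an all-zero chunk is a constant, so one comparison filters most chunks.
    static final long ZERO_CRC = crcOf(new byte[CHUNK_SIZE]);

    static long crcOf(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    // A chunk qualifies for a sparse write only if its checksum matches the
    // all-zero constant AND the payload really is all zeros; the second scan
    // guards against an unlikely CRC coincidence.
    static boolean isZeroChunk(byte[] chunk, long checksum) {
        if (checksum != ZERO_CRC) {
            return false;
        }
        for (byte b : chunk) {
            if (b != 0) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] zeros = new byte[CHUNK_SIZE];
        byte[] data = new byte[CHUNK_SIZE];
        data[100] = 1;
        System.out.println(isZeroChunk(zeros, crcOf(zeros))); // true
        System.out.println(isZeroChunk(data, crcOf(data)));   // false
    }
}
```

The cheap path is the single checksum comparison; the byte scan only runs for chunks that already look like zeros.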
          octo47 Andrey Stepachev added a comment -

           Todd Lipcon, there could be full sparse support, or sparse semantics for variable-length blocks.
           The idea was to make HDFS files sparse: each HDFS block would carry an additional field denoting the amount of real data written to it.
           On the reading side, client code would be able to:
           a) query exactly how much data is available, and efficiently skip zeros.
           b) for clients that don't care about the zeros, generate the zeros locally on the client side.

           But the idea of using sparse files for block storage is good on its own, especially in the case of immutable data.

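           A toy model of the per-block "real data length" idea (all names here, SparseBlock and readInto, are hypothetical, not HDFS code): reads within the stored prefix come from data, while reads past it are materialized as zeros on the client side with no IO.

```java
import java.util.Arrays;

public class SparseBlock {
    final byte[] storedData;  // only the real data is stored and transferred
    final long blockLength;   // logical block length, sparse tail included

    SparseBlock(byte[] storedData, long blockLength) {
        this.storedData = storedData;
        this.blockLength = blockLength;
    }

    // Read up to len bytes starting at offset into buf; bytes past the stored
    // prefix are a hole, so the "client" fills them with zeros without any IO.
    int readInto(long offset, byte[] buf, int len) {
        if (offset >= blockLength) {
            return -1; // past the logical end of the block
        }
        int n = (int) Math.min(len, blockLength - offset);
        for (int i = 0; i < n; i++) {
            long pos = offset + i;
            buf[i] = pos < storedData.length ? storedData[(int) pos] : 0;
        }
        return n;
    }

    public static void main(String[] args) {
        // 4 real bytes, logical length 10: bytes 4..9 are the sparse tail.
        SparseBlock block = new SparseBlock(new byte[] {1, 2, 3, 4}, 10);
        byte[] buf = new byte[6];
        int n = block.readInto(2, buf, 6);
        System.out.println(n + " " + Arrays.toString(buf)); // 6 [3, 4, 0, 0, 0, 0]
    }
}
```

This is exactly the metadata question under discussion: the extra "real data length" field has to survive replication and tools like distcp for the model to hold.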
          cmccabe Colin P. McCabe added a comment -

          One thought is that we could identify runs of zeros fairly easily by looking at the checksums: an all-zero checksum chunk has a constant crc32 which we can compare for in a single instruction. The DN could relatively easily loop through the checksums of an incoming data packet, and verify whether it is all zeros, and if so, turn it into a sparse write.

          Interesting idea. This would allow us to automatically deal with long stretches of zeroes by creating sparse block files on the datanode. Of course we have to check that the zero checksum really did come from a zeroed checksum chunk, rather than an unlikely coincidence. I wonder if we could create sparse files without any new APIs this way...

          cmccabe Colin P. McCabe added a comment -

          So, there are a few use-cases for variable-length blocks that we've kicked around in the past:

          • Simpler implementation of append and pipeline recovery. We could just start a new block and forget about the old blocks. genstamp can go away, as well as all the pipeline recovery code and replica state machine. Replicas are then either finalized or not, like in the original Hadoop versions.
          • Make hdfsConcat fully generic, rather than requiring N-1 of the files being concatted to be exactly 1 block long like now. This would make that call a lot more useful. (Implemented above by Jing)
          • Some file formats really, really want to have block-aligned records. This is natural if you want to have one node process a set of records... you don't want "torn" records that span multiple datanodes. Apache Parquet is certainly one of these formats; I think ORCFile is too. Right now these file formats need to accept "torn" records or add padding. I guess sparse files could make the padding less inefficient.

          Disadvantages of variable-length blocks:

          • As Doug pointed out, MapReduce InputFormats that use # of blocks to decide on a good data split won't work too well. I wonder how much effort it would take to convert these to take block length into account?
          • Other applications may also be assuming fixed block sizes, although our APIs have never technically guaranteed that.
          cutting Doug Cutting added a comment -

          Right now these file formats need to accept "torn" records or add padding.

          ... or set the block size for the file to something large and start a new file whenever output approaches that, keeping each file in a single (big) block, guaranteeing that no records cross block boundaries.

          owen.omalley Owen O'Malley added a comment -

          Since this is a discussion of what to put into trunk, incompatible changes aren't a blocker. Furthermore, most clients would never see the difference. Variable length blocks would dramatically improve the ability of HDFS to support better file formats like ORC.

          On the other hand, I've had very bad experiences with sparse files on Unix. It is all too easy for a user to copy a sparse file and not understand that the copy is 10x larger than the original. That would be bad and I do not think that HDFS should support it at all.

          owen.omalley Owen O'Malley added a comment -

          One follow up is that fixing MapReduce to use the actual block boundaries rather than dividing up the file in fixed size splits would not be difficult and would make the generated file splits for ORC and other block compressed files much much better.

          Furthermore, note that we could remove the need for lzo and zlib index files for text files by having TextOutputFormat cut the block at a line boundary and flush the compression codec. Thus TextInputFormat could divide the file at block boundaries and have them align at both a compression chunk boundary and a line break. That would be great.

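          Owen's point about splitting on actual block boundaries can be sketched with a toy helper. The class and method below are invented for illustration; this is not Hadoop's FileInputFormat code. With variable-length blocks, emitting one split per block never produces a "torn" record, provided the writer ended each block on a record boundary.

```java
// Toy model of split generation from actual (variable-length) block
// boundaries, instead of chopping the file into fixed-size splits.
// BlockAlignedSplits and splitsFromBlockLengths are hypothetical names.
public class BlockAlignedSplits {

    // Given the length of each block in file order, return {offset, length}
    // pairs, one split per block. A fixed-size splitter would instead cut at
    // multiples of a split size, potentially mid-record.
    public static long[][] splitsFromBlockLengths(long[] blockLengths) {
        long[][] splits = new long[blockLengths.length][2];
        long offset = 0;
        for (int i = 0; i < blockLengths.length; i++) {
            splits[i][0] = offset;          // split starts where the block starts
            splits[i][1] = blockLengths[i]; // split covers exactly one block
            offset += blockLengths[i];
        }
        return splits;
    }
}
```

          Each split then maps to exactly one datanode-local block, which is the locality property the InputFormats want.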
          jingzhao Jing Zhao added a comment -

          Rebase the patch.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12688963/HDFS-3689.002.patch
          against trunk revision 66cfe1d.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          -1 javac. The applied patch generated 1220 javac compiler warnings (more than the trunk's current 1219 warnings).

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 5 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          The following test timeouts occurred in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9121//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9121//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9121//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Javac warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9121//artifact/patchprocess/diffJavacWarnings.txt
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9121//console

          This message is automatically generated.

          jingzhao Jing Zhao added a comment -

          Update the patch with concat support.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12692131/HDFS-3689.003.patch
          against trunk revision f92e503.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9204//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12692140/HDFS-3689.003.patch
          against trunk revision f92e503.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
          org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
          org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9205//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9205//console

          This message is automatically generated.

          daryn Daryn Sharp added a comment -

          To simplify review, would you please provide a bullet list summary of exactly what features the latest patch includes as well as any incompatibilities?

          jingzhao Jing Zhao added a comment -

          Sure. Currently the patch includes the following changes:

          1. Add a new append2 API which always appends new data to a new block, and thus the previous last block becomes a block with variable length. Some unit tests are added to make sure the appended file can still be read/pread by the current DFSInputStream.
          2. Add support in DFSOutputStream to let clients specify when to allocate a new block. The patch simply adds a new SyncFlag named END_BLOCK. Clients can call hsync(END_BLOCK) to force the completion of the current block.
          3. Loosen the restrictions on concat. The current concat has the following restrictions:
          • The src files and the target file must be in the same directory and cannot be empty
          • The target file and all but the last src files cannot have partial block
          • The src files and the target file must share the same replication factor and preferred block size
          • The src files and the target file cannot be in any snapshot

          The current patch makes the following changes, which I think need further discussion and confirmation:

          • The src files and the target file do not need to be in the same directory
          • The src files and the target file can have partial blocks
          • The src/target files may have different preferred block size and replication factor, and after the concat the target file keeps its original setting
          • The src files still cannot be included in any snapshot (see HDFS-4529 for details), but the target file can be in a snapshot
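          The END_BLOCK behavior described in item 2 above can be modeled with a small toy class. This is purely illustrative — it is not DFSOutputStream, and all names are invented; it only shows how forcing the end of the current block yields variable-length blocks.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a writer whose blocks normally fill to a preferred size,
// but which can be told to finish the current block early (the effect of
// hsync with END_BLOCK, as described in the comment above).
public class VariableBlockWriter {
    private final long preferredBlockSize;
    private final List<Long> blockLengths = new ArrayList<>();
    private long currentBlockLen = 0;

    public VariableBlockWriter(long preferredBlockSize) {
        this.preferredBlockSize = preferredBlockSize;
    }

    public void write(long numBytes) {
        long remaining = numBytes;
        while (remaining > 0) {
            long n = Math.min(preferredBlockSize - currentBlockLen, remaining);
            currentBlockLen += n;
            remaining -= n;
            if (currentBlockLen == preferredBlockSize) {
                endBlock(); // block is full: subsequent data goes to a new block
            }
        }
    }

    // Analogue of hsync(END_BLOCK): finalize the current block even if it is
    // shorter than the preferred size, so the next write starts a fresh block.
    public void endBlock() {
        if (currentBlockLen > 0) {
            blockLengths.add(currentBlockLen);
            currentBlockLen = 0;
        }
    }

    public long[] close() {
        endBlock();
        long[] result = new long[blockLengths.size()];
        for (int i = 0; i < result.length; i++) {
            result[i] = blockLengths.get(i);
        }
        return result;
    }
}
```

          Appending with the new-block flag corresponds to calling endBlock() before the first write of the append: the previous last block stays short, and new data lands in a fresh block.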
          jingzhao Jing Zhao added a comment -

          After an offline discussion with Sanjay Radia and Tsz Wo Nicholas Sze, the 005 patch still keeps the restriction that the source files and the target file should be in the same directory. In this way we do not need to update the quota.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12692294/HDFS-3689.004.patch
          against trunk revision d336d13.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.ha.TestZKFailoverControllerStress
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9210//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9210//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12692346/HDFS-3689.005.patch
          against trunk revision 6464a89.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          The test build failed in hadoop-hdfs-project/hadoop-hdfs-nfs

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9214//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9214//console

          This message is automatically generated.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          Thanks for working on this! Some comments so far:

          • Instead of adding append2, how about adding another append method with a boolean appendToNewBlock parameter? The original append could just call it with appendToNewBlock=false. We also don't need Append2RequestProto/Append2ResponseProto. Just add an optional field to AppendRequestProto/AppendResponseProto.
          • We could also add appendToNewBlock to DFSOutputStream constructor to reduce code duplication.
          • Typo in the code below? Should it be flushBuffer(!endBlock, true)?
            //DFSOutputStream.flushOrSync
            -        // flush checksum buffer, but keep checksum buffer intact
            -        int numKept = flushBuffer(true, true);
            +        // flush checksum buffer, but keep checksum buffer intact if we do not
            +        // need to end the current block
            +        int numKept = flushBuffer(true, !endBlock);
            
          jingzhao Jing Zhao added a comment -

          Thanks for the review, Nicholas! Update the patch to address the comments. Here's a summary of the changes:

          1. remove the new Append2 API and add a boolean newBlock to ClientProtocol#append to indicate whether the data should be appended to a new block
          2. still add a new OP_APPEND editlog and use it also for the old append operation (instead of using OP_ADD)
          3. update the APPEND inotify event
          4. Fix the bug in flushOrSync as pointed by Nicholas
          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12692638/HDFS-3689.006.patch
          against trunk revision 780a6bf.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
          org.apache.hadoop.hdfs.TestDFSShell

          The following test timeouts occurred in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.server.balancer.TestBalancer

          The test build failed in hadoop-hdfs-project/hadoop-hdfs-nfs

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9233//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9233//console

          This message is automatically generated.

          szetszwo Tsz Wo Nicholas Sze added a comment -
          • For concat,
            • We also need to enforce the same replication in concat if we don't want to update disk quota.
            • Let's move the code for checking src parent directory to verifySrcFiles. We should print out the path when creating an IllegalArgumentException.
            • In addition, could you also check if debug is enabled in FSDirConcatOp.concat? Otherwise, it will compute the srcs string even if debug is disabled.
          • Unintentional format change in PBHelper.convertEditsResponse(..)? There is a long line.
          • Let's also change WebHDFS to support append to a new block. We may do it separately.
          szetszwo Tsz Wo Nicholas Sze added a comment -
          • Let's rename prepareFileForWrite to prepareFileForAppend.
          • Need default for inotify.proto
          jingzhao Jing Zhao added a comment -

          Thanks for the review, Nicholas! Update the patch to address the comments.

          We also need to enforce the same replication in concat if we don't want to update disk quota.

          The new patch just updates the diskspace quota usage after the concat.

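          As a hedged illustration of why concat needs a diskspace-quota update when replication factors differ: if, after concat, the source blocks are accounted at the target file's replication factor, the usage delta is each source length times the difference in replication. The class below is invented for illustration and is not the actual FSDirConcatOp code.

```java
// Illustrative quota arithmetic for concat across replication factors.
// Assumes diskspace usage of a file is length * replication, and that
// concat'ed blocks adopt the target file's replication setting.
public class ConcatQuota {
    public static long diskspaceDelta(long[] srcLengths, int[] srcReplication,
                                      int targetReplication) {
        long delta = 0;
        for (int i = 0; i < srcLengths.length; i++) {
            // each src's bytes are now stored targetReplication times
            // instead of srcReplication[i] times
            delta += srcLengths[i] * (targetReplication - srcReplication[i]);
        }
        return delta;
    }
}
```

          When all files share one replication factor the delta is zero, which is why enforcing equal replication was the alternative to updating the quota.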
          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12693194/HDFS-3689.007.patch
          against trunk revision 0a2d3e7.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 12 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.ha.TestZKFailoverController
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9272//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9272//console

          This message is automatically generated.

          cmccabe Colin P. McCabe added a comment -

          Thanks for working on this, Jing.

          public FSDataOutputStream append(Path f, final boolean toNewBlock,
              final int bufferSize, final Progressable progress) throws IOException {
          

          Let's make toNewBlock a CreateFlag and create a new append function that takes an EnumSet of CreateFlag. We talked about this earlier and the combinatorial explosion of FileSystem overloaded functions is a real problem, which the CreateFlag strategy solves really well. This also makes it easy to use the feature via FileContext.

          Also, are you targeting this for 3.0 or for a 2.x release?

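          Colin's EnumSet suggestion can be sketched as follows. The class and its flag enum here are invented stand-ins — per the release note, the real flags are CreateFlag.APPEND and CreateFlag.NEW_BLOCK on the Hadoop append API — but the shape of the pattern is the same: one method taking a set of flags, instead of one overload per boolean option.

```java
import java.util.EnumSet;

// Sketch of the EnumSet-of-flags pattern. FlagAppendSketch and its enum are
// hypothetical; they only demonstrate how flags avoid the combinatorial
// explosion of overloaded methods that separate booleans would cause.
public class FlagAppendSketch {
    public enum CreateFlag { CREATE, APPEND, NEW_BLOCK }

    // A single entry point covers both append variants; a future option
    // becomes a new enum constant, not another overload.
    public static String append(String path, EnumSet<CreateFlag> flags) {
        if (!flags.contains(CreateFlag.APPEND)) {
            throw new IllegalArgumentException("APPEND flag required for " + path);
        }
        return flags.contains(CreateFlag.NEW_BLOCK)
            ? "appending to a new block of " + path
            : "appending to the last partial block of " + path;
    }

    // Convenience wrappers showing the two call sites.
    public static String appendToNewBlock(String path) {
        return append(path, EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK));
    }

    public static String appendToLastBlock(String path) {
        return append(path, EnumSet.of(CreateFlag.APPEND));
    }
}
```

          The same EnumSet value can also flow through wrapper layers such as FileContext without adding new method signatures at each level.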
          jingzhao Jing Zhao added a comment -

          Thanks for the comments, Colin. Using CreateFlag is a good suggestion. I've updated the patch to address your comments.

          Currently the patch targets 3.0. We can merge this into 2.x after making sure this feature does not break existing applications' functionality. So far I have only checked FileInputFormat, and it looks like variable-length blocks may affect its performance but will not break its functionality.
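          The FileInputFormat observation can be made concrete with a deliberately simplified sketch (this is not the actual FileInputFormat code, which also combines or divides blocks by a configured split size): splits are derived from block boundaries, so variable-length blocks merely produce uneven splits that still cover every byte exactly once — a performance concern, not a correctness one.

```java
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // Each long[] is {offset, length}; blocks may have variable lengths.
    static List<long[]> splitsFromBlocks(long[] blockLengths) {
        List<long[]> splits = new ArrayList<>();
        long offset = 0;
        for (long len : blockLengths) {
            splits.add(new long[] {offset, len}); // one split per block
            offset += len;
        }
        return splits;
    }

    public static void main(String[] args) {
        // Variable-length blocks, e.g. a short block left by hsync(END_BLOCK).
        List<long[]> splits = splitsFromBlocks(new long[] {128, 7, 128});
        long covered = 0;
        for (long[] s : splits) covered += s[1];
        // Uneven split sizes, but the whole file is still covered exactly once.
        System.out.println(splits.size() + " splits covering " + covered + " units");
    }
}
```

          The small 7-unit split illustrates the performance effect: a task scheduled on it does little work, but no data is skipped or read twice.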

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12693724/HDFS-3689.008.patch
          against trunk revision 0742591.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9295//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12693729/HDFS-3689.008.patch
          against trunk revision 0742591.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 14 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          The test build failed in hadoop-hdfs-project/hadoop-hdfs-nfs

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9296//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9296//console

          This message is automatically generated.

          szetszwo Tsz Wo Nicholas Sze added a comment -
          +    FSDirectory.unprotectedUpdateCount(targetIIP, targetIIP.length() - 1,
          +        -count, trgInode.diskspaceConsumed() - oldDiskSpace);
          
          • Disk usage may increase after concat, so we need to verify quota at the very beginning and then update quota at the end.
          • We should add new tests for both the increasing and the decreasing disk-space cases.
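          The verify-first/update-last ordering can be sketched generically. All names here (verifyQuota, concat, QuotaException) are illustrative, not the exact HDFS signatures:

```java
public class ConcatQuotaSketch {
    static class QuotaException extends RuntimeException {
        QuotaException(String m) { super(m); }
    }

    long used;        // disk space currently charged against the quota
    final long limit; // quota limit

    ConcatQuotaSketch(long used, long limit) {
        this.used = used;
        this.limit = limit;
    }

    // Step 1: verify the delta BEFORE mutating anything. Concat may increase
    // consumption, e.g. when the target has a higher replication factor than
    // the sources; decreases (delta <= 0) never need a check.
    void verifyQuota(long delta) {
        if (delta > 0 && used + delta > limit) {
            throw new QuotaException("quota exceeded by " + (used + delta - limit));
        }
    }

    // Step 2: perform the operation, then update the counts at the very end.
    void concat(long oldDiskspace, long newDiskspace) {
        long delta = newDiskspace - oldDiskspace;
        verifyQuota(delta);  // fail early, leaving state untouched
        // ... perform the actual concat here ...
        used += delta;       // update counts only after success
    }
}
```

          Checking up front means a quota violation aborts the concat before any namespace change, while a decreasing delta skips the check entirely — matching the two test cases suggested above.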
          jingzhao Jing Zhao added a comment -

          Thanks again, Nicholas! I've updated the patch to add quota verification.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12694112/HDFS-3689.009.patch
          against trunk revision 3aab354.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 14 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9311//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9311//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12694208/HDFS-3689.009.patch
          against trunk revision 24aa462.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 14 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
          org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

          The following test timeouts occurred in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9313//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9313//console

          This message is automatically generated.

          szetszwo Tsz Wo Nicholas Sze added a comment -
          • Add an if-statement checking targetRepl - src.getBlockReplication() != 0 before adding it to delta, since src.computeFileSize() is a bit expensive.
          • The if-condition below should check if delta <= 0 and the comment "if delta is <0" should be updated to "if delta is <= 0".
            +    if (!fsd.getFSNamesystem().isImageLoaded() || fsd.shouldSkipQuotaChecks()) {
            +      // Do not check quota if delta is <0 or editlog is still being processed
            +      return;
            
          jingzhao Jing Zhao added a comment -

          Thanks, Nicholas! I've updated the patch to address the comments.

          The if-condition below should

          This if-condition is actually already covered by verifyQuota, so I only updated the comment here.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12694704/HDFS-3689.010.patch
          against trunk revision 6f9fe76.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 14 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9336//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9336//console

          This message is automatically generated.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          +1 patch looks good.

          jingzhao Jing Zhao added a comment -

          Thanks for the review, Nicholas! I've committed this to trunk. And thanks to all for the discussion.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #6940 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6940/)
          HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. (jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
          • hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #87 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/87/)
          HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. (jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #821 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/821/)
          HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. (jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
          • hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #84 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/84/)
          HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. (jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2019 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2019/)
          HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. (jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #88 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/88/)
          HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. (jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
          • hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2038 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2038/)
          HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. (jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
          • hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
          arpitagarwal Arpit Agarwal added a comment -

          Hi Jing Zhao, can we merge this change to branch-2? Thanks.

          jingzhao Jing Zhao added a comment -

          Yeah, I will post a patch for branch-2 soon.

          jingzhao Jing Zhao added a comment -

          Posted the patch for branch-2. So far I have not found any place where functionality is broken by variable-length blocks. Maybe we should merge this to branch-2 this week? Note that a variable-length block will not be generated unless the user explicitly passes in CreateFlag#NEW_BLOCK while creating the file. Also, if we find anything broken by this feature, we can fix it in separate jiras.

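A minimal sketch of how a client would opt into this feature, per the release note and the comment above (the paths, buffer size, and data here are illustrative, and the code assumes a running HDFS 2.7.0+ cluster, so it is not runnable standalone):

```java
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

public class VariableLengthBlockExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/varblock/data");  // illustrative path
    DistributedFileSystem dfs =
        (DistributedFileSystem) file.getFileSystem(conf);

    // Append to a NEW block instead of the end of the last partial block.
    try (FSDataOutputStream out = dfs.append(file,
        EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK),
        4096, null)) {
      out.write("appended to a fresh block\n".getBytes("UTF-8"));
    }

    // END_BLOCK on hsync finishes the current block; later writes
    // go to a new block, producing a variable-length block boundary.
    try (HdfsDataOutputStream out = (HdfsDataOutputStream) dfs.append(file)) {
      out.write("data before the block boundary\n".getBytes("UTF-8"));
      out.hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH, SyncFlag.END_BLOCK));
      out.write("data in a new block\n".getBytes("UTF-8"));
    }
  }
}
```

Without CreateFlag.NEW_BLOCK or SyncFlag.END_BLOCK, existing append and hsync behavior is unchanged, which is why no existing clients are affected.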
          arpitagarwal Arpit Agarwal added a comment -

          Thanks a lot for generating the branch-2 merge patch Jing!

          Since no existing clients will be affected by this feature +1 on merging to branch-2 this week and fixing any issues as they come up.

          sureshms Suresh Srinivas added a comment -

          Jing Zhao, can you please add a release note on how one uses this feature? Also, do you think we need the fsck output to say which files have variable-length blocks? Finally, should we now change the concat implementation to be more flexible based on this feature?

          jingzhao Jing Zhao added a comment -

          Thanks for the comments, Suresh! The current patch already includes the changes to concat. I will add the release note and also open a new jira to track the fsck part.

          vinayrpet Vinayakumar B added a comment -

          HDFS-7703 depends on this for the branch-2 commit, so I am holding the HDFS-7703 merge to branch-2 until this is in.

          jingzhao Jing Zhao added a comment -

          I've merged this to branch-2.

          vinayrpet Vinayakumar B added a comment -

          Thanks Jing Zhao.


            People

            • Assignee: jingzhao Jing Zhao
            • Reporter: sureshms Suresh Srinivas
            • Votes: 1
            • Watchers: 41

              Dates

              • Created:
              • Updated:
              • Resolved:
