Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha1
    • Fix Version/s: 3.0.0-beta1
    • Component/s: None
    • Labels:
    • Target Version/s:
    • Hadoop Flags:
      Incompatible change, Reviewed
    • Release Note:
      FileStatus and FsPermission Writable serialization is deprecated and its implementation (incompatibly) replaced with protocol buffers. The FsPermissionProto record moved from hdfs.proto to acl.proto. HdfsFileStatus is now a subtype of FileStatus. FsPermissionExtension with its associated flags for ACLs, encryption, and erasure coding has been deprecated; users should query these attributes on the FileStatus object directly. The FsPermission instance in AclStatus no longer retains or reports these extended attributes (likely unused).

      Description

      FileStatus was a Writable in Hadoop 2 and earlier. Originally, we used this to serialize it and send it over the wire. But in Hadoop 2 and later, we have the protobuf HdfsFileStatusProto which serves to serialize this information. The protobuf form is preferable, since it allows us to add new fields in a backwards-compatible way. Another issue is that a lot of FileStatus subclasses already fail to override the superclass's Writable methods, breaking the interface contract that read(status.write) should be equal to the original status.

      In Hadoop 3, we should just make FileStatus serialize itself via protobuf so that we don't have to deal with these issues. It's probably too late to do this in Hadoop 2, since user code may be relying on the existing FileStatus serialization there.
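      As a concrete illustration of the contract mentioned above, here is a minimal round-trip sketch against the Hadoop 2 Writable API (values and paths are illustrative only); a subclass that adds fields without overriding write()/readFields() silently violates it:

      import java.io.IOException;

      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.DataInputBuffer;
      import org.apache.hadoop.io.DataOutputBuffer;

      // Sketch (Hadoop 2 API): serialize a FileStatus through the Writable interface
      // and read it back. Every field should survive this round trip.
      public class FileStatusRoundTrip {
        public static void main(String[] args) throws IOException {
          FileStatus original = new FileStatus(
              1024L, false, 3, 128L * 1024 * 1024,
              System.currentTimeMillis(), new Path("hdfs://nn/example/file"));

          DataOutputBuffer out = new DataOutputBuffer();
          original.write(out);                       // Writable serialization

          DataInputBuffer in = new DataInputBuffer();
          in.reset(out.getData(), out.getLength());
          FileStatus copy = new FileStatus();
          copy.readFields(in);                       // Writable deserialization

          System.out.println(copy.getPath() + " len=" + copy.getLen()
              + " mtime=" + copy.getModificationTime());
        }
      }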

      1. HDFS-6984.001.patch
        2 kB
        Colin P. McCabe
      2. HDFS-6984.002.patch
        7 kB
        Colin P. McCabe
      3. HDFS-6984.003.patch
        20 kB
        Chris Douglas
      4. HDFS-6984.004.patch
        21 kB
        Chris Douglas
      5. HDFS-6984.005.patch
        24 kB
        Chris Douglas
      6. HDFS-6984.006.patch
        83 kB
        Chris Douglas
      7. HDFS-6984.007.patch
        83 kB
        Chris Douglas
      8. HDFS-6984.008.patch
        87 kB
        Chris Douglas
      9. HDFS-6984.009.patch
        93 kB
        Chris Douglas
      10. HDFS-6984.010.patch
        97 kB
        Chris Douglas
      11. HDFS-6984.011.patch
        97 kB
        Chris Douglas
      12. HDFS-6984.012.patch
        102 kB
        Chris Douglas
      13. HDFS-6984.013.patch
        103 kB
        Chris Douglas
      14. HDFS-6984.014.patch
        103 kB
        Chris Douglas
      15. HDFS-6984.015.patch
        104 kB
        Chris Douglas
      16. HDFS-6984.nowritable.patch
        8 kB
        Andrew Wang

        Issue Links

          Activity

          cmccabe Colin P. McCabe added a comment -

          I don't anticipate any maintenance issues from having this change in Hadoop 3 but not in Hadoop 2.x. We already are unable to change the write/read methods of that class due to compatibility woes, so that code is effectively frozen. This patch just drops the frozen code out of Hadoop 3. The main motivation is that this will make it easier for us to add more stuff to FileStatus in the future without worrying about the read/write methods of the Writable.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12665993/HDFS-6984.001.patch
          against trunk revision a0ccf83.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7874//console

          This message is automatically generated.

          cnauroth Chris Nauroth added a comment -

          +1 for the proposal.

          I agree that it's best to make FileStatus not a Writable on the next major version. I also agree that it's too late to do it in 2.x. Even though it's not used for wire serialization, there's no telling if someone decided to serialize a bunch of FileStatus instances to a sequence file (or whatever else). Removing the Writable implementation in 2.x would break the ability to read back those files after upgrading software.
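          A hedged sketch of the kind of user code being described — written against the Hadoop 2 API, with illustrative paths — which only compiles because FileStatus implements Writable there:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.io.SequenceFile;
          import org.apache.hadoop.io.Text;

          // Sketch: archive FileStatus values into a SequenceFile. Dropping the
          // Writable implementation in 2.x would make files like this unreadable
          // after a software upgrade.
          public class StatusArchive {
            public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              FileSystem fs = FileSystem.get(conf);
              try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                  SequenceFile.Writer.file(new Path("/tmp/status-archive.seq")),
                  SequenceFile.Writer.keyClass(Text.class),
                  SequenceFile.Writer.valueClass(FileStatus.class))) {
                for (FileStatus status : fs.listStatus(new Path("/user"))) {
                  writer.append(new Text(status.getPath().toString()), status);
                }
              }
            }
          }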

          I'm not sure why the Jenkins run failed. I expect this change is OK as long as all HDFS permission and ACL tests continue to run successfully.

          cmccabe Colin P. McCabe added a comment -

          So, it looks like DistCp depends on FileStatus being writable in a pretty fundamental way, since it wants to use it as a MapReduce value in CopyMapper.java:

          public class CopyMapper extends Mapper<Text, CopyListingFileStatus, Text, Text> {
          ...
          

          Maybe, rather than get rid of the "implements Writable", we should just use protobuf for the serialization in FileStatus#write. That allows us to add whatever fields we want later via optional protobuf members.
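          A minimal sketch of that idea — keep Writable, but delegate to protobuf. FileStatusProto here is a stand-in for a generated message, and its builder methods (setPath, setLength, setIsdir) are assumptions for illustration, not the API from the attached patches:

          import java.io.DataInput;
          import java.io.DataOutput;
          import java.io.IOException;

          import org.apache.hadoop.io.Writable;

          // Sketch only: FileStatusProto stands in for a generated protobuf class.
          class ProtobufBackedStatus implements Writable {
            private String path;
            private long length;
            private boolean isdir;

            @Override
            public void write(DataOutput out) throws IOException {
              byte[] payload = FileStatusProto.newBuilder()
                  .setPath(path)
                  .setLength(length)
                  .setIsdir(isdir)
                  .build()
                  .toByteArray();
              out.writeInt(payload.length);   // length-prefix the protobuf record
              out.write(payload);
            }

            @Override
            public void readFields(DataInput in) throws IOException {
              int size = in.readInt();
              byte[] payload = new byte[size];
              in.readFully(payload);
              FileStatusProto proto = FileStatusProto.parseFrom(payload);
              // Optional fields added by newer writers are simply ignored here,
              // which is the backwards-compatibility property being argued for.
              path = proto.getPath();
              length = proto.getLength();
              isdir = proto.getIsdir();
            }
          }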

          cnauroth Chris Nauroth added a comment -

          So, it looks like DistCp depends on FileStatus being writable...

          Last time I looked at this, I actually planned on replacing DistCp's usage of FileStatus serialization with its own custom data type. I believe it doesn't need all of the fields of FileStatus, so there is potential for a marginal space/performance improvement by omitting the unnecessary fields.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 15m 28s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          -1 javac 0m 37s The patch appears to cause the build to fail.



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12665993/HDFS-6984.001.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / a319771
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/10735/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 15m 3s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          -1 javac 0m 38s The patch appears to cause the build to fail.



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12665993/HDFS-6984.001.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / a319771
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/10761/console

          This message was automatically generated.

          cmccabe Colin P. McCabe added a comment -

          I guess making it no longer Writable is probably too big of a change. DistCp and other programs make use of the fact that they can write out and later read back FileStatus objects. However, it is really unpleasant that we can't add new fields to the serialized representation of FileStatus.

          Here's a new version that fixes this dilemma by changing the serialization format to be Protobuf for FileStatus objects. This will let us add new fields to FileStatus in the future. I think this change makes sense for Hadoop 3 rather than Hadoop 2, since it is incompatible with the previous FileStatus serialization.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 7m 51s trunk passed
          +1 compile 8m 14s trunk passed with JDK v1.8.0_66
          +1 compile 7m 28s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 16s trunk passed
          +1 mvnsite 1m 4s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 51s trunk passed
          +1 javadoc 1m 1s trunk passed with JDK v1.8.0_66
          +1 javadoc 1m 5s trunk passed with JDK v1.7.0_91
          +1 mvninstall 1m 29s the patch passed
          +1 compile 8m 12s the patch passed with JDK v1.8.0_66
          -1 cc 13m 49s root-jdk1.8.0_66 with JDK v1.8.0_66 generated 1 new issues (was 10, now 10).
          +1 cc 8m 12s the patch passed
          +1 javac 8m 12s the patch passed
          +1 compile 7m 24s the patch passed with JDK v1.7.0_91
          +1 cc 7m 24s the patch passed
          +1 javac 7m 24s the patch passed
          -1 checkstyle 0m 16s Patch generated 27 new checkstyle issues in hadoop-common-project/hadoop-common (total was 22, now 49).
          +1 mvnsite 1m 3s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          -1 findbugs 2m 3s hadoop-common-project/hadoop-common introduced 2 new FindBugs issues.
          +1 javadoc 1m 2s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 9s the patch passed with JDK v1.7.0_91
          +1 unit 8m 28s hadoop-common in the patch passed with JDK v1.8.0_66.
          +1 unit 8m 14s hadoop-common in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 21s Patch does not generate ASF License warnings.
          70m 9s



          Reason Tests
          FindBugs module:hadoop-common-project/hadoop-common
            Class org.apache.hadoop.fs.FileStatusProtos$FileStatusProto defines non-transient non-serializable instance field unknownFields In FileStatusProtos.java:instance field unknownFields In FileStatusProtos.java
            Useless control flow in org.apache.hadoop.fs.FileStatusProtos$FileStatusProto$Builder.maybeForceBuilderInitialization() At FileStatusProtos.java: At FileStatusProtos.java:[line 1057]



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12782647/HDFS-6984.002.patch
          JIRA Issue HDFS-6984
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle cc
          uname Linux fc7ebae8ce02 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2a30386
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          cc root-jdk1.8.0_66: https://builds.apache.org/job/PreCommit-HDFS-Build/14137/artifact/patchprocess/diff-compile-cc-root-jdk1.8.0_66.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14137/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/14137/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14137/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14137/console

          This message was automatically generated.

          chris.douglas Chris Douglas added a comment -

          I actually planned on replacing DistCp's usage of FileStatus serialization with its own custom data type

          Filed HADOOP-13626

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 12m 26s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 7m 22s trunk passed
          +1 compile 7m 23s trunk passed
          +1 checkstyle 0m 24s trunk passed
          +1 mvnsite 0m 57s trunk passed
          +1 mvneclipse 0m 12s trunk passed
          +1 findbugs 1m 22s trunk passed
          +1 javadoc 0m 46s trunk passed
          +1 mvninstall 0m 38s the patch passed
          +1 compile 7m 16s the patch passed
          +1 cc 7m 16s the patch passed
          +1 javac 7m 16s the patch passed
          -0 checkstyle 0m 23s hadoop-common-project/hadoop-common: The patch generated 27 new + 19 unchanged - 0 fixed = 46 total (was 19)
          +1 mvnsite 0m 53s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          -1 findbugs 1m 29s hadoop-common-project/hadoop-common generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)
          +1 javadoc 0m 44s the patch passed
          +1 unit 7m 13s hadoop-common in the patch passed.
          +1 asflicense 0m 20s The patch does not generate ASF License warnings.
          50m 44s



          Reason Tests
          FindBugs module:hadoop-common-project/hadoop-common
            Class org.apache.hadoop.fs.FileStatusProtos$FileStatusProto defines non-transient non-serializable instance field unknownFields In FileStatusProtos.java:instance field unknownFields In FileStatusProtos.java
            Useless control flow in org.apache.hadoop.fs.FileStatusProtos$FileStatusProto$Builder.maybeForceBuilderInitialization() At FileStatusProtos.java: At FileStatusProtos.java:[line 1057]



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12782647/HDFS-6984.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle cc
          uname Linux 4cf3d029293f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / c54f6ef
          Default Java 1.8.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/16795/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/16795/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/16795/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/16795/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          FWIW, I'd like FileStatus in Hadoop 2 to implement Serializable. Why? It makes it trivial to pass across processes in Spark and similar remote-closure APIs.

          chris.douglas Chris Douglas added a comment -

          Rebased Colin P. McCabe's patch.

          • Changed FileStatus to be Serializable, per Steve Loughran's suggestion. This cascaded to a few other classes, which I halted at HdfsBlockLocation (changing the final ref to transient). Looking through its usage this is probably correct, since the fields not redundant with BlockLocation are things like tokens, which are internal to DFSClient.
          • Changed required fields to <path, filetype> from <path, filetype, permission mask, owner, group, mtime>. mtime in particular isn't always cheap on some systems, and the owner/group/perms may be placeholders if the FS is required to populate them. The filetype is arguable. YARN overwhelmingly favors optional fields for everything, FWIW.
          • Changed field IDs to match HdfsFileStatusProto. In proto2 at least, this cross-serialization works (added a test). In HDFS-7878, FileStatus can leave its PathHandle as bytes, provided they occupy the same field ID.
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 20s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 19s Maven dependency ordering for branch
          +1 mvninstall 7m 9s trunk passed
          +1 compile 6m 51s trunk passed
          +1 checkstyle 1m 28s trunk passed
          +1 mvnsite 2m 26s trunk passed
          +1 mvneclipse 0m 38s trunk passed
          +1 findbugs 4m 34s trunk passed
          +1 javadoc 1m 53s trunk passed
          0 mvndep 0m 17s Maven dependency ordering for patch
          +1 mvninstall 1m 57s the patch passed
          +1 compile 7m 55s the patch passed
          +1 cc 7m 55s the patch passed
          +1 javac 7m 55s the patch passed
          +1 checkstyle 1m 38s the patch passed
          +1 mvnsite 3m 2s the patch passed
          +1 mvneclipse 0m 39s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 2s The patch has no ill-formed XML file.
          +1 findbugs 5m 49s the patch passed
          +1 javadoc 2m 3s the patch passed
          +1 unit 9m 6s hadoop-common in the patch passed.
          +1 unit 0m 58s hadoop-hdfs-client in the patch passed.
          -1 unit 79m 15s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 27s The patch does not generate ASF License warnings.
          139m 50s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.balancer.TestBalancer
            hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12834247/HDFS-6984.003.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux a100249d6bf2 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / e9c4616
          Default Java 1.8.0_101
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17222/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17222/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17222/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          Wow, thanks for that extra work! Appreciated. I didn't go into the details enough to give it a full review...not familiar enough with the code.

          1. I see in FileStatus.java around line 300 that the input size is checked for being negative. What if the size came back as 2^31? I think you need an upper-bounds check if you really want to defend against malicious endpoints.
          andrew.wang Andrew Wang added a comment -

          Setting target version for tracking.

          andrew.wang Andrew Wang added a comment -

          Thanks for revving this Chris. I don't have a ton of background here, but my review:

          • Can we split the Serializable stuff into a separate change? Changing booleans into Boolean objects is additional overhead, and combining these two changes makes it harder to review. Also, I am not a Java serialization expert, but IIUC maintaining compatibility is difficult, which means maybe people should just use PB anyway.
          • The serialization loses the extra isEncrypted and FsPermission#getAclBit bits since it calls toShort rather than toExtendedShort. Seems like we should save these, though the fact we pack these bits into FsPermission is an internal implementation detail. Adding new booleans to FileStatus might break cross-serialization with HdfsFileStatus though. What is the usecase for cross-serialization?
          • Could use some more extensive unit tests that test the default values and error conditions with the shorts.
          • Test nit: please also avoid the wildcard import.

          Regarding Steve's comment on bounds checking, PB by default has a 64MB max message size. We could use that as a reasonable upper-bound on the size of a FileStatus.
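          A hedged sketch of the bounds check under discussion (the cap is illustrative; the 64 MB figure is protobuf's default message limit mentioned above):

          import java.io.DataInput;
          import java.io.IOException;

          // Sketch: reject negative *and* implausibly large sizes before allocating.
          final class BoundedRead {
            private static final int MAX_STATUS_SIZE = 64 * 1024 * 1024; // illustrative cap

            static byte[] readLengthPrefixed(DataInput in) throws IOException {
              int size = in.readInt();
              if (size < 0 || size > MAX_STATUS_SIZE) {
                throw new IOException("Unreasonable FileStatus record size: " + size);
              }
              byte[] buf = new byte[size];
              in.readFully(buf);
              return buf;
            }
          }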

          andrew.wang Andrew Wang added a comment -

          Because I was curious, I re-read the history of the ACL bit over on HDFS-6326. My paraphrased notes:

          • There was a big effort to avoid exposing the ACL bit to callers of FsPermission#toShort and #write and so on for compatibility reasons.
          • Adding the full ACL to FileStatus was discarded for performance reasons.
          • Adding a new bitfield is okay performance wise, but was discarded because it breaks Writable compatibility.

          The takeaways for me are that we should make a separate bitfield for these flags. If we want to preserve cross-serialization, we'd need to also add this field to HdfsFileStatus, and we'd always have to be careful with field numbers.

          This entire saga has also made me eager to purge any remaining uses of Writable from our API.
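          A hedged sketch of the separate-bitfield idea (the enum and accessor names are illustrative, not the exact API that eventually landed):

          import java.util.EnumSet;
          import java.util.Set;

          // Sketch: carry extended attributes as an explicit flag set on the status
          // object instead of packing them into the FsPermission bits.
          class StatusFlagsSketch {
            enum AttrFlag { HAS_ACL, HAS_CRYPT, HAS_EC }

            private final EnumSet<AttrFlag> flags = EnumSet.noneOf(AttrFlag.class);

            StatusFlagsSketch(Set<AttrFlag> attrs) {
              flags.addAll(attrs);
            }

            boolean hasAcl()         { return flags.contains(AttrFlag.HAS_ACL); }
            boolean isEncrypted()    { return flags.contains(AttrFlag.HAS_CRYPT); }
            boolean isErasureCoded() { return flags.contains(AttrFlag.HAS_EC); }
          }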

          andrew.wang Andrew Wang added a comment -

          Here's also a more radical idea: how about we simply stop implementing Writable altogether in FileStatus? I did an empirical test by successfully building all of CDH with a patch that does this. This includes apps like HBase, Hive, Spark, Solr, Impala, Oozie, Avro, Parquet, etc.

          I'm guessing FileStatus was originally made Writable as a shortcut for DistCp. Maybe there are custom apps that use Writable, but the vast majority of Cloudera users interact with a Hadoop cluster via a higher-level framework.

          andrew.wang Andrew Wang added a comment -

          Here's the sample patch I used to remove Writable.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          0 mvndep 0m 18s Maven dependency ordering for branch
          +1 mvninstall 7m 48s trunk passed
          +1 compile 10m 41s trunk passed
          +1 checkstyle 1m 40s trunk passed
          +1 mvnsite 2m 11s trunk passed
          +1 mvneclipse 0m 38s trunk passed
          +1 findbugs 3m 34s trunk passed
          +1 javadoc 1m 37s trunk passed
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 1m 31s the patch passed
          +1 compile 10m 4s the patch passed
          +1 javac 10m 4s the patch passed
          -0 checkstyle 1m 40s root: The patch generated 21 new + 67 unchanged - 27 fixed = 88 total (was 94)
          +1 mvnsite 2m 8s the patch passed
          +1 mvneclipse 0m 39s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 3m 48s the patch passed
          +1 javadoc 1m 39s the patch passed
          +1 unit 8m 11s hadoop-common in the patch passed.
          -1 unit 71m 27s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 41s The patch does not generate ASF License warnings.
          131m 57s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.namenode.ha.TestHASafeMode



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842609/HDFS-6984.nowritable.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 5351181484ef 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 5bd7dec
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/17813/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17813/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17813/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17813/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          chris.douglas Chris Douglas added a comment -

          Can we split the Serializable stuff into a separate change? Changing booleans into Boolean objects is additional overhead, and combining these two changes makes it harder to review.

          Reluctantly... this will create multiple, conflicting patches, but OK. The two are somewhat related, in that InodeType is required in the PB impl, so Serializable should treat a missing field in the stream (i.e., null) as an error. That said, nothing enforces or relies on consistent semantics between the two serializations.

          Also, I am not a Java serialization expert, but IIUC maintaining compatibility is difficult, which means maybe people should just use PB anyway.

          Steve Loughran requested it for Spark, so if there's a way to configure Spark to use PB then we can avoid it. Even distcp doesn't really have a cross-version compatibility requirement, so this seemed mostly harmless.

          how about we simply stop implementing Writable altogether in FileStatus?

          Agreed, I don't think this functionality is often used. Though your point about the ACL bit and the encryption bit highlights why I'd like some PB representation for HDFS-7878, and why the 2.x PB semantics are useful.

          Consider DistributedFileSystem#listStatusInternal. Each HdfsFileStatus instance copies its fields to a new FileStatus object by HdfsFileStatus#makeQualified, those HdfsFileStatus fields having been populated by copying fields from the PB record returned from RPC. Setting aside the inefficiency of creating copies, this also drops information returned by the NN to construct the generic FileStatus, which (per your summary of HDFS-6326) discards even more information to preserve Writable compatibility and performance.

          In HDFS-7878, a PathHandle (or whatever) needs some FS-specific metadata to be opaquely serialized as bytes in FileStatus, but resolved by a FileSystem instance. If a receiver deserializes a FileStatus object without knowing the source FS (i.e., as a vanilla FileStatusProto object), a serialized HdfsFileStatus object will populate those fields it understands, and (in PB 2.x) the record will retain unknown fields. The unit test in v003 verifies this.
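          A hedged illustration of that property: ExtendedStatusProto and BasicStatusProto below are hypothetical generated classes that share field numbers for path and length (they are not the types from the patch); the point is only that proto2 parses the shared fields and retains the rest as unknown fields across a re-encode, as the v003 test checks:

          import com.google.protobuf.ByteString;
          import com.google.protobuf.InvalidProtocolBufferException;

          // Hypothetical generated messages, used only to illustrate proto2 behavior.
          public class CrossSerializationSketch {
            public static void main(String[] args) throws InvalidProtocolBufferException {
              byte[] wire = ExtendedStatusProto.newBuilder()
                  .setPath("/user/alice/data")                         // shared field number
                  .setLength(4096L)                                    // shared field number
                  .setHandle(ByteString.copyFromUtf8("fs-specific"))   // extended-only field
                  .build()
                  .toByteArray();

              // A receiver that only knows the basic schema still reads the shared fields...
              BasicStatusProto basic = BasicStatusProto.parseFrom(wire);
              System.out.println(basic.getPath() + " " + basic.getLength());

              // ...and in proto2 the unknown bytes survive a re-encode, so the
              // FS-specific handle is not silently dropped.
              ExtendedStatusProto again = ExtendedStatusProto.parseFrom(basic.toByteArray());
              System.out.println(again.getHandle().toStringUtf8());
            }
          }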

          Sergey Shelukhin cited a case in Hive/LLAP where this object needs to be passed across process boundaries (as splits, etc.), and HDFS-9806 needs a nonce from the FS to detect changes. Neither knows the FileStatus type before it's read, but by using PB the PathHandle data are addressable and the basic fields (path, type, owner, etc.) are available, and that's enough.

          If HDFS were to return HdfsFileStatus wrapping the HdfsFileStatusProto from which it's derived, that would not only be more efficient (avoiding copies), but serializing it and reading it back as a vanilla FileStatus would also work (again, per v003).

          There's no particular reason that requires implementing the Writable contract, though. I'm ambivalent about removing it, since HdfsFileStatus could be modified to write HdfsFileStatusProto objects compatible with FileStatus equally well through some utility class.

          I'm guessing FileStatus was originally made Writable as a shortcut for DistCp

          Hadoop RPC used to require Writable types, so it was probably to implement listStatus and getFileStatus.

          chris.douglas Chris Douglas added a comment -

          Thanks for reviewing this, Andrew.

          I tried to explain the cross-serialization use case below. I think we can modify the FsPermission to keep the encryption and ACL bits in the protobuf record, rather than the special encoding it uses now.

          chris.douglas Chris Douglas added a comment -

          Can we split the Serializable stuff into a separate change?

          Moved to HADOOP-13895

          andrew.wang Andrew Wang added a comment -

          IIUC the usecases mentioned are the following:

          Sergey Shelukhin cited a case in Hive/LLAP where this object needs to be passed across process boundaries (as splits, etc.)

          If this Hive usecase is the one from this comment on HDFS-7878 then it sounds like they want to be able to get a PathHandle from a FileStatus, serialize the PathHandle, pass it to a worker, and then deserialize the PathHandle and open a file with it.

          This means we need to be able to add new fields to FileStatus, and that PathHandle should be an opaque PB-encoded blob. It doesn't require cross-serialization between FileStatus and HdfsFileStatus, and we don't necessarily need FileStatus to be serializable either.
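
          For concreteness, that flow can be as small as the sketch below. The PathHandle type here is a local stand-in for what HDFS-7878 proposes; the real interface and the FileSystem methods that would produce and resolve it don't exist yet, so all of the names are hypothetical.

          import java.nio.ByteBuffer;

          // Stand-in for the opaque handle proposed in HDFS-7878; hypothetical.
          interface PathHandle {
            ByteBuffer bytes();
          }

          final class PathHandleShipping {
            // Driver side: flatten the handle into a plain byte[] for a split or task conf.
            static byte[] serialize(PathHandle handle) {
              ByteBuffer buf = handle.bytes().asReadOnlyBuffer();
              byte[] out = new byte[buf.remaining()];
              buf.get(out);
              return out;
            }

            // Worker side: re-wrap the bytes; a FileSystem that understands the encoding
            // would resolve them in the proposed open(PathHandle)-style call.
            static PathHandle deserialize(final byte[] raw) {
              return () -> ByteBuffer.wrap(raw).asReadOnlyBuffer();
            }
          }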

          and HDFS-9806 needs a nonce from the FS to detect changes. Neither knows the FileStatus type before it's read, but by using PB the PathHandle data are addressable and the basic fields (path, type, owner, etc.) are available, and that's enough.

          Is HDFS-9806 using the FileSystem interface to access the backing storage system? It sounds like we need to be able to add a new field to FileStatus, but not necessarily cross-serialization (or serialization) of a FileStatus.

          Basically, I'm wondering if we should push all FileStatus-related serialization duties into application code. The distcp conversion is an example of the benefits: space efficiency was improved by not including unused fields, and it made it clear that the serialization only needs to be compatible within a single job execution.
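
          A rough sketch of what application-owned serialization tends to look like in practice: a small, job-private record carrying only the fields the job needs, required to stay compatible only within one job run. The class below is illustrative, not the actual distcp type.

          import java.io.DataInput;
          import java.io.DataOutput;
          import java.io.IOException;
          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.io.Text;
          import org.apache.hadoop.io.Writable;

          // Job-private record (illustrative): only the fields this job needs.
          public class CopyEntry implements Writable {
            private String path = "";
            private long length;
            private long modificationTime;

            public static CopyEntry from(FileStatus st) {
              CopyEntry e = new CopyEntry();
              e.path = st.getPath().toUri().toString();
              e.length = st.getLen();
              e.modificationTime = st.getModificationTime();
              return e;
            }

            @Override
            public void write(DataOutput out) throws IOException {
              Text.writeString(out, path);
              out.writeLong(length);
              out.writeLong(modificationTime);
            }

            @Override
            public void readFields(DataInput in) throws IOException {
              path = Text.readString(in);
              length = in.readLong();
              modificationTime = in.readLong();
            }
          }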

          Regarding cross-serialization, yes we can save some copies by making HdfsFileStatus implement FileStatus and returning an HdfsFileStatus directly in FileSystem, but HdfsFileStatus has even more fields that a user is unlikely to care about. It also seems weird from an API perspective to include extra, inaccessible data in the serialized version of a class.

          Thoughts?

          chris.douglas Chris Douglas added a comment - - edited

          This means we need to be able to add new fields to FileStatus, and that PathHandle should be an opaque PB-encoded blob. It doesn't require cross-serialization between FileStatus and HdfsFileStatus, and we don't necessarily need FileStatus to be serializable either.

          Yes, I agree. To add new fields to FileStatus, we were changing its serialization in 3.x to PB (since updating Writable is problematic). If we're going to use PB, then arranging its fields so it could be compatible with HdfsFileStatus was just anticipating future changes; you're right, cross-serialization is not a requirement. By arranging the type this way, the bytes field is still accessible when deserialized into a vanilla FileStatus, but if one were to deserialize into HdfsFileStatus then the PathHandle could have typed, versioned fields without extra steps. It's free to make this easy, so the patch doesn't make it hard.
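
          A sketch of what that v003 check amounts to, assuming the generic and HDFS-specific records keep their shared field numbers aligned. The generated class names below are approximate, so treat them as assumptions rather than the final layout.

          import org.apache.hadoop.fs.FSProtos.FileStatusProto;
          import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.HdfsFileStatusProto;

          final class CrossSerializationSketch {
            // Bytes written as the HDFS-specific record are read back as the generic one.
            // Shared field numbers (path, length, owner, ...) populate the generic record;
            // HDFS-only fields survive as unknown fields under protobuf 2.x.
            static FileStatusProto readAsGeneric(HdfsFileStatusProto hdfsStatus)
                throws Exception {
              byte[] wire = hdfsStatus.toByteArray();
              return FileStatusProto.parseFrom(wire);
            }
          }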

          The current proposal in HDFS-7878 offers open(FileStatus) instead of open(PathHandle) to avoid adding new functions to FileSystem for listing, deleting, renaming, etc. against a PathHandle. It'd be convenient if, as is fairly common, processes could just throw the result of a FileSystem query to a service/framework using some of the automatic serialization (e.g., Serializable in Spark and Writable in MapReduce), but we could move it to a library. Again, I'm pretty ambivalent about removing Writable from FileStatus, since we still have it on other data returned by FileSystem (like Path), but it is largely vestigial in 3.x.

          Is HDFS-9806 using the FileSystem interface to access the backing storage system? It sounds like we need to be able to add a new field to FileStatus, but not necessarily cross-serialization (or serialization) of a FileStatus.

          Yes, that's correct. We just need a consistent place to add "metadata that uniquely identifies a file" so we can handle it consistently across FileSystem implementations.

          Regarding cross-serialization, yes we can save some copies by making HdfsFileStatus implement FileStatus and returning an HdfsFileStatus directly in FileSystem, but HdfsFileStatus has even more fields that a user is unlikely to care about. It also seems weird from an API perspective to include extra, inaccessible data in the serialized version of a class.

          If the record doesn't leave the process, or the deserializer knows to use an HdfsFileStatus, then it'd be accessible. Unlike today, a client could use the data we return from the NN if we didn't throw it out.

          I hear you on the inefficiency of carrying excess data per record, but the NN is a more critical bottleneck, so perhaps we should focus on not returning the unused data at all. Among clients, unless they're caching thousands of FileStatus objects in memory, the overhead is negligible, and the solution is to build efficient composites (i.e., instead of shaving redundant bytes per record, we can compress repeated data across millions of records).

          andrew.wang Andrew Wang added a comment -

          The current proposal in HDFS-7878 offers open(FileStatus) instead of open(PathHandle) to avoid adding new functions to FileSystem for listing, deleting, renaming, etc. against a PathHandle...

          So for the Hive usecase, they would have to pass around a full serialized FileStatus even though open() only needs the PathHandle field? This API seems fine for same-process usage (HDFS-9806?) but inefficient for cross-process. I think UNIX users are also used to the idea of an inode id separate from a file status.

          I still lean toward removing Writable altogether, since it reduces our API surface. Similarly, I'd rather not open up HdfsFileStatus as a public API (without a concrete usecase) since it expands our API surface. The cross-serialization is also fragile since we need to be careful not to reuse field numbers across two structures, and I've seen numbering mistakes made before even for normal PB changes.

          chris.douglas Chris Douglas added a comment -

          So for the Hive usecase, they would have to pass around a full serialized FileStatus even though open() only needs the PathHandle field? This API seems fine for same-process usage (HDFS-9806?) but inefficient for cross-process. I think UNIX users are also used to the idea of an inode id separate from a file status.

          Sure, but this wasn't what I was trying to get at. On its efficiency:

          1. The overhead at the NN is significant, since it's often the cluster bottleneck. If we're returning information that's immediately discarded, the client-side inefficiency of not-immediately-discarding is not irrelevant, but it's not significant. For a few hundred FileStatus records, this overhead is less than 1MB, and it's often available for GC (most FileStatus objects are short-lived).
          2. For cases where we want to manage thousands or millions of FileStatus instances, this overhead may become significant. But we can compress repeated data, and most collections of FileStatus objects are mostly repeated data. Further, most of these fields are optional, and if an application wants to transfer only necessary fields (e.g., the Path and PathHandle, omitting everything else), that's fine.
          3. I share your caution w.r.t. the API surface, which is why HDFS-7878 avoids adding lots of new calls accepting PathHandle objects. Users of FileSystem almost always intend to refer to the entity returned by the last query, not whatever happens to exist at that path now.

          In exchange for non-optimal record size, the API gains some convenient symmetry (i.e., get a FileStatus, use a FileStatus) that at least makes it possible to avoid the TOCTOU races that are uncommon, but annoying. The serialization can omit fields. We can use generic container formats that understand PB to compress collections of FileStatus objects. We can even use a more generic type (FileStatus) if the specific type follows some PB conventions. Considering the cost of these, the distance from "optimal" seems well within the ambient noise.
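
          To make the symmetry concrete, the proposed shape is roughly the following. open(FileStatus) is the HDFS-7878 proposal rather than an existing FileSystem method, so the interface below is a hypothetical stand-in.

          import java.io.IOException;
          import org.apache.hadoop.fs.FSDataInputStream;
          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.fs.Path;

          // Hypothetical shape of the HDFS-7878 proposal.
          interface StatusOpeningFileSystem {
            FileStatus getFileStatus(Path p) throws IOException;
            FSDataInputStream open(FileStatus stat) throws IOException;
          }

          final class ToctouSketch {
            // The FileStatus pins the entity observed at stat time; if the path is renamed
            // or replaced before open, the FS can detect it instead of silently opening a
            // different file.
            static FSDataInputStream openExactly(StatusOpeningFileSystem fs, Path p)
                throws IOException {
              FileStatus observed = fs.getFileStatus(p);
              return fs.open(observed);
            }
          }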

          I still lean toward removing Writable altogether, since it reduces our API surface. Similarly, I'd rather not open up HdfsFileStatus as a public API (without a concrete usecase) since it expands our API surface.

          Again, I'm mostly ambivalent about Writable. Steve Loughran? Preserving the automatic serialization that some user programs may rely on... we can try removing it in 3.x-alpha/beta, and see if anyone complains. If I moved this to a library, would that move this forward?

          I didn't mean to suggest that HdfsFileStatus should be a public API (with all the restrictions on evolving it).

          The cross-serialization is also fragile since we need to be careful not to reuse field numbers across two structures, and I've seen numbering mistakes made before even for normal PB changes.

          Granted, but a hole in the PB toolchain doesn't mean we shouldn't use the feature. We can add more comments to the .proto. Reading through HDFS-6326, perhaps making FsPermission a protobuf record would help.

          andrew.wang Andrew Wang added a comment -

          I think "the serialization can omit fields" is an abstraction breakage. If we're going to have an open API that takes a FileStatus, the implementation should be allowed to use all the fields expected to be present for a FileStatus of that FileSystem. This means serialization that omits fields for efficiency isn't supportable, and is why I'd prefer a PathHandle. Regarding TOCTOU, this would be addressed by including the PathHandle in the returned FileStatus, right?

          This discussion is an aside though to the matter at hand, and we should continue on HDFS-7878. I think we already agreed above that as long as we can add new fields to FileStatus, we can satisfy the basic requirements of HDFS-7878.

          I didn't mean to suggest that HdfsFileStatus should be a public API (with all the restrictions on evolving it)....<hole in the PB toolchain>

          If we don't intend to make HdfsFileStatus public, what's the point of cross-serialization? We also need to qualify the path for an HdfsFileStatus to become a FileStatus, so I don't know how zero-copy it can be anyway.

          I don't feel that strongly about removing Writable, but the nowritable patch is simple and to the point, and I still haven't grasped the benefit of keeping FileStatus Writable, even via PB. We don't think there are many (any?) apps out there using the Writable interface. Cross-serialization doesn't have an immediate usecase. HDFS-7878 IMO needs a serializable PathHandle, not a full FileStatus.

          chris.douglas Chris Douglas added a comment -

          serialization that omits fields for efficiency isn't supportable, and is why I'd prefer a PathHandle.

          With this constraint, we either expand the API surface (adding functions that accept PathHandle instances rather than FileStatus) or require all fields in every FileStatus object. I see your point w.r.t. confused semantics for partial FileStatus objects. For completeness, we could change the return type to Optional<T> for FileStatus fields, but that's a nonstarter from a compatibility perspective.

          We can pick this up in HDFS-7878.

          I don't feel that strongly about removing Writable, but the nowritable patch is simple and to the point

          Well... sure. Deleting code is direct. I don't mind moving the protobuf serialization to a utility library. Would that move this forward?

          Cross-serialization doesn't have an immediate usecase. HDFS-7878 IMO needs a serializable PathHandle, not a full FileStatus.

          The goal was not to make cross-serialization a requirement, but to make future efficiency possible.

          stevel@apache.org Steve Loughran added a comment -

          What is there in the Hadoop compatibility guidelines about Serializable/Writable compatibility? Wire compatibility is covered to the extent of protobuf, but not much else.

          To be ruthless, I'd propose something like "protobuf-serialized data: compatible across minor versions; may break over major"; Java Serializable: less/none.

          Where Java serialization is going to cause problems if versions change is in things like Spark checkpointing, which uses Java or Kryo ser/deser. Those users will have to expect failures on point updates. I don't see Spark itself stating anything there either.
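
          Until the guidelines say something stronger, the defensive pattern on the application side is to checkpoint a job-owned snapshot of the fields it needs rather than Java-serializing the Hadoop type directly. A minimal sketch, where the wrapper class is illustrative and not a Hadoop or Spark API:

          import java.io.Serializable;
          import java.net.URI;
          import org.apache.hadoop.fs.FileStatus;

          // Illustrative only: a job-owned, Serializable snapshot of the FileStatus fields
          // the application actually needs, so checkpoints don't depend on Hadoop's Java
          // serialization of FileStatus itself.
          public class FileStatusSnapshot implements Serializable {
            private static final long serialVersionUID = 1L;

            private final URI path;
            private final long length;
            private final long modificationTime;
            private final boolean directory;

            public FileStatusSnapshot(FileStatus st) {
              this.path = st.getPath().toUri();
              this.length = st.getLen();
              this.modificationTime = st.getModificationTime();
              this.directory = st.isDirectory();
            }

            public URI getPath() { return path; }
            public long getLength() { return length; }
            public long getModificationTime() { return modificationTime; }
            public boolean isDirectory() { return directory; }
          }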

          andrew.wang Andrew Wang added a comment -

          I think this is a "user-level file format" and covered by:

          https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/Compatibility.html#File_formats__Metadata

          Non-forward-compatible user-file format changes are restricted to major releases. When user-file formats change, new releases are expected to read existing formats, but may write data in formats incompatible with prior releases. Also, the community shall prefer to create a new format that programs must opt in to instead of making incompatible changes to existing formats.

          I think that's basically what Steve Loughran said above.

          Chris Douglas, thanks for bearing with me; let's move the serde into a library and get this committed.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 30s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          0 mvndep 0m 18s Maven dependency ordering for branch
          +1 mvninstall 15m 35s trunk passed
          -1 compile 8m 34s root in trunk failed.
          +1 checkstyle 2m 11s trunk passed
          +1 mvnsite 2m 22s trunk passed
          +1 mvneclipse 0m 51s trunk passed
          +1 findbugs 3m 52s trunk passed
          +1 javadoc 1m 53s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 42s the patch passed
          -1 compile 7m 49s root in the patch failed.
          -1 cc 7m 49s root in the patch failed.
          -1 javac 7m 49s root in the patch failed.
          -0 checkstyle 2m 18s root: The patch generated 48 new + 72 unchanged - 29 fixed = 120 total (was 101)
          +1 mvnsite 2m 42s the patch passed
          +1 mvneclipse 0m 47s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 3s The patch has no ill-formed XML file.
          +1 findbugs 4m 46s the patch passed
          +1 javadoc 1m 54s the patch passed
          +1 unit 9m 41s hadoop-common in the patch passed.
          -1 unit 74m 53s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 50s The patch does not generate ASF License warnings.
          145m 24s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12857384/HDFS-6984.004.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux e7102ca39923 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / fa67a96
          Default Java 1.8.0_121
          compile https://builds.apache.org/job/PreCommit-HDFS-Build/18719/artifact/patchprocess/branch-compile-root.txt
          findbugs v3.0.0
          compile https://builds.apache.org/job/PreCommit-HDFS-Build/18719/artifact/patchprocess/patch-compile-root.txt
          cc https://builds.apache.org/job/PreCommit-HDFS-Build/18719/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/18719/artifact/patchprocess/patch-compile-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/18719/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/18719/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/18719/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/18719/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          chris.douglas Chris Douglas added a comment - - edited

          Fixed some checkstyle warnings. The compilation failures are due to YARN-6336.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 13m 18s trunk passed
          -1 compile 7m 27s root in trunk failed.
          +1 checkstyle 2m 8s trunk passed
          +1 mvnsite 2m 21s trunk passed
          +1 mvneclipse 0m 52s trunk passed
          +1 findbugs 3m 46s trunk passed
          +1 javadoc 1m 54s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 32s the patch passed
          -1 compile 7m 14s root in the patch failed.
          -1 cc 7m 14s root in the patch failed.
          -1 javac 7m 14s root in the patch failed.
          -0 checkstyle 2m 8s root: The patch generated 6 new + 72 unchanged - 29 fixed = 78 total (was 101)
          +1 mvnsite 2m 17s the patch passed
          +1 mvneclipse 0m 50s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 3s The patch has no ill-formed XML file.
          +1 findbugs 4m 4s the patch passed
          +1 javadoc 1m 51s the patch passed
          +1 unit 8m 30s hadoop-common in the patch passed.
          -1 unit 69m 21s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 46s The patch does not generate ASF License warnings.
          132m 51s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12858834/HDFS-6984.005.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 1513df8ca62c 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / cc1292e
          Default Java 1.8.0_121
          compile https://builds.apache.org/job/PreCommit-HDFS-Build/18726/artifact/patchprocess/branch-compile-root.txt
          findbugs v3.0.0
          compile https://builds.apache.org/job/PreCommit-HDFS-Build/18726/artifact/patchprocess/patch-compile-root.txt
          cc https://builds.apache.org/job/PreCommit-HDFS-Build/18726/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/18726/artifact/patchprocess/patch-compile-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/18726/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/18726/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/18726/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/18726/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          andrew.wang Andrew Wang added a comment -

          Hi Chris, thanks for revving, a few review comments:

          I was wondering if you saw my comment way back about the ACL bit / encrypted bit / etc:

          The takeaways for me are that we should make a separate bitfield for these flags. If we want to preserve cross-serialization, we'd need to also add this field to HdfsFileStatus, and we'd always have to be careful with field numbers.

          Right now, it looks like if you pass in an HdfsFileStatus with these bits set, they're dropped. It'd be good to unit test these getters. If you can think up a unit test to detect the addition of new bits (e.g. isErasureCoded), that'd also be great.

          Since a lot of fields are optional in the PB, should we also test with these optional fields unset? I'm wondering if the resulting FileStatus is filled in with reasonable defaults.

          Generally beefing up test coverage would be good too, since it seems like we lost some of the basic "try writing and reading some different statuses" test from TestFileStatus.
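
          Roughly the kind of coverage being asked for; the constructor and assertions below are illustrative, and a protobuf-helper round trip would look the same with the serialize/deserialize calls swapped in.

          import static org.junit.Assert.assertEquals;
          import static org.junit.Assert.assertFalse;

          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.io.DataInputBuffer;
          import org.apache.hadoop.io.DataOutputBuffer;
          import org.junit.Test;

          public class TestFileStatusRoundTrip {
            @Test
            public void testBasicRoundTrip() throws Exception {
              FileStatus in = new FileStatus(1234L, false, 3, 128 * 1024 * 1024L,
                  567890L, new Path("hdfs://nn1:8020/foo/bar"));

              // Round trip through the (deprecated) Writable path, which the patch backs
              // with the protobuf encoding.
              DataOutputBuffer out = new DataOutputBuffer();
              in.write(out);
              DataInputBuffer data = new DataInputBuffer();
              data.reset(out.getData(), out.getLength());
              FileStatus back = new FileStatus();
              back.readFields(data);

              assertEquals(in.getPath(), back.getPath());
              assertEquals(in.getLen(), back.getLen());
              assertEquals(in.getModificationTime(), back.getModificationTime());
              // Fields the writer never set should still come back with sane defaults.
              assertFalse(back.isEncrypted());
            }
          }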

          chris.douglas Chris Douglas added a comment -

          Thanks for taking a look, Andrew Wang.

          Sorry, I had forgotten your earlier comment. Should we also try to remove Writable from FsPermission? We could deprecate the Writable API instead of removing it from these classes, in case projects/users depend on it downstream... the serialization/conversion can still live in a library, but be called from the deprecated methods.
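
          The shape being proposed, roughly: the conversion lives in a small library class and the deprecated Writable methods on FileStatus/FsPermission just call into it. The class and method names below are placeholders, and the protobuf encode/decode bodies are stubbed out.

          import java.io.DataInput;
          import java.io.DataOutput;
          import java.io.IOException;
          import org.apache.hadoop.fs.FileStatus;

          // Placeholder for the library class under discussion; the deprecated
          // FileStatus#write/readFields would delegate here.
          public final class FileStatusSerialization {
            private FileStatusSerialization() {}

            public static void write(FileStatus status, DataOutput out)
                throws IOException {
              byte[] encoded = toProtoBytes(status);   // PB encoding, stubbed below
              out.writeInt(encoded.length);
              out.write(encoded);
            }

            public static void readInto(FileStatus status, DataInput in)
                throws IOException {
              byte[] encoded = new byte[in.readInt()];
              in.readFully(encoded);
              mergeFromProtoBytes(status, encoded);    // PB decoding, stubbed below
            }

            private static byte[] toProtoBytes(FileStatus status) {
              throw new UnsupportedOperationException("sketch only");
            }

            private static void mergeFromProtoBytes(FileStatus status, byte[] bytes) {
              throw new UnsupportedOperationException("sketch only");
            }
          }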

          Since both FsPermission#getAclBit and FsPermission#getEncryptedBit/FileStatus#isEncrypted are user-facing, should these also be part of FSProtos? The payload for FileEncryptionInfoProto is likely HDFS-specific; otherwise I'd suggest populating these based on the presence of the fields. While we're at it, should we also change HdfsFileStatusProto to stop packing the acl/encryption bits among the permission bits?

          Kind of a sidebar: is there a reason encryption info is included in HdfsFileStatus, but ACLs are not? Would it be inappropriate to add a FileSystem#getAclStatus(FileStatus), in case an implementation returns this information in its response (potentially avoiding the 2-RPC overhead)?

          I'll add some additional tests.

          andrew.wang Andrew Wang added a comment -

          Hi Chris, replies inline, lots of agreement with your direction:

          Should we also try to remove Writable from FsPermission? We could deprecate the Writable API instead of removing it from these classes, in case projects/users depend on it downstream... the serialization/conversion can still live in a library, but be called from the deprecated methods.

          SGTM

          Since both FsPermission#getAclBit and FsPermission#getEncryptedBit/FileStatus#isEncrypted are user-facing, should these also be part of FSProtos? ...While we're at it, should we also change HdfsFileStatusProto to stop packing the acl/encryption bits among the permission bits?

          Also sounds great, I'd love a bitfield rather than stuffing these in FsPermission.
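
          To make the bitfield point concrete, the difference is roughly the following; the bit positions are illustrative and not FsPermissionExtension's actual encoding.

          import java.util.EnumSet;

          final class StatusFlagsSketch {
            // Extension style: attribute flags overloaded onto high bits of the
            // permission short (positions are illustrative).
            static final int ACL_BIT = 1 << 12;
            static final int ENCRYPTED_BIT = 1 << 13;
            static final int ERASURE_CODED_BIT = 1 << 14;

            static boolean hasAclPacked(short permWithExtension) {
              return (permWithExtension & ACL_BIT) != 0;
            }

            // Bitfield style: a separate, explicit set of attributes carried next to the
            // plain permission bits, which is what's being favored here.
            enum AttrFlag { HAS_ACL, HAS_CRYPT, HAS_EC }

            static boolean hasAcl(EnumSet<AttrFlag> flags) {
              return flags.contains(AttrFlag.HAS_ACL);
            }
          }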

          is there a reason encryption info is included in HdfsFileStatus, but ACLs are not? Would it be inappropriate to add a FileSystem#getAclStatus(FileStatus), in case an implementation returns this information in its response (potentially avoiding the 2-RPC overhead)?

          We need the FEInfo to read or write a file, and the ACLs are rarely needed by the client since they're enforced server-side.

          I don't think there are any plans to include ACLs in HdfsFileStatus because of the bloat, but your suggestion makes sense if there is such an FS implementation. Good follow-on.

          chris.douglas Chris Douglas added a comment -

          v006 of the patch does the following:

          • Moves all the protobuf serialization to library code
          • Deprecates Writable APIs for both FileStatus and FsPermission
          • Where possible, removes FsPermissionExtension.
            • Too many HDFS unit tests rely on this, and trying to change them bloated the patch even more. v006 moves its instantiation to a private convert method. If this is OK, I'll file a follow-up JIRA to clean up the unit tests.
            • I don't know how many downstream applications rely on the hasAcl, isEncrypted, or isErasureCoded methods on FsPermission, but these are deprecated (rather than removed) in the patch.
            • Introduced an intermediate, private FlaggedFileStatus class to preserve the attributes formerly mixed in with the permission bits. This could have been in LocatedFileStatus, but downstream clients may check instanceof LocatedFileStatus and assume the null locations are correct.
            • Still need to deprecate FsPermission#toExtendedShort, will post a followup patch with any other checkstyle/findbugs fixes to v006
          • Make HdfsFileStatus extend FileStatus
            • After HADOOP-13895, the Serializable API bled further into the API
            • getSymlink annoyingly throws an IOException if !isSymlink(). Overriding it in HdfsFileStatus required changing the return type, and to comply with the contract tests there are some superfluous try/catch statements in, e.g., the JSON utils.
            • The JSON code tried to preserve the acl/crypt/ec bits of the FsPermission on AclStatus. This seems incorrect, but I can try to find a solution if it is meaningful.
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 7 new or modified test files.
          0 mvndep 1m 24s Maven dependency ordering for branch
          +1 mvninstall 15m 22s trunk passed
          +1 compile 18m 4s trunk passed
          +1 checkstyle 2m 5s trunk passed
          +1 mvnsite 3m 44s trunk passed
          +1 mvneclipse 1m 22s trunk passed
          -1 findbugs 1m 24s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
          -1 findbugs 1m 29s hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings.
          -1 findbugs 1m 46s hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings.
          +1 javadoc 2m 25s trunk passed
          0 mvndep 0m 14s Maven dependency ordering for patch
          +1 mvninstall 2m 35s the patch passed
          +1 compile 14m 7s the patch passed
          +1 cc 14m 7s the patch passed
          -1 javac 14m 7s root generated 68 new + 788 unchanged - 0 fixed = 856 total (was 788)
          -0 checkstyle 2m 4s root: The patch generated 19 new + 772 unchanged - 24 fixed = 791 total (was 796)
          +1 mvnsite 3m 35s the patch passed
          +1 mvneclipse 1m 23s the patch passed
          -1 whitespace 0m 0s The patch 1 line(s) with tabs.
          +1 xml 0m 2s The patch has no ill-formed XML file.
          -1 findbugs 1m 37s hadoop-hdfs-project/hadoop-hdfs-client generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2)
          +1 javadoc 2m 25s the patch passed
          -1 unit 6m 54s hadoop-common in the patch failed.
          +1 unit 1m 19s hadoop-hdfs-client in the patch passed.
          -1 unit 64m 12s hadoop-hdfs in the patch failed.
          -1 unit 3m 24s hadoop-hdfs-httpfs in the patch failed.
          +1 asflicense 0m 36s The patch does not generate ASF License warnings.
          160m 9s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus doesn't override HdfsFileStatus.equals(Object) At HdfsLocatedFileStatus.java:At HdfsLocatedFileStatus.java:[line 1]
            Class org.apache.hadoop.hdfs.protocol.LocatedBlocks defines non-transient non-serializable instance field blocks In LocatedBlocks.java:instance field blocks In LocatedBlocks.java
            Class org.apache.hadoop.hdfs.protocol.LocatedBlocks defines non-transient non-serializable instance field lastLocatedBlock In LocatedBlocks.java:instance field lastLocatedBlock In LocatedBlocks.java
          Failed junit tests hadoop.net.TestDNS
            hadoop.hdfs.server.namenode.TestFileContextAcl
            hadoop.hdfs.server.namenode.TestAuditLogger
            hadoop.cli.TestAclCLI
            hadoop.hdfs.web.TestWebHDFSAcl
            hadoop.hdfs.server.namenode.TestFSImageWithAcl
            hadoop.cli.TestAclCLIWithPosixAclInheritance
            hadoop.hdfs.web.TestWebHDFS
            hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
            hadoop.hdfs.TestEncryptionZonesWithKMS
            hadoop.hdfs.TestEncryptionZones
            hadoop.hdfs.server.namenode.TestNestedEncryptionZones
            hadoop.hdfs.server.namenode.snapshot.TestSnapshot
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.server.namenode.TestINodeAttributeProvider
            hadoop.hdfs.TestDFSShell
            hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.server.namenode.TestNameNodeAcl
            hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem
            hadoop.fs.http.server.TestHttpFSServer
            hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ac17dc
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12865644/HDFS-6984.006.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux cdce620461e5 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 19a7e94
          Default Java 1.8.0_121
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/whitespace-tabs.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19240/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19240/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19240/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          chris.douglas Chris Douglas added a comment -

          Fix some warnings, fix some tests. Will add additional tests for the PB serialization/deserialization.
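
          For context, the kind of round-trip check such tests typically make; the sketch below is only a guess at the shape, assuming the FsPermission/FsPermissionProto convert overloads in PBHelperClient, and is not the actual test added by the patch:

          import static org.junit.Assert.assertEquals;

          import org.apache.hadoop.fs.permission.FsPermission;
          import org.apache.hadoop.hdfs.protocolPB.PBHelperClient;
          import org.junit.Test;

          public class TestFsPermissionPbRoundTrip {
            // Sketch: converting to the PB record and back should reproduce the
            // original permission bits exactly.
            @Test
            public void testRoundTrip() {
              FsPermission perm = new FsPermission((short) 0644);
              FsPermission back = PBHelperClient.convert(PBHelperClient.convert(perm));
              assertEquals(perm, back);
            }
          }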

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 7 new or modified test files.
          0 mvndep 1m 34s Maven dependency ordering for branch
          +1 mvninstall 13m 39s trunk passed
          +1 compile 17m 14s trunk passed
          +1 checkstyle 2m 3s trunk passed
          +1 mvnsite 4m 4s trunk passed
          +1 mvneclipse 1m 17s trunk passed
          -1 findbugs 1m 38s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
          -1 findbugs 1m 39s hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings.
          -1 findbugs 1m 57s hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings.
          +1 javadoc 2m 30s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 3m 10s the patch passed
          +1 compile 15m 8s the patch passed
          +1 cc 15m 8s the patch passed
          -1 javac 15m 8s root generated 71 new + 788 unchanged - 0 fixed = 859 total (was 788)
          -0 checkstyle 2m 6s root: The patch generated 14 new + 764 unchanged - 24 fixed = 778 total (was 788)
          +1 mvnsite 3m 36s the patch passed
          +1 mvneclipse 1m 23s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 2s The patch has no ill-formed XML file.
          -1 findbugs 1m 37s hadoop-hdfs-project/hadoop-hdfs-client generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2)
          +1 javadoc 2m 23s the patch passed
          +1 unit 8m 19s hadoop-common in the patch passed.
          +1 unit 1m 20s hadoop-hdfs-client in the patch passed.
          -1 unit 63m 46s hadoop-hdfs in the patch failed.
          -1 unit 3m 25s hadoop-hdfs-httpfs in the patch failed.
          +1 asflicense 0m 36s The patch does not generate ASF License warnings.
          161m 11s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            The field org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.locations is transient but isn't set by deserialization In HdfsLocatedFileStatus.java:but isn't set by deserialization In HdfsLocatedFileStatus.java
            Class org.apache.hadoop.hdfs.protocol.LocatedBlocks defines non-transient non-serializable instance field blocks In LocatedBlocks.java:instance field blocks In LocatedBlocks.java
            Class org.apache.hadoop.hdfs.protocol.LocatedBlocks defines non-transient non-serializable instance field lastLocatedBlock In LocatedBlocks.java:instance field lastLocatedBlock In LocatedBlocks.java
          Failed junit tests hadoop.hdfs.server.namenode.snapshot.TestSnapshot
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.TestEncryptionZonesWithKMS
            hadoop.hdfs.server.namenode.TestNestedEncryptionZones
            hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.web.TestWebHDFS
            hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
            hadoop.hdfs.web.TestWebHDFSAcl
            hadoop.hdfs.server.namenode.TestDecommissioningStatus
            hadoop.hdfs.TestEncryptionZones
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.server.namenode.TestINodeAttributeProvider
            hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
            hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem
            hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ac17dc
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12865685/HDFS-6984.007.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 6ca030f26ba7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 19a7e94
          Default Java 1.8.0_121
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19244/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19244/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19244/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/19244/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19244/artifact/patchprocess/diff-checkstyle-root.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19244/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19244/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19244/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19244/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19244/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 12m 17s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 7 new or modified test files.
          0 mvndep 1m 24s Maven dependency ordering for branch
          +1 mvninstall 14m 41s trunk passed
          +1 compile 17m 20s trunk passed
          +1 checkstyle 2m 5s trunk passed
          +1 mvnsite 3m 45s trunk passed
          +1 mvneclipse 1m 23s trunk passed
          -1 findbugs 1m 27s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
          -1 findbugs 1m 28s hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings.
          -1 findbugs 1m 47s hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings.
          +1 javadoc 2m 22s trunk passed
          0 mvndep 0m 14s Maven dependency ordering for patch
          +1 mvninstall 2m 28s the patch passed
          +1 compile 15m 2s the patch passed
          +1 cc 15m 2s the patch passed
          -1 javac 15m 2s root generated 71 new + 788 unchanged - 0 fixed = 859 total (was 788)
          -0 checkstyle 2m 12s root: The patch generated 21 new + 762 unchanged - 24 fixed = 783 total (was 786)
          +1 mvnsite 3m 42s the patch passed
          +1 mvneclipse 1m 20s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 4s The patch has no ill-formed XML file.
          -1 findbugs 1m 52s hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2)
          +1 javadoc 2m 36s the patch passed
          +1 unit 8m 17s hadoop-common in the patch passed.
          +1 unit 1m 39s hadoop-hdfs-client in the patch passed.
          -1 unit 75m 13s hadoop-hdfs in the patch failed.
          -1 unit 3m 25s hadoop-hdfs-httpfs in the patch failed.
          -1 asflicense 0m 36s The patch generated 1 ASF License warnings.
          185m 15s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            org.apache.hadoop.hdfs.web.WebHdfsFileStatus doesn't override org.apache.hadoop.hdfs.protocol.HdfsFileStatus.equals(Object) At WebHdfsFileStatus.java:At WebHdfsFileStatus.java:[line 1]
          Failed junit tests hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
            hadoop.hdfs.TestEncryptionZonesWithKMS
            hadoop.hdfs.TestEncryptionZones
            hadoop.hdfs.server.namenode.TestNestedEncryptionZones
            hadoop.hdfs.server.namenode.snapshot.TestSnapshot
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.server.namenode.TestINodeAttributeProvider
            hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
            hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12865842/HDFS-6984.008.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 6e177001db8a 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 1058b40
          Default Java 1.8.0_121
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19256/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19256/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19256/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/19256/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19256/artifact/patchprocess/diff-checkstyle-root.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19256/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19256/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19256/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19256/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/19256/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19256/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          chris.douglas Chris Douglas added a comment -

          Sorry for the noise caused by using Jenkins to run tests. I was trying to avoid an explicit Flags type for HdfsFileStatus, but if feInfo is not present for directories in encryption zones, it's probably cleaner this way.

          We could add a different API for resolving whether some set of features/capabilities are available/applied for a given FileSystem/FileStatus, and avoid the EnumSet in the constructor, but this issue has sprawled far enough. At least it's not in the user-facing API.

          This restores the FsPermissionExtension on the client side (for applications calling the deprecated methods). If the flags are not set (on older servers), it sets the flags from the old permission bits. For backwards compatibility with 2.x clients, we can either keep setting the permission bits server-side or backport the flags to branch-2 for the "2.x version compatible with 3.x clusters" release. Any opinions on this? It's simplest to keep setting the bits server-side and then delete that code when we remove FsPermissionExtension.
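
          As a side note for applications migrating off the deprecated bits, the intent is that callers read these attributes from the FileStatus itself. A minimal sketch, assuming accessors along the lines of hasAcl/isEncrypted/isErasureCoded on FileStatus (isEncrypted already exists; the other names are assumptions here, not a fixed API):

          import java.io.IOException;
          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class StatusAttributesSketch {
            // Sketch: query the extended attributes on FileStatus directly instead of
            // decoding the deprecated FsPermissionExtension bits from getPermission().
            static void describe(FileSystem fs, Path p) throws IOException {
              FileStatus st = fs.getFileStatus(p);
              System.out.println("acl=" + st.hasAcl()             // assumed accessor
                  + " encrypted=" + st.isEncrypted()
                  + " erasureCoded=" + st.isErasureCoded());      // assumed accessor
            }
          }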

          I also had a question about snapshots. I'm not certain whether EC and encryption zones work with them. The current patch doesn't handle ACLs, but I wanted to get a quick check on that before digging through the implementation.

          Andrew Wang, are you OK with the general direction of v009?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 7 new or modified test files.
          0 mvndep 0m 37s Maven dependency ordering for branch
          +1 mvninstall 13m 55s trunk passed
          +1 compile 15m 31s trunk passed
          +1 checkstyle 2m 4s trunk passed
          +1 mvnsite 3m 38s trunk passed
          +1 mvneclipse 1m 23s trunk passed
          -1 findbugs 1m 23s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
          -1 findbugs 1m 28s hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings.
          -1 findbugs 1m 44s hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings.
          +1 javadoc 2m 23s trunk passed
          0 mvndep 0m 13s Maven dependency ordering for patch
          +1 mvninstall 2m 28s the patch passed
          +1 compile 17m 9s the patch passed
          +1 cc 17m 9s the patch passed
          -1 javac 17m 9s root generated 79 new + 788 unchanged - 0 fixed = 867 total (was 788)
          -0 checkstyle 4m 0s root: The patch generated 21 new + 770 unchanged - 25 fixed = 791 total (was 795)
          +1 mvnsite 4m 7s the patch passed
          +1 mvneclipse 1m 21s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 3s The patch has no ill-formed XML file.
          +1 findbugs 9m 15s the patch passed
          +1 javadoc 3m 29s the patch passed
          +1 unit 7m 37s hadoop-common in the patch passed.
          +1 unit 1m 19s hadoop-hdfs-client in the patch passed.
          -1 unit 69m 33s hadoop-hdfs in the patch failed.
          -1 unit 3m 34s hadoop-hdfs-httpfs in the patch failed.
          +1 asflicense 0m 40s The patch does not generate ASF License warnings.
          173m 29s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
            hadoop.hdfs.server.namenode.snapshot.TestSnapshot
            hadoop.hdfs.server.namenode.TestINodeAttributeProvider
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12866007/HDFS-6984.009.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 011aa9d2865d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9f0aea0
          Default Java 1.8.0_121
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19278/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19278/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19278/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/19278/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19278/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/19278/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19278/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19278/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19278/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19278/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          andrew.wang Andrew Wang added a comment -

          Thanks for the rev Chris. Looks like this latest patch has grown some new functionality.

          For backwards compatibility with 2.x clients, we can either keep setting the permission bits server side or backport the flags to branch-2 for the "2.x version compatible with 3.x clusters" release. Any opinions on this? It's simplest to keep setting the bits server-side, and deleting that code when we remove FsPermissionExtension.

          I'd like to keep setting the bits server-side. There's no guarantee which of 2.9.0 or 3.0.0 will GA first, and if we have the option of avoiding incompatibility, I'd like to take it.

          I also had a question about snapshots. I'm not certain whether EC and encryption zones work with them. The current patch doesn't handle ACLs, but I wanted to get a quick check on that before digging through the implementation.

          Do you have a unit test for this behavior? Bit surprised that this doesn't work. If it's not a regression, we can handle this in a different JIRA.

          The JSON code tried to preserve the acl/crypt/ec bits of the FsPermission on AclStatus. This seems incorrect, but I can try to find a solution if it is meaningful.

          Could you give a pointer to this? These bits are new to the JSON code, so it'd be good to catch any bugs quickly.

          A high-level comment:

          I'm worried about making HdfsFileStatus extend FileStatus, particularly the link target. HdfsFileStatus is used on the NameNode, and there is potentially some mangling from turning the link target String into a Path. The target is supposed to be an opaque value within the filesystem, left to be interpreted by the client.

          How bad is it to split this change out? It wasn't clear to me what we gain by having HdfsFileStatus extend FileStatus.

          Some nits:

          • HttpsFileSystem: Please do not use a wildcard import
          • Double ";" in PBHelperClient:
          import org.apache.hadoop.hdfs.protocol.FsPermissionExtension;;
          
          • Extra import in SnapshotInfo
          chris.douglas Chris Douglas added a comment -

          Thanks for taking a look, Andrew.

          Looks like this latest patch has grown some new functionality.

          That's not intended. It includes Steve Loughran's request that the user-facing types be Serializable, and your request to move the flags out of FsPermissionExtension, deprecating Writable APIs in/around FileStatus.

          Versions before v009 tried to avoid a set of flags for features in HdfsFileStatus and FileStatus, but ACLs/encryption rely on the FsPermissionExtension bits, not the PB record. I tried variants that used a placeholder, "remote" stub in the PB field that's sufficient to fill out the FileStatus API, but there are too many required fields in the PB messages to make that efficient or clean. ACLs, encryption, and EC are all handled slightly differently. Instead of adding to client impl complexity, v009 gives up and adds a set of flags.
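
          Roughly, the shape of the flags approach being described is the following (a hypothetical illustration only; the class, enum, and constant names are assumptions, not the committed code):

          import java.util.EnumSet;

          public class FlagsSketch {
            // Illustrative attribute flags carried by the status object instead of
            // overloading FsPermission bits.
            enum Flags { HAS_ACL, HAS_CRYPT, HAS_EC }

            private final EnumSet<Flags> flags;

            FlagsSketch(EnumSet<Flags> flags) {
              this.flags = EnumSet.copyOf(flags);
            }

            boolean hasAcl()         { return flags.contains(Flags.HAS_ACL); }
            boolean isEncrypted()    { return flags.contains(Flags.HAS_CRYPT); }
            boolean isErasureCoded() { return flags.contains(Flags.HAS_EC); }
          }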

          By new functionality, are you referring to anything beyond making HdfsFileStatus a subtype of FileStatus?

          I'd like to keep setting the bits server-side. There's no guarantee whether 2.9.0 or 3.0.0 will GA first, and if we have the option of avoiding incompatibility, I'd like to take it.

          Agreed. I hope we eventually remove this, or at least stop adding new flags to FsPermission.

          Could you give a pointer to this? These bits are new to the JSON code, so it'd be good to catch any bugs quickly.

          Here is the relevant line. It's consistent with the documentation (that is the FsPermission[Extension] for that path), but including the flag bits in the AclStatus permission field looks more like an accident of the implementation than part of its intended API.

          I'm worried about making HdfsFileStatus extend FileStatus, particularly the link target. HdfsFileStatus is used on the NameNode, and there is potentially some mangling from turning the link target String into a Path. The target is supposed to be an opaque value within the filesystem, left to be interpreted by the client.

          For this particular case, shouldn't the caller be using HdfsFileStatus::getSymlinkInBytes in contexts where it's opaque? I share your concern, but extending FileStatus also required touching code that didn't expect (spurious) IOExceptions. In those and other existing contexts, I didn't find any instances where the String wasn't converted to a Path. Converting the byte[] to a String may also mangle it somewhat.

          To treat it as opaque for the patch, I'll change the serialization to use the getSymlinkInBytes conversion, rather than Path::toString. I'll also go through its uses, and try to find all the cases that don't immediately convert the String to a Path.
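
          To make the "opaque" handling concrete, a rough sketch of the byte-level conversion (assuming the HdfsFileStatusProto symlink field remains bytes, as in the existing proto; the helper name and shape are illustrative, not the exact patch):

          import com.google.protobuf.ByteString;
          import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
          import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.HdfsFileStatusProto;

          public class SymlinkOpaqueSketch {
            // Sketch: copy the symlink target verbatim as bytes so the server never
            // normalizes it through Path/String; the client interprets the target.
            static void copySymlink(HdfsFileStatus st, HdfsFileStatusProto.Builder b) {
              if (st.isSymlink()) {
                b.setSymlink(ByteString.copyFrom(st.getSymlinkInBytes()));
              }
            }
          }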

          How bad is it to split this change out?

          I've split this twice already, to fix concerns over DistCp and for Serializable. We can change the scope of the JIRA, but (IMHO) these changes are related, and implied by previous review requests.

          Wasn't clear what we gain by having HdfsFileStatus extend FileStatus

          The case Sergey Shelukhin raised for HDFS-7878 involved passing a handle across process boundaries. If a PathHandle is required to store sufficient state to open the file, then it needs to copy (and serialize) data from HdfsFileStatus that, in most cases, will not be used. We can construct this on demand if the HdfsFileStatus is kept intact. There's also the minor inefficiency of creating a new FileStatus instance and copying fields, but 1) that's often incurred anyway because the Located* types are irreconcilable and 2) it's in the noise.

          FileStatus is exposing attributes that (AFAIK) only apply to HDFS (e.g., EC, encryption); the types are 1:1 mappings right now. If HDFS wants to pass additional information, or encode it differently, then it can return a different data type (perhaps with this as a field). As it's used inside the NN, it lazily binds some fields, but I don't know of a use that's inconsistent with the POD type. I wish FileStatus were an interface and not a concrete class, but that ship has sailed.

          If you're really uncomfortable with the change, then I can try to roll it back.

          HttpsFileSystem: Please do not use a wildcard import

          Recently switched to Intellij; not intended. Will fix my settings.

          andrew.wang Andrew Wang added a comment -

          Thanks again for the continued work here, Chris; I want to get this in as badly as you do. I spent a little more time looking at the patch, and I'm comfortable with HdfsFileStatus extending FileStatus once we make sure symlinks are treated opaquely.

          Symlinks:

          • I saw three calls to HdfsFileStatus#getSymlink: FSNamesystem#logAuditEvent, NNRpcServer#getLinkTarget, JsonUtil#toJsonMap. I think the latter two should be calling getSymlinkInBytes for opacity.
          • We should add a warning to HdfsFileStatus#getSymlink about handling symlink targets opaquely, for future coders.

          The rest:

          Here is the relevant line. It's consistent with the documentation- that is the FsPermission[Extension] for that path- but including the flag bits in the AclStatus permission field looks more like an accident of the implementation, not part of its intended API.

          Good catch. I see you removed this already in the 009 patch, agree that it looks like an accident.

          I'm convinced by the rest. Documenting the other pending issues here for completeness:

          • Any additional checking for symlink target opacity and conversion to a Path. I only checked usages when it's a HdfsFileStatus, not usages of the base FileStatus method.
          • Snapshots potentially not working with EC or encryption? If so, this might be more broadly an xattrs+snapshots bug.
          • Backwards compatibility by still setting the permission bits for old clients.
          chris.douglas Chris Douglas added a comment -

          No worries; thanks for the thorough review, Andrew.

          I'll run this through Jenkins again, fix up snapshots, and we can finish this.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 22s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 8 new or modified test files.
          0 mvndep 1m 31s Maven dependency ordering for branch
          +1 mvninstall 13m 29s trunk passed
          +1 compile 16m 5s trunk passed
          +1 checkstyle 2m 5s trunk passed
          +1 mvnsite 3m 38s trunk passed
          +1 mvneclipse 1m 21s trunk passed
          -1 findbugs 1m 26s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
          -1 findbugs 1m 29s hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings.
          -1 findbugs 1m 46s hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings.
          +1 javadoc 2m 22s trunk passed
          0 mvndep 0m 13s Maven dependency ordering for patch
          +1 mvninstall 2m 37s the patch passed
          +1 compile 15m 18s the patch passed
          +1 cc 15m 18s the patch passed
          -1 javac 15m 18s root generated 80 new + 787 unchanged - 0 fixed = 867 total (was 787)
          -0 checkstyle 2m 5s root: The patch generated 23 new + 795 unchanged - 25 fixed = 818 total (was 820)
          +1 mvnsite 3m 58s the patch passed
          +1 mvneclipse 1m 16s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 4s The patch has no ill-formed XML file.
          +1 findbugs 6m 36s the patch passed
          +1 javadoc 2m 29s the patch passed
          +1 unit 8m 59s hadoop-common in the patch passed.
          +1 unit 1m 25s hadoop-hdfs-client in the patch passed.
          -1 unit 68m 46s hadoop-hdfs in the patch failed.
          +1 unit 3m 23s hadoop-hdfs-httpfs in the patch passed.
          +1 asflicense 0m 37s The patch does not generate ASF License warnings.
          165m 27s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.namenode.snapshot.TestSnapshot
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
            hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
            hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
            hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
            hadoop.hdfs.server.namenode.TestINodeAttributeProvider



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12867054/HDFS-6984.010.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 358cb17f60af 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 54fd0e4
          Default Java 1.8.0_121
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19355/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19355/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19355/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/19355/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19355/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/19355/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19355/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19355/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19355/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 8 new or modified test files.
          0 mvndep 1m 29s Maven dependency ordering for branch
          +1 mvninstall 12m 43s trunk passed
          +1 compile 11m 58s trunk passed
          +1 checkstyle 2m 1s trunk passed
          +1 mvnsite 3m 37s trunk passed
          +1 mvneclipse 1m 28s trunk passed
          -1 findbugs 1m 17s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
          +1 javadoc 2m 25s trunk passed
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 2m 5s the patch passed
          +1 compile 11m 56s the patch passed
          +1 cc 11m 56s the patch passed
          -1 javac 11m 56s root generated 80 new + 787 unchanged - 0 fixed = 867 total (was 787)
          -0 checkstyle 2m 0s root: The patch generated 19 new + 796 unchanged - 22 fixed = 815 total (was 818)
          +1 mvnsite 3m 8s the patch passed
          +1 mvneclipse 1m 24s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 3s The patch has no ill-formed XML file.
          +1 findbugs 5m 23s the patch passed
          +1 javadoc 2m 5s the patch passed
          -1 unit 7m 12s hadoop-common in the patch failed.
          +1 unit 1m 24s hadoop-hdfs-client in the patch passed.
          -1 unit 66m 19s hadoop-hdfs in the patch failed.
          +1 unit 3m 35s hadoop-hdfs-httpfs in the patch passed.
          +1 asflicense 0m 29s The patch does not generate ASF License warnings.
          149m 42s



          Reason Tests
          Failed junit tests hadoop.security.TestRaceWhenRelogin
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
            hadoop.hdfs.server.namenode.TestINodeAttributeProvider
            hadoop.hdfs.web.TestWebHdfsTimeouts
            hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12868639/HDFS-6984.011.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 4018dba0e168 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / ef9e536
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19481/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/19481/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19481/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19481/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19481/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19481/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19481/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          chris.douglas Chris Douglas added a comment -

          The TestINodeAttributeProvider failure is reproducible, but the other test failures don't appear to be related. The javac warnings are deprecation warnings introduced by the patch.

          chris.douglas Chris Douglas added a comment -
          • Added more tests
          • Removed the FlaggedFileStatus class so that attribute flags are preserved for all FileStatus objects through serialization; a round-trip sketch follows.
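
          A rough sketch of what "preserved through serialization" means here; the StatusCodec interface is a hypothetical stand-in for the protobuf conversion helpers, not their actual signatures.

            import org.apache.hadoop.fs.FileStatus;

            public class RoundTripFlagsCheck {
              /** Hypothetical stand-in for the protobuf convert/convert-back helpers. */
              interface StatusCodec {
                byte[] toBytes(FileStatus status);
                FileStatus fromBytes(byte[] bytes);
              }

              /** Attribute flags should survive a serialize/deserialize round trip. */
              static void assertFlagsSurvive(StatusCodec codec, FileStatus original) {
                FileStatus copy = codec.fromBytes(codec.toBytes(original));
                if (copy.hasAcl() != original.hasAcl()
                    || copy.isEncrypted() != original.isEncrypted()
                    || copy.isErasureCoded() != original.isErasureCoded()) {
                  throw new AssertionError("attribute flags lost across serialization");
                }
              }
            }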
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
          0 mvndep 1m 26s Maven dependency ordering for branch
          +1 mvninstall 12m 2s trunk passed
          +1 compile 12m 24s trunk passed
          +1 checkstyle 1m 44s trunk passed
          +1 mvnsite 3m 12s trunk passed
          +1 mvneclipse 1m 3s trunk passed
          -1 findbugs 1m 22s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
          +1 javadoc 2m 0s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 2m 12s the patch passed
          +1 compile 12m 15s the patch passed
          +1 cc 12m 15s the patch passed
          -1 javac 12m 15s root generated 80 new + 787 unchanged - 0 fixed = 867 total (was 787)
          -0 checkstyle 1m 41s root: The patch generated 21 new + 789 unchanged - 22 fixed = 810 total (was 811)
          +1 mvnsite 3m 6s the patch passed
          +1 mvneclipse 1m 3s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 3s The patch has no ill-formed XML file.
          +1 findbugs 5m 24s the patch passed
          +1 javadoc 2m 9s the patch passed
          +1 unit 7m 42s hadoop-common in the patch passed.
          +1 unit 1m 19s hadoop-hdfs-client in the patch passed.
          -1 unit 69m 35s hadoop-hdfs in the patch failed.
          +1 unit 3m 28s hadoop-hdfs-httpfs in the patch passed.
          -1 asflicense 0m 31s The patch generated 1 ASF License warnings.
          151m 0s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.web.TestWebHdfsTimeouts
            hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12869980/HDFS-6984.012.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux aeaaf29cea0e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2b5ad48
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19623/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/19623/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19623/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19623/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19623/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/19623/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19623/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 31s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
          0 mvndep 1m 33s Maven dependency ordering for branch
          +1 mvninstall 15m 28s trunk passed
          +1 compile 14m 21s trunk passed
          +1 checkstyle 2m 8s trunk passed
          +1 mvnsite 4m 7s trunk passed
          +1 mvneclipse 1m 21s trunk passed
          -1 findbugs 1m 38s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
          +1 javadoc 2m 25s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 2m 50s the patch passed
          +1 compile 13m 40s the patch passed
          +1 cc 13m 40s the patch passed
          -1 javac 13m 40s root generated 79 new + 787 unchanged - 0 fixed = 866 total (was 787)
          -0 checkstyle 2m 4s root: The patch generated 12 new + 792 unchanged - 22 fixed = 804 total (was 814)
          +1 mvnsite 3m 47s the patch passed
          +1 mvneclipse 1m 20s the patch passed
          +1 whitespace 0m 1s The patch has no whitespace issues.
          +1 xml 0m 4s The patch has no ill-formed XML file.
          +1 findbugs 6m 3s the patch passed
          +1 javadoc 2m 26s the patch passed
          +1 unit 7m 55s hadoop-common in the patch passed.
          +1 unit 1m 25s hadoop-hdfs-client in the patch passed.
          -1 unit 65m 47s hadoop-hdfs in the patch failed.
          +1 unit 3m 39s hadoop-hdfs-httpfs in the patch passed.
          +1 asflicense 0m 36s The patch does not generate ASF License warnings.
          161m 12s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestMaintenanceState
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12870098/HDFS-6984.013.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux cc2c4685d608 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / aea4293
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19634/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/19634/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19634/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19634/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19634/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19634/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          chris.douglas Chris Douglas added a comment -

          This is ready for review. None of the unit test failures are related; the checkstyle warnings are for the existing fields/constructor, and the javac warnings come from classes/methods deprecated in this patch.

          chris.douglas Chris Douglas added a comment -

          Andrew Wang, do you have cycles to review this?

          chris.douglas Chris Douglas added a comment -

          Rebased patch

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 16m 43s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
          0 mvndep 1m 41s Maven dependency ordering for branch
          +1 mvninstall 13m 43s trunk passed
          +1 compile 14m 31s trunk passed
          +1 checkstyle 2m 9s trunk passed
          +1 mvnsite 3m 43s trunk passed
          +1 findbugs 5m 13s trunk passed
          +1 javadoc 2m 26s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 2m 23s the patch passed
          +1 compile 10m 38s the patch passed
          +1 cc 10m 38s the patch passed
          -1 javac 10m 38s root generated 79 new + 823 unchanged - 0 fixed = 902 total (was 823)
          -0 checkstyle 2m 3s root: The patch generated 12 new + 783 unchanged - 22 fixed = 795 total (was 805)
          +1 mvnsite 3m 39s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 4s The patch has no ill-formed XML file.
          +1 findbugs 5m 57s the patch passed
          +1 javadoc 2m 27s the patch passed
          +1 unit 7m 41s hadoop-common in the patch passed.
          +1 unit 1m 21s hadoop-hdfs-client in the patch passed.
          -1 unit 66m 21s hadoop-hdfs in the patch failed.
          -1 unit 0m 36s hadoop-hdfs-httpfs in the patch failed.
          +1 asflicense 0m 36s The patch does not generate ASF License warnings.
          167m 4s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12874572/HDFS-6984.014.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 508b980bd7f1 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / a9d3412
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/20051/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/20051/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/20051/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/20051/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/20051/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/20051/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          chris.douglas Chris Douglas added a comment -

          Andrew Wang, would you mind reviewing this, now that 3.0.0-alpha4 is released? This is blocking HDFS-7878, which is the last blocker for HDFS-9806.

          chris.douglas Chris Douglas added a comment -

          Rebased patch.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 10 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 44s Maven dependency ordering for branch
          -1 mvninstall 15m 6s root in trunk failed.
          +1 compile 13m 54s trunk passed
          +1 checkstyle 2m 1s trunk passed
          +1 mvnsite 4m 24s trunk passed
          -1 findbugs 1m 28s hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings.
          -1 findbugs 1m 47s hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings.
          +1 javadoc 2m 22s trunk passed
                Patch Compile Tests
          0 mvndep 0m 14s Maven dependency ordering for patch
          +1 mvninstall 2m 16s the patch passed
          +1 compile 11m 20s the patch passed
          +1 cc 11m 20s the patch passed
          -1 javac 11m 20s root generated 80 new + 1313 unchanged - 0 fixed = 1393 total (was 1313)
          -0 checkstyle 2m 4s root: The patch generated 12 new + 824 unchanged - 22 fixed = 836 total (was 846)
          +1 mvnsite 4m 28s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 4s The patch has no ill-formed XML file.
          +1 findbugs 1m 38s hadoop-common in the patch passed.
          +1 findbugs 1m 51s hadoop-hdfs-client in the patch passed.
          +1 findbugs 2m 15s hadoop-hdfs-project/hadoop-hdfs generated 0 new + 9 unchanged - 1 fixed = 9 total (was 10)
          +1 findbugs 0m 41s hadoop-hdfs-httpfs in the patch passed.
          +1 javadoc 2m 39s the patch passed
                Other Tests
          +1 unit 8m 34s hadoop-common in the patch passed.
          +1 unit 1m 30s hadoop-hdfs-client in the patch passed.
          -1 unit 76m 1s hadoop-hdfs in the patch failed.
          +1 unit 3m 37s hadoop-hdfs-httpfs in the patch passed.
          +1 asflicense 0m 35s The patch does not generate ASF License warnings.
          165m 23s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
            hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
            hadoop.hdfs.server.namenode.TestDecommissioningStatus
            hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-6984
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12878457/HDFS-6984.015.patch
          Optional Tests asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 1368018f85ce 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 465c213
          Default Java 1.8.0_131
          mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/20388/artifact/patchprocess/branch-mvninstall-root.txt
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/20388/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/20388/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/20388/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/20388/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/20388/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/20388/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/20388/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          andrew.wang Andrew Wang added a comment -

          +1 LGTM, thanks for working on this Chris. I applied this to 3.0.0-alpha4 and ran it through our internal integration test suite, and it passed. Great work!

          Some final checks:

          • Is the release note on this JIRA up to date? There are also misc improvements like removing the acl/enc/ec bits from the WebHDFS AclStatus that we could call out.
          • Based on code inspection it looks like the wire-format changes are compatible, but it'd be good if you could comment on manual testing. WebHDFS and HttpFS in particular need to be both forward compatible as well as backwards compatible to help with upgrade, and it'd be real nice to support this for the PB protocol too since this is such a fundamental primitive.
          chris.douglas Chris Douglas added a comment -

          Thanks, Andrew Wang! I'll commit this shortly.

          I filed HDFS-12250 to clean up some of the deprecation warnings.

          Is the release note on this JIRA up to date? There are also misc improvements like removing the acl/enc/ec bits from the WebHDFS AclStatus that we could call out.

          Sure, I'll update it.

          Based on code inspection it looks like the wire-format changes are compatible, but it'd be good if you could comment on manual testing

          As we discussed earlier, including these bits in the AclStatus seemed accidental. I can try a few operations with an old client to make sure there are no errors. I don't have access to environments where these APIs would be stressed.

          chris.douglas Chris Douglas added a comment -

          I committed this.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12107 (See https://builds.apache.org/job/Hadoop-trunk-Commit/12107/)
          HDFS-6984. Serialize FileStatus via protobuf. (cdouglas: rev 12e44e7bdaf53d3720a89d32f0cc2717241bd6b2)

          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
          • (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/protocolPB/PBHelper.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConstants.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
          • (add) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/protocolPB/TestFSSerialization.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
          • (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusSerialization.java
          • (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotInfo.java
          • (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/package-info.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/AclTestHelpers.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
          • (edit) hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
          • (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/protocolPB/package-info.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
          • (add) hadoop-common-project/hadoop-common/src/main/proto/FSProtos.proto
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
          • (edit) hadoop-common-project/hadoop-common/pom.xml
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocatedFileStatus.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/FsPermissionExtension.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/acl.proto
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java

            People

            • Assignee:
              chris.douglas Chris Douglas
            • Reporter:
              cmccabe Colin P. McCabe
            • Votes:
              0
            • Watchers:
              19
